Poganiatz, I; Wagner, H
2001-04-01
Interaural level differences play an important role for elevational sound localization in barn owls. The changes of this cue with sound location are complex and frequency dependent. We exploited the opportunities offered by the virtual space technique to investigate the behavioral relevance of the overall interaural level difference by fixing this parameter in virtual stimuli to a constant value or introducing additional broadband level differences to normal virtual stimuli. Frequency-specific monaural cues in the stimuli were not manipulated. We observed an influence of the broadband interaural level differences on elevational, but not on azimuthal sound localization. Since results obtained with our manipulations explained only part of the variance in elevational turning angle, we conclude that frequency-specific cues are also important. The behavioral consequences of changes of the overall interaural level difference in a virtual sound depended on the combined interaural time difference contained in the stimulus, indicating an indirect influence of temporal cues on elevational sound localization as well. Thus, elevational sound localization is influenced by a combination of many spatial cues including frequency-dependent and temporal features.
Blanks, Deidra A.; Buss, Emily; Grose, John H.; Fitzpatrick, Douglas C.; Hall, Joseph W.
2009-01-01
Objectives: The present study investigated interaural time discrimination for binaurally mismatched carrier frequencies in listeners with normal hearing. One goal of the investigation was to gain insights into binaural hearing in patients with bilateral cochlear implants, where the coding of interaural time differences may be limited by mismatches in the neural populations receiving stimulation on each side. Design: Temporal envelopes were manipulated to present low frequency timing cues to high frequency auditory channels. Carrier frequencies near 4 kHz were amplitude modulated at 128 Hz via multiplication with a half-wave rectified sinusoid, and that modulation was either in-phase across ears or delayed to one ear. Detection thresholds for non-zero interaural time differences were measured for a range of stimulus levels and a range of carrier frequency mismatches. Data were also collected under conditions designed to limit cues based on stimulus spectral spread, including masking and truncation of sidebands associated with modulation. Results: Listeners with normal hearing can detect interaural time differences in the face of substantial mismatches in carrier frequency across ears. Conclusions: The processing of interaural time differences in listeners with normal hearing is likely based on spread of excitation into binaurally matched auditory channels. Sensitivity to interaural time differences in listeners with cochlear implants may depend upon spread of current that results in the stimulation of neural populations that share common tonotopic space bilaterally. PMID:18596646
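As a rough illustration of the stimulus just described (a carrier near 4 kHz multiplied by a half-wave rectified 128-Hz sinusoid, with the modulation delayed in one ear), a minimal sketch follows. The sampling rate, duration, the specific interaural delay, and the way the delay is applied are assumptions for illustration, not the authors' implementation; a carrier-frequency mismatch could be introduced by giving each ear a slightly different carrier.

    import numpy as np

    fs = 48000          # sampling rate (Hz), assumed
    dur = 0.3           # duration (s), assumed
    fc = 4000.0         # carrier frequency (Hz)
    fm = 128.0          # modulation frequency (Hz)
    itd = 200e-6        # interaural delay applied to the modulator (s), assumed

    t = np.arange(int(fs * dur)) / fs

    def transposed_tone(mod_delay, carrier=fc):
        # carrier multiplied by a half-wave rectified sinusoidal modulator
        mod = np.maximum(np.sin(2 * np.pi * fm * (t - mod_delay)), 0.0)
        return mod * np.sin(2 * np.pi * carrier * t)

    left = transposed_tone(0.0)       # modulator in phase at this ear
    right = transposed_tone(itd)      # modulator delayed by the interaural time difference
    stimulus = np.stack([left, right], axis=1)   # two-channel (left, right) signal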
Loiselle, Louise H; Dorman, Michael F; Yost, William A; Cook, Sarah J; Gifford, Rene H
2016-08-01
To assess the role of interaural time differences and interaural level differences in (a) sound-source localization, and (b) speech understanding in a cocktail party listening environment for listeners with bilateral cochlear implants (CIs) and for listeners with hearing-preservation CIs. Eleven bilateral listeners with MED-EL (Durham, NC) CIs and 8 listeners with hearing-preservation CIs with symmetrical low-frequency acoustic hearing using the MED-EL or Cochlear device were evaluated using 2 tests designed to task binaural hearing: localization and a simulated cocktail party. Access to interaural cues for localization was constrained by the use of low-pass, high-pass, and wideband noise stimuli. Sound-source localization accuracy for listeners with bilateral CIs in response to the high-pass noise stimulus and sound-source localization accuracy for the listeners with hearing-preservation CIs in response to the low-pass noise stimulus did not differ significantly. Speech understanding in a cocktail party listening environment improved for all listeners when interaural cues, either interaural time difference or interaural level difference, were available. The findings of the current study indicate that similar degrees of benefit to sound-source localization and speech understanding in complex listening environments are possible with 2 very different rehabilitation strategies: the provision of bilateral CIs and the preservation of hearing.
The use of interaural parameters during incoherence detection in reproducible noise
NASA Astrophysics Data System (ADS)
Goupell, Matthew Joseph
Interaural incoherence is a measure of the dissimilarity of the signals in the left and right ears. It is important in a number of acoustical phenomena such as a listener's sensation of envelopment and apparent source width in room acoustics, speech intelligibility, and binaural release from energetic masking. Humans are incredibly sensitive to the difference between perfectly coherent and slightly incoherent signals; however, the nature of this sensitivity is not well understood. The purpose of this dissertation is to understand what parameters are important to incoherence detection. Incoherence is perceived to have time-varying characteristics. It is conjectured that incoherence detection is performed by a process that takes this time dependency into account. Left-ear-right-ear noise-pairs were generated, all with a fixed value of interaural coherence, 0.9922. The noises had a center frequency of 500 Hz, a bandwidth of 14 Hz, and a duration of 500 ms. Listeners were required to discriminate between these slightly incoherent noises and diotic noises with a coherence of 1.0. It was found that the value of interaural incoherence itself was an inadequate predictor of discrimination. Instead, incoherence was much more readily detected for those noise-pairs with the largest fluctuations in interaural phase and level differences (as measured by the standard deviation). Noise-pairs with the same value of coherence and a geometric mean frequency of 500 Hz were also generated for bandwidths of 108 Hz and 2394 Hz. It was found that for increasing bandwidth, fluctuations in interaural differences varied less between different noise-pairs and that detection performance varied less as well. The results suggest that incoherence detection is based on the size and the speed of interaural fluctuations and that the value of coherence itself predicts performance only in the wide-band limit where different particular noises with the same incoherence have similar fluctuations. Noise-pairs with short durations of 100, 50, and 25 ms, a bandwidth of 14 Hz, and a coherence of 0.9922 were used to test whether a short-term incoherence function is used in incoherence detection. It was found that listeners could make significant use of fluctuations of phase and level to detect incoherence at all three of these short durations. Therefore, a short-term coherence function is not used to detect incoherence. For the shortest duration of 25 ms, listeners' detection cue sometimes changed from a "width" cue to a lateralization cue. Modeling of the data was performed. Ten different binaural models were tested against detection data for 14-Hz and 108-Hz bandwidths. These models included different types of binaural processing: independent interaural phase and level differences, lateral position, and short-term cross-correlation. Several preprocessing features were incorporated in the models: compression, temporal averaging, and envelope weighting. For the 14-Hz bandwidth data, the most successful model assumed independent centers for interaural phase and interaural level processing, and this model correlated with detectability at r = 0.87. That model also described the data best when it was assumed that interaural phase fluctuations and interaural level fluctuations contribute approximately equally to incoherence detection. For the 108-Hz bandwidth data, detection performance varied much less among different waveforms, and the data were less able to distinguish between models.
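To make the interaural statistics referred to above concrete, the sketch below generates a left/right noise pair with a chosen interaural coherence by mixing two independent narrowband noises, and then computes the standard deviations of the interaural phase and level fluctuations from the analytic signals. It is a simplified illustration with assumed parameters and an assumed mixing method, not the stimulus-generation procedure used in the dissertation.

    import numpy as np
    from scipy.signal import hilbert, butter, sosfilt

    fs = 44100
    dur = 0.5
    rho = 0.9922                 # target interaural coherence (value from the abstract)
    cf, bw = 500.0, 14.0         # center frequency and bandwidth (Hz)
    sos = butter(2, [(cf - bw / 2) / (fs / 2), (cf + bw / 2) / (fs / 2)],
                 btype='band', output='sos')

    def narrowband_noise(n):
        return sosfilt(sos, np.random.randn(n))

    n = int(fs * dur)
    n1, n2 = narrowband_noise(n), narrowband_noise(n)
    left = n1
    right = rho * n1 + np.sqrt(1.0 - rho ** 2) * n2   # mix in independent noise to set coherence

    # interaural phase and level fluctuations from the analytic signals
    al, ar = hilbert(left), hilbert(right)
    ipd = np.angle(al * np.conj(ar))                          # radians
    ild = 20 * np.log10(np.abs(al) / (np.abs(ar) + 1e-12))    # dB

    print("std(IPD) = %.3f rad, std(ILD) = %.2f dB" % (ipd.std(), ild.std()))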
Localizing nearby sound sources in a classroom: binaural room impulse responses.
Shinn-Cunningham, Barbara G; Kopco, Norbert; Martin, Tara J
2005-05-01
Binaural room impulse responses (BRIRs) were measured in a classroom for sources at different azimuths and distances (up to 1 m) relative to a manikin located in four positions in a classroom. When the listener is far from all walls, reverberant energy distorts signal magnitude and phase independently at each frequency, altering monaural spectral cues, interaural phase differences, and interaural level differences. For the tested conditions, systematic distortion (comb-filtering) from an early intense reflection is only evident when a listener is very close to a wall, and then only in the ear facing the wall. Especially for a nearby source, interaural cues grow less reliable with increasing source laterality and monaural spectral cues are less reliable in the ear farther from the sound source. Reverberation reduces the magnitude of interaural level differences at all frequencies; however, the direct-sound interaural time difference can still be recovered from the BRIRs measured in these experiments. Results suggest that bias and variability in sound localization behavior may vary systematically with listener location in a room as well as source location relative to the listener, even for nearby sources where there is relatively little reverberant energy.
The effect of interaural fluctuation rate on correlation change discrimination.
Goupell, Matthew J; Litovsky, Ruth Y
2014-02-01
While bilateral cochlear implants (CIs) provide some binaural benefits, these benefits are limited compared to those observed in normal-hearing (NH) listeners. The large frequency-to-electrode allocation bandwidths (BWs) in CIs compared to auditory filter BWs in NH listeners increase the interaural fluctuation rate available for binaural unmasking, which may limit binaural benefits. The purpose of this work was to investigate the effect of interaural fluctuation rate on correlation change discrimination and binaural masking-level differences in NH listeners presented with a CI simulation using a pulsed-sine vocoder. In experiment 1, correlation-change just-noticeable differences (JNDs) and tone-in-noise thresholds were measured for narrowband noises with different BWs and center frequencies (CFs). The results suggest that the BW, CF, and/or interaural fluctuation rate are important factors for correlation change discrimination. In experiment 2, the interaural fluctuation rate was systematically varied and dissociated from changes in BW and CF by using a pulsed-sine vocoder. Results indicated that the interaural fluctuation rate did not affect correlation change JNDs for correlated reference noises; however, slow interaural fluctuations increased correlation change JNDs for uncorrelated reference noises. In experiment 3, the BW, CF, and vocoder pulse rate were varied while interaural fluctuation rate was held constant. JNDs increased for increasing BW and decreased for increasing CF. In summary, relatively fast interaural fluctuation rates are not detrimental for detecting changes in interaural correlation. Thus, limiting factors to binaural benefits in CI listeners could be a result of other temporal and/or spectral deficiencies from electrical stimulation.
Kuwada, S; Yin, T C
1983-10-01
Detailed, quantitative studies were made of the interaural phase sensitivity of 197 neurons with low best frequency in the inferior colliculus (IC) of the barbiturate-anesthetized cat. We analyzed the responses of single cells to interaural delays in which tone bursts were delivered to the two ears via sealed earphones and the onset of the tone to one ear with respect to the other was varied. For most (80%) cells the discharge rate is a cyclic function of interaural delay at a period corresponding to that of the stimulating frequency. The cyclic nature of the interaural delay curve indicates that these cells are sensitive to the interaural phase difference. These cells are distributed throughout the low-frequency zone of the IC, but they are less numerous in the medial and caudal zones. Cells with a wide variety of response patterns will exhibit interaural phase sensitivities at stimulating frequencies up to 3,100 Hz, although above 2,500 Hz the number of such cells decreases markedly. Using dichotic stimuli we could study the cell's sensitivity to the onset delay and interaural phase independently. The large majority of IC cells respond only to changes in interaural phase, with no sensitivity to the onset delay. However, a small number (7%) of cells exhibit a sensitivity to the onset delay as well as to the interaural phase disparity, and most of these cells show an onset response. The effects of changing the stimulus intensity equally to both ears or of changing the interaural intensity difference on the mean interaural phase were studied. While some neurons are not affected by level changes, others exhibit systematic phase shifts for both average and interaural intensity variations, and there is a continuous distribution of sensitivities between these extremes. A few cells also showed systematic changes in the shape of the interaural delay curves as a function of interaural intensity difference, especially at very long delays. These shifts can be interpreted as a form of time-intensity trading. A few cells demonstrated orderly changes in the interaural delay curve as the repetition rate of the stimulus was varied. Some of these changes are consonant with an inhibitory effect that occurs at stimulus offset. The responses of the neurons show a strong bias for stimuli that would originate from the contralateral sound field; 77% of the responses display mean interaural phase angles that are less than 0.5 of a cycle, which are delays to the ipsilateral tone. (ABSTRACT TRUNCATED AT 400 WORDS)
Interaural time sensitivity of high-frequency neurons in the inferior colliculus.
Yin, T C; Kuwada, S; Sujaku, Y
1984-11-01
Recent psychoacoustic experiments have shown that interaural time differences provide adequate cues for lateralizing high-frequency sounds, provided the stimuli are complex and not pure tones. We present here physiological evidence in support of these findings. Neurons of high best frequency in the cat inferior colliculus respond to interaural phase differences of amplitude modulated waveforms, and this response depends upon preservation of phase information of the modulating signal. Interaural phase differences were introduced in two ways: by interaural delays of the entire waveform and by binaural beats in which there was an interaural frequency difference in the modulating waveform. Results obtained with these two methods are similar. Our results show that high-frequency cells can respond to interaural time differences of amplitude modulated signals and that they do so by a sensitivity to interaural phase differences of the modulating waveform.
Ihlefeld, Antje; Litovsky, Ruth Y
2012-01-01
Spatial release from masking refers to a benefit for speech understanding. It occurs when a target talker and a masker talker are spatially separated. In those cases, speech intelligibility for target speech is typically higher than when both talkers are at the same location. In cochlear implant listeners, spatial release from masking is much reduced or absent compared with normal hearing listeners. Perhaps this reduced spatial release occurs because cochlear implant listeners cannot effectively attend to spatial cues. Three experiments examined factors that may interfere with deploying spatial attention to a target talker masked by another talker. To simulate cochlear implant listening, stimuli were vocoded with two unique features. First, we used 50-Hz low-pass filtered speech envelopes and noise carriers, strongly reducing the possibility of temporal pitch cues; second, co-modulation was imposed on target and masker utterances to enhance perceptual fusion between the two sources. Stimuli were presented over headphones. Experiments 1 and 2 presented high-fidelity spatial cues with unprocessed and vocoded speech. Experiment 3 maintained faithful long-term average interaural level differences but presented scrambled interaural time differences with vocoded speech. Results show a robust spatial release from masking in Experiments 1 and 2, and a greatly reduced spatial release in Experiment 3. Faithful long-term average interaural level differences were insufficient for producing spatial release from masking. This suggests that appropriate interaural time differences are necessary for restoring spatial release from masking, at least for a situation where there are few viable alternative segregation cues.
2012-06-01
a listener uses to interpret the auditory environment is interaural difference cues. Interaural difference cues are perceived binaurally, and they ... signal in noise is not enough for accurate localization performance. Instead, it appears that both audibility and binaural signal processing of both ... be interpreted differently among researchers. 4. Conclusions: Accurately processed and interpreted binaural and monaural spatial cues enable a
Xiong, Xiaorui R.; Liang, Feixue; Li, Haifu; Mesik, Lukas; Zhang, Ke K.; Polley, Daniel B.; Tao, Huizhong W.; Xiao, Zhongju; Zhang, Li I.
2013-01-01
Binaural integration in the central nucleus of inferior colliculus (ICC) plays a critical role in sound localization. However, its arithmetic nature and underlying synaptic mechanisms remain unclear. Here, we showed in mouse ICC neurons that the contralateral dominance is created by a “push-pull”-like mechanism, with contralaterally dominant excitation and more bilaterally balanced inhibition. Importantly, binaural spiking response is generated apparently from an ipsilaterally-mediated scaling of contralateral response, leaving frequency tuning unchanged. This scaling effect is attributed to a divisive attenuation of contralaterally-evoked synaptic excitation onto ICC neurons with their inhibition largely unaffected. Thus, a gain control mediates the linear transformation from monaural to binaural spike responses. The gain value is modulated by interaural level difference (ILD) primarily through scaling excitation to different levels. The ILD-dependent synaptic scaling and gain adjustment allow ICC neurons to dynamically encode interaural sound localization cues while maintaining an invariant representation of other independent sound attributes. PMID:23972599
NASA Astrophysics Data System (ADS)
Nur Farid, Mifta; Arifianto, Dhany
2016-11-01
A person suffering from hearing loss can be helped by hearing aids, and binaural hearing aids offer the best performance because they most closely resemble the human auditory system. In a conversation at a cocktail party, a person can focus on a single conversation even though the background sound and other people's conversations are quite loud; this phenomenon is known as the cocktail party effect. Earlier studies have shown that binaural hearing makes an important contribution to the cocktail party effect. In this study, two sound sources are therefore separated from a binaural input recorded with 2 microphone sensors, based on both binaural cues, the interaural time difference (ITD) and the interaural level difference (ILD), using a binary mask. The ITD is estimated with a cross-correlation method, in which the ITD is represented as the time delay of the peak shift in each time-frequency unit. The binary mask is estimated from the pattern of ITD and ILD relative to the strength of the target, computed statistically using probability density estimation. The sound source separation performs well, with a speech intelligibility (percent correct words) of 86% and an SNR of 3 dB.
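A highly simplified sketch of this kind of ITD/ILD-based masking is shown below. For brevity it derives a per-unit ITD from the interaural phase of each time-frequency bin rather than from an explicit cross-correlation, and it replaces the probability-density estimation described above with simple tolerance thresholds; the STFT settings, tolerances, and target parameters are assumptions.

    import numpy as np
    from scipy.signal import stft, istft

    fs = 16000

    def itd_ild_mask(left, right, target_itd=0.0, target_ild=0.0,
                     itd_tol=200e-6, ild_tol=3.0):
        # binary mask from the per-unit ITD (via interaural phase) and ILD
        f, t, L = stft(left, fs, nperseg=512)
        _, _, R = stft(right, fs, nperseg=512)
        ipd = np.angle(L * np.conj(R))                       # interaural phase (rad)
        freqs = np.maximum(f[:, None], 1.0)                  # avoid division by zero at DC
        itd = ipd / (2 * np.pi * freqs)                      # phase converted to delay (s)
        ild = 20 * np.log10((np.abs(L) + 1e-12) / (np.abs(R) + 1e-12))
        mask = (np.abs(itd - target_itd) < itd_tol) & (np.abs(ild - target_ild) < ild_tol)
        return mask, L, R

    # usage sketch: keep the units assigned to the target and resynthesize
    # mask, L, R = itd_ild_mask(left, right)
    # _, target = istft(L * mask, fs, nperseg=512)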
Asadollahi, Ali; Endler, Frank; Nelken, Israel; Wagner, Hermann
2010-08-01
Humans and animals are able to detect signals in noisy environments. Detection improves when the noise and the signal have different interaural phase relationships. The resulting improvement in detection threshold is called the binaural masking level difference. We investigated neural mechanisms underlying the release from masking in the inferior colliculus of barn owls in low-frequency and high-frequency neurons. A tone (signal) was presented either with the same interaural time difference as the noise (masker) or at a 180 degrees phase shift as compared with the interaural time difference of the noise. The changes in firing rates induced by the addition of a signal of increasing level while masker level was kept constant were well predicted by the relative responses to the masker and signal alone. In many cases, the response at the highest signal levels was dominated by the response to the signal alone, in spite of a significant response to the masker at low signal levels, suggesting the presence of occlusion. Detection thresholds and binaural masking level differences were widely distributed. The amount of release from masking increased with increasing masker level. Narrowly tuned neurons in the central nucleus of the inferior colliculus had detection thresholds that were lower than or similar to those of broadly tuned neurons in the external nucleus of the inferior colliculus. Broadly tuned neurons exhibited higher masking level differences than narrowly tuned neurons. These data suggest that detection has different spectral requirements from localization.
NASA Astrophysics Data System (ADS)
Dye, Raymond H.; Stellmack, Mark A.; Jurcin, Noah F.
2005-05-01
Two experiments measured listeners' abilities to weight information from different components in a complex of 553, 753, and 953 Hz. The goal was to determine whether or not the ability to adjust perceptual weights generalized across tasks. Weights were measured by binary logistic regression between stimulus values that were sampled from Gaussian distributions and listeners' responses. The first task was interaural time discrimination in which listeners judged the laterality of the target component. The second task was monaural level discrimination in which listeners indicated whether the level of the target component decreased or increased across two intervals. For both experiments, each of the three components served as the target. Ten listeners participated in both experiments. The results showed that those individuals who adjusted perceptual weights in the interaural time experiment could also do so in the monaural level discrimination task. The fact that the same individuals appeared to be analytic in both tasks is an indication that the weights measure the ability to attend to a particular region of the spectrum while ignoring other spectral regions.
Gordon, Karen A.; Deighton, Michael R.; Abbasalipour, Parvaneh; Papsin, Blake C.
2014-01-01
There are significant challenges to restoring binaural hearing to children who have been deaf from an early age. The uncoordinated and poor temporal information available from cochlear implants distorts perception of interaural timing differences normally important for sound localization and listening in noise. Moreover, binaural development can be compromised by bilateral and unilateral auditory deprivation. Here, we studied perception of both interaural level and timing differences in 79 children/adolescents using bilateral cochlear implants and 16 peers with normal hearing. They were asked on which side of their head they heard unilaterally or bilaterally presented click trains or electrical pulse trains. Interaural level cues were identified by most participants, including adolescents with long periods of unilateral cochlear implant use and little bilateral implant experience. Interaural timing cues were not detected by new bilateral adolescent users, consistent with previous evidence. Evidence of binaural timing detection was, for the first time, found in children who had much longer implant experience, but it was marked by poorer than normal sensitivity and abnormally strong dependence on current level differences between implants. In addition, children with prior unilateral implant use showed a higher proportion of responses to their first implanted sides than children implanted simultaneously. These data indicate that there are functional repercussions of developing binaural hearing through bilateral cochlear implants, particularly when provided sequentially; nonetheless, children have an opportunity to use these devices to hear better in noise and gain spatial hearing. PMID:25531107
Laumen, Geneviève; Tollin, Daniel J.; Beutelmann, Rainer; Klump, Georg M.
2016-01-01
The effect of interaural time difference (ITD) and interaural level difference (ILD) on wave 4 of the binaural and summed monaural auditory brainstem responses (ABRs) as well as on the DN1 component of the binaural interaction component (BIC) of the ABR in young and old Mongolian gerbils (Meriones unguiculatus) was investigated. Measurements were made at a fixed sound pressure level (SPL) and a fixed level above visually detected ABR threshold to compensate for individual hearing threshold differences. In both stimulation modes (fixed SPL and fixed level above visually detected ABR threshold) an effect of ITD on the latency and the amplitude of wave 4 as well as of the BIC was observed. With increasing absolute ITD values BIC latencies were increased and amplitudes were decreased. ILD had a much smaller effect on these measures. Old animals showed a reduced amplitude of the DN1 component. This difference was due to a smaller wave 4 in the summed monaural ABRs of old animals compared to young animals whereas wave 4 in the binaural-evoked ABR showed no age-related difference. In old animals the small amplitude of the DN1 component was correlated with small binaural-evoked wave 1 and wave 3 amplitudes. This suggests that the reduced peripheral input affects central binaural processing which is reflected in the BIC. PMID:27173973
Anatomical limits on interaural time differences: an ecological perspective
Hartmann, William M.; Macaulay, Eric J.
2013-01-01
Human listeners, and other animals too, use interaural time differences (ITD) to localize sounds. If the sounds are pure tones, a simple frequency factor relates the ITD to the interaural phase difference (IPD), for which there are known iso-IPD boundaries, 90°, 180°… defining regions of spatial perception. In this article, iso-IPD boundaries for humans are translated into azimuths using a spherical head model (SHM), and the calculations are checked by free-field measurements. The translated boundaries provide quantitative tests of an ecological interpretation for the dramatic onset of ITD insensitivity at high frequencies. According to this interpretation, the insensitivity serves as a defense against misinformation and can be attributed to limits on binaural processing in the brainstem. Calculations show that the ecological explanation passes the tests only if the binaural brainstem properties evolved or developed consistent with heads that are 50% smaller than current adult heads. Measurements on more realistic head shapes relax that requirement only slightly. The problem posed by the discrepancy between the current head size and a smaller, ideal head size was apparently solved by the evolution or development of central processes that discount large IPDs in favor of interaural level differences. The latter become more important with increasing head size. PMID:24592209
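A worked example of the translation described above, under the common Woodworth spherical-head approximation ITD(theta) = (a/c)(theta + sin theta) with IPD = 2*pi*f*ITD; the paper's exact spherical head model, head radius, and test frequencies may differ, so the values below are illustrative assumptions.

    import numpy as np

    a = 0.0875        # assumed head radius (m)
    c = 343.0         # speed of sound (m/s)

    def itd_shm(theta):
        # Woodworth spherical-head ITD for azimuth theta (radians), monotonic on [0, pi/2]
        return (a / c) * (theta + np.sin(theta))

    def azimuth_for_ipd(ipd_deg, freq):
        # azimuth (deg) at which a pure tone of `freq` Hz reaches the given iso-IPD boundary
        target_itd = np.deg2rad(ipd_deg) / (2 * np.pi * freq)
        thetas = np.linspace(0.0, np.pi / 2, 10000)
        itds = itd_shm(thetas)
        if target_itd > itds[-1]:
            return None            # boundary not reachable for this head size
        return np.rad2deg(thetas[np.searchsorted(itds, target_itd)])

    # e.g., azimuth of the 90-degree iso-IPD boundary for a 1200-Hz tone
    print(azimuth_for_ipd(90.0, 1200.0))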
Effectiveness of Interaural Delays Alone as Cues During Dynamic Sound Localization
NASA Technical Reports Server (NTRS)
Wenzel, Elizabeth M.; Null, Cynthia H. (Technical Monitor)
1996-01-01
The contribution of interaural time differences (ITDs) to the localization of virtual sound sources with and without head motion was examined. Listeners estimated the apparent azimuth, elevation and distance of virtual sources presented over headphones. Stimuli (3 sec., white noise) were synthesized from minimum-phase representations of nonindividualized head-related transfer functions (HRTFs); binaural magnitude spectra were derived from the minimum phase estimates and ITDs were represented as a pure delay. During dynamic conditions, listeners were encouraged to move their heads; head position was tracked and stimuli were synthesized in real time using a Convolvotron to simulate a stationary external sound source. Two synthesis conditions were tested: (1) both interaural level differences (ILDs) and ITDs correctly correlated with source location and head motion, (2) ITDs correct, no ILDs (flat magnitude spectrum). Head movements reduced azimuth confusions primarily when interaural cues were correctly correlated, although a smaller effect was also seen for ITDs alone. Externalization was generally poor for ITD-only conditions and was enhanced by head motion only for normal HRTFs. Overall the data suggest that, while ITDs alone can provide a significant cue for azimuth, the errors most commonly associated with virtual sources are reduced by location-dependent magnitude cues.
Yin, T C; Kuwada, S
1983-10-01
We used the binaural beat stimulus to study the interaural phase sensitivity of inferior colliculus (IC) neurons in the cat. The binaural beat, produced by delivering tones of slightly different frequencies to the two ears, generates continuous and graded changes in interaural phase. Over 90% of the cells that exhibit a sensitivity to changes in the interaural delay also show a sensitivity to interaural phase disparities with the binaural beat. Cells respond with a burst of impulses with each complete cycle of the beat frequency. The period histogram obtained by binning the poststimulus time histogram on the beat frequency gives a measure of the interaural phase sensitivity of the cell. In general, there is good correspondence in the shapes of the period histograms generated from binaural beats and the interaural phase curves derived from interaural delays and in the mean interaural phase angle calculated from them. The magnitude of the beat frequency determines the rate of change of interaural phase and the sign determines the direction of phase change. While most cells respond in a phase-locked manner up to beat frequencies of 10 Hz, there are some cells that will phase lock up to 80 Hz. Beat frequency and mean interaural phase angle are linearly related for most cells. Most cells respond equally in the two directions of phase change and with different rates of change, at least up to 10 Hz. However, some IC cells exhibit marked sensitivity to the speed of phase change, either responding more vigorously at low beat frequencies or at high beat frequencies. In addition, other cells demonstrate a clear directional sensitivity. The cells that show sensitivity to the direction and speed of phase changes would be expected to demonstrate a sensitivity to moving sound sources in the free field. Changes in the mean interaural phase of the binaural beat period histograms are used to determine the effects of changes in average and interaural intensity on the phase sensitivity of the cells. The effects of both forms of intensity variation are continuously distributed. The binaural beat offers a number of advantages for studying the interaural phase sensitivity of binaural cells. The dynamic characteristics of the interaural phase can be varied so that the speed and direction of phase change are under direct control. The data can be obtained in a much more efficient manner, as the binaural beat is about 10 times faster in terms of data collection than the interaural delay.
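The period-histogram analysis described above can be sketched as follows: spike times recorded during a binaural-beat stimulus are folded onto the beat period, and the mean interaural phase angle and vector strength are obtained by circular statistics. The spike times below are random placeholders, and the beat frequency and bin count are assumptions.

    import numpy as np

    beat_freq = 1.0                                  # beat frequency (Hz), assumed
    spike_times = np.random.uniform(0, 10, 200)      # placeholder spike times (s)

    # period histogram: spike times binned on one cycle of the beat frequency
    phase = (spike_times * beat_freq) % 1.0          # interaural phase in cycles (0..1)
    hist, edges = np.histogram(phase, bins=50, range=(0.0, 1.0))

    # circular statistics: mean interaural phase angle and vector strength
    vectors = np.exp(2j * np.pi * phase)
    mean_phase = (np.angle(vectors.mean()) / (2 * np.pi)) % 1.0
    vector_strength = np.abs(vectors.mean())

    print("mean interaural phase = %.3f cycles, vector strength = %.2f"
          % (mean_phase, vector_strength))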
Haywood, Nicholas R; Undurraga, Jaime A; Marquardt, Torsten; McAlpine, David
2015-12-30
There has been continued interest in clinical objective measures of binaural processing. One commonly proposed measure is the binaural interaction component (BIC), which is obtained typically by recording auditory brainstem responses (ABRs); the BIC reflects the difference between the binaural ABR and the sum of the monaural ABRs (i.e., binaural - (left + right)). We have recently developed an alternative, direct measure of sensitivity to interaural time differences, namely, a following response to modulations in interaural phase difference (the interaural phase modulation following response; IPM-FR). To obtain this measure, an ongoing diotically amplitude-modulated signal is presented, and the interaural phase difference of the carrier is switched periodically at minima in the modulation cycle. Such periodic modulations to interaural phase difference can evoke a steady state following response. BIC and IPM-FR measurements were compared from 10 normal-hearing subjects using a 16-channel electroencephalographic system. Both ABRs and IPM-FRs were observed most clearly from similar electrode locations: differential recordings taken from electrodes near the ear (e.g., mastoid) in reference to a vertex electrode (Cz). Although all subjects displayed clear ABRs, the BIC was not reliably observed. In contrast, the IPM-FR typically elicited a robust and significant response. In addition, the IPM-FR measure required a considerably shorter recording session. As the IPM-FR magnitude varied with interaural phase difference modulation depth, it could potentially serve as a correlate of perceptual salience. Overall, the IPM-FR appears a more suitable clinical measure than the BIC. © The Author(s) 2015.
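A minimal sketch of an interaural-phase-modulation stimulus of the kind described above: a diotically amplitude-modulated tone whose carrier interaural phase difference is switched between two values at minima of the modulation cycle. The carrier and modulation frequencies, the IPD values, and the switching period are illustrative assumptions, not the parameters used in the study.

    import numpy as np

    fs = 48000
    dur = 2.0
    fc = 520.0             # carrier frequency (Hz), assumed
    fm = 40.0              # amplitude-modulation rate (Hz), assumed
    ipd = np.pi / 2        # carrier IPD alternates between +ipd/2 and -ipd/2, assumed
    cycles_per_state = 4   # switch the IPD every 4 modulation cycles, assumed

    t = np.arange(int(fs * dur)) / fs
    am = 0.5 * (1.0 - np.cos(2 * np.pi * fm * t))     # diotic modulator, zeros at its minima

    # the IPD state flips only at modulation minima (integer multiples of the period)
    state = np.floor(t * fm / cycles_per_state).astype(int) % 2
    offset = np.where(state == 0, +ipd / 2, -ipd / 2)

    left = am * np.sin(2 * np.pi * fc * t + offset)
    right = am * np.sin(2 * np.pi * fc * t - offset)  # same envelope, IPD in the carrier only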
Zheng, Jianwen; Lu, Jing; Chen, Kai
2013-07-01
Several methods have been proposed for the generation of a focused source, usually a virtual monopole source positioned between the loudspeaker array and the listener. The problem of pre-echoes in the common analytical methods has been noted, and the most concise method to cope with this problem is the angular weight method. In this paper, the interaural time and level differences, which are closely related to the localization cues of the human auditory system, are used to further investigate the effectiveness of the focused source generation methods. It is demonstrated that the combination of the angular weight method and the numerical pressure matching method has comparatively better performance in a given reconstructed area.
Masking Level Difference Response Norms from Learning Disabled Individuals.
ERIC Educational Resources Information Center
Waryas, Paul A.; Battin, R. Ray
1985-01-01
The study presents normative data on Masking Level Difference (an improvement of the auditory processing of interaural time/intensity differences between signals and masking noises) for 90 learning disabled persons (4-35 years old). It was concluded that the MLD may quickly screen for auditory processing problems. (CL)
Backus, B; Adiloğlu, K; Herzke, T
2015-12-30
We present the first portable, binaural, real-time research platform compatible with Oticon Medical SP and XP generation cochlear implants. The platform consists of (a) a pair of behind-the-ear devices, each containing front and rear calibrated microphones, (b) a four-channel USB analog-to-digital converter, (c) real-time PC-based sound processing software called the Master Hearing Aid, and (d) USB-connected hardware and output coils capable of driving two implants simultaneously. The platform is capable of processing signals from the four microphones simultaneously and producing synchronized binaural cochlear implant outputs that drive two (bilaterally implanted) SP or XP implants. Both audio signal preprocessing algorithms (such as binaural beamforming) and novel binaural stimulation strategies (within the implant limitations) can be programmed by researchers. When the whole research platform is combined with Oticon Medical SP implants, interaural electrode timing can be controlled on individual electrodes to within ±1 µs and interaural electrode energy differences can be controlled to within ±2%. Hence, this new platform is particularly well suited to performing experiments related to interaural time differences in combination with interaural level differences in real-time. The platform also supports instantaneously variable stimulation rates and thereby enables investigations such as the effect of changing the stimulation rate on pitch perception. Because the processing can be changed on the fly, researchers can use this platform to study perceptual changes resulting from different processing strategies acutely. © The Author(s) 2015.
Rakerd, Brad; Hartmann, William M.
2010-01-01
Binaural recordings of noise in rooms were used to determine the relationship between binaural coherence and the effectiveness of the interaural time difference (ITD) as a cue for human sound localization. Experiments showed a strong, monotonic relationship between the coherence and a listener’s ability to discriminate values of ITD. The relationship was found to be independent of other, widely varying acoustical properties of the rooms. However, the relationship varied dramatically with noise band center frequency. The ability to discriminate small ITD changes was greatest for a mid-frequency band. To achieve sensitivity comparable to mid-band, the binaural coherence had to be much larger at high frequency, where waveform ITD cues are imperceptible, and also at low frequency, where the binaural coherence in a room is necessarily large. Rivalry experiments with opposing interaural level differences (ILDs) found that the trading ratio between ITD and ILD increasingly favored the ILD as coherence decreased, suggesting that the perceptual weight of the ITD is decreased by increased reflections in rooms. PMID:21110600
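Binaural coherence, as used in the study above, is commonly computed as the maximum of the normalized interaural cross-correlation within a limited lag range; the lag of that maximum also gives an ITD estimate. The sketch below shows that computation; the ±1-ms lag window is an assumption.

    import numpy as np

    def binaural_coherence(left, right, fs, max_lag_s=1e-3):
        # maximum of the normalized interaural cross-correlation within +/- max_lag_s
        left = left - left.mean()
        right = right - right.mean()
        corr = np.correlate(left, right, mode='full')
        lags = np.arange(-len(right) + 1, len(left))
        keep = np.abs(lags) <= int(max_lag_s * fs)
        norm = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
        coherence = np.max(corr[keep]) / norm
        best_itd = lags[keep][np.argmax(corr[keep])] / fs   # lag of the maximum (s)
        return coherence, best_itd

    # usage: coherence, itd = binaural_coherence(left_channel, right_channel, fs=44100)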
Binaural sensitivity in children who use bilateral cochlear implants.
Ehlers, Erica; Goupell, Matthew J; Zheng, Yi; Godar, Shelly P; Litovsky, Ruth Y
2017-06-01
Children who are deaf and receive bilateral cochlear implants (BiCIs) perform better on spatial hearing tasks using bilateral rather than unilateral inputs; however, they underperform relative to normal-hearing (NH) peers. This gap in performance is multi-factorial, including the inability of speech processors to reliably deliver binaural cues. Although much is known regarding binaural sensitivity of adults with BiCIs, less is known about how binaural sensitivity develops in children with BiCIs compared to NH children. Sixteen children (ages 9-17 years) were tested using synchronized research processors. Interaural time differences and interaural level differences (ITDs and ILDs, respectively) were presented to pairs of pitch-matched electrodes. Stimuli were 300-ms, 100-pulses-per-second, constant-amplitude pulse trains. In the first and second experiments, discrimination of interaural cues (either ITDs or ILDs) was measured using a two-interval left/right task. In the third experiment, subjects reported the perceived intracranial position of ITDs and ILDs in a lateralization task. All children demonstrated sensitivity to ILDs, possibly due to monaural level cues. Children who were born deaf had weak or absent sensitivity to ITDs; in contrast, ITD sensitivity was noted in children with previous exposure to acoustic hearing. Therefore, factors such as auditory deprivation, in particular, lack of early exposure to consistent timing differences between the ears, may delay the maturation of binaural circuits and cause insensitivity to binaural differences.
Neural tuning matches frequency-dependent time differences between the ears
Benichoux, Victor; Fontaine, Bertrand; Franken, Tom P; Karino, Shotaro; Joris, Philip X; Brette, Romain
2015-01-01
The time it takes a sound to travel from source to ear differs between the ears and creates an interaural delay. It varies systematically with spatial direction and is generally modeled as a pure time delay, independent of frequency. In acoustical recordings, we found that interaural delay varies with frequency at a fine scale. In physiological recordings of midbrain neurons sensitive to interaural delay, we found that preferred delay also varies with sound frequency. Similar observations reported earlier were not incorporated in a functional framework. We find that the frequency dependence of acoustical and physiological interaural delays are matched in key respects. This suggests that binaural neurons are tuned to acoustical features of ecological environments, rather than to fixed interaural delays. Using recordings from the nerve and brainstem we show that this tuning may emerge from neurons detecting coincidences between input fibers that are mistuned in frequency. DOI: http://dx.doi.org/10.7554/eLife.06072.001 PMID:25915620
de Taillez, Tobias; Grimm, Giso; Kollmeier, Birger; Neher, Tobias
2018-06-01
The aims were to investigate the influence of an algorithm designed to enhance or magnify interaural difference cues on speech signals in noisy, spatially complex conditions, using both technical and perceptual measurements, and to investigate the combination of interaural magnification (IM), monaural microphone directionality (DIR), and binaural coherence-based noise reduction (BC). Speech-in-noise stimuli were generated using virtual acoustics. A computational model of binaural hearing was used to analyse the spatial effects of IM. Predicted speech quality changes and signal-to-noise-ratio (SNR) improvements were also considered. Additionally, a listening test was carried out to assess speech intelligibility and quality. Participants were listeners aged 65-79 years with and without sensorineural hearing loss (N = 10 each). IM increased the horizontal separation of concurrent directional sound sources without introducing any major artefacts. In situations with diffuse noise, however, the interaural difference cues were distorted. Preprocessing the binaural input signals with DIR reduced this distortion. IM influenced neither speech intelligibility nor speech quality. The IM algorithm tested here failed to improve speech perception in noise, probably because of the dispersion and inconsistent magnification of interaural difference cues in complex environments.
Interaural intensity difference limen.
DOT National Transportation Integrated Search
1967-05-01
The ability to judge the direction (the azimuth) of a sound source and to discriminate it from others is often essential to flyers. A major factor in the judgment process is the interaural intensity difference that the pilot can perceive. Three kinds...
Comparison of Interaural Electrode Pairing Methods for Bilateral Cochlear Implants
Dietz, Mathias
2015-01-01
In patients with bilateral cochlear implants (CIs), pairing matched interaural electrodes and stimulating them with the same frequency band is expected to facilitate binaural functions such as binaural fusion, localization, and spatial release from masking. Because clinical procedures typically do not include patient-specific interaural electrode pairing, it remains the case that each electrode is allocated to a generic frequency range, based simply on the electrode number. Two psychoacoustic techniques for determining interaurally paired electrodes have been demonstrated in several studies: interaural pitch comparison and interaural time difference (ITD) sensitivity. However, these two methods are rarely, if ever, compared directly. A third, more objective method is to assess the amplitude of the binaural interaction component (BIC) derived from electrically evoked auditory brainstem responses for different electrode pairings, a method that has been demonstrated to be a potential candidate for bilateral CI users. Here, we tested all three measures in the same eight CI users. We found good correspondence between the electrode pair producing the largest BIC and the electrode pair producing the maximum ITD sensitivity. The correspondence between the pairs producing the largest BIC and the pitch-matched electrode pairs was considerably weaker, supporting the previously proposed hypothesis that whilst place pitch might adapt over time to accommodate mismatched inputs, sensitivity to ITDs does not adapt to the same degree. PMID:26631108
Evaluation of a method for enhancing interaural level differences at low frequencies.
Moore, Brian C J; Kolarik, Andrew; Stone, Michael A; Lee, Young-Woo
2016-10-01
A method (called binaural enhancement) for enhancing interaural level differences at low frequencies, based on estimates of interaural time differences, was developed and evaluated. Five conditions were compared, all using simulated hearing-aid processing: (1) Linear amplification with frequency-response shaping; (2) binaural enhancement combined with linear amplification and frequency-response shaping; (3) slow-acting four-channel amplitude compression with independent compression at the two ears (AGC4CH); (4) binaural enhancement combined with four-channel compression (BE-AGC4CH); and (5) four-channel compression but with the compression gains synchronized across ears. Ten hearing-impaired listeners were tested, and gains and compression ratios for each listener were set to match targets prescribed by the CAM2 fitting method. Stimuli were presented via headphones, using virtualization methods to simulate listening in a moderately reverberant room. The intelligibility of speech at ±60° azimuth in the presence of competing speech on the opposite side of the head at ±60° azimuth was not affected by the binaural enhancement processing. Sound localization was significantly better for condition BE-AGC4CH than for condition AGC4CH for a sentence, but not for broadband noise, lowpass noise, or lowpass amplitude-modulated noise. The results suggest that the binaural enhancement processing can improve localization for sounds with distinct envelope fluctuations.
Bibee, Jacqueline M.; Stecker, G. Christopher
2016-01-01
Spatial judgments are often dominated by low-frequency binaural cues and onset cues when binaural cues vary across the spectrum and duration, respectively, of a brief sound. This study combined these dimensions to assess the spectrotemporal weighting of binaural information. Listeners discriminated target interaural time difference (ITD) and interaural level difference (ILD) carried by the onset, offset, or full duration of a 4-kHz Gabor click train with a 2-ms period in the presence or absence of a diotic 500-Hz interferer tone. ITD and ILD thresholds were significantly elevated by the interferer in all conditions and by a similar amount to previous reports for static cues. Binaural interference was dramatically greater for ITD targets lacking onset cues compared to onset and full-duration conditions. Binaural interference for ILD targets was similar across dynamic-cue conditions. These effects mirror the baseline discriminability of dynamic ITD and ILD cues [Stecker and Brown. (2010). J. Acoust. Soc. Am. 127, 3092–3103], consistent with stronger interference for less-robust/higher-variance cues. The results support the view that binaural cue integration occurs simultaneously across multiple variance-weighted dimensions, including time and frequency. PMID:27794286
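The stimulus family described above can be sketched as a 4-kHz Gabor click train with a 2-ms inter-click interval, in which an ITD or ILD is applied either to all clicks or only to the first or last click. The number of clicks, the Gabor envelope width, and the definition of "onset"/"offset" as a single click are assumptions for illustration, not the authors' stimulus code.

    import numpy as np

    fs = 48000
    fc = 4000.0               # Gabor carrier frequency (Hz)
    period = 0.002            # inter-click interval (s)
    n_clicks = 16             # number of clicks, assumed
    sigma = 0.0002            # Gabor envelope width (s), assumed
    click_len = int(0.002 * fs)

    def gabor_click(delay=0.0, gain_db=0.0):
        t = np.arange(click_len) / fs - 0.001
        env = np.exp(-0.5 * ((t - delay) / sigma) ** 2)
        return env * np.cos(2 * np.pi * fc * (t - delay)) * 10 ** (gain_db / 20)

    def click_train(itd=0.0, ild_db=0.0, where='full'):
        # apply the interaural cue to the first, last, or all clicks ('onset'/'offset'/'full')
        n = n_clicks * int(period * fs) + click_len
        left, right = np.zeros(n), np.zeros(n)
        for k in range(n_clicks):
            start = k * int(period * fs)
            cued = (where == 'full' or (where == 'onset' and k == 0)
                    or (where == 'offset' and k == n_clicks - 1))
            d = itd if cued else 0.0
            g = ild_db if cued else 0.0
            left[start:start + click_len] += gabor_click(0.0, +g / 2)
            right[start:start + click_len] += gabor_click(d, -g / 2)
        return np.stack([left, right], axis=1)

    stimulus = click_train(itd=300e-6, where='onset')   # e.g., an onset-only ITD condition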
2014-01-01
Localizing a sound source requires the auditory system to determine its direction and its distance. In general, hearing-impaired listeners do less well in experiments measuring localization performance than normal-hearing listeners, and hearing aids often exacerbate matters. This article summarizes the major experimental effects in direction (and its underlying cues of interaural time differences and interaural level differences) and distance for normal-hearing, hearing-impaired, and aided listeners. Front/back errors and the importance of self-motion are noted. The influence of vision on the localization of real-world sounds is emphasized, such as through the ventriloquist effect or the intriguing link between spatial hearing and visual attention. PMID:25492094
Sound source localization inspired by the ears of the Ormia ochracea
NASA Astrophysics Data System (ADS)
Kuntzman, Michael L.; Hall, Neal A.
2014-07-01
The parasitoid fly Ormia ochracea has the remarkable ability to locate crickets using audible sound. This ability is, in fact, remarkable, as the fly's hearing mechanism spans only 1.5 mm, about 50× smaller than the wavelength of sound emitted by the cricket. The hearing mechanism is, for all practical purposes, a point in space with no significant interaural time or level differences to draw from. It has been discovered that evolution has empowered the fly with a hearing mechanism that utilizes multiple vibration modes to amplify interaural time and level differences. Here, we present a fully integrated, man-made mimic of the Ormia's hearing mechanism capable of replicating the remarkable sound localization ability of the special fly. A silicon-micromachined prototype is presented which uses multiple piezoelectric sensing ports to simultaneously transduce two orthogonal vibration modes of the sensing structure, thereby enabling simultaneous measurement of sound pressure and pressure gradient.
Binaural enhancement for bilateral cochlear implant users.
Brown, Christopher A
2014-01-01
Bilateral cochlear implant (BCI) users receive limited binaural cues and, thus, show little improvement to speech intelligibility from spatial cues. The feasibility of a method for enhancing the binaural cues available to BCI users is investigated. This involved extending interaural differences of levels, which typically are restricted to high frequencies, into the low-frequency region. Speech intelligibility was measured in BCI users listening over headphones and with direct stimulation, with a target talker presented to one side of the head in the presence of a masker talker on the other side. Spatial separation was achieved by applying either naturally occurring binaural cues or enhanced cues. In this listening configuration, BCI patients showed greater speech intelligibility with the enhanced binaural cues than with naturally occurring binaural cues. In some situations, it is possible for BCI users to achieve greater speech intelligibility when binaural cues are enhanced by applying interaural differences of levels in the low-frequency region.
Localization by interaural time difference (ITD): Effects of interaural frequency mismatch
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonham, B.H.; Lewis, E.R.
1999-07-01
A commonly accepted physiological model for lateralization of low-frequency sounds by interaural time delay (ITD) stipulates that binaural comparison neurons receive input from frequency-matched channels from each ear. Here, the effects of hypothetical interaural frequency mismatches on this model are reported. For this study, the cat's auditory system peripheral to the binaural comparison neurons was represented by a neurophysiologically derived model, and binaural comparison neurons were represented by cross-correlators. The results of the study indicate that, for binaural comparison neurons receiving input from one cochlear channel from each ear, interaural CF mismatches may serve to either augment or diminish the effective difference in ipsilateral and contralateral axonal time delays from the periphery to the binaural comparison neuron. The magnitude of this increase or decrease in the effective time delay difference can be up to 400 μs for CF mismatches of 0.2 octaves or less for binaural neurons with CFs between 250 Hz and 2.5 kHz. For binaural comparison neurons with nominal CFs near 500 Hz, the 25-μs effective time delay difference caused by a 0.012-octave CF mismatch is equal to the ITD previously shown to be behaviorally sufficient for the cat to lateralize a low-frequency sound source. © 1999 Acoustical Society of America.
Binaural comodulation masking release: Effects of masker interaural correlation
Hall, Joseph W.; Buss, Emily; Grose, John H.
2007-01-01
Binaural detection was examined for a signal presented in a narrow band of noise centered on the on-signal masking band (OSB) or in the presence of flanking noise bands that were random or comodulated with respect to the OSB. The noise had an interaural correlation of 1.0 (No), 0.99 or 0.95. In No noise, random flanking bands worsened Sπ detection and comodulated bands improved Sπ detection for some listeners but had no effect for other listeners. For the 0.99 or 0.95 interaural correlation conditions, random flanking bands were less detrimental to Sπ detection and comodulated flanking bands improved Sπ detection for all listeners. Analyses based on signal detection theory indicated that the improvement in Sπ thresholds obtained with comodulated bands was not compatible with an optimal combination of monaural and binaural cues or to across-frequency analyses of dynamic interaural phase differences. Two accounts consistent with the improvement in Sπ thresholds in comodulated noise were (1) envelope information carried by the flanking bands improves the weighting of binaural cues associated with the signal; (2) the auditory system is sensitive to across-frequency differences in ongoing interaural correlation. PMID:17225415
Zirn, Stefan; Arndt, Susan; Aschendorff, Antje; Laszig, Roland; Wesarg, Thomas
2016-01-01
The ability to detect a target signal masked by noise is improved in normal-hearing listeners when interaural phase differences (IPDs) between the ear signals exist either in the masker or in the signal. To improve binaural hearing in bilaterally implanted cochlear implant (BiCI) users, a coding strategy providing the best possible access to IPD is highly desirable. In this study, we compared two coding strategies in BiCI users provided with CI systems from MED-EL (Innsbruck, Austria). The CI systems were bilaterally programmed either with the fine structure processing strategy FS4 or with the constant rate strategy high definition continuous interleaved sampling (HDCIS). Familiarization periods between 6 and 12 weeks were considered. The effect of IPD was measured in two types of experiments: (a) IPD detection thresholds with tonal signals addressing mainly one apical interaural electrode pair and (b) with speech in noise in terms of binaural speech intelligibility level differences (BILD) addressing multiple electrodes bilaterally. The results in (a) showed improved IPD detection thresholds with FS4 compared with HDCIS in four out of the seven BiCI users. In contrast, 12 BiCI users in (b) showed similar BILD with FS4 (0.6 ± 1.9 dB) and HDCIS (0.5 ± 2.0 dB). However, no correlation between results in (a) and (b) both obtained with FS4 was found. In conclusion, the degree of IPD sensitivity determined on an apical interaural electrode pair was not an indicator for BILD based on bilateral multielectrode stimulation. PMID:27659487
Growth in Head Size during Infancy: Implications for Sound Localization.
ERIC Educational Resources Information Center
Clifton, Rachel K.; And Others
1988-01-01
Compared head circumference and interaural distance in infants between birth and 22 weeks of age and in a small sample of preschool children and adults. Calculated changes in interaural time differences according to age. Found a large shift in distance. (SKC)
Failure of the precedence effect with a noise-band vocoder
Seeber, Bernhard U.; Hafter, Ervin R.
2011-01-01
The precedence effect (PE) describes the ability to localize a direct, leading sound correctly when its delayed copy (lag) is present, though not separately audible. The relative contribution of binaural cues in the temporal fine structure (TFS) of lead–lag signals was compared to that of interaural level differences (ILDs) and interaural time differences (ITDs) carried in the envelope. In a localization dominance paradigm participants indicated the spatial location of lead–lag stimuli processed with a binaural noise-band vocoder whose noise carriers introduced random TFS. The PE appeared for noise bursts of 10 ms duration, indicating dominance of envelope information. However, for three test words the PE often failed even at short lead–lag delays, producing two images, one toward the lead and one toward the lag. When interaural correlation in the carrier was increased, the images appeared more centered, but often remained split. Although previous studies suggest dominance of TFS cues, no image is lateralized in accord with the ITD in the TFS. An interpretation in the context of auditory scene analysis is proposed: By replacing the TFS with that of noise the auditory system loses the ability to fuse lead and lag into one object, and thus to show the PE. PMID:21428515
Auditory cortical neurons are sensitive to static and continuously changing interaural phase cues.
Reale, R A; Brugge, J F
1990-10-01
1. The interaural-phase-difference (IPD) sensitivity of single neurons in the primary auditory (AI) cortex of the anesthetized cat was studied at stimulus frequencies ranging from 120 to 2,500 Hz. Best frequencies of the 43 AI cells sensitive to IPD ranged from 190 to 2,400 Hz. 2. A static IPD was produced when a pair of low-frequency tone bursts, differing from one another only in starting phase, were presented dichotically. The resulting IPD-sensitivity curves, which plot the number of discharges evoked by the binaural signal as a function of IPD, were deeply modulated circular functions. IPD functions were analyzed for their mean vector length (r) and mean interaural phase (phi). Phase sensitivity was relatively independent of best frequency (BF) but highly dependent on stimulus frequency. Regardless of BF or stimulus frequency within the excitatory response area the majority of cells fired maximally when the ipsilateral tone lagged the contralateral signal and fired least when this interaural-phase relationship was reversed. 3. Sensitivity to continuously changing IPD was studied by delivering to the two ears 3-s tones that differed slightly in frequency, resulting in a binaural beat. Approximately 26% of the cells that showed a sensitivity to static changes in IPD also showed a sensitivity to dynamically changing IPD created by this binaural tonal combination. The discharges were highly periodic and tightly synchronized to a particular phase of the binaural beat cycle. High synchrony can be attributed to the fact that cortical neurons typically respond to an excitatory stimulus with but a single spike that is often precisely timed to stimulus onset. A period histogram, binned on the binaural beat frequency (fb), produced an equivalent IPD-sensitivity function for dynamically changing interaural phase. For neurons sensitive to both static and continuously changing interaural phase there was good correspondence between their static (phi s) and dynamic (phi d) mean interaural phases. 4. All cells responding to a dynamically changing stimulus exhibited a linear relationship between mean interaural phase and beat frequency. Most cells responded equally well to binaural beats regardless of the initial direction of phase change. For a fixed duration stimulus, and at relatively low fb, the number of spikes evoked increased with increasing fb, reflecting the increasing number of effective stimulus cycles. At higher fb, AI neurons were unable to follow the rate at which the most effective phase repeated itself during the 3 s of stimulation.(ABSTRACT TRUNCATED AT 400 WORDS)
NASA Astrophysics Data System (ADS)
SAKAI, H.; HOTEHAMA, T.; ANDO, Y.; PRODI, N.; POMPOLI, R.
2002-02-01
Measurements of railway noise were conducted by use of a diagnostic system of regional environmental noise. The system is based on the model of the human auditory-brain system. The model consists of the interplay of autocorrelators and an interaural crosscorrelator acting on the pressure signals arriving at the ear entrances, and takes into account the specialization of left and right human cerebral hemispheres. Different kinds of railway noise were measured through binaural microphones of a dummy head. To characterize the railway noise, physical factors, extracted from the autocorrelation functions (ACF) and interaural crosscorrelation function (IACF) of binaural signals, were used. The factors extracted from ACF were (1) energy represented at the origin of the delay, Φ(0), (2) effective duration of the envelope of the normalized ACF, τe, (3) the delay time of the first peak, τ1, and (4) its amplitude, φ1. The factors extracted from IACF were (5) IACC, (6) interaural delay time at which the IACC is defined, τIACC, and (7) width of the IACF at τIACC, WIACC. The factor Φ(0) can be represented as a geometrical mean of energies at both ears, i.e., the listening level, LL.
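The IACF factors listed above can be approximated directly from a two-channel recording. The sketch below is a minimal illustration (assuming NumPy; the function name, the ±1-ms lag range, and the test signal are illustrative choices, not part of the diagnostic system described in the abstract): it returns the IACC and τIACC, with Φ(0) of each channel appearing as the zero-lag energy used for normalization.

```python
import numpy as np

def iacf_factors(left, right, fs, max_lag_ms=1.0):
    """Normalized interaural cross-correlation over +/- max_lag_ms.

    Returns (IACC, tau_IACC in ms). Phi(0) of each channel is its
    zero-lag autocorrelation, i.e. the signal energy used to normalize.
    """
    max_lag = int(fs * max_lag_ms / 1000.0)
    energy_l = np.dot(left, left)            # Phi_ll(0)
    energy_r = np.dot(right, right)          # Phi_rr(0)
    norm = np.sqrt(energy_l * energy_r)
    lags = np.arange(-max_lag, max_lag + 1)
    iacf = np.array([np.dot(left, np.roll(right, lag)) for lag in lags]) / norm
    k = int(np.argmax(np.abs(iacf)))
    return np.abs(iacf[k]), 1000.0 * lags[k] / fs

# Example: a noise burst presented diotically except for a 0.5-ms delay.
fs = 48000
noise = np.random.randn(fs // 2)
left, right = noise, np.roll(noise, int(0.0005 * fs))
iacc, tau = iacf_factors(left, right, fs)
print(f"IACC = {iacc:.2f}, tau_IACC = {tau:.2f} ms")  # ~1.00 and +/-0.50 ms
```

For the delayed-copy example the IACC comes out at 1.0 and τIACC at the imposed 0.5-ms delay, with a sign that only reflects which ear leads.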
Tympanic-response transition in ICE: Dependence upon the interaural cavity's shape
NASA Astrophysics Data System (ADS)
van Hemmen, J. Leo
More than half of the terrestrial vertebrates have internally coupled ears (ICE), where an interaural cavity of some shape acoustically couples the eardrums. Hence what the animal's auditory system perceives is not the outside stimulus but the superposition of outside and internal pressure on the two eardrums, resulting in so-called internal time and level difference, iTD and iLD, which are keys to sound localization. For a cylindrical shape, it is known that on the frequency axis two domains with appreciably increased iTD and iLD values occur, segregated by the eardrum's fundamental frequency. Here we analyze the case where, as in nature, two or more canals couple the eardrums so that, by opening one of the canals, the animal can switch from coupled to two independent ears. We analyze the iTD/iLD transition and its dependence upon the interaural cavity's size and shape. As compared to a single connection, the iTD performance is preserved to a large extent. Nonetheless, the price to pay for freedom of choice is a reduced frequency range with high-iTD plateau. Work done in collaboration with A.P. Vedurmudi; partially supported by BCCN-Munich.
Nilsson, Mats E; Schenkman, Bo N
2016-02-01
Blind people use auditory information to locate sound sources and sound-reflecting objects (echolocation). Sound source localization benefits from the hearing system's ability to suppress distracting sound reflections, whereas echolocation would benefit from "unsuppressing" these reflections. To clarify how these potentially conflicting aspects of spatial hearing interact in blind versus sighted listeners, we measured discrimination thresholds for two binaural location cues: inter-aural level differences (ILDs) and inter-aural time differences (ITDs). The ILDs or ITDs were present in single clicks, in the leading component of click pairs, or in the lagging component of click pairs, exploiting processes related to both sound source localization and echolocation. We tested 23 blind (mean age = 54 y), 23 sighted-age-matched (mean age = 54 y), and 42 sighted-young (mean age = 26 y) listeners. The results suggested greater ILD sensitivity for blind than for sighted listeners. The blind group's superiority was particularly evident for ILD-lag-click discrimination, suggesting not only enhanced ILD sensitivity in general but also increased ability to unsuppress lagging clicks. This may be related to the blind person's experience of localizing reflected sounds, for which ILDs may be more efficient than ITDs. On the ITD-discrimination tasks, the blind listeners performed better than the sighted age-matched listeners, but not better than the sighted young listeners. ITD sensitivity declines with age, and the equal performance of the blind listeners compared to a group of substantially younger listeners is consistent with the notion that blind people's experience may offset age-related decline in ITD sensitivity. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
Hearing in three dimensions: Sound localization
NASA Technical Reports Server (NTRS)
Wightman, Frederic L.; Kistler, Doris J.
1990-01-01
The ability to localize a source of sound in space is a fundamental component of the three dimensional character of the sound of audio. For over a century scientists have been trying to understand the physical and psychological processes and physiological mechanisms that subserve sound localization. This research has shown that important information about sound source position is provided by interaural differences in time of arrival, interaural differences in intensity and direction-dependent filtering provided by the pinnae. Progress has been slow, primarily because experiments on localization are technically demanding. Control of stimulus parameters and quantification of the subjective experience are quite difficult problems. Recent advances, such as the ability to simulate a three dimensional sound field over headphones, seem to offer potential for rapid progress. Research using the new techniques has already produced new information. It now seems that interaural time differences are a much more salient and dominant localization cue than previously believed.
Mc Laughlin, Myles; Chabwine, Joelle Nsimire; van der Heijden, Marcel; Joris, Philip X
2008-10-01
To localize low-frequency sounds, humans rely on an interaural comparison of the temporally encoded sound waveform after peripheral filtering. This process can be compared with cross-correlation. For a broadband stimulus, after filtering, the correlation function has a damped oscillatory shape where the periodicity reflects the filter's center frequency and the damping reflects the bandwidth (BW). The physiological equivalent of the correlation function is the noise delay (ND) function, which is obtained from binaural cells by measuring response rate to broadband noise with varying interaural time delays (ITDs). For monaural neurons, delay functions are obtained by counting coincidences for varying delays across spike trains obtained to the same stimulus. Previously, we showed that BWs in monaural and binaural neurons were similar. However, earlier work showed that the damping of delay functions differs significantly between these two populations. Here, we address this paradox by looking at the role of sensitivity to changes in interaural correlation. We measured delay and correlation functions in the cat inferior colliculus (IC) and auditory nerve (AN). We find that, at a population level, AN and IC neurons with similar characteristic frequencies (CF) and BWs can have different responses to changes in correlation. Notably, binaural neurons often show compression, which is not found in the AN and which makes the shape of delay functions more invariant with CF at the level of the IC than at the AN. We conclude that binaural sensitivity is more dependent on correlation sensitivity than has hitherto been appreciated and that the mechanisms underlying correlation sensitivity should be addressed in future studies.
Detection and localization of sounds: Virtual tones and virtual reality
NASA Astrophysics Data System (ADS)
Zhang, Peter Xinya
Modern physiologically based binaural models employ internal delay lines in the pathways from left and right peripheries to central processing nuclei. Various models apply the delay lines differently, and give different predictions for the detection of dichotic pitches, wherein listeners hear a virtual tone in the noise background. Two dichotic pitch stimuli (Huggins pitch and binaural coherence edge pitch) with low boundary frequencies were used to test the predictions by two different models. The results from five experiments show that the relative dichotic pitch strengths support the equalization-cancellation model and disfavor the central activity pattern (CAP) model. The CAP model makes predictions for the lateralization of Huggins pitch based on interaural time differences (ITD). By measuring human lateralization for Huggins pitches with two different types of phase boundaries (linear-phase and stepped-phase), and by comparing with lateralization of sine tones, it was shown that the lateralization of Huggins pitch stimuli is similar to that of the corresponding sine tones, and the lateralizations of Huggins pitch stimuli with the two different boundaries were even more similar to one another. The results agreed roughly with the CAP model predictions. Agreement was significantly improved by incorporating individualized scale factors and offsets into the model, and was further improved with a model including compression at large ITDs. Furthermore, ambiguous stimuli, with an interaural phase difference of 180 degrees, were consistently lateralized on the left or right based on individual asymmetries, which introduces the concept of "earedness". Interaural phase difference (IPD) and interaural time difference (ITD) are two different forms of temporal cues. With varying frequency, an auditory system based on IPD or ITD gives different quantitative predictions for lateralization. A lateralization experiment with sine tones tested whether the human auditory system is an IPD-meter or an ITD-meter. Listeners estimated the lateral positions of 50 sine tones with IPDs ranging from -150° to +150° and with different frequencies, all in the range where signal fine structure supports lateralization. The estimates indicated that listeners lateralize sine tones on the basis of ITD and not IPD. In order to distinguish between sound sources in front and in back, listeners use spectral cues caused by the diffraction by pinna, head, neck and torso. To study this effect, the VRX technique was developed based on transaural technology. The technique was successful in presenting desired spectra into listeners' ears with high accuracy up to 16 kHz. When presented with real source and simulated virtual signal, listeners in an anechoic room could not distinguish between them. Eleven experiments on discrimination between front and back sources were carried out in an anechoic room. The results show several findings. First, the results support a multiple band comparison model, and disfavor a necessary band(s) model. Second, it was found that preserving the spectral dips was more important than preserving the spectral peaks for successful front/back discrimination. Moreover, it was confirmed that neither monaural cues nor interaural spectral level difference cues were adequate for front/back discrimination. Furthermore, listeners' performance did not deteriorate when presented with sharpened spectra.
Finally, when presented with an interaural delay of less than 200 μs, listeners could still discriminate front from back, although the image was pulled to the side, which suggests that the localizations in the azimuthal plane and in the sagittal plane are independent within certain limits.
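The IPD-versus-ITD question in the preceding abstract turns on a simple conversion: a fine-structure cue with interaural phase difference IPD (in degrees) at frequency f corresponds to ITD = IPD/(360·f). The snippet below is just a numerical restatement of that relationship (not the dissertation's analysis); it shows why a fixed IPD maps onto very different ITDs across frequency, so the two codes predict different lateralization patterns.

```python
# Convert a fixed interaural phase difference to the equivalent
# interaural time difference at several low frequencies.
ipd_deg = 90.0                                # fixed IPD in degrees
for f in (250.0, 500.0, 750.0, 1000.0):
    itd_us = 1e6 * (ipd_deg / 360.0) / f      # ITD in microseconds
    print(f"{f:6.0f} Hz: IPD {ipd_deg:.0f} deg -> ITD {itd_us:6.1f} us")
```

If lateral position tracks ITD, a constant-IPD tone should therefore appear progressively less lateralized as frequency increases, which is the contrast the experiment exploited.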
The sensitivity of hearing-impaired adults to acoustic attributes in simulated rooms
Whitmer, William M.; McShefferty, David; Akeroyd, Michael A.
2016-01-01
In previous studies we have shown that older hearing-impaired individuals are relatively insensitive to changes in the apparent width of broadband noises when those width changes were based on differences in interaural coherence [W. Whitmer, B. Seeber and M. Akeroyd, J. Acoust. Soc. Am. 132, 369-379 (2012)]. This insensitivity has been linked to senescent difficulties in resolving binaural fine-structure differences. It is therefore possible that interaural coherence, despite its widespread use, may not be the best acoustic surrogate of spatial perception for the aged and impaired. To test this, we simulated the room impulse responses for various acoustic scenarios with differing coherence and lateral (energy) fraction attributes using room modelling software (ODEON). Bilaterally impaired adult participants were asked to sketch the perceived size of speech tokens and musical excerpts that were convolved with these impulse responses and presented to them in a sound-dampened enclosure through a 24-loudspeaker array. Participants’ binaural acuity was also measured using an interaural phase discrimination task. Corroborating our previous findings, the results showed less sensitivity to interaural coherence in the auditory source width judgments of older hearing-impaired individuals, indicating that alternate acoustic measurements in the design of spaces for the elderly may be necessary. PMID:27213028
Binaural beats at high frequencies.
McFadden, D; Pasanen, E G
1975-10-24
Binaural beats have long been believed to be audible only at low frequencies, but an interaction reminiscent of a binaural beat can sometimes be heard when different two-tone complexes of high frequency are presented to the two ears. The primary requirement is that the frequency separation in the complex at one ear be slightly different from that in the other--that is, that there be a small interaural difference in the envelope periodicities. This finding is in accord with other recent demonstrations that the auditory system is not deaf to interaural time differences at high frequencies.
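The stimulus configuration described above is straightforward to synthesize: each ear receives a two-tone complex in the same high-frequency region, with slightly different frequency separations so the envelope periodicities differ interaurally. The sketch below is a hedged illustration with arbitrarily chosen frequencies and levels, not the parameters of the 1975 study.

```python
import numpy as np

fs = 48000
t = np.arange(int(0.5 * fs)) / fs             # 500-ms stimulus

def two_tone(f_low, separation):
    """Two-tone complex; its envelope beats at the separation frequency."""
    return np.sin(2 * np.pi * f_low * t) + np.sin(2 * np.pi * (f_low + separation) * t)

# Same 3000-Hz carrier region, but envelope rates of 40 Hz vs 41 Hz:
left = two_tone(3000.0, 40.0)
right = two_tone(3000.0, 41.0)
# The 1-Hz interaural difference in envelope periodicity produces a slow
# binaural interaction reminiscent of a beat, despite the high carriers.
stereo = np.stack([left, right], axis=1) * 0.1
```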
Franken, Tom P; Joris, Philip X; Smith, Philip H
2018-06-14
The brainstem's lateral superior olive (LSO) is thought to be crucial for localizing high-frequency sounds by coding interaural sound level differences (ILD). Its neurons weigh contralateral inhibition against ipsilateral excitation, making their firing rate a function of the azimuthal position of a sound source. Since the very first in vivo recordings, LSO principal neurons have been reported to give sustained and temporally integrating 'chopper' responses to sustained sounds. Neurons with transient responses were observed but largely ignored and even considered a sign of pathology. Using the Mongolian gerbil as a model system, we have obtained the first in vivo patch clamp recordings from labeled LSO neurons and find that principal LSO neurons, the most numerous projection neurons of this nucleus, only respond at sound onset and show fast membrane features suggesting an importance for timing. These results provide a new framework to interpret previously puzzling features of this circuit. © 2018, Franken et al.
Lee, Norman; Schrode, Katrina M.; Johns, Anastasia R.; Christensen-Dalsgaard, Jakob; Bee, Mark A.
2014-01-01
Anuran ears function as pressure difference receivers, and the amplitude and phase of tympanum vibrations are inherently directional, varying with sound incident angle. We quantified the nature of this directionality for Cope’s gray treefrog, Hyla chrysoscelis. We presented subjects with pure tones, advertisement calls, and frequency-modulated sweeps to examine the influence of frequency, signal level, lung inflation, and sex on ear directionality. Interaural differences in the amplitude of tympanum vibrations were 1–4 dB greater than sound pressure differences adjacent to the two tympana, while interaural differences in the phase of tympanum vibration were similar to or smaller than those in sound phase. Directionality in the amplitude and phase of tympanum vibration were highly dependent on sound frequency, and directionality in amplitude varied slightly with signal level. Directionality in the amplitude and phase of tone- and call-evoked responses did not differ between sexes. Lung inflation strongly affected tympanum directionality over a narrow frequency range that, in females, included call frequencies. This study provides a foundation for further work on the biomechanics and neural mechanisms of spatial hearing in H. chrysoscelis, and lends valuable perspective to behavioral studies on the use of spatial information by this species and other frogs. PMID:24504183
Caldwell, Michael S; Lee, Norman; Schrode, Katrina M; Johns, Anastasia R; Christensen-Dalsgaard, Jakob; Bee, Mark A
2014-04-01
Anuran ears function as pressure difference receivers, and the amplitude and phase of tympanum vibrations are inherently directional, varying with sound incident angle. We quantified the nature of this directionality for Cope's gray treefrog, Hyla chrysoscelis. We presented subjects with pure tones, advertisement calls, and frequency-modulated sweeps to examine the influence of frequency, signal level, lung inflation, and sex on ear directionality. Interaural differences in the amplitude of tympanum vibrations were 1-4 dB greater than sound pressure differences adjacent to the two tympana, while interaural differences in the phase of tympanum vibration were similar to or smaller than those in sound phase. Directionality in the amplitude and phase of tympanum vibration were highly dependent on sound frequency, and directionality in amplitude varied slightly with signal level. Directionality in the amplitude and phase of tone- and call-evoked responses did not differ between sexes. Lung inflation strongly affected tympanum directionality over a narrow frequency range that, in females, included call frequencies. This study provides a foundation for further work on the biomechanics and neural mechanisms of spatial hearing in H. chrysoscelis, and lends valuable perspective to behavioral studies on the use of spatial information by this species and other frogs.
Hu, Hongmei; Kollmeier, Birger; Dietz, Mathias
2016-01-01
Although bilateral cochlear implants (BiCIs) have succeeded in improving the spatial hearing performance of bilateral CI users, the overall performance is still not comparable with normal hearing listeners. Limited success can be partially caused by an interaural mismatch of the place-of-stimulation in each cochlea. Pairing matched interaural CI electrodes and stimulating them with the same frequency band is expected to facilitate binaural functions such as binaural fusion, localization, or spatial release from masking. It has been shown in animal experiments that the magnitude of the binaural interaction component (BIC) derived from the wave-eV decreases for increasing interaural place of stimulation mismatch. This motivated the investigation of the suitability of an electroencephalography-based objective electrode-frequency fitting procedure based on the BIC for BiCI users. A 61 channel monaural and binaural electrically evoked auditory brainstem response (eABR) recording was performed in 7 MED-EL BiCI subjects so far. These BiCI subjects were directly stimulated at 60% dynamic range with 19.9 pulses per second via a research platform provided by the University of Innsbruck (RIB II). The BIC was derived for several interaural electrode pairs by subtracting the response from binaural stimulation from their summed monaural responses. The BIC based pairing results are compared with two psychoacoustic pairing methods: interaural pulse time difference sensitivity and interaural pitch matching. The results for all three methods analyzed as a function of probe electrode allow for determining a matched pair in more than half of the subjects, with a typical accuracy of ± 1 electrode. This includes evidence for statistically significant tuning of the BIC as a function of probe electrode in human subjects. However, results across the three conditions were sometimes not consistent. These discrepancies will be discussed in the light of pitch plasticity versus less plastic brainstem processing.
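As stated above, the binaural interaction component is obtained by subtracting the binaurally evoked response from the sum of the two monaural responses. A minimal sketch of that arithmetic follows (array names and the Gaussian toy traces are illustrative; real eABR waveforms would be averaged over many sweeps and baseline-corrected before this step).

```python
import numpy as np

def binaural_interaction_component(left_only, right_only, binaural):
    """BIC(t) = [L(t) + R(t)] - B(t), computed sample by sample.

    A non-zero BIC indicates that the binaural response is not simply
    the sum of the two monaural responses, i.e. binaural interaction.
    """
    left_only, right_only, binaural = map(np.asarray, (left_only, right_only, binaural))
    return (left_only + right_only) - binaural

# Toy traces (microvolts) at 10-kHz EEG sampling: identical monaural
# responses and a binaural response 20% smaller than their sum.
t = np.arange(0, 0.010, 1e-4)
monaural = 0.5 * np.exp(-((t - 0.004) / 0.001) ** 2)
bic = binaural_interaction_component(monaural, monaural, 0.8 * 2 * monaural)
print(f"peak BIC amplitude: {bic.max():.3f} uV")
```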
The Relative Contribution of Interaural Time and Magnitude Cues to Dynamic Sound Localization
NASA Technical Reports Server (NTRS)
Wenzel, Elizabeth M.; Null, Cynthia H. (Technical Monitor)
1995-01-01
This paper presents preliminary data from a study examining the relative contribution of interaural time differences (ITDs) and interaural level differences (ILDs) to the localization of virtual sound sources both with and without head motion. The listeners' task was to estimate the apparent direction and distance of virtual sources (broadband noise) presented over headphones. Stimuli were synthesized from minimum phase representations of nonindividualized directional transfer functions; binaural magnitude spectra were derived from the minimum phase estimates and ITDs were represented as a pure delay. During dynamic conditions, listeners were encouraged to move their heads; the position of the listener's head was tracked and the stimuli were synthesized in real time using a Convolvotron to simulate a stationary external sound source. ILDs and ITDs were either correctly or incorrectly correlated with head motion: (1) both ILDs and ITDs correctly correlated, (2) ILDs correct, ITD fixed at 0 deg azimuth and 0 deg elevation, (3) ITDs correct, ILDs fixed at 0 deg, 0 deg. Similar conditions were run for static conditions except that none of the cues changed with head motion. The data indicated that, compared to static conditions, head movements helped listeners to resolve confusions primarily when ILDs were correctly correlated, although a smaller effect was also seen for correct ITDs. Together with the results for static conditions, the data suggest that localization tends to be dominated by the cue that is most reliable or consistent, when reliability is defined by consistency over time as well as across frequency bands.
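The synthesis described above factors each directional transfer function into a minimum-phase magnitude filter plus a frequency-independent interaural delay. One standard way to obtain a minimum-phase impulse response from a magnitude spectrum is the real-cepstrum (homomorphic) construction sketched below; the placeholder magnitude response and the 0.3-ms delay are illustrative assumptions, and this is not a reconstruction of the Convolvotron's actual processing.

```python
import numpy as np

def minimum_phase_ir(magnitude):
    """Minimum-phase impulse response for a given magnitude spectrum.

    `magnitude` holds |H(f)| on a full FFT grid (length N, Hermitian-
    symmetric layout, N even). Uses the folded real cepstrum.
    """
    n = len(magnitude)
    log_mag = np.log(np.maximum(magnitude, 1e-12))
    cep = np.real(np.fft.ifft(log_mag))
    fold = np.zeros(n)
    fold[0] = cep[0]
    fold[1:n // 2] = 2.0 * cep[1:n // 2]
    fold[n // 2] = cep[n // 2]                # Nyquist bin
    return np.real(np.fft.ifft(np.exp(np.fft.fft(fold))))

# Placeholder "HRTF" magnitude: gentle high-frequency roll-off, N = 256.
n, fs = 256, 48000
freqs = np.fft.fftfreq(n, 1.0 / fs)
mag = 1.0 / np.sqrt(1.0 + (np.abs(freqs) / 8000.0) ** 2)
h = minimum_phase_ir(mag)

# ITD applied afterwards as a pure delay (e.g. ~0.3 ms to the far ear).
itd_samples = int(round(0.0003 * fs))
h_far = np.concatenate([np.zeros(itd_samples), h])[:n]
```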
Kuwada, S; Batra, R; Stanford, T R
1989-02-01
1. We studied the effects of sodium pentobarbital on 22 neurons in the inferior colliculus (IC) of the rabbit. We recorded changes in the sensitivity of these neurons to monaural stimulation and to ongoing interaural time differences (ITDs). Monaural stimuli were tone bursts at or near the neuron's best frequency. The ITD was varied by delivering tones that differed by 1 Hz to the two ears, resulting in a 1-Hz binaural beat. 2. We assessed a neuron's ITD sensitivity by calculating three measures from the responses to binaural beats: composite delay, characteristic delay (CD), and characteristic phase (CP). To obtain the composite delay, we first derived period histograms by averaging, showing the response at each stimulating frequency over one period of the beat frequency. Second, the period histograms were replotted as a function of their equivalent interaural delay and then averaged together to yield the composite delay curve. Last, we calculated the composite peak or trough delay by fitting a parabola to the peak or trough of this composite curve. The composite delay curve represents the average response to all frequencies within the neuron's responsive range, and the peak reflects the interaural delay that produces the maximum response. The CD and CP were estimated from a weighted fit of a regression line to the plot of the mean interaural phase of the response versus the stimulating frequency. The slope and phase intercept of this regression line yielded estimates of CD and CP, respectively. These two quantities are thought to reflect the mechanism of ITD sensitivity, which involves the convergence of phase-locked inputs on a binaural cell. The CD estimates the difference in the time required for the two inputs to travel from either ear to this cell, whereas the CP reflects the interaural phase difference of the inputs at this cell. 3. Injections of sodium pentobarbital at subsurgical dosages (less than 25 mg/kg) almost invariably altered the neuron's response rate, response latency, response pattern, and spontaneous activity. Most of these changes were predictable and consistent with an enhancement of inhibitory influences. For example, if the earliest response was inhibitory, later excitation was usually reduced and latency increased. If the earliest response was excitatory, the level of this excitation was unaltered or slightly enhanced, and changes in latency were minimal. 4. The neuron's response pattern also changed in a predictable way. For example, a response with an inhibitory pause could either change to a response with a longer pause or to a response with an onset only.(ABSTRACT TRUNCATED AT 400 WORDS)
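The characteristic delay and characteristic phase described above come from a regression of mean interaural phase (in cycles) on stimulating frequency: the slope estimates CD (in seconds) and the intercept estimates CP. The toy fit below uses made-up phase values and omits the synchrony weighting applied in the actual analysis.

```python
import numpy as np

# Mean interaural phase (cycles) at several stimulating frequencies (Hz);
# values are fabricated for illustration only.
freqs = np.array([300.0, 400.0, 500.0, 600.0, 700.0])
true_cd, true_cp = 200e-6, 0.1                # 200-us delay, 0.1-cycle phase
phases = true_cd * freqs + true_cp + np.random.normal(0, 0.005, freqs.size)

slope, intercept = np.polyfit(freqs, phases, 1)   # unweighted linear fit
print(f"characteristic delay  CD ~ {slope * 1e6:.0f} us")
print(f"characteristic phase  CP ~ {intercept:.2f} cycles")
```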
Panniello, Mariangela; King, Andrew J; Dahmen, Johannes C; Walker, Kerry M M
2018-01-01
Despite decades of microelectrode recordings, fundamental questions remain about how auditory cortex represents sound-source location. Here, we used in vivo 2-photon calcium imaging to measure the sensitivity of layer II/III neurons in mouse primary auditory cortex (A1) to interaural level differences (ILDs), the principal spatial cue in this species. Although most ILD-sensitive neurons preferred ILDs favoring the contralateral ear, neurons with either midline or ipsilateral preferences were also present. An opponent-channel decoder accurately classified ILDs using the difference in responses between populations of neurons that preferred contralateral-ear-greater and ipsilateral-ear-greater stimuli. We also examined the spatial organization of binaural tuning properties across the imaged neurons with unprecedented resolution. Neurons driven exclusively by contralateral ear stimuli or by binaural stimulation occasionally formed local clusters, but their binaural categories and ILD preferences were not spatially organized on a more global scale. In contrast, the sound frequency preferences of most neurons within local cortical regions fell within a restricted frequency range, and a tonotopic gradient was observed across the cortical surface of individual mice. These results indicate that the representation of ILDs in mouse A1 is comparable to that of most other mammalian species, and appears to lack systematic or consistent spatial order. PMID:29136122
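The opponent-channel decoder mentioned above classifies ILD from the difference between the pooled responses of contralateral-preferring and ipsilateral-preferring populations. The sketch below is a schematic of that idea with entirely invented sigmoidal tuning and noise parameters, not the decoder fitted to the imaging data.

```python
import numpy as np

ilds = np.arange(-20, 21, 5)                  # dB, positive favours the contra ear

def population_rate(ild, pref_sign):
    """Mean rate (spikes/s) of a sigmoidally tuned model population.

    pref_sign=+1 prefers contralateral-greater ILDs, -1 ipsilateral-greater.
    """
    return 10.0 / (1.0 + np.exp(-pref_sign * ild / 5.0))

def opponent_signal(ild):
    # Difference between the two opponent channels (contra minus ipsi).
    return population_rate(ild, +1) - population_rate(ild, -1)

# Decode a probe ILD by matching its (noisy) opponent signal to templates.
templates = np.array([opponent_signal(i) for i in ilds])
probe = opponent_signal(10.0) + np.random.normal(0.0, 0.5)   # response noise
decoded = ilds[np.argmin(np.abs(templates - probe))]
print(f"decoded ILD: {decoded} dB")
```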
Epp, Bastian; Yasin, Ifat; Verhey, Jesko L
2013-12-01
The audibility of important sounds is often hampered due to the presence of other masking sounds. The present study investigates if a correlate of the audibility of a tone masked by noise is found in late auditory evoked potentials measured from human listeners. The audibility of the target sound at a fixed physical intensity is varied by introducing auditory cues of (i) interaural target signal phase disparity and (ii) coherent masker level fluctuations in different frequency regions. In agreement with previous studies, psychoacoustical experiments showed that both stimulus manipulations result in a masking release (i: binaural masking level difference; ii: comodulation masking release) compared to a condition where those cues are not present. Late auditory evoked potentials (N1, P2) were recorded for the stimuli at a constant masker level, but different signal levels within the same set of listeners who participated in the psychoacoustical experiment. The data indicate differences in N1 and P2 between stimuli with and without interaural phase disparities. However, differences for stimuli with and without coherent masker modulation were only found for P2, i.e., only P2 is sensitive to the increase in audibility, irrespective of the cue that caused the masking release. The amplitude of P2 is consistent with the psychoacoustical finding of an addition of the masking releases when both cues are present. Even though it cannot be concluded where along the auditory pathway the audibility is represented, the P2 component of auditory evoked potentials is a candidate for an objective measure of audibility in the human auditory system. Copyright © 2013 Elsevier B.V. All rights reserved.
Time-Varying Distortions of Binaural Information by Bilateral Hearing Aids
Brown, Andrew D.; Rodriguez, Francisco A.; Portnuff, Cory D. F.; Goupell, Matthew J.; Tollin, Daniel J.
2016-01-01
In patients with bilateral hearing loss, the use of two hearing aids (HAs) offers the potential to restore the benefits of binaural hearing, including sound source localization and segregation. However, existing evidence suggests that bilateral HA users’ access to binaural information, namely interaural time and level differences (ITDs and ILDs), can be compromised by device processing. Our objective was to characterize the nature and magnitude of binaural distortions caused by modern digital behind-the-ear HAs using a variety of stimuli and HA program settings. Of particular interest was a common frequency-lowering algorithm known as nonlinear frequency compression, which has not previously been assessed for its effects on binaural information. A binaural beamforming algorithm was also assessed. Wide dynamic range compression was enabled in all programs. HAs were placed on a binaural manikin, and stimuli were presented from an arc of loudspeakers inside an anechoic chamber. Stimuli were broadband noise bursts, 10-Hz sinusoidally amplitude-modulated noise bursts, or consonant–vowel–consonant speech tokens. Binaural information was analyzed in terms of ITDs, ILDs, and interaural coherence, both for whole stimuli and in a time-varying sense (i.e., within a running temporal window) across four different frequency bands (1, 2, 4, and 6 kHz). Key findings were: (a) Nonlinear frequency compression caused distortions of high-frequency envelope ITDs and significantly reduced interaural coherence. (b) For modulated stimuli, all programs caused time-varying distortion of ILDs. (c) HAs altered the relationship between ITDs and ILDs, introducing large ITD–ILD conflicts in some cases. Potential perceptual consequences of measured distortions are discussed. PMID:27698258
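The running-window analysis described above can be approximated with standard filtering and short-time level estimates. The sketch below computes a time-varying ILD trace in a single band around 2 kHz; the band edges, window length, and hop size are placeholders rather than the study's settings, and the ITD and coherence analyses are omitted.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def time_varying_ild(left, right, fs, f_center=2000.0, win_ms=20.0):
    """ILD (dB, left re right) in a running window within one band."""
    sos = butter(4, [f_center / np.sqrt(2), f_center * np.sqrt(2)],
                 btype="bandpass", fs=fs, output="sos")
    bl, br = sosfiltfilt(sos, left), sosfiltfilt(sos, right)
    win, hop = int(fs * win_ms / 1000.0), int(fs * win_ms / 2000.0)
    ilds = []
    for start in range(0, len(bl) - win, hop):
        rms_l = np.sqrt(np.mean(bl[start:start + win] ** 2)) + 1e-12
        rms_r = np.sqrt(np.mean(br[start:start + win] ** 2)) + 1e-12
        ilds.append(20.0 * np.log10(rms_l / rms_r))
    return np.array(ilds)

# Example: modulated noise carrying a static 6-dB level difference.
fs = 48000
t = np.arange(fs) / fs
noise = np.random.randn(fs) * (1.0 + 0.8 * np.sin(2 * np.pi * 10 * t))
print(time_varying_ild(2.0 * noise, noise, fs)[:5])   # ~ +6 dB throughout
```

An unprocessed static ILD stays flat over time in this analysis; the study's point was that hearing-aid processing can make such traces fluctuate.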
Detection of Interaural Time Differences in the Alligator
Carr, Catherine E.; Soares, Daphne; Smolders, Jean; Simon, Jonathan Z.
2011-01-01
The auditory systems of birds and mammals use timing information from each ear to detect interaural time difference (ITD). To determine whether the Jeffress-type algorithms that underlie sensitivity to ITD in birds are an evolutionarily stable strategy, we recorded from the auditory nuclei of crocodilians, who are the sister group to the birds. In alligators, precisely timed spikes in the first-order nucleus magnocellularis (NM) encode the timing of sounds, and NM neurons project to neurons in the nucleus laminaris (NL) that detect interaural time differences. In vivo recordings from NL neurons show that the arrival time of phase-locked spikes differs between the ipsilateral and contralateral inputs. When this disparity is nullified by their best ITD, the neurons respond maximally. Thus NL neurons act as coincidence detectors. A biologically detailed model of NL with alligator parameters discriminated ITDs up to 1 kHz. The range of best ITDs represented in NL was much larger than in birds, however, and extended from 0 to 1000 μs contralateral, with a median ITD of 450 μs. Thus, crocodilians and birds employ similar algorithms for ITD detection, although crocodilians have larger heads. PMID:19553438
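The coincidence-detection scheme described above can be caricatured with an array of model detectors, each delaying the ipsilateral input by a different internal delay and counting near-simultaneous spikes from the two sides. The toy sketch below (assuming NumPy; rates, window width, and delay grid are invented) is an abstraction of the Jeffress-style computation rather than the biologically detailed NL model used in the study; at 500 Hz the resulting delay tuning is broad, so the peak gives only a coarse ITD estimate.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 100_000                         # 10-us time resolution
freq, dur = 500.0, 2.0               # 500-Hz tone, 2-s spike trains
itd_true = 450e-6                    # stimulus ITD of 450 us

def phase_locked_spikes(itd):
    """Binary spike train whose firing probability follows the tone phase."""
    t = np.arange(int(fs * dur)) / fs
    rate = 500.0 * np.maximum(np.sin(2 * np.pi * freq * (t - itd)), 0.0)
    return rng.random(t.size) < rate / fs

ipsi = phase_locked_spikes(0.0)
contra = phase_locked_spikes(itd_true)

# Mark +/-100 us around every contralateral spike as a coincidence window.
window = np.convolve(contra.astype(float), np.ones(21), mode="same") > 0

# Each model detector delays the ipsilateral input by its internal delay
# and counts ipsilateral spikes landing inside the contralateral windows.
internal_delays = np.arange(0, 1001, 50) * 1e-6
counts = [np.sum(np.roll(ipsi, int(round(d * fs))) & window)
          for d in internal_delays]
best = internal_delays[int(np.argmax(counts))]
print(f"coincidences peak near {best * 1e6:.0f} us (stimulus ITD {itd_true * 1e6:.0f} us)")
```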
Mokri, Yasamin; Worland, Kate; Ford, Mark; Rajan, Ramesh
2015-01-01
Humans can accurately localize sounds even in unfavourable signal-to-noise conditions. To investigate the neural mechanisms underlying this, we studied the effect of background wide-band noise on neural sensitivity to variations in interaural level difference (ILD), the predominant cue for sound localization in azimuth for high-frequency sounds, at the characteristic frequency of cells in rat inferior colliculus (IC). Binaural noise at high levels generally resulted in suppression of responses (55.8%), but at lower levels resulted in enhancement (34.8%) as well as suppression (30.3%). When recording conditions permitted, we then examined if any binaural noise effects were related to selective noise effects at each of the two ears, which we interpreted in light of well-known differences in input type (excitation and inhibition) from each ear shaping particular forms of ILD sensitivity in the IC. At high signal-to-noise ratios (SNR), in most ILD functions (41%), the effect of background noise appeared to be due to effects on inputs from both ears, while for a large percentage (35.8%) appeared to be accounted for by effects on excitatory input. However, as SNR decreased, change in excitation became the dominant contributor to the change due to binaural background noise (63.6%). These novel findings shed light on the IC neural mechanisms for sound localization in the presence of continuous background noise. They also suggest that some effects of background noise on encoding of sound location reported to be emergent in upstream auditory areas can also be observed at the level of the midbrain. PMID:25865218
Active localization of virtual sounds
NASA Technical Reports Server (NTRS)
Loomis, Jack M.; Hebert, C.; Cicinelli, J. G.
1991-01-01
We describe a virtual sound display built around a 12 MHz 80286 microcomputer and special purpose analog hardware. The display implements most of the primary cues for sound localization in the ear-level plane. Static information about direction is conveyed by interaural time differences and, for frequencies above 1800 Hz, by head sound shadow (interaural intensity differences) and pinna sound shadow. Static information about distance is conveyed by variation in sound pressure (first power law) for all frequencies, by additional attenuation in the higher frequencies (simulating atmospheric absorption), and by the proportion of direct to reverberant sound. When the user actively locomotes, the changing angular position of the source occasioned by head rotations provides further information about direction and the changing angular velocity produced by head translations (motion parallax) provides further information about distance. Judging both from informal observations by users and from objective data obtained in an experiment on homing to virtual and real sounds, we conclude that simple displays such as this are effective in creating the perception of external sounds to which subjects can home with accuracy and ease.
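Two of the static distance cues listed above are simple per-source operations: an inverse-distance gain and extra attenuation of high frequencies with distance. The sketch below applies both to a dry signal; the cut-off schedule is an arbitrary illustrative stand-in for atmospheric absorption, and the direct-to-reverberant cue is omitted.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def apply_distance_cues(dry, fs, distance_m, ref_distance_m=1.0):
    """Inverse-distance gain plus mild high-frequency loss with distance."""
    gain = ref_distance_m / max(distance_m, ref_distance_m)       # 1/r law
    # Crude stand-in for atmospheric absorption: lower the low-pass cut-off
    # as the source moves away (values chosen for illustration only).
    cutoff_hz = max(2000.0, 16000.0 / max(distance_m, 1.0))
    sos = butter(2, cutoff_hz, btype="lowpass", fs=fs, output="sos")
    return gain * sosfilt(sos, dry)

fs = 44100
dry = np.random.randn(fs)                    # 1 s of broadband source signal
near = apply_distance_cues(dry, fs, distance_m=1.0)
far = apply_distance_cues(dry, fs, distance_m=8.0)   # quieter and duller
```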
Bernstein, Leslie R; Trahiotis, Constantine
2017-02-01
Interaural cross-correlation-based models of binaural processing have accounted successfully for a wide variety of binaural phenomena, including binaural detection, binaural discrimination, and measures of extents of laterality based on interaural temporal disparities, interaural intensitive disparities, and their combination. This report focuses on quantitative accounts of data obtained from binaural detection experiments published over five decades. Particular emphasis is placed on stimulus contexts for which commonly used correlation-based approaches fail to provide adequate explanations of the data. One such context concerns binaural detection of signals masked by certain noises that are narrow-band and/or interaurally partially correlated. It is shown that a cross-correlation-based model that includes stages of peripheral auditory processing can, when coupled with an appropriate decision variable, account well for a wide variety of classic and recently published binaural detection data including those that have, heretofore, proven to be problematic.
The acoustical bright spot and mislocalization of tones by human listeners
Macaulay, Eric J.; Hartmann, William M.; Rakerd, Brad
2010-01-01
Listeners attempted to localize 1500-Hz sine tones presented in free field from a loudspeaker array, spanning azimuths from 0° (straight ahead) to 90° (extreme right). During this task, the tone levels and phases were measured in the listeners’ ear canals. Because of the acoustical bright spot, measured interaural level differences (ILD) were non-monotonic functions of azimuth with a maximum near 55°. In a source-identification task, listeners’ localization decisions closely tracked the non-monotonic ILD, and thus became inaccurate at large azimuths. When listeners received training and feedback, their accuracy improved only slightly. In an azimuth-discrimination task, listeners decided whether a first sound was to the left or to the right of a second. The discrimination results also reflected the confusion caused by the non-monotonic ILD, and they could be predicted approximately by a listener’s identification results. When the sine tones were amplitude modulated or replaced by narrow bands of noise, interaural time difference (ITD) cues greatly reduced the confusion for most listeners, but not for all. Recognizing the important role of the bright spot requires a reevaluation of the transition between the low-frequency region for localization (mainly ITD) and the high-frequency region (mainly ILD). PMID:20329844
Models of the electrically stimulated binaural system: A review.
Dietz, Mathias
2016-01-01
In an increasing number of countries, the standard treatment for deaf individuals is moving toward the implantation of two cochlear implants. Today's device technology and fitting procedure, however, appears as if the two implants would serve two independent ears and brains. Many experimental studies have demonstrated that after careful matching and balancing of left and right stimulation in controlled laboratory studies most patients have almost normal sensitivity to interaural level differences and some sensitivity to interaural time differences (ITDs). Mechanisms underlying the limited ITD sensitivity are still poorly understood and many different aspects may contribute. Recent pioneering computational approaches identified some of the functional implications the electric input imposes on the neural brainstem circuits. Simultaneously these studies have raised new questions and certainly demonstrated that further refinement of the model stages is necessary. They join the experimental study's conclusions that binaural device technology, binaural fitting, specific speech coding strategies, and binaural signal processing algorithms are obviously missing components to maximize the benefit of bilateral implantation. Within this review, the existing models of the electrically stimulated binaural system are explained, compared, and discussed from a viewpoint of a "CI device with auditory system" and from that of neurophysiological research.
Akeroyd, Michael A; Chambers, John; Bullock, David; Palmer, Alan R; Summerfield, A Quentin; Nelson, Philip A; Gatehouse, Stuart
2007-02-01
Cross-talk cancellation is a method for synthesizing virtual auditory space using loudspeakers. One implementation is the "Optimal Source Distribution" technique [T. Takeuchi and P. Nelson, J. Acoust. Soc. Am. 112, 2786-2797 (2002)], in which the audio bandwidth is split across three pairs of loudspeakers, placed at azimuths of +/-90 degrees, +/-15 degrees, and +/-3 degrees, conveying low, mid, and high frequencies, respectively. A computational simulation of this system was developed and verified against measurements made on an acoustic system using a manikin. Both the acoustic system and the simulation gave a wideband average cancellation of almost 25 dB. The simulation showed that when there was a mismatch between the head-related transfer functions used to set up the system and those of the final listener, the cancellation was reduced to an average of 13 dB. Moreover, in this case the binaural interaural time differences and interaural level differences delivered by the simulation of the optimal source distribution (OSD) system often differed from the target values. It is concluded that only when the OSD system is set up with "matched" head-related transfer functions can it deliver accurate binaural cues.
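At its core, cross-talk cancellation inverts, at each frequency, the 2×2 matrix of transfer functions from the loudspeakers to the two ears. The fragment below shows that per-bin regularized inversion for a single loudspeaker pair with placeholder plant responses; it does not reproduce the three-way band split of the optimal source distribution system or use measured head-related transfer functions.

```python
import numpy as np

def crosstalk_cancellation_filters(plant, beta=1e-3):
    """Regularized inverse of the loudspeaker-to-ear plant per frequency bin.

    plant: complex array of shape (n_bins, 2, 2); plant[k, i, j] is the
    transfer function from loudspeaker j to ear i at bin k. Returns C with
    plant[k] @ C[k] ~ identity, so each ear receives only its own signal.
    """
    eye = np.eye(2)
    filters = np.empty_like(plant)
    for k in range(plant.shape[0]):
        h = plant[k]
        # Tikhonov-regularized inverse: (H^H H + beta I)^-1 H^H
        filters[k] = np.linalg.solve(h.conj().T @ h + beta * eye, h.conj().T)
    return filters

# Placeholder plant: unit direct paths, weaker delayed cross paths.
n_bins = 257
freqs = np.linspace(0.0, 0.5, n_bins)                  # normalized frequency
cross = 0.5 * np.exp(-2j * np.pi * freqs * 8.0)        # 8-sample extra delay
plant = np.stack([np.stack([np.ones(n_bins), cross], -1),
                  np.stack([cross, np.ones(n_bins)], -1)], -2)
c = crosstalk_cancellation_filters(plant)
print(np.abs(plant[10] @ c[10]))                       # ~ identity matrix
```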
Smith, Rosanna C G; Price, Stephen R
2014-01-01
Sound source localization is critical to animal survival and for identification of auditory objects. We investigated the acuity with which humans localize low frequency, pure tone sounds using timing differences between the ears. These small differences in time, known as interaural time differences or ITDs, are identified in a manner that allows localization acuity of around 1° at the midline. Acuity, a relative measure of localization ability, displays a non-linear variation as sound sources are positioned more laterally. All species studied localize sounds best at the midline and progressively worse as the sound is located out towards the side. To understand why sound localization displays this variation with azimuthal angle, we took a first-principles, systemic, analytical approach to model localization acuity. We calculated how ITDs vary with sound frequency, head size and sound source location for humans. This allowed us to model ITD variation for previously published experimental acuity data and determine the distribution of just-noticeable differences in ITD. Our results suggest that the best-fit model is one whereby just-noticeable differences in ITDs are identified with uniform or close to uniform sensitivity across the physiological range. We discuss how our results have several implications for neural ITD processing in different species as well as development of the auditory system.
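The geometric relationship at the heart of that analysis can be illustrated with a frequency-independent spherical-head approximation, ITD(θ) ≈ (a/c)(θ + sin θ), which is a simplification and not necessarily the frequency-dependent formulation used in the paper. Because the slope of this function flattens toward the side, a fixed just-noticeable change in ITD corresponds to a larger angular change at lateral azimuths than at the midline.

```python
import numpy as np

a, c = 0.0875, 343.0                 # nominal head radius (m), speed of sound (m/s)

def itd_spherical(azimuth_deg):
    """Spherical-head ITD approximation (frequency independent)."""
    theta = np.radians(azimuth_deg)
    return (a / c) * (theta + np.sin(theta))

# Angular change needed to alter the ITD by a fixed 10-us just-noticeable
# difference, evaluated at several azimuths: acuity worsens laterally.
jnd_itd = 10e-6
for az in (0.0, 30.0, 60.0, 80.0):
    slope = itd_spherical(az + 0.5) - itd_spherical(az - 0.5)   # s per degree
    print(f"azimuth {az:4.0f} deg: ITD {itd_spherical(az) * 1e6:6.1f} us, "
          f"angular JND ~ {jnd_itd / slope:4.1f} deg")
```

With a uniform 10-μs ITD criterion, the implied angular resolution in this approximation roughly doubles between the midline and 80° azimuth.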
Jones, Heath G; Kan, Alan; Litovsky, Ruth Y
2016-01-01
This study examined the effect of microphone placement on the interaural level differences (ILDs) available to bilateral cochlear implant (BiCI) users, and the subsequent effects on horizontal-plane sound localization. Virtual acoustic stimuli for sound localization testing were created individually for eight BiCI users by making acoustic transfer function measurements for microphones placed in the ear (ITE), behind the ear (BTE), and on the shoulders (SHD). The ILDs across source locations were calculated for each placement to analyze their effect on sound localization performance. Sound localization was tested using a repeated-measures, within-participant design for the three microphone placements. The ITE microphone placement provided significantly larger ILDs compared to BTE and SHD placements, which correlated with overall localization errors. However, differences in localization errors across the microphone conditions were small. The BTE microphones worn by many BiCI users in everyday life do not capture the full range of acoustic ILDs available, and also reduce the change in cue magnitudes for sound sources across the horizontal plane. Acute testing with an ITE placement reduced sound localization errors along the horizontal plane compared to the other placements in some patients. Larger improvements may be observed if patients had more experience with the new ILD cues provided by an ITE placement.
Binaural sensitivity changes between cortical on and off responses
Dahmen, Johannes C.; King, Andrew J.; Schnupp, Jan W. H.
2011-01-01
Neurons exhibiting on and off responses with different frequency tuning have previously been described in the primary auditory cortex (A1) of anesthetized and awake animals, but it is unknown whether other tuning properties, including sensitivity to binaural localization cues, also differ between on and off responses. We measured the sensitivity of A1 neurons in anesthetized ferrets to 1) interaural level differences (ILDs), using unmodulated broadband noise with varying ILDs and average binaural levels, and 2) interaural time delays (ITDs), using sinusoidally amplitude-modulated broadband noise with varying envelope ITDs. We also assessed fine-structure ITD sensitivity and frequency tuning, using pure-tone stimuli. Neurons most commonly responded to stimulus onset only, but purely off responses and on-off responses were also recorded. Of the units exhibiting significant binaural sensitivity nearly one-quarter showed binaural sensitivity in both on and off responses, but in almost all (∼97%) of these units the binaural tuning of the on responses differed significantly from that seen in the off responses. Moreover, averaged, normalized ILD and ITD tuning curves calculated from all units showing significant sensitivity to binaural cues indicated that on and off responses displayed different sensitivity patterns across the population. A principal component analysis of ITD response functions suggested a continuous cortical distribution of binaural sensitivity, rather than discrete response classes. Rather than reflecting a release from inhibition without any functional significance, we propose that binaural off responses may be important to cortical encoding of sound-source location. PMID:21562191
Tollin, Daniel J.; Yin, Tom C. T.
2006-01-01
The lateral superior olive (LSO) is believed to encode differences in sound level at the two ears, a cue for azimuthal sound location. Most high-frequency-sensitive LSO neurons are binaural, receiving inputs from both ears. An inhibitory input from the contralateral ear, via the medial nucleus of the trapezoid body (MNTB), and excitatory input from the ipsilateral ear enable level differences to be encoded. However, the classical descriptions of low-frequency-sensitive neurons report primarily monaural cells with no contralateral inhibition. Anatomical and physiological evidence, however, shows that low-frequency LSO neurons receive low-frequency inhibitory input from ipsilateral MNTB, which in turn receives excitatory input from the contralateral cochlear nucleus and low-frequency excitatory input from the ipsilateral cochlear nucleus. Therefore, these neurons would be expected to be binaural with contralateral inhibition. Here, we re-examined binaural interaction in low-frequency (less than ~3 kHz) LSO neurons and phase locking in the MNTB. Phase locking to low-frequency tones in MNTB and ipsilaterally driven LSO neurons with frequency sensitivities < 1.2 kHz was enhanced relative to the auditory nerve. Moreover, most low-frequency LSO neurons exhibited contralateral inhibition: ipsilaterally driven responses were suppressed by raising the level of the contralateral stimulus; most neurons were sensitive to interaural time delays in pure tone and noise stimuli such that inhibition was nearly maximal when the stimuli were presented to the ears in-phase. The data demonstrate that low-frequency LSO neurons of cat are not monaural and can exhibit contralateral inhibition like their high-frequency counterparts. PMID:16291937
NASA Astrophysics Data System (ADS)
Kim, Sungyoung; Martens, William L.
2005-04-01
By industry standard (ITU-R Recommendation BS.775-1), multichannel stereophonic signals within the frequency range of up to 80 or 120 Hz may be mixed and delivered via a single driver (e.g., a subwoofer) without significant impairment of stereophonic sound quality. The assumption that stereophonic information within such low-frequency content is not significant was tested by measuring discrimination thresholds for changes in interaural cross-correlation (IACC) within spectral bands containing the lowest frequency components of low-pitch musical tones. Performances were recorded for three different musical instruments playing single notes ranging in fundamental frequency from 41 Hz to 110 Hz. The recordings, made using a multichannel microphone array composed of five DPA 4006 pressure microphones, were processed to produce a set of stimuli that varied in interaural cross-correlation (IACC) within a low-frequency band, but were otherwise identical in a higher-frequency band. This correlation processing was designed to have minimal effect upon other psychoacoustic variables such as loudness and timbre. The results show that changes in interaural cross-correlation (IACC) within low-frequency bands of low-pitch musical tones are most easily discriminated when decorrelated signals are presented via subwoofers positioned at extreme lateral angles (far from the median plane). [Work supported by VRQ.]
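For readers unfamiliar with the metric, the following sketch computes a band-limited IACC as the peak of the normalized interaural cross-correlation within a +/-1 ms lag window; the band edges, lag range, and the way decorrelation is introduced are illustrative assumptions rather than the stimulus processing used in the experiment.

```python
# Sketch: interaural cross-correlation (IACC) within a low-frequency band, taken as the
# maximum of the normalized cross-correlation over a +/-1 ms lag range.
import numpy as np
from scipy.signal import butter, sosfiltfilt, correlate

def band_iacc(left, right, fs, band=(40.0, 120.0), max_lag_s=1e-3):
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    l, r = sosfiltfilt(sos, left), sosfiltfilt(sos, right)
    cc = correlate(l, r, mode="full")
    cc /= np.sqrt(np.sum(l ** 2) * np.sum(r ** 2))
    lags = np.arange(-len(l) + 1, len(l))
    keep = np.abs(lags) <= int(round(max_lag_s * fs))
    return np.max(cc[keep])

fs = 8000
rng = np.random.default_rng(1)
n = fs // 2
common = rng.standard_normal(n)
independent = rng.standard_normal(n)
for mix in (1.0, 0.5, 0.0):   # 1.0 -> fully correlated ears, 0.0 -> fully decorrelated
    right = mix * common + np.sqrt(1 - mix ** 2) * independent
    print(f"mix {mix:.1f}: low-band IACC = {band_iacc(common, right, fs):.2f}")
```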
Brown, Andrew D; Rodriguez, Francisco A; Portnuff, Cory D F; Goupell, Matthew J; Tollin, Daniel J
2016-10-03
In patients with bilateral hearing loss, the use of two hearing aids (HAs) offers the potential to restore the benefits of binaural hearing, including sound source localization and segregation. However, existing evidence suggests that bilateral HA users' access to binaural information, namely interaural time and level differences (ITDs and ILDs), can be compromised by device processing. Our objective was to characterize the nature and magnitude of binaural distortions caused by modern digital behind-the-ear HAs using a variety of stimuli and HA program settings. Of particular interest was a common frequency-lowering algorithm known as nonlinear frequency compression, which has not previously been assessed for its effects on binaural information. A binaural beamforming algorithm was also assessed. Wide dynamic range compression was enabled in all programs. HAs were placed on a binaural manikin, and stimuli were presented from an arc of loudspeakers inside an anechoic chamber. Stimuli were broadband noise bursts, 10-Hz sinusoidally amplitude-modulated noise bursts, or consonant-vowel-consonant speech tokens. Binaural information was analyzed in terms of ITDs, ILDs, and interaural coherence, both for whole stimuli and in a time-varying sense (i.e., within a running temporal window) across four different frequency bands (1, 2, 4, and 6 kHz). Key findings were: (a) Nonlinear frequency compression caused distortions of high-frequency envelope ITDs and significantly reduced interaural coherence. (b) For modulated stimuli, all programs caused time-varying distortion of ILDs. (c) HAs altered the relationship between ITDs and ILDs, introducing large ITD-ILD conflicts in some cases. Potential perceptual consequences of measured distortions are discussed. © The Author(s) 2016.
Bernstein, Leslie R; Trahiotis, Constantine
2016-11-01
This study assessed whether audiometrically-defined "slight" or "hidden" hearing losses might be associated with degradations in binaural processing as measured in binaural detection experiments employing interaurally delayed signals and maskers. Thirty-one listeners participated, all having no greater than slight hearing losses (i.e., no thresholds greater than 25 dB HL). Across the 31 listeners and consistent with the findings of Bernstein and Trahiotis [(2015). J. Acoust. Soc. Am. 138, EL474-EL479] binaural detection thresholds at 500 Hz and 4 kHz increased with increasing magnitude of interaural delay, suggesting a loss of precision of coding with magnitude of interaural delay. Binaural detection thresholds were consistently found to be elevated for listeners whose absolute thresholds at 4 kHz exceeded 7.5 dB HL. No such elevations were observed in conditions having no binaural cues available to aid detection (i.e., "monaural" conditions). Partitioning and analyses of the data revealed that those elevated thresholds (1) were more attributable to hearing level than to age and (2) result from increased levels of internal noise. The data suggest that listeners whose high-frequency monaural hearing status would be classified audiometrically as being normal or "slight loss" may exhibit substantial and perceptually meaningful losses of binaural processing.
Performance on Tests of Central Auditory Processing by Individuals Exposed to High-Intensity Blasts
2012-07-01
…percent (gap detected on at least four of the six presentations), with all longer durations receiving a score greater than 50 percent. … Binaural Processing and Sound Localization: Temporal precision of neural firing is also involved in binaural processing and localization of sound in space. … The Masking Level Difference (MLD) test evaluates the integrity of the earliest sites of binaural comparison and sensitivity to interaural phase in the …
Effect of occlusion, directionality and age on horizontal localization
NASA Astrophysics Data System (ADS)
Alworth, Lynzee Nicole
Localization acuity of a given listener is dependent upon the ability to discriminate between interaural time and level disparities. Interaural time differences are encoded by low frequency information whereas interaural level differences are encoded by high frequency information. Much research has examined effects of hearing aid microphone technologies and occlusion separately, and prior studies have not evaluated age as a factor in localization acuity. Open-fit hearing instruments provide new earmold technologies and varying microphone capabilities; however, these instruments have yet to be evaluated with regard to horizontal localization acuity. Thus, the purpose of this study was to examine the effects of microphone configuration, type of dome in open-fit hearing instruments, and age on the horizontal localization ability of a given listener. Thirty adults participated in this study and were grouped based upon hearing sensitivity and age (young normal hearing, >50 years normal hearing, >50 hearing impaired). Each normal hearing participant completed one localization experiment (unaided/unamplified) where they listened to the stimulus "Baseball" and selected the point of origin. Hearing impaired listeners were fit with the same two receiver-in-the-ear hearing aids and same dome types, thus controlling for microphone technologies, type of dome, and fitting between trials. Hearing impaired listeners completed a total of 7 localization experiments (unaided/unamplified; open dome: omnidirectional, adaptive directional, fixed directional; micromold: omnidirectional, adaptive directional, fixed directional). Overall, results of this study indicate that age significantly affects horizontal localization ability, as younger adult listeners with normal hearing made significantly fewer localization errors than older adult listeners with normal hearing. Also, results revealed a significant difference in performance between dome types; however, upon further examination this difference was not significant. Therefore, results examining type of dome should be viewed with caution. Results examining microphone configuration and microphone configuration by dome type were not significant. Moreover, results evaluating performance relative to unaided (unamplified) listening were not significant. Taken together, these results suggest that open-fit hearing instruments, regardless of microphone or dome type, do not degrade horizontal localization acuity within a given listener relative to their 'older aged' normal hearing counterparts in quiet environments.
Computation of interaural time difference in the owl's coincidence detector neurons.
Funabiki, Kazuo; Ashida, Go; Konishi, Masakazu
2011-10-26
Both the mammalian and avian auditory systems localize sound sources by computing the interaural time difference (ITD) with submillisecond accuracy. The neural circuits for this computation in birds consist of axonal delay lines and coincidence detector neurons. Here, we report the first in vivo intracellular recordings from coincidence detectors in the nucleus laminaris of barn owls. Binaural tonal stimuli induced sustained depolarizations (DC) and oscillating potentials whose waveforms reflected the stimulus. The amplitude of this sound analog potential (SAP) varied with ITD, whereas DC potentials did not. The amplitude of the SAP was correlated with firing rate in a linear fashion. Spike shape, synaptic noise, the amplitude of SAP, and responsiveness to current pulses differed between cells at different frequencies, suggesting an optimization strategy for sensing sound signals in neurons tuned to different frequencies.
Goupell, Matthew J
2015-03-01
Bilateral cochlear implant (CI) listeners can perform binaural tasks, but they are typically worse than normal-hearing (NH) listeners. To understand why this difference occurs and the mechanisms involved in processing dynamic binaural differences, interaural envelope correlation change discrimination sensitivity was measured in real and simulated CI users. In experiment 1, 11 CI (eight late deafened, three early deafened) and eight NH listeners were tested in an envelope correlation change discrimination task. Just noticeable differences (JNDs) were best for a matched place-of-stimulation and increased for an increasing mismatch. In experiment 2, attempts at intracranially centering stimuli did not produce lower JNDs. In experiment 3, the percentage of correct identifications of antiphasic carrier pulse trains modulated by correlated envelopes was measured as a function of mismatch and pulse rate. Sensitivity decreased for increasing mismatch and increasing pulse rate. The experiments led to two conclusions. First, envelope correlation change discrimination necessitates place-of-stimulation matched inputs. However, it is unclear if previous experience with acoustic hearing is necessary for envelope correlation change discrimination. Second, NH listeners presented with CI simulations demonstrated better performance than real CI listeners. If the simulations are realistic representations of electrical stimuli, real CI listeners appear to have difficulty processing interaural information in modulated signals.
Sound localization in common vampire bats: Acuity and use of the binaural time cue by a small mammal
Heffner, Rickye S.; Koay, Gimseong; Heffner, Henry E.
2015-01-01
Passive sound-localization acuity and the ability to use binaural time and intensity cues were determined for the common vampire bat (Desmodus rotundus). The bats were tested using a conditioned suppression/avoidance procedure in which they drank defibrinated blood from a spout in the presence of sounds from their right, but stopped drinking (i.e., broke contact with the spout) whenever a sound came from their left, thereby avoiding a mild shock. The mean minimum audible angle for three bats for a 100-ms noise burst was 13.1°—within the range of thresholds for other bats and near the mean for mammals. Common vampire bats readily localized pure tones of 20 kHz and higher, indicating they could use interaural intensity-differences. They could also localize pure tones of 5 kHz and lower, thereby demonstrating the use of interaural time-differences, despite their very small maximum interaural distance of 60 μs. A comparison of the use of locus cues among mammals suggests several implications for the evolution of sound localization and its underlying anatomical and physiological mechanisms. PMID:25618037
The effect of stimulus intensity on the right ear advantage in dichotic listening.
Hugdahl, Kenneth; Westerhausen, René; Alho, Kimmo; Medvedev, Svyatoslav; Hämäläinen, Heikki
2008-01-24
The dichotic listening test is a non-invasive behavioural technique for studying brain lateralization, and it has been shown that its results can be systematically modulated by varying stimulation properties (bottom-up effects) or attentional instructions (top-down effects) of the testing procedure. The goal of the present study was to further investigate bottom-up modulation by examining the effect of differences in the right or left ear stimulus intensity on the ear advantage. For this purpose, interaural intensity differences were gradually varied in steps of 3 dB from -21 dB in favour of the left ear to +21 dB in favour of the right ear, also including a no-difference baseline condition. Thirty-three right-handed adult participants with normal hearing acuity were tested. The dichotic listening paradigm was based on consonant-vowel stimulus pairs. Only pairs with the same voicing (voiced or non-voiced) of the consonant sound were used. The results showed: (a) a significant right ear advantage (REA) for interaural intensity differences from +21 to -3 dB, (b) no ear advantage (NEA) for the -6 dB difference, and (c) a significant left ear advantage (LEA) for differences from -9 to -21 dB. It is concluded that the right ear advantage in dichotic listening to CV syllables withstands an interaural intensity difference of -9 dB before yielding to a significant left ear advantage. This finding could have implications for theories of auditory laterality and hemispheric asymmetry for phonological processing.
Spitzer, M W; Semple, M N
1998-12-01
Transformation of binaural response properties in the ascending auditory pathway: influence of time-varying interaural phase disparity. J. Neurophysiol. 80: 3062-3076, 1998. Previous studies demonstrated that tuning of inferior colliculus (IC) neurons to interaural phase disparity (IPD) is often profoundly influenced by temporal variation of IPD, which simulates the binaural cue produced by a moving sound source. To determine whether sensitivity to simulated motion arises in IC or at an earlier stage of binaural processing we compared responses in IC with those of two major IPD-sensitive neuronal classes in the superior olivary complex (SOC), neurons whose discharges were phase locked (PL) to tonal stimuli and those that were nonphase locked (NPL). Time-varying IPD stimuli consisted of binaural beats, generated by presenting tones of slightly different frequencies to the two ears, and interaural phase modulation (IPM), generated by presenting a pure tone to one ear and a phase modulated tone to the other. IC neurons and NPL-SOC neurons were more sharply tuned to time-varying than to static IPD, whereas PL-SOC neurons were essentially uninfluenced by the mode of stimulus presentation. Preferred IPD was generally similar in responses to static and time-varying IPD for all unit populations. A few IC neurons were highly influenced by the direction and rate of simulated motion, but the major effect for most IC neurons and all SOC neurons was a linear shift of preferred IPD at high rates-attributable to response latency. Most IC and NPL-SOC neurons were strongly influenced by IPM stimuli simulating motion through restricted ranges of azimuth; simulated motion through partially overlapping azimuthal ranges elicited discharge profiles that were highly discontiguous, indicating that the response associated with a particular IPD is dependent on preceding portions of the stimulus. In contrast, PL-SOC responses tracked instantaneous IPD throughout the trajectory of simulated motion, resulting in highly contiguous discharge profiles for overlapping stimuli. This finding indicates that responses of PL-SOC units to time-varying IPD reflect only instantaneous IPD with no additional influence of dynamic stimulus attributes. Thus the neuronal representation of auditory spatial information undergoes a major transformation as interaural delay is initially processed in the SOC and subsequently reprocessed in IC. The finding that motion sensitivity in IC emerges from motion-insensitive input suggests that information about change of position is crucial to spatial processing at higher levels of the auditory system.
Binaural hearing in children using Gaussian enveloped and transposed tones.
Ehlers, Erica; Kan, Alan; Winn, Matthew B; Stoelb, Corey; Litovsky, Ruth Y
2016-04-01
Children who use bilateral cochlear implants (BiCIs) show significantly poorer sound localization skills than their normal hearing (NH) peers. This difference has been attributed, in part, to the fact that cochlear implants (CIs) do not faithfully transmit interaural time differences (ITDs) and interaural level differences (ILDs), which are known to be important cues for sound localization. Interestingly, little is known about binaural sensitivity in NH children, in particular, with stimuli that constrain acoustic cues in a manner representative of CI processing. In order to better understand and evaluate binaural hearing in children with BiCIs, the authors first undertook a study on binaural sensitivity in NH children ages 8-10, and in adults. Experiments evaluated sound discrimination and lateralization using ITD and ILD cues, for stimuli with robust envelope cues, but poor representation of temporal fine structure. Stimuli were spondaic words, Gaussian-enveloped tone pulse trains (100 pulse-per-second), and transposed tones. Results showed that discrimination thresholds in children were adult-like (15-389 μs for ITDs and 0.5-6.0 dB for ILDs). However, lateralization based on the same binaural cues showed higher variability than seen in adults. Results are discussed in the context of factors that may be responsible for poor representation of binaural cues in bilaterally implanted children.
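The transposed-tone stimulus class mentioned above can be generated with a few lines of signal processing: a half-wave rectified, low-pass filtered modulator imposed on a high-frequency carrier. The sketch below uses illustrative frequencies and an assumed low-pass cutoff, not the exact stimulus parameters of the study.

```python
# Sketch: generating a transposed tone -- a low-frequency modulator (half-wave rectified
# sinusoid, low-pass filtered) imposed on a high-frequency carrier. Parameter values are
# illustrative assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def transposed_tone(fs=44100, dur=0.3, carrier_hz=4000.0, mod_hz=128.0):
    t = np.arange(int(dur * fs)) / fs
    modulator = np.maximum(np.sin(2 * np.pi * mod_hz * t), 0.0)  # half-wave rectified
    # Low-pass the modulator to remove components near the carrier; the 0.2 * carrier
    # cutoff is an assumed, commonly used choice rather than the paper's exact value.
    sos = butter(4, 0.2 * carrier_hz, btype="lowpass", fs=fs, output="sos")
    modulator = sosfiltfilt(sos, modulator)
    return modulator * np.sin(2 * np.pi * carrier_hz * t)

stim = transposed_tone()
# An ITD can then be imposed by delaying one ear's copy, e.g.
# np.roll(stim, int(round(400e-6 * 44100))) for a ~400 microsecond envelope delay.
```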
Delphi, Maryam; Lotfi, M-Yones; Moossavi, Abdollah; Bakhshi, Enayatollah; Banimostafa, Maryam
2017-09-01
Previous studies have shown that interaural-time-difference (ITD) training can improve localization ability. Surprisingly little is, however, known about localization training vis-à-vis speech perception in noise based on interaural time difference in the envelope (ITD ENV). We sought to investigate the reliability of an ITD ENV-based training program for speech-in-noise perception among elderly individuals with normal hearing and speech-in-noise disorder. The present interventional study was performed during 2016. Sixteen elderly men between 55 and 65 years of age with the clinical diagnosis of normal hearing up to 2000 Hz and speech-in-noise perception disorder participated in this study. The localization training program was based on changes in ITD ENV. In order to evaluate the reliability of the training program, we performed speech-in-noise tests before the training program, immediately afterward, and then at 2 months' follow-up. The reliability of the training program was analyzed using the Friedman test and the SPSS software. Statistically significant differences were shown in the mean scores of speech-in-noise perception between the 3 time points (P=0.001). The results also indicated no difference in the mean scores of speech-in-noise perception between the 2 time points of immediately after the training program and 2 months' follow-up (P=0.212). The present study showed the reliability of ITD ENV-based localization training in elderly individuals with speech-in-noise perception disorder.
Modulation cues influence binaural masking-level difference in masking-pattern experiments.
Nitschmann, Marc; Verhey, Jesko L
2012-03-01
Binaural masking patterns show a steep decrease in the binaural masking-level difference (BMLD) when masker and signal have no frequency component in common. Experimental threshold data are presented together with model simulations for a diotic masker centered at 250 or 500 Hz and a bandwidth of 10 or 100 Hz masking a sinusoid interaurally in phase (S(0)) or in antiphase (S(π)). Simulations with a binaural model, including a modulation filterbank for the monaural analysis, indicate that a large portion of the decrease in the BMLD in remote-masking conditions may be due to an additional modulation cue available for monaural detection. © 2012 Acoustical Society of America
Linear summation in the barn owl's brainstem underlies responses to interaural time differences.
Kuokkanen, Paula T; Ashida, Go; Carr, Catherine E; Wagner, Hermann; Kempter, Richard
2013-07-01
The neurophonic potential is a synchronized frequency-following extracellular field potential that can be recorded in the nucleus laminaris (NL) in the brainstem of the barn owl. Putative generators of the neurophonic are the afferent axons from the nucleus magnocellularis, synapses onto NL neurons, and spikes of NL neurons. The outputs of NL, i.e., action potentials of NL neurons, are only weakly represented in the neurophonic. Instead, the inputs to NL, i.e., afferent axons and their synaptic potentials, are the predominant origin of the neurophonic (Kuokkanen PT, Wagner H, Ashida G, Carr CE, Kempter R. J Neurophysiol 104: 2274-2290, 2010). Thus in NL the monaural inputs from the two brain sides converge and create a binaural neurophonic. If these monaural inputs contribute independently to the extracellular field, the response to binaural stimulation can be predicted from the sum of the responses to ipsi- and contralateral stimulation. We found that a linear summation model explains the dependence of the responses on interaural time difference as measured experimentally with binaural stimulation. The fit between model predictions and data was excellent, even without taking into account the nonlinear responses of NL coincidence detector neurons, although their firing rate and synchrony strongly depend on the interaural time difference. These results are consistent with the view that the afferent axons and their synaptic potentials in NL are the primary origin of the neurophonic.
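The linear-summation idea can be illustrated with a toy calculation: the binaural response at a given ITD is approximated by adding the ipsilaterally and contralaterally evoked responses, one of them shifted by the ITD. The synthetic sinusoidal "monaural responses" below merely stand in for recorded traces; no recorded data are used.

```python
# Sketch of linear summation: predict the binaural neurophonic at a given ITD by summing
# the ipsi- and contralaterally evoked responses, with one of them time-shifted.
import numpy as np

fs = 50000
t = np.arange(int(0.02 * fs)) / fs
freq = 4000.0                                  # stimulus frequency (illustrative)
ipsi = 1.0 * np.sin(2 * np.pi * freq * t)      # stand-in for the ipsilateral response
contra = 0.8 * np.sin(2 * np.pi * freq * t)    # stand-in for the contralateral response

def predicted_binaural(itd_s):
    shift = int(round(itd_s * fs))
    return ipsi + np.roll(contra, shift)

for itd_us in (0, 62, 125):                    # 125 us is half a period at 4 kHz
    amp = np.std(predicted_binaural(itd_us * 1e-6))
    print(f"ITD {itd_us:3d} us -> predicted response amplitude {amp:.2f}")
# The predicted amplitude is maximal when the two inputs add in phase and minimal near
# half-period ITDs, mirroring the measured ITD dependence of the neurophonic.
```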
Aronoff, Justin M.; Freed, Daniel J.; Fisher, Laurel M.; Pal, Ivan; Soli, Sigfrid D.
2011-01-01
Objectives Cochlear implant microphones differ in placement, frequency response, and other characteristics such as whether they are directional. Although normal hearing individuals are often used as controls in studies examining cochlear implant users’ binaural benefits, the considerable differences across cochlear implant microphones make such comparisons potentially misleading. The goal of this study was to examine binaural benefits for speech perception in noise for normal hearing individuals using stimuli processed by head-related transfer functions (HRTFs) based on the different cochlear implant microphones. Design HRTFs were created for different cochlear implant microphones and used to test participants on the Hearing in Noise Test. Experiment 1 tested cochlear implant users and normal hearing individuals with HRTF-processed stimuli and with sound field testing to determine whether the HRTFs adequately simulated sound field testing. Experiment 2 determined the measurement error and performance-intensity function for the Hearing in Noise Test with normal hearing individuals listening to stimuli processed with the various HRTFs. Experiment 3 compared normal hearing listeners’ performance across HRTFs to determine how the HRTFs affected performance. Experiment 4 evaluated binaural benefits for normal hearing listeners using the various HRTFs, including ones that were modified to investigate the contributions of interaural time and level cues. Results The results indicated that the HRTFs adequately simulated sound field testing for the Hearing in Noise Test. They also demonstrated that the test-retest reliability and performance-intensity function were consistent across HRTFs, and that the measurement error for the test was 1.3 dB, with a change in signal-to-noise ratio of 1 dB reflecting a 10% change in intelligibility. There were significant differences in performance when using the various HRTFs, with particularly good thresholds for the HRTF based on the directional microphone when the speech and masker were spatially separated, emphasizing the importance of measuring binaural benefits separately for each HRTF. Evaluation of binaural benefits indicated that binaural squelch and spatial release from masking were found for all HRTFs and binaural summation was found for all but one HRTF, although binaural summation was less robust than the other types of binaural benefits. Additionally, the results indicated that neither interaural time nor level cues dominated binaural benefits for the normal hearing participants. Conclusions This study provides a means to measure the degree to which cochlear implant microphones affect acoustic hearing with respect to speech perception in noise. It also provides measures that can be used to evaluate the independent contributions of interaural time and level cues. These measures provide tools that can aid researchers in understanding and improving binaural benefits in acoustic hearing individuals listening via cochlear implant microphones. PMID:21412155
Berger, Christopher C; Gonzalez-Franco, Mar; Tajadura-Jiménez, Ana; Florencio, Dinei; Zhang, Zhengyou
2018-01-01
Auditory spatial localization in humans is performed using a combination of interaural time differences, interaural level differences, as well as spectral cues provided by the geometry of the ear. To render spatialized sounds within a virtual reality (VR) headset, either individualized or generic Head Related Transfer Functions (HRTFs) are usually employed. The former require arduous calibrations, but enable accurate auditory source localization, which may lead to a heightened sense of presence within VR. The latter obviate the need for individualized calibrations, but result in less accurate auditory source localization. Previous research on auditory source localization in the real world suggests that our representation of acoustic space is highly plastic. In light of these findings, we investigated whether auditory source localization could be improved for users of generic HRTFs via cross-modal learning. The results show that pairing a dynamic auditory stimulus, with a spatio-temporally aligned visual counterpart, enabled users of generic HRTFs to improve subsequent auditory source localization. Exposure to the auditory stimulus alone or to asynchronous audiovisual stimuli did not improve auditory source localization. These findings have important implications for human perception as well as the development of VR systems as they indicate that generic HRTFs may be enough to enable good auditory source localization in VR.
On the ability of human listeners to distinguish between front and back
Zhang, Peter Xinya; Hartmann, William M.
2009-01-01
In order to determine whether a sound source is in front or in back, listeners can use location-dependent spectral cues caused by diffraction from their anatomy. This capability was studied using a precise virtual-reality technique (VRX) based on a transaural technology. Presented with a virtual baseline simulation accurate up to 16 kHz, listeners could not distinguish between the simulation and a real source. Experiments requiring listeners to discriminate between front and back locations were performed using controlled modifications of the baseline simulation to test hypotheses about the important spectral cues. The experiments concluded: (1) Front/back cues were not confined to any particular 1/3rd or 2/3rd octave frequency region. Often adequate cues were available in any of several disjoint frequency regions. (2) Spectral dips were more important than spectral peaks. (3) Neither monaural cues nor interaural spectral level difference cues were adequate. (4) Replacing baseline spectra by sharpened spectra had minimal effect on discrimination performance. (5) When presented with an interaural time difference less than 200 μs, which pulled the image to the side, listeners still successfully discriminated between front and back, suggesting that front/back discrimination is independent of azimuthal localization within certain limits. PMID:19900525
Stecker, G Christopher; McLaughlin, Susan A; Higgins, Nathan C
2015-10-15
Whole-brain functional magnetic resonance imaging was used to measure blood-oxygenation-level-dependent (BOLD) responses in human auditory cortex (AC) to sounds with intensity varying independently in the left and right ears. Echoplanar images were acquired at 3 Tesla with sparse image acquisition once per 12-second block of sound stimulation. Combinations of binaural intensity and stimulus presentation rate were varied between blocks, and selected to allow measurement of response-intensity functions in three configurations: monaural 55-85 dB SPL, binaural 55-85 dB SPL with intensity equal in both ears, and binaural with average binaural level of 70 dB SPL and interaural level differences (ILD) ranging ±30 dB (i.e., favoring the left or right ear). Comparison of response functions equated for contralateral intensity revealed that BOLD-response magnitudes (1) generally increased with contralateral intensity, consistent with positive drive of the BOLD response by the contralateral ear, (2) were larger for contralateral monaural stimulation than for binaural stimulation, consistent with negative effects (e.g., inhibition) of ipsilateral input, which were strongest in the left hemisphere, and (3) also increased with ipsilateral intensity when contralateral input was weak, consistent with additional, positive, effects of ipsilateral stimulation. Hemispheric asymmetries in the spatial extent and overall magnitude of BOLD responses were generally consistent with previous studies demonstrating greater bilaterality of responses in the right hemisphere and stricter contralaterality in the left hemisphere. Finally, comparison of responses to fast (40/s) and slow (5/s) stimulus presentation rates revealed significant rate-dependent adaptation of the BOLD response that varied across ILD values. Copyright © 2015. Published by Elsevier Inc.
Binaural unmasking of multi-channel stimuli in bilateral cochlear implant users.
Van Deun, Lieselot; van Wieringen, Astrid; Francart, Tom; Büchner, Andreas; Lenarz, Thomas; Wouters, Jan
2011-10-01
Previous work suggests that bilateral cochlear implant users are sensitive to interaural cues if experimental speech processors are used to preserve accurate interaural information in the electrical stimulation pattern. Binaural unmasking occurs in adults and children when an interaural delay is applied to the envelope of a high-rate pulse train. Nevertheless, for speech perception, binaural unmasking benefits have not been demonstrated consistently, even with coordinated stimulation at both ears. The present study aimed at bridging the gap between basic psychophysical performance on binaural signal detection tasks on the one hand and binaural perception of speech in noise on the other hand. Therefore, binaural signal detection was expanded to multi-channel stimulation and biologically relevant interaural delays. A harmonic complex, consisting of three sinusoids (125, 250, and 375 Hz), was added to three 125-Hz-wide noise bands centered on the sinusoids. When an interaural delay of 700 μs was introduced, an average BMLD of 3 dB was established. Outcomes are promising in view of real-life benefits. Future research should investigate the generalization of the observed benefits for signal detection to speech perception in everyday listening situations and determine the importance of coordination of bilateral speech processors and accentuation of envelope cues.
Lateralization of noise-burst trains based on onset and ongoing interaural delays.
Freyman, Richard L; Balakrishnan, Uma; Zurek, Patrick M
2010-07-01
The lateralization of 250-ms trains of brief noise bursts was measured using an acoustic pointing technique. Stimuli were designed to assess the contribution of the interaural time delay (ITD) of the onset binaural burst relative to that of the ITDs in the ongoing part of the train. Lateralization was measured by listeners' adjustments of the ITD of a pointer stimulus, a 50-ms burst of noise, to match the lateral position of the target train. Results confirmed previous reports of lateralization dominance by the onset burst under conditions in which the train is composed of frozen tokens and the ongoing part contains multiple ambiguous interaural delays. In contrast, lateralization of ongoing trains in which fresh noise tokens were used for each set of two alternating (left-leading/right-leading) binaural pairs followed the ITD of the first pair in each set, regardless of the ITD of the onset burst of the entire stimulus and even when the onset burst was removed by gradual gating. This clear lateralization of a long-duration stimulus with ambiguous interaural delay cues suggests precedence mechanisms that involve not only the interaural cues at the beginning of a sound, but also the pattern of cues within an ongoing sound.
Azimuthal sound localization in the European starling (Sturnus vulgaris): I. Physical binaural cues.
Klump, G M; Larsen, O N
1992-02-01
The physical measurements reported here test whether the European starling (Sturnus vulgaris) evaluates the azimuth direction of a sound source with a peripheral auditory system composed of two acoustically coupled pressure-difference receivers (hypothesis 1) or of two decoupled pressure receivers (hypothesis 2). A directional pattern of sound intensity in the free field was measured at the entrance of the auditory meatus using a probe microphone, and at the tympanum using laser vibrometry. The maximum differences in the sound-pressure level measured with the microphone between various speaker positions and the frontal speaker position were 2.4 dB at 1 and 2 kHz, 7.3 dB at 4 kHz, 9.2 dB at 6 kHz, and 10.9 dB at 8 kHz. The directional amplitude pattern measured by laser vibrometry did not differ from that measured with the microphone, nor did the directional pattern of travel times to the ear. Measurements of the amplitude and phase transfer function of the starling's interaural pathway using a closed sound system were in accord with the results of the free-field measurements. In conclusion, although some sound transmission via the interaural canal occurred, the present experiments support hypothesis 2, that the starling's peripheral auditory system is best described as consisting of two functionally decoupled pressure receivers.
Tellers, Philipp; Lehmann, Jessica; Führ, Hartmut; Wagner, Hermann
2017-09-01
Birds and mammals use the interaural time difference (ITD) for azimuthal sound localization. While barn owls can use the ITD of the stimulus carrier frequency over nearly their entire hearing range, mammals have to utilize the ITD of the stimulus envelope to extend the upper frequency limit of ITD-based sound localization. ITD is computed and processed in a dedicated neural circuit that consists of two pathways. In the barn owl, ITD representation is more complex in the forebrain than in the midbrain pathway because of the combination of two inputs that represent different ITDs. We speculated that one of the two inputs includes an envelope contribution. To estimate the envelope contribution, we recorded ITD response functions for correlated and anticorrelated noise stimuli in the barn owl's auditory arcopallium. Our findings indicate that barn owls, like mammals, represent both carrier and envelope ITDs of overlapping frequency ranges, supporting the hypothesis that carrier and envelope ITD-based localization are complementary beyond a mere extension of the upper frequency limit. NEW & NOTEWORTHY The results presented in this study show for the first time that the barn owl is able to extract and represent the interaural time difference (ITD) information conveyed by the envelope of a broadband acoustic signal. Like mammals, the barn owl extracts the ITD of the envelope and the carrier of a signal from the same frequency range. These results are of general interest, since they reinforce a trend found in neural signal processing across different species. Copyright © 2017 the American Physiological Society.
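A generic way to separate the two cues in an analysis is sketched below: carrier ITD from the peak of the waveform cross-correlation, and envelope ITD from the cross-correlation of Hilbert envelopes. The anticorrelated case flips the carrier while leaving the envelope intact, which is the manipulation described above; everything else (sampling rate, imposed delay, noise token) is an illustrative assumption, not the recording or analysis pipeline of the study.

```python
# Sketch: estimating carrier ITD and envelope ITD of a binaural broadband signal by
# cross-correlating the waveforms and their Hilbert envelopes.
import numpy as np
from scipy.signal import hilbert, correlate, correlation_lags

def itd_from_peak(a, b, fs):
    """Lag (s) of the cross-correlation peak; positive means a is delayed relative to b."""
    lags = correlation_lags(len(a), len(b), mode="full")
    return lags[np.argmax(correlate(a, b, mode="full"))] / fs

def carrier_and_envelope_itd(left, right, fs):
    env_l, env_r = np.abs(hilbert(left)), np.abs(hilbert(right))
    return (itd_from_peak(right, left, fs),
            itd_from_peak(env_r - env_r.mean(), env_l - env_l.mean(), fs))

fs = 48000
rng = np.random.default_rng(2)
left = rng.standard_normal(fs // 2)
right = np.roll(left, int(round(200e-6 * fs)))          # right ear delayed ~200 us
for label, r in (("correlated", right), ("anticorrelated", -right)):
    c_itd, e_itd = carrier_and_envelope_itd(left, r, fs)
    print(f"{label:>14}: carrier ITD {c_itd*1e6:7.0f} us, envelope ITD {e_itd*1e6:7.0f} us")
# For the anticorrelated stimulus the carrier-based estimate becomes unreliable, while
# the envelope ITD is unchanged -- the envelope cue probed in the recordings above.
```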
The Relationship Between Intensity Coding and Binaural Sensitivity in Adults With Cochlear Implants
Todd, Ann E.; Goupell, Matthew J.; Litovsky, Ruth Y.
2016-01-01
Objectives Many bilateral cochlear implant users show sensitivity to binaural information when stimulation is provided using a pair of synchronized electrodes. However, there is large variability in binaural sensitivity between and within participants across stimulation sites in the cochlea. It was hypothesized that within-participant variability in binaural sensitivity is in part affected by limitations and characteristics of the auditory periphery which may be reflected by monaural hearing performance. The objective of this study was to examine the relationship between monaural and binaural hearing performance within participants with bilateral cochlear implants. Design Binaural measures included dichotic signal detection and interaural time difference discrimination thresholds. Diotic signal detection thresholds were also measured. Monaural measures included dynamic range and amplitude modulation detection. In addition, loudness growth was compared between ears. Measures were made at three stimulation sites per listener. Results Greater binaural sensitivity was found with larger dynamic ranges. Poorer interaural time difference discrimination was found with larger difference between comfortable levels of the two ears. In addition, poorer diotic signal detection thresholds were found with larger differences between the dynamic ranges of the two ears. No relationship was found between amplitude modulation detection thresholds or symmetry of loudness growth and the binaural measures. Conclusions The results suggest that some of the variability in binaural hearing performance within listeners across stimulation sites can be explained by factors non-specific to binaural processing. The results are consistent with the idea that dynamic range and comfortable levels relate to peripheral neural survival and the width of the excitation pattern which could affect the fidelity with which central binaural nuclei process bilateral inputs. PMID:27787393
Development of the sound localization cues in cats
NASA Astrophysics Data System (ADS)
Tollin, Daniel J.
2004-05-01
Cats are a common model for developmental studies of the psychophysical and physiological mechanisms of sound localization. Yet, there are few studies on the development of the acoustical cues to location in cats. The magnitudes of the three main cues, interaural differences in time (ITDs) and level (ILDs), and monaural spectral shape cues, vary with location in adults. However, the increasing interaural distance associated with a growing head and pinnae during development will result in cues that change continuously until maturation is complete. Here, we report measurements, in cats aged 1 week to adulthood, of the physical dimensions of the head and pinnae and of the localization cues, computed from measurements of directional transfer functions. At 1 week, ILD depended little on azimuth for frequencies <6-7 kHz, maximum ITD was 175 μs, and for sources varying in elevation, a prominent spectral notch was located at higher frequencies than in the older cats. As cats develop, the spectral cues and the frequencies at which ILDs become substantial (>10 dB) shift to lower frequencies, and the maximum ITD increases to nearly 370 μs. Changes in the cues are correlated with the increasing size of the head and pinnae. [Work supported by NIDCD DC05122.]
Intelligibility of speech in a virtual 3-D environment.
MacDonald, Justin A; Balakrishnan, J D; Orosz, Michael D; Karplus, Walter J
2002-01-01
In a simulated air traffic control task, improvement in the detection of auditory warnings when using virtual 3-D audio depended on the spatial configuration of the sounds. Performance improved substantially when two of four sources were placed to the left and the remaining two were placed to the right of the participant. Surprisingly, little or no benefits were observed for configurations involving the elevation or transverse (front/back) dimensions of virtual space, suggesting that position on the interaural (left/right) axis is the crucial factor to consider in auditory display design. The relative importance of interaural spacing effects was corroborated in a second, free-field (real space) experiment. Two additional experiments showed that (a) positioning signals to the side of the listener is superior to placing them in front even when two sounds are presented in the same location, and (b) the optimal distance on the interaural axis varies with the amplitude of the sounds. These results are well predicted by the behavior of an ideal observer under the different display conditions. This suggests that guidelines for auditory display design that allow for effective perception of speech information can be developed from an analysis of the physical sound patterns.
Bernstein, Leslie R; Trahiotis, Constantine
2014-02-01
Sensitivity to ongoing interaural temporal disparities (ITDs) was measured using bandpass-filtered pulse trains centered at 4600, 6500, or 9200 Hz. Save for minor differences in the exact center frequencies, those target stimuli were those employed by Majdak and Laback [J. Acoust. Soc. Am. 125, 3903-3913 (2009)]. At each center frequency, threshold ITD was measured for pulse repetition rates ranging from 64 to 609 Hz. The results and quantitative predictions by a cross-correlation-based model indicated that (1) at most pulse repetition rates, threshold ITD increased with center frequency, (2) the cutoff frequency of the putative envelope low-pass filter that determines sensitivity to ITD at high envelope rates appears to be inversely related to center frequency, and (3) both outcomes were accounted for by assuming that, independent of the center frequency, the listeners' decision variable was a constant criterion change in interaural correlation of the stimuli as processed internally. The finding of an inverse relation between center frequency and the envelope rate limitation, while consistent with much prior literature, runs counter to the conclusion reached by Majdak and Laback.
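The flavor of such a cross-correlation account can be reproduced with a short simulation: envelopes are extracted, passed through a low-pass filter, and compared via a normalized interaural correlation that serves as the decision variable. The sketch below substitutes sinusoidally amplitude-modulated tones for the bandpass-filtered pulse trains and uses assumed cutoff and rate values, so it illustrates the mechanism rather than reproducing the fitted model.

```python
# Sketch: internal decision variable modeled as the normalized correlation of low-pass
# filtered Hilbert envelopes. Filter order, cutoff, and stimulus parameters are assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def internal_correlation(left, right, fs, env_cutoff_hz=150.0):
    """Normalized correlation of low-pass filtered envelopes (DC retained, no mean removal)."""
    sos = butter(2, env_cutoff_hz, btype="lowpass", fs=fs, output="sos")
    el = sosfiltfilt(sos, np.abs(hilbert(left)))
    er = sosfiltfilt(sos, np.abs(hilbert(right)))
    return np.sum(el * er) / np.sqrt(np.sum(el ** 2) * np.sum(er ** 2))

def sam_tone(fs, dur, carrier_hz, mod_hz):
    t = np.arange(int(dur * fs)) / fs
    return (1.0 + np.cos(2 * np.pi * mod_hz * t)) * np.sin(2 * np.pi * carrier_hz * t)

fs, dur = 48000, 0.3
for mod_rate in (64, 400):                      # envelope rates (Hz)
    stim = sam_tone(fs, dur, carrier_hz=4600.0, mod_hz=mod_rate)
    for itd_us in (0, 500, 2000):
        delayed = np.roll(stim, int(round(itd_us * 1e-6 * fs)))
        rho = internal_correlation(stim, delayed, fs)
        print(f"rate {mod_rate:3d} Hz, ITD {itd_us:4d} us: internal correlation {rho:.3f}")
# Because the low-pass filter attenuates fast envelopes, correlation changes much more
# slowly with ITD at the high rate; with a fixed criterion change in correlation, larger
# ITDs are then needed, which is the envelope rate limitation discussed above.
```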
Gessele, Nikodemus; Garcia-Pino, Elisabet; Omerbašić, Damir; Park, Thomas J; Koch, Ursula
2016-01-01
Naked mole-rats (Heterocephalus glaber) live in large eusocial, underground colonies in narrow burrows and are exposed to a large repertoire of communication signals but negligible binaural sound localization cues, such as interaural time and intensity differences. We therefore asked whether monaural and binaural auditory brainstem nuclei in the naked mole-rat are differentially adjusted to this acoustic environment. Using antibody stainings against excitatory and inhibitory presynaptic structures, namely the vesicular glutamate transporter VGluT1 and the glycine transporter GlyT2, we identified all major auditory brainstem nuclei except the superior paraolivary nucleus in these animals. Naked mole-rats possess a well-structured medial superior olive, with a synaptic arrangement similar to that of interaural-time-difference-encoding animals. The neighboring lateral superior olive, which analyzes interaural intensity differences, is large and elongated, whereas the medial nucleus of the trapezoid body, which provides the contralateral inhibitory input to these binaural nuclei, is reduced in size. In contrast, the cochlear nucleus, the nuclei of the lateral lemniscus, and the inferior colliculus are not considerably different when compared to other rodent species. Most interestingly, binaural auditory brainstem nuclei lack the membrane-bound hyperpolarization-activated channel HCN1, a voltage-gated ion channel that greatly contributes to the fast integration times in binaural nuclei of the superior olivary complex in other species. This suggests substantially lengthened membrane time constants and thus prolonged temporal integration of inputs in binaural auditory brainstem neurons, and might be linked to the severely degenerated sound localization abilities of these animals.
Spiking Models for Level-Invariant Encoding
Brette, Romain
2012-01-01
Levels of ecological sounds vary over several orders of magnitude, but the firing rate and membrane potential of a neuron are much more limited in range. In binaural neurons of the barn owl, tuning to interaural delays is independent of level differences. Yet a monaural neuron with a fixed threshold should fire earlier in response to louder sounds, which would disrupt the tuning of these neurons. How could spike timing be independent of input level? Here I derive theoretical conditions for a spiking model to be insensitive to input level. The key property is a dynamic change in spike threshold. I then show how level invariance can be physiologically implemented, with specific ionic channel properties. It appears that these ingredients are indeed present in monaural neurons of the sound localization pathway of birds and mammals. PMID:22291634
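The core trick, a spike threshold that tracks the input so that timing survives level changes, can be caricatured in a few lines. In the toy model below both the membrane potential and the threshold are zero-baseline linear functionals of the input, so scaling the input scales both and leaves the crossing times unchanged; the time constants, gain, and the absence of a spike reset are simplifying assumptions, not the equations of the paper.

```python
# Toy caricature of level-invariant spike timing via a dynamic (input-tracking) threshold.
# Threshold crossings, not reset-based spikes, are recorded for simplicity.
import numpy as np

def crossing_times(gain, fs=20000, dur=0.05, tau_v=0.002, tau_theta=0.01, k=1.2):
    t = np.arange(int(dur * fs)) / fs
    drive = gain * np.maximum(np.sin(2 * np.pi * 200 * t), 0.0)   # half-wave rectified input
    v = np.zeros_like(t)
    theta = np.zeros_like(t)
    times = []
    dt = 1.0 / fs
    for i in range(1, t.size):
        v[i] = v[i-1] + dt * (-v[i-1] + drive[i]) / tau_v                    # leaky integration
        theta[i] = theta[i-1] + dt * (-theta[i-1] + k * v[i]) / tau_theta    # adaptive threshold
        if v[i] >= theta[i] and v[i-1] < theta[i-1]:
            times.append(t[i])                                               # upward crossing
    return np.array(times)

for gain in (1.0, 10.0, 100.0):       # input level varied over two orders of magnitude
    st = crossing_times(gain)
    print(f"gain {gain:6.1f}: first crossings at {np.round(st[:3] * 1e3, 2)} ms")
# Crossing times are identical across gains because v and theta scale together.
```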
NASA Astrophysics Data System (ADS)
Aaronson, Neil L.
This dissertation deals with questions important to the problem of human sound source localization in rooms, starting with perceptual studies and moving on to physical measurements made in rooms. In Chapter 1, a perceptual study is performed relevant to a specific phenomenon the effect of speech reflections occurring in the front-back dimension and the ability of humans to segregate that from unreflected speech. Distracters were presented from the same source as the target speech, a loudspeaker directly in front of the listener, and also from a loudspeaker directly behind the listener, delayed relative to the front loudspeaker. Steps were taken to minimize the contributions of binaural difference cues. For all delays within +/-32 ms, a release from informational masking of about 2 dB occurred. This suggested that human listeners are able to segregate speech sources based on spatial cues, even with minimal binaural cues. In moving on to physical measurements in rooms, a method was sought for simultaneous measurement of room characteristics such as impulse response (IR) and reverberation time (RT60), and binaural parameters such as interaural time difference (ITD), interaural level difference (ILD), and the interaural cross-correlation function and coherence. Chapter 2 involves investigations into the usefulness of maximum length sequences (MLS) for these purposes. Comparisons to random telegraph noise (RTN) show that MLS performs better in the measurement of stationary and room transfer functions, IR, and RT60 by an order of magnitude in RMS percent error, even after Wiener filtering and exponential time-domain filtering have improved the accuracy of RTN measurements. Measurements were taken in real rooms in an effort to understand how the reverberant characteristics of rooms affect binaural parameters important to sound source localization. Chapter 3 deals with interaural coherence, a parameter important for localization and perception of auditory source width. MLS were used to measure waveform and envelope coherences in two rooms for various source distances and 0° azimuth through a head-and-torso simulator (KEMAR). A relationship is sought that relates these two types of coherence, since envelope coherence, while an important quantity, is generally less accessible than waveform coherence. A power law relationship is shown to exist between the two that works well within and across bands, for any source distance, and is robust to reverberant conditions of the room. Measurements of ITD, ILD, and coherence in rooms give insight into the way rooms affect these parameters, and in turn, the ability of listeners to localize sounds in rooms. Such measurements, along with room properties, are made and analyzed using MLS methods in Chapter 4. It was found that the pinnae cause incoherence for sound sources incident between 30° and 90°. In human listeners, this does not seem to adversely affect performance in lateralization experiments. The cause of poor coherence in rooms was studied as part of Chapter 4 as well. It was found that rooms affect coherence by introducing variance into the ITD spectra within the bands in which it is measured. A mathematical model to predict the interaural coherence within a band given the standard deviation of the ITD spectrum and the center frequency of the band gives an exponential relationship. This is found to work well in predicting measured coherence given ITD spectrum variance. 
The pinnae seem to affect the ITD spectrum in a similar way at incident sound angles for which coherence is poor in an anechoic environment.
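The two coherence measures that the reported power-law relationship connects can be computed as sketched below: bandpass-filter the two ear signals, take the peak normalized cross-correlation for waveform coherence, and repeat on the Hilbert envelopes for envelope coherence. The band, lag window, and the use of added independent noise as a stand-in for reverberation are assumptions for illustration only.

```python
# Sketch: waveform and envelope coherence within a band, each taken as the peak of a
# normalized cross-correlation over a limited lag range.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert, correlate

def band_coherences(left, right, fs, band=(400.0, 800.0), max_lag_s=1e-3):
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    l, r = sosfiltfilt(sos, left), sosfiltfilt(sos, right)
    max_lag = int(round(max_lag_s * fs))
    def peak_corr(a, b):
        cc = correlate(a, b, mode="full") / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2))
        mid = len(a) - 1
        return np.max(cc[mid - max_lag: mid + max_lag + 1])
    waveform = peak_corr(l, r)
    el, er = np.abs(hilbert(l)), np.abs(hilbert(r))
    envelope = peak_corr(el - el.mean(), er - er.mean())
    return waveform, envelope

fs = 16000
rng = np.random.default_rng(3)
direct = rng.standard_normal(fs)
for diffuse in (0.0, 0.5, 1.0):            # rough stand-in for increasing reverberation
    left = direct + diffuse * rng.standard_normal(fs)
    right = direct + diffuse * rng.standard_normal(fs)
    w, e = band_coherences(left, right, fs)
    print(f"diffuse {diffuse:.1f}: waveform coherence {w:.2f}, envelope coherence {e:.2f}")
# Both coherences fall together as the diffuse component grows; the dissertation fits a
# power law relating the two across bands and source distances.
```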
Modeling off-frequency binaural masking for short- and long-duration signals.
Nitschmann, Marc; Yasin, Ifat; Henning, G Bruce; Verhey, Jesko L
2017-08-01
Experimental binaural masking-pattern data are presented together with model simulations for 12- and 600-ms signals. The masker was a diotic 11-Hz wide noise centered on 500 Hz. The tonal signal was presented either diotically or dichotically (180° interaural phase difference) with frequencies ranging from 400 to 600 Hz. The results and the modeling agree with previous data and hypotheses; simulations with a binaural model sensitive to monaural modulation cues show that the effect of duration on off-frequency binaural masking-level differences is mainly a result of modulation cues which are only available in the monaural detection of long signals.
Maps of interaural delay in the owl's nucleus laminaris
Shah, Sahil; McColgan, Thomas; Ashida, Go; Kuokkanen, Paula T.; Brill, Sandra; Kempter, Richard; Wagner, Hermann
2015-01-01
Axons from the nucleus magnocellularis form a presynaptic map of interaural time differences (ITDs) in the nucleus laminaris (NL). These inputs generate a field potential that varies systematically with recording position and can be used to measure the map of ITDs. In the barn owl, the representation of best ITD shifts with mediolateral position in NL, so as to form continuous, smoothly overlapping maps of ITD with iso-ITD contours that are not parallel to the NL border. Frontal space (0°) is, however, represented throughout and thus overrepresented with respect to the periphery. Measurements of presynaptic conduction delay, combined with a model of delay line conduction velocity, reveal that conduction delays can account for the mediolateral shifts in the map of ITD. PMID:26224776
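The delay-line logic that converts conduction delays into a map of best ITD is compact enough to state as a calculation: the best ITD at a point is the difference between the contralateral and ipsilateral conduction delays to that point. The length and conduction velocity below are round illustrative numbers, not the measured values from the study.

```python
# Sketch of the delay-line picture: best ITD at a position along the delay-line axis is
# set by the difference in axonal conduction delays from the two sides. Numbers are
# illustrative assumptions only.
import numpy as np

AXIS_LENGTH_MM = 1.0        # assumed extent of the delay-line axis through the nucleus
VELOCITY_MM_PER_MS = 5.0    # assumed delay-line conduction velocity

positions = np.linspace(0.0, AXIS_LENGTH_MM, 5)
delay_ipsi = positions / VELOCITY_MM_PER_MS                         # ipsi axons enter at 0 mm
delay_contra = (AXIS_LENGTH_MM - positions) / VELOCITY_MM_PER_MS    # contra axons enter at 1 mm
best_itd_us = (delay_contra - delay_ipsi) * 1000.0                  # ITD that equalizes arrivals
for x, itd in zip(positions, best_itd_us):
    print(f"position {x:.2f} mm: best ITD {itd:+.0f} us")
# A systematic shift of best ITD with position falls directly out of the conduction delays,
# which are the quantities measured to account for the map.
```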
Post interaural neural net-based vowel recognition
NASA Astrophysics Data System (ADS)
Jouny, Ismail I.
2001-10-01
Interaural head-related transfer functions are used to process speech signatures prior to neural-net-based recognition. Data representing the head-related transfer function of a dummy head have been collected at MIT and made available on the Internet. These data are used to pre-process vowel signatures to mimic the effects of the human ear on speech perception. Signatures representing various vowels of the English language are then presented to a multi-layer perceptron trained using the back-propagation algorithm for recognition purposes. The focus of this paper is to assess the effects of the human interaural system on vowel recognition performance, particularly when using a classification system that mimics the human brain, such as a neural net.
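A bare-bones version of the described pipeline is sketched below, using scikit-learn's multilayer perceptron as a stand-in for the paper's back-propagation network: vowel tokens are filtered through a left/right impulse-response pair, reduced to coarse spectral features, and classified. The impulse responses, synthetic two-formant "vowels", feature choice, and network size are all placeholders, not the configuration used in the paper.

```python
# Sketch: HRTF-style binaural pre-processing followed by MLP vowel classification.
# All data and parameters below are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier

def binaural_features(signal, h_left, h_right, n_bins=32):
    feats = []
    for h in (h_left, h_right):
        ear = np.convolve(signal, h)                       # filter through one ear's IR
        spectrum = np.abs(np.fft.rfft(ear))
        bands = np.array_split(spectrum, n_bins)           # coarse spectral averaging
        feats.append(np.array([b.mean() for b in bands]))
    feats = np.concatenate(feats)
    return feats / (np.linalg.norm(feats) + 1e-12)

fs = 8000
t = np.arange(int(0.2 * fs)) / fs
rng = np.random.default_rng(4)
vowel_formants = {"a": (730, 1090), "i": (270, 2290), "u": (300, 870)}   # toy two-formant vowels
h_l, h_r = np.array([1.0, 0.4]), np.array([0.8, 0.2])                    # placeholder ear IRs

X, y = [], []
for label, (f1, f2) in vowel_formants.items():
    for _ in range(30):
        token = (np.sin(2 * np.pi * f1 * t) + 0.5 * np.sin(2 * np.pi * f2 * t)
                 + 0.1 * rng.standard_normal(t.size))
        X.append(binaural_features(token, h_l, h_r))
        y.append(label)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```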
Siveke, Ida; Leibold, Christian; Grothe, Benedikt
2007-11-01
We are regularly exposed to several concurrent sounds, producing a mixture of binaural cues. The neuronal mechanisms underlying the localization of concurrent sounds are not well understood. The major binaural cues for localizing low-frequency sounds in the horizontal plane are interaural time differences (ITDs). Auditory brain stem neurons encode ITDs by firing maximally in response to "favorable" ITDs and weakly or not at all in response to "unfavorable" ITDs. We recorded from ITD-sensitive neurons in the dorsal nucleus of the lateral lemniscus (DNLL) while presenting pure tones at different ITDs embedded in noise. We found that increasing levels of concurrent white noise suppressed the maximal response rate to tones with favorable ITDs and slightly enhanced the response rate to tones with unfavorable ITDs. Nevertheless, most of the neurons maintained ITD sensitivity to tones even for noise intensities equal to that of the tone. Using concurrent noise with a spectral composition in which the neuron's excitatory frequencies are omitted reduced the maximal response similar to that obtained with concurrent white noise. This finding indicates that the decrease of the maximal rate is mediated by suppressive cross-frequency interactions, which we also observed during monaural stimulation with additional white noise. In contrast, the enhancement of the firing rate to tones at unfavorable ITD might be due to early binaural interactions (e.g., at the level of the superior olive). A simple simulation corroborates this interpretation. Taken together, these findings suggest that the spectral composition of a concurrent sound strongly influences the spatial processing of ITD-sensitive DNLL neurons.
Brown, Andrew D; Tollin, Daniel J
2016-09-21
In mammals, localization of sound sources in azimuth depends on sensitivity to interaural differences in sound timing (ITD) and level (ILD). Paradoxically, while typical ILD-sensitive neurons of the auditory brainstem require millisecond synchrony of excitatory and inhibitory inputs for the encoding of ILDs, human and animal behavioral ILD sensitivity is robust to temporal stimulus degradations (e.g., interaural decorrelation due to reverberation), or, in humans, bilateral clinical device processing. Here we demonstrate that behavioral ILD sensitivity is only modestly degraded with even complete decorrelation of left- and right-ear signals, suggesting the existence of a highly integrative ILD-coding mechanism. Correspondingly, we find that a majority of auditory midbrain neurons in the central nucleus of the inferior colliculus (of chinchilla) effectively encode ILDs despite complete decorrelation of left- and right-ear signals. We show that such responses can be accounted for by relatively long windows of bilateral excitatory-inhibitory interaction, which we explicitly measure using trains of narrowband clicks. Neural and behavioral data are compared with the outputs of a simple model of ILD processing with a single free parameter, the duration of excitatory-inhibitory interaction. Behavioral, neural, and modeling data collectively suggest that ILD sensitivity depends on binaural integration of excitation and inhibition within a ≳3 ms temporal window, significantly longer than observed in lower brainstem neurons. This relatively slow integration potentiates a unique role for the ILD system in spatial hearing that may be of particular importance when informative ITD cues are unavailable. In mammalian hearing, interaural differences in the timing (ITD) and level (ILD) of impinging sounds carry critical information about source location. However, natural sounds are often decorrelated between the ears by reverberation and background noise, degrading the fidelity of both ITD and ILD cues. Here we demonstrate that behavioral ILD sensitivity (in humans) and neural ILD sensitivity (in single neurons of the chinchilla auditory midbrain) remain robust under stimulus conditions that render ITD cues undetectable. This result can be explained by "slow" temporal integration arising from several-millisecond-long windows of excitatory-inhibitory interaction evident in midbrain, but not brainstem, neurons. Such integrative coding can account for the preservation of ILD sensitivity despite even extreme temporal degradations in ecological acoustic stimuli. Copyright © 2016 the authors 0270-6474/16/369908-14$15.00/0.
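The "long window of excitatory-inhibitory interaction" idea above lends itself to a very small simulation. The sketch below is a toy version under assumed parameters (rectangular integration window, rectified subtraction), not the authors' published model; it only shows how a single window-duration parameter enters such a computation.

```python
import numpy as np

def ei_response(exc, inh, fs, window_ms):
    """Toy excitatory-inhibitory ILD computation: each input envelope is
    smoothed by a rectangular window of the given duration (the single free
    parameter), and the output is the mean rectified difference.  An
    illustration of the idea only, not the authors' model."""
    n = max(1, int(fs * window_ms / 1000.0))
    w = np.ones(n) / n
    e = np.convolve(exc, w, mode="same")
    i = np.convolve(inh, w, mode="same")
    return np.maximum(e - i, 0.0).mean()

# Responses to uncorrelated left/right noise envelopes carrying a fixed level
# difference, for a sub-millisecond window versus a ~3 ms window.
fs = 20000
rng = np.random.default_rng(1)
contra_exc = np.abs(rng.standard_normal(fs))          # excitatory drive
ipsi_inh = 0.5 * np.abs(rng.standard_normal(fs))      # inhibitory drive, ~6 dB weaker
print(ei_response(contra_exc, ipsi_inh, fs, 0.1),
      ei_response(contra_exc, ipsi_inh, fs, 3.0))
```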
Representation of Dynamic Interaural Phase Difference in Auditory Cortex of Awake Rhesus Macaques
Scott, Brian H.; Malone, Brian J.; Semple, Malcolm N.
2009-01-01
Neurons in auditory cortex of awake primates are selective for the spatial location of a sound source, yet the neural representation of the binaural cues that underlie this tuning remains undefined. We examined this representation in 283 single neurons across the low-frequency auditory core in alert macaques, trained to discriminate binaural cues for sound azimuth. In response to binaural beat stimuli, which mimic acoustic motion by modulating the relative phase of a tone at the two ears, these neurons robustly modulate their discharge rate in response to this directional cue. In accordance with prior studies, the preferred interaural phase difference (IPD) of these neurons typically corresponds to azimuthal locations contralateral to the recorded hemisphere. Whereas binaural beats evoke only transient discharges in anesthetized cortex, neurons in awake cortex respond throughout the IPD cycle. In this regard, responses are consistent with observations at earlier stations of the auditory pathway. Discharge rate is a band-pass function of the frequency of IPD modulation in most neurons (73%), but both discharge rate and temporal synchrony are independent of the direction of phase modulation. When subjected to a receiver operator characteristic analysis, the responses of individual neurons are insufficient to account for the perceptual acuity of these macaques in an IPD discrimination task, suggesting the need for neural pooling at the cortical level. PMID:19164111
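The receiver operating characteristic analysis mentioned above reduces to comparing spike-count distributions from two stimulus conditions. A minimal sketch, using made-up Poisson counts rather than the recorded data:

```python
import numpy as np

def roc_area(counts_a, counts_b):
    """ROC area for an ideal observer deciding which of two IPD conditions
    produced a given spike count, via the rank identity
    AUC = P(a > b) + 0.5 * P(a == b)."""
    a = np.asarray(counts_a)[:, None]
    b = np.asarray(counts_b)[None, :]
    return np.mean(a > b) + 0.5 * np.mean(a == b)

# Example with hypothetical Poisson spike counts for two IPD conditions
rng = np.random.default_rng(2)
print(roc_area(rng.poisson(12, 100), rng.poisson(9, 100)))  # roughly 0.7 here
```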
Cervical Vestibular-Evoked Myogenic Potentials: Norms and Protocols
Isaradisaikul, Suwicha; Navacharoen, Niramon; Hanprasertpong, Charuk; Kangsanarak, Jaran
2012-01-01
Vestibular-evoked myogenic potential (VEMP) testing is a vestibular function test used for evaluating saccular and inferior vestibular nerve function. Parameters of VEMP testing include the VEMP threshold, the latencies of p1 and n1, and the p1-n1 interamplitude. Less commonly used parameters are the p1-n1 interlatency, the interaural difference of p1 and n1 latency, and the interaural amplitude difference (IAD) ratio. This paper recommends using air-conducted 500 Hz tone burst auditory stimulation presented monaurally via an insert earphone, with the subject seated and the head turned to the contralateral side, and recording the responses from the ipsilateral sternocleidomastoid muscle. Normative values of VEMP responses in 50 volunteers with normal audiovestibular function were presented. VEMP testing protocols and normative values from other literature were reviewed and compared. The study is useful to clinicians as a reference guide for setting up VEMP testing and interpreting VEMP responses.
Ho, Cheng-Yu; Li, Pei-Chun; Chiang, Yuan-Chuan; Young, Shuenn-Tsong; Chu, Woei-Chyn
2015-01-01
Binaural hearing involves using information relating to the differences between the signals that arrive at the two ears, and it can make it easier to detect and recognize signals in a noisy environment. This phenomenon of binaural hearing is quantified in laboratory studies as the binaural masking-level difference (BMLD). Mandarin is one of the most commonly used languages, but there are no published values of the BMLD or the binaural intelligibility level difference (BILD) based on Mandarin tones. Therefore, this study investigated the BMLD and BILD of Mandarin tones. The BMLDs of Mandarin tone detection were measured based on the detection threshold differences for the four tones of the voiced vowels /i/ (i.e., /i1/, /i2/, /i3/, and /i4/) and /u/ (i.e., /u1/, /u2/, /u3/, and /u4/) in the presence of speech-spectrum noise when presented interaurally in phase (S0N0) and interaurally in antiphase (SπN0). The BILDs of Mandarin tone recognition in speech-spectrum noise were determined as the differences in the target-to-masker ratio (TMR) required for 50% correct tone recognitions between the S0N0 and SπN0 conditions. The detection thresholds for the four tones of /i/ and /u/ differed significantly (p<0.001) between the S0N0 and SπN0 conditions. The average detection thresholds of Mandarin tones were all lower in the SπN0 condition than in the S0N0 condition, and the BMLDs ranged from 7.3 to 11.5 dB. The TMR for 50% correct Mandarin tone recognitions differed significantly (p<0.001) between the S0N0 and SπN0 conditions, at –13.4 and –18.0 dB, respectively, with a mean BILD of 4.6 dB. The study showed that the thresholds of Mandarin tone detection and recognition in the presence of speech-spectrum noise are improved when phase inversion is applied to the target speech. The average BILDs of Mandarin tones are smaller than the average BMLDs of Mandarin tones. PMID:25835987
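The S0N0 and SπN0 conditions referred to above have a simple construction: the masking noise is identical at the two ears, and the target is either identical (S0) or polarity-inverted in one ear (Sπ). A schematic sketch, with a placeholder tone standing in for the Mandarin vowel recordings:

```python
import numpy as np

def make_binaural_trial(target, noise, antiphasic_signal=False):
    """Build the two ear signals for a masking trial: the noise is always
    diotic (N0); the target is either diotic (S0) or given a 180-degree
    interaural phase difference (Spi) by inverting its polarity in one ear.
    A schematic of the S0N0 / SpiN0 conditions, not the study's exact stimuli."""
    left = noise + target
    right = noise + (-target if antiphasic_signal else target)
    return left, right

fs, dur = 16000, 0.5
t = np.arange(int(fs * dur)) / fs
tone = 0.05 * np.sin(2 * np.pi * 250 * t)        # stand-in for a Mandarin vowel tone
noise = np.random.default_rng(3).standard_normal(t.size)
s0n0 = make_binaural_trial(tone, noise, antiphasic_signal=False)
spin0 = make_binaural_trial(tone, noise, antiphasic_signal=True)
```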
Threshold of the precedence effect in noise
Freyman, Richard L.; Griffin, Amanda M.; Zurek, Patrick M.
2014-01-01
Three effects that show a temporal asymmetry in the influence of interaural cues were studied through the addition of masking noise: (1) The transient precedence effect—the perceptual dominance of a leading transient over a similar lagging transient; (2) the ongoing precedence effect—lead dominance with lead and lag components that extend in time; and (3) the onset capture effect—determination by an onset transient of the lateral position of an otherwise ambiguous extended trailing sound. These three effects were evoked with noise-burst stimuli and were compared in the presence of masking noise. Using a diotic noise masker, detection thresholds for stimuli with lead/lag interaural delays of 0/500 μs were compared to those with 500/0 μs delays. None of the three effects showed a masking difference between those conditions, suggesting that none of the effects is operative at masked threshold. A task requiring the discrimination between stimuli with 500/0 and 0/500 μs interaural delays was used to determine the threshold for each effect in noise. The results showed similar thresholds in noise (10–13 dB SL) for the transient and ongoing precedence effects, but a much higher threshold (33 dB SL) for onset capture of an ambiguous trailing sound. PMID:24815272
Keating, Peter; Nodal, Fernando R; King, Andrew J
2014-01-01
For over a century, the duplex theory has guided our understanding of human sound localization in the horizontal plane. According to this theory, the auditory system uses interaural time differences (ITDs) and interaural level differences (ILDs) to localize low-frequency and high-frequency sounds, respectively. Whilst this theory successfully accounts for the localization of tones by humans, some species show very different behaviour. Ferrets are widely used for studying both clinical and fundamental aspects of spatial hearing, but it is not known whether the duplex theory applies to this species or, if so, to what extent the frequency range over which each binaural cue is used depends on acoustical or neurophysiological factors. To address these issues, we trained ferrets to lateralize tones presented over earphones and found that the frequency dependence of ITD and ILD sensitivity broadly paralleled that observed in humans. Compared with humans, however, the transition between ITD and ILD sensitivity was shifted toward higher frequencies. We found that the frequency dependence of ITD sensitivity in ferrets can partially be accounted for by acoustical factors, although neurophysiological mechanisms are also likely to be involved. Moreover, we show that binaural cue sensitivity can be shaped by experience, as training ferrets on a 1-kHz ILD task resulted in significant improvements in thresholds that were specific to the trained cue and frequency. Our results provide new insights into the factors limiting the use of different sound localization cues and highlight the importance of sensory experience in shaping the underlying neural mechanisms. PMID:24256073
Altmann, Christian F; Ueda, Ryuhei; Bucher, Benoit; Furukawa, Shigeto; Ono, Kentaro; Kashino, Makio; Mima, Tatsuya; Fukuyama, Hidenao
2017-10-01
Interaural time (ITD) and level differences (ILD) constitute the two main cues for sound localization in the horizontal plane. Despite extensive research in animal models and humans, the mechanism of how these two cues are integrated into a unified percept is still far from clear. In this study, our aim was to test with human electroencephalography (EEG) whether integration of dynamic ITD and ILD cues is reflected in the so-called motion-onset response (MOR), an evoked potential elicited by moving sound sources. To this end, ITD and ILD trajectories were determined individually by cue trading psychophysics. We then measured EEG while subjects were presented with either static click-trains or click-trains that contained a dynamic portion at the end. The dynamic part was created by combining ITD with ILD either congruently to elicit the percept of a right/leftward moving sound, or incongruently to elicit the percept of a static sound. In two experiments that differed in the method to derive individual dynamic cue trading stimuli, we observed an MOR with at least a change-N1 (cN1) component for both the congruent and incongruent conditions at about 160-190 ms after motion-onset. A significant change-P2 (cP2) component for both the congruent and incongruent ITD/ILD combination was found only in the second experiment peaking at about 250 ms after motion onset. In sum, this study shows that a sound which - by a combination of counter-balanced ITD and ILD cues - induces a static percept can still elicit a motion-onset response, indicative of independent ITD and ILD processing at the level of the MOR - a component that has been proposed to be, at least partly, generated in non-primary auditory cortex. Copyright © 2017 Elsevier Inc. All rights reserved.
JNDs of interaural time delay (ITD) of selected frequency bands in speech and music signals
NASA Astrophysics Data System (ADS)
Aliphas, Avner; Colburn, H. Steven; Ghitza, Oded
2002-05-01
JNDs of interaural time delay (ITD) of selected frequency bands in the presence of other frequency bands have been reported for noiseband stimuli [Zurek (1985); Trahiotis and Bernstein (1990)]. Similar measurements will be reported for speech and music signals. When stimuli are synthesized with bandpass/band-stop operations, performance with complex stimuli is similar to that with noisebands (JNDs in tens or hundreds of microseconds); however, the resulting waveforms, when viewed through a model of the auditory periphery, show distortions (irregularities in phase and level) at the boundaries of the target band of frequencies. An alternate synthesis method based upon group-delay filtering operations does not show these distortions and is being used for the current measurements. Preliminary measurements indicate that when music stimuli are created using the new techniques, JNDs of ITDs are increased significantly compared to previous studies, with values on the order of milliseconds.
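One way to impose an ITD on a selected frequency band, in the spirit of the group-delay synthesis mentioned above (though not necessarily the authors' exact procedure), is to apply a linear phase ramp only within the target band in the frequency domain, with a gentle taper at the band edges. The function and parameter values below are illustrative assumptions.

```python
import numpy as np

def delay_band(x, fs, f_lo, f_hi, itd_s, edge_hz=50.0):
    """Return a copy of x in which only the f_lo..f_hi band is delayed by
    itd_s, by applying a linear phase ramp in the frequency domain with a
    linear taper of width edge_hz at the band edges.  A simplified stand-in
    for group-delay synthesis, not the authors' exact method."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    # Smooth 0..1 weight selecting the target band
    w = np.clip((f - (f_lo - edge_hz)) / edge_hz, 0, 1) * \
        np.clip(((f_hi + edge_hz) - f) / edge_hz, 0, 1)
    X *= np.exp(-2j * np.pi * f * itd_s * w)
    return np.fft.irfft(X, n=len(x))

# Right-ear copy with a 500-microsecond delay confined to 1-2 kHz
fs = 44100
x = np.random.default_rng(4).standard_normal(fs)
x_right = delay_band(x, fs, 1000.0, 2000.0, 500e-6)
```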
Interaural attenuation for Sennheiser HDA 200 circumaural earphones.
Brännström, K Jonas; Lantz, Johannes
2010-06-01
Interaural attenuation (IA) was evaluated for pure tones (frequency range 125 to 16000 Hz) using Sennheiser HDA 200 circumaural earphones and Telephonics TDH-39P earphones in nine unilaterally deaf subjects. Audiometry was conducted in 1-dB steps using the manual ascending technique in accordance with ISO 8253-1. For all subjects and for all tested frequencies, the lowest IA value for HDA 200 was 42 dB. The present IA values for TDH-39P earphones closely resemble previously reported data. The findings show that the HDA 200 earphones provide more IA than the TDH-39P, especially at lower frequencies.
Majdak, Piotr; Laback, Bernhard; Baumgartner, Wolf-Dieter
2006-10-01
Bilateral cochlear implant (CI) listeners currently use stimulation strategies which encode interaural time differences (ITD) in the temporal envelope but which do not transmit ITD in the fine structure, due to the constant phase in the electric pulse train. To determine the utility of encoding ITD in the fine structure, ITD-based lateralization was investigated with four CI listeners and four normal hearing (NH) subjects listening to a simulation of electric stimulation. Lateralization discrimination was tested at different pulse rates for various combinations of independently controlled fine structure ITD and envelope ITD. Results for electric hearing show that the fine structure ITD had the strongest impact on lateralization at lower pulse rates, with significant effects for pulse rates up to 800 pulses per second. At higher pulse rates, lateralization discrimination depended solely on the envelope ITD. The data suggest that bilateral CI listeners benefit from transmitting fine structure ITD at lower pulse rates. However, there were strong interindividual differences: the better performing CI listeners performed comparably to the NH listeners.
Marques do Carmo, Diego; Costa, Márcio Holsbach
2018-04-01
This work presents an online approximation method for the multichannel Wiener filter (MWF) noise reduction technique with preservation of the noise interaural level difference (ILD) for binaural hearing aids. The steepest descent method is applied to a previously proposed MWF-ILD cost function to both approximate the optimal linear estimator of the desired speech and keep the subjective perception of the original acoustic scenario. The computational cost of the resulting algorithm is estimated in terms of multiply-and-accumulate operations, whose number can be controlled by setting the number of iterations at each time frame. Simulation results for the particular case of one speech source and one directional noise source show that the proposed method increases the signal-to-noise ratio (SNR) of the originally acquired speech by up to 16.9 dB in the assessed scenarios. Compared to the online implementation of the conventional MWF technique, the proposed technique provides a reduction of up to 7 dB in the noise ILD error at the price of a reduction of up to 3 dB in the output SNR. Subjective experiments with volunteers complement these objective measures with psychoacoustic results, which corroborate the expected spatial preservation of the original acoustic scenario. The proposed method allows practical online implementation of the MWF-ILD noise reduction technique under constrained computational resources. Predicted SNR improvements from 12 dB to 16.9 dB can be obtained in application-specific integrated circuits for hearing aids and state-of-the-art digital signal processors. Copyright © 2018 Elsevier Ltd. All rights reserved.
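The optimisation strategy described above can be illustrated with a per-frame steepest-descent loop for a single frequency bin. The sketch below omits the ILD-preservation term of the MWF-ILD cost (so it shows only the conventional MWF part), uses hypothetical smoothing constants, and is meant to show how the number of iterations per frame becomes the knob on computational cost, not to reproduce the paper's algorithm.

```python
import numpy as np

def mwf_steepest_descent(Y, D, n_iters=2, mu=0.1):
    """Frame-by-frame steepest-descent approximation of a multichannel Wiener
    filter in one frequency bin.  Y is (frames, mics) of noisy STFT
    coefficients, D the desired (speech) reference per frame.  The ILD term of
    the MWF-ILD cost is omitted here; this is only a schematic of the
    optimisation strategy."""
    n_mics = Y.shape[1]
    w = np.zeros(n_mics, dtype=complex)
    Ryy = np.eye(n_mics, dtype=complex)          # running correlation estimates
    ryd = np.zeros(n_mics, dtype=complex)
    out = np.zeros(Y.shape[0], dtype=complex)
    for t, y in enumerate(Y):
        Ryy = 0.95 * Ryy + 0.05 * np.outer(y, y.conj())
        ryd = 0.95 * ryd + 0.05 * y * np.conj(D[t])
        for _ in range(n_iters):                 # K gradient steps per frame
            w = w - mu * (Ryy @ w - ryd)         # gradient of E|D - w^H y|^2
        out[t] = np.vdot(w, y)                   # filter output w^H y
    return out, w

# Toy usage with random complex "STFT" data; mic 0 stands in for the reference.
rng = np.random.default_rng(5)
Y = rng.standard_normal((200, 2)) + 1j * rng.standard_normal((200, 2))
est, w = mwf_steepest_descent(Y, Y[:, 0])
```

Raising n_iters brings w closer to the bin-wise Wiener solution at proportionally higher multiply-and-accumulate cost, which is the trade-off the paper controls.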
The acoustical cues to sound location in the guinea pig (Cavia porcellus)
Greene, Nathanial T; Anbuhl, Kelsey L; Williams, Whitney; Tollin, Daniel J.
2014-01-01
There are three main acoustical cues to sound location, each attributable to space- and frequency-dependent filtering of the propagating sound waves by the outer ears, head, and torso: interaural differences in time (ITD) and level (ILD), as well as monaural spectral shape cues. While the guinea pig has been a common model for studying the anatomy, physiology, and behavior of binaural and spatial hearing, extensive measurements of the acoustical cues available to this species are lacking. Here, these cues were determined from directional transfer functions (DTFs), the directional components of the head-related transfer functions, for eleven adult guinea pigs. In the frontal hemisphere, monaural spectral notches were present for frequencies from ~10 to 20 kHz; in general, the notch frequency increased with increasing sound source elevation and in azimuth toward the contralateral ear. The maximum ITDs calculated from low-pass filtered (2 kHz cutoff frequency) DTFs were ~250 µs, whereas the maximum ITD measured with low-frequency tone pips was over 320 µs. A spherical head model underestimated ITD magnitude under normal conditions, but closely approximated the values obtained when the pinnae were removed. ILDs depended strongly on location and frequency; maximum ILDs were < 10 dB for frequencies < 4 kHz and were as large as 40 dB for frequencies > 10 kHz. Removal of the pinna reduced the depth and sharpness of spectral notches, altered the acoustical axis, and reduced the acoustical gain, ITDs, and ILDs; however, spectral shape features and acoustical gain were not completely eliminated, suggesting a substantial contribution of the head and torso in altering the sounds present at the tympanic membrane. PMID:25051197
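The spherical head model referred to above is commonly written as the Woodworth approximation, ITD = (a/c)(θ + sin θ) for source azimuths up to 90°. A small sketch with an assumed (not measured) head radius shows why such a model can underestimate the ITDs reported here:

```python
import numpy as np

def woodworth_itd(azimuth_deg, head_radius_m, c=343.0):
    """Spherical-head (Woodworth) ITD estimate: ITD = (a/c) * (theta + sin(theta)),
    valid for azimuths up to 90 degrees.  The radius used below is an assumed
    illustrative value, not a measured guinea pig dimension from the study."""
    theta = np.radians(azimuth_deg)
    return head_radius_m / c * (theta + np.sin(theta))

# Maximum ITD (source at 90 degrees) for an assumed 2-cm radius sphere:
print(woodworth_itd(90.0, 0.02) * 1e6, "microseconds")  # ~150 us, below the ~250-320 us measured
```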
McAlpine, D; Jiang, D; Palmer, A R
1996-08-01
Monaural and binaural response properties of single units in the inferior colliculus (IC) of the guinea pig were investigated. Neurones were classified according to the effect of monaural stimulation of either ear alone and the effect of binaural stimulation. The majority (309/334) of IC units were excited (E) by stimulation of the contralateral ear, of which 41% (127/309) were also excited by monaural ipsilateral stimulation (EE), and the remainder (182/309) were unresponsive to monaural ipsilateral stimulation (EO). For units with best frequencies (BF) up to 3 kHz, similar proportions of EE and EO units were observed. Above 3 kHz, however, significantly more EO than EE units were observed. Units were also classified as either facilitated (F), suppressed (S), or unaffected (O) by binaural stimulation. More EO than EE units were suppressed or unaffected by binaural stimulation, and more EE than EO units were facilitated. There were more EO/S units above 1.5 kHz than below. Binaural beats were used to examine the interaural delay sensitivity of low-BF (BF < 1.5 kHz) units. The distributions of preferred interaural phases and, by extension, interaural delays, resembled those seen in other species, and those obtained using static interaural delays in the IC of the guinea pig. Units with best phase (BP) angles closer to zero generally showed binaural facilitation, whilst those with larger BPs generally showed binaural suppression. The classification of units based upon binaural stimulation with BF tones was consistent with their interaural-delay sensitivity. Characteristic delays (CD) were examined for 96 low-BF units. A clear relationship between BF and CD was observed. CDs of units with very low BFs (< 200 Hz) were long and positive, becoming progressively shorter as BF increased until, for units with BFs between 400 and 800 Hz, the majority of CDs were negative. Above 800 Hz, both positive and negative CDs were observed. A relationship between CD and characteristic phase (CP) was also observed, with CPs increasing in value as CDs became more negative. These results demonstrate that binaural processing in the guinea pig at low frequencies is similar to that reported in all other species studied. However, the dependence of CD on BF would suggest that the delay line system that sets up the interaural-delay sensitivity in the lower brainstem varies across frequency as well as within each frequency band.
Sparreboom, Marloes; Beynon, Andy J; Snik, Ad F M; Mylanus, Emmanuel A M
2016-07-01
In many studies evaluating the effect of sequential bilateral cochlear implantation in congenitally deaf children, device use is not taken into account. In this study, however, device use was analyzed in relation to auditory brainstem maturation and speech recognition, which were measured in children with early-onset deafness, 5-6 years after bilateral cochlear implantation. We hypothesized that auditory brainstem maturation is mostly functionally driven by auditory stimulation and is therefore influenced by device use and not mainly by inter-implant delay. Twenty-one children participated and had inter-implant delays between 1.2 and 7.2 years. The electrically evoked auditory brainstem response was measured for both implants separately. The differences in interaural wave V latency and in speech recognition between the two implants were used in the analyses. Device use was measured with a Likert scale. Results showed that the less the second device was used, the larger the interaural difference in wave V latency, which in turn led to larger interaural differences in speech recognition. In children with early-onset deafness, after various periods of unilateral deprivation, full-time device use can lead to similar auditory brainstem responses and speech recognition between both ears. Therefore, device use should be considered a relevant factor contributing to outcomes after sequential bilateral cochlear implantation. These results suggest that, in children with early-onset deafness, the window between implantations within which symmetrical auditory pathway maturation can be obtained is longer than reported in the literature. Results, however, must be interpreted as preliminary findings, as actual device use with data logging was not yet available at the time of the study. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Spatial cue reliability drives frequency tuning in the barn Owl's midbrain
Cazettes, Fanny; Fischer, Brian J; Pena, Jose L
2014-01-01
The robust representation of the environment from unreliable sensory cues is vital for the efficient function of the brain. However, how the neural processing captures the most reliable cues is unknown. The interaural time difference (ITD) is the primary cue to localize sound in horizontal space. ITD is encoded in the firing rate of neurons that detect interaural phase difference (IPD). Due to the filtering effect of the head, IPD for a given location varies depending on the environmental context. We found that, in barn owls, at each location there is a frequency range where the head filtering yields the most reliable IPDs across contexts. Remarkably, the frequency tuning of space-specific neurons in the owl's midbrain varies with their preferred sound location, matching the range that carries the most reliable IPD. Thus, frequency tuning in the owl's space-specific neurons reflects a higher-order feature of the code that captures cue reliability. DOI: http://dx.doi.org/10.7554/eLife.04854.001 PMID:25531067
Gifford, René H.; Grantham, D. Wesley; Sheffield, Sterling W.; Davis, Timothy J.; Dwyer, Robert; Dorman, Michael F.
2014-01-01
The purpose of this study was to investigate horizontal plane localization and interaural time difference (ITD) thresholds for 14 adult cochlear implant recipients with hearing preservation in the implanted ear. Localization to broadband noise was assessed in an anechoic chamber with a 33-loudspeaker array extending from −90 to +90°. Three listening conditions were tested including bilateral hearing aids, bimodal (implant + contralateral hearing aid) and best aided (implant + bilateral hearing aids). ITD thresholds were assessed, under headphones, for low-frequency stimuli including a 250-Hz tone and bandpass noise (100–900 Hz). Localization, in overall rms error, was significantly poorer in the bimodal condition (mean: 60.2°) as compared to both bilateral hearing aids (mean: 46.1°) and the best-aided condition (mean: 43.4°). ITD thresholds were assessed for the same 14 adult implant recipients as well as 5 normal-hearing adults. ITD thresholds were highly variable across the implant recipients ranging from the range of normal to ITDs not present in real-world listening environments (range: 43 to over 1600 μs). ITD thresholds were significantly correlated with localization, the degree of interaural asymmetry in low-frequency hearing, and the degree of hearing preservation related benefit in the speech reception threshold (SRT). These data suggest that implant recipients with hearing preservation in the implanted ear have access to binaural cues and that the sensitivity to ITDs is significantly correlated with localization and degree of preserved hearing in the implanted ear. PMID:24607490
Perceptual consequences of disrupted auditory nerve activity.
Zeng, Fan-Gang; Kong, Ying-Yee; Michalewski, Henry J; Starr, Arnold
2005-06-01
Perceptual consequences of disrupted auditory nerve activity were systematically studied in 21 subjects who had been clinically diagnosed with auditory neuropathy (AN), a recently defined disorder characterized by normal outer hair cell function but disrupted auditory nerve function. Neurological and electrophysical evidence suggests that disrupted auditory nerve activity is due to desynchronized or reduced neural activity or both. Psychophysical measures showed that the disrupted neural activity has minimal effects on intensity-related perception, such as loudness discrimination, pitch discrimination at high frequencies, and sound localization using interaural level differences. In contrast, the disrupted neural activity significantly impairs timing related perception, such as pitch discrimination at low frequencies, temporal integration, gap detection, temporal modulation detection, backward and forward masking, signal detection in noise, binaural beats, and sound localization using interaural time differences. These perceptual consequences are the opposite of what is typically observed in cochlear-impaired subjects who have impaired intensity perception but relatively normal temporal processing after taking their impaired intensity perception into account. These differences in perceptual consequences between auditory neuropathy and cochlear damage suggest the use of different neural codes in auditory perception: a suboptimal spike count code for intensity processing, a synchronized spike code for temporal processing, and a duplex code for frequency processing. We also proposed two underlying physiological models based on desynchronized and reduced discharge in the auditory nerve to successfully account for the observed neurological and behavioral data. These methods and measures cannot differentiate between these two AN models, but future studies using electric stimulation of the auditory nerve via a cochlear implant might. These results not only show the unique contribution of neural synchrony to sensory perception but also provide guidance for translational research in terms of better diagnosis and management of human communication disorders.
[The significance of the interaural latency difference of VEMP].
Wu, Ziming; Zhang, Suzhen; Ji, Fei; Zhou, Na; Guo, Weiwei; Yang, Weiyan; Han, Dongyi
2005-05-01
To investigate the significance of the interaural latency (IAL) difference of VEMP and to improve the sensitivity of the test. Vestibular evoked myogenic potentials (VEMP) were tested in 20 healthy subjects, 13 patients with acoustic neuroma or cerebellopontine angle occupying lesions, and 1 patient with multiple sclerosis. IAL differences of the waves p13, n23, and p13-n23 (abbreviated as Δp13, Δn23, and Δp13-n23, respectively) were analysed to determine the normal range and the upper limit of the normative data. Four illustrative cases with abnormal IAL differences were given as examples. The upper limit of Δp13 was 1.13 ms, that of Δn23 was 1.38 ms, and that of Δp13-n23 was 1.54 ms. The p13-n23 latency did not differ significantly between the right and left sides (P > 0.05). Δp13, Δn23, and Δp13-n23 of VEMP, especially Δp13, can suggest abnormality in the neural pathway and may be applicable in practice.
Characteristics of stereo reproduction with parametric loudspeakers
NASA Astrophysics Data System (ADS)
Aoki, Shigeaki; Toba, Masayoshi; Tsujita, Norihisa
2012-05-01
A parametric loudspeaker utilizes the nonlinearity of the medium and is known as a super-directivity loudspeaker; it is one of the prominent applications of nonlinear ultrasonics. So far, its applications have been limited to monaural reproduction sound systems for public address in museums, stations, streets, etc. In this paper, we discuss the characteristics of stereo reproduction with two parametric loudspeakers by comparing them with those of two ordinary dynamic loudspeakers. In subjective tests, three typical listening positions were selected to investigate the possibility of correct sound localization over a wide listening area. The binaural information was ILD (interaural level difference) or ITD (interaural time delay). The parametric loudspeaker was an equilateral hexagon with inner and outer diameters of 99 and 112 mm, respectively. Signals were 500 Hz, 1 kHz, 2 kHz, and 4 kHz pure tones and pink noise. Three young males listened to the test signals 10 times in each listening condition. The results showed that listeners at the three typical listening positions perceived correct sound localization of all signals with the parametric loudspeakers. Performance was similar to that with the ordinary dynamic loudspeakers, except for sinusoidal signals carrying ITD. It was concluded that the parametric loudspeaker can eliminate the conflict between the binaural cues ILD and ITD that occurs in stereo reproduction with ordinary dynamic loudspeakers, because the super directivity of the parametric loudspeaker suppresses the crosstalk components.
Binaural Release from Masking for a Speech Sound in Infants, Preschoolers, and Adults.
ERIC Educational Resources Information Center
Nozza, Robert J.
1988-01-01
Binaural masked thresholds for a speech sound (/ba/) were estimated under two interaural phase conditions in three age groups (infants, preschoolers, adults). Differences as a function of both age and condition and effects of reducing intensity for adults were significant in indicating possible developmental binaural hearing changes, especially…
Brock, Jon; Bzishvili, Samantha; Reid, Melanie; Hautus, Michael; Johnson, Blake W
2013-11-01
Atypical auditory perception is a widely recognised but poorly understood feature of autism. In the current study, we used magnetoencephalography to measure the brain responses of 10 autistic children as they listened passively to dichotic pitch stimuli, in which an illusory tone is generated by sub-millisecond inter-aural timing differences in white noise. Relative to control stimuli that contain no inter-aural timing differences, dichotic pitch stimuli typically elicit an object related negativity (ORN) response, associated with the perceptual segregation of the tone and the carrier noise into distinct auditory objects. Autistic children failed to demonstrate an ORN, suggesting a failure of segregation; however, comparison with the ORNs of age-matched typically developing controls narrowly failed to attain significance. More striking, the autistic children demonstrated a significant differential response to the pitch stimulus, peaking at around 50 ms. This was not present in the control group, nor has it been found in other groups tested using similar stimuli. This response may be a neural signature of atypical processing of pitch in at least some autistic individuals.
Kuwada, S; Yin, T C; Wickesberg, R E
1979-11-02
The interaural phase sensitivity of neurons was studied through the use of binaural beat stimuli. The response of most cells was phase-locked to the beat frequency, which provides a possible neural correlate to the human sensation of binaural beats. In addition, this stimulus allowed the direction and rate of interaural phase change to be varied. Some neurons in our sample responded selectively to manipulations of these two variables, which suggests a sensitivity to direction or speed of movement.
Keidser, Gitte; Rohrseitz, Kristin; Dillon, Harvey; Hamacher, Volkmar; Carter, Lyndal; Rass, Uwe; Convery, Elizabeth
2006-10-01
This study examined the effect that signal processing strategies used in modern hearing aids, such as multichannel wide dynamic range compression (WDRC), noise reduction, and directional microphones, have on interaural difference cues and horizontal localization performance relative to linear, time-invariant amplification. Twelve participants were bilaterally fitted with BTE devices. Horizontal localization testing using a 360-degree loudspeaker array and broadband pulsed pink noise was performed two weeks and two months post-fitting. The effect of noise reduction was measured with a constant noise present at 80 degrees azimuth. Data were analysed independently in the left/right and front/back dimensions and showed that, of the three signal processing strategies, directional microphones had the most significant effect on horizontal localization performance, including changes over time. Specifically, a cardioid microphone could decrease front/back errors over time, whereas left/right errors increased when different microphones were fitted to the left and right ears. Front/back confusions were generally prominent. Objective measurements of interaural differences on KEMAR explained significant shifts in left/right errors. In conclusion, there is scope for improving the sense of localization in hearing aid users.
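Objective interaural-difference measurements of the kind mentioned above can be sketched, for generic binaural recordings rather than the study's KEMAR data, as a cross-correlation-based broadband ITD estimate plus an RMS-based ILD in dB:

```python
import numpy as np

def broadband_itd_ild(left, right, fs):
    """Estimate broadband interaural cues from a pair of equal-length ear
    signals: ITD from the lag of the peak of the cross-correlation, ILD from
    the RMS level ratio in dB.  A generic sketch of such objective
    measurements, not the specific analysis used in the study."""
    xcorr = np.correlate(left - left.mean(), right - right.mean(), mode="full")
    lag = np.argmax(xcorr) - (len(left) - 1)   # positive: left delayed re right
    itd = lag / fs
    ild = 20.0 * np.log10(np.sqrt(np.mean(left**2)) / np.sqrt(np.mean(right**2)))
    return itd, ild

# Toy check: left delayed by 10 samples and about 3 dB more intense than right
fs = 48000
sig = np.random.default_rng(7).standard_normal(fs)
left, right = np.roll(sig, 10), 0.7 * sig
print(broadband_itd_ild(left, right, fs))      # roughly (208e-6 s, +3.1 dB)
```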
Cortical Measures of Binaural Processing Predict Spatial Release from Masking Performance
Papesh, Melissa A.; Folmer, Robert L.; Gallun, Frederick J.
2017-01-01
Binaural sensitivity is an important contributor to the ability to understand speech in adverse acoustical environments such as restaurants and other social gatherings. The ability to accurately report on binaural percepts is not commonly measured, however, as extensive training is required before reliable measures can be obtained. Here, we investigated the use of auditory evoked potentials (AEPs) as a rapid physiological indicator of detection of interaural phase differences (IPDs) by assessing cortical responses to 180° IPDs embedded in amplitude-modulated carrier tones. We predicted that decrements in encoding of IPDs would be evident in middle age, with further declines found with advancing age and hearing loss. Thus, participants in experiment #1 were young to middle-aged adults with relatively good hearing thresholds while participants in experiment #2 were older individuals with typical age-related hearing loss. Results revealed that while many of the participants in experiment #1 could encode IPDs in stimuli up to 1,000 Hz, few of the participants in experiment #2 had discernable responses to stimuli above 750 Hz. These results are consistent with previous studies that have found that aging and hearing loss impose frequency limits on the ability to encode interaural phase information present in the fine structure of auditory stimuli. We further hypothesized that AEP measures of binaural sensitivity would be predictive of participants' ability to benefit from spatial separation between sound sources, a phenomenon known as spatial release from masking (SRM) which depends upon binaural cues. Results indicate that not only were objective IPD measures well correlated with and predictive of behavioral SRM measures in both experiments, but that they provided much stronger predictive value than age or hearing loss. Overall, the present work shows that objective measures of the encoding of interaural phase information can be readily obtained using commonly available AEP equipment, allowing accurate determination of the degree to which binaural sensitivity has been reduced in individual listeners due to aging and/or hearing loss. In fact, objective AEP measures of interaural phase encoding are actually better predictors of SRM in speech-in-speech conditions than are age, hearing loss, or the combination of age and hearing loss. PMID:28377706
Spatial orientation of optokinetic nystagmus and ocular pursuit during orbital space flight
NASA Technical Reports Server (NTRS)
Moore, Steven T.; Cohen, Bernard; Raphan, Theodore; Berthoz, Alain; Clement, Gilles
2005-01-01
On Earth, eye velocity of horizontal optokinetic nystagmus (OKN) orients to gravito-inertial acceleration (GIA), the sum of linear accelerations acting on the head and body. We determined whether adaptation to micro-gravity altered this orientation and whether ocular pursuit exhibited similar properties. Eye movements of four astronauts were recorded with three-dimensional video-oculography. Optokinetic stimuli were stripes moving horizontally, vertically, and obliquely at 30 degrees/s. Ocular pursuit was produced by a spot moving horizontally or vertically at 20 degrees/s. Subjects were either stationary or were centrifuged during OKN with 1 or 0.5 g of interaural or dorsoventral centripetal linear acceleration. Average eye position during OKN (the beating field) moved into the quick-phase direction by 10 degrees during lateral and upward field movement in all conditions. The beating field did not shift up during downward OKN on Earth, but there was a strong upward movement of the beating field (9 degrees) during downward OKN in the absence of gravity; this likely represents an adaptation to the lack of a vertical 1-g bias in-flight. The horizontal OKN velocity axis tilted 9 degrees in the roll plane toward the GIA during interaural centrifugation, both on Earth and in space. During oblique OKN, the velocity vector tilted towards the GIA in the roll plane when there was a disparity between the direction of stripe motion and the GIA, but not when the two were aligned. In contrast, dorsoventral acceleration tilted the horizontal OKN velocity vector 6 degrees in pitch away from the GIA. Roll tilts of the horizontal OKN velocity vector toward the GIA during interaural centrifugation are consistent with the orientation properties of velocity storage, but pitch tilts away from the GIA when centrifuged while supine are not. We speculate that visual suppression during OKN may have caused the velocity vector to tilt away from the GIA during dorsoventral centrifugation. Vertical OKN and ocular pursuit did not exhibit orientation toward the GIA in any condition. Static full-body roll tilts and centrifugation generating an equivalent interaural acceleration produced the same tilts in the horizontal OKN velocity before and after flight. Thus, the magnitude of tilt in OKN velocity was dependent on the magnitude of interaural linear acceleration, rather than the tilt of the GIA with regard to the head. These results favor a 'filter' model of spatial orientation in which orienting eye movements are proportional to the magnitude of low frequency interaural linear acceleration, rather than models that postulate an internal representation of gravity as the basis for spatial orientation.
Noble, William; Gatehouse, Stuart
2004-02-01
A series of comparative analyses is presented between a group with relatively similar degrees of hearing loss in each ear (n = 103: symmetry group) and one with dissimilar losses (n = 50: asymmetry group). Asymmetry was defined as an interaural difference of more than 10 dB in hearing levels averaged over 0.5, 1, 2, and 4 kHz. Comparison was focused on self-rated disabilities as reflected in responses on the Speech, Spatial and Qualities of Hearing Scale (SSQ). The connections between SSQ ratings and a global self-rating of handicap were also observed. The interrelationships among SSQ items for the two groups were analysed to determine how the SSQ behaves when applied to groups in whom binaural hearing is more (asymmetry) versus less compromised. As expected, spatial hearing is severely disabled in the group with asymmetry; this group is generally more disabled than the symmetry group across all SSQ domains. In the linkages with handicap, spatial hearing, especially in dynamic settings, was strongly represented in the asymmetry group, while all aspects of hearing were moderately to strongly represented in the symmetry group. Item intercorrelations showed that speech hearing is a relatively autonomous function for the symmetry group, whereas it is enmeshed with segregation, clarity and naturalness factors for the asymmetry group. Spatial functions were more independent of others in the asymmetry group. The SSQ shows promise in the assessment of outcomes in the case of bilateral versus unilateral amplification and/or implantation.
Akeroyd, Michael A
2004-08-01
The equalization stage in the equalization-cancellation model of binaural unmasking compensates for the interaural time delay (ITD) of a masking noise by introducing an opposite, internal delay [N. I. Durlach, in Foundations of Modern Auditory Theory, Vol. II., edited by J. V. Tobias (Academic, New York, 1972)]. Culling and Summerfield [J. Acoust. Soc. Am. 98, 785-797 (1995)] developed a multi-channel version of this model in which equalization was "free" to use the optimal delay in each channel. Two experiments were conducted to test if equalization was indeed free or if it was "restricted" to the same delay in all channels. One experiment measured binaural detection thresholds, using an adaptive procedure, for 1-, 5-, or 17-component tones against a broadband masking noise, in three binaural configurations (N0S180, N180S0, and N90S270). The thresholds for the 1-component stimuli were used to normalize the levels of each of the 5- and 17-component stimuli so that they were equally detectable. If equalization was restricted, then, for the 5- and 17-component stimuli, the N90S270 and N180S0 configurations would yield a greater threshold than the N0S180 configurations. No such difference was found. A subsequent experiment measured binaural detection thresholds, via psychometric functions, for a 2-component complex tone in the same three binaural configurations. Again, no differential effect of configuration was observed. An analytic model of the detection of a complex tone showed that the results were more consistent with free equalization than restricted equalization, although the size of the differences was found to depend on the shape of the psychometric function for detection.
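The "free" versus "restricted" equalization hypotheses contrasted above can be made concrete with a toy equalization-cancellation computation over a few auditory channels. This is an illustration of the two hypotheses under simplified assumptions (integer-sample internal delays, a simple energy residual), not Durlach's or Culling and Summerfield's actual model.

```python
import numpy as np

def ec_residual(left, right, delay_samples):
    """Energy remaining after equalising (delaying the right-ear channel) and
    cancelling against the left-ear channel."""
    return np.mean((left - np.roll(right, delay_samples)) ** 2)

def best_delays(channels, max_delay):
    """For each (left, right) pair of auditory-channel waveforms, return the
    internal delay minimising the EC residual.  'Free' equalisation keeps these
    per-channel delays; 'restricted' equalisation forces the single delay that
    minimises the residual summed across channels."""
    delays = np.arange(-max_delay, max_delay + 1)
    free = [int(delays[np.argmin([ec_residual(l, r, d) for d in delays])])
            for l, r in channels]
    restricted = int(delays[np.argmin(
        [sum(ec_residual(l, r, d) for l, r in channels) for d in delays])])
    return free, restricted

# Toy example: two channels whose right-ear signals lag the left by 3 samples
rng = np.random.default_rng(6)
chans = []
for _ in range(2):
    x = rng.standard_normal(1000)
    chans.append((x, np.roll(x, 3)))
print(best_delays(chans, max_delay=8))          # ([-3, -3], -3)
```

When the masker ITD differs across channels, the free scheme can null each channel separately whereas the restricted scheme cannot, which is the behavioural contrast the experiments above were designed to detect.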
Relative size of auditory pathways in symmetrically and asymmetrically eared owls.
Gutiérrez-Ibáñez, Cristián; Iwaniuk, Andrew N; Wylie, Douglas R
2011-01-01
Owls are highly efficient predators with a specialized auditory system designed to aid in the localization of prey. One of the most unique anatomical features of the owl auditory system is the evolution of vertically asymmetrical ears in some species, which improves their ability to localize the elevational component of a sound stimulus. In the asymmetrically eared barn owl, interaural time differences (ITD) are used to localize sounds in azimuth, whereas interaural level differences (ILD) are used to localize sounds in elevation. These two features are processed independently in two separate neural pathways that converge in the external nucleus of the inferior colliculus to form an auditory map of space. Here, we present a comparison of the relative volume of 11 auditory nuclei in both the ITD and the ILD pathways of 8 species of symmetrically and asymmetrically eared owls in order to investigate evolutionary changes in the auditory pathways in relation to ear asymmetry. Overall, our results indicate that asymmetrically eared owls have much larger auditory nuclei than owls with symmetrical ears. In asymmetrically eared owls we found that both the ITD and ILD pathways are equally enlarged, and other auditory nuclei, not directly involved in binaural comparisons, are also enlarged. We suggest that the hypertrophy of auditory nuclei in asymmetrically eared owls likely reflects both an improved ability to precisely locate sounds in space and an expansion of the hearing range. Additionally, our results suggest that the hypertrophy of nuclei that compute space may have preceded that of the expansion of the hearing range and evolutionary changes in the size of the auditory system occurred independently of phylogeny. Copyright © 2011 S. Karger AG, Basel.
Using evoked potentials to match interaural electrode pairs with bilateral cochlear implants.
Smith, Zachary M; Delgutte, Bertrand
2007-03-01
Bilateral cochlear implantation seeks to restore the advantages of binaural hearing to the profoundly deaf by providing binaural cues normally important for accurate sound localization and speech reception in noise. Psychophysical observations suggest that a key issue for the implementation of a successful binaural prosthesis is the ability to match the cochlear positions of stimulation channels in each ear. We used a cat model of bilateral cochlear implants with eight-electrode arrays implanted in each cochlea to develop and test a noninvasive method based on evoked potentials for matching interaural electrodes. The arrays allowed the cochlear location of stimulation to be independently varied in each ear. The binaural interaction component (BIC) of the electrically evoked auditory brainstem response (EABR) was used as an assay of binaural processing. BIC amplitude peaked for interaural electrode pairs at the same relative cochlear position and dropped with increasing cochlear separation in either direction. To test the hypothesis that BIC amplitude peaks when electrodes from the two sides activate maximally overlapping neural populations, we measured multiunit neural activity along the tonotopic gradient of the inferior colliculus (IC) with 16-channel recording probes and determined the spatial pattern of IC activation for each stimulating electrode. We found that the interaural electrode pairings that produced the best aligned IC activation patterns were also those that yielded maximum BIC amplitude. These results suggest that EABR measurements may provide a method for assigning frequency-channel mappings in bilateral implant recipients, such as pediatric patients, for which psychophysical measures of pitch ranking or binaural fusion are unavailable.
2017-01-01
Binaural cues occurring in natural environments are frequently time varying, either from the motion of a sound source or through interactions between the cues produced by multiple sources. Yet, a broad understanding of how the auditory system processes dynamic binaural cues is still lacking. In the current study, we directly compared neural responses in the inferior colliculus (IC) of unanesthetized rabbits to broadband noise with time-varying interaural time differences (ITD) with responses to noise with sinusoidal amplitude modulation (SAM) over a wide range of modulation frequencies. On the basis of prior research, we hypothesized that the IC, one of the first stages to exhibit tuning of firing rate to modulation frequency, might use a common mechanism to encode time-varying information in general. Instead, we found weaker temporal coding for dynamic ITD compared with amplitude modulation and stronger effects of adaptation for amplitude modulation. The differences in temporal coding of dynamic ITD compared with SAM at the single-neuron level could be a neural correlate of “binaural sluggishness,” the inability to perceive fluctuations in time-varying binaural cues at high modulation frequencies, for which a physiological explanation has so far remained elusive. At ITD-variation frequencies of 64 Hz and above, where a temporal code was less effective, noise with a dynamic ITD could still be distinguished from noise with a constant ITD through differences in average firing rate in many neurons, suggesting a frequency-dependent tradeoff between rate and temporal coding of time-varying binaural information. NEW & NOTEWORTHY Humans use time-varying binaural cues to parse auditory scenes comprising multiple sound sources and reverberation. However, the neural mechanisms for doing so are poorly understood. Our results demonstrate a potential neural correlate for the reduced detectability of fluctuations in time-varying binaural information at high speeds, as occurs in reverberation. The results also suggest that the neural mechanisms for processing time-varying binaural and monaural cues are largely distinct. PMID:28381487
Garcia-Pino, Elisabet; Gessele, Nikodemus; Koch, Ursula
2017-08-02
Hypersensitivity to sounds is one of the prevalent symptoms in individuals with Fragile X syndrome (FXS). It manifests behaviorally early during development and is often used as a landmark for treatment efficacy. However, the physiological mechanisms and circuit-level alterations underlying this aberrant behavior remain poorly understood. Using the mouse model of FXS ( Fmr1 KO ), we demonstrate that functional maturation of auditory brainstem synapses is impaired in FXS. Fmr1 KO mice showed a greatly enhanced excitatory synaptic input strength in neurons of the lateral superior olive (LSO), a prominent auditory brainstem nucleus, which integrates ipsilateral excitation and contralateral inhibition to compute interaural level differences. Conversely, the glycinergic, inhibitory input properties remained unaffected. The enhanced excitation was the result of an increased number of cochlear nucleus fibers converging onto one LSO neuron, without changing individual synapse properties. Concomitantly, immunolabeling of excitatory ending markers revealed an increase in the immunolabeled area, supporting abnormally elevated excitatory input numbers. Intrinsic firing properties were only slightly enhanced. In line with the disturbed development of LSO circuitry, auditory processing was also affected in adult Fmr1 KO mice as shown with single-unit recordings of LSO neurons. These processing deficits manifested as an increase in firing rate, a broadening of the frequency response area, and a shift in the interaural level difference function of LSO neurons. Our results suggest that this aberrant synaptic development of auditory brainstem circuits might be a major underlying cause of the auditory processing deficits in FXS. SIGNIFICANCE STATEMENT Fragile X Syndrome (FXS) is the most common inheritable form of intellectual impairment, including autism. A core symptom of FXS is extreme sensitivity to loud sounds. This is one reason why individuals with FXS tend to avoid social interactions, contributing to their isolation. Here, a mouse model of FXS was used to investigate the auditory brainstem where basic sound information is first processed. Loss of the Fragile X mental retardation protein leads to excessive excitatory compared with inhibitory inputs in neurons extracting information about sound levels. Functionally, this elevated excitation results in increased firing rates, and abnormal coding of frequency and binaural sound localization cues. Imbalanced early-stage sound level processing could partially explain the auditory processing deficits in FXS. Copyright © 2017 the authors 0270-6474/17/377403-17$15.00/0.
Lateralization of the Huggins pitch
NASA Astrophysics Data System (ADS)
Zhang, Peter Xinya; Hartmann, William M.
2004-05-01
The lateralization of the Huggins pitch (HP) was measured using a direct estimation method. The background noise was initially N0 or Nπ, and then the laterality of the entire stimulus was varied with a frequency-independent interaural delay, ranging from -1 to +1 ms. Two versions of the HP boundary region were used, stepped phase and linear phase. When presented in isolation, without the broadband background, the stepped boundary can be lateralized on its own but the linear boundary cannot. Nevertheless, the lateralizations of both forms of HP were found to be almost identical functions both of the interaural delay and of the boundary frequency over a two-octave range. In a third experiment, the same listeners lateralized sine tones in quiet as a function of interaural delay. Good agreement was found between lateralizations of the HP and of the corresponding sine tones. The lateralization judgments depended on the boundary frequency according to the expected hyperbolic law except when the frequency-independent delay was zero. For the latter case, the dependence on boundary frequency was much slower than hyperbolic. [Work supported by the NIDCD grant DC 00181.]
The spatial unmasking of speech: evidence for within-channel processing of interaural time delay.
Edmonds, Barrie A; Culling, John F
2005-05-01
Across-frequency processing by common interaural time delay (ITD) in spatial unmasking was investigated by measuring speech reception thresholds (SRTs) for high- and low-frequency bands of target speech presented against concurrent speech or a noise masker. Experiment 1 indicated that presenting one of these target bands with an ITD of +500 μs and the other with zero ITD (like the masker) provided some release from masking, but full binaural advantage was only measured when both target bands were given an ITD of +500 μs. Experiment 2 showed that full binaural advantage could also be achieved when the high- and low-frequency bands were presented with ITDs of equal but opposite magnitude (±500 μs). In experiment 3, the masker was also split into high- and low-frequency bands with ITDs of equal but opposite magnitude (±500 μs). The ITD of the low-frequency target band matched that of the high-frequency masking band and vice versa. SRTs indicated that, as long as the target and masker differed in ITD within each frequency band, full binaural advantage could be achieved. These results suggest that the mechanism underlying spatial unmasking exploits differences in ITD independently within each frequency channel.
Churchill, Tyler H; Kan, Alan; Goupell, Matthew J; Litovsky, Ruth Y
2014-09-01
Most contemporary cochlear implant (CI) processing strategies discard acoustic temporal fine structure (TFS) information, and this may contribute to the observed deficits in bilateral CI listeners' ability to localize sounds when compared to normal hearing listeners. Additionally, for best speech envelope representation, most contemporary speech processing strategies use high-rate carriers (≥900 Hz) that exceed the limit for interaural pulse timing to provide useful binaural information. Many bilateral CI listeners are sensitive to interaural time differences (ITDs) in low-rate (<300 Hz) constant-amplitude pulse trains. This study explored the trade-off between superior speech temporal envelope representation with high-rate carriers and binaural pulse timing sensitivity with low-rate carriers. The effects of carrier pulse rate and pulse timing on ITD discrimination, ITD lateralization, and speech recognition in quiet were examined in eight bilateral CI listeners. Stimuli consisted of speech tokens processed at different electrical stimulation rates, and pulse timings that either preserved or did not preserve acoustic TFS cues. Results showed that CI listeners were able to use low-rate pulse timing cues derived from acoustic TFS when presented redundantly on multiple electrodes for ITD discrimination and lateralization of speech stimuli.
NASA Astrophysics Data System (ADS)
Suzuki, Yôiti; Watanabe, Kanji; Iwaya, Yukio; Gyoba, Jiro; Takane, Shouichi
2005-04-01
Because the transfer functions governing subjective sound localization (HRTFs) show strong individuality, sound localization systems based on synthesis of HRTFs require suitable HRTFs for individual listeners. However, it is impractical to obtain HRTFs for all listeners based on measurements. Improving sound localization by adjusting non-individualized HRTFs to a specific listener based on that listener's anthropometry might be a practical method. This study first developed a new method to estimate interaural time differences (ITDs) using HRTFs. Then correlations between ITDs and anthropometric parameters were analyzed using the canonical correlation method. Results indicated that parameters relating to head size, and shoulder and ear positions are significant. Consequently, it was attempted to express ITDs based on listener's anthropometric data. In this process, the change of ITDs as a function of azimuth angle was parameterized as a sum of sine functions. Then the parameters were analyzed using multiple regression analysis, in which the anthropometric parameters were used as explanatory variables. The predicted or individualized ITDs were installed in the nonindividualized HRTFs to evaluate sound localization performance. Results showed that individualization of ITDs improved horizontal sound localization.
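The two-step individualization procedure described above lends itself to a brief illustration. The sketch below (Python, with synthetic data) fits each listener's ITD-versus-azimuth curve with a sum of sine functions and then regresses the resulting coefficients on anthropometric parameters; the two-term sine series, the toy Woodworth-style ITDs, the anthropometric variables, and the use of ordinary least squares are illustrative assumptions, not the procedure or data of the study.

```python
# Sketch of the two-step ITD individualization idea described above.
# Assumptions (not from the paper): a 2-term sine series, synthetic data,
# and ordinary least squares for both fitting steps.
import numpy as np
from sklearn.linear_model import LinearRegression

az = np.radians(np.arange(-90, 91, 5))              # azimuth grid (rad)

def sine_series(az, coeffs):
    """ITD(azimuth) modeled as a sum of sine harmonics."""
    return sum(a * np.sin((k + 1) * az) for k, a in enumerate(coeffs))

# --- Step 1: per-listener fit of ITD(azimuth) with a sum of sines --------
rng = np.random.default_rng(0)
n_listeners = 20
head_width = rng.normal(0.15, 0.01, n_listeners)    # hypothetical anthropometry (m)
shoulder = rng.normal(0.40, 0.03, n_listeners)
anthro = np.column_stack([head_width, shoulder])

coeffs = []
for w in head_width:
    itd = (w / 343.0) * (az + np.sin(az))            # Woodworth-like toy ITDs (s)
    basis = np.column_stack([np.sin(az), np.sin(2 * az)])
    c, *_ = np.linalg.lstsq(basis, itd, rcond=None)
    coeffs.append(c)
coeffs = np.array(coeffs)

# --- Step 2: regress sine coefficients on anthropometric parameters ------
reg = LinearRegression().fit(anthro, coeffs)

# Predict an unseen listener's ITD curve from anthropometry alone
new_anthro = np.array([[0.16, 0.42]])
pred_itd = sine_series(az, reg.predict(new_anthro)[0])
print(pred_itd[:3])
```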
The dynamic contributions of the otolith organs to human ocular torsion
NASA Technical Reports Server (NTRS)
Merfeld, D. M.; Teiwes, W.; Clarke, A. H.; Scherer, H.; Young, L. R.
1996-01-01
We measured human ocular torsion (OT) monocularly (using video) and binocularly (using search coils) while sinusoidally accelerating (0.7 g) five human subjects along an earth-horizontal axis at five frequencies (0.35, 0.4, 0.5, 0.75, and 1.0 Hz). The compensatory nature of OT was investigated by changing the relative orientation of the dynamic (linear acceleration) and static (gravitational) cues. Four subject orientations were investigated: (1) Y-upright-acceleration along the interaural (y) axis while upright; (2) Y-supine-acceleration along the y-axis while supine; (3) Z-RED-acceleration along the dorsoventral (z) axis with right ear down; (4) Z-supine-acceleration along the z-axis while supine. Linear acceleration in the Y-upright, Y-supine and Z-RED orientations elicited conjugate OT. The smaller response in the Z-supine orientation appeared disconjugate. The amplitude of the response decreased and the phase lag increased with increasing frequency for each orientation. This frequency dependence does not match the frequency response of the regular or irregular afferent otolith neurons; therefore the response dynamics cannot be explained by simple peripheral mechanisms. The Y-upright responses were larger than the Y-supine responses (P < 0.05). This difference indicates that OT must be more complicated than a simple low-pass filtered response to interaural shear force, since the dynamic shear force along the interaural axis was identical in these two orientations. The Y-supine responses were, in turn, larger than the Z-RED responses (P < 0.01). Interestingly, the vector sum of the Y-supine responses plus Z-RED responses was not significantly different (P = 0.99) from the Y-upright responses. This suggests that, in this frequency range, the conjugate OT response during Y-upright stimulation might be composed of two components: (1) a response to shear force along the y-axis (as in Y-supine stimulation), and (2) a response to roll tilt of gravitoinertial force (as in Z-RED stimulation).
Modeling the utility of binaural cues for underwater sound localization.
Schneider, Jennifer N; Lloyd, David R; Banks, Patchouly N; Mercado, Eduardo
2014-06-01
The binaural cues used by terrestrial animals for sound localization in azimuth may not always suffice for accurate sound localization underwater. The purpose of this research was to examine the theoretical limits of interaural timing and level differences available underwater using computational and physical models. A paired-hydrophone system was used to record sounds transmitted underwater and recordings were analyzed using neural networks calibrated to reflect the auditory capabilities of terrestrial mammals. Estimates of source direction based on temporal differences were most accurate for frequencies between 0.5 and 1.75 kHz, with greater resolution toward the midline (2°), and lower resolution toward the periphery (9°). Level cues also changed systematically with source azimuth, even at lower frequencies than expected from theoretical calculations, suggesting that binaural mechanical coupling (e.g., through bone conduction) might, in principle, facilitate underwater sound localization. Overall, the relatively limited ability of the model to estimate source position using temporal and level difference cues underwater suggests that animals such as whales may use additional cues to accurately localize conspecifics and predators at long distances. Copyright © 2014 Elsevier B.V. All rights reserved.
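As a point of reference for the temporal-difference analysis described above, the following sketch shows the generic cross-correlation estimate of the time difference between two hydrophone channels and its conversion to a far-field bearing. It is ordinary signal processing, not the neural-network analysis used in the study; the sample rate, hydrophone spacing, and underwater sound speed are assumed values.

```python
# Minimal cross-correlation estimate of the time difference between two
# hydrophone channels, plus a far-field bearing estimate. Generic signal
# processing only; sample rate, spacing and sound speed are assumptions.
import numpy as np

fs = 96_000                       # sample rate (Hz), assumed
c_water = 1500.0                  # sound speed in water (m/s), assumed
spacing = 0.2                     # hydrophone spacing (m), hypothetical

# Synthetic example: the same noise burst arrives 5 samples later on channel 2
rng = np.random.default_rng(1)
sig = rng.standard_normal(2048)
left = sig
right = np.roll(sig, 5) + 0.05 * rng.standard_normal(2048)

xcorr = np.correlate(right, left, mode="full")
lag = np.argmax(xcorr) - (len(left) - 1)      # lag in samples
itd = lag / fs                                # seconds
# Far-field direction estimate from the time difference
angle = np.degrees(np.arcsin(np.clip(itd * c_water / spacing, -1, 1)))
print(f"time difference = {itd*1e6:.1f} us, azimuth ~ {angle:.1f} deg")
```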
Adaptation to stimulus statistics in the perception and neural representation of auditory space.
Dahmen, Johannes C; Keating, Peter; Nodal, Fernando R; Schulz, Andreas L; King, Andrew J
2010-06-24
Sensory systems are known to adapt their coding strategies to the statistics of their environment, but little is still known about the perceptual implications of such adjustments. We investigated how auditory spatial processing adapts to stimulus statistics by presenting human listeners and anesthetized ferrets with noise sequences in which interaural level differences (ILD) rapidly fluctuated according to a Gaussian distribution. The mean of the distribution biased the perceived laterality of a subsequent stimulus, whereas the distribution's variance changed the listeners' spatial sensitivity. The responses of neurons in the inferior colliculus changed in line with these perceptual phenomena. Their ILD preference adjusted to match the stimulus distribution mean, resulting in large shifts in rate-ILD functions, while their gain adapted to the stimulus variance, producing pronounced changes in neural sensitivity. Our findings suggest that processing of auditory space is geared toward emphasizing relative spatial differences rather than the accurate representation of absolute position.
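A stimulus of the kind described, in which the ILD fluctuates according to a Gaussian distribution, can be sketched as follows. The segment duration, sample rate, and distribution parameters are illustrative assumptions rather than the values used in the experiments.

```python
# Sketch of an "ILD statistics" stimulus: a sequence of short noise bursts
# whose interaural level differences are drawn from a Gaussian distribution.
# Segment duration, sample rate and distribution parameters are illustrative
# assumptions, not the values used in the study.
import numpy as np

fs = 44_100
seg_dur = 0.005                      # 5-ms segments, assumed
mean_ild, sd_ild = 10.0, 5.0         # dB, hypothetical distribution
n_seg = 400

rng = np.random.default_rng(0)
n = int(seg_dur * fs)
left, right = [], []
for ild in rng.normal(mean_ild, sd_ild, n_seg):
    burst = rng.standard_normal(n)
    # Split the ILD symmetrically: +ild/2 dB on the left, -ild/2 dB on the right
    left.append(burst * 10 ** (+ild / 40))
    right.append(burst * 10 ** (-ild / 40))
stereo = np.column_stack([np.concatenate(left), np.concatenate(right)])
print(stereo.shape)   # (samples, 2) ready for playback or further processing
```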
NASA Astrophysics Data System (ADS)
Bernstein, Leslie R.; Trahiotis, Constantine
2003-06-01
An acoustic pointing task was used to determine whether interaural temporal disparities (ITDs) conveyed by high-frequency ``transposed'' stimuli would produce larger extents of laterality than ITDs conveyed by bands of high-frequency Gaussian noise. The envelopes of transposed stimuli are designed to provide high-frequency channels with information similar to that conveyed by the waveforms of low-frequency stimuli. Lateralization was measured for low-frequency Gaussian noises, the same noises transposed to 4 kHz, and high-frequency Gaussian bands of noise centered at 4 kHz. Extents of laterality obtained with the transposed stimuli were greater than those obtained with bands of Gaussian noise centered at 4 kHz and, in some cases, were equivalent to those obtained with low-frequency stimuli. In a second experiment, the general effects on lateral position produced by imposed combinations of bandwidth, ITD, and interaural phase disparities (IPDs) on low-frequency stimuli remained when those stimuli were transposed to 4 kHz. Overall, the data were fairly well accounted for by a model that computes the cross-correlation subsequent to known stages of peripheral auditory processing augmented by low-pass filtering of the envelopes within the high-frequency channels of each ear.
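The transposition idea summarized above, providing high-frequency channels with envelope information resembling a low-frequency waveform, follows a standard recipe: half-wave rectify the low-frequency waveform, low-pass filter it, and use the result to modulate a high-frequency carrier. The sketch below implements that recipe; the cutoff frequency, durations, and levels are assumptions, not the exact stimulus parameters of the experiments.

```python
# A sketch of the standard transposed-stimulus recipe: half-wave rectify a
# low-frequency waveform, low-pass filter it, and use the result to modulate
# a high-frequency carrier. Cutoff and durations are illustrative only.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 48_000
t = np.arange(int(0.3 * fs)) / fs

f_low = 128.0                          # low-frequency "message" (Hz)
f_carrier = 4000.0                     # high-frequency carrier (Hz)

low = np.sin(2 * np.pi * f_low * t)
rectified = np.maximum(low, 0.0)       # half-wave rectification

# Low-pass the rectified waveform to keep the envelope band-limited
b, a = butter(4, 2000.0 / (fs / 2), btype="low")   # 2-kHz cutoff, assumed
envelope = filtfilt(b, a, rectified)

transposed = envelope * np.sin(2 * np.pi * f_carrier * t)
# An envelope ITD can then be imposed by delaying `envelope` in one ear only.
```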
Monaghan, Jessica J. M.; Seeber, Bernhard U.
2017-01-01
The ability of normal-hearing (NH) listeners to exploit interaural time difference (ITD) cues conveyed in the modulated envelopes of high-frequency sounds is poor compared to ITD cues transmitted in the temporal fine structure at low frequencies. Sensitivity to envelope ITDs is further degraded when envelopes become less steep, when modulation depth is reduced, and when envelopes become less similar between the ears, common factors when listening in reverberant environments. The vulnerability of envelope ITDs is particularly problematic for cochlear implant (CI) users, as they rely on information conveyed by slowly varying amplitude envelopes. Here, an approach to improve access to envelope ITDs for CIs is described in which, rather than attempting to reduce reverberation, the perceptual saliency of cues relating to the source is increased by selectively sharpening peaks in the amplitude envelope judged to contain reliable ITDs. Performance of the algorithm with room reverberation was assessed through simulating listening with bilateral CIs in headphone experiments with NH listeners. Relative to simulated standard CI processing, stimuli processed with the algorithm generated lower ITD discrimination thresholds and increased extents of laterality. Depending on parameterization, intelligibility was unchanged or somewhat reduced. The algorithm has the potential to improve spatial listening with CIs. PMID:27586742
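For orientation only, the sketch below shows the generic idea of steepening envelope peaks by power-law expansion. It is emphatically not the published algorithm, which selects peaks judged to contain reliable ITDs before sharpening them; the toy signal, the Hilbert-envelope extraction, and the exponent are all hypothetical.

```python
# Generic illustration of envelope-peak sharpening by power-law expansion.
# This is NOT the published algorithm (which selects peaks judged to carry
# reliable ITDs); it only shows how raising an envelope to an exponent > 1
# steepens its peaks. All parameters are hypothetical.
import numpy as np
from scipy.signal import hilbert

fs = 16_000
t = np.arange(int(0.2 * fs)) / fs
x = np.sin(2 * np.pi * 100 * t) * np.sin(2 * np.pi * 1000 * t)  # toy modulated tone

env = np.abs(hilbert(x))                   # amplitude envelope
sharp_env = (env / env.max()) ** 3         # exponent > 1 steepens the peaks
y = x / np.maximum(env, 1e-9) * sharp_env  # re-impose the sharpened envelope
```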
A circuit for detection of interaural time differences in the nucleus laminaris of turtles.
Willis, Katie L; Carr, Catherine E
2017-11-15
The physiological hearing range of turtles is approximately 50-1000 Hz, as determined by cochlear microphonics ( Wever and Vernon, 1956a). These low frequencies can constrain sound localization, particularly in red-eared slider turtles, which are freshwater turtles with small heads and isolated middle ears. To determine if these turtles were sensitive to interaural time differences (ITDs), we investigated the connections and physiology of their auditory brainstem nuclei. Tract tracing experiments showed that cranial nerve VIII bifurcated to terminate in the first-order nucleus magnocellularis (NM) and nucleus angularis (NA), and the NM projected bilaterally to the nucleus laminaris (NL). As the NL received inputs from each side, we developed an isolated head preparation to examine responses to binaural auditory stimulation. Magnocellularis and laminaris units responded to frequencies from 100 to 600 Hz, and phase-locked reliably to the auditory stimulus. Responses from the NL were binaural, and sensitive to ITD. Measures of characteristic delay revealed best ITDs around ±200 μs, and NL neurons typically had characteristic phases close to 0, consistent with binaural excitation. Thus, turtles encode ITDs within their physiological range, and their auditory brainstem nuclei have similar connections and cell types to other reptiles. © 2017. Published by The Company of Biologists Ltd.
NASA Astrophysics Data System (ADS)
Miller, Robert E. (Robin)
2005-04-01
Perception of very low frequencies (VLF) below 125 Hz reproduced by large woofers and subwoofers (SW), encompassing 3 octaves of the 10 regarded as audible, has physiological and content aspects. Large room acoustics and vibrato add VLF fluctuations, modulating audible carrier frequencies to >1 Hz. By convention, sounds below 90 Hz produce no interaural cues useful for spatial perception or localization, therefore bass management redirects the VLF range from main channels to a single (monaural) subwoofer channel, even if to more than one subwoofer. Yet subjects claim they hear a difference between a single subwoofer channel and two (stereo bass). If recordings contain spatial VLF content, is it possible physiologically to perceive interaural time/phase difference (ITD/IPD) between 16 and 125 Hz? To what extent does this perception have a lifelike quality; to what extent is it localization? If a first approximation of localization, would binaural SWs allow a higher crossover frequency (smaller satellite speakers)? Reported research supports the Jeffress model of ITD determination in brain structures, and extending the accepted lower frequency limit of IPD. Meanwhile, uncorrelated very low frequencies exist in all tested multi-channel music and movie content. The audibility, recording, and reproduction of uncorrelated VLF are explored in theory and experiments.
Moore, Brian C J; Sęk, Aleksander
2016-09-07
Multichannel amplitude compression is widely used in hearing aids. The preferred compression speed varies across individuals. Moore (2008) suggested that reduced sensitivity to temporal fine structure (TFS) may be associated with preference for slow compression. This idea was tested using a simulated hearing aid. It was also assessed whether preferences for compression speed depend on the type of stimulus: speech or music. Twenty-two hearing-impaired subjects were tested, and the simulated hearing aid was fitted individually using the CAM2A method. On each trial, a given segment of speech or music was presented twice. One segment was processed with fast compression and the other with slow compression, and the order was balanced across trials. The subject indicated which segment was preferred and by how much. On average, slow compression was preferred over fast compression, more so for music, but there were distinct individual differences, which were highly correlated for speech and music. Sensitivity to TFS was assessed using the difference limen for frequency at 2000 Hz and by two measures of sensitivity to interaural phase at low frequencies. The results for the difference limens for frequency, but not the measures of sensitivity to interaural phase, supported the suggestion that preference for compression speed is affected by sensitivity to TFS. © The Author(s) 2016.
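As background for the fast-versus-slow comparison, the sketch below shows a minimal single-band compressor in which compression speed is set by attack and release time constants. It is not the CAM2A-fitted, multichannel simulated hearing aid used in the study; the ratio, threshold, and time constants are illustrative assumptions.

```python
# Minimal single-band compressor sketch illustrating "fast" vs. "slow"
# compression via attack/release time constants. Not the multichannel,
# CAM2A-fitted simulated aid of the study; all values are illustrative.
import numpy as np

def compress(x, fs, ratio=3.0, thresh_db=-30.0, attack=0.005, release=0.050):
    """Feed-forward compressor with a smoothed level detector."""
    a_att = np.exp(-1.0 / (attack * fs))
    a_rel = np.exp(-1.0 / (release * fs))
    level = 0.0
    gain = np.ones_like(x)
    for i, s in enumerate(x):
        rect = abs(s)
        coef = a_att if rect > level else a_rel
        level = coef * level + (1 - coef) * rect          # smoothed level
        lev_db = 20 * np.log10(max(level, 1e-9))
        over = max(lev_db - thresh_db, 0.0)               # dB above threshold
        gain[i] = 10 ** (-over * (1 - 1 / ratio) / 20)    # gain reduction
    return x * gain

fs = 16_000
x = np.random.default_rng(0).standard_normal(fs) * np.linspace(0.01, 1.0, fs)
fast = compress(x, fs, attack=0.005, release=0.050)   # "fast" compression
slow = compress(x, fs, attack=0.050, release=2.000)   # "slow" compression
```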
Whiteford, Kelly L; Kreft, Heather A; Oxenham, Andrew J
2017-08-01
Natural sounds can be characterized by their fluctuations in amplitude and frequency. Ageing may affect sensitivity to some forms of fluctuations more than others. The present study used individual differences across a wide age range (20-79 years) to test the hypothesis that slow-rate, low-carrier frequency modulation (FM) is coded by phase-locked auditory-nerve responses to temporal fine structure (TFS), whereas fast-rate FM is coded via rate-place (tonotopic) cues, based on amplitude modulation (AM) of the temporal envelope after cochlear filtering. Using a low (500 Hz) carrier frequency, diotic FM and AM detection thresholds were measured at slow (1 Hz) and fast (20 Hz) rates in 85 listeners. Frequency selectivity and TFS coding were assessed using forward masking patterns and interaural phase disparity tasks (slow dichotic FM), respectively. Comparable interaural level disparity tasks (slow and fast dichotic AM and fast dichotic FM) were measured to control for effects of binaural processing not specifically related to TFS coding. Thresholds in FM and AM tasks were correlated, even across tasks thought to use separate peripheral codes. Age was correlated with slow and fast FM thresholds in both diotic and dichotic conditions. The relationship between age and AM thresholds was generally not significant. Once accounting for AM sensitivity, only diotic slow-rate FM thresholds remained significantly correlated with age. Overall, results indicate stronger effects of age on FM than AM. However, because of similar effects for both slow and fast FM when not accounting for AM sensitivity, the effects cannot be unambiguously ascribed to TFS coding.
Bernstein, Leslie R.; Trahiotis, Constantine
2009-01-01
This study addressed how manipulating certain aspects of the envelopes of high-frequency stimuli affects sensitivity to envelope-based interaural temporal disparities (ITDs). Listeners’ threshold ITDs were measured using an adaptive two-alternative paradigm employing “raised-sine” stimuli [John, M. S., et al. (2002). Ear Hear. 23, 106–117] which permit independent variation in their modulation frequency, modulation depth, and modulation exponent. Threshold ITDs were measured while manipulating modulation exponent for stimuli having modulation frequencies between 32 and 256 Hz. The results indicated that graded increases in the exponent led to graded decreases in envelope-based threshold ITDs. Threshold ITDs were also measured while parametrically varying modulation exponent and modulation depth. Overall, threshold ITDs decreased with increases in the modulation depth. Unexpectedly, increases in the exponent of the raised-sine led to especially large decreases in threshold ITD when the modulation depth was low. An interaural correlation-based model was generally able to capture changes in threshold ITD stemming from changes in the exponent, depth of modulation, and frequency of modulation of the raised-sine stimuli. The model (and several variations of it), however, could not account for the unexpected interaction between the value of the raised-sine exponent and its modulation depth. PMID:19425666
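A raised-sine stimulus in the spirit of John et al. (2002) can be sketched as below: a raised cosine taken to an exponent n serves as the modulator, so that larger exponents produce peakier envelopes. The depth scaling shown here, and the specific values of m and n, are assumptions that may differ from the stimuli used in the study.

```python
# Sketch of a "raised-sine" modulated tone in the spirit of John et al. (2002):
# the modulator is a raised cosine taken to an exponent n, which sharpens the
# envelope peaks as n grows. The depth scaling and the values of m and n are
# illustrative assumptions.
import numpy as np

fs = 48_000
t = np.arange(int(0.3 * fs)) / fs
fc, fm = 4000.0, 128.0        # carrier and modulation frequency (Hz)
n, m = 4.0, 0.85              # modulation exponent and depth (assumed values)

raised = ((1 + np.cos(2 * np.pi * fm * t)) / 2) ** n   # 0..1, peakier as n grows
envelope = (1 - m) + m * raised                        # apply modulation depth
signal = envelope * np.sin(2 * np.pi * fc * t)
# Imposing an ITD on `envelope` alone (delaying it in one ear) yields the
# envelope-based interaural temporal disparity studied above.
```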
Bernstein, Leslie R; Trahiotis, Constantine
2014-12-01
Binaural detection was measured as a function of the center frequency, bandwidth, and interaural correlation of masking noise. Thresholds were obtained for 500-Hz or 125-Hz Sπ tonal signals and for the latter stimuli (noise or signal-plus-noise) transposed to 4 kHz. A primary goal was assessment of the generality of van der Heijden and Trahiotis' [J. Acoust. Soc. Am. 101, 1019-1022 (1997)] hypothesis that thresholds could be accounted for by the "additive" masking effects of the underlying No and Nπ components of a masker having an interaural correlation of ρ. Results indicated that (1) the overall patterning of the data depended neither upon center frequency nor whether information was conveyed via the waveform or by its envelope; (2) thresholds for transposed stimuli improved relative to their low-frequency counterparts as bandwidth of the masker was increased; (3) the additivity approach accounted well for the data across stimulus conditions but consistently overestimated MLDs, especially for narrowband maskers; (4) a quantitative approach explicitly taking into account the distributions of time-varying ITD-based lateral positions produced by masker-alone and signal-plus-masker waveforms proved more successful, albeit while employing a larger set of assumptions, parameters, and computational complexity.
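The additivity hypothesis treats a masker with interaural correlation ρ as the sum of a diotic (No) component and an antiphasic (Nπ) component. The standard construction below mixes two independent noises with weights √((1+ρ)/2) and √((1−ρ)/2); the band-limiting to a particular center frequency and bandwidth is omitted here for brevity.

```python
# A masker with interaural correlation rho viewed as the sum of a diotic (No)
# and an antiphasic (Npi) component: mix two independent noises with weights
# sqrt((1+rho)/2) and sqrt((1-rho)/2). Band-limiting is omitted for brevity.
import numpy as np

rng = np.random.default_rng(0)
rho = 0.8
n0 = rng.standard_normal(200_000)     # diotic (No) component
npi = rng.standard_normal(200_000)    # antiphasic (Npi) component

w0, wpi = np.sqrt((1 + rho) / 2), np.sqrt((1 - rho) / 2)
left = w0 * n0 + wpi * npi
right = w0 * n0 - wpi * npi

measured_rho = np.corrcoef(left, right)[0, 1]
print(round(measured_rho, 3))         # ~0.8, as intended
```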
Hausmann, Laura; von Campenhausen, Mark; Endler, Frank; Singheiser, Martin; Wagner, Hermann
2009-11-05
When sound arrives at the eardrum it has already been filtered by the body, head, and outer ear. This process is mathematically described by the head-related transfer functions (HRTFs), which are characteristic for the spatial position of a sound source and for the individual ear. HRTFs in the barn owl (Tyto alba) are also shaped by the facial ruff, a specialization that alters interaural time differences (ITD), interaural intensity differences (ILD), and the frequency spectrum of the incoming sound to improve sound localization. Here we created novel stimuli to simulate the removal of the barn owl's ruff in a virtual acoustic environment, thus creating a situation similar to passive listening in other animals, and used these stimuli in behavioral tests. HRTFs were recorded from an owl before and after removal of the ruff feathers. Normal and ruff-removed conditions were created by filtering broadband noise with the HRTFs. Under normal virtual conditions, no differences in azimuthal head-turning behavior between individualized and non-individualized HRTFs were observed. The owls were able to respond differently to stimuli from the back than to stimuli from the front having the same ITD. By contrast, such a discrimination was not possible after the virtual removal of the ruff. Elevational head-turn angles were (slightly) smaller with non-individualized than with individualized HRTFs. The removal of the ruff resulted in a large decrease in elevational head-turning amplitudes. The facial ruff a) improves azimuthal sound localization by increasing the ITD range and b) improves elevational sound localization in the frontal field by introducing a shift of iso-ILD lines out of the midsagittal plane, which causes ILDs to increase with increasing stimulus elevation. The changes at the behavioral level could be related to the changes in the binaural physical parameters that occurred after the virtual removal of the ruff. These data provide new insights into the function of external hearing structures and open up the possibility to apply the results on autonomous agents, creation of virtual auditory environments for humans, or in hearing aids.
Alteration of frequency range for binaural beats in acute low-tone hearing loss.
Karino, Shotaro; Yamasoba, Tatsuya; Ito, Ken; Kaga, Kimitaka
2005-01-01
The effect of acute low-tone sensorineural hearing loss (ALHL) on the interaural frequency difference (IFD) required for perception of binaural beats (BBs) was investigated in 12 patients with unilateral ALHL and 7 patients in whom ALHL had lessened. A continuous pure tone of 30 dB sensation level at 250 Hz was presented to the contralateral, normal-hearing ear. The presence of BBs was determined by a subjective yes-no procedure as the frequency of a loudness-balanced test tone was gradually adjusted around 250 Hz in the affected ear. The frequency range in which no BBs were perceived (FRNB) was significantly wider in the patients with ALHL than in the controls, and FRNBs became narrower in the recovered ALHL group. Specifically, detection of slow BBs with a small IFD was impaired in this limited (10 s) observation period. The significant correlation between the hearing level at 250 Hz and FRNBs suggests that FRNBs represent the degree of cochlear damage caused by ALHL.
Tympanometric findings in superior semicircular canal dehiscence syndrome.
Castellucci, A; Brandolini, C; Piras, G; Modugno, G C
2013-04-01
The diagnostic role of audio-impedancemetry in superior semicircular canal dehiscence (SSCD) disease is well known. In particular, since the first reports, the presence of evoked acoustic reflexes has represented a key instrumental finding in the differential diagnosis with other middle ear pathologies that are responsible for a mild low-frequency air-bone gap (ABG). Even though high resolution computed tomography (HRCT) completed by parasagittal reformatted images still represents the diagnostic gold standard, several instrumental tests can support a suspicion of labyrinthine capsule dehiscence when "suggestive" symptoms occur. Objective and subjective audiometry often represents the starting point of the diagnostic course aimed at investigating the cause responsible for the so-called "intra-labyrinthine conductive hearing loss". The purpose of this study is to evaluate the role of tympanometry, in particular of the inter-aural asymmetry ratio in peak compliance as a function of different low-frequency ABG values on the affected side, in the diagnostic work-up of patients with unilateral SSCD. The working hypothesis is that an increase in admittance of the "inner-middle ear" conduction system due to a "third mobile window" could be detected by tympanometry. A retrospective review of the clinical records of 45 patients with unilateral dehiscence selected from a pool of 140 subjects diagnosed with SSCD at our institution from 2003 to 2011 was performed. Values of ABG amplitude on the dehiscent side and tympanometric measurements of both ears were collected for each patient in the study group (n = 45). An asymmetry between tympanometric peak compliance of the involved side and that of the contralateral side was investigated by calculating the inter-aural difference and the asymmetry ratio of compliance at the eardrum. A statistically significant correlation (p = 0.015 by Fisher's test) between an asymmetry ratio ≥ 14% in favour of the pathologic ear and an ABG > 20 dB nHL on the same side was found. When "evocative" symptoms of SSCD associated with a substantial ABG occur, the inter-aural difference in tympanometric peak compliance at the eardrum in favour of the "suspected" side could suggest an intra-labyrinthine origin for the asymmetry. Tympanometry would thus prove to be a useful instrument in the clinical-instrumental diagnosis of SSCD for detecting cases associated with alterations of inner ear impedance.
Mechanisms for Adjusting Interaural Time Differences to Achieve Binaural Coincidence Detection
Seidl, Armin H.; Rubel, Edwin W; Harris, David M.
2010-01-01
Understanding binaural perception requires detailed analyses of the neural circuitry responsible for the computation of interaural time differences (ITDs). In the avian brainstem, this circuit consists of internal axonal delay lines innervating an array of coincidence detector neurons that encode external ITDs. Nucleus magnocellularis (NM) neurons project to the dorsal dendritic field of the ipsilateral nucleus laminaris (NL) and to the ventral field of the contralateral NL. Contralateral-projecting axons form a delay line system along a band of NL neurons. Binaural acoustic signals in the form of phase-locked action potentials from NM cells arrive at NL and establish a topographic map of sound source location along the azimuth. These pathways are assumed to represent a circuit similar to the Jeffress model of sound localization, establishing a place code along an isofrequency contour of NL. Three-dimensional measurements of axon lengths reveal major discrepancies with the current model; the temporal offset based on conduction length alone makes encoding of physiological ITDs impossible. However, axon diameter and distances between Nodes of Ranvier also influence signal propagation times along an axon. Our measurements of these parameters reveal that diameter and internode distance can compensate for the temporal offset inferred from axon lengths alone. Together with other recent studies these unexpected results should inspire new thinking on the cellular biology, evolution and plasticity of the circuitry underlying low frequency sound localization in both birds and mammals. PMID:20053889
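The compensation argument above can be illustrated with a back-of-the-envelope calculation in which conduction velocity increases with fiber diameter and internode distance. The simple proportionality and the constants in the sketch below are illustrative assumptions, not measurements from the study.

```python
# Back-of-envelope sketch: conduction delay along a myelinated axon depends on
# its length and conduction velocity, and velocity increases with both fiber
# diameter and internode distance. The simple proportionality used here
# (velocity ~ k * diameter, scaled by relative internode spacing) and the
# constants are illustrative assumptions, not measurements from the study.
import numpy as np

def conduction_delay(length_mm, diameter_um, internode_um,
                     k=6.0, ref_internode_um=100.0):
    """Delay in microseconds for a crude, assumed velocity model."""
    velocity_m_per_s = k * diameter_um * (internode_um / ref_internode_um)
    return (length_mm * 1e-3) / velocity_m_per_s * 1e6

# Two hypothetical NM->NL branches: the longer contralateral branch can still
# match the ipsilateral delay if its diameter and internode distance are larger.
ipsi = conduction_delay(length_mm=2.0, diameter_um=2.0, internode_um=80.0)
contra = conduction_delay(length_mm=8.0, diameter_um=4.0, internode_um=160.0)
print(round(ipsi, 1), round(contra, 1))   # microseconds
```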
Underwater localization of pure tones by harbor seals (Phoca vitulina).
Bodson, Anaïs; Miersch, Lars; Dehnhardt, Guido
2007-10-01
The underwater sound localization acuity of harbor seals (Phoca vitulina) was measured in the horizontal plane. Minimum audible angles (MAAs) of pure tones were determined as a function of frequency from 0.2 to 16 kHz for two seals. Testing was conducted in a 10-m-diam underwater half circle using a right/left psychophysical procedure. The results indicate that for both harbor seals, MAAs were large at high frequencies (13.5 degrees and 17.4 degrees at 16 kHz), transitional at intermediate frequencies (9.6 degrees and 10.1 degrees at 4 kHz), and particularly small at low frequencies (3.2 degrees and 3.1 degrees at 0.2 kHz). Harbor seals seem to be able to utilize both binaural cues, interaural time differences (ITDs) and interaural intensity differences (IIDs), but a significant decrease in the sound localization acuity with increasing frequency suggests that IID cues may not be as robust as ITD cues under water. These results suggest that the harbor seal can be regarded as a low-frequency specialist. Additionally, to obtain a MAA more representative of the species, the horizontal underwater MAA of six adult harbor seals was measured at 2 kHz under identical conditions. The MAAs of the six animals ranged from 8.8 degrees to 11.7 degrees , resulting in a mean MAA of 10.3 degrees .
Encke, Jörg; Hemmert, Werner
2018-01-01
The mammalian auditory system is able to extract temporal and spectral features from sound signals at the two ears. Important cues for the localization of low-frequency sound sources in the horizontal plane are inter-aural time differences (ITDs), which are first analyzed in the medial superior olive (MSO) in the brainstem. Neural recordings of ITD tuning curves at various stages along the auditory pathway suggest that ITDs in the mammalian brainstem are not represented in the form of a Jeffress-type place code. An alternative is the hemispheric opponent-channel code, according to which ITDs are encoded as the difference in the responses of the MSO nuclei in the two hemispheres. In this study, we present a physiologically-plausible, spiking neuron network model of the mammalian MSO circuit and apply two different methods of extracting ITDs from arbitrary sound signals. The network model is driven by a functional model of the auditory periphery and physiological models of the cochlear nucleus and the MSO. Using a linear opponent-channel decoder, we show that the network is able to detect changes in ITD with a precision down to 10 μs and that the sensitivity of the decoder depends on the slope of the ITD-rate functions. A second approach uses an artificial neuronal network to predict ITDs directly from the spiking output of the MSO and ANF model. Using this predictor, we show that the MSO-network is able to reliably encode static and time-dependent ITDs over a large frequency range, also for complex signals like speech.
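The linear opponent-channel readout mentioned above can be sketched as follows: the ITD estimate is a calibrated linear function of the difference between the left- and right-hemisphere MSO population rates. The sigmoidal rate-ITD functions in the sketch are synthetic stand-ins for the spiking-network output, and the calibration is only approximately linear away from the midline.

```python
# Sketch of a linear opponent-channel readout: the ITD estimate is a
# calibrated linear function of the difference between left- and
# right-hemisphere MSO population rates. The sigmoidal rate-ITD functions
# are synthetic stand-ins for the spiking network; the readout is only
# approximately linear away from the midline.
import numpy as np

def mso_rate(itd_us, preferred_side):
    """Synthetic hemispheric rate-ITD function (sigmoid, spikes/s)."""
    sign = 1.0 if preferred_side == "left" else -1.0
    return 100.0 / (1.0 + np.exp(-sign * itd_us / 100.0))

# Calibration: fit a line mapping rate difference -> ITD on known stimuli
itd_grid = np.linspace(-150.0, 150.0, 61)                 # microseconds
diff_grid = mso_rate(itd_grid, "left") - mso_rate(itd_grid, "right")
slope, intercept = np.polyfit(diff_grid, itd_grid, 1)

# Decode an unseen stimulus (with a little response noise added)
rng = np.random.default_rng(0)
true_itd = 120.0
d = (mso_rate(true_itd, "left") - mso_rate(true_itd, "right")
     + rng.normal(0.0, 1.0))
print(round(slope * d + intercept, 1), "us decoded vs", true_itd, "us true")
```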
Sutojo, Sarinah; van de Par, Steven; Schoenmaker, Esther
2018-06-01
In situations with competing talkers or in the presence of masking noise, speech intelligibility can be improved by spatially separating the target speaker from the interferers. This advantage is generally referred to as spatial release from masking (SRM) and different mechanisms have been suggested to explain it. One proposed mechanism to benefit from spatial cues is the binaural masking release, which is purely stimulus driven. According to this mechanism, the spatial benefit results from differences in the binaural cues of target and masker, which need to appear simultaneously in time and frequency to improve the signal detection. In an alternative proposed mechanism, the differences in the interaural cues improve the segregation of auditory streams, a process, which involves top-down processing rather than being purely stimulus driven. Other than the cues that produce binaural masking release, the interaural cue differences between target and interferer required to improve stream segregation do not have to appear simultaneously in time and frequency. This study is concerned with the contribution of binaural masking release to SRM for three masker types that differ with respect to the amount of energetic masking they exert. Speech intelligibility was measured, employing a stimulus manipulation that inhibits binaural masking release, and analyzed with a metric to account for the number of better-ear glimpses. Results indicate that the contribution of the stimulus-driven binaural masking release plays a minor role while binaural stream segregation and the availability of glimpses in the better ear had a stronger influence on improving the speech intelligibility. This article is protected by copyright. All rights reserved.
The Neural Substrate for Binaural Masking Level Differences in the Auditory Cortex
Gilbert, Heather J.; Krumbholz, Katrin; Palmer, Alan R.
2015-01-01
The binaural masking level difference (BMLD) is a phenomenon whereby a signal that is identical at each ear (S0), masked by a noise that is identical at each ear (N0), can be made 12–15 dB more detectable by inverting the waveform of either the tone or noise at one ear (Sπ, Nπ). Single-cell responses to BMLD stimuli were measured in the primary auditory cortex of urethane-anesthetized guinea pigs. Firing rate was measured as a function of signal level of a 500 Hz pure tone masked by low-passed white noise. Responses were similar to those reported in the inferior colliculus. At low signal levels, the response was dominated by the masker. At higher signal levels, firing rate either increased or decreased. Detection thresholds for each neuron were determined using signal detection theory. Few neurons yielded measurable detection thresholds for all stimulus conditions, with a wide range in thresholds. However, across the entire population, the lowest thresholds were consistent with human psychophysical BMLDs. As in the inferior colliculus, the shape of the firing-rate versus signal-level functions depended on the neurons' selectivity for interaural time difference. Our results suggest that, in cortex, BMLD signals are detected from increases or decreases in the firing rate, consistent with predictions of cross-correlation models of binaural processing and that the psychophysical detection threshold is based on the lowest neural thresholds across the population. PMID:25568115
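The signal-detection-theory step mentioned above, deriving a neural detection threshold from firing-rate distributions, can be sketched generically as below: compare spike-count distributions for masker-alone and signal-plus-masker trials at each signal level and take the lowest level at which a sensitivity criterion (here d′ ≥ 1) is reached. The Poisson rate model and all numerical values are illustrative assumptions.

```python
# Generic signal-detection-theory threshold estimate: compare spike-count
# distributions for masker-alone trials with signal-plus-masker trials at
# each signal level and take the lowest level with d' >= 1. The Poisson
# rate-level model and all numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200
noise_rate = 20.0                               # spikes per trial, masker alone

def dprime(signal_counts, noise_counts):
    pooled_sd = np.sqrt(0.5 * (signal_counts.var() + noise_counts.var()))
    return (signal_counts.mean() - noise_counts.mean()) / pooled_sd

noise_counts = rng.poisson(noise_rate, n_trials)
threshold = None
for level_db in range(0, 41, 2):                 # signal level, arbitrary ref
    driven = noise_rate * (1 + 0.02 * level_db)  # toy rate-level function
    signal_counts = rng.poisson(driven, n_trials)
    if dprime(signal_counts, noise_counts) >= 1.0:
        threshold = level_db
        break
print("neural threshold:", threshold, "dB (toy example)")
```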
Litovsky, Ruth Y.; Gordon, Karen
2017-01-01
Spatial hearing skills are essential for children as they grow, learn and play. They provide critical cues for determining the locations of sources in the environment, and enable segregation of important sources, such as speech, from background maskers or interferers. Spatial hearing depends on availability of monaural cues and binaural cues. The latter result from integration of inputs arriving at the two ears from sounds that vary in location. The binaural system has exquisite mechanisms for capturing differences between the ears in both time of arrival and intensity. The major cues that are thus referred to as being vital for binaural hearing are: interaural differences in time (ITDs) and interaural differences in levels (ILDs). In children with normal hearing (NH), spatial hearing abilities are fairly well developed by age 4–5 years. In contrast, children who are deaf and hear through cochlear implants (CIs) do not have an opportunity to experience normal, binaural acoustic hearing early in life. These children may function by having to utilize auditory cues that are degraded with regard to numerous stimulus features. In recent years there has been a notable increase in the number of children receiving bilateral CIs, and evidence suggests that while having two CIs helps them function better than when listening through a single CI, they generally perform worse than their NH peers. This paper reviews some of the recent work on bilaterally implanted children. The focus is on measures of spatial hearing, including sound localization, release from masking for speech understanding in noise and binaural sensitivity using research processors. Data from behavioral and electrophysiological studies are included, with a focus on the recent work of the authors and their collaborators. The effects of auditory plasticity and deprivation on the emergence of binaural and spatial hearing are discussed along with evidence for reorganized processing from both behavioral and electrophysiological studies. The consequences of both unilateral and bilateral auditory deprivation during development suggest that the relevant set of issues is highly complex with regard to successes and the limitations experienced by children receiving bilateral cochlear implants. PMID:26828740
NASA Astrophysics Data System (ADS)
Sakai, H.; Sato, S.; Prodi, N.; Pompoli, R.
2001-03-01
Measurements of aircraft noise were made at the airport "G. Marconi" in Bologna by using a measurement system for regional environmental noise. The system is based on the model of the human auditory-brain system, which is based on the interplay of autocorrelators and an interaural cross-correlator acting on the pressure signals arriving at the ear entrances, and takes into account the specialization of left and right human cerebral hemispheres (see reference [8]). Measurements were taken through dual microphones at ear entrances of a dummy head. The aircraft noise was characterized with the following physical factors calculated from the autocorrelation function (ACF) and interaural cross-correlation function (IACF) for binaural signals. From the ACF analysis, (1) energy represented at the origin of delay, Φ(0), (2) effective duration of the envelope of the normalized ACF, τe, (3) the delay time of the first peak, τ1, and (4) its amplitude, φ1, were extracted. From the IACF analysis, (5) IACC, (6) interaural delay time at which the IACC is defined, τIACC, and (7) width of the IACF at the τIACC, WIACC, were extracted. The factor Φ(0) can be represented as the geometrical mean of the energies at both ears. A noise source may be identified by these factors as timbre.
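The sketch below computes several of the listed factors, Φ(0), τ1, φ1, IACC, and τIACC, from a short binaural frame; τe is estimated crudely as the first lag at which the normalized ACF falls below 0.1 (−10 dB), whereas published analyses typically use a regression on the early decay, and WIACC is omitted. Frame length and the toy input are assumptions.

```python
# Simplified extraction of ACF and IACF factors from a short binaural frame.
# tau_e is approximated as the first lag where the normalized ACF drops below
# 0.1; published analyses usually regress on the early decay. WIACC omitted.
import numpy as np

def acf_factors(x, fs, max_lag_ms=50):
    n = len(x)
    lags = int(max_lag_ms * 1e-3 * fs)
    ac = np.correlate(x, x, mode="full")[n - 1:n - 1 + lags]
    phi0 = ac[0]                              # energy at zero delay, Phi(0)
    nac = ac / phi0                           # normalized ACF
    below = np.where(np.abs(nac) < 0.1)[0]
    tau_e = below[0] / fs if below.size else max_lag_ms * 1e-3
    k = 1 + np.argmax(nac[1:])                # first major peak after zero lag
    return phi0, tau_e, k / fs, nac[k]        # Phi(0), tau_e, tau_1, phi_1

def iacf_factors(left, right, fs, max_lag_ms=1.0):
    lags = int(max_lag_ms * 1e-3 * fs)
    norm = np.sqrt(np.dot(left, left) * np.dot(right, right))
    # Cross-correlate over lags -max..+max (positive lag: right delayed)
    iacf = np.array([np.dot(left[:len(left) - abs(k)] if k >= 0 else left[-k:],
                            right[k:] if k >= 0 else right[:len(right) + k])
                     for k in range(-lags, lags + 1)]) / norm
    k = np.argmax(iacf)
    return iacf[k], (k - lags) / fs           # IACC, tau_IACC (s)

fs = 48_000
rng = np.random.default_rng(0)
sig = rng.standard_normal(fs)                 # stand-in for a 1-s frame
left, right = sig, np.roll(sig, 12)           # toy interaural delay
print(acf_factors(left, fs)[1], iacf_factors(left, right, fs))
```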
Van Hoesel, Richard; Ramsden, Richard; Odriscoll, Martin
2002-04-01
To characterize some of the benefits available from using two cochlear implants compared with just one, sound-direction identification (ID) abilities, sensitivity to interaural time delays (ITDs) and speech intelligibility in noise were measured for a bilateral multi-channel cochlear implant user. Sound-direction ID in the horizontal plane was tested with a bilateral cochlear implant user. The subject was tested both unilaterally and bilaterally using two independent behind-the-ear ESPRIT (Cochlear Ltd.) processors, as well as bilaterally using custom research processors. Pink noise bursts were presented using an 11-loudspeaker array spanning the subject's frontal 180 degrees arc in an anechoic room. After each burst, the subject was asked to identify which loudspeaker had produced the sound. No explicit training and no feedback were given. Presentation levels were nominally at 70 dB SPL, except for a repeat experiment using the clinical devices where the presentation levels were reduced to 60 dB SPL to avoid activation of the devices' automatic gain control (AGC) circuits. Overall presentation levels were randomly varied by ±3 dB. For the research processor, a "low-update-rate" and a "high-update-rate" strategy were tested. Direct measurements of ITD just noticeable differences (JNDs) were made using a 3 AFC paradigm targeting 70% correct performance on the psychometric function. Stimuli included simple, low-rate electrical pulse trains as well as high-rate pulse trains modulated at 100 Hz. Speech data comparing monaural and binaural performance in noise were also collected with both low- and high-update-rate strategies on the research processors. Open-set sentences were presented from directly in front of the subject and competing multi-talker babble noise was presented from the same loudspeaker, or from a loudspeaker placed 90 degrees to the left or right of the subject. For the sound-direction ID task, monaural performance using the clinical devices showed large mean absolute errors of 81 degrees and 73 degrees, with standard deviations (averaged across all 11 loudspeakers) of 10 degrees and 17 degrees, for left and right ears, respectively. For bilateral device use at a presentation level of 70 dB SPL, the mean error improved to about 16 degrees with an average standard deviation of 18 degrees. When the presentation level was decreased to 60 dB SPL to avoid activation of the automatic gain control (AGC) circuits in the clinical processors, the mean response error improved further to 8 degrees with a standard deviation of 13 degrees. Further tests with the custom research processors, which had a higher stimulation rate and did not include AGCs, showed comparable response errors: around 8 or 9 degrees and a standard deviation of about 11 degrees for both update rates. The best ITD JNDs measured for this subject were between 350 and 400 μs for simple low-rate pulse trains. Speech results showed a substantial headshadow advantage for bilateral device use when speech and noise were spatially separated, but little evidence of binaural unmasking. For spatially coincident speech and noise, listening with both ears showed similar results to listening with either side alone when loudness summation was compensated for. No significant differences were observed between binaural results for high and low update-rates in any test configuration. Only for monaural listening in one test configuration did the high rate show a small significant improvement over the low rate.
Results show that even if interaural time delay cues are not well coded or perceived, bilateral implants can offer important advantages, both for speech in noise as well as for sound-direction identification.
Audible sonar images generated with proprioception for target analysis.
Kuc, Roman B
2017-05-01
Some blind humans have demonstrated the ability to detect and classify objects with echolocation using palatal clicks. An audible-sonar robot mimics human click emissions, binaural hearing, and head movements to extract interaural time and level differences from target echoes. Targets of various complexity are examined by transverse displacements of the sonar and by target pose rotations that model movements performed by the blind. Controlled sonar movements executed by the robot provide data that model proprioception information available to blind humans for examining targets from various aspects. The audible sonar uses this sonar location and orientation information to form two-dimensional target images that are similar to medical diagnostic ultrasound tomograms. Simple targets, such as single round and square posts, produce distinguishable and recognizable images. More complex targets configured with several simple objects generate diffraction effects and multiple reflections that produce image artifacts. The presentation illustrates the capabilities and limitations of target classification from audible sonar images.
Theory of acoustic design of opera house and a design proposal
NASA Astrophysics Data System (ADS)
Ando, Yoichi
2004-05-01
First of all, the theory of subjective preference for sound fields based on the model of the auditory-brain system is briefly mentioned. It consists of the temporal factors and spatial factors associated with the left and right cerebral hemispheres, respectively. The temporal criteria are the initial time delay gap between the direct sound and the first reflection (Δt1) and the subsequent reverberation time (Tsub). These preferred conditions are related to the minimum value of the effective duration of the running autocorrelation function of source signals, (τe)min. The spatial criteria are binaural listening level (LL) and the IACC, which may be extracted from the interaural cross-correlation function. In the opera house, there are two different kinds of sound sources, i.e., the vocal source of relatively short values of (τe)min on the stage and the orchestra music of long values of (τe)min in the pit. For these sources, a proposal is made here.
Why Internally Coupled Ears (ICE) Work Well
NASA Astrophysics Data System (ADS)
van Hemmen, J. Leo
2014-03-01
Many vertebrates, such as frogs and lizards, have an air-filled cavity between left and right eardrum, i.e., internally coupled ears (ICE). Depending on source direction, the internal time (iTD) and level (iLD) differences as experienced by the animal's auditory system may greatly exceed [C. Vossen et al., JASA 128 (2010) 909-918] the external, or interaural, time and level differences (ITD and ILD). Sensory processing only encodes iTD and iLD. We present an extension of ICE theory so as to elucidate the underlying physics. First, the membrane properties of the eardrum explain why for low frequencies iTD dominates whereas iLD does so for higher frequencies. Second, a plateau iTD = γ·ITD follows, with constant 1 < γ < 5, for input frequencies ν < ν∘; e.g., for the Tokay gecko ν∘ ≈ 1.5 kHz. Third, we use a sectorial instead of a circular membrane to quantify the effect of the extracolumella embedded in the tympanum and connecting with the cochlea. The main parameters can be adjusted so that the model is species independent. Work done in collaboration with A.P. Vedurmudi and J. Goulet; partially supported by BCCN-Munich.
Greene, Nathaniel T; Anbuhl, Kelsey L; Ferber, Alexander T; DeGuzman, Marisa; Allen, Paul D; Tollin, Daniel J
2018-08-01
Despite the common use of guinea pigs in investigations of the neural mechanisms of binaural and spatial hearing, their behavioral capabilities in spatial hearing tasks have surprisingly not been thoroughly investigated. To begin to fill this void, we tested the spatial hearing of adult male guinea pigs in several experiments using a paradigm based on the prepulse inhibition (PPI) of the acoustic startle response. In the first experiment, we presented continuous broadband noise from one speaker location and switched to a second speaker location (the "prepulse") along the azimuth prior to presenting a brief, ∼110 dB SPL startle-eliciting stimulus. We found that the startle response amplitude was systematically reduced for larger changes in speaker swap angle (i.e., greater PPI), indicating that using the speaker "swap" paradigm is sufficient to assess stimulus detection of spatially separated sounds. In a second set of experiments, we swapped low- and high-pass noise across the midline to estimate their ability to utilize interaural time- and level-difference cues, respectively. The results reveal that guinea pigs can utilize both binaural cues to discriminate azimuthal sound sources. A third set of experiments examined spatial release from masking using a continuous broadband noise masker and a broadband chirp signal, both presented concurrently at various speaker locations. In general, animals displayed an increase in startle amplitude (i.e., lower PPI) when the masker was presented at speaker locations near that of the chirp signal, and reduced startle amplitudes (increased PPI) indicating lower detection thresholds when the noise was presented from more distant speaker locations. In summary, these results indicate that guinea pigs can: 1) discriminate changes in source location within a hemifield as well as across the midline, 2) discriminate sources of low- and high-pass sounds, demonstrating that they can effectively utilize both low-frequency interaural time and high-frequency level difference sound localization cues, and 3) utilize spatial release from masking to discriminate sound sources. This report confirms the guinea pig as a suitable spatial hearing model and reinforces prior estimates of guinea pig hearing ability from acoustical and physiological measurements. Copyright © 2018 Elsevier B.V. All rights reserved.
Aihara, Noritaka; Murakami, Shingo; Takahashi, Mariko; Yamada, Kazuo
2014-01-01
We classified the results of preoperative auditory brainstem response (ABR) in 121 patients with useful hearing and considered the utility of preoperative ABR as a preliminary assessment for intraoperative monitoring. Wave V was confirmed in 113 patients and was not confirmed in 8 patients. Intraoperative ABR could not detect wave V in these 8 patients. The 8 patients without wave V were classified into two groups (flat and wave I only), and the reason why wave V could not be detected may have differed between the groups. Because high-frequency hearing was impaired in flat patients, an alternative to click stimulation may be more effective. Monitoring the cochlear nerve action potential (CNAP) may be useful because CNAP could be detected in 4 of 5 wave I only patients. Useful hearing was preserved after surgery in 1 patient in the flat group and 2 patients in the wave I only group. Among patients with wave V, the mean interaural latency difference of wave V was 0.88 ms in Class A (n = 57) and 1.26 ms in Class B (n = 56). Because the latency of wave V is already prolonged before surgery, estimating the delay in wave V latency during surgery probably underestimates cochlear nerve damage. Recording intraoperative ABR is indispensable to avoid cochlear nerve damage and to provide information for surgical decisions. Confirming the condition of the ABR before surgery helps to solve certain problems, such as choosing whether to monitor the interaural latency difference of wave V, CNAP, or an alternative sound-evoked ABR.
Młynarski, Wiktor
2015-05-01
In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for maximum localization accuracy in that area. These experimental observations contradict initial assumptions that auditory space is represented as a topographic cortical map. It has been suggested that a "panoramic" code has evolved to match the specific demands of the sound localization task. This work provides evidence suggesting that the properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left- and right-ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. The spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, the results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding.
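A rough sketch of the amplitude/phase factorization the first layer is described as learning; here a fixed Hilbert-transform analytic signal stands in for the learned complex-valued basis functions, so this only approximates the idea:

    import numpy as np
    from scipy.signal import hilbert

    def amplitude_and_phase(x):
        """Split a band-limited ear signal into amplitude envelope and instantaneous phase
        via the analytic signal (a fixed transform, unlike the model's learned bases)."""
        analytic = hilbert(x)
        return np.abs(analytic), np.angle(analytic)

    def interaural_phase_difference(left, right):
        """Instantaneous interaural phase difference, wrapped to (-pi, pi]."""
        _, phase_l = amplitude_and_phase(left)
        _, phase_r = amplitude_and_phase(right)
        return np.angle(np.exp(1j * (phase_l - phase_r)))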
The effect of preterm birth on vestibular evoked myogenic potentials in children.
Eshaghi, Zahra; Jafari, Zahra; Shaibanizadeh, Abdolreza; Jalaie, Shohreh; Ghaseminejad, Azizeh
2014-01-01
Preterm birth is a significant global health problem with serious short- and long-term consequences. This study examined the long-term effects of preterm birth on vestibular evoked myogenic potentials (VEMPs) among preschool-aged children. Thirty-one children with preterm and 20 children with term birth histories, aged 5.5 to 6.5 years, were studied. Each child underwent VEMP testing using a 500 Hz tone-burst stimulus at a 95 dB nHL (normal hearing level) intensity level. The mean peak latencies of the p13 and n23 waves in the very preterm group were significantly longer than for the full-term group (p ≤ 0.041). There was a significant difference between very and mildly preterm children in the latency of peak p13 (p = 0.003). No significant differences existed between groups for p13-n23 amplitude and the interaural amplitude difference ratio. The tested ear and gender did not affect the results of the test. Prolonged VEMPs in very preterm children may reflect neurodevelopmental impairment and incomplete maturity of the vestibulospinal tract (sacculocollic reflex pathway), especially its myelination. VEMP testing is a non-invasive technique for investigating vestibular function in young children and is considered an appropriate tool for evaluating vestibular impairments at the low brainstem level. It can be used in follow-ups of the long-term effects of preterm birth on the vestibular system.
Congenital amusia: a cognitive disorder limited to resolved harmonics and with no peripheral basis.
Cousineau, Marion; Oxenham, Andrew J; Peretz, Isabelle
2015-01-01
Pitch plays a fundamental role in audition, from speech and music perception to auditory scene analysis. Congenital amusia is a neurogenetic disorder that appears to affect primarily pitch and melody perception. Pitch is normally conveyed by the spectro-temporal fine structure of low harmonics, but some pitch information is available in the temporal envelope produced by the interactions of higher harmonics. Using 10 amusic subjects and 10 matched controls, we tested the hypothesis that amusics suffer exclusively from impaired processing of spectro-temporal fine structure. We also tested whether the inability of amusics to process acoustic temporal fine structure extends beyond pitch by measuring sensitivity to interaural time differences, which also rely on temporal fine structure. Further tests were carried out on basic intensity and spectral resolution. As expected, pitch perception based on spectro-temporal fine structure was impaired in amusics; however, no significant deficits were observed in amusics' ability to perceive the pitch conveyed via temporal-envelope cues. Sensitivity to interaural time differences was also not significantly different between the amusic and control groups, ruling out deficits in the peripheral coding of temporal fine structure. Finally, no significant differences in intensity or spectral resolution were found between the amusic and control groups. The results demonstrate a pitch-specific deficit in fine spectro-temporal information processing in amusia that seems unrelated to temporal or spectral coding in the auditory periphery. These results are consistent with the view that there are distinct mechanisms dedicated to processing resolved and unresolved harmonics in the general population, the former being altered in congenital amusia while the latter is spared. Copyright © 2014 Elsevier Ltd. All rights reserved.
Hughes, Laura E; Rowe, James B; Ghosh, Boyd C P; Carlyon, Robert P; Plack, Christopher J; Gockel, Hedwig E
2014-12-15
Under binaural listening conditions, the detection of target signals within background masking noise is substantially improved when the interaural phase of the target differs from that of the masker. Neural correlates of this binaural masking level difference (BMLD) have been observed in the inferior colliculus and temporal cortex, but it is not known whether degeneration of the inferior colliculus would result in a reduction of the BMLD in humans. We used magnetoencephalography to examine the BMLD in 13 healthy adults and 13 patients with progressive supranuclear palsy (PSP). PSP is associated with severe atrophy of the upper brain stem, including the inferior colliculus, confirmed by voxel-based morphometry of structural MRI. Stimuli comprised in-phase sinusoidal tones presented to both ears at three levels (high, medium, and low) masked by in-phase noise, which rendered the low-level tone inaudible. Critically, the BMLD was measured using a low-level tone presented in opposite phase across ears, making it audible against the noise. The cortical waveforms from bilateral auditory sources revealed significantly larger N1m peaks for the out-of-phase low-level tone compared with the in-phase low-level tone, for both groups, indicating preservation of early cortical correlates of the BMLD in PSP. In PSP a significant delay was observed in the onset of the N1m deflection and the amplitude of the P2m was reduced, but these differences were not restricted to the BMLD condition. The results demonstrate that although PSP causes subtle auditory deficits, binaural processing can survive the presence of significant damage to the upper brain stem. Copyright © 2014 the American Physiological Society.
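To make the two stimulus conditions concrete, here is a minimal sketch of the homophasic versus antiphasic tone-in-noise configurations that produce a BMLD; the sampling rate, tone frequency, levels, and duration are arbitrary illustrative choices:

    import numpy as np

    fs = 48000
    t = np.arange(int(fs * 0.5)) / fs

    noise = 0.1 * np.random.default_rng(0).standard_normal(t.size)  # identical noise at both ears (No)
    tone = 0.02 * np.sin(2 * np.pi * 500 * t)                       # low-level target tone

    # Homophasic condition (NoSo): tone in phase at the two ears
    left_so, right_so = noise + tone, noise + tone

    # Antiphasic condition (NoSpi): tone in opposite phase across ears, audible against the noise
    left_spi, right_spi = noise + tone, noise - tone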
Investigations in mechanisms and strategies to enhance hearing with cochlear implants
NASA Astrophysics Data System (ADS)
Churchill, Tyler H.
Cochlear implants (CIs) produce hearing sensations by stimulating the auditory nerve (AN) with current pulses whose amplitudes are modulated by filtered acoustic temporal envelopes. While this technology has provided hearing for multitudinous CI recipients, even bilaterally-implanted listeners have more difficulty understanding speech in noise and localizing sounds than normal hearing (NH) listeners. Three studies reported here have explored ways to improve electric hearing abilities. Vocoders are often used to simulate CIs for NH listeners. Study 1 was a psychoacoustic vocoder study examining the effects of harmonic carrier phase dispersion and simulated CI current spread on speech intelligibility in noise. Results showed that simulated current spread was detrimental to speech understanding and that speech vocoded with carriers whose components' starting phases were equal was the least intelligible. Cross-correlogram analyses of AN model simulations confirmed that carrier component phase dispersion resulted in better neural envelope representation. Localization abilities rely on binaural processing mechanisms in the brainstem and mid-brain that are not fully understood. In Study 2, several potential mechanisms were evaluated based on the ability of metrics extracted from stereo AN simulations to predict azimuthal locations. Results suggest that unique across-frequency patterns of binaural cross-correlation may provide a strong cue set for lateralization and that interaural level differences alone cannot explain NH sensitivity to lateral position. While it is known that many bilateral CI users are sensitive to interaural time differences (ITDs) in low-rate pulsatile stimulation, most contemporary CI processing strategies use high-rate, constant-rate pulse trains. In Study 3, we examined the effects of pulse rate and pulse timing on ITD discrimination, ITD lateralization, and speech recognition by bilateral CI listeners. Results showed that listeners were able to use low-rate pulse timing cues presented redundantly on multiple electrodes for ITD discrimination and lateralization of speech stimuli even when mixed with high rates on other electrodes. These results have contributed to a better understanding of those aspects of the auditory system that support speech understanding and binaural hearing, suggested vocoder parameters that may simulate aspects of electric hearing, and shown that redundant, low-rate pulse timing supports improved spatial hearing for bilateral CI listeners.
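A bare-bones channel-vocoder sketch in the spirit of the CI simulations described in Study 1 (band-pass analysis, envelope extraction, re-modulation of a carrier); the band edges, filter order, and sine carriers are illustrative assumptions, and no current-spread or carrier-phase manipulation is included:

    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def vocode(x, fs, band_edges_hz):
        """Tiny channel vocoder: filter into bands, take each band's Hilbert envelope,
        and use it to modulate a sine carrier at the band's geometric center frequency."""
        out = np.zeros(len(x))
        t = np.arange(len(x)) / fs
        for lo, hi in band_edges_hz:
            sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
            env = np.abs(hilbert(sosfiltfilt(sos, x)))
            fc = np.sqrt(lo * hi)
            out += env * np.sin(2 * np.pi * fc * t)   # tone carrier; noise carriers are also common
        return out

    fs = 16000
    speech = np.random.default_rng(1).standard_normal(fs)  # placeholder for a 1-s speech signal
    sim = vocode(speech, fs, [(100, 400), (400, 1000), (1000, 2500), (2500, 6000)])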
Spatial separation benefit for unaided and aided listening
Ahlstrom, Jayne B.; Horwitz, Amy R.; Dubno, Judy R.
2013-01-01
Consonant recognition in noise was measured at a fixed signal-to-noise ratio as a function of low-pass-cutoff frequency and noise location in older adults fit with bilateral hearing aids. To quantify age-related differences, spatial benefit was assessed in younger and older adults with normal hearing. Spatial benefit was similar for all groups, suggesting that older adults used interaural difference cues to improve speech recognition in noise equivalently to younger adults. Although amplification was sufficient to increase high-frequency audibility with spatial separation, hearing-aid benefit was minimal, suggesting that factors beyond simple audibility may be responsible for limited hearing-aid benefit. PMID:24121648
Binaural processing of speech in light aircraft.
DOT National Transportation Integrated Search
1972-09-01
Laboratory studies have shown that the human binaural auditory system can extract signals from noise more effectively when the signals (or the noise) are presented in one of several interaurally disparate configurations. Questions arise as to whether...
Kawashima, Takayuki; Sato, Takao
2012-01-01
When a second sound follows a long first sound, its location appears to be perceived away from the first one (the localization/lateralization aftereffect). This aftereffect has often been considered to reflect an efficient neural coding of sound locations in the auditory system. To understand determinants of the localization aftereffect, the current study examined whether it is induced by an interaural temporal difference (ITD) in the amplitude envelope of high frequency transposed tones (over 2 kHz), which is known to function as a sound localization cue. In Experiment 1, participants were required to adjust the position of a pointer to the perceived location of test stimuli before and after adaptation. Test and adapter stimuli were amplitude modulated (AM) sounds presented at high frequencies, and their positional differences were manipulated solely by the envelope ITD. Results showed that the adapter's ITD systematically shifted the perceived position of test sounds in the directions expected from the localization/lateralization aftereffect when the adapter was presented at ±600 µs ITD; a corresponding significant effect was not observed for a 0 µs ITD adapter. In Experiment 2, the observed adapter effect was confirmed using a forced-choice task. It was also found that adaptation to the AM sounds at high frequencies did not significantly change the perceived position of pure-tone test stimuli in the low frequency region (128 and 256 Hz). The findings in the current study indicate that ITD in the envelope at high frequencies induces the localization aftereffect. This suggests that ITD in the high frequency region is involved in adaptive plasticity of auditory localization processing.
Grose, John H; Buss, Emily; Hall, Joseph W
2017-01-01
The purpose of this study was to test the hypothesis that listeners with frequent exposure to loud music exhibit deficits in suprathreshold auditory performance consistent with cochlear synaptopathy. Young adults with normal audiograms were recruited who either did (n = 31) or did not (n = 30) have a history of frequent attendance at loud music venues where the typical sound levels could be expected to result in temporary threshold shifts. A test battery was administered that comprised three sets of procedures: (a) electrophysiological tests including distortion product otoacoustic emissions, auditory brainstem responses, envelope following responses, and the acoustic change complex evoked by an interaural phase inversion; (b) psychoacoustic tests including temporal modulation detection, spectral modulation detection, and sensitivity to interaural phase; and (c) speech tests including filtered phoneme recognition and speech-in-noise recognition. The results demonstrated that a history of loud music exposure can lead to a profile of peripheral auditory function that is consistent with an interpretation of cochlear synaptopathy in humans, namely, modestly abnormal auditory brainstem response Wave I/Wave V ratios in the presence of normal distortion product otoacoustic emissions and normal audiometric thresholds. However, there were no other electrophysiological, psychophysical, or speech perception effects. The absence of any behavioral effects in suprathreshold sound processing indicated that, even if cochlear synaptopathy is a valid pathophysiological condition in humans, its perceptual sequelae are either too diffuse or too inconsequential to permit a simple differential diagnosis of hidden hearing loss.
Four-choice sound localization abilities of two Florida manatees, Trichechus manatus latirostris.
Colbert, Debborah E; Gaspard, Joseph C; Reep, Roger; Mann, David A; Bauer, Gordon B
2009-07-01
The absolute sound localization abilities of two Florida manatees (Trichechus manatus latirostris) were measured using a four-choice discrimination paradigm, with test locations positioned at 45 deg., 90 deg., 270 deg. and 315 deg. angles relative to subjects facing 0 deg. Three broadband signals were tested at four durations (200, 500, 1000, 3000 ms), including a stimulus that spanned a wide range of frequencies (0.2-20 kHz), one stimulus that was restricted to frequencies with wavelengths shorter than their interaural time distances (6-20 kHz) and one that was limited to those with wavelengths longer than their interaural time distances (0.2-2 kHz). Two 3000 ms tonal signals were tested, including a 4 kHz stimulus, which is the midpoint of the 2.5-5.9 kHz fundamental frequency range of manatee vocalizations and a 16 kHz stimulus, which is in the range of manatee best-hearing sensitivity. Percentage correct within the broadband conditions ranged from 79% to 93% for Subject 1 and from 51% to 93% for Subject 2. Both performed above chance with the tonal signals but had much lower accuracy than with broadband signals, with Subject 1 at 44% and 33% and Subject 2 at 49% and 32% at the 4 kHz and 16 kHz conditions, respectively. These results demonstrate that manatees are able to localize frequency bands with wavelengths that are both shorter and longer than their interaural time distances and suggest that they have the ability to localize both manatee vocalizations and recreational boat engine noises.
NASA Technical Reports Server (NTRS)
Beaton, K. H.; Holly, J. E.; Clement, G. R.; Wood, S. J.
2011-01-01
The neural mechanisms to resolve ambiguous tilt-translation motion have been hypothesized to be different for motion perception and eye movements. Previous studies have demonstrated differences in ocular and perceptual responses using a variety of motion paradigms, including Off-Vertical Axis Rotation (OVAR), Variable Radius Centrifugation (VRC), translation along a linear track, and tilt about an Earth-horizontal axis. While the linear acceleration across these motion paradigms is presumably equivalent, there are important differences in semicircular canal cues. The purpose of this study was to compare translation motion perception and horizontal slow phase velocity to quantify consistencies, or lack thereof, across four different motion paradigms. Twelve healthy subjects were exposed to sinusoidal interaural linear acceleration between 0.01 and 0.6 Hz at 1.7 m/s² (equivalent to a 10° tilt) using OVAR, VRC, roll tilt, and lateral translation. During each trial, subjects verbally reported the amount of perceived peak-to-peak lateral translation and indicated the direction of motion with a joystick. Binocular eye movements were recorded using video-oculography. In general, the gain of translation perception (ratio of reported linear displacement to equivalent linear stimulus displacement) increased with stimulus frequency, while the phase did not significantly vary. However, translation perception was more pronounced during both VRC and lateral translation involving actual translation, whereas perceptions were less consistent and more variable during OVAR and roll tilt, which did not involve actual translation. For each motion paradigm, horizontal eye movements were negligible at low frequencies and showed phase lead relative to the linear stimulus. At higher frequencies, the gain of the eye movements increased and became more in phase with the acceleration stimulus. While these results are consistent with the hypothesis that the neural computational strategies for motion perception and eye movements differ, they also indicate that the specific motion platform employed can have a significant effect on both the amplitude and phase of each response.
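For reference, the stated equivalence between the 1.7 m/s² stimulus and a 10° tilt follows from the interaural component of gravity during a static roll tilt (the sine relation is used here; the tangent gives nearly the same value at this angle):

    a = g \sin\theta \approx 9.81\,\mathrm{m/s^2} \times \sin 10^\circ \approx 1.70\,\mathrm{m/s^2}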
The Clinical Utility of Vestibular-Evoked Myogenic Potentials in the Diagnosis of Ménière’s Disease
Maheu, Maxime; Alvarado-Umanzor, Jenny Marylin; Delcenserie, Audrey; Champoux, François
2017-01-01
Ménière’s disease (MD), first described over 150 years ago, involves audiological and vestibular manifestations such as aural fullness, tinnitus, vertigo, and fluctuating hearing thresholds. Over the past few years, many researchers have assessed different techniques to help diagnose this pathology. The vestibular-evoked myogenic potential (VEMP) is an electrophysiological method assessing the saccule (cVEMP) and the utricle (oVEMP). Its clinical utility in the diagnosis of multiple pathologies, such as superior canal dehiscence, has made this tool a common method in otologic clinics. The main objective of the present review is to determine the current state of knowledge of the VEMP in the identification of MD, in particular the type of stimuli, the frequency tuning, and the interaural asymmetry ratio of the cVEMP and the oVEMP. Results show that the type of stimulation, the frequency-sensitivity shift, and the interaural asymmetry ratio (IAR) could be useful tools to diagnose and describe the evolution of MD. It is, however, important to emphasize that further studies are needed to confirm the utility of VEMP in the identification of MD at its early stage, using either bone-conduction vibration or air-conduction stimulation, which is of clinical importance when it comes to early intervention. PMID:28861037
Andrade, Isabel Vaamonde Sanchez; Santos-Perez, Sofia; Diz, Pilar Gayoso; Caballero, Torcuato Labella; Soto-Varela, Andrés
2013-05-01
Bithermal caloric testing and vestibular evoked myogenic potentials (VEMPs) are both diagnostic tools for the study of the vestibular system. The first tests the horizontal semicircular canal and the second evaluates the saccule and lower vestibular nerve. The results of these two tests can therefore be expected to be correlated. The aim of this study was to compare bithermal caloric test results with VEMP records in normal subjects to verify whether they are correlated. A prospective study was conducted in 60 healthy subjects (30 men and 30 women) who underwent otoscopy, pure tone audiometry, bithermal caloric testing and VEMPs. From the caloric test, we assessed the presence of possible vestibular hypofunction, whether there was directional preponderance, and the reflectivity of each ear (all based on both slow phase velocity and nystagmus frequency). The analysed VEMP variables were: p1 and n1 latency, corrected amplitude, interaural p1 latency difference and p1 interaural amplitude asymmetry. We compared the reflectivity, hypofunction and directional preponderance of the caloric tests with the corrected amplitudes and amplitude asymmetries of the VEMPs. No correlations were found in the different comparisons between bithermal caloric testing results and VEMPs, except for a weak correlation (p = 0.039) when comparing preponderance based on nystagmus frequency in the caloric test with amplitude asymmetry for the 99 dB tone burst in the VEMP test. The results indicate that the two diagnostic tests are not comparable, so one cannot replace the other, but the use of both increases diagnostic success in some conditions.
Marquardt, Torsten; Stange, Annette; Pecka, Michael; Grothe, Benedikt; McAlpine, David
2014-01-01
Recently, with the use of an amplitude-modulated binaural beat (AMBB), in which sound amplitude and interaural-phase difference (IPD) were modulated with a fixed mutual relationship (Dietz et al. 2013b), we demonstrated that the human auditory system uses interaural timing differences in the temporal fine structure of modulated sounds only during the rising portion of each modulation cycle. However, the degree to which peripheral or central mechanisms contribute to the observed strong dominance of the rising slope remains to be determined. Here, by recording responses of single neurons in the medial superior olive (MSO) of anesthetized gerbils and in the inferior colliculus (IC) of anesthetized guinea pigs to AMBBs, we report a correlation between the position within the amplitude-modulation (AM) cycle generating the maximum response rate and the position at which the instantaneous IPD dominates the total neural response. The IPD during the rising segment dominates the total response in 78% of MSO neurons and 69% of IC neurons, with responses of the remaining neurons predominantly coding the IPD around the modulation maximum. The observed diversity of dominance regions within the AM cycle, especially in the IC, and its comparison with the human behavioral data suggest that only the subpopulation of neurons with rising slope dominance codes the sound-source location in complex listening conditions. A comparison of two models to account for the data suggests that emphasis on IPDs during the rising slope of the AM cycle depends on adaptation processes occurring before binaural interaction. PMID:24554782
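One common way to construct an amplitude-modulated binaural beat is to offset the carrier frequency across ears by the modulation rate, so the instantaneous IPD sweeps through a full cycle in lock-step with each amplitude-modulation cycle; the sketch below uses illustrative parameters and does not reproduce the exact stimuli of the study:

    import numpy as np

    fs, dur = 48000, 1.0
    t = np.arange(int(fs * dur)) / fs

    fc = 500.0   # carrier frequency at the left ear (illustrative)
    fm = 8.0     # modulation rate = binaural beat rate (illustrative)

    env = 0.5 * (1.0 - np.cos(2 * np.pi * fm * t))   # raised-cosine AM, one cycle per beat
    left = env * np.sin(2 * np.pi * fc * t)
    right = env * np.sin(2 * np.pi * (fc + fm) * t)  # carrier offset by fm, so IPD cycles with the envelope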
Sensitivity to Envelope Interaural Time Differences at High Modulation Rates
Bleeck, Stefan; McAlpine, David
2015-01-01
Sensitivity to interaural time differences (ITDs) conveyed in the temporal fine structure of low-frequency tones and in the modulated envelopes of high-frequency sounds is considered comparable, particularly for envelopes shaped to transmit fidelity of temporal information similar to that normally present for low-frequency sounds. Nevertheless, discrimination performance for envelope modulation rates above a few hundred Hertz is reported to be poor (to the point of discrimination thresholds being unattainable) compared with the much higher (>1,000 Hz) limit for low-frequency ITD sensitivity, suggesting the presence of a low-pass filter in the envelope domain. Further, performance for identical modulation rates appears to decline with increasing carrier frequency, supporting the view that the low-pass characteristics observed for envelope ITD processing are carrier-frequency dependent. Here, we assessed listeners’ sensitivity to ITDs conveyed in pure tones and in the modulated envelopes of high-frequency tones. ITD discrimination for the modulated high-frequency tones was measured as a function of both modulation rate and carrier frequency. Some well-trained listeners appear able to discriminate ITDs extremely well, even at modulation rates well beyond 500 Hz, for 4-kHz carriers. For one listener, thresholds were even obtained for a modulation rate of 800 Hz. The highest modulation rate for which thresholds could be obtained declined with increasing carrier frequency for all listeners. At 10 kHz, the highest modulation rate at which thresholds could be obtained was 600 Hz. The upper limit of sensitivity to ITDs conveyed in the envelope of high-frequency modulated sounds appears to be higher than previously considered. PMID:26721926
Hauth, Christopher F; Brand, Thomas
2018-01-01
In studies investigating binaural processing in human listeners, relatively long and task-dependent time constants of a binaural window ranging from 10 ms to 250 ms have been observed. Such time constants are often thought to reflect "binaural sluggishness." In this study, the effect of binaural sluggishness on binaural unmasking of speech in stationary speech-shaped noise is investigated in 10 listeners with normal hearing. In order to design a masking signal with temporally varying binaural cues, the interaural phase difference of the noise was modulated sinusoidally with frequencies ranging from 0.25 Hz to 64 Hz. The lowest, that is the best, speech reception thresholds (SRTs) were observed for the lowest modulation frequency. SRTs increased with increasing modulation frequency up to 4 Hz. For higher modulation frequencies, SRTs remained constant in the range of 1 dB to 1.5 dB below the SRT determined in the diotic situation. The outcome of the experiment was simulated using a short-term binaural speech intelligibility model, which combines an equalization-cancellation (EC) model with the speech intelligibility index. This model segments the incoming signal into 23.2-ms time frames in order to predict release from masking in modulated noises. In order to predict the results from this study, the model required a further time constant applied to the EC mechanism representing binaural sluggishness. The best agreement with perceptual data was achieved using a temporal window of 200 ms in the EC mechanism.
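A deliberately simplified, frame-based equalization-cancellation sketch in the spirit of the model described; the 23.2-ms frame length is taken from the abstract, but the model's gain/delay jitter, the speech intelligibility index back end, and the 200-ms sluggishness window are omitted:

    import numpy as np

    def ec_frame(left, right, fs, max_delay_s=1e-3):
        """Equalization-cancellation on one frame: find the interaural delay and gain that
        minimize the energy of (left - gain * shifted right) and return that residual,
        which attenuates the dominant (masking) component most. np.roll wraps at the
        frame edges, which is acceptable for a short-frame sketch."""
        best, best_energy = left.copy(), np.inf
        max_lag = int(max_delay_s * fs)
        for lag in range(-max_lag, max_lag + 1):
            shifted = np.roll(right, lag)
            gain = np.dot(left, shifted) / (np.dot(shifted, shifted) + 1e-12)
            residual = left - gain * shifted
            energy = np.dot(residual, residual)
            if energy < best_energy:
                best, best_energy = residual, energy
        return best

    def ec_process(left, right, fs, frame_s=0.0232):
        """Apply EC independently to consecutive non-overlapping 23.2-ms frames."""
        n = int(frame_s * fs)
        frames = [ec_frame(left[i:i + n], right[i:i + n], fs)
                  for i in range(0, len(left) - n + 1, n)]
        return np.concatenate(frames)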
Ozmeral, Erol J; Eddins, David A; Eddins, Ann C
2016-12-01
Previous electrophysiological studies of interaural time difference (ITD) processing have demonstrated that ITDs are represented by a nontopographic population rate code. Rather than narrow tuning to ITDs, neural channels have broad tuning to ITDs in either the left or right auditory hemifield, and the relative activity between the channels determines the perceived lateralization of the sound. With advancing age, spatial perception weakens and poor temporal processing contributes to declining spatial acuity. At present, it is unclear whether age-related temporal processing deficits are due to poor inhibitory controls in the auditory system or degraded neural synchrony at the periphery. Cortical processing of spatial cues based on a hemifield code is susceptible to potential age-related physiological changes. We consider two distinct predictions of age-related changes to ITD sensitivity: declines in inhibitory mechanisms would lead to increased excitation and medial shifts of rate-azimuth functions, whereas a general reduction in neural synchrony would lead to reduced excitation and shallower slopes in the rate-azimuth function. The current study tested these possibilities by measuring an evoked response to ITD shifts in a narrow-band noise. Results were more in line with the latter outcome, both in the measured latencies and amplitudes of the global field potentials and in the source-localized waveforms in the left and right auditory cortices. The measured responses for older listeners also tended to have a reduced asymmetric distribution of activity in response to ITD shifts, which is consistent with other sensory and cognitive processing models of aging. Copyright © 2016 the American Physiological Society.
Impact of monaural frequency compression on binaural fusion at the brainstem level.
Klauke, Isabelle; Kohl, Manuel C; Hannemann, Ronny; Kornagel, Ulrich; Strauss, Daniel J; Corona-Strauss, Farah I
2015-08-01
A classical objective measure of binaural fusion at the brainstem level is the so-called β-wave of the binaural interaction component (BIC) in the auditory brainstem response (ABR). However, in some cases reliable detection of this component remains a challenge. In this study, we investigate the wavelet phase synchronization stability (WPSS) of ABR data for the analysis of binaural fusion and compare it to the BIC. In particular, we examine the impact of monaural nonlinear frequency compression on binaural fusion. As the auditory system is tonotopically organized, an interaural frequency mismatch caused by monaural frequency compression could negatively affect binaural fusion. In this study, only a few subjects showed a detectable β-wave, and in most cases only for low ITDs. However, we present a novel objective measure for binaural fusion that outperforms the current state-of-the-art technique (BIC): the WPSS analysis showed a significant difference between the phase stability of the sum of the monaurally evoked responses and the phase stability of the binaurally evoked ABR. This difference could be an indicator of binaural fusion in the brainstem. Furthermore, we observed that monaural frequency compression could indeed affect binaural fusion, as the WPSS results for this condition differ strongly from the results obtained without frequency compression.
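For orientation, the conventional binaural interaction component against which the WPSS measure is compared is usually defined as the binaural response minus the sum of the monaural responses (a generic definition, not the authors' implementation):

    import numpy as np

    def binaural_interaction_component(abr_binaural, abr_left, abr_right):
        """BIC(t) = binaural ABR - (left monaural ABR + right monaural ABR);
        the beta-wave is a deflection of this difference waveform near wave V latency."""
        return np.asarray(abr_binaural) - (np.asarray(abr_left) + np.asarray(abr_right))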
Can monaural temporal masking explain the ongoing precedence effect?
Freyman, Richard L; Morse-Fortier, Charlotte; Griffin, Amanda M; Zurek, Patrick M
2018-02-01
The precedence effect for transient sounds has been proposed to be based primarily on monaural processes, manifested by asymmetric temporal masking. This study explored the potential for monaural explanations with longer ("ongoing") sounds exhibiting the precedence effect. Transient stimuli were single lead-lag noise burst pairs; ongoing stimuli were trains of 63 burst pairs. Unlike with transients, monaural masking data for ongoing sounds showed no advantage for the lead, and are inconsistent with asymmetric audibility as an explanation for ongoing precedence. This result, along with supplementary measurements of interaural time discrimination, suggests different explanations for transient and ongoing precedence.
Echolocation of insects using intermittent frequency-modulated sounds.
Matsuo, Ikuo; Takanashi, Takuma
2015-09-01
Using echolocation influenced by Doppler shift, bats can capture flying insects in real three-dimensional space. On the basis of this principle, a model that estimates object locations using frequency modulated (FM) sound was proposed. However, no investigation was conducted to verify whether the model can localize flying insects from their echoes. This study applied the model to estimate the range and direction of flying insects by extracting temporal changes from the time-frequency pattern and interaural range difference, respectively. The results obtained confirm that a living insect's position can be estimated using this model with echoes measured while emitting intermittent FM sounds.
Perceptually relevant parameters for virtual listening simulation of small room acoustics
Zahorik, Pavel
2009-01-01
Various physical aspects of room-acoustic simulation techniques have been extensively studied and refined, yet the perceptual attributes of the simulations have received relatively little attention. Here a method of evaluating the perceptual similarity between rooms is described and tested using 15 small-room simulations based on binaural room impulse responses (BRIRs) either measured from a real room or estimated using simple geometrical acoustic modeling techniques. Room size and surface absorption properties were varied, along with aspects of the virtual simulation including the use of individualized head-related transfer function (HRTF) measurements for spatial rendering. Although differences between BRIRs were evident in a variety of physical parameters, a multidimensional scaling analysis revealed that when at-the-ear signal levels were held constant, the rooms differed along just two perceptual dimensions: one related to reverberation time (T60) and one related to interaural coherence (IACC). Modeled rooms were found to differ from measured rooms in this perceptual space, but the differences were relatively small and should be easily correctable through adjustment of T60 and IACC in the model outputs. Results further suggest that spatial rendering using individualized HRTFs offers little benefit over nonindividualized HRTF rendering for room simulation applications where source direction is fixed. PMID:19640043
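As a pointer to what the IACC dimension measures, here is a minimal interaural cross-correlation computation from a BRIR using the customary +/-1 ms lag range (a generic definition with illustrative details, not the paper's implementation):

    import numpy as np

    def iacc(brir_left, brir_right, fs, max_lag_s=1e-3):
        """Interaural cross-correlation coefficient: maximum of the normalized
        cross-correlation of the two ear impulse responses over lags of +/- 1 ms.
        np.roll wraps around, which is negligible for impulse responses that decay
        to near zero."""
        max_lag = int(max_lag_s * fs)
        norm = np.sqrt(np.sum(brir_left ** 2) * np.sum(brir_right ** 2)) + 1e-12
        corr = [np.sum(brir_left * np.roll(brir_right, lag)) / norm
                for lag in range(-max_lag, max_lag + 1)]
        return float(np.max(np.abs(corr)))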
Stellmack, Mark A.; Byrne, Andrew J.; Viemeister, Neal F.
2010-01-01
When different components of a stimulus carry different binaural information, processing of binaural information in a target component is often affected. The present experiments examine whether such interference is affected by amplitude modulation and the relative phase of modulation of the target and distractors. In all experiments, listeners attempted to discriminate interaural time differences of a target stimulus in the presence of distractor stimuli with ITD=0. In Experiment 1, modulation of the distractors but not the target reduced interference between components. In Experiment 2, synthesized musical notes exhibited little binaural interference when there were slight asynchronies between different streams of notes (31 or 62 ms). The remaining experiments suggested that the reduction in binaural interference in the previous experiments was due neither to the complex spectra of the synthesized notes nor to greater detectability of the target in the presence of modulated distractors. These data suggest that this interference is reduced when components are modulated in ways that result in the target appearing briefly in isolation, not because of segregation cues. These data also suggest that modulation and asynchronies between modulators that might be encountered in real-world listening situations are adequate to reduce binaural interference to inconsequential levels. PMID:20815459
Binaural sluggishness in the perception of tone sequences and speech in noise.
Culling, J F; Colburn, H S
2000-01-01
The binaural system is well-known for its sluggish response to changes in the interaural parameters to which it is sensitive. Theories of binaural unmasking have suggested that detection of signals in noise is mediated by detection of differences in interaural correlation. If these theories are correct, improvements in the intelligibility of speech in favorable binaural conditions are most likely mediated by spectro-temporal variations in interaural correlation of the stimulus which mirror the spectro-temporal amplitude modulations of the speech. However, binaural sluggishness should limit the temporal resolution of the representation of speech recovered by this means. The present study tested this prediction in two ways. First, listeners' masked discrimination thresholds for ascending vs descending pure-tone arpeggios were measured as a function of rate of frequency change in the NoSo and NoSpi binaural configurations. Three-tone arpeggios were presented repeatedly and continuously for 1.6 s, masked by a 1.6-s burst of noise. In a two-interval task, listeners determined the interval in which the arpeggios were ascending. The results showed a binaural advantage of 12-14 dB for NoSpi at 3.3 arpeggios per s (arp/s), which reduced to 3-5 dB at 10.4 arp/s. This outcome confirmed that the discrimination of spectro-temporal patterns in noise is susceptible to the effects of binaural sluggishness. Second, listeners' masked speech-reception thresholds were measured in speech-shaped noise using speech which was 1, 1.5, and 2 times the original articulation rate. The articulation rate was increased using a phase-vocoder technique which increased all the modulation frequencies in the speech without altering its pitch. Speech-reception thresholds were, on average, 5.2 dB lower for the NoSpi than for the NoSo configuration, at the original articulation rate. This binaural masking release was reduced to 2.8 dB when the articulation rate was doubled, but the most notable effect was a 6-8 dB increase in thresholds with articulation rate for both configurations. These results suggest that higher modulation frequencies in masked signals cannot be temporally resolved by the binaural system, but that the useful modulation frequencies in speech are sufficiently low (<5 Hz) that they are invulnerable to the effects of binaural sluggishness, even at elevated articulation rates.
Li, Chuan; Han, Lei; Ma, Chun-Wai; Lai, Suk-King; Lai, Chun-Hong; Shum, Daisy Kwok Yan; Chan, Ying-Shing
2013-07-01
Using sinusoidal oscillations of linear acceleration along both the horizontal and vertical planes to stimulate otolith organs in the inner ear, we charted the postnatal time at which responsive neurons in the rat inferior olive (IO) first showed Fos expression, an indicator of neuronal recruitment into the otolith circuit. Neurons in subnucleus dorsomedial cell column (DMCC) were activated by vertical stimulation as early as P9 and by horizontal (interaural) stimulation as early as P11. By P13, neurons in the β subnucleus of IO (IOβ) became responsive to horizontal stimulation along the interaural and antero-posterior directions. By P21, neurons in the rostral IOβ became also responsive to vertical stimulation, but those in the caudal IOβ remained responsive only to horizontal stimulation. Nearly all functionally activated neurons in DMCC and IOβ were immunopositive for the NR1 subunit of the NMDA receptor and the GluR2/3 subunit of the AMPA receptor. In situ hybridization studies further indicated abundant mRNA signals of the glutamate receptor subunits by the end of the second postnatal week. This is reinforced by whole-cell patch-clamp data in which glutamate receptor-mediated miniature excitatory postsynaptic currents of rostral IOβ neurons showed postnatal increase in amplitude, reaching the adult level by P14. Further, these neurons exhibited subthreshold oscillations in membrane potential as from P14. Taken together, our results support that ionotropic glutamate receptors in the IO enable postnatal coding of gravity-related information and that the rostral IOβ is the only IO subnucleus that encodes spatial orientations in 3-D.
Moossavi, Abdollah; Mehrkian, Saiedeh; Lotfi, Yones; Faghihzadeh, Soghrat; Sajedi, Hamed
2014-11-01
Auditory processing disorder (APD) describes a complex and heterogeneous disorder characterized by poor speech perception, especially in noisy environments. APD may be responsible for a range of sensory processing deficits associated with learning difficulties. There is no general consensus about the nature of APD and how the disorder should be assessed or managed. This study assessed the effect of cognitive abilities (working memory capacity) on sound lateralization in children with auditory processing disorders, in order to determine how "auditory cognition" interacts with APD. The participants in this cross-sectional comparative study were 20 typically developing children and 17 children with a diagnosed auditory processing disorder (9-11 years old). Sound lateralization abilities were investigated using inter-aural time (ITD) differences and inter-aural intensity (IID) differences with two stimuli (high-pass and low-pass noise) in nine perceived positions. Working memory capacity was evaluated using non-word repetition and forward and backward digit span tasks. Linear regression was employed to measure the degree of association between working memory capacity and localization tests in the two groups. Children in the APD group had consistently lower scores than typically developing subjects on lateralization and working memory capacity measures. The results showed that working memory capacity had a significant negative correlation with ITD errors, especially with the high-pass noise stimulus, but not with IID errors in APD children. The study highlights the impact of working memory capacity on auditory lateralization. The findings of this research indicate that the extent to which working memory influences auditory processing depends on the type of auditory processing and the nature of the stimulus/listening situation. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Audiometric asymmetry and tinnitus laterality.
Tsai, Betty S; Sweetow, Robert W; Cheung, Steven W
2012-05-01
To identify an optimal audiometric asymmetry index for predicting tinnitus laterality. Retrospective medical record review. Data from adult tinnitus patients (80 men and 44 women) were extracted for demographic, audiometric, tinnitus laterality, and related information. The main measures were sensitivity, specificity, positive predictive value (PPV), and receiver operating characteristic (ROC) curves. Three audiometric asymmetry indices were constructed using one, two, or three frequency elements to compute the average interaural threshold difference (aITD). Tinnitus laterality predictive performance of a particular index was assessed by increasing the cutoff or minimum magnitude of the aITD from 10 to 35 dB in 5-dB steps to determine its ROC curve. Single frequency index performance was inferior to the other two (P < .05). Double and triple frequency indices were indistinguishable (P > .05). Two adjoining frequency elements with aITD ≥ 15 dB performed optimally for predicting tinnitus laterality (sensitivity = 0.59, specificity = 0.71, and PPV = 0.76). Absolute and relative magnitudes of hearing loss in the poorer ear were uncorrelated with tinnitus distress. An optimal audiometric asymmetry index to predict tinnitus laterality is one whereby 15 dB is the minimum aITD of two adjoining frequencies, inclusive of the maximal ITD. Tinnitus laterality dependency on magnitude of interaural asymmetry may inform design and interpretation of neuroimaging studies. Monaural acoustic tinnitus therapy may be an initial consideration for asymmetric hearing loss meeting the criterion of aITD ≥ 15 dB. Copyright © 2012 The American Laryngological, Rhinological, and Otological Society, Inc.
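A small sketch of the kind of two-frequency index the abstract describes: average the interaural threshold differences of a pair of adjoining audiometric frequencies that includes the largest single-frequency difference, and flag asymmetry at a 15-dB cutoff. The frequency list and data layout are illustrative assumptions:

    import numpy as np

    FREQS_HZ = [250, 500, 1000, 2000, 4000, 8000]  # illustrative audiometric frequencies

    def two_frequency_aitd(left_thresholds_db, right_thresholds_db):
        """Average interaural threshold difference (dB) over the adjoining frequency pair
        that includes the frequency with the largest single-frequency difference."""
        diffs = np.abs(np.asarray(left_thresholds_db) - np.asarray(right_thresholds_db))
        k = int(np.argmax(diffs))
        neighbors = [j for j in (k - 1, k + 1) if 0 <= j < len(diffs)]
        j = max(neighbors, key=lambda idx: diffs[idx])   # adjoining frequency with the larger difference
        return float((diffs[k] + diffs[j]) / 2.0)

    def asymmetric(left_thresholds_db, right_thresholds_db, cutoff_db=15.0):
        """Apply the 15-dB aITD criterion for asymmetric hearing loss."""
        return two_frequency_aitd(left_thresholds_db, right_thresholds_db) >= cutoff_db

    # Example: a high-frequency asymmetry
    left, right = [10, 10, 15, 20, 45, 60], [10, 10, 10, 15, 20, 25]
    print(two_frequency_aitd(left, right), asymmetric(left, right))  # 30.0 True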
Functional relevance of acoustic tracheal design in directional hearing in crickets.
Schmidt, Arne K D; Römer, Heiner
2016-10-15
Internally coupled ears (ICEs) allow small animals to reliably determine the direction of a sound source. ICEs are found in a variety of taxa, but crickets have evolved the most complex arrangement of coupled ears: an acoustic tracheal system composed of a large cross-body trachea that connects two entry points for sound in the thorax with the leg trachea of both ears. The key structure that allows for the tuned directionality of the ear is a tracheal inflation (acoustic vesicle) in the midline of the cross-body trachea holding a thin membrane (septum). Crickets are known to display a wide variety of acoustic tracheal morphologies, most importantly with respect to the presence of a single or double acoustic vesicle. However, the functional relevance of this variation is still not known. In this study, we investigated the peripheral directionality of three co-occurring, closely related cricket species of the subfamily Gryllinae. No support could be found for the hypothesis that a double vesicle should be regarded as an evolutionary innovation to (1) increase interaural directional cues, (2) increase the selectivity of the directional filter or (3) provide a better match between directional and sensitivity tuning. Nonetheless, by manipulating the double acoustic vesicle in the rainforest cricket Paroecanthus podagrosus, selectively eliminating the sound-transmitting pathways, we revealed that these pathways contribute almost equally to the total amount of interaural intensity differences, emphasizing their functional relevance in the system. © 2016. Published by The Company of Biologists Ltd.
Schwartz, Andrew H; Shinn-Cunningham, Barbara G
2013-04-01
Many hearing aids introduce compressive gain to accommodate the reduced dynamic range that often accompanies hearing loss. However, natural sounds produce complicated temporal dynamics in hearing aid compression, as gain is driven by whichever source dominates at a given moment. Moreover, independent compression at the two ears can introduce fluctuations in interaural level differences (ILDs) important for spatial perception. While independent compression can interfere with spatial perception of sound, it does not always interfere with localization accuracy or speech identification. Here, normal-hearing listeners reported a target message played simultaneously with two spatially separated masker messages. We measured the amount of spatial separation required between the target and maskers for subjects to perform at threshold in this task. Fast, syllabic compression that was independent at the two ears increased the required spatial separation, but linking the compressors to provide identical gain to both ears (preserving ILDs) restored much of the deficit caused by fast, independent compression. Effects were less clear for slower compression. Percent-correct performance was lower with independent compression, but only for small spatial separations. These results may help explain differences in previous reports of the effect of compression on spatial perception of sound.
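A toy wide-dynamic-range compression sketch contrasting independent and linked gain control across ears; the single broadband band, one-pole level estimator, threshold, and ratio are arbitrary simplifications of real hearing-aid processing. Linking drives both ears with the level of the higher-level ear, so the ILD is preserved:

    import numpy as np

    def envelope_db(x, fs, tau_s=0.01):
        """One-pole envelope follower returned in dB (toy level estimator)."""
        alpha = np.exp(-1.0 / (tau_s * fs))
        env, e = np.empty(len(x)), 0.0
        for i, v in enumerate(np.abs(x)):
            e = alpha * e + (1 - alpha) * v
            env[i] = e
        return 20 * np.log10(env + 1e-9)

    def compress(left, right, fs, threshold_db=-40.0, ratio=3.0, linked=True):
        """Apply the same compressive rule to both ears; if linked, the gain is driven by
        the per-sample maximum of the two ear levels, which keeps the ILD intact."""
        lev_l, lev_r = envelope_db(left, fs), envelope_db(right, fs)
        drive = np.maximum(lev_l, lev_r)
        drive_l, drive_r = (drive, drive) if linked else (lev_l, lev_r)

        def gain(level_db):
            over = np.maximum(level_db - threshold_db, 0.0)
            return 10 ** (-(over * (1 - 1 / ratio)) / 20.0)

        return left * gain(drive_l), right * gain(drive_r)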
Evolutionary trends in directional hearing
Carr, Catherine E.; Christensen-Dalsgaard, Jakob
2016-01-01
Tympanic hearing is a true evolutionary novelty that arose in parallel within early tetrapods. We propose that in these tetrapods, selection for sound localization in air acted upon pre-existing directionally sensitive brainstem circuits, similar to those in fishes. Auditory circuits in birds and lizards resemble this ancestral, directionally sensitive framework. Despite this anatomical similarity, coding of sound source location differs between birds and lizards. In birds, brainstem circuits compute sound location from interaural cues. Lizards, however, have coupled ears, and do not need to compute source location in the brain. Thus their neural processing of sound direction differs, although all show mechanisms for enhancing sound source directionality. Comparisons with mammals reveal similarly complex interactions between coding strategies and evolutionary history. PMID:27448850
Effect of source location and listener location on ILD cues in a reverberant room
NASA Astrophysics Data System (ADS)
Ihlefeld, Antje; Shinn-Cunningham, Barbara G.
2004-05-01
Short-term interaural level differences (ILDs) were analyzed for simulations of the signals that would reach a listener in a reverberant room. White noise was convolved with manikin head-related impulse responses measured in a classroom to simulate different locations of the source relative to the manikin and different manikin positions in the room. The ILDs of the signals were computed within each third-octave band over a relatively short time window to investigate how reliably ILD cues encode source laterality. Overall, the mean of the ILD magnitude increases with lateral angle and decreases with distance, as expected. Increasing reverberation decreases the mean ILD magnitude and increases the variance of the short-term ILD, so that the spatial information carried by ILD cues is degraded by reverberation. These results suggest that the mean ILD is not a reliable cue for determining source laterality in a reverberant room. However, by taking into account both the mean and variance, the distribution of high-frequency short-term ILDs provides some spatial information. This analysis suggests that, in order to use ILDs to judge source direction in reverberant space, listeners must accumulate information about how the short-term ILD varies over time. [Work supported by NIDCD and AFOSR.]
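A compact sketch of the short-term, third-octave-band ILD analysis described; the window length, filter order, and band definition are illustrative and do not reproduce the study's exact parameters:

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def short_term_ild(left, right, fs, center_hz, win_s=0.02):
        """Band-pass both ear signals into one third-octave band, compute the ILD
        (dB, left re right) from RMS levels in consecutive short windows, and return
        the mean and variance of the short-term ILD, the two statistics the analysis
        rests on."""
        lo, hi = center_hz / 2 ** (1 / 6), center_hz * 2 ** (1 / 6)
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        bl, br = sosfiltfilt(sos, left), sosfiltfilt(sos, right)
        n = int(win_s * fs)
        ilds = []
        for i in range(0, len(bl) - n + 1, n):
            rms_l = np.sqrt(np.mean(bl[i:i + n] ** 2)) + 1e-12
            rms_r = np.sqrt(np.mean(br[i:i + n] ** 2)) + 1e-12
            ilds.append(20 * np.log10(rms_l / rms_r))
        ilds = np.array(ilds)
        return ilds.mean(), ilds.var()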
Individual Differences Reveal Correlates of Hidden Hearing Deficits
Masud, Salwa; Mehraei, Golbarg; Verhulst, Sarah; Shinn-Cunningham, Barbara G.
2015-01-01
Clinical audiometry has long focused on determining the detection thresholds for pure tones, which depend on intact cochlear mechanics and hair cell function. Yet many listeners with normal hearing thresholds complain of communication difficulties, and the causes for such problems are not well understood. Here, we explore whether normal-hearing listeners exhibit such suprathreshold deficits, affecting the fidelity with which subcortical areas encode the temporal structure of clearly audible sound. Using an array of measures, we evaluated a cohort of young adults with thresholds in the normal range to assess both cochlear mechanical function and temporal coding of suprathreshold sounds. Listeners differed widely in both electrophysiological and behavioral measures of temporal coding fidelity. These measures correlated significantly with each other. Conversely, these differences were unrelated to the modest variation in otoacoustic emissions, cochlear tuning, or the residual differences in hearing threshold present in our cohort. Electroencephalography revealed that listeners with poor subcortical encoding had poor cortical sensitivity to changes in interaural time differences, which are critical for localizing sound sources and analyzing complex scenes. These listeners also performed poorly when asked to direct selective attention to one of two competing speech streams, a task that mimics the challenges of many everyday listening environments. Together with previous animal and computational models, our results suggest that hidden hearing deficits, likely originating at the level of the cochlear nerve, are part of “normal hearing.” PMID:25653371
Multisensory guidance of orienting behavior.
Maier, Joost X; Groh, Jennifer M
2009-12-01
We use both vision and audition when localizing objects and events in our environment. However, these sensory systems receive spatial information in different coordinate systems: sounds are localized using inter-aural and spectral cues, yielding a head-centered representation of space, whereas the visual system uses an eye-centered representation of space, based on the site of activation on the retina. In addition, the visual system employs a place-coded, retinotopic map of space, whereas the auditory system's representational format is characterized by broad spatial tuning and a lack of topographical organization. A common view is that the brain needs to reconcile these differences in order to control behavior, such as orienting gaze to the location of a sound source. To accomplish this, it seems that either auditory spatial information must be transformed from a head-centered rate code to an eye-centered map to match the frame of reference used by the visual system, or vice versa. Here, we review a number of studies that have focused on the neural basis underlying such transformations in the primate auditory system. Although these studies have found some evidence for such transformations, many differences in the way the auditory and visual systems encode space exist throughout the auditory pathway. We will review these differences at the neural level, and will discuss them in relation to differences in the way auditory and visual information is used in guiding orienting movements.
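In the simplest one-dimensional case, the reference-frame transformation discussed here reduces to combining the head-centered sound direction with the current eye-in-head position; the sketch below is only that idealized arithmetic, not a model of the rate-to-place recoding the review describes.

```python
# Sketch: converting a head-centred sound azimuth to an eye-centred one
# by subtracting current eye-in-head position (simplified 1-D case).
def head_to_eye_centered(sound_azimuth_deg, eye_position_deg):
    """Eye-centred azimuth = head-centred azimuth minus eye-in-head azimuth."""
    return sound_azimuth_deg - eye_position_deg

# A sound 20 deg right of the head, with the eyes already 15 deg right,
# lies only 5 deg right of the current gaze direction.
print(head_to_eye_centered(20.0, 15.0))   # -> 5.0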
Subliminal speech perception and auditory streaming.
Dupoux, Emmanuel; de Gardelle, Vincent; Kouider, Sid
2008-11-01
Current theories of consciousness assume a qualitative dissociation between conscious and unconscious processing: while subliminal stimuli only elicit a transient activity, supraliminal stimuli have long-lasting influences. Nevertheless, the existence of this qualitative distinction remains controversial, as past studies confounded awareness and stimulus strength (energy, duration). Here, we used a masked speech priming method in conjunction with a submillisecond interaural delay manipulation to contrast subliminal and supraliminal processing at constant prime, mask and target strength. This delay induced a perceptual streaming effect, with the prime popping out in the supraliminal condition. By manipulating the prime-target interval (ISI), we show a qualitatively distinct profile of priming longevity as a function of prime awareness. While subliminal priming disappeared after half a second, supraliminal priming was independent of ISI. This shows that the distinction between conscious and unconscious processing depends on high-level perceptual streaming factors rather than low-level features (energy, duration).
ERIC Educational Resources Information Center
Passow, Susanne; Müller, Maike; Westerhausen, René; Hugdahl, Kenneth; Wartenburger, Isabell; Heekeren, Hauke R.; Lindenberger, Ulman; Li, Shu-Chen
2013-01-01
Multitalker situations confront listeners with a plethora of competing auditory inputs, and hence require selective attention to relevant information, especially when the perceptual saliency of distracting inputs is high. This study augmented the classical forced-attention dichotic listening paradigm by adding an interaural intensity manipulation…
Yakushin, Sergei B; Bukharina, Svetlana E; Raphan, Theodore; Buttner-Ennever, Jean; Cohen, Bernard
2003-10-01
Alterations in the gain of the vertical angular vestibulo-ocular reflex (VOR) are dependent on the head position in which the gain changes were produced. We determined how long gravity-dependent gain changes last in monkeys after four hours of adaptation, and whether the adaptation is mediated through the nodulus and uvula of the vestibulocerebellum. Vertical VOR gains were adaptively modified by rotation about an interaural axis, in phase or out of phase with the visual surround. Vertical VOR gains were modified with the animals in one of three orientations: upright, left-side down, or right-side down. Monkeys were tested in darkness for up to four days after adaptation using sinusoidal rotation about an interaural axis that was incrementally tilted in 10 degrees steps from vertical to side down positions. Animals were unrestrained in their cages in normal light conditions between tests. Gravity-dependent gain changes lasted for a day or less after adaptation while upright, but persisted for two days or more after on-side adaptation. These data show that gravity-dependent gain changes can last for prolonged periods after only four hours of adaptation in monkeys, as in humans. They also demonstrate that natural head movements made while upright do not provide an adequate stimulus for rapid recovery of vertical VOR gains that were induced on side. In two animals, the nodulus and uvula were surgically ablated. Vertical gravity-dependent gain changes were not significantly different before and after surgery, indicating that the nodulus and uvula do not have a critical role in producing them.
The Balance of Excitatory and Inhibitory Synaptic Inputs for Coding Sound Location
Ono, Munenori
2014-01-01
The localization of high-frequency sounds in the horizontal plane uses an interaural-level difference (ILD) cue, yet little is known about the synaptic mechanisms that underlie processing this cue in the inferior colliculus (IC) of mouse. Here, we study the synaptic currents that process ILD in vivo and use stimuli in which ILD varies around a constant average binaural level (ABL) to approximate sounds on the horizontal plane. Monaural stimulation in either ear produced EPSCs and IPSCs in most neurons. The temporal properties of monaural responses were well matched, suggesting connected functional zones with matched inputs. The EPSCs had three patterns in response to ABL stimuli, preference for the sound field with the highest level stimulus: (1) contralateral; (2) bilateral highly lateralized; or (3) at the center near 0 ILD. EPSCs and IPSCs were well correlated except in center-preferred neurons. Summation of the monaural EPSCs predicted the binaural excitatory response but less well than the summation of monaural IPSCs. Binaural EPSCs often showed a nonlinearity that strengthened the response to specific ILDs. Extracellular spike and intracellular current recordings from the same neuron showed that the ILD tuning of the spikes was sharper than that of the EPSCs. Thus, in the IC, balanced excitatory and inhibitory inputs may be a general feature of synaptic coding for many types of sound processing. PMID:24599475
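A stimulus set in which ILD varies around a constant ABL can be written down directly; the sketch below assumes the common convention of splitting the ILD symmetrically about the ABL, with an arbitrary 50-dB ABL and a ±30-dB ILD range (values not taken from the study).

```python
# Sketch: per-ear levels for an ILD series around a constant average
# binaural level (ABL); the sign convention (contra minus ipsi) is assumed.
import numpy as np

def ear_levels(abl_db, ild_db):
    contra = abl_db + ild_db / 2.0
    ipsi = abl_db - ild_db / 2.0
    return contra, ipsi

for ild in np.arange(-30, 31, 10):
    print(ild, ear_levels(50.0, ild))   # ABL fixed at 50 dB (assumed)
```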
ERIC Educational Resources Information Center
Huang, Ying; Huang, Qiang; Chen, Xun; Wu, Xihong; Li, Liang
2009-01-01
Perceptual integration of the sound directly emanating from the source with reflections needs both temporal storage and correlation computation of acoustic details. We examined whether the temporal storage is frequency dependent and associated with speech unmasking. In Experiment 1, a break in correlation (BIC) between interaurally correlated…
Davis, Kevin A; Lomakin, Oleg; Pesavento, Michael J
2007-09-01
The dorsal nucleus of the lateral lemniscus (DNLL) receives afferent inputs from many brain stem nuclei and, in turn, is a major source of inhibitory inputs to the inferior colliculus (IC). The goal of this study was to characterize the monaural and binaural response properties of neurons in the DNLL of unanesthetized decerebrate cat. Monaural responses were classified according to the patterns of excitation and inhibition observed in contralateral and ipsilateral frequency response maps. Binaural classification was based on unit sensitivity to interaural level differences. The results show that units in the DNLL can be grouped into three distinct types. Type v units produce contralateral response maps that show a wide V-shaped excitatory area and no inhibition. These units receive ipsilateral excitation and exhibit binaural facilitation. The contralateral maps of type i units show a more restricted I-shaped region of excitation that is flanked by inhibition. Type o maps display an O-shaped island of excitation at low stimulus levels that is bounded by inhibition at higher levels. Both type i and type o units receive ipsilateral inhibition and exhibit binaural inhibition. Units that produce type v maps have a low best frequency (BF), whereas type i and type o units have high BFs. Type v and type i units give monotonic rate-level responses for both BF tones and broadband noise. Type o units are inhibited by tones at high levels, but are excited by high-level noise. These results show that the DNLL can exert strong, differential effects in the IC.
Exploring the additivity of binaural and monaural masking release
Hall, Joseph W.; Buss, Emily; Grose, John H.
2011-01-01
Experiment 1 examined comodulation masking release (CMR) for a 700-Hz tonal signal under conditions of NoSo (noise and signal interaurally in phase) and NoSπ (noise in phase, signal out of phase) stimulation. The baseline stimulus for CMR was either a single 24-Hz wide narrowband noise centered on the signal frequency [on-signal band (OSB)] or the OSB plus, a set of flanking noise bands having random envelopes. Masking noise was either gated or continuous. The CMR, defined with respect to either the OSB or the random noise baseline, was smaller for NoSπ than NoSo stimulation, particularly when the masker was continuous. Experiment 2 examined whether the same pattern of results would be obtained for a 2000-Hz signal frequency; the number of flanking bands was also manipulated (two versus eight). Results again showed smaller CMR for NoSπ than NoSo stimulation for both continuous and gated masking noise. The CMR was larger with eight than with two flanking bands, and this difference was greater for NoSo than NoSπ. The results of this study are compatible with serial mechanisms of binaural and monaural masking release, but they indicate that the combined masking release (binaural masking-level difference and CMR) falls short of being additive. PMID:21476663
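The two interaural configurations used here differ only in the sign of the tonal signal at one ear, while the masking noise is identical at the two ears. The sketch below uses a broadband diotic masker for simplicity (the experiment used narrowband and flanking-band maskers); levels and duration are arbitrary assumptions.

```python
# Sketch: NoSo vs NoSpi stimuli -- the same noise to both ears, with a
# 700-Hz signal either in phase (So) or polarity-inverted in one ear (Spi).
import numpy as np

fs, dur, f_sig = 44100, 0.3, 700.0
t = np.arange(int(fs * dur)) / fs
noise = 0.1 * np.random.randn(len(t))            # diotic masker (No)
signal = 0.05 * np.sin(2 * np.pi * f_sig * t)

left_NoSo, right_NoSo = noise + signal, noise + signal
left_NoSpi, right_NoSpi = noise + signal, noise - signal   # signal inverted in one ear
```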
Sensitivity to binaural timing in bilateral cochlear implant users.
van Hoesel, Richard J M
2007-04-01
Various measures of binaural timing sensitivity were made in three bilateral cochlear implant users, who had demonstrated moderate-to-good interaural time delay (ITD) sensitivity at 100 pulses-per-second (pps). Overall, ITD thresholds increased at higher pulse rates, lower levels, and shorter durations, although intersubject differences were evident. Monaural rate-discrimination thresholds, using the same stimulation parameters, showed more substantial elevation than ITDs with increased rate. ITD sensitivity with 6000 pps stimuli, amplitude-modulated at 100 Hz, was similar to that with unmodulated pulse trains at 100 pps, but at 200 and 300 Hz performance was poorer than with unmodulated signals. Measures of sensitivity to binaural beats with unmodulated pulse-trains showed that all three subjects could use time-varying ITD cues at 100 pps, but not 300 pps, even though static ITD sensitivity was relatively unaffected over that range. The difference between static and dynamic ITD thresholds is discussed in terms of relative contributions from initial and later arriving cues, which was further examined in an experiment using two-pulse stimuli as a function of interpulse separation. In agreement with the binaural-beat data, findings from that experiment showed poor discrimination of ITDs on the second pulse when the interval between pulses was reduced to a few milliseconds.
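A stimulus of the kind described, a high-rate pulse train carrying a slower amplitude modulation with a whole-waveform interaural delay, might be sketched as follows; the modulation shape, modulation depth, and the 400-µs ITD are illustrative assumptions rather than the study's parameters.

```python
# Sketch: 6000-pps pulse train, amplitude-modulated at 100 Hz, with an ITD
# applied as a whole-waveform delay to one ear (parameters assumed).
import numpy as np

fs, dur = 48000, 0.3
rate, fmod, itd_us = 6000, 100.0, 400.0
n = int(fs * dur)
t = np.arange(n) / fs

period = fs // rate                      # 8 samples between pulses at 6000 pps
pulses = np.zeros(n)
pulses[::period] = 1.0
envelope = 0.5 * (1 + np.sin(2 * np.pi * fmod * t))   # 100-Hz modulator
left = pulses * envelope

shift = int(round(itd_us * 1e-6 * fs))   # ITD in samples; left ear leads
right = np.roll(left, shift)
right[:shift] = 0.0
```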
Whiteford, Kelly L.; Oxenham, Andrew J.
2015-01-01
The question of how frequency is coded in the peripheral auditory system remains unresolved. Previous research has suggested that slow rates of frequency modulation (FM) of a low carrier frequency may be coded via phase-locked temporal information in the auditory nerve, whereas FM at higher rates and/or high carrier frequencies may be coded via a rate-place (tonotopic) code. This hypothesis was tested in a cohort of 100 young normal-hearing listeners by comparing individual sensitivity to slow-rate (1-Hz) and fast-rate (20-Hz) FM at a carrier frequency of 500 Hz with independent measures of phase-locking (using dynamic interaural time difference, ITD, discrimination), level coding (using amplitude modulation, AM, detection), and frequency selectivity (using forward-masking patterns). All FM and AM thresholds were highly correlated with each other. However, no evidence was obtained for stronger correlations between measures thought to reflect phase-locking (e.g., slow-rate FM and ITD sensitivity), or between measures thought to reflect tonotopic coding (fast-rate FM and forward-masking patterns). The results suggest that either psychoacoustic performance in young normal-hearing listeners is not limited by peripheral coding, or that similar peripheral mechanisms limit both high- and low-rate FM coding. PMID:26627783
Borisyuk, Alla; Semple, Malcolm N; Rinzel, John
2002-10-01
A mathematical model was developed for exploring the sensitivity of low-frequency inferior colliculus (IC) neurons to interaural phase disparity (IPD). The formulation involves a firing-rate-type model that does not include spikes per se. The model IC neuron receives IPD-tuned excitatory and inhibitory inputs (viewed as the output of a collection of cells in the medial superior olive). The model cell possesses cellular properties of firing rate adaptation and postinhibitory rebound (PIR). The descriptions of these mechanisms are biophysically reasonable, but only semi-quantitative. We seek to explain within a minimal model the experimentally observed mismatch between responses to IPD stimuli delivered dynamically and those delivered statically (McAlpine et al. 2000; Spitzer and Semple 1993). The model reproduces many features of the responses to static IPD presentations, binaural beat, and partial range sweep stimuli. These features include differences in responses to a stimulus presented in static or dynamic context: sharper tuning and phase shifts in response to binaural beats, and hysteresis and "rise-from-nowhere" in response to partial range sweeps. Our results suggest that dynamic response features are due to the structure of inputs and the presence of firing rate adaptation and PIR mechanism in IC cells, but do not depend on a specific biophysical mechanism. We demonstrate how the model's various components contribute to shaping the observed phenomena. For example, adaptation, PIR, and transmission delay shape phase advances and delays in responses to binaural beats, adaptation and PIR shape hysteresis in different ranges of IPD, and tuned inhibition underlies asymmetry in dynamic tuning properties. We also suggest experiments to test our modeling predictions: in vitro simulation of the binaural beat (phase advance at low beat frequencies, its dependence on firing rate), in vivo partial range sweep experiments (dependence of the hysteresis curve on parameters), and inhibition blocking experiments (to study inhibitory tuning properties by observation of phase shifts).
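A minimal firing-rate caricature of the ingredients named here, tuned excitation and inhibition combined with firing-rate adaptation and post-inhibitory rebound (PIR), is sketched below. The equations, time constants, and input waveforms are my own simplifications for illustration and are not the published model.

```python
# Minimal firing-rate caricature with adaptation and post-inhibitory
# rebound (PIR); not the published model.
import numpy as np

dt, T = 0.1, 500.0                          # ms
time = np.arange(0.0, T, dt)

tau_r, tau_a, tau_p = 5.0, 100.0, 30.0      # rate, adaptation, rebound time constants (ms)
g_a, g_p = 1.0, 1.5                         # adaptation and rebound strengths

def relu(x):
    return max(x, 0.0)

# Square-pulse excitatory and inhibitory drives (stand-ins for IPD-tuned inputs)
exc = np.where((time > 100) & (time < 300), 1.0, 0.0)
inh = np.where((time > 150) & (time < 250), 1.5, 0.0)

r = a = p = 0.0
rates = []
for e, i in zip(exc, inh):
    p += dt / tau_p * (i - p)               # rebound variable charges during inhibition
    drive = e - i - g_a * a + g_p * relu(p - i)   # rebound acts once inhibition is released
    r += dt / tau_r * (-r + relu(drive))
    a += dt / tau_a * (-a + r)              # slow firing-rate adaptation
    rates.append(r)

print("peak rate (a.u.):", max(rates))
```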
Franken, Tom P.; Bremen, Peter; Joris, Philip X.
2014-01-01
Coincidence detection by binaural neurons in the medial superior olive underlies sensitivity to interaural time difference (ITD) and interaural correlation (ρ). It is unclear whether this process is akin to a counting of individual coinciding spikes, or rather to a correlation of membrane potential waveforms resulting from converging inputs from each side. We analyzed spike trains of axons of the cat trapezoid body (TB) and auditory nerve (AN) in a binaural coincidence scheme. ITD was studied by delaying “ipsi-” vs. “contralateral” inputs; ρ was studied by using responses to different noises. We varied the number of inputs; the monaural and binaural threshold and the coincidence window duration. We examined physiological plausibility of output “spike trains” by comparing their rate and tuning to ITD and ρ to those of binaural cells. We found that multiple inputs are required to obtain a plausible output spike rate. In contrast to previous suggestions, monaural threshold almost invariably needed to exceed binaural threshold. Elevation of the binaural threshold to values larger than 2 spikes caused a drastic decrease in rate for a short coincidence window. Longer coincidence windows allowed a lower number of inputs and higher binaural thresholds, but decreased the depth of modulation. Compared to AN fibers, TB fibers allowed higher output spike rates for a low number of inputs, but also generated more monaural coincidences. We conclude that, within the parameter space explored, the temporal patterns of monaural fibers require convergence of multiple inputs to achieve physiological binaural spike rates; that monaural coincidences have to be suppressed relative to binaural ones; and that the neuron has to be sensitive to single binaural coincidences of spikes, for a number of excitatory inputs per side of 10 or less. These findings suggest that the fundamental operation in the mammalian binaural circuit is coincidence counting of single binaural input spikes. PMID:24822037
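The coincidence-counting bookkeeping analyzed here can be caricatured with pooled spike trains and fixed spike-count thresholds: an output spike requires contributions from both sides reaching a low binaural threshold, while one side alone only suffices at a higher monaural count. The sketch below uses Poisson surrogates rather than recorded TB/AN spike trains, and all parameter values are illustrative assumptions.

```python
# Toy coincidence-count scheme on pooled Poisson inputs (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
dur, rate, n_fibers = 1.0, 100.0, 5       # s, spikes/s per fiber, fibers per side
win = 0.0005                              # 0.5-ms coincidence window (assumed)
thr_bin, thr_mon = 2, 4                   # binaural vs. (higher) monaural count thresholds

def pooled_poisson(rate, dur, n):
    """Pooled spike times of n independent Poisson fibers."""
    return np.sort(rng.uniform(0, dur, rng.poisson(rate * dur * n)))

ipsi = pooled_poisson(rate, dur, n_fibers)
contra = pooled_poisson(rate, dur, n_fibers)

edges = np.arange(0.0, dur + win, win)
ci = np.histogram(ipsi, edges)[0]         # per-bin counts, ipsilateral pool
cc = np.histogram(contra, edges)[0]       # per-bin counts, contralateral pool

binaural = (ci > 0) & (cc > 0) & (ci + cc >= thr_bin)   # single binaural coincidences count
monaural = (ci >= thr_mon) | (cc >= thr_mon)            # monaural coincidences suppressed
print("output rate (sp/s):", (binaural | monaural).sum() / dur)
```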
Comparative physiology of sound localization in four species of owls.
Volman, S F; Konishi, M
1990-01-01
Bilateral ear asymmetry is found in some, but not all, species of owls. We investigated the neural basis of sound localization in symmetrical and asymmetrical species, to deduce how ear asymmetry might have evolved from the ancestral condition, by comparing the response properties of neurons in the external nucleus of the inferior colliculus (ICx) of the symmetrical burrowing owl and asymmetrical long-eared owl with previous findings in the symmetrical great horned owl and asymmetrical barn owl. In the ICx of all of these owls, the neurons had spatially restricted receptive fields, and auditory space was topographically mapped. In the symmetrical owls, ICx units were not restricted in elevation, and only azimuth was mapped in ICx. In the barn owl, the space map is two-dimensional, with elevation forming the second dimension. Receptive fields in the long-eared owl were somewhat restricted in elevation, but their tuning was not sharp enough to determine if elevation is mapped. In every species, the primary cue for azimuth was interaural time difference, although ICx units were also tuned for interaural intensity difference (IID). In the barn owl, the IIDs of sounds with frequencies between about 5 and 8 kHz vary systematically with elevation, and the IID selectivity of ICx neurons primarily encodes elevation. In the symmetrical owls, whose ICx neurons do not respond to frequencies above about 5 kHz, IID appears to be a supplementary cue for azimuth. We hypothesize that ear asymmetry can be exploited by owls that have evolved the higher-frequency hearing necessary to generate elevation cues. Thus, the IID selectivity of ICx neurons in symmetrical owls may preadapt them for asymmetry; the neural circuitry that underlies IID selectivity is already present in symmetrical owls, but because IID is not absolutely required to encode azimuth it can come to encode elevation in asymmetrical owls.
Individual differences reveal correlates of hidden hearing deficits.
Bharadwaj, Hari M; Masud, Salwa; Mehraei, Golbarg; Verhulst, Sarah; Shinn-Cunningham, Barbara G
2015-02-04
Clinical audiometry has long focused on determining the detection thresholds for pure tones, which depend on intact cochlear mechanics and hair cell function. Yet many listeners with normal hearing thresholds complain of communication difficulties, and the causes for such problems are not well understood. Here, we explore whether normal-hearing listeners exhibit such suprathreshold deficits, affecting the fidelity with which subcortical areas encode the temporal structure of clearly audible sound. Using an array of measures, we evaluated a cohort of young adults with thresholds in the normal range to assess both cochlear mechanical function and temporal coding of suprathreshold sounds. Listeners differed widely in both electrophysiological and behavioral measures of temporal coding fidelity. These measures correlated significantly with each other. Conversely, these differences were unrelated to the modest variation in otoacoustic emissions, cochlear tuning, or the residual differences in hearing threshold present in our cohort. Electroencephalography revealed that listeners with poor subcortical encoding had poor cortical sensitivity to changes in interaural time differences, which are critical for localizing sound sources and analyzing complex scenes. These listeners also performed poorly when asked to direct selective attention to one of two competing speech streams, a task that mimics the challenges of many everyday listening environments. Together with previous animal and computational models, our results suggest that hidden hearing deficits, likely originating at the level of the cochlear nerve, are part of "normal hearing." Copyright © 2015 the authors 0270-6474/15/352161-12$15.00/0.
NASA Astrophysics Data System (ADS)
Martens, William
2005-04-01
Several attributes of auditory spatial imagery associated with stereophonic sound reproduction are strongly modulated by variation in interaural cross correlation (IACC) within low frequency bands. Nonetheless, a standard practice in bass management for two-channel and multichannel loudspeaker reproduction is to mix low-frequency musical content to a single channel for reproduction via a single driver (e.g., a subwoofer). This paper reviews the results of psychoacoustic studies which support the conclusion that reproduction via multiple drivers of decorrelated low-frequency signals significantly affects such important spatial attributes as auditory source width (ASW), auditory source distance (ASD), and listener envelopment (LEV). A variety of methods have been employed in these tests, including forced choice discrimination and identification, and direct ratings of both global dissimilarity and distinct attributes. Contrary to assumptions that underlie industrial standards established in 1994 by ITU-R Recommendation BS.775-1, these findings imply that substantial stereophonic spatial information exists within audio signals at frequencies below the 80 to 120 Hz range of prescribed subwoofer cutoff frequencies, and that loudspeaker reproduction of decorrelated signals at frequencies as low as 50 Hz can have an impact upon auditory spatial imagery. [Work supported by VRQ.]
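The quantity being manipulated in these studies, interaural cross-correlation within a low-frequency band, is commonly taken as the maximum of the normalized interaural cross-correlation over about ±1 ms of lag. The sketch below assumes that convention plus an arbitrary 120-Hz low-pass band; it is not the authors' measurement procedure.

```python
# Sketch: IACC of a low-frequency band as the maximum normalised
# cross-correlation over +/-1 ms of interaural lag (assumed convention).
import numpy as np
from scipy.signal import butter, sosfilt

fs = 48000
sos = butter(4, 120.0, btype='low', fs=fs, output='sos')   # low band (assumed cutoff)

def iacc(left, right, max_lag_ms=1.0):
    l, r = sosfilt(sos, left), sosfilt(sos, right)
    l, r = l - l.mean(), r - r.mean()
    max_lag = int(max_lag_ms * 1e-3 * fs)
    norm = np.sqrt(np.sum(l**2) * np.sum(r**2))
    vals = [np.sum(l[max(0, -k):len(l) - max(0, k)] *
                   r[max(0, k):len(r) - max(0, -k)]) / norm
            for k in range(-max_lag, max_lag + 1)]
    return max(vals)

x = np.random.randn(fs)
print(iacc(x, x))                        # identical signals -> IACC near 1
print(iacc(x, np.random.randn(fs)))      # independent signals -> much lower IACC
```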
Auditory display for the blind
NASA Technical Reports Server (NTRS)
Fish, R. M. (Inventor)
1974-01-01
A system for providing an auditory display of two-dimensional patterns as an aid to the blind is described. It includes a scanning device for producing first and second voltages respectively indicative of the vertical and horizontal positions of the scan and a further voltage indicative of the intensity at each point of the scan and hence of the presence or absence of the pattern at that point. The voltage related to scan intensity controls transmission of the sounds to the subject so that the subject knows that a portion of the pattern is being encountered by the scan when a tone is heard, the subject determining the position of this portion of the pattern in space by the frequency and interaural difference information contained in the tone.
Binaural model-based dynamic-range compression.
Ernst, Stephan M A; Kortlang, Steffen; Grimm, Giso; Bisitz, Thomas; Kollmeier, Birger; Ewert, Stephan D
2018-01-26
Binaural cues such as interaural level differences (ILDs) are used to organise auditory perception and to segregate sound sources in complex acoustical environments. In bilaterally fitted hearing aids, dynamic-range compression operating independently at each ear potentially alters these ILDs, thus distorting binaural perception and sound source segregation. A binaurally-linked model-based fast-acting dynamic compression algorithm designed to approximate the normal-hearing basilar membrane (BM) input-output function in hearing-impaired listeners is suggested. A multi-center evaluation in comparison with an alternative binaural and two bilateral fittings was performed to assess the effect of binaural synchronisation on (a) speech intelligibility and (b) perceived quality in realistic conditions. Thirty and 12 hearing-impaired (HI) listeners, respectively, were fitted individually with the algorithms for the two experimental parts. A small preference towards the proposed model-based algorithm in the direct quality comparison was found. However, no benefit of binaural synchronisation regarding speech intelligibility was found, suggesting a dominant role of the better ear in all experimental conditions. The suggested binaural synchronisation of compression algorithms showed a limited effect on the tested outcome measures; however, linking could be situationally beneficial to preserve a natural binaural perception of the acoustical environment.
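The motivation for binaural linking can be illustrated with a toy broadband compressor: compressing each ear independently shrinks the ILD, whereas applying one common gain to both ears leaves it intact. The compression rule, threshold, and ratio below are arbitrary assumptions and not the published model-based algorithm.

```python
# Sketch of why linking matters: unlinked per-ear compression shrinks ILDs,
# a shared (here: the smaller) gain preserves them. Generic compressor only.
def gain_db(level_db, threshold=50.0, ratio=3.0):
    """Gain of a simple compressor: above threshold, output grows at 1/ratio."""
    over = max(level_db - threshold, 0.0)
    return -over * (1.0 - 1.0 / ratio)

left_in, right_in = 80.0, 70.0                      # input levels, ILD = 10 dB
g_l, g_r = gain_db(left_in), gain_db(right_in)

unlinked = (left_in + g_l) - (right_in + g_r)       # ILD after independent compression
linked_gain = min(g_l, g_r)                         # binaural link: same gain at both ears
linked = (left_in + linked_gain) - (right_in + linked_gain)

print("ILD unlinked:", unlinked, "dB; ILD linked:", linked, "dB")
```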
Monaural and binaural processing of complex waveforms
NASA Astrophysics Data System (ADS)
Trahiotis, Constantine; Bernstein, Leslie R.
1992-01-01
Our research concerned the manners by which the monaural and binaural auditory systems process information in complex sounds. Substantial progress was made in three areas, consistent with the objectives outlined in the original proposal. (1) New electronic equipment, including a NeXT computer, was purchased, installed and interfaced with the existing laboratory. Software was developed for generating the necessary complex digital stimuli and for running behavioral experiments utilizing those stimuli. (2) Monaural experiments showed that the CMR is not obtained successively and is reduced or non-existent when the flanking bands are pulsed rather than presented continuously. Binaural investigations revealed that the detectability of a tonal target in a masking level difference paradigm could be degraded by the presence of a spectrally remote interfering tone. (3) In collaboration with Dr. Richard Stern, theoretical efforts included the explication and evaluation of a weighted-image model of binaural hearing, attempts to extend the Stern-Colburn position-variable model to account for many crucial lateralization and localization data gathered over the past 50 years, and the continuation of efforts to incorporate into a general model notions that lateralization and localization of spectrally-rich stimuli depend upon the patterns of neural activity within a plane defined by frequency and interaural delay.
A novel procedure for examining pre-lexical phonetic-level analysis
NASA Astrophysics Data System (ADS)
Bashford, James A.; Warren, Richard M.; Lenz, Peter W.
2005-09-01
A recorded word repeated over and over is heard to undergo a series of illusory changes (verbal transformations) to other syllables and words in the listener's lexicon. When a second image of the same repeating word is added through dichotic presentation (with an interaural delay preventing fusion), the two distinct lateralized images of the word undergo independent illusory transformations at the same rate observed for a single image [Lenz et al., J. Acoust. Soc. Am. 107, 2857 (2000)]. However, when the contralateral word differs by even one phoneme, transformation rate decreases dramatically [Bashford et al., J. Acoust. Soc. Am. 110, 2658 (2001)]. This suppression of transformations did not occur when a nonspeech competitor was employed. The present study found that dichotic suppression of transformation rate also is independent of the top-down influence of a verbal competitor's word frequency, neighborhood density, and lexicality. However, suppression did increase with the extent of feature mismatch at a given phoneme position (e.g., transformations for "dark" were suppressed more by contralateral "hark" than by "bark"). These and additional findings indicate that dichotic verbal transformations can provide experimental access to a pre-lexical phonetic analysis normally obscured by subsequent processing. [Work supported by NIH.]
Knight, Richard D
2004-01-01
Limited data are available on the relationship between diplacusis and otoacoustic emissions and sudden hearing threshold changes, and the detail of the mechanism underlying diplacusis is not well understood. Data are presented here from an intensively studied single episode of sudden, non-conductive, mild hearing loss with associated binaural diplacusis, probably due to a viral infection. Treatment with steroids was administered for 1 week. This paper examines the relationships between the hearing loss, diplacusis and otoacoustic emissions during recovery on a day-by-day basis. The hearing thresholds were elevated by up to 20 dB at 4 kHz and upwards, and there was an interaural pitch difference up to 12% at 4 and 8 kHz. There was also a frequency-specific change in transient evoked otoacoustic emission (TEOAE) and distortion-product otoacoustic emission (DPOAE) level. DPOAE level was reduced by up to 20 dB, with the greatest change seen when a stimulus with a wide stimulus frequency ratio was used. Frequency shifts in the 2f2-f1 DPOAE fine structure corresponded to changes in the diplacusis. Complete recovery to previous levels was observed for TEOAE, DPOAE and hearing threshold. The diplacusis recovered to within normal limits after 4 weeks. The frequency shift seen in the DPOAE fine structure did not quite resolve, suggesting a very slight permanent change. The time-courses of TEOAE, diplacusis and hearing threshold were significantly different: most notably, the hearing threshold was stable over a period when the diplacusis deteriorated. This suggests that the cochlear mechanisms involved in diplacusis, hearing threshold and OAE may not be identical.
Glycinergic inhibition tunes coincidence detection in the auditory brainstem
Myoga, Michael H.; Lehnert, Simon; Leibold, Christian; Felmy, Felix; Grothe, Benedikt
2014-01-01
Neurons in the medial superior olive (MSO) detect microsecond differences in the arrival time of sounds between the ears (interaural time differences or ITDs), a crucial binaural cue for sound localization. Synaptic inhibition has been implicated in tuning ITD sensitivity, but the cellular mechanisms underlying its influence on coincidence detection are debated. Here we determine the impact of inhibition on coincidence detection in adult Mongolian gerbil MSO brain slices by testing precise temporal integration of measured synaptic responses using conductance-clamp. We find that inhibition dynamically shifts the peak timing of excitation, depending on its relative arrival time, which in turn modulates the timing of best coincidence detection. Inhibitory control of coincidence detection timing is consistent with the diversity of ITD functions observed in vivo and is robust under physiologically relevant conditions. Our results provide strong evidence that temporal interactions between excitation and inhibition on microsecond timescales are critical for binaural processing. PMID:24804642
Directional hearing by linear summation of binaural inputs at the medial superior olive
van der Heijden, Marcel; Lorteije, Jeannette A. M.; Plauška, Andrius; Roberts, Michael T.; Golding, Nace L.; Borst, J. Gerard G.
2013-01-01
Neurons in the medial superior olive (MSO) enable sound localization by their remarkable sensitivity to submillisecond interaural time differences (ITDs). Each MSO neuron has its own “best ITD” to which it responds optimally. A difference in physical path length of the excitatory inputs from both ears cannot fully account for the ITD tuning of MSO neurons. As a result, it is still debated how these inputs interact and whether the segregation of inputs to opposite dendrites, well-timed synaptic inhibition, or asymmetries in synaptic potentials or cellular morphology further optimize coincidence detection or ITD tuning. Using in vivo whole-cell and juxtacellular recordings, we show here that ITD tuning of MSO neurons is determined by the timing of their excitatory inputs. The inputs from both ears sum linearly, whereas spike probability depends nonlinearly on the size of synaptic inputs. This simple coincidence detection scheme thus makes accurate sound localization possible. PMID:23764292
Seshagiri, Chandran V.; Delgutte, Bertrand
2007-01-01
The complex anatomical structure of the central nucleus of the inferior colliculus (ICC), the principal auditory nucleus in the midbrain, may provide the basis for functional organization of auditory information. To investigate this organization, we used tetrodes to record from neighboring neurons in the ICC of anesthetized cats and studied the similarity and difference among the responses of these neurons to pure-tone stimuli using widely used physiological characterizations. Consistent with the tonotopic arrangement of neurons in the ICC and reports of a threshold map, we found a high degree of correlation in the best frequencies (BFs) of neighboring neurons, which were mostly <3 kHz in our sample, and the pure-tone thresholds among neighboring neurons. However, width of frequency tuning, shapes of the frequency response areas, and temporal discharge patterns showed little or no correlation among neighboring neurons. Because the BF and threshold are measured at levels near the threshold and the characteristic frequency (CF), neighboring neurons may receive similar primary inputs tuned to their CF; however, at higher levels, additional inputs from other frequency channels may be recruited, introducing greater variability in the responses. There was also no correlation among neighboring neurons' sensitivity to interaural time differences (ITD) measured with binaural beats. However, the characteristic phases (CPs) of neighboring neurons revealed a significant correlation. Because the CP is related to the neural mechanisms generating the ITD sensitivity, this result is consistent with segregation of inputs to the ICC from the lateral and medial superior olives. PMID:17671101
Examination of Insert Ear Interaural Attenuation (IA)Values in Audiological Evaluations.
Gumus, Nebi M; Gumus, Merve; Unsal, Selim; Yuksel, Mustafa; Gunduz, Mehmet
2016-12-01
The purpose of this study was to evaluate interaural attenuation (IA) as a function of frequency for the insert earphones used in audiological assessments. Thirty healthy subjects between 18 and 65 years of age (14 female and 16 male) participated in our study. Otoscopic examination was performed on all participants. Audiological evaluations were performed using the Interacoustics AC40 clinical audiometer and ER-3A insert earphones. IA was calculated by subtracting the better ear's bone-conduction threshold from the poorer ear's air-conduction threshold. Measurements were performed separately for each frequency from 0.125 to 8.0 kHz. At frequencies of 1000 Hz and below, IA ranged from about 75 to 110 dB with a mean of 89 ± 5 dB; above 1000 Hz, IA ranged from 50 to 95 dB with a mean of 69 ± 5 dB. These findings indicate that interaural attenuation of the crossover signal is greater with insert earphones. Insert earphones should therefore be available alongside the supra-aural earphones routinely used in clinics; masking procedures that are otherwise difficult can be carried out more easily with insert earphones because of their higher IA.
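The IA estimate described above is a simple per-frequency subtraction; a sketch with made-up threshold values:

```python
# Sketch of the IA estimate: poorer-ear air-conduction threshold minus
# better-ear bone-conduction threshold, per frequency. Values are made up.
def interaural_attenuation(worse_ac_db, better_bc_db):
    return worse_ac_db - better_bc_db

print(interaural_attenuation(95, 5))    # e.g. 90 dB IA at a low frequency
print(interaural_attenuation(75, 10))   # e.g. 65 dB IA at a higher frequency
```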
Toward a more ecologically valid measure of speech understanding in background noise.
Jerger, J; Greenwald, R; Wambacq, I; Seipel, A; Moncrieff, D
2000-05-01
In an attempt to develop a more ecologically valid measure of speech understanding in a background of competing speech, we constructed a quasidichotic procedure based on the monitoring of continuous speech from loudspeakers placed directly to the listener's right and left sides. The listener responded to the presence of incongruous or anomalous words imbedded within the context of two children's fairy tales. Attention was directed either to the right or to the left side in blocks of 25 utterances. Within each block, there were target (anomalous) and nontarget (nonanomalous) words. Responses to target words were analyzed separately for attend-right and attend-left conditions. Our purpose was twofold: (1) to evaluate the feasibility of such an approach for obtaining electrophysiologic performance measures in the sound field and (2) to gather normative interaural symmetry data for the new technique in young adults with normal hearing. Event-related potentials to target and nontarget words at 30 electrode sites were obtained in 20 right-handed young adults with normal hearing. Waveforms and associated topographic maps were characterized by a slight negativity in the region of 400 msec (N400) and robust positivity in the region of 900 msec (P900). Norms for interaural symmetry of the P900 event-related potential in young adults were derived.
Separation of concurrent broadband sound sources by human listeners
NASA Astrophysics Data System (ADS)
Best, Virginia; van Schaik, André; Carlile, Simon
2004-01-01
The effect of spatial separation on the ability of human listeners to resolve a pair of concurrent broadband sounds was examined. Stimuli were presented in a virtual auditory environment using individualized outer ear filter functions. Subjects were presented with two simultaneous noise bursts that were either spatially coincident or separated (horizontally or vertically), and responded as to whether they perceived one or two source locations. Testing was carried out at five reference locations on the audiovisual horizon (0°, 22.5°, 45°, 67.5°, and 90° azimuth). Results from experiment 1 showed that at more lateral locations, a larger horizontal separation was required for the perception of two sounds. The reverse was true for vertical separation. Furthermore, it was observed that subjects were unable to separate stimulus pairs if they delivered the same interaural differences in time (ITD) and level (ILD). These findings suggested that the auditory system exploited differences in one or both of the binaural cues to resolve the sources, and could not use monaural spectral cues effectively for the task. In experiments 2 and 3, separation of concurrent noise sources was examined upon removal of low-frequency content (and ITDs), onset/offset ITDs, both of these in conjunction, and all ITD information. While onset and offset ITDs did not appear to play a major role, differences in ongoing ITDs were robust cues for separation under these conditions, including those in the envelopes of high-frequency channels.
Neural Correlates of the Binaural Masking Level Difference in Human Frequency-Following Responses.
Clinard, Christopher G; Hodgson, Sarah L; Scherer, Mary Ellen
2017-04-01
The binaural masking level difference (BMLD) is an auditory phenomenon where binaural tone-in-noise detection is improved when the phase of either signal or noise is inverted in one of the ears (SπNo or SoNπ, respectively), relative to detection when signal and noise are in identical phase at each ear (SoNo). Processing related to BMLDs and interaural time differences has been confirmed in the auditory brainstem of non-human mammals; in the human auditory brainstem, phase-locked neural responses elicited by BMLD stimuli have not been systematically examined across signal-to-noise ratio. Behavioral and physiological testing was performed in three binaural stimulus conditions: SoNo, SπNo, and SoNπ. BMLDs at 500 Hz were obtained from 14 young, normal-hearing adults (ages 21-26). Physiological BMLDs used the frequency-following response (FFR), a scalp-recorded auditory evoked potential dependent on sustained phase-locked neural activity; FFR tone-in-noise detection thresholds were used to calculate physiological BMLDs. FFR BMLDs were significantly smaller (poorer) than behavioral BMLDs, and FFR BMLDs did not reflect a physiological release from masking, on average. Raw FFR amplitude showed substantial reductions in the SπNo condition relative to SoNo and SoNπ conditions, consistent with negative effects of phase summation from left and right ear FFRs. FFR amplitude differences between stimulus conditions (e.g., SoNo amplitude minus SπNo amplitude) were significantly predictive of behavioral SπNo BMLDs; individuals with larger amplitude differences had larger (better) behavioral BMLDs and individuals with smaller amplitude differences had smaller (poorer) behavioral BMLDs. These data indicate a role for sustained phase-locked neural activity in BMLDs of humans and are the first to show predictive relationships between behavioral BMLDs and human brainstem responses.
The role of off-frequency masking in binaural hearing
Buss, Emily; Hall, Joseph W.
2010-01-01
The present studies examined the binaural masking level difference (MLD) for off-frequency masking. It has been shown previously that the MLD decreases steeply with increasing spectral separation between a pure tone signal and a 10-Hz wide band of masking noise. Data collected here show that this reduction in the off-frequency MLD as a function of signal/masker separation is comparable at 250 and 2500 Hz, indicating that neither interaural phase cues nor frequency resolution are critical to this finding. The MLD decreases more gradually with spectral separation when the masker is a 250-Hz-wide band of noise, a result that implicates the rate of inherent amplitude modulation of the masker. Thresholds were also measured for a brief signal presented coincident with a local masker modulation minimum or maximum. Sensitivity was better in the minima for all NoSπ and off-frequency NoSo conditions, with little or no effect of signal position for on-frequency NoSo conditions. Taken together, the present results indicate that the steep reduction in the off-frequency MLD for a narrowband noise masker is due at least in part to envelope cues in the NoSo conditions. There was no evidence of a reduction in binaural cue quality for off-frequency masking. PMID:20550265
Mismatch negativity to acoustical illusion of beat: how and where the change detection takes place?
Chakalov, Ivan; Paraskevopoulos, Evangelos; Wollbrink, Andreas; Pantev, Christo
2014-10-15
When two tones with slightly different frequencies are presented binaurally, the brainstem structures can no longer follow the interaural time differences (ITDs), resulting in an illusory perception of a beat at the frequency difference between the two prime tones. Hence, the beat frequency does not exist in the prime tones presented to either ear. This study used binaural beats to explore the nature of acoustic deviance detection in humans by means of magnetoencephalography (MEG). Recent research suggests that auditory change detection is a multistage process. To test this, we employed 26-Hz binaural beats in a classical oddball paradigm. However, the prime tones (250 Hz and 276 Hz) were switched between the ears in the case of the deviant beat. Consequently, when the deviant is presented, the cochleae and auditory nerves receive a "new afferent", although the standards and the deviants are heard as identical (26-Hz beats). This allowed us to explore the contribution of the auditory periphery to the change detection process and, furthermore, to evaluate its influence on beat-related auditory steady-state responses (ASSRs). LORETA source current density estimates of the evoked fields in a typical mismatch negativity (MMN) time window and the subsequent difference-ASSRs were determined and compared. The results revealed an MMN generated by a complex neural network including the right parietal lobe and the left middle frontal gyrus. Furthermore, the difference-ASSR was generated in the paracentral gyrus. Additionally, psychophysical measures showed no perceptual difference between the standard and deviant beats when isolated by noise. These results suggest that the auditory periphery makes an important contribution to novelty detection already at the subcortical level. Overall, the present findings support the notion of a hierarchically organized acoustic novelty detection system. Copyright © 2014 Elsevier Inc. All rights reserved.
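The stimulus construction described here is straightforward to sketch: each ear receives only a pure tone, and the 26-Hz beat exists only as an interaural difference. Sample rate, level, and duration below are arbitrary, and the ear assignment for the standard is an assumption.

```python
# Sketch: a 26-Hz binaural beat from 250 Hz in one ear and 276 Hz in the
# other; the deviant simply swaps the prime tones between the ears.
import numpy as np

fs, dur = 44100, 1.0
t = np.arange(int(fs * dur)) / fs
left = 0.1 * np.sin(2 * np.pi * 250.0 * t)       # standard: 250 Hz left (assumed)
right = 0.1 * np.sin(2 * np.pi * 276.0 * t)      # 276 Hz right -> 26-Hz beat percept
deviant_left, deviant_right = right, left        # deviant: prime tones swapped
```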
On the Possible Detection of Lightning Storms by Elephants
Kelley, Michael C.; Garstang, Michael
2013-01-01
Simple Summary: We use data similar to that taken by the International Monitoring System for the detection of nuclear explosions to determine whether elephants might be capable of detecting and locating the source of sounds generated by thunderstorms. Knowledge that elephants might be capable of responding to such storms, particularly at the end of the dry season when migrations are initiated, is of considerable interest to management and conservation. Abstract: Theoretical calculations suggest that sounds produced by thunderstorms and detected by a system similar to the International Monitoring System (IMS) for the detection of nuclear explosions at distances ≥100 km are at sound pressure levels equal to or greater than 6 × 10⁻³ Pa. Such sound pressure levels are well within the range of elephant hearing. Frequencies carrying these sounds might allow for interaural time delays such that adult elephants could not only hear but could also locate the source of these sounds. Determining whether it is possible for elephants to hear and locate thunderstorms contributes to the question of whether elephant movements are triggered or influenced by these abiotic sounds. PMID:26487406
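As a quick check of the quoted pressure, converting 6 × 10⁻³ Pa to dB SPL re 20 µPa gives roughly 50 dB:

```python
# Worked check: 6e-3 Pa re the 20-micropascal reference is about 49.5 dB SPL.
import math
p, p_ref = 6e-3, 20e-6
print(20 * math.log10(p / p_ref))   # ~49.5 dB SPL
```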
NASA Technical Reports Server (NTRS)
Clement, G.; Moore, S. T.; Raphan, T.; Cohen, B.
2001-01-01
During the 1998 Neurolab mission (STS-90), four astronauts were exposed to interaural and head vertical (dorsoventral) linear accelerations of 0.5 g and 1 g during constant velocity rotation on a centrifuge, both on Earth and during orbital space flight. Subjects were oriented either left-ear-out or right-ear-out (Gy centrifugation), or lay supine along the centrifuge arm with their head off-axis (Gz centrifugation). Pre-flight centrifugation, producing linear accelerations of 0.5 g and 1 g along the Gy (interaural) axis, induced illusions of roll-tilt of 20 degrees and 34 degrees for gravito-inertial acceleration (GIA) vector tilts of 27 degrees and 45 degrees, respectively. Pre-flight 0.5 g and 1 g Gz (head dorsoventral) centrifugation generated perceptions of backward pitch of 5 degrees and 15 degrees, respectively. In the absence of gravity during space flight, the same centrifugation generated a GIA that was equivalent to the centripetal acceleration and aligned with the Gy or Gz axes. Perception of tilt was underestimated relative to this new GIA orientation during early in-flight Gy centrifugation, but was close to the GIA after 16 days in orbit, when subjects reported that they felt as if they were 'lying on side'. During the course of the mission, in-flight roll-tilt perception during Gy centrifugation increased from 45 degrees to 83 degrees at 1 g and from 42 degrees to 48 degrees at 0.5 g. Subjects felt 'upside-down' during in-flight Gz centrifugation from the first in-flight test session, which reflected the new GIA orientation along the head dorsoventral axis. The different levels of in-flight tilt perception during 0.5 g and 1 g Gy centrifugation suggest that other non-vestibular inputs, including an internal estimate of the body vertical and somatic sensation, were utilized in generating tilt perception. Interpretation of data by a weighted sum of body vertical and somatic vectors, with an estimate of the GIA from the otoliths, suggests that perception weights the sense of the body vertical more heavily early in-flight, that this weighting falls during adaptation to microgravity, and that the decreased reliance on the body vertical persists early post-flight, generating an exaggerated sense of tilt. Since graviceptors respond to linear acceleration and not to head tilt in orbit, it has been proposed that adaptation to weightlessness entails reinterpretation of otolith activity, causing tilt to be perceived as translation. Since linear acceleration during in-flight centrifugation was always perceived as tilt, not translation, the findings do not support this hypothesis.
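The quoted GIA tilt angles follow directly from the geometry: on Earth the gravito-inertial vector is the sum of gravity and the centripetal acceleration, so its tilt from vertical is atan(a_c/g).

```python
# Worked check of the quoted GIA tilt angles for 0.5-g and 1-g interaural
# centripetal acceleration combined with 1 g of gravity.
import math
for a_c in (0.5, 1.0):                                  # in units of g
    print(a_c, math.degrees(math.atan2(a_c, 1.0)))      # ~26.6 deg and 45 deg
```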
Index to FAA Office of Aviation Medicine Reports: 1961 through 1998.
1999-01-01
[Index excerpt; recoverable entries include: mechanisms of action of the insecticide endrin (AD431299); Tobias, J. V., application of a "relative" procedure to a problem in binaural beat perception (63-17); auditory fatigue (63-19, 65-1, 65-2); cockpit noise intensities (68-21, 68-25); ATC/pilot voice communication (93-20, 95-15, 96-26, 98-17, 98-20); earphone response (63-7); interaural (entry truncated).]
Underwater hearing and sound localization with and without an air interface.
Shupak, Avi; Sharoni, Zohara; Yanir, Yoav; Keynan, Yoav; Alfie, Yechezkel; Halpern, Pinchas
2005-01-01
Underwater hearing acuity and sound localization are improved by the presence of an air interface around the pinnae and inside the external ear canals. Hearing threshold and the ability to localize sound sources are reduced underwater. The resonance frequency of the external ear is lowered when the external ear canal is filled with water, and the impedance-matching ability of the middle ear is significantly reduced due to elevation of the ambient pressure, the water-mass load on the tympanic membrane, and the addition of a fluid-air interface during submersion. Sound lateralization on land is largely explained by the mechanisms of interaural intensity differences and interaural temporal or phase differences. During submersion, these differences are largely lost due to the increase in underwater sound velocity and cancellation of the head's acoustic shadow effect because of the similarity between the impedance of the skull and the surrounding water. Ten scuba divers wearing a regular opaque face mask or an opaque ProEar 2000 (Safe Dive, Ltd., Hofit, Israel) mask that enables the presence of air at ambient pressure in and around the ear made a dive to a depth of 3 m in the open sea. Four underwater speakers arranged on the horizontal plane at 90-degree intervals and at a distance of 5 m from the diver were used for testing pure-tone hearing thresholds (PTHT), the reception threshold for the recorded sound of a rubber-boat engine, and sound localization. For sound localization, the sound of the rubber boat's engine was randomly delivered by one speaker at a time at 40 dB HL above the recorded sound of a rubber-boat engine, and the diver was asked to point to the sound source. The azimuth was measured by the diver's companion using a navigation board. Underwater PTHT with both masks were significantly higher for frequencies of 250 to 6000 Hz when compared with the thresholds on land (p <0.0001). No differences were found in the PTHT or the reception threshold for the recorded sound of a rubber-boat engine for dry or wet ear conditions. There was no difference in the sound localization error between the regular mask and the ProEar 2000 mask. The presence of air around the pinna and inside the external ear canal did not improve underwater hearing sensitivity or sound localization. These results support the argument that bone conduction plays the main role in underwater hearing.
Väljamäe, Aleksander; Sell, Sara
2014-01-01
In the absence of other congruent multisensory motion cues, sound contribution to illusions of self-motion (vection) is relatively weak and often attributed to purely cognitive, top-down processes. The present study addressed the influence of cognitive and perceptual factors in the experience of circular, yaw auditorily-induced vection (AIV), focusing on participants' imagery vividness scores. We used different rotating sound sources (acoustic landmark vs. movable types) and their filtered versions that provided different binaural cues (interaural time or level differences, ITD vs. ILD) when delivered via a loudspeaker array. The significant differences in circular vection intensity showed that (1) AIV was stronger for rotating sound fields containing auditory landmarks as compared to movable sound objects; (2) ITD-based acoustic cues were more instrumental than ILD-based ones for horizontal AIV; and (3) individual differences in imagery vividness significantly influenced the effects of contextual and perceptual cues. While participants with high scores of kinesthetic and visual imagery were helped by vection “rich” cues, i.e., acoustic landmarks and ITD cues, the participants from the low-vivid imagery group did not benefit from these cues automatically. Only when specifically asked to use their imagination intentionally did these external cues start influencing vection sensation in a similar way to high-vivid imagers. These findings are in line with recent fMRI work suggesting that high-vivid imagers employ automatic, almost unconscious mechanisms in imagery generation, while low-vivid imagers rely on a more schematic and conscious framework. Consequently, our results provide additional insight into the interaction between perceptual and contextual cues when experiencing purely auditorily or multisensory induced vection. PMID:25520683
Physiological models of the lateral superior olive
2017-01-01
In computational biology, modeling is a fundamental tool for formulating, analyzing and predicting complex phenomena. Most neuron models, however, are designed to reproduce certain small sets of empirical data. Hence their outcome is usually not compatible or comparable with other models or datasets, making it unclear how widely applicable such models are. In this study, we investigate these aspects of modeling, namely credibility and generalizability, with a specific focus on auditory neurons involved in the localization of sound sources. The primary cues for binaural sound localization are comprised of interaural time and level differences (ITD/ILD), which are the timing and intensity differences of the sound waves arriving at the two ears. The lateral superior olive (LSO) in the auditory brainstem is one of the locations where such acoustic information is first computed. An LSO neuron receives temporally structured excitatory and inhibitory synaptic inputs that are driven by ipsi- and contralateral sound stimuli, respectively, and changes its spike rate according to binaural acoustic differences. Here we examine seven contemporary models of LSO neurons with different levels of biophysical complexity, from predominantly functional ones (‘shot-noise’ models) to those with more detailed physiological components (variations of integrate-and-fire and Hodgkin-Huxley-type). These models, calibrated to reproduce known monaural and binaural characteristics of LSO, generate largely similar results to each other in simulating ITD and ILD coding. Our comparisons of physiological detail, computational efficiency, predictive performances, and further expandability of the models demonstrate (1) that the simplistic, functional LSO models are suitable for applications where low computational costs and mathematical transparency are needed, (2) that more complex models with detailed membrane potential dynamics are necessary for simulation studies where sub-neuronal nonlinear processes play important roles, and (3) that, for general purposes, intermediate models might be a reasonable compromise between simplicity and biological plausibility. PMID:29281618
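As a rough, illustrative sketch of the simplest (purely functional, "shot-noise"-like) end of the model spectrum discussed above, the following snippet maps ipsilateral excitatory and contralateral inhibitory drive to an ILD-dependent output rate. All rates, slopes, and gains are arbitrary assumptions chosen for illustration; they are not parameters of any of the cited models.

```python
# Minimal, illustrative sketch of a functional LSO rate model: output rate falls as
# contralateral (inhibitory) drive grows relative to ipsilateral (excitatory) drive.
# All numbers are arbitrary assumptions, not values fitted in the study above.
import numpy as np

def driven_rate(level_db, rate_max=300.0, level_50=40.0, slope=0.2):
    """Map sound level (dB SPL) to an input spike rate (spikes/s) with a sigmoid."""
    return rate_max / (1.0 + np.exp(-slope * (level_db - level_50)))

def lso_rate(ipsi_db, contra_db, inhibition_weight=1.0, gain=0.5):
    """Output rate grows with excitatory (ipsi) and shrinks with inhibitory (contra) drive."""
    exc = driven_rate(ipsi_db)
    inh = driven_rate(contra_db)
    return np.maximum(0.0, gain * (exc - inhibition_weight * inh))

# ILD sweep at a fixed average binaural level of 50 dB
# (positive ILD = louder at the ipsilateral, excitatory ear):
for ild in range(-30, 31, 5):
    r = lso_rate(50 + ild / 2, 50 - ild / 2)
    print(f"ILD {ild:+3d} dB -> {r:6.1f} spikes/s")
```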
2009-07-01
Therefore, it's safe to assume that most large errors are due to front-back confusions. Front-back confusions occur in part because the binaural (two-ear) cues that dominate sound localization do not distinguish between the front and rear hemispheres; the two binaural cues relied on are the interaural time and level differences.
Neural coding of sound envelope in reverberant environments.
Slama, Michaël C C; Delgutte, Bertrand
2015-03-11
Speech reception depends critically on temporal modulations in the amplitude envelope of the speech signal. Reverberation encountered in everyday environments can substantially attenuate these modulations. To assess the effect of reverberation on the neural coding of amplitude envelope, we recorded from single units in the inferior colliculus (IC) of unanesthetized rabbit using sinusoidally amplitude modulated (AM) broadband noise stimuli presented in simulated anechoic and reverberant environments. Although reverberation degraded both rate and temporal coding of AM in IC neurons, in most neurons, the degradation in temporal coding was smaller than the AM attenuation in the stimulus. This compensation could largely be accounted for by the compressive shape of the modulation input-output function (MIOF), which describes the nonlinear transformation of modulation depth from acoustic stimuli into neural responses. Additionally, in a subset of neurons, the temporal coding of AM was better for reverberant stimuli than for anechoic stimuli having the same modulation depth at the ear. Using hybrid anechoic stimuli that selectively possess certain properties of reverberant sounds, we show that this reverberant advantage is not caused by envelope distortion, static interaural decorrelation, or spectral coloration. Overall, our results suggest that the auditory system may possess dual mechanisms that make the coding of amplitude envelope relatively robust in reverberation: one general mechanism operating for all stimuli with small modulation depths, and another mechanism dependent on very specific properties of reverberant stimuli, possibly the periodic fluctuations in interaural correlation at the modulation frequency. Copyright © 2015 the authors 0270-6474/15/354452-17$15.00/0.
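The compensating role attributed to the compressive MIOF can be illustrated with a toy calculation; the power-law form and exponent below are assumptions chosen only to show the effect, not the function measured in the study.

```python
# Toy illustration of how a compressive modulation input-output function (MIOF) can make
# neural modulation coding degrade less than the stimulus modulation does in reverberation.
# The power-law form and exponent are assumptions for illustration only.
import numpy as np

def miof(stimulus_modulation_depth, exponent=0.4):
    """Hypothetical compressive mapping from stimulus to 'neural' modulation depth."""
    return np.clip(stimulus_modulation_depth, 0.0, 1.0) ** exponent

m_anechoic = 1.0   # fully modulated stimulus
m_reverb = 0.5     # reverberation attenuates envelope modulation by a factor of 2 (6 dB)

stim_attenuation_db = 20 * np.log10(m_anechoic / m_reverb)
neural_attenuation_db = 20 * np.log10(miof(m_anechoic) / miof(m_reverb))
print(f"stimulus modulation attenuation: {stim_attenuation_db:.1f} dB")
print(f"'neural' modulation attenuation: {neural_attenuation_db:.1f} dB (smaller, due to compression)")
```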
Sensorimotor Model of Obstacle Avoidance in Echolocating Bats
Vanderelst, Dieter; Holderied, Marc W.; Peremans, Herbert
2015-01-01
Bat echolocation is an ability consisting of many subtasks such as navigation, prey detection and object recognition. Understanding the echolocation capabilities of bats comes down to isolating the minimal set of acoustic cues needed to complete each task. For some tasks, the minimal cues have already been identified. However, while a number of possible cues have been suggested, little is known about the minimal cues supporting obstacle avoidance in echolocating bats. In this paper, we propose that the Interaural Intensity Difference (IID) and travel time of the first millisecond of the echo train are sufficient cues for obstacle avoidance. We describe a simple control algorithm based on the use of these cues in combination with alternating ear positions modeled after the constant frequency bat Rhinolophus rouxii. Using spatial simulations (2D and 3D), we show that simple phonotaxis can steer a bat clear from obstacles without performing a reconstruction of the 3D layout of the scene. As such, this paper presents the first computationally explicit explanation for obstacle avoidance validated in complex simulated environments. Based on additional simulations modelling the FM bat Phyllostomus discolor, we conjecture that the proposed cues can be exploited by constant frequency (CF) bats and frequency modulated (FM) bats alike. We hypothesize that using a low level yet robust cue for obstacle avoidance allows bats to comply with the hard real-time constraints of this basic behaviour. PMID:26502063
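A minimal sketch of the kind of low-level steering rule described above, using only the IID and the travel time of the earliest echo, might look as follows; the echo model, gains, and thresholds are hypothetical stand-ins rather than the authors' published controller.

```python
# Minimal sketch of IID-based phonotaxis for obstacle avoidance, loosely inspired by the
# idea above: steer away from the ear receiving the louder early echo, and slow down when
# the first echo arrives very early (obstacle is close). Gains, thresholds, and the crude
# echo model are hypothetical, not the controller published in the study.
SPEED_OF_SOUND = 343.0  # m/s

def first_echo_cues(obstacle_distance_m, obstacle_bearing_deg, iid_per_deg=0.1):
    """Return (travel_time_s, iid_db) of the earliest echo under a crude directional model."""
    travel_time = 2.0 * obstacle_distance_m / SPEED_OF_SOUND
    iid_db = iid_per_deg * obstacle_bearing_deg      # positive = louder on the right
    return travel_time, iid_db

def steering_command(travel_time_s, iid_db, turn_gain_deg_per_db=4.0, near_threshold_s=0.006):
    """Turn away from the louder side; brake if the first echo returns within about 1 m."""
    turn_deg = -turn_gain_deg_per_db * iid_db        # louder right -> turn left
    brake = travel_time_s < near_threshold_s
    return turn_deg, brake

t, iid = first_echo_cues(obstacle_distance_m=0.8, obstacle_bearing_deg=20.0)
print(steering_command(t, iid))   # e.g. (-8.0, True): turn left 8 degrees and slow down
```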
Higgins, Nathan C; McLaughlin, Susan A; Rinne, Teemu; Stecker, G Christopher
2017-09-05
Few auditory functions are as important or as universal as the capacity for auditory spatial awareness (e.g., sound localization). That ability relies on sensitivity to acoustical cues, particularly interaural time and level differences (ITD and ILD), that correlate with sound-source locations. Under nonspatial listening conditions, cortical sensitivity to ITD and ILD takes the form of broad contralaterally dominated response functions. It is unknown, however, whether that sensitivity reflects representations of the specific physical cues or a higher-order representation of auditory space (i.e., integrated cue processing), nor is it known whether responses to spatial cues are modulated by active spatial listening. To investigate, sensitivity to parametrically varied ITD or ILD cues was measured using fMRI during spatial and nonspatial listening tasks. Task type varied across blocks where targets were presented in one of three dimensions: auditory location, pitch, or visual brightness. Task effects were localized primarily to lateral posterior superior temporal gyrus (pSTG) and modulated binaural-cue response functions differently in the two hemispheres. Active spatial listening (location tasks) enhanced both contralateral and ipsilateral responses in the right hemisphere but maintained or enhanced contralateral dominance in the left hemisphere. Two observations suggest integrated processing of ITD and ILD. First, overlapping regions in medial pSTG exhibited significant sensitivity to both cues. Second, successful classification of multivoxel patterns was observed for both cue types and, critically, for cross-cue classification. Together, these results suggest a higher-order representation of auditory space in the human auditory cortex that at least partly integrates the specific underlying cues.
Intelligibility for Binaural Speech with Discarded Low-SNR Speech Components.
Schoenmaker, Esther; van de Par, Steven
2016-01-01
Speech intelligibility in multitalker settings improves when the target speaker is spatially separated from the interfering speakers. A factor that may contribute to this improvement is the improved detectability of target-speech components due to binaural interaction in analogy to the Binaural Masking Level Difference (BMLD). This would allow listeners to hear target speech components within specific time-frequency intervals that have a negative SNR, similar to the improvement in the detectability of a tone in noise when these contain disparate interaural difference cues. To investigate whether these negative-SNR target-speech components indeed contribute to speech intelligibility, a stimulus manipulation was performed where all target components were removed when local SNRs were smaller than a certain criterion value. It can be expected that for sufficiently high criterion values target speech components will be removed that do contribute to speech intelligibility. For spatially separated speakers, assuming that a BMLD-like detection advantage contributes to intelligibility, degradation in intelligibility is expected already at criterion values below 0 dB SNR. However, for collocated speakers it is expected that higher criterion values can be applied without impairing speech intelligibility. Results show that degradation of intelligibility for separated speakers is only seen for criterion values of 0 dB and above, indicating a negligible contribution of a BMLD-like detection advantage in multitalker settings. These results show that the spatial benefit is related to a spatial separation of speech components at positive local SNRs rather than to a BMLD-like detection improvement for speech components at negative local SNRs.
Analysis of masking effects on speech intelligibility with respect to moving sound stimulus
NASA Astrophysics Data System (ADS)
Chen, Chiung Yao
2004-05-01
The purpose of this study is to compare the degree to which speech is disturbed by a stationary noise source and by an apparently moving one (AMN). In studies of sound localization, source-directional sensitivity (SDS) has been found to be closely associated with the magnitude of the interaural cross-correlation (IACC). Ando et al. [Y. Ando, S. H. Kang, and H. Nagamatsu, J. Acoust. Soc. Jpn. (E) 8, 183-190 (1987)] reported that the correlation of neural potentials between the left and right inferior colliculus along the auditory pathway is consistent with the correlation function of the amplitudes entering the two ear-canal entrances. We hypothesized that the degree of disturbance produced by an apparently moving noise source differs from that produced by a source fixed in front of the listener at a constant distance in a free field (no reflections). We indeed found a difference in the influence on speech intelligibility between a moving and a fixed source generated from 1/3-octave narrow-band noise with a center frequency of 2 kHz. However, the effects of movement speed and the mechanism of the masking effect on speech intelligibility remained unclear.
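For readers unfamiliar with the IACC, the sketch below shows one common way to compute its magnitude from a two-channel (left/right ear) recording, as the maximum of the normalized interaural cross-correlation over lags of roughly ±1 ms; the test signals are synthetic, and the normalization follows common practice rather than the exact procedure of this study.

```python
# Sketch of computing the magnitude of the interaural cross-correlation (IACC) as the
# maximum of the normalized cross-correlation over lags of about +/-1 ms.
# The test signals below are synthetic and purely illustrative.
import numpy as np

def iacc(left, right, fs, max_lag_ms=1.0):
    max_lag = int(fs * max_lag_ms / 1000.0)
    norm = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            num = np.sum(left[lag:] * right[:len(right) - lag])
        else:
            num = np.sum(left[:lag] * right[-lag:])
        best = max(best, abs(num) / norm)
    return best

fs = 44100
rng = np.random.default_rng(0)
noise = rng.standard_normal(fs // 2)
print(iacc(noise, noise, fs))                          # identical channels -> 1.0
print(iacc(noise, rng.standard_normal(fs // 2), fs))   # independent channels -> near 0
```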
Sound source localization identification accuracy: Envelope dependencies.
Yost, William A
2017-07-01
Sound source localization accuracy as measured in an identification procedure in a front azimuth sound field was studied for click trains, modulated noises, and a modulated tonal carrier. Sound source localization accuracy was determined as a function of the number of clicks in a 64 Hz click train and click rate for a 500 ms duration click train. The clicks were either broadband or high-pass filtered. Sound source localization accuracy was also measured for a single broadband filtered click and compared to a similar broadband filtered, short-duration noise. Sound source localization accuracy was determined as a function of sinusoidal amplitude modulation and the "transposed" process of modulation of filtered noises and a 4 kHz tone. Different rates (16 to 512 Hz) of modulation (including unmodulated conditions) were used. Providing modulation for filtered click stimuli, filtered noises, and the 4 kHz tone had, at most, a very small effect on sound source localization accuracy. These data suggest that amplitude modulation, while providing information about interaural time differences in headphone studies, does not have much influence on sound source localization accuracy in a sound field.
Klump, Georg M.; Tollin, Daniel J.
2016-01-01
The auditory brainstem response (ABR) is a sound-evoked, non-invasively measured electrical potential representing the sum of neuronal activity in the auditory brainstem and midbrain. ABR peak amplitudes and latencies are widely used in human and animal auditory research and for clinical screening. The binaural interaction component (BIC) of the ABR is the difference between the sum of the monaural ABRs and the ABR obtained with binaural stimulation. The BIC comprises a series of distinct waves, the largest of which (DN1) has been used for evaluating binaural hearing in both normal-hearing and hearing-impaired listeners. Based on data from animal and human studies, we discuss the possible anatomical and physiological bases of the BIC (DN1 in particular). The effects of electrode placement and stimulus characteristics on the binaurally evoked ABR are evaluated. We review how interaural time and intensity differences affect the BIC and, analyzing these dependencies, draw conclusions about the mechanisms underlying the generation of the BIC. Finally, the utility of the BIC for clinical diagnosis is summarized. PMID:27232077
Park, Hong Ju; Lee, In-Sik; Shin, Jung Eun; Lee, Yeo Jin; Park, Mun Su
2010-01-01
To better characterize both ocular and cervical vestibular evoked myogenic potential (VEMP) responses at different frequencies of sound in 20 normal subjects. Cervical and ocular VEMPs were recorded. The intensity of sound stimulation was decreased from the maximal intensity until no responses were evoked. Thresholds, amplitudes, latencies and the interaural amplitude difference ratio (IADR) at the maximal stimulation were calculated. Both tests showed similar frequency tuning, with the lowest threshold and highest amplitude for 500-Hz tone-burst stimuli. Sound stimulation at 500 Hz showed response rates of 100% in both tests. Cervical VEMPs showed a higher incidence than ocular VEMPs. Ocular VEMP thresholds were significantly higher than those of cervical VEMPs. Cervical VEMP amplitudes were significantly higher than ocular VEMP amplitudes. IADRs of ocular and cervical VEMPs did not differ significantly. Ocular VEMPs showed frequency tuning similar to that of cervical VEMPs. Cervical VEMP responses showed a higher incidence, lower thresholds and larger amplitudes than ocular VEMPs. Cervical VEMP is a more reliable measure than ocular VEMP, though the results of both tests are complementary. Five hundred Hertz is the optimal frequency to use. Copyright 2009 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Roles for Coincidence Detection in Coding Amplitude-Modulated Sounds
Ashida, Go; Kretzberg, Jutta; Tollin, Daniel J.
2016-01-01
Many sensory neurons encode temporal information by detecting coincident arrivals of synaptic inputs. In the mammalian auditory brainstem, binaural neurons of the medial superior olive (MSO) are known to act as coincidence detectors, whereas in the lateral superior olive (LSO) roles of coincidence detection have remained unclear. LSO neurons receive excitatory and inhibitory inputs driven by ipsilateral and contralateral acoustic stimuli, respectively, and vary their output spike rates according to interaural level differences. In addition, LSO neurons are also sensitive to binaural phase differences of low-frequency tones and envelopes of amplitude-modulated (AM) sounds. Previous physiological recordings in vivo found considerable variations in monaural AM-tuning across neurons. To investigate the underlying mechanisms of the observed temporal tuning properties of LSO and their sources of variability, we used a simple coincidence counting model and examined how specific parameters of coincidence detection affect monaural and binaural AM coding. Spike rates and phase-locking of evoked excitatory and spontaneous inhibitory inputs had only minor effects on LSO output to monaural AM inputs. In contrast, the coincidence threshold of the model neuron affected both the overall spike rates and the half-peak positions of the AM-tuning curve, whereas the width of the coincidence window merely influenced the output spike rates. The duration of the refractory period affected only the low-frequency portion of the monaural AM-tuning curve. Unlike monaural AM coding, temporal factors, such as the coincidence window and the effective duration of inhibition, played a major role in determining the trough positions of simulated binaural phase-response curves. In addition, empirically-observed level-dependence of binaural phase-coding was reproduced in the framework of our minimalistic coincidence counting model. These modeling results suggest that coincidence detection of excitatory and inhibitory synaptic inputs is essential for LSO neurons to encode both monaural and binaural AM sounds. PMID:27322612
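A bare-bones version of the coincidence-counting scheme examined above can be sketched as follows: an output spike is produced whenever the number of excitatory input spikes inside a short window, minus those vetoed by recent inhibition, reaches a threshold. The window, threshold, inhibition duration, and refractory period below are illustrative values, not those fitted in the study.

```python
# Bare-bones coincidence-counting sketch in the spirit of the model class above.
# Window, threshold, inhibition duration, and refractory period are illustrative only.
import numpy as np

def coincidence_counter(exc_times, inh_times, window=0.8e-3, threshold=2,
                        inh_duration=1.6e-3, refractory=1.0e-3):
    exc_times = np.sort(exc_times)
    inh_times = np.sort(inh_times)
    out, last_out = [], -np.inf
    for t in exc_times:
        if t - last_out < refractory:
            continue
        # excitatory events inside the coincidence window ending at t (includes t itself)
        n_exc = np.sum((exc_times > t - window) & (exc_times <= t))
        # each recent inhibitory event vetoes one coincident excitatory event
        n_inh = np.sum((inh_times > t - inh_duration) & (inh_times <= t))
        if n_exc - n_inh >= threshold:
            out.append(t)
            last_out = t
    return np.array(out)

rng = np.random.default_rng(1)
exc = np.cumsum(rng.exponential(1 / 400.0, 400))   # ~400 Hz Poisson excitation, ~1 s long
inh = np.cumsum(rng.exponential(1 / 200.0, 200))   # ~200 Hz Poisson inhibition
print(f"output spikes in ~1 s of input: {len(coincidence_counter(exc, inh))}")
```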
The precedence effect for lateralization at low sensation levels.
Goverts, S T; Houtgast, T; van Beek, H H
2000-10-01
Using dichotic signals presented by headphone, stimulus onset dominance (the precedence effect) for lateralization at low sensation levels was investigated for five normal hearing subjects. Stimuli were based on 2400-Hz low pass filtered 5-ms noise bursts. We used the paradigm, as described by Aoki and Houtgast (Hear. Res., 59 (1992) 25-30) and Houtgast and Aoki (Hear. Res., 72 (1994) 29-36), in which the stimulus is divided into a leading and a lagging part with opposite lateralization cues (i.e. an interaural time delay of 0.2 ms). The occurrence of onset dominance was investigated by measuring lateral perception of the stimulus, with fixed equal duration of leading and lagging part, while decreasing absolute signal level or adding a filtered white noise with the signal level set at 65 dBA. The dominance of the leading part was quantified by measuring the perceived lateral position of the stimulus as a function of the relative duration of the leading (and thus the lagging) part. This was done at about 45 dB SL without masking noise and also at a signal-to-noise ratio resulting in a sensation level of 10 dB. The occurrence and strength of the precedence effect was found to depend on sensation level, which was decreased either by lowering the signal level or by adding noise. With the present paradigm, besides a decreased lateralization accuracy, a decrease in the precedence effect was found for sensation levels below about 30-40 dB. In daily-life conditions, with a sensation level in noise of typically 10 dB, the onset dominance was still manifest, albeit degraded to some extent.
Wu, Yu-Hsiang; Stangl, Elizabeth; Pang, Carol; Zhang, Xuyang
2014-02-01
Little is known regarding the acoustic features of a stimulus used by listeners to determine the acceptable noise level (ANL). Features suggested by previous research include speech intelligibility (noise is unacceptable when it degrades speech intelligibility to a certain degree; the intelligibility hypothesis) and loudness (noise is unacceptable when the speech-to-noise loudness ratio is poorer than a certain level; the loudness hypothesis). The purpose of the study was to investigate if speech intelligibility or loudness is the criterion feature that determines ANL. To achieve this, test conditions were chosen so that the intelligibility and loudness hypotheses would predict different results. In Experiment 1, the effect of audiovisual (AV) and binaural listening on ANL was investigated; in Experiment 2, the effect of interaural correlation (ρ) on ANL was examined. A single-blinded, repeated-measures design was used. Thirty-two and twenty-five younger adults with normal hearing participated in Experiments 1 and 2, respectively. In Experiment 1, both ANL and speech recognition performance were measured using the AV version of the Connected Speech Test (CST) in three conditions: AV-binaural, auditory only (AO)-binaural, and AO-monaural. Lipreading skill was assessed using the Utley lipreading test. In Experiment 2, ANL and speech recognition performance were measured using the Hearing in Noise Test (HINT) in three binaural conditions, wherein the interaural correlation of noise was varied: ρ = 1 (N(o)S(o) [a listening condition wherein both speech and noise signals are identical across two ears]), -1 (NπS(o) [a listening condition wherein speech signals are identical across two ears whereas the noise signals of two ears are 180 degrees out of phase]), and 0 (N(u)S(o) [a listening condition wherein speech signals are identical across two ears whereas noise signals are uncorrelated across ears]). The results were compared to the predictions made based on the intelligibility and loudness hypotheses. The results of the AV and AO conditions appeared to support the intelligibility hypothesis due to the significant correlation between visual benefit in ANL (AV re: AO ANL) and (1) visual benefit in CST performance (AV re: AO CST) and (2) lipreading skill. The results of the N(o)S(o), NπS(o), and N(u)S(o) conditions negated the intelligibility hypothesis because binaural processing benefit (NπS(o) re: N(o)S(o), and N(u)S(o) re: N(o)S(o)) in ANL was not correlated to that in HINT performance. Instead, the results somewhat supported the loudness hypothesis because the pattern of ANL results across the three conditions (N(o)S(o) ≈ NπS(o) ≈ N(u)S(o) ANL) was more consistent with what was predicted by the loudness hypothesis (N(o)S(o) ≈ NπS(o) < N(u)S(o) ANL) than by the intelligibility hypothesis (NπS(o) < N(u)S(o) < N(o)S(o) ANL). The results of the binaural and monaural conditions supported neither hypothesis because (1) binaural benefit (binaural re: monaural) in ANL was not correlated to that in speech recognition performance, and (2) the pattern of ANL results across conditions (binaural < monaural ANL) was not consistent with the prediction made based on previous binaural loudness summation research (binaural ≥ monaural ANL). The study suggests that listeners may use multiple acoustic features to make ANL judgments. 
The binaural/monaural results showing that neither hypothesis was supported further indicate that factors other than speech intelligibility and loudness, such as psychological factors, may affect ANL. The weightings of different acoustic features in ANL judgments may vary widely across individuals and listening conditions. American Academy of Audiology.
NASA Astrophysics Data System (ADS)
Shimokura, Ryota; Soeta, Yoshiharu
2011-04-01
Railway stations can be principally classified by their locations, i.e., above-ground or underground stations, and by their platform styles, i.e., side or island platforms. However, the effect of the architectural elements on the train noise in stations is not well understood. The aim of the present study is to determine the different acoustical characteristics of the train noise for each station style. The train noise was evaluated by (1) the A-weighted equivalent continuous sound pressure level (LAeq), (2) the amplitude of the maximum peak of the interaural cross-correlation function (IACC), (3) the delay time (τ1) and amplitude (ϕ1) of the first maximum peak of the autocorrelation function. The IACC, τ1 and ϕ1 are related to the subjective diffuseness, pitch and pitch strength, respectively. Regarding the locations, the LAeq in the underground stations was 6.4 dB higher than that in the above-ground stations, and the pitch in the underground stations was higher and stronger. Regarding the platform styles, the LAeq on the side platforms was 3.3 dB higher than on the island platforms of the above-ground stations. For the underground stations, the LAeq on the island platforms was 3.3 dB higher than that on the side platforms when a train entered the station. The IACC on the island platforms of the above-ground stations was higher than that in the other stations.
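A sketch of how the ACF measures used above (τ1 and ϕ1, the delay and amplitude of the first maximum of the normalized autocorrelation) can be extracted from a signal is given below; the test tone is synthetic and the peak search is simplified.

```python
# Sketch of extracting the delay (tau1) and amplitude (phi1) of the first maximum peak of
# the normalized autocorrelation function (ACF), the pitch and pitch-strength correlates
# used above. The test tone is synthetic and the peak search is simplified.
import numpy as np

def acf_first_peak(x, fs, min_lag_ms=1.0, max_lag_ms=20.0):
    x = x - np.mean(x)
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    acf = acf / acf[0]                                 # normalize so ACF(0) = 1
    lo = int(fs * min_lag_ms / 1000.0)
    hi = int(fs * max_lag_ms / 1000.0)
    k = lo + int(np.argmax(acf[lo:hi]))
    return k / fs * 1000.0, acf[k]                     # (tau1 in ms, phi1)

fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 250 * t)                     # 250 Hz pure tone, 1 s
tau1_ms, phi1 = acf_first_peak(tone, fs)
print(f"tau1 ~ {tau1_ms:.2f} ms (expect ~4 ms for 250 Hz), phi1 ~ {phi1:.2f}")
```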
Statistics of Natural Binaural Sounds
Młynarski, Wiktor; Jost, Jürgen
2014-01-01
Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify the position of a sound source. In natural conditions, however, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping sources. Therefore, the statistics of binaural cues depend on the acoustic properties and the spatial configuration of the environment. The distributions of cues encountered naturally and their dependence on the physical properties of an auditory scene have not been studied before. In the present work we analyzed the statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed empirical cue distributions from each scene. We found that certain properties, such as the spread of IPD distributions as well as the overall shape of ILD distributions, do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much less across frequency channels, and IPDs often attain much higher values, than can be predicted from head filtering properties. In order to understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of the learned basis functions indicate that in natural conditions the sound waves at each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction. PMID:25285658
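As an illustration of the final analysis step described above, the sketch below applies ICA to a synthetic two-channel mixture using scikit-learn's FastICA; it stands in for, and is not, the authors' actual recording and analysis pipeline.

```python
# Sketch of applying ICA to a two-channel (binaural-like) signal, in the spirit of the
# analysis above: FastICA unmixes a synthetic stereo mixture of two independent sources.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
n = 20000
s1 = np.sign(np.sin(2 * np.pi * 3 * np.arange(n) / n))   # square-wave "source"
s2 = rng.laplace(size=n)                                  # noise-like "source"
sources = np.c_[s1, s2]

mixing = np.array([[1.0, 0.6],                            # each "ear" hears both sources,
                   [0.4, 1.0]])                           # with different weights
ears = sources @ mixing.T                                 # shape (n_samples, 2)

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(ears)                       # estimated independent components
print("estimated mixing matrix:\n", ica.mixing_)
```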
Kim, Min-Beom; Choi, Jeesun; Park, Ga Young; Cho, Yang-Sun; Hong, Sung Hwa
2013-01-01
Objectives: Our goal was to find the clinical value of cervical vestibular evoked myogenic potential (VEMP) in Ménière's disease (MD) and to evaluate whether the VEMP results can be useful in assessing the stage of MD. Furthermore, we tried to evaluate the clinical effectiveness of VEMP in predicting hearing outcomes. Methods: The amplitude, peak latency and interaural amplitude difference (IAD) ratio were obtained using cervical VEMP. The VEMP results of MD were compared with those of normal subjects, and the MD stages were compared with the IAD ratio. Finally, the hearing changes were analyzed according to their VEMP results. Results: In clinically definite unilateral MD (n=41), the prevalence of cervical VEMP abnormality in the IAD ratio was 34.1%. When compared with normal subjects (n=33), the VEMP profile of MD patients showed a low amplitude and a similar latency. The mean IAD ratio in MD was 23%, which was significantly different from that of normal subjects (P=0.01). As the stage increased, the IAD ratio significantly increased (P=0.09). After stratification by initial hearing level, stage I and II subjects (hearing threshold, 0-40 dB) with an abnormal IAD ratio showed a decrease in hearing over time compared to those with a normal IAD ratio (P=0.08). Conclusion: VEMP parameters have an important clinical role in MD. Especially, the IAD ratio can be used to assess the stage of MD. An abnormal IAD ratio may be used as a predictor of poor hearing outcomes in subjects with early stage MD. PMID:23799160
Ocular motor responses to abrupt interaural head translation in normal humans
NASA Technical Reports Server (NTRS)
Ramat, Stefano; Zee, David S.; Shelhamer, M. J. (Principal Investigator)
2003-01-01
We characterized the interaural translational vestibulo-ocular reflex (tVOR) in 6 normal humans to brief (approximately 200 ms), high-acceleration (0.4-1.4g) stimuli, while they fixed targets at 15 or 30 cm. The latency was 19 +/- 5 ms at 15-cm and 20 +/- 12 ms at 30-cm viewing. The gain was quantified using the ratio of actual to ideal behavior. The median position gain (at time of peak head velocity) was 0.38 and 0.37, and the median velocity gain, 0.52 and 0.62, at 15- and 30-cm viewing, respectively. These results suggest the tVOR scales proportionally at these viewing distances. Likewise, at both viewing distances, peak eye velocity scaled linearly with peak head velocity and gain was independent of peak head acceleration. A saccade commonly occurred in the compensatory direction, with a greater latency (165 vs. 145 ms) and lesser amplitude (1.8 vs. 3.2 deg) at 30- than 15-cm viewing. Even with saccades, the overall gain at the end of head movement was still considerably undercompensatory (medians 0.68 and 0.77 at 15- and 30-cm viewing). Monocular viewing was also assessed at 15-cm viewing. In 4 of 6 subjects, gains were the same as during binocular viewing and scaled closely with vergence angle. In sum the low tVOR gain and scaling of the response with viewing distance and head velocity extend previous results to higher acceleration stimuli. tVOR latency (approximately 20 ms) was lower than previously reported. Saccades are an integral part of the tVOR, and also scale with viewing distance.
Gifford, René H; Dorman, Michael F; Skarzynski, Henryk; Lorens, Artur; Polak, Marek; Driscoll, Colin L W; Roland, Peter; Buchman, Craig A
2013-01-01
The aim of this study was to assess the benefit of having preserved acoustic hearing in the implanted ear for speech recognition in complex listening environments. The present study included a within-subjects, repeated-measures design including 21 English-speaking and 17 Polish-speaking cochlear implant (CI) recipients with preserved acoustic hearing in the implanted ear. The patients were implanted with electrodes that varied in insertion depth from 10 to 31 mm. Mean preoperative low-frequency thresholds (average of 125, 250, and 500 Hz) in the implanted ear were 39.3 and 23.4 dB HL for the English- and Polish-speaking participants, respectively. In one condition, speech perception was assessed in an eight-loudspeaker environment in which the speech signals were presented from one loudspeaker and restaurant noise was presented from all loudspeakers. In another condition, the signals were presented in a simulation of a reverberant environment with a reverberation time of 0.6 sec. The response measures included speech reception thresholds (SRTs) and percent correct sentence understanding for two test conditions: CI plus low-frequency hearing in the contralateral ear (bimodal condition) and CI plus low-frequency hearing in both ears (best-aided condition). A subset of six English-speaking listeners were also assessed on measures of interaural time difference thresholds for a 250-Hz signal. Small, but significant, improvements in performance (1.7-2.1 dB and 6-10 percentage points) were found for the best-aided condition versus the bimodal condition. Postoperative thresholds in the implanted ear were correlated with the degree of electric and acoustic stimulation (EAS) benefit for speech recognition in diffuse noise. There was no reliable relationship among measures of audiometric threshold in the implanted ear nor elevation in threshold after surgery and improvement in speech understanding in reverberation. There was a significant correlation between interaural time difference threshold at 250 Hz and EAS-related benefit for the adaptive speech reception threshold. The findings of this study suggest that (1) preserved low-frequency hearing improves speech understanding for CI recipients, (2) testing in complex listening environments, in which binaural timing cues differ for signal and noise, may best demonstrate the value of having two ears with low-frequency acoustic hearing, and (3) preservation of binaural timing cues, although poorer than observed for individuals with normal hearing, is possible after unilateral cochlear implantation with hearing preservation and is associated with EAS benefit. The results of this study demonstrate significant communicative benefit for hearing preservation in the implanted ear and provide support for the expansion of CI criteria to include individuals with low-frequency thresholds in even the normal to near-normal range.
Spatial selectivity and binaural responses in the inferior colliculus of the great horned owl.
Volman, S F; Konishi, M
1989-09-01
In this study we have investigated the processing of auditory cues for sound localization in the great horned owl (Bubo virginianus). Previous studies have shown that the barn owl, whose ears are asymmetrically oriented in the vertical plane, has a 2-dimensional, topographic representation of auditory space in the external division of the inferior colliculus (ICx). As in the barn owl, the great horned owl's ICx is anatomically distinct and projects to the optic tectum. Neurons in ICx respond over only a small range of azimuths (mean = 32 degrees), and azimuth is topographically mapped. In contrast to the barn owl, the great horned owl has bilaterally symmetrical ears and its receptive fields are not restricted in elevation. The binaural cues available for sound localization were measured both with cochlear microphonic recordings and with a microphone attached to a probe tube in the auditory canal. Interaural time disparity (ITD) varied monotonically with azimuth. Interaural intensity differences (IID) also changed with azimuth, but the largest IIDs were less than 15 dB, and the variation was not monotonic. Neither ITD nor IID varied systematically with changes in the vertical position of a sound source. We used dichotic stimulation to determine the sensitivity of ICx neurons to these binaural cues. Best ITD of ICx units was topographically mapped and strongly correlated with receptive-field azimuth. The width of ITD tuning curves, measured at 50% of the maximum response, averaged 72 microseconds. All ICx neurons responded only to binaural stimulation and had nonmonotonic IID tuning curves. Best IID was weakly, but significantly, correlated with best ITD (r = 0.39, p less than 0.05). The IID tuning curves, however, were broad (mean 50% width = 24 dB), and 67% of the units had best IIDs within 5 dB of 0 dB IID. ITD tuning was sensitive to variations in IID in the direction opposite to that expected for time-intensity trading, but the magnitude of this effect was only 1.5 microseconds/dB IID. We conclude that, in the great horned owl, the spatial selectivity of ICx neurons arises primarily from their ITD tuning. Except for the absence of elevation selectivity and the narrow range of best IIDs, ICx in the great horned owl appears to be organized much the same as in the barn owl.
Noguchi, Yoshihiro; Takahashi, Masatoki; Ito, Taku; Fujikawa, Taro; Kawashima, Yoshiyuki; Kitamura, Ken
2016-10-01
To assess possible delayed recovery of the maximum speech discrimination score (SDS) when the audiometric threshold ceases to change. We retrospectively examined 20 patients with idiopathic sudden sensorineural hearing loss (ISSNHL) (gender: 9 males and 11 females, age: 24-71 years). The findings of pure-tone average (PTA), maximum SDS, auditory brainstem responses (ABRs), and tinnitus handicap inventory (THI) were compared among the three periods of 1-3 months, 6-8 months, and 11-13 months after ISSNHL onset. No significant differences were noted in PTA, whereas an increase of greater than or equal to 10% in maximum SDS was recognized in 9 patients (45%) from the period of 1-3 months to the period of 11-13 months. Four of the 9 patients showed 20% or more recovery of maximum SDS. No significant differences were observed in the interpeak latency difference between waves I and V and the interaural latency difference of wave V in ABRs, whereas an improvement in the THI grade was recognized in 11 patients (55%) from the period of 1-3 months to the period of 11-13 months. The present study suggested the incidence of maximum SDS restoration over 1 year after ISSNHL onset. These findings may be because of the effects of auditory plasticity via the central auditory pathway. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Chronic detachable headphones for acoustic stimulation in freely moving animals.
Nodal, Fernando R; Keating, Peter; King, Andrew J
2010-05-30
A growing number of studies of auditory processing are being carried out in awake, behaving animals, creating a need for precisely controlled sound delivery without restricting head movements. We have designed a system for closed-field stimulus presentation in freely moving ferrets, which comprises lightweight, adjustable headphones that can be consistently positioned over the ears via a small, skull-mounted implant. The invasiveness of the implant was minimized by simplifying its construction and using dental adhesive only for attaching it to the skull, thereby reducing the surgery required and avoiding the use of screws or other anchoring devices. Attaching the headphones to a chronic implant also reduced the amount of contact they had with the head and ears, increasing the willingness of the animals to wear them. We validated sound stimulation via the headphones in ferrets trained previously in a free-field task to localize stimuli presented from one of two loudspeakers. Noise bursts were delivered binaurally over the headphones and interaural level differences (ILDs) were introduced to allow the sound to be lateralized. Animals rapidly transferred from the free-field task to indicate the perceived location of the stimulus presented over headphones. They showed near perfect lateralization with a 5 dB ILD, matching the scores achieved in the free-field task. As expected, the ferrets' performance declined when the ILD was reduced in value. This closed-field system can easily be adapted for use in other species, and provides a reliable means of presenting closed-field stimuli whilst monitoring behavioral responses in freely moving animals. (c) 2010 Elsevier B.V. All rights reserved.
Effects of Telephone Ring on Two Mental Tasks Relative to an Office
NASA Astrophysics Data System (ADS)
Mouri, K.; Akiyama, K.; Ando, Y.
2001-03-01
In many cases there are numerous noise sources in an office, and telephone ringing in particular often irritates office workers. The effects of aircraft noise on the mental work of pupils were reported by Ando et al. [1]. Despite its serious effect, it has not yet been established how the physical parameters of the waveform influence the perception of the noise. The purpose of this study is to investigate the effects of telephone ringing on two mental tasks. This investigation is based on the human auditory-brain model consisting of the autocorrelation function (ACF) of the sound source, the interaural cross-correlation function (IACF) of the sound signals arriving at the two ears, and the specialization of the cerebral hemispheres. Under the stimulus of telephone ringing, an adding task and a drawing task were performed. Results show that telephone ringing influences the two tasks differently: the V-type relaxation was observed only during the drawing task. This suggests that the interference between the drawing task and the noise may occur in the right hemisphere.
Localizing the sources of two independent noises: Role of time varying amplitude differences
Yost, William A.; Brown, Christopher A.
2013-01-01
Listeners localized the free-field sources of either one or two simultaneous and independently generated noise bursts. Listeners' localization performance was better when localizing one rather than two sound sources. With two sound sources, localization performance was better when the listener was provided prior information about the location of one of them. Listeners also localized two simultaneous noise bursts that had sinusoidal amplitude modulation (AM) applied, in which the modulation envelope was in-phase across the two source locations or was 180° out-of-phase. The AM was employed to investigate a hypothesis as to what process listeners might use to localize multiple sound sources. The results supported the hypothesis that localization of two sound sources might be based on temporal-spectral regions of the combined waveform in which the sound from one source was more intense than that from the other source. The interaural information extracted from such temporal-spectral regions might provide reliable estimates of the sound source location that produced the more intense sound in that temporal-spectral region. PMID:23556597
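A sketch of the stimulus manipulation described above, generating two independent noise bursts whose sinusoidal amplitude envelopes are either in phase or 180° out of phase, is given below; the durations, modulation rate, and depth are illustrative choices rather than the exact values used in the study.

```python
# Sketch of the stimulus manipulation described above: two independently generated noise
# bursts given sinusoidal amplitude modulation whose envelopes are either in phase or
# 180 degrees out of phase across the two sources. Parameter values are illustrative.
import numpy as np

def am_noise_pair(fs=44100, dur_s=0.5, mod_hz=32.0, depth=1.0, out_of_phase=False, seed=4):
    rng = np.random.default_rng(seed)
    t = np.arange(int(fs * dur_s)) / fs
    env_a = 1.0 + depth * np.sin(2 * np.pi * mod_hz * t)
    phase = np.pi if out_of_phase else 0.0
    env_b = 1.0 + depth * np.sin(2 * np.pi * mod_hz * t + phase)
    src_a = rng.standard_normal(t.size) * env_a      # source 1: independent noise carrier
    src_b = rng.standard_normal(t.size) * env_b      # source 2: independent noise carrier
    return src_a, src_b

a, b = am_noise_pair(out_of_phase=True)
# With out-of-phase envelopes, moments where source A is intense coincide with low-level
# epochs of source B, which is the situation the "more intense source" hypothesis exploits.
print(np.corrcoef(np.abs(a), np.abs(b))[0, 1])       # envelope correlation is negative
```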
Auditory brainstem response in neonates: influence of gender and weight/gestational age ratio
Angrisani, Rosanna M. Giaffredo; Bautzer, Ana Paula D.; Matas, Carla Gentile; de Azevedo, Marisa Frasson
2013-01-01
OBJECTIVE: To investigate the influence of gender and weight/gestational age ratio on the Auditory Brainstem Response (ABR) in preterm (PT) and term (T) newborns. METHODS: 176 newborns were evaluated by ABR; 88 were preterm infants - 44 females (22 small and 22 appropriate for gestational age) and 44 males (22 small and 22 appropriate for gestational age). The preterm infants were compared to 88 term infants - 44 females (22 small and 22 appropriate for gestational age) and 44 males (22 small and 22 appropriate for gestational age). All newborns had bilateral presence of transient otoacoustic emissions and type A tympanometry. RESULTS: No interaural differences were found. ABR response did not differentiate newborns regarding weight/gestational age in males and females. Term newborn females showed statistically shorter absolute latencies (except on wave I) than males. This finding did not occur in preterm infants, who had longer latencies than term newborns, regardless of gender. CONCLUSIONS: Gender and gestational age influence term infants' ABR, with lower responses in females. The weight/gestational age ratio did not influence ABR response in either groups. PMID:24473955
The Usefulness of Rectified VEMP.
Lee, Kang Jin; Kim, Min Soo; Son, Eun Jin; Lim, Hye Jin; Bang, Jung Hwan; Kang, Jae Goo
2008-09-01
For a reliable interpretation of the left-right difference in the vestibular evoked myogenic potential (VEMP), the amount of sternocleidomastoid muscle (SCM) contraction has to be considered. Only then can we be sure that a difference in amplitude between the right and left VEMPs of a patient is due to a vestibular abnormality rather than to individual differences in tonic muscle activity, fatigue, or improper positioning. We used rectification to normalize the electromyogram (EMG) based on pre-stimulus EMG activity. This study was designed to evaluate and compare the effect of rectification in two conventional methods of SCM contraction. Twenty-two normal subjects were included. Two methods were employed for SCM contraction in each subject. First, subjects lay flat on their back, lifting the head off the table and turning it to the opposite side. Second, subjects pushed with their jaw against a hand-held inflated cuff to generate a cuff pressure of 40 mmHg. From the VEMP recordings, amplitude parameters and the inter-aural difference ratio (IADR) were analyzed before and after EMG rectification. Before rectification, the average IADR of the first method was not statistically different from that of the second method. The average IADRs from each method decreased in the rectified response, showing a significant reduction in the asymmetry ratio. The lowest average IADR was obtained with the combination of the first method and rectification. Rectified data show a more reliable IADR and may help diagnose some vestibular disorders according to amplitude-associated parameters. The benefit of rectification can be maximized with the proper SCM contraction method.
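For reference, the inter-aural difference ratio for VEMP amplitudes is commonly computed as the absolute left-right difference divided by the sum, expressed in percent; the sketch below assumes this standard form, which has not been verified against the exact formula used in this study.

```python
# Sketch of the inter-aural (amplitude) difference ratio as it is commonly computed for
# VEMP, i.e. |right - left| / (right + left) x 100. The exact formula used in the study
# above is assumed, not verified, to follow this standard form.
def iadr(amp_left_uv, amp_right_uv):
    """Inter-aural difference ratio in percent from p13-n23 amplitudes (microvolts)."""
    return abs(amp_right_uv - amp_left_uv) / (amp_right_uv + amp_left_uv) * 100.0

print(iadr(120.0, 80.0))   # -> 20.0 (% asymmetry)
```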
Vestibular evoked myogenic potential findings in multiple sclerosis.
Escorihuela García, Vicente; Llópez Carratalá, Ignacio; Orts Alborch, Miguel; Marco Algarra, Jaime
2013-01-01
Multiple sclerosis is an inflammatory disease involving the occurrence of demyelinating, chronic neurodegenerative lesions in the central nervous system. We studied vestibular evoked myogenic potentials (VEMPs) in this pathology, which allow non-invasive evaluation of the saccule, the inferior vestibular nerve and the vestibulospinal pathway. Twenty-three patients diagnosed with multiple sclerosis underwent VEMP recordings, and their results were compared with a control group of 35 healthy subjects. We registered p13 and n23 wave latencies, the interaural amplitude difference and the asymmetry ratio between the two ears. Subjects also underwent otoscopy and audiometric examination. Prolongation of the p13 and n23 wave latencies was the most notable finding, with a mean p13 latency of 19.53 milliseconds and a mean n23 latency of 30.06 milliseconds. In contrast, the asymmetry index showed no significant differences from the control group. In multiple sclerosis, prolongation of the p13 and n23 VEMP wave latencies has been attributed to slowed conduction caused by demyelination of the vestibulospinal pathway. In this regard, alteration or absence of these potentials has localizing value for lesions of the lower brainstem. Copyright © 2013 Elsevier España, S.L. All rights reserved.
Tonotopic tuning in a sound localization circuit.
Slee, Sean J; Higgs, Matthew H; Fairhall, Adrienne L; Spain, William J
2010-05-01
Nucleus laminaris (NL) neurons encode interaural time difference (ITD), the cue used to localize low-frequency sounds. A physiologically based model of NL input suggests that ITD information is contained in narrow frequency bands around harmonics of the sound frequency. This suggested a theory, which predicts that, for each tone frequency, there is an optimal time course for synaptic inputs to NL that will elicit the largest modulation of NL firing rate as a function of ITD. The theory also suggested that neurons in different tonotopic regions of NL require specialized tuning to take advantage of the input gradient. Tonotopic tuning in NL was investigated in brain slices by separating the nucleus into three regions based on its anatomical tonotopic map. Patch-clamp recordings in each region were used to measure both the synaptic and the intrinsic electrical properties. The data revealed a tonotopic gradient of synaptic time course that closely matched the theoretical predictions. We also found postsynaptic band-pass filtering. Analysis of the combined synaptic and postsynaptic filters revealed a frequency-dependent gradient of gain for the transformation of tone amplitude to NL firing rate modulation. Models constructed from the experimental data for each tonotopic region demonstrate that the tonotopic tuning measured in NL can improve ITD encoding across sound frequencies.
Auditory and visual orienting responses in listeners with and without hearing-impairment
Brimijoin, W. Owen; McShefferty, David; Akeroyd, Michael A.
2015-01-01
Head movements are intimately involved in sound localization and may provide information that could aid an impaired auditory system. Using an infrared camera system, head position and orientation was measured for 17 normal-hearing and 14 hearing-impaired listeners seated at the center of a ring of loudspeakers. Listeners were asked to orient their heads as quickly as was comfortable toward a sequence of visual targets, or were blindfolded and asked to orient toward a sequence of loudspeakers playing a short sentence. To attempt to elicit natural orienting responses, listeners were not asked to reorient their heads to the 0° loudspeaker between trials. The results demonstrate that hearing-impairment is associated with several changes in orienting responses. Hearing-impaired listeners showed a larger difference in auditory versus visual fixation position and a substantial increase in initial and fixation latency for auditory targets. Peak velocity reached roughly 140 degrees per second in both groups, corresponding to a rate of change of approximately 1 microsecond of interaural time difference per millisecond of time. Most notably, hearing-impairment was associated with a large change in the complexity of the movement, changing from smooth sigmoidal trajectories to ones characterized by abruptly-changing velocities, directional reversals, and frequent fixation angle corrections. PMID:20550266
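The stated correspondence between head velocity and the rate of ITD change can be checked with a back-of-envelope calculation, assuming an ITD slope near the midline of roughly 7 µs per degree of azimuth (an assumed, head-size-dependent value not taken from the study).

```python
# Back-of-envelope check of the correspondence stated above, assuming an ITD slope of
# roughly 7 microseconds per degree of azimuth near the midline (an assumed value):
# a 140 deg/s head turn then changes ITD by about 1 microsecond per millisecond.
peak_head_velocity_deg_per_s = 140.0
itd_slope_us_per_deg = 7.0          # assumption, not a value measured in the study

itd_rate_us_per_ms = peak_head_velocity_deg_per_s * itd_slope_us_per_deg / 1000.0
print(f"~{itd_rate_us_per_ms:.1f} microseconds of ITD change per millisecond")
```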
Noise reduction of coincidence detector output by the inferior colliculus of the barn owl.
Christianson, G Björn; Peña, José Luis
2006-05-31
A recurring theme in theoretical work is that integration over populations of similarly tuned neurons can reduce neural noise. However, there are relatively few demonstrations of an explicit noise reduction mechanism in a neural network. Here we demonstrate that the brainstem of the barn owl includes a stage of processing apparently devoted to increasing the signal-to-noise ratio in the encoding of the interaural time difference (ITD), one of two primary binaural cues used to compute the position of a sound source in space. In the barn owl, the ITD is processed in a dedicated neural pathway that terminates at the core of the inferior colliculus (ICcc). The actual locus of the computation of the ITD is before ICcc in the nucleus laminaris (NL), and ICcc receives no inputs carrying information that did not originate in NL. Unlike in NL, the rate-ITD functions of ICcc neurons require as little as a single stimulus presentation per ITD to show coherent ITD tuning. ICcc neurons also displayed a greater dynamic range with a maximal difference in ITD response rates approximately double that seen in NL. These results indicate that ICcc neurons perform a computation functionally analogous to averaging across a population of similarly tuned NL neurons.
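The averaging computation proposed here reduces trial-to-trial noise roughly as the square root of the pool size. A toy illustration with independent Poisson spike counts; the tuning curve, count window and pool sizes are arbitrary and not taken from the recordings:
```python
import numpy as np

# Toy sketch: pooling similarly tuned, independent Poisson neurons lowers the
# trial-to-trial deviation of a rate-ITD curve roughly as 1/sqrt(N).
rng = np.random.default_rng(0)
itd_axis = np.linspace(-200e-6, 200e-6, 41)                # ITD in seconds (arbitrary range)
tuning = 30 + 20 * np.cos(2 * np.pi * 5000 * itd_axis)     # hypothetical rate-ITD curve, spikes/s
window = 0.1                                               # 100 ms counting window

def pooled_estimate(n_neurons: int) -> np.ndarray:
    """Single-trial rate-ITD estimate averaged over a pool of identically tuned neurons."""
    counts = rng.poisson(lam=tuning * window, size=(n_neurons, itd_axis.size))
    return counts.mean(axis=0) / window                    # back to spikes/s

for n in (1, 10, 100):
    rms_error = np.std(pooled_estimate(n) - tuning)
    print(f"N = {n:3d} neurons: rms deviation from the true curve ~ {rms_error:.1f} spikes/s")
```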
Dykstra, Andrew R; Burchard, Daniel; Starzynski, Christian; Riedel, Helmut; Rupp, Andre; Gutschalk, Alexander
2016-08-01
We used magnetoencephalography to examine lateralization and binaural interaction of the middle-latency and late-brainstem components of the auditory evoked response (the MLR and SN10, respectively). Click stimuli were presented either monaurally, or binaurally with left- or right-leading interaural time differences (ITDs). While early MLR components, including the N19 and P30, were larger for monaural stimuli presented contralaterally (by approximately 30% and 36% in the left and right hemispheres, respectively), later components, including the N40 and P50, were larger ipsilaterally. In contrast, MLRs elicited by binaural clicks with left- or right-leading ITDs did not differ. Depending on filter settings, weak binaural interaction could be observed as early as the P13 but was clearly much larger for later components, beginning at the P30, indicating some degree of binaural linearity up to early stages of cortical processing. The SN10, an obscure late-brainstem component, was observed consistently in individuals and showed linear binaural additivity. The results indicate that while the MLR is lateralized in response to monaural stimuli, and not ITDs, this lateralization reverses from primarily contralateral to primarily ipsilateral as early as 40 ms post stimulus and is never as large as that seen with fMRI.
Monaural Congenital Deafness Affects Aural Dominance and Degrades Binaural Processing
Tillein, Jochen; Hubka, Peter; Kral, Andrej
2016-01-01
Cortical development extensively depends on sensory experience. Effects of congenital monaural and binaural deafness on cortical aural dominance and representation of binaural cues were investigated in the present study. We used an animal model that precisely mimics the clinical scenario of unilateral cochlear implantation in an individual with single-sided congenital deafness. Multiunit responses in cortical field A1 to cochlear implant stimulation were studied in normal-hearing cats, bilaterally congenitally deaf cats (CDCs), and unilaterally deaf cats (uCDCs). Binaural deafness reduced cortical responsiveness and decreased response thresholds and dynamic range. In contrast to CDCs, in uCDCs, cortical responsiveness was not reduced, but hemispheric-specific reorganization of aural dominance and binaural interactions were observed. Deafness led to a substantial drop in binaural facilitation in CDCs and uCDCs, demonstrating the inevitable role of experience for a binaural benefit. Sensitivity to interaural time differences was more reduced in uCDCs than in CDCs, particularly at the hemisphere ipsilateral to the hearing ear. Compared with binaural deafness, unilateral hearing prevented nonspecific reduction in cortical responsiveness, but extensively reorganized aural dominance and binaural responses. The deaf ear remained coupled with the cortex in uCDCs, demonstrating a significant difference to deprivation amblyopia in the visual system. PMID:26803166
Mechanisms underlying the temporal precision of sound coding at the inner hair cell ribbon synapse
Moser, Tobias; Neef, Andreas; Khimich, Darina
2006-01-01
Our auditory system is capable of perceiving the azimuthal location of a low frequency sound source with a precision of a few degrees. This requires the auditory system to detect time differences in sound arrival between the two ears down to tens of microseconds. The detection of these interaural time differences relies on network computation by auditory brainstem neurons sharpening the temporal precision of the afferent signals. Nevertheless, the system requires the hair cell synapse to encode sound with the highest possible temporal acuity. In mammals, each auditory nerve fibre receives input from only one inner hair cell (IHC) synapse. Hence, this single synapse determines the temporal precision of the fibre. As if this was not enough of a challenge, the auditory system is also capable of maintaining such high temporal fidelity with acoustic signals that vary greatly in their intensity. Recent research has started to uncover the cellular basis of sound coding. Functional and structural descriptions of synaptic vesicle pools and estimates for the number of Ca2+ channels at the ribbon synapse have been obtained, as have insights into how the receptor potential couples to the release of synaptic vesicles. Here, we review current concepts about the mechanisms that control the timing of transmitter release in inner hair cells of the cochlea. PMID:16901948
Binaural auditory beats affect long-term memory.
Garcia-Argibay, Miguel; Santed, Miguel A; Reales, José M
2017-12-08
The presentation of two pure tones, one to each ear, with a slight difference in their frequencies results in the perception of a single tone that fluctuates in amplitude at a frequency equal to the interaural frequency difference. This perceptual phenomenon is known as binaural auditory beats, and it is thought to entrain electrocortical activity and enhance cognitive functions such as attention and memory. The aim of this study was to determine the effect of binaural auditory beats on long-term memory. Participants (n = 32) were kept blind to the goal of the study and performed both free recall and recognition tasks after being exposed to binaural auditory beats in either the beta (20 Hz) or theta (5 Hz) frequency band, with white noise as a control condition. Exposure to beta-frequency binaural beats yielded a greater proportion of correctly recalled words and a higher sensitivity index d' in recognition tasks, whereas theta-frequency binaural-beat presentation lowered the number of correctly remembered words and the sensitivity index. On the other hand, we found no differences in the conditional probability of recall given recognition between the beta, theta, and white-noise conditions, suggesting that the observed changes in recognition were due to the recollection component. These findings indicate that the presentation of binaural auditory beats can affect long-term memory both positively and negatively, depending on the frequency used.
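A binaural-beat stimulus of the kind used here is simply a pair of pure tones, one per ear, offset by the desired beat frequency. A minimal generation sketch; the 400 Hz carrier, duration and level are assumptions, since the abstract specifies only the 20 Hz (beta) and 5 Hz (theta) interaural frequency differences:
```python
import numpy as np

# Sketch: generate stereo binaural-beat stimuli (left ear at the carrier frequency,
# right ear offset by the beat frequency). Carrier, duration and level are assumed.
fs = 44100                       # sampling rate, Hz
dur = 10.0                       # duration, s
t = np.arange(int(fs * dur)) / fs

def binaural_beat(carrier_hz: float, beat_hz: float) -> np.ndarray:
    """Return an (N, 2) array: column 0 = left ear, column 1 = right ear."""
    left = np.sin(2 * np.pi * carrier_hz * t)
    right = np.sin(2 * np.pi * (carrier_hz + beat_hz) * t)
    return 0.3 * np.stack([left, right], axis=1)          # scaled to a modest playback level

beta_stim = binaural_beat(400.0, 20.0)    # beta-band condition: 20 Hz interaural difference
theta_stim = binaural_beat(400.0, 5.0)    # theta-band condition: 5 Hz interaural difference
print(beta_stim.shape, theta_stim.shape)  # (441000, 2) (441000, 2)
```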
The representation of sound localization cues in the barn owl's inferior colliculus
Singheiser, Martin; Gutfreund, Yoram; Wagner, Hermann
2012-01-01
The barn owl is a well-known model system for studying auditory processing and sound localization. This article reviews the morphological and functional organization, as well as the role of the underlying microcircuits, of the barn owl's inferior colliculus (IC). We focus on the processing of frequency and interaural time (ITD) and level differences (ILD). We first summarize the morphology of the sub-nuclei belonging to the IC and their differentiation by antero- and retrograde labeling and by staining with various antibodies. We then focus on the response properties of neurons in the three major sub-nuclei of IC [core of the central nucleus of the IC (ICCc), lateral shell of the central nucleus of the IC (ICCls), and the external nucleus of the IC (ICX)]. ICCc projects to ICCls, which in turn sends its information to ICX. The responses of neurons in ICCc are sensitive to changes in ITD but not to changes in ILD. The distribution of ITD sensitivity with frequency in ICCc can only partly be explained by optimal coding. We continue with the tuning properties of ICCls neurons, the first station in the midbrain where the ITD and ILD pathways merge after they have split at the level of the cochlear nucleus. The ICCc and ICCls share similar ITD and frequency tuning. By contrast, ICCls shows sigmoidal ILD tuning which is absent in ICCc. Both ICCc and ICCls project to the forebrain, and ICCls also projects to ICX, where space-specific neurons are found. Space-specific neurons exhibit side peak suppression in ITD tuning, bell-shaped ILD tuning, and are broadly tuned to frequency. These neurons respond only to restricted positions of auditory space and form a map of two-dimensional auditory space. Finally, we briefly review major IC features, including multiplication-like computations, correlates of echo suppression, plasticity, and adaptation. PMID:22798945
Contralateral Effects and Binaural Interactions in Dorsal Cochlear Nucleus
2005-01-01
The dorsal cochlear nucleus (DCN) receives afferent input from the auditory nerve and is thus usually thought of as a monaural nucleus, but it also receives inputs from the contralateral cochlear nucleus as well as descending projections from binaural nuclei. Evidence suggests that some of these commissural and efferent projections are excitatory, whereas others are inhibitory. The goals of this study were to investigate the nature and effects of these inputs in the DCN by measuring DCN principal cell (type IV unit) responses to a variety of contralateral monaural and binaural stimuli. As expected, the results of contralateral stimulation demonstrate a mixture of excitatory and inhibitory influences, although inhibitory effects predominate. Most type IV units are weakly, if at all, inhibited by tones but are strongly inhibited by broadband noise (BBN). The inhibition evoked by BBN is also low threshold and short latency. This inhibition is abolished and excitation is revealed when strychnine, a glycine-receptor antagonist, is applied to the DCN; application of bicuculline, a GABAA-receptor antagonist, has similar effects but does not block the onset of inhibition. Manipulations of discrete fiber bundles suggest that the inhibitory, but not excitatory, inputs to DCN principal cells enter the DCN via its output pathway, and that the short latency inhibition is carried by commissural axons. Consistent with their respective monaural effects, responses to binaural tones as a function of interaural level difference are essentially the same as responses to ipsilateral tones, whereas binaural BBN responses decrease with increasing contralateral level. In comparison to monaural responses, binaural responses to virtual space stimuli show enhanced sensitivity to the elevation of a sound source in ipsilateral space but reduced sensitivity in contralateral space. These results show that the contralateral inputs to the DCN are functionally relevant in natural listening conditions, and that one role of these inputs is to enhance DCN processing of spectral sound localization cues produced by the pinna. PMID:16075189
Horizontal sound localization in cochlear implant users with a contralateral hearing aid.
Veugen, Lidwien C E; Hendrikse, Maartje M E; van Wanrooij, Marc M; Agterberg, Martijn J H; Chalupper, Josef; Mens, Lucas H M; Snik, Ad F M; John van Opstal, A
2016-06-01
Interaural differences in sound arrival time (ITD) and in level (ILD) enable us to localize sounds in the horizontal plane, and can support source segregation and speech understanding in noisy environments. It is uncertain whether these cues are also available to hearing-impaired listeners who are bimodally fitted, i.e. with a cochlear implant (CI) and a contralateral hearing aid (HA). Here, we assessed sound localization behavior of fourteen bimodal listeners, all using the same Phonak HA and an Advanced Bionics CI processor, matched with respect to loudness growth. We aimed to determine the availability and contribution of binaural (ILDs, temporal fine structure and envelope ITDs) and monaural (loudness, spectral) cues to horizontal sound localization in bimodal listeners, by systematically varying the frequency band, level and envelope of the stimuli. The sound bandwidth had a strong effect on the localization bias of bimodal listeners, although localization performance was typically poor for all conditions. Responses could be systematically changed by adjusting the frequency range of the stimulus, or by simply switching the HA and CI on and off. Localization responses were largely biased to one side, typically the CI side for broadband and high-pass filtered sounds, and occasionally to the HA side for low-pass filtered sounds. HA-aided thresholds better than 45 dB HL in the frequency range of the stimulus appeared to be a prerequisite, but not a guarantee, for the ability to indicate sound source direction. We argue that bimodal sound localization is likely based on ILD cues, even at frequencies below 1500 Hz for which the natural ILDs are small. These cues are typically perturbed in bimodal listeners, leading to a biased localization percept of sounds. The high accuracy of some listeners could result from a combination of sufficient spectral overlap and loudness balance in bimodal hearing. Copyright © 2016 Elsevier B.V. All rights reserved.
[The characteristics of VEMP in patients with acoustic neuroma].
Xue, Bin; Yang, Jun
2008-01-01
To establish normal values for the vestibular evoked myogenic potential (VEMP), to determine the characteristics of VEMP in patients with acoustic neuroma (AN), and to explore the significance of VEMP in the diagnosis of AN. Click-evoked VEMPs were recorded with surface electrodes attached over the sternocleidomastoid muscle, and the latencies and amplitudes of the characteristic waveform were measured. Forty-six subjects with normal hearing (26 males and 20 females) were used to establish the normal values. VEMP was also investigated in 14 patients with AN who underwent surgery during 2006-2007, together with auditory brainstem response (ABR) and vestibular caloric testing. Of the 46 subjects with normal hearing, VEMP was present in both ears in 43 subjects and absent in both ears in three subjects, giving a response rate of 93.5% (86/92 ears). The normal values obtained from the 86 responsive ears were as follows (mean +/- standard deviation): p13 latency (11.86 +/- 2.11) ms, n23 latency (18.57 +/- 2.19) ms, p13-n23 interval (6.71 +/- 1.69) ms, p13-n23 amplitude (24.18 +/- 8.22) microV. Interaural differences in the 43 subjects with bilateral responses were as follows (mean +/- standard deviation): Δp13 (0.64 +/- 0.61) ms, Δn23 (1.05 +/- 0.97) ms, Δ(p13-n23 interval) (0.84 +/- 0.81) ms, amplitude ratio (max/min) 1.32 +/- 0.37, interaural asymmetry ratio 0.12 +/- 0.11. Of the 14 patients with AN, VEMP was absent on the affected side in eight patients, absent on both sides in three patients, and present on the unaffected side in 11 patients. In the three patients in whom VEMP was present on the affected side, Δp13 and Δ(p13-n23 interval) were significantly prolonged. These characteristic VEMP findings, combined with other tests, could be useful in the diagnosis of AN.
Hemispheric asymmetry of ERPs and MMNs evoked by slow, fast and abrupt auditory motion.
Shestopalova, L B; Petropavlovskaia, E A; Vaitulevich, S Ph; Nikitin, N I
2016-10-01
The current MMN study investigates whether brain lateralization during automatic discrimination of sound stimuli moving at different velocities is consistent with one of the three models of asymmetry: the right-hemispheric dominance model, the contralateral dominance model, or the neglect model. Auditory event-related potentials (ERPs) were recorded for three patterns of sound motion produced by linear or abrupt changes of interaural time differences. The slow motion (450 deg/s) was used as standard, and the fast motion (620 deg/s) and the abrupt sound shift served as deviants in the oddball blocks. All stimuli had the same onset/offset spatial positions. We compared the effects of the recording side (left, right) and of the direction of sound displacement (ipsi- or contralateral with reference to the side of recording) on the ERPs and mismatch negativity (MMN). Our results indicated different patterns of asymmetry for the ERPs and MMN responses. The ERPs showed a velocity-independent right-hemispheric dominance that emerged at the descending limb of N1 wave (at around 120-160 ms) and could be related to overall context of the preattentive spatial perception. The MMNs elicited in the left hemisphere (at around 230-270 ms) exhibited a contralateral dominance, whereas the right-hemispheric MMNs were insensitive to the direction of sound displacement. These differences in contralaterality between MMN responses produced by the left and the right hemisphere favour the neglect model of the preattentive motion processing indexed by MMN. Copyright © 2016 Elsevier Ltd. All rights reserved.
Tardif, Eric; Spierer, Lucas; Clarke, Stephanie; Murray, Micah M
2008-03-07
Partially segregated neuronal pathways ("what" and "where" pathways, respectively) are thought to mediate sound recognition and localization. Less studied are interactions between these pathways. In two experiments, we investigated whether near-threshold pitch discrimination sensitivity (d') is altered by supra-threshold task-irrelevant position differences and, likewise, whether near-threshold position discrimination sensitivity is altered by supra-threshold task-irrelevant pitch differences. Each experiment followed a 2 x 2 within-subjects design regarding changes/no change in the task-relevant and task-irrelevant stimulus dimensions. In Experiment 1, subjects discriminated between 750 Hz and 752 Hz pure tones, and d' for this near-threshold pitch change significantly increased by 1.09 when accompanied by a task-irrelevant position change of 65 μs interaural time difference (ITD). No response bias was induced by the task-irrelevant position change. In Experiment 2, subjects discriminated between 385 μs and 431 μs ITDs, and d' for this near-threshold position change significantly increased by 0.73 when accompanied by task-irrelevant pitch changes (6 Hz). In contrast to Experiment 1, task-irrelevant pitch changes induced a response criterion bias toward responding that the two stimuli differed. The collective results are indicative of facilitative interactions between "what" and "where" pathways. By demonstrating how these pathways may cooperate under impoverished listening conditions, our results bear implications for possible neuro-rehabilitation strategies. We discuss our results in terms of the dual-pathway model of auditory processing.
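The sensitivity index d' and the response criterion referred to here are the standard signal detection theory quantities, computed from hit and false-alarm rates. A small sketch of that computation using scipy; the rates below are hypothetical, not values from the study:
```python
from scipy.stats import norm

# Sketch: sensitivity (d') and criterion (c) from hit and false-alarm rates.
# d' = z(H) - z(FA); a non-zero criterion indicates response bias. Rates are hypothetical.
def dprime_and_criterion(hit_rate: float, fa_rate: float) -> tuple:
    z_h, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_h - z_fa
    criterion = -0.5 * (z_h + z_fa)   # negative c = bias toward reporting "different"
    return d_prime, criterion

d, c = dprime_and_criterion(hit_rate=0.80, fa_rate=0.30)
print(f"d' = {d:.2f}, criterion c = {c:.2f}")
```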
Heffner, Henry E; Heffner, Rickye S
2018-01-01
Branstetter and his colleagues present the audiograms of eight killer whales and provide a comprehensive review of previous killer whale audiograms. In their paper, they say that the present authors have reported a relationship between size and high-frequency hearing but that echolocating cetaceans might be a special case. The purpose of these comments is to clarify that the relationship of a species' high-frequency hearing is not to its size (mass) but to its "functional interaural distance" (a measure of the availability of sound-localization cues). Moreover, it has previously been noted that echolocating animals, cetaceans as well as bats, have extended their high-frequency hearing somewhat beyond the frequencies used by comparable non-echolocators for passive localization.
The Usefulness of Rectified VEMP
Kim, Min Soo; Son, Eun Jin; Lim, Hye Jin; Bang, Jung Hwan; Kang, Jae Goo
2008-01-01
Objectives For a reliable interpretation of the left-right difference in the vestibular evoked myogenic potential (VEMP), the amount of sternocleidomastoid muscle (SCM) contraction has to be considered. Only then can we be confident that a difference in amplitude between the right and left VEMPs of a patient is due to a vestibular abnormality rather than to individual differences in tonic muscle activity, fatigue, or improper positioning. We used rectification to normalize the electromyogram (EMG) based on pre-stimulus EMG activity. This study was designed to evaluate and compare the effect of rectification for two conventional methods of SCM contraction. Methods Twenty-two normal subjects were included. Two methods of SCM contraction were employed in each subject. In the first, subjects lay flat on their backs, lifting the head off the table and turning it to the opposite side. In the second, subjects pushed with their jaw against a hand-held inflated cuff to generate a cuff pressure of 40 mmHg. From the VEMP waveforms, amplitude parameters and the inter-aural difference ratio (IADR) were analyzed before and after EMG rectification. Results Before rectification, the average IADR of the first method did not differ statistically from that of the second method. The average IADRs for both methods decreased after rectification, showing a significant reduction in the asymmetry ratio. The lowest average IADR was obtained with the combination of the first method and rectification. Conclusion Rectified data yield a more reliable IADR and may help in diagnosing some vestibular disorders based on amplitude-associated parameters. The benefit of rectification can be maximized with the proper SCM contraction method. PMID:19434246
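The rectification described here normalizes each ear's p13-n23 amplitude by the level of background SCM activity in the pre-stimulus window, after which the inter-aural difference ratio (IADR) is recomputed. A schematic sketch under the assumption that the normalizer is the mean rectified pre-stimulus EMG; the signals and amplitudes are placeholders:
```python
import numpy as np

# Sketch: normalize VEMP amplitude by pre-stimulus EMG activity, then compute the
# inter-aural difference ratio (IADR). Normalization by mean rectified pre-stimulus
# EMG is an assumption; all signals and amplitudes below are placeholders.
def rectified_amplitude(p13_n23_uv: float, prestim_emg_uv: np.ndarray) -> float:
    background = np.mean(np.abs(prestim_emg_uv))     # mean rectified pre-stimulus EMG
    return p13_n23_uv / background                   # unitless, contraction-corrected amplitude

def iadr(amp_right: float, amp_left: float) -> float:
    return abs(amp_right - amp_left) / (amp_right + amp_left)

rng = np.random.default_rng(1)
pre_r, pre_l = rng.normal(0, 60, 2000), rng.normal(0, 90, 2000)   # hypothetical EMG traces (uV)
raw_r, raw_l = 30.0, 40.0                                         # hypothetical raw p13-n23 amplitudes (uV)
print("raw IADR      :", round(iadr(raw_r, raw_l), 2))
print("rectified IADR:", round(iadr(rectified_amplitude(raw_r, pre_r),
                                    rectified_amplitude(raw_l, pre_l)), 2))
```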
Figure-ground in a dichotic task and its relation to untrained skills.
Cibian, Aline Priscila; Pereira, Liliane Desgualdo
2015-01-01
To evaluate the effectiveness of auditory training on a dichotic task and to compare the responses for trained skills with those for untrained skills after 4-8 weeks. Nineteen subjects, aged 12-15 years, underwent auditory training based on the dichotic interaural intensity difference (DIID), organized in eight sessions of 50 min each. The assessment of auditory processing was conducted in three stages: before the intervention, in the middle of the training, and at the end of the training. Data from this evaluation were analyzed by group of disorder, according to which of the evaluated auditory processes were altered: selective attention, temporal processing, or both. The groups were accordingly named the selective attention group (SAG), the temporal processing group (TPG), and the selective attention and temporal processing group (SATPG). The training improved both the trained skill and the untrained closing skill, normalizing all individuals. The untrained temporal resolution and temporal ordering skills did not reach normality in the SATPG and TPG. Individuals reached normality for the trained figure-ground skill and for the untrained closing skill; the untrained temporal resolution and temporal ordering skills improved in some individuals but failed to reach normality.
Detecting and Quantifying Topography in Neural Maps
Yarrow, Stuart; Razak, Khaleel A.; Seitz, Aaron R.; Seriès, Peggy
2014-01-01
Topographic maps are an often-encountered feature in the brains of many species, yet there are no standard, objective procedures for quantifying topography. Topographic maps are typically identified and described subjectively, but in cases where the scale of the map is close to the resolution limit of the measurement technique, identifying the presence of a topographic map can be a challenging subjective task. In such cases, an objective topography detection test would be advantageous. To address these issues, we assessed seven measures (Pearson distance correlation, Spearman distance correlation, Zrehen's measure, topographic product, topological correlation, path length and wiring length) by quantifying topography in three classes of cortical map model: linear, orientation-like, and clusters. We found that all but one of these measures were effective at detecting statistically significant topography even in weakly-ordered maps, based on simulated noisy measurements of neuronal selectivity and sparse sampling of the maps. We demonstrate the practical applicability of these measures by using them to examine the arrangement of spatial cue selectivity in pallid bat A1. This analysis shows that significantly topographic arrangements of interaural intensity difference and azimuth selectivity exist at the scale of individual binaural clusters. PMID:24505279
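Several of the measures listed compare pairwise distances in anatomical space with pairwise distances in stimulus-preference space; a topographic map yields a high correlation between the two. The sketch below is a generic distance-correlation score with a permutation test in that spirit, not a line-for-line implementation of the seven measures evaluated in the paper:
```python
import numpy as np
from scipy.stats import pearsonr
from scipy.spatial.distance import pdist

# Sketch: score topography as the Pearson correlation between pairwise cortical
# distances and pairwise preferred-value distances, with a permutation test.
rng = np.random.default_rng(0)

def topography_score(positions: np.ndarray, preferences: np.ndarray) -> float:
    r, _ = pearsonr(pdist(positions), pdist(preferences[:, None]))
    return float(r)

def permutation_p(positions, preferences, n_perm=200) -> float:
    observed = topography_score(positions, preferences)
    null = [topography_score(positions, rng.permutation(preferences)) for _ in range(n_perm)]
    return float(np.mean([s >= observed for s in null]))

# Weakly ordered toy map: preference increases along one cortical axis plus noise.
pos = rng.uniform(0, 1, size=(60, 2))                       # recording sites in cortical space
pref = pos[:, 0] * 90 + rng.normal(0, 20, size=60)          # e.g. noisy azimuth preference (deg)
print("score:", round(topography_score(pos, pref), 2), " p:", permutation_p(pos, pref))
```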
Glackin, Brendan; Wall, Julie A.; McGinnity, Thomas M.; Maguire, Liam P.; McDaid, Liam J.
2010-01-01
Sound localization can be defined as the ability to identify the position of an input sound source and is considered a powerful aspect of mammalian perception. For low frequency sounds, i.e., in the range 270 Hz–1.5 kHz, the mammalian auditory pathway achieves this by extracting the Interaural Time Difference between sound signals being received by the left and right ear. This processing is performed in a region of the brain known as the Medial Superior Olive (MSO). This paper presents a Spiking Neural Network (SNN) based model of the MSO. The network model is trained using the Spike Timing Dependent Plasticity learning rule using experimentally observed Head Related Transfer Function data in an adult domestic cat. The results presented demonstrate how the proposed SNN model is able to perform sound localization with an accuracy of 91.82% when an error tolerance of ±10° is used. For angular resolutions down to 2.5°, it will be demonstrated how software based simulations of the model incur significant computation times. The paper thus also addresses preliminary implementation on a Field Programmable Gate Array based hardware platform to accelerate system performance. PMID:20802855
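The learning rule named here, spike-timing-dependent plasticity (STDP), strengthens a synapse when a presynaptic spike shortly precedes a postsynaptic spike and weakens it for the opposite order, usually with exponentially decaying windows. A generic pair-based sketch; the amplitudes, time constants and the HRTF-driven input coding of the published MSO model are not reproduced here:
```python
import numpy as np

# Sketch: generic pair-based STDP weight update with exponential windows.
# Parameters below are illustrative, not those of the published MSO model.
A_PLUS, A_MINUS = 0.01, 0.012       # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 0.020, 0.020  # window time constants, seconds

def stdp_dw(t_pre: float, t_post: float) -> float:
    """Weight change for one pre/post spike pair (spike times in seconds)."""
    dt = t_post - t_pre
    if dt >= 0:                                   # pre before post -> potentiation
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    return -A_MINUS * np.exp(dt / TAU_MINUS)      # post before pre -> depression

w = 0.5                                           # initial synaptic weight
for t_pre, t_post in [(0.010, 0.012), (0.050, 0.048), (0.100, 0.130)]:
    w = np.clip(w + stdp_dw(t_pre, t_post), 0.0, 1.0)
    print(f"pre = {t_pre*1e3:.0f} ms, post = {t_post*1e3:.0f} ms -> w = {w:.3f}")
```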
Interaction of Object Binding Cues in Binaural Masking Pattern Experiments.
Verhey, Jesko L; Lübken, Björn; van de Par, Steven
2016-01-01
Object binding cues such as binaural and across-frequency modulation cues are likely to be used by the auditory system to separate sounds from different sources in complex auditory scenes. The present study investigates the interaction of these cues in a binaural masking pattern paradigm where a sinusoidal target is masked by a narrowband noise. It was hypothesised that beating between signal and masker may contribute to signal detection when signal and masker do not spectrally overlap, but that this cue could not be used in combination with interaural cues. To test this hypothesis, an additional sinusoidal interferer with a frequency below that of the noise was added to the noise masker, whereas the target had a frequency above that of the noise. Thresholds increase when the interferer is added. This effect is largest when the spectral interferer-masker and masker-target distances are equal. The result supports the hypothesis that modulation cues contribute to signal detection in the classical masking paradigm and that these are analysed with modulation bandpass filters. A monaural model including an across-frequency modulation process is presented that accounts for this effect. Interestingly, the interferer also affects dichotic thresholds, indicating that modulation cues also play a role in binaural processing.
NASA Astrophysics Data System (ADS)
Shin, Ki Hoon; Park, Youngjin
The human ability to perceive the elevation of a sound and to distinguish whether a sound comes from the front or the rear depends strongly on the monaural spectral features imposed by the pinnae. To realize an effective virtual auditory display through HRTF (head-related transfer function) customization, the pinna responses were isolated from the median-plane HRIRs (head-related impulse responses) of 45 individual HRIRs in the CIPIC HRTF database and modeled as linear combinations of 4 or 5 basic temporal shapes (basis functions) per elevation on the median plane by PCA (principal components analysis) in the time domain. By tuning the weights of the basis functions computed for a specific height, replacing the pinna response in the KEMAR HRIR at the same height with the resulting customized pinna response, and listening to the filtered stimuli over headphones, 4 individuals with normal hearing sensitivity were able to create a set of HRIRs that outperformed the KEMAR HRIRs in producing vertical effects with reduced front/back ambiguity in the median plane. Since the monaural spectral features of the pinnae are almost independent of azimuthal variation of the source direction, similar vertical effects could also be generated at different azimuthal directions simply by varying the ITD (interaural time difference) according to the direction as well as the size of each individual's own head.
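The modelling step described, expressing each isolated pinna response as a weighted sum of a few time-domain principal components, can be sketched with a plain SVD. The random arrays below merely stand in for the CIPIC pinna responses; only the 45 subjects and the 4-5 retained components per elevation are taken from the abstract:
```python
import numpy as np

# Sketch: time-domain PCA of isolated pinna impulse responses and low-rank
# reconstruction / customization. Data are random placeholders for CIPIC HRIRs.
rng = np.random.default_rng(0)
n_subjects, n_taps = 45, 64
pinna_irs = rng.normal(size=(n_subjects, n_taps))     # isolated pinna responses, one elevation

mean_ir = pinna_irs.mean(axis=0)
centered = pinna_irs - mean_ir
u, s, vt = np.linalg.svd(centered, full_matrices=False)   # rows of vt = temporal basis functions

k = 5                                                  # number of basis shapes kept per elevation
basis = vt[:k]                                         # (k, n_taps) temporal basis functions
weights = centered @ basis.T                           # (n_subjects, k) per-subject weights

# Reconstruct one subject, or "customize" by hand-tuning the k weights.
recon = mean_ir + weights[0] @ basis
custom = mean_ir + np.array([1.2, -0.4, 0.8, 0.0, 0.3]) @ basis   # hypothetical tuned weights
print(recon.shape, custom.shape)                        # (64,) (64,)
```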
Contextual effects on preattentive processing of sound motion as revealed by spatial MMN.
Shestopalova, L B; Petropavlovskaia, E A; Vaitulevich, S Ph; Nikitin, N I
2015-04-01
The magnitude of spatial distance between sound stimuli is critically important for their preattentive discrimination, yet the effect of stimulus context on auditory motion processing is not clear. This study investigated the effects of acoustical change and stimulus context on preattentive spatial change detection. Auditory event-related potentials (ERPs) were recorded for stationary midline noises and two patterns of sound motion produced by linear or abrupt changes of interaural time differences. Each of the three types of stimuli was used as standard or deviant in different blocks. Context effects on mismatch negativity (MMN) elicited by stationary and moving sound stimuli were investigated by reversing the role of standard and deviant stimuli, while the acoustical stimulus parameters were kept the same. That is, MMN amplitudes were calculated by subtracting ERPs to identical stimuli presented as standard in one block and deviant in another block. In contrast, effects of acoustical change on MMN amplitudes were calculated by subtracting ERPs of standards and deviants presented within the same block. Preattentive discrimination of moving and stationary sounds indexed by MMN was strongly dependent on the stimulus context. Higher MMNs were produced in oddball configurations where deviance represented increments of the sound velocity, as compared to configurations with velocity decrements. The effect of standard-deviant reversal was more pronounced with the abrupt sound displacement than with gradual sound motion. Copyright © 2015 Elsevier B.V. All rights reserved.
Frey, Johannes Daniel; Wendt, Mike; Löw, Andreas; Möller, Stephan; Zölzer, Udo; Jacobsen, Thomas
2017-02-15
Changes in room acoustics provide important clues about the environment of sound source-perceiver systems, for example, by indicating changes in the reflecting characteristics of surrounding objects. To study the detection of auditory irregularities brought about by a change in room acoustics, a passive oddball protocol with participants watching a movie was applied in this study. Acoustic stimuli were presented via headphones. Standards and deviants were created by modelling rooms of different sizes, keeping the values of the basic acoustic dimensions (e.g., frequency, duration, sound pressure, and sound source location) as constant as possible. In the first experiment, each standard and deviant stimulus consisted of sequences of three short sounds derived from sinusoidal tones, resulting in three onsets during each stimulus. Deviant stimuli elicited a Mismatch Negativity (MMN) as well as two additional negative deflections corresponding to the three onset peaks. In the second experiment, only one sound was used; the stimuli were otherwise identical to the ones used in the first experiment. Again, an MMN was observed, followed by an additional negative deflection. These results provide further support for the hypothesis of automatic detection of unattended changes in room acoustics, extending previous work by demonstrating the elicitation of an MMN by changes in room acoustics. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Binaural Pitch Fusion in Bilateral Cochlear Implant Users.
Reiss, Lina A J; Fowler, Jennifer R; Hartling, Curtis L; Oh, Yonghee
Binaural pitch fusion is the fusion of stimuli that evoke different pitches between the ears into a single auditory image. Individuals who use hearing aids or bimodal cochlear implants (CIs) experience abnormally broad binaural pitch fusion, such that sounds differing in pitch by as much as 3-4 octaves are fused across ears, leading to spectral averaging and speech perception interference. The goal of this study was to determine if adult bilateral CI users also experience broad binaural pitch fusion. Stimuli were pulse trains delivered to individual electrodes. Fusion ranges were measured using simultaneous, dichotic presentation of reference and comparison stimuli in opposite ears, and varying the comparison stimulus to find the range that fused with the reference stimulus. Bilateral CI listeners had binaural pitch fusion ranges varying from 0 to 12 mm (average 6.1 ± 3.9 mm), where 12 mm indicates fusion over all electrodes in the array. No significant correlations of fusion range were observed with any subject factors related to age, hearing loss history, or hearing device history, or with any electrode factors including interaural electrode pitch mismatch, pitch match bandwidth, or within-ear electrode discrimination abilities. Bilateral CI listeners have abnormally broad fusion, similar to hearing aid and bimodal CI listeners. This broad fusion may explain the variability of binaural benefits for speech perception in quiet and in noise in bilateral CI users.
NASA Technical Reports Server (NTRS)
Angelaki, D. E.; Hess, B. J.
1996-01-01
1. The dynamic properties of otolith-ocular reflexes elicited by sinusoidal linear acceleration along the three cardinal head axes were studied during off-vertical axis rotations in rhesus monkeys. As the head rotates in space at constant velocity about an off-vertical axis, otolith-ocular reflexes are elicited in response to the sinusoidally varying linear acceleration (gravity) components along the interaural, nasooccipital, or vertical head axis. Because the frequency of these sinusoidal stimuli is proportional to the velocity of rotation, rotation at low and moderately fast speeds allows the study of the mid-and low-frequency dynamics of these otolith-ocular reflexes. 2. Animals were rotated in complete darkness in the yaw, pitch, and roll planes at velocities ranging between 7.4 and 184 degrees/s. Accordingly, otolith-ocular reflexes (manifested as sinusoidal modulations in eye position and/or slow-phase eye velocity) were quantitatively studied for stimulus frequencies ranging between 0.02 and 0.51 Hz. During yaw and roll rotation, torsional, vertical, and horizontal slow-phase eye velocity was sinusoidally modulated as a function of head position. The amplitudes of these responses were symmetric for rotations in opposite directions. In contrast, mainly vertical slow-phase eye velocity was modulated during pitch rotation. This modulation was asymmetric for rotations in opposite direction. 3. Each of these response components in a given rotation plane could be associated with an otolith-ocular response vector whose sensitivity, temporal phase, and spatial orientation were estimated on the basis of the amplitude and phase of sinusoidal modulations during both directions of rotation. Based on this analysis, which was performed either for slow-phase eye velocity alone or for total eye excursion (including both slow and fast eye movements), two distinct response patterns were observed: 1) response vectors with pronounced dynamics and spatial/temporal properties that could be characterized as the low-frequency range of "translational" otolith-ocular reflexes; and 2) response vectors associated with an eye position modulation in phase with head position ("tilt" otolith-ocular reflexes). 4. The responses associated with two otolith-ocular vectors with pronounced dynamics consisted of horizontal eye movements evoked as a function of gravity along the interaural axis and vertical eye movements elicited as a function of gravity along the vertical head axis. Both responses were characterized by a slow-phase eye velocity sensitivity that increased three- to five-fold and large phase changes of approximately 100-180 degrees between 0.02 and 0.51 Hz. These dynamic properties could suggest nontraditional temporal processing in utriculoocular and sacculoocular pathways, possibly involving spatiotemporal otolith-ocular interactions. 5. The two otolith-ocular vectors associated with eye position responses in phase with head position (tilt otolith-ocular reflexes) consisted of torsional eye movements in response to gravity along the interaural axis, and vertical eye movements in response to gravity along the nasooccipital head axis. These otolith-ocular responses did not result from an otolithic effect on slow eye movements alone. Particularly at high frequencies (i.e., high speed rotations), saccades were responsible for most of the modulation of torsional and vertical eye position, which was relatively large (on average +/- 8-10 degrees/g) and remained independent of frequency. 
Such reflex dynamics can be simulated by a direct coupling of primary otolith afferent inputs to the oculomotor plant. (ABSTRACT TRUNCATED).
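Sensitivities and temporal phases of the kind reported above are typically estimated by fitting a sinusoid at the stimulus frequency to the eye-velocity (or eye-position) trace. A least-squares sketch under that assumption, using synthetic data rather than the recorded responses:
```python
import numpy as np

# Sketch: estimate amplitude (sensitivity) and phase of a sinusoidal modulation
# by linear least squares on sine/cosine regressors. Data below are synthetic.
rng = np.random.default_rng(0)
f = 0.2                                      # modulation frequency, Hz (synthetic)
t = np.arange(0, 30, 0.01)                   # 30 s of samples
true_amp, true_phase = 12.0, np.deg2rad(40)  # deg/s and radians
eye_velocity = true_amp * np.sin(2 * np.pi * f * t + true_phase) + rng.normal(0, 2, t.size)

X = np.column_stack([np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t), np.ones_like(t)])
a, b, offset = np.linalg.lstsq(X, eye_velocity, rcond=None)[0]
amp = np.hypot(a, b)                         # A*sin(wt+phi) = A*cos(phi)*sin(wt) + A*sin(phi)*cos(wt)
phase = np.arctan2(b, a)                     # phase relative to sin(2*pi*f*t)
print(f"amplitude ~ {amp:.1f} deg/s, phase ~ {np.degrees(phase):.1f} deg")
```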
Lüddemann, Helge; Kollmeier, Birger; Riedel, Helmut
2016-02-01
Brief deviations of interaural correlation (IAC) can provide valuable cues for detection, segregation and localization of acoustic signals. This study investigated the processing of such "binaural gaps" in continuously running noise (100-2000 Hz), in comparison to silent "monaural gaps", by measuring late auditory evoked potentials (LAEPs) and perceptual thresholds with novel, iteratively optimized stimuli. Mean perceptual binaural gap duration thresholds exhibited a major asymmetry: they were substantially shorter for uncorrelated gaps in correlated and anticorrelated reference noise (1.75 ms and 4.1 ms) than for correlated and anticorrelated gaps in uncorrelated reference noise (26.5 ms and 39.0 ms). The thresholds also showed a minor asymmetry: they were shorter in the positive than in the negative IAC range. The mean behavioral threshold for monaural gaps was 5.5 ms. For all five gap types, the amplitude of LAEP components N1 and P2 increased linearly with the logarithm of gap duration. While perceptual and electrophysiological thresholds matched for monaural gaps, LAEP thresholds were about twice as long as perceptual thresholds for uncorrelated gaps, but half as long for correlated and anticorrelated gaps. Nevertheless, LAEP thresholds showed the same asymmetries as perceptual thresholds. For gap durations below 30 ms, LAEPs were dominated by the processing of the leading edge of a gap. For longer gap durations, in contrast, both the leading and the lagging edge of a gap contributed to the evoked response. Formulae for the equivalent rectangular duration (ERD) of the binaural system's temporal window were derived for three common window shapes. The psychophysical ERD was 68 ms for diotic and about 40 ms for anti- and uncorrelated noise. After a nonlinear Z-transform of the stimulus IAC prior to temporal integration, ERDs were about 10 ms for reference correlations of ±1 and 80 ms for uncorrelated reference. Hence, a physiologically motivated peripheral nonlinearity changed the rank order of ERDs across experimental conditions in a plausible manner. Copyright © 2015 Elsevier B.V. All rights reserved.
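Two quantities in this summary can be made concrete: the nonlinear Z-transform of interaural correlation, taken here to be an atanh (Fisher-type) compression, which is an assumption, and the equivalent rectangular duration (ERD) of a temporal window, using the common definition of the duration of a rectangle with the same peak and the same area as the window. The window shape and time constant below are illustrative only:
```python
import numpy as np

# Sketch: (1) Fisher-type z-transform of interaural correlation (an assumption about
# the "nonlinear Z-transform"); (2) equivalent rectangular duration of an assumed
# one-sided exponential temporal window, defined as area / peak.
def z_transform(iac: float) -> float:
    return np.arctanh(np.clip(iac, -0.9999, 0.9999))   # expands values near +/-1

tau = 0.020                                  # assumed window time constant, seconds
dt = 1e-4
t = np.arange(0, 0.5, dt)
window = np.exp(-t / tau)                    # one-sided exponential window, peak = 1
erd = window.sum() * dt / window.max()       # equals ~tau for this window shape
print(f"ERD = {erd * 1e3:.1f} ms")           # ~20 ms under these assumptions
print(f"z(0.0) = {z_transform(0.0):.2f}, z(0.99) = {z_transform(0.99):.2f}")
```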
Owren, M J; Hopp, S L; Sinnott, J M; Petersen, M R
1988-06-01
We investigated the absolute auditory sensitivities of three monkey species (Cercopithecus aethiops, C. neglectus, and Macaca fuscata) and humans (Homo sapiens). Results indicated that species-typical variation exists in these primates. Vervets, which have the smallest interaural distance of the species that we tested, exhibited the greatest high-frequency sensitivity. This result is consistent with Masterton, Heffner, and Ravizza's (1969) observations that head size and high-frequency acuity are inversely correlated in mammals. Vervets were also the most sensitive in the middle frequency range. Furthermore, we found that de Brazza's monkeys, though they produce a specialized, low-pitched boom call, did not show the enhanced low-frequency sensitivity that Brown and Waser (1984) showed for blue monkeys (C. mitis), a species with a similar sound. This discrepancy may be related to differences in the acoustics of the respective habitats of these animals or in the way their boom calls are used. The acuity of Japanese monkeys was found to closely resemble that of rhesus macaques (M. mulatta) that were tested in previous studies. Finally, humans tested in the same apparatus exhibited normative sensitivities. These subjects responded more readily to low frequencies than did the monkeys but rapidly became less sensitive in the high ranges.
Strategies to combat auditory overload during vehicular command and control.
Abel, Sharon M; Ho, Geoffrey; Nakashima, Ann; Smith, Ingrid
2014-09-01
Strategies to combat auditory overload were studied. Normal-hearing males were tested in a sound isolated room in a mock-up of a military land vehicle. Two tasks were presented concurrently, in quiet and vehicle noise. For Task 1 dichotic phrases were delivered over a communications headset. Participants encoded only those beginning with a preassigned call sign (Baron or Charlie). For Task 2, they agreed or disagreed with simple equations presented either over loudspeakers, as text on the laptop monitor, in both the audio and the visual modalities, or not at all. Accuracy was significantly better by 20% on Task 2 when the equations were presented visually or audiovisually. Scores were at least 78% correct for dichotic phrases presented over the headset, with a right ear advantage of 7%, given the 5 dB speech-to-noise ratio. The left ear disadvantage was particularly apparent in noise, where the interaural difference was 12%. Relatively lower scores in the left ear, in noise, were observed for phrases beginning with Charlie. These findings underscore the benefit of delivering higher priority communications to the dominant ear, the importance of selecting speech sounds that are resilient to noise masking, and the advantage of using text in cases of degraded audio. Reprint & Copyright © 2014 Association of Military Surgeons of the U.S.
Dong, Junzi; Colburn, H. Steven; Sen, Kamal
2016-01-01
In multisource, “cocktail party” sound environments, human and animal auditory systems can use spatial cues to effectively separate and follow one source of sound over competing sources. While mechanisms to extract spatial cues such as interaural time differences (ITDs) are well understood in precortical areas, how such information is reused and transformed in higher cortical regions to represent segregated sound sources is not clear. We present a computational model describing a hypothesized neural network that spans spatial cue detection areas and the cortex. This network is based on recent physiological findings that cortical neurons selectively encode target stimuli in the presence of competing maskers based on source locations (Maddox et al., 2012). We demonstrate that key features of cortical responses can be generated by the model network, which exploits spatial interactions between inputs via lateral inhibition, enabling the spatial separation of target and interfering sources while allowing monitoring of a broader acoustic space when there is no competition. We present the model network along with testable experimental paradigms as a starting point for understanding the transformation and organization of spatial information from midbrain to cortex. This network is then extended to suggest engineering solutions that may be useful for hearing-assistive devices in solving the cocktail party problem. PMID:26866056
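The core ingredient of the model, spatially tuned channels competing through lateral inhibition, can be illustrated with a small firing-rate toy in which surround inhibition deepens the trough between two spatially separated activity peaks, making a target and an interferer easier to segregate by location. This is a schematic toy with arbitrary tuning widths and weights, not the published network:
```python
import numpy as np

# Toy sketch: lateral (surround) inhibition enhances the separation between two
# spatially tuned activity peaks. All tuning widths and weights are arbitrary.
azimuths = np.arange(-90, 91, 15)                       # channel preferred azimuths (deg)

def drive(source_az: float, width: float = 20.0) -> np.ndarray:
    """Gaussian spatial tuning of the input channels to a single source."""
    return np.exp(-0.5 * ((azimuths - source_az) / width) ** 2)

def lateral_inhibition(x: np.ndarray, strength: float = 0.6) -> np.ndarray:
    neighbors = 0.5 * (np.roll(x, 1) + np.roll(x, -1))  # average activity of the two neighbors
    return np.maximum(x - strength * neighbors, 0.0)    # rectified output rates (a.u.)

mix = drive(0.0) + drive(60.0)                          # target at 0 deg, interferer at 60 deg
out = lateral_inhibition(mix)

def peak_to_trough(x: np.ndarray) -> float:
    trough = x[np.where(azimuths == 30)[0][0]]          # midpoint between the two sources
    return float(x.max() / trough)

print(f"peak/trough before inhibition: {peak_to_trough(mix):.2f}")
print(f"peak/trough after inhibition : {peak_to_trough(out):.2f}")
```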
Todd, Ann E.; Goupell, Matthew J.; Litovsky, Ruth Y.
2016-01-01
Cochlear implants (CIs) provide children with access to speech information from a young age. Despite bilateral cochlear implantation becoming common, use of spatial cues in free field is smaller than in normal-hearing children. Clinically fit CIs are not synchronized across the ears; thus binaural experiments must utilize research processors that can control binaural cues with precision. Research to date has used single pairs of electrodes, which is insufficient for representing speech. Little is known about how children with bilateral CIs process binaural information with multi-electrode stimulation. Toward the goal of improving binaural unmasking of speech, this study evaluated binaural unmasking with multi- and single-electrode stimulation. Results showed that performance with multi-electrode stimulation was similar to the best performance with single-electrode stimulation. This was similar to the pattern of performance shown by normal-hearing adults when presented an acoustic CI simulation. Diotic and dichotic signal detection thresholds of the children with CIs were similar to those of normal-hearing children listening to a CI simulation. The magnitude of binaural unmasking was not related to whether the children with CIs had good interaural time difference sensitivity. Results support the potential for benefits from binaural hearing and speech unmasking in children with bilateral CIs. PMID:27475132
Vonderschen, Katrin; Wagner, Hermann
2012-04-25
Birds and mammals exploit interaural time differences (ITDs) for sound localization. Subsequent to ITD detection by brainstem neurons, ITD processing continues in parallel midbrain and forebrain pathways. In the barn owl, both ITD detection and processing in the midbrain are specialized to extract ITDs independent of frequency, which amounts to a pure time delay representation. Recent results have elucidated different mechanisms of ITD detection in mammals, which lead to a representation of small ITDs in high-frequency channels and large ITDs in low-frequency channels, resembling a phase delay representation. However, the detection mechanism does not prevent a change in ITD representation at higher processing stages. Here we analyze ITD tuning across frequency channels with pure tone and noise stimuli in neurons of the barn owl's auditory arcopallium, a nucleus at the endpoint of the forebrain pathway. To extend the analysis of ITD representation across frequency bands to a large neural population, we employed Fourier analysis for the spectral decomposition of ITD curves recorded with noise stimuli. This method was validated using physiological as well as model data. We found that low frequencies convey sensitivity to large ITDs, whereas high frequencies convey sensitivity to small ITDs. Moreover, different linear phase frequency regimes in the high-frequency and low-frequency ranges suggested an independent convergence of inputs from these frequency channels. Our results are consistent with ITD being remodeled toward a phase delay representation along the forebrain pathway. This indicates that sensory representations may undergo substantial reorganization, presumably in relation to specific behavioral output.
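The spectral decomposition described works because the ITD axis of a noise-delay curve is a time-lag axis, so its Fourier transform directly exposes the frequency channels contributing to the tuning. A schematic sketch with a synthetic ITD curve built from a low-frequency component tuned to a large ITD and a high-frequency component tuned to a small ITD, mirroring the population result:
```python
import numpy as np

# Sketch: Fourier decomposition of a noise-delay (rate vs. ITD) curve.
# The ITD axis is a lag axis, so the FFT frequency axis is in Hz. Data are synthetic.
itd = np.arange(-2000e-6, 2000e-6, 20e-6)            # ITD axis, seconds (20 us steps)
rate = (1.0 * np.cos(2 * np.pi * 1000 * (itd - 600e-6))    # low frequency, large best ITD
        + 0.8 * np.cos(2 * np.pi * 4000 * (itd - 50e-6)))  # high frequency, small best ITD

spectrum = np.fft.rfft(rate - rate.mean())
freqs = np.fft.rfftfreq(itd.size, d=20e-6)            # Hz
dominant = freqs[np.argsort(np.abs(spectrum))[-2:]]   # two strongest frequency components
print("dominant frequency components (Hz):", np.sort(dominant))   # [1000. 4000.]
```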
Auditory pathway maturational study in small for gestational age preterm infants.
Angrisani, Rosanna Giaffredo; Diniz, Edna Maria Albuquerque; Guinsburg, Ruth; Ferraro, Alexandre Archanjo; Azevedo, Marisa Frasson de; Matas, Carla Gentile
2014-01-01
To follow the maturation of the auditory pathway in preterm infants born small for gestational age (SGA), through the study of absolute and interpeak latencies of the auditory brainstem response (ABR) over the first six months of life. This multicentric, prospective, cross-sectional and longitudinal study assessed 76 newborn infants, 35 SGA and 41 appropriate for gestational age (AGA), born between 33 and 36 weeks, in the first evaluation. The ABR was carried out at three time points (neonatal period, three months and six months). Twenty-nine SGA and 33 AGA infants (62 in total), aged between 51 and 54 weeks (corrected age), returned for the second evaluation. In the third evaluation, 49 infants (23 SGA and 26 AGA), with corrected ages ranging from 63 to 65 weeks, were assessed. The bilateral presence of transient evoked otoacoustic emissions and a normal tympanogram were inclusion criteria. Interaural symmetry was found in both groups. The comparison between the two groups across the three periods studied showed no significant differences in the ABR parameters, except for the latency of wave III in the period between three and six months. Regarding maturation assessed with 0.5 and 1 kHz tone bursts, the groups did not differ. The findings suggest that, in premature infants, the maturational process of the auditory pathway occurs at a similar rate in SGA and AGA infants. These results also suggest that prematurity is a more relevant factor for the maturation of the auditory pathway than birth weight.
Sound localization by echolocating bats
NASA Astrophysics Data System (ADS)
Aytekin, Murat
Echolocating bats emit ultrasonic vocalizations and listen to echoes reflected back from objects in the path of the sound beam to build a spatial representation of their surroundings. Important to understanding the representation of space through echolocation are detailed studies of the cues used for localization, the sonar emission patterns and how this information is assembled. This thesis includes three studies, one on the directional properties of the sonar receiver, one on the directional properties of the sonar transmitter, and a model that demonstrates the role of action in building a representation of auditory space. The general importance of this work to a broader understanding of spatial localization is discussed. Investigations of the directional properties of the sonar receiver reveal that interaural level difference and monaural spectral notch cues are both dependent on sound source azimuth and elevation. This redundancy allows flexibility that an echolocating bat may need when coping with complex computational demands for sound localization. Using a novel method to measure bat sonar emission patterns from freely behaving bats, I show that the sonar beam shape varies between vocalizations. Consequently, the auditory system of a bat may need to adapt its computations to accurately localize objects using changing acoustic inputs. Extra-auditory signals that carry information about pinna position and beam shape are required for auditory localization of sound sources. The auditory system must learn associations between extra-auditory signals and acoustic spatial cues. Furthermore, the auditory system must adapt to changes in acoustic input that occur with changes in pinna position and vocalization parameters. These demands on the nervous system suggest that sound localization is achieved through the interaction of behavioral control and acoustic inputs. A sensorimotor model demonstrates how an organism can learn space through auditory-motor contingencies. The model also reveals how different aspects of sound localization, such as experience-dependent acquisition, adaptation, and extra-auditory influences, can be brought together under a comprehensive framework. This thesis presents a foundation for understanding the representation of auditory space that builds upon acoustic cues, motor control, and learning dynamic associations between action and auditory inputs.
Spike-frequency adaptation in the inferior colliculus.
Ingham, Neil J; McAlpine, David
2004-02-01
We investigated spike-frequency adaptation of neurons sensitive to interaural phase disparities (IPDs) in the inferior colliculus (IC) of urethane-anesthetized guinea pigs using a stimulus paradigm designed to exclude the influence of adaptation below the level of binaural integration. The IPD-step stimulus consists of a binaural 3,000-ms tone, in which the first 1,000 ms is held at a neuron's least favorable ("worst") IPD, adapting out monaural components, before being stepped rapidly to a neuron's most favorable ("best") IPD for 300 ms. After some variable interval (1-1,000 ms), IPD is again stepped to the best IPD for 300 ms, before being returned to a neuron's worst IPD for the remainder of the stimulus. Exponential decay functions fitted to the response to best-IPD steps revealed an average adaptation time constant of 52.9 +/- 26.4 ms. Recovery from adaptation to best IPD steps showed an average time constant of 225.5 +/- 210.2 ms. Recovery time constants were not correlated with adaptation time constants. During the recovery period, adaptation to a 2nd best-IPD step followed similar kinetics to adaptation during the 1st best-IPD step. The mean adaptation time constant at stimulus onset (at worst IPD) was 34.8 +/- 19.7 ms, similar to the 38.4 +/- 22.1 ms recorded to contralateral stimulation alone. Individual time constants after stimulus onset were correlated with each other but not with time constants during the best-IPD step. We conclude that such binaurally derived measures of adaptation reflect processes that occur above the level of exclusively monaural pathways, and subsequent to the site of primary binaural interaction.
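For readers who want to reproduce this kind of analysis, the sketch below (not the authors' code) fits a single exponential to a hypothetical post-step firing-rate profile to recover an adaptation time constant; the synthetic data, bin width and starting values are illustrative assumptions.

import numpy as np
from scipy.optimize import curve_fit

def exp_decay(t, r_inf, r0, tau):
    # Rate that starts at r0 and decays toward r_inf with time constant tau (ms).
    return r_inf + (r0 - r_inf) * np.exp(-t / tau)

t = np.arange(0.0, 300.0, 1.0)                                      # ms after the step to best IPD (1-ms bins)
rate = 40 + 60 * np.exp(-t / 50) + np.random.normal(0, 3, t.size)   # synthetic peristimulus rates

popt, _ = curve_fit(exp_decay, t, rate, p0=(rate[-1], rate[0], 30.0))
r_inf, r0, tau = popt
print(f"adapted rate {r_inf:.1f} sp/s, onset rate {r0:.1f} sp/s, tau {tau:.1f} ms")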
Learning for pitch and melody discrimination in congenital amusia.
Whiteford, Kelly L; Oxenham, Andrew J
2018-06-01
Congenital amusia is currently thought to be a life-long neurogenetic disorder in music perception, impervious to training in pitch or melody discrimination. This study provides an explicit test of whether amusic deficits can be reduced with training. Twenty amusics and 20 matched controls participated in four sessions of psychophysical training involving either pure-tone (500 Hz) pitch discrimination or a control task of lateralization (interaural level differences for bandpass white noise). Pure-tone pitch discrimination at low, medium, and high frequencies (500, 2000, and 8000 Hz) was measured before and after training (pretest and posttest) to determine the specificity of learning. Melody discrimination was also assessed before and after training using the full Montreal Battery of Evaluation of Amusia, the most widely used standardized test to diagnose amusia. Amusics performed more poorly than controls on pitch discrimination but not on lateralization, yet both groups improved with practice on the trained stimuli. Learning was broad, occurring across all three frequencies and in melody discrimination for all groups, including those who trained on the non-pitch control task. Following training, 11 of 20 amusics no longer met the global diagnostic criteria for amusia. A separate group of untrained controls (n = 20), who also completed the melody discrimination tests and the pretest, improved as much as the trained controls on all measures, suggesting that the bulk of the control group's learning occurred very rapidly as a consequence of the pretest. Thirty-one trained participants (13 amusics) returned one year later to assess long-term maintenance of pitch and melody discrimination. On average, there was no change in performance between posttest and one-year follow-up, demonstrating that improvements on pitch- and melody-related tasks in amusics and controls can be maintained. The findings indicate that amusia is not always a life-long deficit when using the current standard diagnostic criteria. Copyright © 2018 Elsevier Ltd. All rights reserved.
Effects of Various Architectural Parameters on Six Room Acoustical Measures in Auditoria.
NASA Astrophysics Data System (ADS)
Chiang, Wei-Hwa
The effects of architectural parameters on six room acoustical measures were investigated by means of correlation analyses, factor analyses and multiple regression analyses based on data taken in twenty halls. Architectural parameters were used to estimate acoustical measures taken at individual locations within each room as well as the averages and standard deviations of all measured values in the rooms. The six acoustical measures were Early Decay Time (EDT10), Clarity Index (C80), Overall Level (G), Bass Ratio based on Early Decay Time (BR(EDT)), Treble Ratio based on Early Decay Time (TR(EDT)), and Early Inter-aural Cross Correlation (IACC80). A comprehensive method of quantifying various architectural characteristics of rooms was developed to define a large number of architectural parameters that were hypothesized to affect the acoustical measurements made in the rooms. This study quantitatively confirmed many of the principles used in the design of concert halls and auditoria. Three groups of architectural parameters, such as those associated with the depth of diffusing surfaces, were significantly correlated with the hall standard deviations of most of the acoustical measures. Significant differences in the statistical relations between architectural parameters and receiver-specific acoustical measures were found between a group of music halls and a group of lecture halls. For example, architectural parameters such as the relative distance from the receiver to the overhead ceiling increased the percentage of the variance of acoustical measures explained by Barron's revised theory from approximately 70% to 80%, but only when data were taken in the group of music halls. This study revealed the major architectural parameters that have strong relations with individual acoustical measures, forming the basis for a more quantitative method for advancing the theoretical design of concert halls and other auditoria. The results provide designers with the information needed to predict acoustical measures in buildings at very early stages of the design process without using computer models or scale models.
Correlation Factors Describing Primary and Spatial Sensations of Sound Fields
NASA Astrophysics Data System (ADS)
ANDO, Y.
2002-11-01
The theory of subjective preference of the sound field in a concert hall is established based on a model of the human auditory-brain system. The model consists of an autocorrelation function (ACF) mechanism and an interaural cross-correlation function (IACF) mechanism for the signals arriving at the two ear entrances, together with the specialization of the human cerebral hemispheres. This theory can be developed to describe primary sensations such as pitch or missing fundamental, loudness, timbre and, in addition, duration sensation, which is introduced here as a fourth. These four primary sensations may be formulated by the temporal factors extracted from the ACF, associated with the left hemisphere, while spatial sensations such as localization in the horizontal plane, apparent source width and subjective diffuseness are described by the spatial factors extracted from the IACF, associated with the right hemisphere. Any important subjective response to a sound field may be described by both temporal and spatial factors.
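As a minimal illustration of the two correlation mechanisms this framework rests on, the sketch below computes a normalized autocorrelation function and the IACC (the peak of the normalized interaural cross-correlation within plus or minus 1 ms) from a pair of ear-entrance signals; the frame length, sampling rate and signal names are assumptions for illustration, not part of Ando's formulation.

import numpy as np

def normalized_acf(x):
    # Normalized autocorrelation function of one signal frame (lag 0 upward).
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[x.size - 1:]
    return acf / acf[0]

def iacc(left, right, fs, max_lag_ms=1.0):
    # Peak of the normalized interaural cross-correlation within +/- 1 ms.
    left = left - left.mean()
    right = right - right.mean()
    full = np.correlate(left, right, mode="full")
    lags = np.arange(-right.size + 1, left.size)
    norm = np.sqrt(np.sum(left**2) * np.sum(right**2))
    max_lag = int(round(max_lag_ms * 1e-3 * fs))
    keep = np.abs(lags) <= max_lag
    return np.max(full[keep] / norm)

# Hypothetical binaural frame: identical signals at the two ears give IACC close to 1.
fs = 44100
noise = np.random.randn(fs // 10)
print(iacc(noise, noise, fs))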
Aurally-adequate time-frequency analysis for scattered sound in auditoria
NASA Astrophysics Data System (ADS)
Norris, Molly K.; Xiang, Ning; Kleiner, Mendel
2005-04-01
The goal of this work was to apply an aurally-adequate time-frequency analysis technique to the analysis of sound scattering effects in auditoria. Time-frequency representations were developed in an effort that takes binaural hearing into account, with a specific implementation of an interaural cross-correlation process. A model of the human auditory system was implemented in the MATLAB platform based on two previous models [A. Härmä and K. Palomäki, HUTear, Espoo, Finland; and M. A. Akeroyd, A Binaural Cross-correlogram Toolbox for MATLAB (2001), University of Sussex, Brighton]. The model stages include appropriate frequency selectivity, the conversion of the mechanical motion of the basilar membrane to neural impulses, and binaural hearing effects. The model was then used in the analysis of room impulse responses with varying scattering characteristics. This paper discusses the analysis results using simulated and measured room impulse responses. [Work supported by the Frank H. and Eva B. Buck Foundation.]
Relating age and hearing loss to monaural, bilateral, and binaural temporal sensitivity
Gallun, Frederick J.; McMillan, Garnett P.; Molis, Michelle R.; Kampel, Sean D.; Dann, Serena M.; Konrad-Martin, Dawn L.
2014-01-01
Older listeners are more likely than younger listeners to have difficulties in making temporal discriminations among auditory stimuli presented to one or both ears. In addition, the performance of older listeners is often observed to be more variable than that of younger listeners. The aim of this work was to relate age and hearing loss to temporal processing ability in a group of younger and older listeners with a range of hearing thresholds. Seventy-eight listeners were tested on a set of three temporal discrimination tasks (monaural gap discrimination, bilateral gap discrimination, and binaural discrimination of interaural differences in time). To examine the role of temporal fine structure in these tasks, four types of brief stimuli were used: tone bursts, broad-frequency chirps with rising or falling frequency contours, and random-phase noise bursts. Between-subject group analyses conducted separately for each task revealed substantial increases in temporal thresholds for the older listeners across all three tasks, regardless of stimulus type, as well as significant correlations among the performance of individual listeners across most combinations of tasks and stimuli. Differences in performance were associated with the stimuli in the monaural and binaural tasks, but not the bilateral task. Temporal fine structure differences among the stimuli had the greatest impact on monaural thresholds. Threshold estimate values across all tasks and stimuli did not show any greater variability for the older listeners as compared to the younger listeners. A linear mixed model applied to the data suggested that age and hearing loss are independent factors responsible for temporal processing ability, thus supporting the increasingly accepted hypothesis that temporal processing can be impaired for older compared to younger listeners with similar hearing and/or amounts of hearing loss. PMID:25009458
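A minimal sketch of the kind of linear mixed model described above, assuming a hypothetical long-format table of log-transformed thresholds with one row per listener and task; the data, column names and effect sizes are invented for illustration and this is not the authors' analysis code.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
tasks = ["monaural_gap", "bilateral_gap", "binaural_itd"]
rows = []
for s in range(30):
    age = rng.uniform(20, 80)
    hl = rng.uniform(0, 40)            # pure-tone average in dB HL (invented covariate)
    subj_offset = rng.normal(0, 0.2)   # random intercept per listener
    for task in tasks:
        log_thr = 0.5 + 0.01 * age + 0.005 * hl + subj_offset + rng.normal(0, 0.1)
        rows.append(dict(subject=s, task=task, age=age, hl=hl, log_threshold=log_thr))
df = pd.DataFrame(rows)

# Age and hearing loss as separate fixed effects, with a random intercept per listener.
model = smf.mixedlm("log_threshold ~ age + hl + task", df, groups=df["subject"])
print(model.fit().summary())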
Klein-Hennig, Martin; Dietz, Mathias; Hohmann, Volker
2018-03-01
Both harmonic and binaural signal properties are relevant for auditory processing. To investigate how these cues combine in the auditory system, detection thresholds for an 800-Hz tone masked by a diotic (i.e., identical between the ears) harmonic complex tone were measured in six normal-hearing subjects. The target tone was presented either diotically or with an interaural phase difference (IPD) of 180° and in either harmonic or "mistuned" relationship to the diotic masker. Three different maskers were used: a resolved and an unresolved complex tone (fundamental frequencies of 160 and 40 Hz, respectively) with four components below and above the target frequency, and a broadband unresolved complex tone with 12 additional components. The target IPD provided release from masking in most masker conditions, whereas mistuning led to a significant release from masking only in the diotic conditions with the resolved and the narrowband unresolved maskers. A significant effect of mistuning was found neither in the diotic condition with the wideband unresolved masker nor in any of the dichotic conditions. An auditory model with a single analysis frequency band and different binaural processing schemes was employed to predict the data of the unresolved masker conditions. Sensitivity to modulation cues was achieved by including an auditory-motivated modulation filter in the processing pathway. The predictions of the diotic data were in line with the experimental results and literature data in the narrowband condition, but not in the broadband condition, suggesting that across-frequency processing is involved in processing modulation information. The experimental and model results in the dichotic conditions show that the binaural processor cannot exploit modulation information in binaurally unmasked conditions. Copyright © 2017 Elsevier B.V. All rights reserved.
From microseconds to seconds and minutes—time computation in insect hearing
Hartbauer, Manfred; Römer, Heiner
2014-01-01
The computation of time in the auditory system of insects is relevant at rather different time scales, covering a large range from microseconds to several minutes. At one end of this range, only a few microseconds of interaural time difference are available for directional hearing, due to the small distance between the ears, usually considered too small to be processed reliably by simple nervous systems. Synapses of interneurons in the afferent auditory pathway are, however, very sensitive to a time difference of only 1–2 ms provided by the latency shift of afferent activity with changing sound direction. At a much larger time scale of several tens of milliseconds to seconds, time processing is important in the context of species recognition, but also for those insects in which males produce acoustic signals within choruses and the temporal relationship between song elements strongly deviates from a random distribution. In these situations, some species exhibit a more or less strict phase relationship of song elements, based on the phase response properties of their song oscillator. Here we review evidence on how this may influence mate choice decisions. In the same range of some tens of milliseconds we find species of katydids with a duetting communication scheme, in which one sex only performs phonotaxis to the other sex if the acoustic response falls within a very short time window after its own call. Such time windows show some features unique to insects, and although their neuronal implementation is unknown so far, the similarity with time processing for target range detection in bat echolocation will be discussed. Finally, the time scale being processed must be extended into the range of many minutes, since some acoustic insects produce singing bouts lasting quite long, and female preferences may be based on total signaling time. PMID:24782783
Diversity of acoustic tracheal system and its role for directional hearing in crickets
2013-01-01
Background Sound localization in small insects can be a challenging task due to physical constraints in deriving sufficiently large interaural intensity differences (IIDs) between the two ears. In crickets, sound source localization is achieved by a complex type of pressure difference receiver consisting of four potential sound inputs. Sound acts on the external side of the two tympana but additionally reaches the internal tympanal surface via two external sound entrances. Conduction of internal sound is realized by the anatomical arrangement of the connecting tracheae. A key structure is a trachea coupling both ears, which is characterized by an enlarged part at its midline (the acoustic vesicle) accompanied by a thin membrane (septum). This facilitates directional sensitivity despite an unfavorable relationship between the wavelength of sound and body size. Here we studied the morphological differences of the acoustic tracheal system in 40 cricket species (Gryllidae, Mogoplistidae) and species of outgroup taxa (Gryllotalpidae, Rhaphidophoridae, Gryllacrididae) of the suborder Ensifera, comprising hearing and non-hearing species. Results We found a surprisingly high variation in acoustic tracheal systems, and almost all investigated species using intraspecific acoustic communication were characterized by an acoustic vesicle associated with a medial septum. The relative size of the acoustic vesicle, the structure most crucial for deriving high IIDs, implies an important role in sound localization. Most remarkable in this respect was the size difference of the acoustic vesicle between species; those with a more unfavorable ratio of body size to sound wavelength tend to exhibit a larger acoustic vesicle. On the other hand, secondary loss of acoustic signaling was nearly exclusively associated with the absence of both acoustic vesicle and septum. Conclusion The high diversity of acoustic tracheal morphology observed between species might reflect different steps in the evolution of the pressure difference receiver, with a precursor structure already present in ancestral non-hearing species. In addition, morphological transitions of the acoustic vesicle suggest a possible adaptive role in the generation of binaural directional cues. PMID:24131512
Extinction of auditory stimuli in hemineglect: Space versus ear.
Spierer, Lucas; Meuli, Reto; Clarke, Stephanie
2007-02-01
Unilateral extinction of auditory stimuli, a key feature of the neglect syndrome, was investigated in 15 patients with right (11), left (3) or bilateral (1) hemispheric lesions using a verbal dichotic condition, in which each ear received one word simultaneously, and an interaural-time-difference (ITD) diotic condition, in which both ears received both words, lateralised by means of ITD. Additional investigations included sound localisation, visuo-spatial attention and general cognitive status. Five patients presented a significant asymmetry in the ITD diotic test, due to a decrease in left-hemispace reporting, but no asymmetry was found in dichotic listening. Six other patients presented a significant asymmetry in the dichotic test, due to a significant decrease in left- or right-ear reporting, but no asymmetry in diotic listening. Ten of the above patients presented mild to severe deficits in sound localisation, and eight showed signs of visuo-spatial neglect (three with selective asymmetry in the diotic and five in the dichotic task). Four other patients presented a significant asymmetry in both the diotic and dichotic listening tasks. Three of them presented moderate deficits in localisation, and all four showed moderate visuo-spatial neglect. Thus, extinction for the left ear and for the left hemispace can double dissociate, suggesting distinct underlying neural processes. Furthermore, the co-occurrence with sound localisation disturbance and with visuo-spatial hemineglect speaks in favour of the involvement of multisensory attentional representations.
Verrecchia, Luca; Westin, Magnus; Duan, Maoli; Brantberg, Krister
2016-04-01
To explore ocular vestibular evoked myogenic potentials (oVEMP) to low-frequency vertex vibration (125 Hz) as a diagnostic test for superior canal dehiscence (SCD) syndrome. oVEMPs to 125-Hz single-cycle bone-conducted vertex vibration were tested in 15 patients with unilateral SCD syndrome, 15 healthy controls and 20 patients with unilateral vestibular loss due to vestibular neuritis. Amplitude, amplitude asymmetry ratio, latency and interaural latency difference were the parameters of interest. The oVEMP amplitude was significantly larger on the affected sides of SCD patients (53 μV) than on the non-affected sides (17.2 μV) or in healthy controls (13.6 μV). An amplitude larger than 33.8 μV effectively separated the SCD ears from the healthy ones, with a sensitivity of 87% and a specificity of 93%. The other three parameters showed an overlap between affected and non-affected SCD ears as well as between SCD ears and those in the two control groups. oVEMP amplitude distinguishes SCD ears from healthy ones using low-frequency vibration stimuli at the vertex. Amplitude analysis of oVEMP evoked by low-frequency vertex bone vibration stimulation is an additional indicator of SCD syndrome and might serve for diagnosing SCD in patients with coexistent conductive middle ear problems. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
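A small sketch of how a fixed amplitude cutoff of the kind reported above translates into sensitivity and specificity, assuming hypothetical per-ear oVEMP amplitudes; the example values are illustrative, not the study data.

import numpy as np

def sens_spec(affected_amp, healthy_amp, cutoff_uv):
    # Sensitivity/specificity of an amplitude cutoff for flagging affected ears.
    affected_amp = np.asarray(affected_amp)
    healthy_amp = np.asarray(healthy_amp)
    sensitivity = np.mean(affected_amp > cutoff_uv)   # affected ears correctly flagged
    specificity = np.mean(healthy_amp <= cutoff_uv)   # healthy ears correctly passed
    return sensitivity, specificity

# Hypothetical amplitudes in microvolts; 33.8 uV is the cutoff reported above.
scd = [53.0, 41.2, 60.5, 35.1, 48.0]
healthy = [13.6, 17.2, 22.0, 30.1, 12.4]
print(sens_spec(scd, healthy, 33.8))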
Predicting binaural responses from monaural responses in the gerbil medial superior olive
Plauška, Andrius; Borst, J. Gerard
2016-01-01
Accurate sound source localization of low-frequency sounds in the horizontal plane depends critically on the comparison of arrival times at both ears. A specialized brainstem circuit containing the principal neurons of the medial superior olive (MSO) is dedicated to this comparison. MSO neurons are innervated by segregated inputs from both ears. The coincident arrival of excitatory inputs from both ears is thought to trigger action potentials, with differences in internal delays creating a unique sensitivity to interaural time differences (ITDs) for each cell. How the inputs from both ears are integrated by the MSO neurons is still debated. Using juxtacellular recordings, we tested to what extent MSO neurons from anesthetized Mongolian gerbils function as simple cross-correlators of their bilateral inputs. From the measured subthreshold responses to monaural wideband stimuli we predicted the rate-ITD functions obtained from the same MSO neuron, which have a damped oscillatory shape. The rate of the oscillations and the position of the peaks and troughs were accurately predicted. The amplitude ratio between dominant and secondary peaks of the rate-ITD function, captured in the width of its envelope, was not always exactly reproduced. This minor imperfection pointed to the methodological limitation of using a linear representation of the monaural inputs, which disregards any temporal sharpening occurring in the cochlear nucleus. The successful prediction of the major aspects of rate-ITD curves supports a simple scheme in which the ITD sensitivity of MSO neurons is realized by the coincidence detection of excitatory monaural inputs. PMID:27009164
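A simplified sketch of the cross-correlation scheme the study tests, assuming two recorded monaural (subthreshold) response traces are available; the rectify-and-square output stage and the parameter values are illustrative assumptions rather than the authors' model.

import numpy as np

def predicted_rate_itd(ipsi, contra, fs, max_itd_us=1000.0, threshold=0.0):
    # Treat the cell as a coincidence detector: cross-correlate the two monaural
    # responses and pass the result through a simple expansive nonlinearity to
    # approximate a rate-ITD function.
    ipsi = ipsi - ipsi.mean()
    contra = contra - contra.mean()
    full = np.correlate(ipsi, contra, mode="full")
    lags_s = np.arange(-contra.size + 1, ipsi.size) / fs
    keep = np.abs(lags_s) <= max_itd_us * 1e-6
    xc = full[keep]
    rate = np.maximum(xc - threshold, 0.0) ** 2      # rectify-and-square output stage
    return lags_s[keep] * 1e6, rate                  # ITD in microseconds, rate in arbitrary units

# Hypothetical use with two monaural response traces sampled at fs:
# itd_us, rate = predicted_rate_itd(v_ipsi, v_contra, fs=50000)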
McAlpine, D; Jiang, D; Shackleton, T M; Palmer, A R
1998-08-01
Responses of low-frequency neurons in the inferior colliculus (IC) of anesthetized guinea pigs were studied with binaural beats to assess their mean best interaural phase (BP) to a range of stimulating frequencies. Phase plots (stimulating frequency vs BP) were produced, from which measures of characteristic delay (CD) and characteristic phase (CP) for each neuron were obtained. The CD provides an estimate of the difference in travel time from each ear to coincidence-detector neurons in the brainstem. The CP indicates the mechanism underpinning the coincidence detector responses. A linear phase plot indicates a single, constant delay between the coincidence-detector inputs from the two ears. In more than half (54 of 90) of the neurons, the phase plot was not linear. We hypothesized that neurons with nonlinear phase plots received convergent input from brainstem coincidence detectors with different CDs. Presentation of a second tone with a fixed, unfavorable delay suppressed the response of one input, linearizing the phase plot and revealing other inputs to be relatively simple coincidence detectors. For some neurons with highly complex phase plots, the suppressor tone altered BP values, but did not resolve the nature of the inputs. For neurons with linear phase plots, the suppressor tone either completely abolished their responses or reduced their discharge rate with no change in BP. By selectively suppressing inputs with a second tone, we are able to reveal the nature of underlying binaural inputs to IC neurons, confirming the hypothesis that the complex phase plots of many IC neurons are a result of convergence from simple brainstem coincidence detectors.
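The characteristic delay and characteristic phase follow from a straight-line fit to the phase plot (best phase, in cycles, versus stimulating frequency), with the slope giving CD and the intercept giving CP. A minimal sketch, assuming best-phase values are available in cycles; the example neuron is invented.

import numpy as np

def characteristic_delay_phase(freqs_hz, best_phase_cycles):
    # Fit BP = CP + CD * f to an unwrapped phase plot.
    bp = np.unwrap(np.asarray(best_phase_cycles) * 2 * np.pi) / (2 * np.pi)  # unwrap, keep cycles
    cd, cp = np.polyfit(freqs_hz, bp, 1)
    return cd, cp    # cd in seconds, cp in cycles

# Hypothetical linear phase plot for a pure-delay neuron: CD = 200 us, CP = 0.
f = np.array([100.0, 200.0, 300.0, 400.0, 500.0])
bp = 200e-6 * f
print(characteristic_delay_phase(f, bp))   # approximately (2e-4, 0)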
Xiao, Jun
2007-05-15
Traditionally, the skull landmarks, i.e., bregma, lambda, and the interaural line, are the origins of the coordinate system for almost all rodent brain atlases. The disadvantages of using a skull landmark as an origin are: (i) there are differences among individuals in the alignment between the skull and the brain; (ii) the shapes of the sutures, on which a skull landmark is determined, differ between animals; (iii) the skull landmark is not clear in some animals. Recently, the extreme point of the entire brain (the tip of the olfactory bulb) has also been used as the origin for an atlas coordinate system. The accuracy of stereotaxically locating a brain structure depends on the relative distance between the structure and the reference point of the coordinate system. The disadvantages of using the brain extreme as an origin are that it is located far from most brain structures and is not readily exposed during most in vivo procedures. To overcome these disadvantages, this paper introduces a new coordinate system for the brain of the naked mole-rat. The origin of this new coordinate system is a landmark directly on the brain: the intersection point of the posterior edges of the two cerebral hemispheres. This new coordinate system is readily applicable to other rodent species and is statistically better than using bregma and lambda as reference points. It was found that the body weight of old naked mole-rats is significantly greater than that of young animals. However, the old naked mole-rat brain is not significantly heavier than that of young animals. Both brain weight and brain length vary little among animals of different weights. The disadvantages of the current definition of "significant" are briefly discussed, and a new expression that describes the result of a statistical test more objectively is proposed and used.
Inertial processing of vestibulo-ocular signals
NASA Technical Reports Server (NTRS)
Hess, B. J.; Angelaki, D. E.
1999-01-01
New evidence for a central resolution of gravito-inertial signals has been recently obtained by analyzing the properties of the vestibulo-ocular reflex (VOR) in response to combined lateral translations and roll tilts of the head. It is found that the VOR generates robust compensatory horizontal eye movements independent of whether or not the interaural translatory acceleration component is canceled out by a gravitational acceleration component due to simultaneous roll-tilt. This response property of the VOR depends on functional semicircular canals, suggesting that the brain uses both otolith and semicircular canal signals to estimate head motion relative to inertial space. Vestibular information about dynamic head attitude relative to gravity is the basis for computing head (and body) angular velocity relative to inertial space. Available evidence suggests that the inertial vestibular system controls both head attitude and velocity with respect to a gravity-centered reference frame. The basic computational principles underlying the inertial processing of otolith and semicircular canal afferent signals are outlined.
Amplitude-modulation detection by gerbils in reverberant sound fields.
Lingner, Andrea; Kugler, Kathrin; Grothe, Benedikt; Wiegrebe, Lutz
2013-08-01
Reverberation can dramatically reduce the depth of amplitude modulations which are critical for speech intelligibility. Psychophysical experiments indicate that humans' sensitivity to amplitude modulation in reverberation is better than predicted from the acoustic modulation depth at the receiver position. Electrophysiological studies on reverberation in rabbits highlight the contribution of neurons sensitive to interaural correlation. Here, we use a prepulse-inhibition paradigm to quantify the gerbils' amplitude modulation threshold in both anechoic and reverberant virtual environments. Data show that prepulse inhibition provides a reliable method for determining the gerbils' AM sensitivity. However, we find no evidence for perceptual restoration of amplitude modulation in reverberation. Instead, the deterioration of AM sensitivity in reverberant conditions can be quantitatively explained by the reduced modulation depth at the receiver position. We suggest that the lack of perceptual restoration is related to physical properties of the gerbil's ear input signals and inner-ear processing as opposed to shortcomings of their binaural neural processing. Copyright © 2013 Elsevier B.V. All rights reserved.
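As a rough illustration of the acoustic quantity invoked in this explanation, the sketch below estimates the effective modulation depth at the receiver from the Hilbert envelope of a received signal; the test signal and parameter values are assumptions for illustration, not the study's stimuli.

import numpy as np
from scipy.signal import hilbert

def modulation_depth(x, fs, fm):
    # Effective modulation depth at modulation frequency fm, estimated from the
    # Hilbert envelope of a received (possibly reverberant) signal.
    env = np.abs(hilbert(x))
    t = np.arange(x.size) / fs
    ac = 2 * np.abs(np.mean(env * np.exp(-2j * np.pi * fm * t)))   # envelope component at fm
    dc = np.mean(env)                                              # mean envelope
    return ac / dc

# Sanity check: a fully modulated SAM tone gives a depth close to 1.
fs, fm, fc = 44100, 20.0, 2000.0
t = np.arange(0, 1.0, 1 / fs)
sam = (1 + np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)
print(modulation_depth(sam, fs, fm))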
Perception of tilt and ocular torsion of vestibular patients during eccentric rotation.
Clément, Gilles; Deguine, Olivier
2010-01-04
Four patients with unilateral vestibular loss and four patients complaining of otolith-dependent vertigo were tested during eccentric yaw rotation generating a 1-g centripetal acceleration directed along the interaural axis (resultant gravitoinertial acceleration of 1.4 g, tilted 45 degrees in roll). Perception of body tilt in roll and in pitch was recorded in darkness using a somatosensory plate that the subjects maintained parallel to the perceived horizon. Ocular torsion was recorded with a video camera. The unilateral vestibular-defective patients underestimated the magnitude of roll tilt and showed smaller torsion when the centrifugal force was directed towards the operated ear than towards the intact ear, and compared with healthy subjects. Patients with otolith-dependent vertigo overestimated the magnitude of roll tilt in both directions of eccentric rotation relative to healthy subjects, and their ocular torsion was smaller than in healthy subjects. Eccentric rotation is a promising tool for the evaluation of vestibular dysfunction in patients. Eye torsion and perception of tilt during this stimulation are objective and subjective measurements, respectively, which could be used to determine alterations in spatial processing in the CNS.
Exploring auditory neglect: Anatomo-clinical correlations of auditory extinction.
Tissieres, Isabel; Crottaz-Herbette, Sonia; Clarke, Stephanie
2018-05-23
The key symptoms of auditory neglect include left extinction on tasks of dichotic and/or diotic listening and a rightward shift in locating sounds. The anatomical correlates of the latter are relatively well understood, but no systematic studies have examined auditory extinction. Here, we performed a systematic study of the anatomo-clinical correlates of extinction by using dichotic and/or diotic listening tasks. In total, 20 patients with right hemispheric damage (RHD) and 19 with left hemispheric damage (LHD) performed dichotic and diotic listening tasks. Each task consists of the simultaneous presentation of word pairs; in the dichotic task, 1 word is presented to each ear, and in the diotic task, each word is lateralized by means of interaural time differences and presented to one side. RHD was associated with exclusively contralesional extinction in dichotic or diotic listening, whereas in selected cases, LHD led to contra- or ipsilesional extinction. Bilateral symmetrical extinction occurred in RHD or LHD, with dichotic or diotic listening. The anatomical correlates of these extinction profiles offer an insight into the organisation of the auditory and attentional systems. First, left extinction in dichotic versus diotic listening involves different parts of the right hemisphere, which explains the double dissociation between these 2 neglect symptoms. Second, contralesional extinction in the dichotic task relies on homologous regions in either hemisphere. Third, ipsilesional extinction in dichotic listening after LHD was associated with lesions of the intrahemispheric white matter, interrupting callosal fibres outside their midsagittal or periventricular trajectory. Fourth, bilateral symmetrical extinction was associated with large parieto-fronto-temporal LHD or smaller parieto-temporal RHD, which suggests that divided attention, supported by the right hemisphere, and auditory streaming, supported by the left, likely play a critical role. Copyright © 2018. Published by Elsevier Masson SAS.
Wang, Le; Devore, Sasha; Delgutte, Bertrand
2013-01-01
Human listeners are sensitive to interaural time differences (ITDs) in the envelopes of sounds, which can serve as a cue for sound localization. Many high-frequency neurons in the mammalian inferior colliculus (IC) are sensitive to envelope-ITDs of sinusoidally amplitude-modulated (SAM) sounds. Typically, envelope-ITD-sensitive IC neurons exhibit either peak-type sensitivity, discharging maximally at the same delay across frequencies, or trough-type sensitivity, discharging minimally at the same delay across frequencies, consistent with responses observed at the primary site of binaural interaction in the medial and lateral superior olives (MSO and LSO), respectively. However, some high-frequency IC neurons exhibit dual types of envelope-ITD sensitivity in their responses to SAM tones, that is, they exhibit peak-type sensitivity at some modulation frequencies and trough-type sensitivity at other frequencies. Here we show that high-frequency IC neurons in the unanesthetized rabbit can also exhibit dual types of envelope-ITD sensitivity in their responses to SAM noise. Such complex responses to SAM stimuli could be achieved by convergent inputs from MSO and LSO onto single IC neurons. We test this hypothesis by implementing a physiologically explicit, computational model of the binaural pathway. Specifically, we examined envelope-ITD sensitivity of a simple model IC neuron that receives convergent inputs from MSO and LSO model neurons. We show that dual envelope-ITD sensitivity emerges in the IC when convergent MSO and LSO inputs are differentially tuned for modulation frequency. PMID:24155013
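A toy sketch of the convergence hypothesis tested here: an IC unit that sums a peak-type (MSO-like) and a trough-type (LSO-like) input whose weights depend on modulation frequency, so the same cell looks peak-type at one modulation frequency and trough-type at another. The tuning functions and numbers are illustrative assumptions, not the published model.

import numpy as np

def ic_rate(itd_us, fm_hz, w_mso, w_lso, best_delay_us=0.0):
    # Weighted sum of a peak-type and a trough-type envelope-ITD response.
    phase = 2 * np.pi * fm_hz * (itd_us - best_delay_us) * 1e-6
    mso = 0.5 * (1 + np.cos(phase))          # maximal at the best delay
    lso = 0.5 * (1 - np.cos(phase))          # minimal at the same delay
    return w_mso(fm_hz) * mso + w_lso(fm_hz) * lso

def w_mso(fm): return np.exp(-((fm - 64.0) / 50.0) ** 2)    # dominates at low modulation rates
def w_lso(fm): return np.exp(-((fm - 256.0) / 100.0) ** 2)  # dominates at high modulation rates

itd = np.linspace(-2000, 2000, 9)            # microseconds
print(ic_rate(itd, 64.0, w_mso, w_lso))      # peak-type shape
print(ic_rate(itd, 256.0, w_mso, w_lso))     # trough-type shape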
Ashida, Go; Funabiki, Kazuo; Carr, Catherine E.
2013-01-01
A wide variety of neurons encode temporal information via phase-locked spikes. In the avian auditory brainstem, neurons in the cochlear nucleus magnocellularis (NM) send phase-locked synaptic inputs to coincidence detector neurons in the nucleus laminaris (NL) that mediate sound localization. Previous modeling studies suggested that converging phase-locked synaptic inputs may give rise to a periodic oscillation in the membrane potential of their target neuron. Recent physiological recordings in vivo revealed that owl NL neurons changed their spike rates almost linearly with the amplitude of this oscillatory potential. The oscillatory potential was termed the sound analog potential, because of its resemblance to the waveform of the stimulus tone. The amplitude of the sound analog potential recorded in NL varied systematically with the interaural time difference (ITD), which is one of the most important cues for sound localization. In order to investigate the mechanisms underlying ITD computation in the NM-NL circuit, we provide detailed theoretical descriptions of how phase-locked inputs form oscillating membrane potentials. We derive analytical expressions that relate presynaptic, synaptic, and postsynaptic factors to the signal and noise components of the oscillation in both the synaptic conductance and the membrane potential. Numerical simulations demonstrate the validity of the theoretical formulations over the entire frequency range tested (1–8 kHz) and potential effects of higher harmonics on NL neurons with low best frequencies (<2 kHz). PMID:24265616
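A compact phasor sketch of the central idea: when phase-locked inputs from the two ears are summed, their periodic components add as vectors, so the amplitude of the resulting oscillation (the sound analog potential) varies systematically with ITD. The parameter values are arbitrary and this is only a caricature of the analytical expressions derived in the paper.

import numpy as np

def sound_analog_amplitude(itd_s, freq_hz, vector_strength=0.6, rate=500.0, g_unit=1.0):
    # Amplitude of the periodic (signal) component of the summed synaptic input when
    # phase-locked inputs from the two ears arrive with an interaural time difference.
    # The two per-ear oscillations add as phasors: amplitude ~ |1 + exp(i*2*pi*f*ITD)|.
    per_ear = g_unit * rate * vector_strength
    return per_ear * np.abs(1 + np.exp(2j * np.pi * freq_hz * np.asarray(itd_s)))

# At 4 kHz the oscillation is largest at ITD = 0 and vanishes at a half period (125 us).
itds = np.array([0.0, 62.5e-6, 125e-6])
print(sound_analog_amplitude(itds, 4000.0))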
Development of kinesthetic-motor and auditory-motor representations in school-aged children.
Kagerer, Florian A; Clark, Jane E
2015-07-01
In two experiments using a center-out task, we investigated kinesthetic-motor and auditory-motor integrations in 5- to 12-year-old children and young adults. In experiment 1, participants moved a pen on a digitizing tablet from a starting position to one of three targets (visuo-motor condition), and then to one of four targets without visual feedback of the movement. In both conditions, we found that with increasing age, the children moved faster and straighter, and became less variable in their feedforward control. Higher control demands for movements toward the contralateral side were reflected in longer movement times and decreased spatial accuracy across all age groups. When feedforward control relies predominantly on kinesthesia, 7- to 10-year-old children were more variable, indicating difficulties in switching between feedforward and feedback control efficiently during that age. An inverse age progression was found for directional endpoint error; larger errors increasing with age likely reflect stronger functional lateralization for the dominant hand. In experiment 2, the same visuo-motor condition was followed by an auditory-motor condition in which participants had to move to acoustic targets (either white band or one-third octave noise). Since in the latter directional cues come exclusively from transcallosally mediated interaural time differences, we hypothesized that auditory-motor representations would show age effects. The results did not show a clear age effect, suggesting that corpus callosum functionality is sufficient in children to allow them to form accurate auditory-motor maps already at a young age.
Kim, Kun Woo; Jung, Jae Yun; Lee, Jeong Hyun
2013-01-01
Objectives Rectified vestibular evoked myogenic potential (rVEMP) is a new method that simultaneously measures muscle contraction power during VEMP recordings. Although a few studies have evaluated the effect of rVEMP, no study has evaluated the capacity of rVEMP during asymmetrical muscle contraction. Methods Thirty VEMP measurements were performed in 20 normal subjects (mean age, 28.2±2.1 years; 16 male). VEMP was measured in the supine position. The head was turned to the right side by 0°, 15°, 30°, and 45°, and the VEMPs were recorded in each position. The interaural amplitude difference (IAD) ratio was calculated for both the conventional non-rectified VEMP (nVEMP) and the rVEMP. Results The nVEMP IAD increased significantly with increasing neck rotation. The rVEMP IAD was nearly constant from 0° to 30°; however, it was significantly larger than at the other positions when the neck was rotated 45°. With the IAD at 0° set as the standard, the IAD of the rVEMP was significantly smaller than that of the nVEMP only during the 30° rotation. Conclusion Rectified VEMP is capable of correcting for asymmetrical muscle contraction power. However, it cannot correct the asymmetry if the muscle contraction power asymmetry is 44.8% or larger, and correction is not necessary if the asymmetry is 22.5% or smaller. PMID:24353859
Wang, Chi-Te; Fang, Kai-Min; Young, Yi-Ho; Cheng, Po-Wen
2010-04-01
Click and galvanic stimulation of vestibular-evoked myogenic potentials (c-VEMP and g-VEMP) was applied to measure the interaural difference (IAD) of saccular responses in patients with acute low-tone sensorineural hearing loss (ALHL). This study intended to explore the relationship between saccular asymmetry and final hearing recovery. We hypothesized that a greater extent of saccular dysfunction may be associated with lesser hearing recovery. Twenty-one patients with unilateral ALHL were prospectively enrolled to receive c-VEMP and g-VEMP tests in a random sequence. The IAD of the saccular responses for each patient was measured using three parameters: the raw and corrected amplitudes of c-VEMP, and the corrected c-VEMP to g-VEMP amplitude ratio (C/G ratio). The IAD for each parameter was classified as depressed, normal, or augmented by calculating the difference between the affected and unaffected ears and dividing it by the sum for both ears. After 3 consecutive months of oral medication and follow-up, 19 patients displayed a hearing recovery of >50%; only two had a recovery of <50%. The significant correlation between the IAD of corrected C/G ratios and hearing recovery demonstrated that subjects with depressed responses had a worse hearing outcome (percent recovery: 51% [45-80%], median [minimum-maximum]), compared with those with normal responses, who exhibited the best recovery (87% [56-100%]), whereas patients with augmented responses showed an intermediate recovery (67% [54-100%]; p = 0.02, Kruskal-Wallis test). In contrast, the raw and corrected amplitudes of c-VEMP did not reveal a significantly different hearing recovery among the three groups of saccular responses. The extent of saccular dysfunction in ALHL might be better explored by combining the results of c-VEMP and g-VEMP. Outcome analysis indicated that the corrected C/G ratio might be a promising prognostic factor for hearing recovery in ALHL.
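Written out, the asymmetry ratio described above takes the usual form below (the sign convention is an assumption; with it, a depressed response on the affected side yields a negative value and an augmented response a positive one):

\[ \mathrm{IAD} = \frac{A_{\text{affected}} - A_{\text{unaffected}}}{A_{\text{affected}} + A_{\text{unaffected}}} \]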
Bonnard, Damien; Lautissier, Sylvie; Bosset-Audoit, Amélie; Coriat, Géraldine; Beraha, Max; Maunoury, Antoine; Martel, Jacques; Darrouzet, Vincent; Bébéar, Jean-Pierre; Dauman, René
2013-01-01
An alternative to bilateral cochlear implantation is offered by the Neurelec Digisonic(®) SP Binaural cochlear implant, which allows stimulation of both cochleae within a single device. The purpose of this prospective study was to compare a group of Neurelec Digisonic(®) SP Binaural implant users (denoted BINAURAL group, n = 7) with a group of bilateral adult cochlear implant users (denoted BILATERAL group, n = 6) in terms of speech perception, sound localization, and self-assessment of health status and hearing disability. Speech perception was assessed using word recognition at 60 dB SPL in quiet and in a 'cocktail party' noise delivered through five loudspeakers in the hemi-sound field facing the patient (signal-to-noise ratio = +10 dB). The sound localization task was to determine the source of a sound stimulus among five speakers positioned between -90° and +90° from midline. Change in health status was assessed using the Glasgow Benefit Inventory and hearing disability was evaluated with the Abbreviated Profile of Hearing Aid Benefit. Speech perception was not statistically different between the two groups, even though there was a trend in favor of the BINAURAL group (mean percent word recognition in the BINAURAL and BILATERAL groups: 70 vs. 56.7% in quiet, 55.7 vs. 43.3% in noise). There was also no significant difference with regard to performance in sound localization and self-assessment of health status and hearing disability. On the basis of the BINAURAL group's performance in hearing tasks involving the detection of interaural differences, implantation with the Neurelec Digisonic(®) SP Binaural implant may be considered to restore effective binaural hearing. Based on these first comparative results, this device seems to provide benefits similar to those of traditional bilateral cochlear implantation, with a new approach to stimulate both auditory nerves. Copyright © 2013 S. Karger AG, Basel.
Availability of binaural cues for pediatric bilateral cochlear implant recipients.
Sheffield, Sterling W; Haynes, David S; Wanna, George B; Labadie, Robert F; Gifford, René H
2015-03-01
Bilateral implant recipients theoretically have access to binaural cues. Research in postlingually deafened adults with cochlear implants (CIs) indicates minimal evidence for true binaural hearing. Congenitally deafened children who experience spatial hearing with bilateral CIs, however, might perceive binaural cues in the CI signal differently. There is limited research examining binaural hearing in children with CIs, and the few published studies are limited by the use of unrealistic speech stimuli and background noise. The purposes of this study were to (1) replicate our previous study of binaural hearing in postlingually deafened adults with AzBio sentences in prelingually deafened children with the pediatric version of the AzBio sentences, and (2) replicate previous studies of binaural hearing in children with CIs using more open-set sentences and more realistic background noise (i.e., multitalker babble). The study was a within-participant, repeated-measures design. The study sample consisted of 14 children with bilateral CIs with at least 25 mo of listening experience. Speech recognition was assessed using sentences presented in multitalker babble at a fixed signal-to-noise ratio. Test conditions included speech at 0° with noise presented at 0° (S0N0), on the side of the first CI (90° or 270°) (S0N1stCI), and on the side of the second CI (S0N2ndCI) as well as speech presented at 0° with noise presented semidiffusely from eight speakers at 45° intervals. Estimates of summation, head shadow, squelch, and spatial release from masking were calculated. Results of test conditions commonly reported in the literature (S0N0, S0N1stCI, S0N2ndCI) are consistent with results from previous research in adults and children with bilateral CIs, showing minimal summation and squelch but typical head shadow and spatial release from masking. However, bilateral benefit over the better CI with speech at 0° was much larger with semidiffuse noise. Congenitally deafened children with CIs have similar availability of binaural hearing cues to postlingually deafened adults with CIs within the same experimental design. It is possible that the use of realistic listening environments, such as semidiffuse background noise as in Experiment II, would reveal greater binaural hearing benefit for bilateral CI recipients. Future research is needed to determine whether (1) availability of binaural cues for children correlates with interaural time and level differences, (2) different listening environments are more sensitive to binaural hearing benefits, and (3) differences exist between pediatric bilateral recipients receiving implants in the same or sequential surgeries. American Academy of Audiology.
NASA Technical Reports Server (NTRS)
Kaufman, Galen D.; Wood, Scott J.; Gianna, Claire C.; Black, F. Owen; Paloski, William H.
2000-01-01
Eight chronic vestibular-deficient (VD) patients (bilateral N = 4, unilateral N = 4, ages 18-67) were exposed to an interaural centripetal acceleration of 1 g (resultant gravitoinertial acceleration of 1.4 g, tilted 45 degrees in roll) on a 0.8-meter-radius centrifuge for up to 90 minutes in the dark. The patients sat with the head fixed upright, except for 4 of every 10 minutes, when they were instructed to point their nose and eyes towards a visual target (switched on every 3 to 5 seconds at random places within plus or minus 30 deg) in the Earth-horizontal plane. Eye movements, including directed saccades to subjective Earth- and head-referenced planes, were recorded before, during, and after centrifugation using electro-oculography. Postural sway was measured before and within ten minutes after centrifugation using a sway-referenced or earth-fixed support surface, with or without a head movement sequence. The protocol was selected for each patient based on the most challenging condition in which the patient was able to maintain balance with eyes closed.
A real-time biomimetic acoustic localizing system using time-shared architecture
NASA Astrophysics Data System (ADS)
Nourzad Karl, Marianne; Karl, Christian; Hubbard, Allyn
2008-04-01
In this paper, a real-time sound-source localizing system is proposed that is based on previously developed mammalian auditory models. Traditionally, in models that use interaural time delay (ITD) estimates, the amount of parallel computation needed to achieve real-time sound-source localization is a limiting factor and a design challenge for hardware implementations. Therefore, a new approach using a time-shared architecture is introduced. The proposed architecture is a purely sample-driven digital system that closely follows the continuous-time approach described in the models. Rather than having dedicated hardware for each frequency channel, a specialized core channel shared across all frequency bands is used. Because its execution time is much shorter than the system's sample period, the proposed time-shared solution allows the same number of virtual channels to be processed as there are dedicated channels in the traditional approach. Hence, the time-shared approach achieves a highly economical and flexible implementation using minimal silicon area. These aspects are particularly important for efficient hardware implementation of a real-time biomimetic sound-source localization system.
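A software sketch of the processing such a shared core performs, assuming band-filtered left and right signals are available for each frequency channel; estimating each channel's ITD in turn with a single routine mirrors the time-shared idea, though the cross-correlation used here stands in for the model-specific delay-line computation.

import numpy as np

def itd_per_channel(left_channels, right_channels, fs, max_itd_s=700e-6):
    # Estimate one ITD per frequency channel by cross-correlation, processing the
    # channels sequentially with a single shared routine.
    max_lag = int(round(max_itd_s * fs))
    itds = []
    for l, r in zip(left_channels, right_channels):
        l = l - l.mean()
        r = r - r.mean()
        full = np.correlate(l, r, mode="full")
        lags = np.arange(-r.size + 1, l.size)
        keep = np.abs(lags) <= max_lag
        best = lags[keep][np.argmax(full[keep])]
        itds.append(best / fs)
    return np.array(itds)

# Hypothetical use: band-filtered left/right signals, one pair per channel.
# itds = itd_per_channel(left_bands, right_bands, fs=48000)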
An Auditory Illusion of Proximity of the Source Induced by Sonic Crystals
Spiousas, Ignacio; Etchemendy, Pablo E.; Vergara, Ramiro O.; Calcagno, Esteban R.; Eguia, Manuel C.
2015-01-01
In this work we report an illusion of proximity of a sound source created by a sonic crystal placed between the source and a listener. This effect seems, at first, paradoxical to naïve listeners since the sonic crystal is an obstacle formed by almost densely packed cylindrical scatterers. Even though the singular acoustical properties of these periodic composite materials have been studied extensively (including band gaps, deaf bands, negative refraction, and birefringence), the possible perceptual effects remain unexplored. The illusion reported here is studied through acoustical measurements and a psychophysical experiment. The results of the acoustical measurements showed that, for a certain frequency range and region in space where the focusing phenomenon takes place, the sonic crystal induces substantial increases in binaural intensity, direct-to-reverberant energy ratio and interaural cross-correlation values, all cues involved in the auditory perception of distance. Consistently, the results of the psychophysical experiment revealed that the presence of the sonic crystal between the sound source and the listener produces a significant reduction of the perceived relative distance to the sound source. PMID:26222281
Effects of sound source location and direction on acoustic parameters in Japanese churches.
Soeta, Yoshiharu; Ito, Ken; Shimokura, Ryota; Sato, Shin-ichi; Ohsawa, Tomohiro; Ando, Yoichi
2012-02-01
In 1965, the Catholic Church liturgy changed to allow priests to face the congregation. Whereas Church tradition, teaching, and participation have been much discussed with respect to priest orientation at Mass, the acoustical changes in this regard have not yet been examined scientifically. To discuss the acoustics desired within churches, it is necessary to know the acoustical characteristics appropriate for each phase of the liturgy. In this study, acoustic measurements were taken at the various source locations and orientations corresponding to both the old and new liturgies as performed in Japanese churches. A directional loudspeaker was used as the source to provide vocal and organ acoustic fields, and impulse responses were measured. Various acoustical parameters such as reverberation time and early decay time were analyzed. The speech transmission index was higher for the new Catholic liturgy, suggesting that the change in liturgy has improved speech intelligibility. Moreover, the interaural cross-correlation coefficient was higher and the early lateral energy fraction lower, suggesting that the change in liturgy has made the apparent source width smaller. © 2012 Acoustical Society of America
Monaural Sound Localization Revisited
NASA Technical Reports Server (NTRS)
Wightman, Frederic L.; Kistler, Doris J.
1997-01-01
Research reported during the past few decades has revealed the importance for human sound localization of the so-called 'monaural spectral cues.' These cues are the result of the direction-dependent filtering of incoming sound waves accomplished by the pinnae. One point of view about how these cues are extracted places great emphasis on the spectrum of the received sound at each ear individually. This leads to the suggestion that an effective way of studying the influence of these cues is to measure the ability of listeners to localize sounds when one of their ears is plugged. Numerous studies have appeared using this monaural localization paradigm. Three experiments are described here which are intended to clarify the results of the previous monaural localization studies and provide new data on how monaural spectral cues might be processed. Virtual sound sources are used in the experiments in order to manipulate and control the stimuli independently at the two ears. Two of the experiments deal with the consequences of the incomplete monauralization that may have contaminated previous work. The results suggest that even very low sound levels in the occluded ear provide access to interaural localization cues. The presence of these cues complicates the interpretation of the results of nominally monaural localization studies. The third experiment concerns the role of prior knowledge of the source spectrum, which is required if monaural cues are to be useful. The results of this last experiment demonstrate that extraction of monaural spectral cues can be severely disrupted by trial-to-trial fluctuations in the source spectrum. The general conclusion of the experiments is that, while monaural spectral cues are important, the monaural localization paradigm may not be the most appropriate way to study their role.
Binaural speech processing in individuals with auditory neuropathy.
Rance, G; Ryan, M M; Carew, P; Corben, L A; Yiu, E; Tan, J; Delatycki, M B
2012-12-13
Auditory neuropathy disrupts the neural representation of sound and may therefore impair processes contingent upon inter-aural integration. The aims of this study were to investigate binaural auditory processing in individuals with axonal (Friedreich ataxia) and demyelinating (Charcot-Marie-Tooth disease type 1A) auditory neuropathy and to evaluate the relationship between the degree of auditory deficit and overall clinical severity in patients with neuropathic disorders. Twenty-three subjects with genetically confirmed Friedreich ataxia and 12 subjects with Charcot-Marie-Tooth disease type 1A underwent psychophysical evaluation of basic auditory processing (intensity discrimination/temporal resolution) and binaural speech perception assessment using the Listening in Spatialized Noise test. Age, gender and hearing-level-matched controls were also tested. Speech perception in noise for individuals with auditory neuropathy was abnormal for each listening condition, but was particularly affected in circumstances where binaural processing might have improved perception through spatial segregation. Ability to use spatial cues was correlated with temporal resolution suggesting that the binaural-processing deficit was the result of disordered representation of timing cues in the left and right auditory nerves. Spatial processing was also related to overall disease severity (as measured by the Friedreich Ataxia Rating Scale and Charcot-Marie-Tooth Neuropathy Score) suggesting that the degree of neural dysfunction in the auditory system accurately reflects generalized neuropathic changes. Measures of binaural speech processing show promise for application in the neurology clinic. In individuals with auditory neuropathy due to both axonal and demyelinating mechanisms the assessment provides a measure of functional hearing ability, a biomarker capable of tracking the natural history of progressive disease and a potential means of evaluating the effectiveness of interventions. Copyright © 2012 IBRO. Published by Elsevier Ltd. All rights reserved.
Monaural sound localization revisited.
Wightman, F L; Kistler, D J
1997-02-01
Research reported during the past few decades has revealed the importance for human sound localization of the so-called "monaural spectral cues." These cues are the result of the direction-dependent filtering of incoming sound waves accomplished by the pinnae. One point of view about how these cues are extracted places great emphasis on the spectrum of the received sound at each ear individually. This leads to the suggestion that an effective way of studying the influence of these cues is to measure the ability of listeners to localize sounds when one of their ears is plugged. Numerous studies have appeared using this monaural localization paradigm. Three experiments are described here which are intended to clarify the results of the previous monaural localization studies and provide new data on how monaural spectral cues might be processed. Virtual sound sources are used in the experiments in order to manipulate and control the stimuli independently at the two ears. Two of the experiments deal with the consequences of the incomplete monauralization that may have contaminated previous work. The results suggest that even very low sound levels in the occluded ear provide access to interaural localization cues. The presence of these cues complicates the interpretation of the results of nominally monaural localization studies. The third experiment concerns the role of prior knowledge of the source spectrum, which is required if monaural cues are to be useful. The results of this last experiment demonstrate that extraction of monaural spectral cues can be severely disrupted by trial-to-trial fluctuations in the source spectrum. The general conclusion of the experiments is that, while monaural spectral cues are important, the monaural localization paradigm may not be the most appropriate way to study their role.
NASA Technical Reports Server (NTRS)
Shelhamer, Mark; Peng, Grace C Y.; Ramat, Stefano; Patel, Vivek
2002-01-01
Previous studies established that vestibular and oculomotor behaviors can have two adapted states (e.g., gain) simultaneously, and that a context cue (e.g., vertical eye position) can switch between the two states. The present study examined this phenomenon of context-specific adaptation for the oculomotor response to interaural translation (which we term "linear vestibulo-ocular reflex" or LVOR even though it may have extravestibular components). Subjects sat upright on a linear sled and were translated at 0.7 Hz and 0.3 g peak acceleration while a visual-vestibular mismatch paradigm was used to adaptively increase (×2) or decrease (×0) the gain of the LVOR. In each experimental session, gain increase was asked for in one context, and gain decrease in another context. Testing in darkness with steps and sines before and after adaptation, in each context, assessed the extent to which the context itself could recall the gain state that was imposed in that context during adaptation. Two different contexts were used: head pitch (26 degrees forward and backward) and head roll (26 degrees or 45 degrees, right and left). Head roll tilt worked well as a context cue: with the head rolled to the right the LVOR could be made to have a higher gain than with the head rolled to the left. Head pitch tilt was less effective as a context cue. This suggests that the more closely related a context cue is to the response being adapted, the more effective it is.
Lin, Nan; Wei, Min
2014-01-01
After vestibular labyrinth injury, behavioral deficits partially recover through the process of vestibular compensation. The present study was performed to improve our understanding of the physiology of the macaque vestibular system in the compensated state (>7 wk) after unilateral labyrinthectomy (UL). Three groups of vestibular nucleus neurons were included: pre-UL control neurons, neurons ipsilateral to the lesion, and neurons contralateral to the lesion. The firing responses of neurons sensitive to linear acceleration in the horizontal plane were recorded during sinusoidal horizontal translation directed along six different orientations (30° apart) at 0.5 Hz and 0.2 g peak acceleration (196 cm/s²). These data defined the vector of best response for each neuron in the horizontal plane, along which sensitivity, symmetry, detection threshold, and variability of firing were determined. Additionally, the responses of the same cells to translation over a series of frequencies (0.25–5.0 Hz) either in the interaural or naso-occipital orientation were obtained to define the frequency response characteristics in each group. We found a decrease in sensitivity, increase in threshold, and alteration in orientation of best responses in the vestibular nuclei after UL. Additionally, the phase relationship of the best neural response to translational stimulation changed with UL. The symmetry of individual neuron responses in the excitatory and inhibitory directions was unchanged by UL. Bilateral central utricular neurons still demonstrated two-dimensional tuning after UL, consistent with spatio-temporal convergence from a single vestibular end-organ. These neuronal data correlate with known behavioral deficits after unilateral vestibular compromise. PMID:24717349
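A minimal sketch of how a "vector of best response" could be derived from responses measured along six orientations spaced 30° apart, assuming a cosine-tuning fit; the fitting procedure and the example numbers are illustrative assumptions, not the authors' actual analysis code.

```python
import numpy as np

def best_response_vector(orientations_deg, responses):
    """Least-squares fit of R(theta) = a*cos(theta) + b*sin(theta) + c;
    returns (sensitivity along the best direction, best orientation in deg)."""
    th = np.radians(orientations_deg)
    A = np.column_stack([np.cos(th), np.sin(th), np.ones_like(th)])
    (a, b, c), *_ = np.linalg.lstsq(A, np.asarray(responses, dtype=float), rcond=None)
    return np.hypot(a, b), np.degrees(np.arctan2(b, a))

# Hypothetical response modulation amplitudes (spikes/s) for one neuron
print(best_response_vector([0, 30, 60, 90, 120, 150], [12, 18, 22, 20, 14, 8]))
```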
Tiitinen, Hannu; Salminen, Nelli H; Palomäki, Kalle J; Mäkinen, Ville T; Alku, Paavo; May, Patrick J C
2006-03-20
In an attempt to delineate the assumed 'what' and 'where' processing streams, we studied the processing of spatial sound in the human cortex by using magnetoencephalography in the passive and active recording conditions and two kinds of spatial stimuli: individually constructed, highly realistic spatial (3D) stimuli and stimuli containing interaural time difference (ITD) cues only. The auditory P1m, N1m, and P2m responses of the event-related field were found to be sensitive to the direction of sound source in the azimuthal plane. In general, the right-hemispheric responses to spatial sounds were more prominent than the left-hemispheric ones. The right-hemispheric P1m and N1m responses peaked earlier for sound sources in the contralateral than for sources in the ipsilateral hemifield and the peak amplitudes of all responses reached their maxima for contralateral sound sources. The amplitude of the right-hemispheric P2m response reflected the degree of spatiality of sound, being twice as large for the 3D than ITD stimuli. The results indicate that the right hemisphere is specialized in the processing of spatial cues in the passive recording condition. Minimum current estimate (MCE) localization revealed that temporal areas were activated both in the active and passive condition. This initial activation, taking place at around 100 ms, was followed by parietal and frontal activity at 180 and 200 ms, respectively. The latter activations, however, were specific to attentional engagement and motor responding. This suggests that parietal activation reflects active responding to a spatial sound rather than auditory spatial processing as such.
Nisha, Kavassery Venkateswaran; Kumar, Ajith Uppunda
2017-04-01
Localization involves processing of subtle yet highly enriched monaural and binaural spatial cues. Remediation programs aimed at resolving spatial deficits are surprisingly scanty in the literature. The present study is designed to explore the changes that occur in the spatial performance of normal-hearing listeners before and after subjecting them to a virtual acoustic space (VAS) training paradigm using behavioral and electrophysiological measures. Ten normal-hearing listeners participated in the study, which was conducted in three phases, including a pre-training, training, and post-training phase. At the pre- and post-training phases, both behavioral measures of spatial acuity and the electrophysiological P300 were administered. The spatial acuity of the participants in the free field and closed field was measured, apart from quantifying their binaural processing abilities. The training phase consisted of 5-8 sessions (20 min each) carried out using a hierarchy of graded VAS stimuli. The results obtained from descriptive statistics were indicative of an improvement in all the spatial acuity measures in the post-training phase. Statistically significant changes were noted in interaural time difference (ITD) and virtual acoustic space identification scores measured in the post-training phase. Effect sizes (r) for all of these measures were substantially large, indicating the clinical relevance of these measures in documenting the impact of training. However, the same was not reflected in P300. On a preliminary basis, the training protocol used in the present study proves to be effective in normal-hearing listeners, and its implications can be extended to other clinical populations as well.
Bremen, Peter; Joris, Philip X
2013-10-30
Interaural time differences (ITDs) are a major cue for localizing low-frequency (<1.5 kHz) sounds. Sensitivity to this cue first occurs in the medial superior olive (MSO), which is thought to perform a coincidence analysis on its monaural inputs. Extracellular single-neuron recordings in MSO are difficult to obtain because (1) MSO action potentials are small and (2) a large field potential locked to the stimulus waveform hampers spike isolation. Consequently, only a limited number of studies report MSO data, and even in these studies data are limited in the variety of stimuli used, in the number of neurons studied, and in spike isolation. More high-quality data are needed to better understand the mechanisms underlying neuronal ITD-sensitivity. We circumvented these difficulties by recording from the axons of MSO neurons in the lateral lemniscus (LL) of the chinchilla, a species with pronounced low-frequency sensitivity. Employing sharp glass electrodes we successfully recorded from neurons with ITD sensitivity: the location, response properties, latency, and spike shape were consistent with an MSO axonal origin. The main difficulty encountered was mechanical stability. We obtained responses to binaural beats and dichotic noise bursts to characterize the best delay versus characteristic frequency distribution, and compared the data to recordings we obtained in the inferior colliculus (IC). In contrast to most reports in other rodents, many best delays were close to zero ITD, both in MSO and IC, with a majority of the neurons recorded in the LL firing maximally within the presumed ethological ITD range.
Li, Na; Pollak, George D.
2013-01-01
Neurons excited by stimulation of one ear and suppressed by the other, called EI neurons, are sensitive to interaural intensity disparities (IIDs), the cues animals use to localize high frequencies. EI neurons are first formed in lateral superior olive (LSO), which then sends excitatory projections to the dorsal nucleus of the lateral lemniscus (DNLL) and the inferior colliculus (IC), both of which contain large populations of EI cells. We evaluate the inputs that innervate EI cells in the IC of Mexican free-tailed bats, Tadarida brasilensis mexicana, with in vivo whole cell recordings from which we derived excitatory and inhibitory conductances. We show that the basic EI property in the majority of IC cells is inherited from LSO, but each type of EI cell is also innervated by the ipsi- or contralateral DNLL, as well as additional excitatory and inhibitory inputs from monaural nuclei. We identify three EI types, where each type receives a set of projections that are different from the other types. To evaluate the role that the various projections played in generating binaural responses, we used modeling to compute a predicted response from the conductances. We then omitted one of the conductances from the computation to evaluate the degree to which that input contributed to the binaural response. We show that formation of the EI property in the various types is complex, and that some projections exert such subtle influences that they could not have been detected with extracellular recordings or even from intracellular recordings of post-synaptic potentials. PMID:23575835
NASA Technical Reports Server (NTRS)
Wood, Scott; Clement, Gilles; Denise, Pierre; Reschke, Millard
2005-01-01
Constant velocity Off-Vertical Axis Rotation (OVAR) imposes a continuously varying orientation of the head and body relative to gravity. The ensuing ocular reflexes include modulation of both horizontal and torsional eye velocity as a function of the varying linear acceleration along the lateral plane. The purpose of this study was to examine whether the modulation of these ocular reflexes would be modified by different head-on-trunk positions. Ten human subjects were rotated in darkness about their longitudinal axis 20 deg off-vertical at constant rates of 45 and 180 deg/s, corresponding to 0.125 and 0.5 Hz. Binocular responses were obtained with video-oculography with the head and trunk aligned, and then with the head turned relative to the trunk 40 deg to the right or left of center. Sinusoidal curve fits were used to derive amplitude, phase and bias velocity of the eye movements across multiple cycles for each head-on-trunk position. Consistent with previous studies, the modulation of torsional eye movements was greater at 0.125 Hz while the modulation of horizontal eye movements was greater at 0.5 Hz. Neither amplitude nor bias velocities were significantly altered by head-on-trunk position. The phases of both torsional and horizontal ocular reflexes, on the other hand, shifted towards alignment with the head. These results are consistent with the modulation of torsional and horizontal ocular reflexes during OVAR being primarily mediated by the otoliths in response to the sinusoidally varying linear acceleration along the interaural head axis.
Effects of hair, clothing, and headgear on localization of three-dimensional sounds Part IIb
NASA Astrophysics Data System (ADS)
Riederer, Klaus A. J.
2003-10-01
Seven 20-25-year-old normal-hearing (<=20 dBHL) native male undergraduates listened twice to treatments of 85 virtual source locations in a large dark anechoic chamber. The 3-D stimuli were anew-calculated white noise bursts, amplitude modulated (40-Hz sine), repeated after a pause (total duration 3×275=825 ms), HRTF-convolved and headphone-equalized (Sennheiser HD580). The HRTFs were measured from a Cortex dummy head wearing different garments: 1=alpaca pullover only; 2=1+curly pony-tailed thick hair+eye glasses; 3=1+long thin hair (ear-covering); 4=1+men's trilby; 5=2+bicycle helmet+jacket [Riederer, J. Acoust. Soc. Am., this issue]. Perceived directions were signified by placing a tailored digitizer stylus over an illuminated ball darkened after the response. Subjects did the experiments over three days, each consisting of a 2-h session of several randomized sets with multiple breaks. Azimuth and elevation errors were investigated separately in a factorial within-subjects ANOVA showing strong dependence (p<=0.004) on all main effects and interactions (garment, elevation, azimuth). The grand mean errors were approximately 16°-19°. Confused angles were retained around the +/-90° interaural axis, and cos(elev) weighting was applied to azimuth errors. The total front-back/back-front confusion rate was 18.38% and up-down/down-up 12.21%. The confusions (except left-right/right-left, 2.07%) and reaction times depended strongly on azimuth (main effect) and garment (interaction). [Work supported by the Graduate School of Electronics, Telecommunication and Automation.]
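A hedged sketch of the cos(elevation) weighting of azimuth errors mentioned above: a fixed azimuth difference corresponds to a smaller great-circle arc at higher elevations. The wrap-around handling and function name are assumptions for illustration, not the study's analysis code.

```python
import numpy as np

def weighted_azimuth_error(az_response_deg, az_target_deg, elev_target_deg):
    # Wrap the raw azimuth difference into [-180, 180) before weighting
    diff = (az_response_deg - az_target_deg + 180.0) % 360.0 - 180.0
    return abs(diff) * np.cos(np.radians(elev_target_deg))

print(weighted_azimuth_error(100.0, 80.0, 60.0))  # 20 deg raw error -> ~10 deg weighted
```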
Development of a method to evaluate glutamate receptor function in rat barrel cortex slices.
Lehohla, M; Russell, V; Kellaway, L; Govender, A
2000-12-01
The rat is a nocturnal animal and uses its vibrissae extensively to navigate its environment. The vibrissae are linked to a highly organized part of the sensory cortex, called the barrel cortex which contains spiny neurons that receive whisker specific thalamic input and distribute their output mainly within the cortical column. The aim of the present study was to develop a method to evaluate glutamate receptor function in the rat barrel cortex. Long Evans rats (90-160 g) were killed by cervical dislocation and decapitated. The brain was rapidly removed, cooled in a continuously oxygenated, ice-cold Hepes buffer (pH 7.4) and sliced using a vibratome to produce 0.35 mm slices. The barrel cortex was dissected from slices corresponding to 8.6 to 4.8 mm anterior to the interaural line and divided into rostral, middle and caudal regions. Depolarization-induced uptake of 45Ca2+ was achieved by incubating test slices in a high K+ (62.5 mM) buffer for 2 minutes at 35 degrees C. Potassium-stimulated uptake of 45Ca2+ into the rostral region was significantly lower than into middle and caudal regions of the barrel cortex. Glutamate had no effect. NMDA significantly increased uptake of 45Ca2+ into all regions of the barrel cortex. The technique is useful in determining NMDA receptor function and will be applied to study differences between spontaneously hypertensive rats (SHR) that are used as a model for attention deficit disorder and their normotensive control rats.
Hamlet, William R.; Liu, Yu-Wei; Tang, Zheng-Quan; Lu, Yong
2014-01-01
Central auditory neurons that localize sound in horizontal space have specialized intrinsic and synaptic cellular mechanisms to tightly control the threshold and timing for action potential generation. However, the critical interplay between intrinsic voltage-gated conductances and extrinsic synaptic conductances in determining neuronal output are not well understood. In chicken, neurons in the nucleus laminaris (NL) encode sound location using interaural time difference (ITD) as a cue. Along the tonotopic axis of NL, there exist robust differences among low, middle, and high frequency (LF, MF, and HF, respectively) neurons in a variety of neuronal properties such as low threshold voltage-gated K+ (LTK) channels and depolarizing inhibition. This establishes NL as an ideal model to examine the interactions between LTK currents and synaptic inhibition across the tonotopic axis. Using whole-cell patch clamp recordings prepared from chicken embryos (E17–E18), we found that LTK currents were larger in MF and HF neurons than in LF neurons. Kinetic analysis revealed that LTK currents in MF neurons activated at lower voltages than in LF and HF neurons, whereas the inactivation of the currents was similar across the tonotopic axis. Surprisingly, blockade of LTK currents using dendrotoxin-I (DTX) tended to broaden the duration and increase the amplitude of the depolarizing inhibitory postsynaptic potentials (IPSPs) in NL neurons without dependence on coding frequency regions. Analyses of the effects of DTX on inhibitory postsynaptic currents led us to interpret this unexpected observation as a result of primarily postsynaptic effects of LTK currents on MF and HF neurons, and combined presynaptic and postsynaptic effects in LF neurons. Furthermore, DTX transferred subthreshold IPSPs to spikes. Taken together, the results suggest a critical role for LTK currents in regulating inhibitory synaptic strength in ITD-coding neurons at various frequencies. PMID:24904297
Binaural masking release in children with Down syndrome.
Porter, Heather L; Grantham, D Wesley; Ashmead, Daniel H; Tharpe, Anne Marie
2014-01-01
Binaural hearing results in a number of listening advantages relative to monaural hearing, including enhanced hearing sensitivity and better speech understanding in adverse listening conditions. These advantages are facilitated in part by the ability to detect and use interaural cues within the central auditory system. Binaural hearing for children with Down syndrome could be impacted by multiple factors, including structural anomalies within the peripheral and central auditory system, alterations in synaptic communication, and chronic otitis media with effusion. However, binaural hearing capabilities have not been investigated in these children. This study tested the hypothesis that children with Down syndrome experience less binaural benefit than typically developing peers. Participants included children with Down syndrome aged 6 to 16 years (n = 11), typically developing children aged 3 to 12 years (n = 46), adults with Down syndrome (n = 3), and adults with no known neurological delays (n = 6). Inclusionary criteria included normal to near-normal hearing sensitivity. Two tasks were used to assess binaural ability. The masking level difference (MLD) was calculated by comparing thresholds for a 500-Hz pure-tone signal in 300-Hz-wide Gaussian noise for the N0S0 and N0Sπ signal configurations. The binaural intelligibility level difference was calculated using simulated free-field conditions. Speech recognition threshold was measured for closed-set spondees presented from 0-degree azimuth in speech-shaped noise presented from 0-, 45- and 90-degree azimuth, respectively. The developmental ability of children with Down syndrome was estimated and information regarding history of otitis media was obtained for all child participants via parent survey. Individuals with Down syndrome had higher masked thresholds for pure-tone and speech stimuli than typically developing individuals. Children with Down syndrome had significantly smaller MLDs than typically developing children. Adults with Down syndrome and control adults had similar MLDs. Similarities in simulated spatial release from masking were observed for all groups for the experimental parameters used in this study. No association was observed for any measure of binaural ability and developmental age for children with Down syndrome. Similar group psychometric functions were observed for children with Down syndrome and typically developing children in most instances, suggesting that attentiveness and motivation contributed equally to performance for both groups on most tasks. The binaural advantages afforded to typically developing children, such as enhanced hearing sensitivity in noise, were not as robust for children with Down syndrome in this study. Children with Down syndrome experienced less binaural benefit than typically developing peers for some stimuli, suggesting that they could require more favorable signal-to-noise ratios to achieve optimal performance in some adverse listening conditions. The reduced release from masking observed for children with Down syndrome could represent a delay in ability rather than a deficit that persists into adulthood. This could have implications for the planning of interventions for individuals with Down syndrome.
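For reference, the masking level difference described above is simply the difference between the diotic and antiphasic masked thresholds,

\[ \mathrm{MLD} = T_{N_0 S_0} - T_{N_0 S_\pi}, \]

so that, for purely illustrative (hypothetical) thresholds of 62 dB SPL in the N0S0 configuration and 50 dB SPL in the N0Sπ configuration, the MLD would be 12 dB; the values reported in the study itself are not reproduced here.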
Do humans show velocity-storage in the vertical rVOR?
Bertolini, G; Bockisch, C J; Straumann, D; Zee, D S; Ramat, S
2008-01-01
To investigate the contribution of the vestibular velocity-storage mechanism (VSM) to the vertical rotational vestibulo-ocular reflex (rVOR) we recorded eye movements evoked by off-vertical axis rotation (OVAR) using whole-body constant-velocity pitch rotations about an earth-horizontal, interaural axis in four healthy human subjects. Subjects were tumbled forward and backward at 60 deg/s for over 1 min using a 3D turntable. Slow-phase velocity (SPV) responses were similar to the horizontal responses elicited by OVAR along the body longitudinal axis ('barbecue' rotation), with exponentially decaying amplitudes and a residual, otolith-driven sinusoidal response with a bias. The time constants of the vertical SPV ranged from 6 to 9 s. These values are closer to those that reflect the dynamic properties of vestibular afferents than the typical 20 s produced by the VSM in the horizontal plane, confirming the relatively smaller contribution of the VSM to these vertical responses. Our preliminary results also agree with the idea that the VSM velocity response aligns with the direction of gravity. The horizontal and torsional eye velocity traces were also sinusoidally modulated by the change in gravity, but showed no exponential decay.
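A minimal sketch of the kind of slow-phase velocity (SPV) decomposition described above: an exponentially decaying component plus an otolith-driven sinusoid at the rotation frequency plus a bias. The synthetic data, starting values, and parameter names are assumptions for illustration, not the authors' fitting code.

```python
import numpy as np
from scipy.optimize import curve_fit

OVAR_FREQ = 1.0 / 6.0  # 60 deg/s constant-velocity rotation -> one cycle per 6 s

def spv_model(t, a0, tau, mod, phase, bias):
    """Decaying canal component + otolith-driven sinusoid at the OVAR frequency + bias."""
    return a0 * np.exp(-t / tau) + mod * np.sin(2 * np.pi * OVAR_FREQ * t + phase) + bias

# Hypothetical SPV trace sampled at 100 Hz for 60 s (synthetic, for illustration only)
t = np.arange(0.0, 60.0, 0.01)
spv = spv_model(t, 40.0, 7.0, 8.0, 0.5, -3.0) + np.random.randn(t.size)

params, _ = curve_fit(spv_model, t, spv, p0=[30.0, 10.0, 5.0, 0.0, 0.0])
a0, tau, mod, phase, bias = params
print(f"time constant ~{tau:.1f} s, modulation ~{mod:.1f} deg/s, bias ~{bias:.1f} deg/s")
```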
Adaptive spatial filtering improves speech reception in noise while preserving binaural cues.
Bissmeyer, Susan R S; Goldsworthy, Raymond L
2017-09-01
Hearing loss greatly reduces an individual's ability to comprehend speech in the presence of background noise. Over the past decades, numerous signal-processing algorithms have been developed to improve speech reception in these situations for cochlear implant and hearing aid users. One challenge is to reduce background noise while not introducing interaural distortion that would degrade binaural hearing. The present study evaluates a noise reduction algorithm, referred to as binaural Fennec, that was designed to improve speech reception in background noise while preserving binaural cues. Speech reception thresholds were measured for normal-hearing listeners in a simulated environment with target speech generated in front of the listener and background noise originating 90° to the right of the listener. Lateralization thresholds were also measured in the presence of background noise. These measures were conducted in anechoic and reverberant environments. Results indicate that the algorithm improved speech reception thresholds, even in highly reverberant environments. The algorithm also improved lateralization thresholds in the anechoic environment while not affecting lateralization thresholds in the reverberant environments. These results provide clear evidence that this algorithm can improve speech reception in background noise while preserving the binaural cues used to lateralize sound.
The role of reverberation-related binaural cues in the externalization of speech.
Catic, Jasmina; Santurette, Sébastien; Dau, Torsten
2015-08-01
The perception of externalization of speech sounds was investigated with respect to the monaural and binaural cues available at the listeners' ears in a reverberant environment. Individualized binaural room impulse responses (BRIRs) were used to simulate externalized sound sources via headphones. The measured BRIRs were subsequently modified such that the proportion of the response containing binaural vs monaural information was varied. Normal-hearing listeners were presented with speech sounds convolved with such modified BRIRs. Monaural reverberation cues were found to be sufficient for the externalization of a lateral sound source. In contrast, for a frontal source, an increased amount of binaural cues from reflections was required in order to obtain well externalized sound images. It was demonstrated that the interaction between the interaural cues of the direct sound and the reverberation strongly affects the perception of externalization. An analysis of the short-term binaural cues showed that the amount of fluctuations of the binaural cues corresponded well to the externalization ratings obtained in the listening tests. The results further suggested that the precedence effect is involved in the auditory processing of the dynamic binaural cues that are utilized for externalization perception.
Lane, Courtney C.; Delgutte, Bertrand
2007-01-01
Spatial release from masking (SRM), a factor in listening in noisy environments, is the improvement in auditory signal detection obtained when a signal is separated in space from a masker. To study the neural mechanisms of SRM, we recorded from single units in the inferior colliculus (IC) of barbiturate-anesthetized cats, focusing on low-frequency neurons sensitive to interaural time differences. The stimulus was a broadband chirp train with a 40-Hz repetition rate in continuous broadband noise, and the unit responses were measured for several signal and masker (virtual) locations. Masked thresholds (the lowest signal-to-noise ratio, SNR, for which the signal could be detected for 75% of the stimulus presentations) changed systematically with signal and masker location. Single-unit thresholds did not necessarily improve with signal and masker separation; instead, they tended to reflect the units’ azimuth preference. Both how the signal was detected (through a rate increase or decrease) and how the noise masked the signal response (suppressive or excitatory masking) changed with signal and masker azimuth, consistent with a cross-correlator model of binaural processing. However, additional processing, perhaps related to the signal’s amplitude modulation rate, appeared to influence the units’ responses. The population masked thresholds (the most sensitive unit’s threshold at each signal and masker location) did improve with signal and masker separation as a result of the variety of azimuth preferences in our unit sample. The population thresholds were similar to human behavioral thresholds in both SNR value and shape, indicating that these units may provide a neural substrate for low-frequency SRM. PMID:15857966
Relation Between Cochlear Mechanics and Performance of Temporal Fine Structure-Based Tasks.
Otsuka, Sho; Furukawa, Shigeto; Yamagishi, Shimpei; Hirota, Koich; Kashino, Makio
2016-12-01
This study examined whether the mechanical characteristics of the cochlea could influence individual variation in the ability to use temporal fine structure (TFS) information. Cochlear mechanical functioning was evaluated by swept-tone evoked otoacoustic emissions (OAEs), which are thought to comprise linear reflection by micromechanical impedance perturbations, such as spatial variations in the number or geometry of outer hair cells, on the basilar membrane (BM). Low-rate (2 Hz) frequency modulation detection limens (FMDLs) were measured for carrier frequency of 1000 Hz and interaural phase difference (IPD) thresholds as indices of TFS sensitivity and high-rate (16 Hz) FMDLs and amplitude modulation detection limens (AMDLs) as indices of sensitivity to non-TFS cues. Significant correlations were found among low-rate FMDLs, low-rate AMDLs, and IPD thresholds (R = 0.47-0.59). A principal component analysis was used to show a common factor that could account for 81.1, 74.1, and 62.9 % of the variance in low-rate FMDLs, low-rate AMDLs, and IPD thresholds, respectively. An OAE feature, specifically a characteristic dip around 2-2.5 kHz in OAE spectra, showed a significant correlation with the common factor (R = 0.54). High-rate FMDLs and AMDLs were correlated with each other (R = 0.56) but not with the other measures. The results can be interpreted as indicating that (1) the low-rate AMDLs, as well as the IPD thresholds and low-rate FMDLs, depend on the use of TFS information coded in neural phase locking and (2) the use of TFS information is influenced by a particular aspect of cochlear mechanics, such as mechanical irregularity along the BM.
Influence of aging on human sound localization
Dobreva, Marina S.; O'Neill, William E.
2011-01-01
Errors in sound localization, associated with age-related changes in peripheral and central auditory function, can pose threats to self and others in a commonly encountered environment such as a busy traffic intersection. This study aimed to quantify the accuracy and precision (repeatability) of free-field human sound localization as a function of advancing age. Head-fixed young, middle-aged, and elderly listeners localized band-passed targets using visually guided manual laser pointing in a darkened room. Targets were presented in the frontal field by a robotically controlled loudspeaker assembly hidden behind a screen. Broadband targets (0.1–20 kHz) activated all auditory spatial channels, whereas low-pass and high-pass targets selectively isolated interaural time and intensity difference cues (ITDs and IIDs) for azimuth and high-frequency spectral cues for elevation. In addition, to assess the upper frequency limit of ITD utilization across age groups more thoroughly, narrowband targets were presented at 250-Hz intervals from 250 Hz up to ∼2 kHz. Young subjects generally showed horizontal overestimation (overshoot) and vertical underestimation (undershoot) of auditory target location, and this effect varied with frequency band. Accuracy and/or precision worsened in older individuals for broadband, high-pass, and low-pass targets, reflective of peripheral but also central auditory aging. In addition, compared with young adults, middle-aged, and elderly listeners showed pronounced horizontal localization deficiencies (imprecision) for narrowband targets within 1,250–1,575 Hz, congruent with age-related central decline in auditory temporal processing. Findings underscore the distinct neural processing of the auditory spatial cues in sound localization and their selective deterioration with advancing age. PMID:21368004
Subjective evaluation and electroacoustic theoretical validation of a new approach to audio upmixing
NASA Astrophysics Data System (ADS)
Usher, John S.
Audio signal processing systems for converting two-channel (stereo) recordings to four or five channels are increasingly relevant. These audio upmixers can be used with conventional stereo sound recordings and reproduced with multichannel home theatre or automotive loudspeaker audio systems to create a more engaging and natural-sounding listening experience. This dissertation discusses existing approaches to audio upmixing for recordings of musical performances and presents specific design criteria for a system to enhance spatial sound quality. A new upmixing system is proposed and evaluated according to these criteria and a theoretical model for its behavior is validated using empirical measurements. The new system removes short-term correlated components from two electronic audio signals using a pair of adaptive filters, updated according to a frequency-domain implementation of the normalized least-mean-square algorithm. The major difference between the new system and all extant audio upmixers is that unsupervised time-alignment of the input signals (typically, by up to +/-10 ms) as a function of frequency (typically, using a 1024-band equalizer) is accomplished by the non-minimum-phase adaptive filter. Two new signals are created from the weighted difference of the inputs, and are then radiated with two loudspeakers behind the listener. According to the consensus in the literature on the effect of interaural correlation on auditory image formation, the self-orthogonalizing properties of the algorithm ensure minimal distortion of the frontal source imagery and natural-sounding, enveloping reverberance (ambiance) imagery. Performance evaluation of the new upmix system was accomplished in two ways: firstly, using empirical electroacoustic measurements which validate a theoretical model of the system; and secondly, with formal listening tests which investigated auditory spatial imagery with a graphical mapping tool and a preference experiment. Both electroacoustic and subjective methods investigated system performance with a variety of test stimuli for solo musical performances reproduced using a loudspeaker in an orchestral concert hall and recorded using different microphone techniques. The objective and subjective evaluations, combined with a comparative study with two commercial systems, demonstrate that the proposed system provides a new, computationally practical, high-sound-quality solution to upmixing.
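A rough sketch of the core idea described above: a frequency-domain normalized LMS adaptive filter removes the short-term correlated (direct) component shared by the two stereo channels, leaving a decorrelated residual that can drive the rear loudspeakers. The block size, step size, simplified overlap handling, and the hypothetical "stereo" input in the usage comment are assumptions, not the dissertation's exact implementation.

```python
import numpy as np

def fd_nlms_ambience(left, right, n_fft=2048, mu=0.5, eps=1e-8):
    """Predict the right channel from the left with a per-bin adaptive filter;
    the prediction error (the decorrelated residual) is returned as 'ambience'."""
    hop = n_fft // 2                       # simple 50% hop, overlap-save style
    W = np.zeros(n_fft, dtype=complex)     # adaptive filter, one complex gain per bin
    ambience = np.zeros(len(right))
    for start in range(0, len(left) - n_fft, hop):
        X = np.fft.fft(left[start:start + n_fft])
        D = np.fft.fft(right[start:start + n_fft])
        E = D - W * X                      # error = right minus predicted right
        W += mu * np.conj(X) * E / (np.abs(X) ** 2 + eps)   # normalized LMS update
        e = np.real(np.fft.ifft(E))
        ambience[start + hop:start + n_fft] = e[hop:]       # keep the newest half-block
    return ambience

# Usage (hypothetical): rear = fd_nlms_ambience(stereo[:, 0], stereo[:, 1])
```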
Bogle, Jamie M; Zapala, David A; Criter, Robin; Burkard, Robert
2013-02-01
The cervical vestibular evoked myogenic potential (cVEMP) is a reflexive change in sternocleidomastoid (SCM) muscle contraction activity thought to be mediated by a saccular vestibulo-collic reflex. CVEMP amplitude varies with the state of the afferent (vestibular) limb of the vestibulo-collic reflex pathway, as well as with the level of SCM muscle contraction. It follows that in order for cVEMP amplitude to reflect the status of the afferent portion of the reflex pathway, muscle contraction level must be controlled. Historically, this has been accomplished by volitionally controlling muscle contraction level either with the aid of a biofeedback method, or by an a posteriori method that normalizes cVEMP amplitude by the level of muscle contraction. A posteriori normalization methods make the implicit assumption that mathematical normalization precisely removes the influence of the efferent limb of the vestibulo-collic pathway. With the cVEMP, however, we are violating basic assumptions of signal averaging: specifically, the background noise and the response are not independent. The influence of this signal-averaging violation on our ability to normalize cVEMP amplitude using a posteriori methods is not well understood. The aims of this investigation were to describe the effect of muscle contraction, as measured by a prestimulus electromyogenic estimate, on cVEMP amplitude and interaural amplitude asymmetry ratio, and to evaluate the benefit of using a commonly advocated a posteriori normalization method on cVEMP amplitude and asymmetry ratio variability. Prospective, repeated-measures design using a convenience sample. Ten healthy adult participants between 25 and 61 yr of age. cVEMP responses to 500 Hz tone bursts (120 dB pSPL) for three conditions describing maximum, moderate, and minimal muscle contraction. Mean (standard deviation) cVEMP amplitude and asymmetry ratios were calculated for each muscle-contraction condition. Repeated measures analysis of variance and t-tests compared the variability in cVEMP amplitude between sides and conditions. Linear regression analyses compared asymmetry ratios. Polynomial regression analyses described the corrected and uncorrected cVEMP amplitude growth functions. While cVEMP amplitude increased with increased muscle contraction, the relationship was not linear or even proportionate. In the majority of cases, once muscle contraction reached a certain "threshold" level, cVEMP amplitude increased rapidly and then saturated. Normalizing cVEMP amplitudes did not remove the relationship between cVEMP amplitude and muscle contraction level. As muscle contraction increased, the normalized amplitude increased, and then decreased, corresponding with the observed amplitude saturation. Abnormal asymmetry ratios (based on values reported in the literature) were noted for four instances of uncorrected amplitude asymmetry at less than maximum muscle contraction levels. Amplitude normalization did not substantially change the number of observed asymmetry ratios. Because cVEMP amplitude did not typically grow proportionally with muscle contraction level, amplitude normalization did not lead to stable cVEMP amplitudes or asymmetry ratios across varying muscle contraction levels. Until we better understand the relationships between muscle contraction level, surface electromyography (EMG) estimates of muscle contraction level, and cVEMP amplitude, the application of normalization methods to correct cVEMP amplitude appears unjustified. American Academy of Audiology.
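An illustrative sketch, not the study's code, of the a posteriori EMG normalization and interaural asymmetry ratio discussed above. The epoch layout, the prestimulus window length, and the simple peak-to-peak amplitude measure are assumptions for illustration.

```python
import numpy as np

def normalized_cvemp_amplitude(epoch, fs, prestim_ms=20.0):
    """Peak-to-peak response amplitude divided by the mean rectified prestimulus
    EMG, for one averaged cVEMP epoch whose first prestim_ms precede the stimulus."""
    n_pre = int(prestim_ms * 1e-3 * fs)
    prestim_emg = np.mean(np.abs(epoch[:n_pre]))          # background contraction estimate
    p2p = np.max(epoch[n_pre:]) - np.min(epoch[n_pre:])   # raw response amplitude
    return p2p / prestim_emg

def asymmetry_ratio(amp_left, amp_right):
    """Interaural amplitude asymmetry ratio, in percent."""
    return 100.0 * abs(amp_left - amp_right) / (amp_left + amp_right)
```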
Parthasarathy, Aravindakshan; Bartlett, Edward
2012-07-01
Auditory brainstem responses (ABRs) and envelope and frequency following responses (EFRs and FFRs) are widely used to study aberrant auditory processing in conditions such as aging. We have previously reported age-related deficits in auditory processing for rapid amplitude modulation (AM) frequencies using EFRs recorded from a single channel. However, sensitive testing of EFRs along a wide range of modulation frequencies is required to gain a more complete understanding of the auditory processing deficits. In this study, ABRs and EFRs were recorded simultaneously from two electrode configurations in young and old Fischer-344 rats, a common auditory aging model. Analysis shows that the two channels respond most sensitively to complementary AM frequencies. Channel 1, recorded from Fz to mastoid, responds better to faster AM frequencies in the 100-700 Hz range, while Channel 2, recorded from the inter-aural line to the mastoid, responds better to slower AM frequencies in the 16-100 Hz range. Simultaneous recording of Channels 1 and 2 using AM stimuli with varying sound levels and modulation depths shows that age-related deficits in temporal processing are not present at slower AM frequencies but only at more rapid ones, which would not have been apparent recording from either channel alone. Comparison of EFRs between un-anesthetized and isoflurane-anesthetized recordings in young animals, as well as comparison with previously published ABR waveforms, suggests that the generators of Channel 1 may emphasize more caudal brainstem structures while those of Channel 2 may emphasize more rostral auditory nuclei including the inferior colliculus and the forebrain, with the boundary of separation potentially along the cochlear nucleus/superior olivary complex. Simultaneous two-channel recording of EFRs helps to give a more complete understanding of the properties of auditory temporal processing over a wide range of modulation frequencies, which is useful in understanding neural representations of sound stimuli in normal, developmental or pathological conditions. Copyright © 2012 Elsevier B.V. All rights reserved.
Motion perception during variable-radius swing motion in darkness.
Rader, A A; Oman, C M; Merfeld, D M
2009-10-01
Using a variable-radius roll swing motion paradigm, we examined the influence of interaural (y-axis) and dorsoventral (z-axis) force modulation on perceived tilt and translation by measuring perception of horizontal translation, roll tilt, and distance from center of rotation (radius) at 0.45 and 0.8 Hz using standard magnitude estimation techniques (primarily verbal reports) in darkness. Results show that motion perception was significantly influenced by both y- and z-axis forces. During constant radius trials, subjects' perceptions of tilt and translation were generally almost veridical. By selectively pairing radius (1.22 and 0.38 m) and frequency (0.45 and 0.8 Hz, respectively), the y-axis acceleration could be tailored in opposition to gravity so that the combined y-axis gravitoinertial force (GIF) variation at the subject's ears was reduced to approximately 0.035 m/s² - in effect, the y-axis GIF was "nulled" below putative perceptual threshold levels. With y-axis force nulling, subjects overestimated their tilt angle and underestimated their horizontal translation and radius. For some y-axis nulling trials, a radial linear acceleration at twice the tilt frequency (0.25 m/s² at 0.9 Hz, 0.13 m/s² at 1.6 Hz) was simultaneously applied to reduce the z-axis force variations caused by centripetal acceleration and by changes in the z-axis component of gravity during tilt. For other trials, the phase of this radial linear acceleration was altered to double the magnitude of the z-axis force variations. z-axis force nulling further increased the perceived tilt angle and further decreased perceived horizontal translation and radius relative to the y-axis nulling trials, while z-axis force doubling had the opposite effect. Subject reports were remarkably geometrically consistent; an observer model-based analysis suggests that perception was influenced by knowledge of swing geometry.
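A short worked derivation, under a small-angle pendulum approximation assumed here for illustration rather than taken from the study, shows why these radius-frequency pairings null the interaural gravitoinertial force. With swing angle \( \theta(t) = \theta_0 \sin(\omega t) \), the interaural specific force at the ear combines the gravity component and the tangential inertial reaction:

\[ f_y \approx g\sin\theta + r\ddot{\theta} \approx \left(g - r\omega^2\right)\theta(t), \]

which vanishes when \( r = g/\omega^2 = g/(2\pi f)^2 \). This gives \( r \approx 9.81/(2\pi \cdot 0.45)^2 \approx 1.23\ \mathrm{m} \) at 0.45 Hz and \( r \approx 9.81/(2\pi \cdot 0.8)^2 \approx 0.39\ \mathrm{m} \) at 0.8 Hz, consistent with the 1.22 m and 0.38 m pairings described above.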
Magnetoencephalographic responses in relation to temporal and spatial factors of sound fields
NASA Astrophysics Data System (ADS)
Soeta, Yoshiharu; Nakagawa, Seiji; Tonoike, Mitsuo; Hotehama, Takuya; Ando, Yoichi
2004-05-01
To establish guidelines based on brain function for designing sound fields such as concert halls and opera houses, the responses of the human brain to the temporal and spatial factors of the sound field have been investigated using magnetoencephalography (MEG), a noninvasive technique for investigating neuronal activity in the human brain. First, the auditory evoked responses to changes in the magnitude of the interaural cross-correlation (IACC) were analyzed. The IACC is a spatial factor that strongly influences the degree of subjective preference and perceived diffuseness of sound fields. The results indicated that the peak amplitude of N1m, which was found over the left and right temporal lobes around 100 ms after stimulus onset, decreased with increasing IACC. Second, the responses corresponding to subjective preference for one of the typical temporal factors, i.e., the initial delay gap between the direct sound and the first reflection, were investigated. The results showed that the effective duration of the autocorrelation function of the MEG signal between 8 and 13 Hz became longer during presentation of a preferred stimulus. These results indicate that the brain may be relaxed, repeating a similar temporal rhythm, under preferred sound fields.
Fly-ear inspired acoustic sensors for gunshot localization
NASA Astrophysics Data System (ADS)
Liu, Haijun; Currano, Luke; Gee, Danny; Yang, Benjamin; Yu, Miao
2009-05-01
The supersensitive ears of the parasitoid fly Ormia ochracea have inspired researchers to develop bio-inspired directional microphones for sound localization. Although the fly ear is optimized for localizing the narrow-band calling song of crickets at 5 kHz, experiments and simulation have shown that it can amplify directional cues for a wide frequency range. In this article, a theoretical investigation is presented to study the use of fly-ear inspired directional microphones for gunshot localization. Using an equivalent 2-DOF model of the fly ear, the time responses of the fly ear structure to a typical shock wave are obtained and the associated time delay is estimated by using cross-correlation. Both near-field and far-field scenarios are considered. The simulation shows that the fly ear can greatly amplify the time delay by ~20 times, which indicates that with an interaural distance of only 1.2 mm the fly ear is able to generate a time delay comparable to that obtained by a conventional microphone pair with a separation as large as 24 mm. Since the parameters of the fly ear structure can also be tuned for muzzle blast and other impulse stimuli, fly-ear inspired acoustic sensors offer great potential for developing portable gunshot localization systems.
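A small sketch of the cross-correlation time-delay estimate mentioned above, applied to two synthetic impulse-like signals. The sampling rate, pulse shape, and 5 microsecond delay are arbitrary assumptions for illustration, not parameters from the article.

```python
import numpy as np

def estimate_delay(x, y, fs):
    """Delay of y relative to x, in seconds, from the peak of the full cross-correlation."""
    xc = np.correlate(y - y.mean(), x - x.mean(), mode="full")
    lag = np.argmax(xc) - (len(x) - 1)
    return lag / fs

fs = 200_000                                          # 200 kHz sampling (assumed)
t = np.arange(0, 0.005, 1 / fs)
pulse = np.exp(-((t - 0.001) / 1e-4) ** 2)            # idealized shock-like pulse
delayed = np.exp(-((t - 0.001 - 5e-6) / 1e-4) ** 2)   # same pulse arriving 5 us later
print(estimate_delay(pulse, delayed, fs))             # ~5e-06 s
```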
Hamlet, William R.; Lu, Yong
2016-01-01
Intrinsic plasticity has emerged as an important mechanism regulating neuronal excitability and output under physiological and pathological conditions. Here, we report a novel form of intrinsic plasticity. Using perforated patch clamp recordings, we examined the modulatory effects of group II metabotropic glutamate receptors (mGluR II) on voltage-gated potassium (KV) currents and the firing properties of neurons in the chicken nucleus laminaris (NL), the first central auditory station where interaural time cues are analyzed for sound localization. We found that activation of mGluR II by synthetic agonists resulted in a selective increase of the high threshold KV currents. More importantly, synaptically released glutamate (with reuptake blocked) also enhanced the high threshold KV currents. The enhancement was frequency-coding region dependent, being more pronounced in low frequency neurons compared to middle and high frequency neurons. The intracellular mechanism involved the Gβγ signaling pathway associated with phospholipase C and protein kinase C. The modulation strengthened membrane outward rectification, sharpened action potentials, and improved the ability of NL neurons to follow high frequency inputs. These data suggest that mGluR II provides a feedforward modulatory mechanism that may regulate temporal processing under the condition of heightened synaptic inputs. PMID:26964678
NASA Astrophysics Data System (ADS)
Azarpour, Masoumeh; Enzner, Gerald
2017-12-01
Binaural noise reduction, with applications for instance in hearing aids, has been a very significant challenge. This task relates to the optimal utilization of the available microphone signals for the estimation of the ambient noise characteristics and for the optimal filtering algorithm to separate the desired speech from the noise. The additional requirements of low computational complexity and low latency further complicate the design. A particular challenge results from the desired reconstruction of binaural speech input with spatial cue preservation. The latter essentially diminishes the utility of multiple-input/single-output filter-and-sum techniques such as beamforming. In this paper, we propose a comprehensive and effective signal processing configuration with which most of the aforementioned criteria can be met suitably. This relates especially to the requirement of efficient online adaptive processing for noise estimation and optimal filtering while preserving the binaural cues. Regarding noise estimation, we consider three different architectures: interaural (ITF), cross-relation (CR), and principal-component (PCA) target blocking. An objective comparison with two other noise PSD estimation algorithms demonstrates the superiority of the blocking-based noise estimators, especially the CR-based and ITF-based blocking architectures. Moreover, we present a new noise reduction filter based on minimum mean-square error (MMSE), which belongs to the class of common gain filters, hence being rigorous in terms of spatial cue preservation but also efficient and competitive for the acoustic noise reduction task. A formal real-time subjective listening test procedure is also developed in this paper. The proposed listening test enables a real-time assessment of the proposed computationally efficient noise reduction algorithms in a realistic acoustic environment, e.g., considering time-varying room impulse responses and the Lombard effect. The listening test outcome reveals that the signals processed by the blocking-based algorithms are significantly preferred over the noisy signal in terms of instantaneous noise attenuation. Furthermore, the listening test data analysis confirms the conclusions drawn based on the objective evaluation.
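A minimal sketch of the "common gain" idea referred to above: one real-valued gain per time-frequency bin, computed from an externally supplied noise PSD estimate and applied identically to the left and right short-time spectra, so interaural phase and level relations are left untouched. The Wiener-type gain rule, the gain floor, and the STFT inputs are assumptions, not the paper's exact MMSE estimator or blocking architecture.

```python
import numpy as np

def common_gain_noise_reduction(L, R, noise_psd, g_min=0.1):
    """L, R: complex STFTs (freq x frames). noise_psd: broadcastable noise PSD estimate.
    Returns the two enhanced spectra, filtered with one shared real gain per bin."""
    mix_psd = 0.5 * (np.abs(L) ** 2 + np.abs(R) ** 2)
    snr = np.maximum(mix_psd / (noise_psd + 1e-12) - 1.0, 0.0)  # crude a priori SNR estimate
    gain = np.clip(snr / (snr + 1.0), g_min, 1.0)               # Wiener-type gain with a floor
    return gain * L, gain * R                                   # identical gain on both ears
```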
Li, Huahui; Kong, Lingzhi; Wu, Xihong; Li, Liang
2013-01-01
In reverberant rooms with multiple-people talking, spatial separation between speech sources improves recognition of attended speech, even though both the head-shadowing and interaural-interaction unmasking cues are limited by numerous reflections. It is the perceptual integration between the direct wave and its reflections that bridges the direct-reflection temporal gaps and results in the spatial unmasking under reverberant conditions. This study further investigated (1) the temporal dynamic of the direct-reflection-integration-based spatial unmasking as a function of the reflection delay, and (2) whether this temporal dynamic is correlated with the listeners’ auditory ability to temporally retain raw acoustic signals (i.e., the fast decaying primitive auditory memory, PAM). The results showed that recognition of the target speech against the speech-masker background is a descending exponential function of the delay of the simulated target reflection. In addition, the temporal extent of PAM is frequency dependent and markedly longer than that for perceptual fusion. More importantly, the temporal dynamic of the speech-recognition function is significantly correlated with the temporal extent of the PAM of low-frequency raw signals. Thus, we propose that a chain process, which links the earlier-stage PAM with the later-stage correlation computation, perceptual integration, and attention facilitation, plays a role in spatially unmasking target speech under reverberant conditions. PMID:23658664
Winters, Bradley D.; Jin, Shan-Xue; Ledford, Kenneth R.
2017-01-01
The principal neurons of the medial superior olive (MSO) encode cues for horizontal sound localization through comparisons of the relative timing of EPSPs. To understand how the timing and amplitude of EPSPs are maintained during propagation in the dendrites, we made dendritic and somatic whole-cell recordings from MSO principal neurons in brain slices from Mongolian gerbils. In somatic recordings, EPSP amplitudes were largely uniform following minimal stimulation of excitatory synapses at visualized locations along the dendrites. Similar results were obtained when excitatory synaptic transmission was eliminated in a low calcium solution and then restored at specific dendritic sites by pairing input stimulation and focal application of a higher calcium solution. We performed dual dendritic and somatic whole-cell recordings to measure spontaneous EPSPs using a dual-channel template-matching algorithm to separate out those events initiated at or distal to the dendritic recording location. Local dendritic spontaneous EPSP amplitudes increased sharply in the dendrite with distance from the soma (length constant, 53.6 μm), but their attenuation during propagation resulted in a uniform amplitude of ∼0.2 mV at the soma. The amplitude gradient of dendritic EPSPs was also apparent in responses to injections of identical simulated excitatory synaptic currents in the dendrites. Compartmental models support the view that these results extensively reflect the influence of dendritic cable properties. With relatively few excitatory axons innervating MSO neurons, the normalization of dendritic EPSPs at the soma would increase the importance of input timing versus location during the processing of interaural time difference cues in vivo. SIGNIFICANCE STATEMENT The neurons of the medial superior olive analyze cues for sound localization by detecting the coincidence of binaural excitatory synaptic inputs distributed along the dendrites. Previous studies have shown that dendritic voltages undergo severe attenuation as they propagate to the soma, potentially reducing the influence of distal inputs. However, using dendritic and somatic patch recordings, we found that dendritic EPSP amplitude increased with distance from the soma, compensating for dendritic attenuation and normalizing EPSP amplitude at the soma. Much of this normalization reflected the influence of dendritic morphology. As different combinations of presynaptic axons may be active during consecutive cycles of sound stimuli, somatic EPSP normalization renders spike initiation more sensitive to synapse timing than dendritic location. PMID:28213442
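As a compact restatement of the amplitude gradient reported above (the complementary attenuation term is an assumption used only to illustrate the normalization, not a fitted result from the paper), an exponential gradient of local dendritic EPSP amplitude,

\[ A_{\mathrm{dend}}(x) \approx A_0\, e^{x/\lambda}, \qquad \lambda \approx 53.6\ \mu\mathrm{m}, \]

combined with approximately exponential attenuation during propagation to the soma, \( A_{\mathrm{soma}}(x) \approx A_{\mathrm{dend}}(x)\, e^{-x/\lambda_{\mathrm{att}}} \), yields a roughly distance-independent somatic amplitude (about 0.2 mV here) whenever \( \lambda_{\mathrm{att}} \approx \lambda \).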
Role of Binaural Temporal Fine Structure and Envelope Cues in Cocktail-Party Listening.
Swaminathan, Jayaganesh; Mason, Christine R; Streeter, Timothy M; Best, Virginia; Roverud, Elin; Kidd, Gerald
2016-08-03
While conversing in a crowded social setting, a listener is often required to follow a target speech signal amid multiple competing speech signals (the so-called "cocktail party" problem). In such situations, separation of the target speech signal in azimuth from the interfering masker signals can lead to an improvement in target intelligibility, an effect known as spatial release from masking (SRM). This study assessed the contributions of two stimulus properties that vary with separation of sound sources, binaural envelope (ENV) and temporal fine structure (TFS), to SRM in normal-hearing (NH) human listeners. Target speech was presented from the front and speech maskers were either colocated with or symmetrically separated from the target in azimuth. The target and maskers were presented either as natural speech or as "noise-vocoded" speech in which the intelligibility was conveyed only by the speech ENVs from several frequency bands; the speech TFS within each band was replaced with noise carriers. The experiments were designed to preserve the spatial cues in the speech ENVs while retaining/eliminating them from the TFS. This was achieved by using the same/different noise carriers in the two ears. A phenomenological auditory-nerve model was used to verify that the interaural correlations in TFS differed across conditions, whereas the ENVs retained a high degree of correlation, as intended. Overall, the results from this study revealed that binaural TFS cues, especially for frequency regions below 1500 Hz, are critical for achieving SRM in NH listeners. Potential implications for studying SRM in hearing-impaired listeners are discussed. Acoustic signals received by the auditory system pass first through an array of physiologically based band-pass filters. Conceptually, at the output of each filter, there are two principal forms of temporal information: slowly varying fluctuations in the envelope (ENV) and rapidly varying fluctuations in the temporal fine structure (TFS). The importance of these two types of information in everyday listening (e.g., conversing in a noisy social situation; the "cocktail-party" problem) has not been established. This study assessed the contributions of binaural ENV and TFS cues for understanding speech in multiple-talker situations. Results suggest that, whereas the ENV cues are important for speech intelligibility, binaural TFS cues are critical for perceptually segregating the different talkers and thus for solving the cocktail party problem. Copyright © 2016 the authors 0270-6474/16/368250-08$15.00/0.
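A rough sketch of the noise-vocoding manipulation described above for a single analysis band: the band's Hilbert envelope is imposed on a band-limited noise carrier, and using the identical carrier array at both ears keeps the carriers interaurally correlated while a fresh carrier per ear decorrelates the fine structure. Filter order, band edges, and the placeholder signals are assumptions, not the study's processing parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def vocode_band(speech, fs, lo, hi, carrier):
    """Impose the band's Hilbert envelope on a band-limited noise carrier."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    env = np.abs(hilbert(filtfilt(b, a, speech)))   # the band's ENV cue
    return env * filtfilt(b, a, carrier)            # noise replaces the band's TFS

fs = 16000
speech = np.random.randn(fs)          # placeholder for a one-second speech token
shared_carrier = np.random.randn(fs)  # same array at both ears -> correlated carriers
left = vocode_band(speech, fs, 300, 600, shared_carrier)
right = vocode_band(speech, fs, 300, 600, shared_carrier)  # or a fresh carrier per ear
```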
NASA Astrophysics Data System (ADS)
Fujii, Kenji
2002-06-01
In this dissertation, a correlation mechanism for modeling processes in visual perception is introduced. It has been well established that the correlation mechanism is effective for describing subjective attributes in auditory perception. The main result is that the correlation mechanism can be applied to temporal and spatial vision, as well as to audition. (1) A psychophysical experiment was performed on subjective flicker rates for complex waveforms. A remarkable result is that the missing-fundamental phenomenon is found in temporal vision, analogous to auditory pitch perception. This implies the existence of a correlation mechanism in the visual system. (2) For spatial vision, autocorrelation analysis provides useful measures for describing three primary perceptual properties of visual texture: contrast, coarseness, and regularity. Another experiment showed that the degree of regularity is a salient cue for texture preference judgments. (3) In addition, the autocorrelation function (ACF) and inter-aural cross-correlation function (IACF) were applied to the analysis of the temporal and spatial properties of environmental noise. It was confirmed that the acoustical properties of aircraft noise and traffic noise are well described by these functions. These analyses provided useful parameters, extracted from the ACF and IACF, for assessing subjective annoyance to noise. Thesis advisor: Yoichi Ando. Copies of this thesis written in English can be obtained from Junko Atagi, 6813 Mosonou, Saijo-cho, Higashi-Hiroshima 739-0024, Japan. E-mail address: atagi@urban.ne.jp.
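As a concrete illustration of the two analyses named above, the normalized ACF and the IACF (with its derived IACC and τIACC) can be computed directly from a windowed binaural signal; the lag range and normalization below are conventional choices, not taken from the thesis.

```python
# Sketch of ACF and IACF analysis (conventional definitions; illustrative only).
import numpy as np

def acf(x):
    """Normalized autocorrelation, Phi(0) = 1."""
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[len(x) - 1:]
    return r / (r[0] + 1e-12)

def iacf(left, right, fs, max_lag_ms=1.0):
    """Interaural cross-correlation over +/-1 ms; returns lags, IACF, IACC, tau_IACC."""
    l, r = left - left.mean(), right - right.mean()
    norm = np.sqrt(np.dot(l, l) * np.dot(r, r)) + 1e-12
    full = np.correlate(l, r, mode="full") / norm
    mid, k = len(r) - 1, int(max_lag_ms * 1e-3 * fs)
    lags = np.arange(-k, k + 1) / fs
    vals = full[mid - k:mid + k + 1]
    return lags, vals, vals.max(), lags[vals.argmax()]
```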
Stereotactic Radiosurgery for Acoustic Neuromas: What Happens Long Term?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roos, Daniel E., E-mail: daniel.roos@health.sa.gov.au; University of Adelaide School of Medicine, Adelaide, South Australia; Potter, Andrew E.
2012-03-15
Purpose: To determine the clinical outcomes for acoustic neuroma treated with low-dose linear accelerator stereotactic radiosurgery (SRS) >10 years earlier at the Royal Adelaide Hospital using data collected prospectively at a dedicated SRS clinic. Methods and Materials: Between November 1993 and December 2000, 51 patients underwent SRS for acoustic neuroma. For the 44 patients with primary SRS for sporadic (unilateral) lesions, the median age was 63 years, the median of the maximal tumor diameter was 21 mm (range, 11-34), and the marginal dose was 14 Gy for the first 4 patients and 12 Gy for the other 40. Results: The crude tumor control rate was 97.7% (1 patient required salvage surgery for progression at 9.75 years). Only 8 (29%) of 28 patients ultimately retained useful hearing (interaural pure tone average ≤50 dB). Also, although the Kaplan-Meier estimated rate of hearing preservation at 5 years was 57% (95% confidence interval, 38-74%), this decreased to 24% (95% confidence interval, 11-44%) at 10 years. New or worsened V and VII cranial neuropathy occurred in 11% and 2% of patients, respectively; all cases were transient. No case of radiation oncogenesis developed. Conclusions: The long-term follow-up data of low-dose (12-14 Gy) linear accelerator SRS for acoustic neuroma have confirmed excellent tumor control and acceptable cranial neuropathy rates but a continual decrease in hearing preservation out to ≥10 years.
Physiology of primary saccular afferents of goldfish: implications for Mauthner cell response.
Fay, R R
1995-01-01
Mauthner cells receive neurally coded information from the otolith organs in fishes, and it is most likely that initiation and directional characteristics of the C-start response depend on this input. In the goldfish, saccular afferents are sensitive to sound pressure (< -30 dB re: 1 dyne cm-2) in the most sensitive frequency range (200 to 800 Hz). This input arises from volume fluctuations of the swimbladder in response to the sound pressure waveform and is thus nondirectional. Primary afferents of the saccule, lagena, and utricle of the goldfish also respond with great sensitivity to acoustic particle motion (< 1 nanometer between 100 and 200 Hz). This input arises from the acceleration of the fish in a sound field and is inherently directional. Saccular afferents can be divided into two groups based on their tuning: one group is tuned at about 250 Hz, and the other tuned between 400 Hz and 1 kHz. All otolithic primary afferents phaselock to sinusoids throughout the frequency range of hearing (up to about 2 kHz). Based on physiological and behavioral studies on Mauthner cells, it appears that highly correlated binaural input to the M-cell, from the sacculi responding to sound pressure, may be required for a decision to respond but that the direction of the response is extracted from small deviations from a perfect interaural correlation arising from the directional response of otolith organs to acoustic particle motion.
A model of head-related transfer functions based on a state-space analysis
NASA Astrophysics Data System (ADS)
Adams, Norman Herkamp
This dissertation develops and validates a novel state-space method for binaural auditory display. Binaural displays seek to immerse a listener in a 3D virtual auditory scene with a pair of headphones. The challenge for any binaural display is to compute the two signals to supply to the headphones. The present work considers a general framework capable of synthesizing a wide variety of auditory scenes. The framework models collections of head-related transfer functions (HRTFs) simultaneously. This framework improves the flexibility of contemporary displays, but it also compounds the steep computational cost of the display. The cost is reduced dramatically by formulating the collection of HRTFs in the state-space and employing order-reduction techniques to design efficient approximants. Order-reduction techniques based on the Hankel-operator are found to yield accurate low-cost approximants. However, the inter-aural time difference (ITD) of the HRTFs degrades the time-domain response of the approximants. Fortunately, this problem can be circumvented by employing a state-space architecture that allows the ITD to be modeled outside of the state-space. Accordingly, three state-space architectures are considered. Overall, a multiple-input, single-output (MISO) architecture yields the best compromise between performance and flexibility. The state-space approximants are evaluated both empirically and psychoacoustically. An array of truncated FIR filters is used as a pragmatic reference system for comparison. For a fixed cost bound, the state-space systems yield lower approximation error than FIR arrays for D>10, where D is the number of directions in the HRTF collection. A series of headphone listening tests are also performed to validate the state-space approach, and to estimate the minimum order N of indiscriminable approximants. For D = 50, the state-space systems yield order thresholds less than half those of the FIR arrays. Depending upon the stimulus uncertainty, a minimum state-space order of 7≤N≤23 appears to be adequate. In conclusion, the proposed state-space method enables a more flexible and immersive binaural display with low computational cost.
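The order-reduction step can be illustrated with a Hankel-SVD (Kung/ERA-style) construction that turns a measured HRTF impulse response into a low-order state-space filter. This is a simplified single-input, single-output sketch of the general idea, not the dissertation's Hankel-norm approximants or its MISO architecture.

```python
# Hankel-SVD (Kung/ERA-style) reduction of one HRTF impulse response to a
# low-order state-space filter. Simplified SISO sketch for illustration.
import numpy as np

def hankel_reduce(h, order):
    """h: FIR impulse response; returns (A, B, C, d) of the given order."""
    d = float(h[0])                       # direct feed-through term
    markov = np.asarray(h[1:], dtype=float)
    m = len(markov) // 2
    H = np.array([[markov[i + j] for j in range(m)] for i in range(m)])
    U, s, Vt = np.linalg.svd(H)
    sr = np.sqrt(s[:order])
    O = U[:, :order] * sr                 # observability-like factor
    Ctrb = sr[:, None] * Vt[:order]       # controllability-like factor
    A = np.linalg.pinv(O[:-1]) @ O[1:]    # shift-invariance of O gives A
    B, C = Ctrb[:, :1], O[:1, :]
    return A, B, C, d

def run(A, B, C, d, u):
    """Filter an input signal through the reduced state-space model."""
    x = np.zeros((A.shape[0], 1))
    y = np.empty(len(u))
    for n, un in enumerate(u):
        y[n] = (C @ x).item() + d * un
        x = A @ x + B * un
    return y
```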
Perception of the dynamic visual vertical during sinusoidal linear motion.
Pomante, A; Selen, L P J; Medendorp, W P
2017-10-01
The vestibular system provides information for spatial orientation. However, this information is ambiguous: because the otoliths sense the gravitoinertial force, they cannot distinguish gravitational and inertial components. As a consequence, prolonged linear acceleration of the head can be interpreted as tilt, referred to as the somatogravic effect. Previous modeling work suggests that the brain disambiguates the otolith signal according to the rules of Bayesian inference, combining noisy canal cues with the a priori assumption that prolonged linear accelerations are unlikely. Within this modeling framework the noise of the vestibular signals affects the dynamic characteristics of the tilt percept during linear whole-body motion. To test this prediction, we devised a novel paradigm to psychometrically characterize the dynamic visual vertical (as a proxy for the tilt percept) during passive sinusoidal linear motion along the interaural axis (0.33 Hz motion frequency, 1.75 m/s² peak acceleration, 80 cm displacement). While subjects (n = 10) kept fixation on a central body-fixed light, a line was briefly flashed (5 ms) at different phases of the motion, the orientation of which had to be judged relative to gravity. Consistent with the model's prediction, subjects showed a phase-dependent modulation of the dynamic visual vertical, with a subject-specific phase shift with respect to the imposed acceleration signal. The magnitude of this modulation was smaller than predicted, suggesting a contribution of nonvestibular signals to the dynamic visual vertical. Despite their dampening effect, our findings may point to a link between the noise components in the vestibular system and the characteristics of dynamic visual vertical. NEW & NOTEWORTHY A fundamental question in neuroscience is how the brain processes vestibular signals to infer the orientation of the body and objects in space. We show that, under sinusoidal linear motion, systematic error patterns appear in the disambiguation of linear acceleration and spatial orientation. We discuss the dynamics of these illusory percepts in terms of a dynamic Bayesian model that combines uncertainty in the vestibular signals with priors based on the natural statistics of head motion. Copyright © 2017 the American Physiological Society.
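A toy version of the Bayesian disambiguation described above can be written as a grid evaluation over tilt and acceleration, with a zero-mean prior penalizing sustained accelerations; all parameter values and the sign convention are illustrative, not the authors' fitted model.

```python
# Grid-based MAP estimate of tilt from a lateral otolith (GIF) measurement,
# assuming a zero-mean Gaussian prior on linear acceleration. Illustrative only.
import numpy as np

G = 9.81

def map_tilt(f_lateral, sigma_noise=0.5, sigma_accel=1.0):
    """Return the tilt (rad) maximizing the marginal posterior for f = g*sin(tilt) - a."""
    tilts = np.linspace(-np.pi / 4, np.pi / 4, 721)
    accels = np.linspace(-4.0, 4.0, 801)
    T, A = np.meshgrid(tilts, accels, indexing="ij")
    loglik = -((f_lateral - (G * np.sin(T) - A)) ** 2) / (2 * sigma_noise ** 2)
    logprior = -(A ** 2) / (2 * sigma_accel ** 2)
    post = np.exp(loglik + logprior).sum(axis=1)   # marginalize over acceleration
    return tilts[np.argmax(post)]

# With this prior, part of a sustained 1.75 m/s^2 lateral acceleration is
# reinterpreted as tilt, i.e. the somatogravic effect discussed above.
print(np.degrees(map_tilt(1.75)))
```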
NASA Astrophysics Data System (ADS)
Leon, Angel Luis
2003-11-01
This thesis reports on the study of the acoustic properties of 18 theaters belonging to the Andalusian historical and architectural heritage. These theaters have undergone recent renovations to modernize and equip them appropriately. Coincident with this work, evaluations and qualification assessments with regard to their acoustic properties have been carried out for the individual theaters and for the group as a whole. Data for this purpose consisted of acoustic measurements in situ, both before and after the renovation. These results have been compared with computer simulations of sound fields. Variables and parameters considered include the following: reverberation time, rapid speech transmission index, background noise, definition, clarity, strength, lateral efficiency, interaural cross-correlation coefficient, volume/seat ratio, volume/audience-area ratio. Based on the measurements and analysis, general conclusions are given in regard to the acoustic performance of theaters whose typology and size are comparable to those that were used in this study (between 800 and 8000 cubic meters). It is noted that these properties are comparable to those of the majority of European theaters. The results and conclusions are presented so that they should be of interest to architectural acoustics practitioners and to architects who are involved in the planning of renovation projects for theaters. Thesis advisors: Juan J. Sendra and Jaime Navarro. Copies of this thesis written in Spanish may be obtained by contacting the author, Angel L. Leon, E.T.S. de Arquitectura de Sevilla, Dpto. de Construcciones Arquitectonicas I, Av. Reina Mercedes, 2, 41012 Sevilla, Spain. E-mail address: leonr@us.es
Binocular Coordination of the Human Vestibulo-Ocular Reflex during Off-axis Pitch Rotation
NASA Technical Reports Server (NTRS)
Wood, S. J.; Reschke, M. F.; Kaufman, G. D.; Black, F. O.; Paloski, W. H.
2006-01-01
Head movements in the sagittal pitch plane typically involve off-axis rotation requiring both vertical and horizontal vergence ocular reflexes to compensate for angular and translational motion relative to visual targets of interest. The purpose of this study was to compare passive pitch VOR responses during rotation about an Earth-vertical axis (canal only cues) with off-axis rotation (canal and otolith cues). Methods. Eleven human subjects were oscillated sinusoidally at 0.13, 0.3 and 0.56 Hz while lying left-side down with the interaural axis either aligned with the axis of rotation or offset by 50 cm. In a second set of measurements, twelve subjects were also tested during sinusoidally varying centrifugation over the same frequency range. The modulation of vertical and horizontal vergence ocular responses was measured with a binocular videography system. Results. Off-axis pitch rotation enhanced the vertical VOR at lower frequencies and enhanced the vergence VOR at higher frequencies. During sinusoidally varying centrifugation, the opposite trend was observed for vergence, with both vertical and vergence vestibulo-ocular reflexes being suppressed at the highest frequency. Discussion. These differential effects of off-axis rotation over the 0.13 to 0.56 Hz range are consistent with the hypothesis that otolith-ocular reflexes are segregated in part on the basis of stimulus frequency. At the lower frequencies, tilt otolith-ocular responses compensate for declining canal input. At higher frequencies, translational otolith-ocular reflexes compensate for declining visual contributions to the kinematic demands required for fixating near targets.
Winters, Bradley D; Jin, Shan-Xue; Ledford, Kenneth R; Golding, Nace L
2017-03-22
The principal neurons of the medial superior olive (MSO) encode cues for horizontal sound localization through comparisons of the relative timing of EPSPs. To understand how the timing and amplitude of EPSPs are maintained during propagation in the dendrites, we made dendritic and somatic whole-cell recordings from MSO principal neurons in brain slices from Mongolian gerbils. In somatic recordings, EPSP amplitudes were largely uniform following minimal stimulation of excitatory synapses at visualized locations along the dendrites. Similar results were obtained when excitatory synaptic transmission was eliminated in a low calcium solution and then restored at specific dendritic sites by pairing input stimulation and focal application of a higher calcium solution. We performed dual dendritic and somatic whole-cell recordings to measure spontaneous EPSPs using a dual-channel template-matching algorithm to separate out those events initiated at or distal to the dendritic recording location. Local dendritic spontaneous EPSP amplitudes increased sharply in the dendrite with distance from the soma (length constant, 53.6 μm), but their attenuation during propagation resulted in a uniform amplitude of ∼0.2 mV at the soma. The amplitude gradient of dendritic EPSPs was also apparent in responses to injections of identical simulated excitatory synaptic currents in the dendrites. Compartmental models support the view that these results extensively reflect the influence of dendritic cable properties. With relatively few excitatory axons innervating MSO neurons, the normalization of dendritic EPSPs at the soma would increase the importance of input timing versus location during the processing of interaural time difference cues in vivo. SIGNIFICANCE STATEMENT The neurons of the medial superior olive analyze cues for sound localization by detecting the coincidence of binaural excitatory synaptic inputs distributed along the dendrites. Previous studies have shown that dendritic voltages undergo severe attenuation as they propagate to the soma, potentially reducing the influence of distal inputs. However, using dendritic and somatic patch recordings, we found that dendritic EPSP amplitude increased with distance from the soma, compensating for dendritic attenuation and normalizing EPSP amplitude at the soma. Much of this normalization reflected the influence of dendritic morphology. As different combinations of presynaptic axons may be active during consecutive cycles of sound stimuli, somatic EPSP normalization renders spike initiation more sensitive to synapse timing than dendritic location. Copyright © 2017 the authors 0270-6474/17/373138-12$15.00/0.
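The dendritic amplitude gradient quoted above (length constant ≈ 54 μm) is the kind of quantity obtained by fitting an exponential to local EPSP amplitude versus distance from the soma; the sketch below uses placeholder numbers, not the recorded data.

```python
# Exponential fit of local EPSP amplitude vs. distance from the soma.
# The distance/amplitude arrays are hypothetical placeholders.
import numpy as np
from scipy.optimize import curve_fit

def epsp_amplitude(distance_um, a0, lam_um):
    return a0 * np.exp(distance_um / lam_um)      # amplitude grows with distance

distance_um = np.array([5.0, 20.0, 40.0, 60.0, 80.0])
amplitude_mv = np.array([0.25, 0.33, 0.52, 0.78, 1.10])

(a0, lam_um), _ = curve_fit(epsp_amplitude, distance_um, amplitude_mv, p0=(0.2, 50.0))
print(f"amplitude length constant ~ {lam_um:.1f} um")
```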
Wada, Yoshiro; Nishiike, Suetaka; Kitahara, Tadashi; Yamanaka, Toshiaki; Imai, Takao; Ito, Taeko; Sato, Go; Matsuda, Kazunori; Kitamura, Yoshiaki; Takeda, Noriaki
2016-11-01
After repeated snowboard exercises in the virtual reality (VR) world with increasing time lags in trials 3-8, it is suggested that adaptation to repeated visual-vestibulosomatosensory conflict in the VR world improved dynamic posture control and motor performance in the real world without the development of motion sickness. VR technology was used to examine the effects of repeated snowboard exercise in the VR world, with time lags between the visual scene and body rotation, on head stability and slalom run performance during exercise in healthy subjects. Forty-two healthy young subjects participated in the study. After trials 1 and 2 of snowboard exercise in the VR world without a time lag, trials 3-8 were conducted with time lags of 0.1, 0.2, 0.3, 0.4, 0.5, and 0.6 s, respectively, between board rotation and the computer-generated visual scene. Finally, trial 9 was conducted without a time lag. Head linear accelerations and subjective slalom run performance were evaluated. The standard deviations of head linear accelerations in the inter-aural direction were significantly increased in trial 8, with a time lag of 0.6 s, but significantly decreased in trial 9 without a time lag, compared with those in trial 2 without a time lag. The subjective scores of slalom run performance were significantly decreased in trial 8, with a time lag of 0.6 s, but significantly increased in trial 9 without a time lag, compared with those in trial 2 without a time lag. Motion sickness was not induced in any subject.
On the temporal window of auditory-brain system in connection with subjective responses
NASA Astrophysics Data System (ADS)
Mouri, Kiminori
2003-08-01
The human auditory-brain system processes information extracted from the autocorrelation function (ACF) of the source signal and the interaural cross-correlation function (IACF) of the binaural sound signals, which are associated with the left and right cerebral hemispheres, respectively. The purpose of this dissertation is to determine the desirable temporal window (2T: integration interval) for the ACF and IACF mechanisms. For the ACF mechanism, the change of Φ(0), i.e., the power of the ACF, was associated with the change of loudness, and it is shown that the recommended temporal window is about 30(τe)min [s]. The value of (τe)min is the minimum effective duration of the running ACF of the source signal. It is worth noting from the EEG experiment that the most preferred delay time of the first reflection is determined by the piece of the source signal exhibiting (τe)min. For the IACF mechanism, the temporal window is determined as follows: the measured range of τIACC corresponding to the subjective angle of a moving sound image depends on the temporal window. Here, the moving image was simulated by two loudspeakers located at +/-20° in the horizontal plane, reproducing amplitude-modulated band-limited noise alternately. It is found that the temporal window has a wide range of values, from 0.03 to 1 [s], for modulation frequencies below 0.2 Hz. Thesis advisor: Yoichi Ando. Copies of this thesis written in English can be obtained from Kiminori Mouri, 5-3-3-1110 Harayama-dai, Sakai city, Osaka 590-0132, Japan. E-mail address: km529756@aol.com
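The effective duration (τe) of the running ACF referred to above is conventionally taken as the lag at which the ACF envelope, in dB, decays to -10 dB; the sketch below approximates that with a straight-line fit to the initial decay, and the window and hop sizes are arbitrary choices rather than values from the thesis.

```python
# Rough running-ACF effective-duration (tau_e) estimate; the -10 dB criterion is
# conventional, the straight-line fit and window sizes are simplifications.
import numpy as np

def effective_duration(frame, fs, floor_db=-10.0):
    x = frame - frame.mean()
    r = np.correlate(x, x, mode="full")[len(x) - 1:]
    r = r / (r[0] + 1e-12)
    db = 10.0 * np.log10(np.maximum(np.abs(r), 1e-6))
    lags = np.arange(len(r)) / fs
    keep = db >= floor_db                       # fit only the initial decay region
    slope, intercept = np.polyfit(lags[keep], db[keep], 1)
    return (floor_db - intercept) / slope       # lag (s) where the fit reaches -10 dB

def running_tau_e(x, fs, win_s=2.0, hop_s=0.1):
    win, hop = int(win_s * fs), int(hop_s * fs)
    return [effective_duration(x[i:i + win], fs) for i in range(0, len(x) - win, hop)]

# (tau_e)_min is then simply min(running_tau_e(signal, fs)).
```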
Compound gravity receptor polarization vectors evidenced by linear vestibular evoked potentials
NASA Technical Reports Server (NTRS)
Jones, S. M.; Jones, T. A.; Bell, P. L.; Taylor, M. J.
2001-01-01
The utricle and saccule are gravity receptor organs of the vestibular system. These receptors rely on a high-density otoconial membrane to detect linear acceleration and the position of the cranium relative to Earth's gravitational vector. The linear vestibular evoked potential (VsEP) has been shown to be an effective non-invasive functional test specifically for otoconial gravity receptors (Jones et al., 1999). Moreover, there is some evidence that the VsEP can be used to independently test utricular and saccular function (Taylor et al., 1997; Jones et al., 1998). Here we characterize compound macular polarization vectors for the utricle and saccule in hatchling chickens. Pulsed linear acceleration stimuli were presented in two axes, the dorsoventral (DV, +/- Z axis) to isolate the saccule, and the interaural (IA, +/- Y axis) to isolate the utricle. Traditional signal averaging was used to resolve responses recorded from the surface of the skull. Latency and amplitude of eighth nerve components of the linear VsEP were measured. Gravity receptor responses exhibited clear preferences for one stimulus direction in each axis. With respect to each utricular macula, lateral translation in the IA axis produced maximum ipsilateral response amplitudes with substantially greater amplitude intensity (AI) slopes than medially directed movement. Downward caudal motions in the DV axis produced substantially larger response amplitudes and AI slopes. The results show that the macula lagena does not contribute to the VsEP compound polarization vectors of the sacculus and utricle. The findings suggest further that preferred compound vectors for the utricle depend on the pars externa (i.e. lateral hair cell field) whereas for the saccule they depend on pars interna (i.e. superior hair cell fields). These data provide evidence that maculae saccule and utricle can be selectively evaluated using the linear VsEP.
Multivariate Analyses of Balance Test Performance, Vestibular Thresholds, and Age
Karmali, Faisal; Bermúdez Rey, María Carolina; Clark, Torin K.; Wang, Wei; Merfeld, Daniel M.
2017-01-01
We previously published vestibular perceptual thresholds and performance in the Modified Romberg Test of Standing Balance in 105 healthy humans ranging from ages 18 to 80 (1). Self-motion thresholds in the dark included roll tilt about an earth-horizontal axis at 0.2 and 1 Hz, yaw rotation about an earth-vertical axis at 1 Hz, y-translation (interaural/lateral) at 1 Hz, and z-translation (vertical) at 1 Hz. In this study, we focus on multiple variable analyses not reported in the earlier study. Specifically, we investigate correlations (1) among the five thresholds measured and (2) between thresholds, age, and the chance of failing condition 4 of the balance test, which increases vestibular reliance by having subjects stand on foam with eyes closed. We found moderate correlations (0.30–0.51) between vestibular thresholds for different motions, both before and after using our published aging regression to remove age effects. We found that lower or higher thresholds across all threshold measures are an individual trait that account for about 60% of the variation in the population. This can be further distributed into two components with about 20% of the variation explained by aging and 40% of variation explained by a single principal component that includes similar contributions from all threshold measures. When only roll tilt 0.2 Hz thresholds and age were analyzed together, we found that the chance of failing condition 4 depends significantly on both (p = 0.006 and p = 0.013, respectively). An analysis incorporating more variables found that the chance of failing condition 4 depended significantly only on roll tilt 0.2 Hz thresholds (p = 0.046) and not age (p = 0.10), sex nor any of the other four threshold measures, suggesting that some of the age effect might be captured by the fact that vestibular thresholds increase with age. For example, at 60 years of age, the chance of failing is roughly 5% for the lowest roll tilt thresholds in our population, but this increases to 80% for the highest roll tilt thresholds. These findings demonstrate the importance of roll tilt vestibular cues for balance, even in individuals reporting no vestibular symptoms and with no evidence of vestibular dysfunction. PMID:29167656
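The relationship reported above (chance of failing balance condition 4 as a function of roll tilt 0.2 Hz threshold and age) corresponds to a logistic regression; the sketch below uses simulated, hypothetical data purely to illustrate the model form, not the study's dataset or coefficients.

```python
# Logistic regression of balance-test failure on log roll-tilt threshold and age.
# The simulated data and coefficients are hypothetical placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 105
age = rng.uniform(18, 80, n)
roll_thresh = np.exp(0.02 * (age - 18) + rng.normal(0.0, 0.4, n))   # rises with age
p_fail = 1.0 / (1.0 + np.exp(-(-2.0 + 2.5 * np.log(roll_thresh))))
fail = rng.binomial(1, p_fail)

X = sm.add_constant(np.column_stack([np.log(roll_thresh), age]))
fit = sm.Logit(fail, X).fit(disp=False)
print(fit.params, fit.pvalues)   # threshold carries the effect; age adds little here
```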
Fagerson, M H; Barmack, N H
1995-06-01
1. Because the nucleus reticularis gigantocellularis (NRGc) receives a substantial descending projection from the caudal vestibular nuclei, we used extracellular single-unit recording combined with natural vestibular stimulation to examine the possible peripheral origins of the vestibularly modulated activity of caudal NRGc neurons located within 500 microns of the midline. Chloralose-urethan anesthetized rabbits were stimulated with an exponential "step" and/or static head-tilt stimulus, as well as sinusoidal rotation about the longitudinal or interaural axes providing various combinations of roll or pitch, respectively. Recording sites were reconstructed from electrolytic lesions confirmed histologically. 2. More than 85% of the 151 neurons, in the medial aspect of the caudal NRGc, responded to vertical vestibular stimulation. Ninety-six percent of these responded to rotation onto the contralateral side (beta responses). Only a few also responded to horizontal stimulation. Seventy-eight percent of the neurons that responded to vestibular stimulation responded during static roll-tilt. One-half of these neurons also responded transiently to the change in head position during exponential "step" stimulation, suggesting input mediated by otolith and semicircular canal receptors or tonic-phasic otolith neurons. 3. Seventy-five percent of the responsive neurons had a "null plane." The planes of stimulation resulting in maximal responses, for cells that responded to static stimulation, were distributed throughout 150 degrees in both roll and pitch quadrants. Five of these cells responded only transiently during exponential "step" stimulation and responded maximally when stimulated in the plane of one of the vertical semicircular canals. 4. The phase of the response of the 25% of medial NRGc neurons that lacked "null planes" gradually shifted approximately 180 degrees during sinusoidal vestibular stimulation as the plane of stimulation was shifted about the vertical axis. These neurons likely received convergent input with differing spatial and temporal properties. 5. The activity of neurons in the medial aspect of the caudal NRGc of rabbits was modulated by both otolithic macular and vertical semicircular canal receptor stimulation. This vestibular information may be important for controlling the intensity of the muscle activity in muscles such as neck muscles where the load on the muscle is affected by the position of the head with respect to gravity. Some of these neurons may also shift muscle function from an agonist to an antagonist as the direction of head tilt changes.
Gifford, René H.; Dorman, Michael F.; Skarzynski, Henryk; Lorens, Artur; Polak, Marek; Driscoll, Colin L. W.; Roland, Peter; Buchman, Craig A.
2012-01-01
Objective The aim of this study was to assess the benefit of having preserved acoustic hearing in the implanted ear for speech recognition in complex listening environments. Design The current study included a within subjects, repeated-measures design including 21 English speaking and 17 Polish speaking cochlear implant recipients with preserved acoustic hearing in the implanted ear. The patients were implanted with electrodes that varied in insertion depth from 10 to 31 mm. Mean preoperative low-frequency thresholds (average of 125, 250 and 500 Hz) in the implanted ear were 39.3 and 23.4 dB HL for the English- and Polish-speaking participants, respectively. In one condition, speech perception was assessed in an 8-loudspeaker environment in which the speech signals were presented from one loudspeaker and restaurant noise was presented from all loudspeakers. In another condition, the signals were presented in a simulation of a reverberant environment with a reverberation time of 0.6 sec. The response measures included speech reception thresholds (SRTs) and percent correct sentence understanding for two test conditions: cochlear implant (CI) plus low-frequency hearing in the contralateral ear (bimodal condition) and CI plus low-frequency hearing in both ears (best aided condition). A subset of 6 English-speaking listeners were also assessed on measures of interaural time difference (ITD) thresholds for a 250-Hz signal. Results Small, but significant, improvements in performance (1.7 – 2.1 dB and 6 – 10 percentage points) were found for the best-aided condition vs. the bimodal condition. Postoperative thresholds in the implanted ear were correlated with the degree of EAS benefit for speech recognition in diffuse noise. There was no reliable relationship among measures of audiometric threshold in the implanted ear nor elevation in threshold following surgery and improvement in speech understanding in reverberation. There was a significant correlation between ITD threshold at 250 Hz and EAS-related benefit for the adaptive SRT. Conclusions Our results suggest that (i) preserved low-frequency hearing improves speech understanding for CI recipients (ii) testing in complex listening environments, in which binaural timing cues differ for signal and noise, may best demonstrate the value of having two ears with low-frequency acoustic hearing and (iii) preservation of binaural timing cues, albeit poorer than observed for individuals with normal hearing, is possible following unilateral cochlear implantation with hearing preservation and is associated with EAS benefit. Our results demonstrate significant communicative benefit for hearing preservation in the implanted ear and provide support for the expansion of cochlear implant criteria to include individuals with low-frequency thresholds in even the normal to near-normal range. PMID:23446225
Brenneman, Lauren; Cash, Elizabeth; Chermak, Gail D; Guenette, Linda; Masters, Gay; Musiek, Frank E; Brown, Mallory; Ceruti, Julianne; Fitzegerald, Krista; Geissler, Kristin; Gonzalez, Jennifer; Weihing, Jeffrey
2017-09-01
Pediatric central auditory processing disorder (CAPD) is frequently comorbid with other childhood disorders. However, few studies have examined the relationship between commonly used CAPD, language, and cognition tests within the same sample. The present study examined the relationship between diagnostic CAPD tests and "gold standard" measures of language and cognitive ability, the Clinical Evaluation of Language Fundamentals (CELF) and the Wechsler Intelligence Scale for Children (WISC). A retrospective study. Twenty-seven patients referred for CAPD testing who scored average or better on the CELF and low average or better on the WISC were initially included. Seven children who scored below the CELF and/or WISC inclusion criteria were then added to the dataset for a second analysis, yielding a sample size of 34. Participants were administered a CAPD battery that included at least the following three CAPD tests: Frequency Patterns (FP), Dichotic Digits (DD), and Competing Sentences (CS). In addition, they were administered the CELF and WISC. Relationships between scores on CAPD, language (CELF), and cognition (WISC) tests were examined using correlation analysis. DD and FP showed significant correlations with Full Scale Intelligence Quotient, and the DD left ear and the DD interaural difference measures both showed significant correlations with working memory. However, ∼80% or more of the variance in these CAPD tests was unexplained by language and cognition measures. Language and cognition measures were more strongly correlated with each other than were the CAPD tests with any CELF or WISC scale. Additional correlations with the CAPD tests were revealed when patients who scored in the mild-moderate deficit range on the CELF and/or in the borderline low intellectual functioning range on the WISC were included in the analysis. While both the DD and FP tests showed significant correlations with one or more cognition measures, the majority of the variance in these CAPD measures went unexplained by cognition. Unlike DD and FP, the CS test was not correlated with cognition. Additionally, language measures were not significantly correlated with any of the CAPD tests. Our findings emphasize that the outcomes and interpretation of results vary as a function of the subject inclusion criteria that are applied for the CELF and WISC. Including participants with poorer cognition and/or language scores increased the number of significant correlations observed. For this reason, it is important that studies investigating the relationship between CAPD and other domains or disorders report the specific inclusion criteria used for all tests. American Academy of Audiology
Parabrachial nucleus neuronal responses to off-vertical axis rotation in macaques
McCandless, Cyrus H.; Balaban, Carey D.
2010-01-01
The caudal aspect of the parabrachial nucleus (PBN) contains neurons responsive to whole body, periodic rotational stimulation in alert monkeys. This study characterizes the angular and linear motion-sensitive response properties of PBN unit responses during off-vertical axis rotation (OVAR) and position trapezoid stimulation. The OVAR responses displayed a constant firing component which varied from the firing rate at rest. Nearly two-thirds of the units also modulated their discharges with respect to head orientation (re: gravity) during constant velocity OVAR stimulation. The modulated response magnitudes were equal during ipsilateral and contralateral OVARs, indicative of a one-dimensional accelerometer. These response orientations during OVAR divided the units into three spatially tuned populations, with peak modulation responses centered in the ipsilateral ear down, contralateral anterior semicircular canal down, and occiput down orientations. Because the orientation of the OVAR modulation response was opposite in polarity to the orientation of the static tilt component of responses to position trapezoids for the majority of units, the linear acceleration responses were divided into colinear dynamic linear and static tilt components. The orientations of these unit responses formed two distinct population response axes: (1) units with an interaural linear response axis and (2) units with an ipsilateral anterior semicircular canal-contralateral posterior semicircular canal plane linear response axis. The angular rotation sensitivity of these units is in a head-vertical plane that either contains the linear acceleration response axis or is perpendicular to the linear acceleration axis. Hence, these units behave like head-based (‘strap-down’) inertial guidance sensors. Because the PBN contributes to sensory and interoceptive processing, it is suggested that vestibulo-recipient caudal PBN units may detect potentially dangerous anomalies in control of postural stability during locomotion. In particular, these signals may contribute to the range of affective and emotional responses that include panic associated with falling, malaise associated with motion sickness and mal-de-debarquement, and comorbid balance and anxiety disorders. PMID:20039027
NASA Technical Reports Server (NTRS)
Merfeld, D. M.; Paloski, W. H. (Principal Investigator)
1996-01-01
The vestibulo-ocular reflexes (VOR) are determined not only by angular acceleration, but also by the presence of gravity and linear acceleration. This phenomenon was studied by measuring three-dimensional nystagmic eye movements, with implanted search coils, in four male squirrel monkeys. Monkeys were rotated in the dark at 200 degrees/s, centrally or 79 cm off-axis, with the axis of rotation always aligned with gravity and the spinal axis of the upright monkeys. The monkey's position relative to the centripetal acceleration (facing center or back to center) had a dramatic influence on the VOR. These studies show that a torsional response was always elicited that acted to shift the axis of eye rotation toward alignment with gravito-inertial force. On the other hand, a slow phase downward vertical response usually existed, which shifted the axis of eye rotation away from the gravito-inertial force. These findings were consistent across all monkeys. In another set of tests, the same monkeys were rapidly tilted about their interaural (pitch) axis. Tilt orientations of 45 degrees and 90 degrees were maintained for 1 min. Other than a compensatory angular VOR during the rotation, no consistent eye velocity response was ever observed during or following the tilt. The absence of any response following tilt proves that the observed torsional and vertical responses were not a positional nystagmus. Model simulations qualitatively predict all components of these eccentric rotation and tilt responses. These simulations support the conclusion that the VOR during eccentric rotation may consist of two components: a linear VOR and a rotational VOR. The model predicts a slow phase downward, vertical, linear VOR during eccentric rotation even though there was never a change in the force aligned with monkey's spinal (Z) axis. The model also predicts the torsional components of the response that shift the rotation axis of the angular VOR toward alignment with gravito-inertial force.
Predicting the Overall Spatial Quality of Automotive Audio Systems
NASA Astrophysics Data System (ADS)
Koya, Daisuke
The spatial quality of automotive audio systems is often compromised due to their unideal listening environments. Automotive audio systems need to be developed quickly due to industry demands. A suitable perceptual model could evaluate the spatial quality of automotive audio systems with similar reliability to formal listening tests but take less time. Such a model is developed in this research project by adapting an existing model of spatial quality for automotive audio use. The requirements for the adaptation were investigated in a literature review. A perceptual model called QESTRAL was reviewed, which predicts the overall spatial quality of domestic multichannel audio systems. It was determined that automotive audio systems are likely to be impaired in terms of the spatial attributes that were not considered in developing the QESTRAL model, but metrics are available that might predict these attributes. To establish whether the QESTRAL model in its current form can accurately predict the overall spatial quality of automotive audio systems, MUSHRA listening tests using headphone auralisation with head tracking were conducted to collect results to be compared against predictions by the model. Based on guideline criteria, the model in its current form could not accurately predict the overall spatial quality of automotive audio systems. To improve prediction performance, the QESTRAL model was recalibrated and modified using existing metrics of the model, those that were proposed from the literature review, and newly developed metrics. The most important metrics for predicting the overall spatial quality of automotive audio systems included those that were interaural cross-correlation (IACC) based, relate to localisation of the frontal audio scene, and account for the perceived scene width in front of the listener. Modifying the model for automotive audio systems did not invalidate its use for domestic audio systems. The resulting model predicts the overall spatial quality of 2- and 5-channel automotive audio systems with a cross-validation performance of R² = 0.85 and root-mean-square error (RMSE) = 11.03%.
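The prediction-performance figures quoted above (R² and RMSE between predicted and listener-rated overall spatial quality) follow the usual definitions; a minimal helper is sketched below, with the caveat that the thesis computes them under cross-validation rather than on a single fit.

```python
# Standard R^2 and RMSE between model predictions and listener ratings.
import numpy as np

def r2_rmse(predicted, rated):
    predicted = np.asarray(predicted, dtype=float)
    rated = np.asarray(rated, dtype=float)
    resid = rated - predicted
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((rated - rated.mean()) ** 2)
    rmse = np.sqrt(np.mean(resid ** 2))
    return r2, rmse

# With ratings on a 0-100 scale, an RMSE of ~11 corresponds to the 11.03% figure.
```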
Frequency tagging to track the neural processing of contrast in fast, continuous sound sequences.
Nozaradan, Sylvie; Mouraux, André; Cousineau, Marion
2017-07-01
The human auditory system presents a remarkable ability to detect rapid changes in fast, continuous acoustic sequences, as best illustrated in speech and music. However, the neural processing of rapid auditory contrast remains largely unclear, probably due to the lack of methods to objectively dissociate the response components specifically related to the contrast from the other components in response to the sequence of fast continuous sounds. To overcome this issue, we tested a novel use of the frequency-tagging approach allowing contrast-specific neural responses to be tracked based on their expected frequencies. The EEG was recorded while participants listened to 40-s sequences of sounds presented at 8 Hz. A tone or interaural time contrast was embedded every fifth sound (AAAAB), such that a response observed in the EEG at exactly 8 Hz/5 (1.6 Hz) or harmonics should be the signature of contrast processing by neural populations. Contrast-related responses were successfully identified, even in the case of very fine contrasts. Moreover, analysis of the time course of the responses revealed a stable amplitude over repetitions of the AAAAB patterns in the sequence, except for the response to perceptually salient contrasts that showed a buildup and decay across repetitions of the sounds. Overall, this new combination of frequency-tagging with an oddball design provides a valuable complement to the classic, transient, evoked potentials approach, especially in the context of rapid auditory information. Specifically, we provide objective evidence on the neural processing of contrast embedded in fast, continuous sound sequences. NEW & NOTEWORTHY Recent theories suggest that the basis of neurodevelopmental auditory disorders such as dyslexia might be an impaired processing of fast auditory changes, highlighting how the encoding of rapid acoustic information is critical for auditory communication. Here, we present a novel electrophysiological approach to capture neural markers of contrasts in fast, continuous tone sequences in humans. Contrast-specific responses were successfully identified, even for very fine contrasts, providing direct insight into the encoding of rapid auditory information. Copyright © 2017 the American Physiological Society.
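The frequency-tagging readout described above amounts to taking the spectrum of the EEG over the 40-s sequence and measuring the amplitude at 1.6 Hz and its harmonics relative to neighbouring bins; the noise-bin window and single-channel handling below are illustrative choices, not the authors' analysis pipeline.

```python
# Frequency-tagged amplitude at the contrast frequency (8 Hz / 5 = 1.6 Hz) and
# harmonics, corrected by the mean of surrounding noise bins. Illustrative only.
import numpy as np

def tagged_amplitudes(eeg, fs, freqs=(1.6, 3.2, 4.8, 6.4), n_noise=5):
    spectrum = np.abs(np.fft.rfft(eeg)) / len(eeg)
    bin_hz = fs / len(eeg)
    out = {}
    for f in freqs:
        k = int(round(f / bin_hz))
        noise = np.r_[spectrum[k - n_noise - 1:k - 1], spectrum[k + 2:k + n_noise + 2]]
        out[f] = spectrum[k] - noise.mean()        # signal minus local noise floor
    return out
```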
Gravito-Inertial Force Resolution in Perception of Synchronized Tilt and Translation
NASA Technical Reports Server (NTRS)
Wood, Scott J.; Holly, Jan; Zhang, Guen-Lu
2011-01-01
Natural movements in the sagittal plane involve pitch tilt relative to gravity combined with translation motion. The Gravito-Inertial Force (GIF) resolution hypothesis states that the resultant force on the body is perceptually resolved into tilt and translation consistently with the laws of physics. The purpose of this study was to test this hypothesis for human perception during combined tilt and translation motion. EXPERIMENTAL METHODS: Twelve subjects provided verbal reports during 0.3 Hz motion in the dark with 4 types of tilt and/or translation motion: 1) pitch tilt about an interaural axis at +/-10deg or +/-20deg, 2) fore-aft translation with acceleration equivalent to +/-10deg or +/-20deg, 3) combined "in phase" tilt and translation motion resulting in acceleration equivalent to +/-20deg, and 4) "out of phase" tilt and translation motion that maintained the resultant gravito-inertial force aligned with the longitudinal body axis. The amplitude of perceived pitch tilt and translation at the head were obtained during separate trials. MODELING METHODS: Three-dimensional mathematical modeling was performed to test the GIF-resolution hypothesis using a dynamical model. The model encoded GIF-resolution using the standard vector equation, and used an internal model of motion parameters, including gravity. Differential equations conveyed time-varying predictions. The six motion profiles were tested, resulting in predicted perceived amplitude of tilt and translation for each. RESULTS: The modeling results exhibited the same pattern as the experimental results. Most importantly, both modeling and experimental results showed greater perceived tilt during the "in phase" profile than the "out of phase" profile, and greater perceived tilt during combined "in phase" motion than during pure tilt of the same amplitude. However, the model did not predict as much perceived translation as reported by subjects during pure tilt. CONCLUSION: Human perception is consistent with the GIF-resolution hypothesis even when the gravito-inertial force vector remains aligned with the body during periodic motion. Perception is also consistent with GIF-resolution in the opposite condition, when the gravito-inertial force vector angle is enhanced by synchronized tilt and translation.
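The bookkeeping behind the "in phase" and "out of phase" profiles above follows directly from the standard vector equation for the gravito-inertial force, f = g - a; the sketch below works in the sagittal plane with simplified sign conventions and the 0.3 Hz, 20-degree-equivalent amplitudes quoted in the abstract.

```python
# Naso-occipital shear component of the GIF for the two combined profiles.
# Sagittal-plane simplification with illustrative sign conventions.
import numpy as np

G = 9.81
t = np.linspace(0.0, 10.0, 3001)
theta = np.deg2rad(20.0) * np.sin(2 * np.pi * 0.3 * t)                 # pitch tilt
a_equiv = G * np.sin(np.deg2rad(20.0)) * np.sin(2 * np.pi * 0.3 * t)   # 20-deg-equivalent accel

def shear(tilt_rad, fore_aft_accel):
    """Head-frame naso-occipital component of f = g - a."""
    return G * np.sin(tilt_rad) - fore_aft_accel

in_phase = shear(theta, -a_equiv)    # tilt and translation shear add (~2x pure tilt)
out_of_phase = shear(theta, a_equiv) # translation cancels the shear; GIF stays body-aligned
print(np.abs(in_phase).max(), np.abs(out_of_phase).max())
```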
NASA Astrophysics Data System (ADS)
Wuyts, Floris; Clement, Gilles; Naumov, Ivan; Kornilova, Ludmila; Glukhikh, Dmitriy; Hallgren, Emma; MacDougall, Hamish; Migeotte, Pierre-Francois; Delière, Quentin; Weerts, Aurelie; Moore, Steven; Diedrich, Andre
In 13 cosmonauts, the vestibulo-autonomic reflex was investigated before and after 6-month-duration spaceflight. Cosmonauts were rotated on the mini-centrifuge VVIS, which is installed in Star City. Initially, this mini-centrifuge flew on board the Neurolab mission (STS-90), and served to generate intermittent artificial gravity during that mission, with apparently very positive effects on the preservation of orthostatic tolerance upon return to earth in the 4 crew members who were subjected to the rotations in space. The current experiments SPIN and GAZE-SPIN are control experiments to test the hypothesis that intermittent artificial gravity in space can serve as a countermeasure against several deleterious effects of microgravity. Additionally, the effect of microgravity on the gaze-holding system is studied as well. Cosmonauts returning from a long-duration stay in the International Space Station were tested on the VVIS (1 g centripetal interaural acceleration; consecutive right-ear-out anti-clockwise and left-ear-out clockwise measurement) on 5 different days. Two measurements were scheduled about one and a half months prior to launch and the remaining three immediately after their return from space (typically on R+2, R+4, R+9; R = return day from space). The ocular counter roll (OCR), as a measure of otolith function, was measured before, during and after rotation in the mini-centrifuge, using infrared video goggles. The perception of verticality was monitored using an ultrasound system. Gaze holding was tested before, during and after rotation. After the centrifugation part, the crew was installed on a tilt table and instrumented with cardiovascular recording equipment (ECG, continuous blood pressure monitoring, respiratory monitoring), as well as with impedance measurement devices to investigate fluid redistribution throughout the operational tilt test. To measure heart rate variability parameters, imposed breathing periods were included in the test protocol. The subjects were subjected to a passive tilt test of 60 degrees for 15 minutes. The results show that cosmonauts clearly have a statistically significantly reduced ocular counter roll during rotation upon return from space, when compared to the pre-flight condition, indicating a reduced sensitivity of the otolith system to gravito-inertial acceleration. None of the subjects fainted or even approached presyncope. However, the resistance in the calf, measured with the impedance method, showed significantly increased pooling in the lower limbs. Additionally, this was statistically significantly correlated (p=0.024) with a reduced otolith response, when comparing the vestibular and autonomic data for each subject. This result shows that the vestibulo-autonomic reflex is reduced after 6 months of spaceflight. When compared with Neurolab, the otolith response in the current group of crew members who were not subjected to in-flight centrifugation is significantly reduced, corroborating the hypothesis that in-flight artificial gravity may be of great importance to mitigate the deleterious effects of microgravity. Projects are funded by PRODEX-BELSPO, ESA, IBMP.
NASA Astrophysics Data System (ADS)
Zhang, Qiong; Peng, Cong; Lu, Yiming; Wang, Hao; Zhu, Kaiguang
2018-04-01
A novel technique is developed to level airborne geophysical data using principal component analysis based on flight-line differences. In this paper, flight-line differencing is introduced to enhance the features of the levelling error in airborne electromagnetic (AEM) data and to improve the correlation between pseudo tie lines. We therefore apply levelling to the flight-line difference data rather than directly to the original AEM data. Pseudo tie lines are selected distributed across the profile direction, avoiding anomalous regions. Since the levelling errors of the selected pseudo tie lines are highly correlated, principal component analysis is applied to extract the local levelling errors by low-order principal component reconstruction. Furthermore, the levelling errors of the original AEM data are obtained through inverse differencing after spatial interpolation. This levelling method requires neither flying tie lines nor designing a levelling fitting function. The effectiveness of the method is demonstrated by the levelling results for survey data, compared with the results from tie-line levelling and flight-line correlation levelling.
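A much-simplified sketch of the idea is shown below: difference adjacent flight lines, take the dominant (low-order) principal components of the difference grid as the levelling-error estimate, and integrate back to per-line corrections. It treats every cross-line column as a pseudo tie line and omits the anomaly-avoiding selection and spatial interpolation steps of the actual method.

```python
# Simplified flight-line-difference + PCA levelling sketch (not the authors'
# full algorithm: no pseudo-tie-line selection, no spatial interpolation).
import numpy as np

def level_by_pca(data, n_components=2):
    """data: one AEM channel gridded as (flight_lines, along_line_samples)."""
    diff = np.diff(data, axis=0)                        # flight-line differences
    u, s, vt = np.linalg.svd(diff, full_matrices=False)
    err_diff = (u[:, :n_components] * s[:n_components]) @ vt[:n_components]
    # inverse difference: accumulate error differences into per-line corrections,
    # taking the first line as the level reference
    err = np.vstack([np.zeros((1, data.shape[1])), np.cumsum(err_diff, axis=0)])
    return data - err
```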
Busettini, C; Miles, F A; Schwarz, U; Carl, J R
1994-01-01
Recent experiments on monkeys have indicated that the eye movements induced by brief translation of either the observer or the visual scene are a linear function of the inverse of the viewing distance. For the movements of the observer, the room was dark and responses were attributed to a translational vestibulo-ocular reflex (TVOR) that senses the motion through the otolith organs; for the movements of the scene, which elicit ocular following, the scene was projected and adjusted in size and speed so that the retinal stimulation was the same at all distances. The shared dependence on viewing distance was consistent with the hypothesis that the TVOR and ocular following are synergistic and share central pathways. The present experiments looked for such dependencies on viewing distance in human subjects. When briefly accelerated along the interaural axis in the dark, human subjects generated compensatory eye movements that were also a linear function of the inverse of the viewing distance to a previously fixated target. These responses, which were attributed to the TVOR, were somewhat weaker than those previously recorded from monkeys using similar methods. When human subjects faced a tangent screen onto which patterned images were projected, brief motion of those images evoked ocular following responses that showed statistically significant dependence on viewing distance only with low-speed stimuli (10 degrees/s). This dependence was at best weak and in the reverse direction of that seen with the TVOR, i.e., responses increased as viewing distance increased. We suggest that in generating an internal estimate of viewing distance subjects may have used a confounding cue in the ocular-following paradigm--the size of the projected scene--which was varied directly with the viewing distance in these experiments (in order to preserve the size of the retinal image). When movements of the subject were randomly interleaved with the movements of the scene--to encourage the expectation of ego-motion--the dependence of ocular following on viewing distance altered significantly: with higher speed stimuli (40 degrees/s) many responses (63%) now increased significantly as viewing distance decreased, though less vigorously than the TVOR. We suggest that the expectation of motion results in the subject placing greater weight on cues such as vergence and accommodation that provide veridical distance information in our experimental situation: cue selection is context specific.
Lateral Attenuation of Aircraft Flight Noise.
1985-03-01
levels with elevation angle. Comparisons of different models are made in terms of the differences in A-levels for a flyover with the observer directly...attenuation adjustment to be applied to the basic noise data is the same when applied to maximum levels (maximum A-levels, for example) or to integrated...attenuation values were applied to sets of one-third octave band spectra for different aircraft. The resulting differences in A-levels for these noise spectra
[Arginase Level in Suspended Red Blood Cells Stored for Different Times].
Fan, Li-Ping; Huang, Hao-Bo; Wei, Shi-Jin; Fu, Dan-Hui; Zeng, Feng; Huang, Qing-Hua; Hong, Jin-Quan
2015-10-01
To explore the effect of storage time on arginase level, and the possible source of arginase, in suspended red blood cells (RBC). The arginase and myeloperoxidase (MPO) levels in suspended RBC and control plasma were detected by ELISA. The free hemoglobin level in suspended RBC and control plasma was detected by a colorimetric method. The relationships between arginase, MPO, and free hemoglobin levels in suspended RBC were then analyzed. The arginase and free hemoglobin levels in suspended RBC were higher than those in control plasma, whereas the MPO level did not differ significantly between suspended RBC and control plasma. None of these levels increased with prolonged storage time. There was no significant correlation between arginase level and free hemoglobin level in suspended RBC across storage times (r = 0.03), but arginase level correlated positively with MPO level (r = 0.76). The arginase level in suspended RBC is significantly elevated regardless of storage time, but does not increase further as storage time is prolonged. The most likely source of arginase in suspended RBC is residual white blood cells, especially neutrophils.
Curtin, Stephen E.; Staley, Andrew W.; Andreasen, David C.
2016-01-01
Key Results This report presents potentiometric-surface maps of the Aquia and Magothy aquifers and the Upper Patapsco, Lower Patapsco, and Patuxent aquifer systems using water levels measured during September 2015. Water-level difference maps are also presented for these aquifers. The water-level differences in the Aquia aquifer are shown using groundwater-level data from 1982 and 2015, while the water-level differences are shown for the Magothy aquifer using data from 1975 and 2015. Water-level difference maps for both the Upper Patapsco and Lower Patapsco aquifer systems are shown using data from 1990 and 2015. The water-level differences in the Patuxent aquifer system are shown using groundwater-level data from 2007 and 2015. The potentiometric surface maps show water levels ranging from 53 feet above sea level to 164 feet below sea level in the Aquia aquifer, from 86 feet above sea level to 106 feet below sea level in the Magothy aquifer, from 115 feet above sea level to 115 feet below sea level in the Upper Patapsco aquifer system, from 106 feet above sea level to 194 feet below sea level in the Lower Patapsco aquifer system, and from 165 feet above sea level to 171 feet below sea level in the Patuxent aquifer system. Water levels have declined by as much as 116 feet in the Aquia aquifer since 1982, 99 feet in the Magothy aquifer since 1975, 66 and 83 feet in the Upper Patapsco and Lower Patapsco aquifer systems, respectively, since 1990, and 80 feet in the Patuxent aquifer system since 2007.
Toppe, Jogeir; Albrektsen, Sissel; Hope, Britt; Aksnes, Anders
2007-03-01
The chemical composition, content of minerals and the profiles of amino acids and fatty acids were analyzed in fish bones from eight different species of fish. Fish bones varied significantly in chemical composition. The main difference was lipid content ranging from 23 g/kg in cod (Gadus morhua) to 509 g/kg in mackerel (Scomber scombrus). In general fatty fish species showed higher lipid levels in the bones compared to lean fish species. Similarly, lower levels of protein and ash were observed in bones from fatty fish species. Protein levels differed from 363 g/kg lipid free dry matter (dm) to 568 g/kg lipid free dm with a concomitant inverse difference in ash content. Ash to protein ratio differed from 0.78 to 1.71 with the lowest level in fish that naturally have highest swimming and physical activity. Saithe (Pollachius virens) and salmon (Salmo salar) were found to be significantly different in the levels of lipid, protein and ash, and ash/protein ratio in the bones. Only small differences were observed in the level of amino acids although species specific differences were observed. The levels of Ca and P in lipid free fish bones were about the same in all species analyzed. Fatty acid profile differed in relation to total lipid levels in the fish bones, but some minor differences between fish species were observed.
Seismic response of reinforced concrete frames at different damage levels
NASA Astrophysics Data System (ADS)
Morales-González, Merangeli; Vidot-Vega, Aidcer L.
2017-03-01
Performance-based seismic engineering is focused on the definition of limit states to represent different levels of damage, which can be described by material strains, drifts, displacements or even changes in dissipating properties and stiffness of the structure. This study presents a research plan to evaluate the behavior of reinforced concrete (RC) moment resistant frames at different performance levels established by the ASCE 41-06 seismic rehabilitation code. Sixteen RC plane moment frames with different span-to-depth ratios and three 3D RC frames were analyzed to evaluate their seismic behavior at different damage levels established by the ASCE 41-06. For each span-to-depth ratio, four different beam longitudinal reinforcement steel ratios were used that varied from 0.85 to 2.5% for the 2D frames. Nonlinear time history analyses of the frames were performed using scaled ground motions. The impact of different span-to-depth and reinforcement ratios on the damage levels was evaluated. Material strains, rotations and seismic hysteretic energy changes at different damage levels were studied.
Practicality of performing medical procedures in chemical protective ensembles.
Garner, Alan; Laurence, Helen; Lee, Anna
2004-04-01
To determine whether certain life-saving medical procedures can be successfully performed while wearing different levels of personal protective equipment (PPE), and whether these procedures can be performed in a clinically useful time frame. We assessed the capability of eight medical personnel to perform airway maintenance and antidote administration procedures on manikins, in all four described levels of PPE. The levels are: Level A--a fully encapsulated chemically resistant suit; Level B--a chemically resistant suit, gloves and boots with a full-faced positive pressure supplied air respirator; Level C--a chemically resistant splash suit, boots and gloves with an air-purifying positive or negative pressure respirator; Level D--a work uniform. Times in seconds to inflate the lungs of the manikin with a bag-valve-mask, a laryngeal mask airway (LMA) and an endotracheal tube (ETT) were determined, as was the time to secure LMAs and ETTs with either tape or linen ties. Time to insert a cannula in a manikin was also determined. There was a significant difference in time taken to perform procedures in differing levels of personal protective equipment (F(21,72) = 1.75, P = 0.04). Significant differences were found in: time to lung inflation using an endotracheal tube (A vs. C mean difference and standard error 75.6 +/- 23.9 s, P = 0.03; A vs. D mean difference and standard error 78.6 +/- 23.9 s, P = 0.03); time to insert a cannula (A vs. D mean difference and standard error 63.6 +/- 11.1 s, P < 0.001; C vs. D mean difference and standard error 40.0 +/- 11.1 s, P = 0.01). A significantly greater time to complete procedures was documented in Level A PPE (fully encapsulated suits) compared with Levels C and D. There was, however, no significant difference in times between Level B and Level C. The common practice of equipping hospital and medical staff with only Level C protection should be re-evaluated.
Vocal economy in vocally trained actresses and untrained female subjects.
Master, Suely; Guzman, Marco; Dowdall, Jayme
2013-11-01
Vocally trained actresses are expected to have more vocal economy than nonactresses. Therefore, we hypothesize that there will be differences in the electroglottogram-based voice economy parameter quasi-output cost ratio (QOCR) between actresses and nonactresses. This difference should remain across different levels of intensity. A total of 30 actresses and 30 nonactresses were recruited for this study. Participants from both groups were required to sustain the vowels /a/, /i/, and /u/, in habitual, moderate, and high intensity levels. Acoustic variables such as sound pressure level (SPL), fundamental frequency (F0), and glottal contact quotient (CQ) were obtained. The QOCR was then calculated. There were no significant differences among the groups for QOCR. Positive correlations were observed for QOCR versus SPL and QOCR versus F0 in all intensity levels. Negative correlation was found between QOCR and CQ in all intensity levels. Considering the differences among intensity levels, from habitual to moderate and from moderate to loud, only the CQ did not differ significantly. The QOCR, SPL, and F0 presented significant differences throughout the different intensity levels. The QOCR did not reflect the level of vocal training when comparing trained and nontrained female subjects in the present study. Both groups demonstrated more vocal economy in moderate and high intensity levels owing to more voice output without an increase in glottal adduction. Copyright © 2013 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
Gender Differences in Personality across the Ten Aspects of the Big Five.
Weisberg, Yanna J; Deyoung, Colin G; Hirsh, Jacob B
2011-01-01
This paper investigates gender differences in personality traits, both at the level of the Big Five and at the sublevel of two aspects within each Big Five domain. Replicating previous findings, women reported higher Big Five Extraversion, Agreeableness, and Neuroticism scores than men. However, more extensive gender differences were found at the level of the aspects, with significant gender differences appearing in both aspects of every Big Five trait. For Extraversion, Openness, and Conscientiousness, the gender differences were found to diverge at the aspect level, rendering them either small or undetectable at the Big Five level. These findings clarify the nature of gender differences in personality and highlight the utility of measuring personality at the aspect level.
[A school-level longitudinal study of clinical performance examination scores].
Park, Jang Hee
2015-06-01
This school-level longitudinal study examined 7 years of clinical performance data to determine differences (effects) in students and annual changes within a school and between schools; examine how much their predictors (characteristics) influenced the variation in student performance; and calculate estimates of the schools' initial status and growth. A school-level longitudinal model was tested: level 1 (between students), level 2 (annual change within a school), and level 3 (between schools). The study sample comprised students who belonged to the CPX Consortium (n=5,283 for 2005~2008 and n=4,337 for 2009~2011). Despite a difference between evaluation domains, the performance outcomes were related to individual large-effect differences and small-effect school-level differences. Physical examination, clinical courtesy, and patient education were strongly influenced by the school effect, whereas patient-physician interaction was not affected much. Student scores are influenced by the school effect (differences), and the predictors explain the variation in differences, depending on the evaluation domain.
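A generic three-level growth formulation consistent with this description (students at level 1, annual change within a school at level 2, schools at level 3) can be written as follows; the notation is illustrative and the paper's exact specification may differ:

Level 1 (between students): $Y_{ijk} = \pi_{0jk} + e_{ijk}$
Level 2 (annual change within a school): $\pi_{0jk} = \beta_{00k} + \beta_{01k}\,\mathrm{year}_{jk} + r_{0jk}$
Level 3 (between schools): $\beta_{00k} = \gamma_{000} + u_{00k}, \quad \beta_{01k} = \gamma_{010} + u_{01k}$

Here $\gamma_{000}$ corresponds to the schools' average initial status, $\gamma_{010}$ to their average annual growth, and $u_{00k}$, $u_{01k}$ to the between-school differences the study estimates.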
Tokuda, Isao T; Shimamura, Ryo
2017-08-01
As an alternative factor to produce asymmetry between left and right vocal folds, the present study focuses on level difference, which is defined as the distance between the upper surfaces of the bilateral vocal folds in the inferior-superior direction. Physical models of the vocal folds were utilized to study the effect of the level difference on the phonation threshold pressure. A vocal tract model was also attached to the vocal fold model. For two types of different models, experiments revealed that the phonation threshold pressure tended to increase as the level difference was extended. Based upon a small amplitude approximation of the vocal fold oscillations, a theoretical formula was derived for the phonation threshold pressure. This theory agrees with the experiments, especially when the phase difference between the left and right vocal folds is not extensive. Furthermore, an asymmetric two-mass model was simulated with a level difference to validate the experiments as well as the theory. The primary conclusion is that the level difference has a potential effect on voice production especially for patients with an extended level of vertical difference in the vocal folds, which might be taken into account for the diagnosis of voice disorders.
Visser, Lennart; Korthagen, Fred A. J.; Schoonenboom, Judith
2018-01-01
Within the field of procrastination, much research has been conducted on factors that have an influence on academic procrastination. Less is known about how such factors may differ for various students. In addition, not much is known about differences in the process of how factors influence students’ learning and what creates differences in procrastination behavior between students with different levels of academic procrastination. In this study learning characteristics and the self-regulation behavior of three groups of students with different levels of academic procrastination were compared. The rationale behind this was that certain learning characteristics and self-regulation behaviors may play out differently in students with different levels of academic procrastination. Participants were first-year students (N = 22) with different levels of academic procrastination enrolled in an elementary teacher education program. The selection of the participants into three groups of students (low procrastination, n = 8; average procrastination, n = 8; high procrastination, n = 6) was based on their scores on a questionnaire measuring the students’ levels of academic procrastination. From semi-structured interviews, six themes emerged that describe how students in the three groups deal with factors that influence the students’ learning: degree program choice, getting started with study activities, engagement in study activities, ways of reacting to failure, view of oneself, and study results. This study shows the importance of looking at differences in how students deal with certain factors possibly negatively influencing their learning. Within the group of students with average and high levels of academic procrastination, factors influencing their learning are regularly present. These factors lead to procrastination behavior among students with high levels of academic procrastination, but this seems not the case among students with an average level of academic procrastination. PMID:29892248
Goleman's Leadership styles at different hierarchical levels in medical education.
Saxena, Anurag; Desanghere, Loni; Stobart, Kent; Walker, Keith
2017-09-19
With current emphasis on leadership in medicine, this study explores Goleman's leadership styles of medical education leaders at different hierarchical levels and gains insight into factors that contribute to the appropriateness of practices. Forty-two leaders (28 first-level with limited formal authority, eight middle-level with wider program responsibility and six senior-level with higher organizational authority) rank-ordered their preferred Goleman's styles and provided comments. Eight additional senior leaders were interviewed in depth. Differences in ranked styles within groups were determined by Friedman tests and Wilcoxon tests. Based upon style descriptions, confirmatory template analysis was used to identify Goleman's styles for each interviewed participant. Content analysis was used to identify themes that affected leadership styles. There were differences in the repertoire and preferred styles at different leadership levels. As a group, first-level leaders preferred the democratic style and middle-level leaders used coaching, while the senior leaders did not have one preferred style and used multiple styles. Women and men preferred the democratic and coaching styles, respectively. The varied use of styles reflected leadership conceptualizations, leader accountabilities, contextual adaptations, the situation and its evolution, leaders' awareness of how they themselves were situated, and personal preferences and discomfort with styles. The not uncommon use of pace-setting and commanding styles by the senior leaders who were interviewed was linked to working with physicians and delivering quickly on outcomes. Leaders at different levels in medical education draw from a repertoire of styles. Leadership development should incorporate learning of different leadership styles, especially at first- and mid-level positions.
Burger, J; Gaines, K F; Boring, C S; Stephens, W L; Snodgrass, J; Gochfeld, M
2001-10-01
Levels of contaminants in fish are of considerable interest because of potential effects on the fish themselves, as well as on other organisms that consume them. In this article we compare the mercury levels in muscle tissue of 11 fish species from the Savannah River, as well as selenium levels because of its known protective effect against mercury toxicity. We sampled fish from three stretches of the river: upstream, along, and downstream the Department of Energy's Savannah River Site, a former nuclear material production facility. We test the null hypothesis that there were no differences in mercury and selenium levels in fish tissue as a function of species, trophic level, and location along the river. There were significant interspecific differences in mercury levels, with bowfin (Amia calva) having the highest levels, followed by largemouth bass (Micropterus salmoides) and pickerel (Esox niger). Sunfish (Lepomis spp.) had the lowest levels of mercury. As expected, these differences generally reflected trophic levels. There were few significant locational differences in mercury levels, and existing differences were not great, presumably reflecting local movements of fish between the sites examined. Selenium and mercury concentrations were positively correlated only for bass, perch (Perca flavescens), and red-breasted sunfish (Lepomis auritus). Mercury levels were positively correlated with body mass of the fish for all species except American eel (Anguilla rostrata) and bluegill sunfish (L. macrochirus). The mercury and selenium levels in fish tissue from the Savannah River are similar to or lower than those reported in many other studies, and in most cases pose little risk to the fish themselves or to other aquatic consumers, although levels in bowfin and bass are sufficiently high to pose a potential threat to high-level consumers. Copyright 2001 Academic Press.
O'Malley, A James; Zaslavsky, Alan M; Hays, Ron D; Hepner, Kimberly A; Keller, San; Cleary, Paul D
2005-01-01
Objectives To estimate the associations among hospital-level scores from the Consumer Assessments of Healthcare Providers and Systems (CAHPS®) Hospital pilot survey within and across different services (surgery, obstetrics, medical), and to evaluate differences between hospital- and patient-level analyses. Data Source CAHPS Hospital pilot survey data provided by the Centers for Medicare and Medicaid Services. Study Design Responses to 33 questionnaire items were analyzed using patient- and hospital-level exploratory factor analytic (EFA) methods to identify both a patient-level and hospital-level composite structures for the CAHPS Hospital survey. The latter EFA was corrected for patient-level sampling variability using a hierarchical model. We compared results of these analyses with each other and to separate EFAs conducted at the service level. To quantify the similarity of assessments across services, we compared correlations of different composites within the same service with those of the same composite across different services. Data Collection Cross-sectional data were collected during the summer of 2003 via mail and telephone from 19,720 patients discharged from November 2002 through January 2003 from 132 hospitals in three states. Principal Findings Six factors provided the best description of inter-item covariation at the patient level. Analyses that assessed variability across both services and hospitals suggested that three dimensions provide a parsimonious summary of inter-item covariation at the hospital level. Hospital-level factor structures also differed across services; as much variation in quality reports was explained by service as by composite. Conclusions Variability of CAHPS scores across hospitals can be reported parsimoniously using a limited number of composites. There is at least as much distinct information in composite scores from different services as in different composite scores within each service. Because items cluster slightly differently in the different services, service-specific composites may be more informative when comparing patients in a given service across hospitals. When studying individual-level variability, a more differentiated structure is probably more appropriate. PMID:16316439
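As an illustration of the patient-level step only, a six-factor exploratory factor analysis of item responses can be sketched as follows (Python/scikit-learn; the response matrix is random placeholder data, the factor count reuses the abstract's six-factor result, and the hierarchical correction for patient-level sampling variability described in the study is not reproduced here).

import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical matrix of survey responses: rows = patients, columns = 33 items.
rng = np.random.default_rng(0)
responses = rng.integers(1, 5, size=(500, 33)).astype(float)

# Patient-level exploratory factor analysis with six factors,
# matching the number of factors retained at the patient level.
fa = FactorAnalysis(n_components=6, random_state=0)
fa.fit(responses)

# Loadings (items x factors) indicate which items cluster into composites.
loadings = fa.components_.T
print(loadings.shape)  # (33, 6)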
Dembo, Richard; Childs, Kristina; Belenko, Steven; Schmeidler, James; Wareham, Jennifer
2010-01-01
Gender and racial differences in infection rates for chlamydia and gonorrhea have been reported within community-based populations, but little is known of such differences within juvenile offending populations. Moreover, while research has demonstrated that certain individual-level and community-level factors affect risky behaviors associated with sexually transmitted disease (STD), less is known about how multi-level factors affect STD infection, particularly among delinquent populations. The present study investigated gender and racial differences in STD infection among a sample of 924 juvenile offenders. Generalized linear model regression analyses were conducted to examine the influence of individual-level factors such as age, offense history, and substance use and community-level factors such as concentrated disadvantage, ethnic heterogeneity, and family disruption on STD status. Results revealed significant racial and STD status differences across gender, as well as interaction effects for race and STD status for males only. Gender differences in individual-level and community-level predictors were also found. Implications of these findings for future research and public health policy are discussed. PMID:20700475
Sex differences in moral reasoning: response to Walker's (1984) conclusion that there are none.
Baumrind, D
1986-04-01
Data from the Family Socialization and Developmental Competence Project are used to probe Walker's conclusion that there are no sex differences in moral reasoning. Ordinal and nominal nonparametric statistics result in a complex but theoretically meaningful network of relationships among sex, educational level, and Kohlberg stage score level, with the presence and direction of sex differences in stage score level dependent on educational level. The effects on stage score level of educational level and working status are also shown to differ for men and women. Reasons are considered for not accepting Walker's dismissal of studies that use (a) a pre-1983 scoring manual, or (b) fail to control for education. The problems presented to Kohlberg's theory by the significant relationship between educational and stage score levels in the general population are discussed, particularly as these apply to the postconventional level of moral reasoning.
Corker, Katherine S; Donnellan, M Brent; Kim, Su Yeong; Schwartz, Seth J; Zamboanga, Byron L
2017-04-01
This research examined the magnitude of personality differences across different colleges and universities to understand (a) how much students at different colleges vary from one another and (b) whether there are site-level variables that can explain observed differences. Nearly 8,600 students at 30 colleges and universities completed a Big Five personality trait measure. Site-level information was obtained from the Integrated Postsecondary Education System database (U.S. Department of Education). Multilevel models revealed that each of the Big Five traits showed significant between-site variability, even after accounting for individual-level demographic differences. Some site-level variables (e.g., enrollment size, requiring letters of recommendation) explained between-site differences in traits, but many tests were not statistically significant. Student samples at different universities differed in terms of average levels of Big Five personality domains. This raises the possibility that personality differences may explain differences in research results obtained when studying students at different colleges and universities. Furthermore, results suggest that research that compares findings for only a few sites (e.g., much cross-cultural research) runs the risk of overgeneralizing differences between specific samples to broader group differences. These results underscore the value of multisite collaborative research efforts to enhance psychological research. © 2015 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
White, Warren B.; Tai, Chang-Kou; Holland, William R.
1990-01-01
The optimal interpolation method of Lorenc (1981) was used to conduct continuous assimilation of altimetric sea level differences from the simulated Geosat exact repeat mission (ERM) into a three-layer quasi-geostrophic eddy-resolving numerical ocean box model that simulates the statistics of mesoscale eddy activity in the western North Pacific. Assimilation was conducted continuously as the Geosat tracks appeared in simulated real time/space, with each track repeating every 17 days, but occurring at different times and locations within the 17-day period, as would have occurred in a realistic nowcast situation. This interpolation method was also used to conduct the assimilation of referenced altimetric sea level differences into the same model, performing the referencing of altimetric sea level differences by using the simulated sea level. The results of this dynamical interpolation procedure are compared with those of a statistical (i.e., optimum) interpolation procedure.
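For readers unfamiliar with the Lorenc-style optimal (statistical) interpolation update used for the assimilation, a minimal single-step sketch is given below (Python; the state size, observation operator, and covariance values are arbitrary placeholders, not the model's configuration).

import numpy as np

# Background (model forecast) state and its error covariance B.
xb = np.array([0.10, 0.05, -0.02])           # e.g., layer stream-function anomalies
B = np.diag([0.04, 0.04, 0.01])

# One along-track sea-level-difference observation with error variance R,
# observed through a linear operator H.
H = np.array([[1.0, -1.0, 0.0]])             # observes a difference of two state elements
y = np.array([0.12])
R = np.array([[0.01]])

# Optimal interpolation / BLUE update: xa = xb + K (y - H xb),
# with gain K = B H^T (H B H^T + R)^(-1).
innovation = y - H @ xb
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
xa = xb + (K @ innovation)

print("analysis state:", xa)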
Age-related differences in GABA levels are driven by bulk tissue changes.
Maes, Celine; Hermans, Lize; Pauwels, Lisa; Chalavi, Sima; Leunissen, Inge; Levin, Oron; Cuypers, Koen; Peeters, Ronald; Sunaert, Stefan; Mantini, Dante; Puts, Nicolaas A J; Edden, Richard A E; Swinnen, Stephan P
2018-05-02
Levels of GABA, the main inhibitory neurotransmitter in the brain, can be regionally quantified using magnetic resonance spectroscopy (MRS). Although GABA is crucial for efficient neuronal functioning, little is known about age-related differences in GABA levels and their relationship with age-related changes in brain structure. Here, we investigated the effect of age on GABA levels within the left sensorimotor cortex and the occipital cortex in a sample of 85 young and 85 older adults using the MEGA-PRESS sequence. Because the distribution of GABA varies across different brain tissues, various correction methods are available to account for this variation. Considering that these correction methods are highly dependent on the tissue composition of the voxel of interest, we examined differences in voxel composition between age groups and the impact of these various correction methods on the identification of age-related differences in GABA levels. Results indicated that, within both voxels of interest, older (as compared to young adults) exhibited smaller gray matter fraction accompanied by larger fraction of cerebrospinal fluid. Whereas uncorrected GABA levels were significantly lower in older as compared to young adults, this age effect was absent when GABA levels were corrected for voxel composition. These results suggest that age-related differences in GABA levels are at least partly driven by the age-related gray matter loss. However, as alterations in GABA levels might be region-specific, further research should clarify to what extent gray matter changes may account for age-related differences in GABA levels within other brain regions. © 2018 Wiley Periodicals, Inc.
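One common form of the voxel-composition correction alluded to here simply rescales the measured GABA estimate by the non-CSF fraction of the voxel, since CSF contains negligible GABA; the sketch below shows that simple variant only (Python; the tissue fractions and GABA value are made up, and published studies also use more elaborate alpha-corrections).

# Hypothetical MRS voxel tissue fractions (gray matter, white matter, CSF).
f_gm, f_wm, f_csf = 0.42, 0.33, 0.25   # older-adult-like voxel with enlarged CSF

gaba_uncorrected = 1.8  # institutional units, hypothetical

# CSF-fraction correction: assume the signal arises only from brain tissue,
# so divide by the non-CSF fraction of the voxel.
gaba_csf_corrected = gaba_uncorrected / (f_gm + f_wm)

print(f"uncorrected = {gaba_uncorrected:.2f}, CSF-corrected = {gaba_csf_corrected:.2f}")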
NASA Astrophysics Data System (ADS)
Wang, Dong
2016-03-01
Gears are the most commonly used components in mechanical transmission systems. Their failures may cause transmission system breakdown and result in economic loss. Identification of different gear crack levels is important to prevent any unexpected gear failure because gear cracks lead to gear tooth breakage. Signal-processing-based methods mainly require expertise to interpret gear fault signatures, which is usually not easy for ordinary users to achieve. In order to automatically identify different gear crack levels, intelligent gear crack identification methods should be developed. Previous case studies experimentally proved that K-nearest-neighbors-based methods exhibit high prediction accuracies for identification of 3 different gear crack levels under different motor speeds and loads. In this short communication, to further enhance prediction accuracies of existing K-nearest-neighbors-based methods and extend identification of 3 different gear crack levels to identification of 5 different gear crack levels, redundant statistical features are constructed by using the Daubechies 44 (db44) binary wavelet packet transform at different wavelet decomposition levels, prior to the use of a K-nearest neighbors method. The dimensionality of the redundant statistical features is 620, which provides richer gear fault signatures. Since many of these statistical features are redundant and highly correlated with each other, dimensionality reduction of the redundant statistical features is conducted to obtain new significant statistical features. Finally, the K-nearest neighbors method is used to identify 5 different gear crack levels under different motor speeds and loads. A case study including 3 experiments is investigated to demonstrate that the developed method provides higher prediction accuracies than the existing K-nearest-neighbors-based methods for recognizing different gear crack levels under different motor speeds and loads. Based on the new significant statistical features, some other popular statistical models, including linear discriminant analysis, quadratic discriminant analysis, classification and regression trees and the naive Bayes classifier, are compared with the developed method. The results show that the developed method has the highest prediction accuracies among these statistical models. Additionally, selection of the number of new significant features and parameter selection for K-nearest neighbors are thoroughly investigated.
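A minimal sketch of the feature-reduction-plus-KNN classification stage is given below (Python/scikit-learn; the random feature matrix stands in for the 620 wavelet-packet statistical features, and PCA is used as a generic stand-in for whatever dimensionality-reduction step the paper actually employs).

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical data: 200 vibration records x 620 redundant statistical features,
# each labeled with one of 5 gear crack levels (0-4).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 620))
y = rng.integers(0, 5, size=200)

# Reduce the redundant, correlated features to a small set of significant ones,
# then classify crack level with K-nearest neighbors.
model = make_pipeline(
    StandardScaler(),
    PCA(n_components=10),
    KNeighborsClassifier(n_neighbors=5),
)
scores = cross_val_score(model, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())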
Computed gray levels in multislice and cone-beam computed tomography.
Azeredo, Fabiane; de Menezes, Luciane Macedo; Enciso, Reyes; Weissheimer, Andre; de Oliveira, Rogério Belle
2013-07-01
Gray level is the range of shades of gray in the pixels, representing the x-ray attenuation coefficient that allows for tissue density assessments in computed tomography (CT). An in-vitro study was performed to investigate the relationship between computed gray levels in 3 cone-beam CT (CBCT) scanners and 1 multislice spiral CT device using 5 software programs. Six materials (air, water, wax, acrylic, plaster, and gutta-percha) were scanned with the CBCT and CT scanners, and the computed gray levels for each material at predetermined points were measured with OsiriX Medical Imaging software (Geneva, Switzerland), OnDemand3D (CyberMed International, Seoul, Korea), E-Film (Merge Healthcare, Milwaukee, Wis), Dolphin Imaging (Dolphin Imaging & Management Solutions, Chatsworth, Calif), and InVivo Dental Software (Anatomage, San Jose, Calif). The repeatability of these measurements was calculated with intraclass correlation coefficients, and the gray levels were averaged to represent each material. Repeated analysis of variance tests were used to assess the differences in gray levels among scanners and materials. There were no differences in mean gray levels with the different software programs. There were significant differences in gray levels between scanners for each material evaluated (P <0.001). The software programs were reliable and had no influence on the CT and CBCT gray level measurements. However, the gray levels might have discrepancies when different CT and CBCT scanners are used. Therefore, caution is essential when interpreting or evaluating CBCT images because of the significant differences in gray levels between different CBCT scanners, and between CBCT and CT values. Copyright © 2013 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.
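The repeatability statistic used here, the intraclass correlation coefficient, can be computed directly from a targets-by-raters table of gray-level measurements; the sketch below implements the two-way random-effects, absolute-agreement, single-measurement form ICC(2,1) on made-up numbers (Python; the study may have used a different ICC variant).

import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.
    ratings: n targets (rows) x k raters/software programs (columns)."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)
    # Mean squares from the two-way ANOVA decomposition.
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between targets
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between raters
    sse = np.sum((ratings - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                         # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical gray-level measurements: 6 materials x 5 software programs.
gray = np.array([
    [-1000, -998, -1001, -999, -1000],   # air
    [0, 2, -1, 1, 0],                    # water
    [-60, -58, -61, -59, -60],           # wax
    [120, 122, 119, 121, 120],           # acrylic
    [700, 698, 701, 699, 700],           # plaster
    [3000, 3002, 2999, 3001, 3000],      # gutta-percha
], dtype=float)

print(f"ICC(2,1) = {icc_2_1(gray):.3f}")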
Burger, J; Snodgrass, J
2001-06-01
Tadpoles have been proposed as useful bioindicators of environmental contamination; yet, recently it has been shown that metal levels vary in different body compartments of tadpoles. Metals levels are higher in the digestive tract of bullfrog (Rana catesbeiana) tadpoles, which is usually not removed during such analysis. In this paper we examine the heavy metal levels in southern leopard frog (R. utricularia) tadpoles from several wetlands at the Savannah River Site and test the null hypotheses that (1) there are no differences in metal levels in different body compartments of the tadpoles, including the digestive tract; (2) there are no differences in heavy metal levels among different wetlands; and (3) there are no differences in the ratio of metals in the tail/body and in the digestive tract/body as a function of metal or developmental stage as indicated by body weight. Variations in heavy metal levels were explained by wetland and body compartment for all metals and by tadpole weight for selenium and manganese. In all cases, levels of metals were higher in the digestive tract than in the body or tail of tadpoles. Metal levels were highest in a wetland that had been remediated and lowest in a wetland that was never a pasture or remediated (i.e., was truly undisturbed). Although tadpoles are sometimes eaten by fish and other aquatic predators, leopard frogs usually avoid laying their eggs in ponds with such predators. However, avian predators will eat them. These data suggest that tadpoles can be used as bioindicators of differences in metal levels among wetlands and as indicators of potential exposure for higher-trophic-level organisms, but that to assess effects on the tadpoles themselves, digestive tracts should be removed before analysis. Copyright 2001 Academic Press.
Teimoury, Ebrahim; Jabbarzadeh, Armin; Babaei, Mohammadhosein
2017-01-01
Inventory management has frequently been targeted by researchers as one of the most pivotal problems in supply chain management. With the expansion of research studies on inventory management in supply chains, perishable inventory has been introduced and its fundamental differences from non-perishable inventory have been emphasized. This article presents livestock as a type of inventory that has been less studied in the literature. Differences among inventory types affect various levels of strategic, tactical and operational decision-making. In most articles, different levels of decision-making are discussed independently and sequentially. In this paper, not only is the livestock inventory introduced, but also a model has been developed to integrate decisions across different levels of decision-making using bi-level programming. Computational results indicate that the proposed bi-level approach is more efficient than the sequential decision-making approach.
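To make the bi-level idea concrete, a toy nested-optimization sketch follows (Python; the cost functions, decision variables, and numbers are invented for illustration and do not reproduce the paper's livestock model): the upper (strategic) level picks a capacity, and for each candidate the lower (operational) level chooses an order quantity that minimizes its own cost.

import numpy as np

def lower_level_cost(order_qty: float, capacity: float) -> float:
    # Operational cost: holding cost plus a shortage penalty, capped by capacity.
    demand = 120.0
    usable = min(order_qty, capacity)
    holding = 0.5 * usable
    shortage = 4.0 * max(demand - usable, 0.0)
    return holding + shortage

def lower_level_best_response(capacity: float) -> float:
    # The follower optimizes its order quantity given the leader's capacity.
    candidates = np.linspace(0.0, capacity, 201)
    return min(candidates, key=lambda q: lower_level_cost(q, capacity))

def upper_level_cost(capacity: float) -> float:
    # Strategic cost: capacity investment plus the follower's resulting cost.
    q_star = lower_level_best_response(capacity)
    return 1.2 * capacity + lower_level_cost(q_star, capacity)

# Brute-force the leader's decision over a coarse grid.
capacities = np.linspace(50.0, 200.0, 151)
best_capacity = min(capacities, key=upper_level_cost)
print("best capacity:", best_capacity,
      "upper-level cost:", round(upper_level_cost(best_capacity), 2))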
Cenik, Can; Cenik, Elif Sarinay; Byeon, Gun W.; Grubert, Fabian; Candille, Sophie I.; Spacek, Damek; Alsallakh, Bilal; Tilgner, Hagen; Araya, Carlos L.; Tang, Hua; Ricci, Emiliano; Snyder, Michael P.
2015-01-01
Elucidating the consequences of genetic differences between humans is essential for understanding phenotypic diversity and personalized medicine. Although variation in RNA levels, transcription factor binding, and chromatin have been explored, little is known about global variation in translation and its genetic determinants. We used ribosome profiling, RNA sequencing, and mass spectrometry to perform an integrated analysis in lymphoblastoid cell lines from a diverse group of individuals. We find significant differences in RNA, translation, and protein levels suggesting diverse mechanisms of personalized gene expression control. Combined analysis of RNA expression and ribosome occupancy improves the identification of individual protein level differences. Finally, we identify genetic differences that specifically modulate ribosome occupancy—many of these differences lie close to start codons and upstream ORFs. Our results reveal a new level of gene expression variation among humans and indicate that genetic variants can cause changes in protein levels through effects on translation. PMID:26297486
Sex hormones in Malay and Chinese men in Malaysia: are there age and race differences?
Chin, Kok-Yong; Soelaiman, Ima-Nirwana; Mohamed, Isa Naina; Ahmad, Fairus; Ramli, Elvy Suhana Mohd; Aminuddin, Amilia; Ngah, Wan Zurinah Wan
2013-01-01
OBJECTIVES: Variations in the prevalence of sex-hormone-related diseases have been observed between Asian ethnic groups living in the same country; however, available data concerning their sex hormone levels are limited. The present study aimed to determine the influence of ethnicity and age on the sex hormone levels of Malay and Chinese men in Malaysia. METHODS: A total of 547 males of Malay and Chinese ethnicity residing in the Klang Valley Malaysia underwent a detailed screening, and their blood was collected for sex hormones analyses. RESULTS: Testosterone levels were normally distributed in the men (total, free and non-sex hormone-binding globulin (SHBG) bound fractions), and significant ethnic differences were observed (p<0.05); however, the effect size was small. In general, testosterone levels in males began to decline significantly after age 50. Significant ethnic differences in total, free and non-SHBG bound fraction estradiol levels were observed in the 20-29 and 50-59 age groups (p<0.05). The estradiol levels of Malay men decreased as they aged, but they increased for Chinese men starting at age 40. CONCLUSIONS: Small but significant differences in testosterone levels existed between Malay and Chinese males. Significant age and race differences existed in estradiol levels. These differences might contribute to the ethnic group differences in diseases related to sex hormones, which other studies have found in Malaysia. PMID:23525310
Race, Serum Potassium, and Associations With ESRD and Mortality.
Chen, Yan; Sang, Yingying; Ballew, Shoshana H; Tin, Adrienne; Chang, Alex R; Matsushita, Kunihiro; Coresh, Josef; Kalantar-Zadeh, Kamyar; Molnar, Miklos Z; Grams, Morgan E
2017-08-01
Recent studies suggest that potassium levels may differ by race. The basis for these differences and whether associations between potassium levels and adverse outcomes differ by race are unknown. Observational study. Associations between race and potassium level and the interaction of race and potassium level with outcomes were investigated in the Racial and Cardiovascular Risk Anomalies in Chronic Kidney Disease (RCAV) Study, a cohort of US veterans (N=2,662,462). Associations between African ancestry and potassium level were investigated in African Americans in the Atherosclerosis Risk in Communities (ARIC) Study (N=3,450). Race (African American vs non-African American and percent African ancestry) for cross-sectional analysis; serum potassium level for longitudinal analysis. Potassium level for cross-sectional analysis; mortality and end-stage renal disease for longitudinal analysis. The RCAV cohort was 18% African American (N=470,985). Potassium levels on average were 0.162mmol/L lower in African Americans compared with non-African Americans, with differences persisting after adjustment for demographics, comorbid conditions, and potassium-altering medication use. In the ARIC Study, higher African ancestry was related to lower potassium levels (-0.027mmol/L per each 10% African ancestry). In both race groups, higher and lower potassium levels were associated with mortality. Compared to potassium level of 4.2mmol/L, mortality risk associated with lower potassium levels was lower in African Americans versus non-African Americans, whereas mortality risk associated with higher levels was slightly greater. Risk relationships between potassium and end-stage renal disease were weaker, with no difference by race. No data for potassium intake. African Americans had slightly lower serum potassium levels than non-African Americans. Consistent associations between potassium levels and percent African ancestry may suggest a genetic component to these differences. Higher and lower serum potassium levels were associated with mortality in both racial groups. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Curtin, Stephen E.; Andreasen, David C.; Staley, Andrew W.
2012-01-01
Groundwater is the principal source of freshwater supply in most of Southern Maryland and Maryland's Eastern Shore. It is also the source of freshwater supply used in the operation of the Calvert Cliffs, Chalk Point, and Morgantown power plants. Increased groundwater withdrawals over the last several decades have caused groundwater levels to decline. This report presents potentiometric surface maps of the Aquia, Magothy, upper Patapsco, lower Patapsco, and Patuxent aquifers using water levels measured during September 2011. Water-level difference maps also are presented for the first four of these aquifers. The water-level differences in the Aquia aquifer are shown using groundwater-level data from 1982 and 2011, whereas the water-level differences in the Magothy aquifer are presented using data from 1975 and 2011. Water-level difference maps in both the upper Patapsco and lower Patapsco aquifers are presented using data from 1990 and 2011. These maps show cones of depression ranging from 25 to 198 feet (ft) below sea level centered on areas of major withdrawals. Water levels have declined by as much as 112 ft in the Aquia aquifer since 1982, 85 ft in the Magothy aquifer since 1975, and 47 and 71 ft in the upper Patapsco and lower Patapsco aquifers, respectively, since 1990.
Vignais, Nicolas; Bideau, Benoit; Craig, Cathy; Brault, Sébastien; Multon, Franck; Delamarche, Paul; Kulpa, Richard
2009-01-01
The authors investigated how different levels of detail (LODs) of a virtual throwing action can influence a handball goalkeeper’s motor response. Goalkeepers attempted to stop a virtual ball emanating from five different graphical LODs of the same virtual throwing action. The five levels of detail were: a textured reference level (L0), a non-textured level (L1), a wire-frame level (L2), a point-light-display (PLD) representation (L3) and a PLD level with reduced ball size (L4). For each motor response made by the goalkeeper we measured and analyzed the time to respond (TTR), the percentage of successful motor responses, the distance between the ball and the closest limb (when the stopping motion was incorrect) and the kinematics of the motion. Results showed that TTR, percentage of successful motor responses and distance with the closest limb were not significantly different for any of the five different graphical LODs. However the kinematics of the motion revealed that the trajectory of the stopping limb was significantly different when comparing the L1 and L3 levels, and when comparing the L1 and L4 levels. These differences in the control of the goalkeeper’s actions suggests that the different level of information available in the PLD representations (L3 and L4) are causing the goalkeeper to adopt different motor strategies to control the approach of their limb to stop the ball. Key points Virtual reality technology can be used to analyze sport performance because it enables standardization and reproduction of sport situations. Defining a minimal graphical level of detail of a virtual action could decrease the real time calculation of a virtual reality system. A Point Light Display graphical representation of a virtual throwing motion seems to influence the regulation of action of real handball goalkeepers. PMID:24149589
Argus, Christos K; Gill, Nicholas D; Keogh, Justin W L
2012-10-01
Levels of strength and power have been used to effectively discriminate between different levels of competition; however, there is limited literature on rugby union athletes. To assess the difference in strength and power between levels of competition, 112 rugby union players, including 43 professionals, 19 semiprofessionals, 32 academy-level, and 18 high-school-level athletes, were assessed for bench press and box squat strength and for bench throw and jump squat power. High school athletes were not assessed for jump squat power. Raw data, along with data normalized to body mass with a derived power exponent, were log transformed and analyzed. With the exception of box squat and bench press strength between professional and semiprofessional athletes, higher-level athletes produced greater absolute and relative strength and power outputs than did lower-level athletes (4-51%; small to very large effect sizes). Lower-level athletes should strive to attain greater levels of strength and power in an attempt to reach, or to be physically prepared for, the next level of competition. Furthermore, the ability to produce high levels of power, rather than strength, may be a better determinant of playing ability between professional and semiprofessional athletes.
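The normalization mentioned here, dividing performance by body mass raised to a derived power exponent, can be sketched as follows (Python; the masses, strength values, and resulting exponent are fabricated and serve only to show how an allometric exponent is derived from a log-log regression and then applied).

import numpy as np

# Hypothetical body masses (kg) and bench press 1RM values (kg).
mass = np.array([85.0, 92.0, 100.0, 104.0, 110.0, 118.0])
bench = np.array([120.0, 135.0, 150.0, 148.0, 165.0, 170.0])

# Derive the allometric exponent b from log-transformed data:
# log(strength) = log(a) + b * log(mass).
b, log_a = np.polyfit(np.log(mass), np.log(bench), 1)

# Normalize each athlete's strength to body mass using the derived exponent.
relative_strength = bench / mass ** b
print(f"derived exponent b = {b:.2f}")
print("relative strength:", np.round(relative_strength, 2))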
USDA-ARS's Scientific Manuscript database
Zoning of agricultural fields is an important task for utilization of precision farming technology. One method for the definition of zones with different levels of productivity is based on fuzzy indicator model. Fuzzy indicator model for identification of zones with different levels of productivit...
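Although the abstract is truncated, the fuzzy-indicator idea it names can be illustrated with simple membership functions that map a productivity indicator (for example, normalized yield) to graded zone memberships; the sketch below is a generic illustration under that assumption, not the manuscript's actual model.

import numpy as np

def triangular(x, left, peak, right):
    # Triangular fuzzy membership function.
    return np.clip(np.minimum((x - left) / (peak - left), (right - x) / (right - peak)), 0.0, 1.0)

# Hypothetical normalized yield values for grid cells of a field (0 = worst, 1 = best).
yield_index = np.array([0.15, 0.35, 0.52, 0.68, 0.88])

# Membership in three productivity zones: low, medium, high.
low = triangular(yield_index, -0.01, 0.0, 0.5)
medium = triangular(yield_index, 0.2, 0.5, 0.8)
high = triangular(yield_index, 0.5, 1.0, 1.01)

# Assign each cell to the zone with the highest membership (defuzzification).
zones = np.argmax(np.vstack([low, medium, high]), axis=0)
print("zone per cell (0=low, 1=medium, 2=high):", zones)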
Noble, N A; Tanaka, K R
1979-01-01
1. A major locus with two alleles is responsible for large differences in erythrocyte 2,3-diphosphoglycerate (DPG) levels in Rattus norvegicus. Blood from homozygous High-DPG, homozygous Low-DPG and heterozygous animals was used to measure blood indices and red cell enzyme activities. 2. Significant differences between groups were found in DPG levels, white blood cell counts and hemoglobin levels. 3. The results suggest that none of the red cell enzymes assayed is structurally or quantitatively different in the three groups.
Gender differences in public and private drinking contexts: a multi-level GENACIS analysis.
Bond, Jason C; Roberts, Sarah C M; Greenfield, Thomas K; Korcha, Rachael; Ye, Yu; Nayak, Madhabika B
2010-05-01
This multi-national study hypothesized that higher levels of country-level gender equality would predict smaller differences in the frequency of women's compared to men's drinking in public (like bars and restaurants) settings and possibly private (home or party) settings. GENACIS project survey data with drinking contexts included 22 countries in Europe (8); the Americas (7); Asia (3); Australasia (2), and Africa (2), analyzed using hierarchical linear models (individuals nested within country). Age, gender and marital status were individual predictors; country-level gender equality as well as equality in economic participation, education, and political participation, and reproductive autonomy and context of violence against women measures were country-level variables. In separate models, more reproductive autonomy, economic participation, and educational attainment and less violence against women predicted smaller differences in drinking in public settings. Once controlling for country-level economic status, only equality in economic participation predicted the size of the gender difference. Most country-level variables did not explain the gender difference in frequency of drinking in private settings. Where gender equality predicted this difference, the direction of the findings was opposite from the direction in public settings, with more equality predicting a larger gender difference, although this relationship was no longer significant after controlling for country-level economic status. Findings suggest that country-level gender equality may influence gender differences in drinking. However, the effects of gender equality on drinking may depend on the specific alcohol measure, in this case drinking context, as well as on the aspect of gender equality considered. Similar studies that use only global measures of gender equality may miss key relationships. We consider potential implications for alcohol related consequences, policy and public health.
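A minimal version of the two-level model described (individuals nested within countries, with individual- and country-level predictors) can be sketched with statsmodels; the simulated data and variable names below are placeholders, and the actual GENACIS models include more predictors and cross-level terms than this random-intercept sketch.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_countries, n_per = 6, 40
equality = np.repeat(rng.uniform(0.3, 0.9, n_countries), n_per)
country = np.repeat([f"C{i}" for i in range(n_countries)], n_per)
female = rng.integers(0, 2, n_countries * n_per)
age = rng.integers(18, 65, n_countries * n_per)
# Simulated outcome: men drink in public more often, and the gap narrows as equality rises.
freq = 3 + 4.0 * (1 - female) * (1 - equality) + rng.normal(0, 1, n_countries * n_per)

df = pd.DataFrame({"freq": freq, "female": female, "age": age,
                   "equality": equality, "country": country})

# Random-intercept model: individuals nested within countries, with a
# cross-level interaction testing whether the gender gap depends on equality.
model = smf.mixedlm("freq ~ female * equality + age", data=df, groups=df["country"])
print(model.fit().summary())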
Janin, Agnès; Léna, Jean-Paul; Deblois, Sandrine; Joly, Pierre
2012-10-01
The influence of landscape matrix on functional connectivity has been clearly established. Now methods to assess the effects of different land uses on species' movements are needed because current methods are often biased. The use of physiological parameters as indicators of the level of resistance to animal movement associated with different land uses (i.e., matrix resistance) could provide estimates of energetic costs and risks to animals migrating through the matrix. To assess whether corticosterone levels indicate matrix resistance, we conducted experiments on substrate choice and measured levels of corticosterone before and after exposure of toads (Bufo bufo) to 3 common substrates (ploughed soil, meadow, and forest litter). We expected matrix resistance and hormone levels to increase from forest litter (habitat of the toad) to meadows to ploughed soil. Adult toads had higher corticosterone levels on ploughed soil than on forest litter or meadow substrates. Hormone levels did not differ between forest litter and meadow. Toads avoided moving onto ploughed soil. Corticosterone levels in juvenile toads were not related to substrate type; however, hormone levels decreased as humidity increased. Juveniles, unlike adults, did not avoid moving over ploughed soil. The difference in responses between adult and juvenile toads may have been due to differences in experimental design (for juveniles, entire body used to measure corticosterone concentration; for adults, saliva alone); differences in the scale of sensory perception of the substrate (juveniles are much smaller than adults); or differences in cognitive processes between adult and juvenile toads. Adults probably had experience with different substrate types, whereas juveniles first emerging from the water probably did not. As a consequence, arable lands could act as ecological traps for juvenile toads. ©2012 Society for Conservation Biology.
Huang, Zhenhua; Liang, Lining; Li, Lingyu; Xu, Miao; Li, Xiang; Sun, Hao; He, Songwei; Lin, Lilong; Zhang, Yixin; Song, Yancheng; Yang, Man; Luo, Yuling; Loh, Horace H; Law, Ping-Yee; Zheng, Dayong; Zheng, Hui
2016-03-08
Pain management has been considered a significant contributor to broad quality-of-life improvement for cancer patients. Modulating serum cholesterol levels affects the analgesic abilities of opioids, important pain killers for cancer patients, in mouse models. The correlation between opioid usage and cholesterol levels was therefore investigated in human patients with lung cancer. Medical records of 282 patients were selected with the following criteria: 1) signed informed consent, 2) full medical records on total serum cholesterol levels and opioid administration, 3) opioid-naïve, 4) not having received and not receiving cancer-related or cholesterol-lowering treatment, 5) pain at level 5-8. The patients were divided into different groups based on their gender and cholesterol levels. Since the different opioids (morphine, oxycodone, and fentanyl) were all administered at a fixed low dose initially and increased gradually only if pain was not controlled, the percentages of patients in each group who did not respond to the initial doses of opioids and required higher doses for pain management were determined and compared. Patients with relatively low cholesterol levels had a larger percentage who did not respond to the initial dose of opioids (11 out of 28 in females and 31 out of 71 in males) than those with high cholesterol levels (0 out of 258 in females and 8 out of 74 in males). Similar differences were obtained when patients receiving different opioids were analyzed separately. After converting the doses of the different opioids to equivalent doses of oxycodone, a significant correlation between opioid usage and cholesterol levels was also observed. Therefore, more attention should be paid to cancer patients with low cholesterol levels because they may require higher doses of opioids as pain killers.
Das Gupta, Esha; Ng, Wei Ren; Wong, Shew Fung; Bhurhanudeen, Abdul Kareem; Yeap, Swan Sim
2017-01-01
The aim of this study was to investigate the correlations between serum cartilage oligomeric matrix protein (COMP), interleukin-16 (IL-16) and different grades of knee osteoarthritis (KOA) in Malaysian subjects. Ninety subjects were recruited comprising 30 with Kellgren-Lawrence (K-L) grade 2 KOA, 27 with K-L grade 3 KOA, 7 with grade 4 KOA, and 30 healthy controls. All subjects completed the Western Ontario and McMaster Universities Arthritis Index (WOMAC) questionnaire. Serum COMP and IL-16 levels were measured using ELISA and their values log transformed to ensure a normal distribution. There were no significant differences in levels of log serum COMP and IL-16 between healthy controls and KOA patients. There were no significant differences in the log serum COMP and IL-16 levels within the different K-L grades in the KOA patients. In KOA patients, log serum IL-16 levels significantly correlated with the WOMAC score (p = 0.001) and its subscales, pain (p = 0.005), stiffness (p = 0.019) and physical function (p<0.0001). Serum IL-16 levels were significantly higher in Malaysian Indians compared to Malays and Chinese (p = 0.024). In this multi-ethnic Malaysian population, there was no difference in serum COMP and IL-16 levels between healthy controls and patients with KOA, nor was there any difference in serum COMP or IL-16 levels across the various K-L grades of KOA. However, there were significant inter-racial differences in serum IL-16 levels.
Similarity and Difference in the Behavior of Gases: An Interactive Demonstration
ERIC Educational Resources Information Center
Ashkenazi, Guy
2008-01-01
Previous research has documented a gap in students' understanding of gas behavior between the algorithmic-macroscopic level and the conceptual-microscopic level. A coherent understanding of both levels is needed to appreciate the difference in properties of different gases, which is not manifest in the ideal gas law. A demonstration that…
Listening Strategy Use and Linguistic Patterns in Listening Comprehension by EFL Learners
ERIC Educational Resources Information Center
Shang, Hui-Fang
2008-01-01
This study mainly focused on investigating listening strategy uses at different proficiency levels for different linguistic patterns. Three main questions were examined in regards to Taiwanese listeners of English as a foreign language (EFL): (1) For listeners with different proficiency levels, which pattern may result in a higher level of…
ERIC Educational Resources Information Center
Bureau, Daniel A.; Cole, James S.; McCormick, Alexander C.
2014-01-01
This chapter examines the differences between institutions with high and low levels of involvement in service learning as well as the differences between students with high and low levels of involvement. The study shows a correlation between institutional organization and service-learning emphasis and describes, at the student level, correlations…
Mujahid, A; Asif, M; ul Haq, I; Abdullah, M; Gilani, A H
2003-09-01
Nutrient digestibility of broiler feeds containing different levels of variously processed rice bran stored for varying periods was determined. A total of 444 Hubbard male chicks were used to conduct four trials. Each trial was carried out on 111 chicks to determine the digestibility of 36 different feeds. Chicks of 5 wk of age were fed feeds containing raw, roasted, and extruded rice bran treated with the antioxidant Bianox Dry (0, 125, 250 g/ton), stored for periods of 0, 4, 8, and 12 mo, and used at levels of 0, 10, 20, and 30% in feeds. Digestibility coefficients for fat and fiber of the feeds were determined. Increasing storage periods of rice bran significantly reduced the fat digestibility of feed, whereas no difference in fiber digestibility was observed. Processing of rice bran by extrusion cooking significantly increased the digestibility of fat even when used at higher levels in broiler feeds. The interaction of storage, processing, and level was significant for fat digestibility. Treatment of rice bran with different levels of antioxidant had no effect on the digestibility of fat and fiber when incorporated in broiler feed.
A clinical economics workstation for risk-adjusted health care cost management.
Eisenstein, E. L.; Hales, J. W.
1995-01-01
This paper describes a healthcare cost accounting system which is under development at Duke University Medical Center. Our approach differs from current practice in that this system will dynamically adjust its resource usage estimates to compensate for variations in patient risk levels. This adjustment is made possible by introducing a new cost accounting concept, Risk-Adjusted Quantity (RQ). RQ divides case-level resource usage variances into their risk-based component (resource consumption differences attributable to differences in patient risk levels) and their non-risk-based component (resource consumption differences which cannot be attributed to differences in patient risk levels). Because patient risk level is a factor in estimating resource usage, this system is able to simultaneously address the financial and quality dimensions of case cost management. In effect, cost-effectiveness analysis is incorporated into health care cost management. PMID:8563361
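The RQ idea above amounts to splitting each case's usage variance into a part explained by patient risk and a remainder. The sketch below is a hypothetical illustration of that split, not the Duke system's implementation; the input names (standard_qty, expected_qty_at_risk, actual_qty) are assumptions.

```python
# Hypothetical sketch of an RQ-style variance split (not the authors' implementation).
# Assumed inputs per case: standard_qty (budgeted resource units for the case type),
# expected_qty_at_risk (units expected after adjusting for the patient's risk level),
# and actual_qty (units actually consumed).

def rq_variance_split(standard_qty: float, expected_qty_at_risk: float, actual_qty: float):
    """Split the total usage variance into risk-based and non-risk-based components."""
    risk_based = expected_qty_at_risk - standard_qty      # attributable to patient risk level
    non_risk_based = actual_qty - expected_qty_at_risk    # not attributable to risk (e.g. practice variation)
    total = actual_qty - standard_qty                      # equals risk_based + non_risk_based
    return {"risk_based": risk_based, "non_risk_based": non_risk_based, "total": total}

if __name__ == "__main__":
    # Example: a case budgeted at 10 units, risk-adjusted expectation 13 units, actual usage 15 units.
    print(rq_variance_split(10.0, 13.0, 15.0))
    # {'risk_based': 3.0, 'non_risk_based': 2.0, 'total': 5.0}
```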
Impact of vegetarian diet on serum immunoglobulin levels in children.
Gorczyca, Daiva; Prescha, Anna; Szeremeta, Karolina
2013-03-01
Nutrition plays an important role in immune response. We evaluated the effect of nutrient intake on serum immunoglobulin levels in vegetarian and omnivore children. Serum immunoglobulin levels and iron status were estimated in 22 vegetarian and 18 omnivore children. Seven-day food records were used to assess the diet. There were no significant differences in serum IgA, IgM, and IgG levels between groups of children. Serum immunoglobulin levels were lower in vegetarian children with iron deficiency in comparison with those without iron deficiency. In the vegetarians, IgG level correlated positively with energy, zinc, copper, and vitamin B6 intake. In the omnivores, these correlations were stronger with IgM level. Despite negligible differences in serum immunoglobulin levels between vegetarian and omnivore children, the impact of several nutrient intakes on IgM and IgG levels differed between groups. Low iron status in vegetarian children can lead to decreased immunoglobulin levels.
Design of QoS-Aware Multi-Level MAC-Layer for Wireless Body Area Network.
Hu, Long; Zhang, Yin; Feng, Dakui; Hassan, Mohammad Mehedi; Alelaiwi, Abdulhameed; Alamri, Atif
2015-12-01
With advances in wearable computing and various wireless technologies, there is an increasing trend to outsource body signals from wireless body area networks (WBANs) to the outside world, including cyberspace and healthcare big data clouds. Since the environmental and physiological data collected by multimodal sensors have different importance, provisioning quality of service (QoS) for the sensory data in a WBAN is a critical issue. This paper proposes a multi-level QoS design at the WBAN media access control (MAC) layer in terms of user level, data level, and time level. In the proposed QoS provisioning scheme, different users have different priorities, the sensory data collected by different sensor nodes have different importance, and the data priority for the same sensor node varies over time. The experimental results show that the proposed multi-level QoS provisioning solution for WBANs yields better performance in meeting the QoS requirements of personalized healthcare applications while achieving energy savings.
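As a rough illustration of how user-, data- and time-level priorities could be folded into a single MAC scheduling decision, the sketch below combines the three levels with assumed weights; the weighting scheme, field names and values are illustrative assumptions, not the paper's algorithm.

```python
# Hypothetical sketch of combining user-, data- and time-level priorities into one
# MAC-layer scheduling key, in the spirit of the multi-level QoS idea (weights are assumptions).
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Packet:
    priority: float
    payload: str = field(compare=False)

def combined_priority(user_level: int, data_level: int, urgency: float,
                      w_user: float = 0.5, w_data: float = 0.3, w_time: float = 0.2) -> float:
    """Lower value = served first; urgency in [0, 1] grows as the data ages or nears a deadline."""
    return -(w_user * user_level + w_data * data_level + w_time * urgency)

queue: list[Packet] = []
# A critical patient's ECG sample that is close to its deadline...
heapq.heappush(queue, Packet(combined_priority(user_level=3, data_level=3, urgency=0.9), "ECG sample"))
# ...versus a routine user's ambient-temperature reading.
heapq.heappush(queue, Packet(combined_priority(user_level=1, data_level=1, urgency=0.1), "room temperature"))

print(heapq.heappop(queue).payload)  # "ECG sample" is transmitted first
```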
Cenik, Can; Cenik, Elif Sarinay; Byeon, Gun W; Grubert, Fabian; Candille, Sophie I; Spacek, Damek; Alsallakh, Bilal; Tilgner, Hagen; Araya, Carlos L; Tang, Hua; Ricci, Emiliano; Snyder, Michael P
2015-11-01
Elucidating the consequences of genetic differences between humans is essential for understanding phenotypic diversity and personalized medicine. Although variation in RNA levels, transcription factor binding, and chromatin have been explored, little is known about global variation in translation and its genetic determinants. We used ribosome profiling, RNA sequencing, and mass spectrometry to perform an integrated analysis in lymphoblastoid cell lines from a diverse group of individuals. We find significant differences in RNA, translation, and protein levels suggesting diverse mechanisms of personalized gene expression control. Combined analysis of RNA expression and ribosome occupancy improves the identification of individual protein level differences. Finally, we identify genetic differences that specifically modulate ribosome occupancy--many of these differences lie close to start codons and upstream ORFs. Our results reveal a new level of gene expression variation among humans and indicate that genetic variants can cause changes in protein levels through effects on translation. © 2015 Cenik et al.; Published by Cold Spring Harbor Laboratory Press.
D-dimer concentration outliers are not rare in at-term pregnant women.
Wang, Yu; Gao, Jie; Du, Juan
2016-06-01
To determine the D-dimer levels in pregnant women at term and the differences between pregnant women with different D-dimer levels. The plasma D-dimer concentrations in pregnant women at term were identified in a cross-sectional study. The clinical indicators that are potentially relevant to D-dimer levels were compared between the pregnant women with different D-dimer levels (i.e., normal, mildly increased, and severely increased). There were always some D-dimer concentration outliers in the pregnant women at term regardless of the presence or absence of complications, and there were no significant differences in maternal age, gestational age, gravidity, parity, blood count, blood coagulation, or liver function between the pregnant women with different D-dimer levels. D-dimer levels may vary significantly during pregnancy for unknown reasons. This variation, particularly in pregnant women at term, might lead to questionable diagnostic information regarding coagulation. Copyright © 2016 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
Mineral Levels in Thalassaemia Major Patients Using Different Iron Chelators.
Genc, Gizem Esra; Ozturk, Zeynep; Gumuslu, Saadet; Kupesiz, Alphan
2016-03-01
The goal of the present study was to determine the levels of minerals in chronically transfused thalassaemic patients living in Antalya, Turkey, and to compare mineral levels between groups using different iron chelators. Three iron chelators, deferoxamine, deferiprone and deferasirox, have been used to remove iron from patients' tissues. There are contradictory results in the literature about minerals, including selenium, zinc, copper, and magnesium, in thalassaemia major patients. Blood samples from 60 thalassaemia major patients (the deferoxamine group, n = 19; the deferiprone group, n = 20; and the deferasirox group, n = 21) and controls (n = 20) were collected. Levels of selenium, zinc, copper, magnesium, and iron were measured; all of them except iron showed no significant difference between the controls and the patients regardless of chelator type. Serum copper levels in the deferasirox group were lower than those in the control and deferoxamine groups, and serum magnesium levels in the deferasirox group were higher than those in the control, deferoxamine and deferiprone groups. Iron levels in the patient groups were higher than those in the control group, and iron levels showed a significant correlation with selenium and magnesium levels. Different mineral values in thalassaemia major patients may be the result of different dietary intake, chelator type, or regional differences in where patients live. Therefore, mineral levels should be measured in thalassaemia major patients at intervals, and deficient minerals should be replaced. Particular attention to copper and magnesium levels seems warranted in thalassaemia major patients using deferasirox.
Zuroff, David C; McBride, Carolina; Ravitz, Paula; Koestner, Richard; Moskowitz, D S; Bagby, R Michael
2017-10-01
Differences between therapists in the average outcomes their patients achieve are well documented, and researchers have begun to try to explain such differences (Baldwin & Imel, 2013). Guided by Self-Determination Theory (Deci & Ryan, 2000), we examined the effects on outcome of differences between therapists in their patients' average levels of autonomous and controlled motivation for treatment, as well as the effects of differences among the patients within each therapist's caseload. Between and within-therapist differences in the SDT construct of perceived relational support were explored as predictors of patients' motivation. Nineteen therapists treated 63 patients in an outpatient clinic providing manualized interpersonal therapy (IPT) for depression. Patients completed the BDI-II at pretreatment, posttreatment, and each treatment session. The Impact Message Inventory was administered at the third session and scored for perceived therapist friendliness, a core element of relational support. We created between-therapists (therapist-level) scores by averaging over the patients in each therapist's caseload; within-therapist (patient-level) scores were computed by centering within each therapist's caseload. As expected, better outcome was predicted by higher levels of therapist-level and patient-level autonomous motivation and by lower levels of therapist-level and patient-level controlled motivation. In turn, autonomous motivation was predicted by therapist-level and patient-level relational support (friendliness). Controlled motivation was predicted solely by patient self-critical perfectionism. The results extend past work by demonstrating that both between-therapists and within-therapist differences in motivation predict outcome. As well, the results suggest that therapists should monitor their interpersonal impact so as to provide relational support. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
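The between-therapists and within-therapist scores described above are straightforward to compute; the sketch below shows one way to do it with pandas, using hypothetical column names and toy data rather than the study's dataset.

```python
# Minimal sketch of the between/within decomposition described above, using pandas.
# Column names ("therapist", "autonomous_motivation") are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "therapist": ["A", "A", "A", "B", "B", "C"],
    "autonomous_motivation": [5.0, 6.0, 7.0, 3.0, 4.0, 6.5],
})

# Therapist-level (between-therapists) score: the caseload mean for each therapist.
df["therapist_level"] = df.groupby("therapist")["autonomous_motivation"].transform("mean")

# Patient-level (within-therapist) score: each patient centered on their therapist's caseload mean.
df["patient_level"] = df["autonomous_motivation"] - df["therapist_level"]

print(df)
```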
Information Processing Capabilities in Performers Differing in Levels of Motor Skill
1979-01-01
Craik, F. I. M., & Lockhart, R. S. Levels of processing: A framework for memory research. Journal of Verbal Learning and Verbal Behavior, 1972, 11, 671-684. … ARI Technical Report: Information Processing Capabilities in Performers Differing in Levels of Motor Skill, by Robert N. Singer. … INTRODUCTION: In the human behaving systems model developed by Singer, Gerson, and…
Ramos, María Concepción; Romero, María Paz
2017-08-01
The present study investigated potassium (K) levels in the petiole and other grape tissues during ripening in Vitis vinifera Shiraz and Cabernet Sauvignon grown in areas differing in vigour, with and without leaf thinning. Potassium levels in petiole, seeds, skin and flesh were related to grape pH, acidity, berry weight and total soluble solids. Differences in petiole K levels were in accordance with the differences in soil K. Leaf thinning gave rise to higher K levels in the petiole but, in grape tissues, the differences were not significant in all samplings, with greater differences at the end of the growing cycle. Potassium levels per berry in grape tissues increased from veraison to harvest, with K mainly accumulated in skins and, to a lesser extent, in flesh. Potassium levels in flesh correlated positively with pH and total soluble solids, whereas the correlation with titratable acidity was negative. Grape juice pH and total soluble solids likewise correlated positively with K, whereas titratable acidity correlated negatively. Leaf thinning increased K levels in the petiole, although differences in K levels in grape tissues were not significant. This suggests the need to consider berry K concentration when aiming to optimise K fertilisation programmes. © 2016 Society of Chemical Industry.
Different level of population differentiation among human genes.
Wu, Dong-Dong; Zhang, Ya-Ping
2011-01-14
During the colonization of the world after dispersal out of Africa, modern humans encountered changeable environments, and substantial phenotypic variation involving diverse behaviors, lifestyles and cultures was generated among the different modern human populations. Here, we study the level of population differentiation of human genes among different populations. Intriguingly, genes involved in osteoblast development were identified as being enriched with higher FST SNPs, a result consistent with the proposed role of the skeletal system in accounting for variation among human populations. Genes involved in the development of hair follicles, where hair is produced, were also found to have higher levels of population differentiation, consistent with hair morphology being a distinctive trait among human populations. Other genes that showed higher levels of population differentiation include those involved in pigmentation, spermatid, nervous system and organ development, and some metabolic pathways, but few involved with the immune system. Disease-related genes show an excess of SNPs with lower levels of population differentiation, probably due to purifying selection. Surprisingly, we find that Mendelian-disease genes appear to have a significant excess of SNPs with high levels of population differentiation, possibly because the incidence and susceptibility of these diseases differ among populations. As expected, microRNA-regulated genes show lower levels of population differentiation due to purifying selection. Our analysis demonstrates different levels of population differentiation among human populations for different gene groups.
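The analysis above ranks gene groups by the FST of their SNPs. As a reference point, the sketch below computes a simple two-population FST per SNP from allele frequencies using a basic Nei-style estimator; the authors' actual estimator and data pipeline are not specified in the abstract, so this is only a generic illustration.

```python
# Generic two-population F_ST estimate per SNP from allele frequencies
# (a simple Nei-style estimator; not necessarily the estimator used in the study).

def fst_two_pops(p1: float, p2: float) -> float:
    """F_ST = (H_T - H_S) / H_T, where H_S is the mean within-population heterozygosity
    and H_T the heterozygosity of the pooled population (equal sample sizes assumed)."""
    p_bar = (p1 + p2) / 2
    h_t = 2 * p_bar * (1 - p_bar)
    h_s = (2 * p1 * (1 - p1) + 2 * p2 * (1 - p2)) / 2
    return 0.0 if h_t == 0 else (h_t - h_s) / h_t

# A SNP with very different allele frequencies between populations has high F_ST...
print(round(fst_two_pops(0.9, 0.1), 3))   # ~0.64
# ...while similar frequencies give F_ST near zero.
print(round(fst_two_pops(0.50, 0.52), 3))  # ~0.0
```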
Qin, Xike; Liu, Bolin; Soulard, Jonathan; Morse, David; Cappadocia, Mario
2006-01-01
A method for the quantification of S-RNase levels in single styles of self-incompatible Solanum chacoense was developed and applied toward an experimental determination of the S-RNase threshold required for pollen rejection. It was found that, when single-style values are averaged, accumulated levels of the S(11)- and S(12)-RNases can differ up to 10-fold within a genotype, while accumulated levels of the S(12)-RNase can differ by over 3-fold when different genotypes are compared. Surprisingly, the amount of S(12)-RNase accumulated in different styles of the same plant can differ by over 20-fold. A low value of 160 ng S-RNase was measured in individual styles of fully incompatible plants, and a high value of 68 ng in a sporadic self-compatible (SSC) line during a bout of complete compatibility, suggesting that these values bracket the threshold level of S-RNase needed for pollen rejection. Remarkably, correlations of S-RNase values with average fruit set in different plant lines displaying SSC to different extents, as well as with fruit set in immature flowers, are all consistent with a threshold value of 80 ng S(12)-RNase. Taken together, these results suggest that S-RNase levels alone are the principal determinant of the incompatibility phenotype. Interestingly, while the S-RNase threshold required for rejection of S(12)-pollen from a given genetic background is the same in styles of different genetic backgrounds, it is different when pollen donors of different genetic backgrounds are used. These results reveal a previously unsuspected level of complexity in the incompatibility reaction.
Predicting the impact of tsunami in California under rising sea level
NASA Astrophysics Data System (ADS)
Dura, T.; Garner, A. J.; Weiss, R.; Kopp, R. E.; Horton, B.
2017-12-01
The flood hazard for the California coast depends not only on the magnitude, location, and rupture length of Alaska-Aleutian subduction zone earthquakes and their resultant tsunamis, but also on rising sea levels, which combine with tsunamis to produce overall flood levels. The magnitude of future sea-level rise remains uncertain even on the decadal scale, with future sea-level projections becoming even more uncertain at timeframes of a century or more. Earthquake statistics indicate that timeframes of ten thousand to one hundred thousand years are needed to capture rare, very large earthquakes. Because of the different timescales between reliable sea-level projections and earthquake distributions, simply combining the different probabilities in the context of a tsunami hazard assessment may be flawed. Here, we considered 15 earthquakes between Mw 8 and Mw 9.4 bounded by -171°W and -140°W on the Alaska-Aleutian subduction zone. We employed 24 realizations at each magnitude with random epicenter locations and different fault length-to-width ratios, and simulated the tsunami evolution from these 360 earthquakes at each decade from the years 2000 to 2200. These simulations were then carried out for different sea-level-rise projections to analyze the future flood hazard for California. Looking at the flood levels at tide gauges, we found that the flood level simulated at, for example, the year 2100 (including the respective sea-level change) differs from the flood level calculated by adding the flood for the year 2000 to the sea-level change prediction for the year 2100. This is consistent across all sea-level-rise scenarios, and the difference in flood levels ranges between 5% and 12% for the larger half of the given magnitude interval. Focusing on flood levels at the tide gauge in the Port of Los Angeles, the most probable flood level (including all earthquake magnitudes) in the year 2000 was 5 cm. Depending on the sea-level predictions, by the year 2050 the most probable flood levels could rise to 20 to 30 cm, and increase significantly from 2100 to 2200 to between 0.5 m and 2.5 m. Aside from the significant increase in flood level, it should be noted that the range over which potential most probable flood levels can vary is significant and poses a tremendous challenge for long-term planning of hazard-mitigation measures.
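The key point above is that sea-level rise must enter the tsunami simulation itself rather than being added to a year-2000 flood level afterwards. The sketch below mimics the ensemble workflow (15 magnitudes, 24 realizations each, one sea-level-rise scenario, most probable flood level taken as the mode of the ensemble); simulate_flood is a toy placeholder standing in for the actual hydrodynamic model, and all numbers are illustrative assumptions.

```python
# Schematic of the ensemble workflow described above (not the authors' code).
import numpy as np

rng = np.random.default_rng(0)

def simulate_flood(magnitude: float, sea_level_rise_m: float) -> float:
    """Toy stand-in: flood level at a tide gauge for one earthquake realization (metres)."""
    tsunami = 0.02 * (magnitude - 7.5) ** 3 + rng.normal(0, 0.02)
    # Nonlinear interaction: a higher base water level slightly amplifies onshore flooding,
    # so the result differs from simply adding sea-level rise to a present-day flood level.
    return max(0.0, tsunami * (1 + 0.1 * sea_level_rise_m) + sea_level_rise_m)

magnitudes = np.linspace(8.0, 9.4, 15)                 # 15 magnitudes, as in the study design
floods = [simulate_flood(m, sea_level_rise_m=0.5)      # one assumed sea-level-rise scenario
          for m in magnitudes for _ in range(24)]      # 24 realizations per magnitude

# "Most probable" flood level: the mode of the ensemble histogram.
counts, edges = np.histogram(floods, bins=20)
mode_bin = counts.argmax()
print(f"most probable flood level ≈ {(edges[mode_bin] + edges[mode_bin + 1]) / 2:.2f} m")
```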
Comments About a Chameleon Theory: Level I/Level II.
ERIC Educational Resources Information Center
Horn, John; Stankov, Lazar
1982-01-01
Jensen's ideas about two levels of intellectual abilities are criticized as being oversimplified. More than two levels of intellectual abilities and relationships between variables reflecting more than racial and socioeconomic status (SES) differences are suggested, arguing that Jensen's statements about race and SES differences are not properly…
ERIC Educational Resources Information Center
Sallayici, Mustafa; Eroglu Kolayis, Ipek; Kesilmis, Inci; Kesilmis, Mehmet Melih
2018-01-01
The objective of this study was to examine athletes' anxiety, motivation, and imagination in competitions of different severity levels. The research was conducted on 37 elite-level swimming athletes (18 female, 19 male). To measure the level of imagination, an imagination inventory in sports was used, and to measure trait anxiety levels, the STAI were…
Heavy metals in laughing gulls: Gender, age and tissue differences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gochfeld, M.; Belant, J.L.; Shukla, T.
1996-12-01
The authors examined concentrations of lead, cadmium, mercury, manganese, selenium, and chromium in feathers, liver, kidney, heart, and muscle of known-aged laughing gulls (Larus atricilla) that hatched in Barnegat Bay, New Jersey and were collected at John F. Kennedy International Airport, New York 1 to 7 years later. Concentrations differed significantly among tissues, and tissue entered all the regression models explaining the greatest variation in metal levels. Age of bird contributed significantly to the models for lead, cadmium, selenium, and chromium. Although there were significant gender differences in all body measurements except wing length, there were few differences in metal levels. Males had significantly higher lead levels in feathers, and females had significantly higher selenium levels in heart and muscle tissue. For lead, 3-year olds had the highest levels in the heart, liver, and kidney, and levels were lower thereafter. Mercury levels in feathers and heart decreased significantly with age. Cadmium levels increased significantly with age for feathers, heart, liver, and muscle, although there was a slight decrease in the 7-year olds. Selenium levels decreased significantly with age for all tissues. Chromium levels increased with age for liver and heart.
Heavy metals in fish from the Aleutians: Interspecific and locational differences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burger, Joanna
The objectives of this study were to examine levels of arsenic, cadmium, lead, mercury and selenium in edible tissue of seven species of marine fish collected from several Aleutian islands (in 2004) to determine: (1) interspecific differences, (2) locational differences (among Aleutian Islands), (3) size-related differences in any metal levels within a species, and (4) potential risk to the fish or to predators on the fish, including humans. We also compared metal levels to those of three other fish species previously examined in detail, as well as examining metals in the edible tissue of octopus (Octopus dofleini). Octopus did not have the highest levels of any metal. There were significant interspecific differences in all metal levels among the fish species, although the differences were less than an order of magnitude, except for arsenic (mean of 19,500 ppb in Flathead sole, Hippoglossoides elassodon). Significant intraisland variation occurred among the four sites on Amchitka, but there was not a consistent pattern. There were significant interisland differences for some metals and species. Mercury levels increased significantly with size for several species; lead increased significantly for only one fish species; and cadmium and selenium decreased significantly with size for halibut (Hippoglossus stenolepis). The Alaskan Department of Health and Social Services supports unrestricted consumption of most Alaskan fish species for all people, including pregnant women. Most mean metal concentrations were well below the levels known to adversely affect the fish themselves, or predators that consume them (including humans), except for mercury in three fish species (mean levels just below 0.3 ppm), and arsenic in two fish species. However, even at low mercury levels, people who consume fish almost daily will exceed guideline values from the Centers for Disease Control and the Environmental Protection Agency. Highlights: • Cadmium, lead, mercury and selenium levels differed among 10 fish species from the Aleutians. • Mean arsenic was as high as 19,500 ppb (flathead sole, Hippoglossoides elassodon). • Mercury levels increased significantly with fish size for several species. • Metal levels were generally below adverse-effects levels for fish and their predators. • Mercury and arsenic might pose a risk to human consumers, and require further examination.
Burger, Joanna; Elbin, Susan
2015-03-01
Birds living in coastal areas are exposed to severe storms and tidal flooding during the nesting season, but also to contaminants that move up the food chain from the water column and sediment to their prey items. We examine metals in Herring Gull (Larus argentatus) and Great Black-backed Gull (Larus marinus) eggs collected from the New York/New Jersey harbor estuary in 2012 and in 2013 to determine if there were significant yearly differences in metal levels. We test the null hypothesis that there were no significant yearly differences in metal levels. We investigate whether there were consistent differences in metals from 2012 to 2013 that might suggest a storm-related effect because Superstorm Sandy landed in New Jersey in October 2012 with high winds and extensive flooding, and view this research as exploratory. Except for arsenic, there were significant inter-year variations in the mean levels for all colonies combined for Herring Gull, and for lead, mercury and selenium for Great Black-backed Gulls. All metal levels in 2013 were less than in 2012, except for lead. These differences were present for individual colonies as well. Metal levels varied significantly among islands for Herring Gulls in both years (except for cadmium in 2013). No one colony had the highest levels of all metals for Herring Gulls. A long term data set on mercury levels in Herring Gulls indicated that the differences between 2012 and 2013 were greater than usual. Several different factors could account for these differences, and these are discussed.
Sensitivity of Aerosol Multi-Sensor Daily Data Intercomparison to the Level 3 Dataday Definition
NASA Technical Reports Server (NTRS)
Leptoukh, Gregory; Lary, David; Shen, Suhung; Lynnes, Christopher
2010-01-01
Topics include: why people use Level 3 products, how one might go wrong with Level 3 products, differences in L3 from different sensors, the Level 3 data day definition, MODIS vs. MODIS, AOD MODIS Terra vs. Aqua in the Pacific, AOD Aqua MODIS vs. MISR correlation map, MODIS vs. MISR on Terra, the MODIS atmospheric data day definition, orbit time difference for Terra and Aqua 2009-01-06, maximum time difference for Terra (calendar day), artifact explanations, data day definitions, local time distribution, the spatial (local time) data day definition, maximum time difference between Terra and Aqua, removing the artifact in 16-day AOD correlation, MODIS cloud top pressure, and MODIS Terra and Aqua vs. AIRS cloud top pressure.
Lee, Bun-Hee; Hong, Jin-Pyo; Hwang, Jung-A; Na, Kyoung-Sae; Kim, Won-Joong; Trigo, Jose; Kim, Yong-Ku
2016-02-01
Some clinical studies have reported reduced peripheral glial cell line-derived neurotrophic factor (GDNF) level in elderly patients with major depressive disorder (MDD). We verified whether a reduction in plasma GDNF level was associated with MDD. Plasma GDNF level was measured in 23 healthy control subjects and 23 MDD patients before and after 6 weeks of treatment. Plasma GDNF level in MDD patients at baseline did not differ from that in healthy controls. Plasma GDNF in MDD patients did not differ significantly from baseline to the end of treatment. GDNF level was significantly lower in recurrent-episode MDD patients than in first-episode patients before and after treatment. Our findings revealed significantly lower plasma GDNF level in recurrent-episode MDD patients, although plasma GDNF levels in MDD patients and healthy controls did not differ significantly. The discrepancy between our study and previous studies might arise from differences in the recurrence of depression or the ages of the MDD patients.
Evidence for a confidence-accuracy relationship in memory for same- and cross-race faces.
Nguyen, Thao B; Pezdek, Kathy; Wixted, John T
2017-12-01
Discrimination accuracy is usually higher for same- than for cross-race faces, a phenomenon known as the cross-race effect (CRE). According to prior research, the CRE occurs because memories for same- and cross-race faces rely on qualitatively different processes. However, according to a continuous dual-process model of recognition memory, memories that rely on qualitatively different processes do not differ in recognition accuracy when confidence is equated. Thus, although there are differences in overall same- and cross-race discrimination accuracy, confidence-specific accuracy (i.e., recognition accuracy at a particular level of confidence) may not differ. We analysed datasets from four recognition memory studies on same- and cross-race faces to test this hypothesis. Confidence ratings reliably predicted recognition accuracy when performance was above chance levels (Experiments 1, 2, and 3) but not when performance was at chance levels (Experiment 4). Furthermore, at each level of confidence, confidence-specific accuracy for same- and cross-race faces did not significantly differ when overall performance was above chance levels (Experiments 1, 2, and 3) but significantly differed when overall performance was at chance levels (Experiment 4). Thus, under certain conditions, high-confidence same-race and cross-race identifications may be equally reliable.
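Confidence-specific accuracy as defined above is simply the proportion of correct recognitions at each confidence level, computed separately for same- and cross-race trials. The sketch below illustrates that calculation on a hypothetical data frame; the column names and values are made up, not the studies' data.

```python
# Minimal sketch of "confidence-specific accuracy": proportion correct at each confidence
# level, split by same- vs cross-race trials. The data frame here is hypothetical.
import pandas as pd

trials = pd.DataFrame({
    "race_condition": ["same", "same", "same", "cross", "cross", "cross", "same", "cross"],
    "confidence":     [3,      1,      3,      3,       1,       2,       2,      3],
    "correct":        [1,      0,      1,      1,       0,       1,       1,      0],
})

conf_specific = (trials
                 .groupby(["race_condition", "confidence"])["correct"]
                 .mean()
                 .rename("accuracy"))
print(conf_specific)
```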
Meyer, Johan; Jaspers, Veerle L B; Eens, Marcel; de Coen, Wim
2009-11-01
Although feathers have been used successfully for monitoring heavy metals and organic pollutants, there are currently no data available on the use of feathers as indicators of perfluorinated chemical (PFC) exposure in birds. Also, no study has evaluated PFC levels in birds with different diets from different habitats. In the current study we investigated the PFC exposure of five different bird species from the same geographic region in Belgium, using both feathers and liver tissue. The highest mean liver perfluorooctane sulfonate (PFOS) levels were found in the Grey Heron (476 ng/g ww) followed by the Herring Gull (292 ng/g ww) and Eurasian Sparrowhawk (236 ng/g ww), whereas the Eurasian Magpie (17 ng/g ww) and the Eurasian Collared Dove (12 ng/g ww) had the lowest levels. The PFOS levels in the feathers showed a different pattern. The Grey Heron had the highest feather PFOS levels (247 ng/g dw), the Eurasian Sparrowhawk (102 ng/g dw) had the second highest feather PFOS levels, followed by the Herring Gull (79 ng/g dw) and the Eurasian Collared Dove (48 ng/g dw), and the lowest levels were found in the Eurasian Magpie (31 ng/g dw). Overall, there was a significant positive correlation (Pearson, R=0.622, p<0.01) between the feather and liver PFOS levels, indicating that feathers could be an alternative bioindicator for PFOS exposure in birds. However, caution should be taken as there was no significant correlation between the PFOS levels in the feathers and livers of the individual species. In general, birds from a higher trophic level had higher PFC levels in their tissues. This indicates that diet plays a role in PFC exposure in birds and confirms the bioaccumulation potential of PFC.
ERIC Educational Resources Information Center
Korat, Ofra; Haglili, Sharon
2007-01-01
This study examined whether maternal evaluations of children's emergent literacy (EL) levels, maternal mediation during a book-reading activity with children, and the children's EL levels differ as a function of socioeconomic status (SES; low vs. high), and whether the relationships between these variables differ as a function of SES levels. Study…
Assessment and Its Outcomes: The Influence of Disciplines and Institutions
ERIC Educational Resources Information Center
Simpson, Adrian
2016-01-01
Existing research provides evidence at the module level of systematic differences in patterns of assessment, marks achieved and distributions of marks between different disciplines. This paper examines those issues at the degree course level, and suggests reasons for the presence or absence of those module-level relationships at this higher level.…
Miller, Heather B; Witherow, D Scott; Carson, Susan
2012-01-01
The North Carolina State University Biotechnology Program offers laboratory-intensive courses to both undergraduate and graduate students. In "Manipulation and Expression of Recombinant DNA," students are separated into undergraduate and graduate sections for the laboratory, but not the lecture, component. Evidence has shown that students prefer pairing with someone of the same academic level. However, retention of main ideas in peer learning environments has been shown to be greater when partners have dissimilar abilities. Therefore, we tested the hypothesis that there will be enhanced student learning when lab partners are of different academic levels. We found that learning outcomes were met by both levels of student, regardless of pairing. Average undergraduate grades on every assessment method increased when undergraduates were paired with graduate students. Many of the average graduate student grades also increased modestly when graduate students were paired with undergraduates. Attitudes toward working with partners dramatically shifted toward favoring working with students of different academic levels. This work suggests that offering dual-level courses in which different-level partnerships are created does not inhibit learning by students of different academic levels. This format is useful for institutions that wish to offer "boutique" courses in which student enrollment may be low, but specialized equipment and faculty expertise are needed.
Uddin, Shahadat
2016-02-04
A patient-centric care network can be defined as a network among a group of healthcare professionals who provide treatments to common patients. Various multi-level attributes of the members of this network have a substantial influence on its perceived level of performance. In order to assess the impact of different multi-level attributes of patient-centric care networks on healthcare outcomes, this study first captured patient-centric care networks for 85 hospitals using a health insurance claim dataset. From these networks, this study then constructed physician collaboration networks based on the concept of patient-sharing networks among physicians. A multi-level regression model was then developed to explore the impact of different attributes, organised at two levels, on hospitalisation cost and hospital length of stay. In the Level-1 model, the average visits per physician significantly predicted both hospitalisation cost and hospital length of stay. The number of different physicians significantly predicted only the hospitalisation cost, and this effect was significantly moderated by the age, gender and comorbidity score of patients. All Level-1 findings showed significant variance across physician collaboration networks having different community structures and densities. These findings could be utilised as a reflective measure by healthcare decision makers. Moreover, healthcare managers could consider them in developing effective healthcare environments.
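A two-level model of the kind described, with patients nested in physician collaboration networks, could be fitted as a mixed-effects regression. The sketch below is a generic illustration using statsmodels with synthetic data and hypothetical variable names; it is not the authors' model specification.

```python
# Sketch of a two-level model in the spirit described above: Level-1 predictors
# (visits per physician, number of physicians, age) with a random intercept for each
# physician collaboration network. Variable names and data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "network_id": rng.integers(0, 20, n),              # grouping factor (care network)
    "avg_visits_per_physician": rng.normal(5, 2, n),
    "n_physicians": rng.integers(2, 15, n),
    "age": rng.integers(30, 90, n),
})
df["hospitalisation_cost"] = (
    2000 + 300 * df["avg_visits_per_physician"] + 150 * df["n_physicians"]
    + 20 * df["age"] + rng.normal(0, 500, n)
)

model = smf.mixedlm("hospitalisation_cost ~ avg_visits_per_physician + n_physicians + age",
                    data=df, groups=df["network_id"])
print(model.fit().summary())
```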
Vermeulen, C J; Cremers, T I F H; Westerink, B H C; Van De Zande, L; Bijlsma, R
2006-07-01
Among various other mechanisms, genetic differences in the production of reactive oxygen species (ROS) are thought to underlie genetic variation for longevity. Here we report on possible changes in ROS production-related processes in response to selection for divergent virgin lifespan in Drosophila. The selection lines differed significantly in dopamine levels and in melanin pigmentation, which is associated with dopamine levels at eclosion. These findings confirm that variation in dopamine levels is associated with genetic variation for longevity. Dopamine has previously been implicated in ROS production and in the occurrence of age-related neurodegenerative diseases. In addition, we propose a possible proximate mechanism by which dopamine levels affect longevity in Drosophila: we tested whether increased dopamine levels were associated with a "rate-of-living" syndrome of increased activity and respiration levels, thus aggravating the level of oxidative stress. Findings on locomotor activity and oxygen consumption of short-lived flies were in line with expectations. However, the relation is not straightforward, as flies of the long-lived lines did not show any consistent differences in pigmentation or dopamine levels with respect to the control lines. Moreover, long-lived flies also had increased locomotor activity, but showed no consistent differences in respiration rate. This strongly suggests that the responses for increased and decreased lifespan may be obtained by different mechanisms.
Organisational commitment in nurses: is it dependent on age or education?
Jones, April
2015-02-01
In hospitals in the United States, the ratio of nurses to patients is declining, resulting in an increase in workloads for the remaining nurses. Consequently, the level of commitment that these nurses have to their jobs is important. Outside health care, employees from different generations working for a variety of organisations differ in their levels of organisational commitment, but this information has not been available for nurses. This study, carried out in the state of Alabama, looks at whether nurses from different generations differ in their levels of organisational commitment, and also whether there are any differences in organisational commitment between licensed practical nurses (LPNs) and registered nurses (RNs). A questionnaire designed to measure levels of organisational commitment was answered by 145 nurses. The results were analysed for any differences in organisational commitment in nurses from different generations and with different nursing degrees. Nurses from different generations showed the same levels of organisational commitment, but LPNs showed significantly less affective commitment, that is, lower feelings of loyalty to their workplace, than RNs. This information may be useful for hospital administrators and human resource managers in the United States to highlight the value of flexible incentive packages to address the needs of a diverse workforce. For healthcare employers in the UK, the concept that there is an association between nursing qualifications and levels of organisational commitment is critical for building organisational stability and effectiveness, and for nurse recruitment and retention.
Water potential in ponderosa pine stands of different growing-stock levels
J. M. Schmid; S. A. Mata; R. K. Watkins; M. R. Kaufmann
1991-01-01
Water potential was measured in five ponderosa pine (Pinus ponderosa Laws.) in each of four stands of different growing-stock levels at two locations in the Black Hills of South Dakota. Mean water potentials at dawn and midday varied significantly among growing-stock levels at one location, but differences were not consistent. Mean dawn and midday water potentials...
Comparison Between CCCM and CloudSat Radar-Lidar (RL) Cloud and Radiation Products
NASA Technical Reports Server (NTRS)
Ham, Seung-Hee; Kato, Seiji; Rose, Fred G.; Sun-Mack, Sunny
2015-01-01
To enhance cloud property retrievals, LaRC and CIRA each developed a combination algorithm for properties obtained from the passive, active and imager instruments in the A-Train satellite constellation. When global cloud fractions are compared, the LaRC-produced CERES-CALIPSO-CloudSat-MODIS (CCCM) product shows a larger low-level cloud fraction over the tropical ocean, while the CIRA-produced Radar-Lidar (RL) product shows a larger mid-level cloud fraction for high-latitude regions. The difference in low-level cloud fraction is due to the different filtering methods applied to lidar-detected cloud layers, whereas the difference in mid-level clouds arises from the different priorities given to cloud boundaries from lidar and radar.
Ammonia pollution characteristics of centralized drinking water sources in China.
Fu, Qing; Zheng, Binghui; Zhao, Xingru; Wang, Lijing; Liu, Changming
2012-01-01
The characteristics of ammonia in drinking water sources in China were evaluated during 2005-2009. The spatial distribution and seasonal changes of ammonia in different types of drinking water sources across 22 provinces, 5 autonomous regions and 4 municipalities were investigated. The levels of ammonia in drinking water sources follow the order river > lake/reservoir > groundwater. Ammonia concentrations in river sources gradually decreased from 2005 to 2008, while no obvious change was observed in lake/reservoir and groundwater drinking water sources. The proportion of each type of drinking water source differs among regions. In river drinking water sources, ammonia levels varied among regions and changed seasonally; the highest values and widest range of annual ammonia were found in the Southeast region, while the lowest values were found in the Southwest region. In lake/reservoir drinking water sources, ammonia levels did not vary markedly among regions. In groundwater drinking water sources, ammonia levels varied markedly among regions owing to geological permeability and the natural features of the regions. In the drinking water sources with higher ammonia levels, there are enterprises and wastewater drainages within the protected areas of the drinking water sources.
Zheng, Shuang; Zhou, Huan; Han, Tingting; Li, Yangxue; Zhang, Yao; Liu, Wei; Hu, Yaomin
2015-04-29
To explore clinical characteristics and beta cell function in Chinese patients with newly diagnosed drug-naive type 2 diabetes mellitus (T2DM) with different levels of serum triglyceride (TG). Patients with newly diagnosed T2DM (n = 624) were enrolled and divided into groups according to their serum TG levels. All patients underwent oral glucose tolerance tests and insulin release tests. Demographic data, lipid profiles, glucose levels, and insulin profiles were compared between the groups. A basal insulin secretion index (homeostasis model assessment of beta cell function, HOMA-β), the modified beta cell function index (MBCI), glucose disposition indices (DI), and an early insulin secretion index (insulinogenic index, IGI) were used to evaluate beta cell function. Newly diagnosed T2DM patients with hypertriglyceridemia were younger and more obese and had worse lipid profiles, worse glucose profiles, and higher insulin levels than those with normal TG. There was no difference in early-phase insulin secretion among the groups of newly diagnosed T2DM patients with different TG levels. Basal beta cell function (HOMA-β and MBCI) initially increased with rising TG levels and then decreased as TG levels rose further. Insulin sensitivity was relatively high in patients with a low TG level and low in those with a high TG level. Hypertriglyceridemia influences the clinical characteristics and β cell function of Chinese patients with newly diagnosed T2DM. Better management of dyslipidemia may, to some extent, reduce the effect of lipotoxicity, thereby improving glucose homeostasis in patients with newly diagnosed T2DM.
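Two of the indices named above have widely used textbook formulas: HOMA-β = 20 × fasting insulin / (fasting glucose − 3.5) and the insulinogenic index IGI = ΔInsulin30 / ΔGlucose30. The sketch below implements those standard forms; whether the study used exactly these definitions and units is an assumption, and MBCI and DI are omitted because their definitions vary between papers.

```python
# Standard textbook formulas for two of the indices named above; insulin in µU/mL,
# glucose in mmol/L. Whether the study used exactly these definitions is an assumption.

def homa_beta(fasting_insulin: float, fasting_glucose: float) -> float:
    """HOMA beta-cell function index: 20 * FINS / (FPG - 3.5)."""
    return 20.0 * fasting_insulin / (fasting_glucose - 3.5)

def insulinogenic_index(ins_0: float, ins_30: float, glu_0: float, glu_30: float) -> float:
    """Early-phase insulin secretion: rise in insulin over rise in glucose at 30 min of an OGTT."""
    return (ins_30 - ins_0) / (glu_30 - glu_0)

# Example: fasting insulin 12 µU/mL, fasting glucose 7.0 mmol/L; 30-min OGTT values below.
print(round(homa_beta(12.0, 7.0), 1))                        # 68.6
print(round(insulinogenic_index(12.0, 45.0, 7.0, 11.5), 1))  # 7.3
```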
Blood glucose level reconstruction as a function of transcapillary glucose transport.
Koutny, Tomas
2014-10-01
A diabetic patient occasionally undergoes detailed monitoring of their glucose levels. Over the course of a few days, a monitoring system provides a detailed track of the interstitial fluid glucose levels measured in subcutaneous tissue. A discrepancy in the blood and interstitial fluid glucose levels is unimportant because the blood glucose levels are not measured continuously: approximately five blood glucose samples are taken per day, whereas the interstitial fluid glucose level is usually measured every 5 min. An increased frequency of blood glucose sampling would cause discomfort for the patient; thus, there is a need for methods to estimate blood glucose levels from the glucose levels measured in subcutaneous tissue. The Steil-Rebrin model is widely used to describe the relationship between blood and interstitial fluid glucose dynamics. However, we measured glucose level patterns for which the Steil-Rebrin model does not hold. We therefore based our research on a different model that relates present blood and interstitial fluid glucose levels to future interstitial fluid glucose levels. Using this model, we derived an improved model for calculating blood glucose levels. In the experiments conducted, this model outperformed the Steil-Rebrin model while introducing no additional requirements for glucose sample collection. In subcutaneous tissue, 26.71% of the blood glucose levels calculated with the Steil-Rebrin model had absolute relative differences from smoothed measured blood glucose levels of 5% or less, compared with 63.01% of the levels calculated with the proposed model. Within a 20% difference interval, the corresponding proportions were 79.45% for the Steil-Rebrin model and 95.21% for the proposed model. Copyright © 2014 Elsevier Ltd. All rights reserved.
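For context, the Steil-Rebrin relationship referenced above is commonly written as a first-order lag, τ·dIG/dt = −IG(t) + g·BG(t), which can be inverted to estimate blood glucose as BG ≈ (IG + τ·dIG/dt)/g. The sketch below shows that inversion on two consecutive sensor readings; the gain and lag values are illustrative assumptions, and this is not the improved model proposed by the author.

```python
# A minimal sketch of the first-order Steil-Rebrin-type relationship referenced above:
#   tau * dIG/dt = -IG(t) + g * BG(t)
# inverted to estimate blood glucose from interstitial readings as
#   BG(t) ≈ (IG(t) + tau * dIG/dt) / g.
# The gain g and lag tau below are illustrative assumptions, not fitted values.

def estimate_bg(ig_prev: float, ig_now: float, dt_min: float,
                tau_min: float = 10.0, gain: float = 1.0) -> float:
    """Estimate blood glucose (mmol/L) from two consecutive interstitial readings."""
    dig_dt = (ig_now - ig_prev) / dt_min
    return (ig_now + tau_min * dig_dt) / gain

# Interstitial glucose rose from 6.0 to 6.4 mmol/L over a 5-minute sampling interval,
# so blood glucose is estimated to be running ahead of the sensor reading.
print(round(estimate_bg(6.0, 6.4, dt_min=5.0), 2))  # 7.2
```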
Human Occipital and Parietal GABA Selectively Influence Visual Perception of Orientation and Size.
Song, Chen; Sandberg, Kristian; Andersen, Lau Møller; Blicher, Jakob Udby; Rees, Geraint
2017-09-13
GABA is the primary inhibitory neurotransmitter in human brain. The level of GABA varies substantially across individuals, and this variability is associated with interindividual differences in visual perception. However, it remains unclear whether the association between GABA level and visual perception reflects a general influence of visual inhibition or whether the GABA levels of different cortical regions selectively influence perception of different visual features. To address this, we studied how the GABA levels of parietal and occipital cortices related to interindividual differences in size, orientation, and brightness perception. We used visual contextual illusion as a perceptual assay since the illusion dissociates perceptual content from stimulus content and the magnitude of the illusion reflects the effect of visual inhibition. Across individuals, we observed selective correlations between the level of GABA and the magnitude of contextual illusion. Specifically, parietal GABA level correlated with size illusion magnitude but not with orientation or brightness illusion magnitude; in contrast, occipital GABA level correlated with orientation illusion magnitude but not with size or brightness illusion magnitude. Our findings reveal a region- and feature-dependent influence of GABA level on human visual perception. Parietal and occipital cortices contain, respectively, topographic maps of size and orientation preference in which neural responses to stimulus sizes and stimulus orientations are modulated by intraregional lateral connections. We propose that these lateral connections may underlie the selective influence of GABA on visual perception. SIGNIFICANCE STATEMENT GABA, the primary inhibitory neurotransmitter in human visual system, varies substantially across individuals. This interindividual variability in GABA level is linked to interindividual differences in many aspects of visual perception. However, the widespread influence of GABA raises the question of whether interindividual variability in GABA reflects an overall variability in visual inhibition and has a general influence on visual perception or whether the GABA levels of different cortical regions have selective influence on perception of different visual features. Here we report a region- and feature-dependent influence of GABA level on human visual perception. Our findings suggest that GABA level of a cortical region selectively influences perception of visual features that are topographically mapped in this region through intraregional lateral connections. Copyright © 2017 Song, Sandberg et al.
White, Brandi M; Bonilha, Heather Shaw; Ellis, Charles
2016-03-01
Childhood lead poisoning is a serious public health problem with long-term adverse effects. Healthy People 2020's environmental health objective aims to reduce childhood blood lead levels; however, efforts may be hindered by potential racial/ethnic differences. Recent recommendations have lowered the blood lead reference level. This review examined racial/ethnic differences in blood lead levels among children under 6 years of age. We completed a search of PubMed, CINAHL, and PsycINFO databases for published works from 2002 to 2012. We identified studies that reported blood lead levels and the race/ethnicity of at least two groups. Ten studies met inclusion criteria for the review. Blood lead levels were most frequently reported for black, white, and Hispanic children. Six studies examined levels between blacks, whites, and Hispanics and two between blacks and whites. Studies reporting mean lead levels among black, whites, and Hispanics found that blacks had the highest mean blood lead level. Additionally, studies reporting blood lead ranges found that black children were more likely to have elevated levels. Studies suggest that black children have higher blood lead levels compared to other racial/ethnic groups. Future studies are warranted to obtain ample sample sizes for several racial/ethnic groups to further examine differences in lead levels.
Assessment of noise levels of the equipments used in the dental teaching institution, Bangalore.
Kadanakuppe, Sushi; Bhat, Padma K; Jyothi, C; Ramegowda, C
2011-01-01
In dental practical classes, the acoustic environment is characterized by high noise levels relative to other teaching areas, due to the exaggerated noise produced by some devices and the use of dental equipment by many users at the same time. The aims were to measure, analyze and compare noise levels of equipment among dental learning areas under different working conditions, and also to measure and compare noise levels between used and brand-new handpieces under different working conditions. Noise levels were measured and analyzed in different dental learning areas, including clinical and pre-clinical areas and laboratories, selected as representative of a variety of learning-teaching activities. The noise levels were determined using a precision noise level meter (CENTER® 325 IEC 651 TYPE II) with a microphone, and the mean of the maxima was determined. The data were collected, tabulated, and statistically analyzed using t tests. The noise levels measured varied between 64 and 97 dB(A). The differences in sound levels when the equipment was merely turned on versus during cutting operations, and also between used and brand-new equipment, were recorded. The laboratory engines had the highest noise levels, whereas the noise levels of the high-speed turbine handpieces and the low-speed contra-angle handpieces were lower. The noise levels detected in this study are considered to be close to the limit of risk of hearing loss.
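The used-versus-new handpiece comparison described above is a standard two-sample t test; the sketch below shows that calculation with SciPy on made-up dB(A) readings, not the study's measurements.

```python
# Illustrative two-sample t-test of the kind described above, comparing noise levels
# (dB(A)) of used versus brand-new handpieces; the readings below are hypothetical.
from scipy import stats

used_dba = [78, 81, 79, 83, 80, 82]
new_dba = [72, 74, 73, 75, 71, 74]

t_stat, p_value = stats.ttest_ind(used_dba, new_dba)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```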
Chumnanpuen, Pramote; Nookaew, Intawat; Nielsen, Jens
2013-10-16
In the yeast Saccharomyces cerevisiae, genes containing UASINO sequences are regulated by the Ino2/Ino4 and Opi1 transcription factors, and this regulation controls lipid biosynthesis. The expression level of INO2 and INO4 genes (INO-level) at different nutrient limited conditions might lead to various responses in yeast lipid metabolism. In this study, we undertook a global study on how INO-levels (transcription level of INO2 and INO4) affect lipid metabolism in yeast and we also studied the effects of single and double deletions of the two INO-genes (deficient effect). Using 2 types of nutrient limitations (carbon and nitrogen) in chemostat cultures operated at a fixed specific growth rate of 0.1 h-1 and strains having different INO-level, we were able to see the effect on expression level of the genes involved in lipid biosynthesis and the fluxes towards the different lipid components. Through combined measurements of the transcriptome, metabolome, and lipidome it was possible to obtain a large dataset that could be used to identify how the INO-level controls lipid metabolism and also establish correlations between the different components. Our analysis showed the strength of using a combination of transcriptome and lipidome analysis to illustrate the effect of INO-levels on phospholipid metabolism and based on our analysis we established a global regulatory map.
Human and great ape red blood cells differ in plasmalogen levels and composition
2011-01-01
Background Plasmalogens are ether phospholipids required for normal mammalian developmental, physiological, and cognitive functions. They have been proposed to act as membrane antioxidants and reservoirs of polyunsaturated fatty acids as well as influence intracellular signaling and membrane dynamics. Plasmalogens are particularly enriched in cells and tissues of the human nervous, immune, and cardiovascular systems. Humans with severely reduced plasmalogen levels have reduced life spans, abnormal neurological development, skeletal dysplasia, impaired respiration, and cataracts. Plasmalogen deficiency is also found in the brain tissue of individuals with Alzheimer disease. Results In a human and great ape cohort, we measured the red blood cell (RBC) levels of the most abundant types of plasmalogens. Total RBC plasmalogen levels were lower in humans than bonobos, chimpanzees, and gorillas, but higher than orangutans. There were especially pronounced cross-species differences in the levels of plasmalogens with a C16:0 moiety at the sn-1 position. Humans on Western or vegan diets had comparable total RBC plasmalogen levels, but the latter group showed moderately higher levels of plasmalogens with a C18:1 moiety at the sn-1 position. We did not find robust sex-specific differences in human or chimpanzee RBC plasmalogen levels or composition. Furthermore, human and great ape skin fibroblasts showed only modest differences in peroxisomal plasmalogen biosynthetic activity. Human and chimpanzee microarray data indicated that genes involved in plasmalogen biosynthesis show cross-species differential expression in multiple tissues. Conclusion We propose that the observed differences in human and great ape RBC plasmalogens are primarily caused by their rates of biosynthesis and/or turnover. Gene expression data raise the possibility that other human and great ape cells and tissues differ in plasmalogen levels. Based on the phenotypes of humans and rodents with plasmalogen disorders, we propose that cross-species differences in tissue plasmalogen levels could influence organ functions and processes ranging from cognition to reproduction to aging. PMID:21679470
Implication of circulating irisin levels with brown adipose tissue and sarcopenia in humans.
Choi, Hae Yoon; Kim, Sungeun; Park, Ji Woo; Lee, Nam Seok; Hwang, Soon Young; Huh, Joo Young; Hong, Ho Cheol; Yoo, Hye Jin; Baik, Sei Hyun; Youn, Byung-Soo; Mantzoros, Christos S; Choi, Kyung Mook
2014-08-01
Irisin is a novel exercise-induced myokine that drives brown-fat-like conversion of white adipose tissue and has been suggested as a promising target for the treatment of obesity-related metabolic disorders. The aim of this study was to assess the association of circulating irisin concentrations with brown adipose tissue (BAT) and/or sarcopenia in humans. We examined irisin levels in 40 BAT-positive and 40 BAT-negative women detected by ¹⁸F-fluorodeoxyglucose positron emission tomography (¹⁸FDG-PET). In a separate study, we also examined 401 subjects with or without sarcopenia defined by the skeletal muscle mass index (SMMI) and appendicular skeletal muscle (ASM)/height² using dual-energy x-ray absorptiometry. Among 6877 consecutive ¹⁸FDG-PET scans in 4736 subjects, 146 subjects (3.1%) had positive BAT scans. The BAT-detectable group and the matched BAT-undetectable group did not differ in circulating irisin levels measured using two different ELISA kits (P = .747 and P = .160, respectively). Serum irisin levels were not different between individuals with sarcopenia and those without sarcopenia using either kit (P = .305 and P = .569, respectively). Also, serum irisin levels were not different between groups defined by ASM/height² using either kit (P = .352 and P = .134, respectively). Although visceral fat area and skeletal muscle mass differed significantly across tertiles of SMMI, irisin concentrations did not differ. Circulating irisin levels were not different in individuals with detectable BAT or in those with sarcopenia compared with control subjects and were not correlated with SMMI.
Xiaolan, He; Guangjie, Bao; Linglu, Sun; Xue, Zhang; Shanying, Bao; Hong, Kang
2017-08-01
Objective The effect of different oxygen tensions on the cytoskeleton remodeling of goat temporomandibular joint (TMJ) disc cells was investigated. Methods Goat TMJ disc cells were cultured under normoxia (21% O₂) and hypoxia (2%, 4%, and 8% O₂). Toluidine blue, picrosirius red, and type I collagen immunocytochemical staining were performed to observe the changes in cell phenotype under different oxygen levels. Immunofluorescent staining and real-time reverse transcription-polymerase chain reaction analysis were then performed to identify actin, tubulin, and vimentin in the cultured disc cells. Results TMJ disc cells still displayed fibroblast characteristics under different oxygen levels, and their cytoskeletons had a regular arrangement. The fluorescence intensities of actin and vimentin were lowest at 4% O₂ (P<0.05), whereas that of tubulin was highest at 2% O₂ (P<0.05). No significant difference among the other groups was observed (P>0.05). Actin mRNA levels were considerably decreased at 2% O₂ and 4% O₂ under hypoxic conditions, and actin mRNA expression was highest at 21% O₂. Tubulin mRNA levels increased considerably at 2% O₂, while tubulin mRNA expression was lowest at 8% O₂ (P<0.05). Vimentin mRNA expression was lowest at 4% O₂ and highest at 21% O₂, and significant differences were observed between vimentin mRNA expression levels among these oxygen levels (P<0.05). Conclusion Cytoskeletons were reconstructed under different oxygen tensions, and 2% O₂ may be the optimal oxygen level for the proliferation of TMJ disc cells.
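Relative mRNA levels from real-time RT-PCR of the kind reported here are commonly summarized with the 2^-ΔΔCt method; the sketch below assumes that method (the abstract does not state the quantification scheme) and uses hypothetical Ct values and a hypothetical reference gene.

```python
# Sketch of 2^-DeltaDeltaCt relative quantification (assumed method; hypothetical Ct values).
def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Fold change of a target gene versus a control condition,
    normalized to a reference (housekeeping) gene."""
    delta_ct_sample = ct_target - ct_reference              # normalize the sample
    delta_ct_control = ct_target_ctrl - ct_reference_ctrl   # normalize the control
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2 ** (-delta_delta_ct)

# Example: actin at 2% O2 versus the 21% O2 control, GAPDH as reference (all values hypothetical).
fold = relative_expression(ct_target=24.8, ct_reference=18.1,
                           ct_target_ctrl=23.0, ct_reference_ctrl=18.0)
print(f"Relative actin expression at 2% O2: {fold:.2f}-fold of normoxia")
```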
Taylor, Miles G.
2014-01-01
Objectives. To test different forms of private insurance coverage as mediators of racial disparities in onset, persistent level, and acceleration of functional limitations among Medicare age-eligible Americans. Method. Data come from 7 waves of the Health and Retirement Study (1996–2008). Onset and progression latent growth models were used to estimate racial differences in onset, level, and growth of functional limitations among a sample of 5,755 people aged 65 and older in 1996. Employer-provided insurance, spousal insurance, and market insurance were then added to the model to test how differences in private insurance mediated the racial gap in physical limitations. Results. In baseline models, African Americans had a larger persistent level of limitations over time. Although employer-provided, spousal-provided, and market insurance were directly associated with lower persistent levels of limitation, only differences in market insurance accounted for the racial disparities in persistent level of limitations. Discussion. Results suggest that private insurance is important for reducing functional limitations, but market insurance is an important mediator of the persistently higher level of limitations observed among African Americans. PMID:24569001
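The growth-modeling idea above can be approximated with a random-intercept/random-slope mixed-effects model; this is a simplified analogue, not the authors' exact onset/progression latent growth specification, and the file name and column names are hypothetical.

```python
# Sketch: simplified random-intercept/random-slope growth model for functional limitations.
# A mixed-effects analogue of a latent growth model; data layout is hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hrs_waves.csv")   # expected columns: person_id, wave, limitations, race, market_ins

# Level (intercept) and growth (slope over waves) by race, with market insurance as a covariate.
model = smf.mixedlm("limitations ~ wave * race + market_ins",
                    data=df, groups=df["person_id"], re_formula="~wave").fit()
print(model.summary())
```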
Wang, Zhehong; Xu, Haisong
2008-12-01
In order to investigate the performance of suprathreshold color-difference tolerances with different visual scales and different perceptual correlates, a psychophysical experiment was carried out by the method of constant stimuli using CRT colors. Five hue circles at three lightness (L* = 30, 50, and 70) and chroma (C*ab = 10, 20, and 30) levels were selected to ensure that the color-difference tolerances did not exceed the color gamut of the CRT display. Twelve color centers distributed evenly every 30 degrees along each hue circle were assessed by a panel of eight observers, and the corresponding color-difference tolerances were obtained. The hue circle with L* = 50 and C*ab = 20 was assessed with three different visual scales (ΔV = 3.06, 5.92, and 8.87 CIELAB units), ranging from small to large, while the remaining hue circles were observed only with the small visual scale. The lightness tolerances had no significant correlation with the hue angles, while the chroma and hue tolerances showed considerable hue-angle dependence. The color-difference tolerances were linearly proportional to the visual scales but with different slopes. The lightness tolerances at different lightness levels but the same chroma showed the crispening effect to some extent, while the chroma and hue tolerances decreased with increasing lightness. For the color-difference tolerances at different chroma levels but the same lightness, there was no correlation between the lightness tolerances and the chroma levels, while the chroma and hue tolerances were nearly linearly proportional to the chroma levels.
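The lightness, chroma, and hue tolerances discussed above correspond to the standard CIELAB difference components ΔL*, ΔC*ab, and ΔH*ab; the sketch below shows that decomposition for a pair of CIELAB colors (the sample values are hypothetical, not the study's color centers).

```python
# Standard CIELAB color-difference decomposition into lightness, chroma, and hue components.
import math

def cielab_difference(lab1, lab2):
    """Return (dE, dL, dC, dH) for two CIELAB colors given as (L*, a*, b*)."""
    L1, a1, b1 = lab1
    L2, a2, b2 = lab2
    dL = L1 - L2
    dC = math.hypot(a1, b1) - math.hypot(a2, b2)
    dE = math.sqrt((L1 - L2) ** 2 + (a1 - a2) ** 2 + (b1 - b2) ** 2)
    # dH is defined so that dE^2 = dL^2 + dC^2 + dH^2
    dH = math.sqrt(max(dE ** 2 - dL ** 2 - dC ** 2, 0.0))
    return dE, dL, dC, dH

# Example: a color center near L* = 50, C*ab = 20 and a slightly shifted sample (hypothetical values).
print(cielab_difference((50.0, 14.1, 14.1), (51.0, 16.0, 13.0)))
```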
Perceived Stress at Work Is Associated with Lower Levels of DHEA-S
Lennartsson, Anna-Karin; Theorell, Töres; Rockwood, Alan L.; Kushnir, Mark M.; Jonsdottir, Ingibjörg H.
2013-01-01
Background It is known that long-term psychosocial stress may cause or contribute to different diseases and symptoms and accelerate aging. One of the consequences of prolonged psychosocial stress may be a negative effect on the levels of dehydroepiandrosterone (DHEA) and its sulphated metabolite dehydroepiandrosterone sulphate (DHEA-S). The aim of this study is to investigate whether levels of DHEA and DHEA-S differ in individuals who report perceived stress at work compared to individuals who report no perceived stress at work. Methods Morning fasting DHEA-S and DHEA levels were measured in serum in a non-stressed group (n = 40) and a stressed group (n = 41). DHEA and DHEA-S levels were compared between the groups using ANCOVA, controlling for age. Results The mean DHEA-S levels were 23% lower in the subjects who reported stress at work compared to the non-stressed group. Statistical analysis (ANCOVA) showed a significant difference in DHEA-S levels between the groups (p = 0.010). There was no difference in DHEA levels between the groups. Conclusions This study indicates that stressed individuals have markedly lower levels of DHEA-S. Given the important and beneficial functions of DHEA and DHEA-S, lower levels of DHEA-S may constitute one link between psychosocial stress, ill health and accelerated ageing. PMID:24015247
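The between-group comparison controlling for age corresponds to a one-way ANCOVA; a minimal sketch with a hypothetical data layout (file and column names assumed) is shown below.

```python
# Minimal ANCOVA sketch: DHEA-S by group (stressed vs non-stressed), controlling for age.
# The CSV file and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("dheas_study.csv")   # expected columns: dheas, group, age

model = smf.ols("dheas ~ C(group) + age", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # F-test for the group effect adjusted for age
print(model.params)                      # adjusted group difference and the age slope
```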
Yang, Xin-Wei; Wang, Zhi-Ming; Jin, Tai-Yi
2006-05-01
This study was conducted to assess occupational stress in groups differing in gender, age, work duration, educational level, and marital status. Occupational stress in these groups was assessed with the revised Occupational Stress Inventory (OSI-R) in 4278 participants. The results by gender show heavier occupational role stress and stronger interpersonal and physical strain in males than in females, and the differences are statistically significant (P < 0.01). The recreation score is higher in males, whereas the self-care score is higher in females, and these differences are statistically significant (P < 0.01). Differences in the scores for occupational role and personal resources among the age groups are significant (P < 0.01), and differences in vocational and interpersonal strain scores among the age groups are also significant (P < 0.05). The analysis by educational level suggests that the differences in occupational stress and strain scores among educational levels are statistically significant (P < 0.05), whereas coping resources do not differ significantly among these groups (P > 0.05). Different measures should therefore be taken to reduce occupational stress so as to improve the work ability of the different groups.
Strazisar, Mojca; Mlakar, Vid; Glavac, Damjan
2009-01-01
Several studies have reported different expression levels of certain genes in NSCLC, mostly related to the stage and advancement of the tumours. We investigated 65 stage I-III NSCLC tumours: 32 adenocarcinomas (ADC), 26 squamous cell carcinomas (SCC) and 7 large cell carcinomas (LCC). Using real-time reverse transcription polymerase chain reaction (RT-PCR), we analysed the expression of the COX-2, hTERT, MDM2, LATS2 and S100A2 genes and examined the relationships between the NSCLC types and the differences in expression levels. The differences in the expression levels of the LATS2, S100A2 and hTERT genes between the different types of NSCLC are significant. hTERT and COX-2 were over-expressed and LATS2 under-expressed in all NSCLC types. We also detected significant relative differences in the expression of LATS2 and MDM2, and of hTERT and MDM2, in different types of NSCLC. There was a significant difference in the average expression level of S100A2 between ADC and SCC. Our study shows differences in the expression patterns within the NSCLC group, which may mimic the expression of the individual NSCLC type, and also new relationships in the expression levels for different NSCLC types.
Investigating Foreign Language Learning Anxiety: A Case of Saudi Undergraduate EFL Learners
ERIC Educational Resources Information Center
Al-Khasawneh, Fadi Maher
2016-01-01
This study investigates the level and sources of foreign language learning anxiety experienced by Saudi students studying at King Khalid University (KKU). It also aims to examine the differences between the level of language anxiety and the students' study level. For this purpose, 97 English-major students from different levels were purposively…
ELEVATED LEVELS OF SODIUM IN COMMUNITY DRINKING WATER
A comparison study of students from towns with differing levels of sodium in drinking water revealed statistically significantly higher blood pressure distributions among the students from the town with high sodium levels. Differences were found in both systolic and diastolic rea...
ERIC Educational Resources Information Center
Li, Shaofeng
2009-01-01
The present study investigates the differential effects of explicit and implicit feedback on L2 learners at different proficiency levels as measured by L2 development and learner uptake, which is defined as the learner's responses following feedback. Twenty-three learners of Chinese as a foreign language at two different levels of proficiency at a…
Langen, Carolyn D; White, Tonya; Ikram, M Arfan; Vernooij, Meike W; Niessen, Wiro J
2015-01-01
Structural and functional brain connectivity are increasingly used to identify and analyze group differences in studies of brain disease. This study presents methods to analyze uni- and bi-modal brain connectivity and evaluate their ability to identify differences. Novel visualizations of significantly different connections comparing multiple metrics are presented. On the global level, "bi-modal comparison plots" show the distribution of uni- and bi-modal group differences and the relationship between structure and function. Differences between brain lobes are visualized using "worm plots". Group differences in connections are examined with an existing visualization, the "connectogram". These visualizations were evaluated in two proof-of-concept studies: (1) middle-aged versus elderly subjects; and (2) patients with schizophrenia versus controls. Each included two measures derived from diffusion weighted images and two from functional magnetic resonance images. The structural measures were minimum cost path between two anatomical regions according to the "Statistical Analysis of Minimum cost path based Structural Connectivity" method and the average fractional anisotropy along the fiber. The functional measures were Pearson's correlation and partial correlation of mean regional time series. The relationship between structure and function was similar in both studies. Uni-modal group differences varied greatly between connectivity types. Group differences were identified in both studies globally, within brain lobes and between regions. In the aging study, minimum cost path was highly effective in identifying group differences on all levels; fractional anisotropy and mean correlation showed smaller differences on the brain lobe and regional levels. In the schizophrenia study, minimum cost path and fractional anisotropy showed differences on the global level and within brain lobes; mean correlation showed small differences on the lobe level. Only fractional anisotropy and mean correlation showed regional differences. The presented visualizations were helpful in comparing and evaluating connectivity measures on multiple levels in both studies.
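The two functional measures named above, Pearson's correlation and partial correlation of mean regional time series, can be sketched as follows; the partial correlation is derived from the precision (inverse covariance) matrix, and the time series are random placeholders rather than real fMRI data.

```python
# Sketch: full and partial correlation connectivity matrices from regional time series.
import numpy as np

rng = np.random.default_rng(0)
timeseries = rng.standard_normal((200, 10))   # 200 time points x 10 brain regions (placeholder data)

# Pearson correlation between every pair of regional time series.
full_corr = np.corrcoef(timeseries, rowvar=False)

# Partial correlation: each pair conditioned on all other regions, via the precision matrix.
precision = np.linalg.inv(np.cov(timeseries, rowvar=False))
d = np.sqrt(np.diag(precision))
partial_corr = -precision / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)

print(full_corr[0, 1], partial_corr[0, 1])
```

Group differences at the connection level can then be tested element-wise on these matrices before being summarized at the lobe and global levels, as described above.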
Ke, W-M; Xie, S-B; Li, X-J; Zhang, S-Q; Lai, J; Ye, Y-N; Gao, Z-L; Chen, P-J
2011-09-01
Hepatitis B virus (HBV) DNA levels and liver histological necroinflammation grades are correlated with the antiviral efficacy. It is necessary to clarify the relationship between HBV replication levels apportioned by the same hepatic parenchyma cell volume and severity of liver histological necroinflammation grades in both hepatitis B e antigen (HBeAg)-positive and HBeAg-negative chronic hepatitis B. The serum HBV DNA levels apportioned by the same hepatic parenchyma cell volume were compared between HBeAg-positive and HBeAg-negative chronic hepatitis B as well as among liver histological necroinflammation grades 1, 2, 3 and 4, respectively. There were no differences in the serum HBV DNA levels between HBeAg-positive and HBeAg-negative chronic hepatitis B as well as among liver histological necroinflammation grades 1, 2, 3 and 4. However, there were differences in the serum HBV DNA levels apportioned by the same hepatic parenchyma cell volume among liver histological necroinflammation grades 1, 2, 3 and 4 in both HBeAg-positive and HBeAg-negative chronic hepatitis B, respectively. There were no differences in HBV DNA levels with the same liver histological necroinflammation grade activated by HBV wild-type and variant strains. After the differences in hepatic parenchyma cell volume for HBV replication of the same liver histological necroinflammation grade accompanied by different hepatic fibrosis stages were adjusted, the serum HBV DNA level apportioned by the same hepatic parenchyma cell volume was correlated with the severity of liver histological necroinflammation grade. © 2011 Blackwell Publishing Ltd.
van den Bos, Ruud; Taris, Ruben; Scheppink, Bianca; de Haan, Lydia; Verster, Joris C.
2013-01-01
Recent laboratory studies have shown that men display more risk-taking behavior in decision-making tasks following stress, whilst women are more risk-aversive or become more task-focused. In addition, these studies have shown that sex differences are related to levels of the stress hormone cortisol (indicative of activation of the hypothalamus-pituitary-adrenocortical-axis): the higher the levels of cortisol the more risk-taking behavior is shown by men, whereas women generally display more risk-aversive or task-focused behavior following higher levels of cortisol. Here, we assessed whether such relationships hold outside the laboratory, correlating levels of cortisol obtained during a job-related assessment procedure with decision-making parameters in the Cambridge Gambling Task (CGT) in male and female police recruits. The CGT allows for discriminating different aspects of reward-based decision-making. In addition, we correlated levels of alpha-amylase [indicative of activation of the sympatho-adrenomedullary-axis (SAM)] and decision-making parameters. In line with earlier studies men and women only differed in risk-adjustment in the CGT. Salivary cortisol levels correlated positively and strongly with risk-taking measures in men, which was significantly different from the weak negative correlation in women. In contrast, and less strongly so, salivary alpha-amylase levels correlated positively with risk-taking in women, which was significantly different from the weak negative correlation with risk-taking in men. Collectively, these data support and extend data of earlier studies indicating that risky decision-making in men and women is differently affected by stress hormones. The data are briefly discussed in relation to the effects of stress on gambling. PMID:24474909
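Comparing a correlation observed in men with the corresponding correlation in women, as done here for cortisol and risk-taking, is commonly carried out with Fisher's r-to-z transformation; the sketch below shows that test with hypothetical correlation values and group sizes, not the study's results.

```python
# Sketch: testing whether two independent correlations differ (Fisher r-to-z).
# The r values and group sizes below are hypothetical.
import math
from scipy.stats import norm

def compare_correlations(r1, n1, r2, n2):
    z1, z2 = math.atanh(r1), math.atanh(r2)          # Fisher z-transform of each correlation
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))  # standard error of the difference
    z = (z1 - z2) / se
    p = 2 * (1 - norm.cdf(abs(z)))                   # two-sided p-value
    return z, p

# e.g., cortisol vs risk-taking: r = 0.55 in 40 men, r = -0.10 in 45 women (hypothetical)
print(compare_correlations(0.55, 40, -0.10, 45))
```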
Investigations of internal noise levels for different target sizes, contrasts, and noise structures
NASA Astrophysics Data System (ADS)
Han, Minah; Choi, Shinkook; Baek, Jongduk
2014-03-01
To describe internal noise levels for different target sizes, contrasts, and noise structures, Gaussian targets with four different sizes (i.e., standard deviations of 2, 4, 6, and 8) and three different noise structures (i.e., white, low-pass, and high-pass) were generated. The generated noise images were scaled to have a standard deviation of 0.15. For each noise type, target contrasts were adjusted to have the same detectability based on the non-prewhitening (NPW) observer, and the detectability of the channelized Hotelling observer (CHO) was calculated accordingly. For the human observer study, 3 trained observers performed 2AFC detection tasks, and the proportion of correct responses, Pc, was calculated for each task. By adding an appropriate internal noise level to the numerical observers (i.e., NPW and CHO), the detectability of the human observers was matched to that of the numerical observers. Even though target contrasts were adjusted to yield the same NPW detectability, the detectability of the human observers decreased as the target size increased. The internal noise level varies for different target sizes, contrasts, and noise structures, demonstrating that different internal noise levels should be considered in numerical observers to predict the detection performance of human observers.
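The matching procedure described above, injecting internal noise into a model observer until its two-alternative forced-choice performance equals the human proportion correct, can be sketched as follows; only the NPW observer is shown, and the image size, target contrast, and noise levels are illustrative assumptions.

```python
# Sketch: NPW observer on a 2AFC detection task with additive internal noise.
# Image size, contrast, and noise levels are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
size, sigma_target, contrast = 64, 4.0, 0.05
y, x = np.mgrid[:size, :size] - size / 2
target = contrast * np.exp(-(x**2 + y**2) / (2 * sigma_target**2))  # Gaussian signal

def npw_2afc_pc(n_trials, noise_std, internal_std):
    correct = 0
    for _ in range(n_trials):
        signal_img = target + rng.normal(0, noise_std, (size, size))
        blank_img = rng.normal(0, noise_std, (size, size))
        # NPW decision variable: cross-correlate each image with the expected signal,
        # then add zero-mean internal noise to model observer inefficiency.
        t_signal = np.sum(signal_img * target) + rng.normal(0, internal_std)
        t_blank = np.sum(blank_img * target) + rng.normal(0, internal_std)
        correct += int(t_signal > t_blank)
    return correct / n_trials

# Increase the internal noise until the model's Pc matches the measured human Pc.
for internal in [0.0, 0.05, 0.10, 0.20]:
    print(internal, npw_2afc_pc(2000, noise_std=0.15, internal_std=internal))
```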
Jakeman, K J; Bird, C R; Thorpe, R; Smith, H; Sweet, C
1991-03-01
Fever in influenza results from the release of endogenous pyrogen (EP) following virus-phagocyte interaction, and its level correlates with the differing virulence of virus strains. However, the different levels of fever produced in ferrets by intracardiac inoculation of EP obtained from the interaction of different virus strains with ferret or human phagocytes did not correlate with the levels of interleukin 1 (IL-1), IL-6 or tumour necrosis factor in the same samples as assayed by conventional in vitro methods. Hence, the EP produced by influenza virus appears to be different from these cytokines.