Interaural time sensitivity of high-frequency neurons in the inferior colliculus.
Yin, T C; Kuwada, S; Sujaku, Y
1984-11-01
Recent psychoacoustic experiments have shown that interaural time differences provide adequate cues for lateralizing high-frequency sounds, provided the stimuli are complex and not pure tones. We present here physiological evidence in support of these findings. Neurons of high best frequency in the cat inferior colliculus respond to interaural phase differences of amplitude modulated waveforms, and this response depends upon preservation of phase information of the modulating signal. Interaural phase differences were introduced in two ways: by interaural delays of the entire waveform and by binaural beats in which there was an interaural frequency difference in the modulating waveform. Results obtained with these two methods are similar. Our results show that high-frequency cells can respond to interaural time differences of amplitude modulated signals and that they do so by a sensitivity to interaural phase differences of the modulating waveform.
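To make the stimulus construction above concrete, here is a minimal Python/NumPy sketch of an amplitude-modulated tone whose modulating envelope is delayed in one ear, contrasted with a whole-waveform delay; the sample rate, carrier and modulator frequencies, and 500-μs delay are illustrative assumptions, not values from the study.

```python
import numpy as np

fs = 48000              # sample rate (Hz); illustrative
dur = 0.5               # duration (s)
fc, fm = 4000.0, 100.0  # carrier and modulator frequencies (Hz); assumed values
itd = 500e-6            # interaural delay (s); assumed value

t = np.arange(int(fs * dur)) / fs

def sam_tone(t, fc, fm, env_delay=0.0):
    """Sinusoidally amplitude-modulated tone. Only the envelope is shifted by
    env_delay, mimicking an interaural delay of the modulating waveform alone."""
    env = 0.5 * (1.0 + np.cos(2 * np.pi * fm * (t - env_delay)))
    return env * np.sin(2 * np.pi * fc * t)

left = sam_tone(t, fc, fm)                      # reference ear
right_env = sam_tone(t, fc, fm, env_delay=itd)  # envelope-only delay
right_whole = sam_tone(t - itd, fc, fm)         # whole-waveform delay (carrier and envelope)
```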
Blanks, Deidra A.; Buss, Emily; Grose, John H.; Fitzpatrick, Douglas C.; Hall, Joseph W.
2009-01-01
Objectives The present study investigated interaural time discrimination for binaurally mismatched carrier frequencies in listeners with normal hearing. One goal of the investigation was to gain insights into binaural hearing in patients with bilateral cochlear implants, where the coding of interaural time differences may be limited by mismatches in the neural populations receiving stimulation on each side. Design Temporal envelopes were manipulated to present low frequency timing cues to high frequency auditory channels. Carrier frequencies near 4 kHz were amplitude modulated at 128 Hz via multiplication with a half-wave rectified sinusoid, and that modulation was either in-phase across ears or delayed to one ear. Detection thresholds for non-zero interaural time differences were measured for a range of stimulus levels and a range of carrier frequency mismatches. Data were also collected under conditions designed to limit cues based on stimulus spectral spread, including masking and truncation of sidebands associated with modulation. Results Listeners with normal hearing can detect interaural time differences in the face of substantial mismatches in carrier frequency across ears. Conclusions The processing of interaural time differences in listeners with normal hearing is likely based on spread of excitation into binaurally matched auditory channels. Sensitivity to interaural time differences in listeners with cochlear implants may depend upon spread of current that results in the stimulation of neural populations that share common tonotopic space bilaterally. PMID:18596646
Poganiatz, I; Wagner, H
2001-04-01
Interaural level differences play an important role for elevational sound localization in barn owls. The changes of this cue with sound location are complex and frequency dependent. We exploited the opportunities offered by the virtual space technique to investigate the behavioral relevance of the overall interaural level difference by fixing this parameter in virtual stimuli to a constant value or introducing additional broadband level differences to normal virtual stimuli. Frequency-specific monaural cues in the stimuli were not manipulated. We observed an influence of the broadband interaural level differences on elevational, but not on azimuthal sound localization. Since results obtained with our manipulations explained only part of the variance in elevational turning angle, we conclude that frequency-specific cues are also important. The behavioral consequences of changes of the overall interaural level difference in a virtual sound depended on the combined interaural time difference contained in the stimulus, indicating an indirect influence of temporal cues on elevational sound localization as well. Thus, elevational sound localization is influenced by a combination of many spatial cues including frequency-dependent and temporal features.
Neural tuning matches frequency-dependent time differences between the ears
Benichoux, Victor; Fontaine, Bertrand; Franken, Tom P; Karino, Shotaro; Joris, Philip X; Brette, Romain
2015-01-01
The time it takes a sound to travel from source to ear differs between the ears and creates an interaural delay. It varies systematically with spatial direction and is generally modeled as a pure time delay, independent of frequency. In acoustical recordings, we found that interaural delay varies with frequency at a fine scale. In physiological recordings of midbrain neurons sensitive to interaural delay, we found that preferred delay also varies with sound frequency. Similar observations reported earlier were not incorporated in a functional framework. We find that the frequency dependence of acoustical and physiological interaural delays are matched in key respects. This suggests that binaural neurons are tuned to acoustical features of ecological environments, rather than to fixed interaural delays. Using recordings from the nerve and brainstem we show that this tuning may emerge from neurons detecting coincidences between input fibers that are mistuned in frequency. DOI: http://dx.doi.org/10.7554/eLife.06072.001 PMID:25915620
Loiselle, Louise H; Dorman, Michael F; Yost, William A; Cook, Sarah J; Gifford, Rene H
2016-08-01
To assess the role of interaural time differences and interaural level differences in (a) sound-source localization, and (b) speech understanding in a cocktail party listening environment for listeners with bilateral cochlear implants (CIs) and for listeners with hearing-preservation CIs. Eleven bilateral listeners with MED-EL (Durham, NC) CIs and 8 listeners with hearing-preservation CIs with symmetrical low-frequency acoustic hearing using the MED-EL or Cochlear device were evaluated using 2 tests designed to task binaural hearing: localization and a simulated cocktail party. Access to interaural cues for localization was constrained by the use of low-pass, high-pass, and wideband noise stimuli. Sound-source localization accuracy for listeners with bilateral CIs in response to the high-pass noise stimulus and sound-source localization accuracy for the listeners with hearing-preservation CIs in response to the low-pass noise stimulus did not differ significantly. Speech understanding in a cocktail party listening environment improved for all listeners when interaural cues, either interaural time difference or interaural level difference, were available. The findings of the current study indicate that similar degrees of benefit to sound-source localization and speech understanding in complex listening environments are possible with 2 very different rehabilitation strategies: the provision of bilateral CIs and the preservation of hearing.
Localization by interaural time difference (ITD): Effects of interaural frequency mismatch
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonham, B.H.; Lewis, E.R.
1999-07-01
A commonly accepted physiological model for lateralization of low-frequency sounds by interaural time delay (ITD) stipulates that binaural comparison neurons receive input from frequency-matched channels from each ear. Here, the effects of hypothetical interaural frequency mismatches on this model are reported. For this study, the cat's auditory system peripheral to the binaural comparison neurons was represented by a neurophysiologically derived model, and binaural comparison neurons were represented by cross-correlators. The results of the study indicate that, for binaural comparison neurons receiving input from one cochlear channel from each ear, interaural CF mismatches may serve to either augment or diminish the effective difference in ipsilateral and contralateral axonal time delays from the periphery to the binaural comparison neuron. The magnitude of this increase or decrease in the effective time delay difference can be up to 400 μs for CF mismatches of 0.2 octaves or less for binaural neurons with CFs between 250 Hz and 2.5 kHz. For binaural comparison neurons with nominal CFs near 500 Hz, the 25-μs effective time delay difference caused by a 0.012-octave CF mismatch is equal to the ITD previously shown to be behaviorally sufficient for the cat to lateralize a low-frequency sound source. © 1999 Acoustical Society of America.
Yin, T C; Kuwada, S
1983-10-01
We used the binaural beat stimulus to study the interaural phase sensitivity of inferior colliculus (IC) neurons in the cat. The binaural beat, produced by delivering tones of slightly different frequencies to the two ears, generates continuous and graded changes in interaural phase. Over 90% of the cells that exhibit a sensitivity to changes in the interaural delay also show a sensitivity to interaural phase disparities with the binaural beat. Cells respond with a burst of impulses with each complete cycle of the beat frequency. The period histogram obtained by binning the poststimulus time histogram on the beat frequency gives a measure of the interaural phase sensitivity of the cell. In general, there is good correspondence in the shapes of the period histograms generated from binaural beats and the interaural phase curves derived from interaural delays and in the mean interaural phase angle calculated from them. The magnitude of the beat frequency determines the rate of change of interaural phase and the sign determines the direction of phase change. While most cells respond in a phase-locked manner up to beat frequencies of 10 Hz, there are some cells that will phase lock up to 80 Hz. Beat frequency and mean interaural phase angle are linearly related for most cells. Most cells respond equally in the two directions of phase change and with different rates of change, at least up to 10 Hz. However, some IC cells exhibit marked sensitivity to the speed of phase change, either responding more vigorously at low beat frequencies or at high beat frequencies. In addition, other cells demonstrate a clear directional sensitivity. The cells that show sensitivity to the direction and speed of phase changes would be expected to demonstrate a sensitivity to moving sound sources in the free field. Changes in the mean interaural phase of the binaural beat period histograms are used to determine the effects of changes in average and interaural intensity on the phase sensitivity of the cells. The effects of both forms of intensity variation are continuously distributed. The binaural beat offers a number of advantages for studying the interaural phase sensitivity of binaural cells. The dynamic characteristics of the interaural phase can be varied so that the speed and direction of phase change are under direct control. The data can be obtained in a much more efficient manner, as the binaural beat is about 10 times faster in terms of data collection than the interaural delay.
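A small Python sketch of the binaural-beat paradigm described above: tones differing slightly in frequency across the ears sweep the interaural phase through a full cycle at the beat rate, and responses are binned on the beat period to form a period histogram. Spike times here are simulated placeholders and all parameter values are assumptions.

```python
import numpy as np

fs, dur = 48000, 2.0
f_contra, f_ipsi = 500.0, 501.0          # 1-Hz binaural beat; illustrative
fb = f_ipsi - f_contra                   # beat frequency (Hz)

t = np.arange(int(fs * dur)) / fs
contra = np.sin(2 * np.pi * f_contra * t)
ipsi = np.sin(2 * np.pi * f_ipsi * t)
# The instantaneous interaural phase difference sweeps one full cycle every 1/fb s.
ipd = (2 * np.pi * fb * t) % (2 * np.pi)

# Period histogram: bin spike times on the beat period (spikes simulated here).
rng = np.random.default_rng(0)
spike_times = np.sort(rng.uniform(0.0, dur, 200))     # placeholder spike train
beat_phase = (spike_times * fb) % 1.0                 # phase within beat cycle (0..1)
period_hist, edges = np.histogram(beat_phase, bins=16, range=(0.0, 1.0))

# Mean interaural phase angle of the response (circular/vector average).
centers = 2 * np.pi * 0.5 * (edges[:-1] + edges[1:])
mean_phase = np.angle(np.sum(period_hist * np.exp(1j * centers)))
```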
Kuwada, S; Yin, T C
1983-10-01
Detailed, quantitative studies were made of the interaural phase sensitivity of 197 neurons with low best frequency in the inferior colliculus (IC) of the barbiturate-anesthetized cat. We analyzed the responses of single cells to interaural delays in which tone bursts were delivered to the two ears via sealed earphones and the onset of the tone to one ear with respect to the other was varied. For most (80%) cells the discharge rate is a cyclic function of interaural delay at a period corresponding to that of the stimulating frequency. The cyclic nature of the interaural delay curve indicates that these cells are sensitive to the interaural phase difference. These cells are distributed throughout the low-frequency zone of the IC, but they are less numerous in the medial and caudal zones. Cells with a wide variety of response patterns will exhibit interaural phase sensitivities at stimulating frequencies up to 3,100 Hz, although above 2,500 Hz the number of such cells decreases markedly. Using dichotic stimuli we could study the cell's sensitivity to the onset delay and interaural phase independently. The large majority of IC cells respond only to changes in interaural phase, with no sensitivity to the onset delay. However, a small number (7%) of cells exhibit a sensitivity to the onset delay as well as to the interaural phase disparity, and most of these cells show an onset response. The effects of changing the stimulus intensity equally to both ears or of changing the interaural intensity difference on the mean interaural phase were studied. While some neurons are not affected by level changes, others exhibit systematic phase shifts for both average and interaural intensity variations, and there is a continuous distribution of sensitivities between these extremes. A few cells also showed systematic changes in the shape of the interaural delay curves as a function of interaural intensity difference, especially at very long delays. These shifts can be interpreted as a form of time-intensity trading. A few cells demonstrated orderly changes in the interaural delay curve as the repetition rate of the stimulus was varied. Some of these changes are consonant with an inhibitory effect that occurs at stimulus offset. The responses of the neurons show a strong bias for stimuli that would originate from the contralateral sound field; 77% of the responses display mean interaural phase angles that are less than 0.5 of a cycle, which are delays to the ipsilateral tone.(ABSTRACT TRUNCATED AT 400 WORDS)
Comparison of Interaural Electrode Pairing Methods for Bilateral Cochlear Implants
Dietz, Mathias
2015-01-01
In patients with bilateral cochlear implants (CIs), pairing matched interaural electrodes and stimulating them with the same frequency band is expected to facilitate binaural functions such as binaural fusion, localization, and spatial release from masking. Because clinical procedures typically do not include patient-specific interaural electrode pairing, it remains the case that each electrode is allocated to a generic frequency range, based simply on the electrode number. Two psychoacoustic techniques for determining interaurally paired electrodes have been demonstrated in several studies: interaural pitch comparison and interaural time difference (ITD) sensitivity. However, these two methods are rarely, if ever, compared directly. A third, more objective method is to assess the amplitude of the binaural interaction component (BIC) derived from electrically evoked auditory brainstem responses for different electrode pairings, a method that has been demonstrated to be a potential candidate for bilateral CI users. Here, we tested all three measures in the same eight CI users. We found good correspondence between the electrode pair producing the largest BIC and the electrode pair producing the maximum ITD sensitivity. The correspondence between the pairs producing the largest BIC and the pitch-matched electrode pairs was considerably weaker, supporting the previously proposed hypothesis that whilst place pitch might adapt over time to accommodate mismatched inputs, sensitivity to ITDs does not adapt to the same degree. PMID:26631108
Haywood, Nicholas R; Undurraga, Jaime A; Marquardt, Torsten; McAlpine, David
2015-12-30
There has been continued interest in clinical objective measures of binaural processing. One commonly proposed measure is the binaural interaction component (BIC), which is obtained typically by recording auditory brainstem responses (ABRs)-the BIC reflects the difference between the binaural ABR and the sum of the monaural ABRs (i.e., binaural - (left + right)). We have recently developed an alternative, direct measure of sensitivity to interaural time differences, namely, a following response to modulations in interaural phase difference (the interaural phase modulation following response; IPM-FR). To obtain this measure, an ongoing diotically amplitude-modulated signal is presented, and the interaural phase difference of the carrier is switched periodically at minima in the modulation cycle. Such periodic modulations to interaural phase difference can evoke a steady state following response. BIC and IPM-FR measurements were compared from 10 normal-hearing subjects using a 16-channel electroencephalographic system. Both ABRs and IPM-FRs were observed most clearly from similar electrode locations-differential recordings taken from electrodes near the ear (e.g., mastoid) in reference to a vertex electrode (Cz). Although all subjects displayed clear ABRs, the BIC was not reliably observed. In contrast, the IPM-FR typically elicited a robust and significant response. In addition, the IPM-FR measure required a considerably shorter recording session. As the IPM-FR magnitude varied with interaural phase difference modulation depth, it could potentially serve as a correlate of perceptual salience. Overall, the IPM-FR appears a more suitable clinical measure than the BIC. © The Author(s) 2015.
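The BIC definition quoted above (binaural ABR minus the sum of the monaural ABRs) reduces to a one-line array operation once the three averaged waveforms share a common time base; a minimal sketch, assuming pre-averaged inputs:

```python
import numpy as np

def binaural_interaction_component(abr_binaural, abr_left, abr_right):
    """BIC = binaural ABR - (left ABR + right ABR), per the definition above.
    Inputs are averaged waveforms sampled on the same time axis."""
    return (np.asarray(abr_binaural, dtype=float)
            - (np.asarray(abr_left, dtype=float) + np.asarray(abr_right, dtype=float)))

# DN1 is typically read off as the most negative deflection of the BIC within a
# latency window just after wave V (window limits would be study-specific).
```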
Growth in Head Size during Infancy: Implications for Sound Localization.
ERIC Educational Resources Information Center
Clifton, Rachel K.; And Others
1988-01-01
Compared head circumference and interaural distance in infants between birth and 22 weeks of age and in a small sample of preschool children and adults. Calculated changes in interaural time differences according to age. Found a large shift in distance. (SKC)
NASA Astrophysics Data System (ADS)
Nur Farid, Mifta; Arifianto, Dhany
2016-11-01
A person suffering from hearing loss can be helped by hearing aids, and binaural hearing aids offer the best performance because they resemble the human auditory system. In a conversation at a cocktail party, a person can focus on a single conversation even when the background sound and other people's conversations are quite loud; this phenomenon is known as the cocktail party effect. Early studies showed that binaural hearing makes an important contribution to the cocktail party effect. In this study, we separate two sound sources from a binaural input captured by two microphone sensors, using both binaural cues, the interaural time difference (ITD) and the interaural level difference (ILD), with a binary mask. The ITD is estimated by a cross-correlation method, in which the ITD is represented as the time delay of the peak shift in each time-frequency unit. The binary mask is estimated from the pattern of ITD and ILD relative to the strength of the target, computed statistically using probability density estimation. The resulting sound source separation performs well, with a speech intelligibility of 86% correct words and an SNR of 3 dB.
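As a rough illustration of the ITD-based masking idea described above (not the authors' algorithm, which estimates the mask statistically from both ITD and ILD patterns via probability density estimation), the sketch below keeps each time-frequency unit whose phase-derived ITD lies near an assumed target ITD; `target_itd`, the tolerance, and the STFT settings are all illustrative assumptions, and phase ambiguity at high frequencies is ignored.

```python
import numpy as np
from scipy.signal import stft, istft

def itd_binary_mask(left, right, fs, target_itd, itd_tol=200e-6, nperseg=1024):
    """Keep time-frequency units whose phase-derived ITD lies near target_itd.
    Simplified illustration only: ILD is not used, and phase wrapping above
    roughly 1/(2*|target_itd|) Hz is ignored."""
    f, _, L = stft(left, fs, nperseg=nperseg)
    _, _, R = stft(right, fs, nperseg=nperseg)
    ipd = np.angle(L * np.conj(R))                  # interaural phase per unit
    with np.errstate(divide="ignore", invalid="ignore"):
        itd_est = ipd / (2.0 * np.pi * f[:, None])  # convert phase to time
    itd_est[0, :] = 0.0                             # DC bin carries no phase ITD
    # itd_est > 0 means the right channel lags the left.
    mask = (np.abs(itd_est - target_itd) < itd_tol).astype(float)
    _, separated = istft(mask * L, fs, nperseg=nperseg)
    return separated, mask
```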
Localizing nearby sound sources in a classroom: binaural room impulse responses.
Shinn-Cunningham, Barbara G; Kopco, Norbert; Martin, Tara J
2005-05-01
Binaural room impulse responses (BRIRs) were measured in a classroom for sources at different azimuths and distances (up to 1 m) relative to a manikin located in four positions in a classroom. When the listener is far from all walls, reverberant energy distorts signal magnitude and phase independently at each frequency, altering monaural spectral cues, interaural phase differences, and interaural level differences. For the tested conditions, systematic distortion (comb-filtering) from an early intense reflection is only evident when a listener is very close to a wall, and then only in the ear facing the wall. Especially for a nearby source, interaural cues grow less reliable with increasing source laterality and monaural spectral cues are less reliable in the ear farther from the sound source. Reverberation reduces the magnitude of interaural level differences at all frequencies; however, the direct-sound interaural time difference can still be recovered from the BRIRs measured in these experiments. Results suggest that bias and variability in sound localization behavior may vary systematically with listener location in a room as well as source location relative to the listener, even for nearby sources where there is relatively little reverberant energy.
The use of interaural parameters during incoherence detection in reproducible noise
NASA Astrophysics Data System (ADS)
Goupell, Matthew Joseph
Interaural incoherence is a measure of the dissimilarity of the signals in the left and right ears. It is important in a number of acoustical phenomena, such as a listener's sensation of envelopment and apparent source width in room acoustics, speech intelligibility, and binaural release from energetic masking. Humans are incredibly sensitive to the difference between perfectly coherent and slightly incoherent signals; however, the nature of this sensitivity is not well understood. The purpose of this dissertation is to understand what parameters are important to incoherence detection. Incoherence is perceived to have time-varying characteristics. It is conjectured that incoherence detection is performed by a process that takes this time dependency into account. Left-ear-right-ear noise-pairs were generated, all with a fixed value of interaural coherence, 0.9922. The noises had a center frequency of 500 Hz, a bandwidth of 14 Hz, and a duration of 500 ms. Listeners were required to discriminate between these slightly incoherent noises and diotic noises, with a coherence of 1.0. It was found that the value of interaural incoherence itself was an inadequate predictor of discrimination. Instead, incoherence was much more readily detected for those noise-pairs with the largest fluctuations in interaural phase and level differences (as measured by the standard deviation). Noise-pairs with the same value of coherence and a geometric mean frequency of 500 Hz were also generated for bandwidths of 108 Hz and 2394 Hz. It was found that for increasing bandwidth, fluctuations in interaural differences varied less between different noise-pairs and that detection performance varied less as well. The results suggest that incoherence detection is based on the size and the speed of interaural fluctuations and that the value of coherence itself predicts performance only in the wide-band limit where different particular noises with the same incoherence have similar fluctuations. Noise-pairs with short durations of 100, 50, and 25 ms, a bandwidth of 14 Hz, and a coherence of 0.9922 were used to test if a short-term incoherence function is used in incoherence detection. It was found that listeners could significantly use fluctuations of phase and level to detect incoherence for all three of these short durations. Therefore, a short-term coherence function is not used to detect incoherence. For the shortest duration of 25 ms, listeners' detection cue sometimes changed from a "width" cue to a lateralization cue. Modeling of the data was performed. Ten different binaural models were tested against detection data for 14-Hz and 108-Hz bandwidths. These models included different types of binaural processing: independent interaural phase and level differences, lateral position, and short-term cross-correlation. Several preprocessing features were incorporated in the models: compression, temporal averaging, and envelope weighting. For the 14-Hz bandwidth data, the most successful model assumed independent centers for interaural phase and interaural level processing, and this model correlated with detectability at r = 0.87. That model also described the data best when it was assumed that interaural phase fluctuations and interaural level fluctuations contribute approximately equally to incoherence detection. For the 108-Hz bandwidth data, detection performance varied much less among different waveforms, and the data were less able to distinguish between models.
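One common way to generate left/right noise pairs with a prescribed interaural coherence, such as the 0.9922 value above, is to mix a shared noise with an independent one; a minimal sketch under that assumption (the band-limiting to 14 Hz around 500 Hz used in the study is omitted, and the study's coherence measure for narrowband noise may differ from this zero-lag correlation):

```python
import numpy as np

def coherent_noise_pair(n_samples, coherence, seed=None):
    """Two Gaussian noises whose expected zero-lag correlation equals `coherence`,
    obtained by mixing a shared noise with an independent one."""
    rng = np.random.default_rng(seed)
    common = rng.standard_normal(n_samples)
    independent = rng.standard_normal(n_samples)
    left = common
    right = coherence * common + np.sqrt(1.0 - coherence ** 2) * independent
    return left, right

left, right = coherent_noise_pair(48000, 0.9922, seed=0)
measured = np.corrcoef(left, right)[0, 1]   # close to 0.9922 for long samples
```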
Backus, B; Adiloğlu, K; Herzke, T
2015-12-30
We present the first portable, binaural, real-time research platform compatible with Oticon Medical SP and XP generation cochlear implants. The platform consists of (a) a pair of behind-the-ear devices, each containing front and rear calibrated microphones, (b) a four-channel USB analog-to-digital converter, (c) real-time PC-based sound processing software called the Master Hearing Aid, and (d) USB-connected hardware and output coils capable of driving two implants simultaneously. The platform is capable of processing signals from the four microphones simultaneously and producing synchronized binaural cochlear implant outputs that drive two (bilaterally implanted) SP or XP implants. Both audio signal preprocessing algorithms (such as binaural beamforming) and novel binaural stimulation strategies (within the implant limitations) can be programmed by researchers. When the whole research platform is combined with Oticon Medical SP implants, interaural electrode timing can be controlled on individual electrodes to within ±1 µs and interaural electrode energy differences can be controlled to within ±2%. Hence, this new platform is particularly well suited to performing experiments related to interaural time differences in combination with interaural level differences in real-time. The platform also supports instantaneously variable stimulation rates and thereby enables investigations such as the effect of changing the stimulation rate on pitch perception. Because the processing can be changed on the fly, researchers can use this platform to study perceptual changes resulting from different processing strategies acutely. © The Author(s) 2015.
Ihlefeld, Antje; Litovsky, Ruth Y
2012-01-01
Spatial release from masking refers to a benefit for speech understanding. It occurs when a target talker and a masker talker are spatially separated. In those cases, speech intelligibility for target speech is typically higher than when both talkers are at the same location. In cochlear implant listeners, spatial release from masking is much reduced or absent compared with normal hearing listeners. Perhaps this reduced spatial release occurs because cochlear implant listeners cannot effectively attend to spatial cues. Three experiments examined factors that may interfere with deploying spatial attention to a target talker masked by another talker. To simulate cochlear implant listening, stimuli were vocoded with two unique features. First, we used 50-Hz low-pass filtered speech envelopes and noise carriers, strongly reducing the possibility of temporal pitch cues; second, co-modulation was imposed on target and masker utterances to enhance perceptual fusion between the two sources. Stimuli were presented over headphones. Experiments 1 and 2 presented high-fidelity spatial cues with unprocessed and vocoded speech. Experiment 3 maintained faithful long-term average interaural level differences but presented scrambled interaural time differences with vocoded speech. Results show a robust spatial release from masking in Experiments 1 and 2, and a greatly reduced spatial release in Experiment 3. Faithful long-term average interaural level differences were insufficient for producing spatial release from masking. This suggests that appropriate interaural time differences are necessary for restoring spatial release from masking, at least for a situation where there are few viable alternative segregation cues.
Hearing in three dimensions: Sound localization
NASA Technical Reports Server (NTRS)
Wightman, Frederic L.; Kistler, Doris J.
1990-01-01
The ability to localize a source of sound in space is a fundamental component of the three-dimensional character of audio. For over a century scientists have been trying to understand the physical and psychological processes and physiological mechanisms that subserve sound localization. This research has shown that important information about sound source position is provided by interaural differences in time of arrival, interaural differences in intensity, and direction-dependent filtering provided by the pinnae. Progress has been slow, primarily because experiments on localization are technically demanding. Control of stimulus parameters and quantification of the subjective experience are quite difficult problems. Recent advances, such as the ability to simulate a three-dimensional sound field over headphones, seem to offer potential for rapid progress. Research using the new techniques has already produced new information. It now seems that interaural time differences are a much more salient and dominant localization cue than previously believed.
Detection of Interaural Time Differences in the Alligator
Carr, Catherine E.; Soares, Daphne; Smolders, Jean; Simon, Jonathan Z.
2011-01-01
The auditory systems of birds and mammals use timing information from each ear to detect interaural time difference (ITD). To determine whether the Jeffress-type algorithms that underlie sensitivity to ITD in birds are an evolutionarily stable strategy, we recorded from the auditory nuclei of crocodilians, who are the sister group to the birds. In alligators, precisely timed spikes in the first-order nucleus magnocellularis (NM) encode the timing of sounds, and NM neurons project to neurons in the nucleus laminaris (NL) that detect interaural time differences. In vivo recordings from NL neurons show that the arrival time of phase-locked spikes differs between the ipsilateral and contralateral inputs. When this disparity is nullified by their best ITD, the neurons respond maximally. Thus NL neurons act as coincidence detectors. A biologically detailed model of NL with alligator parameters discriminated ITDs up to 1 kHz. The range of best ITDs represented in NL was much larger than in birds, however, and extended from 0 to 1000 μs contralateral, with a median ITD of 450 μs. Thus, crocodilians and birds employ similar algorithms for ITD detection, although crocodilians have larger heads. PMID:19553438
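Functionally, the Jeffress-type computation referred to above, delay lines feeding coincidence detectors, amounts to evaluating the correlation of the two ear signals over a range of candidate internal delays and reading out the delay that drives the detectors maximally. The sketch below is a signal-level illustration of that idea, not a biophysical model of nucleus laminaris; the 1-ms delay range and the sign convention are assumptions.

```python
import numpy as np

def itd_cross_correlogram(left, right, fs, max_itd=1e-3):
    """Correlate the left signal against copies of the right signal delayed by a
    range of candidate internal delays (one 'coincidence detector' per delay).
    A positive best_itd means the right-ear signal leads the left."""
    max_lag = int(round(max_itd * fs))
    lags = np.arange(-max_lag, max_lag + 1)
    seg = slice(max_lag, len(left) - max_lag)      # avoid np.roll wrap-around edges
    corr = np.array([np.dot(left[seg], np.roll(right, lag)[seg]) for lag in lags])
    best_itd = lags[np.argmax(corr)] / fs
    return lags / fs, corr, best_itd

# Example: a right-ear signal delayed by 24 samples is recovered as a negative ITD.
rng = np.random.default_rng(1)
x = rng.standard_normal(48000)
_, _, est = itd_cross_correlogram(x, np.roll(x, 24), fs=48000)   # est ≈ -24/48000 s
```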
ERIC Educational Resources Information Center
Loiselle, Louise H.; Dorman, Michael F.; Yost, William A.; Cook, Sarah J.; Gifford, Rene H.
2016-01-01
Purpose: To assess the role of interaural time differences and interaural level differences in (a) sound-source localization, and (b) speech understanding in a cocktail party listening environment for listeners with bilateral cochlear implants (CIs) and for listeners with hearing-preservation CIs. Methods: Eleven bilateral listeners with MED-EL…
Anatomical limits on interaural time differences: an ecological perspective
Hartmann, William M.; Macaulay, Eric J.
2013-01-01
Human listeners, and other animals too, use interaural time differences (ITD) to localize sounds. If the sounds are pure tones, a simple frequency factor relates the ITD to the interaural phase difference (IPD), for which there are known iso-IPD boundaries, 90°, 180°… defining regions of spatial perception. In this article, iso-IPD boundaries for humans are translated into azimuths using a spherical head model (SHM), and the calculations are checked by free-field measurements. The translated boundaries provide quantitative tests of an ecological interpretation for the dramatic onset of ITD insensitivity at high frequencies. According to this interpretation, the insensitivity serves as a defense against misinformation and can be attributed to limits on binaural processing in the brainstem. Calculations show that the ecological explanation passes the tests only if the binaural brainstem properties evolved or developed consistent with heads that are 50% smaller than current adult heads. Measurements on more realistic head shapes relax that requirement only slightly. The problem posed by the discrepancy between the current head size and a smaller, ideal head size was apparently solved by the evolution or development of central processes that discount large IPDs in favor of interaural level differences. The latter become more important with increasing head size. PMID:24592209
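The spherical head model used above for translating iso-IPD boundaries into azimuths is commonly written as the Woodworth far-field formula, ITD = (a/c)(θ + sin θ); a minimal sketch, where the 8.75-cm head radius, 343 m/s sound speed, and example frequency are assumptions and the study's exact SHM variant may differ:

```python
import numpy as np

def shm_itd(azimuth_deg, head_radius=0.0875, c=343.0):
    """Woodworth spherical-head-model ITD for a distant source:
    ITD = (a / c) * (theta + sin(theta)), with theta the azimuth in radians."""
    theta = np.radians(azimuth_deg)
    return (head_radius / c) * (theta + np.sin(theta))

def ipd_cycles(azimuth_deg, freq_hz, **kwargs):
    """Interaural phase difference in cycles: IPD = f * ITD."""
    return freq_hz * shm_itd(azimuth_deg, **kwargs)

# Example: the smallest azimuth at which a 1.2-kHz tone crosses the 180-degree
# (0.5-cycle) iso-IPD boundary for this head size.
az = np.arange(0.0, 90.5, 0.5)
crossing = az[np.argmax(ipd_cycles(az, 1200.0) >= 0.5)]   # roughly 50 degrees
```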
Gordon, Karen A.; Deighton, Michael R.; Abbasalipour, Parvaneh; Papsin, Blake C.
2014-01-01
There are significant challenges to restoring binaural hearing to children who have been deaf from an early age. The uncoordinated and poor temporal information available from cochlear implants distorts perception of interaural timing differences normally important for sound localization and listening in noise. Moreover, binaural development can be compromised by bilateral and unilateral auditory deprivation. Here, we studied perception of both interaural level and timing differences in 79 children/adolescents using bilateral cochlear implants and 16 peers with normal hearing. They were asked on which side of their head they heard unilaterally or bilaterally presented click- or electrical pulse-trains. Interaural level cues were identified by most participants including adolescents with long periods of unilateral cochlear implant use and little bilateral implant experience. Interaural timing cues were not detected by new bilateral adolescent users, consistent with previous evidence. Evidence of binaural timing detection was, for the first time, found in children who had much longer implant experience but it was marked by poorer than normal sensitivity and abnormally strong dependence on current level differences between implants. In addition, children with prior unilateral implant use showed a higher proportion of responses to their first implanted sides than children implanted simultaneously. These data indicate that there are functional repercussions of developing binaural hearing through bilateral cochlear implants, particularly when provided sequentially; nonetheless, children have an opportunity to use these devices to hear better in noise and gain spatial hearing. PMID:25531107
Effectiveness of Interaural Delays Alone as Cues During Dynamic Sound Localization
NASA Technical Reports Server (NTRS)
Wenzel, Elizabeth M.; Null, Cynthia H. (Technical Monitor)
1996-01-01
The contribution of interaural time differences (ITDs) to the localization of virtual sound sources with and without head motion was examined. Listeners estimated the apparent azimuth, elevation and distance of virtual sources presented over headphones. Stimuli (3 sec., white noise) were synthesized from minimum-phase representations of nonindividualized head-related transfer functions (HRTFs); binaural magnitude spectra were derived from the minimum phase estimates and ITDs were represented as a pure delay. During dynamic conditions, listeners were encouraged to move their heads; head position was tracked and stimuli were synthesized in real time using a Convolvotron to simulate a stationary external sound source. Two synthesis conditions were tested: (1) both interaural level differences (ILDs) and ITDs correctly correlated with source location and head motion, (2) ITDs correct, no ILDs (flat magnitude spectrum). Head movements reduced azimuth confusions primarily when interaural cues were correctly correlated, although a smaller effect was also seen for ITDs alone. Externalization was generally poor for ITD-only conditions and was enhanced by head motion only for normal HRTFs. Overall the data suggest that, while ITDs alone can provide a significant cue for azimuth, the errors most commonly associated with virtual sources are reduced by location-dependent magnitude cues.
The effect of interaural fluctuation rate on correlation change discrimination.
Goupell, Matthew J; Litovsky, Ruth Y
2014-02-01
While bilateral cochlear implants (CIs) provide some binaural benefits, these benefits are limited compared to those observed in normal-hearing (NH) listeners. The large frequency-to-electrode allocation bandwidths (BWs) in CIs compared to auditory filter BWs in NH listeners increases the interaural fluctuation rate available for binaural unmasking, which may limit binaural benefits. The purpose of this work was to investigate the effect of interaural fluctuation rate on correlation change discrimination and binaural masking-level differences in NH listeners presented a CI simulation using a pulsed-sine vocoder. In experiment 1, correlation-change just-noticeable differences (JNDs) and tone-in-noise thresholds were measured for narrowband noises with different BWs and center frequencies (CFs). The results suggest that the BW, CF, and/or interaural fluctuation rate are important factors for correlation change discrimination. In experiment 2, the interaural fluctuation rate was systematically varied and dissociated from changes in BW and CF by using a pulsed-sine vocoder. Results indicated that the interaural fluctuation rate did not affect correlation change JNDs for correlated reference noises; however, slow interaural fluctuations increased correlation change JNDs for uncorrelated reference noises. In experiment 3, the BW, CF, and vocoder pulse rate were varied while interaural fluctuation rate was held constant. JNDs increased for increasing BW and decreased for increasing CF. In summary, relatively fast interaural fluctuation rates are not detrimental for detecting changes in interaural correlation. Thus, limiting factors to binaural benefits in CI listeners could be a result of other temporal and/or spectral deficiencies from electrical stimulation.
Smith, Rosanna C G; Price, Stephen R
2014-01-01
Sound source localization is critical to animal survival and for identification of auditory objects. We investigated the acuity with which humans localize low frequency, pure tone sounds using timing differences between the ears. These small differences in time, known as interaural time differences or ITDs, are identified in a manner that allows localization acuity of around 1° at the midline. Acuity, a relative measure of localization ability, displays a non-linear variation as sound sources are positioned more laterally. All species studied localize sounds best at the midline and progressively worse as the sound is located out towards the side. To understand why sound localization displays this variation with azimuthal angle, we took a first-principles, systemic, analytical approach to model localization acuity. We calculated how ITDs vary with sound frequency, head size and sound source location for humans. This allowed us to model ITD variation for previously published experimental acuity data and determine the distribution of just-noticeable differences in ITD. Our results suggest that the best-fit model is one whereby just-noticeable differences in ITDs are identified with uniform or close to uniform sensitivity across the physiological range. We discuss how our results have several implications for neural ITD processing in different species as well as development of the auditory system.
Binaural beats at high frequencies.
McFadden, D; Pasanen, E G
1975-10-24
Binaural beats have long been believed to be audible only at low frequencies, but an interaction reminiscent of a binaural beat can sometimes be heard when different two-tone complexes of high frequency are presented to the two ears. The primary requirement is that the frequency separation in the complex at one ear be slightly different from that in the other--that is, that there be a small interaural difference in the envelope periodicities. This finding is in accord with other recent demonstrations that the auditory system is not deaf to interaural time differences at high frequencies.
NASA Astrophysics Data System (ADS)
Dye, Raymond H.; Stellmack, Mark A.; Jurcin, Noah F.
2005-05-01
Two experiments measured listeners' abilities to weight information from different components in a complex of 553, 753, and 953 Hz. The goal was to determine whether or not the ability to adjust perceptual weights generalized across tasks. Weights were measured by binary logistic regression between stimulus values that were sampled from Gaussian distributions and listeners' responses. The first task was interaural time discrimination in which listeners judged the laterality of the target component. The second task was monaural level discrimination in which listeners indicated whether the level of the target component decreased or increased across two intervals. For both experiments, each of the three components served as the target. Ten listeners participated in both experiments. The results showed that those individuals who adjusted perceptual weights in the interaural time experiment could also do so in the monaural level discrimination task. The fact that the same individuals appeared to be analytic in both tasks is an indication that the weights measure the ability to attend to a particular region of the spectrum while ignoring other spectral regions.
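The weighting analysis described above can be reproduced in outline by regressing binary responses on per-trial component values; the sketch below uses scikit-learn's LogisticRegression on simulated trials, with the listener model, trial count, and scaling all assumptions rather than details from the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials = 1000

# Per-trial values (e.g., ITDs) of the 553-, 753-, and 953-Hz components,
# drawn from Gaussian distributions as in the study design; units are arbitrary.
values = rng.normal(0.0, 0.1, size=(n_trials, 3))

# Simulated listener that weights the middle (target) component heavily (assumption).
true_w = np.array([0.2, 1.0, 0.1])
p_resp = 1.0 / (1.0 + np.exp(-(values @ true_w) / 0.05))
responses = rng.random(n_trials) < p_resp           # binary judgments

# Relative perceptual weights = normalized logistic-regression coefficients.
fit = LogisticRegression().fit(values, responses)
weights = fit.coef_.ravel() / np.abs(fit.coef_).sum()
```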
Rakerd, Brad; Hartmann, William M.
2010-01-01
Binaural recordings of noise in rooms were used to determine the relationship between binaural coherence and the effectiveness of the interaural time difference (ITD) as a cue for human sound localization. Experiments showed a strong, monotonic relationship between the coherence and a listener’s ability to discriminate values of ITD. The relationship was found to be independent of other, widely varying acoustical properties of the rooms. However, the relationship varied dramatically with noise band center frequency. The ability to discriminate small ITD changes was greatest for a mid-frequency band. To achieve sensitivity comparable to mid-band, the binaural coherence had to be much larger at high frequency, where waveform ITD cues are imperceptible, and also at low frequency, where the binaural coherence in a room is necessarily large. Rivalry experiments with opposing interaural level differences (ILDs) found that the trading ratio between ITD and ILD increasingly favored the ILD as coherence decreased, suggesting that the perceptual weight of the ITD is decreased by increased reflections in rooms. PMID:21110600
Delphi, Maryam; Lotfi, M-Yones; Moossavi, Abdollah; Bakhshi, Enayatollah; Banimostafa, Maryam
2017-09-01
Previous studies have shown that interaural-time-difference (ITD) training can improve localization ability. Surprisingly little is, however, known about localization training vis-à-vis speech perception in noise based on interaural time difference in the envelope (ITD ENV). We sought to investigate the reliability of an ITD ENV-based training program in speech-in-noise perception among elderly individuals with normal hearing and speech-in-noise disorder. The present interventional study was performed during 2016. Sixteen elderly men between 55 and 65 years of age with the clinical diagnosis of normal hearing up to 2000 Hz and speech-in-noise perception disorder participated in this study. The localization training program was based on changes in ITD ENV. In order to evaluate the reliability of the training program, we performed speech-in-noise tests before the training program, immediately afterward, and then at 2 months' follow-up. The reliability of the training program was analyzed using the Friedman test and the SPSS software. Significant statistical differences were shown in the mean scores of speech-in-noise perception between the 3 time points (P=0.001). The results also indicated no difference in the mean scores of speech-in-noise perception between the 2 time points of immediately after the training program and 2 months' follow-up (P=0.212). The present study showed the reliability of ITD ENV-based localization training in elderly individuals with speech-in-noise perception disorder.
Sound localization in common vampire bats: Acuity and use of the binaural time cue by a small mammal
Heffner, Rickye S.; Koay, Gimseong; Heffner, Henry E.
2015-01-01
Passive sound-localization acuity and the ability to use binaural time and intensity cues were determined for the common vampire bat (Desmodus rotundus). The bats were tested using a conditioned suppression/avoidance procedure in which they drank defibrinated blood from a spout in the presence of sounds from their right, but stopped drinking (i.e., broke contact with the spout) whenever a sound came from their left, thereby avoiding a mild shock. The mean minimum audible angle for three bats for a 100-ms noise burst was 13.1°—within the range of thresholds for other bats and near the mean for mammals. Common vampire bats readily localized pure tones of 20 kHz and higher, indicating they could use interaural intensity-differences. They could also localize pure tones of 5 kHz and lower, thereby demonstrating the use of interaural time-differences, despite their very small maximum interaural distance of 60 μs. A comparison of the use of locus cues among mammals suggests several implications for the evolution of sound localization and its underlying anatomical and physiological mechanisms. PMID:25618037
Bibee, Jacqueline M.; Stecker, G. Christopher
2016-01-01
Spatial judgments are often dominated by low-frequency binaural cues and onset cues when binaural cues vary across the spectrum and duration, respectively, of a brief sound. This study combined these dimensions to assess the spectrotemporal weighting of binaural information. Listeners discriminated target interaural time difference (ITD) and interaural level difference (ILD) carried by the onset, offset, or full duration of a 4-kHz Gabor click train with a 2-ms period in the presence or absence of a diotic 500-Hz interferer tone. ITD and ILD thresholds were significantly elevated by the interferer in all conditions and by a similar amount to previous reports for static cues. Binaural interference was dramatically greater for ITD targets lacking onset cues compared to onset and full-duration conditions. Binaural interference for ILD targets was similar across dynamic-cue conditions. These effects mirror the baseline discriminability of dynamic ITD and ILD cues [Stecker and Brown. (2010). J. Acoust. Soc. Am. 127, 3092–3103], consistent with stronger interference for less-robust/higher-variance cues. The results support the view that binaural cue integration occurs simultaneously across multiple variance-weighted dimensions, including time and frequency. PMID:27794286
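A minimal sketch of the stimulus family described above: a 4-kHz Gabor click train with a 2-ms inter-click period in which the interaural cue can be confined to the leading clicks or applied throughout; the train length, envelope width, and cue values are assumptions.

```python
import numpy as np

fs = 48000
fc = 4000.0          # carrier frequency (Hz)
period = 0.002       # 2-ms inter-click interval
n_clicks = 16        # assumed train length
sigma = 2e-4         # Gaussian envelope width (s); assumed
itd, ild_db = 100e-6, 0.0   # example cue values; assumptions

t = np.arange(int(fs * (n_clicks * period + 0.01))) / fs

def gabor_click(tc):
    """Single Gabor click centred at time tc."""
    return np.exp(-0.5 * ((t - tc) / sigma) ** 2) * np.cos(2 * np.pi * fc * (t - tc))

left = np.zeros_like(t)
right = np.zeros_like(t)
gain_r = 10.0 ** (ild_db / 20.0)
for k in range(n_clicks):
    # 'Onset' condition: carry the target ITD only on the leading clicks;
    # a 'full-duration' condition would apply it to every click.
    itd_k = itd if k < 4 else 0.0
    left += gabor_click(0.005 + k * period)
    right += gain_r * gabor_click(0.005 + k * period + itd_k)
```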
Computation of interaural time difference in the owl's coincidence detector neurons.
Funabiki, Kazuo; Ashida, Go; Konishi, Masakazu
2011-10-26
Both the mammalian and avian auditory systems localize sound sources by computing the interaural time difference (ITD) with submillisecond accuracy. The neural circuits for this computation in birds consist of axonal delay lines and coincidence detector neurons. Here, we report the first in vivo intracellular recordings from coincidence detectors in the nucleus laminaris of barn owls. Binaural tonal stimuli induced sustained depolarizations (DC) and oscillating potentials whose waveforms reflected the stimulus. The amplitude of this sound analog potential (SAP) varied with ITD, whereas DC potentials did not. The amplitude of the SAP was correlated with firing rate in a linear fashion. Spike shape, synaptic noise, the amplitude of SAP, and responsiveness to current pulses differed between cells at different frequencies, suggesting an optimization strategy for sensing sound signals in neurons tuned to different frequencies.
Auditory cortical neurons are sensitive to static and continuously changing interaural phase cues.
Reale, R A; Brugge, J F
1990-10-01
1. The interaural-phase-difference (IPD) sensitivity of single neurons in the primary auditory (AI) cortex of the anesthetized cat was studied at stimulus frequencies ranging from 120 to 2,500 Hz. Best frequencies of the 43 AI cells sensitive to IPD ranged from 190 to 2,400 Hz. 2. A static IPD was produced when a pair of low-frequency tone bursts, differing from one another only in starting phase, were presented dichotically. The resulting IPD-sensitivity curves, which plot the number of discharges evoked by the binaural signal as a function of IPD, were deeply modulated circular functions. IPD functions were analyzed for their mean vector length (r) and mean interaural phase (phi). Phase sensitivity was relatively independent of best frequency (BF) but highly dependent on stimulus frequency. Regardless of BF or stimulus frequency within the excitatory response area the majority of cells fired maximally when the ipsilateral tone lagged the contralateral signal and fired least when this interaural-phase relationship was reversed. 3. Sensitivity to continuously changing IPD was studied by delivering to the two ears 3-s tones that differed slightly in frequency, resulting in a binaural beat. Approximately 26% of the cells that showed a sensitivity to static changes in IPD also showed a sensitivity to dynamically changing IPD created by this binaural tonal combination. The discharges were highly periodic and tightly synchronized to a particular phase of the binaural beat cycle. High synchrony can be attributed to the fact that cortical neurons typically respond to an excitatory stimulus with but a single spike that is often precisely timed to stimulus onset. A period histogram, binned on the binaural beat frequency (fb), produced an equivalent IPD-sensitivity function for dynamically changing interaural phase. For neurons sensitive to both static and continuously changing interaural phase there was good correspondence between their static (phi s) and dynamic (phi d) mean interaural phases. 4. All cells responding to a dynamically changing stimulus exhibited a linear relationship between mean interaural phase and beat frequency. Most cells responded equally well to binaural beats regardless of the initial direction of phase change. For a fixed duration stimulus, and at relatively low fb, the number of spikes evoked increased with increasing fb, reflecting the increasing number of effective stimulus cycles. At higher fb, AI neurons were unable to follow the rate at which the most effective phase repeated itself during the 3 s of stimulation.(ABSTRACT TRUNCATED AT 400 WORDS)
Zheng, Jianwen; Lu, Jing; Chen, Kai
2013-07-01
Several methods have been proposed for generating a focused source, usually a virtual monopole positioned between the loudspeaker array and the listener. The pre-echo problem of the common analytical methods has been noted, and the most concise way to cope with it is the angular weight method. In this paper, the interaural time and level differences, which are closely related to the localization cues of the human auditory system, are used to further investigate the effectiveness of focused source generation methods. It is demonstrated that the combination of the angular weight method and the numerical pressure matching method performs comparatively better in a given reconstructed area.
Laumen, Geneviève; Tollin, Daniel J.; Beutelmann, Rainer; Klump, Georg M.
2016-01-01
The effect of interaural time difference (ITD) and interaural level difference (ILD) on wave 4 of the binaural and summed monaural auditory brainstem responses (ABRs) as well as on the DN1 component of the binaural interaction component (BIC) of the ABR in young and old Mongolian gerbils (Meriones unguiculatus) was investigated. Measurements were made at a fixed sound pressure level (SPL) and a fixed level above visually detected ABR threshold to compensate for individual hearing threshold differences. In both stimulation modes (fixed SPL and fixed level above visually detected ABR threshold) an effect of ITD on the latency and the amplitude of wave 4 as well as of the BIC was observed. With increasing absolute ITD values BIC latencies were increased and amplitudes were decreased. ILD had a much smaller effect on these measures. Old animals showed a reduced amplitude of the DN1 component. This difference was due to a smaller wave 4 in the summed monaural ABRs of old animals compared to young animals whereas wave 4 in the binaural-evoked ABR showed no age-related difference. In old animals the small amplitude of the DN1 component was correlated with small binaural-evoked wave 1 and wave 3 amplitudes. This suggests that the reduced peripheral input affects central binaural processing which is reflected in the BIC. PMID:27173973
de Taillez, Tobias; Grimm, Giso; Kollmeier, Birger; Neher, Tobias
2018-06-01
Objective: To investigate the influence of an algorithm designed to enhance or magnify interaural difference cues on speech signals in noisy, spatially complex conditions using both technical and perceptual measurements. To also investigate the combination of interaural magnification (IM), monaural microphone directionality (DIR), and binaural coherence-based noise reduction (BC). Design: Speech-in-noise stimuli were generated using virtual acoustics. A computational model of binaural hearing was used to analyse the spatial effects of IM. Predicted speech quality changes and signal-to-noise-ratio (SNR) improvements were also considered. Additionally, a listening test was carried out to assess speech intelligibility and quality. Study sample: Listeners aged 65-79 years with and without sensorineural hearing loss (N = 10 each). Results: IM increased the horizontal separation of concurrent directional sound sources without introducing any major artefacts. In situations with diffuse noise, however, the interaural difference cues were distorted. Preprocessing the binaural input signals with DIR reduced distortion. IM influenced neither speech intelligibility nor speech quality. Conclusions: The IM algorithm tested here failed to improve speech perception in noise, probably because of the dispersion and inconsistent magnification of interaural difference cues in complex environments.
Interaural intensity difference limen.
DOT National Transportation Integrated Search
1967-05-01
The ability to judge the direction (the azimuth) of a sound source and to discriminate it from others is often essential to flyers. A major factor in the judgment process is the interaural intensity difference that the pilot can perceive. Three kinds...
Hu, Hongmei; Kollmeier, Birger; Dietz, Mathias
2016-01-01
Although bilateral cochlear implants (BiCIs) have succeeded in improving the spatial hearing performance of bilateral CI users, the overall performance is still not comparable with that of normal hearing listeners. The limited success can be partially caused by an interaural mismatch of the place of stimulation in each cochlea. Pairing matched interaural CI electrodes and stimulating them with the same frequency band is expected to facilitate binaural functions such as binaural fusion, localization, or spatial release from masking. It has been shown in animal experiments that the magnitude of the binaural interaction component (BIC) derived from wave eV decreases with increasing interaural place-of-stimulation mismatch. This motivated the investigation of the suitability of an electroencephalography-based objective electrode-frequency fitting procedure based on the BIC for BiCI users. To date, 61-channel monaural and binaural electrically evoked auditory brainstem response (eABR) recordings have been performed in 7 MED-EL BiCI subjects. These BiCI subjects were directly stimulated at 60% dynamic range with 19.9 pulses per second via a research platform provided by the University of Innsbruck (RIB II). The BIC was derived for several interaural electrode pairs by subtracting the response to binaural stimulation from the summed monaural responses. The BIC-based pairing results are compared with two psychoacoustic pairing methods: interaural pulse time difference sensitivity and interaural pitch matching. The results for all three methods, analyzed as a function of probe electrode, allow for determining a matched pair in more than half of the subjects, with a typical accuracy of ±1 electrode. This includes evidence for statistically significant tuning of the BIC as a function of probe electrode in human subjects. However, results across the three conditions were sometimes not consistent. These discrepancies will be discussed in the light of pitch plasticity versus less plastic brainstem processing.
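For orientation, the core of such a BIC derivation is a simple waveform subtraction. The sketch below follows the more common convention BIC = binaural − (left + right), which has the opposite sign to the subtraction order described above; the array names, sampling rate, and DN1 search window are assumptions, not the authors' analysis pipeline.

```python
import numpy as np

def binaural_interaction_component(abr_left, abr_right, abr_binaural):
    """BIC as the difference between the binaural evoked response and the sum
    of the two monaural responses (all inputs: averaged waveforms, equal length)."""
    return abr_binaural - (abr_left + abr_right)

def dn1_amplitude_and_latency(bic, fs, window_s=(0.003, 0.006)):
    """Amplitude and latency of the DN1 component, taken as the most negative
    deflection of the BIC inside a latency window (window limits are placeholders)."""
    start, stop = (int(t * fs) for t in window_s)
    idx = start + int(np.argmin(bic[start:stop]))
    return bic[idx], idx / fs
```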
Linear summation in the barn owl's brainstem underlies responses to interaural time differences.
Kuokkanen, Paula T; Ashida, Go; Carr, Catherine E; Wagner, Hermann; Kempter, Richard
2013-07-01
The neurophonic potential is a synchronized frequency-following extracellular field potential that can be recorded in the nucleus laminaris (NL) in the brainstem of the barn owl. Putative generators of the neurophonic are the afferent axons from the nucleus magnocellularis, synapses onto NL neurons, and spikes of NL neurons. The outputs of NL, i.e., action potentials of NL neurons, are only weakly represented in the neurophonic. Instead, the inputs to NL, i.e., afferent axons and their synaptic potentials, are the predominant origin of the neurophonic (Kuokkanen PT, Wagner H, Ashida G, Carr CE, Kempter R. J Neurophysiol 104: 2274-2290, 2010). Thus in NL the monaural inputs from the two brain sides converge and create a binaural neurophonic. If these monaural inputs contribute independently to the extracellular field, the response to binaural stimulation can be predicted from the sum of the responses to ipsi- and contralateral stimulation. We found that a linear summation model explains the dependence of the responses on interaural time difference as measured experimentally with binaural stimulation. The fit between model predictions and data was excellent, even without taking into account the nonlinear responses of NL coincidence detector neurons, although their firing rate and synchrony strongly depend on the interaural time difference. These results are consistent with the view that the afferent axons and their synaptic potentials in NL are the primary origin of the neurophonic.
Tellers, Philipp; Lehmann, Jessica; Führ, Hartmut; Wagner, Hermann
2017-09-01
Birds and mammals use the interaural time difference (ITD) for azimuthal sound localization. While barn owls can use the ITD of the stimulus carrier frequency over nearly their entire hearing range, mammals have to utilize the ITD of the stimulus envelope to extend the upper frequency limit of ITD-based sound localization. ITD is computed and processed in a dedicated neural circuit that consists of two pathways. In the barn owl, ITD representation is more complex in the forebrain than in the midbrain pathway because of the combination of two inputs that represent different ITDs. We speculated that one of the two inputs includes an envelope contribution. To estimate the envelope contribution, we recorded ITD response functions for correlated and anticorrelated noise stimuli in the barn owl's auditory arcopallium. Our findings indicate that barn owls, like mammals, represent both carrier and envelope ITDs of overlapping frequency ranges, supporting the hypothesis that carrier and envelope ITD-based localization are complementary beyond a mere extension of the upper frequency limit. NEW & NOTEWORTHY The results presented in this study show for the first time that the barn owl is able to extract and represent the interaural time difference (ITD) information conveyed by the envelope of a broadband acoustic signal. Like mammals, the barn owl extracts the ITD of the envelope and the carrier of a signal from the same frequency range. These results are of general interest, since they reinforce a trend found in neural signal processing across different species. Copyright © 2017 the American Physiological Society.
Asadollahi, Ali; Endler, Frank; Nelken, Israel; Wagner, Hermann
2010-08-01
Humans and animals are able to detect signals in noisy environments. Detection improves when the noise and the signal have different interaural phase relationships. The resulting improvement in detection threshold is called the binaural masking level difference. We investigated neural mechanisms underlying the release from masking in the inferior colliculus of barn owls in low-frequency and high-frequency neurons. A tone (signal) was presented either with the same interaural time difference as the noise (masker) or at a 180 degrees phase shift relative to the interaural time difference of the noise. The changes in firing rate induced by adding a signal of increasing level while the masker level was kept constant were well predicted by the relative responses to the masker and signal alone. In many cases, the response at the highest signal levels was dominated by the response to the signal alone, in spite of a significant response to the masker at low signal levels, suggesting the presence of occlusion. Detection thresholds and binaural masking level differences were widely distributed. The amount of release from masking increased with increasing masker level. Narrowly tuned neurons in the central nucleus of the inferior colliculus had detection thresholds that were lower than or similar to those of broadly tuned neurons in the external nucleus of the inferior colliculus. Broadly tuned neurons exhibited higher masking level differences than narrowband neurons. These data suggest that detection has different spectral requirements from localization.
Binaural sensitivity in children who use bilateral cochlear implants.
Ehlers, Erica; Goupell, Matthew J; Zheng, Yi; Godar, Shelly P; Litovsky, Ruth Y
2017-06-01
Children who are deaf and receive bilateral cochlear implants (BiCIs) perform better on spatial hearing tasks using bilateral rather than unilateral inputs; however, they underperform relative to normal-hearing (NH) peers. This gap in performance is multi-factorial, including the inability of speech processors to reliably deliver binaural cues. Although much is known regarding the binaural sensitivity of adults with BiCIs, less is known about how binaural sensitivity develops in children with BiCIs compared with NH children. Sixteen children (ages 9-17 years) were tested using synchronized research processors. Interaural time differences and interaural level differences (ITDs and ILDs, respectively) were presented to pairs of pitch-matched electrodes. Stimuli were 300-ms, 100-pulses-per-second, constant-amplitude pulse trains. In the first and second experiments, discrimination of interaural cues (either ITDs or ILDs) was measured using a two-interval left/right task. In the third experiment, subjects reported the perceived intracranial position of ITDs and ILDs in a lateralization task. All children demonstrated sensitivity to ILDs, possibly due to monaural level cues. Children who were born deaf had weak or absent sensitivity to ITDs; in contrast, ITD sensitivity was noted in children with previous exposure to acoustic hearing. Therefore, factors such as auditory deprivation, in particular the lack of early exposure to consistent timing differences between the ears, may delay the maturation of binaural circuits and cause insensitivity to binaural differences.
Gessele, Nikodemus; Garcia-Pino, Elisabet; Omerbašić, Damir; Park, Thomas J; Koch, Ursula
2016-01-01
Naked mole-rats (Heterocephalus glaber) live in large eusocial, underground colonies in narrow burrows and are exposed to a large repertoire of communication signals but negligible binaural sound localization cues, such as interaural time and intensity differences. We therefore asked whether monaural and binaural auditory brainstem nuclei in the naked mole-rat are differentially adjusted to this acoustic environment. Using antibody stainings against excitatory and inhibitory presynaptic structures, namely the vesicular glutamate transporter VGluT1 and the glycine transporter GlyT2, we identified all major auditory brainstem nuclei except the superior paraolivary nucleus in these animals. Naked mole-rats possess a well-structured medial superior olive, with a synaptic arrangement similar to that of interaural-time-difference-encoding animals. The neighboring lateral superior olive, which analyzes interaural intensity differences, is large and elongated, whereas the medial nucleus of the trapezoid body, which provides the contralateral inhibitory input to these binaural nuclei, is reduced in size. In contrast, the cochlear nucleus, the nuclei of the lateral lemniscus and the inferior colliculus are not considerably different when compared to other rodent species. Most interestingly, binaural auditory brainstem nuclei lack the membrane-bound hyperpolarization-activated channel HCN1, a voltage-gated ion channel that greatly contributes to the fast integration times in binaural nuclei of the superior olivary complex in other species. This suggests substantially lengthened membrane time constants and thus prolonged temporal integration of inputs in binaural auditory brainstem neurons, and might be linked to the severely degenerated sound localization abilities of these animals.
The Relative Contribution of Interaural Time and Magnitude Cues to Dynamic Sound Localization
NASA Technical Reports Server (NTRS)
Wenzel, Elizabeth M.; Null, Cynthia H. (Technical Monitor)
1995-01-01
This paper presents preliminary data from a study examining the relative contribution of interaural time differences (ITDs) and interaural level differences (ILDs) to the localization of virtual sound sources both with and without head motion. The listeners' task was to estimate the apparent direction and distance of virtual sources (broadband noise) presented over headphones. Stimuli were synthesized from minimum phase representations of nonindividualized directional transfer functions; binaural magnitude spectra were derived from the minimum phase estimates and ITDs were represented as a pure delay. During dynamic conditions, listeners were encouraged to move their heads; the position of the listener's head was tracked and the stimuli were synthesized in real time using a Convolvotron to simulate a stationary external sound source. ILDs and ITDs were either correctly or incorrectly correlated with head motion: (1) both ILDs and ITDs correctly correlated, (2) ILDs correct, ITD fixed at 0 deg azimuth and 0 deg elevation, (3) ITDs correct, ILDs fixed at 0 deg, 0 deg. Similar conditions were run for static conditions except that none of the cues changed with head motion. The data indicated that, compared to static conditions, head movements helped listeners to resolve confusions primarily when ILDs were correctly correlated, although a smaller effect was also seen for correct ITDs. Together with the results for static conditions, the data suggest that localization tends to be dominated by the cue that is most reliable or consistent, when reliability is defined by consistency over time as well as across frequency bands.
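The rendering scheme described above (binaural magnitude spectra realized as minimum-phase filters, with the ITD applied as a pure delay) can be sketched generically as follows. This is a standard homomorphic (cepstral) minimum-phase construction under assumed array shapes and a whole-sample delay, not the Convolvotron implementation used in the study.

```python
import numpy as np

def minimum_phase_ir(magnitude):
    """Build a minimum-phase impulse response from a one-sided magnitude
    response sampled at n_fft//2 + 1 frequencies (DC to Nyquist)."""
    n_fft = 2 * (len(magnitude) - 1)
    log_mag = np.log(np.maximum(magnitude, 1e-8))
    # Full conjugate-symmetric log-magnitude spectrum, then the real cepstrum
    full = np.concatenate([log_mag, log_mag[-2:0:-1]])
    cep = np.fft.ifft(full).real
    # Fold the cepstrum to obtain the minimum-phase cepstrum
    fold = np.zeros_like(cep)
    fold[0] = cep[0]
    fold[1:n_fft // 2] = 2 * cep[1:n_fft // 2]
    fold[n_fft // 2] = cep[n_fft // 2]
    # Back to the spectral domain and invert to a time-domain filter
    return np.fft.ifft(np.exp(np.fft.fft(fold))).real

def render_static_source(mono, mag_left, mag_right, itd_samples):
    """Convolve a mono signal with minimum-phase HRIRs and apply the ITD
    as a pure whole-sample delay to the lagging ear (positive ITD: right lags)."""
    hl, hr = minimum_phase_ir(mag_left), minimum_phase_ir(mag_right)
    left, right = np.convolve(mono, hl), np.convolve(mono, hr)
    if itd_samples >= 0:
        right = np.concatenate([np.zeros(itd_samples), right])
        left = np.concatenate([left, np.zeros(itd_samples)])
    else:
        left = np.concatenate([np.zeros(-itd_samples), left])
        right = np.concatenate([right, np.zeros(-itd_samples)])
    return left, right
```

Splitting each HRTF into a minimum-phase part plus a pure delay is what makes it possible to manipulate the ILD-bearing magnitude spectra and the ITD independently, as in the cue-conflict conditions described above.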
Binaural comodulation masking release: Effects of masker interaural correlation
Hall, Joseph W.; Buss, Emily; Grose, John H.
2007-01-01
Binaural detection was examined for a signal presented in a narrow band of masking noise centered on the signal frequency (the on-signal band, OSB), either alone or in the presence of flanking noise bands that were random or comodulated with respect to the OSB. The noise had an interaural correlation of 1.0 (No), 0.99 or 0.95. In No noise, random flanking bands worsened Sπ detection, and comodulated bands improved Sπ detection for some listeners but had no effect for others. For the 0.99 and 0.95 interaural correlation conditions, random flanking bands were less detrimental to Sπ detection and comodulated flanking bands improved Sπ detection for all listeners. Analyses based on signal detection theory indicated that the improvement in Sπ thresholds obtained with comodulated bands was not compatible with an optimal combination of monaural and binaural cues or with across-frequency analyses of dynamic interaural phase differences. Two accounts consistent with the improvement in Sπ thresholds in comodulated noise were (1) envelope information carried by the flanking bands improves the weighting of binaural cues associated with the signal; (2) the auditory system is sensitive to across-frequency differences in ongoing interaural correlation. PMID:17225415
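The graded interaural correlations used here (1.0, 0.99, 0.95) can be produced by mixing a common noise with an independent one in the appropriate proportion. The sketch below shows one standard construction, before any band-limiting; it is not necessarily the study's own stimulus-generation code.

```python
import numpy as np

def correlated_noise_pair(n_samples, rho, rng=None):
    """Return left/right Gaussian noise with interaural correlation close to rho."""
    rng = np.random.default_rng() if rng is None else rng
    common = rng.standard_normal(n_samples)
    independent = rng.standard_normal(n_samples)
    left = common
    # Mixing rule: rho * common + sqrt(1 - rho^2) * independent has unit variance
    # and correlation rho with the common component.
    right = rho * common + np.sqrt(1.0 - rho ** 2) * independent
    return left, right

left, right = correlated_noise_pair(44100, 0.95, np.random.default_rng(1))
print(np.corrcoef(left, right)[0, 1])   # close to 0.95
```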
2014-01-01
Localizing a sound source requires the auditory system to determine its direction and its distance. In general, hearing-impaired listeners do less well in experiments measuring localization performance than normal-hearing listeners, and hearing aids often exacerbate matters. This article summarizes the major experimental effects in direction (and its underlying cues of interaural time differences and interaural level differences) and distance for normal-hearing, hearing-impaired, and aided listeners. Front/back errors and the importance of self-motion are noted. The influence of vision on the localization of real-world sounds is emphasized, such as through the ventriloquist effect or the intriguing link between spatial hearing and visual attention. PMID:25492094
Masking Level Difference Response Norms from Learning Disabled Individuals.
ERIC Educational Resources Information Center
Waryas, Paul A.; Battin, R. Ray
1985-01-01
The study presents normative data on the Masking Level Difference (an improvement in the detection of signals that arises from interaural time/intensity differences between signals and masking noise) for 90 learning disabled persons (4-35 years old). It was concluded that the MLD may quickly screen for auditory processing problems. (CL)
Sound source localization inspired by the ears of the Ormia ochracea
NASA Astrophysics Data System (ADS)
Kuntzman, Michael L.; Hall, Neal A.
2014-07-01
The parasitoid fly Ormia ochracea has the remarkable ability to locate crickets using audible sound. This ability is remarkable because the fly's hearing mechanism spans only 1.5 mm, roughly 50× smaller than the wavelength of the sound emitted by the cricket. The hearing mechanism is, for all practical purposes, a point in space with no significant interaural time or level differences to draw from. It has been discovered that evolution has equipped the fly with a hearing mechanism that utilizes multiple vibration modes to amplify interaural time and level differences. Here, we present a fully integrated, man-made mimic of the Ormia's hearing mechanism capable of replicating the remarkable sound localization ability of this special fly. A silicon-micromachined prototype is presented which uses multiple piezoelectric sensing ports to simultaneously transduce two orthogonal vibration modes of the sensing structure, thereby enabling simultaneous measurement of sound pressure and pressure gradient.
NASA Astrophysics Data System (ADS)
SAKAI, H.; HOTEHAMA, T.; ANDO, Y.; PRODI, N.; POMPOLI, R.
2002-02-01
Measurements of railway noise were conducted using a diagnostic system for regional environmental noise. The system is based on a model of the human auditory-brain system. The model consists of the interplay of autocorrelators and an interaural crosscorrelator acting on the pressure signals arriving at the ear entrances, and takes into account the specialization of the left and right human cerebral hemispheres. Different kinds of railway noise were measured through the binaural microphones of a dummy head. To characterize the railway noise, physical factors extracted from the autocorrelation functions (ACF) and the interaural crosscorrelation function (IACF) of the binaural signals were used. The factors extracted from the ACF were (1) the energy at the origin of the delay, Φ(0), (2) the effective duration of the envelope of the normalized ACF, τe, (3) the delay time of the first peak, τ1, and (4) its amplitude, φ1. The factors extracted from the IACF were (5) the IACC, (6) the interaural delay time at which the IACC is defined, τIACC, and (7) the width of the IACF at τIACC, WIACC. The factor Φ(0) can be expressed as the geometric mean of the energies at the two ears, i.e., the listening level, LL.
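A rough sketch of how the IACF-based factors quoted above can be computed from a binaural recording follows. The ±1 ms lag range follows common practice in this literature, the positive maximum is taken for simplicity, and the 0.1 criterion used for WIACC is an assumption.

```python
import numpy as np

def iacf_factors(left, right, fs, max_lag_ms=1.0, delta=0.1):
    """Interaural cross-correlation factors from two equal-length ear signals."""
    max_lag = int(max_lag_ms * 1e-3 * fs)
    energy_l = np.sum(left ** 2)
    energy_r = np.sum(right ** 2)
    norm = np.sqrt(energy_l * energy_r)      # Phi(0): geometric mean of ear energies (listening level term)
    lags = np.arange(-max_lag, max_lag + 1)
    iacf = np.array([np.sum(left[max(0, -l):len(left) - max(0, l)]
                            * right[max(0, l):len(right) - max(0, -l)])
                     for l in lags]) / norm
    peak = int(np.argmax(iacf))
    iacc = iacf[peak]                        # (5) IACC: maximum of the normalized IACF
    tau_iacc = lags[peak] / fs               # (6) lag at which the IACC occurs
    # (7) W_IACC: width of the region around the peak above IACC - delta
    # (assumes a single main lobe, which holds for typical broadband signals)
    above = np.where(iacf >= iacc - delta)[0]
    w_iacc = (above[-1] - above[0]) / fs
    return {"Phi0": norm, "IACC": iacc, "tau_IACC": tau_iacc, "W_IACC": w_iacc}
```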
Detection and localization of sounds: Virtual tones and virtual reality
NASA Astrophysics Data System (ADS)
Zhang, Peter Xinya
Modern physiologically based binaural models employ internal delay lines in the pathways from the left and right peripheries to central processing nuclei. Various models apply the delay lines differently, and give different predictions for the detection of dichotic pitches, wherein listeners hear a virtual tone in the noise background. Two dichotic pitch stimuli (Huggins pitch and binaural coherence edge pitch) with low boundary frequencies were used to test the predictions of two different models. The results from five experiments show that the relative dichotic pitch strengths support the equalization-cancellation model and disfavor the central activity pattern (CAP) model. The CAP model makes predictions for the lateralization of Huggins pitch based on interaural time differences (ITD). By measuring human lateralization for Huggins pitches with two different types of phase boundaries (linear-phase and stepped-phase), and by comparing with lateralization of sine tones, it was shown that the lateralization of Huggins pitch stimuli is similar to that of the corresponding sine tones, and the lateralizations of Huggins pitch stimuli with the two different boundaries were even more similar to one another. The results agreed roughly with the CAP model predictions. Agreement was significantly improved by incorporating individualized scale factors and offsets into the model, and was further improved with a model including compression at large ITDs. Furthermore, ambiguous stimuli, with an interaural phase difference of 180 degrees, were consistently lateralized on the left or right based on individual asymmetries, which introduces the concept of "earedness". Interaural phase difference (IPD) and interaural time difference (ITD) are two different forms of temporal cues. With varying frequency, an auditory system based on IPD or ITD gives different quantitative predictions for lateralization. A lateralization experiment with sine tones tested whether the human auditory system is an IPD-meter or an ITD-meter. Listeners estimated the lateral positions of 50 sine tones with IPDs ranging from -150° to +150° and with different frequencies, all in the range where signal fine structure supports lateralization. The estimates indicated that listeners lateralize sine tones on the basis of ITD and not IPD. In order to distinguish between sound sources in front and in back, listeners use spectral cues caused by diffraction by the pinna, head, neck and torso. To study this effect, the VRX technique was developed based on transaural technology. The technique was successful in presenting desired spectra to listeners' ears with high accuracy up to 16 kHz. When presented with a real source and a simulated virtual signal, listeners in an anechoic room could not distinguish between them. Eleven experiments on discrimination between front and back sources were carried out in an anechoic room. The results show several findings. First, the results support a multiple-band comparison model, and disfavor a necessary-band(s) model. Second, it was found that preserving the spectral dips was more important than preserving the spectral peaks for successful front/back discrimination. Moreover, it was confirmed that neither monaural cues nor interaural spectral level difference cues were adequate for front/back discrimination. Furthermore, listeners' performance did not deteriorate when presented with sharpened spectra.
Finally, when presented with an interaural delay of less than 200 μs, listeners could successfully discriminate front from back, although the image was pulled to the side, which suggests that localization in the azimuthal plane and in the sagittal plane are independent within certain limits.
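The IPD-meter versus ITD-meter question above turns on the elementary relation ITD = IPD/(2πf): a fixed phase difference corresponds to a frequency-dependent time difference. A tiny helper (frequencies chosen arbitrarily) makes the divergence explicit:

```python
import numpy as np

def ipd_to_itd(ipd_degrees, freq_hz):
    """Convert an interaural phase difference (degrees) to the equivalent
    interaural time difference (seconds) at a given frequency."""
    return np.deg2rad(ipd_degrees) / (2 * np.pi * freq_hz)

# The same 90-degree IPD maps onto very different ITDs across frequency:
for f in (250, 500, 1000):
    print(f, "Hz ->", ipd_to_itd(90, f) * 1e6, "microseconds")  # 1000, 500, 250 us
```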
Time-Varying Distortions of Binaural Information by Bilateral Hearing Aids
Brown, Andrew D.; Rodriguez, Francisco A.; Portnuff, Cory D. F.; Goupell, Matthew J.; Tollin, Daniel J.
2016-01-01
In patients with bilateral hearing loss, the use of two hearing aids (HAs) offers the potential to restore the benefits of binaural hearing, including sound source localization and segregation. However, existing evidence suggests that bilateral HA users’ access to binaural information, namely interaural time and level differences (ITDs and ILDs), can be compromised by device processing. Our objective was to characterize the nature and magnitude of binaural distortions caused by modern digital behind-the-ear HAs using a variety of stimuli and HA program settings. Of particular interest was a common frequency-lowering algorithm known as nonlinear frequency compression, which has not previously been assessed for its effects on binaural information. A binaural beamforming algorithm was also assessed. Wide dynamic range compression was enabled in all programs. HAs were placed on a binaural manikin, and stimuli were presented from an arc of loudspeakers inside an anechoic chamber. Stimuli were broadband noise bursts, 10-Hz sinusoidally amplitude-modulated noise bursts, or consonant–vowel–consonant speech tokens. Binaural information was analyzed in terms of ITDs, ILDs, and interaural coherence, both for whole stimuli and in a time-varying sense (i.e., within a running temporal window) across four different frequency bands (1, 2, 4, and 6 kHz). Key findings were: (a) Nonlinear frequency compression caused distortions of high-frequency envelope ITDs and significantly reduced interaural coherence. (b) For modulated stimuli, all programs caused time-varying distortion of ILDs. (c) HAs altered the relationship between ITDs and ILDs, introducing large ITD–ILD conflicts in some cases. Potential perceptual consequences of measured distortions are discussed. PMID:27698258
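The time-varying analysis described above (interaural differences tracked within a running temporal window, separately per frequency band) can be approximated as in the following sketch for the ILD case; the filter design, band edges, and 20-ms window are assumptions rather than the authors' exact parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def short_time_ild(left, right, fs, band=(3000, 5000), win_ms=20):
    """Time-varying ILD (dB) in one frequency band, computed in a running window."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    l_band, r_band = sosfiltfilt(sos, left), sosfiltfilt(sos, right)
    win = int(win_ms * 1e-3 * fs)
    n_frames = len(left) // win
    ild = np.empty(n_frames)
    for i in range(n_frames):
        seg = slice(i * win, (i + 1) * win)
        rms_l = np.sqrt(np.mean(l_band[seg] ** 2) + 1e-20)
        rms_r = np.sqrt(np.mean(r_band[seg] ** 2) + 1e-20)
        ild[i] = 20 * np.log10(rms_l / rms_r)   # positive: left ear louder
    return ild
```

An analogous short-time cross-correlation per band would give the running ITD and interaural coherence.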
2012-06-01
a listener uses to interpret the auditory environment is interaural difference cues. Interaural difference cues are perceived binaurally, and they...signal in noise is not enough for accurate localization performance. Instead, it appears that both audibility and binaural signal processing of both...be interpreted differently among researchers. 4. Conclusions Accurately processed and interpreted binaural and monaural spatial cues enable a
Spitzer, M W; Semple, M N
1998-12-01
Transformation of binaural response properties in the ascending auditory pathway: influence of time-varying interaural phase disparity. J. Neurophysiol. 80: 3062-3076, 1998. Previous studies demonstrated that tuning of inferior colliculus (IC) neurons to interaural phase disparity (IPD) is often profoundly influenced by temporal variation of IPD, which simulates the binaural cue produced by a moving sound source. To determine whether sensitivity to simulated motion arises in IC or at an earlier stage of binaural processing we compared responses in IC with those of two major IPD-sensitive neuronal classes in the superior olivary complex (SOC), neurons whose discharges were phase locked (PL) to tonal stimuli and those that were nonphase locked (NPL). Time-varying IPD stimuli consisted of binaural beats, generated by presenting tones of slightly different frequencies to the two ears, and interaural phase modulation (IPM), generated by presenting a pure tone to one ear and a phase modulated tone to the other. IC neurons and NPL-SOC neurons were more sharply tuned to time-varying than to static IPD, whereas PL-SOC neurons were essentially uninfluenced by the mode of stimulus presentation. Preferred IPD was generally similar in responses to static and time-varying IPD for all unit populations. A few IC neurons were highly influenced by the direction and rate of simulated motion, but the major effect for most IC neurons and all SOC neurons was a linear shift of preferred IPD at high rates-attributable to response latency. Most IC and NPL-SOC neurons were strongly influenced by IPM stimuli simulating motion through restricted ranges of azimuth; simulated motion through partially overlapping azimuthal ranges elicited discharge profiles that were highly discontiguous, indicating that the response associated with a particular IPD is dependent on preceding portions of the stimulus. In contrast, PL-SOC responses tracked instantaneous IPD throughout the trajectory of simulated motion, resulting in highly contiguous discharge profiles for overlapping stimuli. This finding indicates that responses of PL-SOC units to time-varying IPD reflect only instantaneous IPD with no additional influence of dynamic stimulus attributes. Thus the neuronal representation of auditory spatial information undergoes a major transformation as interaural delay is initially processed in the SOC and subsequently reprocessed in IC. The finding that motion sensitivity in IC emerges from motion-insensitive input suggests that information about change of position is crucial to spatial processing at higher levels of the auditory system.
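For reference, a binaural-beat stimulus of the kind described here (tones differing slightly in frequency between the ears, so that the interaural phase difference sweeps continuously through a full cycle) can be generated with a few lines; the parameter values are illustrative.

```python
import numpy as np

def binaural_beat(f_carrier, beat_hz, duration_s, fs):
    """Tone of frequency f to one ear and f + beat_hz to the other, so that the
    interaural phase difference advances through a full cycle every 1/beat_hz seconds."""
    t = np.arange(int(duration_s * fs)) / fs
    left = np.sin(2 * np.pi * f_carrier * t)
    right = np.sin(2 * np.pi * (f_carrier + beat_hz) * t)
    return left, right

left, right = binaural_beat(500.0, 1.0, 2.0, 48000)   # 1-Hz beat as an example
```

An interaural phase modulation (IPM) stimulus instead applies an explicit phase-modulation term to one ear, which confines the sweep to a chosen range of IPDs rather than the full cycle.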
Majdak, Piotr; Laback, Bernhard; Baumgartner, Wolf-Dieter
2006-10-01
Bilateral cochlear implant (CI) listeners currently use stimulation strategies which encode interaural time differences (ITD) in the temporal envelope but which do not transmit ITD in the fine structure, due to the constant phase in the electric pulse train. To determine the utility of encoding ITD in the fine structure, ITD-based lateralization was investigated with four CI listeners and four normal hearing (NH) subjects listening to a simulation of electric stimulation. Lateralization discrimination was tested at different pulse rates for various combinations of independently controlled fine structure ITD and envelope ITD. Results for electric hearing show that the fine structure ITD had the strongest impact on lateralization at lower pulse rates, with significant effects for pulse rates up to 800 pulses per second. At higher pulse rates, lateralization discrimination depended solely on the envelope ITD. The data suggest that bilateral CI listeners benefit from transmitting fine structure ITD at lower pulse rates. However, there were strong interindividual differences: the better performing CI listeners performed comparably to the NH listeners.
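One way to picture the independently controlled fine-structure and envelope ITDs used in such simulations (a sketch under assumed parameters, not the exact stimuli of the study): delay the pulse timing by the fine-structure ITD while delaying only the onset/offset envelope by the envelope ITD.

```python
import numpy as np

def pulse_train(fs, duration_s, rate_pps, delay_s=0.0):
    """Constant-amplitude pulse train (one sample per pulse) with an onset delay."""
    n = int(duration_s * fs)
    train = np.zeros(n)
    times = np.arange(delay_s, duration_s, 1.0 / rate_pps)
    train[(times * fs).astype(int)] = 1.0
    return train

def binaural_pulse_stimulus(fs, duration_s, rate_pps, fs_itd_s, env_itd_s, ramp_s=0.01):
    """Left/right pulse trains whose pulse timing carries fs_itd_s and whose
    raised-cosine onset/offset envelope carries env_itd_s (right ear delayed)."""
    def envelope(delay_s_):
        n = int(duration_s * fs)
        env = np.ones(n)
        ramp = int(ramp_s * fs)
        env[:ramp] = 0.5 * (1 - np.cos(np.pi * np.arange(ramp) / ramp))
        env[-ramp:] = env[:ramp][::-1]
        # np.roll wraps the (near-zero) tail; negligible for delays much shorter
        # than the stimulus duration.
        return np.roll(env, int(delay_s_ * fs))
    left = pulse_train(fs, duration_s, rate_pps) * envelope(0.0)
    right = pulse_train(fs, duration_s, rate_pps, fs_itd_s) * envelope(env_itd_s)
    return left, right
```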
Failure of the precedence effect with a noise-band vocoder
Seeber, Bernhard U.; Hafter, Ervin R.
2011-01-01
The precedence effect (PE) describes the ability to localize a direct, leading sound correctly when its delayed copy (lag) is present, though not separately audible. The relative contribution of binaural cues in the temporal fine structure (TFS) of lead–lag signals was compared to that of interaural level differences (ILDs) and interaural time differences (ITDs) carried in the envelope. In a localization dominance paradigm participants indicated the spatial location of lead–lag stimuli processed with a binaural noise-band vocoder whose noise carriers introduced random TFS. The PE appeared for noise bursts of 10 ms duration, indicating dominance of envelope information. However, for three test words the PE often failed even at short lead–lag delays, producing two images, one toward the lead and one toward the lag. When interaural correlation in the carrier was increased, the images appeared more centered, but often remained split. Although previous studies suggest dominance of TFS cues, no image is lateralized in accord with the ITD in the TFS. An interpretation in the context of auditory scene analysis is proposed: By replacing the TFS with that of noise the auditory system loses the ability to fuse lead and lag into one object, and thus to show the PE. PMID:21428515
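A generic noise-band vocoder of the kind referred to, in which band envelopes are imposed on noise carriers so that the original temporal fine structure is replaced by random TFS, might look like the following; the band edges, filter order, and envelope extraction are illustrative, not those of the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocoder(signal, fs, band_edges=(100, 400, 1000, 2500, 6000), seed=0):
    """Replace the fine structure in each band with a noise carrier while
    keeping the band envelopes (a standard noise-band vocoder)."""
    rng = np.random.default_rng(seed)
    out = np.zeros(len(signal))
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        sos = butter(4, (lo, hi), btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)
        envelope = np.abs(hilbert(band))                       # band envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(len(signal)))
        out += envelope * carrier                              # envelope on noise TFS
    return out
```

For a binaural version, using the same carrier noise at both ears yields interaurally correlated TFS, whereas independent carriers per ear decorrelate it; that carrier-correlation manipulation is the one whose effect on the precedence effect is examined above.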
Evaluation of a method for enhancing interaural level differences at low frequencies.
Moore, Brian C J; Kolarik, Andrew; Stone, Michael A; Lee, Young-Woo
2016-10-01
A method (called binaural enhancement) for enhancing interaural level differences at low frequencies, based on estimates of interaural time differences, was developed and evaluated. Five conditions were compared, all using simulated hearing-aid processing: (1) Linear amplification with frequency-response shaping; (2) binaural enhancement combined with linear amplification and frequency-response shaping; (3) slow-acting four-channel amplitude compression with independent compression at the two ears (AGC4CH); (4) binaural enhancement combined with four-channel compression (BE-AGC4CH); and (5) four-channel compression but with the compression gains synchronized across ears. Ten hearing-impaired listeners were tested, and gains and compression ratios for each listener were set to match targets prescribed by the CAM2 fitting method. Stimuli were presented via headphones, using virtualization methods to simulate listening in a moderately reverberant room. The intelligibility of speech at ±60° azimuth in the presence of competing speech on the opposite side of the head at ±60° azimuth was not affected by the binaural enhancement processing. Sound localization was significantly better for condition BE-AGC4CH than for condition AGC4CH for a sentence, but not for broadband noise, lowpass noise, or lowpass amplitude-modulated noise. The results suggest that the binaural enhancement processing can improve localization for sounds with distinct envelope fluctuations.
Maps of interaural delay in the owl's nucleus laminaris
Shah, Sahil; McColgan, Thomas; Ashida, Go; Kuokkanen, Paula T.; Brill, Sandra; Kempter, Richard; Wagner, Hermann
2015-01-01
Axons from the nucleus magnocellularis form a presynaptic map of interaural time differences (ITDs) in the nucleus laminaris (NL). These inputs generate a field potential that varies systematically with recording position and can be used to measure the map of ITDs. In the barn owl, the representation of best ITD shifts with mediolateral position in NL, so as to form continuous, smoothly overlapping maps of ITD with iso-ITD contours that are not parallel to the NL border. Frontal space (0°) is, however, represented throughout and thus overrepresented with respect to the periphery. Measurements of presynaptic conduction delay, combined with a model of delay line conduction velocity, reveal that conduction delays can account for the mediolateral shifts in the map of ITD. PMID:26224776
Churchill, Tyler H; Kan, Alan; Goupell, Matthew J; Litovsky, Ruth Y
2014-09-01
Most contemporary cochlear implant (CI) processing strategies discard acoustic temporal fine structure (TFS) information, and this may contribute to the observed deficits in bilateral CI listeners' ability to localize sounds when compared to normal hearing listeners. Additionally, for best speech envelope representation, most contemporary speech processing strategies use high-rate carriers (≥900 Hz) that exceed the limit for interaural pulse timing to provide useful binaural information. Many bilateral CI listeners are sensitive to interaural time differences (ITDs) in low-rate (<300 Hz) constant-amplitude pulse trains. This study explored the trade-off between superior speech temporal envelope representation with high-rate carriers and binaural pulse timing sensitivity with low-rate carriers. The effects of carrier pulse rate and pulse timing on ITD discrimination, ITD lateralization, and speech recognition in quiet were examined in eight bilateral CI listeners. Stimuli consisted of speech tokens processed at different electrical stimulation rates, and pulse timings that either preserved or did not preserve acoustic TFS cues. Results showed that CI listeners were able to use low-rate pulse timing cues derived from acoustic TFS when presented redundantly on multiple electrodes for ITD discrimination and lateralization of speech stimuli.
Lateralization of noise-burst trains based on onset and ongoing interaural delays.
Freyman, Richard L; Balakrishnan, Uma; Zurek, Patrick M
2010-07-01
The lateralization of 250-ms trains of brief noise bursts was measured using an acoustic pointing technique. Stimuli were designed to assess the contribution of the interaural time delay (ITD) of the onset binaural burst relative to that of the ITDs in the ongoing part of the train. Lateralization was measured by listeners' adjustments of the ITD of a pointer stimulus, a 50-ms burst of noise, to match the lateral position of the target train. Results confirmed previous reports of lateralization dominance by the onset burst under conditions in which the train is composed of frozen tokens and the ongoing part contains multiple ambiguous interaural delays. In contrast, lateralization of ongoing trains in which fresh noise tokens were used for each set of two alternating (left-leading/right-leading) binaural pairs followed the ITD of the first pair in each set, regardless of the ITD of the onset burst of the entire stimulus and even when the onset burst was removed by gradual gating. This clear lateralization of a long-duration stimulus with ambiguous interaural delay cues suggests precedence mechanisms that involve not only the interaural cues at the beginning of a sound, but also the pattern of cues within an ongoing sound.
Kuwada, S; Batra, R; Stanford, T R
1989-02-01
1. We studied the effects of sodium pentobarbital on 22 neurons in the inferior colliculus (IC) of the rabbit. We recorded changes in the sensitivity of these neurons to monaural stimulation and to ongoing interaural time differences (ITDs). Monaural stimuli were tone bursts at or near the neuron's best frequency. The ITD was varied by delivering tones that differed by 1 Hz to the two ears, resulting in a 1-Hz binaural beat. 2. We assessed a neuron's ITD sensitivity by calculating three measures from the responses to binaural beats: composite delay, characteristic delay (CD), and characteristic phase (CP). To obtain the composite delay, we first derived period histograms by averaging, showing the response at each stimulating frequency over one period of the beat frequency. Second, the period histograms were replotted as a function of their equivalent interaural delay and then averaged together to yield the composite delay curve. Last, we calculated the composite peak or trough delay by fitting a parabola to the peak or trough of this composite curve. The composite delay curve represents the average response to all frequencies within the neuron's responsive range, and the peak reflects the interaural delay that produces the maximum response. The CD and CP were estimated from a weighted fit of a regression line to the plot of the mean interaural phase of the response versus the stimulating frequency. The slope and phase intercept of this regression line yielded estimates of CD and CP, respectively. These two quantities are thought to reflect the mechanism of ITD sensitivity, which involves the convergence of phase-locked inputs on a binaural cell. The CD estimates the difference in the time required for the two inputs to travel from either ear to this cell, whereas the CP reflects the interaural phase difference of the inputs at this cell. 3. Injections of sodium pentobarbital at subsurgical dosages (less than 25 mg/kg) almost invariably altered the neuron's response rate, response latency, response pattern, and spontaneous activity. Most of these changes were predictable and consistent with an enhancement of inhibitory influences. For example, if the earliest response was inhibitory, later excitation was usually reduced and latency increased. If the earliest response was excitatory, the level of this excitation was unaltered or slightly enhanced, and changes in latency were minimal. 4. The neuron's response pattern also changed in a predictable way. For example, a response with an inhibitory pause could either change to a response with a longer pause or to a response with an onset only.(ABSTRACT TRUNCATED AT 400 WORDS)
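The characteristic delay and characteristic phase described in point 2 amount to a weighted straight-line fit of mean interaural response phase against stimulating frequency, with the slope giving CD and the intercept giving CP. A bare-bones version follows; phase unwrapping and the choice of weights (e.g., response rate or vector strength) are left to the caller.

```python
import numpy as np

def characteristic_delay_phase(freqs_hz, mean_phase_cycles, weights):
    """Weighted linear fit of mean interaural phase (cycles) vs frequency (Hz).

    Phases must already be unwrapped consistently across frequency.
    np.polyfit squares the supplied weights, so passing sqrt(weights)
    minimizes sum(weights * residual**2).
    Returns (CD in seconds, CP in cycles)."""
    w = np.asarray(weights, dtype=float)
    slope, intercept = np.polyfit(freqs_hz, mean_phase_cycles, 1, w=np.sqrt(w))
    return slope, intercept

# Example: a pure characteristic delay of 200 microseconds and zero CP
freqs = np.array([400.0, 500.0, 600.0, 700.0, 800.0])
phase = 200e-6 * freqs                       # phase in cycles = CD * f + CP
cd, cp = characteristic_delay_phase(freqs, phase, np.ones_like(freqs))
print(cd * 1e6, cp)                          # ~200 us, ~0 cycles
```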
The sensitivity of hearing-impaired adults to acoustic attributes in simulated rooms
Whitmer, William M.; McShefferty, David; Akeroyd, Michael A.
2016-01-01
In previous studies we have shown that older hearing-impaired individuals are relatively insensitive to changes in the apparent width of broadband noises when those width changes were based on differences in interaural coherence [W. Whitmer, B. Seeber and M. Akeroyd, J. Acoust. Soc. Am. 132, 369-379 (2012)]. This insensitivity has been linked to senescent difficulties in resolving binaural fine-structure differences. It is therefore possible that interaural coherence, despite its widespread use, may not be the best acoustic surrogate of spatial perception for the aged and impaired. To test this, we simulated the room impulse responses for various acoustic scenarios with differing coherence and lateral (energy) fraction attributes using room modelling software (ODEON). Bilaterally impaired adult participants were asked to sketch the perceived size of speech tokens and musical excerpts that were convolved with these impulse responses and presented to them in a sound-dampened enclosure through a 24-loudspeaker array. Participants’ binaural acuity was also measured using an interaural phase discrimination task. Corroborating our previous findings, the results showed less sensitivity to interaural coherence in the auditory source width judgments of older hearing-impaired individuals, indicating that alternate acoustic measurements in the design of spaces for the elderly may be necessary. PMID:27213028
Keidser, Gitte; Rohrseitz, Kristin; Dillon, Harvey; Hamacher, Volkmar; Carter, Lyndal; Rass, Uwe; Convery, Elizabeth
2006-10-01
This study examined the effect that signal processing strategies used in modern hearing aids, such as multi-channel WDRC, noise reduction, and directional microphones have on interaural difference cues and horizontal localization performance relative to linear, time-invariant amplification. Twelve participants were bilaterally fitted with BTE devices. Horizontal localization testing using a 360 degrees loudspeaker array and broadband pulsed pink noise was performed two weeks, and two months, post-fitting. The effect of noise reduction was measured with a constant noise present at 80 degrees azimuth. Data were analysed independently in the left/right and front/back dimension and showed that of the three signal processing strategies, directional microphones had the most significant effect on horizontal localization performance and over time. Specifically, a cardioid microphone could decrease front/back errors over time, whereas left/right errors increased when different microphones were fitted to left and right ears. Front/back confusions were generally prominent. Objective measurements of interaural differences on KEMAR explained significant shifts in left/right errors. In conclusion, there is scope for improving the sense of localization in hearing aid users.
Tympanic-response transition in ICE: Dependence upon the interaural cavity's shape
NASA Astrophysics Data System (ADS)
van Hemmen, J. Leo
More than half of the terrestrial vertebrates have internally coupled ears (ICE), where an interaural cavity of some shape acoustically couples the eardrums. Hence what the animal's auditory system perceives is not the outside stimulus but the superposition of outside and internal pressure on the two eardrums, resulting in so-called internal time and level difference, iTD and iLD, which are keys to sound localization. For a cylindrical shape, it is known that on the frequency axis two domains with appreciably increased iTD and iLD values occur, segregated by the eardrum's fundamental frequency. Here we analyze the case where, as in nature, two or more canals couple the eardrums so that, by opening one of the canals, the animal can switch from coupled to two independent ears. We analyze the iTD/iLD transition and its dependence upon the interaural cavity's size and shape. As compared to a single connection, the iTD performance is preserved to a large extent. Nonetheless, the price to pay for freedom of choice is a reduced frequency range with high-iTD plateau. Work done in collaboration with A.P. Vedurmudi; partially supported by BCCN-Munich.
Sparreboom, Marloes; Beynon, Andy J; Snik, Ad F M; Mylanus, Emmanuel A M
2016-07-01
In many studies evaluating the effect of sequential bilateral cochlear implantation in congenitally deaf children, device use is not taken into account. In this study, however, device use was analyzed in relation to auditory brainstem maturation and speech recognition, which were measured in children with early-onset deafness, 5-6 years after bilateral cochlear implantation. We hypothesized that auditory brainstem maturation is mostly functionally driven by auditory stimulation and is therefore influenced by device use and not mainly by inter-implant delay. Twenty-one children participated and had inter-implant delays between 1.2 and 7.2 years. The electrically-evoked auditory brainstem response was measured for both implants separately. The differences in interaural wave V latency and speech recognition between the two implants were used in the analyses. Device use was measured with a Likert scale. Results showed that the less the second device is used, the larger the difference in interaural wave V latencies, which consequently leads to larger differences in interaural speech recognition. In children with early-onset deafness, after various periods of unilateral deprivation, full-time device use can lead to similar auditory brainstem responses and speech recognition between both ears. Therefore, device use should be considered a relevant factor contributing to outcomes after sequential bilateral cochlear implantation. These results suggest that, in children with early-onset deafness, the window between implantations within which symmetrical auditory pathway maturation can be obtained is longer than reported in the literature. The results, however, must be interpreted as preliminary findings, as actual device use with data logging was not yet available at the time of the study. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Gifford, René H.; Grantham, D. Wesley; Sheffield, Sterling W.; Davis, Timothy J.; Dwyer, Robert; Dorman, Michael F.
2014-01-01
The purpose of this study was to investigate horizontal plane localization and interaural time difference (ITD) thresholds for 14 adult cochlear implant recipients with hearing preservation in the implanted ear. Localization to broadband noise was assessed in an anechoic chamber with a 33-loudspeaker array extending from −90 to +90°. Three listening conditions were tested including bilateral hearing aids, bimodal (implant + contralateral hearing aid) and best aided (implant + bilateral hearing aids). ITD thresholds were assessed, under headphones, for low-frequency stimuli including a 250-Hz tone and bandpass noise (100–900 Hz). Localization, in overall rms error, was significantly poorer in the bimodal condition (mean: 60.2°) as compared to both bilateral hearing aids (mean: 46.1°) and the best-aided condition (mean: 43.4°). ITD thresholds were assessed for the same 14 adult implant recipients as well as 5 normal-hearing adults. ITD thresholds were highly variable across the implant recipients ranging from the range of normal to ITDs not present in real-world listening environments (range: 43 to over 1600 μs). ITD thresholds were significantly correlated with localization, the degree of interaural asymmetry in low-frequency hearing, and the degree of hearing preservation related benefit in the speech reception threshold (SRT). These data suggest that implant recipients with hearing preservation in the implanted ear have access to binaural cues and that the sensitivity to ITDs is significantly correlated with localization and degree of preserved hearing in the implanted ear. PMID:24607490
Xiong, Xiaorui R.; Liang, Feixue; Li, Haifu; Mesik, Lukas; Zhang, Ke K.; Polley, Daniel B.; Tao, Huizhong W.; Xiao, Zhongju; Zhang, Li I.
2013-01-01
Binaural integration in the central nucleus of inferior colliculus (ICC) plays a critical role in sound localization. However, its arithmetic nature and underlying synaptic mechanisms remain unclear. Here, we showed in mouse ICC neurons that the contralateral dominance is created by a “push-pull”-like mechanism, with contralaterally dominant excitation and more bilaterally balanced inhibition. Importantly, binaural spiking response is generated apparently from an ipsilaterally-mediated scaling of contralateral response, leaving frequency tuning unchanged. This scaling effect is attributed to a divisive attenuation of contralaterally-evoked synaptic excitation onto ICC neurons with their inhibition largely unaffected. Thus, a gain control mediates the linear transformation from monaural to binaural spike responses. The gain value is modulated by interaural level difference (ILD) primarily through scaling excitation to different levels. The ILD-dependent synaptic scaling and gain adjustment allow ICC neurons to dynamically encode interaural sound localization cues while maintaining an invariant representation of other independent sound attributes. PMID:23972599
2009-12-01
minimize onset transients. Broadband noise allows the observer access to both binaural cues (interaural differences in time of arrival and intensity) and...in the health sciences. 3rd ed. New York: Wiley; 1983. 18. Carmichel EL, Harris FP, Stoiy BH. Effects of binaural electronic hearing protectors on
Brock, Jon; Bzishvili, Samantha; Reid, Melanie; Hautus, Michael; Johnson, Blake W
2013-11-01
Atypical auditory perception is a widely recognised but poorly understood feature of autism. In the current study, we used magnetoencephalography to measure the brain responses of 10 autistic children as they listened passively to dichotic pitch stimuli, in which an illusory tone is generated by sub-millisecond inter-aural timing differences in white noise. Relative to control stimuli that contain no inter-aural timing differences, dichotic pitch stimuli typically elicit an object related negativity (ORN) response, associated with the perceptual segregation of the tone and the carrier noise into distinct auditory objects. Autistic children failed to demonstrate an ORN, suggesting a failure of segregation; however, comparison with the ORNs of age-matched typically developing controls narrowly failed to attain significance. More striking, the autistic children demonstrated a significant differential response to the pitch stimulus, peaking at around 50 ms. This was not present in the control group, nor has it been found in other groups tested using similar stimuli. This response may be a neural signature of atypical processing of pitch in at least some autistic individuals.
Threshold of the precedence effect in noise
Freyman, Richard L.; Griffin, Amanda M.; Zurek, Patrick M.
2014-01-01
Three effects that show a temporal asymmetry in the influence of interaural cues were studied through the addition of masking noise: (1) The transient precedence effect—the perceptual dominance of a leading transient over a similar lagging transient; (2) the ongoing precedence effect—lead dominance with lead and lag components that extend in time; and (3) the onset capture effect—determination by an onset transient of the lateral position of an otherwise ambiguous extended trailing sound. These three effects were evoked with noise-burst stimuli and were compared in the presence of masking noise. Using a diotic noise masker, detection thresholds for stimuli with lead/lag interaural delays of 0/500 μs were compared to those with 500/0 μs delays. None of the three effects showed a masking difference between those conditions, suggesting that none of the effects is operative at masked threshold. A task requiring the discrimination between stimuli with 500/0 and 0/500 μs interaural delays was used to determine the threshold for each effect in noise. The results showed similar thresholds in noise (10–13 dB SL) for the transient and ongoing precedence effects, but a much higher threshold (33 dB SL) for onset capture of an ambiguous trailing sound. PMID:24815272
Spatial cue reliability drives frequency tuning in the barn owl's midbrain
Cazettes, Fanny; Fischer, Brian J; Pena, Jose L
2014-01-01
The robust representation of the environment from unreliable sensory cues is vital for the efficient function of the brain. However, how the neural processing captures the most reliable cues is unknown. The interaural time difference (ITD) is the primary cue to localize sound in horizontal space. ITD is encoded in the firing rate of neurons that detect interaural phase difference (IPD). Due to the filtering effect of the head, IPD for a given location varies depending on the environmental context. We found that, in barn owls, at each location there is a frequency range where the head filtering yields the most reliable IPDs across contexts. Remarkably, the frequency tuning of space-specific neurons in the owl's midbrain varies with their preferred sound location, matching the range that carries the most reliable IPD. Thus, frequency tuning in the owl's space-specific neurons reflects a higher-order feature of the code that captures cue reliability. DOI: http://dx.doi.org/10.7554/eLife.04854.001 PMID:25531067
The spatial unmasking of speech: evidence for within-channel processing of interaural time delay.
Edmonds, Barrie A; Culling, John F
2005-05-01
Across-frequency processing by common interaural time delay (ITD) in spatial unmasking was investigated by measuring speech reception thresholds (SRTs) for high- and low-frequency bands of target speech presented against concurrent speech or a noise masker. Experiment 1 indicated that presenting one of these target bands with an ITD of +500 μs and the other with zero ITD (like the masker) provided some release from masking, but full binaural advantage was only measured when both target bands were given an ITD of +500 μs. Experiment 2 showed that full binaural advantage could also be achieved when the high- and low-frequency bands were presented with ITDs of equal but opposite magnitude (±500 μs). In experiment 3, the masker was also split into high- and low-frequency bands with ITDs of equal but opposite magnitude (±500 μs). The ITD of the low-frequency target band matched that of the high-frequency masking band and vice versa. SRTs indicated that, as long as the target and masker differed in ITD within each frequency band, full binaural advantage could be achieved. These results suggest that the mechanism underlying spatial unmasking exploits differences in ITD independently within each frequency channel.
NASA Astrophysics Data System (ADS)
Suzuki, Yôiti; Watanabe, Kanji; Iwaya, Yukio; Gyoba, Jiro; Takane, Shouichi
2005-04-01
Because head-related transfer functions (HRTFs), which govern sound localization, show strong individuality, sound localization systems based on synthesis of HRTFs require suitable HRTFs for individual listeners. However, it is impractical to obtain HRTFs for all listeners by measurement. Adjusting non-individualized HRTFs to a specific listener based on that listener's anthropometry might be a practical way to improve sound localization. This study first developed a new method to estimate interaural time differences (ITDs) using HRTFs. Correlations between ITDs and anthropometric parameters were then analyzed using the canonical correlation method. Results indicated that parameters relating to head size and to shoulder and ear positions are significant. Consequently, an attempt was made to express ITDs in terms of a listener's anthropometric data. In this process, the change of ITDs as a function of azimuth angle was parameterized as a sum of sine functions, and the parameters were analyzed using multiple regression analysis with the anthropometric parameters as explanatory variables. The predicted (individualized) ITDs were installed in the non-individualized HRTFs to evaluate sound localization performance. Results showed that individualization of ITDs improved horizontal sound localization.
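The sum-of-sines parameterization mentioned here lends itself to a short sketch. The Woodworth-style "measured" ITDs, the head radius, and the choice of three sine terms are all assumptions standing in for the study's measured data; in the study, the fitted coefficients would then be regressed on anthropometric parameters.

```python
import numpy as np

# Fit an ITD-versus-azimuth curve with a sum of sine terms by least squares.
az = np.radians(np.arange(-90, 91, 5))            # azimuth grid (rad)
head_radius = 0.0875                               # assumed head radius (m)
c = 343.0                                          # speed of sound (m/s)
itd_meas = (head_radius / c) * (az + np.sin(az))   # synthetic "measured" ITDs (s)

K = 3                                              # number of sine terms (assumed)
X = np.column_stack([np.sin(k * az) for k in range(1, K + 1)])
coeffs, *_ = np.linalg.lstsq(X, itd_meas, rcond=None)

itd_fit = X @ coeffs
print("sine coefficients (s):", coeffs)
print("max fit error: %.1f us" % (1e6 * np.max(np.abs(itd_fit - itd_meas))))
```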
Bernstein, Leslie R; Trahiotis, Constantine
2017-02-01
Interaural cross-correlation-based models of binaural processing have accounted successfully for a wide variety of binaural phenomena, including binaural detection, binaural discrimination, and measures of extents of laterality based on interaural temporal disparities, interaural intensitive disparities, and their combination. This report focuses on quantitative accounts of data obtained from binaural detection experiments published over five decades. Particular emphasis is placed on stimulus contexts for which commonly used correlation-based approaches fail to provide adequate explanations of the data. One such context concerns binaural detection of signals masked by certain noises that are narrow-band and/or interaurally partially correlated. It is shown that a cross-correlation-based model that includes stages of peripheral auditory processing can, when coupled with an appropriate decision variable, account well for a wide variety of classic and recently published binaural detection data including those that have, heretofore, proven to be problematic.
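A simplified illustration of the kind of correlation statistic such models build on is given below: the normalized interaural cross-correlation of a narrowband masker drops when an interaurally out-of-phase tone is added. The peripheral processing stages and the specific decision variable of the cited model are omitted, and all parameter values are arbitrary.

```python
import numpy as np

fs = 48000
t = np.arange(int(0.3 * fs)) / fs
rng = np.random.default_rng(0)

def narrowband_noise(bw=100.0, fc=500.0):
    # crude narrowband noise via FFT-domain masking
    spec = np.fft.rfft(rng.standard_normal(t.size))
    f = np.fft.rfftfreq(t.size, 1 / fs)
    spec[(f < fc - bw / 2) | (f > fc + bw / 2)] = 0.0
    x = np.fft.irfft(spec, t.size)
    return x / np.std(x)

masker = narrowband_noise()
signal = 0.3 * np.sin(2 * np.pi * 500 * t)   # tonal signal, antiphasic across ears

def iacc(left, right):
    return np.dot(left, right) / np.sqrt(np.dot(left, left) * np.dot(right, right))

print("masker alone (No):", iacc(masker, masker))                    # correlation = 1
print("masker + Spi tone:", iacc(masker + signal, masker - signal))  # correlation drops
```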
Nilsson, Mats E; Schenkman, Bo N
2016-02-01
Blind people use auditory information to locate sound sources and sound-reflecting objects (echolocation). Sound source localization benefits from the hearing system's ability to suppress distracting sound reflections, whereas echolocation would benefit from "unsuppressing" these reflections. To clarify how these potentially conflicting aspects of spatial hearing interact in blind versus sighted listeners, we measured discrimination thresholds for two binaural location cues: inter-aural level differences (ILDs) and inter-aural time differences (ITDs). The ILDs or ITDs were present in single clicks, in the leading component of click pairs, or in the lagging component of click pairs, exploiting processes related to both sound source localization and echolocation. We tested 23 blind (mean age = 54 y), 23 sighted-age-matched (mean age = 54 y), and 42 sighted-young (mean age = 26 y) listeners. The results suggested greater ILD sensitivity for blind than for sighted listeners. The blind group's superiority was particularly evident for ILD-lag-click discrimination, suggesting not only enhanced ILD sensitivity in general but also increased ability to unsuppress lagging clicks. This may be related to the blind person's experience of localizing reflected sounds, for which ILDs may be more efficient than ITDs. On the ITD-discrimination tasks, the blind listeners performed better than the sighted age-matched listeners, but not better than the sighted young listeners. ITD sensitivity declines with age, and the equal performance of the blind listeners compared to a group of substantially younger listeners is consistent with the notion that blind people's experience may offset age-related decline in ITD sensitivity. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
Mc Laughlin, Myles; Chabwine, Joelle Nsimire; van der Heijden, Marcel; Joris, Philip X
2008-10-01
To localize low-frequency sounds, humans rely on an interaural comparison of the temporally encoded sound waveform after peripheral filtering. This process can be compared with cross-correlation. For a broadband stimulus, after filtering, the correlation function has a damped oscillatory shape where the periodicity reflects the filter's center frequency and the damping reflects the bandwidth (BW). The physiological equivalent of the correlation function is the noise delay (ND) function, which is obtained from binaural cells by measuring response rate to broadband noise with varying interaural time delays (ITDs). For monaural neurons, delay functions are obtained by counting coincidences for varying delays across spike trains obtained to the same stimulus. Previously, we showed that BWs in monaural and binaural neurons were similar. However, earlier work showed that the damping of delay functions differs significantly between these two populations. Here, we address this paradox by looking at the role of sensitivity to changes in interaural correlation. We measured delay and correlation functions in the cat inferior colliculus (IC) and auditory nerve (AN). We find that, at a population level, AN and IC neurons with similar characteristic frequencies (CF) and BWs can have different responses to changes in correlation. Notably, binaural neurons often show compression, which is not found in the AN and which makes the shape of delay functions more invariant with CF at the level of the IC than at the AN. We conclude that binaural sensitivity is more dependent on correlation sensitivity than has hitherto been appreciated and that the mechanisms underlying correlation sensitivity should be addressed in future studies.
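The monaural delay function obtained by counting coincidences across spike trains can be sketched as follows. The spike trains here are random placeholders and the coincidence window is an assumed value, so this only illustrates the counting procedure, not the cat data or the exact analysis of the study.

```python
import numpy as np

rng = np.random.default_rng(1)
trains = [np.sort(rng.uniform(0.0, 1.0, 150)) for _ in range(8)]   # 8 repetitions, 1 s each

delays = np.arange(-2e-3, 2e-3 + 1e-9, 5e-5)   # +/- 2 ms in 50-us steps
win = 50e-6                                     # coincidence window (assumption)

def delay_function(a, b):
    # count spike pairs whose time difference falls within the window of each delay
    diffs = (b[None, :] - a[:, None]).ravel()
    return np.array([np.count_nonzero(np.abs(diffs - d) <= win) for d in delays])

sac = sum(delay_function(trains[i], trains[j])
          for i in range(len(trains)) for j in range(len(trains)) if i != j)
print("delay of maximum coincidence count: %.0f us" % (delays[np.argmax(sac)] * 1e6))
```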
NASA Astrophysics Data System (ADS)
Kim, Sungyoung; Martens, William L.
2005-04-01
By industry standard (ITU-R Recommendation BS.775-1), multichannel stereophonic signals within the frequency range of up to 80 or 120 Hz may be mixed and delivered via a single driver (e.g., a subwoofer) without significant impairment of stereophonic sound quality. The assumption that stereophonic information within such low-frequency content is not significant was tested by measuring discrimination thresholds for changes in interaural cross-correlation (IACC) within spectral bands containing the lowest frequency components of low-pitch musical tones. Performances were recorded for three different musical instruments playing single notes ranging in fundamental frequency from 41 Hz to 110 Hz. The recordings, made using a multichannel microphone array composed of five DPA 4006 pressure microphones, were processed to produce a set of stimuli that varied in interaural cross-correlation (IACC) within a low-frequency band, but were otherwise identical in a higher-frequency band. This correlation processing was designed to have minimal effect upon other psychoacoustic variables such as loudness and timbre. The results show that changes in interaural cross-correlation (IACC) within low-frequency bands of low-pitch musical tones are most easily discriminated when decorrelated signals are presented via subwoofers positioned at extreme lateral angles (far from the median plane). [Work supported by VRQ.]
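One standard way to synthesize a signal pair with a target IACC, by mixing a shared noise with an independent noise, is sketched below. This is not necessarily the correlation processing used in the study; the equal-variance mixing rule and the target value are assumptions.

```python
import numpy as np

fs = 48000
n = fs  # 1 s of samples
rng = np.random.default_rng(2)
common = rng.standard_normal(n)   # component shared by both channels
indep = rng.standard_normal(n)    # component unique to the right channel

rho = 0.6                          # target interaural cross-correlation
left = common
right = rho * common + np.sqrt(1 - rho**2) * indep

iacc = np.corrcoef(left, right)[0, 1]
print("target IACC = %.2f, obtained IACC = %.2f" % (rho, iacc))
```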
Active localization of virtual sounds
NASA Technical Reports Server (NTRS)
Loomis, Jack M.; Hebert, C.; Cicinelli, J. G.
1991-01-01
We describe a virtual sound display built around a 12 MHz 80286 microcomputer and special purpose analog hardware. The display implements most of the primary cues for sound localization in the ear-level plane. Static information about direction is conveyed by interaural time differences and, for frequencies above 1800 Hz, by head sound shadow (interaural intensity differences) and pinna sound shadow. Static information about distance is conveyed by variation in sound pressure (first power law) for all frequencies, by additional attenuation in the higher frequencies (simulating atmospheric absorption), and by the proportion of direct to reverberant sound. When the user actively locomotes, the changing angular position of the source occasioned by head rotations provides further information about direction and the changing angular velocity produced by head translations (motion parallax) provides further information about distance. Judging both from informal observations by users and from objective data obtained in an experiment on homing to virtual and real sounds, we conclude that simple displays such as this are effective in creating the perception of external sounds to which subjects can home with accuracy and ease.
NASA Astrophysics Data System (ADS)
Aaronson, Neil L.
This dissertation deals with questions important to the problem of human sound source localization in rooms, starting with perceptual studies and moving on to physical measurements made in rooms. In Chapter 1, a perceptual study is performed relevant to a specific phenomenon: the effect of speech reflections occurring in the front-back dimension and the ability of humans to segregate that from unreflected speech. Distracters were presented from the same source as the target speech, a loudspeaker directly in front of the listener, and also from a loudspeaker directly behind the listener, delayed relative to the front loudspeaker. Steps were taken to minimize the contributions of binaural difference cues. For all delays within ±32 ms, a release from informational masking of about 2 dB occurred. This suggested that human listeners are able to segregate speech sources based on spatial cues, even with minimal binaural cues. In moving on to physical measurements in rooms, a method was sought for simultaneous measurement of room characteristics such as impulse response (IR) and reverberation time (RT60), and binaural parameters such as interaural time difference (ITD), interaural level difference (ILD), and the interaural cross-correlation function and coherence. Chapter 2 involves investigations into the usefulness of maximum length sequences (MLS) for these purposes. Comparisons to random telegraph noise (RTN) show that MLS performs better in the measurement of stationary and room transfer functions, IR, and RT60 by an order of magnitude in RMS percent error, even after Wiener filtering and exponential time-domain filtering have improved the accuracy of RTN measurements. Measurements were taken in real rooms in an effort to understand how the reverberant characteristics of rooms affect binaural parameters important to sound source localization. Chapter 3 deals with interaural coherence, a parameter important for localization and perception of auditory source width. MLS were used to measure waveform and envelope coherences in two rooms for various source distances and 0° azimuth through a head-and-torso simulator (KEMAR). A relationship is sought that relates these two types of coherence, since envelope coherence, while an important quantity, is generally less accessible than waveform coherence. A power law relationship is shown to exist between the two that works well within and across bands, for any source distance, and is robust to reverberant conditions of the room. Measurements of ITD, ILD, and coherence in rooms give insight into the way rooms affect these parameters, and in turn, the ability of listeners to localize sounds in rooms. Such measurements, along with room properties, are made and analyzed using MLS methods in Chapter 4. It was found that the pinnae cause incoherence for sound sources incident between 30° and 90°. In human listeners, this does not seem to adversely affect performance in lateralization experiments. The cause of poor coherence in rooms was studied as part of Chapter 4 as well. It was found that rooms affect coherence by introducing variance into the ITD spectra within the bands in which it is measured. A mathematical model to predict the interaural coherence within a band given the standard deviation of the ITD spectrum and the center frequency of the band gives an exponential relationship. This is found to work well in predicting measured coherence given ITD spectrum variance. The pinnae seem to affect the ITD spectrum in a similar way at incident sound angles for which coherence is poor in an anechoic environment.
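The abstract does not state the exponential model explicitly. One hedged guess at a closed form of this kind follows from treating the within-band ITDs as Gaussian with standard deviation sigma_ITD, in which case summing unit phasors at the band centre frequency predicts the coherence computed below; this is an illustrative assumption, not the dissertation's model.

```python
import numpy as np

# Predicted within-band coherence under an assumed Gaussian spread of ITDs.
def predicted_coherence(fc_hz, sigma_itd_s):
    return np.exp(-0.5 * (2 * np.pi * fc_hz * sigma_itd_s) ** 2)

for sigma_us in (20, 50, 100, 200):
    print("fc = 500 Hz, sigma_ITD = %3d us -> coherence = %.3f"
          % (sigma_us, predicted_coherence(500.0, sigma_us * 1e-6)))
```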
Zirn, Stefan; Arndt, Susan; Aschendorff, Antje; Laszig, Roland; Wesarg, Thomas
2016-09-22
The ability to detect a target signal masked by noise is improved in normal-hearing listeners when interaural phase differences (IPDs) between the ear signals exist either in the masker or in the signal. To improve binaural hearing in bilaterally implanted cochlear implant (BiCI) users, a coding strategy providing the best possible access to IPD is highly desirable. In this study, we compared two coding strategies in BiCI users provided with CI systems from MED-EL (Innsbruck, Austria). The CI systems were bilaterally programmed either with the fine structure processing strategy FS4 or with the constant rate strategy high definition continuous interleaved sampling (HDCIS). Familiarization periods between 6 and 12 weeks were considered. The effect of IPD was measured in two types of experiments: (a) IPD detection thresholds with tonal signals addressing mainly one apical interaural electrode pair and (b) with speech in noise in terms of binaural speech intelligibility level differences (BILD) addressing multiple electrodes bilaterally. The results in (a) showed improved IPD detection thresholds with FS4 compared with HDCIS in four out of the seven BiCI users. In contrast, 12 BiCI users in (b) showed similar BILD with FS4 (0.6 ± 1.9 dB) and HDCIS (0.5 ± 2.0 dB). However, no correlation between results in (a) and (b) both obtained with FS4 was found. In conclusion, the degree of IPD sensitivity determined on an apical interaural electrode pair was not an indicator for BILD based on bilateral multielectrode stimulation. © The Author(s) 2016.
Models of the electrically stimulated binaural system: A review.
Dietz, Mathias
2016-01-01
In an increasing number of countries, the standard treatment for deaf individuals is moving toward the implantation of two cochlear implants. Today's device technology and fitting procedures, however, treat the two implants as if they served two independent ears and brains. Many experimental studies have demonstrated that, after careful matching and balancing of left and right stimulation in controlled laboratory settings, most patients have almost normal sensitivity to interaural level differences and some sensitivity to interaural time differences (ITDs). The mechanisms underlying the limited ITD sensitivity are still poorly understood, and many different aspects may contribute. Recent pioneering computational approaches identified some of the functional implications the electric input imposes on the neural brainstem circuits. At the same time, these studies have raised new questions and demonstrated that further refinement of the model stages is necessary. They join the experimental studies in concluding that binaural device technology, binaural fitting, dedicated speech coding strategies, and binaural signal processing algorithms are the components still missing to maximize the benefit of bilateral implantation. Within this review, the existing models of the electrically stimulated binaural system are explained, compared, and discussed both from the viewpoint of a "CI device with auditory system" and from that of neurophysiological research.
Akeroyd, Michael A; Chambers, John; Bullock, David; Palmer, Alan R; Summerfield, A Quentin; Nelson, Philip A; Gatehouse, Stuart
2007-02-01
Cross-talk cancellation is a method for synthesizing virtual auditory space using loudspeakers. One implementation is the "Optimal Source Distribution" technique [T. Takeuchi and P. Nelson, J. Acoust. Soc. Am. 112, 2786-2797 (2002)], in which the audio bandwidth is split across three pairs of loudspeakers, placed at azimuths of ±90 degrees, ±15 degrees, and ±3 degrees, conveying low, mid, and high frequencies, respectively. A computational simulation of this system was developed and verified against measurements made on an acoustic system using a manikin. Both the acoustic system and the simulation gave a wideband average cancellation of almost 25 dB. The simulation showed that when there was a mismatch between the head-related transfer functions used to set up the system and those of the final listener, the cancellation was reduced to an average of 13 dB. Moreover, in this case the interaural time differences and interaural level differences delivered by the simulation of the optimal source distribution (OSD) system often differed from the target values. It is concluded that only when the OSD system is set up with "matched" head-related transfer functions can it deliver accurate binaural cues.
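The core of a cross-talk canceller can be sketched as a regularized inversion of the 2x2 loudspeaker-to-ear transfer matrix at each frequency. The placeholder transfer functions and the regularization constant below are assumptions, and the sketch does not implement the three-way band splitting of the OSD technique.

```python
import numpy as np

nfft = 512
rng = np.random.default_rng(3)
# placeholder plant: H[f] maps the two loudspeaker signals to the two ear signals
H = (rng.standard_normal((nfft // 2 + 1, 2, 2))
     + 1j * rng.standard_normal((nfft // 2 + 1, 2, 2)))

beta = 1e-3                                # regularization constant (assumed)
eye = np.eye(2)
# regularized inverse C = (H^H H + beta I)^-1 H^H at every frequency bin
C = np.array([np.linalg.solve(Hf.conj().T @ Hf + beta * eye, Hf.conj().T) for Hf in H])

# cancellation check: H @ C should approximate the identity at each frequency
err = np.max(np.abs(H @ C - eye))
print("worst-case deviation from identity: %.3g" % err)
```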
Monaghan, Jessica J. M.; Seeber, Bernhard U.
2017-01-01
The ability of normal-hearing (NH) listeners to exploit interaural time difference (ITD) cues conveyed in the modulated envelopes of high-frequency sounds is poor compared to ITD cues transmitted in the temporal fine structure at low frequencies. Sensitivity to envelope ITDs is further degraded when envelopes become less steep, when modulation depth is reduced, and when envelopes become less similar between the ears, common factors when listening in reverberant environments. The vulnerability of envelope ITDs is particularly problematic for cochlear implant (CI) users, as they rely on information conveyed by slowly varying amplitude envelopes. Here, an approach to improve access to envelope ITDs for CIs is described in which, rather than attempting to reduce reverberation, the perceptual saliency of cues relating to the source is increased by selectively sharpening peaks in the amplitude envelope judged to contain reliable ITDs. Performance of the algorithm with room reverberation was assessed through simulating listening with bilateral CIs in headphone experiments with NH listeners. Relative to simulated standard CI processing, stimuli processed with the algorithm generated lower ITD discrimination thresholds and increased extents of laterality. Depending on parameterization, intelligibility was unchanged or somewhat reduced. The algorithm has the potential to improve spatial listening with CIs. PMID:27586742
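The general idea of making envelope peaks more salient can be illustrated with a simple power-law expansion of the Hilbert envelope. This is only a hedged stand-in under assumed parameters; it is not the peak-selection algorithm described in the abstract, whose details are not given here.

```python
import numpy as np
from scipy.signal import hilbert

fs = 16000
t = np.arange(int(0.1 * fs)) / fs
x = (1 + np.cos(2 * np.pi * 40 * t)) * np.sin(2 * np.pi * 4000 * t)  # SAM tone

analytic = hilbert(x)
env = np.abs(analytic)                     # amplitude envelope
carrier = np.cos(np.angle(analytic))       # fine structure

expo = 2.0                                 # expansion exponent (assumed)
env_sharp = env ** expo
env_sharp *= np.sqrt(np.mean(env ** 2) / np.mean(env_sharp ** 2))  # keep envelope RMS

y = env_sharp * carrier                    # re-impose the sharpened envelope
print("envelope peak-to-mean before: %.2f, after: %.2f"
      % (env.max() / env.mean(), env_sharp.max() / env_sharp.mean()))
```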
A circuit for detection of interaural time differences in the nucleus laminaris of turtles.
Willis, Katie L; Carr, Catherine E
2017-11-15
The physiological hearing range of turtles is approximately 50-1000 Hz, as determined by cochlear microphonics (Wever and Vernon, 1956a). These low frequencies can constrain sound localization, particularly in red-eared slider turtles, which are freshwater turtles with small heads and isolated middle ears. To determine if these turtles were sensitive to interaural time differences (ITDs), we investigated the connections and physiology of their auditory brainstem nuclei. Tract tracing experiments showed that cranial nerve VIII bifurcated to terminate in the first-order nucleus magnocellularis (NM) and nucleus angularis (NA), and the NM projected bilaterally to the nucleus laminaris (NL). As the NL received inputs from each side, we developed an isolated head preparation to examine responses to binaural auditory stimulation. Magnocellularis and laminaris units responded to frequencies from 100 to 600 Hz, and phase-locked reliably to the auditory stimulus. Responses from the NL were binaural, and sensitive to ITD. Measures of characteristic delay revealed best ITDs around ±200 μs, and NL neurons typically had characteristic phases close to 0, consistent with binaural excitation. Thus, turtles encode ITDs within their physiological range, and their auditory brainstem nuclei have similar connections and cell types to other reptiles. © 2017. Published by The Company of Biologists Ltd.
Franken, Tom P; Joris, Philip X; Smith, Philip H
2018-06-14
The brainstem's lateral superior olive (LSO) is thought to be crucial for localizing high-frequency sounds by coding interaural sound level differences (ILD). Its neurons weigh contralateral inhibition against ipsilateral excitation, making their firing rate a function of the azimuthal position of a sound source. Since the very first in vivo recordings, LSO principal neurons have been reported to give sustained and temporally integrating 'chopper' responses to sustained sounds. Neurons with transient responses were observed but largely ignored and even considered a sign of pathology. Using the Mongolian gerbil as a model system, we have obtained the first in vivo patch clamp recordings from labeled LSO neurons and find that principal LSO neurons, the most numerous projection neurons of this nucleus, only respond at sound onset and show fast membrane features suggesting an importance for timing. These results provide a new framework to interpret previously puzzling features of this circuit. © 2018, Franken et al.
Mechanisms for Adjusting Interaural Time Differences to Achieve Binaural Coincidence Detection
Seidl, Armin H.; Rubel, Edwin W; Harris, David M.
2010-01-01
Understanding binaural perception requires detailed analyses of the neural circuitry responsible for the computation of interaural time differences (ITDs). In the avian brainstem, this circuit consists of internal axonal delay lines innervating an array of coincidence detector neurons that encode external ITDs. Nucleus magnocellularis (NM) neurons project to the dorsal dendritic field of the ipsilateral nucleus laminaris (NL) and to the ventral field of the contralateral NL. Contralateral-projecting axons form a delay line system along a band of NL neurons. Binaural acoustic signals in the form of phase-locked action potentials from NM cells arrive at NL and establish a topographic map of sound source location along the azimuth. These pathways are assumed to represent a circuit similar to the Jeffress model of sound localization, establishing a place code along an isofrequency contour of NL. Three-dimensional measurements of axon lengths reveal major discrepancies with the current model; the temporal offset based on conduction length alone makes encoding of physiological ITDs impossible. However, axon diameter and distances between Nodes of Ranvier also influence signal propagation times along an axon. Our measurements of these parameters reveal that diameter and internode distance can compensate for the temporal offset inferred from axon lengths alone. Together with other recent studies these unexpected results should inspire new thinking on the cellular biology, evolution and plasticity of the circuitry underlying low frequency sound localization in both birds and mammals. PMID:20053889
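A back-of-envelope sketch of the compensation argument: if conduction velocity scales with axon diameter and internode spacing, a longer contralateral branch can still deliver spikes nearly simultaneously with the shorter ipsilateral one. The scaling rule and all numbers below are assumptions for illustration only, not measurements from the study.

```python
# Assumed rule: velocity (m/s) ~ k_vel * diameter (um), further scaled by the
# internode distance relative to a reference spacing.
def conduction_delay_ms(length_mm, diameter_um, internode_um,
                        k_vel=6.0, ref_internode_um=100.0):
    velocity_m_per_s = k_vel * diameter_um * (internode_um / ref_internode_um)
    return (length_mm * 1e-3) / velocity_m_per_s * 1e3

ipsi = conduction_delay_ms(length_mm=2.0, diameter_um=2.0, internode_um=100.0)
contra = conduction_delay_ms(length_mm=4.0, diameter_um=3.0, internode_um=130.0)
print("ipsilateral delay   %.3f ms" % ipsi)
print("contralateral delay %.3f ms" % contra)
print("temporal offset     %.1f us" % ((contra - ipsi) * 1e3))
```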
Goupell, Matthew J
2015-03-01
Bilateral cochlear implant (CI) listeners can perform binaural tasks, but they are typically worse than normal-hearing (NH) listeners. To understand why this difference occurs and the mechanisms involved in processing dynamic binaural differences, interaural envelope correlation change discrimination sensitivity was measured in real and simulated CI users. In experiment 1, 11 CI (eight late deafened, three early deafened) and eight NH listeners were tested in an envelope correlation change discrimination task. Just noticeable differences (JNDs) were best for a matched place-of-stimulation and increased for an increasing mismatch. In experiment 2, attempts at intracranially centering stimuli did not produce lower JNDs. In experiment 3, the percentage of correct identifications of antiphasic carrier pulse trains modulated by correlated envelopes was measured as a function of mismatch and pulse rate. Sensitivity decreased for increasing mismatch and increasing pulse rate. The experiments led to two conclusions. First, envelope correlation change discrimination necessitates place-of-stimulation matched inputs. However, it is unclear if previous experience with acoustic hearing is necessary for envelope correlation change discrimination. Second, NH listeners presented with CI simulations demonstrated better performance than real CI listeners. If the simulations are realistic representations of electrical stimuli, real CI listeners appear to have difficulty processing interaural information in modulated signals.
2017-01-01
Binaural cues occurring in natural environments are frequently time varying, either from the motion of a sound source or through interactions between the cues produced by multiple sources. Yet, a broad understanding of how the auditory system processes dynamic binaural cues is still lacking. In the current study, we directly compared neural responses in the inferior colliculus (IC) of unanesthetized rabbits to broadband noise with time-varying interaural time differences (ITD) with responses to noise with sinusoidal amplitude modulation (SAM) over a wide range of modulation frequencies. On the basis of prior research, we hypothesized that the IC, one of the first stages to exhibit tuning of firing rate to modulation frequency, might use a common mechanism to encode time-varying information in general. Instead, we found weaker temporal coding for dynamic ITD compared with amplitude modulation and stronger effects of adaptation for amplitude modulation. The differences in temporal coding of dynamic ITD compared with SAM at the single-neuron level could be a neural correlate of “binaural sluggishness,” the inability to perceive fluctuations in time-varying binaural cues at high modulation frequencies, for which a physiological explanation has so far remained elusive. At ITD-variation frequencies of 64 Hz and above, where a temporal code was less effective, noise with a dynamic ITD could still be distinguished from noise with a constant ITD through differences in average firing rate in many neurons, suggesting a frequency-dependent tradeoff between rate and temporal coding of time-varying binaural information. NEW & NOTEWORTHY Humans use time-varying binaural cues to parse auditory scenes comprising multiple sound sources and reverberation. However, the neural mechanisms for doing so are poorly understood. Our results demonstrate a potential neural correlate for the reduced detectability of fluctuations in time-varying binaural information at high speeds, as occurs in reverberation. The results also suggest that the neural mechanisms for processing time-varying binaural and monaural cues are largely distinct. PMID:28381487
The effect of stimulus intensity on the right ear advantage in dichotic listening.
Hugdahl, Kenneth; Westerhausen, René; Alho, Kimmo; Medvedev, Svyatoslav; Hämäläinen, Heikki
2008-01-24
The dichotic listening test is a non-invasive behavioural technique for studying brain lateralization, and it has been shown that its results can be systematically modulated by varying the stimulation properties (bottom-up effects) or attentional instructions (top-down effects) of the testing procedure. The goal of the present study was to further investigate bottom-up modulation by examining the effect of differences in right- or left-ear stimulus intensity on the ear advantage. For this purpose, the interaural intensity difference was gradually varied in steps of 3 dB from -21 dB in favour of the left ear to +21 dB in favour of the right ear, also including a no-difference baseline condition. Thirty-three right-handed adult participants with normal hearing acuity were tested. The dichotic listening paradigm was based on consonant-vowel stimulus pairs. Only pairs with the same voicing (voiced or non-voiced) of the consonant sound were used. The results showed: (a) a significant right ear advantage (REA) for interaural intensity differences from +21 to -3 dB, (b) no ear advantage (NEA) for the -6 dB difference, and (c) a significant left ear advantage (LEA) for differences from -9 to -21 dB. It is concluded that the right ear advantage in dichotic listening to CV syllables withstands an interaural intensity difference of -9 dB before yielding to a significant left ear advantage. This finding could have implications for theories of auditory laterality and hemispheric asymmetry for phonological processing.
Encke, Jörg; Hemmert, Werner
2018-01-01
The mammalian auditory system is able to extract temporal and spectral features from sound signals at the two ears. One important cue for localization of low-frequency sound sources in the horizontal plane are inter-aural time differences (ITDs) which are first analyzed in the medial superior olive (MSO) in the brainstem. Neural recordings of ITD tuning curves at various stages along the auditory pathway suggest that ITDs in the mammalian brainstem are not represented in form of a Jeffress-type place code. An alternative is the hemispheric opponent-channel code, according to which ITDs are encoded as the difference in the responses of the MSO nuclei in the two hemispheres. In this study, we present a physiologically-plausible, spiking neuron network model of the mammalian MSO circuit and apply two different methods of extracting ITDs from arbitrary sound signals. The network model is driven by a functional model of the auditory periphery and physiological models of the cochlear nucleus and the MSO. Using a linear opponent-channel decoder, we show that the network is able to detect changes in ITD with a precision down to 10 μs and that the sensitivity of the decoder depends on the slope of the ITD-rate functions. A second approach uses an artificial neuronal network to predict ITDs directly from the spiking output of the MSO and ANF model. Using this predictor, we show that the MSO-network is able to reliably encode static and time-dependent ITDs over a large frequency range, also for complex signals like speech.
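A minimal sketch of the linear opponent-channel read-out idea follows; it decodes ITD from the difference between two idealized hemispheric rate functions. The sigmoid shapes and the linear calibration are assumptions, not the spiking network or periphery model described in the abstract.

```python
import numpy as np

itd_us = np.linspace(-500, 500, 101)
# idealized, mirror-symmetric hemispheric rate-vs-ITD functions (placeholders)
rate_left_hemi = 1.0 / (1.0 + np.exp(-(itd_us + 100.0) / 150.0))
rate_right_hemi = 1.0 / (1.0 + np.exp((itd_us - 100.0) / 150.0))

diff = rate_right_hemi - rate_left_hemi

# calibrate a linear mapping from channel difference to ITD on a training grid
slope, intercept = np.polyfit(diff, itd_us, 1)

# decode a few test ITDs (noise-free here, so errors reflect only the linear fit)
for true_itd in (-300.0, -50.0, 0.0, 200.0):
    rl = 1.0 / (1.0 + np.exp(-(true_itd + 100.0) / 150.0))
    rr = 1.0 / (1.0 + np.exp((true_itd - 100.0) / 150.0))
    est = slope * (rr - rl) + intercept
    print("true ITD %6.0f us -> decoded %6.0f us" % (true_itd, est))
```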
Effect of occlusion, directionality and age on horizontal localization
NASA Astrophysics Data System (ADS)
Alworth, Lynzee Nicole
Localization acuity of a given listener is dependent upon the ability to discriminate between interaural time and level disparities. Interaural time differences are encoded by low frequency information whereas interaural level differences are encoded by high frequency information. Much research has examined effects of hearing aid microphone technologies and occlusion separately, and prior studies have not evaluated age as a factor in localization acuity. Open-fit hearing instruments provide new earmold technologies and varying microphone capabilities; however, these instruments have yet to be evaluated with regard to horizontal localization acuity. Thus, the purpose of this study is to examine the effects of microphone configuration, type of dome in open-fit hearing instruments, and age on the horizontal localization ability of a given listener. Thirty adults participated in this study and were grouped based upon hearing sensitivity and age (young normal hearing, >50 years normal hearing, >50 hearing impaired). Each normal hearing participant completed one localization experiment (unaided/unamplified) where they listened to the stimulus "Baseball" and selected the point of origin. Hearing impaired listeners were fit with the same two receiver-in-the-ear hearing aids and same dome types, thus controlling for microphone technologies, type of dome, and fitting between trials. Hearing impaired listeners completed a total of 7 localization experiments (unaided/unamplified; open dome: omnidirectional, adaptive directional, fixed directional; micromold: omnidirectional, adaptive directional, fixed directional). Overall, results of this study indicate that age significantly affects horizontal localization ability, as younger adult listeners with normal hearing made significantly fewer localization errors than older adult listeners with normal hearing. Also, results revealed a significant difference in performance between dome types; however, upon further examination this difference was not significant. Therefore, results examining type of dome should be viewed with caution. Results examining microphone configuration and microphone configuration by dome type were not significant. Moreover, results evaluating performance relative to the unaided (unamplified) condition were not significant. Taken together, these results suggest that open-fit hearing instruments, regardless of microphone or dome type, do not degrade horizontal localization acuity in a given listener relative to their older-aged normal hearing counterparts in quiet environments.
Hauth, Christopher F; Brand, Thomas
2018-01-01
In studies investigating binaural processing in human listeners, relatively long and task-dependent time constants of a binaural window ranging from 10 ms to 250 ms have been observed. Such time constants are often thought to reflect "binaural sluggishness." In this study, the effect of binaural sluggishness on binaural unmasking of speech in stationary speech-shaped noise is investigated in 10 listeners with normal hearing. In order to design a masking signal with temporally varying binaural cues, the interaural phase difference of the noise was modulated sinusoidally with frequencies ranging from 0.25 Hz to 64 Hz. The lowest, that is the best, speech reception thresholds (SRTs) were observed for the lowest modulation frequency. SRTs increased with increasing modulation frequency up to 4 Hz. For higher modulation frequencies, SRTs remained constant in the range of 1 dB to 1.5 dB below the SRT determined in the diotic situation. The outcome of the experiment was simulated using a short-term binaural speech intelligibility model, which combines an equalization-cancellation (EC) model with the speech intelligibility index. This model segments the incoming signal into 23.2-ms time frames in order to predict release from masking in modulated noises. In order to predict the results from this study, the model required a further time constant applied to the EC mechanism representing binaural sluggishness. The best agreement with perceptual data was achieved using a temporal window of 200 ms in the EC mechanism.
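For orientation, the equalization-cancellation step that such a model applies frame by frame can be illustrated with a toy example. The 23.2-ms frame length comes from the abstract, while the masker ITD, the small "internal noise" that limits cancellation, and the white-noise placeholders for speech and masker are assumptions; this is not the full speech-intelligibility model.

```python
import numpy as np

fs = 16000
frame = int(0.0232 * fs)                      # 23.2-ms frames, as in the abstract
rng = np.random.default_rng(4)

n_frames = 20
target = rng.standard_normal(frame * n_frames)        # diotic target (placeholder for speech)
masker = 3.0 * rng.standard_normal(frame * n_frames)  # stronger masker
itd_samples = 8                                        # masker ITD (~500 us at 16 kHz)

left = target + masker
right = target + np.roll(masker, itd_samples)

# internal noise limits how completely the masker can be cancelled (assumption)
eps = 0.05
left_i = left + eps * rng.standard_normal(left.size)
right_i = right + eps * rng.standard_normal(right.size)

snr_in, snr_ec = [], []
for k in range(n_frames):
    sl = slice(k * frame, (k + 1) * frame)
    ec = np.roll(left_i, itd_samples)[sl] - right_i[sl]   # equalize (delay) then cancel
    res_target = np.roll(target, itd_samples)[sl] - target[sl]
    res_masker = ec - res_target
    snr_in.append(10 * np.log10(np.mean(target[sl] ** 2) / np.mean(masker[sl] ** 2)))
    snr_ec.append(10 * np.log10(np.mean(res_target ** 2) / np.mean(res_masker ** 2)))

print("mean input SNR     %.1f dB" % np.mean(snr_in))
print("mean SNR after EC  %.1f dB" % np.mean(snr_ec))
```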
Binaural hearing in children using Gaussian enveloped and transposed tones.
Ehlers, Erica; Kan, Alan; Winn, Matthew B; Stoelb, Corey; Litovsky, Ruth Y
2016-04-01
Children who use bilateral cochlear implants (BiCIs) show significantly poorer sound localization skills than their normal hearing (NH) peers. This difference has been attributed, in part, to the fact that cochlear implants (CIs) do not faithfully transmit interaural time differences (ITDs) and interaural level differences (ILDs), which are known to be important cues for sound localization. Interestingly, little is known about binaural sensitivity in NH children, in particular, with stimuli that constrain acoustic cues in a manner representative of CI processing. In order to better understand and evaluate binaural hearing in children with BiCIs, the authors first undertook a study on binaural sensitivity in NH children ages 8-10 and in adults. Experiments evaluated sound discrimination and lateralization using ITD and ILD cues, for stimuli with robust envelope cues, but poor representation of temporal fine structure. Stimuli were spondaic words, Gaussian-enveloped tone pulse trains (100 pulses per second), and transposed tones. Results showed that discrimination thresholds in children were adult-like (15-389 μs for ITDs and 0.5-6.0 dB for ILDs). However, lateralization based on the same binaural cues showed higher variability than seen in adults. Results are discussed in the context of factors that may be responsible for poor representation of binaural cues in bilaterally implanted children.
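A transposed tone of the kind used as a stimulus here can be generated along the following lines: a high-frequency carrier is multiplied by a half-wave rectified low-frequency sinusoid, and an envelope ITD is applied by delaying the modulator in one ear. The carrier and modulator frequencies and the ITD value are illustrative, and the low-pass filtering usually applied to the rectified modulator in practice is omitted.

```python
import numpy as np

fs = 48000
dur = 0.5
t = np.arange(int(dur * fs)) / fs

fc = 4000.0        # carrier frequency (Hz)
fm = 128.0         # modulator frequency (Hz)
itd = 500e-6       # envelope ITD (s)

def transposed(t, delay=0.0):
    mod = np.maximum(np.sin(2 * np.pi * fm * (t - delay)), 0.0)  # half-wave rectified
    return mod * np.sin(2 * np.pi * fc * t)

left = transposed(t)
right = transposed(t, delay=itd)   # envelope delayed in the right ear
stim = np.column_stack([left, right])
print("stimulus shape:", stim.shape, "peak amplitude:", np.abs(stim).max())
```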
NASA Astrophysics Data System (ADS)
Sakai, H.; Sato, S.; Prodi, N.; Pompoli, R.
2001-03-01
Measurements of aircraft noise were made at the airport "G. Marconi" in Bologna by using a measurement system for regional environmental noise. The system is based on the model of the human auditory-brain system, which is based on the interplay of autocorrelators and an interaural cross-correlator acting on the pressure signals arriving at the ear entrances, and takes into account the specialization of left and right human cerebral hemispheres (see reference [8]). Measurements were taken through dual microphones at ear entrances of a dummy head. The aircraft noise was characterized with the following physical factors calculated from the autocorrelation function (ACF) and interaural cross-correlation function (IACF) for binaural signals. From the ACF analysis, (1) the energy represented at the origin of delay, Φ(0), (2) the effective duration of the envelope of the normalized ACF, τe, (3) the delay time of the first peak, τ1, and (4) its amplitude, φ1, were extracted. From the IACF analysis, (5) the IACC, (6) the interaural delay time at which the IACC is defined, τIACC, and (7) the width of the IACF at τIACC, WIACC, were extracted. The factor Φ(0) can be represented as the geometrical mean of the energies at both ears. A noise source may be identified by these factors as timbre.
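As a rough illustration of how some of the listed factors can be computed from a two-channel recording (not the measurement system used in the study), the sketch below evaluates Φ(0) and the IACF factors on placeholder signals. The ±1 ms lag range and the 0.1-down width criterion are assumptions; the running-ACF factors τe, τ1, and φ1 would be extracted analogously from each channel's autocorrelation.

```python
import numpy as np

fs = 48000
rng = np.random.default_rng(5)
noise = rng.standard_normal(fs)
left = noise
right = 0.8 * np.roll(noise, int(0.0003 * fs)) + 0.2 * rng.standard_normal(fs)

phi0 = np.sqrt(np.mean(left**2) * np.mean(right**2))     # Phi(0): geometric mean of energies

max_lag = int(0.001 * fs)                                  # +/- 1 ms lag range (assumed)
lags = np.arange(-max_lag, max_lag + 1)
norm = np.sqrt(np.dot(left, left) * np.dot(right, right))
iacf = np.array([np.dot(left, np.roll(right, -lag)) for lag in lags]) / norm

iacc = iacf.max()
tau_iacc = lags[iacf.argmax()] / fs
w_iacc = np.count_nonzero(iacf >= iacc - 0.1) / fs         # width near the peak (0.1-down criterion)
print("Phi(0) = %.2f, IACC = %.2f, tau_IACC = %.0f us, W_IACC ~ %.0f us"
      % (phi0, iacc, tau_iacc * 1e6, w_iacc * 1e6))
```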
Binaural unmasking of multi-channel stimuli in bilateral cochlear implant users.
Van Deun, Lieselot; van Wieringen, Astrid; Francart, Tom; Büchner, Andreas; Lenarz, Thomas; Wouters, Jan
2011-10-01
Previous work suggests that bilateral cochlear implant users are sensitive to interaural cues if experimental speech processors are used to preserve accurate interaural information in the electrical stimulation pattern. Binaural unmasking occurs in adults and children when an interaural delay is applied to the envelope of a high-rate pulse train. Nevertheless, for speech perception, binaural unmasking benefits have not been demonstrated consistently, even with coordinated stimulation at both ears. The present study aimed at bridging the gap between basic psychophysical performance on binaural signal detection tasks on the one hand and binaural perception of speech in noise on the other hand. Therefore, binaural signal detection was expanded to multi-channel stimulation and biologically relevant interaural delays. A harmonic complex, consisting of three sinusoids (125, 250, and 375 Hz), was added to three 125-Hz-wide noise bands centered on the sinusoids. When an interaural delay of 700 μs was introduced, an average BMLD of 3 dB was established. Outcomes are promising in view of real-life benefits. Future research should investigate the generalization of the observed benefits for signal detection to speech perception in everyday listening situations and determine the importance of coordination of bilateral speech processors and accentuation of envelope cues.
The Relationship Between Intensity Coding and Binaural Sensitivity in Adults With Cochlear Implants.
Todd, Ann E; Goupell, Matthew J; Litovsky, Ruth Y
Many bilateral cochlear implant users show sensitivity to binaural information when stimulation is provided using a pair of synchronized electrodes. However, there is large variability in binaural sensitivity between and within participants across stimulation sites in the cochlea. It was hypothesized that within-participant variability in binaural sensitivity is in part affected by limitations and characteristics of the auditory periphery which may be reflected by monaural hearing performance. The objective of this study was to examine the relationship between monaural and binaural hearing performance within participants with bilateral cochlear implants. Binaural measures included dichotic signal detection and interaural time difference discrimination thresholds. Diotic signal detection thresholds were also measured. Monaural measures included dynamic range and amplitude modulation detection. In addition, loudness growth was compared between ears. Measures were made at three stimulation sites per listener. Greater binaural sensitivity was found with larger dynamic ranges. Poorer interaural time difference discrimination was found with larger difference between comfortable levels of the two ears. In addition, poorer diotic signal detection thresholds were found with larger differences between the dynamic ranges of the two ears. No relationship was found between amplitude modulation detection thresholds or symmetry of loudness growth and the binaural measures. The results suggest that some of the variability in binaural hearing performance within listeners across stimulation sites can be explained by factors nonspecific to binaural processing. The results are consistent with the idea that dynamic range and comfortable levels relate to peripheral neural survival and the width of the excitation pattern which could affect the fidelity with which central binaural nuclei process bilateral inputs.
Sutojo, Sarinah; van de Par, Steven; Schoenmaker, Esther
2018-06-01
In situations with competing talkers or in the presence of masking noise, speech intelligibility can be improved by spatially separating the target speaker from the interferers. This advantage is generally referred to as spatial release from masking (SRM) and different mechanisms have been suggested to explain it. One proposed mechanism to benefit from spatial cues is the binaural masking release, which is purely stimulus driven. According to this mechanism, the spatial benefit results from differences in the binaural cues of target and masker, which need to appear simultaneously in time and frequency to improve the signal detection. In an alternative proposed mechanism, the differences in the interaural cues improve the segregation of auditory streams, a process that involves top-down processing rather than being purely stimulus driven. Other than the cues that produce binaural masking release, the interaural cue differences between target and interferer required to improve stream segregation do not have to appear simultaneously in time and frequency. This study is concerned with the contribution of binaural masking release to SRM for three masker types that differ with respect to the amount of energetic masking they exert. Speech intelligibility was measured, employing a stimulus manipulation that inhibits binaural masking release, and analyzed with a metric to account for the number of better-ear glimpses. Results indicate that the contribution of the stimulus-driven binaural masking release plays a minor role while binaural stream segregation and the availability of glimpses in the better ear had a stronger influence on improving the speech intelligibility. This article is protected by copyright. All rights reserved.
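The better-ear glimpse count used as an analysis metric can be illustrated roughly as follows: in each time-frequency tile, take the larger of the left- and right-ear target-to-masker ratios and count the tiles above a criterion. The spectrogram front end, the 20-ms frames, and the 0-dB criterion are assumptions rather than the exact metric of the study, and the white-noise signals stand in for separately available target and masker recordings.

```python
import numpy as np
from scipy.signal import stft

fs = 16000
rng = np.random.default_rng(6)
dur = int(1.0 * fs)
target_l, target_r = rng.standard_normal(dur), rng.standard_normal(dur)
masker_l, masker_r = 2.0 * rng.standard_normal(dur), 0.5 * rng.standard_normal(dur)

def power_spec(x):
    _, _, Z = stft(x, fs=fs, nperseg=320, noverlap=160)   # ~20-ms frames
    return np.abs(Z) ** 2

tmr_l = 10 * np.log10(power_spec(target_l) / power_spec(masker_l))
tmr_r = 10 * np.log10(power_spec(target_r) / power_spec(masker_r))
better_ear = np.maximum(tmr_l, tmr_r)        # per-tile better-ear target-to-masker ratio

criterion_db = 0.0
glimpses = np.count_nonzero(better_ear > criterion_db)
print("better-ear glimpses: %d of %d tiles" % (glimpses, better_ear.size))
```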
JNDs of interaural time delay (ITD) of selected frequency bands in speech and music signals
NASA Astrophysics Data System (ADS)
Aliphas, Avner; Colburn, H. Steven; Ghitza, Oded
2002-05-01
JNDs of interaural time delay (ITD) of selected frequency bands in the presence of other frequency bands have been reported for noiseband stimuli [Zurek (1985); Trahiotis and Bernstein (1990)]. Similar measurements will be reported for speech and music signals. When stimuli are synthesized with bandpass/band-stop operations, performance with complex stimuli is similar to that with noisebands (JNDs in tens or hundreds of microseconds); however, the resulting waveforms, when viewed through a model of the auditory periphery, show distortions (irregularities in phase and level) at the boundaries of the target band of frequencies. An alternate synthesis method based upon group-delay filtering operations does not show these distortions and is being used for the current measurements. Preliminary measurements indicate that when music stimuli are created using the new techniques, JNDs of ITDs are increased significantly compared to previous studies, with values on the order of milliseconds.
Berger, Christopher C; Gonzalez-Franco, Mar; Tajadura-Jiménez, Ana; Florencio, Dinei; Zhang, Zhengyou
2018-01-01
Auditory spatial localization in humans is performed using a combination of interaural time differences, interaural level differences, as well as spectral cues provided by the geometry of the ear. To render spatialized sounds within a virtual reality (VR) headset, either individualized or generic Head Related Transfer Functions (HRTFs) are usually employed. The former require arduous calibrations, but enable accurate auditory source localization, which may lead to a heightened sense of presence within VR. The latter obviate the need for individualized calibrations, but result in less accurate auditory source localization. Previous research on auditory source localization in the real world suggests that our representation of acoustic space is highly plastic. In light of these findings, we investigated whether auditory source localization could be improved for users of generic HRTFs via cross-modal learning. The results show that pairing a dynamic auditory stimulus, with a spatio-temporally aligned visual counterpart, enabled users of generic HRTFs to improve subsequent auditory source localization. Exposure to the auditory stimulus alone or to asynchronous audiovisual stimuli did not improve auditory source localization. These findings have important implications for human perception as well as the development of VR systems as they indicate that generic HRTFs may be enough to enable good auditory source localization in VR.
On the ability of human listeners to distinguish between front and back
Zhang, Peter Xinya; Hartmann, William M.
2009-01-01
In order to determine whether a sound source is in front or in back, listeners can use location-dependent spectral cues caused by diffraction from their anatomy. This capability was studied using a precise virtual-reality technique (VRX) based on a transaural technology. Presented with a virtual baseline simulation accurate up to 16 kHz, listeners could not distinguish between the simulation and a real source. Experiments requiring listeners to discriminate between front and back locations were performed using controlled modifications of the baseline simulation to test hypotheses about the important spectral cues. The experiments concluded: (1) Front/back cues were not confined to any particular 1/3rd or 2/3rd octave frequency region. Often adequate cues were available in any of several disjoint frequency regions. (2) Spectral dips were more important than spectral peaks. (3) Neither monaural cues nor interaural spectral level difference cues were adequate. (4) Replacing baseline spectra by sharpened spectra had minimal effect on discrimination performance. (5) When presented with an interaural time difference less than 200 μs, which pulled the image to the side, listeners still successfully discriminated between front and back, suggesting that front/back discrimination is independent of azimuthal localization within certain limits. PMID:19900525
The acoustical bright spot and mislocalization of tones by human listeners
Macaulay, Eric J.; Hartmann, William M.; Rakerd, Brad
2010-01-01
Listeners attempted to localize 1500-Hz sine tones presented in free field from a loudspeaker array, spanning azimuths from 0° (straight ahead) to 90° (extreme right). During this task, the tone levels and phases were measured in the listeners’ ear canals. Because of the acoustical bright spot, measured interaural level differences (ILD) were non-monotonic functions of azimuth with a maximum near 55°. In a source-identification task, listeners’ localization decisions closely tracked the non-monotonic ILD, and thus became inaccurate at large azimuths. When listeners received training and feedback, their accuracy improved only slightly. In an azimuth-discrimination task, listeners decided whether a first sound was to the left or to the right of a second. The discrimination results also reflected the confusion caused by the non-monotonic ILD, and they could be predicted approximately by a listener’s identification results. When the sine tones were amplitude modulated or replaced by narrow bands of noise, interaural time difference (ITD) cues greatly reduced the confusion for most listeners, but not for all. Recognizing the important role of the bright spot requires a reevaluation of the transition between the low-frequency region for localization (mainly ITD) and the high-frequency region (mainly ILD). PMID:20329844
NASA Astrophysics Data System (ADS)
Miller, Robert E. (Robin)
2005-04-01
Perception of very low frequencies (VLF) below 125 Hz reproduced by large woofers and subwoofers (SW), encompassing 3 octaves of the 10 regarded as audible, has physiological and content aspects. Large room acoustics and vibrato add VLF fluctuations, modulating audible carrier frequencies at rates >1 Hz. By convention, sounds below 90 Hz produce no interaural cues useful for spatial perception or localization, so bass management redirects the VLF range from the main channels to a single (monaural) subwoofer channel, even when more than one subwoofer is used. Yet subjects claim they hear a difference between a single subwoofer channel and two (stereo bass). If recordings contain spatial VLF content, is it physiologically possible to perceive interaural time/phase differences (ITD/IPD) between 16 and 125 Hz? To what extent does this perception have a lifelike quality, and to what extent is it localization? If it amounts to a first approximation of localization, would binaural SWs allow a higher crossover frequency (and thus smaller satellite speakers)? Reported research supports the Jeffress model of ITD determination in brain structures and supports extending the accepted lower frequency limit of IPD. Meanwhile, uncorrelated very low frequencies exist in all tested multi-channel music and movie content. The audibility, recording, and reproduction of uncorrelated VLF are explored in theory and experiments.
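The Jeffress model cited above is usually pictured as a bank of internal delay lines feeding coincidence detectors, so that the internal delay producing the most coincidences indicates the stimulus ITD. The sketch below illustrates that idea for a very-low-frequency tone; it is only a cross-correlation analogue of the scheme, not the physiological model itself, and the sampling rate, delay range, tone frequency, and 500-microsecond ITD are invented for the example.

```python
import numpy as np

def jeffress_itd_estimate(left, right, fs, max_itd=800e-6):
    """Estimate ITD by correlating the ear signals across a bank of internal
    delays (a cross-correlation analogue of Jeffress-style coincidence
    detection). Parameter values are illustrative."""
    max_lag = int(round(max_itd * fs))
    lags = np.arange(-max_lag, max_lag + 1)
    # Each lag plays the role of one internal delay line / coincidence detector.
    corr = np.array([np.sum(left * np.roll(right, -lag)) for lag in lags])
    best = lags[np.argmax(corr)]
    return best / fs  # seconds; positive means the right-ear signal lags

# Example: an 80-Hz tone whose right-ear copy lags by 500 microseconds.
fs = 48000
t = np.arange(0, 0.5, 1 / fs)
itd = 500e-6
left = np.sin(2 * np.pi * 80 * t)
right = np.sin(2 * np.pi * 80 * (t - itd))
print(jeffress_itd_estimate(left, right, fs))  # ~5e-4 s
```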
Four-choice sound localization abilities of two Florida manatees, Trichechus manatus latirostris.
Colbert, Debborah E; Gaspard, Joseph C; Reep, Roger; Mann, David A; Bauer, Gordon B
2009-07-01
The absolute sound localization abilities of two Florida manatees (Trichechus manatus latirostris) were measured using a four-choice discrimination paradigm, with test locations positioned at 45 deg., 90 deg., 270 deg. and 315 deg. angles relative to subjects facing 0 deg. Three broadband signals were tested at four durations (200, 500, 1000, 3000 ms), including a stimulus that spanned a wide range of frequencies (0.2-20 kHz), one stimulus that was restricted to frequencies with wavelengths shorter than their interaural time distances (6-20 kHz) and one that was limited to those with wavelengths longer than their interaural time distances (0.2-2 kHz). Two 3000 ms tonal signals were tested, including a 4 kHz stimulus, which is the midpoint of the 2.5-5.9 kHz fundamental frequency range of manatee vocalizations and a 16 kHz stimulus, which is in the range of manatee best-hearing sensitivity. Percentage correct within the broadband conditions ranged from 79% to 93% for Subject 1 and from 51% to 93% for Subject 2. Both performed above chance with the tonal signals but had much lower accuracy than with broadband signals, with Subject 1 at 44% and 33% and Subject 2 at 49% and 32% at the 4 kHz and 16 kHz conditions, respectively. These results demonstrate that manatees are able to localize frequency bands with wavelengths that are both shorter and longer than their interaural time distances and suggest that they have the ability to localize both manatee vocalizations and recreational boat engine noises.
Aronoff, Justin M.; Freed, Daniel J.; Fisher, Laurel M.; Pal, Ivan; Soli, Sigfrid D.
2011-01-01
Objectives Cochlear implant microphones differ in placement, frequency response, and other characteristics such as whether they are directional. Although normal hearing individuals are often used as controls in studies examining cochlear implant users’ binaural benefits, the considerable differences across cochlear implant microphones make such comparisons potentially misleading. The goal of this study was to examine binaural benefits for speech perception in noise for normal hearing individuals using stimuli processed by head-related transfer functions (HRTFs) based on the different cochlear implant microphones. Design HRTFs were created for different cochlear implant microphones and used to test participants on the Hearing in Noise Test. Experiment 1 tested cochlear implant users and normal hearing individuals with HRTF-processed stimuli and with sound field testing to determine whether the HRTFs adequately simulated sound field testing. Experiment 2 determined the measurement error and performance-intensity function for the Hearing in Noise Test with normal hearing individuals listening to stimuli processed with the various HRTFs. Experiment 3 compared normal hearing listeners’ performance across HRTFs to determine how the HRTFs affected performance. Experiment 4 evaluated binaural benefits for normal hearing listeners using the various HRTFs, including ones that were modified to investigate the contributions of interaural time and level cues. Results The results indicated that the HRTFs adequately simulated sound field testing for the Hearing in Noise Test. They also demonstrated that the test-retest reliability and performance-intensity function were consistent across HRTFs, and that the measurement error for the test was 1.3 dB, with a change in signal-to-noise ratio of 1 dB reflecting a 10% change in intelligibility. There were significant differences in performance when using the various HRTFs, with particularly good thresholds for the HRTF based on the directional microphone when the speech and masker were spatially separated, emphasizing the importance of measuring binaural benefits separately for each HRTF. Evaluation of binaural benefits indicated that binaural squelch and spatial release from masking were found for all HRTFs and binaural summation was found for all but one HRTF, although binaural summation was less robust than the other types of binaural benefits. Additionally, the results indicated that neither interaural time nor level cues dominated binaural benefits for the normal hearing participants. Conclusions This study provides a means to measure the degree to which cochlear implant microphones affect acoustic hearing with respect to speech perception in noise. It also provides measures that can be used to evaluate the independent contributions of interaural time and level cues. These measures provide tools that can aid researchers in understanding and improving binaural benefits in acoustic hearing individuals listening via cochlear implant microphones. PMID:21412155
Azimuthal sound localization in the European starling (Sturnus vulgaris): I. Physical binaural cues.
Klump, G M; Larsen, O N
1992-02-01
The physical measurements reported here test whether the European starling (Sturnus vulgaris) evaluates the azimuth direction of a sound source with a peripheral auditory system composed of two acoustically coupled pressure-difference receivers (1) or of two decoupled pressure receivers (2). A directional pattern of sound intensity in the free-field was measured at the entrance of the auditory meatus using a probe microphone, and at the tympanum using laser vibrometry. The maximum differences in the sound-pressure level measured with the microphone between various speaker positions and the frontal speaker position were 2.4 dB at 1 and 2 kHz, 7.3 dB at 4 kHz, 9.2 dB at 6 kHz, and 10.9 dB at 8 kHz. The directional amplitude pattern measured by laser vibrometry did not differ from that measured with the microphone. Neither did the directional pattern of travel times to the ear. Measurements of the amplitude and phase transfer function of the starling's interaural pathway using a closed sound system were in accord with the results of the free-field measurements. In conclusion, although some sound transmission via the interaural canal occurred, the present experiments support the hypothesis 2 above that the starling's peripheral auditory system is best described as consisting of two functionally decoupled pressure receivers.
Binaural sensitivity changes between cortical on and off responses
Dahmen, Johannes C.; King, Andrew J.; Schnupp, Jan W. H.
2011-01-01
Neurons exhibiting on and off responses with different frequency tuning have previously been described in the primary auditory cortex (A1) of anesthetized and awake animals, but it is unknown whether other tuning properties, including sensitivity to binaural localization cues, also differ between on and off responses. We measured the sensitivity of A1 neurons in anesthetized ferrets to 1) interaural level differences (ILDs), using unmodulated broadband noise with varying ILDs and average binaural levels, and 2) interaural time delays (ITDs), using sinusoidally amplitude-modulated broadband noise with varying envelope ITDs. We also assessed fine-structure ITD sensitivity and frequency tuning, using pure-tone stimuli. Neurons most commonly responded to stimulus onset only, but purely off responses and on-off responses were also recorded. Of the units exhibiting significant binaural sensitivity nearly one-quarter showed binaural sensitivity in both on and off responses, but in almost all (∼97%) of these units the binaural tuning of the on responses differed significantly from that seen in the off responses. Moreover, averaged, normalized ILD and ITD tuning curves calculated from all units showing significant sensitivity to binaural cues indicated that on and off responses displayed different sensitivity patterns across the population. A principal component analysis of ITD response functions suggested a continuous cortical distribution of binaural sensitivity, rather than discrete response classes. Rather than reflecting a release from inhibition without any functional significance, we propose that binaural off responses may be important to cortical encoding of sound-source location. PMID:21562191
Intelligibility of speech in a virtual 3-D environment.
MacDonald, Justin A; Balakrishnan, J D; Orosz, Michael D; Karplus, Walter J
2002-01-01
In a simulated air traffic control task, improvement in the detection of auditory warnings when using virtual 3-D audio depended on the spatial configuration of the sounds. Performance improved substantially when two of four sources were placed to the left and the remaining two were placed to the right of the participant. Surprisingly, little or no benefits were observed for configurations involving the elevation or transverse (front/back) dimensions of virtual space, suggesting that position on the interaural (left/right) axis is the crucial factor to consider in auditory display design. The relative importance of interaural spacing effects was corroborated in a second, free-field (real space) experiment. Two additional experiments showed that (a) positioning signals to the side of the listener is superior to placing them in front even when two sounds are presented in the same location, and (b) the optimal distance on the interaural axis varies with the amplitude of the sounds. These results are well predicted by the behavior of an ideal observer under the different display conditions. This suggests that guidelines for auditory display design that allow for effective perception of speech information can be developed from an analysis of the physical sound patterns.
Bernstein, Leslie R; Trahiotis, Constantine
2014-02-01
Sensitivity to ongoing interaural temporal disparities (ITDs) was measured using bandpass-filtered pulse trains centered at 4600, 6500, or 9200 Hz. Save for minor differences in the exact center frequencies, those target stimuli were those employed by Majdak and Laback [J. Acoust. Soc. Am. 125, 3903-3913 (2009)]. At each center frequency, threshold ITD was measured for pulse repetition rates ranging from 64 to 609 Hz. The results and quantitative predictions by a cross-correlation-based model indicated that (1) at most pulse repetition rates, threshold ITD increased with center frequency, (2) the cutoff frequency of the putative envelope low-pass filter that determines sensitivity to ITD at high envelope rates appears to be inversely related to center frequency, and (3) both outcomes were accounted for by assuming that, independent of the center frequency, the listeners' decision variable was a constant criterion change in interaural correlation of the stimuli as processed internally. The finding of an inverse relation between center frequency and the envelope rate limitation, while consistent with much prior literature, runs counter to the conclusion reached by Majdak and Laback.
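The cross-correlation-based account summarized above can be illustrated with a toy calculation: each ear's signal is half-wave rectified, its envelope is smoothed by a low-pass filter, and the decision variable is the change in normalized interaural correlation of the processed waveforms. The sketch below is not the fitted model from the study; the filter shape, cutoff, carrier, modulation rate, and 100-microsecond envelope ITD are assumptions chosen only to show the computation.

```python
import numpy as np

def lowpass(x, fs, fc):
    """First-order IIR low-pass, a crude stand-in for the putative envelope filter."""
    alpha = 1.0 - np.exp(-2 * np.pi * fc / fs)
    y = np.empty_like(x)
    acc = 0.0
    for i, v in enumerate(x):
        acc += alpha * (v - acc)
        y[i] = acc
    return y

def processed_interaural_correlation(left, right, fs, env_cutoff):
    """Normalized interaural correlation after half-wave rectification and
    envelope low-pass filtering of each ear's signal."""
    pl = lowpass(np.maximum(left, 0.0), fs, env_cutoff)
    pr = lowpass(np.maximum(right, 0.0), fs, env_cutoff)
    pl -= pl.mean()
    pr -= pr.mean()
    return np.sum(pl * pr) / np.sqrt(np.sum(pl ** 2) * np.sum(pr ** 2))

# Illustrative stimulus: a 4.6-kHz carrier amplitude-modulated at 300 Hz with
# the envelope delayed by 100 microseconds in the right ear (values assumed).
fs = 48000
t = np.arange(0, 0.3, 1 / fs)
fc, fm, env_itd = 4600.0, 300.0, 100e-6
left = (1 + np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)
right = (1 + np.sin(2 * np.pi * fm * (t - env_itd))) * np.sin(2 * np.pi * fc * t)
print(processed_interaural_correlation(left, right, fs, env_cutoff=150.0))
```

Lowering `env_cutoff` attenuates fast envelope fluctuations before the binaural comparison, which is one way to express the envelope rate limitation discussed above.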
Post interaural neural net-based vowel recognition
NASA Astrophysics Data System (ADS)
Jouny, Ismail I.
2001-10-01
Interaural head-related transfer functions are used to process speech signatures prior to neural-net-based recognition. Data representing the head-related transfer function of a dummy head were collected at MIT and made available on the Internet. These data are used to pre-process vowel signatures to mimic the effects of the human ear on speech perception. Signatures representing various vowels of the English language are then presented to a multi-layer perceptron trained with the back-propagation algorithm for recognition. The focus of this paper is to assess the effects of the human interaural system on vowel recognition performance, particularly when using a classification system that mimics the human brain, such as a neural net.
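As a rough picture of the processing chain described above (HRTF-style pre-filtering followed by a back-propagation-trained multi-layer perceptron), the toy sketch below builds synthetic "vowels" from formant-like components, filters them through an invented head-related impulse response, and trains a small scikit-learn MLP on log-spectral features. The dummy-head data, formant values, impulse response, and network size here are all stand-ins, not the materials used in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def hrtf_preprocess(vowel_wave, hrir):
    """Filter a vowel waveform through an (assumed) head-related impulse
    response and return a log-magnitude spectral feature vector."""
    filtered = np.convolve(vowel_wave, hrir, mode="same")
    spectrum = np.abs(np.fft.rfft(filtered * np.hanning(len(filtered))))
    return np.log10(spectrum[:128] + 1e-9)

# Purely synthetic demo: two 'vowel' classes built from formant-like component
# sets, an invented 32-tap HRIR, and a small backprop-trained MLP.
rng = np.random.default_rng(0)
fs, n = 16000, 1024
t = np.arange(n) / fs
hrir = rng.standard_normal(32) * np.exp(-np.arange(32) / 8.0)

def make_vowel(formants):
    return sum(np.sin(2 * np.pi * f * t) for f in formants) + 0.1 * rng.standard_normal(n)

X = np.array([hrtf_preprocess(make_vowel(f), hrir)
              for f in ([730, 1090, 2440], [270, 2290, 3010]) for _ in range(40)])
y = np.array([0] * 40 + [1] * 40)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
print(clf.score(X, y))
```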
Bernstein, Leslie R; Trahiotis, Constantine
2014-12-01
Binaural detection was measured as a function of the center frequency, bandwidth, and interaural correlation of masking noise. Thresholds were obtained for 500-Hz or 125-Hz Sπ tonal signals and for the latter stimuli (noise or signal-plus-noise) transposed to 4 kHz. A primary goal was assessment of the generality of van der Heijden and Trahiotis' [J. Acoust. Soc. Am. 101, 1019-1022 (1997)] hypothesis that thresholds could be accounted for by the "additive" masking effects of the underlying No and Nπ components of a masker having an interaural correlation of ρ. Results indicated that (1) the overall patterning of the data depended neither upon center frequency nor whether information was conveyed via the waveform or by its envelope; (2) thresholds for transposed stimuli improved relative to their low-frequency counterparts as bandwidth of the masker was increased; (3) the additivity approach accounted well for the data across stimulus conditions but consistently overestimated MLDs, especially for narrowband maskers; (4) a quantitative approach explicitly taking into account the distributions of time-varying ITD-based lateral positions produced by masker-alone and signal-plus-masker waveforms proved more successful, albeit while employing a larger set of assumptions, parameters, and computational complexity.
Congenital amusia: a cognitive disorder limited to resolved harmonics and with no peripheral basis.
Cousineau, Marion; Oxenham, Andrew J; Peretz, Isabelle
2015-01-01
Pitch plays a fundamental role in audition, from speech and music perception to auditory scene analysis. Congenital amusia is a neurogenetic disorder that appears to affect primarily pitch and melody perception. Pitch is normally conveyed by the spectro-temporal fine structure of low harmonics, but some pitch information is available in the temporal envelope produced by the interactions of higher harmonics. Using 10 amusic subjects and 10 matched controls, we tested the hypothesis that amusics suffer exclusively from impaired processing of spectro-temporal fine structure. We also tested whether the inability of amusics to process acoustic temporal fine structure extends beyond pitch by measuring sensitivity to interaural time differences, which also rely on temporal fine structure. Further tests were carried out on basic intensity and spectral resolution. As expected, pitch perception based on spectro-temporal fine structure was impaired in amusics; however, no significant deficits were observed in amusics' ability to perceive the pitch conveyed via temporal-envelope cues. Sensitivity to interaural time differences was also not significantly different between the amusic and control groups, ruling out deficits in the peripheral coding of temporal fine structure. Finally, no significant differences in intensity or spectral resolution were found between the amusic and control groups. The results demonstrate a pitch-specific deficit in fine spectro-temporal information processing in amusia that seems unrelated to temporal or spectral coding in the auditory periphery. These results are consistent with the view that there are distinct mechanisms dedicated to processing resolved and unresolved harmonics in the general population, the former being altered in congenital amusia while the latter is spared. Copyright © 2014 Elsevier Ltd. All rights reserved.
Perceptual consequences of disrupted auditory nerve activity.
Zeng, Fan-Gang; Kong, Ying-Yee; Michalewski, Henry J; Starr, Arnold
2005-06-01
Perceptual consequences of disrupted auditory nerve activity were systematically studied in 21 subjects who had been clinically diagnosed with auditory neuropathy (AN), a recently defined disorder characterized by normal outer hair cell function but disrupted auditory nerve function. Neurological and electrophysical evidence suggests that disrupted auditory nerve activity is due to desynchronized or reduced neural activity or both. Psychophysical measures showed that the disrupted neural activity has minimal effects on intensity-related perception, such as loudness discrimination, pitch discrimination at high frequencies, and sound localization using interaural level differences. In contrast, the disrupted neural activity significantly impairs timing related perception, such as pitch discrimination at low frequencies, temporal integration, gap detection, temporal modulation detection, backward and forward masking, signal detection in noise, binaural beats, and sound localization using interaural time differences. These perceptual consequences are the opposite of what is typically observed in cochlear-impaired subjects who have impaired intensity perception but relatively normal temporal processing after taking their impaired intensity perception into account. These differences in perceptual consequences between auditory neuropathy and cochlear damage suggest the use of different neural codes in auditory perception: a suboptimal spike count code for intensity processing, a synchronized spike code for temporal processing, and a duplex code for frequency processing. We also proposed two underlying physiological models based on desynchronized and reduced discharge in the auditory nerve to successfully account for the observed neurological and behavioral data. These methods and measures cannot differentiate between these two AN models, but future studies using electric stimulation of the auditory nerve via a cochlear implant might. These results not only show the unique contribution of neural synchrony to sensory perception but also provide guidance for translational research in terms of better diagnosis and management of human communication disorders.
The Relationship Between Intensity Coding and Binaural Sensitivity in Adults With Cochlear Implants
Todd, Ann E.; Goupell, Matthew J.; Litovsky, Ruth Y.
2016-01-01
Objectives Many bilateral cochlear implant users show sensitivity to binaural information when stimulation is provided using a pair of synchronized electrodes. However, there is large variability in binaural sensitivity between and within participants across stimulation sites in the cochlea. It was hypothesized that within-participant variability in binaural sensitivity is in part affected by limitations and characteristics of the auditory periphery which may be reflected by monaural hearing performance. The objective of this study was to examine the relationship between monaural and binaural hearing performance within participants with bilateral cochlear implants. Design Binaural measures included dichotic signal detection and interaural time difference discrimination thresholds. Diotic signal detection thresholds were also measured. Monaural measures included dynamic range and amplitude modulation detection. In addition, loudness growth was compared between ears. Measures were made at three stimulation sites per listener. Results Greater binaural sensitivity was found with larger dynamic ranges. Poorer interaural time difference discrimination was found with larger difference between comfortable levels of the two ears. In addition, poorer diotic signal detection thresholds were found with larger differences between the dynamic ranges of the two ears. No relationship was found between amplitude modulation detection thresholds or symmetry of loudness growth and the binaural measures. Conclusions The results suggest that some of the variability in binaural hearing performance within listeners across stimulation sites can be explained by factors non-specific to binaural processing. The results are consistent with the idea that dynamic range and comfortable levels relate to peripheral neural survival and the width of the excitation pattern which could affect the fidelity with which central binaural nuclei process bilateral inputs. PMID:27787393
Development of the sound localization cues in cats
NASA Astrophysics Data System (ADS)
Tollin, Daniel J.
2004-05-01
Cats are a common model for developmental studies of the psychophysical and physiological mechanisms of sound localization. Yet, there are few studies on the development of the acoustical cues to location in cats. The magnitude of the three main cues, interaural differences in time (ITDs) and level (ILDs), and monaural spectral shape cues, varies with location in adults. However, the increasing interaural distance associated with a growing head and pinnae during development will result in cues that change continuously until maturation is complete. Here, we report measurements, in cats aged 1 week to adulthood, of the physical dimensions of the head and pinnae and the localization cues, computed from measurements of directional transfer functions. At 1 week, ILD depended little on azimuth for frequencies <6-7 kHz, maximum ITD was 175 μs, and for sources varying in elevation, a prominent spectral notch was located at higher frequencies than in the older cats. As cats develop, the spectral cues and the frequencies at which ILDs become substantial (>10 dB) shift to lower frequencies, and the maximum ITD increases to nearly 370 μs. Changes in the cues are correlated with the increasing size of the head and pinnae. [Work supported by NIDCD DC05122.]
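A convenient way to see why the maximum ITD grows with the head is the classic Woodworth spherical-head approximation, ITD(θ) ≈ (a/c)(θ + sin θ) for a head of radius a. The sketch below uses hypothetical radii, not measurements from the study, and because the formula ignores the pinnae (which lengthen the interaural path in cats) it understates the ITDs actually reported above.

```python
import numpy as np

C = 343.0  # speed of sound in air, m/s

def woodworth_itd(radius_m, azimuth_deg):
    """Woodworth spherical-head approximation: ITD = (a/c) * (theta + sin(theta)).
    Ignores the pinnae, so it understates the ITDs actually measured in cats."""
    theta = np.radians(azimuth_deg)
    return (radius_m / C) * (theta + np.sin(theta))

# Hypothetical effective head radii, only to illustrate how maximum ITD (at 90
# degrees azimuth) scales with head size; these radii are NOT from the study.
for label, a in [("young kitten (assumed a = 2.0 cm)", 0.020),
                 ("adult cat   (assumed a = 3.5 cm)", 0.035)]:
    print(f"{label}: max ITD ~ {woodworth_itd(a, 90) * 1e6:.0f} microseconds")
```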
Cervical Vestibular-Evoked Myogenic Potentials: Norms and Protocols
Isaradisaikul, Suwicha; Navacharoen, Niramon; Hanprasertpong, Charuk; Kangsanarak, Jaran
2012-01-01
Vestibular-evoked myogenic potential (VEMP) testing is a vestibular function test used for evaluating saccular and inferior vestibular nerve function. Parameters of VEMP testing include VEMP threshold, latencies of p1 and n1, and p1-n1 interamplitude. Less commonly used parameters are p1-n1 interlatency, the interaural difference of p1 and n1 latency, and the interaural amplitude difference (IAD) ratio. This paper recommends using air-conducted 500 Hz tone-burst auditory stimulation presented monaurally via an insert earphone while the subject, seated, turns the head to the contralateral side, with responses recorded from the ipsilateral sternocleidomastoid muscle. Normative values of VEMP responses in 50 normal audiovestibular volunteers are presented. VEMP testing protocols and normative values from other literature were reviewed and compared. The study is beneficial to clinicians as a reference guide for setting up VEMP testing and interpreting VEMP responses.
Binaural enhancement for bilateral cochlear implant users.
Brown, Christopher A
2014-01-01
Bilateral cochlear implant (BCI) users receive limited binaural cues and, thus, show little improvement to speech intelligibility from spatial cues. The feasibility of a method for enhancing the binaural cues available to BCI users is investigated. This involved extending interaural differences of levels, which typically are restricted to high frequencies, into the low-frequency region. Speech intelligibility was measured in BCI users listening over headphones and with direct stimulation, with a target talker presented to one side of the head in the presence of a masker talker on the other side. Spatial separation was achieved by applying either naturally occurring binaural cues or enhanced cues. In this listening configuration, BCI patients showed greater speech intelligibility with the enhanced binaural cues than with naturally occurring binaural cues. In some situations, it is possible for BCI users to achieve greater speech intelligibility when binaural cues are enhanced by applying interaural differences of levels in the low-frequency region.
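One way to picture the enhancement idea described above is to estimate the level difference carried by the high-frequency band and impose the same difference on the low-frequency band, where little or no natural ILD is normally present. The sketch below is only a schematic of that general idea, not the algorithm evaluated in the study; the split frequency, frame length, and crude FFT-based band split are arbitrary assumptions.

```python
import numpy as np

def enhance_low_frequency_ild(left, right, fs, split_hz=1500.0, frame_ms=20.0):
    """Schematic ILD 'enhancement': measure the frame-by-frame high-band ILD
    and impose the same level difference on the low band. Split frequency,
    frame length, and the FFT band split are illustrative choices only."""
    def split_bands(x):
        spec = np.fft.rfft(x)
        freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
        low = np.fft.irfft(np.where(freqs < split_hz, spec, 0.0), len(x))
        high = np.fft.irfft(np.where(freqs >= split_hz, spec, 0.0), len(x))
        return low, high

    def rms(x):
        return np.sqrt(np.mean(x ** 2) + 1e-12)

    low_l, high_l = split_bands(left)
    low_r, high_r = split_bands(right)
    out_l, out_r = low_l.copy(), low_r.copy()
    n = int(fs * frame_ms / 1000.0)
    for start in range(0, len(left) - n + 1, n):
        seg = slice(start, start + n)
        ild_db = 20.0 * np.log10(rms(high_l[seg]) / rms(high_r[seg]))
        out_l[seg] *= 10.0 ** (+ild_db / 40.0)  # apply half the ILD to each ear
        out_r[seg] *= 10.0 ** (-ild_db / 40.0)
    return out_l + high_l, out_r + high_r

# Usage with made-up signals: the quieter right ear is attenuated further in
# the low band so that the low-frequency ILD tracks the high-frequency ILD.
rng = np.random.default_rng(0)
fs = 16000
left = rng.standard_normal(fs)
right = 0.35 * rng.standard_normal(fs)  # roughly 9 dB quieter
out_l, out_r = enhance_low_frequency_ild(left, right, fs)
```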
Characteristics of stereo reproduction with parametric loudspeakers
NASA Astrophysics Data System (ADS)
Aoki, Shigeaki; Toba, Masayoshi; Tsujita, Norihisa
2012-05-01
A parametric loudspeaker utilizes the nonlinearity of the propagation medium and is known as a super-directivity loudspeaker; it is one of the prominent applications of nonlinear ultrasonics. So far, applications have been limited to monaural reproduction systems for public address in museums, stations, streets, etc. In this paper, we discuss the characteristics of stereo reproduction with two parametric loudspeakers by comparison with two ordinary dynamic loudspeakers. In subjective tests, three typical listening positions were selected to investigate the possibility of correct sound localization over a wide listening area. The binaural information was ILD (Interaural Level Difference) or ITD (Interaural Time Delay). The parametric loudspeaker was an equilateral hexagon with inner and outer diameters of 99 and 112 mm, respectively. Signals were 500 Hz, 1 kHz, 2 kHz and 4 kHz pure tones and pink noise. Three young males listened to the test signals 10 times in each listening condition. Subjective test results showed that listeners at the three typical listening positions perceived correct sound localization for all signals reproduced with the parametric loudspeakers. Performance was similar to that with the ordinary dynamic loudspeakers, except for the case of sinusoidal waves with ITD. It was determined that the parametric loudspeaker could avoid the conflict between the ILD and ITD cues that occurs in stereo reproduction with ordinary dynamic loudspeakers, because its super directivity suppressed the crosstalk components.
Glycinergic inhibition tunes coincidence detection in the auditory brainstem
Myoga, Michael H.; Lehnert, Simon; Leibold, Christian; Felmy, Felix; Grothe, Benedikt
2014-01-01
Neurons in the medial superior olive (MSO) detect microsecond differences in the arrival time of sounds between the ears (interaural time differences or ITDs), a crucial binaural cue for sound localization. Synaptic inhibition has been implicated in tuning ITD sensitivity, but the cellular mechanisms underlying its influence on coincidence detection are debated. Here we determine the impact of inhibition on coincidence detection in adult Mongolian gerbil MSO brain slices by testing precise temporal integration of measured synaptic responses using conductance-clamp. We find that inhibition dynamically shifts the peak timing of excitation, depending on its relative arrival time, which in turn modulates the timing of best coincidence detection. Inhibitory control of coincidence detection timing is consistent with the diversity of ITD functions observed in vivo and is robust under physiologically relevant conditions. Our results provide strong evidence that temporal interactions between excitation and inhibition on microsecond timescales are critical for binaural processing. PMID:24804642
Underwater localization of pure tones by harbor seals (Phoca vitulina).
Bodson, Anaïs; Miersch, Lars; Dehnhardt, Guido
2007-10-01
The underwater sound localization acuity of harbor seals (Phoca vitulina) was measured in the horizontal plane. Minimum audible angles (MAAs) of pure tones were determined as a function of frequency from 0.2 to 16 kHz for two seals. Testing was conducted in a 10-m-diam underwater half circle using a right/left psychophysical procedure. The results indicate that for both harbor seals, MAAs were large at high frequencies (13.5 degrees and 17.4 degrees at 16 kHz), transitional at intermediate frequencies (9.6 degrees and 10.1 degrees at 4 kHz), and particularly small at low frequencies (3.2 degrees and 3.1 degrees at 0.2 kHz). Harbor seals seem to be able to utilize both binaural cues, interaural time differences (ITDs) and interaural intensity differences (IIDs), but a significant decrease in the sound localization acuity with increasing frequency suggests that IID cues may not be as robust as ITD cues under water. These results suggest that the harbor seal can be regarded as a low-frequency specialist. Additionally, to obtain a MAA more representative of the species, the horizontal underwater MAA of six adult harbor seals was measured at 2 kHz under identical conditions. The MAAs of the six animals ranged from 8.8 degrees to 11.7 degrees , resulting in a mean MAA of 10.3 degrees .
Brown, Andrew D; Tollin, Daniel J
2016-09-21
In mammals, localization of sound sources in azimuth depends on sensitivity to interaural differences in sound timing (ITD) and level (ILD). Paradoxically, while typical ILD-sensitive neurons of the auditory brainstem require millisecond synchrony of excitatory and inhibitory inputs for the encoding of ILDs, human and animal behavioral ILD sensitivity is robust to temporal stimulus degradations (e.g., interaural decorrelation due to reverberation), or, in humans, bilateral clinical device processing. Here we demonstrate that behavioral ILD sensitivity is only modestly degraded with even complete decorrelation of left- and right-ear signals, suggesting the existence of a highly integrative ILD-coding mechanism. Correspondingly, we find that a majority of auditory midbrain neurons in the central nucleus of the inferior colliculus (of chinchilla) effectively encode ILDs despite complete decorrelation of left- and right-ear signals. We show that such responses can be accounted for by relatively long windows of bilateral excitatory-inhibitory interaction, which we explicitly measure using trains of narrowband clicks. Neural and behavioral data are compared with the outputs of a simple model of ILD processing with a single free parameter, the duration of excitatory-inhibitory interaction. Behavioral, neural, and modeling data collectively suggest that ILD sensitivity depends on binaural integration of excitation and inhibition within a ≳3 ms temporal window, significantly longer than observed in lower brainstem neurons. This relatively slow integration potentiates a unique role for the ILD system in spatial hearing that may be of particular importance when informative ITD cues are unavailable. In mammalian hearing, interaural differences in the timing (ITD) and level (ILD) of impinging sounds carry critical information about source location. However, natural sounds are often decorrelated between the ears by reverberation and background noise, degrading the fidelity of both ITD and ILD cues. Here we demonstrate that behavioral ILD sensitivity (in humans) and neural ILD sensitivity (in single neurons of the chinchilla auditory midbrain) remain robust under stimulus conditions that render ITD cues undetectable. This result can be explained by "slow" temporal integration arising from several-millisecond-long windows of excitatory-inhibitory interaction evident in midbrain, but not brainstem, neurons. Such integrative coding can account for the preservation of ILD sensitivity despite even extreme temporal degradations in ecological acoustic stimuli. Copyright © 2016 the authors 0270-6474/16/369908-14$15.00/0.
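The single-free-parameter model referred to above combines excitation driven by one ear with inhibition driven by the other within a temporal window; because the window integrates over several milliseconds, the output depends mainly on the level difference and survives interaural decorrelation. The sketch below is a generic subtractive excitatory-inhibitory unit in that spirit, not the authors' fitted model; the 3-ms rectangular window, rectification, and 10-dB example ILD are assumptions.

```python
import numpy as np

def ei_ild_response(contra, ipsi, fs, window_ms=3.0):
    """Mean output of a subtractive excitatory-inhibitory ILD unit whose single
    free parameter is the duration of the E-I interaction window."""
    window = np.ones(int(fs * window_ms / 1000.0))
    excitation = np.convolve(np.maximum(contra, 0.0), window, mode="same")
    inhibition = np.convolve(np.maximum(ipsi, 0.0), window, mode="same")
    return np.mean(np.maximum(excitation - inhibition, 0.0))

# Fully decorrelated (independent) noises with a 10-dB level advantage for the
# contralateral ear: the response still favors the louder ear, illustrating
# robustness of ILD coding to interaural decorrelation.
rng = np.random.default_rng(0)
fs = 48000
contra = 10.0 ** (10.0 / 20.0) * rng.standard_normal(fs // 2)
ipsi = rng.standard_normal(fs // 2)
print(ei_ild_response(contra, ipsi, fs) > ei_ild_response(ipsi, contra, fs))  # True
```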
Siveke, Ida; Leibold, Christian; Grothe, Benedikt
2007-11-01
We are regularly exposed to several concurrent sounds, producing a mixture of binaural cues. The neuronal mechanisms underlying the localization of concurrent sounds are not well understood. The major binaural cues for localizing low-frequency sounds in the horizontal plane are interaural time differences (ITDs). Auditory brain stem neurons encode ITDs by firing maximally in response to "favorable" ITDs and weakly or not at all in response to "unfavorable" ITDs. We recorded from ITD-sensitive neurons in the dorsal nucleus of the lateral lemniscus (DNLL) while presenting pure tones at different ITDs embedded in noise. We found that increasing levels of concurrent white noise suppressed the maximal response rate to tones with favorable ITDs and slightly enhanced the response rate to tones with unfavorable ITDs. Nevertheless, most of the neurons maintained ITD sensitivity to tones even for noise intensities equal to that of the tone. Using concurrent noise with a spectral composition in which the neuron's excitatory frequencies are omitted reduced the maximal response similar to that obtained with concurrent white noise. This finding indicates that the decrease of the maximal rate is mediated by suppressive cross-frequency interactions, which we also observed during monaural stimulation with additional white noise. In contrast, the enhancement of the firing rate to tones at unfavorable ITD might be due to early binaural interactions (e.g., at the level of the superior olive). A simple simulation corroborates this interpretation. Taken together, these findings suggest that the spectral composition of a concurrent sound strongly influences the spatial processing of ITD-sensitive DNLL neurons.
Sensitivity to Envelope Interaural Time Differences at High Modulation Rates
Bleeck, Stefan; McAlpine, David
2015-01-01
Sensitivity to interaural time differences (ITDs) conveyed in the temporal fine structure of low-frequency tones and the modulated envelopes of high-frequency sounds are considered comparable, particularly for envelopes shaped to transmit similar fidelity of temporal information normally present for low-frequency sounds. Nevertheless, discrimination performance for envelope modulation rates above a few hundred Hertz is reported to be poor—to the point of discrimination thresholds being unattainable—compared with the much higher (>1,000 Hz) limit for low-frequency ITD sensitivity, suggesting the presence of a low-pass filter in the envelope domain. Further, performance for identical modulation rates appears to decline with increasing carrier frequency, supporting the view that the low-pass characteristics observed for envelope ITD processing is carrier-frequency dependent. Here, we assessed listeners’ sensitivity to ITDs conveyed in pure tones and in the modulated envelopes of high-frequency tones. ITD discrimination for the modulated high-frequency tones was measured as a function of both modulation rate and carrier frequency. Some well-trained listeners appear able to discriminate ITDs extremely well, even at modulation rates well beyond 500 Hz, for 4-kHz carriers. For one listener, thresholds were even obtained for a modulation rate of 800 Hz. The highest modulation rate for which thresholds could be obtained declined with increasing carrier frequency for all listeners. At 10 kHz, the highest modulation rate at which thresholds could be obtained was 600 Hz. The upper limit of sensitivity to ITDs conveyed in the envelope of high-frequency modulated sounds appears to be higher than previously considered. PMID:26721926
Ozmeral, Erol J; Eddins, David A; Eddins, Ann C
2016-12-01
Previous electrophysiological studies of interaural time difference (ITD) processing have demonstrated that ITDs are represented by a nontopographic population rate code. Rather than narrow tuning to ITDs, neural channels have broad tuning to ITDs in either the left or right auditory hemifield, and the relative activity between the channels determines the perceived lateralization of the sound. With advancing age, spatial perception weakens and poor temporal processing contributes to declining spatial acuity. At present, it is unclear whether age-related temporal processing deficits are due to poor inhibitory controls in the auditory system or degraded neural synchrony at the periphery. Cortical processing of spatial cues based on a hemifield code are susceptible to potential age-related physiological changes. We consider two distinct predictions of age-related changes to ITD sensitivity: declines in inhibitory mechanisms would lead to increased excitation and medial shifts to rate-azimuth functions, whereas a general reduction in neural synchrony would lead to reduced excitation and shallower slopes in the rate-azimuth function. The current study tested these possibilities by measuring an evoked response to ITD shifts in a narrow-band noise. Results were more in line with the latter outcome, both from measured latencies and amplitudes of the global field potentials and source-localized waveforms in the left and right auditory cortices. The measured responses for older listeners also tended to have reduced asymmetric distribution of activity in response to ITD shifts, which is consistent with other sensory and cognitive processing models of aging. Copyright © 2016 the American Physiological Society.
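The hemifield (opponent-channel) rate code and the two aging scenarios contrasted above can be sketched with a pair of broadly tuned sigmoidal channels whose relative activity is read out as lateralization. The code below is purely schematic; the sigmoid slopes, midpoints, and gains are invented, and the "reduced inhibition" and "reduced synchrony" manipulations are only meant to mirror the medial-shift versus shallower-slope predictions described in the abstract.

```python
import numpy as np

def hemifield_channels(itd_us, slope=0.01, midpoint_us=200.0, gain=1.0):
    """Two broadly tuned hemifield channels modeled as sigmoids of ITD
    (microseconds). All parameter values are illustrative."""
    right = gain / (1.0 + np.exp(-slope * (itd_us + midpoint_us)))
    left = gain / (1.0 + np.exp(slope * (itd_us - midpoint_us)))
    return left, right

def lateralization(left, right):
    """Read-out: relative activity of the two channels."""
    return (right - left) / (right + left)

itds = np.linspace(-700, 700, 15)                    # microseconds
baseline = lateralization(*hemifield_channels(itds))
# Schematic aging scenarios from the abstract (parameters invented):
#  (1) reduced inhibition -> increased excitation, medially shifted channels
shifted = lateralization(*hemifield_channels(itds, midpoint_us=50.0, gain=1.2))
#  (2) reduced neural synchrony -> weaker responses with shallower slopes
shallower = lateralization(*hemifield_channels(itds, slope=0.004, gain=0.6))
print(np.round(baseline, 2))
```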
McAlpine, D; Jiang, D; Palmer, A R
1996-08-01
Monaural and binaural response properties of single units in the inferior colliculus (IC) of the guinea pig were investigated. Neurones were classified according to the effect of monaural stimulation of either ear alone and the effect of binaural stimulation. The majority (309/334) of IC units were excited (E) by stimulation of the contralateral ear, of which 41% (127/309) were also excited by monaural ipsilateral stimulation (EE), and the remainder (182/309) were unresponsive to monaural ipsilateral stimulation (EO). For units with best frequencies (BF) up to 3 kHz, similar proportions of EE and EO units were observed. Above 3 kHz, however, significantly more EO than EE units were observed. Units were also classified as either facilitated (F), suppressed (S), or unaffected (O) by binaural stimulation. More EO than EE units were suppressed or unaffected by binaural stimulation, and more EE than EO units were facilitated. There were more EO/S units above 1.5 kHz than below. Binaural beats were used to examine the interaural delay sensitivity of low-BF (BF < 1.5 kHz) units. The distributions of preferred interaural phases and, by extension, interaural delays, resembled those seen in other species, and those obtained using static interaural delays in the IC of the guinea pig. Units with best phase (BP) angles closer to zero generally showed binaural facilitation, whilst those with larger BPs generally showed binaural suppression. The classification of units based upon binaural stimulation with BF tones was consistent with their interaural-delay sensitivity. Characteristic delays (CD) were examined for 96 low-BF units. A clear relationship between BF and CD was observed. CDs of units with very low BFs (< 200 Hz) were long and positive, becoming progressively shorter as BF increased until, for units with BFs between 400 and 800 Hz, the majority of CDs were negative. Above 800 Hz, both positive and negative CDs were observed. A relationship between CD and characteristic phase (CP) was also observed, with CPs increasing in value as CDs became more negative. These results demonstrate that binaural processing in the guinea pig at low frequencies is similar to that reported in all other species studied. However, the dependence of CD on BF would suggest that the delay line system that sets up the interaural-delay sensitivity in the lower brainstem varies across frequency as well as within each frequency band.
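The characteristic delay and characteristic phase values discussed above are conventionally obtained by regressing a unit's mean best interaural phase (in cycles) on stimulus frequency: the slope of the line is the characteristic delay and its intercept the characteristic phase. The sketch below applies that standard fit to invented numbers, purely to show the computation.

```python
import numpy as np

def characteristic_delay_phase(freqs_hz, best_phase_cycles):
    """Linear-fit estimate of characteristic delay (CD) and characteristic
    phase (CP): best interaural phase (cycles) = CP + CD * frequency."""
    slope, intercept = np.polyfit(freqs_hz, best_phase_cycles, 1)
    cd_us = slope * 1e6      # cycles/Hz = seconds, converted to microseconds
    cp_cycles = intercept
    return cd_us, cp_cycles

# Hypothetical unit: best phases measured with binaural beats at several frequencies.
freqs = np.array([200.0, 400.0, 600.0, 800.0])
phases = 0.05 + 150e-6 * freqs          # built with CP = 0.05 cycles, CD = 150 us
print(characteristic_delay_phase(freqs, phases))   # ~ (150.0, 0.05)
```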
Keating, Peter; Nodal, Fernando R; King, Andrew J
2014-01-01
For over a century, the duplex theory has guided our understanding of human sound localization in the horizontal plane. According to this theory, the auditory system uses interaural time differences (ITDs) and interaural level differences (ILDs) to localize low-frequency and high-frequency sounds, respectively. Whilst this theory successfully accounts for the localization of tones by humans, some species show very different behaviour. Ferrets are widely used for studying both clinical and fundamental aspects of spatial hearing, but it is not known whether the duplex theory applies to this species or, if so, to what extent the frequency range over which each binaural cue is used depends on acoustical or neurophysiological factors. To address these issues, we trained ferrets to lateralize tones presented over earphones and found that the frequency dependence of ITD and ILD sensitivity broadly paralleled that observed in humans. Compared with humans, however, the transition between ITD and ILD sensitivity was shifted toward higher frequencies. We found that the frequency dependence of ITD sensitivity in ferrets can partially be accounted for by acoustical factors, although neurophysiological mechanisms are also likely to be involved. Moreover, we show that binaural cue sensitivity can be shaped by experience, as training ferrets on a 1-kHz ILD task resulted in significant improvements in thresholds that were specific to the trained cue and frequency. Our results provide new insights into the factors limiting the use of different sound localization cues and highlight the importance of sensory experience in shaping the underlying neural mechanisms. PMID:24256073
Can monaural temporal masking explain the ongoing precedence effect?
Freyman, Richard L; Morse-Fortier, Charlotte; Griffin, Amanda M; Zurek, Patrick M
2018-02-01
The precedence effect for transient sounds has been proposed to be based primarily on monaural processes, manifested by asymmetric temporal masking. This study explored the potential for monaural explanations with longer ("ongoing") sounds exhibiting the precedence effect. Transient stimuli were single lead-lag noise burst pairs; ongoing stimuli were trains of 63 burst pairs. Unlike with transients, monaural masking data for ongoing sounds showed no advantage for the lead, and are inconsistent with asymmetric audibility as an explanation for ongoing precedence. This result, along with supplementary measurements of interaural time discrimination, suggests different explanations for transient and ongoing precedence.
Echolocation of insects using intermittent frequency-modulated sounds.
Matsuo, Ikuo; Takanashi, Takuma
2015-09-01
Using echolocation influenced by Doppler shift, bats can capture flying insects in real three-dimensional space. On the basis of this principle, a model that estimates object locations using frequency modulated (FM) sound was proposed. However, no investigation was conducted to verify whether the model can localize flying insects from their echoes. This study applied the model to estimate the range and direction of flying insects by extracting temporal changes from the time-frequency pattern and interaural range difference, respectively. The results obtained confirm that a living insect's position can be estimated using this model with echoes measured while emitting intermittent FM sounds.
[The significance of the interaural latency difference of VEMP].
Wu, Ziming; Zhang, Suzhen; Ji, Fei; Zhou, Na; Guo, Weiwei; Yang, Weiyan; Han, Dongyi
2005-05-01
To investigate the significance of the interaural latency (IAL) difference of VEMP and to raise the sensitivity of the test, vestibular evoked myogenic potentials (VEMP) were tested in 20 healthy subjects, 13 patients with acoustic neuroma or cerebellopontine angle occupying lesions, and 1 patient with multiple sclerosis. IAL differences of the waves p13, n23 and p13-n23 (abbreviated as Δp13, Δn23 and Δ(p13-n23), respectively) were analysed to determine the normal range and the upper limit of the normative data. Four illustrative cases with abnormal IAL differences were given as examples. The upper limit of the IAL of Δp13 was 1.13 ms; that of Δn23 was 1.38 ms, and that of Δ(p13-n23) was 1.54 ms. The p13-n23 latency did not differ significantly between the right and left sides (P > 0.05). Δp13, Δn23 and Δ(p13-n23), and especially Δp13, can suggest abnormality in the neural pathway of the VEMP and may be applicable in practice.
Akeroyd, Michael A
2004-08-01
The equalization stage in the equalization-cancellation model of binaural unmasking compensates for the interaural time delay (ITD) of a masking noise by introducing an opposite, internal delay [N. I. Durlach, in Foundations of Modern Auditory Theory, Vol. II., edited by J. V. Tobias (Academic, New York, 1972)]. Culling and Summerfield [J. Acoust. Soc. Am. 98, 785-797 (1995)] developed a multi-channel version of this model in which equalization was "free" to use the optimal delay in each channel. Two experiments were conducted to test if equalization was indeed free or if it was "restricted" to the same delay in all channels. One experiment measured binaural detection thresholds, using an adaptive procedure, for 1-, 5-, or 17-component tones against a broadband masking noise, in three binaural configurations (N0S180, N180S0, and N90S270). The thresholds for the 1-component stimuli were used to normalize the levels of each of the 5- and 17-component stimuli so that they were equally detectable. If equalization was restricted, then, for the 5- and 17-component stimuli, the N90S270 and N180S0 configurations would yield a greater threshold than the N0S180 configurations. No such difference was found. A subsequent experiment measured binaural detection thresholds, via psychometric functions, for a 2-component complex tone in the same three binaural configurations. Again, no differential effect of configuration was observed. An analytic model of the detection of a complex tone showed that the results were more consistent with free equalization than restricted equalization, although the size of the differences was found to depend on the shape of the psychometric function for detection.
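The equalization-cancellation mechanism referred to above can be reduced to a small numerical sketch: the signal to one ear is internally delayed (equalized) and then subtracted from the other (cancelled), and the internal delay that minimizes the residual is the one that best compensates the masker's ITD. "Free" equalization lets each frequency channel choose its own delay, whereas "restricted" equalization forces one common delay on all channels. The single-channel code below uses invented values (broadband noise, a 500-microsecond masker ITD, a ±700-microsecond search range) and is not the multi-channel model tested in the study.

```python
import numpy as np

def ec_residual(left, right, fs, internal_delay_s):
    """Power remaining after delaying the left-ear signal by the internal delay
    (equalization) and subtracting the right-ear signal (cancellation)."""
    shift = int(round(internal_delay_s * fs))
    return np.mean((np.roll(left, shift) - right) ** 2)

def best_internal_delay(left, right, fs, max_delay=700e-6):
    """'Free' equalization in one channel: pick the delay minimizing the residual."""
    delays = np.arange(-max_delay, max_delay, 1.0 / fs)
    residuals = [ec_residual(left, right, fs, d) for d in delays]
    return delays[int(np.argmin(residuals))]

# Masking noise whose right-ear copy lags by 500 microseconds (values invented).
rng = np.random.default_rng(1)
fs = 48000
noise = rng.standard_normal(fs // 2)
masker_itd = 500e-6
left, right = noise, np.roll(noise, int(round(masker_itd * fs)))
print(best_internal_delay(left, right, fs))  # close to +5e-4 s
```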
Tollin, Daniel J.; Yin, Tom C. T.
2006-01-01
The lateral superior olive (LSO) is believed to encode differences in sound level at the two ears, a cue for azimuthal sound location. Most high-frequency-sensitive LSO neurons are binaural, receiving inputs from both ears. An inhibitory input from the contralateral ear, via the medial nucleus of the trapezoid body (MNTB), and excitatory input from the ipsilateral ear enable level differences to be encoded. However, the classical descriptions of low-frequency-sensitive neurons report primarily monaural cells with no contralateral inhibition. Anatomical and physiological evidence, however, shows that low-frequency LSO neurons receive low-frequency inhibitory input from ipsilateral MNTB, which in turn receives excitatory input from the contralateral cochlear nucleus and low-frequency excitatory input from the ipsilateral cochlear nucleus. Therefore, these neurons would be expected to be binaural with contralateral inhibition. Here, we re-examined binaural interaction in low-frequency (less than ~3 kHz) LSO neurons and phase locking in the MNTB. Phase locking to low-frequency tones in MNTB and ipsilaterally driven LSO neurons with frequency sensitivities < 1.2 kHz was enhanced relative to the auditory nerve. Moreover, most low-frequency LSO neurons exhibited contralateral inhibition: ipsilaterally driven responses were suppressed by raising the level of the contralateral stimulus; most neurons were sensitive to interaural time delays in pure tone and noise stimuli such that inhibition was nearly maximal when the stimuli were presented to the ears in-phase. The data demonstrate that low-frequency LSO neurons of cat are not monaural and can exhibit contralateral inhibition like their high-frequency counterparts. PMID:16291937
Directional hearing by linear summation of binaural inputs at the medial superior olive
van der Heijden, Marcel; Lorteije, Jeannette A. M.; Plauška, Andrius; Roberts, Michael T.; Golding, Nace L.; Borst, J. Gerard G.
2013-01-01
Neurons in the medial superior olive (MSO) enable sound localization by their remarkable sensitivity to submillisecond interaural time differences (ITDs). Each MSO neuron has its own “best ITD” to which it responds optimally. A difference in physical path length of the excitatory inputs from both ears cannot fully account for the ITD tuning of MSO neurons. As a result, it is still debated how these inputs interact and whether the segregation of inputs to opposite dendrites, well-timed synaptic inhibition, or asymmetries in synaptic potentials or cellular morphology further optimize coincidence detection or ITD tuning. Using in vivo whole-cell and juxtacellular recordings, we show here that ITD tuning of MSO neurons is determined by the timing of their excitatory inputs. The inputs from both ears sum linearly, whereas spike probability depends nonlinearly on the size of synaptic inputs. This simple coincidence detection scheme thus makes accurate sound localization possible. PMID:23764292
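The scheme described above, linear summation of the two ears' excitatory inputs followed by a nonlinear spike-generation stage, can be caricatured in a few lines: each ear contributes an alpha-function EPSP, the two sum linearly, and spike probability is a sigmoid of the peak of the sum. All waveform shapes, time constants, and threshold values below are invented; the sketch only reproduces the qualitative logic, with the tuning peak at 0 ITD because the two inputs are given identical path delays.

```python
import numpy as np

def epsp(t_ms, onset_ms, tau_ms=0.3):
    """Alpha-function EPSP (shape and time constant are illustrative)."""
    s = np.maximum(t_ms - onset_ms, 0.0)
    return (s / tau_ms) * np.exp(1 - s / tau_ms)

def spike_probability(itd_us, threshold=1.6, steepness=12.0):
    """Linear summation of the two ears' EPSPs followed by a sigmoidal
    spike-generation stage; all constants are invented."""
    t = np.arange(0, 5, 0.001)                                 # ms
    summed = epsp(t, 1.0) + epsp(t, 1.0 + itd_us / 1000.0)     # linear summation
    return 1.0 / (1.0 + np.exp(-steepness * (summed.max() - threshold)))

itds = np.arange(-500, 501, 100)                               # microseconds
tuning = [round(spike_probability(d), 2) for d in itds]
print(list(zip(itds, tuning)))   # probability peaks at 0 ITD for identical path delays
```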
Binaural Release from Masking for a Speech Sound in Infants, Preschoolers, and Adults.
ERIC Educational Resources Information Center
Nozza, Robert J.
1988-01-01
Binaural masked thresholds for a speech sound (/ba/) were estimated under two interaural phase conditions in three age groups (infants, preschoolers, adults). Differences as a function of both age and condition and effects of reducing intensity for adults were significant in indicating possible developmental binaural hearing changes, especially…
Kuwada, S; Yin, T C; Wickesberg, R E
1979-11-02
The interaural phase sensitivity of neurons was studied through the use of binaural beat stimuli. The response of most cells was phase-locked to the beat frequency, which provides a possible neural correlate to the human sensation of binaural beats. In addition, this stimulus allowed the direction and rate of interaural phase change to be varied. Some neurons in our sample responded selectively to manipulations of these two variables, which suggests a sensitivity to direction or speed of movement.
Altmann, Christian F; Ueda, Ryuhei; Bucher, Benoit; Furukawa, Shigeto; Ono, Kentaro; Kashino, Makio; Mima, Tatsuya; Fukuyama, Hidenao
2017-10-01
Interaural time (ITD) and level differences (ILD) constitute the two main cues for sound localization in the horizontal plane. Despite extensive research in animal models and humans, the mechanism of how these two cues are integrated into a unified percept is still far from clear. In this study, our aim was to test with human electroencephalography (EEG) whether integration of dynamic ITD and ILD cues is reflected in the so-called motion-onset response (MOR), an evoked potential elicited by moving sound sources. To this end, ITD and ILD trajectories were determined individually by cue trading psychophysics. We then measured EEG while subjects were presented with either static click-trains or click-trains that contained a dynamic portion at the end. The dynamic part was created by combining ITD with ILD either congruently to elicit the percept of a right/leftward moving sound, or incongruently to elicit the percept of a static sound. In two experiments that differed in the method to derive individual dynamic cue trading stimuli, we observed an MOR with at least a change-N1 (cN1) component for both the congruent and incongruent conditions at about 160-190 ms after motion-onset. A significant change-P2 (cP2) component for both the congruent and incongruent ITD/ILD combination was found only in the second experiment peaking at about 250 ms after motion onset. In sum, this study shows that a sound which - by a combination of counter-balanced ITD and ILD cues - induces a static percept can still elicit a motion-onset response, indicative of independent ITD and ILD processing at the level of the MOR - a component that has been proposed to be, at least partly, generated in non-primary auditory cortex. Copyright © 2017 Elsevier Inc. All rights reserved.
Caldwell, Michael S; Lee, Norman; Schrode, Katrina M; Johns, Anastasia R; Christensen-Dalsgaard, Jakob; Bee, Mark A
2014-04-01
Anuran ears function as pressure difference receivers, and the amplitude and phase of tympanum vibrations are inherently directional, varying with sound incident angle. We quantified the nature of this directionality for Cope's gray treefrog, Hyla chrysoscelis. We presented subjects with pure tones, advertisement calls, and frequency-modulated sweeps to examine the influence of frequency, signal level, lung inflation, and sex on ear directionality. Interaural differences in the amplitude of tympanum vibrations were 1-4 dB greater than sound pressure differences adjacent to the two tympana, while interaural differences in the phase of tympanum vibration were similar to or smaller than those in sound phase. Directionality in the amplitude and phase of tympanum vibration were highly dependent on sound frequency, and directionality in amplitude varied slightly with signal level. Directionality in the amplitude and phase of tone- and call-evoked responses did not differ between sexes. Lung inflation strongly affected tympanum directionality over a narrow frequency range that, in females, included call frequencies. This study provides a foundation for further work on the biomechanics and neural mechanisms of spatial hearing in H. chrysoscelis, and lends valuable perspective to behavioral studies on the use of spatial information by this species and other frogs.
Cortical Measures of Binaural Processing Predict Spatial Release from Masking Performance
Papesh, Melissa A.; Folmer, Robert L.; Gallun, Frederick J.
2017-01-01
Binaural sensitivity is an important contributor to the ability to understand speech in adverse acoustical environments such as restaurants and other social gatherings. The ability to accurately report on binaural percepts is not commonly measured, however, as extensive training is required before reliable measures can be obtained. Here, we investigated the use of auditory evoked potentials (AEPs) as a rapid physiological indicator of detection of interaural phase differences (IPDs) by assessing cortical responses to 180° IPDs embedded in amplitude-modulated carrier tones. We predicted that decrements in encoding of IPDs would be evident in middle age, with further declines found with advancing age and hearing loss. Thus, participants in experiment #1 were young to middle-aged adults with relatively good hearing thresholds while participants in experiment #2 were older individuals with typical age-related hearing loss. Results revealed that while many of the participants in experiment #1 could encode IPDs in stimuli up to 1,000 Hz, few of the participants in experiment #2 had discernable responses to stimuli above 750 Hz. These results are consistent with previous studies that have found that aging and hearing loss impose frequency limits on the ability to encode interaural phase information present in the fine structure of auditory stimuli. We further hypothesized that AEP measures of binaural sensitivity would be predictive of participants' ability to benefit from spatial separation between sound sources, a phenomenon known as spatial release from masking (SRM) which depends upon binaural cues. Results indicate that not only were objective IPD measures well correlated with and predictive of behavioral SRM measures in both experiments, but that they provided much stronger predictive value than age or hearing loss. Overall, the present work shows that objective measures of the encoding of interaural phase information can be readily obtained using commonly available AEP equipment, allowing accurate determination of the degree to which binaural sensitivity has been reduced in individual listeners due to aging and/or hearing loss. In fact, objective AEP measures of interaural phase encoding are actually better predictors of SRM in speech-in-speech conditions than are age, hearing loss, or the combination of age and hearing loss. PMID:28377706
Kawashima, Takayuki; Sato, Takao
2012-01-01
When a second sound follows a long first sound, its location appears to be perceived away from the first one (the localization/lateralization aftereffect). This aftereffect has often been considered to reflect an efficient neural coding of sound locations in the auditory system. To understand determinants of the localization aftereffect, the current study examined whether it is induced by an interaural temporal difference (ITD) in the amplitude envelope of high frequency transposed tones (over 2 kHz), which is known to function as a sound localization cue. In Experiment 1, participants were required to adjust the position of a pointer to the perceived location of test stimuli before and after adaptation. Test and adapter stimuli were amplitude modulated (AM) sounds presented at high frequencies and their positional differences were manipulated solely by the envelope ITD. Results showed that the adapter's ITD systematically affected the perceived position of test sounds to the directions expected from the localization/lateralization aftereffect when the adapter was presented at ±600 µs ITD; a corresponding significant effect was not observed for a 0 µs ITD adapter. In Experiment 2, the observed adapter effect was confirmed using a forced-choice task. It was also found that adaptation to the AM sounds at high frequencies did not significantly change the perceived position of pure-tone test stimuli in the low frequency region (128 and 256 Hz). The findings in the current study indicate that ITD in the envelope at high frequencies induces the localization aftereffect. This suggests that ITD in the high frequency region is involved in adaptive plasticity of auditory localization processing.
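The sketch below illustrates, under assumed parameter values, how an envelope-only ITD can be imposed on a high-frequency amplitude-modulated tone: the modulator is delayed in one ear while the carrier fine structure is kept identical across ears. It is a schematic of the general stimulus class, not the exact adapter used in the experiments.

import numpy as np

fs = 48000
fc, fm = 4000.0, 128.0        # high-frequency carrier and modulation rate (Hz), assumed
itd_us = 600.0                # envelope ITD in microseconds
dur = 0.5
t = np.arange(int(fs * dur)) / fs

def am_tone(envelope_delay_s):
    env = 0.5 * (1.0 + np.sin(2 * np.pi * fm * (t - envelope_delay_s)))
    return env * np.sin(2 * np.pi * fc * t)      # carrier phase is identical in both ears

left = am_tone(0.0)
right = am_tone(itd_us * 1e-6)                   # the ITD is carried only by the envelope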
Marquardt, Torsten; Stange, Annette; Pecka, Michael; Grothe, Benedikt; McAlpine, David
2014-01-01
Recently, with the use of an amplitude-modulated binaural beat (AMBB), in which sound amplitude and interaural-phase difference (IPD) were modulated with a fixed mutual relationship (Dietz et al. 2013b), we demonstrated that the human auditory system uses interaural timing differences in the temporal fine structure of modulated sounds only during the rising portion of each modulation cycle. However, the degree to which peripheral or central mechanisms contribute to the observed strong dominance of the rising slope remains to be determined. Here, by recording responses of single neurons in the medial superior olive (MSO) of anesthetized gerbils and in the inferior colliculus (IC) of anesthetized guinea pigs to AMBBs, we report a correlation between the position within the amplitude-modulation (AM) cycle generating the maximum response rate and the position at which the instantaneous IPD dominates the total neural response. The IPD during the rising segment dominates the total response in 78% of MSO neurons and 69% of IC neurons, with responses of the remaining neurons predominantly coding the IPD around the modulation maximum. The observed diversity of dominance regions within the AM cycle, especially in the IC, and its comparison with the human behavioral data suggest that only the subpopulation of neurons with rising slope dominance codes the sound-source location in complex listening conditions. A comparison of two models to account for the data suggests that emphasis on IPDs during the rising slope of the AM cycle depends on adaptation processes occurring before binaural interaction. PMID:24554782
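A minimal sketch of an amplitude-modulated binaural beat of the kind described above: the carrier frequencies differ across ears by the modulation rate, so the instantaneous IPD sweeps through a full cycle within each AM period and every IPD is locked to a fixed position in the envelope. All parameter values are assumptions.

import numpy as np

fs, dur = 48000, 2.0
fc = 500.0        # nominal carrier (Hz), assumed
fm = 8.0          # AM rate = interaural beat rate (Hz), assumed
t = np.arange(int(fs * dur)) / fs

env = 0.5 * (1.0 - np.cos(2 * np.pi * fm * t))           # AM cycle starts at an envelope minimum
left = env * np.sin(2 * np.pi * fc * t)
right = env * np.sin(2 * np.pi * (fc + fm) * t)          # carrier beat: IPD advances 360 deg per AM cycle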
Spatial orientation of optokinetic nystagmus and ocular pursuit during orbital space flight
NASA Technical Reports Server (NTRS)
Moore, Steven T.; Cohen, Bernard; Raphan, Theodore; Berthoz, Alain; Clement, Gilles
2005-01-01
On Earth, eye velocity of horizontal optokinetic nystagmus (OKN) orients to gravito-inertial acceleration (GIA), the sum of linear accelerations acting on the head and body. We determined whether adaptation to micro-gravity altered this orientation and whether ocular pursuit exhibited similar properties. Eye movements of four astronauts were recorded with three-dimensional video-oculography. Optokinetic stimuli were stripes moving horizontally, vertically, and obliquely at 30 degrees/s. Ocular pursuit was produced by a spot moving horizontally or vertically at 20 degrees/s. Subjects were either stationary or were centrifuged during OKN with 1 or 0.5 g of interaural or dorsoventral centripetal linear acceleration. Average eye position during OKN (the beating field) moved into the quick-phase direction by 10 degrees during lateral and upward field movement in all conditions. The beating field did not shift up during downward OKN on Earth, but there was a strong upward movement of the beating field (9 degrees) during downward OKN in the absence of gravity; this likely represents an adaptation to the lack of a vertical 1-g bias in-flight. The horizontal OKN velocity axis tilted 9 degrees in the roll plane toward the GIA during interaural centrifugation, both on Earth and in space. During oblique OKN, the velocity vector tilted towards the GIA in the roll plane when there was a disparity between the direction of stripe motion and the GIA, but not when the two were aligned. In contrast, dorsoventral acceleration tilted the horizontal OKN velocity vector 6 degrees in pitch away from the GIA. Roll tilts of the horizontal OKN velocity vector toward the GIA during interaural centrifugation are consistent with the orientation properties of velocity storage, but pitch tilts away from the GIA when centrifuged while supine are not. We speculate that visual suppression during OKN may have caused the velocity vector to tilt away from the GIA during dorsoventral centrifugation. Vertical OKN and ocular pursuit did not exhibit orientation toward the GIA in any condition. Static full-body roll tilts and centrifugation generating an equivalent interaural acceleration produced the same tilts in the horizontal OKN velocity before and after flight. Thus, the magnitude of tilt in OKN velocity was dependent on the magnitude of interaural linear acceleration, rather than the tilt of the GIA with regard to the head. These results favor a 'filter' model of spatial orientation in which orienting eye movements are proportional to the magnitude of low frequency interaural linear acceleration, rather than models that postulate an internal representation of gravity as the basis for spatial orientation.
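As a side calculation (not taken from the study), the tilt of the GIA produced by an interaural centripetal acceleration follows directly from the vector sum of gravity and the centripetal component:

import numpy as np

g = 9.81                                          # gravity (m/s^2)
for centripetal_g in (1.0, 0.5):
    tilt = np.degrees(np.arctan2(centripetal_g * g, g))
    print(f"{centripetal_g:.1f} g interaural acceleration tilts the GIA by {tilt:.1f} deg")
# 1.0 g -> 45.0 deg, 0.5 g -> 26.6 deg; the ~9 deg tilt of the OKN velocity axis reported
# above is therefore only a fraction of the GIA tilt.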
Investigations in mechanisms and strategies to enhance hearing with cochlear implants
NASA Astrophysics Data System (ADS)
Churchill, Tyler H.
Cochlear implants (CIs) produce hearing sensations by stimulating the auditory nerve (AN) with current pulses whose amplitudes are modulated by filtered acoustic temporal envelopes. While this technology has provided hearing for multitudinous CI recipients, even bilaterally-implanted listeners have more difficulty understanding speech in noise and localizing sounds than normal hearing (NH) listeners. Three studies reported here have explored ways to improve electric hearing abilities. Vocoders are often used to simulate CIs for NH listeners. Study 1 was a psychoacoustic vocoder study examining the effects of harmonic carrier phase dispersion and simulated CI current spread on speech intelligibility in noise. Results showed that simulated current spread was detrimental to speech understanding and that speech vocoded with carriers whose components' starting phases were equal was the least intelligible. Cross-correlogram analyses of AN model simulations confirmed that carrier component phase dispersion resulted in better neural envelope representation. Localization abilities rely on binaural processing mechanisms in the brainstem and mid-brain that are not fully understood. In Study 2, several potential mechanisms were evaluated based on the ability of metrics extracted from stereo AN simulations to predict azimuthal locations. Results suggest that unique across-frequency patterns of binaural cross-correlation may provide a strong cue set for lateralization and that interaural level differences alone cannot explain NH sensitivity to lateral position. While it is known that many bilateral CI users are sensitive to interaural time differences (ITDs) in low-rate pulsatile stimulation, most contemporary CI processing strategies use high-rate, constant-rate pulse trains. In Study 3, we examined the effects of pulse rate and pulse timing on ITD discrimination, ITD lateralization, and speech recognition by bilateral CI listeners. Results showed that listeners were able to use low-rate pulse timing cues presented redundantly on multiple electrodes for ITD discrimination and lateralization of speech stimuli even when mixed with high rates on other electrodes. These results have contributed to a better understanding of those aspects of the auditory system that support speech understanding and binaural hearing, suggested vocoder parameters that may simulate aspects of electric hearing, and shown that redundant, low-rate pulse timing supports improved spatial hearing for bilateral CI listeners.
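For readers unfamiliar with vocoder simulations of CI processing, the hedged sketch below shows one common recipe: band-pass analysis, envelope extraction, and remodulation of band-limited noise carriers. Band edges, filter orders, and the envelope cutoff are assumed values and are not those used in Study 1.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def noise_vocoder(x, fs, band_edges_hz, env_cutoff_hz=50.0, order=4):
    rng = np.random.default_rng(0)
    out = np.zeros(len(x))
    b_env, a_env = butter(2, env_cutoff_hz / (fs / 2), btype="low")
    for lo, hi in zip(band_edges_hz[:-1], band_edges_hz[1:]):
        b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        band = filtfilt(b, a, x)
        env = np.maximum(filtfilt(b_env, a_env, np.abs(hilbert(band))), 0.0)  # temporal envelope
        carrier = filtfilt(b, a, rng.standard_normal(len(x)))                 # band-limited noise carrier
        out += env * carrier
    return out

# e.g. noise_vocoder(x, fs=16000, band_edges_hz=[100, 400, 1000, 2400, 6000])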
Using Evoked Potentials to Match Interaural Electrode Pairs with Bilateral Cochlear Implants
Smith, Zachary M; Delgutte, Bertrand
2007-01-01
Bilateral cochlear implantation seeks to restore the advantages of binaural hearing to the profoundly deaf by providing binaural cues normally important for accurate sound localization and speech reception in noise. Psychophysical observations suggest that a key issue for the implementation of a successful binaural prosthesis is the ability to match the cochlear positions of stimulation channels in each ear. We used a cat model of bilateral cochlear implants with eight-electrode arrays implanted in each cochlea to develop and test a noninvasive method based on evoked potentials for matching interaural electrodes. The arrays allowed the cochlear location of stimulation to be independently varied in each ear. The binaural interaction component (BIC) of the electrically evoked auditory brainstem response (EABR) was used as an assay of binaural processing. BIC amplitude peaked for interaural electrode pairs at the same relative cochlear position and dropped with increasing cochlear separation in either direction. To test the hypothesis that BIC amplitude peaks when electrodes from the two sides activate maximally overlapping neural populations, we measured multiunit neural activity along the tonotopic gradient of the inferior colliculus (IC) with 16-channel recording probes and determined the spatial pattern of IC activation for each stimulating electrode. We found that the interaural electrode pairings that produced the best aligned IC activation patterns were also those that yielded maximum BIC amplitude. These results suggest that EABR measurements may provide a method for assigning frequency–channel mappings in bilateral implant recipients, such as pediatric patients, for which psychophysical measures of pitch ranking or binaural fusion are unavailable. PMID:17225976
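The binaural interaction component used here is conventionally derived by subtracting the sum of the two monaural evoked responses from the binaurally evoked response; a short sketch of that derivation (array names are placeholders):

import numpy as np

def binaural_interaction_component(left_only, right_only, binaural):
    # BIC = binaural EABR minus the sum of the two monaural EABRs (equal-length waveforms)
    return np.asarray(binaural) - (np.asarray(left_only) + np.asarray(right_only))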
Lateralization of the Huggins pitch
NASA Astrophysics Data System (ADS)
Zhang, Peter Xinya; Hartmann, William M.
2004-05-01
The lateralization of the Huggins pitch (HP) was measured using a direct estimation method. The background noise was initially N0 or Nπ, and then the laterality of the entire stimulus was varied with a frequency-independent interaural delay, ranging from -1 to +1 ms. Two versions of the HP boundary region were used, stepped phase and linear phase. When presented in isolation, without the broadband background, the stepped boundary can be lateralized on its own but the linear boundary cannot. Nevertheless, the lateralizations of both forms of HP were found to be almost identical functions both of the interaural delay and of the boundary frequency over a two-octave range. In a third experiment, the same listeners lateralized sine tones in quiet as a function of interaural delay. Good agreement was found between lateralizations of the HP and of the corresponding sine tones. The lateralization judgments depended on the boundary frequency according to the expected hyperbolic law except when the frequency-independent delay was zero. For the latter case, the dependence on boundary frequency was much slower than hyperbolic. [Work supported by the NIDCD grant DC 00181.]
Panniello, Mariangela; King, Andrew J; Dahmen, Johannes C; Walker, Kerry M M
2018-01-01
Abstract Despite decades of microelectrode recordings, fundamental questions remain about how auditory cortex represents sound-source location. Here, we used in vivo 2-photon calcium imaging to measure the sensitivity of layer II/III neurons in mouse primary auditory cortex (A1) to interaural level differences (ILDs), the principal spatial cue in this species. Although most ILD-sensitive neurons preferred ILDs favoring the contralateral ear, neurons with either midline or ipsilateral preferences were also present. An opponent-channel decoder accurately classified ILDs using the difference in responses between populations of neurons that preferred contralateral-ear-greater and ipsilateral-ear-greater stimuli. We also examined the spatial organization of binaural tuning properties across the imaged neurons with unprecedented resolution. Neurons driven exclusively by contralateral ear stimuli or by binaural stimulation occasionally formed local clusters, but their binaural categories and ILD preferences were not spatially organized on a more global scale. In contrast, the sound frequency preferences of most neurons within local cortical regions fell within a restricted frequency range, and a tonotopic gradient was observed across the cortical surface of individual mice. These results indicate that the representation of ILDs in mouse A1 is comparable to that of most other mammalian species, and appears to lack systematic or consistent spatial order. PMID:29136122
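A toy version of the opponent-channel read-out described above, in which the ILD category is decoded from the difference between the pooled responses of contra-preferring and ipsi-preferring neurons; the array names and the simple sum-and-compare rule are assumptions, not the authors' decoder.

import numpy as np

def opponent_channel_decode(responses, prefers_contra):
    # responses: (n_neurons,) single-trial responses; prefers_contra: boolean mask per neuron
    responses = np.asarray(responses, dtype=float)
    prefers_contra = np.asarray(prefers_contra, dtype=bool)
    contra = responses[prefers_contra].sum()
    ipsi = responses[~prefers_contra].sum()
    return "contralateral-ear-greater" if contra > ipsi else "ipsilateral-ear-greater"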
The dynamic contributions of the otolith organs to human ocular torsion
NASA Technical Reports Server (NTRS)
Merfeld, D. M.; Teiwes, W.; Clarke, A. H.; Scherer, H.; Young, L. R.
1996-01-01
We measured human ocular torsion (OT) monocularly (using video) and binocularly (using search coils) while sinusoidally accelerating (0.7 g) five human subjects along an earth-horizontal axis at five frequencies (0.35, 0.4, 0.5, 0.75, and 1.0 Hz). The compensatory nature of OT was investigated by changing the relative orientation of the dynamic (linear acceleration) and static (gravitational) cues. Four subject orientations were investigated: (1) Y-upright-acceleration along the interaural (y) axis while upright; (2) Y-supine-acceleration along the y-axis while supine; (3) Z-RED-acceleration along the dorsoventral (z) axis with right ear down; (4) Z-supine-acceleration along the z-axis while supine. Linear acceleration in the Y-upright, Y-supine and Z-RED orientations elicited conjugate OT. The smaller response in the Z-supine orientation appeared disconjugate. The amplitude of the response decreased and the phase lag increased with increasing frequency for each orientation. This frequency dependence does not match the frequency response of the regular or irregular afferent otolith neurons; therefore the response dynamics cannot be explained by simple peripheral mechanisms. The Y-upright responses were larger than the Y-supine responses (P < 0.05). This difference indicates that OT must be more complicated than a simple low-pass filtered response to interaural shear force, since the dynamic shear force along the interaural axis was identical in these two orientations. The Y-supine responses were, in turn, larger than the Z-RED responses (P < 0.01). Interestingly, the vector sum of the Y-supine responses plus Z-RED responses was not significantly different (P = 0.99) from the Y-upright responses. This suggests that, in this frequency range, the conjugate OT response during Y-upright stimulation might be composed of two components: (1) a response to shear force along the y-axis (as in Y-supine stimulation), and (2) a response to roll tilt of gravitoinertial force (as in Z-RED stimulation).
Marques do Carmo, Diego; Costa, Márcio Holsbach
2018-04-01
This work presents an online approximation method for the multichannel Wiener filter (MWF) noise reduction technique with preservation of the noise interaural level difference (ILD) for binaural hearing-aids. The steepest descent method is applied to a previously proposed MWF-ILD cost function to both approximate the optimal linear estimator of the desired speech and keep the subjective perception of the original acoustic scenario. The computational cost of the resulting algorithm is estimated in terms of multiply and accumulate operations, whose number can be controlled by setting the number of iterations at each time frame. Simulation results for the particular case of one speech source and one directional noise source show that the proposed method increases the signal-to-noise ratio (SNR) of the originally acquired speech by up to 16.9 dB in the assessed scenarios. As compared to the online implementation of the conventional MWF technique, the proposed technique provides a reduction of up to 7 dB in the noise ILD error at the price of a reduction of up to 3 dB in the output SNR. Subjective experiments with volunteers complement these objective measures with psychoacoustic results, which corroborate the expected spatial preservation of the original acoustic scenario. The proposed method allows practical online implementation of the MWF-ILD noise reduction technique under constrained computational resources. Predicted SNR improvements from 12 dB to 16.9 dB can be obtained in application-specific integrated circuits for hearing-aids and state-of-the-art digital signal processors. Copyright © 2018 Elsevier Ltd. All rights reserved.
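The sketch below shows a generic steepest-descent update for a per-frequency multichannel Wiener filter, the kind of iteration whose per-frame cost the abstract refers to; the ILD-preservation penalty that distinguishes the MWF-ILD cost function is deliberately omitted, so treat this as background rather than the proposed algorithm.

import numpy as np

def steepest_descent_mwf(Y, d, mu=0.05, n_iter=5, w=None):
    # Y: (n_mics, n_frames) complex STFT data for one frequency bin
    # d: (n_frames,) desired speech component at the reference microphone
    n_mics, n_frames = Y.shape
    if w is None:
        w = np.zeros(n_mics, dtype=complex)
    Ryy = Y @ Y.conj().T / n_frames          # input covariance estimate
    ryd = Y @ d.conj() / n_frames            # cross-correlation with the target
    for _ in range(n_iter):                  # the iteration count sets the per-frame cost
        w = w - mu * (Ryy @ w - ryd)         # gradient of E|d - w^H y|^2
    return w

# speech_estimate = w.conj() @ Y   # apply the filter to the same bin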
Moossavi, Abdollah; Mehrkian, Saiedeh; Lotfi, Yones; Faghihzadeh, Soghrat; Sajedi, Hamed
2014-11-01
Auditory processing disorder (APD) describes a complex and heterogeneous disorder characterized by poor speech perception, especially in noisy environments. APD may be responsible for a range of sensory processing deficits associated with learning difficulties. There is no general consensus about the nature of APD and how the disorder should be assessed or managed. This study assessed the effect of cognitive abilities (working memory capacity) on sound lateralization in children with auditory processing disorders, in order to determine how "auditory cognition" interacts with APD. The participants in this cross-sectional comparative study were 20 typically developing children and 17 children with a diagnosed auditory processing disorder (9-11 years old). Sound lateralization abilities were investigated using inter-aural time (ITD) differences and inter-aural intensity (IID) differences with two stimuli (high pass and low pass noise) in nine perceived positions. Working memory capacity was evaluated using non-word repetition and forward and backward digit span tasks. Linear regression was employed to measure the degree of association between working memory capacity and lateralization performance in the two groups. Children in the APD group had consistently lower scores than typically developing subjects on lateralization and working memory capacity measures. The results showed that working memory capacity had a significantly negative correlation with ITD errors, especially with the high pass noise stimulus, but not with IID errors in APD children. The study highlights the impact of working memory capacity on auditory lateralization. The findings of this research indicate that the extent to which working memory influences auditory processing depends on the type of auditory processing and the nature of the stimulus/listening situation. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
The acoustical cues to sound location in the guinea pig (Cavia porcellus)
Greene, Nathanial T; Anbuhl, Kelsey L; Williams, Whitney; Tollin, Daniel J.
2014-01-01
There are three main acoustical cues to sound location, each attributable to space- and frequency-dependent filtering of the propagating sound waves by the outer ears, head, and torso: interaural differences in time (ITD) and level (ILD) as well as monaural spectral shape cues. While the guinea pig has been a common model for studying the anatomy, physiology, and behavior of binaural and spatial hearing, extensive measurements of their available acoustical cues are lacking. Here, these cues were determined from directional transfer functions (DTFs), the directional components of the head-related transfer functions, for eleven adult guinea pigs. In the frontal hemisphere, monaural spectral notches were present for frequencies from ~10 to 20 kHz; in general, the notch frequency increased with increasing sound source elevation and in azimuth toward the contralateral ear. The maximum ITDs calculated from low-pass filtered (2 kHz cutoff frequency) DTFs were ~250 µs, whereas the maximum ITD measured with low frequency tone pips was over 320 µs. A spherical head model underestimates ITD magnitude under normal conditions, but closely approximates values when the pinnae were removed. Interaural level differences (ILDs) strongly depended on location and frequency; maximum ILDs were < 10 dB for frequencies < 4 kHz and were as large as 40 dB for frequencies > 10 kHz. Removal of the pinna reduced the depth and sharpness of spectral notches, altered the acoustical axis, and reduced the acoustical gain, ITDs, and ILDs; however, spectral shape features and acoustical gain were not completely eliminated, suggesting a substantial contribution of the head and torso in altering the sounds present at the tympanic membrane. PMID:25051197
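For context, the spherical head model mentioned above is commonly the Woodworth approximation, ITD(θ) = (a/c)(sin θ + θ); with an assumed effective head radius of about 2 cm it predicts a maximum ITD well below the measured values, consistent with the underestimation noted in the abstract.

import numpy as np

a = 0.02          # effective head radius in metres (assumed, ~2 cm)
c = 343.0         # speed of sound (m/s)
theta = np.radians(90.0)                           # source on the interaural axis
itd_us = (a / c) * (np.sin(theta) + theta) * 1e6
print(f"predicted maximum ITD ~ {itd_us:.0f} microseconds")   # ~150 us for a bare 2 cm sphere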
Relative size of auditory pathways in symmetrically and asymmetrically eared owls.
Gutiérrez-Ibáñez, Cristián; Iwaniuk, Andrew N; Wylie, Douglas R
2011-01-01
Owls are highly efficient predators with a specialized auditory system designed to aid in the localization of prey. One of the most distinctive anatomical features of the owl auditory system is the evolution of vertically asymmetrical ears in some species, which improves their ability to localize the elevational component of a sound stimulus. In the asymmetrically eared barn owl, interaural time differences (ITD) are used to localize sounds in azimuth, whereas interaural level differences (ILD) are used to localize sounds in elevation. These two features are processed independently in two separate neural pathways that converge in the external nucleus of the inferior colliculus to form an auditory map of space. Here, we present a comparison of the relative volume of 11 auditory nuclei in both the ITD and the ILD pathways of 8 species of symmetrically and asymmetrically eared owls in order to investigate evolutionary changes in the auditory pathways in relation to ear asymmetry. Overall, our results indicate that asymmetrically eared owls have much larger auditory nuclei than owls with symmetrical ears. In asymmetrically eared owls we found that both the ITD and ILD pathways are equally enlarged, and that other auditory nuclei not directly involved in binaural comparisons are also enlarged. We suggest that the hypertrophy of auditory nuclei in asymmetrically eared owls likely reflects both an improved ability to precisely locate sounds in space and an expansion of the hearing range. Additionally, our results suggest that the hypertrophy of nuclei that compute space may have preceded the expansion of the hearing range, and that evolutionary changes in the size of the auditory system occurred independently of phylogeny. Copyright © 2011 S. Karger AG, Basel.
Litovsky, Ruth Y.; Gordon, Karen
2017-01-01
Spatial hearing skills are essential for children as they grow, learn and play. They provide critical cues for determining the locations of sources in the environment, and enable segregation of important sources, such as speech, from background maskers or interferers. Spatial hearing depends on availability of monaural cues and binaural cues. The latter result from integration of inputs arriving at the two ears from sounds that vary in location. The binaural system has exquisite mechanisms for capturing differences between the ears in both time of arrival and intensity. The major cues that are thus referred to as being vital for binaural hearing are: interaural differences in time (ITDs) and interaural differences in levels (ILDs). In children with normal hearing (NH), spatial hearing abilities are fairly well developed by age 4–5 years. In contrast, children who are deaf and hear through cochlear implants (CIs) do not have an opportunity to experience normal, binaural acoustic hearing early in life. These children may function by having to utilize auditory cues that are degraded with regard to numerous stimulus features. In recent years there has been a notable increase in the number of children receiving bilateral CIs, and evidence suggests that while having two CIs helps them function better than when listening through a single CI, they generally perform worse than their NH peers. This paper reviews some of the recent work on bilaterally implanted children. The focus is on measures of spatial hearing, including sound localization, release from masking for speech understanding in noise and binaural sensitivity using research processors. Data from behavioral and electrophysiological studies are included, with a focus on the recent work of the authors and their collaborators. The effects of auditory plasticity and deprivation on the emergence of binaural and spatial hearing are discussed along with evidence for reorganized processing from both behavioral and electrophysiological studies. The consequences of both unilateral and bilateral auditory deprivation during development suggest that the relevant set of issues is highly complex with regard to successes and the limitations experienced by children receiving bilateral cochlear implants. PMID:26828740
Binaural sluggishness in the perception of tone sequences and speech in noise.
Culling, J F; Colburn, H S
2000-01-01
The binaural system is well-known for its sluggish response to changes in the interaural parameters to which it is sensitive. Theories of binaural unmasking have suggested that detection of signals in noise is mediated by detection of differences in interaural correlation. If these theories are correct, improvements in the intelligibility of speech in favorable binaural conditions are most likely mediated by spectro-temporal variations in interaural correlation of the stimulus which mirror the spectro-temporal amplitude modulations of the speech. However, binaural sluggishness should limit the temporal resolution of the representation of speech recovered by this means. The present study tested this prediction in two ways. First, listeners' masked discrimination thresholds for ascending vs descending pure-tone arpeggios were measured as a function of rate of frequency change in the NoSo and NoSpi binaural configurations. Three-tone arpeggios were presented repeatedly and continuously for 1.6 s, masked by a 1.6-s burst of noise. In a two-interval task, listeners determined the interval in which the arpeggios were ascending. The results showed a binaural advantage of 12-14 dB for NoSpi at 3.3 arpeggios per s (arp/s), which reduced to 3-5 dB at 10.4 arp/s. This outcome confirmed that the discrimination of spectro-temporal patterns in noise is susceptible to the effects of binaural sluggishness. Second, listeners' masked speech-reception thresholds were measured in speech-shaped noise using speech which was 1, 1.5, and 2 times the original articulation rate. The articulation rate was increased using a phase-vocoder technique which increased all the modulation frequencies in the speech without altering its pitch. Speech-reception thresholds were, on average, 5.2 dB lower for the NoSpi than for the NoSo configuration, at the original articulation rate. This binaural masking release was reduced to 2.8 dB when the articulation rate was doubled, but the most notable effect was a 6-8 dB increase in thresholds with articulation rate for both configurations. These results suggest that higher modulation frequencies in masked signals cannot be temporally resolved by the binaural system, but that the useful modulation frequencies in speech are sufficiently low (<5 Hz) that they are invulnerable to the effects of binaural sluggishness, even at elevated articulation rates.
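For illustration only: increasing articulation rate without changing pitch can be approximated with an off-the-shelf phase-vocoder time stretch, for example in librosa; the file name below is a placeholder and this is not the processing chain used in the study.

import librosa

y, sr = librosa.load("speech.wav", sr=None)                 # placeholder file name
y_double_rate = librosa.effects.time_stretch(y, rate=2.0)   # 2x articulation rate, pitch preserved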
NASA Astrophysics Data System (ADS)
Bernstein, Leslie R.; Trahiotis, Constantine
2003-06-01
An acoustic pointing task was used to determine whether interaural temporal disparities (ITDs) conveyed by high-frequency "transposed" stimuli would produce larger extents of laterality than ITDs conveyed by bands of high-frequency Gaussian noise. The envelopes of transposed stimuli are designed to provide high-frequency channels with information similar to that conveyed by the waveforms of low-frequency stimuli. Lateralization was measured for low-frequency Gaussian noises, the same noises transposed to 4 kHz, and high-frequency Gaussian bands of noise centered at 4 kHz. Extents of laterality obtained with the transposed stimuli were greater than those obtained with bands of Gaussian noise centered at 4 kHz and, in some cases, were equivalent to those obtained with low-frequency stimuli. In a second experiment, the general effects on lateral position produced by imposed combinations of bandwidth, ITD, and interaural phase disparities (IPDs) on low-frequency stimuli remained when those stimuli were transposed to 4 kHz. Overall, the data were fairly well accounted for by a model that computes the cross-correlation subsequent to known stages of peripheral auditory processing augmented by low-pass filtering of the envelopes within the high-frequency channels of each ear.
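A minimal sketch of how a transposed stimulus is typically constructed: a low-frequency sinusoid is half-wave rectified, low-pass filtered, and used to modulate a high-frequency carrier, so the high-frequency channel receives envelope timing resembling low-frequency fine structure. The specific frequencies, filter order, and cutoff below are assumptions.

import numpy as np
from scipy.signal import butter, filtfilt

fs, dur = 48000, 0.5
t = np.arange(int(fs * dur)) / fs
f_low, f_carrier = 128.0, 4000.0

modulator = np.maximum(np.sin(2 * np.pi * f_low * t), 0.0)       # half-wave rectification
b, a = butter(4, 2000.0 / (fs / 2), btype="low")                 # remove rectification harmonics
modulator = filtfilt(b, a, modulator)
transposed = modulator * np.sin(2 * np.pi * f_carrier * t)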
Jones, Heath G; Kan, Alan; Litovsky, Ruth Y
2016-01-01
This study examined the effect of microphone placement on the interaural level differences (ILDs) available to bilateral cochlear implant (BiCI) users, and the subsequent effects on horizontal-plane sound localization. Virtual acoustic stimuli for sound localization testing were created individually for eight BiCI users by making acoustic transfer function measurements for microphones placed in the ear (ITE), behind the ear (BTE), and on the shoulders (SHD). The ILDs across source locations were calculated for each placement to analyze their effect on sound localization performance. Sound localization was tested using a repeated-measures, within-participant design for the three microphone placements. The ITE microphone placement provided significantly larger ILDs compared to BTE and SHD placements, which correlated with overall localization errors. However, differences in localization errors across the microphone conditions were small. The BTE microphones worn by many BiCI users in everyday life do not capture the full range of acoustic ILDs available, and also reduce the change in cue magnitudes for sound sources across the horizontal plane. Acute testing with an ITE placement reduced sound localization errors along the horizontal plane compared to the other placements in some patients. Larger improvements may be observed if patients had more experience with the new ILD cues provided by an ITE placement.
Moore, Brian C J; Sęk, Aleksander
2016-09-07
Multichannel amplitude compression is widely used in hearing aids. The preferred compression speed varies across individuals. Moore (2008) suggested that reduced sensitivity to temporal fine structure (TFS) may be associated with preference for slow compression. This idea was tested using a simulated hearing aid. It was also assessed whether preferences for compression speed depend on the type of stimulus: speech or music. Twenty-two hearing-impaired subjects were tested, and the simulated hearing aid was fitted individually using the CAM2A method. On each trial, a given segment of speech or music was presented twice. One segment was processed with fast compression and the other with slow compression, and the order was balanced across trials. The subject indicated which segment was preferred and by how much. On average, slow compression was preferred over fast compression, more so for music, but there were distinct individual differences, which were highly correlated for speech and music. Sensitivity to TFS was assessed using the difference limen for frequency at 2000 Hz and by two measures of sensitivity to interaural phase at low frequencies. The results for the difference limens for frequency, but not the measures of sensitivity to interaural phase, supported the suggestion that preference for compression speed is affected by sensitivity to TFS. © The Author(s) 2016.
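The hedged sketch below shows a generic single-channel compressor in which "speed" is governed by the attack and release time constants of the envelope follower; it is not the CAM2A-fitted, multichannel processing used in the study, and all parameter values are assumptions.

import numpy as np

def compress(x, fs, ratio=3.0, thresh_db=-40.0, attack_s=0.005, release_s=0.1):
    level_db = np.zeros(len(x))
    lvl = 1e-6
    a_att = np.exp(-1.0 / (attack_s * fs))
    a_rel = np.exp(-1.0 / (release_s * fs))
    for n, sample in enumerate(np.abs(x) + 1e-12):
        coef = a_att if sample > lvl else a_rel
        lvl = coef * lvl + (1.0 - coef) * sample          # envelope follower
        level_db[n] = 20.0 * np.log10(lvl)
    over = np.maximum(level_db - thresh_db, 0.0)
    gain_db = -over * (1.0 - 1.0 / ratio)                 # static compression rule above threshold
    return x * 10.0 ** (gain_db / 20.0)

# "Fast" vs "slow" compression differ mainly in the time constants, e.g. (assumed values):
# fast: attack_s=0.005, release_s=0.05; slow: attack_s=0.05, release_s=1.0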
Theory of acoustic design of opera house and a design proposal
NASA Astrophysics Data System (ADS)
Ando, Yoichi
2004-05-01
First, the theory of subjective preference for sound fields, based on a model of the auditory-brain system, is briefly outlined. It consists of temporal factors and spatial factors associated with the left and right cerebral hemispheres, respectively. The temporal criteria are the initial time delay gap between the direct sound and the first reflection (Δt1) and the subsequent reverberation time (Tsub). These preferred conditions are related to the minimum value of the effective duration of the running autocorrelation function of the source signal, (τe)min. The spatial criteria are the binaural listening level (LL) and the IACC, which may be extracted from the interaural cross-correlation function. In the opera house there are two different kinds of sound sources, i.e., the vocal source on the stage, with relatively short values of (τe)min, and the orchestral music in the pit, with long values of (τe)min. For these sources, a proposal is made here.
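Of the criteria listed above, the IACC is the one most often computed directly from binaural recordings: it is the maximum of the normalized interaural cross-correlation within about ±1 ms of lag. A minimal sketch, with that conventional definition assumed:

import numpy as np

def iacc(left, right, fs, max_lag_ms=1.0):
    left = left - left.mean()
    right = right - right.mean()
    full = np.correlate(left, right, mode="full")                 # cross-correlation at all lags
    norm = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    lags = np.arange(-len(right) + 1, len(left))
    keep = np.abs(lags) <= max_lag_ms * 1e-3 * fs                 # restrict to |lag| <= 1 ms
    return np.max(np.abs(full[keep])) / norm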
Spiking Models for Level-Invariant Encoding
Brette, Romain
2012-01-01
Levels of ecological sounds vary over several orders of magnitude, but the firing rate and membrane potential of a neuron are much more limited in range. In binaural neurons of the barn owl, tuning to interaural delays is independent of level differences. Yet a monaural neuron with a fixed threshold should fire earlier in response to louder sounds, which would disrupt the tuning of these neurons. How could spike timing be independent of input level? Here I derive theoretical conditions for a spiking model to be insensitive to input level. The key property is a dynamic change in spike threshold. I then show how level invariance can be physiologically implemented, with specific ionic channel properties. It appears that these ingredients are indeed present in monaural neurons of the sound localization pathway of birds and mammals. PMID:22291634
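A deliberately crude sketch of the core idea, not the paper's model: a leaky integrate-and-fire unit whose threshold relaxes toward a scaled copy of the membrane potential, so that louder inputs raise the threshold and the timing of threshold crossings shifts less with input level. All parameters are assumptions.

import numpy as np

def lif_adaptive_threshold(inp, dt=1e-4, tau_v=5e-3, tau_th=2e-3, th0=1.0, alpha=0.8):
    v, th = 0.0, th0
    spikes = []
    for n, drive in enumerate(inp):
        v += dt * (-v + drive) / tau_v                       # leaky integration of the input
        th += dt * (-(th - th0) + alpha * v) / tau_th        # dynamic threshold tracks v
        if v >= th:
            spikes.append(n * dt)
            v = 0.0                                          # reset after a spike
    return np.array(spikes)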
Performance on Tests of Central Auditory Processing by Individuals Exposed to High-Intensity Blasts
2012-07-01
… percent (gap detected on at least four of the six presentations), with all longer durations receiving a score greater than 50 percent. … Binaural Processing and Sound Localization: Temporal precision of neural firing is also involved in binaural processing and localization of sound in space. … The Masking Level Difference (MLD) test evaluates the integrity of the earliest sites of binaural comparison and sensitivity to interaural phase in the …
Bernstein, Leslie R.; Trahiotis, Constantine
2009-01-01
This study addressed how manipulating certain aspects of the envelopes of high-frequency stimuli affects sensitivity to envelope-based interaural temporal disparities (ITDs). Listeners' threshold ITDs were measured using an adaptive two-alternative paradigm employing "raised-sine" stimuli [John, M. S., et al. (2002). Ear Hear. 23, 106–117] which permit independent variation in their modulation frequency, modulation depth, and modulation exponent. Threshold ITDs were measured while manipulating modulation exponent for stimuli having modulation frequencies between 32 and 256 Hz. The results indicated that graded increases in the exponent led to graded decreases in envelope-based threshold ITDs. Threshold ITDs were also measured while parametrically varying modulation exponent and modulation depth. Overall, threshold ITDs decreased with increases in the modulation depth. Unexpectedly, increases in the exponent of the raised-sine led to especially large decreases in threshold ITD when the modulation depth was low. An interaural correlation-based model was generally able to capture changes in threshold ITD stemming from changes in the exponent, depth of modulation, and frequency of modulation of the raised-sine stimuli. The model (and several variations of it), however, could not account for the unexpected interaction between the value of the raised-sine exponent and its modulation depth. PMID:19425666
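One common reading of the raised-sine construction (John et al., 2002) is sketched below: a sinusoidal envelope is normalized to the range 0 to 1, raised to the exponent n, scaled by the modulation depth m, and applied to the carrier; treat the exact scaling as an assumption.

import numpy as np

def raised_sine(fc, fm, m, n, dur, fs):
    t = np.arange(int(fs * dur)) / fs
    env = ((1.0 + np.sin(2.0 * np.pi * fm * t)) / 2.0) ** n   # raised-sine envelope, 0..1
    env = 1.0 - m + m * env                                   # apply modulation depth m
    return env * np.sin(2.0 * np.pi * fc * t)

stim = raised_sine(fc=4000.0, fm=128.0, m=0.85, n=4, dur=0.3, fs=48000)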
Bernstein, Leslie R; Trahiotis, Constantine
2016-11-01
This study assessed whether audiometrically-defined "slight" or "hidden" hearing losses might be associated with degradations in binaural processing as measured in binaural detection experiments employing interaurally delayed signals and maskers. Thirty-one listeners participated, all having no greater than slight hearing losses (i.e., no thresholds greater than 25 dB HL). Across the 31 listeners and consistent with the findings of Bernstein and Trahiotis [(2015). J. Acoust. Soc. Am. 138, EL474-EL479] binaural detection thresholds at 500 Hz and 4 kHz increased with increasing magnitude of interaural delay, suggesting a loss of precision of coding with magnitude of interaural delay. Binaural detection thresholds were consistently found to be elevated for listeners whose absolute thresholds at 4 kHz exceeded 7.5 dB HL. No such elevations were observed in conditions having no binaural cues available to aid detection (i.e., "monaural" conditions). Partitioning and analyses of the data revealed that those elevated thresholds (1) were more attributable to hearing level than to age and (2) result from increased levels of internal noise. The data suggest that listeners whose high-frequency monaural hearing status would be classified audiometrically as being normal or "slight loss" may exhibit substantial and perceptually meaningful losses of binaural processing.
Tympanometric findings in superior semicircular canal dehiscence syndrome.
Castellucci, A; Brandolini, C; Piras, G; Modugno, G C
2013-04-01
The diagnostic role of audio-impedancemetry in superior semicircular canal dehiscence (SSCD) disease is well known. In particular, since the first reports, the presence of evoked acoustic reflexes has represented a key instrumental finding in the differential diagnosis with other middle ear pathologies that are responsible for a mild low-frequency air-bone gap (ABG). Even though high resolution computed tomography (HRCT) complemented by parasagittal reformatted images still represents the diagnostic gold standard, several instrumental tests can support a suspicion of labyrinthine capsule dehiscence when "suggestive" symptoms occur. Objective and subjective audiometry often represents the starting point of the diagnostic course aimed at investigating the cause responsible for the so-called "intra-labyrinthine conductive hearing loss". The purpose of this study is to evaluate the role of tympanometry, in particular of the inter-aural asymmetry ratio in peak compliance as a function of different mild low-frequency ABG values on the affected side, in the diagnostic work-up of patients with unilateral SSCD. The working hypothesis is that an increase in admittance of the "inner-middle ear" conduction system due to a "third mobile window" could be detected by tympanometry. A retrospective review of the clinical records of 45 patients with unilateral dehiscence selected from a pool of 140 subjects diagnosed with SSCD at our institution from 2003 to 2011 was performed. Values of ABG amplitude on the dehiscent side and tympanometric measurements of both ears were collected for each patient in the study group (n = 45). An asymmetry between the tympanometric peak compliance of the involved side and that of the contralateral side was investigated by calculating the inter-aural difference and the asymmetry ratio of compliance at the eardrum. A statistically significant correlation (p = 0.015 by Fisher's test) between an asymmetry ratio ≥ 14% in favour of the pathologic ear and an ABG > 20 dB nHL on the same side was found. When "evocative" symptoms of SSCD associated with a large ABG occur, the inter-aural difference in tympanometric peak compliance at the eardrum in favour of the "suspected" side could suggest an intra-labyrinthine origin for the asymmetry. Tympanometry would thus prove to be a useful instrument in the clinical-instrumental diagnosis of SSCD for detecting cases associated with alterations of inner ear impedance.
Whiteford, Kelly L; Kreft, Heather A; Oxenham, Andrew J
2017-08-01
Natural sounds can be characterized by their fluctuations in amplitude and frequency. Ageing may affect sensitivity to some forms of fluctuations more than others. The present study used individual differences across a wide age range (20-79 years) to test the hypothesis that slow-rate, low-carrier frequency modulation (FM) is coded by phase-locked auditory-nerve responses to temporal fine structure (TFS), whereas fast-rate FM is coded via rate-place (tonotopic) cues, based on amplitude modulation (AM) of the temporal envelope after cochlear filtering. Using a low (500 Hz) carrier frequency, diotic FM and AM detection thresholds were measured at slow (1 Hz) and fast (20 Hz) rates in 85 listeners. Frequency selectivity and TFS coding were assessed using forward masking patterns and interaural phase disparity tasks (slow dichotic FM), respectively. Comparable interaural level disparity tasks (slow and fast dichotic AM and fast dichotic FM) were measured to control for effects of binaural processing not specifically related to TFS coding. Thresholds in FM and AM tasks were correlated, even across tasks thought to use separate peripheral codes. Age was correlated with slow and fast FM thresholds in both diotic and dichotic conditions. The relationship between age and AM thresholds was generally not significant. Once accounting for AM sensitivity, only diotic slow-rate FM thresholds remained significantly correlated with age. Overall, results indicate stronger effects of age on FM than AM. However, because of similar effects for both slow and fast FM when not accounting for AM sensitivity, the effects cannot be unambiguously ascribed to TFS coding.
Representation of Dynamic Interaural Phase Difference in Auditory Cortex of Awake Rhesus Macaques
Scott, Brian H.; Malone, Brian J.; Semple, Malcolm N.
2009-01-01
Neurons in auditory cortex of awake primates are selective for the spatial location of a sound source, yet the neural representation of the binaural cues that underlie this tuning remains undefined. We examined this representation in 283 single neurons across the low-frequency auditory core in alert macaques, trained to discriminate binaural cues for sound azimuth. In response to binaural beat stimuli, which mimic acoustic motion by modulating the relative phase of a tone at the two ears, these neurons robustly modulate their discharge rate in response to this directional cue. In accordance with prior studies, the preferred interaural phase difference (IPD) of these neurons typically corresponds to azimuthal locations contralateral to the recorded hemisphere. Whereas binaural beats evoke only transient discharges in anesthetized cortex, neurons in awake cortex respond throughout the IPD cycle. In this regard, responses are consistent with observations at earlier stations of the auditory pathway. Discharge rate is a band-pass function of the frequency of IPD modulation in most neurons (73%), but both discharge rate and temporal synchrony are independent of the direction of phase modulation. When subjected to a receiver operator characteristic analysis, the responses of individual neurons are insufficient to account for the perceptual acuity of these macaques in an IPD discrimination task, suggesting the need for neural pooling at the cortical level. PMID:19164111
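A binaural beat stimulus of the kind used here can be generated with a small interaural frequency difference, which makes the IPD sweep continuously through 360° at the beat rate; the sketch below uses assumed values.

import numpy as np

fs, dur = 48000, 2.0
f_left, beat_hz = 500.0, 2.0
t = np.arange(int(fs * dur)) / fs
left = np.sin(2 * np.pi * f_left * t)
right = np.sin(2 * np.pi * (f_left + beat_hz) * t)   # IPD advances 2*pi every 1/beat_hz seconds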
Hausmann, Laura; von Campenhausen, Mark; Endler, Frank; Singheiser, Martin; Wagner, Hermann
2009-11-05
When sound arrives at the eardrum it has already been filtered by the body, head, and outer ear. This process is mathematically described by the head-related transfer functions (HRTFs), which are characteristic of the spatial position of a sound source and of the individual ear. HRTFs in the barn owl (Tyto alba) are also shaped by the facial ruff, a specialization that alters interaural time differences (ITD), interaural intensity differences (ILD), and the frequency spectrum of the incoming sound to improve sound localization. Here we created novel stimuli to simulate the removal of the barn owl's ruff in a virtual acoustic environment, thus creating a situation similar to passive listening in other animals, and used these stimuli in behavioral tests. HRTFs were recorded from an owl before and after removal of the ruff feathers. Normal and ruff-removed conditions were created by filtering broadband noise with the HRTFs. Under normal virtual conditions, no differences in azimuthal head-turning behavior between individualized and non-individualized HRTFs were observed. The owls were able to respond differently to stimuli from the back than to stimuli from the front having the same ITD. By contrast, such a discrimination was not possible after the virtual removal of the ruff. Elevational head-turn angles were (slightly) smaller with non-individualized than with individualized HRTFs. The removal of the ruff resulted in a large decrease in elevational head-turning amplitudes. The facial ruff a) improves azimuthal sound localization by increasing the ITD range and b) improves elevational sound localization in the frontal field by introducing a shift of iso-ILD lines out of the midsagittal plane, which causes ILDs to increase with increasing stimulus elevation. The changes at the behavioral level could be related to the changes in the binaural physical parameters that occurred after the virtual removal of the ruff. These data provide new insights into the function of external hearing structures and open up the possibility of applying the results to autonomous agents, to the creation of virtual auditory environments for humans, or to hearing aids.
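The virtual acoustic environment technique amounts to convolving the source signal with the left- and right-ear head-related impulse responses for the desired direction; a minimal sketch, with hypothetical HRIR arrays assumed to be available:

import numpy as np

def virtual_stimulus(source, hrir_left, hrir_right):
    left = np.convolve(source, hrir_left)        # filter the source with each ear's impulse response
    right = np.convolve(source, hrir_right)
    return np.column_stack([left, right])

rng = np.random.default_rng(0)
noise = rng.standard_normal(48000)               # one broadband noise token
# stereo = virtual_stimulus(noise, hrir_left, hrir_right)   # hrir_left/hrir_right are hypothetical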
Borisyuk, Alla; Semple, Malcolm N; Rinzel, John
2002-10-01
A mathematical model was developed for exploring the sensitivity of low-frequency inferior colliculus (IC) neurons to interaural phase disparity (IPD). The formulation involves a firing-rate-type model that does not include spikes per se. The model IC neuron receives IPD-tuned excitatory and inhibitory inputs (viewed as the output of a collection of cells in the medial superior olive). The model cell possesses cellular properties of firing rate adaptation and postinhibitory rebound (PIR). The descriptions of these mechanisms are biophysically reasonable, but only semi-quantitative. We seek to explain within a minimal model the experimentally observed mismatch between responses to IPD stimuli delivered dynamically and those delivered statically (McAlpine et al. 2000; Spitzer and Semple 1993). The model reproduces many features of the responses to static IPD presentations, binaural beat, and partial range sweep stimuli. These features include differences in responses to a stimulus presented in static or dynamic context: sharper tuning and phase shifts in response to binaural beats, and hysteresis and "rise-from-nowhere" in response to partial range sweeps. Our results suggest that dynamic response features are due to the structure of inputs and the presence of firing rate adaptation and PIR mechanism in IC cells, but do not depend on a specific biophysical mechanism. We demonstrate how the model's various components contribute to shaping the observed phenomena. For example, adaptation, PIR, and transmission delay shape phase advances and delays in responses to binaural beats, adaptation and PIR shape hysteresis in different ranges of IPD, and tuned inhibition underlies asymmetry in dynamic tuning properties. We also suggest experiments to test our modeling predictions: in vitro simulation of the binaural beat (phase advance at low beat frequencies, its dependence on firing rate), in vivo partial range sweep experiments (dependence of the hysteresis curve on parameters), and inhibition blocking experiments (to study inhibitory tuning properties by observation of phase shifts).
Franken, Tom P.; Bremen, Peter; Joris, Philip X.
2014-01-01
Coincidence detection by binaural neurons in the medial superior olive underlies sensitivity to interaural time difference (ITD) and interaural correlation (ρ). It is unclear whether this process is akin to a counting of individual coinciding spikes, or rather to a correlation of membrane potential waveforms resulting from converging inputs from each side. We analyzed spike trains of axons of the cat trapezoid body (TB) and auditory nerve (AN) in a binaural coincidence scheme. ITD was studied by delaying “ipsi-” vs. “contralateral” inputs; ρ was studied by using responses to different noises. We varied the number of inputs; the monaural and binaural threshold and the coincidence window duration. We examined physiological plausibility of output “spike trains” by comparing their rate and tuning to ITD and ρ to those of binaural cells. We found that multiple inputs are required to obtain a plausible output spike rate. In contrast to previous suggestions, monaural threshold almost invariably needed to exceed binaural threshold. Elevation of the binaural threshold to values larger than 2 spikes caused a drastic decrease in rate for a short coincidence window. Longer coincidence windows allowed a lower number of inputs and higher binaural thresholds, but decreased the depth of modulation. Compared to AN fibers, TB fibers allowed higher output spike rates for a low number of inputs, but also generated more monaural coincidences. We conclude that, within the parameter space explored, the temporal patterns of monaural fibers require convergence of multiple inputs to achieve physiological binaural spike rates; that monaural coincidences have to be suppressed relative to binaural ones; and that the neuron has to be sensitive to single binaural coincidences of spikes, for a number of excitatory inputs per side of 10 or less. These findings suggest that the fundamental operation in the mammalian binaural circuit is coincidence counting of single binaural input spikes. PMID:24822037
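A simplified sketch of the coincidence-counting scheme explored above: each candidate output event requires a criterion number of input spikes within a coincidence window, with at least one spike from the opposite ear. The triggering rule and parameter names are assumptions and omit the separate monaural-threshold logic analyzed in the paper.

import numpy as np

def coincidence_output_count(ipsi_trains, contra_trains, window_s, binaural_threshold):
    # ipsi_trains / contra_trains: lists of spike-time arrays in seconds
    ipsi = np.sort(np.concatenate(ipsi_trains)) if ipsi_trains else np.array([])
    contra = np.sort(np.concatenate(contra_trains)) if contra_trains else np.array([])
    count = 0
    for t in ipsi:                                           # each ipsi spike is a candidate trigger
        n_ipsi = np.sum(np.abs(ipsi - t) <= window_s)
        n_contra = np.sum(np.abs(contra - t) <= window_s)
        if n_contra >= 1 and (n_ipsi + n_contra) >= binaural_threshold:
            count += 1                                       # a binaural coincidence was counted
    return count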
Comparative physiology of sound localization in four species of owls.
Volman, S F; Konishi, M
1990-01-01
Bilateral ear asymmetry is found in some, but not all, species of owls. We investigated the neural basis of sound localization in symmetrical and asymmetrical species, to deduce how ear asymmetry might have evolved from the ancestral condition, by comparing the response properties of neurons in the external nucleus of the inferior colliculus (ICx) of the symmetrical burrowing owl and asymmetrical long-eared owl with previous findings in the symmetrical great horned owl and asymmetrical barn owl. In the ICx of all of these owls, the neurons had spatially restricted receptive fields, and auditory space was topographically mapped. In the symmetrical owls, ICx units were not restricted in elevation, and only azimuth was mapped in ICx. In the barn owl, the space map is two-dimensional, with elevation forming the second dimension. Receptive fields in the long-eared owl were somewhat restricted in elevation, but their tuning was not sharp enough to determine if elevation is mapped. In every species, the primary cue for azimuth was interaural time difference, although ICx units were also tuned for interaural intensity difference (IID). In the barn owl, the IIDs of sounds with frequencies between about 5 and 8 kHz vary systematically with elevation, and the IID selectivity of ICx neurons primarily encodes elevation. In the symmetrical owls, whose ICx neurons do not respond to frequencies above about 5 kHz, IID appears to be a supplementary cue for azimuth. We hypothesize that ear asymmetry can be exploited by owls that have evolved the higher-frequency hearing necessary to generate elevation cues. Thus, the IID selectivity of ICx neurons in symmetrical owls may preadapt them for asymmetry; the neural circuitry that underlies IID selectivity is already present in symmetrical owls, but because IID is not absolutely required to encode azimuth it can come to encode elevation in asymmetrical owls.
Epp, Bastian; Yasin, Ifat; Verhey, Jesko L
2013-12-01
The audibility of important sounds is often hampered due to the presence of other masking sounds. The present study investigates if a correlate of the audibility of a tone masked by noise is found in late auditory evoked potentials measured from human listeners. The audibility of the target sound at a fixed physical intensity is varied by introducing auditory cues of (i) interaural target signal phase disparity and (ii) coherent masker level fluctuations in different frequency regions. In agreement with previous studies, psychoacoustical experiments showed that both stimulus manipulations result in a masking release (i: binaural masking level difference; ii: comodulation masking release) compared to a condition where those cues are not present. Late auditory evoked potentials (N1, P2) were recorded for the stimuli at a constant masker level, but different signal levels within the same set of listeners who participated in the psychoacoustical experiment. The data indicate differences in N1 and P2 between stimuli with and without interaural phase disparities. However, differences for stimuli with and without coherent masker modulation were only found for P2, i.e., only P2 is sensitive to the increase in audibility, irrespective of the cue that caused the masking release. The amplitude of P2 is consistent with the psychoacoustical finding of an addition of the masking releases when both cues are present. Even though it cannot be concluded where along the auditory pathway the audibility is represented, the P2 component of auditory evoked potentials is a candidate for an objective measure of audibility in the human auditory system. Copyright © 2013 Elsevier B.V. All rights reserved.
Gifford, René H; Dorman, Michael F; Skarzynski, Henryk; Lorens, Artur; Polak, Marek; Driscoll, Colin L W; Roland, Peter; Buchman, Craig A
2013-01-01
The aim of this study was to assess the benefit of having preserved acoustic hearing in the implanted ear for speech recognition in complex listening environments. The present study included a within-subjects, repeated-measures design including 21 English-speaking and 17 Polish-speaking cochlear implant (CI) recipients with preserved acoustic hearing in the implanted ear. The patients were implanted with electrodes that varied in insertion depth from 10 to 31 mm. Mean preoperative low-frequency thresholds (average of 125, 250, and 500 Hz) in the implanted ear were 39.3 and 23.4 dB HL for the English- and Polish-speaking participants, respectively. In one condition, speech perception was assessed in an eight-loudspeaker environment in which the speech signals were presented from one loudspeaker and restaurant noise was presented from all loudspeakers. In another condition, the signals were presented in a simulation of a reverberant environment with a reverberation time of 0.6 sec. The response measures included speech reception thresholds (SRTs) and percent correct sentence understanding for two test conditions: CI plus low-frequency hearing in the contralateral ear (bimodal condition) and CI plus low-frequency hearing in both ears (best-aided condition). A subset of six English-speaking listeners were also assessed on measures of interaural time difference thresholds for a 250-Hz signal. Small, but significant, improvements in performance (1.7-2.1 dB and 6-10 percentage points) were found for the best-aided condition versus the bimodal condition. Postoperative thresholds in the implanted ear were correlated with the degree of electric and acoustic stimulation (EAS) benefit for speech recognition in diffuse noise. There was no reliable relationship among measures of audiometric threshold in the implanted ear nor elevation in threshold after surgery and improvement in speech understanding in reverberation. There was a significant correlation between interaural time difference threshold at 250 Hz and EAS-related benefit for the adaptive speech reception threshold. The findings of this study suggest that (1) preserved low-frequency hearing improves speech understanding for CI recipients, (2) testing in complex listening environments, in which binaural timing cues differ for signal and noise, may best demonstrate the value of having two ears with low-frequency acoustic hearing, and (3) preservation of binaural timing cues, although poorer than observed for individuals with normal hearing, is possible after unilateral cochlear implantation with hearing preservation and is associated with EAS benefit. The results of this study demonstrate significant communicative benefit for hearing preservation in the implanted ear and provide support for the expansion of CI criteria to include individuals with low-frequency thresholds in even the normal to near-normal range.
Aihara, Noritaka; Murakami, Shingo; Takahashi, Mariko; Yamada, Kazuo
2014-01-01
We classified the results of preoperative auditory brainstem response (ABR) in 121 patients with useful hearing and considered the utility of preoperative ABR as a preliminary assessment for intraoperative monitoring. Wave V was confirmed in 113 patients and was not confirmed in 8 patients. Intraoperative ABR could not detect wave V in these 8 patients. The 8 patients without wave V were classified into two groups (flat and wave I only), and the reason why wave V could not be detected may have differed between the groups. Because high-frequency hearing was impaired in flat patients, an alternative to click stimulation may be more effective. Monitoring cochlear nerve action potential (CNAP) may be useful because CNAP could be detected in 4 of 5 wave I only patients. Useful hearing was preserved after surgery in 1 patient in the flat group and 2 patients in wave I only group. Among patients with wave V, the mean interaural latency difference of wave V was 0.88 ms in Class A (n = 57) and 1.26 ms in Class B (n = 56). Because the latency of wave V is already prolonged before surgery, to estimate delay in wave V latency during surgery probably underestimates cochlear nerve damage. Recording intraoperative ABR is indispensable to avoid cochlear nerve damage and to provide information for surgical decisions. Confirming the condition of ABR before surgery helps to solve certain problems, such as choosing to monitor the interaural latency difference of wave V, CNAP, or alternative sound-evoked ABR.
Mokri, Yasamin; Worland, Kate; Ford, Mark; Rajan, Ramesh
2015-01-01
Humans can accurately localize sounds even in unfavourable signal-to-noise conditions. To investigate the neural mechanisms underlying this, we studied the effect of background wide-band noise on neural sensitivity to variations in interaural level difference (ILD), the predominant cue for sound localization in azimuth for high-frequency sounds, at the characteristic frequency of cells in rat inferior colliculus (IC). Binaural noise at high levels generally resulted in suppression of responses (55.8%), but at lower levels resulted in enhancement (34.8%) as well as suppression (30.3%). When recording conditions permitted, we then examined if any binaural noise effects were related to selective noise effects at each of the two ears, which we interpreted in light of well-known differences in input type (excitation and inhibition) from each ear shaping particular forms of ILD sensitivity in the IC. At high signal-to-noise ratios (SNR), in most ILD functions (41%), the effect of background noise appeared to be due to effects on inputs from both ears, while for a large percentage (35.8%) appeared to be accounted for by effects on excitatory input. However, as SNR decreased, change in excitation became the dominant contributor to the change due to binaural background noise (63.6%). These novel findings shed light on the IC neural mechanisms for sound localization in the presence of continuous background noise. They also suggest that some effects of background noise on encoding of sound location reported to be emergent in upstream auditory areas can also be observed at the level of the midbrain. PMID:25865218
Sensitivity to binaural timing in bilateral cochlear implant users.
van Hoesel, Richard J M
2007-04-01
Various measures of binaural timing sensitivity were made in three bilateral cochlear implant users, who had demonstrated moderate-to-good interaural time delay (ITD) sensitivity at 100 pulses-per-second (pps). Overall, ITD thresholds increased at higher pulse rates, lower levels, and shorter durations, although intersubject differences were evident. Monaural rate-discrimination thresholds, using the same stimulation parameters, showed more substantial elevation than ITDs with increased rate. ITD sensitivity with 6000 pps stimuli, amplitude-modulated at 100 Hz, was similar to that with unmodulated pulse trains at 100 pps, but at 200 and 300 Hz performance was poorer than with unmodulated signals. Measures of sensitivity to binaural beats with unmodulated pulse-trains showed that all three subjects could use time-varying ITD cues at 100 pps, but not 300 pps, even though static ITD sensitivity was relatively unaffected over that range. The difference between static and dynamic ITD thresholds is discussed in terms of relative contributions from initial and later arriving cues, which was further examined in an experiment using two-pulse stimuli as a function of interpulse separation. In agreement with the binaural-beat data, findings from that experiment showed poor discrimination of ITDs on the second pulse when the interval between pulses was reduced to a few milliseconds.
Audible sonar images generated with proprioception for target analysis.
Kuc, Roman B
2017-05-01
Some blind humans have demonstrated the ability to detect and classify objects with echolocation using palatal clicks. An audible-sonar robot mimics human click emissions, binaural hearing, and head movements to extract interaural time and level differences from target echoes. Targets of various complexity are examined by transverse displacements of the sonar and by target pose rotations that model movements performed by the blind. Controlled sonar movements executed by the robot provide data that model proprioception information available to blind humans for examining targets from various aspects. The audible sonar uses this sonar location and orientation information to form two-dimensional target images that are similar to medical diagnostic ultrasound tomograms. Simple targets, such as single round and square posts, produce distinguishable and recognizable images. More complex targets configured with several simple objects generate diffraction effects and multiple reflections that produce image artifacts. The presentation illustrates the capabilities and limitations of target classification from audible sonar images.
Greene, Nathaniel T; Anbuhl, Kelsey L; Ferber, Alexander T; DeGuzman, Marisa; Allen, Paul D; Tollin, Daniel J
2018-08-01
Despite the common use of guinea pigs in investigations of the neural mechanisms of binaural and spatial hearing, their behavioral capabilities in spatial hearing tasks have surprisingly not been thoroughly investigated. To begin to fill this void, we tested the spatial hearing of adult male guinea pigs in several experiments using a paradigm based on the prepulse inhibition (PPI) of the acoustic startle response. In the first experiment, we presented continuous broadband noise from one speaker location and switched to a second speaker location (the "prepulse") along the azimuth prior to presenting a brief, ∼110 dB SPL startle-eliciting stimulus. We found that the startle response amplitude was systematically reduced for larger changes in speaker swap angle (i.e., greater PPI), indicating that using the speaker "swap" paradigm is sufficient to assess stimulus detection of spatially separated sounds. In a second set of experiments, we swapped low- and high-pass noise across the midline to estimate their ability to utilize interaural time- and level-difference cues, respectively. The results reveal that guinea pigs can utilize both binaural cues to discriminate azimuthal sound sources. A third set of experiments examined spatial release from masking using a continuous broadband noise masker and a broadband chirp signal, both presented concurrently at various speaker locations. In general, animals displayed an increase in startle amplitude (i.e., lower PPI) when the masker was presented at speaker locations near that of the chirp signal, and reduced startle amplitudes (increased PPI) indicating lower detection thresholds when the noise was presented from more distant speaker locations. In summary, these results indicate that guinea pigs can: 1) discriminate changes in source location within a hemifield as well as across the midline, 2) discriminate sources of low- and high-pass sounds, demonstrating that they can effectively utilize both low-frequency interaural time and high-frequency level difference sound localization cues, and 3) utilize spatial release from masking to discriminate sound sources. This report confirms the guinea pig as a suitable spatial hearing model and reinforces prior estimates of guinea pig hearing ability from acoustical and physiological measurements. Copyright © 2018 Elsevier B.V. All rights reserved.
Modeling off-frequency binaural masking for short- and long-duration signals.
Nitschmann, Marc; Yasin, Ifat; Henning, G Bruce; Verhey, Jesko L
2017-08-01
Experimental binaural masking-pattern data are presented together with model simulations for 12- and 600-ms signals. The masker was a diotic 11-Hz wide noise centered on 500 Hz. The tonal signal was presented either diotically or dichotically (180° interaural phase difference) with frequencies ranging from 400 to 600 Hz. The results and the modeling agree with previous data and hypotheses; simulations with a binaural model sensitive to monaural modulation cues show that the effect of duration on off-frequency binaural masking-level differences is mainly a result of modulation cues which are only available in the monaural detection of long signals.
Spatial separation benefit for unaided and aided listening
Ahlstrom, Jayne B.; Horwitz, Amy R.; Dubno, Judy R.
2013-01-01
Consonant recognition in noise was measured at a fixed signal-to-noise ratio as a function of low-pass-cutoff frequency and noise location in older adults fit with bilateral hearing aids. To quantify age-related differences, spatial benefit was assessed in younger and older adults with normal hearing. Spatial benefit was similar for all groups suggesting that older adults used interaural difference cues to improve speech recognition in noise equivalently to younger adults. Although amplification was sufficient to increase high-frequency audibility with spatial separation, hearing-aid benefit was minimal, suggesting that factors beyond simple audibility may be responsible for limited hearing-aid benefit. PMID:24121648
Why Internally Coupled Ears (ICE) Work Well
NASA Astrophysics Data System (ADS)
van Hemmen, J. Leo
2014-03-01
Many vertebrates, such as frogs and lizards, have an air-filled cavity between left and right eardrum, i.e., internally coupled ears (ICE). Depending on source direction, the internal time (iTD) and level (iLD) differences as experienced by the animal's auditory system may greatly exceed [C. Vossen et al., JASA 128 (2010) 909-918] the external, or interaural, time and level difference (ITD and ILD). Sensory processing only encodes iTD and iLD. We present an extension of ICE theory so as to elucidate the underlying physics. First, the membrane properties of the eardrum explain why iTD dominates for low frequencies whereas iLD does so for higher frequencies. Second, a plateau iTD = γ ITD follows, with γ constant (1 < γ < 5) for input frequencies ν < ν∘; e.g., for the Tokay gecko ν∘ ~ 1.5 kHz. Third, we use a sectorial instead of a circular membrane to quantify the effect of the extracolumella embedded in the tympanum and connecting with the cochlea. The main parameters can be adjusted so that the model is species independent. Work done in collaboration with A.P. Vedurmudi and J. Goulet; partially supported by BCCN-Munich.
Glackin, Brendan; Wall, Julie A.; McGinnity, Thomas M.; Maguire, Liam P.; McDaid, Liam J.
2010-01-01
Sound localization can be defined as the ability to identify the position of an input sound source and is considered a powerful aspect of mammalian perception. For low frequency sounds, i.e., in the range 270 Hz–1.5 kHz, the mammalian auditory pathway achieves this by extracting the Interaural Time Difference between sound signals being received by the left and right ear. This processing is performed in a region of the brain known as the Medial Superior Olive (MSO). This paper presents a Spiking Neural Network (SNN) based model of the MSO. The network model is trained with the Spike Timing Dependent Plasticity learning rule, using experimentally observed Head Related Transfer Function data from an adult domestic cat. The results presented demonstrate how the proposed SNN model is able to perform sound localization with an accuracy of 91.82% when an error tolerance of ±10° is used. For angular resolutions down to 2.5°, it will be demonstrated how software-based simulations of the model incur significant computation times. The paper thus also addresses preliminary implementation on a Field Programmable Gate Array based hardware platform to accelerate system performance. PMID:20802855
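The MSO computation that such spiking models implement can be outlined, at a much coarser level, with a Jeffress-style bank of internally delayed coincidence channels. The sketch below is not the paper's SNN/STDP implementation; the sampling rate, delay range, and the dot-product read-out are illustrative assumptions.

```python
import numpy as np

def mso_delay_line_estimate(left, right, fs, max_itd_s=350e-6, n_channels=21):
    """ITD estimate from a bank of internally delayed coincidence channels.

    Each channel delays the left input by one candidate ITD and scores how well
    it coincides with the right input (a dot product stands in for coincidence
    counting).  The delay of the best-scoring channel is the ITD estimate.
    Jeffress-style sketch only; not the spiking/STDP model described above.
    """
    candidate_itds = np.linspace(-max_itd_s, max_itd_s, n_channels)
    scores = [np.dot(np.roll(left, int(round(itd * fs))), right)   # np.roll wraps the
              for itd in candidate_itds]                           # edges; negligible here
    return candidate_itds[int(np.argmax(scores))]

fs = 200_000
t = np.arange(0, 0.05, 1 / fs)
true_itd = 175e-6                                  # right ear lags by 175 microseconds
left = np.sin(2 * np.pi * 600 * t)
right = np.sin(2 * np.pi * 600 * (t - true_itd))
est = mso_delay_line_estimate(left, right, fs)
print(f"estimated ITD: {est * 1e6:.0f} microseconds")
```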
Binaural processing of speech in light aircraft.
DOT National Transportation Integrated Search
1972-09-01
Laboratory studies have shown that the human binaural auditory system can extract signals from noise more effectively when the signals (or the noise) are presented in one of several interaurally disparate configurations. Questions arise as to whether...
Interaural attenuation for Sennheiser HDA 200 circumaural earphones.
Brännström, K Jonas; Lantz, Johannes
2010-06-01
Interaural attenuation (IA) was evaluated for pure tones (frequency range 125 to 16000 Hz) using Sennheiser HDA 200 circumaural earphones and Telephonics TDH-39P earphones in nine unilaterally deaf subjects. Audiometry was conducted in 1-dB steps using the manual ascending technique in accordance with ISO 8253-1. For all subjects and for all tested frequencies, the lowest IA value for HDA 200 was 42 dB. The present IA values for TDH-39P earphones closely resemble previously reported data. The findings show that the HDA 200 earphones provide more IA than the TDH-39P, especially at lower frequencies (
Tonotopic tuning in a sound localization circuit.
Slee, Sean J; Higgs, Matthew H; Fairhall, Adrienne L; Spain, William J
2010-05-01
Nucleus laminaris (NL) neurons encode interaural time difference (ITD), the cue used to localize low-frequency sounds. A physiologically based model of NL input suggests that ITD information is contained in narrow frequency bands around harmonics of the sound frequency. This suggested a theory, which predicts that, for each tone frequency, there is an optimal time course for synaptic inputs to NL that will elicit the largest modulation of NL firing rate as a function of ITD. The theory also suggested that neurons in different tonotopic regions of NL require specialized tuning to take advantage of the input gradient. Tonotopic tuning in NL was investigated in brain slices by separating the nucleus into three regions based on its anatomical tonotopic map. Patch-clamp recordings in each region were used to measure both the synaptic and the intrinsic electrical properties. The data revealed a tonotopic gradient of synaptic time course that closely matched the theoretical predictions. We also found postsynaptic band-pass filtering. Analysis of the combined synaptic and postsynaptic filters revealed a frequency-dependent gradient of gain for the transformation of tone amplitude to NL firing rate modulation. Models constructed from the experimental data for each tonotopic region demonstrate that the tonotopic tuning measured in NL can improve ITD encoding across sound frequencies.
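The prediction that faster synaptic time courses are needed at higher best frequencies can be illustrated with a toy read-out. The sketch below is not the published NL model: the half-wave-rectified drive, the exponential kernel, and the product read-out are stand-ins chosen to show how ITD rate modulation collapses when the synaptic time constant is slow relative to the tone period.

```python
import numpy as np

def itd_modulation_depth(freq_hz, tau_syn_s, fs=100_000, dur_s=0.1):
    """Depth of ITD rate modulation for a toy coincidence read-out.

    Monaural drive is a half-wave rectified sinusoid (phase locking) filtered by
    an exponential synaptic kernel with time constant tau_syn_s.  The binaural
    output is the product of the two filtered drives, averaged over time; its
    modulation with ITD shrinks when the kernel is slow relative to the period.
    """
    t = np.arange(0, dur_s, 1 / fs)
    kernel = np.exp(-np.arange(0, 5 * tau_syn_s, 1 / fs) / tau_syn_s)
    kernel /= kernel.sum()

    def drive(delay_s):
        return np.convolve(np.maximum(np.sin(2 * np.pi * freq_hz * (t - delay_s)), 0),
                           kernel, mode="same")

    ref = drive(0.0)
    itds = np.linspace(-0.5 / freq_hz, 0.5 / freq_hz, 41)      # one cycle of ITD
    rates = np.array([np.mean(ref * drive(itd)) for itd in itds])
    return (rates.max() - rates.min()) / rates.max()

for f in (500, 2000):                 # lower vs higher tonotopic region
    for tau in (2e-3, 0.2e-3):        # slower vs faster synaptic time course
        print(f"{f} Hz, tau = {tau * 1e3:.1f} ms: modulation depth = "
              f"{itd_modulation_depth(f, tau):.2f}")
```

In this toy version, only the fast kernel preserves deep ITD modulation at the higher frequency, which is the qualitative gradient the slice data above support.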
Mechanisms underlying the temporal precision of sound coding at the inner hair cell ribbon synapse
Moser, Tobias; Neef, Andreas; Khimich, Darina
2006-01-01
Our auditory system is capable of perceiving the azimuthal location of a low frequency sound source with a precision of a few degrees. This requires the auditory system to detect time differences in sound arrival between the two ears down to tens of microseconds. The detection of these interaural time differences relies on network computation by auditory brainstem neurons sharpening the temporal precision of the afferent signals. Nevertheless, the system requires the hair cell synapse to encode sound with the highest possible temporal acuity. In mammals, each auditory nerve fibre receives input from only one inner hair cell (IHC) synapse. Hence, this single synapse determines the temporal precision of the fibre. As if this was not enough of a challenge, the auditory system is also capable of maintaining such high temporal fidelity with acoustic signals that vary greatly in their intensity. Recent research has started to uncover the cellular basis of sound coding. Functional and structural descriptions of synaptic vesicle pools and estimates for the number of Ca2+ channels at the ribbon synapse have been obtained, as have insights into how the receptor potential couples to the release of synaptic vesicles. Here, we review current concepts about the mechanisms that control the timing of transmitter release in inner hair cells of the cochlea. PMID:16901948
Młynarski, Wiktor
2015-01-01
In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral position, while their slopes are steepest at the interaural midline, allowing for the maximum localization accuracy in that area. These experimental observations contradict initial assumptions that the auditory space is represented as a topographic cortical map. It has been suggested that a “panoramic” code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that properties of spatial auditory neurons identified experimentally follow from a general design principle- learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left and right ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. Spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. Obtained tuning curves match well tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of the auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy-efficient coding. PMID:25996373
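The quantities that converge in the model's second layer (band amplitude and interaural phase difference) can be extracted from a binaural signal as sketched below. This is not the learned complex-valued basis or the sparse-coding stage itself; the band-pass filter, analytic-signal decomposition, and the 500 Hz example are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def band_amplitude_and_ipd(left, right, fs, f_lo, f_hi):
    """Per-band amplitude and interaural phase difference from a binaural signal.

    Mirrors only the kind of quantities the model's second layer combines
    (amplitude and interaural phase); it is not the learned complex basis itself.
    """
    sos = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    l_an = hilbert(sosfiltfilt(sos, left))    # analytic signal, left ear
    r_an = hilbert(sosfiltfilt(sos, right))   # analytic signal, right ear
    amplitude = (np.abs(l_an) + np.abs(r_an)) / 2
    ipd = np.angle(l_an * np.conj(r_an))      # wrapped interaural phase difference
    return amplitude, ipd

# Example: a 500 Hz tone with a 500 microsecond interaural delay.
fs = 44100
t = np.arange(0, 0.2, 1 / fs)
left = np.sin(2 * np.pi * 500 * t)
right = np.sin(2 * np.pi * 500 * (t - 500e-6))
amp, ipd = band_amplitude_and_ipd(left, right, fs, 400, 600)
print(np.median(ipd))   # ~2*pi*500*500e-6, i.e. about 1.57 rad
```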
Effect of source location and listener location on ILD cues in a reverberant room
NASA Astrophysics Data System (ADS)
Ihlefeld, Antje; Shinn-Cunningham, Barbara G.
2004-05-01
Short-term interaural level differences (ILDs) were analyzed for simulations of the signals that would reach a listener in a reverberant room. White noise was convolved with manikin head-related impulse responses measured in a classroom to simulate different locations of the source relative to the manikin and different manikin positions in the room. The ILDs of the signals were computed within each third-octave band over a relatively short time window to investigate how reliably ILD cues encode source laterality. Overall, the mean of the ILD magnitude increases with lateral angle and decreases with distance, as expected. Increasing reverberation decreases the mean ILD magnitude and increases the variance of the short-term ILD, so that the spatial information carried by ILD cues is degraded by reverberation. These results suggest that the mean ILD is not a reliable cue for determining source laterality in a reverberant room. However, by taking into account both the mean and variance, the distribution of high-frequency short-term ILDs provides some spatial information. This analysis suggests that, in order to use ILDs to judge source direction in reverberant space, listeners must accumulate information about how the short-term ILD varies over time. [Work supported by NIDCD and AFOSR.
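The analysis described here (third-octave filtering followed by short-time ILD computation) is straightforward to sketch. The window length, filter order, and the synthetic 6 dB example below are illustrative assumptions; in the study the two ear signals came from noise convolved with measured manikin head-related impulse responses.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def short_term_ild(left, right, fs, fc, win_s=0.02):
    """Short-term ILD (dB) in one third-octave band, computed in consecutive windows.

    left, right : ear signals (e.g. noise convolved with measured HRIRs).
    fc          : band centre frequency; third-octave edges are fc * 2**(±1/6).
    """
    lo, hi = fc * 2 ** (-1 / 6), fc * 2 ** (1 / 6)
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    l_b, r_b = sosfilt(sos, left), sosfilt(sos, right)
    n = int(win_s * fs)
    ilds = []
    for k in range(len(l_b) // n):
        seg = slice(k * n, (k + 1) * n)
        p_l = np.mean(l_b[seg] ** 2) + 1e-20   # small constant avoids log of zero
        p_r = np.mean(r_b[seg] ** 2) + 1e-20
        ilds.append(10 * np.log10(p_l / p_r))
    return np.array(ilds)

# Synthetic "anechoic" example: the right ear is simply attenuated by 6 dB.
fs = 44100
rng = np.random.default_rng(1)
noise = rng.standard_normal(fs)                 # 1 s of white noise at the source
left, right = noise, 10 ** (-6 / 20) * noise
ilds = short_term_ild(left, right, fs, fc=4000)
print(f"mean ILD {ilds.mean():.1f} dB, std {ilds.std():.2f} dB")
```

With reverberant, HRIR-convolved inputs instead of this scaled copy, the window-to-window spread of these values is exactly the variance the study quantifies.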
NASA Technical Reports Server (NTRS)
Beaton, K. H.; Holly, J. E.; Clement, G. R.; Wood, S. J.
2011-01-01
The neural mechanisms to resolve ambiguous tilt-translation motion have been hypothesized to be different for motion perception and eye movements. Previous studies have demonstrated differences in ocular and perceptual responses using a variety of motion paradigms, including Off-Vertical Axis Rotation (OVAR), Variable Radius Centrifugation (VRC), translation along a linear track, and tilt about an Earth-horizontal axis. While the linear acceleration across these motion paradigms is presumably equivalent, there are important differences in semicircular canal cues. The purpose of this study was to compare translation motion perception and horizontal slow phase velocity to quantify consistencies, or lack thereof, across four different motion paradigms. Twelve healthy subjects were exposed to sinusoidal interaural linear acceleration between 0.01 and 0.6 Hz at 1.7 m/s/s (equivalent to a 10° tilt) using OVAR, VRC, roll tilt, and lateral translation. During each trial, subjects verbally reported the amount of perceived peak-to-peak lateral translation and indicated the direction of motion with a joystick. Binocular eye movements were recorded using video-oculography. In general, the gain of translation perception (ratio of reported linear displacement to equivalent linear stimulus displacement) increased with stimulus frequency, while the phase did not significantly vary. However, translation perception was more pronounced during both VRC and lateral translation involving actual translation, whereas perceptions were less consistent and more variable during OVAR and roll tilt, which did not involve actual translation. For each motion paradigm, horizontal eye movements were negligible at low frequencies and showed phase lead relative to the linear stimulus. At higher frequencies, the gain of the eye movements increased and became more in phase with the acceleration stimulus. While these results are consistent with the hypothesis that the neural computational strategies for motion perception and eye movements differ, they also indicate that the specific motion platform employed can have a significant effect on the amplitude and phase of both.
The Clinical Utility of Vestibular-Evoked Myogenic Potentials in the Diagnosis of Ménière’s Disease
Maheu, Maxime; Alvarado-Umanzor, Jenny Marylin; Delcenserie, Audrey; Champoux, François
2017-01-01
Ménière’s disease (MD) is a condition first described over 150 years ago that involves audiological and vestibular manifestations, such as aural fullness, tinnitus, vertigo, and fluctuating hearing thresholds. Over the past few years, many researchers have assessed different techniques to help diagnose this pathology. Vestibular-evoked myogenic potential (VEMP) is an electrophysiological method assessing the saccule (cVEMP) and the utricle (oVEMP). Its clinical utility in the diagnosis of multiple pathologies, such as superior canal dehiscence, has made this tool a common method used in otologic clinics. The main objective of the present review is to determine the current state of knowledge of the VEMP in the identification of MD, such as the type of stimuli, the frequency tuning, and the interaural asymmetry ratio of the cVEMP and the oVEMP. Results show that the type of stimulation, the frequency sensitivity shift and the interaural asymmetry ratio (IAR) could be useful tools to diagnose and describe the evolution of MD. It is, however, important to emphasize that further studies are needed to confirm the utility of VEMP in the identification of MD in its early stage, using either bone-conduction vibration or air-conduction stimulation, which is of clinical importance when it comes to early intervention. PMID:28861037
Andrade, Isabel Vaamonde Sanchez; Santos-Perez, Sofia; Diz, Pilar Gayoso; Caballero, Torcuato Labella; Soto-Varela, Andrés
2013-05-01
Bithermal caloric testing and vestibular evoked myogenic potentials (VEMPs) are both diagnostic tools for the study of the vestibular system. The first tests the horizontal semicircular canal and the second evaluates the saccule and lower vestibular nerve. The results of these two tests can therefore be expected to be correlated. The aim of this study was to compare bithermal caloric test results with VEMP records in normal subjects to verify whether they are correlated. A prospective study was conducted in 60 healthy subjects (30 men and 30 women) who underwent otoscopy, pure tone audiometry, bithermal caloric testing and VEMPs. From the caloric test, we assessed the presence of possible vestibular hypofunction, whether there was directional preponderance and reflectivity of each ear (all based on both slow phase velocity and nystagmus frequency). The analysed VEMPs variables were: p1 and n1 latency, corrected amplitude, interaural p1 latency difference and p1 interaural amplitude asymmetry. We compared the reflectivity, hypofunction and directional preponderance of the caloric tests with the corrected amplitudes and amplitude asymmetries of the VEMPs. No correlations were found in the different comparisons between bithermal caloric testing results and VEMPs except for a weak correlation (p = 0.039) when comparing preponderance based on the number of nystagmus in the caloric test and amplitude asymmetry with 99 dB tone burst in the VEMPs test. The results indicate that the two diagnostic tests are not comparable, so one of them cannot replace the other, but the use of both increases diagnostic success in some conditions.
From microseconds to seconds and minutes—time computation in insect hearing
Hartbauer, Manfred; Römer, Heiner
2014-01-01
The computation of time in the auditory system of insects is of relevance at rather different time scales, covering a large range from microseconds to several minutes. At one end of this range, only a few microseconds of interaural time differences are available for directional hearing, due to the small distance between the ears, usually considered too small to be processed reliably by simple nervous systems. Synapses of interneurons in the afferent auditory pathway are, however, very sensitive to a time difference of only 1–2 ms provided by the latency shift of afferent activity with changing sound direction. At a much larger time scale of several tens of milliseconds to seconds, time processing is important in the context of species recognition, but also for those insects where males produce acoustic signals within choruses, and the temporal relationship between song elements strongly deviates from a random distribution. In these situations, some species exhibit a more or less strict phase relationship of song elements, based on phase response properties of their song oscillator. Here we review evidence on how this may influence mate choice decisions. In the same dimension of some tens of milliseconds we find species of katydids with a duetting communication scheme, where one sex only performs phonotaxis to the other sex if the acoustic response falls within a very short time window after its own call. Such time windows show some features unique to insects, and although their neuronal implementation is unknown so far, the similarity with time processing for target range detection in bat echolocation will be discussed. Finally, the time scale being processed must be extended into the range of many minutes, since some acoustic insects produce singing bouts lasting quite long, and female preferences may be based on total signaling time. PMID:24782783
Sound source localization identification accuracy: Envelope dependencies.
Yost, William A
2017-07-01
Sound source localization accuracy as measured in an identification procedure in a front azimuth sound field was studied for click trains, modulated noises, and a modulated tonal carrier. Sound source localization accuracy was determined as a function of the number of clicks in a 64 Hz click train and click rate for a 500 ms duration click train. The clicks were either broadband or high-pass filtered. Sound source localization accuracy was also measured for a single broadband filtered click and compared to a similar broadband filtered, short-duration noise. Sound source localization accuracy was determined as a function of sinusoidal amplitude modulation and the "transposed" process of modulation of filtered noises and a 4 kHz tone. Different rates (16 to 512 Hz) of modulation (including unmodulated conditions) were used. Providing modulation for filtered click stimuli, filtered noises, and the 4 kHz tone had, at most, a very small effect on sound source localization accuracy. These data suggest that amplitude modulation, while providing information about interaural time differences in headphone studies, does not have much influence on sound source localization accuracy in a sound field.
Klump, Georg M.; Tollin, Daniel J.
2016-01-01
The auditory brainstem response (ABR) is a sound-evoked, non-invasively measured electrical potential representing the sum of neuronal activity in the auditory brainstem and midbrain. ABR peak amplitudes and latencies are widely used in human and animal auditory research and for clinical screening. The binaural interaction component (BIC) of the ABR is the difference between the sum of the monaural ABRs and the ABR obtained with binaural stimulation. The BIC comprises a series of distinct waves, the largest of which (DN1) has been used for evaluating binaural hearing in both normal-hearing and hearing-impaired listeners. Based on data from animal and human studies, we discuss the possible anatomical and physiological bases of the BIC (DN1 in particular). The effects of electrode placement and stimulus characteristics on the binaurally evoked ABR are evaluated. We review how interaural time and intensity differences affect the BIC and, analyzing these dependencies, draw conclusions about the mechanism underlying the generation of the BIC. Finally, the utility of the BIC for clinical diagnoses is summarized. PMID:27232077
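The BIC itself is a simple arithmetic comparison of averaged waveforms, as sketched below. The sketch assumes equal-length ABR averages on a common time base; sign conventions vary across studies, and the toy waveforms are arbitrary, for illustration only.

```python
import numpy as np

def binaural_interaction_component(abr_left, abr_right, abr_binaural):
    """BIC waveform: binaurally evoked ABR minus the sum of the monaural ABRs.

    All inputs are averaged ABR waveforms on a common time base (same length,
    same sampling rate).  With this sign convention the most negative deflection
    of the BIC is the trough usually labelled DN1.
    """
    abr_left, abr_right, abr_binaural = map(np.asarray, (abr_left, abr_right, abr_binaural))
    bic = abr_binaural - (abr_left + abr_right)
    dn1_index = int(np.argmin(bic))          # latency index of the DN1 trough
    return bic, dn1_index

# Toy example on a 10 kHz time base (values are arbitrary).
fs = 10_000
t = np.arange(0, 0.01, 1 / fs)
wave = np.exp(-((t - 0.004) / 0.0005) ** 2)          # a single "peak"
left, right = 0.5 * wave, 0.5 * wave
binaural = 0.9 * wave                                # less than the monaural sum
bic, dn1 = binaural_interaction_component(left, right, binaural)
print(f"DN1 amplitude {bic[dn1]:.2f} (arbitrary units) at {t[dn1] * 1000:.1f} ms")
```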
Modulation cues influence binaural masking-level difference in masking-pattern experiments.
Nitschmann, Marc; Verhey, Jesko L
2012-03-01
Binaural masking patterns show a steep decrease in the binaural masking-level difference (BMLD) when masker and signal have no frequency component in common. Experimental threshold data are presented together with model simulations for a diotic masker centered at 250 or 500 Hz and a bandwidth of 10 or 100 Hz masking a sinusoid interaurally in phase (S(0)) or in antiphase (S(π)). Simulations with a binaural model, including a modulation filterbank for the monaural analysis, indicate that a large portion of the decrease in the BMLD in remote-masking conditions may be due to an additional modulation cue available for monaural detection. © 2012 Acoustical Society of America
Modeling the utility of binaural cues for underwater sound localization.
Schneider, Jennifer N; Lloyd, David R; Banks, Patchouly N; Mercado, Eduardo
2014-06-01
The binaural cues used by terrestrial animals for sound localization in azimuth may not always suffice for accurate sound localization underwater. The purpose of this research was to examine the theoretical limits of interaural timing and level differences available underwater using computational and physical models. A paired-hydrophone system was used to record sounds transmitted underwater and recordings were analyzed using neural networks calibrated to reflect the auditory capabilities of terrestrial mammals. Estimates of source direction based on temporal differences were most accurate for frequencies between 0.5 and 1.75 kHz, with greater resolution toward the midline (2°), and lower resolution toward the periphery (9°). Level cues also changed systematically with source azimuth, even at lower frequencies than expected from theoretical calculations, suggesting that binaural mechanical coupling (e.g., through bone conduction) might, in principle, facilitate underwater sound localization. Overall, the relatively limited ability of the model to estimate source position using temporal and level difference cues underwater suggests that animals such as whales may use additional cues to accurately localize conspecifics and predators at long distances. Copyright © 2014 Elsevier B.V. All rights reserved.
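Part of the theoretical limit discussed here follows directly from the higher speed of sound in water: for a fixed receiver spacing, the maximum interaural (or inter-hydrophone) time difference shrinks by the ratio of the sound speeds. The spacing and the free-field formula below are illustrative assumptions, not the paper's model.

```python
def max_itd(receiver_spacing_m, sound_speed_m_s):
    """Largest time difference for a simple two-receiver, free-field model.

    Uses ITD = d * sin(theta) / c at theta = 90 degrees; real heads (and bone
    conduction, as discussed above) complicate this considerably.
    """
    return receiver_spacing_m / sound_speed_m_s

d = 0.18                              # assumed inter-receiver spacing in metres
for medium, c in (("air", 343.0), ("sea water", 1500.0)):
    print(f"{medium}: max ITD ≈ {max_itd(d, c) * 1e6:.0f} microseconds")
# air: ~525 us; sea water: ~120 us -- roughly a 4.4-fold reduction of the timing cue.
```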
Underwater hearing and sound localization with and without an air interface.
Shupak, Avi; Sharoni, Zohara; Yanir, Yoav; Keynan, Yoav; Alfie, Yechezkel; Halpern, Pinchas
2005-01-01
Underwater hearing acuity and sound localization are improved by the presence of an air interface around the pinnae and inside the external ear canals. Hearing threshold and the ability to localize sound sources are reduced underwater. The resonance frequency of the external ear is lowered when the external ear canal is filled with water, and the impedance-matching ability of the middle ear is significantly reduced due to elevation of the ambient pressure, the water-mass load on the tympanic membrane, and the addition of a fluid-air interface during submersion. Sound lateralization on land is largely explained by the mechanisms of interaural intensity differences and interaural temporal or phase differences. During submersion, these differences are largely lost due to the increase in underwater sound velocity and cancellation of the head's acoustic shadow effect because of the similarity between the impedance of the skull and the surrounding water. Ten scuba divers wearing a regular opaque face mask or an opaque ProEar 2000 (Safe Dive, Ltd., Hofit, Israel) mask that enables the presence of air at ambient pressure in and around the ear made a dive to a depth of 3 m in the open sea. Four underwater speakers arranged on the horizontal plane at 90-degree intervals and at a distance of 5 m from the diver were used for testing pure-tone hearing thresholds (PTHT), the reception threshold for the recorded sound of a rubber-boat engine, and sound localization. For sound localization, the sound of the rubber boat's engine was randomly delivered by one speaker at a time at 40 dB HL above the recorded sound of a rubber-boat engine, and the diver was asked to point to the sound source. The azimuth was measured by the diver's companion using a navigation board. Underwater PTHT with both masks were significantly higher for frequencies of 250 to 6000 Hz when compared with the thresholds on land (p <0.0001). No differences were found in the PTHT or the reception threshold for the recorded sound of a rubber-boat engine for dry or wet ear conditions. There was no difference in the sound localization error between the regular mask and the ProEar 2000 mask. The presence of air around the pinna and inside the external ear canal did not improve underwater hearing sensitivity or sound localization. These results support the argument that bone conduction plays the main role in underwater hearing.
Van Hoesel, Richard; Ramsden, Richard; Odriscoll, Martin
2002-04-01
To characterize some of the benefits available from using two cochlear implants compared with just one, sound-direction identification (ID) abilities, sensitivity to interaural time delays (ITDs) and speech intelligibility in noise were measured for a bilateral multi-channel cochlear implant user. Sound-direction ID in the horizontal plane was tested with a bilateral cochlear implant user. The subject was tested both unilaterally and bilaterally using two independent behind-the-ear ESPRIT (Cochlear Ltd.) processors, as well as bilaterally using custom research processors. Pink noise bursts were presented using an 11-loudspeaker array spanning the subject's frontal 180 degrees arc in an anechoic room. After each burst, the subject was asked to identify which loudspeaker had produced the sound. No explicit training and no feedback were given. Presentation levels were nominally at 70 dB SPL, except for a repeat experiment using the clinical devices where the presentation levels were reduced to 60 dB SPL to avoid activation of the devices' automatic gain control (AGC) circuits. Overall presentation levels were randomly varied by +/- 3 dB. For the research processor, a "low-update-rate" and a "high-update-rate" strategy were tested. Direct measurements of ITD just noticeable differences (JNDs) were made using a 3 AFC paradigm targeting 70% correct performance on the psychometric function. Stimuli included simple, low-rate electrical pulse trains as well as high-rate pulse trains modulated at 100 Hz. Speech data comparing monaural and binaural performance in noise were also collected with both low- and high-update-rate strategies on the research processors. Open-set sentences were presented from directly in front of the subject and competing multi-talker babble noise was presented from the same loudspeaker, or from a loudspeaker placed 90 degrees to the left or right of the subject. For the sound-direction ID task, monaural performance using the clinical devices showed large mean absolute errors of 81 degrees and 73 degrees, with standard deviations (averaged across all 11 loudspeakers) of 10 degrees and 17 degrees, for left and right ears, respectively. For bilateral device use at a presentation level of 70 dB SPL, the mean error improved to about 16 degrees with an average standard deviation of 18 degrees. When the presentation level was decreased to 60 dB SPL to avoid activation of the automatic gain control (AGC) circuits in the clinical processors, the mean response error improved further to 8 degrees with a standard deviation of 13 degrees. Further tests with the custom research processors, which had a higher stimulation rate and did not include AGCs, showed comparable response errors: around 8 or 9 degrees and a standard deviation of about 11 degrees for both update rates. The best ITD JNDs measured for this subject were between 350 and 400 microsec for simple low-rate pulse trains. Speech results showed a substantial headshadow advantage for bilateral device use when speech and noise were spatially separated, but little evidence of binaural unmasking. For spatially coincident speech and noise, listening with both ears showed similar results to listening with either side alone when loudness summation was compensated for. No significant differences were observed between binaural results for high and low update-rates in any test configuration. Only for monaural listening in one test configuration did the high rate show a small significant improvement over the low rate. Results show that even if interaural time delay cues are not well coded or perceived, bilateral implants can offer important advantages, both for speech in noise as well as for sound-direction identification.
Auditory and visual orienting responses in listeners with and without hearing-impairment
Brimijoin, W. Owen; McShefferty, David; Akeroyd, Michael A.
2015-01-01
Head movements are intimately involved in sound localization and may provide information that could aid an impaired auditory system. Using an infrared camera system, head position and orientation were measured for 17 normal-hearing and 14 hearing-impaired listeners seated at the center of a ring of loudspeakers. Listeners were asked to orient their heads as quickly as was comfortable toward a sequence of visual targets, or were blindfolded and asked to orient toward a sequence of loudspeakers playing a short sentence. To attempt to elicit natural orienting responses, listeners were not asked to reorient their heads to the 0° loudspeaker between trials. The results demonstrate that hearing impairment is associated with several changes in orienting responses. Hearing-impaired listeners showed a larger difference in auditory versus visual fixation position and a substantial increase in initial and fixation latency for auditory targets. Peak velocity reached roughly 140 degrees per second in both groups, corresponding to a rate of change of approximately 1 microsecond of interaural time difference per millisecond of time. Most notably, hearing impairment was associated with a large change in the complexity of the movement, changing from smooth sigmoidal trajectories to ones characterized by abruptly changing velocities, directional reversals, and frequent fixation angle corrections. PMID:20550266
Stecker, G Christopher; McLaughlin, Susan A; Higgins, Nathan C
2015-10-15
Whole-brain functional magnetic resonance imaging was used to measure blood-oxygenation-level-dependent (BOLD) responses in human auditory cortex (AC) to sounds with intensity varying independently in the left and right ears. Echoplanar images were acquired at 3 Tesla with sparse image acquisition once per 12-second block of sound stimulation. Combinations of binaural intensity and stimulus presentation rate were varied between blocks, and selected to allow measurement of response-intensity functions in three configurations: monaural 55-85 dB SPL, binaural 55-85 dB SPL with intensity equal in both ears, and binaural with average binaural level of 70 dB SPL and interaural level differences (ILD) ranging ±30 dB (i.e., favoring the left or right ear). Comparison of response functions equated for contralateral intensity revealed that BOLD-response magnitudes (1) generally increased with contralateral intensity, consistent with positive drive of the BOLD response by the contralateral ear, (2) were larger for contralateral monaural stimulation than for binaural stimulation, consistent with negative effects (e.g., inhibition) of ipsilateral input, which were strongest in the left hemisphere, and (3) also increased with ipsilateral intensity when contralateral input was weak, consistent with additional, positive, effects of ipsilateral stimulation. Hemispheric asymmetries in the spatial extent and overall magnitude of BOLD responses were generally consistent with previous studies demonstrating greater bilaterality of responses in the right hemisphere and stricter contralaterality in the left hemisphere. Finally, comparison of responses to fast (40/s) and slow (5/s) stimulus presentation rates revealed significant rate-dependent adaptation of the BOLD response that varied across ILD values. Copyright © 2015. Published by Elsevier Inc.
Ho, Cheng-Yu; Li, Pei-Chun; Chiang, Yuan-Chuan; Young, Shuenn-Tsong; Chu, Woei-Chyn
2015-01-01
Binaural hearing involves using information relating to the differences between the signals that arrive at the two ears, and it can make it easier to detect and recognize signals in a noisy environment. This phenomenon of binaural hearing is quantified in laboratory studies as the binaural masking-level difference (BMLD). Mandarin is one of the most commonly used languages, but there are no published values of the BMLD or the binaural intelligibility-level difference (BILD) based on Mandarin tones. Therefore, this study investigated the BMLD and BILD of Mandarin tones. The BMLDs of Mandarin tone detection were measured based on the detection threshold differences for the four tones of the voiced vowels /i/ (i.e., /i1/, /i2/, /i3/, and /i4/) and /u/ (i.e., /u1/, /u2/, /u3/, and /u4/) in the presence of speech-spectrum noise when presented interaurally in phase (S0N0) and interaurally in antiphase (SπN0). The BILDs of Mandarin tone recognition in speech-spectrum noise were determined as the differences in the target-to-masker ratio (TMR) required for 50% correct tone recognition between the S0N0 and SπN0 conditions. The detection thresholds for the four tones of /i/ and /u/ differed significantly (p<0.001) between the S0N0 and SπN0 conditions. The average detection thresholds of Mandarin tones were all lower in the SπN0 condition than in the S0N0 condition, and the BMLDs ranged from 7.3 to 11.5 dB. The TMR for 50% correct Mandarin tone recognition differed significantly (p<0.001) between the S0N0 and SπN0 conditions, at –13.4 and –18.0 dB, respectively, with a mean BILD of 4.6 dB. The study showed that the thresholds of Mandarin tone detection and recognition in the presence of speech-spectrum noise are improved when phase inversion is applied to the target speech. The average BILDs of Mandarin tones are smaller than the average BMLDs of Mandarin tones. PMID:25835987
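Both masking-release measures reported here are simple threshold differences between the diotic (S0N0) and antiphasic (SπN0) conditions, as in the sketch below; the numbers are the recognition thresholds quoted in the abstract above.

```python
def masking_release_db(threshold_s0n0_db, threshold_spin0_db):
    """Masking release (BMLD or BILD) as the threshold drop from S0N0 to SpiN0."""
    return threshold_s0n0_db - threshold_spin0_db

# Recognition thresholds (TMR for 50% correct) quoted above:
bild = masking_release_db(-13.4, -18.0)
print(f"BILD = {bild:.1f} dB")   # 4.6 dB, matching the reported mean BILD
```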
Noble, William; Gatehouse, Stuart
2004-02-01
A series of comparative analyses is presented between a group with relatively similar degrees of hearing loss in each ear (n = 103: symmetry group) and one with dissimilar losses (n = 50: asymmetry group). Asymmetry was defined as an interaural difference of more than 10dB in hearing levels averaged over 0.5. 1, 2 and 4kHz. Comparison was focused on self-rated disabilities as reflected in responses on the Speech, Spatial and Qualities of Hearing Scale (SSQ). The connections between SSQ ratings and a global self-rating of handicap were also observed. The interrelationships among SSQ items for the two groups were analysed to determine how the SSQ behaves when applied to groups in whom binaural hearing is more (asymmetry) versus less compromised. As expected, spatial hearing is severely disabled in the group with asymmetry; this group is generally more disabled than the symmetry group across all SSQ domains. In the linkages with handicap, spatial hearing, especially in dynamic settings, was strongly represented in the asymmetry group, while all aspects of hearing were moderately to strongly represented in the symmetry group. Item intercorrelations showed that speech hearing is a relatively autonomous function for the symmetry group, whereas it is enmeshed with segregation, clarity and naturalness factors for the asymmetry group. Spatial functions were more independent of others in the asymmetry group. The SSQ shows promise in the assessment of outcomes in the case of bilateral versus unilateral amplification and/or implantation.
Audiometric asymmetry and tinnitus laterality.
Tsai, Betty S; Sweetow, Robert W; Cheung, Steven W
2012-05-01
To identify an optimal audiometric asymmetry index for predicting tinnitus laterality. Retrospective medical record review. Data from adult tinnitus patients (80 men and 44 women) were extracted for demographic, audiometric, tinnitus laterality, and related information. The main measures were sensitivity, specificity, positive predictive value (PPV), and receiver operating characteristic (ROC) curves. Three audiometric asymmetry indices were constructed using one, two, or three frequency elements to compute the average interaural threshold difference (aITD). Tinnitus laterality predictive performance of a particular index was assessed by increasing the cutoff or minimum magnitude of the aITD from 10 to 35 dB in 5-dB steps to determine its ROC curve. Single frequency index performance was inferior to the other two (P < .05). Double and triple frequency indices were indistinguishable (P > .05). Two adjoining frequency elements with aITD ≥ 15 dB performed optimally for predicting tinnitus laterality (sensitivity = 0.59, specificity = 0.71, and PPV = 0.76). Absolute and relative magnitudes of hearing loss in the poorer ear were uncorrelated with tinnitus distress. An optimal audiometric asymmetry index to predict tinnitus laterality is one whereby 15 dB is the minimum aITD of two adjoining frequencies, inclusive of the frequency with the maximal interaural threshold difference. Tinnitus laterality dependency on magnitude of interaural asymmetry may inform design and interpretation of neuroimaging studies. Monaural acoustic tinnitus therapy may be an initial consideration for asymmetric hearing loss meeting the criterion of aITD ≥ 15 dB. Copyright © 2012 The American Laryngological, Rhinological, and Otological Society, Inc.
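One reading of the optimal two-frequency index (average the interaural threshold differences of an adjoining-frequency pair that contains the largest single-frequency difference, and call the loss asymmetric at ≥ 15 dB) can be sketched as follows. The audiogram values and the exact pairing rule are illustrative assumptions rather than the paper's published algorithm.

```python
import numpy as np

def aitd_two_frequency(left_thresholds_db, right_thresholds_db):
    """Two-frequency audiometric asymmetry index (aITD), one possible reading.

    Thresholds are dB HL at the same ordered audiometric frequencies for each ear.
    The index averages the interaural threshold difference over an adjoining-
    frequency pair that includes the frequency with the largest difference.
    """
    diffs = np.abs(np.asarray(left_thresholds_db) - np.asarray(right_thresholds_db))
    pair_means = (diffs[:-1] + diffs[1:]) / 2        # every adjoining-frequency pair
    k = int(np.argmax(diffs))                        # frequency with the maximal difference
    candidates = [pair_means[i] for i in (k - 1, k) if 0 <= i < len(pair_means)]
    return max(candidates)

# Example audiogram (dB HL at 0.5, 1, 2, 4, 8 kHz); the values are illustrative only.
left = [10, 15, 20, 45, 60]
right = [10, 10, 15, 20, 25]
aitd = aitd_two_frequency(left, right)
print(f"aITD = {aitd:.1f} dB -> "
      f"{'asymmetric' if aitd >= 15 else 'symmetric'} by the 15 dB criterion")
```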
Functional relevance of acoustic tracheal design in directional hearing in crickets.
Schmidt, Arne K D; Römer, Heiner
2016-10-15
Internally coupled ears (ICEs) allow small animals to reliably determine the direction of a sound source. ICEs are found in a variety of taxa, but crickets have evolved the most complex arrangement of coupled ears: an acoustic tracheal system composed of a large cross-body trachea that connects two entry points for sound in the thorax with the leg trachea of both ears. The key structure that allows for the tuned directionality of the ear is a tracheal inflation (acoustic vesicle) in the midline of the cross-body trachea holding a thin membrane (septum). Crickets are known to display a wide variety of acoustic tracheal morphologies, most importantly with respect to the presence of a single or double acoustic vesicle. However, the functional relevance of this variation is still not known. In this study, we investigated the peripheral directionality of three co-occurring, closely related cricket species of the subfamily Gryllinae. No support could be found for the hypothesis that a double vesicle should be regarded as an evolutionary innovation to (1) increase interaural directional cues, (2) increase the selectivity of the directional filter or (3) provide a better match between directional and sensitivity tuning. Nonetheless, by manipulating the double acoustic vesicle in the rainforest cricket Paroecanthus podagrosus, selectively eliminating the sound-transmitting pathways, we revealed that these pathways contribute almost equally to the total amount of interaural intensity differences, emphasizing their functional relevance in the system. © 2016. Published by The Company of Biologists Ltd.
Evolutionary trends in directional hearing
Carr, Catherine E.; Christensen-Dalsgaard, Jakob
2016-01-01
Tympanic hearing is a true evolutionary novelty that arose in parallel within early tetrapods. We propose that in these tetrapods, selection for sound localization in air acted upon pre-existing directionally sensitive brainstem circuits, similar to those in fishes. Auditory circuits in birds and lizards resemble this ancestral, directionally sensitive framework. Despite this anatomical similarity, coding of sound source location differs between birds and lizards. In birds, brainstem circuits compute sound location from interaural cues. Lizards, however, have coupled ears, and do not need to compute source location in the brain. Thus, their neural processing of sound direction differs, although all show mechanisms for enhancing sound source directionality. Comparisons with mammals reveal similarly complex interactions between coding strategies and evolutionary history. PMID:27448850
Vonderschen, Katrin; Wagner, Hermann
2012-04-25
Birds and mammals exploit interaural time differences (ITDs) for sound localization. Subsequent to ITD detection by brainstem neurons, ITD processing continues in parallel midbrain and forebrain pathways. In the barn owl, both ITD detection and processing in the midbrain are specialized to extract ITDs independent of frequency, which amounts to a pure time delay representation. Recent results have elucidated different mechanisms of ITD detection in mammals, which lead to a representation of small ITDs in high-frequency channels and large ITDs in low-frequency channels, resembling a phase delay representation. However, the detection mechanism does not prevent a change in ITD representation at higher processing stages. Here we analyze ITD tuning across frequency channels with pure tone and noise stimuli in neurons of the barn owl's auditory arcopallium, a nucleus at the endpoint of the forebrain pathway. To extend the analysis of ITD representation across frequency bands to a large neural population, we employed Fourier analysis for the spectral decomposition of ITD curves recorded with noise stimuli. This method was validated using physiological as well as model data. We found that low frequencies convey sensitivity to large ITDs, whereas high frequencies convey sensitivity to small ITDs. Moreover, different linear phase frequency regimes in the high-frequency and low-frequency ranges suggested an independent convergence of inputs from these frequency channels. Our results are consistent with ITD being remodeled toward a phase delay representation along the forebrain pathway. This indicates that sensory representations may undergo substantial reorganization, presumably in relation to specific behavioral output.
Li, Chuan; Han, Lei; Ma, Chun-Wai; Lai, Suk-King; Lai, Chun-Hong; Shum, Daisy Kwok Yan; Chan, Ying-Shing
2013-07-01
Using sinusoidal oscillations of linear acceleration along both the horizontal and vertical planes to stimulate otolith organs in the inner ear, we charted the postnatal time at which responsive neurons in the rat inferior olive (IO) first showed Fos expression, an indicator of neuronal recruitment into the otolith circuit. Neurons in subnucleus dorsomedial cell column (DMCC) were activated by vertical stimulation as early as P9 and by horizontal (interaural) stimulation as early as P11. By P13, neurons in the β subnucleus of IO (IOβ) became responsive to horizontal stimulation along the interaural and antero-posterior directions. By P21, neurons in the rostral IOβ became also responsive to vertical stimulation, but those in the caudal IOβ remained responsive only to horizontal stimulation. Nearly all functionally activated neurons in DMCC and IOβ were immunopositive for the NR1 subunit of the NMDA receptor and the GluR2/3 subunit of the AMPA receptor. In situ hybridization studies further indicated abundant mRNA signals of the glutamate receptor subunits by the end of the second postnatal week. This is reinforced by whole-cell patch-clamp data in which glutamate receptor-mediated miniature excitatory postsynaptic currents of rostral IOβ neurons showed postnatal increase in amplitude, reaching the adult level by P14. Further, these neurons exhibited subthreshold oscillations in membrane potential as from P14. Taken together, our results support that ionotropic glutamate receptors in the IO enable postnatal coding of gravity-related information and that the rostral IOβ is the only IO subnucleus that encodes spatial orientations in 3-D.
Perceptually relevant parameters for virtual listening simulation of small room acoustics
Zahorik, Pavel
2009-01-01
Various physical aspects of room-acoustic simulation techniques have been extensively studied and refined, yet the perceptual attributes of the simulations have received relatively little attention. Here a method of evaluating the perceptual similarity between rooms is described and tested using 15 small-room simulations based on binaural room impulse responses (BRIRs) either measured from a real room or estimated using simple geometrical acoustic modeling techniques. Room size and surface absorption properties were varied, along with aspects of the virtual simulation including the use of individualized head-related transfer function (HRTF) measurements for spatial rendering. Although differences between BRIRs were evident in a variety of physical parameters, a multidimensional scaling analysis revealed that when at-the-ear signal levels were held constant, the rooms differed along just two perceptual dimensions: one related to reverberation time (T60) and one related to interaural coherence (IACC). Modeled rooms were found to differ from measured rooms in this perceptual space, but the differences were relatively small and should be easily correctable through adjustment of T60 and IACC in the model outputs. Results further suggest that spatial rendering using individualized HRTFs offers little benefit over nonindividualized HRTF rendering for room simulation applications where source direction is fixed. PMID:19640043
Stellmack, Mark A.; Byrne, Andrew J.; Viemeister, Neal F.
2010-01-01
When different components of a stimulus carry different binaural information, processing of binaural information in a target component is often affected. The present experiments examine whether such interference is affected by amplitude modulation and the relative phase of modulation of the target and distractors. In all experiments, listeners attempted to discriminate interaural time differences of a target stimulus in the presence of distractor stimuli with ITD=0. In Experiment 1, modulation of the distractors but not the target reduced interference between components. In Experiment 2, synthesized musical notes exhibited little binaural interference when there were slight asynchronies between different streams of notes (31 or 62 ms). The remaining experiments suggested that the reduction in binaural interference in the previous experiments was due neither to the complex spectra of the synthesized notes nor to greater detectability of the target in the presence of modulated distractors. These data suggest that this interference is reduced when components are modulated in ways that result in the target appearing briefly in isolation, not because of segregation cues. These data also suggest that modulation and asynchronies between modulators that might be encountered in real-world listening situations are adequate to reduce binaural interference to inconsequential levels. PMID:20815459
Spatial selectivity and binaural responses in the inferior colliculus of the great horned owl.
Volman, S F; Konishi, M
1989-09-01
In this study we have investigated the processing of auditory cues for sound localization in the great horned owl (Bubo virginianus). Previous studies have shown that the barn owl, whose ears are asymmetrically oriented in the vertical plane, has a 2-dimensional, topographic representation of auditory space in the external division of the inferior colliculus (ICx). As in the barn owl, the great horned owl's ICx is anatomically distinct and projects to the optic tectum. Neurons in ICx respond over only a small range of azimuths (mean = 32 degrees), and azimuth is topographically mapped. In contrast to the barn owl, the great horned owl has bilaterally symmetrical ears and its receptive fields are not restricted in elevation. The binaural cues available for sound localization were measured both with cochlear microphonic recordings and with a microphone attached to a probe tube in the auditory canal. Interaural time disparity (ITD) varied monotonically with azimuth. Interaural intensity differences (IID) also changed with azimuth, but the largest IIDs were less than 15 dB, and the variation was not monotonic. Neither ITD nor IID varied systematically with changes in the vertical position of a sound source. We used dichotic stimulation to determine the sensitivity of ICx neurons to these binaural cues. Best ITD of ICx units was topographically mapped and strongly correlated with receptive-field azimuth. The width of ITD tuning curves, measured at 50% of the maximum response, averaged 72 microseconds. All ICx neurons responded only to binaural stimulation and had nonmonotonic IID tuning curves. Best IID was weakly, but significantly, correlated with best ITD (r = 0.39, p less than 0.05). The IID tuning curves, however, were broad (mean 50% width = 24 dB), and 67% of the units had best IIDs within 5 dB of 0 dB IID. ITD tuning was sensitive to variations in IID in the direction opposite to that expected for time-intensity trading, but the magnitude of this effect was only 1.5 microseconds/dB IID. We conclude that, in the great horned owl, the spatial selectivity of ICx neurons arises primarily from their ITD tuning. Except for the absence of elevation selectivity and the narrow range of best IIDs, ICx in the great horned owl appears to be organized much the same as in the barn owl.
Aurally-adequate time-frequency analysis for scattered sound in auditoria
NASA Astrophysics Data System (ADS)
Norris, Molly K.; Xiang, Ning; Kleiner, Mendel
2005-04-01
The goal of this work was to apply an aurally-adequate time-frequency analysis technique to the analysis of sound scattering effects in auditoria. Time-frequency representations were developed that take binaural hearing into account, with a specific implementation of an interaural cross-correlation process. A model of the human auditory system was implemented on the MATLAB platform based on two previous models [A. Härmä and K. Palomäki, HUTear, Espoo, Finland; and M. A. Akeroyd, A Binaural Cross-correlogram Toolbox for MATLAB (2001), University of Sussex, Brighton]. The model stages include proper frequency selectivity, the conversion of the mechanical motion of the basilar membrane to neural impulses, and binaural hearing effects. The model was then used in the analysis of room impulse responses with varying scattering characteristics. This paper discusses the analysis results using simulated and measured room impulse responses. [Work supported by the Frank H. and Eva B. Buck Foundation.]
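As background for the interaural cross-correlation stage mentioned above, here is a minimal sketch of a broadband interaural cross-correlation (IACC) measure computed from a binaural pair of signals; the signals and lag range are invented, and this is illustrative only, not the authors' MATLAB implementation.

```python
import numpy as np

def iacc(left, right, fs, max_lag_ms=1.0):
    """Normalized interaural cross-correlation coefficient: the maximum of the
    cross-correlation over lags within +/- max_lag_ms (roughly the physiological
    ITD range), divided by the geometric mean of the two signal energies."""
    max_lag = int(round(max_lag_ms * 1e-3 * fs))
    norm = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))

    def xcorr(tau):
        if tau >= 0:
            return np.dot(left[tau:], right[:len(right) - tau])
        return np.dot(left[:len(left) + tau], right[-tau:])

    return max(abs(xcorr(tau)) for tau in range(-max_lag, max_lag + 1)) / norm

# Toy binaural "impulse responses": identical direct sound, decorrelated tails.
fs = 48000
rng = np.random.default_rng(0)
direct = rng.standard_normal(64)
left = np.concatenate([direct, 0.1 * rng.standard_normal(1000)])
right = np.concatenate([direct, 0.1 * rng.standard_normal(1000)])
print(f"IACC = {iacc(left, right, fs):.2f}")  # below 1 because the late parts differ
```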
ERIC Educational Resources Information Center
Passow, Susanne; Müller, Maike; Westerhausen, René; Hugdahl, Kenneth; Wartenburger, Isabell; Heekeren, Hauke R.; Lindenberger, Ulman; Li, Shu-Chen
2013-01-01
Multitalker situations confront listeners with a plethora of competing auditory inputs, and hence require selective attention to relevant information, especially when the perceptual saliency of distracting inputs is high. This study augmented the classical forced-attention dichotic listening paradigm by adding an interaural intensity manipulation…
A real-time biomimetic acoustic localizing system using time-shared architecture
NASA Astrophysics Data System (ADS)
Nourzad Karl, Marianne; Karl, Christian; Hubbard, Allyn
2008-04-01
In this paper a real-time sound source localization system is proposed, based on previously developed mammalian auditory models. Traditionally, for models that use interaural time delay (ITD) estimates, the amount of parallel computation needed to achieve real-time sound source localization is a limiting factor and a design challenge for hardware implementations. A new approach using a time-shared architecture is therefore introduced. The proposed architecture is a purely sample-driven digital system that closely follows the continuous-time approach described in the models. Rather than dedicating hardware to each frequency channel, a single specialized core channel is shared across all frequency bands. Because its execution time is much shorter than the system's sample period, the time-shared solution processes the same number of virtual channels as the dedicated channels in the traditional approach. The time-shared approach thus achieves a highly economical and flexible implementation using minimal silicon area. These aspects are particularly important for efficient hardware implementation of a real-time biomimetic sound source localization system.
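A toy sketch of the time-sharing idea follows: instead of one correlator per frequency band, a single "core" visits each band's state in turn for every input sample. The delay-line length, lag range, and test signal are invented, and this Python illustration only mimics the scheduling concept, not the proposed hardware design.

```python
import numpy as np

class SharedITDCore:
    """A single correlation 'core' reused (time-shared) across frequency channels:
    for each input sample it visits every channel in turn and updates that
    channel's running coincidence counts over a set of candidate lags."""

    def __init__(self, n_channels, max_lag, history=64):
        self.lags = np.arange(-max_lag, max_lag + 1)
        self.counts = np.zeros((n_channels, len(self.lags)))
        self.buf_l = np.zeros((n_channels, history))
        self.buf_r = np.zeros((n_channels, history))

    def step(self, samples_l, samples_r):
        """samples_l/samples_r: one new sample per channel (already band-split)."""
        mid = self.buf_l.shape[1] // 2
        for ch in range(self.counts.shape[0]):          # the time-shared inner loop
            self.buf_l[ch] = np.roll(self.buf_l[ch], 1)
            self.buf_r[ch] = np.roll(self.buf_r[ch], 1)
            self.buf_l[ch, 0] = samples_l[ch]
            self.buf_r[ch, 0] = samples_r[ch]
            for k, lag in enumerate(self.lags):
                # positive lag: the right-ear signal lags the left-ear signal
                self.counts[ch, k] += self.buf_l[ch, mid + lag] * self.buf_r[ch, mid]

    def best_lags(self):
        return self.lags[np.argmax(self.counts, axis=1)]

# Demo: two channels carrying a 500 Hz tone, right ear delayed by 3 samples.
fs, delay = 16000, 3
sig = np.sin(2 * np.pi * 500 * np.arange(fs // 100) / fs)
core = SharedITDCore(n_channels=2, max_lag=8)
for n in range(delay, len(sig)):
    core.step([sig[n], sig[n]], [sig[n - delay], sig[n - delay]])
print("estimated interaural lag per channel (samples):", core.best_lags())
```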
Yakushin, Sergei B; Bukharina, Svetlana E; Raphan, Theodore; Buttner-Ennever, Jean; Cohen, Bernard
2003-10-01
Alterations in the gain of the vertical angular vestibulo-ocular reflex (VOR) are dependent on the head position in which the gain changes were produced. We determined how long gravity-dependent gain changes last in monkeys after four hours of adaptation, and whether the adaptation is mediated through the nodulus and uvula of the vestibulocerebellum. Vertical VOR gains were adaptively modified by rotation about an interaural axis, in phase or out of phase with the visual surround. Vertical VOR gains were modified with the animals in one of three orientations: upright, left-side down, or right-side down. Monkeys were tested in darkness for up to four days after adaptation using sinusoidal rotation about an interaural axis that was incrementally tilted in 10 degrees steps from vertical to side down positions. Animals were unrestrained in their cages in normal light conditions between tests. Gravity-dependent gain changes lasted for a day or less after adaptation while upright, but persisted for two days or more after on-side adaptation. These data show that gravity-dependent gain changes can last for prolonged periods after only four hours of adaptation in monkeys, as in humans. They also demonstrate that natural head movements made while upright do not provide an adequate stimulus for rapid recovery of vertical VOR gains that were induced on side. In two animals, the nodulus and uvula were surgically ablated. Vertical gravity-dependent gain changes were not significantly different before and after surgery, indicating that the nodulus and uvula do not have a critical role in producing them.
[The characteristics of VEMP in patients with acoustic neuroma].
Xue, Bin; Yang, Jun
2008-01-01
To establish normal values for the vestibular evoked myogenic potential (VEMP), to determine the characteristics of VEMP in patients with acoustic neuroma (AN), and to explore the significance of VEMP in the diagnosis of AN. Click-evoked VEMP was recorded with surface electrodes attached to the sternocleidomastoid muscle, and the latencies and amplitudes of the characteristic VEMP waveform were measured. Forty-six subjects with normal hearing (26 males and 20 females) were recruited to establish the normal values. VEMP was also investigated in 14 patients with AN who underwent surgery during 2006-2007, along with auditory brainstem response (ABR) and vestibular caloric testing. Of the 46 subjects with normal hearing, VEMP was present in both ears in 43 subjects and absent in both ears in three subjects, giving a response rate of 93.5% (86/92 ears). Normal values obtained from the 86 responsive ears were as follows (mean ± standard deviation): p13 latency (11.86 ± 2.11) ms, n23 latency (18.57 ± 2.19) ms, p13-n23 interval (6.71 ± 1.69) ms, and p13-n23 amplitude (24.18 ± 8.22) μV. Interaural differences in the 43 subjects with bilateral responses were as follows: Δp13 (0.64 ± 0.61) ms, Δn23 (1.05 ± 0.97) ms, Δ(p13-n23 interval) (0.84 ± 0.81) ms, amplitude ratio (max/min) 1.32 ± 0.37, and interaural asymmetry ratio 0.12 ± 0.11. Of the 14 patients with AN, VEMP was absent on the affected side in eight patients, absent bilaterally in three patients, and present on the unaffected side in 11 patients. In the three patients in whom VEMP was present on the affected side, Δp13 and Δ(p13-n23) were significantly prolonged. These VEMP characteristics could be useful in the diagnosis of AN when combined with other tests.
NASA Astrophysics Data System (ADS)
Shin, Ki Hoon; Park, Youngjin
The human ability to perceive the elevation of a sound and to distinguish whether a sound comes from the front or rear depends strongly on the monaural spectral features of the pinnae. To realize an effective virtual auditory display through HRTF (head-related transfer function) customization, the pinna responses were isolated from the median-plane HRIRs (head-related impulse responses) of the 45 individuals in the CIPIC HRTF database and modeled as linear combinations of 4 or 5 basic temporal shapes (basis functions) per elevation on the median plane by PCA (principal components analysis) in the time domain. By tuning the weights of the basis functions computed for a specific height, replacing the pinna response in the KEMAR HRIR at that height with the resulting customized pinna response, and listening to the filtered stimuli over headphones, 4 individuals with normal hearing sensitivity were able to create a set of HRIRs that outperformed the KEMAR HRIRs in producing vertical effects with reduced front/back ambiguity in the median plane. Since the monaural spectral features of the pinnae are almost independent of the azimuth of the source direction, similar vertical effects could also be generated at other azimuths simply by varying the ITD (interaural time difference) according to the direction as well as the size of each individual's head.
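A minimal sketch of the basis-function representation described above: PCA over an ensemble of impulse-response segments yields a few temporal shapes, and a customized response is built by retuning the weights. The data here are synthetic stand-ins, not the CIPIC measurements or the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_taps, n_basis = 45, 64, 5

# Synthetic stand-in for pinna-response segments (one row per subject).
pinna = rng.standard_normal((n_subjects, n_taps))

# PCA in the time domain via SVD of the mean-removed ensemble.
mean_resp = pinna.mean(axis=0)
U, S, Vt = np.linalg.svd(pinna - mean_resp, full_matrices=False)
basis = Vt[:n_basis]                      # basic temporal shapes (basis functions)
weights = (pinna - mean_resp) @ basis.T   # per-subject weights on each shape

# "Customization": start from one subject's weights and retune them.
w = weights[0].copy()
w[0] *= 1.2                               # e.g., a listener adjusts the first weight
custom_pinna = mean_resp + w @ basis

recon = mean_resp + weights[0] @ basis
err = np.linalg.norm(pinna[0] - recon) / np.linalg.norm(pinna[0])
print(f"5-component reconstruction error for subject 0: {err:.2f}")
```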
Localizing the sources of two independent noises: Role of time varying amplitude differences
Yost, William A.; Brown, Christopher A.
2013-01-01
Listeners localized the free-field sources of either one or two simultaneous and independently generated noise bursts. Listeners' localization performance was better when localizing one rather than two sound sources. With two sound sources, localization performance was better when the listener was provided prior information about the location of one of them. Listeners also localized two simultaneous noise bursts that had sinusoidal amplitude modulation (AM) applied, in which the modulation envelope was in-phase across the two source locations or was 180° out-of-phase. The AM was employed to investigate a hypothesis as to what process listeners might use to localize multiple sound sources. The results supported the hypothesis that localization of two sound sources might be based on temporal-spectral regions of the combined waveform in which the sound from one source was more intense than that from the other source. The interaural information extracted from such temporal-spectral regions might provide reliable estimates of the sound source location that produced the more intense sound in that temporal-spectral region. PMID:23556597
Ocular motor responses to abrupt interaural head translation in normal humans
NASA Technical Reports Server (NTRS)
Ramat, Stefano; Zee, David S.; Shelhamer, M. J. (Principal Investigator)
2003-01-01
We characterized the interaural translational vestibulo-ocular reflex (tVOR) in 6 normal humans to brief (approximately 200 ms), high-acceleration (0.4-1.4g) stimuli, while they fixated targets at 15 or 30 cm. The latency was 19 +/- 5 ms at 15-cm and 20 +/- 12 ms at 30-cm viewing. The gain was quantified using the ratio of actual to ideal behavior. The median position gain (at time of peak head velocity) was 0.38 and 0.37, and the median velocity gain, 0.52 and 0.62, at 15- and 30-cm viewing, respectively. These results suggest the tVOR scales proportionally at these viewing distances. Likewise, at both viewing distances, peak eye velocity scaled linearly with peak head velocity and gain was independent of peak head acceleration. A saccade commonly occurred in the compensatory direction, with a greater latency (165 vs. 145 ms) and lesser amplitude (1.8 vs. 3.2 deg) at 30- than 15-cm viewing. Even with saccades, the overall gain at the end of head movement was still considerably undercompensatory (medians 0.68 and 0.77 at 15- and 30-cm viewing). Monocular viewing was also assessed at the 15-cm distance. In 4 of 6 subjects, gains were the same as during binocular viewing and scaled closely with vergence angle. In sum, the low tVOR gain and scaling of the response with viewing distance and head velocity extend previous results to higher acceleration stimuli. tVOR latency (approximately 20 ms) was lower than previously reported. Saccades are an integral part of the tVOR, and also scale with viewing distance.
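The gain measure above compares the eye movement actually produced with the ideal response required to keep a near target on the fovea during interaural head translation. The sketch below illustrates that geometry under a simple small-angle assumption (ideal angular eye velocity approximately equals head velocity divided by viewing distance); the numbers are illustrative and not taken from the study.

```python
import numpy as np

def ideal_eye_velocity(head_velocity_m_s, target_distance_m):
    """Angular eye velocity (deg/s) needed to hold a target straight ahead during
    interaural (side-to-side) head translation, small-angle case: omega ~= v / d."""
    return np.degrees(head_velocity_m_s / target_distance_m)

head_v = 0.3                      # peak head velocity, m/s (illustrative)
for d in (0.15, 0.30):            # 15 cm and 30 cm viewing distances
    ideal = ideal_eye_velocity(head_v, d)
    measured = 0.5 * ideal        # a velocity gain of ~0.5, as reported above
    print(f"d = {d*100:.0f} cm: ideal {ideal:.0f} deg/s, "
          f"measured {measured:.0f} deg/s, gain {measured/ideal:.2f}")
```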
ERIC Educational Resources Information Center
Huang, Ying; Huang, Qiang; Chen, Xun; Wu, Xihong; Li, Liang
2009-01-01
Perceptual integration of the sound directly emanating from the source with reflections needs both temporal storage and correlation computation of acoustic details. We examined whether the temporal storage is frequency dependent and associated with speech unmasking. In Experiment 1, a break in correlation (BIC) between interaurally correlated…
NASA Astrophysics Data System (ADS)
Martens, William
2005-04-01
Several attributes of auditory spatial imagery associated with stereophonic sound reproduction are strongly modulated by variation in interaural cross correlation (IACC) within low frequency bands. Nonetheless, a standard practice in bass management for two-channel and multichannel loudspeaker reproduction is to mix low-frequency musical content to a single channel for reproduction via a single driver (e.g., a subwoofer). This paper reviews the results of psychoacoustic studies which support the conclusion that reproduction via multiple drivers of decorrelated low-frequency signals significantly affects such important spatial attributes as auditory source width (ASW), auditory source distance (ASD), and listener envelopment (LEV). A variety of methods have been employed in these tests, including forced choice discrimination and identification, and direct ratings of both global dissimilarity and distinct attributes. Contrary to assumptions that underlie industrial standards established in 1994 by ITU-R Recommendation BS.775-1, these findings imply that substantial stereophonic spatial information exists within audio signals at frequencies below the 80 to 120 Hz range of prescribed subwoofer cutoff frequencies, and that loudspeaker reproduction of decorrelated signals at frequencies as low as 50 Hz can have an impact upon auditory spatial imagery. [Work supported by VRQ.]
Mismatch negativity to acoustical illusion of beat: how and where the change detection takes place?
Chakalov, Ivan; Paraskevopoulos, Evangelos; Wollbrink, Andreas; Pantev, Christo
2014-10-15
When two tones with slightly different frequencies are presented binaurally, brainstem structures can no longer follow the interaural time differences (ITDs), resulting in the illusory perception of a beat at the frequency difference between the two prime tones. Hence, the beat frequency is not present in the prime tones delivered to either ear. This study used binaural beats to explore the nature of acoustic deviance detection in humans by means of magnetoencephalography (MEG). Recent research suggests that auditory change detection is a multistage process. To test this, we employed 26-Hz binaural beats in a classical oddball paradigm; however, the prime tones (250 Hz and 276 Hz) were switched between the ears in the case of the deviant beat. Consequently, when the deviant is presented, the cochleae and auditory nerves receive a "new afferent" input, although the standards and deviants are heard as identical (26-Hz beats). This allowed us to explore the contribution of the auditory periphery to the change detection process and, furthermore, to evaluate its influence on beat-related auditory steady-state responses (ASSRs). LORETA source current density estimates of the evoked fields in a typical mismatch negativity (MMN) time window and the subsequent difference-ASSRs were determined and compared. The results revealed an MMN generated by a complex neural network including the right parietal lobe and the left middle frontal gyrus. Furthermore, the difference-ASSR was generated in the paracentral gyrus. Additionally, psychophysical measures showed no perceptual difference between the standard and deviant beats when isolated by noise. These results suggest that the auditory periphery makes an important contribution to novelty detection already at the subcortical level. Overall, the present findings support the notion of a hierarchically organized acoustic novelty detection system. Copyright © 2014 Elsevier Inc. All rights reserved.
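For reference, the illusory beat described above arises because each ear receives a different pure tone; the perceived beat rate equals the frequency difference (276 − 250 = 26 Hz). Below is a minimal sketch of such a stimulus, with arbitrary sampling parameters; it is not the authors' stimulus code.

```python
import numpy as np

fs, dur = 44100, 1.0
t = np.arange(int(fs * dur)) / fs
f_left, f_right = 250.0, 276.0            # prime tones; beat = |276 - 250| = 26 Hz

def binaural_beat(f_l, f_r):
    """Return an (n_samples, 2) stereo array: one pure tone per ear.
    The 26 Hz 'beat' exists only centrally, not in either ear's signal."""
    return np.column_stack([np.sin(2 * np.pi * f_l * t),
                            np.sin(2 * np.pi * f_r * t)])

standard = binaural_beat(f_left, f_right)
deviant = binaural_beat(f_right, f_left)   # same percept, prime tones ear-swapped
print(standard.shape, "beat rate:", abs(f_right - f_left), "Hz")
```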
Auditory display for the blind
NASA Technical Reports Server (NTRS)
Fish, R. M. (Inventor)
1974-01-01
A system for providing an auditory display of two-dimensional patterns as an aid to the blind is described. It includes a scanning device for producing first and second voltages respectively indicative of the vertical and horizontal positions of the scan and a further voltage indicative of the intensity at each point of the scan and hence of the presence or absence of the pattern at that point. The voltage related to scan intensity controls transmission of the sounds to the subject so that the subject knows that a portion of the pattern is being encountered by the scan when a tone is heard, the subject determining the position of this portion of the pattern in space by the frequency and interaural difference information contained in the tone.
Noise reduction of coincidence detector output by the inferior colliculus of the barn owl.
Christianson, G Björn; Peña, José Luis
2006-05-31
A recurring theme in theoretical work is that integration over populations of similarly tuned neurons can reduce neural noise. However, there are relatively few demonstrations of an explicit noise reduction mechanism in a neural network. Here we demonstrate that the brainstem of the barn owl includes a stage of processing apparently devoted to increasing the signal-to-noise ratio in the encoding of the interaural time difference (ITD), one of two primary binaural cues used to compute the position of a sound source in space. In the barn owl, the ITD is processed in a dedicated neural pathway that terminates at the core of the inferior colliculus (ICcc). The actual locus of the computation of the ITD is before ICcc in the nucleus laminaris (NL), and ICcc receives no inputs carrying information that did not originate in NL. Unlike in NL, the rate-ITD functions of ICcc neurons require as little as a single stimulus presentation per ITD to show coherent ITD tuning. ICcc neurons also displayed a greater dynamic range with a maximal difference in ITD response rates approximately double that seen in NL. These results indicate that ICcc neurons perform a computation functionally analogous to averaging across a population of similarly tuned NL neurons.
Whiteford, Kelly L.; Oxenham, Andrew J.
2015-01-01
The question of how frequency is coded in the peripheral auditory system remains unresolved. Previous research has suggested that slow rates of frequency modulation (FM) of a low carrier frequency may be coded via phase-locked temporal information in the auditory nerve, whereas FM at higher rates and/or high carrier frequencies may be coded via a rate-place (tonotopic) code. This hypothesis was tested in a cohort of 100 young normal-hearing listeners by comparing individual sensitivity to slow-rate (1-Hz) and fast-rate (20-Hz) FM at a carrier frequency of 500 Hz with independent measures of phase-locking (using dynamic interaural time difference, ITD, discrimination), level coding (using amplitude modulation, AM, detection), and frequency selectivity (using forward-masking patterns). All FM and AM thresholds were highly correlated with each other. However, no evidence was obtained for stronger correlations between measures thought to reflect phase-locking (e.g., slow-rate FM and ITD sensitivity), or between measures thought to reflect tonotopic coding (fast-rate FM and forward-masking patterns). The results suggest that either psychoacoustic performance in young normal-hearing listeners is not limited by peripheral coding, or that similar peripheral mechanisms limit both high- and low-rate FM coding. PMID:26627783
Dykstra, Andrew R; Burchard, Daniel; Starzynski, Christian; Riedel, Helmut; Rupp, Andre; Gutschalk, Alexander
2016-08-01
We used magnetoencephalography to examine lateralization and binaural interaction of the middle-latency and late-brainstem components of the auditory evoked response (the MLR and SN10, respectively). Click stimuli were presented either monaurally or binaurally with left- or right-leading interaural time differences (ITDs). While early MLR components, including the N19 and P30, were larger for monaural stimuli presented contralaterally (by approximately 30% and 36% in the left and right hemispheres, respectively), later components, including the N40 and P50, were larger ipsilaterally. In contrast, MLRs elicited by binaural clicks with left- or right-leading ITDs did not differ. Depending on filter settings, weak binaural interaction could be observed as early as the P13 but was clearly much larger for later components, beginning at the P30, indicating some degree of binaural linearity up to early stages of cortical processing. The SN10, an obscure late-brainstem component, was observed consistently in individuals and showed linear binaural additivity. The results indicate that while the MLR is lateralized in response to monaural stimuli, and not ITDs, this lateralization reverses from primarily contralateral to primarily ipsilateral as early as 40 ms post stimulus and is never as large as that seen with fMRI.
Monaural Congenital Deafness Affects Aural Dominance and Degrades Binaural Processing
Tillein, Jochen; Hubka, Peter; Kral, Andrej
2016-01-01
Cortical development extensively depends on sensory experience. Effects of congenital monaural and binaural deafness on cortical aural dominance and representation of binaural cues were investigated in the present study. We used an animal model that precisely mimics the clinical scenario of unilateral cochlear implantation in an individual with single-sided congenital deafness. Multiunit responses in cortical field A1 to cochlear implant stimulation were studied in normal-hearing cats, bilaterally congenitally deaf cats (CDCs), and unilaterally deaf cats (uCDCs). Binaural deafness reduced cortical responsiveness and decreased response thresholds and dynamic range. In contrast to CDCs, in uCDCs, cortical responsiveness was not reduced, but hemispheric-specific reorganization of aural dominance and binaural interactions were observed. Deafness led to a substantial drop in binaural facilitation in CDCs and uCDCs, demonstrating the inevitable role of experience for a binaural benefit. Sensitivity to interaural time differences was more reduced in uCDCs than in CDCs, particularly at the hemisphere ipsilateral to the hearing ear. Compared with binaural deafness, unilateral hearing prevented nonspecific reduction in cortical responsiveness, but extensively reorganized aural dominance and binaural responses. The deaf ear remained coupled with the cortex in uCDCs, demonstrating a significant difference to deprivation amblyopia in the visual system. PMID:26803166
Individual differences reveal correlates of hidden hearing deficits.
Bharadwaj, Hari M; Masud, Salwa; Mehraei, Golbarg; Verhulst, Sarah; Shinn-Cunningham, Barbara G
2015-02-04
Clinical audiometry has long focused on determining the detection thresholds for pure tones, which depend on intact cochlear mechanics and hair cell function. Yet many listeners with normal hearing thresholds complain of communication difficulties, and the causes for such problems are not well understood. Here, we explore whether normal-hearing listeners exhibit such suprathreshold deficits, affecting the fidelity with which subcortical areas encode the temporal structure of clearly audible sound. Using an array of measures, we evaluated a cohort of young adults with thresholds in the normal range to assess both cochlear mechanical function and temporal coding of suprathreshold sounds. Listeners differed widely in both electrophysiological and behavioral measures of temporal coding fidelity. These measures correlated significantly with each other. Conversely, these differences were unrelated to the modest variation in otoacoustic emissions, cochlear tuning, or the residual differences in hearing threshold present in our cohort. Electroencephalography revealed that listeners with poor subcortical encoding had poor cortical sensitivity to changes in interaural time differences, which are critical for localizing sound sources and analyzing complex scenes. These listeners also performed poorly when asked to direct selective attention to one of two competing speech streams, a task that mimics the challenges of many everyday listening environments. Together with previous animal and computational models, our results suggest that hidden hearing deficits, likely originating at the level of the cochlear nerve, are part of "normal hearing." Copyright © 2015 the authors 0270-6474/15/352161-12$15.00/0.
Adaptation to stimulus statistics in the perception and neural representation of auditory space.
Dahmen, Johannes C; Keating, Peter; Nodal, Fernando R; Schulz, Andreas L; King, Andrew J
2010-06-24
Sensory systems are known to adapt their coding strategies to the statistics of their environment, but little is still known about the perceptual implications of such adjustments. We investigated how auditory spatial processing adapts to stimulus statistics by presenting human listeners and anesthetized ferrets with noise sequences in which interaural level differences (ILD) rapidly fluctuated according to a Gaussian distribution. The mean of the distribution biased the perceived laterality of a subsequent stimulus, whereas the distribution's variance changed the listeners' spatial sensitivity. The responses of neurons in the inferior colliculus changed in line with these perceptual phenomena. Their ILD preference adjusted to match the stimulus distribution mean, resulting in large shifts in rate-ILD functions, while their gain adapted to the stimulus variance, producing pronounced changes in neural sensitivity. Our findings suggest that processing of auditory space is geared toward emphasizing relative spatial differences rather than the accurate representation of absolute position.
Hemispheric asymmetry of ERPs and MMNs evoked by slow, fast and abrupt auditory motion.
Shestopalova, L B; Petropavlovskaia, E A; Vaitulevich, S Ph; Nikitin, N I
2016-10-01
The current MMN study investigates whether brain lateralization during automatic discrimination of sound stimuli moving at different velocities is consistent with one of the three models of asymmetry: the right-hemispheric dominance model, the contralateral dominance model, or the neglect model. Auditory event-related potentials (ERPs) were recorded for three patterns of sound motion produced by linear or abrupt changes of interaural time differences. The slow motion (450 deg/s) was used as standard, and the fast motion (620 deg/s) and the abrupt sound shift served as deviants in the oddball blocks. All stimuli had the same onset/offset spatial positions. We compared the effects of the recording side (left, right) and of the direction of sound displacement (ipsi- or contralateral with reference to the side of recording) on the ERPs and mismatch negativity (MMN). Our results indicated different patterns of asymmetry for the ERPs and MMN responses. The ERPs showed a velocity-independent right-hemispheric dominance that emerged at the descending limb of the N1 wave (at around 120-160 ms) and could be related to the overall context of preattentive spatial perception. The MMNs elicited in the left hemisphere (at around 230-270 ms) exhibited a contralateral dominance, whereas the right-hemispheric MMNs were insensitive to the direction of sound displacement. These differences in contralaterality between MMN responses produced by the left and the right hemisphere favour the neglect model of the preattentive motion processing indexed by MMN. Copyright © 2016 Elsevier Ltd. All rights reserved.
Tardif, Eric; Spierer, Lucas; Clarke, Stephanie; Murray, Micah M
2008-03-07
Partially segregated neuronal pathways ("what" and "where" pathways, respectively) are thought to mediate sound recognition and localization. Less studied are interactions between these pathways. In two experiments, we investigated whether near-threshold pitch discrimination sensitivity (d') is altered by supra-threshold task-irrelevant position differences and likewise whether near-threshold position discrimination sensitivity is altered by supra-threshold task-irrelevant pitch differences. Each experiment followed a 2 × 2 within-subjects design regarding changes/no change in the task-relevant and task-irrelevant stimulus dimensions. In Experiment 1, subjects discriminated between 750 Hz and 752 Hz pure tones, and d' for this near-threshold pitch change significantly increased by a factor of 1.09 when accompanied by a task-irrelevant position change of 65 μs interaural time difference (ITD). No response bias was induced by the task-irrelevant position change. In Experiment 2, subjects discriminated between 385 μs and 431 μs ITDs, and d' for this near-threshold position change significantly increased by a factor of 0.73 when accompanied by task-irrelevant pitch changes (6 Hz). In contrast to Experiment 1, task-irrelevant pitch changes induced a response criterion bias toward responding that the two stimuli differed. The collective results are indicative of facilitative interactions between "what" and "where" pathways. By demonstrating how these pathways may cooperate under impoverished listening conditions, our results bear implications for possible neuro-rehabilitation strategies. We discuss our results in terms of the dual-pathway model of auditory processing.
Examination of Insert Ear Interaural Attenuation (IA) Values in Audiological Evaluations.
Gumus, Nebi M; Gumus, Merve; Unsal, Selim; Yuksel, Mustafa; Gunduz, Mehmet
2016-12-01
The purpose of this study was to evaluate interaural attenuation (IA) as a function of frequency for the insert earphones used in audiological assessments. Thirty healthy subjects between 18 and 65 years of age (14 female, 16 male) participated. Otoscopic examination was performed on all participants. Audiological evaluations were performed using an Interacoustics AC40 clinical audiometer and ER-3A insert earphones. IA was calculated by subtracting the bone-conduction threshold of the better ear from the air-conduction threshold of the poorer ear, separately for each test frequency from 0.125 to 8.0 kHz. At frequencies of 1000 Hz and below, IA ranged from approximately 75 to 110 dB, with a mean of 89 ± 5 dB; above 1000 Hz, IA ranged from 50 to 95 dB, with a mean of 69 ± 5 dB. These findings indicate that the amount of attenuation in transmission between the ears is greater with insert earphones. Insert earphones should therefore be available alongside the supra-aural earphones routinely used in clinics, since masking procedures that are otherwise difficult can be carried out more easily owing to the larger IA values.
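The threshold arithmetic described above reduces to subtracting the better ear's bone-conduction threshold from the poorer ear's unmasked air-conduction threshold at each frequency. A minimal sketch with invented example thresholds:

```python
# Hypothetical unmasked thresholds (dB HL) at a few test frequencies.
freqs_hz = [250, 500, 1000, 2000, 4000]
poor_ear_air = [95, 100, 105, 90, 85]   # air conduction, poorer ear
good_ear_bone = [5, 10, 15, 20, 15]     # bone conduction, better ear

# IA estimate per frequency: the cross-heard air-conducted signal must have
# lost at least this much level before reaching the better cochlea.
ia = [air - bone for air, bone in zip(poor_ear_air, good_ear_bone)]
for f, v in zip(freqs_hz, ia):
    print(f"{f} Hz: IA >= {v} dB")
```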
Toward a more ecologically valid measure of speech understanding in background noise.
Jerger, J; Greenwald, R; Wambacq, I; Seipel, A; Moncrieff, D
2000-05-01
In an attempt to develop a more ecologically valid measure of speech understanding in a background of competing speech, we constructed a quasidichotic procedure based on the monitoring of continuous speech from loudspeakers placed directly to the listener's right and left sides. The listener responded to the presence of incongruous or anomalous words imbedded within the context of two children's fairy tales. Attention was directed either to the right or to the left side in blocks of 25 utterances. Within each block, there were target (anomalous) and nontarget (nonanomalous) words. Responses to target words were analyzed separately for attend-right and attend-left conditions. Our purpose was twofold: (1) to evaluate the feasibility of such an approach for obtaining electrophysiologic performance measures in the sound field and (2) to gather normative interaural symmetry data for the new technique in young adults with normal hearing. Event-related potentials to target and nontarget words at 30 electrode sites were obtained in 20 right-handed young adults with normal hearing. Waveforms and associated topographic maps were characterized by a slight negativity in the region of 400 msec (N400) and robust positivity in the region of 900 msec (P900). Norms for interaural symmetry of the P900 event-related potential in young adults were derived.
Sensorimotor Model of Obstacle Avoidance in Echolocating Bats
Vanderelst, Dieter; Holderied, Marc W.; Peremans, Herbert
2015-01-01
Bat echolocation is an ability consisting of many subtasks such as navigation, prey detection and object recognition. Understanding the echolocation capabilities of bats comes down to isolating the minimal set of acoustic cues needed to complete each task. For some tasks, the minimal cues have already been identified. However, while a number of possible cues have been suggested, little is known about the minimal cues supporting obstacle avoidance in echolocating bats. In this paper, we propose that the Interaural Intensity Difference (IID) and travel time of the first millisecond of the echo train are sufficient cues for obstacle avoidance. We describe a simple control algorithm based on the use of these cues in combination with alternating ear positions modeled after the constant frequency bat Rhinolophus rouxii. Using spatial simulations (2D and 3D), we show that simple phonotaxis can steer a bat clear from obstacles without performing a reconstruction of the 3D layout of the scene. As such, this paper presents the first computationally explicit explanation for obstacle avoidance validated in complex simulated environments. Based on additional simulations modelling the FM bat Phyllostomus discolor, we conjecture that the proposed cues can be exploited by constant frequency (CF) bats and frequency modulated (FM) bats alike. We hypothesize that using a low level yet robust cue for obstacle avoidance allows bats to comply with the hard real-time constraints of this basic behaviour. PMID:26502063
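A toy 2-D sketch of the control rule described above: compare early-echo intensity at the two ears and steer away from the louder side, with no reconstruction of the scene. The directivity function, gains, and geometry are invented and only illustrate the phonotaxis idea, not the authors' simulation.

```python
import numpy as np

def echo_intensity(bat_pos, heading, ear_offset_deg, obstacle):
    """Crude directional echo intensity at one ear: inverse-square falloff
    weighted by a cardioid-like ear directivity (illustrative only)."""
    ear_dir = heading + np.radians(ear_offset_deg)
    vec = obstacle - bat_pos
    dist = np.linalg.norm(vec)
    angle = np.arctan2(vec[1], vec[0]) - ear_dir
    directivity = 0.5 * (1 + np.cos(angle))
    return directivity / dist ** 2

pos, heading = np.array([0.0, 0.0]), np.radians(90)   # flying along +y
obstacle = np.array([0.5, 3.0])                       # slightly to the right
turn_gain, speed, dt = np.radians(20), 1.0, 0.1

for step in range(30):
    left = echo_intensity(pos, heading, +30, obstacle)    # left ear points left of heading
    right = echo_intensity(pos, heading, -30, obstacle)   # right ear points right of heading
    iid = np.log10(right / left)          # > 0: obstacle louder on the right
    heading += turn_gain * np.sign(iid)   # turn away from the louder side
    pos = pos + speed * dt * np.array([np.cos(heading), np.sin(heading)])
print("final position:", np.round(pos, 2))
```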
Contextual effects on preattentive processing of sound motion as revealed by spatial MMN.
Shestopalova, L B; Petropavlovskaia, E A; Vaitulevich, S Ph; Nikitin, N I
2015-04-01
The magnitude of spatial distance between sound stimuli is critically important for their preattentive discrimination, yet the effect of stimulus context on auditory motion processing is not clear. This study investigated the effects of acoustical change and stimulus context on preattentive spatial change detection. Auditory event-related potentials (ERPs) were recorded for stationary midline noises and two patterns of sound motion produced by linear or abrupt changes of interaural time differences. Each of the three types of stimuli was used as standard or deviant in different blocks. Context effects on mismatch negativity (MMN) elicited by stationary and moving sound stimuli were investigated by reversing the role of standard and deviant stimuli, while the acoustical stimulus parameters were kept the same. That is, MMN amplitudes were calculated by subtracting ERPs to identical stimuli presented as standard in one block and deviant in another block. In contrast, effects of acoustical change on MMN amplitudes were calculated by subtracting ERPs of standards and deviants presented within the same block. Preattentive discrimination of moving and stationary sounds indexed by MMN was strongly dependent on the stimulus context. Higher MMNs were produced in oddball configurations where deviance represented increments of the sound velocity, as compared to configurations with velocity decrements. The effect of standard-deviant reversal was more pronounced with the abrupt sound displacement than with gradual sound motion. Copyright © 2015 Elsevier B.V. All rights reserved.
The Neural Substrate for Binaural Masking Level Differences in the Auditory Cortex
Gilbert, Heather J.; Krumbholz, Katrin; Palmer, Alan R.
2015-01-01
The binaural masking level difference (BMLD) is a phenomenon whereby a signal that is identical at each ear (S0), masked by a noise that is identical at each ear (N0), can be made 12–15 dB more detectable by inverting the waveform of either the tone or noise at one ear (Sπ, Nπ). Single-cell responses to BMLD stimuli were measured in the primary auditory cortex of urethane-anesthetized guinea pigs. Firing rate was measured as a function of signal level of a 500 Hz pure tone masked by low-passed white noise. Responses were similar to those reported in the inferior colliculus. At low signal levels, the response was dominated by the masker. At higher signal levels, firing rate either increased or decreased. Detection thresholds for each neuron were determined using signal detection theory. Few neurons yielded measurable detection thresholds for all stimulus conditions, with a wide range in thresholds. However, across the entire population, the lowest thresholds were consistent with human psychophysical BMLDs. As in the inferior colliculus, the shape of the firing-rate versus signal-level functions depended on the neurons' selectivity for interaural time difference. Our results suggest that, in cortex, BMLD signals are detected from increases or decreases in the firing rate, consistent with predictions of cross-correlation models of binaural processing and that the psychophysical detection threshold is based on the lowest neural thresholds across the population. PMID:25568115
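The detection thresholds mentioned above come from signal detection theory applied to single-neuron firing rates: the threshold is the lowest signal level at which the rate distribution is separable from the masker-alone distribution (e.g., d' ≥ 1). A minimal sketch with invented Poisson spike counts, not the recorded data or the authors' analysis:

```python
import numpy as np

rng = np.random.default_rng(2)
signal_levels_db = np.arange(20, 80, 10)
n_trials = 50

def spike_counts(mean_rate):
    return rng.poisson(mean_rate, n_trials)

def dprime(signal_counts, noise_counts):
    pooled_sd = np.sqrt(0.5 * (signal_counts.var() + noise_counts.var()))
    return (signal_counts.mean() - noise_counts.mean()) / pooled_sd

noise_alone = spike_counts(20)                 # masker-only trials
threshold = None
for level in signal_levels_db:
    # Toy rate-level function: firing rate grows above ~40 dB signal level.
    mean_rate = 20 + max(0.0, level - 40) * 0.8
    d = abs(dprime(spike_counts(mean_rate), noise_alone))
    if d >= 1 and threshold is None:
        threshold = level
    print(f"{level} dB: d' = {d:.2f}")
print("neural detection threshold:", threshold, "dB")
```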
Separation of concurrent broadband sound sources by human listeners
NASA Astrophysics Data System (ADS)
Best, Virginia; van Schaik, André; Carlile, Simon
2004-01-01
The effect of spatial separation on the ability of human listeners to resolve a pair of concurrent broadband sounds was examined. Stimuli were presented in a virtual auditory environment using individualized outer ear filter functions. Subjects were presented with two simultaneous noise bursts that were either spatially coincident or separated (horizontally or vertically), and responded as to whether they perceived one or two source locations. Testing was carried out at five reference locations on the audiovisual horizon (0°, 22.5°, 45°, 67.5°, and 90° azimuth). Results from experiment 1 showed that at more lateral locations, a larger horizontal separation was required for the perception of two sounds. The reverse was true for vertical separation. Furthermore, it was observed that subjects were unable to separate stimulus pairs if they delivered the same interaural differences in time (ITD) and level (ILD). These findings suggested that the auditory system exploited differences in one or both of the binaural cues to resolve the sources, and could not use monaural spectral cues effectively for the task. In experiments 2 and 3, separation of concurrent noise sources was examined upon removal of low-frequency content (and ITDs), onset/offset ITDs, both of these in conjunction, and all ITD information. While onset and offset ITDs did not appear to play a major role, differences in ongoing ITDs were robust cues for separation under these conditions, including those in the envelopes of high-frequency channels.
Predicting binaural responses from monaural responses in the gerbil medial superior olive
Plauška, Andrius; Borst, J. Gerard
2016-01-01
Accurate sound source localization of low-frequency sounds in the horizontal plane depends critically on the comparison of arrival times at both ears. A specialized brainstem circuit containing the principal neurons of the medial superior olive (MSO) is dedicated to this comparison. MSO neurons are innervated by segregated inputs from both ears. The coincident arrival of excitatory inputs from both ears is thought to trigger action potentials, with differences in internal delays creating a unique sensitivity to interaural time differences (ITDs) for each cell. How the inputs from both ears are integrated by the MSO neurons is still debated. Using juxtacellular recordings, we tested to what extent MSO neurons from anesthetized Mongolian gerbils function as simple cross-correlators of their bilateral inputs. From the measured subthreshold responses to monaural wideband stimuli we predicted the rate-ITD functions obtained from the same MSO neuron, which have a damped oscillatory shape. The rate of the oscillations and the position of the peaks and troughs were accurately predicted. The amplitude ratio between dominant and secondary peaks of the rate-ITD function, captured in the width of its envelope, was not always exactly reproduced. This minor imperfection pointed to the methodological limitation of using a linear representation of the monaural inputs, which disregards any temporal sharpening occurring in the cochlear nucleus. The successful prediction of the major aspects of rate-ITD curves supports a simple scheme in which the ITD sensitivity of MSO neurons is realized by the coincidence detection of excitatory monaural inputs. PMID:27009164
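A minimal sketch of the cross-correlator scheme tested above: delay one (simulated) monaural input by a candidate ITD and count coincidences with the other input to predict a rate-ITD curve. The inputs here are synthetic narrowband signals standing in for the recorded subthreshold responses; this illustrates the principle only, not the authors' prediction method.

```python
import numpy as np

fs = 100_000                               # Hz
rng = np.random.default_rng(3)
t = np.arange(int(0.5 * fs)) / fs           # 0.5 s of simulated input

def monaural_input(f0=500.0):
    """Synthetic stand-in for a subthreshold monaural input: a noisy
    narrowband drive near the cell's best frequency (illustrative only)."""
    return np.sin(2 * np.pi * f0 * t) + 0.1 * rng.standard_normal(t.size)

ipsi = monaural_input()
contra = ipsi + 0.05 * rng.standard_normal(t.size)   # nearly identical drive

def predicted_rate(itd_us, threshold=0.8):
    """Coincidence count: samples where both inputs, one delayed by the
    candidate ITD, are simultaneously above threshold."""
    shift = int(round(itd_us * 1e-6 * fs))
    return int(np.sum((ipsi > threshold) & (np.roll(contra, shift) > threshold)))

itds_us = np.arange(-1000, 1001, 50)
rates = [predicted_rate(x) for x in itds_us]
print("predicted best ITD:", itds_us[int(np.argmax(rates))], "us")
```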
Index to FAA Office of Aviation Medicine Reports: 1961 through 1998.
1999-01-01
Todd, Ann E.; Goupell, Matthew J.; Litovsky, Ruth Y.
2016-01-01
Cochlear implants (CIs) provide children with access to speech information from a young age. Despite bilateral cochlear implantation becoming common, these children's use of spatial cues in the free field is poorer than that of normal-hearing children. Clinically fit CIs are not synchronized across the ears; thus binaural experiments must utilize research processors that can control binaural cues with precision. Research to date has used single pairs of electrodes, which is insufficient for representing speech. Little is known about how children with bilateral CIs process binaural information with multi-electrode stimulation. Toward the goal of improving binaural unmasking of speech, this study evaluated binaural unmasking with multi- and single-electrode stimulation. Results showed that performance with multi-electrode stimulation was similar to the best performance with single-electrode stimulation. This was similar to the pattern of performance shown by normal-hearing adults when presented with an acoustic CI simulation. Diotic and dichotic signal detection thresholds of the children with CIs were similar to those of normal-hearing children listening to a CI simulation. The magnitude of binaural unmasking was not related to whether the children with CIs had good interaural time difference sensitivity. Results support the potential for benefits from binaural hearing and speech unmasking in children with bilateral CIs. PMID:27475132
Dong, Junzi; Colburn, H Steven; Sen, Kamal
2016-01-01
In multisource, "cocktail party" sound environments, human and animal auditory systems can use spatial cues to effectively separate and follow one source of sound over competing sources. While mechanisms to extract spatial cues such as interaural time differences (ITDs) are well understood in precortical areas, how such information is reused and transformed in higher cortical regions to represent segregated sound sources is not clear. We present a computational model describing a hypothesized neural network that spans spatial cue detection areas and the cortex. This network is based on recent physiological findings that cortical neurons selectively encode target stimuli in the presence of competing maskers based on source locations (Maddox et al., 2012). We demonstrate that key features of cortical responses can be generated by the model network, which exploits spatial interactions between inputs via lateral inhibition, enabling the spatial separation of target and interfering sources while allowing monitoring of a broader acoustic space when there is no competition. We present the model network along with testable experimental paradigms as a starting point for understanding the transformation and organization of spatial information from midbrain to cortex. This network is then extended to suggest engineering solutions that may be useful for hearing-assistive devices in solving the cocktail party problem.
2009-07-01
[Report excerpt, truncated: most large localization errors are attributed to front-back confusions, which occur in part because the binaural (two-ear) cues that dominate sound localization do not distinguish the front and rear hemispheres; the two binaural cues relied on are interaural (text cut off). Cited: Shinn-Cunningham, B. G.; Kopčo, N.; Martin, T. J., "Localizing Nearby Sound Sources in a Classroom: Binaural Room Impulse" (title truncated), 121 (5), 3094.]
Development of kinesthetic-motor and auditory-motor representations in school-aged children.
Kagerer, Florian A; Clark, Jane E
2015-07-01
In two experiments using a center-out task, we investigated kinesthetic-motor and auditory-motor integrations in 5- to 12-year-old children and young adults. In experiment 1, participants moved a pen on a digitizing tablet from a starting position to one of three targets (visuo-motor condition), and then to one of four targets without visual feedback of the movement. In both conditions, we found that with increasing age, the children moved faster and straighter, and became less variable in their feedforward control. Higher control demands for movements toward the contralateral side were reflected in longer movement times and decreased spatial accuracy across all age groups. When feedforward control relies predominantly on kinesthesia, 7- to 10-year-old children were more variable, indicating difficulties in switching between feedforward and feedback control efficiently during that age. An inverse age progression was found for directional endpoint error; larger errors increasing with age likely reflect stronger functional lateralization for the dominant hand. In experiment 2, the same visuo-motor condition was followed by an auditory-motor condition in which participants had to move to acoustic targets (either white band or one-third octave noise). Since in the latter directional cues come exclusively from transcallosally mediated interaural time differences, we hypothesized that auditory-motor representations would show age effects. The results did not show a clear age effect, suggesting that corpus callosum functionality is sufficient in children to allow them to form accurate auditory-motor maps already at a young age.
Effects of sound source location and direction on acoustic parameters in Japanese churches.
Soeta, Yoshiharu; Ito, Ken; Shimokura, Ryota; Sato, Shin-ichi; Ohsawa, Tomohiro; Ando, Yoichi
2012-02-01
In 1965, the Catholic Church liturgy changed to allow priests to face the congregation. Whereas Church tradition, teaching, and participation have been much discussed with respect to priest orientation at Mass, the acoustical changes in this regard have not yet been examined scientifically. To discuss the acoustics desired within churches, it is necessary to know the acoustical characteristics appropriate for each phase of the liturgy. In this study, acoustic measurements were taken at various source locations and directions using both old and new liturgies performed in Japanese churches. A directional loudspeaker was used as the source to provide vocal and organ acoustic fields, and impulse responses were measured. Various acoustical parameters such as reverberation time and early decay time were analyzed. The speech transmission index was higher for the new Catholic liturgy, suggesting that the change in liturgy has improved speech intelligibility. Moreover, the interaural cross-correlation coefficient and early lateral energy fraction were higher and lower, respectively, suggesting that the change in liturgy has made the apparent source width smaller. © 2012 Acoustical Society of America
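For reference, the interaural cross-correlation coefficient used above is conventionally the maximum of the normalized interaural cross-correlation function within about +/-1 ms of lag. A sketch of that computation from a binaural impulse response follows; the sampling rate and the synthetic impulse responses are placeholders, not data from the study.

import numpy as np

def iacc(left_ir, right_ir, fs, max_lag_ms=1.0):
    """Maximum of the normalized interaural cross-correlation within +/- max_lag_ms."""
    max_lag = int(max_lag_ms * 1e-3 * fs)
    denom = np.sqrt(np.sum(left_ir**2) * np.sum(right_ir**2))
    vals = [np.sum(left_ir * np.roll(right_ir, lag)) for lag in range(-max_lag, max_lag + 1)]
    return max(vals) / denom

# Placeholder binaural impulse response: identical ears except a 0.5-ms delay.
fs = 48_000
left = np.random.randn(fs // 2) * np.exp(-np.arange(fs // 2) / (0.3 * fs))
right = np.roll(left, int(0.0005 * fs))
print(f"IACC = {iacc(left, right, fs):.2f}")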
Fly-ear inspired acoustic sensors for gunshot localization
NASA Astrophysics Data System (ADS)
Liu, Haijun; Currano, Luke; Gee, Danny; Yang, Benjamin; Yu, Miao
2009-05-01
The supersensitive ears of the parasitoid fly Ormia ochracea have inspired researchers to develop bio-inspired directional microphones for sound localization. Although the fly ear is optimized for localizing the narrow-band calling song of crickets at 5 kHz, experiments and simulations have shown that it can amplify directional cues for a wide frequency range. In this article, a theoretical investigation is presented to study the use of fly-ear inspired directional microphones for gunshot localization. Using an equivalent 2-DOF model of the fly ear, the time responses of the fly ear structure to a typical shock wave are obtained and the associated time delay is estimated by using cross-correlation. Both near-field and far-field scenarios are considered. The simulation shows that the fly ear can greatly amplify the time delay by ~20 times, which indicates that with an interaural distance of only 1.2 mm the fly ear is able to generate a time delay comparable to that obtained by a conventional microphone pair with a separation as large as 24 mm. Since the parameters of the fly ear structure can also be tuned for muzzle blast and other impulse stimuli, fly-ear inspired acoustic sensors offer great potential for developing portable gunshot localization systems.
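The delay-estimation step described above can be sketched generically as cross-correlation of the two sensor outputs; the 2-DOF fly-ear mechanics are not modeled here, only the time-delay estimate itself, using a toy impulsive waveform and a delay of about 70 microseconds (roughly what a 24-mm microphone separation would produce).

import numpy as np

def estimate_delay(x, y, fs):
    """Time delay of y relative to x (seconds) from the cross-correlation peak."""
    corr = np.correlate(y, x, mode="full")
    lag = np.argmax(corr) - (len(x) - 1)
    return lag / fs

fs = 200_000                                     # high rate to resolve short delays
t = np.arange(0, 0.005, 1 / fs)
blast = np.exp(-t / 0.0005) * np.sin(2 * np.pi * 3000 * t)   # toy impulsive waveform
delay_samples = int(round(70e-6 * fs))                        # ~70-us acoustic delay
mic1 = blast
mic2 = np.concatenate([np.zeros(delay_samples), blast])[:len(blast)]
print(f"estimated delay: {estimate_delay(mic1, mic2, fs) * 1e6:.1f} us")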
Väljamäe, Aleksander; Sell, Sara
2014-01-01
In the absence of other congruent multisensory motion cues, sound contribution to illusions of self-motion (vection) is relatively weak and often attributed to purely cognitive, top-down processes. The present study addressed the influence of cognitive and perceptual factors in the experience of circular, yaw auditorily-induced vection (AIV), focusing on participants' imagery vividness scores. We used different rotating sound sources (acoustic landmark vs. movable types) and their filtered versions that provided different binaural cues (interaural time or level differences, ITD vs. ILD) when delivered via a loudspeaker array. The significant differences in circular vection intensity showed that (1) AIV was stronger for rotating sound fields containing auditory landmarks as compared to movable sound objects; (2) ITD based acoustic cues were more instrumental than ILD based ones for horizontal AIV; and (3) individual differences in imagery vividness significantly influenced the effects of contextual and perceptual cues. While participants with high scores of kinesthetic and visual imagery were helped by vection "rich" cues, i.e., acoustic landmarks and ITD cues, the participants from the low-vivid imagery group did not benefit from these cues automatically. Only when specifically asked to use their imagination intentionally did these external cues start influencing vection sensation in a similar way to high-vivid imagers. These findings are in line with recent fMRI work, which suggested that high-vivid imagers employ automatic, almost unconscious mechanisms in imagery generation, while low-vivid imagers rely on a more schematic and conscious framework. Consequently, our results provide additional insight into the interaction between perceptual and contextual cues when experiencing purely auditorily or multisensory induced vection.
Neural coding of sound envelope in reverberant environments.
Slama, Michaël C C; Delgutte, Bertrand
2015-03-11
Speech reception depends critically on temporal modulations in the amplitude envelope of the speech signal. Reverberation encountered in everyday environments can substantially attenuate these modulations. To assess the effect of reverberation on the neural coding of amplitude envelope, we recorded from single units in the inferior colliculus (IC) of unanesthetized rabbit using sinusoidally amplitude modulated (AM) broadband noise stimuli presented in simulated anechoic and reverberant environments. Although reverberation degraded both rate and temporal coding of AM in IC neurons, in most neurons, the degradation in temporal coding was smaller than the AM attenuation in the stimulus. This compensation could largely be accounted for by the compressive shape of the modulation input-output function (MIOF), which describes the nonlinear transformation of modulation depth from acoustic stimuli into neural responses. Additionally, in a subset of neurons, the temporal coding of AM was better for reverberant stimuli than for anechoic stimuli having the same modulation depth at the ear. Using hybrid anechoic stimuli that selectively possess certain properties of reverberant sounds, we show that this reverberant advantage is not caused by envelope distortion, static interaural decorrelation, or spectral coloration. Overall, our results suggest that the auditory system may possess dual mechanisms that make the coding of amplitude envelope relatively robust in reverberation: one general mechanism operating for all stimuli with small modulation depths, and another mechanism dependent on very specific properties of reverberant stimuli, possibly the periodic fluctuations in interaural correlation at the modulation frequency. Copyright © 2015 the authors 0270-6474/15/354452-17$15.00/0.
Grose, John H; Buss, Emily; Hall, Joseph W
2017-01-01
The purpose of this study was to test the hypothesis that listeners with frequent exposure to loud music exhibit deficits in suprathreshold auditory performance consistent with cochlear synaptopathy. Young adults with normal audiograms were recruited who either did ( n = 31) or did not ( n = 30) have a history of frequent attendance at loud music venues where the typical sound levels could be expected to result in temporary threshold shifts. A test battery was administered that comprised three sets of procedures: (a) electrophysiological tests including distortion product otoacoustic emissions, auditory brainstem responses, envelope following responses, and the acoustic change complex evoked by an interaural phase inversion; (b) psychoacoustic tests including temporal modulation detection, spectral modulation detection, and sensitivity to interaural phase; and (c) speech tests including filtered phoneme recognition and speech-in-noise recognition. The results demonstrated that a history of loud music exposure can lead to a profile of peripheral auditory function that is consistent with an interpretation of cochlear synaptopathy in humans, namely, modestly abnormal auditory brainstem response Wave I/Wave V ratios in the presence of normal distortion product otoacoustic emissions and normal audiometric thresholds. However, there were no other electrophysiological, psychophysical, or speech perception effects. The absence of any behavioral effects in suprathreshold sound processing indicated that, even if cochlear synaptopathy is a valid pathophysiological condition in humans, its perceptual sequelae are either too diffuse or too inconsequential to permit a simple differential diagnosis of hidden hearing loss.
Physiological models of the lateral superior olive
2017-01-01
In computational biology, modeling is a fundamental tool for formulating, analyzing and predicting complex phenomena. Most neuron models, however, are designed to reproduce certain small sets of empirical data. Hence their outcome is usually not compatible or comparable with other models or datasets, making it unclear how widely applicable such models are. In this study, we investigate these aspects of modeling, namely credibility and generalizability, with a specific focus on auditory neurons involved in the localization of sound sources. The primary cues for binaural sound localization are comprised of interaural time and level differences (ITD/ILD), which are the timing and intensity differences of the sound waves arriving at the two ears. The lateral superior olive (LSO) in the auditory brainstem is one of the locations where such acoustic information is first computed. An LSO neuron receives temporally structured excitatory and inhibitory synaptic inputs that are driven by ipsi- and contralateral sound stimuli, respectively, and changes its spike rate according to binaural acoustic differences. Here we examine seven contemporary models of LSO neurons with different levels of biophysical complexity, from predominantly functional ones (‘shot-noise’ models) to those with more detailed physiological components (variations of integrate-and-fire and Hodgkin-Huxley-type). These models, calibrated to reproduce known monaural and binaural characteristics of LSO, generate largely similar results to each other in simulating ITD and ILD coding. Our comparisons of physiological detail, computational efficiency, predictive performances, and further expandability of the models demonstrate (1) that the simplistic, functional LSO models are suitable for applications where low computational costs and mathematical transparency are needed, (2) that more complex models with detailed membrane potential dynamics are necessary for simulation studies where sub-neuronal nonlinear processes play important roles, and (3) that, for general purposes, intermediate models might be a reasonable compromise between simplicity and biological plausibility. PMID:29281618
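As a flavor of the most reduced, functional end of the model family compared above (and explicitly not a reimplementation of any of the seven published models), an LSO-like ILD function can be sketched as a rate sigmoid in which the ipsilateral level excites and the contralateral level inhibits; slope and maximum rate are arbitrary illustrative values.

import numpy as np

def lso_rate(ipsi_db, contra_db, slope_db=2.0, max_rate=150.0):
    """Sigmoidal ILD function: excitation from the ipsi ear, inhibition from the contra ear."""
    ild = ipsi_db - contra_db                         # ILD in dB (ipsi re contra)
    return max_rate / (1.0 + np.exp(-ild / slope_db)) # spikes/s

for ild in (-20, -10, 0, 10, 20):
    print(f"ILD {ild:+3d} dB -> {lso_rate(60 + ild / 2, 60 - ild / 2):5.1f} spikes/s")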
Lüddemann, Helge; Kollmeier, Birger; Riedel, Helmut
2016-02-01
Brief deviations of interaural correlation (IAC) can provide valuable cues for detection, segregation and localization of acoustic signals. This study investigated the processing of such "binaural gaps" in continuously running noise (100-2000 Hz), in comparison to silent "monaural gaps", by measuring late auditory evoked potentials (LAEPs) and perceptual thresholds with novel, iteratively optimized stimuli. Mean perceptual binaural gap duration thresholds exhibited a major asymmetry: they were substantially shorter for uncorrelated gaps in correlated and anticorrelated reference noise (1.75 ms and 4.1 ms) than for correlated and anticorrelated gaps in uncorrelated reference noise (26.5 ms and 39.0 ms). The thresholds also showed a minor asymmetry: they were shorter in the positive than in the negative IAC range. The mean behavioral threshold for monaural gaps was 5.5 ms. For all five gap types, the amplitude of LAEP components N1 and P2 increased linearly with the logarithm of gap duration. While perceptual and electrophysiological thresholds matched for monaural gaps, LAEP thresholds were about twice as long as perceptual thresholds for uncorrelated gaps, but half as long for correlated and anticorrelated gaps. Nevertheless, LAEP thresholds showed the same asymmetries as perceptual thresholds. For gap durations below 30 ms, LAEPs were dominated by the processing of the leading edge of a gap. For longer gap durations, in contrast, both the leading and the lagging edge of a gap contributed to the evoked response. Formulae for the equivalent rectangular duration (ERD) of the binaural system's temporal window were derived for three common window shapes. The psychophysical ERD was 68 ms for diotic and about 40 ms for anti- and uncorrelated noise. After a nonlinear Z-transform of the stimulus IAC prior to temporal integration, ERDs were about 10 ms for reference correlations of ±1 and 80 ms for uncorrelated reference. Hence, a physiologically motivated peripheral nonlinearity changed the rank order of ERDs across experimental conditions in a plausible manner. Copyright © 2015 Elsevier B.V. All rights reserved.
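A sketch of how a binaural gap of the kind used above can be quantified offline: compute the running interaural correlation of continuously running noise in a sliding temporal window and look for the dip during the interaurally uncorrelated segment. The window length, gap duration, and sampling rate below are placeholders, not the study's parameters.

import numpy as np

fs = 16_000
dur, gap_on, gap_len = 1.0, 0.5, 0.010           # 10-ms uncorrelated gap at 0.5 s
n, g0, g1 = int(dur * fs), int(gap_on * fs), int((gap_on + gap_len) * fs)

left = np.random.randn(n)
right = left.copy()                               # correlated reference noise
right[g0:g1] = np.random.randn(g1 - g0)           # brief interaurally uncorrelated segment

win = int(0.020 * fs)                             # 20-ms analysis window, 50% overlap
iac = np.array([np.corrcoef(left[i:i + win], right[i:i + win])[0, 1]
                for i in range(0, n - win, win // 2)])
print("minimum running IAC (marks the binaural gap):", round(float(iac.min()), 2))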
Garcia-Pino, Elisabet; Gessele, Nikodemus; Koch, Ursula
2017-08-02
Hypersensitivity to sounds is one of the prevalent symptoms in individuals with Fragile X syndrome (FXS). It manifests behaviorally early during development and is often used as a landmark for treatment efficacy. However, the physiological mechanisms and circuit-level alterations underlying this aberrant behavior remain poorly understood. Using the mouse model of FXS ( Fmr1 KO ), we demonstrate that functional maturation of auditory brainstem synapses is impaired in FXS. Fmr1 KO mice showed a greatly enhanced excitatory synaptic input strength in neurons of the lateral superior olive (LSO), a prominent auditory brainstem nucleus, which integrates ipsilateral excitation and contralateral inhibition to compute interaural level differences. Conversely, the glycinergic, inhibitory input properties remained unaffected. The enhanced excitation was the result of an increased number of cochlear nucleus fibers converging onto one LSO neuron, without changing individual synapse properties. Concomitantly, immunolabeling of excitatory ending markers revealed an increase in the immunolabeled area, supporting abnormally elevated excitatory input numbers. Intrinsic firing properties were only slightly enhanced. In line with the disturbed development of LSO circuitry, auditory processing was also affected in adult Fmr1 KO mice as shown with single-unit recordings of LSO neurons. These processing deficits manifested as an increase in firing rate, a broadening of the frequency response area, and a shift in the interaural level difference function of LSO neurons. Our results suggest that this aberrant synaptic development of auditory brainstem circuits might be a major underlying cause of the auditory processing deficits in FXS. SIGNIFICANCE STATEMENT Fragile X Syndrome (FXS) is the most common inheritable form of intellectual impairment, including autism. A core symptom of FXS is extreme sensitivity to loud sounds. This is one reason why individuals with FXS tend to avoid social interactions, contributing to their isolation. Here, a mouse model of FXS was used to investigate the auditory brainstem where basic sound information is first processed. Loss of the Fragile X mental retardation protein leads to excessive excitatory compared with inhibitory inputs in neurons extracting information about sound levels. Functionally, this elevated excitation results in increased firing rates, and abnormal coding of frequency and binaural sound localization cues. Imbalanced early-stage sound level processing could partially explain the auditory processing deficits in FXS. Copyright © 2017 the authors 0270-6474/17/377403-17$15.00/0.
Analysis of masking effects on speech intelligibility with respect to moving sound stimulus
NASA Astrophysics Data System (ADS)
Chen, Chiung Yao
2004-05-01
The purpose of this study was to compare the degree to which speech is disturbed by a stationary noise source and by an apparently moving one (AMN). In studies of sound localization, we found that source-directional sensitivity (SDS) is closely associated with the magnitude of the interaural cross correlation (IACC). Ando et al. [Y. Ando, S. H. Kang, and H. Nagamatsu, J. Acoust. Soc. Jpn. (E) 8, 183-190 (1987)] reported that the correlation of potentials between the left and right inferior colliculus along the auditory pathway is consistent with the correlation function of the amplitudes entering the two ear-canal entrances. We assumed that the degree of disturbance produced by an apparently moving noise source would differ from that produced by a source fixed in front of the listener at a constant distance in a free field (no reflections). We then found that a moving and a fixed source of 1/3-octave narrow-band noise centered at 2 kHz influenced speech intelligibility differently. However, the reasons for this difference, in terms of the moving speed and the masking effects on speech intelligibility, remained uncertain.
Extinction of auditory stimuli in hemineglect: Space versus ear.
Spierer, Lucas; Meuli, Reto; Clarke, Stephanie
2007-02-01
Unilateral extinction of auditory stimuli, a key feature of the neglect syndrome, was investigated in 15 patients with right (11), left (3) or bilateral (1) hemispheric lesions using a verbal dichotic condition, in which each ear simultaneously received one word, and an interaural-time-difference (ITD) diotic condition, in which both ears received both words lateralised by means of ITD. Additional investigations included sound localisation, visuo-spatial attention and general cognitive status. Five patients presented a significant asymmetry in the ITD diotic test, due to a decrease of left hemispace reporting, but no asymmetry was found in dichotic listening. Six other patients presented a significant asymmetry in the dichotic test due to a significant decrease of left or right ear reporting, but no asymmetry in diotic listening. Ten of the above patients presented mild to severe deficits in sound localisation and eight showed signs of visuo-spatial neglect (three with selective asymmetry in the diotic and five in the dichotic task). Four other patients presented a significant asymmetry in both the diotic and dichotic listening tasks. Three of them presented moderate deficits in localisation and all four moderate visuo-spatial neglect. Thus, extinction for left ear and left hemispace can double dissociate, suggesting distinct underlying neural processes. Furthermore, the co-occurrence with sound localisation disturbance and with visuo-spatial hemineglect speaks in favour of the involvement of multisensory attentional representations.
Park, Hong Ju; Lee, In-Sik; Shin, Jung Eun; Lee, Yeo Jin; Park, Mun Su
2010-01-01
To better characterize both ocular and cervical vestibular evoked myogenic potential (VEMP) responses at different sound frequencies in 20 normal subjects. Cervical and ocular VEMPs were recorded. The intensity of sound stimulation was decreased from the maximum until no responses were evoked. Thresholds, amplitudes, latencies and the interaural amplitude difference ratio (IADR) at maximal stimulation were calculated. Both tests showed similar frequency tuning, with the lowest threshold and highest amplitude for 500-Hz tone-burst stimuli. Sound stimulation at 500 Hz showed response rates of 100% in both tests. Cervical VEMPs showed a higher incidence than ocular VEMPs. Ocular VEMP thresholds were significantly higher than those of cervical VEMP. Cervical VEMP amplitudes were significantly higher than ocular VEMP amplitudes. IADRs of ocular and cervical VEMPs did not differ significantly. Ocular VEMPs showed frequency tuning similar to that of cervical VEMPs. Cervical VEMP responses showed a higher incidence, lower thresholds and larger amplitudes than ocular VEMP responses. Cervical VEMP is a more reliable measure than ocular VEMP, though the results of both tests will be complementary. Five hundred Hertz is the optimal frequency to use. Copyright 2009 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
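For clarity, the interaural amplitude difference ratio reported above is commonly computed as the absolute left-right amplitude difference normalized by the sum of the two amplitudes, expressed in percent; this conventional formulation is assumed here rather than quoted from the paper.

def iadr(amp_left, amp_right):
    """Interaural amplitude difference ratio in percent (conventional formulation)."""
    return 100.0 * abs(amp_left - amp_right) / (amp_left + amp_right)

print(iadr(120.0, 80.0))   # e.g., 20% asymmetry for peak amplitudes of 120 vs 80 (arbitrary units)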
Relating age and hearing loss to monaural, bilateral, and binaural temporal sensitivity
Gallun, Frederick J.; McMillan, Garnett P.; Molis, Michelle R.; Kampel, Sean D.; Dann, Serena M.; Konrad-Martin, Dawn L.
2014-01-01
Older listeners are more likely than younger listeners to have difficulties in making temporal discriminations among auditory stimuli presented to one or both ears. In addition, the performance of older listeners is often observed to be more variable than that of younger listeners. The aim of this work was to relate age and hearing loss to temporal processing ability in a group of younger and older listeners with a range of hearing thresholds. Seventy-eight listeners were tested on a set of three temporal discrimination tasks (monaural gap discrimination, bilateral gap discrimination, and binaural discrimination of interaural differences in time). To examine the role of temporal fine structure in these tasks, four types of brief stimuli were used: tone bursts, broad-frequency chirps with rising or falling frequency contours, and random-phase noise bursts. Between-subject group analyses conducted separately for each task revealed substantial increases in temporal thresholds for the older listeners across all three tasks, regardless of stimulus type, as well as significant correlations among the performance of individual listeners across most combinations of tasks and stimuli. Differences in performance were associated with the stimuli in the monaural and binaural tasks, but not the bilateral task. Temporal fine structure differences among the stimuli had the greatest impact on monaural thresholds. Threshold estimate values across all tasks and stimuli did not show any greater variability for the older listeners as compared to the younger listeners. A linear mixed model applied to the data suggested that age and hearing loss are independent factors responsible for temporal processing ability, thus supporting the increasingly accepted hypothesis that temporal processing can be impaired for older compared to younger listeners with similar hearing and/or amounts of hearing loss. PMID:25009458
Higgins, Nathan C; McLaughlin, Susan A; Rinne, Teemu; Stecker, G Christopher
2017-09-05
Few auditory functions are as important or as universal as the capacity for auditory spatial awareness (e.g., sound localization). That ability relies on sensitivity to acoustical cues, particularly interaural time and level differences (ITD and ILD), that correlate with sound-source locations. Under nonspatial listening conditions, cortical sensitivity to ITD and ILD takes the form of broad contralaterally dominated response functions. It is unknown, however, whether that sensitivity reflects representations of the specific physical cues or a higher-order representation of auditory space (i.e., integrated cue processing), nor is it known whether responses to spatial cues are modulated by active spatial listening. To investigate, sensitivity to parametrically varied ITD or ILD cues was measured using fMRI during spatial and nonspatial listening tasks. Task type varied across blocks where targets were presented in one of three dimensions: auditory location, pitch, or visual brightness. Task effects were localized primarily to lateral posterior superior temporal gyrus (pSTG) and modulated binaural-cue response functions differently in the two hemispheres. Active spatial listening (location tasks) enhanced both contralateral and ipsilateral responses in the right hemisphere but maintained or enhanced contralateral dominance in the left hemisphere. Two observations suggest integrated processing of ITD and ILD. First, overlapping regions in medial pSTG exhibited significant sensitivity to both cues. Second, successful classification of multivoxel patterns was observed for both cue types and, critically, for cross-cue classification. Together, these results suggest a higher-order representation of auditory space in the human auditory cortex that at least partly integrates the specific underlying cues.
McAlpine, D; Jiang, D; Shackleton, T M; Palmer, A R
1998-08-01
Responses of low-frequency neurons in the inferior colliculus (IC) of anesthetized guinea pigs were studied with binaural beats to assess their mean best interaural phase (BP) to a range of stimulating frequencies. Phase plots (stimulating frequency vs BP) were produced, from which measures of characteristic delay (CD) and characteristic phase (CP) for each neuron were obtained. The CD provides an estimate of the difference in travel time from each ear to coincidence-detector neurons in the brainstem. The CP indicates the mechanism underpinning the coincidence detector responses. A linear phase plot indicates a single, constant delay between the coincidence-detector inputs from the two ears. In more than half (54 of 90) of the neurons, the phase plot was not linear. We hypothesized that neurons with nonlinear phase plots received convergent input from brainstem coincidence detectors with different CDs. Presentation of a second tone with a fixed, unfavorable delay suppressed the response of one input, linearizing the phase plot and revealing other inputs to be relatively simple coincidence detectors. For some neurons with highly complex phase plots, the suppressor tone altered BP values, but did not resolve the nature of the inputs. For neurons with linear phase plots, the suppressor tone either completely abolished their responses or reduced their discharge rate with no change in BP. By selectively suppressing inputs with a second tone, we are able to reveal the nature of underlying binaural inputs to IC neurons, confirming the hypothesis that the complex phase plots of many IC neurons are a result of convergence from simple brainstem coincidence detectors.
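The characteristic delay and characteristic phase described above follow from a linear fit to the phase plot: with best interaural phase expressed in cycles and frequency in Hz, the slope of the fit is the CD in seconds and the intercept (modulo 1) is the CP in cycles. A minimal unweighted least-squares sketch with illustrative, already-unwrapped values (real analyses typically weight points by synchronization strength):

import numpy as np

# Best interaural phase (cycles) measured at several stimulating frequencies (Hz).
freqs = np.array([200.0, 300.0, 400.0, 500.0, 600.0])
best_phase = np.array([0.14, 0.19, 0.24, 0.29, 0.34])   # illustrative values

slope, intercept = np.polyfit(freqs, best_phase, 1)
cd_ms = slope * 1e3            # characteristic delay: slope in cycles/Hz = seconds
cp = intercept % 1.0           # characteristic phase in cycles
print(f"CD = {cd_ms:.2f} ms, CP = {cp:.2f} cycles")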
Seshagiri, Chandran V.; Delgutte, Bertrand
2007-01-01
The complex anatomical structure of the central nucleus of the inferior colliculus (ICC), the principal auditory nucleus in the midbrain, may provide the basis for functional organization of auditory information. To investigate this organization, we used tetrodes to record from neighboring neurons in the ICC of anesthetized cats and studied the similarity and difference among the responses of these neurons to pure-tone stimuli using widely used physiological characterizations. Consistent with the tonotopic arrangement of neurons in the ICC and reports of a threshold map, we found a high degree of correlation in the best frequencies (BFs) of neighboring neurons, which were mostly <3 kHz in our sample, and the pure-tone thresholds among neighboring neurons. However, width of frequency tuning, shapes of the frequency response areas, and temporal discharge patterns showed little or no correlation among neighboring neurons. Because the BF and threshold are measured at levels near the threshold and the characteristic frequency (CF), neighboring neurons may receive similar primary inputs tuned to their CF; however, at higher levels, additional inputs from other frequency channels may be recruited, introducing greater variability in the responses. There was also no correlation among neighboring neurons' sensitivity to interaural time differences (ITD) measured with binaural beats. However, the characteristic phases (CPs) of neighboring neurons revealed a significant correlation. Because the CP is related to the neural mechanisms generating the ITD sensitivity, this result is consistent with segregation of inputs to the ICC from the lateral and medial superior olives. PMID:17671101
Kim, Min-Beom; Choi, Jeesun; Park, Ga Young; Cho, Yang-Sun; Hong, Sung Hwa; Chung, Won-Ho
2013-06-01
Our goal was to find the clinical value of cervical vestibular evoked myogenic potential (VEMP) in Ménière's disease (MD) and to evaluate whether the VEMP results can be useful in assessing the stage of MD. Furthermore, we tried to evaluate the clinical effectiveness of VEMP in predicting hearing outcomes. The amplitude, peak latency and interaural amplitude difference (IAD) ratio were obtained using cervical VEMP. The VEMP results of MD were compared with those of normal subjects, and the MD stages were compared with the IAD ratio. Finally, the hearing changes were analyzed according to their VEMP results. In clinically definite unilateral MD (n=41), the prevalence of cervical VEMP abnormality in the IAD ratio was 34.1%. When compared with normal subjects (n=33), the VEMP profile of MD patients showed a low amplitude and a similar latency. The mean IAD ratio in MD was 23%, which was significantly different from that of normal subjects (P=0.01). As the stage increased, the IAD ratio significantly increased (P=0.09). After stratification by initial hearing level, stage I and II subjects (hearing threshold, 0-40 dB) with an abnormal IAD ratio showed a decrease in hearing over time compared to those with a normal IAD ratio (P=0.08). VEMP parameters have an important clinical role in MD. Especially, the IAD ratio can be used to assess the stage of MD. An abnormal IAD ratio may be used as a predictor of poor hearing outcomes in subjects with early stage MD.
Intelligibility for Binaural Speech with Discarded Low-SNR Speech Components.
Schoenmaker, Esther; van de Par, Steven
2016-01-01
Speech intelligibility in multitalker settings improves when the target speaker is spatially separated from the interfering speakers. A factor that may contribute to this improvement is the improved detectability of target-speech components due to binaural interaction in analogy to the Binaural Masking Level Difference (BMLD). This would allow listeners to hear target speech components within specific time-frequency intervals that have a negative SNR, similar to the improvement in the detectability of a tone in noise when these contain disparate interaural difference cues. To investigate whether these negative-SNR target-speech components indeed contribute to speech intelligibility, a stimulus manipulation was performed where all target components were removed when local SNRs were smaller than a certain criterion value. It can be expected that for sufficiently high criterion values target speech components will be removed that do contribute to speech intelligibility. For spatially separated speakers, assuming that a BMLD-like detection advantage contributes to intelligibility, degradation in intelligibility is expected already at criterion values below 0 dB SNR. However, for collocated speakers it is expected that higher criterion values can be applied without impairing speech intelligibility. Results show that degradation of intelligibility for separated speakers is only seen for criterion values of 0 dB and above, indicating a negligible contribution of a BMLD-like detection advantage in multitalker settings. These results show that the spatial benefit is related to a spatial separation of speech components at positive local SNRs rather than to a BMLD-like detection improvement for speech components at negative local SNRs.
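The stimulus manipulation described above can be sketched as a short-time Fourier transform operation in which target time-frequency bins whose local SNR falls below the criterion are zeroed before resynthesis. The frame length, window, and stand-in signals below are illustrative assumptions, not the authors' processing chain.

import numpy as np

def discard_low_snr(target, masker, criterion_db=0.0, frame=512):
    """Zero target STFT bins whose local SNR (target vs masker) is below the criterion."""
    out = np.zeros_like(target)
    win = np.hanning(frame)
    hop = frame // 2
    for start in range(0, len(target) - frame, hop):
        T = np.fft.rfft(target[start:start + frame] * win)
        M = np.fft.rfft(masker[start:start + frame] * win)
        snr_db = 20 * np.log10(np.abs(T) / (np.abs(M) + 1e-12) + 1e-12)
        T[snr_db < criterion_db] = 0.0                 # remove low-SNR target components
        out[start:start + frame] += np.fft.irfft(T, frame) * win  # crude overlap-add
    return out

fs = 16_000
target = np.random.randn(fs)          # stand-in for target speech
masker = np.random.randn(fs)          # stand-in for the summed interferers
processed = discard_low_snr(target, masker, criterion_db=0.0)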
Alteration of frequency range for binaural beats in acute low-tone hearing loss.
Karino, Shotaro; Yamasoba, Tatsuya; Ito, Ken; Kaga, Kimitaka
2005-01-01
The effect of acute low-tone sensorineural hearing loss (ALHL) on the interaural frequency difference (IFD) required for perception of binaural beats (BBs) was investigated in 12 patients with unilateral ALHL and 7 patients in whom ALHL had lessened. A continuous pure tone of 30 dB sensation level at 250 Hz was presented to the contralateral, normal-hearing ear. The presence of BBs was determined by a subjective yes-no procedure as the frequency of a loudness-balanced test tone was gradually adjusted around 250 Hz in the affected ear. The frequency range in which no BBs were perceived (FRNB) was significantly wider in the patients with ALHL than in the controls, and FRNBs became narrower in the recovered ALHL group. Specifically, detection of slow BBs with a small IFD was impaired in this limited (10 s) observation period. The significant correlation between the hearing level at 250 Hz and FRNBs suggests that FRNBs represent the degree of cochlear damage caused by ALHL.
Neural Correlates of the Binaural Masking Level Difference in Human Frequency-Following Responses.
Clinard, Christopher G; Hodgson, Sarah L; Scherer, Mary Ellen
2017-04-01
The binaural masking level difference (BMLD) is an auditory phenomenon where binaural tone-in-noise detection is improved when the phase of either signal or noise is inverted in one of the ears (SπNo or SoNπ, respectively), relative to detection when signal and noise are in identical phase at each ear (SoNo). Processing related to BMLDs and interaural time differences has been confirmed in the auditory brainstem of non-human mammals; in the human auditory brainstem, phase-locked neural responses elicited by BMLD stimuli have not been systematically examined across signal-to-noise ratio. Behavioral and physiological testing was performed in three binaural stimulus conditions: SoNo, SπNo, and SoNπ. BMLDs at 500 Hz were obtained from 14 young, normal-hearing adults (ages 21-26). Physiological BMLDs used the frequency-following response (FFR), a scalp-recorded auditory evoked potential dependent on sustained phase-locked neural activity; FFR tone-in-noise detection thresholds were used to calculate physiological BMLDs. FFR BMLDs were significantly smaller (poorer) than behavioral BMLDs, and FFR BMLDs did not reflect a physiological release from masking, on average. Raw FFR amplitude showed substantial reductions in the SπNo condition relative to the SoNo and SoNπ conditions, consistent with negative effects of phase summation from left and right ear FFRs. FFR amplitude differences between stimulus conditions (e.g., SoNo amplitude minus SπNo amplitude) were significantly predictive of behavioral SπNo BMLDs; individuals with larger amplitude differences had larger (better) behavioral BMLDs and individuals with smaller amplitude differences had smaller (poorer) behavioral BMLDs. These data indicate a role for sustained phase-locked neural activity in BMLDs of humans and are the first to show predictive relationships between behavioral BMLDs and human brainstem responses.
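For concreteness, the three stimulus conditions above differ only in which component is waveform-inverted at one ear. A sketch of generating SoNo, SπNo and SoNπ stimuli for a 500-Hz tone in noise (levels, duration, and the use of unfiltered noise are placeholder choices):

import numpy as np

fs, dur, f0 = 48_000, 0.5, 500.0
t = np.arange(int(dur * fs)) / fs
signal = 0.1 * np.sin(2 * np.pi * f0 * t)
noise = 0.3 * np.random.randn(len(t))            # would be band-limited in practice

def dichotic(signal_sign, noise_sign):
    """Return (left, right); a sign of -1 inverts that component at the right ear."""
    left = signal + noise
    right = signal_sign * signal + noise_sign * noise
    return left, right

SoNo = dichotic(+1, +1)      # signal and noise identical at both ears
SpiNo = dichotic(-1, +1)     # signal inverted at one ear (antiphasic signal)
SoNpi = dichotic(+1, -1)     # noise inverted at one ear (antiphasic noise)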
Hamlet, William R.; Liu, Yu-Wei; Tang, Zheng-Quan; Lu, Yong
2014-01-01
Central auditory neurons that localize sound in horizontal space have specialized intrinsic and synaptic cellular mechanisms to tightly control the threshold and timing for action potential generation. However, the critical interplay between intrinsic voltage-gated conductances and extrinsic synaptic conductances in determining neuronal output are not well understood. In chicken, neurons in the nucleus laminaris (NL) encode sound location using interaural time difference (ITD) as a cue. Along the tonotopic axis of NL, there exist robust differences among low, middle, and high frequency (LF, MF, and HF, respectively) neurons in a variety of neuronal properties such as low threshold voltage-gated K+ (LTK) channels and depolarizing inhibition. This establishes NL as an ideal model to examine the interactions between LTK currents and synaptic inhibition across the tonotopic axis. Using whole-cell patch clamp recordings prepared from chicken embryos (E17–E18), we found that LTK currents were larger in MF and HF neurons than in LF neurons. Kinetic analysis revealed that LTK currents in MF neurons activated at lower voltages than in LF and HF neurons, whereas the inactivation of the currents was similar across the tonotopic axis. Surprisingly, blockade of LTK currents using dendrotoxin-I (DTX) tended to broaden the duration and increase the amplitude of the depolarizing inhibitory postsynaptic potentials (IPSPs) in NL neurons without dependence on coding frequency regions. Analyses of the effects of DTX on inhibitory postsynaptic currents led us to interpret this unexpected observation as a result of primarily postsynaptic effects of LTK currents on MF and HF neurons, and combined presynaptic and postsynaptic effects in LF neurons. Furthermore, DTX transferred subthreshold IPSPs to spikes. Taken together, the results suggest a critical role for LTK currents in regulating inhibitory synaptic strength in ITD-coding neurons at various frequencies. PMID:24904297
Schwartz, Andrew H; Shinn-Cunningham, Barbara G
2013-04-01
Many hearing aids introduce compressive gain to accommodate the reduced dynamic range that often accompanies hearing loss. However, natural sounds produce complicated temporal dynamics in hearing aid compression, as gain is driven by whichever source dominates at a given moment. Moreover, independent compression at the two ears can introduce fluctuations in interaural level differences (ILDs) important for spatial perception. While independent compression can interfere with spatial perception of sound, it does not always interfere with localization accuracy or speech identification. Here, normal-hearing listeners reported a target message played simultaneously with two spatially separated masker messages. We measured the amount of spatial separation required between the target and maskers for subjects to perform at threshold in this task. Fast, syllabic compression that was independent at the two ears increased the required spatial separation, but linking the compressors to provide identical gain to both ears (preserving ILDs) restored much of the deficit caused by fast, independent compression. Effects were less clear for slower compression. Percent-correct performance was lower with independent compression, but only for small spatial separations. These results may help explain differences in previous reports of the effect of compression on spatial perception of sound.
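A simplified static sketch of the contrast studied above: independent compression applies a level-dependent gain at each ear separately, which shrinks the ILD, whereas linked compression applies one shared gain to both ears and preserves it. The knee point and ratio are arbitrary, and the study itself used fast, syllabic (time-varying) compression rather than the static curve shown here.

def compressed_level(level_db, threshold_db=50.0, ratio=3.0):
    """Static compressive input-output function above a knee point."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

left_in, right_in = 70.0, 60.0                       # a 10-dB ILD at the ears

# Independent compression: each ear gets its own gain, so the ILD shrinks.
ind = (compressed_level(left_in), compressed_level(right_in))

# Linked compression: one gain (here taken from the louder ear) applied to both ears.
shared_gain = compressed_level(left_in) - left_in
linked = (left_in + shared_gain, right_in + shared_gain)

print("independent ILD:", round(ind[0] - ind[1], 1), "dB")        # ~3.3 dB
print("linked ILD     :", round(linked[0] - linked[1], 1), "dB")  # 10.0 dB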
NASA Astrophysics Data System (ADS)
Shimokura, Ryota; Soeta, Yoshiharu
2011-04-01
Railway stations can be principally classified by their locations, i.e., above-ground or underground stations, and by their platform styles, i.e., side or island platforms. However, the effect of the architectural elements on the train noise in stations is not well understood. The aim of the present study is to determine the different acoustical characteristics of the train noise for each station style. The train noise was evaluated by (1) the A-weighted equivalent continuous sound pressure level (LAeq), (2) the amplitude of the maximum peak of the interaural cross-correlation function (IACC), (3) the delay time (τ1) and amplitude (ϕ1) of the first maximum peak of the autocorrelation function. The IACC, τ1 and ϕ1 are related to the subjective diffuseness, pitch and pitch strength, respectively. Regarding the locations, the LAeq in the underground stations was 6.4 dB higher than that in the above-ground stations, and the pitch in the underground stations was higher and stronger. Regarding the platform styles, the LAeq on the side platforms was 3.3 dB higher than on the island platforms of the above-ground stations. For the underground stations, the LAeq on the island platforms was 3.3 dB higher than that on the side platforms when a train entered the station. The IACC on the island platforms of the above-ground stations was higher than that in the other stations.
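The τ1 and ϕ1 measures above are the delay and height of the first maximum of the normalized autocorrelation function after zero lag. A sketch of extracting a dominant ACF peak from a recording follows; the tonal-plus-noise test signal is a placeholder, and the search simply takes the largest peak in the analysis range rather than strictly the first local maximum.

import numpy as np

def acf_dominant_peak(x, fs, max_lag_ms=20.0):
    """Delay (ms) and amplitude of the dominant ACF peak after lag 0 (a proxy for tau1/phi1)."""
    max_lag = int(max_lag_ms * 1e-3 * fs)
    lags = np.arange(1, max_lag)
    acf = np.array([np.dot(x[:-lag], x[lag:]) for lag in lags]) / np.dot(x, x)
    i = np.argmax(acf)
    return lags[i] / fs * 1e3, acf[i]

fs = 16_000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 200 * t) + 0.5 * np.random.randn(len(t))   # tonal component in noise
tau1_ms, phi1 = acf_dominant_peak(x, fs)
# The dominant peak reflects the 200-Hz periodicity (a multiple of 5 ms).
print(f"tau1 = {tau1_ms:.2f} ms, phi1 = {phi1:.2f}")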
Wang, Le; Devore, Sasha; Delgutte, Bertrand
2013-01-01
Human listeners are sensitive to interaural time differences (ITDs) in the envelopes of sounds, which can serve as a cue for sound localization. Many high-frequency neurons in the mammalian inferior colliculus (IC) are sensitive to envelope-ITDs of sinusoidally amplitude-modulated (SAM) sounds. Typically, envelope-ITD-sensitive IC neurons exhibit either peak-type sensitivity, discharging maximally at the same delay across frequencies, or trough-type sensitivity, discharging minimally at the same delay across frequencies, consistent with responses observed at the primary site of binaural interaction in the medial and lateral superior olives (MSO and LSO), respectively. However, some high-frequency IC neurons exhibit dual types of envelope-ITD sensitivity in their responses to SAM tones, that is, they exhibit peak-type sensitivity at some modulation frequencies and trough-type sensitivity at other frequencies. Here we show that high-frequency IC neurons in the unanesthetized rabbit can also exhibit dual types of envelope-ITD sensitivity in their responses to SAM noise. Such complex responses to SAM stimuli could be achieved by convergent inputs from MSO and LSO onto single IC neurons. We test this hypothesis by implementing a physiologically explicit, computational model of the binaural pathway. Specifically, we examined envelope-ITD sensitivity of a simple model IC neuron that receives convergent inputs from MSO and LSO model neurons. We show that dual envelope-ITD sensitivity emerges in the IC when convergent MSO and LSO inputs are differentially tuned for modulation frequency. PMID:24155013
Ashida, Go; Funabiki, Kazuo; Carr, Catherine E.
2013-01-01
A wide variety of neurons encode temporal information via phase-locked spikes. In the avian auditory brainstem, neurons in the cochlear nucleus magnocellularis (NM) send phase-locked synaptic inputs to coincidence detector neurons in the nucleus laminaris (NL) that mediate sound localization. Previous modeling studies suggested that converging phase-locked synaptic inputs may give rise to a periodic oscillation in the membrane potential of their target neuron. Recent physiological recordings in vivo revealed that owl NL neurons changed their spike rates almost linearly with the amplitude of this oscillatory potential. The oscillatory potential was termed the sound analog potential, because of its resemblance to the waveform of the stimulus tone. The amplitude of the sound analog potential recorded in NL varied systematically with the interaural time difference (ITD), which is one of the most important cues for sound localization. In order to investigate the mechanisms underlying ITD computation in the NM-NL circuit, we provide detailed theoretical descriptions of how phase-locked inputs form oscillating membrane potentials. We derive analytical expressions that relate presynaptic, synaptic, and postsynaptic factors to the signal and noise components of the oscillation in both the synaptic conductance and the membrane potential. Numerical simulations demonstrate the validity of the theoretical formulations for the entire frequency ranges tested (1–8 kHz) and potential effects of higher harmonics on NL neurons with low best frequencies (<2 kHz). PMID:24265616
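As a rough numerical illustration of how converging phase-locked inputs produce a tone-like ("sound analog") oscillation, the sketch below sums exponential synaptic conductances triggered by von Mises phase-locked spike trains and reads out the oscillation amplitude at the stimulus frequency. All parameter values (fiber count, firing probability per cycle, concentration kappa, synaptic time constant) are illustrative assumptions, and the analytical signal/noise decomposition derived in the paper is not reproduced here.

```python
import numpy as np

def sound_analog_conductance(freq_hz=4000, n_fibers=100, rate_per_cycle=0.3,
                             kappa=4.0, tau_syn=0.0001, dur_s=0.05, fs=1e5, seed=0):
    """Summed synaptic conductance driven by phase-locked NM-like inputs (illustrative)."""
    rng = np.random.default_rng(seed)
    n_cycles = int(dur_s * freq_hz)
    t = np.arange(int(dur_s * fs)) / fs
    g = np.zeros_like(t)
    kernel = np.exp(-np.arange(int(5 * tau_syn * fs)) / (tau_syn * fs))  # exponential EPSG

    for _ in range(n_fibers):
        cycles = np.nonzero(rng.random(n_cycles) < rate_per_cycle)[0]
        phases = rng.vonmises(0.0, kappa, cycles.size)        # phase-locked spike jitter
        spike_times = (cycles + phases / (2 * np.pi)) / freq_hz
        idx = (spike_times * fs).astype(int)
        idx = idx[(idx >= 0) & (idx < t.size)]
        spikes = np.zeros_like(t)
        np.add.at(spikes, idx, 1.0)
        g += np.convolve(spikes, kernel)[:t.size]

    # "Signal" component: oscillation amplitude at the stimulus frequency
    osc_amp = 2 * np.abs(np.sum(g * np.exp(-2j * np.pi * freq_hz * t))) / t.size
    return g, osc_amp
```

Increasing kappa (tighter phase locking) or the per-cycle firing probability raises the oscillation amplitude relative to the aperiodic background, which is the qualitative relationship between presynaptic factors and the sound analog potential described above.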
Multisensory guidance of orienting behavior.
Maier, Joost X; Groh, Jennifer M
2009-12-01
We use both vision and audition when localizing objects and events in our environment. However, these sensory systems receive spatial information in different coordinate systems: sounds are localized using inter-aural and spectral cues, yielding a head-centered representation of space, whereas the visual system uses an eye-centered representation of space, based on the site of activation on the retina. In addition, the visual system employs a place-coded, retinotopic map of space, whereas the auditory system's representational format is characterized by broad spatial tuning and a lack of topographical organization. A common view is that the brain needs to reconcile these differences in order to control behavior, such as orienting gaze to the location of a sound source. To accomplish this, it seems that either auditory spatial information must be transformed from a head-centered rate code to an eye-centered map to match the frame of reference used by the visual system, or vice versa. Here, we review a number of studies that have focused on the neural basis underlying such transformations in the primate auditory system. Although these studies have found some evidence for such transformations, many differences in the way the auditory and visual system encode space exist throughout the auditory pathway. We will review these differences at the neural level, and will discuss them in relation to differences in the way auditory and visual information is used in guiding orienting movements.
On the Possible Detection of Lightning Storms by Elephants
Kelley, Michael C.; Garstang, Michael
2013-01-01
Simple Summary We use data similar to those taken by the International Monitoring System for the detection of nuclear explosions to determine whether elephants might be capable of detecting and locating the source of sounds generated by thunderstorms. Knowledge that elephants might be capable of responding to such storms, particularly at the end of the dry season when migrations are initiated, is of considerable interest to management and conservation. Abstract Theoretical calculations suggest that sounds produced by thunderstorms and detected by a system similar to the International Monitoring System (IMS) for the detection of nuclear explosions at distances ≥100 km are at sound pressure levels equal to or greater than 6 × 10^-3 Pa. Such sound pressure levels are well within the range of elephant hearing. Frequencies carrying these sounds might allow for interaural time delays such that adult elephants could not only hear but could also locate the source of these sounds. Determining whether it is possible for elephants to hear and locate thunderstorms contributes to the question of whether elephant movements are triggered or influenced by these abiotic sounds. PMID:26487406
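To put the quoted pressure on the more familiar dB SPL scale, the standard conversion (re 20 µPa) gives roughly 49-50 dB SPL, which is what makes the "well within elephant hearing" statement concrete; this is only the textbook conversion, not an analysis from the paper.

```python
import math

p = 6e-3          # sound pressure at >= 100 km, Pa (value quoted in the abstract)
p_ref = 20e-6     # standard reference pressure, Pa

spl_db = 20 * math.log10(p / p_ref)
print(f"{spl_db:.1f} dB SPL")   # ~49.5 dB SPL, a level elephants can readily detect
```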
Wada, Yoshiro; Nishiike, Suetaka; Kitahara, Tadashi; Yamanaka, Toshiaki; Imai, Takao; Ito, Taeko; Sato, Go; Matsuda, Kazunori; Kitamura, Yoshiaki; Takeda, Noriaki
2016-11-01
The results of repeated snowboard exercises in the virtual reality (VR) world with increasing time lags in trials 3-8 suggest that adaptation to repeated visual-vestibulosomatosensory conflict in the VR world improved dynamic posture control and motor performance in the real world without the development of motion sickness. VR technology was used to examine the effects of repeated snowboard exercise in the VR world, with time lags between the visual scene and body rotation, on head stability and slalom run performance during exercise in healthy subjects. Forty-two healthy young subjects participated in the study. After trials 1 and 2 of snowboard exercise in the VR world without a time lag, trials 3-8 were conducted with time lags of 0.1, 0.2, 0.3, 0.4, 0.5, and 0.6 s, respectively, between board rotation and the computer-generated visual scene. Finally, trial 9 was conducted without a time lag. Head linear accelerations and subjective slalom run performance were evaluated. The standard deviations of head linear accelerations in the inter-aural direction were significantly increased in trial 8, with a time lag of 0.6 s, but significantly decreased in trial 9 without a time lag, compared with those in trial 2 without a time lag. The subjective scores of slalom run performance were significantly decreased in trial 8, with a time lag of 0.6 s, but significantly increased in trial 9 without a time lag, compared with those in trial 2 without a time lag. Motion sickness was not induced in any subject.
Kim, Min-Beom; Choi, Jeesun; Park, Ga Young; Cho, Yang-Sun; Hong, Sung Hwa
2013-01-01
Objectives Our goal was to find the clinical value of cervical vestibular evoked myogenic potential (VEMP) in Ménière's disease (MD) and to evaluate whether the VEMP results can be useful in assessing the stage of MD. Furthermore, we tried to evaluate the clinical effectiveness of VEMP in predicting hearing outcomes. Methods The amplitude, peak latency and interaural amplitude difference (IAD) ratio were obtained using cervical VEMP. The VEMP results of MD were compared with those of normal subjects, and the MD stages were compared with the IAD ratio. Finally, the hearing changes were analyzed according to their VEMP results. Results In clinically definite unilateral MD (n=41), the prevalence of cervical VEMP abnormality in the IAD ratio was 34.1%. When compared with normal subjects (n=33), the VEMP profile of MD patients showed a low amplitude and a similar latency. The mean IAD ratio in MD was 23%, which was significantly different from that of normal subjects (P=0.01). As the stage increased, the IAD ratio significantly increased (P=0.09). After stratification by initial hearing level, stage I and II subjects (hearing threshold, 0-40 dB) with an abnormal IAD ratio showed a decrease in hearing over time compared to those with a normal IAD ratio (P=0.08). Conclusion VEMP parameters have an important clinical role in MD. Especially, the IAD ratio can be used to assess the stage of MD. An abnormal IAD ratio may be used as a predictor of poor hearing outcomes in subjects with early stage MD. PMID:23799160
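The abstract does not spell out how the interaural amplitude difference (IAD) ratio is computed; a commonly used asymmetry formula for cervical VEMP, assumed here for illustration, is 100·|AL − AR|/(AL + AR) applied to the p13-n23 amplitudes.

```python
def iad_ratio(amp_left, amp_right):
    """Interaural amplitude difference (IAD) ratio in percent.

    amp_left / amp_right: p13-n23 peak-to-peak VEMP amplitudes (uV).
    Uses a commonly cited asymmetry formula; the paper's exact definition is assumed.
    """
    return 100.0 * abs(amp_left - amp_right) / (amp_left + amp_right)

# Example: amplitudes of 80 uV and 50 uV give ~23%, roughly the mean reported for the MD group
print(iad_ratio(80.0, 50.0))   # ~23.1
```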
Knight, Richard D
2004-01-01
Limited data are available on the relationship between diplacusis and otoacoustic emissions and sudden hearing threshold changes, and the detail of the mechanism underlying diplacusis is not well understood. Data are presented here from an intensively studied single episode of sudden, non-conductive, mild hearing loss with associated binaural diplacusis, probably due to a viral infection. Treatment with steroids was administered for 1 week. This paper examines the relationships between the hearing loss, diplacusis and otoacoustic emissions during recovery on a day-by-day basis. The hearing thresholds were elevated by up to 20 dB at 4 kHz and above, and there was an interaural pitch difference of up to 12% at 4 and 8 kHz. There was also a frequency-specific change in transient evoked otoacoustic emission (TEOAE) and distortion-product otoacoustic emission (DPOAE) level. DPOAE level was reduced by up to 20 dB, with the greatest change seen when a stimulus with a wide stimulus frequency ratio was used. Frequency shifts in the 2f2-f1 DPOAE fine structure corresponded to changes in the diplacusis. Complete recovery to previous levels was observed for TEOAE, DPOAE and hearing threshold. The diplacusis recovered to within normal limits after 4 weeks. The frequency shift seen in the DPOAE fine structure did not quite resolve, suggesting a very slight permanent change. The time-courses of TEOAE, diplacusis and hearing threshold were significantly different: most notably, the hearing threshold was stable over a period when the diplacusis deteriorated. This suggests that the cochlear mechanisms involved in diplacusis, hearing threshold and OAE may not be identical.
Exploring auditory neglect: Anatomo-clinical correlations of auditory extinction.
Tissieres, Isabel; Crottaz-Herbette, Sonia; Clarke, Stephanie
2018-05-23
The key symptoms of auditory neglect include left extinction on tasks of dichotic and/or diotic listening and rightward shift in locating sounds. The anatomical correlates of the latter are relatively well understood, but no systematic studies have examined auditory extinction. Here, we performed a systematic study of anatomo-clinical correlates of extinction by using dichotic and/or diotic listening tasks. In total, 20 patients with right hemispheric damage (RHD) and 19 with left hemispheric damage (LHD) performed dichotic and diotic listening tasks. Each task consists of the simultaneous presentation of word pairs; in the dichotic task, 1 word is presented to each ear, and in the diotic task, each word is lateralized by means of interaural time differences and presented to one side. RHD was associated with exclusively contralesional extinction in dichotic or diotic listening, whereas in selected cases, LHD led to contra- or ipsilesional extinction. Bilateral symmetrical extinction occurred in RHD or LHD, with dichotic or diotic listening. The anatomical correlates of these extinction profiles offer an insight into the organisation of the auditory and attentional systems. First, left extinction in dichotic versus diotic listening involves different parts of the right hemisphere, which explains the double dissociation between these 2 neglect symptoms. Second, contralesional extinction in the dichotic task relies on homologous regions in either hemisphere. Third, ipsilesional extinction in dichotic listening after LHD was associated with lesions of the intrahemispheric white matter, interrupting callosal fibres outside their midsagittal or periventricular trajectory. Fourth, bilateral symmetrical extinction was associated with large parieto-fronto-temporal LHD or smaller parieto-temporal RHD, which suggests that divided attention, supported by the right hemisphere, and auditory streaming, supported by the left, likely play a critical role. Copyright © 2018. Published by Elsevier Masson SAS.
Noguchi, Yoshihiro; Takahashi, Masatoki; Ito, Taku; Fujikawa, Taro; Kawashima, Yoshiyuki; Kitamura, Ken
2016-10-01
To assess possible delayed recovery of the maximum speech discrimination score (SDS) when the audiometric threshold ceases to change. We retrospectively examined 20 patients with idiopathic sudden sensorineural hearing loss (ISSNHL) (gender: 9 males and 11 females, age: 24-71 years). The findings of pure-tone average (PTA), maximum SDS, auditory brainstem responses (ABRs), and tinnitus handicap inventory (THI) were compared among the three periods of 1-3 months, 6-8 months, and 11-13 months after ISSNHL onset. No significant differences were noted in PTA, whereas an increase of greater than or equal to 10% in maximum SDS was recognized in 9 patients (45%) from the period of 1-3 months to the period of 11-13 months. Four of the 9 patients showed 20% or more recovery of maximum SDS. No significant differences were observed in the interpeak latency difference between waves I and V and the interaural latency difference of wave V in ABRs, whereas an improvement in the THI grade was recognized in 11 patients (55%) from the period of 1-3 months to the period of 11-13 months. The present study suggested the incidence of maximum SDS restoration over 1 year after ISSNHL onset. These findings may be because of the effects of auditory plasticity via the central auditory pathway. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Winters, Bradley D.; Jin, Shan-Xue; Ledford, Kenneth R.
2017-01-01
The principal neurons of the medial superior olive (MSO) encode cues for horizontal sound localization through comparisons of the relative timing of EPSPs. To understand how the timing and amplitude of EPSPs are maintained during propagation in the dendrites, we made dendritic and somatic whole-cell recordings from MSO principal neurons in brain slices from Mongolian gerbils. In somatic recordings, EPSP amplitudes were largely uniform following minimal stimulation of excitatory synapses at visualized locations along the dendrites. Similar results were obtained when excitatory synaptic transmission was eliminated in a low calcium solution and then restored at specific dendritic sites by pairing input stimulation and focal application of a higher calcium solution. We performed dual dendritic and somatic whole-cell recordings to measure spontaneous EPSPs using a dual-channel template-matching algorithm to separate out those events initiated at or distal to the dendritic recording location. Local dendritic spontaneous EPSP amplitudes increased sharply in the dendrite with distance from the soma (length constant, 53.6 μm), but their attenuation during propagation resulted in a uniform amplitude of ∼0.2 mV at the soma. The amplitude gradient of dendritic EPSPs was also apparent in responses to injections of identical simulated excitatory synaptic currents in the dendrites. Compartmental models support the view that these results extensively reflect the influence of dendritic cable properties. With relatively few excitatory axons innervating MSO neurons, the normalization of dendritic EPSPs at the soma would increase the importance of input timing versus location during the processing of interaural time difference cues in vivo. SIGNIFICANCE STATEMENT The neurons of the medial superior olive analyze cues for sound localization by detecting the coincidence of binaural excitatory synaptic inputs distributed along the dendrites. Previous studies have shown that dendritic voltages undergo severe attenuation as they propagate to the soma, potentially reducing the influence of distal inputs. However, using dendritic and somatic patch recordings, we found that dendritic EPSP amplitude increased with distance from the soma, compensating for dendritic attenuation and normalizing EPSP amplitude at the soma. Much of this normalization reflected the influence of dendritic morphology. As different combinations of presynaptic axons may be active during consecutive cycles of sound stimuli, somatic EPSP normalization renders spike initiation more sensitive to synapse timing than dendritic location. PMID:28213442
Effects of Various Architectural Parameters on Six Room Acoustical Measures in Auditoria.
NASA Astrophysics Data System (ADS)
Chiang, Wei-Hwa
The effects of architectural parameters on six room acoustical measures were investigated by means of correlation analyses, factor analyses and multiple regression analyses based on data taken in twenty halls. Architectural parameters were used to estimate acoustical measures taken at individual locations within each room as well as the averages and standard deviations of all measured values in the rooms. The six acoustical measures were Early Decay Time (EDT10), Clarity Index (C80), Overall Level (G), Bass Ratio based on Early Decay Time (BR(EDT)), Treble Ratio based on Early Decay Time (TR(EDT)), and Early Inter-aural Cross Correlation (IACC80). A comprehensive method of quantifying various architectural characteristics of rooms was developed to define a large number of architectural parameters that were hypothesized to affect the acoustical measurements made in the rooms. This study quantitatively confirmed many of the principles used in the design of concert halls and auditoria. Three groups of room architectural parameters, such as those associated with the depth of diffusing surfaces, were significantly correlated with the hall standard deviations of most of the acoustical measures. Significant differences in the statistical relations between architectural parameters and receiver-specific acoustical measures were found between a group of music halls and a group of lecture halls. For example, architectural parameters such as the relative distance from the receiver to the overhead ceiling increased the percentage of the variance of acoustical measures that was explained by Barron's revised theory from approximately 70% to 80% only when data were taken in the group of music halls. This study revealed the major architectural parameters that have strong relations with individual acoustical measures, forming the basis for a more quantitative method for advancing the theoretical design of concert halls and other auditoria. The results of this study provide designers with the information to predict acoustical measures in buildings at very early stages of the design process without using computer models or scale models.
NASA Astrophysics Data System (ADS)
Azarpour, Masoumeh; Enzner, Gerald
2017-12-01
Binaural noise reduction, with applications for instance in hearing aids, has been a very significant challenge. This task relates to the optimal utilization of the available microphone signals for the estimation of the ambient noise characteristics and for the optimal filtering algorithm to separate the desired speech from the noise. The additional requirements of low computational complexity and low latency further complicate the design. A particular challenge results from the desired reconstruction of binaural speech input with spatial cue preservation. The latter essentially diminishes the utility of multiple-input/single-output filter-and-sum techniques such as beamforming. In this paper, we propose a comprehensive and effective signal processing configuration with which most of the aforementioned criteria can be met suitably. This relates especially to the requirement of efficient online adaptive processing for noise estimation and optimal filtering while preserving the binaural cues. Regarding noise estimation, we consider three different architectures: interaural (ITF), cross-relation (CR), and principal-component (PCA) target blocking. An objective comparison with two other noise PSD estimation algorithms demonstrates the superiority of the blocking-based noise estimators, especially the CR-based and ITF-based blocking architectures. Moreover, we present a new noise reduction filter based on minimum mean-square error (MMSE), which belongs to the class of common gain filters, hence being rigorous in terms of spatial cue preservation but also efficient and competitive for the acoustic noise reduction task. A formal real-time subjective listening test procedure is also developed in this paper. The proposed listening test enables a real-time assessment of the proposed computationally efficient noise reduction algorithms in a realistic acoustic environment, e.g., considering time-varying room impulse responses and the Lombard effect. The listening test outcome reveals that the signals processed by the blocking-based algorithms are significantly preferred over the noisy signal in terms of instantaneous noise attenuation. Furthermore, the listening test data analysis confirms the conclusions drawn based on the objective evaluation.
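The "common gain" idea, which is what makes such a filter rigorous about binaural-cue preservation, can be sketched as follows: estimate one real-valued spectral gain per time-frequency bin (here a simple Wiener-type gain from a given noise PSD estimate) and apply it identically to the left and right channels. This is only a schematic of the common-gain principle; the paper's target-blocking noise estimators and MMSE filter details are not reproduced, and the binaural PSD combination rule is an assumption.

```python
import numpy as np
from scipy.signal import stft, istft

def common_gain_denoise(left, right, noise_psd, fs, nfft=512):
    """Apply one real-valued gain per time-frequency bin to BOTH channels.

    noise_psd: estimated noise power per frequency bin (e.g., from a target-blocking
    branch); how it is obtained is outside this sketch.
    """
    f, frames, L = stft(left, fs, nperseg=nfft)
    _, _, R = stft(right, fs, nperseg=nfft)

    # Wiener-type gain from the binaurally averaged periodogram (assumed combination rule)
    noisy_psd = 0.5 * (np.abs(L) ** 2 + np.abs(R) ** 2)
    speech_psd = np.maximum(noisy_psd - noise_psd[:, None], 0.0)
    gain = speech_psd / (speech_psd + noise_psd[:, None] + 1e-12)

    _, left_hat = istft(gain * L, fs, nperseg=nfft)
    _, right_hat = istft(gain * R, fs, nperseg=nfft)
    return left_hat, right_hat   # interaural level and phase relations are preserved by construction
```

Because the gain is real-valued and identical across channels, the per-bin interaural level and phase differences of the output equal those of the input, which is the spatial-cue-preservation property discussed above.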
Effects of Telephone Ring on Two Mental Tasks Relative to AN Office
NASA Astrophysics Data System (ADS)
Mouri, K.; Akiyama, K.; Ando, Y.
2001-03-01
In many cases there are numerous noise sources in an office, and telephone ringing in particular often irritates office workers. Effects of aircraft noise on the mental work of pupils were reported by Ando et al. [1]. Despite these serious effects, it has not yet been established how the physical parameters of the waveform influence the perception of noise. The purpose of this study is to investigate the effects of telephone ringing on two mental tasks. This investigation is based on the human auditory-brain model consisting of the auto-correlation function (ACF) of the sound source, the interaural cross-correlation function (IACF) for sound signals arriving at the two ears, and the specialization of the cerebral hemispheres. Under the stimulus of a telephone ringing, an adding task and a drawing task were performed. Results show that telephone ringing influences the two tasks differently: the V-type relaxation was observed only during the drawing task. This suggests that the interference effect between the drawing task and the noise may occur in the right hemisphere.
Auditory brainstem response in neonates: influence of gender and weight/gestational age ratio
Angrisani, Rosanna M. Giaffredo; Bautzer, Ana Paula D.; Matas, Carla Gentile; de Azevedo, Marisa Frasson
2013-01-01
OBJECTIVE: To investigate the influence of gender and weight/gestational age ratio on the Auditory Brainstem Response (ABR) in preterm (PT) and term (T) newborns. METHODS: 176 newborns were evaluated by ABR; 88 were preterm infants - 44 females (22 small and 22 appropriate for gestational age) and 44 males (22 small and 22 appropriate for gestational age). The preterm infants were compared to 88 term infants - 44 females (22 small and 22 appropriate for gestational age) and 44 males (22 small and 22 appropriate for gestational age). All newborns had bilateral presence of transient otoacoustic emissions and type A tympanometry. RESULTS: No interaural differences were found. ABR response did not differentiate newborns regarding weight/gestational age in males and females. Term newborn females showed statistically shorter absolute latencies (except on wave I) than males. This finding did not occur in preterm infants, who had longer latencies than term newborns, regardless of gender. CONCLUSIONS: Gender and gestational age influence term infants' ABR, with lower responses in females. The weight/gestational age ratio did not influence ABR response in either group. PMID:24473955
The Usefulness of Rectified VEMP.
Lee, Kang Jin; Kim, Min Soo; Son, Eun Jin; Lim, Hye Jin; Bang, Jung Hwan; Kang, Jae Goo
2008-09-01
For a reliable interpretation of left-right difference in Vestibular evoked myogenic potential (VEMP), the amount of sternocleidomastoid muscle (SCM) contraction has to be considered. This ensures that a difference in amplitude between the right and left VEMPs in a patient is due to vestibular abnormality, not to individual differences in tonic muscle activity, fatigue or improper positioning. We used rectification to normalize the electromyograph (EMG) based on pre-stimulus EMG activity. This study was designed to evaluate and compare the effect of rectification in two conventional ways of SCM contraction. Twenty-two normal subjects were included. Two methods were employed for SCM contraction in a subject. First, subjects were made to lie flat on their back, lifting the head off the table and turning to the opposite side. Secondly, subjects pushed with their jaw against a hand-held inflated cuff to generate a cuff pressure of 40 mmHg. From the VEMP graphs, amplitude parameters and inter-aural difference ratio (IADR) were analyzed before and after EMG rectification. Before the rectification, the average IADR of the first method was not statistically different from that of the second method. The average IADRs from each method decreased in a rectified response, showing significant reduction in asymmetry ratio. The lowest average IADR could be obtained with the combination of both the first method and rectification. Rectified data show more reliable IADR and may help diagnose some vestibular disorders according to amplitude-associated parameters. The usage of rectification can be maximized with the proper SCM contraction method.
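The normalization step can be sketched as follows, assuming (since the abstract does not give the exact formula) that the response amplitude is divided by the mean rectified pre-stimulus EMG; the window lengths and the crude peak-picking are illustrative only.

```python
import numpy as np

def rectified_vemp_amplitude(trace_uv, fs, stim_onset_s, pre_window_s=0.020):
    """Normalize a VEMP p13-n23 amplitude by the mean rectified pre-stimulus EMG.

    trace_uv: averaged EMG trace in microvolts. Dividing by the mean rectified
    background EMG is one common convention; the paper's exact procedure is assumed.
    """
    onset = int(stim_onset_s * fs)
    pre = np.abs(trace_uv[max(0, onset - int(pre_window_s * fs)):onset])
    background = pre.mean()                              # tonic SCM activity level

    post = trace_uv[onset:onset + int(0.050 * fs)]       # 0-50 ms post-stimulus window
    p13_n23 = post.max() - post.min()                    # crude peak-to-peak amplitude
    return p13_n23 / background                          # unitless, comparable across ears
```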
Spike-frequency adaptation in the inferior colliculus.
Ingham, Neil J; McAlpine, David
2004-02-01
We investigated spike-frequency adaptation of neurons sensitive to interaural phase disparities (IPDs) in the inferior colliculus (IC) of urethane-anesthetized guinea pigs using a stimulus paradigm designed to exclude the influence of adaptation below the level of binaural integration. The IPD-step stimulus consists of a binaural 3,000-ms tone, in which the first 1,000 ms is held at a neuron's least favorable ("worst") IPD, adapting out monaural components, before being stepped rapidly to a neuron's most favorable ("best") IPD for 300 ms. After some variable interval (1-1,000 ms), IPD is again stepped to the best IPD for 300 ms, before being returned to a neuron's worst IPD for the remainder of the stimulus. Exponential decay functions fitted to the response to best-IPD steps revealed an average adaptation time constant of 52.9 +/- 26.4 ms. Recovery from adaptation to best IPD steps showed an average time constant of 225.5 +/- 210.2 ms. Recovery time constants were not correlated with adaptation time constants. During the recovery period, adaptation to a 2nd best-IPD step followed similar kinetics to adaptation during the 1st best-IPD step. The mean adaptation time constant at stimulus onset (at worst IPD) was 34.8 +/- 19.7 ms, similar to the 38.4 +/- 22.1 ms recorded to contralateral stimulation alone. Individual time constants after stimulus onset were correlated with each other but not with time constants during the best-IPD step. We conclude that such binaurally derived measures of adaptation reflect processes that occur above the level of exclusively monaural pathways, and subsequent to the site of primary binaural interaction.
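The adaptation and recovery time constants reported above come from single-exponential fits to the binned discharge rate; a minimal sketch of such a fit (not the authors' exact procedure) is shown below.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(t, r0, tau, r_inf):
    """Single-exponential adaptation: rate decays from r0 + r_inf toward r_inf."""
    return r_inf + r0 * np.exp(-t / tau)

def fit_adaptation(t_ms, rate_sp_s):
    """Fit the PSTH during the best-IPD step; returns (r0, tau_ms, r_inf)."""
    p0 = (rate_sp_s.max() - rate_sp_s.min(), 50.0, rate_sp_s.min())  # tau guess ~50 ms
    popt, _ = curve_fit(exp_decay, t_ms, rate_sp_s, p0=p0, maxfev=10000)
    return popt

# Synthetic check: recover a ~53 ms time constant from noisy binned rates
t = np.arange(0, 300, 5.0)                      # 5-ms bins over the 300-ms step
rates = exp_decay(t, 80.0, 53.0, 20.0) + np.random.normal(0, 3.0, t.size)
print(fit_adaptation(t, rates)[1])              # ~53
```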
Tiitinen, Hannu; Salminen, Nelli H; Palomäki, Kalle J; Mäkinen, Ville T; Alku, Paavo; May, Patrick J C
2006-03-20
In an attempt to delineate the assumed 'what' and 'where' processing streams, we studied the processing of spatial sound in the human cortex by using magnetoencephalography in the passive and active recording conditions and two kinds of spatial stimuli: individually constructed, highly realistic spatial (3D) stimuli and stimuli containing interaural time difference (ITD) cues only. The auditory P1m, N1m, and P2m responses of the event-related field were found to be sensitive to the direction of sound source in the azimuthal plane. In general, the right-hemispheric responses to spatial sounds were more prominent than the left-hemispheric ones. The right-hemispheric P1m and N1m responses peaked earlier for sound sources in the contralateral than for sources in the ipsilateral hemifield and the peak amplitudes of all responses reached their maxima for contralateral sound sources. The amplitude of the right-hemispheric P2m response reflected the degree of spatiality of sound, being twice as large for the 3D than ITD stimuli. The results indicate that the right hemisphere is specialized in the processing of spatial cues in the passive recording condition. Minimum current estimate (MCE) localization revealed that temporal areas were activated both in the active and passive condition. This initial activation, taking place at around 100 ms, was followed by parietal and frontal activity at 180 and 200 ms, respectively. The latter activations, however, were specific to attentional engagement and motor responding. This suggests that parietal activation reflects active responding to a spatial sound rather than auditory spatial processing as such.
Nisha, Kavassery Venkateswaran; Kumar, Ajith Uppunda
2017-04-01
Localization involves processing of subtle yet highly enriched monaural and binaural spatial cues. Remediation programs aimed at resolving spatial deficits are surprisingly scanty in the literature. The present study is designed to explore the changes that occur in the spatial performance of normal-hearing listeners before and after subjecting them to a virtual acoustic space (VAS) training paradigm, using behavioral and electrophysiological measures. Ten normal-hearing listeners participated in the study, which was conducted in three phases, including a pre-training, training, and post-training phase. At the pre- and post-training phases both behavioral measures of spatial acuity and electrophysiological P300 were administered. The spatial acuity of the participants was measured in the free field and closed field, in addition to quantifying their binaural processing abilities. The training phase consisted of 5-8 sessions (20 min each) carried out using a hierarchy of graded VAS stimuli. The results obtained from descriptive statistics were indicative of an improvement in all the spatial acuity measures in the post-training phase. Statistically significant changes were noted in interaural time difference (ITD) and virtual acoustic space identification scores measured in the post-training phase. Effect sizes (r) for all of these measures were substantially large, indicating the clinical relevance of these measures in documenting the impact of training. However, the same was not reflected in P300. The training protocol used in the present study proves, on a preliminary basis, to be effective in normal-hearing listeners, and its implications can be extended to other clinical populations as well.
Bremen, Peter; Joris, Philip X
2013-10-30
Interaural time differences (ITDs) are a major cue for localizing low-frequency (<1.5 kHz) sounds. Sensitivity to this cue first occurs in the medial superior olive (MSO), which is thought to perform a coincidence analysis on its monaural inputs. Extracellular single-neuron recordings in MSO are difficult to obtain because (1) MSO action potentials are small and (2) a large field potential locked to the stimulus waveform hampers spike isolation. Consequently, only a limited number of studies report MSO data, and even in these studies data are limited in the variety of stimuli used, in the number of neurons studied, and in spike isolation. More high-quality data are needed to better understand the mechanisms underlying neuronal ITD-sensitivity. We circumvented these difficulties by recording from the axons of MSO neurons in the lateral lemniscus (LL) of the chinchilla, a species with pronounced low-frequency sensitivity. Employing sharp glass electrodes we successfully recorded from neurons with ITD sensitivity: the location, response properties, latency, and spike shape were consistent with an MSO axonal origin. The main difficulty encountered was mechanical stability. We obtained responses to binaural beats and dichotic noise bursts to characterize the best delay versus characteristic frequency distribution, and compared the data to recordings we obtained in the inferior colliculus (IC). In contrast to most reports in other rodents, many best delays were close to zero ITD, both in MSO and IC, with a majority of the neurons recorded in the LL firing maximally within the presumed ethological ITD range.
Effects of hair, clothing, and headgear on localization of three-dimensional sounds Part IIb
NASA Astrophysics Data System (ADS)
Riederer, Klaus A. J.
2003-10-01
Seven 20-25-year-old normal-hearing (≤20 dB HL) native male undergraduates listened twice to treatments of 85 virtual source locations in a large dark anechoic chamber. The 3-D stimuli were newly calculated white noise bursts, amplitude modulated (40-Hz sine), repeated after a pause (total duration 3×275=825 ms), HRTF-convolved and headphone-equalized (Sennheiser HD580). The HRTFs were measured from a Cortex dummy head wearing different garments: 1=alpaca pullover only; 2=1+curly pony-tailed thick hair+eyeglasses; 3=1+long thin hair (ear-covering); 4=1+men's trilby; 5=2+bicycle helmet+jacket [Riederer, J. Acoust. Soc. Am., this issue]. Perceived directions were indicated by placing a tailored digitizer stylus on an illuminated ball that was darkened after the response. Subjects completed the experiments over three days, each consisting of a 2-h session of several randomized sets with multiple breaks. Azimuth and elevation errors were investigated separately in a factorial within-subjects ANOVA, showing strong dependence (p ≤ 0.004) on all main effects and interactions (garment, elevation, azimuth). The grand mean errors were approximately 16°-19°. Confused angles were retained around the ±90° interaural axis, and cos(elevation) weighting was applied to azimuth errors. The total front-back/back-front confusion rate was 18.38% and up-down/down-up 12.21%. The confusions (except left-right/right-left, 2.07%) and reaction times depended strongly on azimuth (main effect) and garment (interaction). [Work supported by the Graduate School of Electronics, Telecommunication and Automation.]
Exploring the additivity of binaural and monaural masking release
Hall, Joseph W.; Buss, Emily; Grose, John H.
2011-01-01
Experiment 1 examined comodulation masking release (CMR) for a 700-Hz tonal signal under conditions of NoSo (noise and signal interaurally in phase) and NoSπ (noise in phase, signal out of phase) stimulation. The baseline stimulus for CMR was either a single 24-Hz wide narrowband noise centered on the signal frequency [on-signal band (OSB)] or the OSB plus a set of flanking noise bands having random envelopes. Masking noise was either gated or continuous. The CMR, defined with respect to either the OSB or the random noise baseline, was smaller for NoSπ than NoSo stimulation, particularly when the masker was continuous. Experiment 2 examined whether the same pattern of results would be obtained for a 2000-Hz signal frequency; the number of flanking bands was also manipulated (two versus eight). Results again showed smaller CMR for NoSπ than NoSo stimulation for both continuous and gated masking noise. The CMR was larger with eight than with two flanking bands, and this difference was greater for NoSo than NoSπ. The results of this study are compatible with serial mechanisms of binaural and monaural masking release, but they indicate that the combined masking release (binaural masking-level difference and CMR) falls short of being additive. PMID:21476663
Vestibular evoked myogenic potential findings in multiple sclerosis.
Escorihuela García, Vicente; Llópez Carratalá, Ignacio; Orts Alborch, Miguel; Marco Algarra, Jaime
2013-01-01
Multiple sclerosis is an inflammatory disease involving the occurrence of demyelinating, chronic neurodegenerative lesions in the central nervous system. We studied vestibular evoked myogenic potentials (VEMPs) in this pathology, as they allow us to evaluate the saccule, inferior vestibular nerve and vestibular-spinal pathway non-invasively. Twenty-three patients diagnosed with multiple sclerosis underwent VEMP recordings, and their results were compared with a control group consisting of 35 healthy subjects. We registered p13 and n23 wave latencies, interaural amplitude difference and asymmetry ratio between both ears. Subjects also underwent an otoscopy and audiometric examination. The prolongation of p13 and n23 wave latencies was the most notable characteristic, with a mean p13 wave latency of 19.53 milliseconds and a mean latency of 30.06 milliseconds for n23. In contrast, the asymmetry index showed no significant differences with our control group. In multiple sclerosis, the prolongation of the p13 and n23 VEMP wave latencies is a feature that has been attributed to slowing of conduction by demyelination of the vestibular-spinal pathway. In this regard, alteration or absence of the response in these potentials has localizing value for injury to the lower brainstem. Copyright © 2013 Elsevier España, S.L. All rights reserved.
Impact of monaural frequency compression on binaural fusion at the brainstem level.
Klauke, Isabelle; Kohl, Manuel C; Hannemann, Ronny; Kornagel, Ulrich; Strauss, Daniel J; Corona-Strauss, Farah I
2015-08-01
A classical objective measure for binaural fusion at the brainstem level is the so-called β-wave of the binaural interaction component (BIC) in the auditory brainstem response (ABR). However, in some cases a reliable detection of this component remains a challenge. In this study, we investigate the wavelet phase synchronization stability (WPSS) of ABR data for the analysis of binaural fusion and compare it to the BIC. In particular, we examine the impact of monaural nonlinear frequency compression on binaural fusion. As the auditory system is tonotopically organized, an interaural frequency mismatch caused by monaural frequency compression could negatively affect binaural fusion. In this study, only a few subjects showed a detectable β-wave, and in most cases only for low ITDs. However, we present a novel objective measure for binaural fusion that outperforms the current state-of-the-art technique (BIC): the WPSS analysis showed a significant difference between the phase stability of the sum of the monaurally evoked responses and the phase stability of the binaurally evoked ABR. This difference could be an indicator for binaural fusion in the brainstem. Furthermore, we observed that monaural frequency compression could indeed affect binaural fusion, as the WPSS results for this condition differ strongly from the results obtained without frequency compression.
Binaural auditory beats affect long-term memory.
Garcia-Argibay, Miguel; Santed, Miguel A; Reales, José M
2017-12-08
The presentation of two pure tones to each ear separately with a slight difference in their frequency results in the perception of a single tone that fluctuates in amplitude at a frequency that equals the difference of interaural frequencies. This perceptual phenomenon is known as binaural auditory beats, and it is thought to entrain electrocortical activity and enhance cognition functions such as attention and memory. The aim of this study was to determine the effect of binaural auditory beats on long-term memory. Participants (n = 32) were kept blind to the goal of the study and performed both the free recall and recognition tasks after being exposed to binaural auditory beats, either in the beta (20 Hz) or theta (5 Hz) frequency bands and white noise as a control condition. Exposure to beta-frequency binaural beats yielded a greater proportion of correctly recalled words and a higher sensitivity index d' in recognition tasks, while theta-frequency binaural-beat presentation lessened the number of correctly remembered words and the sensitivity index. On the other hand, we could not find differences in the conditional probability for recall given recognition between beta and theta frequencies and white noise, suggesting that the observed changes in recognition were due to the recollection component. These findings indicate that the presentation of binaural auditory beats can affect long-term memory both positively and negatively, depending on the frequency used.
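A binaural-beat stimulus of the kind described is straightforward to generate: present a pure tone to one ear and a tone offset by the desired beat frequency to the other. The 400-Hz carrier below is an assumption, since the abstract specifies only the 20-Hz (beta) and 5-Hz (theta) interaural frequency differences and the white-noise control.

```python
import numpy as np

def binaural_beat(beat_hz, carrier_hz=400.0, dur_s=10.0, fs=44100):
    """Stereo tone pair whose interaural frequency difference equals beat_hz.

    carrier_hz is an assumption; the study only specifies the beat frequencies.
    """
    t = np.arange(int(dur_s * fs)) / fs
    left = np.sin(2 * np.pi * carrier_hz * t)
    right = np.sin(2 * np.pi * (carrier_hz + beat_hz) * t)
    return np.column_stack([left, right]).astype(np.float32)

beta_stim = binaural_beat(20.0)   # beta-band condition
theta_stim = binaural_beat(5.0)   # theta-band condition
```

Note that neither ear alone contains any fluctuation at the beat frequency; the perceived 20-Hz or 5-Hz beat arises only from binaural interaction, which is what distinguishes this stimulus from a monaurally amplitude-modulated tone.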
The effect of preterm birth on vestibular evoked myogenic potentials in children.
Eshaghi, Zahra; Jafari, Zahra; Shaibanizadeh, Abdolreza; Jalaie, Shohreh; Ghaseminejad, Azizeh
2014-01-01
Preterm birth is a significant global health problem with serious short- and long-term consequences. This study examined the long-term effects of preterm birth on vestibular evoked myogenic potentials (VEMPs) among preschool-aged children. Thirty-one children with preterm and 20 children with term birth histories aged 5.5 to 6.5 years were studied. Each child underwent VEMP testing using a 500 Hz tone-burst stimulus with a 95 dB nHL (normal hearing level) intensity level. The mean peak latencies of the p13 and n23 waves in the very preterm group were significantly longer than for the full-term group (p ≤ 0.041). There was a significant difference between very and mildly preterm children in the latency of peak p13 (p = 0.003). No significant differences existed between groups for p13-n23 amplitude and the interaural amplitude difference ratio. The tested ear and gender did not affect the results of the test. Prolonged VEMPs in very preterm children may reflect neurodevelopmental impairment and incomplete maturity of the vestibulospinal tract (sacculocollic reflex pathway), especially myelination. VEMP is a non-invasive technique for investigating vestibular function in young children and is considered an appropriate tool for evaluating vestibular impairments at the low brainstem level. It can be used in follow-ups of the long-term effects of preterm birth on the vestibular system.
Do humans show velocity-storage in the vertical rVOR?
Bertolini, G; Bockisch, C J; Straumann, D; Zee, D S; Ramat, S
2008-01-01
To investigate the contribution of the vestibular velocity-storage mechanism (VSM) to the vertical rotational vestibulo-ocular reflex (rVOR) we recorded eye movements evoked by off-vertical axis rotation (OVAR) using whole-body constant-velocity pitch rotations about an earth-horizontal, interaural axis in four healthy human subjects. Subjects were tumbled forward, and backward, at 60 deg/s for over 1 min using a 3D turntable. Slow-phase velocity (SPV) responses were similar to the horizontal responses elicited by OVAR along the body longitudinal axis, ('barbecue' rotation), with exponentially decaying amplitudes and a residual, otolith-driven sinusoidal response with a bias. The time constants of the vertical SPV ranged from 6 to 9 s. These values are closer to those that reflect the dynamic properties of vestibular afferents than the typical 20 s produced by the VSM in the horizontal plane, confirming the relatively smaller contribution of the VSM to these vertical responses. Our preliminary results also agree with the idea that the VSM velocity response aligns with the direction of gravity. The horizontal and torsional eye velocity traces were also sinusoidally modulated by the change in gravity, but showed no exponential decay.
NASA Technical Reports Server (NTRS)
Clement, G.; Moore, S. T.; Raphan, T.; Cohen, B.
2001-01-01
During the 1998 Neurolab mission (STS-90), four astronauts were exposed to interaural and head vertical (dorsoventral) linear accelerations of 0.5 g and 1 g during constant velocity rotation on a centrifuge, both on Earth and during orbital space flight. Subjects were oriented either left-ear-out or right-ear-out (Gy centrifugation), or lay supine along the centrifuge arm with their head off-axis (Gz centrifugation). Pre-flight centrifugation, producing linear accelerations of 0.5 g and 1 g along the Gy (interaural) axis, induced illusions of roll-tilt of 20 degrees and 34 degrees for gravito-inertial acceleration (GIA) vector tilts of 27 degrees and 45 degrees , respectively. Pre-flight 0.5 g and 1 g Gz (head dorsoventral) centrifugation generated perceptions of backward pitch of 5 degrees and 15 degrees , respectively. In the absence of gravity during space flight, the same centrifugation generated a GIA that was equivalent to the centripetal acceleration and aligned with the Gy or Gz axes. Perception of tilt was underestimated relative to this new GIA orientation during early in-flight Gy centrifugation, but was close to the GIA after 16 days in orbit, when subjects reported that they felt as if they were 'lying on side'. During the course of the mission, inflight roll-tilt perception during Gy centrifugation increased from 45 degrees to 83 degrees at 1 g and from 42 degrees to 48 degrees at 0.5 g. Subjects felt 'upside-down' during in-flight Gz centrifugation from the first in-flight test session, which reflected the new GIA orientation along the head dorsoventral axis. The different levels of in-flight tilt perception during 0.5 g and 1 g Gy centrifugation suggests that other non-vestibular inputs, including an internal estimate of the body vertical and somatic sensation, were utilized in generating tilt perception. Interpretation of data by a weighted sum of body vertical and somatic vectors, with an estimate of the GIA from the otoliths, suggests that perception weights the sense of the body vertical more heavily early in-flight, that this weighting falls during adaptation to microgravity, and that the decreased reliance on the body vertical persists early post-flight, generating an exaggerated sense of tilt. Since graviceptors respond to linear acceleration and not to head tilt in orbit, it has been proposed that adaptation to weightlessness entails reinterpretation of otolith activity, causing tilt to be perceived as translation. Since linear acceleration during in-flight centrifugation was always perceived as tilt, not translation, the findings do not support this hypothesis.
Heffner, Henry E; Heffner, Rickye S
2018-01-01
Branstetter and his colleagues present the audiograms of eight killer whales and provide a comprehensive review of previous killer whale audiograms. In their paper, they say that the present authors have reported a relationship between size and high-frequency hearing but that echolocating cetaceans might be a special case. The purpose of these comments is to clarify that the relationship of a species' high-frequency hearing is not to its size (mass) but to its "functional interaural distance" (a measure of the availability of sound-localization cues). Moreover, it has previously been noted that echolocating animals, cetaceans as well as bats, have extended their high-frequency hearing somewhat beyond the frequencies used by comparable non-echolocators for passive localization.
Figure-ground in dichotic tasks and its relation to untrained skills.
Cibian, Aline Priscila; Pereira, Liliane Desgualdo
2015-01-01
To evaluate the effectiveness of auditory training in a dichotic task and to compare the responses of trained skills with those of untrained skills after 4-8 weeks. Nineteen subjects, aged 12-15 years, underwent auditory training based on the dichotic interaural intensity difference (DIID), organized in eight sessions, each lasting 50 min. The assessment of auditory processing was conducted in three stages: before the intervention, in the middle of the training, and at the end of the training. Data from this evaluation were analyzed by disorder group, according to the changes in the auditory processes evaluated: selective attention and temporal processing. The groups were named the selective attention group (SAG), the temporal processing group (TPG), and, when both processes were affected, the selective attention and temporal processing group (SATPG). The training improved both the trained and the untrained closing skill, normalizing all individuals. Untrained solving and temporal ordering skills did not reach normality for the SATPG and TPG. Individuals reached normality for the trained figure-ground skill and for the untrained closing skill. The untrained solving and temporal ordering skills improved in some individuals but failed to reach normality.
Detecting and Quantifying Topography in Neural Maps
Yarrow, Stuart; Razak, Khaleel A.; Seitz, Aaron R.; Seriès, Peggy
2014-01-01
Topographic maps are an often-encountered feature in the brains of many species, yet there are no standard, objective procedures for quantifying topography. Topographic maps are typically identified and described subjectively, but in cases where the scale of the map is close to the resolution limit of the measurement technique, identifying the presence of a topographic map can be a challenging subjective task. In such cases, an objective topography detection test would be advantageous. To address these issues, we assessed seven measures (Pearson distance correlation, Spearman distance correlation, Zrehen's measure, topographic product, topological correlation, path length and wiring length) by quantifying topography in three classes of cortical map model: linear, orientation-like, and clusters. We found that all but one of these measures were effective at detecting statistically significant topography even in weakly-ordered maps, based on simulated noisy measurements of neuronal selectivity and sparse sampling of the maps. We demonstrate the practical applicability of these measures by using them to examine the arrangement of spatial cue selectivity in pallid bat A1. This analysis shows that significantly topographic arrangements of interaural intensity difference and azimuth selectivity exist at the scale of individual binaural clusters. PMID:24505279
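As an illustration of the simplest of the seven measures, a Pearson distance correlation with a permutation test can be sketched as below: correlate pairwise anatomical distances with pairwise differences in preferred stimulus value and compare the observed correlation against shuffled maps. This follows the general logic described in the abstract, not the authors' published implementation.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr

def distance_correlation_topography(positions, preferences, n_perm=1000, seed=0):
    """Pearson correlation between pairwise cortical distances and pairwise
    preference differences, with a permutation p-value (a sketch of one of the
    seven measures, not the authors' code)."""
    rng = np.random.default_rng(seed)
    positions = np.asarray(positions, dtype=float)       # (n_neurons, 2) recording coordinates
    preferences = np.asarray(preferences, dtype=float)   # (n_neurons,) preferred stimulus values

    d_pos = pdist(positions)                             # anatomical distances
    d_pref = pdist(preferences.reshape(-1, 1))           # feature-space distances
    r_obs, _ = pearsonr(d_pos, d_pref)

    null = []
    for _ in range(n_perm):
        shuffled = rng.permutation(preferences)
        null.append(pearsonr(d_pos, pdist(shuffled.reshape(-1, 1)))[0])
    p = np.mean(np.array(null) >= r_obs)                 # one-sided: topography -> positive r
    return r_obs, p
```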
Binaural model-based dynamic-range compression.
Ernst, Stephan M A; Kortlang, Steffen; Grimm, Giso; Bisitz, Thomas; Kollmeier, Birger; Ewert, Stephan D
2018-01-26
Binaural cues such as interaural level differences (ILDs) are used to organise auditory perception and to segregate sound sources in complex acoustical environments. In bilaterally fitted hearing aids, dynamic-range compression operating independently at each ear potentially alters these ILDs, thus distorting binaural perception and sound source segregation. A binaurally-linked model-based fast-acting dynamic compression algorithm designed to approximate the normal-hearing basilar membrane (BM) input-output function in hearing-impaired listeners is suggested. A multi-center evaluation in comparison with an alternative binaural and two bilateral fittings was performed to assess the effect of binaural synchronisation on (a) speech intelligibility and (b) perceived quality in realistic conditions. 30 and 12 hearing impaired (HI) listeners were aided individually with the algorithms for both experimental parts, respectively. A small preference towards the proposed model-based algorithm in the direct quality comparison was found. However, no benefit of binaural-synchronisation regarding speech intelligibility was found, suggesting a dominant role of the better ear in all experimental conditions. The suggested binaural synchronisation of compression algorithms showed a limited effect on the tested outcome measures, however, linking could be situationally beneficial to preserve a natural binaural perception of the acoustical environment.
Interaction of Object Binding Cues in Binaural Masking Pattern Experiments.
Verhey, Jesko L; Lübken, Björn; van de Par, Steven
2016-01-01
Object binding cues such as binaural and across-frequency modulation cues are likely to be used by the auditory system to separate sounds from different sources in complex auditory scenes. The present study investigates the interaction of these cues in a binaural masking pattern paradigm where a sinusoidal target is masked by a narrowband noise. It was hypothesised that beating between signal and masker may contribute to signal detection when signal and masker do not spectrally overlap but that this cue could not be used in combination with interaural cues. To test this hypothesis an additional sinusoidal interferer was added to the noise masker with a lower frequency than the noise whereas the target had a higher frequency than the noise. Thresholds increase when the interferer is added. This effect is largest when the spectral interferer-masker and masker-target distances are equal. The result supports the hypothesis that modulation cues contribute to signal detection in the classical masking paradigm and that these are analysed with modulation bandpass filters. A monaural model including an across-frequency modulation process is presented that accounts for this effect. Interestingly, the interferer also affects dichotic thresholds, indicating that modulation cues also play a role in binaural processing.
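The hypothesised beating cue can be made concrete with a small sketch: adding a tone above a narrowband noise produces envelope fluctuations at the tone-noise frequency offsets, which show up as a component in the modulation spectrum of the Hilbert envelope. Frequencies, bandwidth, and levels below are illustrative and do not reproduce the experiment's exact stimuli.

```python
import numpy as np
from scipy.signal import hilbert

fs, dur = 48000, 0.5
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(1)

# Narrowband masker centred on 700 Hz (illustrative 20-Hz bandwidth)
noise = rng.normal(size=t.size)
spec = np.fft.rfft(noise)
f = np.fft.rfftfreq(t.size, 1 / fs)
spec[(f < 690) | (f > 710)] = 0
masker = np.fft.irfft(spec, t.size)
masker /= np.sqrt(np.mean(masker ** 2))

target = 0.3 * np.sin(2 * np.pi * 760 * t)        # tone ~60 Hz above the noise band

env = np.abs(hilbert(masker + target))            # Hilbert envelope of the mixture
mod_spec = np.abs(np.fft.rfft(env - env.mean()))
mod_f = np.fft.rfftfreq(env.size, 1 / fs)
band = (mod_f > 30) & (mod_f < 200)
print(mod_f[band][np.argmax(mod_spec[band])])     # strongest component near the 50-70 Hz beat region
```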
Monaural and binaural processing of complex waveforms
NASA Astrophysics Data System (ADS)
Trahiotis, Constantine; Bernstein, Leslie R.
1992-01-01
Our research concerned the ways in which the monaural and binaural auditory systems process information in complex sounds. Substantial progress was made in three areas, consistent with the objectives outlined in the original proposal. (1) New electronic equipment, including a NeXT computer, was purchased, installed and interfaced with the existing laboratory. Software was developed for generating the necessary complex digital stimuli and for running behavioral experiments utilizing those stimuli. (2) Monaural experiments showed that the CMR is not obtained successively and is reduced or non-existent when the flanking bands are pulsed rather than presented continuously. Binaural investigations revealed that the detectability of a tonal target in a masking level difference paradigm could be degraded by the presence of a spectrally remote interfering tone. (3) In collaboration with Dr. Richard Stern, theoretical efforts included the explication and evaluation of a weighted-image model of binaural hearing, attempts to extend the Stern-Colburn position-variable model to account for many crucial lateralization and localization data gathered over the past 50 years, and the continuation of efforts to incorporate into a general model notions that lateralization and localization of spectrally-rich stimuli depend upon the patterns of neural activity within a plane defined by frequency and interaural delay.
A novel procedure for examining pre-lexical phonetic-level analysis
NASA Astrophysics Data System (ADS)
Bashford, James A.; Warren, Richard M.; Lenz, Peter W.
2005-09-01
A recorded word repeated over and over is heard to undergo a series of illusory changes (verbal transformations) to other syllables and words in the listener's lexicon. When a second image of the same repeating word is added through dichotic presentation (with an interaural delay preventing fusion), the two distinct lateralized images of the word undergo independent illusory transformations at the same rate observed for a single image [Lenz et al., J. Acoust. Soc. Am. 107, 2857 (2000)]. However, when the contralateral word differs by even one phoneme, transformation rate decreases dramatically [Bashford et al., J. Acoust. Soc. Am. 110, 2658 (2001)]. This suppression of transformations did not occur when a nonspeech competitor was employed. The present study found that dichotic suppression of transformation rate also is independent of the top-down influence of a verbal competitor's word frequency, neighborhood density, and lexicality. However, suppression did increase with the extent of feature mismatch at a given phoneme position (e.g., transformations for "dark" were suppressed more by contralateral "hark" than by "bark"). These and additional findings indicate that dichotic verbal transformations can provide experimental access to a pre-lexical phonetic analysis normally obscured by subsequent processing. [Work supported by NIH.]
Winters, Bradley D; Jin, Shan-Xue; Ledford, Kenneth R; Golding, Nace L
2017-03-22
The principal neurons of the medial superior olive (MSO) encode cues for horizontal sound localization through comparisons of the relative timing of EPSPs. To understand how the timing and amplitude of EPSPs are maintained during propagation in the dendrites, we made dendritic and somatic whole-cell recordings from MSO principal neurons in brain slices from Mongolian gerbils. In somatic recordings, EPSP amplitudes were largely uniform following minimal stimulation of excitatory synapses at visualized locations along the dendrites. Similar results were obtained when excitatory synaptic transmission was eliminated in a low calcium solution and then restored at specific dendritic sites by pairing input stimulation and focal application of a higher calcium solution. We performed dual dendritic and somatic whole-cell recordings to measure spontaneous EPSPs using a dual-channel template-matching algorithm to separate out those events initiated at or distal to the dendritic recording location. Local dendritic spontaneous EPSP amplitudes increased sharply in the dendrite with distance from the soma (length constant, 53.6 μm), but their attenuation during propagation resulted in a uniform amplitude of ∼0.2 mV at the soma. The amplitude gradient of dendritic EPSPs was also apparent in responses to injections of identical simulated excitatory synaptic currents in the dendrites. Compartmental models support the view that these results extensively reflect the influence of dendritic cable properties. With relatively few excitatory axons innervating MSO neurons, the normalization of dendritic EPSPs at the soma would increase the importance of input timing versus location during the processing of interaural time difference cues in vivo. SIGNIFICANCE STATEMENT: The neurons of the medial superior olive analyze cues for sound localization by detecting the coincidence of binaural excitatory synaptic inputs distributed along the dendrites. Previous studies have shown that dendritic voltages undergo severe attenuation as they propagate to the soma, potentially reducing the influence of distal inputs. However, using dendritic and somatic patch recordings, we found that dendritic EPSP amplitude increased with distance from the soma, compensating for dendritic attenuation and normalizing EPSP amplitude at the soma. Much of this normalization reflected the influence of dendritic morphology. As different combinations of presynaptic axons may be active during consecutive cycles of sound stimuli, somatic EPSP normalization renders spike initiation more sensitive to synapse timing than dendritic location. Copyright © 2017 the authors.
Lane, Courtney C.; Delgutte, Bertrand
2007-01-01
Spatial release from masking (SRM), a factor in listening in noisy environments, is the improvement in auditory signal detection obtained when a signal is separated in space from a masker. To study the neural mechanisms of SRM, we recorded from single units in the inferior colliculus (IC) of barbiturate-anesthetized cats, focusing on low-frequency neurons sensitive to interaural time differences. The stimulus was a broadband chirp train with a 40-Hz repetition rate in continuous broadband noise, and the unit responses were measured for several signal and masker (virtual) locations. Masked thresholds (the lowest signal-to-noise ratio, SNR, for which the signal could be detected for 75% of the stimulus presentations) changed systematically with signal and masker location. Single-unit thresholds did not necessarily improve with signal and masker separation; instead, they tended to reflect the units’ azimuth preference. Both how the signal was detected (through a rate increase or decrease) and how the noise masked the signal response (suppressive or excitatory masking) changed with signal and masker azimuth, consistent with a cross-correlator model of binaural processing. However, additional processing, perhaps related to the signal’s amplitude modulation rate, appeared to influence the units’ responses. The population masked thresholds (the most sensitive unit’s threshold at each signal and masker location) did improve with signal and masker separation as a result of the variety of azimuth preferences in our unit sample. The population thresholds were similar to human behavioral thresholds in both SNR value and shape, indicating that these units may provide a neural substrate for low-frequency SRM. PMID:15857966
Influence of aging on human sound localization
Dobreva, Marina S.; O'Neill, William E.
2011-01-01
Errors in sound localization, associated with age-related changes in peripheral and central auditory function, can pose threats to self and others in a commonly encountered environment such as a busy traffic intersection. This study aimed to quantify the accuracy and precision (repeatability) of free-field human sound localization as a function of advancing age. Head-fixed young, middle-aged, and elderly listeners localized band-passed targets using visually guided manual laser pointing in a darkened room. Targets were presented in the frontal field by a robotically controlled loudspeaker assembly hidden behind a screen. Broadband targets (0.1–20 kHz) activated all auditory spatial channels, whereas low-pass and high-pass targets selectively isolated interaural time and intensity difference cues (ITDs and IIDs) for azimuth and high-frequency spectral cues for elevation. In addition, to assess the upper frequency limit of ITD utilization across age groups more thoroughly, narrowband targets were presented at 250-Hz intervals from 250 Hz up to ∼2 kHz. Young subjects generally showed horizontal overestimation (overshoot) and vertical underestimation (undershoot) of auditory target location, and this effect varied with frequency band. Accuracy and/or precision worsened in older individuals for broadband, high-pass, and low-pass targets, reflective of peripheral but also central auditory aging. In addition, compared with young adults, middle-aged, and elderly listeners showed pronounced horizontal localization deficiencies (imprecision) for narrowband targets within 1,250–1,575 Hz, congruent with age-related central decline in auditory temporal processing. Findings underscore the distinct neural processing of the auditory spatial cues in sound localization and their selective deterioration with advancing age. PMID:21368004
Exploring the additivity of binaural and monaural masking release.
Hall, Joseph W; Buss, Emily; Grose, John H
2011-04-01
Experiment 1 examined comodulation masking release (CMR) for a 700-Hz tonal signal under conditions of N(o)S(o) (noise and signal interaurally in phase) and N(o)S(π) (noise in phase, signal out of phase) stimulation. The baseline stimulus for CMR was either a single 24-Hz-wide narrowband noise centered on the signal frequency [on-signal band (OSB)] or the OSB plus a set of flanking noise bands having random envelopes. Masking noise was either gated or continuous. The CMR, defined with respect to either the OSB or the random noise baseline, was smaller for N(o)S(π) than N(o)S(o) stimulation, particularly when the masker was continuous. Experiment 2 examined whether the same pattern of results would be obtained for a 2000-Hz signal frequency; the number of flanking bands was also manipulated (two versus eight). Results again showed smaller CMR for N(o)S(π) than N(o)S(o) stimulation for both continuous and gated masking noise. The CMR was larger with eight than with two flanking bands, and this difference was greater for N(o)S(o) than N(o)S(π). The results of this study are compatible with serial mechanisms of binaural and monaural masking release, but they indicate that the combined masking release (binaural masking-level difference and CMR) falls short of being additive.
Frey, Johannes Daniel; Wendt, Mike; Löw, Andreas; Möller, Stephan; Zölzer, Udo; Jacobsen, Thomas
2017-02-15
Changes in room acoustics provide important clues about the environment of sound source-perceiver systems, for example, by indicating changes in the reflecting characteristics of surrounding objects. To study the detection of auditory irregularities brought about by a change in room acoustics, a passive oddball protocol with participants watching a movie was applied in this study. Acoustic stimuli were presented via headphones. Standards and deviants were created by modelling rooms of different sizes, keeping the values of the basic acoustic dimensions (e.g., frequency, duration, sound pressure, and sound source location) as constant as possible. In the first experiment, each standard and deviant stimulus consisted of sequences of three short sounds derived from sinusoidal tones, resulting in three onsets during each stimulus. Deviant stimuli elicited a Mismatch Negativity (MMN) as well as two additional negative deflections corresponding to the three onset peaks. In the second experiment, only one sound was used; the stimuli were otherwise identical to the ones used in the first experiment. Again, an MMN was observed, followed by an additional negative deflection. These results provide further support for the hypothesis of automatic detection of unattended changes in room acoustics, extending previous work by demonstrating the elicitation of an MMN by changes in room acoustics. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Binaural Pitch Fusion in Bilateral Cochlear Implant Users.
Reiss, Lina A J; Fowler, Jennifer R; Hartling, Curtis L; Oh, Yonghee
Binaural pitch fusion is the fusion of stimuli that evoke different pitches between the ears into a single auditory image. Individuals who use hearing aids or bimodal cochlear implants (CIs) experience abnormally broad binaural pitch fusion, such that sounds differing in pitch by as much as 3-4 octaves are fused across ears, leading to spectral averaging and speech perception interference. The goal of this study was to determine if adult bilateral CI users also experience broad binaural pitch fusion. Stimuli were pulse trains delivered to individual electrodes. Fusion ranges were measured using simultaneous, dichotic presentation of reference and comparison stimuli in opposite ears, and varying the comparison stimulus to find the range that fused with the reference stimulus. Bilateral CI listeners had binaural pitch fusion ranges varying from 0 to 12 mm (average 6.1 ± 3.9 mm), where 12 mm indicates fusion over all electrodes in the array. No significant correlations of fusion range were observed with any subject factors related to age, hearing loss history, or hearing device history, or with any electrode factors including interaural electrode pitch mismatch, pitch match bandwidth, or within-ear electrode discrimination abilities. Bilateral CI listeners have abnormally broad fusion, similar to hearing aid and bimodal CI listeners. This broad fusion may explain the variability of binaural benefits for speech perception in quiet and in noise in bilateral CI users.
NASA Technical Reports Server (NTRS)
Angelaki, D. E.; Hess, B. J.
1996-01-01
1. The dynamic properties of otolith-ocular reflexes elicited by sinusoidal linear acceleration along the three cardinal head axes were studied during off-vertical axis rotations in rhesus monkeys. As the head rotates in space at constant velocity about an off-vertical axis, otolith-ocular reflexes are elicited in response to the sinusoidally varying linear acceleration (gravity) components along the interaural, nasooccipital, or vertical head axis. Because the frequency of these sinusoidal stimuli is proportional to the velocity of rotation, rotation at low and moderately fast speeds allows the study of the mid- and low-frequency dynamics of these otolith-ocular reflexes. 2. Animals were rotated in complete darkness in the yaw, pitch, and roll planes at velocities ranging between 7.4 and 184 degrees/s. Accordingly, otolith-ocular reflexes (manifested as sinusoidal modulations in eye position and/or slow-phase eye velocity) were quantitatively studied for stimulus frequencies ranging between 0.02 and 0.51 Hz. During yaw and roll rotation, torsional, vertical, and horizontal slow-phase eye velocity was sinusoidally modulated as a function of head position. The amplitudes of these responses were symmetric for rotations in opposite directions. In contrast, mainly vertical slow-phase eye velocity was modulated during pitch rotation. This modulation was asymmetric for rotations in opposite directions. 3. Each of these response components in a given rotation plane could be associated with an otolith-ocular response vector whose sensitivity, temporal phase, and spatial orientation were estimated on the basis of the amplitude and phase of sinusoidal modulations during both directions of rotation. Based on this analysis, which was performed either for slow-phase eye velocity alone or for total eye excursion (including both slow and fast eye movements), two distinct response patterns were observed: 1) response vectors with pronounced dynamics and spatial/temporal properties that could be characterized as the low-frequency range of "translational" otolith-ocular reflexes; and 2) response vectors associated with an eye position modulation in phase with head position ("tilt" otolith-ocular reflexes). 4. The responses associated with two otolith-ocular vectors with pronounced dynamics consisted of horizontal eye movements evoked as a function of gravity along the interaural axis and vertical eye movements elicited as a function of gravity along the vertical head axis. Both responses were characterized by a slow-phase eye velocity sensitivity that increased three- to five-fold and large phase changes of approximately 100-180 degrees between 0.02 and 0.51 Hz. These dynamic properties could suggest nontraditional temporal processing in utriculoocular and sacculoocular pathways, possibly involving spatiotemporal otolith-ocular interactions. 5. The two otolith-ocular vectors associated with eye position responses in phase with head position (tilt otolith-ocular reflexes) consisted of torsional eye movements in response to gravity along the interaural axis, and vertical eye movements in response to gravity along the nasooccipital head axis. These otolith-ocular responses did not result from an otolithic effect on slow eye movements alone. Particularly at high frequencies (i.e., high speed rotations), saccades were responsible for most of the modulation of torsional and vertical eye position, which was relatively large (on average +/- 8-10 degrees/g) and remained independent of frequency.
Such reflex dynamics can be simulated by a direct coupling of primary otolith afferent inputs to the oculomotor plant. (ABSTRACT TRUNCATED).
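The sensitivity and phase of each response vector are obtained from the sinusoidal modulation of eye velocity (or position) as a function of head position. A standard way to extract such parameters is a linear least-squares fit of a single sinusoid, sketched below on synthetic data; the stimulus values and noise level are assumptions for illustration, not the recorded data.

```python
import numpy as np

# Synthetic example: slow-phase eye velocity sampled against head position (deg).
rng = np.random.default_rng(0)
head_pos_deg = np.linspace(0, 360, 200, endpoint=False)
true_amp, true_phase_deg, true_offset = 8.0, 40.0, -2.0
eye_vel = (true_offset
           + true_amp * np.sin(np.deg2rad(head_pos_deg + true_phase_deg))
           + rng.normal(0, 1.0, head_pos_deg.size))

# Linear least squares for v = a*sin(theta) + b*cos(theta) + c.
theta = np.deg2rad(head_pos_deg)
X = np.column_stack([np.sin(theta), np.cos(theta), np.ones_like(theta)])
(a, b, c), *_ = np.linalg.lstsq(X, eye_vel, rcond=None)

amp = np.hypot(a, b)                   # modulation amplitude (sensitivity, once
phase = np.rad2deg(np.arctan2(b, a))   # normalized by the stimulus acceleration)
print(f"fitted amplitude {amp:.2f} deg/s, phase {phase:.1f} deg, offset {c:.2f} deg/s")
```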
Hamlet, William R.; Lu, Yong
2016-01-01
Intrinsic plasticity has emerged as an important mechanism regulating neuronal excitability and output under physiological and pathological conditions. Here, we report a novel form of intrinsic plasticity. Using perforated patch clamp recordings, we examined the modulatory effects of group II metabotropic glutamate receptors (mGluR II) on voltage-gated potassium (KV) currents and the firing properties of neurons in the chicken nucleus laminaris (NL), the first central auditory station where interaural time cues are analyzed for sound localization. We found that activation of mGluR II by synthetic agonists resulted in a selective increase of the high threshold KV currents. More importantly, synaptically released glutamate (with reuptake blocked) also enhanced the high threshold KV currents. The enhancement was frequency-coding region dependent, being more pronounced in low frequency neurons compared to middle and high frequency neurons. The intracellular mechanism involved the Gβγ signaling pathway associated with phospholipase C and protein kinase C. The modulation strengthened membrane outward rectification, sharpened action potentials, and improved the ability of NL neurons to follow high frequency inputs. These data suggest that mGluR II provides a feedforward modulatory mechanism that may regulate temporal processing under the condition of heightened synaptic inputs. PMID:26964678
Subjective evaluation and electroacoustic theoretical validation of a new approach to audio upmixing
NASA Astrophysics Data System (ADS)
Usher, John S.
Audio signal processing systems for converting two-channel (stereo) recordings to four or five channels are increasingly relevant. These audio upmixers can be used with conventional stereo sound recordings and reproduced with multichannel home theatre or automotive loudspeaker audio systems to create a more engaging and natural-sounding listening experience. This dissertation discusses existing approaches to audio upmixing for recordings of musical performances and presents specific design criteria for a system to enhance spatial sound quality. A new upmixing system is proposed and evaluated according to these criteria, and a theoretical model for its behavior is validated using empirical measurements. The new system removes short-term correlated components from two electronic audio signals using a pair of adaptive filters, updated according to a frequency-domain implementation of the normalized least-mean-square algorithm. The major difference between the new system and all extant audio upmixers is that unsupervised time-alignment of the input signals (typically by up to +/-10 ms) as a function of frequency (typically using a 1024-band equalizer) is made possible by the non-minimum-phase adaptive filter. Two new signals are created from the weighted difference of the inputs, and are then radiated with two loudspeakers behind the listener. According to the consensus in the literature on the effect of interaural correlation on auditory image formation, the self-orthogonalizing properties of the algorithm ensure minimal distortion of the frontal source imagery and natural-sounding, enveloping reverberance (ambiance) imagery. Performance evaluation of the new upmix system was accomplished in two ways: first, using empirical electroacoustic measurements that validate a theoretical model of the system; and second, with formal listening tests that investigated auditory spatial imagery with a graphical mapping tool and a preference experiment. Both electroacoustic and subjective methods investigated system performance with a variety of test stimuli for solo musical performances reproduced using a loudspeaker in an orchestral concert hall and recorded using different microphone techniques. The objective and subjective evaluations, combined with a comparative study of two commercial systems, demonstrate that the proposed system provides a new, computationally practical, high sound quality solution to upmixing.
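The core idea, removing the short-term correlated component between the two input channels with an adaptive filter updated by a frequency-domain NLMS rule, can be sketched per STFT bin as below. This is a minimal single-tap-per-bin illustration under assumed parameters (frame size, step size), not the dissertation's full non-minimum-phase, 1024-band implementation.

```python
import numpy as np

def fd_nlms_ambience(left, right, nfft=1024, hop=512, mu=0.5, eps=1e-8):
    """Estimate the part of `right` predictable from `left` with a one-tap NLMS
    filter per STFT bin; the residual is a decorrelated 'ambience' signal."""
    win = np.hanning(nfft)
    n_frames = 1 + (len(left) - nfft) // hop
    W = np.zeros(nfft // 2 + 1, dtype=complex)   # one complex weight per bin
    ambience = np.zeros(len(right))
    norm = np.zeros(len(right))
    for m in range(n_frames):
        i = m * hop
        L = np.fft.rfft(win * left[i:i + nfft])
        R = np.fft.rfft(win * right[i:i + nfft])
        E = R - W * L                                        # uncorrelated residual
        W += mu * np.conj(L) * E / (np.abs(L) ** 2 + eps)    # per-bin NLMS update
        e = np.fft.irfft(E, nfft)
        ambience[i:i + nfft] += win * e                      # weighted overlap-add
        norm[i:i + nfft] += win ** 2
    return ambience / np.maximum(norm, eps)

# Toy usage: a common 440-Hz source plus independent noise in each channel.
fs = 48000
t = np.arange(fs) / fs
rng = np.random.default_rng(1)
common = np.sin(2 * np.pi * 440 * t)
left = common + 0.1 * rng.standard_normal(fs)
right = 0.8 * common + 0.1 * rng.standard_normal(fs)
amb = fd_nlms_ambience(left, right)   # the correlated component is largely removed
```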
Owren, M J; Hopp, S L; Sinnott, J M; Petersen, M R
1988-06-01
We investigated the absolute auditory sensitivities of three monkey species (Cercopithecus aethiops, C. neglectus, and Macaca fuscata) and humans (Homo sapiens). Results indicated that species-typical variation exists in these primates. Vervets, which have the smallest interaural distance of the species that we tested, exhibited the greatest high-frequency sensitivity. This result is consistent with Masterton, Heffner, and Ravizza's (1969) observations that head size and high-frequency acuity are inversely correlated in mammals. Vervets were also the most sensitive in the middle frequency range. Furthermore, we found that de Brazza's monkeys, though they produce a specialized, low-pitched boom call, did not show the enhanced low-frequency sensitivity that Brown and Waser (1984) showed for blue monkeys (C. mitis), a species with a similar sound. This discrepancy may be related to differences in the acoustics of the respective habitats of these animals or in the way their boom calls are used. The acuity of Japanese monkeys was found to closely resemble that of rhesus macaques (M. mulatta) that were tested in previous studies. Finally, humans tested in the same apparatus exhibited normative sensitivities. These subjects responded more readily to low frequencies than did the monkeys but rapidly became less sensitive in the high ranges.
The role of off-frequency masking in binaural hearing
Buss, Emily; Hall, Joseph W.
2010-01-01
The present studies examined the binaural masking level difference (MLD) for off-frequency masking. It has been shown previously that the MLD decreases steeply with increasing spectral separation between a pure tone signal and a 10-Hz-wide band of masking noise. Data collected here show that this reduction in the off-frequency MLD as a function of signal/masker separation is comparable at 250 and 2500 Hz, indicating that neither interaural phase cues nor frequency resolution are critical to this finding. The MLD decreases more gradually with spectral separation when the masker is a 250-Hz-wide band of noise, a result that implicates the rate of inherent amplitude modulation of the masker. Thresholds were also measured for a brief signal presented coincident with a local masker modulation minimum or maximum. Sensitivity was better in the minima for all NoSπ and off-frequency NoSo conditions, with little or no effect of signal position for on-frequency NoSo conditions. Taken together, the present results indicate that the steep reduction in the off-frequency MLD for a narrowband noise masker is due at least in part to envelope cues in the NoSo conditions. There was no evidence of a reduction in binaural cue quality for off-frequency masking. PMID:20550265
Strategies to combat auditory overload during vehicular command and control.
Abel, Sharon M; Ho, Geoffrey; Nakashima, Ann; Smith, Ingrid
2014-09-01
Strategies to combat auditory overload were studied. Normal-hearing males were tested in a sound isolated room in a mock-up of a military land vehicle. Two tasks were presented concurrently, in quiet and vehicle noise. For Task 1 dichotic phrases were delivered over a communications headset. Participants encoded only those beginning with a preassigned call sign (Baron or Charlie). For Task 2, they agreed or disagreed with simple equations presented either over loudspeakers, as text on the laptop monitor, in both the audio and the visual modalities, or not at all. Accuracy was significantly better by 20% on Task 2 when the equations were presented visually or audiovisually. Scores were at least 78% correct for dichotic phrases presented over the headset, with a right ear advantage of 7%, given the 5 dB speech-to-noise ratio. The left ear disadvantage was particularly apparent in noise, where the interaural difference was 12%. Relatively lower scores in the left ear, in noise, were observed for phrases beginning with Charlie. These findings underscore the benefit of delivering higher priority communications to the dominant ear, the importance of selecting speech sounds that are resilient to noise masking, and the advantage of using text in cases of degraded audio. Reprint & Copyright © 2014 Association of Military Surgeons of the U.S.
Auditory pathway maturational study in small for gestational age preterm infants.
Angrisani, Rosanna Giaffredo; Diniz, Edna Maria Albuquerque; Guinsburg, Ruth; Ferraro, Alexandre Archanjo; Azevedo, Marisa Frasson de; Matas, Carla Gentile
2014-01-01
To follow up the maturation of the auditory pathway in preterm infants small for gestational age (SGA), through the study of absolute and interpeak latencies of auditory brainstem response (ABR) in the first six months of age. This multicentric prospective cross-sectional and longitudinal study assessed 76 newborn infants, 35 SGA and 41 appropriate for gestational age (AGA), born between 33 and 36 weeks in the first evaluation. The ABR was carried out at three time points (neonatal period, three months, and six months). Twenty-nine SGA and 33 AGA (62 infants), between 51 and 54 weeks (corrected age), returned for the second evaluation. In the third evaluation, 49 infants (23 SGA and 26 AGA), with ages ranging from 63 to 65 weeks (corrected age), were assessed. The bilateral presence of transient evoked otoacoustic emissions and a normal tympanogram were inclusion criteria. Interaural symmetry was found in both groups. The comparison between the two groups throughout the three periods studied showed no significant differences in the ABR parameters, except for the latencies of wave III in the period between three and six months. As for maturation assessed with 0.5- and 1-kHz tone bursts, the groups did not differ. The findings suggest that, in premature infants, the maturational process of the auditory pathway occurs at a similar rate for SGA and AGA. These results also suggest that prematurity is a more relevant factor for the maturation of the auditory pathway than birth weight.
Hughes, Laura E; Rowe, James B; Ghosh, Boyd C P; Carlyon, Robert P; Plack, Christopher J; Gockel, Hedwig E
2014-12-15
Under binaural listening conditions, the detection of target signals within background masking noise is substantially improved when the interaural phase of the target differs from that of the masker. Neural correlates of this binaural masking level difference (BMLD) have been observed in the inferior colliculus and temporal cortex, but it is not known whether degeneration of the inferior colliculus would result in a reduction of the BMLD in humans. We used magnetoencephalography to examine the BMLD in 13 healthy adults and 13 patients with progressive supranuclear palsy (PSP). PSP is associated with severe atrophy of the upper brain stem, including the inferior colliculus, confirmed by voxel-based morphometry of structural MRI. Stimuli comprised in-phase sinusoidal tones presented to both ears at three levels (high, medium, and low) masked by in-phase noise, which rendered the low-level tone inaudible. Critically, the BMLD was measured using a low-level tone presented in opposite phase across ears, making it audible against the noise. The cortical waveforms from bilateral auditory sources revealed significantly larger N1m peaks for the out-of-phase low-level tone compared with the in-phase low-level tone, for both groups, indicating preservation of early cortical correlates of the BMLD in PSP. In PSP a significant delay was observed in the onset of the N1m deflection and the amplitude of the P2m was reduced, but these differences were not restricted to the BMLD condition. The results demonstrate that although PSP causes subtle auditory deficits, binaural processing can survive the presence of significant damage to the upper brain stem. Copyright © 2014 the American Physiological Society.
Correlation Factors Describing Primary and Spatial Sensations of Sound Fields
NASA Astrophysics Data System (ADS)
ANDO, Y.
2002-11-01
The theory of subjective preference of the sound field in a concert hall is established based on a model of the human auditory-brain system. The model consists of the autocorrelation function (ACF) mechanism and the interaural crosscorrelation function (IACF) mechanism for signals arriving at the two ear entrances, and the specialization of human cerebral hemispheres. This theory can be developed to describe primary sensations such as pitch or missing fundamental, loudness, timbre and, in addition, duration sensation, which is introduced here as a fourth. These four primary sensations may be formulated by the temporal factors extracted from the ACF associated with the left hemisphere, and spatial sensations such as localization in the horizontal plane, apparent source width, and subjective diffuseness are described by the spatial factors extracted from the IACF associated with the right hemisphere. Any important subjective responses of sound fields may be described by both temporal and spatial factors.
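The temporal and spatial factors in this framework are derived from the autocorrelation function of each ear signal and the interaural cross-correlation function between them. The sketch below computes a normalized ACF and the IACC (the maximum of the normalized IACF within ±1 ms of interaural delay) from a short synthetic binaural segment; all parameter choices are illustrative, not those of the cited theory.

```python
import numpy as np

def normalized_acf(x):
    """Normalized autocorrelation of a signal segment (lags 0 .. len-1)."""
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    return acf / acf[0]

def iacc(left, right, fs, max_lag_ms=1.0):
    """Interaural cross-correlation coefficient: maximum of the normalized
    interaural cross-correlation over delays of +/- max_lag_ms."""
    l = left - left.mean()
    r = right - right.mean()
    full = np.correlate(l, r, mode="full")
    lags = np.arange(-len(l) + 1, len(r))
    denom = np.sqrt(np.sum(l ** 2) * np.sum(r ** 2))
    sel = np.abs(lags) <= int(round(max_lag_ms * 1e-3 * fs))
    return np.max(full[sel] / denom)

# Toy binaural segment: a 500-Hz tone with a 0.3-ms interaural delay plus noise.
fs = 48000
t = np.arange(int(0.1 * fs)) / fs
rng = np.random.default_rng(2)
left = np.sin(2 * np.pi * 500 * t) + 0.05 * rng.standard_normal(t.size)
right = np.sin(2 * np.pi * 500 * (t - 0.0003)) + 0.05 * rng.standard_normal(t.size)
print("IACC:", round(iacc(left, right, fs), 3))
print("ACF at 2-ms lag:", round(normalized_acf(left)[int(0.002 * fs)], 3))
```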
Subliminal speech perception and auditory streaming.
Dupoux, Emmanuel; de Gardelle, Vincent; Kouider, Sid
2008-11-01
Current theories of consciousness assume a qualitative dissociation between conscious and unconscious processing: while subliminal stimuli only elicit a transient activity, supraliminal stimuli have long-lasting influences. Nevertheless, the existence of this qualitative distinction remains controversial, as past studies confounded awareness and stimulus strength (energy, duration). Here, we used a masked speech priming method in conjunction with a submillisecond interaural delay manipulation to contrast subliminal and supraliminal processing at constant prime, mask and target strength. This delay induced a perceptual streaming effect, with the prime popping out in the supraliminal condition. By manipulating the prime-target interval (ISI), we show a qualitatively distinct profile of priming longevity as a function of prime awareness. While subliminal priming disappeared after half a second, supraliminal priming was independent of ISI. This shows that the distinction between conscious and unconscious processing depends on high-level perceptual streaming factors rather than low-level features (energy, duration).
A model of head-related transfer functions based on a state-space analysis
NASA Astrophysics Data System (ADS)
Adams, Norman Herkamp
This dissertation develops and validates a novel state-space method for binaural auditory display. Binaural displays seek to immerse a listener in a 3D virtual auditory scene with a pair of headphones. The challenge for any binaural display is to compute the two signals to supply to the headphones. The present work considers a general framework capable of synthesizing a wide variety of auditory scenes. The framework models collections of head-related transfer functions (HRTFs) simultaneously. This framework improves the flexibility of contemporary displays, but it also compounds the steep computational cost of the display. The cost is reduced dramatically by formulating the collection of HRTFs in the state-space and employing order-reduction techniques to design efficient approximants. Order-reduction techniques based on the Hankel-operator are found to yield accurate low-cost approximants. However, the inter-aural time difference (ITD) of the HRTFs degrades the time-domain response of the approximants. Fortunately, this problem can be circumvented by employing a state-space architecture that allows the ITD to be modeled outside of the state-space. Accordingly, three state-space architectures are considered. Overall, a multiple-input, single-output (MISO) architecture yields the best compromise between performance and flexibility. The state-space approximants are evaluated both empirically and psychoacoustically. An array of truncated FIR filters is used as a pragmatic reference system for comparison. For a fixed cost bound, the state-space systems yield lower approximation error than FIR arrays for D>10, where D is the number of directions in the HRTF collection. A series of headphone listening tests are also performed to validate the state-space approach, and to estimate the minimum order N of indiscriminable approximants. For D = 50, the state-space systems yield order thresholds less than half those of the FIR arrays. Depending upon the stimulus uncertainty, a minimum state-space order of 7≤N≤23 appears to be adequate. In conclusion, the proposed state-space method enables a more flexible and immersive binaural display with low computational cost.
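The key architectural point, a reduced-order state-space filter for the HRTF response with the ITD handled outside the state-space as a plain delay, can be illustrated with a generic discrete-time state-space simulation. The matrices below are an arbitrary stable low-order stand-in, not a reduced HRTF model obtained by Hankel-norm methods.

```python
import numpy as np

def state_space_filter(A, B, C, D, u):
    """Run a discrete-time state-space system x[n+1] = A x + B u, y[n] = C x + D u."""
    x = np.zeros(A.shape[0])
    y = np.empty(len(u))
    for n, un in enumerate(u):
        y[n] = C @ x + D * un
        x = A @ x + B * un
    return y

def render_one_ear(source, A, B, C, D, itd_samples):
    """Filter the source with the state-space part, then apply the ITD as a
    separate integer-sample delay (modeled outside the state-space)."""
    y = state_space_filter(A, B, C, D, source)
    return np.concatenate([np.zeros(itd_samples), y])[:len(y)]

# Arbitrary stable 2nd-order system as a stand-in for a reduced HRTF approximant.
A = np.array([[0.6, 0.2], [-0.2, 0.5]])
B = np.array([1.0, 0.0])
C = np.array([0.8, 0.3])
D = 0.1

fs = 44100
rng = np.random.default_rng(3)
src = rng.standard_normal(fs // 10)
itd = int(round(0.4e-3 * fs))            # ~0.4-ms ITD, illustrative
left = render_one_ear(src, A, B, C, D, 0)
right = render_one_ear(src, A, B, C, D, itd)
```

Keeping the delay out of the state-space is what lets a low-order approximant capture the remaining spectral shaping accurately, since a pure delay is expensive to represent with a small number of states.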
Gifford, René H.; Dorman, Michael F.; Skarzynski, Henryk; Lorens, Artur; Polak, Marek; Driscoll, Colin L. W.; Roland, Peter; Buchman, Craig A.
2012-01-01
Objective The aim of this study was to assess the benefit of having preserved acoustic hearing in the implanted ear for speech recognition in complex listening environments. Design The current study included a within-subjects, repeated-measures design including 21 English-speaking and 17 Polish-speaking cochlear implant recipients with preserved acoustic hearing in the implanted ear. The patients were implanted with electrodes that varied in insertion depth from 10 to 31 mm. Mean preoperative low-frequency thresholds (average of 125, 250 and 500 Hz) in the implanted ear were 39.3 and 23.4 dB HL for the English- and Polish-speaking participants, respectively. In one condition, speech perception was assessed in an 8-loudspeaker environment in which the speech signals were presented from one loudspeaker and restaurant noise was presented from all loudspeakers. In another condition, the signals were presented in a simulation of a reverberant environment with a reverberation time of 0.6 sec. The response measures included speech reception thresholds (SRTs) and percent correct sentence understanding for two test conditions: cochlear implant (CI) plus low-frequency hearing in the contralateral ear (bimodal condition) and CI plus low-frequency hearing in both ears (best aided condition). A subset of 6 English-speaking listeners was also assessed on measures of interaural time difference (ITD) thresholds for a 250-Hz signal. Results Small, but significant, improvements in performance (1.7 – 2.1 dB and 6 – 10 percentage points) were found for the best-aided condition vs. the bimodal condition. Postoperative thresholds in the implanted ear were correlated with the degree of EAS benefit for speech recognition in diffuse noise. Neither audiometric threshold in the implanted ear nor threshold elevation following surgery was reliably related to improvement in speech understanding in reverberation. There was a significant correlation between ITD threshold at 250 Hz and EAS-related benefit for the adaptive SRT. Conclusions Our results suggest that (i) preserved low-frequency hearing improves speech understanding for CI recipients, (ii) testing in complex listening environments, in which binaural timing cues differ for signal and noise, may best demonstrate the value of having two ears with low-frequency acoustic hearing, and (iii) preservation of binaural timing cues, albeit poorer than observed for individuals with normal hearing, is possible following unilateral cochlear implantation with hearing preservation and is associated with EAS benefit. Our results demonstrate significant communicative benefit for hearing preservation in the implanted ear and provide support for the expansion of cochlear implant criteria to include individuals with low-frequency thresholds in even the normal to near-normal range. PMID:23446225
Klein-Hennig, Martin; Dietz, Mathias; Hohmann, Volker
2018-03-01
Both harmonic and binaural signal properties are relevant for auditory processing. To investigate how these cues combine in the auditory system, detection thresholds for an 800-Hz tone masked by a diotic (i.e., identical between the ears) harmonic complex tone were measured in six normal-hearing subjects. The target tone was presented either diotically or with an interaural phase difference (IPD) of 180° and in either harmonic or "mistuned" relationship to the diotic masker. Three different maskers were used, a resolved and an unresolved complex tone (fundamental frequency: 160 and 40 Hz) with four components below and above the target frequency and a broadband unresolved complex tone with 12 additional components. The target IPD provided release from masking in most masker conditions, whereas mistuning led to a significant release from masking only in the diotic conditions with the resolved and the narrowband unresolved maskers. A significant effect of mistuning was neither found in the diotic condition with the wideband unresolved masker nor in any of the dichotic conditions. An auditory model with a single analysis frequency band and different binaural processing schemes was employed to predict the data of the unresolved masker conditions. Sensitivity to modulation cues was achieved by including an auditory-motivated modulation filter in the processing pathway. The predictions of the diotic data were in line with the experimental results and literature data in the narrowband condition, but not in the broadband condition, suggesting that across-frequency processing is involved in processing modulation information. The experimental and model results in the dichotic conditions show that the binaural processor cannot exploit modulation information in binaurally unmasked conditions. Copyright © 2017 Elsevier B.V. All rights reserved.
Diversity of acoustic tracheal system and its role for directional hearing in crickets
2013-01-01
Background Sound localization in small insects can be a challenging task due to physical constraints in deriving sufficiently large interaural intensity differences (IIDs) between both ears. In crickets, sound source localization is achieved by a complex type of pressure difference receiver consisting of four potential sound inputs. Sound acts on the external side of two tympana but additionally reaches the internal tympanal surface via two external sound entrances. Conduction of internal sound is realized by the anatomical arrangement of connecting trachea. A key structure is a trachea coupling both ears which is characterized by an enlarged part in its midline (i.e., the acoustic vesicle) accompanied with a thin membrane (septum). This facilitates directional sensitivity despite an unfavorable relationship between wavelength of sound and body size. Here we studied the morphological differences of the acoustic tracheal system in 40 cricket species (Gryllidae, Mogoplistidae) and species of outgroup taxa (Gryllotalpidae, Rhaphidophoridae, Gryllacrididae) of the suborder Ensifera comprising hearing and non hearing species. Results We found a surprisingly high variation of acoustic tracheal systems and almost all investigated species using intraspecific acoustic communication were characterized by an acoustic vesicle associated with a medial septum. The relative size of the acoustic vesicle - a structure most crucial for deriving high IIDs - implies an important role for sound localization. Most remarkable in this respect was the size difference of the acoustic vesicle between species; those with a more unfavorable ratio of body size to sound wavelength tend to exhibit a larger acoustic vesicle. On the other hand, secondary loss of acoustic signaling was nearly exclusively associated with the absence of both acoustic vesicle and septum. Conclusion The high diversity of acoustic tracheal morphology observed between species might reflect different steps in the evolution of the pressure difference receiver; with a precursor structure already present in ancestral non-hearing species. In addition, morphological transitions of the acoustic vesicle suggest a possible adaptive role for the generation of binaural directional cues. PMID:24131512
The representation of sound localization cues in the barn owl's inferior colliculus
Singheiser, Martin; Gutfreund, Yoram; Wagner, Hermann
2012-01-01
The barn owl is a well-known model system for studying auditory processing and sound localization. This article reviews the morphological and functional organization, as well as the role of the underlying microcircuits, of the barn owl's inferior colliculus (IC). We focus on the processing of frequency and interaural time (ITD) and level differences (ILD). We first summarize the morphology of the sub-nuclei belonging to the IC and their differentiation by antero- and retrograde labeling and by staining with various antibodies. We then focus on the response properties of neurons in the three major sub-nuclei of IC [core of the central nucleus of the IC (ICCc), lateral shell of the central nucleus of the IC (ICCls), and the external nucleus of the IC (ICX)]. ICCc projects to ICCls, which in turn sends its information to ICX. The responses of neurons in ICCc are sensitive to changes in ITD but not to changes in ILD. The distribution of ITD sensitivity with frequency in ICCc can only partly be explained by optimal coding. We continue with the tuning properties of ICCls neurons, the first station in the midbrain where the ITD and ILD pathways merge after they have split at the level of the cochlear nucleus. The ICCc and ICCls share similar ITD and frequency tuning. By contrast, ICCls shows sigmoidal ILD tuning which is absent in ICCc. Both ICCc and ICCls project to the forebrain, and ICCls also projects to ICX, where space-specific neurons are found. Space-specific neurons exhibit side peak suppression in ITD tuning, bell-shaped ILD tuning, and are broadly tuned to frequency. These neurons respond only to restricted positions of auditory space and form a map of two-dimensional auditory space. Finally, we briefly review major IC features, including multiplication-like computations, correlates of echo suppression, plasticity, and adaptation. PMID:22798945
Horizontal sound localization in cochlear implant users with a contralateral hearing aid.
Veugen, Lidwien C E; Hendrikse, Maartje M E; van Wanrooij, Marc M; Agterberg, Martijn J H; Chalupper, Josef; Mens, Lucas H M; Snik, Ad F M; John van Opstal, A
2016-06-01
Interaural differences in sound arrival time (ITD) and in level (ILD) enable us to localize sounds in the horizontal plane, and can support source segregation and speech understanding in noisy environments. It is uncertain whether these cues are also available to hearing-impaired listeners who are bimodally fitted, i.e. with a cochlear implant (CI) and a contralateral hearing aid (HA). Here, we assessed sound localization behavior of fourteen bimodal listeners, all using the same Phonak HA and an Advanced Bionics CI processor, matched with respect to loudness growth. We aimed to determine the availability and contribution of binaural (ILDs, temporal fine structure and envelope ITDs) and monaural (loudness, spectral) cues to horizontal sound localization in bimodal listeners, by systematically varying the frequency band, level and envelope of the stimuli. The sound bandwidth had a strong effect on the localization bias of bimodal listeners, although localization performance was typically poor for all conditions. Responses could be systematically changed by adjusting the frequency range of the stimulus, or by simply switching the HA and CI on and off. Localization responses were largely biased to one side, typically the CI side for broadband and high-pass filtered sounds, and occasionally to the HA side for low-pass filtered sounds. HA-aided thresholds better than 45 dB HL in the frequency range of the stimulus appeared to be a prerequisite, but not a guarantee, for the ability to indicate sound source direction. We argue that bimodal sound localization is likely based on ILD cues, even at frequencies below 1500 Hz for which the natural ILDs are small. These cues are typically perturbed in bimodal listeners, leading to a biased localization percept of sounds. The high accuracy of some listeners could result from a combination of sufficient spectral overlap and loudness balance in bimodal hearing. Copyright © 2016 Elsevier B.V. All rights reserved.
The Balance of Excitatory and Inhibitory Synaptic Inputs for Coding Sound Location
Ono, Munenori
2014-01-01
The localization of high-frequency sounds in the horizontal plane uses an interaural-level difference (ILD) cue, yet little is known about the synaptic mechanisms that underlie the processing of this cue in the mouse inferior colliculus (IC). Here, we study the synaptic currents that process ILD in vivo and use stimuli in which ILD varies around a constant average binaural level (ABL) to approximate sounds on the horizontal plane. Monaural stimulation in either ear produced EPSCs and IPSCs in most neurons. The temporal properties of monaural responses were well matched, suggesting connected functional zones with matched inputs. In response to ABL stimuli, the EPSCs showed three patterns of preference for the sound field with the highest-level stimulus: (1) contralateral; (2) bilateral highly lateralized; or (3) at the center near 0 ILD. EPSCs and IPSCs were well correlated except in center-preferred neurons. Summation of the monaural EPSCs predicted the binaural excitatory response but less well than the summation of monaural IPSCs. Binaural EPSCs often showed a nonlinearity that strengthened the response to specific ILDs. Extracellular spike and intracellular current recordings from the same neuron showed that the ILD tuning of the spikes was sharper than that of the EPSCs. Thus, in the IC, balanced excitatory and inhibitory inputs may be a general feature of synaptic coding for many types of sound processing. PMID:24599475
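In the ABL paradigm described above, the level at each ear is varied symmetrically around a fixed average so that ILD changes while the overall binaural level stays constant. A minimal sketch of generating such a stimulus level set follows; the dB values and the sign convention (positive ILD meaning the contralateral ear is louder) are illustrative assumptions.

```python
# Levels (dB SPL) for an ILD series around a constant average binaural level (ABL).
abl_db = 60.0                      # illustrative ABL
ilds_db = range(-30, 31, 10)       # positive = contralateral ear louder (assumed convention)

for ild in ilds_db:
    contra = abl_db + ild / 2.0    # contralateral-ear level
    ipsi = abl_db - ild / 2.0      # ipsilateral-ear level
    print(f"ILD {ild:+3d} dB -> contra {contra:.1f} dB, ipsi {ipsi:.1f} dB "
          f"(ABL {(contra + ipsi) / 2:.1f} dB)")
```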
Verrecchia, Luca; Westin, Magnus; Duan, Maoli; Brantberg, Krister
2016-04-01
To explore ocular vestibular evoked myogenic potentials (oVEMP) to low-frequency vertex vibration (125 Hz) as a diagnostic test for superior canal dehiscence (SCD) syndrome. oVEMPs to 125-Hz single-cycle bone-conducted vertex vibration were tested in 15 patients with unilateral superior canal dehiscence (SCD) syndrome, 15 healthy controls, and 20 patients with unilateral vestibular loss due to vestibular neuritis. Amplitude, amplitude asymmetry ratio, latency, and interaural latency difference were the parameters of interest. The oVEMP amplitude was significantly larger in SCD patients when affected sides (53 μV) were compared to non-affected sides (17.2 μV) or to healthy controls (13.6 μV). An amplitude larger than 33.8 μV effectively separates SCD ears from healthy ones, with a sensitivity of 87% and a specificity of 93%. The other three parameters showed overlap between affected and non-affected SCD ears, as well as between SCD ears and those in the two control groups. oVEMP amplitude distinguishes SCD ears from healthy ones using low-frequency vibration stimuli at the vertex. Amplitude analysis of oVEMP evoked by low-frequency vertex bone vibration stimulation is an additional indicator of SCD syndrome and might serve for diagnosing SCD patients with coexistent conductive middle ear problems. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Xiao, Jun
2007-05-15
Traditionally, the skull landmarks, i.e., bregma, lambda, and the interaural line, are the origins of the coordinate system for almost all rodent brain atlases. The disadvantages of using a skull landmark as an origin are: (i) there are differences among individuals in the alignment between the skull and the brain; (ii) the shapes of sutures, on which a skull landmark is determined, are different for different animals; (iii) the skull landmark is not clear for some animals. Recently, the extreme point of the entire brain (the tip of the olfactory bulb) has also been used as the origin for an atlas coordinate system. The accuracy of stereotaxically locating a brain structure depends on the relative distance between the structure and the reference point of the coordinate system. The disadvantages of using the brain extreme as an origin are that it is located far from most brain structures and is not readily exposed during most in vivo procedures. To overcome these disadvantages, this paper introduces a new coordinate system for the brain of the naked mole-rat. The origin of this new coordinate system is a landmark directly on the brain: the intersection point of the posterior edges of the two cerebral hemispheres. This new coordinate system is readily applicable to other rodent species and is statistically better than using bregma and lambda as reference points. It is found that the body weight of old naked mole-rats is significantly greater than that of young animals. However, the old naked mole-rat brain is not significantly heavier than that of young animals. Both brain weight and brain length vary little among animals of different weights. The disadvantages of the current definition of "significant" are briefly discussed, and a new expression that more objectively describes the result of a statistical test is proposed and used.
Inertial processing of vestibulo-ocular signals
NASA Technical Reports Server (NTRS)
Hess, B. J.; Angelaki, D. E.
1999-01-01
New evidence for a central resolution of gravito-inertial signals has been recently obtained by analyzing the properties of the vestibulo-ocular reflex (VOR) in response to combined lateral translations and roll tilts of the head. It is found that the VOR generates robust compensatory horizontal eye movements independent of whether or not the interaural translatory acceleration component is canceled out by a gravitational acceleration component due to simultaneous roll-tilt. This response property of the VOR depends on functional semicircular canals, suggesting that the brain uses both otolith and semicircular canal signals to estimate head motion relative to inertial space. Vestibular information about dynamic head attitude relative to gravity is the basis for computing head (and body) angular velocity relative to inertial space. Available evidence suggests that the inertial vestibular system controls both head attitude and velocity with respect to a gravity-centered reference frame. The basic computational principles underlying the inertial processing of otolith and semicircular canal afferent signals are outlined.
Amplitude-modulation detection by gerbils in reverberant sound fields.
Lingner, Andrea; Kugler, Kathrin; Grothe, Benedikt; Wiegrebe, Lutz
2013-08-01
Reverberation can dramatically reduce the depth of amplitude modulations which are critical for speech intelligibility. Psychophysical experiments indicate that humans' sensitivity to amplitude modulation in reverberation is better than predicted from the acoustic modulation depth at the receiver position. Electrophysiological studies on reverberation in rabbits highlight the contribution of neurons sensitive to interaural correlation. Here, we use a prepulse-inhibition paradigm to quantify the gerbils' amplitude modulation threshold in both anechoic and reverberant virtual environments. Data show that prepulse inhibition provides a reliable method for determining the gerbils' AM sensitivity. However, we find no evidence for perceptual restoration of amplitude modulation in reverberation. Instead, the deterioration of AM sensitivity in reverberant conditions can be quantitatively explained by the reduced modulation depth at the receiver position. We suggest that the lack of perceptual restoration is related to physical properties of the gerbil's ear input signals and inner-ear processing as opposed to shortcomings of their binaural neural processing. Copyright © 2013 Elsevier B.V. All rights reserved.
Perception of tilt and ocular torsion of vestibular patients during eccentric rotation.
Clément, Gilles; Deguine, Olivier
2010-01-04
Four patients following unilateral vestibular loss and four patients complaining of otolith-dependent vertigo were tested during eccentric yaw rotation generating 1 x g centripetal acceleration directed along the interaural axis. Perception of body tilt in roll and in pitch was recorded in darkness using a somatosensory plate that the subjects maintained parallel to the perceived horizon. Ocular torsion was recorded by a video camera. Unilateral vestibular-defective patients underestimated the magnitude of the roll tilt and had a smaller torsion when the centrifugal force was towards the operated ear compared to the intact ear and healthy subjects. Patients with otolithic-dependent vertigo overestimated the magnitude of roll tilt in both directions of eccentric rotation relative to healthy subjects, and their ocular torsion was smaller than in healthy subjects. Eccentric rotation is a promising tool for the evaluation of vestibular dysfunction in patients. Eye torsion and perception of tilt during this stimulation are objective and subjective measurements, which could be used to determine alterations in spatial processing in the CNS.
Kim, Kun Woo; Jung, Jae Yun; Lee, Jeong Hyun
2013-01-01
Objectives Rectified vestibular evoked myogenic potential (rVEMP) is a new method that simultaneously measures muscle contraction power during VEMP recordings. Although a few studies have evaluated the effect of the rVEMP, no study has evaluated the capacity of rVEMP during asymmetrical muscle contraction. Methods Thirty VEMP measurements were performed among 20 normal subjects (mean age, 28.2±2.1 years; male, 16). VEMP was measured in the supine position. The head was turned to the right side by 0°, 15°, 30°, and 45°, and the VEMPs were recorded in each position. The interaural amplitude difference (IAD) ratio was calculated for the conventional non-rectified VEMP (nVEMP) and the rVEMP. Results The nVEMP IAD increased significantly with increasing neck rotation. The rVEMP IAD was similar from 0° to 30°. However, the IAD was significantly larger than at the other positions when the neck was rotated 45°. When the IAD at 0° was set as a standard, the IAD of the rVEMP was significantly smaller than that of the nVEMP only during the 30° rotation. Conclusion Rectified VEMP is capable of correcting for asymmetrical muscle contraction power. In contrast, it cannot correct the asymmetry if muscle contraction power asymmetry is 44.8% or larger. Also, it is not necessary if muscle contraction power asymmetry is 22.5% or smaller. PMID:24353859
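The interaural amplitude difference (IAD) ratio referenced above is conventionally expressed as a percentage of the summed left- and right-side response amplitudes; the snippet below assumes that standard (larger - smaller) / (larger + smaller) form, which is an assumption rather than the formula stated in this abstract.

```python
def iad_ratio(amp_left, amp_right):
    """IAD ratio (%), assuming the conventional
    (larger - smaller) / (larger + smaller) * 100 form used in VEMP studies."""
    larger, smaller = max(amp_left, amp_right), min(amp_left, amp_right)
    return 100.0 * (larger - smaller) / (larger + smaller)

# Example with hypothetical (rectified) amplitudes in microvolts.
print(f"IAD = {iad_ratio(120.0, 85.0):.1f} %")
```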
Statistics of Natural Binaural Sounds
Młynarski, Wiktor; Jost, Jürgen
2014-01-01
Binaural sound localization is usually considered a discrimination task, in which interaural phase (IPD) and level (ILD) disparities in narrowly tuned frequency channels are used to identify the position of a sound source. In natural conditions, however, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping sources. The statistics of binaural cues therefore depend on the acoustic properties and spatial configuration of the environment. The distributions of naturally encountered cues and their dependence on the physical properties of an auditory scene have not been studied before. In the present work we analyzed the statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed the empirical cue distributions from each scene. We found that certain properties, such as the spread of the IPD distributions and the overall shape of the ILD distributions, do not vary strongly between different auditory scenes. Moreover, ILD distributions vary much less across frequency channels, and IPDs often attain much higher values, than would be predicted from the filtering properties of the head. To understand the complexity of the binaural hearing task in the natural environment, the sound waveforms were also analyzed by Independent Component Analysis (ICA). Properties of the learned basis functions indicate that in natural conditions the sound waves at each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction. PMID:25285658
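As a rough illustration of how empirical ILD and IPD samples can be pulled from a binaural recording like those described above, the sketch below computes per-bin level and phase disparities with an STFT. It is only an approximation of the study's pipeline, which used auditory-style frequency channels and analyzed the raw waveforms with ICA separately.

```python
import numpy as np
from scipy.signal import stft

def binaural_cue_samples(left, right, fs, nperseg=1024):
    """Empirical ILD (dB) and IPD (rad) samples per time-frequency bin from
    a binaural recording. A sketch only: the study used auditory-style
    frequency channels and analyzed the raw waveforms with ICA separately."""
    f, _, L = stft(left, fs=fs, nperseg=nperseg)
    _, _, R = stft(right, fs=fs, nperseg=nperseg)
    eps = 1e-12
    ild = 20.0 * np.log10((np.abs(L) + eps) / (np.abs(R) + eps))
    ipd = np.angle(L * np.conj(R))            # wrapped to (-pi, pi]
    return f, ild, ipd

# Toy usage with a delayed, attenuated copy standing in for the right ear;
# histogramming ild and ipd gives the empirical cue distributions of a scene.
fs = 44100
left = np.random.randn(fs)
right = 0.7 * np.roll(left, 8)
f, ild, ipd = binaural_cue_samples(left, right, fs)
print(ild.shape, ipd.shape)
```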
NASA Astrophysics Data System (ADS)
Leon, Angel Luis
2003-11-01
This thesis reports on a study of the acoustic properties of 18 theaters belonging to the Andalusian historical and architectural heritage. These theaters have undergone recent renovations to modernize and equip them appropriately. Coincident with this work, evaluations and qualification assessments of their acoustic properties have been carried out for the individual theaters and for the group as a whole. The data consisted of in situ acoustic measurements made both before and after the renovations, and these results have been compared with computer simulations of the sound fields. Variables and parameters considered include the following: reverberation time, rapid speech transmission index, background noise, definition, clarity, strength, lateral efficiency, interaural cross-correlation coefficient, volume/seat ratio, and volume/audience-area ratio. Based on the measurements and analysis, general conclusions are given regarding the acoustic performance of theaters whose typology and size are comparable to those used in this study (between 800 and 8000 cubic meters). It is noted that these properties are comparable to those of the majority of European theaters. The results and conclusions are presented so that they should be of interest to architectural acoustics practitioners and to architects involved in the planning of renovation projects for theaters. Thesis advisors: Juan J. Sendra and Jaime Navarro. Copies of this thesis, written in Spanish, may be obtained by contacting the author, Angel L. Leon, E.T.S. de Arquitectura de Sevilla, Dpto. de Construcciones Arquitectonicas I, Av. Reina Mercedes, 2, 41012 Sevilla, Spain. E-mail address: leonr@us.es
Wang, Chi-Te; Fang, Kai-Min; Young, Yi-Ho; Cheng, Po-Wen
2010-04-01
Click and galvanic stimulation of vestibular-evoked myogenic potentials (c-VEMP and g-VEMP) were applied to measure the interaural difference (IAD) of saccular responses in patients with acute low-tone sensorineural hearing loss (ALHL). This study intended to explore the relationship between saccular asymmetry and final hearing recovery. We hypothesized that a greater extent of saccular dysfunction may be associated with less hearing recovery. Twenty-one patients with unilateral ALHL were prospectively enrolled to receive c-VEMP and g-VEMP tests in a random sequence. The IAD of the saccular responses for each patient was measured using three parameters: the raw and corrected amplitudes of the c-VEMP, and the corrected c-VEMP to g-VEMP amplitude ratio (C/G ratio). The IAD for each parameter was classified as depressed, normal, or augmented by calculating the difference between the affected and unaffected ears and dividing by the sum for both ears. After 3 consecutive months of oral medication and follow-up, 19 patients displayed a hearing recovery of >50%; only two had a recovery of <50%. The significant correlation between the IAD of corrected C/G ratios and hearing recovery demonstrated that subjects with depressed responses had a worse hearing outcome (percent recovery: 51% [45-80%], median [minimum-maximum]) than those with normal responses, who exhibited the best recovery (87% [56-100%]), whereas patients with augmented responses showed an intermediate recovery (67% [54-100%]; p = 0.02, Kruskal-Wallis test). In contrast, the raw and corrected amplitudes of the c-VEMP did not reveal a significantly different hearing recovery among the three groups of saccular responses. The extent of saccular dysfunction in ALHL might be better explored by combining the results of c-VEMP and g-VEMP. Outcome analysis indicated that the corrected C/G ratio might be a promising prognostic factor for hearing recovery in ALHL.
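The asymmetry index used here is stated directly in the abstract: the difference between the affected and unaffected ears divided by their sum. The sketch below implements that ratio together with a purely hypothetical cutoff to illustrate the depressed/normal/augmented classification; the study's actual criteria are not given in the abstract.

```python
def saccular_iad(affected, unaffected):
    """IAD as described in the abstract: the difference between the affected
    and unaffected ears divided by the sum of both ears."""
    return (affected - unaffected) / (affected + unaffected)


def classify_iad(iad, cutoff=0.3):
    """Three-way classification into depressed / normal / augmented.
    The cutoff of 0.3 is purely illustrative, not taken from the study."""
    if iad < -cutoff:
        return "depressed"
    if iad > cutoff:
        return "augmented"
    return "normal"


print(classify_iad(saccular_iad(affected=40.0, unaffected=90.0)))   # depressed
print(classify_iad(saccular_iad(affected=85.0, unaffected=80.0)))   # normal
```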
Bonnard, Damien; Lautissier, Sylvie; Bosset-Audoit, Amélie; Coriat, Géraldine; Beraha, Max; Maunoury, Antoine; Martel, Jacques; Darrouzet, Vincent; Bébéar, Jean-Pierre; Dauman, René
2013-01-01
An alternative to bilateral cochlear implantation is offered by the Neurelec Digisonic(®) SP Binaural cochlear implant, which allows stimulation of both cochleae within a single device. The purpose of this prospective study was to compare a group of Neurelec Digisonic(®) SP Binaural implant users (denoted BINAURAL group, n = 7) with a group of bilateral adult cochlear implant users (denoted BILATERAL group, n = 6) in terms of speech perception, sound localization, and self-assessment of health status and hearing disability. Speech perception was assessed using word recognition at 60 dB SPL in quiet and in a 'cocktail party' noise delivered through five loudspeakers in the hemi-sound field facing the patient (signal-to-noise ratio = +10 dB). The sound localization task was to determine the source of a sound stimulus among five speakers positioned between -90° and +90° from midline. Change in health status was assessed using the Glasgow Benefit Inventory and hearing disability was evaluated with the Abbreviated Profile of Hearing Aid Benefit. Speech perception was not statistically different between the two groups, even though there was a trend in favor of the BINAURAL group (mean percent word recognition in the BINAURAL and BILATERAL groups: 70 vs. 56.7% in quiet, 55.7 vs. 43.3% in noise). There was also no significant difference with regard to performance in sound localization and self-assessment of health status and hearing disability. On the basis of the BINAURAL group's performance in hearing tasks involving the detection of interaural differences, implantation with the Neurelec Digisonic(®) SP Binaural implant may be considered to restore effective binaural hearing. Based on these first comparative results, this device seems to provide benefits similar to those of traditional bilateral cochlear implantation, with a new approach to stimulate both auditory nerves. Copyright © 2013 S. Karger AG, Basel.
NASA Technical Reports Server (NTRS)
Kaufman, Galen D.; Wood, Scott J.; Gianna, Claire C.; Black, F. Owen; Paloski, William H.
2000-01-01
Eight chronic vestibular deficient (VD) patients (bilateral N = 4, unilateral N = 4, ages 18-67) were exposed to an interaural centripetal acceleration of 1 G (resultant 45 degree roll tilt of 1.4 G) on a 0.8 meter radius centrifuge for up to 90 minutes in the dark. The patients sat with the head fixed upright, except during 4 of every 10 minutes, when they were instructed to point their nose and eyes towards a visual target (switched on every 3 to 5 seconds at random locations within plus or minus 30 deg) in the Earth-horizontal plane. Eye movements, including directed saccades for subjective Earth- and head-referenced planes, were recorded before, during, and after centrifugation using electro-oculography. Postural sway was measured before and within ten minutes after centrifugation using a sway-referenced or earth-fixed support surface, with or without a head movement sequence. The protocol was selected for each patient based on the most challenging condition in which the patient was able to maintain balance with eyes closed.
An Auditory Illusion of Proximity of the Source Induced by Sonic Crystals
Spiousas, Ignacio; Etchemendy, Pablo E.; Vergara, Ramiro O.; Calcagno, Esteban R.; Eguia, Manuel C.
2015-01-01
In this work we report an illusion of proximity of a sound source created by a sonic crystal placed between the source and a listener. This effect seems, at first, paradoxical to naïve listeners, since the sonic crystal is an obstacle formed by almost densely packed cylindrical scatterers. Even though the singular acoustical properties of these periodic composite materials have been studied extensively (including band gaps, deaf bands, negative refraction, and birefringence), their possible perceptual effects remain unexplored. The illusion reported here is studied through acoustical measurements and a psychophysical experiment. The results of the acoustical measurements showed that, for a certain frequency range and region in space where the focusing phenomenon takes place, the sonic crystal induces substantial increases in binaural intensity, direct-to-reverberant energy ratio, and interaural cross-correlation values, all cues involved in the auditory perception of distance. Consistently, the results of the psychophysical experiment revealed that the presence of the sonic crystal between the sound source and the listener produces a significant reduction of the perceived relative distance to the sound source. PMID:26222281
NASA Technical Reports Server (NTRS)
Shelhamer, Mark; Peng, Grace C Y.; Ramat, Stefano; Patel, Vivek
2002-01-01
Previous studies established that vestibular and oculomotor behaviors can have two adapted states (e.g., gain) simultaneously, and that a context cue (e.g., vertical eye position) can switch between the two states. The present study examined this phenomenon of context-specific adaptation for the oculomotor response to interaural translation (which we term the "linear vestibulo-ocular reflex," or LVOR, even though it may have extravestibular components). Subjects sat upright on a linear sled and were translated at 0.7 Hz and 0.3 g peak acceleration while a visual-vestibular mismatch paradigm was used to adaptively increase (x2) or decrease (x0) the gain of the LVOR. In each experimental session, a gain increase was required in one context and a gain decrease in the other. Testing in darkness with steps and sines before and after adaptation, in each context, assessed the extent to which the context itself could recall the gain state imposed in that context during adaptation. Two different contexts were used: head pitch (26 degrees forward and backward) and head roll (26 degrees or 45 degrees, right and left). Head roll tilt worked well as a context cue: with the head rolled to the right, the LVOR could be made to have a higher gain than with the head rolled to the left. Head pitch tilt was less effective as a context cue. This suggests that the more closely related a context cue is to the response being adapted, the more effective it is.
Lin, Nan; Wei, Min
2014-01-01
After vestibular labyrinth injury, behavioral deficits partially recover through the process of vestibular compensation. The present study was performed to improve our understanding of the physiology of the macaque vestibular system in the compensated state (>7 wk) after unilateral labyrinthectomy (UL). Three groups of vestibular nucleus neurons were included: pre-UL control neurons, neurons ipsilateral to the lesion, and neurons contralateral to the lesion. The firing responses of neurons sensitive to linear acceleration in the horizontal plane were recorded during sinusoidal horizontal translation directed along six different orientations (30° apart) at 0.5 Hz and 0.2 g peak acceleration (196 cm/s2). These data defined the vector of best response for each neuron in the horizontal plane, along which sensitivity, symmetry, detection threshold, and variability of firing were determined. Additionally, the responses of the same cells to translation over a series of frequencies (0.25–5.0 Hz) in either the interaural or naso-occipital orientation were obtained to define the frequency response characteristics of each group. We found a decrease in sensitivity, an increase in threshold, and an alteration in the orientation of best responses in the vestibular nuclei after UL. Additionally, the phase relationship of the best neural response to translational stimulation changed with UL. The symmetry of individual neuron responses in the excitatory and inhibitory directions was unchanged by UL. Bilateral central utricular neurons still demonstrated two-dimensional tuning after UL, consistent with spatio-temporal convergence from a single vestibular end-organ. These neuronal data correlate with known behavioral deficits after unilateral vestibular compromise. PMID:24717349
Chronic detachable headphones for acoustic stimulation in freely moving animals.
Nodal, Fernando R; Keating, Peter; King, Andrew J
2010-05-30
A growing number of studies of auditory processing are being carried out in awake, behaving animals, creating a need for precisely controlled sound delivery without restricting head movements. We have designed a system for closed-field stimulus presentation in freely moving ferrets, which comprises lightweight, adjustable headphones that can be consistently positioned over the ears via a small, skull-mounted implant. The invasiveness of the implant was minimized by simplifying its construction and using dental adhesive only for attaching it to the skull, thereby reducing the surgery required and avoiding the use of screws or other anchoring devices. Attaching the headphones to a chronic implant also reduced the amount of contact they had with the head and ears, increasing the willingness of the animals to wear them. We validated sound stimulation via the headphones in ferrets trained previously in a free-field task to localize stimuli presented from one of two loudspeakers. Noise bursts were delivered binaurally over the headphones and interaural level differences (ILDs) were introduced to allow the sound to be lateralized. Animals rapidly transferred from the free-field task to indicate the perceived location of the stimulus presented over headphones. They showed near perfect lateralization with a 5 dB ILD, matching the scores achieved in the free-field task. As expected, the ferrets' performance declined when the ILD was reduced in value. This closed-field system can easily be adapted for use in other species, and provides a reliable means of presenting closed-field stimuli whilst monitoring behavioral responses in freely moving animals. (c) 2010 Elsevier B.V. All rights reserved.
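Since lateralization over the headphones was driven by a fixed interaural level difference applied to the same noise burst in both ears, the stimulus can be sketched in a few lines. Splitting the ILD symmetrically across the two ears is an assumption; the authors' level calibration is not described in the abstract.

```python
import numpy as np

def apply_ild(mono, ild_db):
    """Turn a mono noise burst into a binaural pair with a fixed interaural
    level difference, split symmetrically across the ears (positive values
    favor the left ear). A sketch of the stimulus, not the authors' code."""
    half_gain = 10.0 ** (ild_db / 40.0)       # +/- ILD/2 in dB per ear
    return np.stack([mono * half_gain, mono / half_gain], axis=1)

fs = 48000
burst = np.random.randn(int(0.1 * fs))        # 100-ms broadband noise burst
stereo = apply_ild(burst, ild_db=5.0)         # the 5-dB ILD used in the task
print(stereo.shape)                           # (4800, 2): left and right channels
```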
Li, Na; Pollak, George D.
2013-01-01
Neurons excited by stimulation of one ear and suppressed by the other, called EI neurons, are sensitive to interaural intensity disparities (IIDs), the cues animals use to localize high frequencies. EI neurons are first formed in lateral superior olive (LSO), which then sends excitatory projections to the dorsal nucleus of the lateral lemniscus (DNLL) and the inferior colliculus (IC), both of which contain large populations of EI cells. We evaluate the inputs that innervate EI cells in the IC of Mexican free-tailed bats, Tadarida brasilensis mexicana, with in vivo whole cell recordings from which we derived excitatory and inhibitory conductances. We show that the basic EI property in the majority of IC cells is inherited from LSO, but each type of EI cell is also innervated by the ipsi- or contralateral DNLL, as well as additional excitatory and inhibitory inputs from monaural nuclei. We identify three EI types, where each type receives a set of projections that are different from the other types. To evaluate the role that the various projections played in generating binaural responses, we used modeling to compute a predicted response from the conductances. We then omitted one of the conductances from the computation to evaluate the degree to which that input contributed to the binaural response. We show that formation of the EI property in the various types is complex, and that some projections exert such subtle influences that they could not have been detected with extracellular recordings or even from intracellular recordings of post-synaptic potentials. PMID:23575835
Davis, Kevin A; Lomakin, Oleg; Pesavento, Michael J
2007-09-01
The dorsal nucleus of the lateral lemniscus (DNLL) receives afferent inputs from many brain stem nuclei and, in turn, is a major source of inhibitory inputs to the inferior colliculus (IC). The goal of this study was to characterize the monaural and binaural response properties of neurons in the DNLL of unanesthetized decerebrate cat. Monaural responses were classified according to the patterns of excitation and inhibition observed in contralateral and ipsilateral frequency response maps. Binaural classification was based on unit sensitivity to interaural level differences. The results show that units in the DNLL can be grouped into three distinct types. Type v units produce contralateral response maps that show a wide V-shaped excitatory area and no inhibition. These units receive ipsilateral excitation and exhibit binaural facilitation. The contralateral maps of type i units show a more restricted I-shaped region of excitation that is flanked by inhibition. Type o maps display an O-shaped island of excitation at low stimulus levels that is bounded by inhibition at higher levels. Both type i and type o units receive ipsilateral inhibition and exhibit binaural inhibition. Units that produce type v maps have a low best frequency (BF), whereas type i and type o units have high BFs. Type v and type i units give monotonic rate-level responses for both BF tones and broadband noise. Type o units are inhibited by tones at high levels, but are excited by high-level noise. These results show that the DNLL can exert strong, differential effects in the IC.
NASA Technical Reports Server (NTRS)
Wood, Scott; Clement, Gilles; Denise, Pierre; Reschke, Millard
2005-01-01
Constant velocity Off-Vertical Axis Rotation (OVAR) imposes a continuously varying orientation of the head and body relative to gravity. The ensuing ocular reflexes include modulation of both horizontal and torsional eye velocity as a function of the varying linear acceleration along the lateral plane. The purpose of this study was to examine whether the modulation of these ocular reflexes would be modified by different head-on-trunk positions. Ten human subjects were rotated in darkness about their longitudinal axis 20 deg off-vertical at constant rates of 45 and 180 deg/s, corresponding to 0.125 and 0.5 Hz. Binocular responses were obtained with video-oculography with the head and trunk aligned, and then with the head turned relative to the trunk 40 deg to the right or left of center. Sinusoidal curve fits were used to derive amplitude, phase and bias velocity of the eye movements across multiple cycles for each head-on-trunk position. Consistent with previous studies, the modulation of torsional eye movements was greater at 0.125 Hz while the modulation of horizontal eye movements was greater at 0.5 Hz. Neither amplitude nor bias velocities were significantly altered by head-on-trunk position. The phases of both torsional and horizontal ocular reflexes, on the other hand, shifted towards alignment with the head. These results are consistent with the modulation of torsional and horizontal ocular reflexes during OVAR being primarily mediated by the otoliths in response to the sinusoidally varying linear acceleration along the interaural head axis.
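The sinusoidal curve fits described above, which recover amplitude, phase, and bias velocity across OVAR cycles, amount to an ordinary least-squares regression of eye velocity onto sine and cosine terms at the rotation frequency. The sketch below is one such implementation, not the authors' analysis code.

```python
import numpy as np

def fit_sinusoid(t, v, freq_hz):
    """Least-squares fit of v(t) = bias + a*sin(w*t) + b*cos(w*t), returning
    amplitude, phase (rad), and bias velocity; a sketch of the per-condition
    curve fits described for the OVAR eye-velocity data."""
    w = 2.0 * np.pi * freq_hz
    X = np.column_stack([np.ones_like(t), np.sin(w * t), np.cos(w * t)])
    bias, a, b = np.linalg.lstsq(X, v, rcond=None)[0]
    return np.hypot(a, b), np.arctan2(b, a), bias

# Synthetic slow-phase velocity trace at the 0.125-Hz rotation rate.
t = np.arange(0.0, 80.0, 0.01)
v = 2.0 + 5.0 * np.sin(2 * np.pi * 0.125 * t + 0.6) + 0.3 * np.random.randn(t.size)
amp, phase, bias = fit_sinusoid(t, v, 0.125)
print(round(amp, 2), round(phase, 2), round(bias, 2))   # ~5.0, ~0.6, ~2.0
```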
Development of a method to evaluate glutamate receptor function in rat barrel cortex slices.
Lehohla, M; Russell, V; Kellaway, L; Govender, A
2000-12-01
The rat is a nocturnal animal and uses its vibrissae extensively to navigate its environment. The vibrissae are linked to a highly organized part of the sensory cortex, called the barrel cortex, which contains spiny neurons that receive whisker-specific thalamic input and distribute their output mainly within the cortical column. The aim of the present study was to develop a method to evaluate glutamate receptor function in the rat barrel cortex. Long Evans rats (90-160 g) were killed by cervical dislocation and decapitated. The brain was rapidly removed, cooled in a continuously oxygenated, ice-cold HEPES buffer (pH 7.4) and sliced using a vibratome to produce 0.35 mm slices. The barrel cortex was dissected from slices corresponding to 8.6 to 4.8 mm anterior to the interaural line and divided into rostral, middle and caudal regions. Depolarization-induced uptake of 45Ca2+ was achieved by incubating test slices in a high-K+ (62.5 mM) buffer for 2 minutes at 35 degrees C. Potassium-stimulated uptake of 45Ca2+ into the rostral region was significantly lower than into the middle and caudal regions of the barrel cortex. Glutamate had no effect. NMDA significantly increased uptake of 45Ca2+ into all regions of the barrel cortex. The technique is useful for determining NMDA receptor function and will be applied to study differences between spontaneously hypertensive rats (SHR), which are used as a model of attention deficit disorder, and their normotensive control rats.
Availability of binaural cues for pediatric bilateral cochlear implant recipients.
Sheffield, Sterling W; Haynes, David S; Wanna, George B; Labadie, Robert F; Gifford, René H
2015-03-01
Bilateral implant recipients theoretically have access to binaural cues. Research in postlingually deafened adults with cochlear implants (CIs) indicates minimal evidence for true binaural hearing. Congenitally deafened children who experience spatial hearing with bilateral CIs, however, might perceive binaural cues in the CI signal differently. There is limited research examining binaural hearing in children with CIs, and the few published studies are limited by the use of unrealistic speech stimuli and background noise. The purposes of this study were to (1) replicate our previous study of binaural hearing in postlingually deafened adults with AzBio sentences in prelingually deafened children with the pediatric version of the AzBio sentences, and (2) replicate previous studies of binaural hearing in children with CIs using more open-set sentences and more realistic background noise (i.e., multitalker babble). The study was a within-participant, repeated-measures design. The study sample consisted of 14 children with bilateral CIs with at least 25 mo of listening experience. Speech recognition was assessed using sentences presented in multitalker babble at a fixed signal-to-noise ratio. Test conditions included speech at 0° with noise presented at 0° (S0N0), on the side of the first CI (90° or 270°) (S0N1stCI), and on the side of the second CI (S0N2ndCI) as well as speech presented at 0° with noise presented semidiffusely from eight speakers at 45° intervals. Estimates of summation, head shadow, squelch, and spatial release from masking were calculated. Results of test conditions commonly reported in the literature (S0N0, S0N1stCI, S0N2ndCI) are consistent with results from previous research in adults and children with bilateral CIs, showing minimal summation and squelch but typical head shadow and spatial release from masking. However, bilateral benefit over the better CI with speech at 0° was much larger with semidiffuse noise. Congenitally deafened children with CIs have similar availability of binaural hearing cues to postlingually deafened adults with CIs within the same experimental design. It is possible that the use of realistic listening environments, such as semidiffuse background noise as in Experiment II, would reveal greater binaural hearing benefit for bilateral CI recipients. Future research is needed to determine whether (1) availability of binaural cues for children correlates with interaural time and level differences, (2) different listening environments are more sensitive to binaural hearing benefits, and (3) differences exist between pediatric bilateral recipients receiving implants in the same or sequential surgeries. American Academy of Audiology.
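Summation, head shadow, squelch, and spatial release from masking are each derived as differences between scores in particular listening configurations. The sketch below shows one common set of contrasts using hypothetical condition labels (CI1/CI2 for each implant alone, BIL for both); the exact contrasts used in the study may differ.

```python
def bilateral_ci_effects(scores):
    """Illustrative derivation of the classic bilateral-CI measures from
    percent-correct scores. Condition labels are hypothetical: CI1/CI2 are
    each implant alone, BIL is both; S0N0 has co-located speech and noise,
    S0N1st/S0N2nd put the noise on the side of the first/second implant.
    The study's exact contrasts may differ."""
    summation = scores["BIL_S0N0"] - max(scores["CI1_S0N0"], scores["CI2_S0N0"])
    head_shadow = scores["CI1_S0N2nd"] - scores["CI1_S0N1st"]   # noise moved off-ear
    squelch = scores["BIL_S0N2nd"] - scores["CI1_S0N2nd"]       # adding the far ear
    srm = scores["BIL_S0N1st"] - scores["BIL_S0N0"]             # separating the noise
    return {"summation": summation, "head_shadow": head_shadow,
            "squelch": squelch, "srm": srm}

example = {"CI1_S0N0": 48, "CI2_S0N0": 42, "BIL_S0N0": 52,
           "CI1_S0N1st": 40, "CI1_S0N2nd": 60,
           "BIL_S0N1st": 63, "BIL_S0N2nd": 62}
print(bilateral_ci_effects(example))
```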
Wu, Yu-Hsiang; Stangl, Elizabeth; Pang, Carol; Zhang, Xuyang
2014-02-01
Little is known regarding the acoustic features of a stimulus used by listeners to determine the acceptable noise level (ANL). Features suggested by previous research include speech intelligibility (noise is unacceptable when it degrades speech intelligibility to a certain degree; the intelligibility hypothesis) and loudness (noise is unacceptable when the speech-to-noise loudness ratio is poorer than a certain level; the loudness hypothesis). The purpose of the study was to investigate if speech intelligibility or loudness is the criterion feature that determines ANL. To achieve this, test conditions were chosen so that the intelligibility and loudness hypotheses would predict different results. In Experiment 1, the effect of audiovisual (AV) and binaural listening on ANL was investigated; in Experiment 2, the effect of interaural correlation (ρ) on ANL was examined. A single-blinded, repeated-measures design was used. Thirty-two and twenty-five younger adults with normal hearing participated in Experiments 1 and 2, respectively. In Experiment 1, both ANL and speech recognition performance were measured using the AV version of the Connected Speech Test (CST) in three conditions: AV-binaural, auditory only (AO)-binaural, and AO-monaural. Lipreading skill was assessed using the Utley lipreading test. In Experiment 2, ANL and speech recognition performance were measured using the Hearing in Noise Test (HINT) in three binaural conditions, wherein the interaural correlation of noise was varied: ρ = 1 (N(o)S(o) [a listening condition wherein both speech and noise signals are identical across two ears]), -1 (NπS(o) [a listening condition wherein speech signals are identical across two ears whereas the noise signals of two ears are 180 degrees out of phase]), and 0 (N(u)S(o) [a listening condition wherein speech signals are identical across two ears whereas noise signals are uncorrelated across ears]). The results were compared to the predictions made based on the intelligibility and loudness hypotheses. The results of the AV and AO conditions appeared to support the intelligibility hypothesis due to the significant correlation between visual benefit in ANL (AV re: AO ANL) and (1) visual benefit in CST performance (AV re: AO CST) and (2) lipreading skill. The results of the N(o)S(o), NπS(o), and N(u)S(o) conditions negated the intelligibility hypothesis because binaural processing benefit (NπS(o) re: N(o)S(o), and N(u)S(o) re: N(o)S(o)) in ANL was not correlated to that in HINT performance. Instead, the results somewhat supported the loudness hypothesis because the pattern of ANL results across the three conditions (N(o)S(o) ≈ NπS(o) ≈ N(u)S(o) ANL) was more consistent with what was predicted by the loudness hypothesis (N(o)S(o) ≈ NπS(o) < N(u)S(o) ANL) than by the intelligibility hypothesis (NπS(o) < N(u)S(o) < N(o)S(o) ANL). The results of the binaural and monaural conditions supported neither hypothesis because (1) binaural benefit (binaural re: monaural) in ANL was not correlated to that in speech recognition performance, and (2) the pattern of ANL results across conditions (binaural < monaural ANL) was not consistent with the prediction made based on previous binaural loudness summation research (binaural ≥ monaural ANL). The study suggests that listeners may use multiple acoustic features to make ANL judgments. 
The binaural/monaural results showing that neither hypothesis was supported further indicate that factors other than speech intelligibility and loudness, such as psychological factors, may affect ANL. The weightings of different acoustic features in ANL judgments may vary widely across individuals and listening conditions. American Academy of Audiology.
On the temporal window of auditory-brain system in connection with subjective responses
NASA Astrophysics Data System (ADS)
Mouri, Kiminori
2003-08-01
The human auditory-brain system processes information extracted from the autocorrelation function (ACF) of the source signal and the interaural cross-correlation function (IACF) of the binaural sound signals, which are associated with the left and right cerebral hemispheres, respectively. The purpose of this dissertation is to determine the desirable temporal window (2T: integration interval) for the ACF and IACF mechanisms. For the ACF mechanism, the change of Φ(0), i.e., the power of the ACF, was associated with the change of loudness, and it is shown that the recommended temporal window is about 30(τe)min [s]. The value of (τe)min is the minimum value of the effective duration of the running ACF of the source signal. It is worth noting from the EEG experiment that the most preferred delay time of the first reflection is determined by the piece indicating (τe)min in the source signal. For the IACF mechanism, the temporal window is determined as follows: the measured range of τIACC corresponding to the subjective angle of a moving sound image depends on the temporal window. Here, the moving image was simulated by the use of two loudspeakers located at +/-20° in the horizontal plane, reproducing amplitude-modulated band-limited noise alternately. It is found that the temporal window has a wide range of values, from 0.03 to 1 [s], for modulation frequencies below 0.2 Hz. Thesis advisor: Yoichi Ando. Copies of this thesis written in English can be obtained from Kiminori Mouri, 5-3-3-1110 Harayama-dai, Sakai city, Osaka 590-0132, Japan. E-mail address: km529756@aol.com
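The key quantity in the ACF part of this work, the effective duration (τe)min of the running autocorrelation function, can be approximated as shown below. The sketch takes τe as the first lag at which the normalized ACF falls below 0.1; published analyses typically extrapolate a straight-line fit of the early decay on a log scale instead, so this is only an approximation.

```python
import numpy as np

def running_tau_e(signal, fs, frame_s=2.0, hop_s=0.1):
    """Effective duration tau_e of the running normalized ACF, taken here as
    the first lag at which |ACF| falls below 0.1 (the -10 dB point). Published
    analyses usually extrapolate a straight-line fit of the decay on a log
    scale instead, so this is only an approximation."""
    frame, hop = int(frame_s * fs), int(hop_s * fs)
    taus = []
    for start in range(0, len(signal) - frame, hop):
        x = signal[start:start + frame]
        acf = np.correlate(x, x, mode="full")[frame - 1:]     # lags >= 0
        acf = np.abs(acf) / acf[0]                            # normalize by the power
        below = np.flatnonzero(acf < 0.1)
        taus.append(below[0] / fs if below.size else frame / fs)
    return np.array(taus)

fs = 16000
source = np.random.randn(8 * fs)              # placeholder for a music/speech signal
tau_e_min = running_tau_e(source, fs).min()
print("recommended ACF window ~", round(30.0 * tau_e_min, 3), "s")
```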
Binaural speech processing in individuals with auditory neuropathy.
Rance, G; Ryan, M M; Carew, P; Corben, L A; Yiu, E; Tan, J; Delatycki, M B
2012-12-13
Auditory neuropathy disrupts the neural representation of sound and may therefore impair processes contingent upon inter-aural integration. The aims of this study were to investigate binaural auditory processing in individuals with axonal (Friedreich ataxia) and demyelinating (Charcot-Marie-Tooth disease type 1A) auditory neuropathy and to evaluate the relationship between the degree of auditory deficit and overall clinical severity in patients with neuropathic disorders. Twenty-three subjects with genetically confirmed Friedreich ataxia and 12 subjects with Charcot-Marie-Tooth disease type 1A underwent psychophysical evaluation of basic auditory processing (intensity discrimination/temporal resolution) and binaural speech perception assessment using the Listening in Spatialized Noise test. Age, gender and hearing-level-matched controls were also tested. Speech perception in noise for individuals with auditory neuropathy was abnormal for each listening condition, but was particularly affected in circumstances where binaural processing might have improved perception through spatial segregation. Ability to use spatial cues was correlated with temporal resolution suggesting that the binaural-processing deficit was the result of disordered representation of timing cues in the left and right auditory nerves. Spatial processing was also related to overall disease severity (as measured by the Friedreich Ataxia Rating Scale and Charcot-Marie-Tooth Neuropathy Score) suggesting that the degree of neural dysfunction in the auditory system accurately reflects generalized neuropathic changes. Measures of binaural speech processing show promise for application in the neurology clinic. In individuals with auditory neuropathy due to both axonal and demyelinating mechanisms the assessment provides a measure of functional hearing ability, a biomarker capable of tracking the natural history of progressive disease and a potential means of evaluating the effectiveness of interventions. Copyright © 2012 IBRO. Published by Elsevier Ltd. All rights reserved.
The precedence effect for lateralization at low sensation levels.
Goverts, S T; Houtgast, T; van Beek, H H
2000-10-01
Using dichotic signals presented over headphones, stimulus onset dominance (the precedence effect) for lateralization at low sensation levels was investigated in five normal-hearing subjects. Stimuli were based on 2400-Hz low-pass filtered 5-ms noise bursts. We used the paradigm described by Aoki and Houtgast (Hear. Res., 59 (1992) 25-30) and Houtgast and Aoki (Hear. Res., 72 (1994) 29-36), in which the stimulus is divided into a leading and a lagging part with opposite lateralization cues (i.e. an interaural time delay of 0.2 ms). The occurrence of onset dominance was investigated by measuring lateral perception of the stimulus, with fixed equal durations of the leading and lagging parts, while decreasing the absolute signal level or adding a filtered white noise with the signal level set at 65 dBA. The dominance of the leading part was quantified by measuring the perceived lateral position of the stimulus as a function of the relative duration of the leading (and thus the lagging) part. This was done at about 45 dB SL without masking noise and also at a signal-to-noise ratio resulting in a sensation level of 10 dB. The occurrence and strength of the precedence effect were found to depend on sensation level, which was decreased either by lowering the signal level or by adding noise. With the present paradigm, besides a decreased lateralization accuracy, a decrease in the precedence effect was found for sensation levels below about 30-40 dB. In daily-life conditions, with a sensation level in noise of typically 10 dB, the onset dominance was still manifest, albeit degraded to some extent.
Roles for Coincidence Detection in Coding Amplitude-Modulated Sounds
Ashida, Go; Kretzberg, Jutta; Tollin, Daniel J.
2016-01-01
Many sensory neurons encode temporal information by detecting coincident arrivals of synaptic inputs. In the mammalian auditory brainstem, binaural neurons of the medial superior olive (MSO) are known to act as coincidence detectors, whereas in the lateral superior olive (LSO) roles of coincidence detection have remained unclear. LSO neurons receive excitatory and inhibitory inputs driven by ipsilateral and contralateral acoustic stimuli, respectively, and vary their output spike rates according to interaural level differences. In addition, LSO neurons are also sensitive to binaural phase differences of low-frequency tones and envelopes of amplitude-modulated (AM) sounds. Previous physiological recordings in vivo found considerable variations in monaural AM-tuning across neurons. To investigate the underlying mechanisms of the observed temporal tuning properties of LSO and their sources of variability, we used a simple coincidence counting model and examined how specific parameters of coincidence detection affect monaural and binaural AM coding. Spike rates and phase-locking of evoked excitatory and spontaneous inhibitory inputs had only minor effects on LSO output to monaural AM inputs. In contrast, the coincidence threshold of the model neuron affected both the overall spike rates and the half-peak positions of the AM-tuning curve, whereas the width of the coincidence window merely influenced the output spike rates. The duration of the refractory period affected only the low-frequency portion of the monaural AM-tuning curve. Unlike monaural AM coding, temporal factors, such as the coincidence window and the effective duration of inhibition, played a major role in determining the trough positions of simulated binaural phase-response curves. In addition, empirically-observed level-dependence of binaural phase-coding was reproduced in the framework of our minimalistic coincidence counting model. These modeling results suggest that coincidence detection of excitatory and inhibitory synaptic inputs is essential for LSO neurons to encode both monaural and binaural AM sounds. PMID:27322612
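The coincidence-counting framework referred to above reduces to a few operations: count excitatory arrivals inside a sliding window, require a threshold count, veto output spikes while inhibition is effective, and enforce a refractory period. The toy model below follows that recipe with placeholder parameter values; it is not the study's fitted model.

```python
import numpy as np

def lso_coincidence_counter(exc_times, inh_times, window=0.8e-3,
                            threshold=3, inh_duration=1.6e-3,
                            refractory=1.6e-3):
    """Toy coincidence-counting LSO model: fire an output spike whenever at
    least `threshold` excitatory inputs arrive within the sliding coincidence
    window, no inhibitory input is still effective, and the cell is not
    refractory. Parameter values are placeholders, not fitted values."""
    exc_times = np.sort(np.asarray(exc_times))
    inh_times = np.asarray(inh_times)
    out, last_spike = [], -np.inf
    for t in exc_times:
        if t - last_spike < refractory:
            continue
        n_coinc = np.count_nonzero((exc_times > t - window) & (exc_times <= t))
        inhibited = np.any((inh_times <= t) & (t - inh_times < inh_duration))
        if n_coinc >= threshold and not inhibited:
            out.append(t)
            last_spike = t
    return np.array(out)

rng = np.random.default_rng(0)
exc = rng.uniform(0.0, 0.5, 600)      # ipsilateral (excitatory) input spikes, s
inh = rng.uniform(0.0, 0.5, 200)      # contralateral (inhibitory) input spikes, s
print(lso_coincidence_counter(exc, inh).size, "output spikes in 0.5 s")
```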
Adaptive spatial filtering improves speech reception in noise while preserving binaural cues.
Bissmeyer, Susan R S; Goldsworthy, Raymond L
2017-09-01
Hearing loss greatly reduces an individual's ability to comprehend speech in the presence of background noise. Over the past decades, numerous signal-processing algorithms have been developed to improve speech reception in these situations for cochlear implant and hearing aid users. One challenge is to reduce background noise while not introducing interaural distortion that would degrade binaural hearing. The present study evaluates a noise reduction algorithm, referred to as binaural Fennec, that was designed to improve speech reception in background noise while preserving binaural cues. Speech reception thresholds were measured for normal-hearing listeners in a simulated environment with target speech generated in front of the listener and background noise originating 90° to the right of the listener. Lateralization thresholds were also measured in the presence of background noise. These measures were conducted in anechoic and reverberant environments. Results indicate that the algorithm improved speech reception thresholds, even in highly reverberant environments. Results indicate that the algorithm also improved lateralization thresholds for the anechoic environment while not affecting lateralization thresholds for the reverberant environments. These results provide clear evidence that this algorithm can improve speech reception in background noise while preserving binaural cues used to lateralize sound.
The role of reverberation-related binaural cues in the externalization of speech.
Catic, Jasmina; Santurette, Sébastien; Dau, Torsten
2015-08-01
The perception of externalization of speech sounds was investigated with respect to the monaural and binaural cues available at the listeners' ears in a reverberant environment. Individualized binaural room impulse responses (BRIRs) were used to simulate externalized sound sources via headphones. The measured BRIRs were subsequently modified such that the proportion of the response containing binaural vs monaural information was varied. Normal-hearing listeners were presented with speech sounds convolved with such modified BRIRs. Monaural reverberation cues were found to be sufficient for the externalization of a lateral sound source. In contrast, for a frontal source, an increased amount of binaural cues from reflections was required in order to obtain well externalized sound images. It was demonstrated that the interaction between the interaural cues of the direct sound and the reverberation strongly affects the perception of externalization. An analysis of the short-term binaural cues showed that the amount of fluctuations of the binaural cues corresponded well to the externalization ratings obtained in the listening tests. The results further suggested that the precedence effect is involved in the auditory processing of the dynamic binaural cues that are utilized for externalization perception.
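Rendering such headphone stimuli comes down to convolving the dry speech with the left- and right-ear BRIRs, as sketched below. How the reverberant portions of the two BRIRs are mixed or monauralized, which is the study's actual manipulation, is left out, and the toy impulse responses are placeholders.

```python
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(speech, brir_left, brir_right):
    """Headphone simulation of an externalized source: convolve dry speech
    with the left- and right-ear binaural room impulse responses. The
    study's manipulations (keeping monaural vs. binaural reverberation
    cues) would be applied to the late parts of the two BRIRs beforehand."""
    return np.stack([fftconvolve(speech, brir_left),
                     fftconvolve(speech, brir_right)], axis=1)

fs = 44100
speech = np.random.randn(fs)                       # stand-in for a dry recording
brir_l = np.zeros(int(0.4 * fs)); brir_l[0] = 1.0  # toy impulse responses
brir_r = np.zeros(int(0.4 * fs)); brir_r[30] = 0.8
print(render_binaural(speech, brir_l, brir_r).shape)
```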
Relation Between Cochlear Mechanics and Performance of Temporal Fine Structure-Based Tasks.
Otsuka, Sho; Furukawa, Shigeto; Yamagishi, Shimpei; Hirota, Koich; Kashino, Makio
2016-12-01
This study examined whether the mechanical characteristics of the cochlea could influence individual variation in the ability to use temporal fine structure (TFS) information. Cochlear mechanical functioning was evaluated by swept-tone evoked otoacoustic emissions (OAEs), which are thought to comprise linear reflection by micromechanical impedance perturbations, such as spatial variations in the number or geometry of outer hair cells, on the basilar membrane (BM). Low-rate (2 Hz) frequency modulation detection limens (FMDLs) were measured for carrier frequency of 1000 Hz and interaural phase difference (IPD) thresholds as indices of TFS sensitivity and high-rate (16 Hz) FMDLs and amplitude modulation detection limens (AMDLs) as indices of sensitivity to non-TFS cues. Significant correlations were found among low-rate FMDLs, low-rate AMDLs, and IPD thresholds (R = 0.47-0.59). A principal component analysis was used to show a common factor that could account for 81.1, 74.1, and 62.9 % of the variance in low-rate FMDLs, low-rate AMDLs, and IPD thresholds, respectively. An OAE feature, specifically a characteristic dip around 2-2.5 kHz in OAE spectra, showed a significant correlation with the common factor (R = 0.54). High-rate FMDLs and AMDLs were correlated with each other (R = 0.56) but not with the other measures. The results can be interpreted as indicating that (1) the low-rate AMDLs, as well as the IPD thresholds and low-rate FMDLs, depend on the use of TFS information coded in neural phase locking and (2) the use of TFS information is influenced by a particular aspect of cochlear mechanics, such as mechanical irregularity along the BM.
Frequency tagging to track the neural processing of contrast in fast, continuous sound sequences.
Nozaradan, Sylvie; Mouraux, André; Cousineau, Marion
2017-07-01
The human auditory system presents a remarkable ability to detect rapid changes in fast, continuous acoustic sequences, as best illustrated in speech and music. However, the neural processing of rapid auditory contrast remains largely unclear, probably due to the lack of methods to objectively dissociate the response components specifically related to the contrast from the other components of the response to the sequence of fast continuous sounds. To overcome this issue, we tested a novel use of the frequency-tagging approach that allows contrast-specific neural responses to be tracked based on their expected frequencies. The EEG was recorded while participants listened to 40-s sequences of sounds presented at 8 Hz. A tone or interaural time contrast was embedded in every fifth sound (AAAAB), such that a response observed in the EEG at exactly 8 Hz/5 (1.6 Hz) or its harmonics should be the signature of contrast processing by neural populations. Contrast-related responses were successfully identified, even in the case of very fine contrasts. Moreover, analysis of the time course of the responses revealed a stable amplitude over repetitions of the AAAAB patterns in the sequence, except for the response to perceptually salient contrasts, which showed a buildup and decay across repetitions of the sounds. Overall, this new combination of frequency tagging with an oddball design provides a valuable complement to the classic transient evoked-potentials approach, especially in the context of rapid auditory information. Specifically, we provide objective evidence on the neural processing of contrast embedded in fast, continuous sound sequences. NEW & NOTEWORTHY Recent theories suggest that the basis of neurodevelopmental auditory disorders such as dyslexia might be an impaired processing of fast auditory changes, highlighting how the encoding of rapid acoustic information is critical for auditory communication. Here, we present a novel electrophysiological approach to capture, in humans, neural markers of contrasts in fast continuous tone sequences. Contrast-specific responses were successfully identified, even for very fine contrasts, providing direct insight into the encoding of rapid auditory information. Copyright © 2017 the American Physiological Society.
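Reading out the frequency-tagged responses amounts to inspecting the EEG amplitude spectrum at the stimulation rate (8 Hz) and at the contrast rate (8/5 = 1.6 Hz) and its harmonics. The sketch below shows that readout on a synthetic epoch; the noise-floor correction from neighbouring bins that is usually applied is omitted.

```python
import numpy as np

def tagged_amplitudes(eeg, fs, freqs=(8.0, 1.6, 3.2, 4.8, 6.4)):
    """Amplitude spectrum of one EEG epoch read out at the base rate (8 Hz)
    and the contrast-related frequency (8/5 = 1.6 Hz) and its harmonics.
    A sketch: the usual subtraction of the noise floor estimated from
    neighbouring frequency bins is omitted."""
    n = len(eeg)
    amp = 2.0 * np.abs(np.fft.rfft(eeg)) / n
    freq_axis = np.fft.rfftfreq(n, d=1.0 / fs)
    return {f: amp[np.argmin(np.abs(freq_axis - f))] for f in freqs}

fs = 512
t = np.arange(0.0, 40.0, 1.0 / fs)                 # one 40-s sequence
eeg = (0.5 * np.sin(2 * np.pi * 8.0 * t)           # response to every sound
       + 0.2 * np.sin(2 * np.pi * 1.6 * t)         # contrast-specific response
       + np.random.randn(t.size))                  # background noise
print(tagged_amplitudes(eeg, fs))
```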
Magnetoencephalographic responses in relation to temporal and spatial factors of sound fields
NASA Astrophysics Data System (ADS)
Soeta, Yoshiharu; Nakagawa, Seiji; Tonoike, Mitsuo; Hotehama, Takuya; Ando, Yoichi
2004-05-01
To establish guidelines based on brain function for designing sound fields such as concert halls and opera houses, the activity of the human brain in response to the temporal and spatial factors of the sound field has been investigated using magnetoencephalography (MEG). MEG is a noninvasive technique for investigating neuronal activity in the human brain. First, the auditory evoked responses to changes in the magnitude of the interaural cross-correlation (IACC) were analyzed. The IACC is one of the spatial factors that has a great influence on the degree of subjective preference and diffuseness of sound fields. The results indicated that the peak amplitude of N1m, which was found over the left and right temporal lobes around 100 ms after stimulus onset, decreased with increasing IACC. Second, the responses corresponding to subjective preference for one of the typical temporal factors, i.e., the initial delay gap between the direct sound and the first reflection, were investigated. The results showed that the effective duration of the autocorrelation function of the MEG activity between 8 and 13 Hz became longer during presentation of a preferred stimulus. These results indicate that the brain may relax and repeat a similar temporal rhythm under preferred sound fields.
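The IACC referred to above is conventionally defined as the maximum of the normalized interaural cross-correlation within +/-1 ms of interaural delay. A minimal sketch of that computation follows; it assumes the standard room-acoustics definition rather than any lab-specific variant.

```python
import numpy as np

def iacc(left, right, fs, max_lag_ms=1.0):
    """Magnitude of the interaural cross-correlation (IACC): the maximum of
    the normalized cross-correlation between the two ear signals within
    +/- 1 ms of interaural delay, following the usual room-acoustics
    definition."""
    max_lag = int(round(max_lag_ms * 1e-3 * fs))
    norm = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    full = np.correlate(left, right, mode="full") / norm
    zero_lag = len(left) - 1                    # zero-lag index for equal lengths
    return np.max(full[zero_lag - max_lag: zero_lag + max_lag + 1])

fs = 48000
noise = np.random.randn(fs)
print(round(iacc(noise, noise, fs), 2))                    # identical ears -> 1.0
print(round(iacc(noise, np.random.randn(fs), fs), 2))      # diffuse-like -> near 0
```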
Predicting the Overall Spatial Quality of Automotive Audio Systems
NASA Astrophysics Data System (ADS)
Koya, Daisuke
The spatial quality of automotive audio systems is often compromised by their unideal listening environments. Automotive audio systems also need to be developed quickly due to industry demands. A suitable perceptual model could evaluate the spatial quality of automotive audio systems with reliability similar to that of formal listening tests, but in less time. Such a model is developed in this research project by adapting an existing model of spatial quality for automotive audio use. The requirements for the adaptation were investigated in a literature review. A perceptual model called QESTRAL was reviewed, which predicts the overall spatial quality of domestic multichannel audio systems. It was determined that automotive audio systems are likely to be impaired in terms of spatial attributes that were not considered in developing the QESTRAL model, but metrics are available that might predict these attributes. To establish whether the QESTRAL model in its current form can accurately predict the overall spatial quality of automotive audio systems, MUSHRA listening tests using headphone auralisation with head tracking were conducted to collect results to be compared against predictions by the model. Based on guideline criteria, the model in its current form could not accurately predict the overall spatial quality of automotive audio systems. To improve prediction performance, the QESTRAL model was recalibrated and modified using existing metrics of the model, metrics proposed in the literature review, and newly developed metrics. The most important metrics for predicting the overall spatial quality of automotive audio systems included those based on the interaural cross-correlation (IACC), those relating to localisation of the frontal audio scene, and those accounting for the perceived scene width in front of the listener. Modifying the model for automotive audio systems did not invalidate its use for domestic audio systems. The resulting model predicts the overall spatial quality of 2- and 5-channel automotive audio systems with a cross-validation performance of R^2 = 0.85 and a root-mean-square error (RMSE) of 11.03%.
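The two cross-validation figures quoted at the end are standard regression metrics. The sketch below shows how they would be computed from predicted and measured spatial-quality scores; note that R^2 is taken here as the coefficient of determination, whereas the thesis may report a squared correlation.

```python
import numpy as np

def validation_metrics(predicted, observed):
    """Coefficient of determination R^2 and root-mean-square error (RMSE)
    between model predictions and listening-test scores, both in the same
    percentage units as the MUSHRA spatial-quality ratings. The thesis may
    define R^2 as a squared correlation instead; this is one common choice."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    resid = observed - predicted
    rmse = np.sqrt(np.mean(resid ** 2))
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((observed - observed.mean()) ** 2)
    return r2, rmse

r2, rmse = validation_metrics([22, 48, 71, 90], [25, 40, 75, 85])
print(round(r2, 2), round(rmse, 2))
```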
Sound localization by echolocating bats
NASA Astrophysics Data System (ADS)
Aytekin, Murat
Echolocating bats emit ultrasonic vocalizations and listen to echoes reflected back from objects in the path of the sound beam to build a spatial representation of their surroundings. Important to understanding the representation of space through echolocation are detailed studies of the cues used for localization, the sonar emission patterns and how this information is assembled. This thesis includes three studies, one on the directional properties of the sonar receiver, one on the directional properties of the sonar transmitter, and a model that demonstrates the role of action in building a representation of auditory space. The general importance of this work to a broader understanding of spatial localization is discussed. Investigations of the directional properties of the sonar receiver reveal that interaural level difference and monaural spectral notch cues are both dependent on sound source azimuth and elevation. This redundancy allows flexibility that an echolocating bat may need when coping with complex computational demands for sound localization. Using a novel method to measure bat sonar emission patterns from freely behaving bats, I show that the sonar beam shape varies between vocalizations. Consequently, the auditory system of a bat may need to adapt its computations to accurately localize objects using changing acoustic inputs. Extra-auditory signals that carry information about pinna position and beam shape are required for auditory localization of sound sources. The auditory system must learn associations between extra-auditory signals and acoustic spatial cues. Furthermore, the auditory system must adapt to changes in acoustic input that occur with changes in pinna position and vocalization parameters. These demands on the nervous system suggest that sound localization is achieved through the interaction of behavioral control and acoustic inputs. A sensorimotor model demonstrates how an organism can learn space through auditory-motor contingencies. The model also reveals how different aspects of sound localization, such as experience-dependent acquisition, adaptation, and extra-auditory influences, can be brought together under a comprehensive framework. This thesis presents a foundation for understanding the representation of auditory space that builds upon acoustic cues, motor control, and learning dynamic associations between action and auditory inputs.
Learning for pitch and melody discrimination in congenital amusia.
Whiteford, Kelly L; Oxenham, Andrew J
2018-06-01
Congenital amusia is currently thought to be a life-long neurogenetic disorder in music perception, impervious to training in pitch or melody discrimination. This study provides an explicit test of whether amusic deficits can be reduced with training. Twenty amusics and 20 matched controls participated in four sessions of psychophysical training involving either pure-tone (500 Hz) pitch discrimination or a control task of lateralization (interaural level differences for bandpass white noise). Pure-tone pitch discrimination at low, medium, and high frequencies (500, 2000, and 8000 Hz) was measured before and after training (pretest and posttest) to determine the specificity of learning. Melody discrimination was also assessed before and after training using the full Montreal Battery of Evaluation of Amusia, the most widely used standardized test to diagnose amusia. Amusics performed more poorly than controls in pitch but not localization discrimination, but both groups improved with practice on the trained stimuli. Learning was broad, occurring across all three frequencies and melody discrimination for all groups, including those who trained on the non-pitch control task. Following training, 11 of 20 amusics no longer met the global diagnostic criteria for amusia. A separate group of untrained controls (n = 20), who also completed melody discrimination and pretest, improved by an equal amount as trained controls on all measures, suggesting that the bulk of learning for the control group occurred very rapidly from the pretest. Thirty-one trained participants (13 amusics) returned one year later to assess long-term maintenance of pitch and melody discrimination. On average, there was no change in performance between posttest and one-year follow-up, demonstrating that improvements on pitch- and melody-related tasks in amusics and controls can be maintained. The findings indicate that amusia is not always a life-long deficit when using the current standard diagnostic criteria. Copyright © 2018 Elsevier Ltd. All rights reserved.
Contralateral Effects and Binaural Interactions in Dorsal Cochlear Nucleus
2005-01-01
The dorsal cochlear nucleus (DCN) receives afferent input from the auditory nerve and is thus usually thought of as a monaural nucleus, but it also receives inputs from the contralateral cochlear nucleus as well as descending projections from binaural nuclei. Evidence suggests that some of these commissural and efferent projections are excitatory, whereas others are inhibitory. The goals of this study were to investigate the nature and effects of these inputs in the DCN by measuring DCN principal cell (type IV unit) responses to a variety of contralateral monaural and binaural stimuli. As expected, the results of contralateral stimulation demonstrate a mixture of excitatory and inhibitory influences, although inhibitory effects predominate. Most type IV units are weakly, if at all, inhibited by tones but are strongly inhibited by broadband noise (BBN). The inhibition evoked by BBN is also low threshold and short latency. This inhibition is abolished and excitation is revealed when strychnine, a glycine-receptor antagonist, is applied to the DCN; application of bicuculline, a GABAA-receptor antagonist, has similar effects but does not block the onset of inhibition. Manipulations of discrete fiber bundles suggest that the inhibitory, but not excitatory, inputs to DCN principal cells enter the DCN via its output pathway, and that the short latency inhibition is carried by commissural axons. Consistent with their respective monaural effects, responses to binaural tones as a function of interaural level difference are essentially the same as responses to ipsilateral tones, whereas binaural BBN responses decrease with increasing contralateral level. In comparison to monaural responses, binaural responses to virtual space stimuli show enhanced sensitivity to the elevation of a sound source in ipsilateral space but reduced sensitivity in contralateral space. These results show that the contralateral inputs to the DCN are functionally relevant in natural listening conditions, and that one role of these inputs is to enhance DCN processing of spectral sound localization cues produced by the pinna. PMID:16075189
Li, Huahui; Kong, Lingzhi; Wu, Xihong; Li, Liang
2013-01-01
In reverberant rooms with multiple people talking, spatial separation between speech sources improves recognition of attended speech, even though both the head-shadowing and interaural-interaction unmasking cues are limited by numerous reflections. It is the perceptual integration between the direct wave and its reflections that bridges the direct-reflection temporal gaps and results in spatial unmasking under reverberant conditions. This study further investigated (1) the temporal dynamics of the direct-reflection-integration-based spatial unmasking as a function of the reflection delay, and (2) whether these temporal dynamics are correlated with the listeners' auditory ability to temporally retain raw acoustic signals (i.e., the fast-decaying primitive auditory memory, PAM). The results showed that recognition of the target speech against the speech-masker background is a descending exponential function of the delay of the simulated target reflection. In addition, the temporal extent of PAM is frequency dependent and markedly longer than that for perceptual fusion. More importantly, the temporal dynamics of the speech-recognition function are significantly correlated with the temporal extent of the PAM of low-frequency raw signals. Thus, we propose that a chain process, which links the earlier-stage PAM with later-stage correlation computation, perceptual integration, and attention facilitation, plays a role in spatially unmasking target speech under reverberant conditions. PMID:23658664
Gravito-Inertial Force Resolution in Perception of Synchronized Tilt and Translation
NASA Technical Reports Server (NTRS)
Wood, Scott J.; Holly, Jan; Zhang, Guen-Lu
2011-01-01
Natural movements in the sagittal plane involve pitch tilt relative to gravity combined with translation motion. The Gravito-Inertial Force (GIF) resolution hypothesis states that the resultant force on the body is perceptually resolved into tilt and translation consistently with the laws of physics. The purpose of this study was to test this hypothesis for human perception during combined tilt and translation motion. EXPERIMENTAL METHODS: Twelve subjects provided verbal reports during 0.3 Hz motion in the dark with 4 types of tilt and/or translation motion: 1) pitch tilt about an interaural axis at +/-10deg or +/-20deg, 2) fore-aft translation with acceleration equivalent to +/-10deg or +/-20deg, 3) combined "in phase" tilt and translation motion resulting in acceleration equivalent to +/-20deg, and 4) "out of phase" tilt and translation motion that maintained the resultant gravito-inertial force aligned with the longitudinal body axis. The amplitude of perceived pitch tilt and translation at the head were obtained during separate trials. MODELING METHODS: Three-dimensional mathematical modeling was performed to test the GIF-resolution hypothesis using a dynamical model. The model encoded GIF-resolution using the standard vector equation, and used an internal model of motion parameters, including gravity. Differential equations conveyed time-varying predictions. The six motion profiles were tested, resulting in predicted perceived amplitude of tilt and translation for each. RESULTS: The modeling results exhibited the same pattern as the experimental results. Most importantly, both modeling and experimental results showed greater perceived tilt during the "in phase" profile than the "out of phase" profile, and greater perceived tilt during combined "in phase" motion than during pure tilt of the same amplitude. However, the model did not predict as much perceived translation as reported by subjects during pure tilt. CONCLUSION: Human perception is consistent with the GIF-resolution hypothesis even when the gravito-inertial force vector remains aligned with the body during periodic motion. Perception is also consistent with GIF-resolution in the opposite condition, when the gravito-inertial force vector angle is enhanced by synchronized tilt and translation.
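The "standard vector equation" referred to above is usually stated as the gravito-inertial (specific) force being the difference between gravity and head acceleration, so that any internal resolution of the measured force into tilt and translation must satisfy the same constraint (sign convention assumed here):

    \[ \vec{f} = \vec{g} - \vec{a}, \qquad \hat{g} - \hat{a} = \vec{f}, \]

where f is the gravito-inertial force sensed by the otoliths, g is gravity, a is the linear acceleration of the head, and the hatted quantities are the perceptual estimates of tilt and translation into which f is resolved.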
Role of Binaural Temporal Fine Structure and Envelope Cues in Cocktail-Party Listening.
Swaminathan, Jayaganesh; Mason, Christine R; Streeter, Timothy M; Best, Virginia; Roverud, Elin; Kidd, Gerald
2016-08-03
While conversing in a crowded social setting, a listener is often required to follow a target speech signal amid multiple competing speech signals (the so-called "cocktail party" problem). In such situations, separation of the target speech signal in azimuth from the interfering masker signals can lead to an improvement in target intelligibility, an effect known as spatial release from masking (SRM). This study assessed the contributions of two stimulus properties that vary with separation of sound sources, binaural envelope (ENV) and temporal fine structure (TFS), to SRM in normal-hearing (NH) human listeners. Target speech was presented from the front and speech maskers were either colocated with or symmetrically separated from the target in azimuth. The target and maskers were presented either as natural speech or as "noise-vocoded" speech in which the intelligibility was conveyed only by the speech ENVs from several frequency bands; the speech TFS within each band was replaced with noise carriers. The experiments were designed to preserve the spatial cues in the speech ENVs while retaining/eliminating them from the TFS. This was achieved by using the same/different noise carriers in the two ears. A phenomenological auditory-nerve model was used to verify that the interaural correlations in TFS differed across conditions, whereas the ENVs retained a high degree of correlation, as intended. Overall, the results from this study revealed that binaural TFS cues, especially for frequency regions below 1500 Hz, are critical for achieving SRM in NH listeners. Potential implications for studying SRM in hearing-impaired listeners are discussed. Acoustic signals received by the auditory system pass first through an array of physiologically based band-pass filters. Conceptually, at the output of each filter, there are two principal forms of temporal information: slowly varying fluctuations in the envelope (ENV) and rapidly varying fluctuations in the temporal fine structure (TFS). The importance of these two types of information in everyday listening (e.g., conversing in a noisy social situation; the "cocktail-party" problem) has not been established. This study assessed the contributions of binaural ENV and TFS cues for understanding speech in multiple-talker situations. Results suggest that, whereas the ENV cues are important for speech intelligibility, binaural TFS cues are critical for perceptually segregating the different talkers and thus for solving the cocktail party problem. Copyright © 2016 the authors 0270-6474/16/368250-08$15.00/0.
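A minimal single-band sketch of the noise-vocoding manipulation described above, in Python; it is not the authors' processing chain, and the filter order, band edges, and normalization are arbitrary illustrative choices:

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def vocode_band(speech, noise, fs, f_lo, f_hi):
        """Replace the temporal fine structure (TFS) of one analysis band
        with a noise carrier while keeping the band's envelope (ENV)."""
        b, a = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs)
        band = filtfilt(b, a, speech)              # analysis band of the speech
        env = np.abs(hilbert(band))                # slowly varying ENV
        carrier = filtfilt(b, a, noise)            # band-limited noise TFS
        carrier /= np.sqrt(np.mean(carrier ** 2)) + 1e-12
        return env * carrier

    # Using the SAME noise at both ears keeps the TFS interaurally correlated;
    # using INDEPENDENT noises removes binaural TFS cues while leaving the
    # speech ENV (and hence ENV-based cues) intact.
    fs = 16000
    speech = np.random.randn(fs)        # stand-in for one second of speech
    noise_left = np.random.randn(fs)
    noise_right = np.random.randn(fs)   # independent noise -> decorrelated TFS
    left = vocode_band(speech, noise_left, fs, 300, 600)
    right = vocode_band(speech, noise_right, fs, 300, 600)

A full vocoder would repeat this over several contiguous bands and sum the band outputs in each ear.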
NASA Astrophysics Data System (ADS)
Fujii, Kenji
2002-06-01
This dissertation introduces the correlation mechanism as a model of processes in visual perception. It has been well established that the correlation mechanism is effective for describing subjective attributes in auditory perception. The main result is that the correlation mechanism can be applied to temporal and spatial vision, as well as to audition. (1) A psychophysical experiment was performed on subjective flicker rates for complex waveforms. A remarkable result is that the missing-fundamental phenomenon is found in temporal vision, analogous to auditory pitch perception. This implies the existence of a correlation mechanism in the visual system. (2) For spatial vision, autocorrelation analysis provides useful measures for describing three primary perceptual properties of visual texture: contrast, coarseness, and regularity. Another experiment showed that the degree of regularity is a salient cue for texture preference judgments. (3) In addition, the autocorrelation function (ACF) and interaural cross-correlation function (IACF) were applied to the analysis of the temporal and spatial properties of environmental noise. It was confirmed that the acoustical properties of aircraft noise and traffic noise are well described by these functions. These analyses provided useful parameters, extracted from the ACF and IACF, for assessing subjective annoyance to noise. Thesis advisor: Yoichi Ando. Copies of this thesis, written in English, can be obtained from Junko Atagi, 6813 Mosonou, Saijo-cho, Higashi-Hiroshima 739-0024, Japan. E-mail address: atagi@urban.ne.jp.
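For concreteness, a minimal sketch of the two correlation measures named above; the normalization and lag range are illustrative choices and do not reproduce Ando's running short-time integration:

    import numpy as np

    def normalized_acf(x, max_lag):
        """Normalized autocorrelation function (ACF); phi(0) = 1."""
        x = x - np.mean(x)
        denom = np.dot(x, x)
        return np.array([np.dot(x[:len(x) - k], x[k:]) / denom
                         for k in range(max_lag + 1)])

    def iacf(left, right, max_lag):
        """Interaural cross-correlation function (IACF) over +/- max_lag samples."""
        l = left - np.mean(left)
        r = right - np.mean(right)
        norm = np.sqrt(np.dot(l, l) * np.dot(r, r))
        lags = np.arange(-max_lag, max_lag + 1)
        vals = []
        for k in lags:
            if k >= 0:
                vals.append(np.dot(l[:len(l) - k], r[k:]) / norm)
            else:
                vals.append(np.dot(l[-k:], r[:len(r) + k]) / norm)
        return lags, np.array(vals)

    # The IACC (maximum of the IACF within roughly +/-1 ms) is a common
    # single-number summary of interaural similarity.
    fs = 44100
    left = np.random.randn(fs)
    right = np.roll(left, 20)                      # ~0.45 ms interaural delay
    acf_left = normalized_acf(left, 200)
    lags, phi = iacf(left, right, int(0.001 * fs))
    iacc = np.max(np.abs(phi))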
Stereotactic Radiosurgery for Acoustic Neuromas: What Happens Long Term?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roos, Daniel E., E-mail: daniel.roos@health.sa.gov.au; University of Adelaide School of Medicine, Adelaide, South Australia; Potter, Andrew E.
2012-03-15
Purpose: To determine the clinical outcomes for acoustic neuroma treated with low-dose linear accelerator stereotactic radiosurgery (SRS) >10 years earlier at the Royal Adelaide Hospital using data collected prospectively at a dedicated SRS clinic. Methods and Materials: Between November 1993 and December 2000, 51 patients underwent SRS for acoustic neuroma. For the 44 patients with primary SRS for sporadic (unilateral) lesions, the median age was 63 years, the median of the maximal tumor diameter was 21 mm (range, 11-34), and the marginal dose was 14 Gy for the first 4 patients and 12 Gy for the other 40. Results: The crude tumor control rate was 97.7% (1 patient required salvage surgery for progression at 9.75 years). Only 8 (29%) of 28 patients ultimately retained useful hearing (interaural pure tone average ≤50 dB). Also, although the Kaplan-Meier estimated rate of hearing preservation at 5 years was 57% (95% confidence interval, 38-74%), this decreased to 24% (95% confidence interval, 11-44%) at 10 years. New or worsened V and VII cranial neuropathy occurred in 11% and 2% of patients, respectively; all cases were transient. No case of radiation oncogenesis developed. Conclusions: The long-term follow-up data of low-dose (12-14 Gy) linear accelerator SRS for acoustic neuroma have confirmed excellent tumor control and acceptable cranial neuropathy rates but a continual decrease in hearing preservation out to ≥10 years.
Physiology of primary saccular afferents of goldfish: implications for Mauthner cell response.
Fay, R R
1995-01-01
Mauthner cells receive neurally coded information from the otolith organs in fishes, and it is most likely that initiation and directional characteristics of the C-start response depend on this input. In the goldfish, saccular afferents are sensitive to sound pressure (< -30 dB re: 1 dyne cm-2) in the most sensitive frequency range (200 to 800 Hz). This input arises from volume fluctuations of the swimbladder in response to the sound pressure waveform and is thus nondirectional. Primary afferents of the saccule, lagena, and utricle of the goldfish also respond with great sensitivity to acoustic particle motion (< 1 nanometer between 100 and 200 Hz). This input arises from the acceleration of the fish in a sound field and is inherently directional. Saccular afferents can be divided into two groups based on their tuning: one group is tuned at about 250 Hz, and the other tuned between 400 Hz and 1 kHz. All otolithic primary afferents phaselock to sinusoids throughout the frequency range of hearing (up to about 2 kHz). Based on physiological and behavioral studies on Mauthner cells, it appears that highly correlated binaural input to the M-cell, from the sacculi responding to sound pressure, may be required for a decision to respond but that the direction of the response is extracted from small deviations from a perfect interaural correlation arising from the directional response of otolith organs to acoustic particle motion.
Perception of the dynamic visual vertical during sinusoidal linear motion.
Pomante, A; Selen, L P J; Medendorp, W P
2017-10-01
The vestibular system provides information for spatial orientation. However, this information is ambiguous: because the otoliths sense the gravitoinertial force, they cannot distinguish gravitational and inertial components. As a consequence, prolonged linear acceleration of the head can be interpreted as tilt, referred to as the somatogravic effect. Previous modeling work suggests that the brain disambiguates the otolith signal according to the rules of Bayesian inference, combining noisy canal cues with the a priori assumption that prolonged linear accelerations are unlikely. Within this modeling framework the noise of the vestibular signals affects the dynamic characteristics of the tilt percept during linear whole-body motion. To test this prediction, we devised a novel paradigm to psychometrically characterize the dynamic visual vertical (as a proxy for the tilt percept) during passive sinusoidal linear motion along the interaural axis (0.33 Hz motion frequency, 1.75 m/s² peak acceleration, 80 cm displacement). While subjects (n = 10) kept fixation on a central body-fixed light, a line was briefly flashed (5 ms) at different phases of the motion, the orientation of which had to be judged relative to gravity. Consistent with the model's prediction, subjects showed a phase-dependent modulation of the dynamic visual vertical, with a subject-specific phase shift with respect to the imposed acceleration signal. The magnitude of this modulation was smaller than predicted, suggesting a contribution of nonvestibular signals to the dynamic visual vertical. Despite their dampening effect, our findings may point to a link between the noise components in the vestibular system and the characteristics of dynamic visual vertical. NEW & NOTEWORTHY A fundamental question in neuroscience is how the brain processes vestibular signals to infer the orientation of the body and objects in space. We show that, under sinusoidal linear motion, systematic error patterns appear in the disambiguation of linear acceleration and spatial orientation. We discuss the dynamics of these illusory percepts in terms of a dynamic Bayesian model that combines uncertainty in the vestibular signals with priors based on the natural statistics of head motion. Copyright © 2017 the American Physiological Society.
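For orientation, the stimulus parameters above are internally consistent if the 80 cm figure is read as peak-to-peak displacement (an assumption made here): for sinusoidal motion x(t) = A sin(2π f t) with A = 0.40 m and f = 0.33 Hz,

    \[ a_{\mathrm{peak}} = (2\pi f)^2 A = (2\pi \times 0.33\ \mathrm{Hz})^2 \times 0.40\ \mathrm{m} \approx 1.7\ \mathrm{m/s^2}, \]

which matches the stated 1.75 m/s² peak acceleration to within rounding.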
Binocular Coordination of the Human Vestibulo-Ocular Reflex during Off-axis Pitch Rotation
NASA Technical Reports Server (NTRS)
Wood, S. J.; Reschke, M. F.; Kaufman, G. D.; Black, F. O.; Paloski, W. H.
2006-01-01
Head movements in the sagittal pitch plane typically involve off-axis rotation requiring both vertical and horizontal vergence ocular reflexes to compensate for angular and translational motion relative to visual targets of interest. The purpose of this study was to compare passive pitch VOR responses during rotation about an Earth-vertical axis (canal only cues) with off-axis rotation (canal and otolith cues). Methods. Eleven human subjects were oscillated sinusoidally at 0.13, 0.3 and 0.56 Hz while lying left-side down with the interaural axis either aligned with the axis of rotation or offset by 50 cm. In a second set of measurements, twelve subjects were also tested during sinusoidally varying centrifugation over the same frequency range. The modulation of vertical and horizontal vergence ocular responses was measured with a binocular videography system. Results. Off-axis pitch rotation enhanced the vertical VOR at lower frequencies and enhanced the vergence VOR at higher frequencies. During sinusoidally varying centrifugation, the opposite trend was observed for vergence, with both vertical and vergence vestibulo-ocular reflexes being suppressed at the highest frequency. Discussion. These differential effects of off-axis rotation over the 0.13 to 0.56 Hz range are consistent with the hypothesis that otolith-ocular reflexes are segregated in part on the basis of stimulus frequency. At the lower frequencies, tilt otolith-ocular responses compensate for declining canal input. At higher frequencies, translational otolith-ocular reflexes compensate for declining visual contributions to the kinematic demands required for fixating near targets.
Monaural Sound Localization Revisited
NASA Technical Reports Server (NTRS)
Wightman, Frederic L.; Kistler, Doris J.
1997-01-01
Research reported during the past few decades has revealed the importance for human sound localization of the so-called 'monaural spectral cues.' These cues are the result of the direction-dependent filtering of incoming sound waves accomplished by the pinnae. One point of view about how these cues are extracted places great emphasis on the spectrum of the received sound at each ear individually. This leads to the suggestion that an effective way of studying the influence of these cues is to measure the ability of listeners to localize sounds when one of their ears is plugged. Numerous studies have appeared using this monaural localization paradigm. Three experiments are described here which are intended to clarify the results of the previous monaural localization studies and provide new data on how monaural spectral cues might be processed. Virtual sound sources are used in the experiments in order to manipulate and control the stimuli independently at the two ears. Two of the experiments deal with the consequences of the incomplete monauralization that may have contaminated previous work. The results suggest that even very low sound levels in the occluded ear provide access to interaural localization cues. The presence of these cues complicates the interpretation of the results of nominally monaural localization studies. The third experiment concerns the role of prior knowledge of the source spectrum, which is required if monaural cues are to be useful. The results of this last experiment demonstrate that extraction of monaural spectral cues can be severely disrupted by trial-to-trial fluctuations in the source spectrum. The general conclusion of the experiments is that, while monaural spectral cues are important, the monaural localization paradigm may not be the most appropriate way to study their role.
Monaural sound localization revisited.
Wightman, F L; Kistler, D J
1997-02-01
Research reported during the past few decades has revealed the importance for human sound localization of the so-called "monaural spectral cues." These cues are the result of the direction-dependent filtering of incoming sound waves accomplished by the pinnae. One point of view about how these cues are extracted places great emphasis on the spectrum of the received sound at each ear individually. This leads to the suggestion that an effective way of studying the influence of these cues is to measure the ability of listeners to localize sounds when one of their ears is plugged. Numerous studies have appeared using this monaural localization paradigm. Three experiments are described here which are intended to clarify the results of the previous monaural localization studies and provide new data on how monaural spectral cues might be processed. Virtual sound sources are used in the experiments in order to manipulate and control the stimuli independently at the two ears. Two of the experiments deal with the consequences of the incomplete monauralization that may have contaminated previous work. The results suggest that even very low sound levels in the occluded ear provide access to interaural localization cues. The presence of these cues complicates the interpretation of the results of nominally monaural localization studies. The third experiment concerns the role of prior knowledge of the source spectrum, which is required if monaural cues are to be useful. The results of this last experiment demonstrate that extraction of monaural spectral cues can be severely disrupted by trial-to-trial fluctuations in the source spectrum. The general conclusion of the experiments is that, while monaural spectral cues are important, the monaural localization paradigm may not be the most appropriate way to study their role.
Compound gravity receptor polarization vectors evidenced by linear vestibular evoked potentials
NASA Technical Reports Server (NTRS)
Jones, S. M.; Jones, T. A.; Bell, P. L.; Taylor, M. J.
2001-01-01
The utricle and saccule are gravity receptor organs of the vestibular system. These receptors rely on a high-density otoconial membrane to detect linear acceleration and the position of the cranium relative to Earth's gravitational vector. The linear vestibular evoked potential (VsEP) has been shown to be an effective non-invasive functional test specifically for otoconial gravity receptors (Jones et al., 1999). Moreover, there is some evidence that the VsEP can be used to independently test utricular and saccular function (Taylor et al., 1997; Jones et al., 1998). Here we characterize compound macular polarization vectors for the utricle and saccule in hatchling chickens. Pulsed linear acceleration stimuli were presented in two axes, the dorsoventral (DV, +/- Z axis) to isolate the saccule, and the interaural (IA, +/- Y axis) to isolate the utricle. Traditional signal averaging was used to resolve responses recorded from the surface of the skull. Latency and amplitude of eighth nerve components of the linear VsEP were measured. Gravity receptor responses exhibited clear preferences for one stimulus direction in each axis. With respect to each utricular macula, lateral translation in the IA axis produced maximum ipsilateral response amplitudes with substantially greater amplitude intensity (AI) slopes than medially directed movement. Downward caudal motions in the DV axis produced substantially larger response amplitudes and AI slopes. The results show that the macula lagena does not contribute to the VsEP compound polarization vectors of the sacculus and utricle. The findings suggest further that preferred compound vectors for the utricle depend on the pars externa (i.e. lateral hair cell field) whereas for the saccule they depend on pars interna (i.e. superior hair cell fields). These data provide evidence that maculae saccule and utricle can be selectively evaluated using the linear VsEP.
Multivariate Analyses of Balance Test Performance, Vestibular Thresholds, and Age
Karmali, Faisal; Bermúdez Rey, María Carolina; Clark, Torin K.; Wang, Wei; Merfeld, Daniel M.
2017-01-01
We previously published vestibular perceptual thresholds and performance in the Modified Romberg Test of Standing Balance in 105 healthy humans ranging from ages 18 to 80 (1). Self-motion thresholds in the dark included roll tilt about an earth-horizontal axis at 0.2 and 1 Hz, yaw rotation about an earth-vertical axis at 1 Hz, y-translation (interaural/lateral) at 1 Hz, and z-translation (vertical) at 1 Hz. In this study, we focus on multiple-variable analyses not reported in the earlier study. Specifically, we investigate correlations (1) among the five thresholds measured and (2) between thresholds, age, and the chance of failing condition 4 of the balance test, which increases vestibular reliance by having subjects stand on foam with eyes closed. We found moderate correlations (0.30–0.51) between vestibular thresholds for different motions, both before and after using our published aging regression to remove age effects. We found that lower or higher thresholds across all threshold measures are an individual trait that accounts for about 60% of the variation in the population. This can be further distributed into two components with about 20% of the variation explained by aging and 40% of the variation explained by a single principal component that includes similar contributions from all threshold measures. When only roll tilt 0.2 Hz thresholds and age were analyzed together, we found that the chance of failing condition 4 depends significantly on both (p = 0.006 and p = 0.013, respectively). An analysis incorporating more variables found that the chance of failing condition 4 depended significantly only on roll tilt 0.2 Hz thresholds (p = 0.046) and not on age (p = 0.10), sex, or any of the other four threshold measures, suggesting that some of the age effect might be captured by the fact that vestibular thresholds increase with age. For example, at 60 years of age, the chance of failing is roughly 5% for the lowest roll tilt thresholds in our population, but this increases to 80% for the highest roll tilt thresholds. These findings demonstrate the importance of roll tilt vestibular cues for balance, even in individuals reporting no vestibular symptoms and with no evidence of vestibular dysfunction. PMID:29167656
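One plausible way to reproduce the flavor of the failure analysis described above is a logistic regression of condition-4 failure on (log) roll tilt 0.2 Hz threshold and age; the sketch below rests on that assumption and uses simulated stand-in data, not the authors' model or dataset:

    import numpy as np
    import statsmodels.api as sm

    # Simulated stand-in data (hypothetical): roll-tilt 0.2 Hz thresholds that
    # rise with age, and a binary indicator for failing balance condition 4.
    rng = np.random.default_rng(0)
    n = 105
    age = rng.uniform(18, 80, n)
    roll_thresh = np.exp(rng.normal(0.0, 0.4, n) + 0.01 * age)
    p_fail = 1.0 / (1.0 + np.exp(-(2.0 * np.log(roll_thresh) - 1.0)))
    failed = (rng.random(n) < p_fail).astype(int)

    # Logistic regression: chance of failing ~ log(threshold) + age
    X = sm.add_constant(np.column_stack([np.log(roll_thresh), age]))
    fit = sm.Logit(failed, X).fit(disp=0)
    print(fit.params, fit.pvalues)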
Fagerson, M H; Barmack, N H
1995-06-01
1. Because the nucleus reticularis gigantocellularis (NRGc) receives a substantial descending projection from the caudal vestibular nuclei, we used extracellular single-unit recording combined with natural vestibular stimulation to examine the possible peripheral origins of the vestibularly modulated activity of caudal NRGc neurons located within 500 microns of the midline. Chloralose-urethan anesthetized rabbits were stimulated with an exponential "step" and/or static head-tilt stimulus, as well as sinusoidal rotation about the longitudinal or interaural axes providing various combinations of roll or pitch, respectively. Recording sites were reconstructed from electrolytic lesions confirmed histologically. 2. More than 85% of the 151 neurons, in the medial aspect of the caudal NRGc, responded to vertical vestibular stimulation. Ninety-six percent of these responded to rotation onto the contralateral side (beta responses). Only a few also responded to horizontal stimulation. Seventy-eight percent of the neurons that responded to vestibular stimulation responded during static roll-tilt. One-half of these neurons also responded transiently to the change in head position during exponential "step" stimulation, suggesting input mediated by otolith and semicircular canal receptors or tonic-phasic otolith neurons. 3. Seventy-five percent of the responsive neurons had a "null plane." The planes of stimulation resulting in maximal responses, for cells that responded to static stimulation, were distributed throughout 150 degrees in both roll and pitch quadrants. Five of these cells responded only transiently during exponential "step" stimulation and responded maximally when stimulated in the plane of one of the vertical semicircular canals. 4. The phase of the response of the 25% of medial NRGc neurons that lacked "null planes" gradually shifted approximately 180 degrees during sinusoidal vestibular stimulation as the plane of stimulation was shifted about the vertical axis. These neurons likely received convergent input with differing spatial and temporal properties. 5. The activity of neurons in the medial aspect of the caudal NRGc of rabbits was modulated by both otolithic macular and vertical semicircular canal receptor stimulation. This vestibular information may be important for controlling the intensity of the muscle activity in muscles such as neck muscles where the load on the muscle is affected by the position of the head with respect to gravity. Some of these neurons may also shift muscle function from an agonist to an antagonist as the direction of head tilt changes.
Brenneman, Lauren; Cash, Elizabeth; Chermak, Gail D; Guenette, Linda; Masters, Gay; Musiek, Frank E; Brown, Mallory; Ceruti, Julianne; Fitzegerald, Krista; Geissler, Kristin; Gonzalez, Jennifer; Weihing, Jeffrey
2017-09-01
Pediatric central auditory processing disorder (CAPD) is frequently comorbid with other childhood disorders. However, few studies have examined the relationship between commonly used CAPD, language, and cognition tests within the same sample. The present study examined the relationship between diagnostic CAPD tests and "gold standard" measures of language and cognitive ability, the Clinical Evaluation of Language Fundamentals (CELF) and the Wechsler Intelligence Scale for Children (WISC). A retrospective study. Twenty-seven patients referred for CAPD testing who scored average or better on the CELF and low average or better on the WISC were initially included. Seven children who scored below the CELF and/or WISC inclusion criteria were then added to the dataset for a second analysis, yielding a sample size of 34. Participants were administered a CAPD battery that included at least the following three CAPD tests: Frequency Patterns (FP), Dichotic Digits (DD), and Competing Sentences (CS). In addition, they were administered the CELF and WISC. Relationships between scores on CAPD, language (CELF), and cognition (WISC) tests were examined using correlation analysis. DD and FP showed significant correlations with Full Scale Intelligence Quotient, and the DD left ear and the DD interaural difference measures both showed significant correlations with working memory. However, ∼80% or more of the variance in these CAPD tests was unexplained by language and cognition measures. Language and cognition measures were more strongly correlated with each other than were the CAPD tests with any CELF or WISC scale. Additional correlations with the CAPD tests were revealed when patients who scored in the mild-moderate deficit range on the CELF and/or in the borderline low intellectual functioning range on the WISC were included in the analysis. While both the DD and FP tests showed significant correlations with one or more cognition measures, the majority of the variance in these CAPD measures went unexplained by cognition. Unlike DD and FP, the CS test was not correlated with cognition. Additionally, language measures were not significantly correlated with any of the CAPD tests. Our findings emphasize that the outcomes and interpretation of results vary as a function of the subject inclusion criteria that are applied for the CELF and WISC. Including participants with poorer cognition and/or language scores increased the number of significant correlations observed. For this reason, it is important that studies investigating the relationship between CAPD and other domains or disorders report the specific inclusion criteria used for all tests. American Academy of Audiology
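The variance statements above follow from the squared correlation: a Pearson correlation r between a CAPD score and a cognition score corresponds to r² shared variance, so, to take an illustrative value rather than one reported here,

    \[ r = 0.45 \;\Rightarrow\; r^2 \approx 0.20 \;\Rightarrow\; 1 - r^2 \approx 80\%\ \text{of the variance unexplained.} \]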
Parabrachial nucleus neuronal responses to off-vertical axis rotation in macaques
McCandless, Cyrus H.; Balaban, Carey D.
2010-01-01
The caudal aspect of the parabrachial nucleus (PBN) contains neurons responsive to whole body, periodic rotational stimulation in alert monkeys. This study characterizes the angular and linear motion-sensitive response properties of PBN unit responses during off-vertical axis rotation (OVAR) and position trapezoid stimulation. The OVAR responses displayed a constant firing component which varied from the firing rate at rest. Nearly two-thirds of the units also modulated their discharges with respect to head orientation (re: gravity) during constant velocity OVAR stimulation. The modulated response magnitudes were equal during ipsilateral and contralateral OVARs, indicative of a one-dimensional accelerometer. These response orientations during OVAR divided the units into three spatially tuned populations, with peak modulation responses centered in the ipsilateral ear down, contralateral anterior semicircular canal down, and occiput down orientations. Because the orientation of the OVAR modulation response was opposite in polarity to the orientation of the static tilt component of responses to position trapezoids for the majority of units, the linear acceleration responses were divided into colinear dynamic linear and static tilt components. The orientations of these unit responses formed two distinct population response axes: (1) units with an interaural linear response axis and (2) units with an ipsilateral anterior semicircular canal-contralateral posterior semicircular canal plane linear response axis. The angular rotation sensitivity of these units is in a head-vertical plane that either contains the linear acceleration response axis or is perpendicular to the linear acceleration axis. Hence, these units behave like head-based (‘strap-down’) inertial guidance sensors. Because the PBN contributes to sensory and interoceptive processing, it is suggested that vestibulo-recipient caudal PBN units may detect potentially dangerous anomalies in control of postural stability during locomotion. In particular, these signals may contribute to the range of affective and emotional responses that include panic associated with falling, malaise associated with motion sickness and mal-de-debarquement, and comorbid balance and anxiety disorders. PMID:20039027
Parthasarathy, Aravindakshan; Bartlett, Edward
2012-07-01
Auditory brainstem responses (ABRs), and envelope and frequency following responses (EFRs and FFRs) are widely used to study aberrant auditory processing in conditions such as aging. We have previously reported age-related deficits in auditory processing for rapid amplitude modulation (AM) frequencies using EFRs recorded from a single channel. However, sensitive testing of EFRs along a wide range of modulation frequencies is required to gain a more complete understanding of the auditory processing deficits. In this study, ABRs and EFRs were recorded simultaneously from two electrode configurations in young and old Fischer-344 rats, a common auditory aging model. Analysis shows that the two channels respond most sensitively to complementary AM frequencies. Channel 1, recorded from Fz to mastoid, responds better to faster AM frequencies in the 100-700 Hz range of frequencies, while Channel 2, recorded from the inter-aural line to the mastoid, responds better to slower AM frequencies in the 16-100 Hz range. Simultaneous recording of Channels 1 and 2 using AM stimuli with varying sound levels and modulation depths show that age-related deficits in temporal processing are not present at slower AM frequencies but only at more rapid ones, which would not have been apparent recording from either channel alone. Comparison of EFRs between un-anesthetized and isoflurane-anesthetized recordings in young animals, as well as comparison with previously published ABR waveforms, suggests that the generators of Channel 1 may emphasize more caudal brainstem structures while those of Channel 2 may emphasize more rostral auditory nuclei including the inferior colliculus and the forebrain, with the boundary of separation potentially along the cochlear nucleus/superior olivary complex. Simultaneous two-channel recording of EFRs help to give a more complete understanding of the properties of auditory temporal processing over a wide range of modulation frequencies which is useful in understanding neural representations of sound stimuli in normal, developmental or pathological conditions. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Merfeld, D. M.; Paloski, W. H. (Principal Investigator)
1996-01-01
The vestibulo-ocular reflexes (VOR) are determined not only by angular acceleration, but also by the presence of gravity and linear acceleration. This phenomenon was studied by measuring three-dimensional nystagmic eye movements, with implanted search coils, in four male squirrel monkeys. Monkeys were rotated in the dark at 200 degrees/s, centrally or 79 cm off-axis, with the axis of rotation always aligned with gravity and the spinal axis of the upright monkeys. The monkey's position relative to the centripetal acceleration (facing center or back to center) had a dramatic influence on the VOR. These studies show that a torsional response was always elicited that acted to shift the axis of eye rotation toward alignment with gravito-inertial force. On the other hand, a slow phase downward vertical response usually existed, which shifted the axis of eye rotation away from the gravito-inertial force. These findings were consistent across all monkeys. In another set of tests, the same monkeys were rapidly tilted about their interaural (pitch) axis. Tilt orientations of 45 degrees and 90 degrees were maintained for 1 min. Other than a compensatory angular VOR during the rotation, no consistent eye velocity response was ever observed during or following the tilt. The absence of any response following tilt proves that the observed torsional and vertical responses were not a positional nystagmus. Model simulations qualitatively predict all components of these eccentric rotation and tilt responses. These simulations support the conclusion that the VOR during eccentric rotation may consist of two components: a linear VOR and a rotational VOR. The model predicts a slow phase downward, vertical, linear VOR during eccentric rotation even though there was never a change in the force aligned with monkey's spinal (Z) axis. The model also predicts the torsional components of the response that shift the rotation axis of the angular VOR toward alignment with gravito-inertial force.
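For context, a back-of-the-envelope check (not a figure from the paper) of the eccentric-rotation stimulus: at 200 deg/s and a radius of 0.79 m,

    \[ a_c = \omega^2 r = \left(200 \times \tfrac{\pi}{180}\ \mathrm{rad/s}\right)^2 \times 0.79\ \mathrm{m} \approx 9.6\ \mathrm{m/s^2} \approx 1\,g, \]

so the gravito-inertial force during eccentric rotation was tilted roughly 44 degrees away from the monkeys' spinal axis, consistent with the prominent torsional shift of the eye-rotation axis toward that force.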
Motion perception during variable-radius swing motion in darkness.
Rader, A A; Oman, C M; Merfeld, D M
2009-10-01
Using a variable-radius roll swing motion paradigm, we examined the influence of interaural (y-axis) and dorsoventral (z-axis) force modulation on perceived tilt and translation by measuring perception of horizontal translation, roll tilt, and distance from center of rotation (radius) at 0.45 and 0.8 Hz using standard magnitude estimation techniques (primarily verbal reports) in darkness. Results show that motion perception was significantly influenced by both y- and z-axis forces. During constant radius trials, subjects' perceptions of tilt and translation were generally almost veridical. By selectively pairing radius (1.22 and 0.38 m) and frequency (0.45 and 0.8 Hz, respectively), the y-axis acceleration could be tailored in opposition to gravity so that the combined y-axis gravitoinertial force (GIF) variation at the subject's ears was reduced to approximately 0.035 m/s² - in effect, the y-axis GIF was "nulled" below putative perceptual threshold levels. With y-axis force nulling, subjects overestimated their tilt angle and underestimated their horizontal translation and radius. For some y-axis nulling trials, a radial linear acceleration at twice the tilt frequency (0.25 m/s² at 0.9 Hz, 0.13 m/s² at 1.6 Hz) was simultaneously applied to reduce the z-axis force variations caused by centripetal acceleration and by changes in the z-axis component of gravity during tilt. For other trials, the phase of this radial linear acceleration was altered to double the magnitude of the z-axis force variations. z-axis force nulling further increased the perceived tilt angle and further decreased perceived horizontal translation and radius relative to the y-axis nulling trials, while z-axis force doubling had the opposite effect. Subject reports were remarkably geometrically consistent; an observer model-based analysis suggests that perception was influenced by knowledge of swing geometry.
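A quick arithmetic check of the radius-frequency pairing used for y-axis nulling (a reconstruction from the numbers quoted above, not text from the paper): the swing's tangential acceleration scales as (2π f)² r, and

    \[ (2\pi \times 0.45\ \mathrm{Hz})^2 \times 1.22\ \mathrm{m} \approx 9.7\ \mathrm{m/s^2}, \qquad (2\pi \times 0.8\ \mathrm{Hz})^2 \times 0.38\ \mathrm{m} \approx 9.6\ \mathrm{m/s^2}, \]

both approximately g, which is the small-angle condition under which the inertial y-axis acceleration of the swing can cancel the y-axis component of gravity during roll tilt.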
Merfeld, D M
1996-01-01
The vestibulo-ocular reflexes (VOR) are determined not only by angular acceleration, but also by the presence of gravity and linear acceleration. This phenomenon was studied by measuring three-dimensional nystagmic eye movements, with implanted search coils, in four male squirrel monkeys. Monkeys were rotated in the dark at 200 degrees/s, centrally or 79 cm off-axis, with the axis of rotation always aligned with gravity and the spinal axis of the upright monkeys. The monkey's position relative to the centripetal acceleration (facing center or back to center) had a dramatic influence on the VOR. These studies show that a torsional response was always elicited that acted to shift the axis of eye rotation toward alignment with gravito-inertial force. On the other hand, a slow phase downward vertical response usually existed, which shifted the axis of eye rotation away from the gravito-inertial force. These findings were consistent across all monkeys. In another set of tests, the same monkeys were rapidly tilted about their interaural (pitch) axis. Tilt orientations of 45 degrees and 90 degrees were maintained for 1 min. Other than a compensatory angular VOR during the rotation, no consistent eye velocity response was ever observed during or following the tilt. The absence of any response following tilt proves that the observed torsional and vertical responses were not a positional nystagmus. Model simulations qualitatively predict all components of these eccentric rotation and tilt responses. These simulations support the conclusion that the VOR during eccentric rotation may consist of two components: a linear VOR and a rotational VOR. The model predicts a slow phase downward, vertical, linear VOR during eccentric rotation even though there was never a change in the force aligned with monkey's spinal (Z) axis. The model also predicts the torsional components of the response that shift the rotation axis of the angular VOR toward alignment with gravito-inertial force.
Binaural masking release in children with Down syndrome.
Porter, Heather L; Grantham, D Wesley; Ashmead, Daniel H; Tharpe, Anne Marie
2014-01-01
Binaural hearing results in a number of listening advantages relative to monaural hearing, including enhanced hearing sensitivity and better speech understanding in adverse listening conditions. These advantages are facilitated in part by the ability to detect and use interaural cues within the central auditory system. Binaural hearing for children with Down syndrome could be impacted by multiple factors including, structural anomalies within the peripheral and central auditory system, alterations in synaptic communication, and chronic otitis media with effusion. However, binaural hearing capabilities have not been investigated in these children. This study tested the hypothesis that children with Down syndrome experience less binaural benefit than typically developing peers. Participants included children with Down syndrome aged 6 to 16 years (n = 11), typically developing children aged 3 to 12 years (n = 46), adults with Down syndrome (n = 3), and adults with no known neurological delays (n = 6). Inclusionary criteria included normal to near-normal hearing sensitivity. Two tasks were used to assess binaural ability. Masking level difference (MLD) was calculated by comparing threshold for a 500-Hz pure-tone signal in 300-Hz wide Gaussian noise for N0S0 and N0Sπ signal configurations. Binaural intelligibility level difference was calculated using simulated free-field conditions. Speech recognition threshold was measured for closed-set spondees presented from 0-degree azimuth in speech-shaped noise presented from 0-, 45- and 90-degree azimuth, respectively. The developmental ability of children with Down syndrome was estimated and information regarding history of otitis media was obtained for all child participants via parent survey. Individuals with Down syndrome had higher masked thresholds for pure-tone and speech stimuli than typically developing individuals. Children with Down syndrome had significantly smaller MLDs than typically developing children. Adults with Down syndrome and control adults had similar MLDs. Similarities in simulated spatial release from masking were observed for all groups for the experimental parameters used in this study. No association was observed for any measure of binaural ability and developmental age for children with Down syndrome. Similar group psychometric functions were observed for children with Down syndrome and typically developing children in most instances, suggesting that attentiveness and motivation contributed equally to performance for both groups on most tasks. The binaural advantages afforded to typically developing children, such as enhanced hearing sensitivity in noise, were not as robust for children with Down syndrome in this study. Children with Down syndrome experienced less binaural benefit than typically developing peers for some stimuli, suggesting that they could require more favorable signal-to-noise ratios to achieve optimal performance in some adverse listening conditions. The reduced release from masking observed for children with Down syndrome could represent a delay in ability rather than a deficit that persists into adulthood. This could have implications for the planning of interventions for individuals with Down syndrome.
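As used above, the masking level difference is the threshold difference between the diotic and antiphasic configurations (sign convention assumed here, with a positive MLD meaning the N0Sπ signal is detected at a lower level):

    \[ \mathrm{MLD} = T_{N_0 S_0} - T_{N_0 S_\pi}. \]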
Spatial orientation and balance control changes induced by altered gravitoinertial force vectors
NASA Technical Reports Server (NTRS)
Kaufman, G. D.; Wood, S. J.; Gianna, C. C.; Black, F. O.; Paloski, W. H.
2001-01-01
To better understand the mechanisms of human adaptation to rotating environments, we exposed 19 healthy subjects and 8 vestibular-deficient subjects ("abnormal"; four bilateral and four unilateral lesions) to an interaural centripetal acceleration of 1 g (resultant 45 degrees roll-tilt of 1.4 g) on a 0.8-m-radius centrifuge for periods of 90 min. The subjects sat upright (body z-axis parallel to centrifuge rotation axis) in the dark with head stationary, except during 4 min of every 10 min, when they performed head saccades toward visual targets switched on at 3- to 5-s intervals at random locations (within +/- 30 degrees) in the earth-horizontal plane. Eight of the normal subjects also performed the head saccade protocol in a stationary chair adjusted to a static roll-tilt angle of 45 degrees for 90 min (reproducing the change in orientation but not the magnitude of the gravitoinertial force on the centrifuge). Eye movements, including voluntary saccades directed along perceived earth- and head-referenced planes, were recorded before, during, and immediately after centrifugation. Postural center of pressure (COP) and multisegment body kinematics were also gathered before and within 10 min after centrifugation. Normal subjects overestimated roll-tilt during centrifugation and revealed errors in perception of head-vertical provided by directed saccades. Errors in this perceptual response tended to increase with time and became significant after approximately 30 min. Motion-sickness symptoms caused approximately 25% of normal subjects to limit their head movements during centrifugation and led three normal subjects to stop the test early. Immediately after centrifugation, subjects reported feeling tilted 10 degrees in the opposite direction, which was in agreement with the direction of their earth-referenced directed saccades. Postural COP, segmental body motion amplitude, and hip-sway frequency increased significantly after centrifugation. These postural effects were short-lived, however, with a recovery time of several postural test trials (minutes). There were also asymmetries in the direction of postcentrifugation COP and head tilt which depended on the subject's orientation during the centrifugation adaptation period (left ear or right ear out). The amount of total head movements during centrifugation correlated poorly or inversely with postcentrifugation postural stability, and the most unstable subject made no head movements. There was no decrease in postural stability after static tilt, although these subjects also reported a perceived tilt briefly after return to upright, and they also had COP asymmetries. Abnormal subjects underestimated roll-tilt during centrifugation, and their directed saccades revealed permanent spatial distortions. Bilateral abnormal subjects started out with poor postural control, but showed no postural decrements after centrifugation, while unilateral abnormal subjects had varying degrees of postural decrement, both in their everyday function and as a result of experiencing the centrifugation. In addition, three unilateral, abnormal subjects, who rode twice in opposite orientations, revealed a consistent orthogonal pattern of COP offsets after centrifugation. These results suggest that both orientation and magnitude of the gravitoinertial vector are used by the central nervous system for calibration of multiple orientation systems. 
A change in the background gravitoinertial force (otolith input) can rapidly initiate postural and perceptual adaptation in several sensorimotor systems, independent of a structured visual surround.
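For orientation, the stimulus figures quoted above follow from vector addition of a 1 g interaural centripetal acceleration with gravity; the rotation rate is inferred here from the 0.8 m radius and is not stated in the abstract:

    \[ |\vec{f}| = \sqrt{(1\,g)^2 + (1\,g)^2} \approx 1.4\,g, \qquad \theta = \arctan(1) = 45^\circ, \qquad \omega = \sqrt{g/r} = \sqrt{9.81/0.8} \approx 3.5\ \mathrm{rad/s} \approx 33\ \mathrm{rpm}. \]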
Spatial orientation and balance control changes induced by altered gravitoinertial force vectors.
Kaufman, G D; Wood, S J; Gianna, C C; Black, F O; Paloski, W H
2001-04-01
To better understand the mechanisms of human adaptation to rotating environments, we exposed 19 healthy subjects and 8 vestibular-deficient subjects ("abnormal"; four bilateral and four unilateral lesions) to an interaural centripetal acceleration of 1 g (resultant 45 degrees roll-tilt of 1.4 g) on a 0.8-m-radius centrifuge for periods of 90 min. The subjects sat upright (body z-axis parallel to centrifuge rotation axis) in the dark with head stationary, except during 4 min of every 10 min, when they performed head saccades toward visual targets switched on at 3- to 5-s intervals at random locations (within +/- 30 degrees) in the earth-horizontal plane. Eight of the normal subjects also performed the head saccade protocol in a stationary chair adjusted to a static roll-tilt angle of 45 degrees for 90 min (reproducing the change in orientation but not the magnitude of the gravitoinertial force on the centrifuge). Eye movements, including voluntary saccades directed along perceived earth- and head-referenced planes, were recorded before, during, and immediately after centrifugation. Postural center of pressure (COP) and multisegment body kinematics were also gathered before and within 10 min after centrifugation. Normal subjects overestimated roll-tilt during centrifugation and revealed errors in perception of head-vertical provided by directed saccades. Errors in this perceptual response tended to increase with time and became significant after approximately 30 min. Motion-sickness symptoms caused approximately 25% of normal subjects to limit their head movements during centrifugation and led three normal subjects to stop the test early. Immediately after centrifugation, subjects reported feeling tilted 10 degrees in the opposite direction, which was in agreement with the direction of their earth-referenced directed saccades. Postural COP, segmental body motion amplitude, and hip-sway frequency increased significantly after centrifugation. These postural effects were short-lived, however, with a recovery time of several postural test trials (minutes). There were also asymmetries in the direction of postcentrifugation COP and head tilt which depended on the subject's orientation during the centrifugation adaptation period (left ear or right ear out). The amount of total head movements during centrifugation correlated poorly or inversely with postcentrifugation postural stability, and the most unstable subject made no head movements. There was no decrease in postural stability after static tilt, although these subjects also reported a perceived tilt briefly after return to upright, and they also had COP asymmetries. Abnormal subjects underestimated roll-tilt during centrifugation, and their directed saccades revealed permanent spatial distortions. Bilateral abnormal subjects started out with poor postural control, but showed no postural decrements after centrifugation, while unilateral abnormal subjects had varying degrees of postural decrement, both in their everyday function and as a result of experiencing the centrifugation. In addition, three unilateral, abnormal subjects, who rode twice in opposite orientations, revealed a consistent orthogonal pattern of COP offsets after centrifugation. These results suggest that both orientation and magnitude of the gravitoinertial vector are used by the central nervous system for calibration of multiple orientation systems. 
A change in the background gravitoinertial force (otolith input) can rapidly initiate postural and perceptual adaptation in several sensorimotor systems, independent of a structured visual surround.
NASA Astrophysics Data System (ADS)
Wuyts, Floris; Clement, Gilles; Naumov, Ivan; Kornilova, Ludmila; Glukhikh, Dmitriy; Hallgren, Emma; MacDougall, Hamish; Migeotte, Pierre-Francois; Delière, Quentin; Weerts, Aurelie; Moore, Steven; Diedrich, Andre
In 13 cosmonauts, the vestibulo-autonomic reflex was investigated before and after spaceflights of 6 months' duration. Cosmonauts were rotated on the mini-centrifuge VVIS, which is installed in Star City. This mini-centrifuge initially flew on board the Neurolab mission (STS-90), where it served to generate intermittent artificial gravity, with apparently very positive effects on the preservation of orthostatic tolerance upon return to Earth in the 4 crew members who were subjected to the rotations in space. The current experiments, SPIN and GAZE-SPIN, are control experiments to test the hypothesis that intermittent artificial gravity in space can serve as a countermeasure against several deleterious effects of microgravity. Additionally, the effect of microgravity on the gaze-holding system is studied. Cosmonauts returning from a long-duration stay on the International Space Station were tested on the VVIS (1 g centripetal interaural acceleration; consecutive right-ear-out anti-clockwise and left-ear-out clockwise measurements) on 5 different days. Two measurements were scheduled about a month and a half prior to launch and the remaining three immediately after their return from space (typically on R+2, R+4, R+9; R = return day from space). The ocular counter roll (OCR), as a measure of otolith function, was measured before, during, and after rotation in the mini-centrifuge, using infrared video goggles. The perception of verticality was monitored using an ultrasound system. Gaze holding was tested before, during, and after rotation. After the centrifugation part, the crew was installed on a tilt table and instrumented with cardiovascular recording equipment (ECG, continuous blood pressure monitoring, respiratory monitoring), as well as with impedance measurement devices to investigate fluid redistribution throughout the operational tilt test. To measure heart rate variability parameters, imposed breathing periods were included in the test protocol. The subjects underwent a passive tilt test of 60 degrees for 15 minutes. The results show that, upon return from space, cosmonauts have a statistically significantly reduced ocular counter rolling during rotation when compared to the pre-flight condition, indicating a reduced sensitivity of the otolith system to gravito-inertial acceleration. None of the subjects fainted or even approached presyncope. However, the resistance in the calf, measured with the impedance method, showed significantly increased pooling in the lower limbs. Additionally, this was statistically significantly correlated (p=0.024) with a reduced otolith response, when comparing the vestibular and autonomic data for each subject. This result shows that the vestibulo-autonomic reflex is reduced after 6 months of spaceflight. When compared with Neurolab, the otolith response in the current group of crew members, who were not subjected to in-flight centrifugation, is significantly reduced, corroborating the hypothesis that in-flight artificial gravity may be of great importance to mitigate the deleterious effects of microgravity. Projects are funded by PRODEX-BELSPO, ESA, and IBMP.
Modification of Eye Movements and Motion Perception during Off-Vertical Axis Rotation
NASA Technical Reports Server (NTRS)
Wood, S. J.; Reschke, M. F.; Denise, P.; CLement, G.
2006-01-01
Constant velocity Off-Vertical Axis Rotation (OVAR) imposes a continuously varying orientation of the head and body relative to gravity. The ensuing ocular reflexes include modulation of both torsional and horizontal eye movements as a function of the varying linear acceleration along the lateral plane, and modulation of vertical and vergence eye movements as a function of the varying linear acceleration along the sagittal plane. Previous studies have demonstrated that tilt and translation otolith-ocular responses, as well as motion perception, vary as a function of stimulus frequency during OVAR. The purpose of this study is to examine normative OVAR responses in healthy human subjects, and examine adaptive changes in astronauts following short duration space flight at low (0.125 Hz) and high (0.5 Hz) frequencies. Data were obtained from 24 normative subjects (14 M, 10 F) and 14 astronaut subjects (13 M, 1 F). To date, astronauts have participated in 3 preflight sessions (n=14) and on R+0/1 (n=7), R+2 (n=13) and R+4 (n=13) days after landing. Subjects were rotated in darkness about their longitudinal axis 20 deg off-vertical at constant rates of 45 and 180 deg/s, corresponding to 0.125 and 0.5 Hz. Binocular responses were obtained with video-oculography. Perceived motion was evaluated using verbal reports and a two-axis joystick (pitch and roll tilt) mounted on top of a two-axis linear stage (anterior-posterior and medial-lateral translation). Eye responses were obtained in ten of the normative subjects with the head and trunk aligned, and then with the head turned relative to the trunk 40 deg to the right or left of center. Sinusoidal curve fits were used to derive amplitude, phase and bias of the responses over several cycles at each stimulus frequency. Eye responses during 0.125 Hz OVAR were dominated by modulation of torsional and vertical eye position, compensatory for tilt relative to gravity. While there is a bias horizontal slow phase velocity (SPV), the modulation of horizontal and vergence SPV is negligible at this lower stimulus frequency. Eye responses during 0.5 Hz OVAR, however, are characterized by modulation of horizontal and vergence SPV, compensatory for translation in the lateral and sagittal planes, respectively. Neither amplitude nor bias velocities were significantly altered by head-on-trunk position. The phases of the ocular reflexes, on the other hand, shifted towards alignment with the head. During the lower frequency OVAR, subjects reported the perception of progressing along the edge of a cone. During higher frequency OVAR, subjects reported the perception of progressing along the edge of an upright cylinder. In contrast to the eye movements, the phase of both perceived tilt and translation motion is not altered by stimulus frequency. Preliminary results from astronaut data suggest that the ocular responses are not substantially altered by short-duration spaceflight. However, compared to preflight averages, astronauts reported greater amplitude of both perceived tilt and translation at low and high frequency, respectively, during early post-flight testing. We conclude that the neural processing to distinguish tilt and translation linear acceleration stimuli differs between eye movements and motion perception.
The results from modifying head-on-trunk position are consistent with the modulation of ocular reflexes during OVAR being primarily mediated by the otoliths in response to the sinusoidally varying linear acceleration along the interaural and naso-occipital head axes. While the tilt and translation ocular reflexes appear to operate in an independent fashion, the timing of perceived tilt and translation influence each other. We conclude that the perceived motion path during linear acceleration in darkness results from a composite representation of tilt and translation inputs from both vestibular and somatosensory systems.
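As an aside, the curve-fitting step described above (deriving amplitude, phase and bias of each ocular response at the stimulus frequency) can be illustrated with a minimal least-squares sketch; the sampling rate, signal values and function name below are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def fit_sinusoid(t, y, freq_hz):
    """Least-squares fit of y(t) = bias + A*sin(2*pi*f*t + phi) at a known
    stimulus frequency, returning (amplitude, phase_rad, bias).
    Rewriting the sinusoid as a sine/cosine pair keeps the problem linear."""
    w = 2.0 * np.pi * freq_hz
    X = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t)])
    (a, b, c), *_ = np.linalg.lstsq(X, y, rcond=None)
    amplitude = np.hypot(a, b)
    phase = np.arctan2(b, a)          # radians relative to sin(w*t)
    return amplitude, phase, c

# Hypothetical example: horizontal slow-phase velocity sampled at 60 Hz
# during 0.5 Hz OVAR (names and numbers are illustrative only).
t = np.arange(0, 20, 1 / 60)
spv = 5.0 + 12.0 * np.sin(2 * np.pi * 0.5 * t + 0.8) + np.random.randn(t.size)
amp, phi, bias = fit_sinusoid(t, spv, freq_hz=0.5)
print(f"amplitude={amp:.2f} deg/s, phase={np.degrees(phi):.1f} deg, bias={bias:.2f} deg/s")
```

Because the fit is linear in the sine/cosine coefficients, no iterative optimisation or starting guess is needed.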
Busettini, C; Miles, F A; Schwarz, U; Carl, J R
1994-01-01
Recent experiments on monkeys have indicated that the eye movements induced by brief translation of either the observer or the visual scene are a linear function of the inverse of the viewing distance. For the movements of the observer, the room was dark and responses were attributed to a translational vestibulo-ocular reflex (TVOR) that senses the motion through the otolith organs; for the movements of the scene, which elicit ocular following, the scene was projected and adjusted in size and speed so that the retinal stimulation was the same at all distances. The shared dependence on viewing distance was consistent with the hypothesis that the TVOR and ocular following are synergistic and share central pathways. The present experiments looked for such dependencies on viewing distance in human subjects. When briefly accelerated along the interaural axis in the dark, human subjects generated compensatory eye movements that were also a linear function of the inverse of the viewing distance to a previously fixated target. These responses, which were attributed to the TVOR, were somewhat weaker than those previously recorded from monkeys using similar methods. When human subjects faced a tangent screen onto which patterned images were projected, brief motion of those images evoked ocular following responses that showed statistically significant dependence on viewing distance only with low-speed stimuli (10 degrees/s). This dependence was at best weak and in the reverse direction of that seen with the TVOR, i.e., responses increased as viewing distance increased. We suggest that in generating an internal estimate of viewing distance subjects may have used a confounding cue in the ocular-following paradigm--the size of the projected scene--which was varied directly with the viewing distance in these experiments (in order to preserve the size of the retinal image). When movements of the subject were randomly interleaved with the movements of the scene--to encourage the expectation of ego-motion--the dependence of ocular following on viewing distance altered significantly: with higher speed stimuli (40 degrees/s) many responses (63%) now increased significantly as viewing distance decreased, though less vigorously than the TVOR. We suggest that the expectation of motion results in the subject placing greater weight on cues such as vergence and accommodation that provide veridical distance information in our experimental situation: cue selection is context specific.
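The reported linear dependence on inverse viewing distance can be sketched as a simple regression of response amplitude against 1/distance; the numbers and variable names below are hypothetical, not data from the study.

```python
import numpy as np

# Hypothetical eye-velocity responses (deg/s) measured at several viewing
# distances (cm); values and variable names are illustrative only.
viewing_distance_cm = np.array([10.0, 20.0, 40.0, 80.0, 160.0])
response_deg_s = np.array([30.0, 16.0, 8.5, 4.2, 2.4])

# The reported dependence is linear in the *inverse* of viewing distance,
# so regress the response against 1/distance.
inverse_distance = 1.0 / viewing_distance_cm
slope, intercept = np.polyfit(inverse_distance, response_deg_s, deg=1)
print(f"response ≈ {slope:.1f} * (1/d) + {intercept:.2f}")
```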
Dynamic correlations at different time-scales with empirical mode decomposition
NASA Astrophysics Data System (ADS)
Nava, Noemi; Di Matteo, T.; Aste, Tomaso
2018-07-01
We introduce a simple approach which combines Empirical Mode Decomposition (EMD) and Pearson's cross-correlations over rolling windows to quantify dynamic dependency at different time scales. The EMD is a tool to separate time series into implicit components which oscillate at different time-scales. We apply this decomposition to intraday time series of the following three financial indices: the S&P 500 (USA), the IPC (Mexico) and the VIX (volatility index USA), obtaining time-varying multidimensional cross-correlations at different time-scales. The correlations computed over a rolling window are compared across the three indices, across the components at different time-scales and across different time lags. We uncover a rich heterogeneity of interactions, which depends on the time-scale and has important lead-lag relations that could have practical use for portfolio management, risk estimation and investment decisions.
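A minimal sketch of the EMD-plus-rolling-correlation idea is given below, assuming the third-party PyEMD (EMD-signal) package for the decomposition and pandas for the rolling Pearson correlation; pairing IMFs by index as a proxy for time scale, the window length and the synthetic return series are all illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
import pandas as pd
from PyEMD import EMD   # assumes the PyEMD ("EMD-signal") package is installed

def rolling_corr_by_imf(x, y, window=120):
    """Decompose two return series into IMFs with EMD, pair IMFs by index
    (a proxy for time scale), and compute Pearson correlations over a
    rolling window for each pair."""
    imfs_x = EMD().emd(np.asarray(x, dtype=float))
    imfs_y = EMD().emd(np.asarray(y, dtype=float))
    n = min(len(imfs_x), len(imfs_y))
    out = {}
    for k in range(n):
        sx, sy = pd.Series(imfs_x[k]), pd.Series(imfs_y[k])
        out[f"imf_{k+1}"] = sx.rolling(window).corr(sy)
    return pd.DataFrame(out)

# Illustrative synthetic "intraday returns" for two indices.
rng = np.random.default_rng(0)
ret_a = rng.normal(size=2000)
ret_b = 0.5 * ret_a + rng.normal(size=2000)
print(rolling_corr_by_imf(ret_a, ret_b).tail())
```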
NASA Astrophysics Data System (ADS)
Li, Weidong; Shan, Xinjian; Qu, Chunyan
2010-11-01
In comparison with polar-orbiting satellites, geostationary satellites have higher time resolution and a wider field of view, covering eleven time zones (an image covers about one third of the Earth's surface). For a geostationary satellite panoramic image at a given moment, the brightness temperature of different zones cannot represent the thermal radiation of the surface at the same moment because of differences in solar radiation across time zones. It is therefore necessary to calibrate the brightness temperature of different zones to a common reference time. A model for calibrating the brightness temperature differences of geostationary satellite data generated by time zone differences is proposed in this study. A total of 16 curves, for four positions in four different stages, are derived from sample statistics of brightness temperature in 5-day composite data from four different time zones (time zones 4, 6, 8, and 9). The four stages span January-March (winter), April-June (spring), July-September (summer), and October-December (autumn). Three kinds of correction cases and correction formulas, based on the curve changes, can better eliminate the brightness temperature rise or drop caused by time zone differences.
Spatial Orientation and Balance Control Changes Induced by Altered Gravito-Inertial Force Vectors
NASA Technical Reports Server (NTRS)
Kaufman, Galen D.; Wood, Scott J.; Gianna, Claire C.; Black, F. Owen; Paloski, William H.; Dawson, David L. (Technical Monitor)
1999-01-01
Seventeen healthy and eight vestibular-deficient subjects were exposed to an interaural centripetal acceleration of 1 G (resultant 45 deg roll tilt of 1.4 G) on a 0.8 meter radius centrifuge for a period of 90 minutes in the dark. The subjects sat with the head fixed upright, except for 4 of every 10 minutes, when they were instructed to rotate their head so that their nose and eyes pointed towards a visual point switched on every 3 to 5 seconds at random places (within +/- 30 deg) in the Earth horizontal plane. Motion sickness caused some subjects to limit their head movements during significant portions of the 90 minute period, and led three normal subjects to stop the test early. Eye movements, including directed saccades to subjective Earth- and head-referenced planes, were recorded before, during, and immediately after centrifugation using electro-oculography. Postural stability measurements were made before and within ten minutes after centrifugation. In normal subjects, postural sway and multisegment body kinematics were gathered during an eyes-closed head movement cadence (sway-referenced support platform), and in response to translational/rotational platform perturbations. A significant increase in postural sway, segmental motion amplitude and hip frequency was observed after centrifugation. This effect was short-lived, with a recovery time of several postural test trials. There were also asymmetries in the direction of post-centrifugation center of sway and head tilt which depended on the subject's orientation during the centrifugation adaptation period (left ear or right ear out). To delineate the effect of the magnitude of the gravito-inertial vector versus its direction during the adaptive centrifugation period, we tilted eight normal subjects in the roll axis at a 45 deg angle in the dark for 90 minutes without rotational motion. Their postural responses did not change following the period of tilt. Based on verbal reports, normal subjects overestimated roll-tilt during 90 minutes of both tilt and centrifugation stimuli. Subjective estimates of head-horizontal, provided by directed saccades, revealed significant errors after approximately 30 minutes that tended to increase only in the group who underwent centrifugation. Immediately after centrifugation, subjects reported feeling tilted on average 10 degrees in the opposite direction, which was in agreement with the direction of their earth-directed saccades. In vestibular-deficient (VD) subjects, postural sway was measured using a sway-referenced or earth-fixed support surface, and with or without a head movement sequence. The protocol was selected for each patient during baseline testing, and corresponded to the most challenging condition in which the patient was able to maintain balance with eyes closed. Bilateral VD subjects showed no postural decrement after centrifugation, while unilateral VD subjects had varying degrees of decrement. Unilateral VD subjects were tested twice; they underwent centrifugation both with right ear out and left ear out. Their post-centrifugation center of sway shifted at right angles depending on the centrifuge GIF orientation. Bilateral VD subjects had shifts as well, but no consistent directional trend. VD subjects underestimated roll-tilt during centrifugation. These results suggest that the orientation of the gravito-inertial vector and its magnitude are both used by the central nervous system for calibration of multiple orientation systems.
A change in the background gravito-inertial force (otolith input) can rapidly initiate postural and perceptual adaptation in several sensorimotor systems, independent of a structured visual surround.
Using wavelets to decompose the time frequency effects of monetary policy
NASA Astrophysics Data System (ADS)
Aguiar-Conraria, Luís; Azevedo, Nuno; Soares, Maria Joana
2008-05-01
Central banks have different objectives in the short and long run. Governments operate simultaneously at different timescales. Many economic processes are the result of the actions of several agents, who have different term objectives. Therefore, a macroeconomic time series is a combination of components operating on different frequencies. Several questions about economic time series are connected to the understanding of the behavior of key variables at different frequencies over time, but this type of information is difficult to uncover using pure time-domain or pure frequency-domain methods. To our knowledge, for the first time in an economic setup, we use cross-wavelet tools to show that the relation between monetary policy variables and macroeconomic variables has changed and evolved with time. These changes are not homogeneous across the different frequencies.
a Method of Time-Series Change Detection Using Full Polsar Images from Different Sensors
NASA Astrophysics Data System (ADS)
Liu, W.; Yang, J.; Zhao, J.; Shi, H.; Yang, L.
2018-04-01
Most of the existing change detection methods using full polarimetric synthetic aperture radar (PolSAR) are limited to detecting change between two points in time. In this paper, a novel method is proposed to detect change based on time-series data from different sensors. Firstly, the overall difference image of a time-series PolSAR was calculated by an omnibus test statistic. Secondly, difference images between any two images at different times were acquired by the Rj test statistic. A generalized Gaussian mixture model (GGMM) was used to obtain time-series change detection maps in the last step of the proposed method. To verify the effectiveness of the proposed method, we carried out a change detection experiment using the time-series PolSAR images acquired by Radarsat-2 and Gaofen-3 over the city of Wuhan, in China. Results show that the proposed method can detect the time-series change from different sensors.
Playing Games with Optimal Competitive Scheduling
NASA Technical Reports Server (NTRS)
Frank, Jeremy; Crawford, James; Khatib, Lina; Brafman, Ronen
2005-01-01
This paper is concerned with the problem of allocating a unit capacity resource to multiple users within a pre-defined time period. The resource is indivisible, so that at most one user can use it at each time instance. However, different users may use it at different times. The users have independent, selfish preferences for when and for how long they are allocated this resource. Thus, they value different resource access durations differently, and they value different time slots differently. We seek an optimal allocation schedule for this resource.
Differences in care burden of patients undergoing dialysis in different centres in the netherlands.
de Kleijn, Ria; Uyl-de Groot, Carin; Hagen, Chris; Diepenbroek, Adry; Pasker-de Jong, Pieternel; Ter Wee, Piet
2017-06-01
A classification model was developed to simplify planning of personnel at dialysis centres. This model predicted the care burden based on dialysis characteristics. However, patient characteristics and different dialysis centre categories might also influence the amount of care time required. To determine if there is a difference in care burden between different categories of dialysis centres and if specific patient characteristics predict nursing time needed for patient treatment. An observational study. Two hundred and forty-two patients from 12 dialysis centres. In 12 dialysis centres, nurses filled out the classification list per patient and completed a form with patient characteristics. Nephrologists filled out the Charlson Comorbidity Index. Independent observers clocked the time nurses spent on separate steps of the dialysis for each patient. Dialysis centres were categorised into four types. Data were analysed using regression models. In contrast to other dialysis centres, academic centres needed 14 minutes more care time per patient per dialysis treatment than predicted in the classification model. No patient characteristics were found that influenced this difference. The only patient characteristic that predicted the time required was gender, with more time required to treat women. Gender did not affect the difference between measured and predicted care time. Differences in care burden were observed between academic and other centres, with more time required for treatment in academic centres. Contribution of patient characteristics to the time difference was minimal. The only patient characteristics that predicted care time were previous transplantation, which reduced the time required, and gender, with women requiring more care time. © 2017 European Dialysis and Transplant Nurses Association/European Renal Care Association.
NASA Astrophysics Data System (ADS)
Wohlmuth, Johannes; Andersen, Jørgen Vitting
2006-05-01
We use agent-based models to study the competition among investors who use trading strategies with different amounts of information and with different time scales. We find that mixing agents that trade on the same time scale but with different amounts of information has a stabilizing impact on the large and extreme fluctuations of the market. Traders with the most information are found to be more likely to arbitrage traders who use less information in their decision making. On the other hand, introducing investors who act on two different time scales has a destabilizing effect on the large and extreme price movements, increasing the volatility of the market. Closeness in the time scale used in decision making is found to facilitate the creation of local trends. The larger the overlap in commonly shared information, the more the traders in a mixed system with different time scales are found to profit from the presence of traders acting at another time scale than themselves.
Time Difference Amplifier with Robust Gain Using Closed-Loop Control
NASA Astrophysics Data System (ADS)
Nakura, Toru; Mandai, Shingo; Ikeda, Makoto; Asada, Kunihiro
This paper presents a Time Difference Amplifier (TDA) that amplifies an input time difference into an output time difference. Cross-coupled chains of variable delay cells with the same number of stages are used for the TDA, and the gain is adjusted via closed-loop control. The TDA was fabricated in a 65 nm CMOS process, and the measurement results show that the time difference gain is 4.78 at the nominal power supply, while the designed gain is 4.0. The gain is stable, with less than 1.4% gain shift under ±10% power supply voltage fluctuation.
Predicting mortality over different time horizons: which data elements are needed?
Goldstein, Benjamin A; Pencina, Michael J; Montez-Rath, Maria E; Winkelmayer, Wolfgang C
2017-01-01
Electronic health records (EHRs) are a resource for "big data" analytics, containing a variety of data elements. We investigate how different categories of information contribute to prediction of mortality over different time horizons among patients undergoing hemodialysis treatment. We derived prediction models for mortality over 7 time horizons using EHR data on older patients from a national chain of dialysis clinics linked with administrative data using LASSO (least absolute shrinkage and selection operator) regression. We assessed how different categories of information relate to risk assessment and compared discrete models to time-to-event models. The best predictors used all the available data (c-statistic ranged from 0.72-0.76), with stronger models in the near term. While different variable groups showed different utility, exclusion of any particular group did not lead to a meaningfully different risk assessment. Discrete time models performed better than time-to-event models. Different variable groups were predictive over different time horizons, with vital signs most predictive for near-term mortality and demographic and comorbidities more important in long-term mortality. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
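A hedged sketch of horizon-specific discrete-time models with an L1 (LASSO) penalty is shown below using scikit-learn; the synthetic features, the toy label construction and the chosen horizons are illustrative stand-ins for the EHR variable groups, not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic "EHR" features and a latent risk score used only to build toy
# outcome labels for each prediction horizon.
rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 40))                    # e.g. vitals, labs, comorbidities
risk = X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=5000)

for horizon_days in (30, 90, 180, 365):
    threshold = np.quantile(risk, 1 - horizon_days / 1500)  # toy label construction
    y = (risk > threshold).astype(int)              # "died within horizon"
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    n_kept = int(np.sum(model.coef_ != 0))
    print(f"{horizon_days:>3} d horizon: c-statistic={auc:.2f}, variables kept={n_kept}")
```

Fitting one penalized discrete-time model per horizon, as sketched here, mirrors the comparison of horizon-specific discrete models described in the abstract.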
Effect of different planting time on different varieties of strawberries
NASA Astrophysics Data System (ADS)
Zhong, Yue; Luo, Ya; Ge, Cong; Mo, Qin; Lin, Yajie; Luo, Shu; Tang, Haoru
2018-04-01
The experiment used two strawberry varieties planted on two dates, September 10 and September 20, in order to identify the optimum planting time of strawberries by exploring the effects of different planting times on strawberry quality and flowering. The results showed that planting time affects the growth and quality of strawberries: the quality of strawberries planted on September 10 was better than that of strawberries planted on September 20, although some differences between varieties exist. In summary, the preliminary conclusion is that in the Hanyuan area, the optimum planting time for the Hong Yan and Zhang Ji varieties is September 10.
Bai, Jing; Yang, Wei; Wang, Song; Guan, Rui-Hong; Zhang, Hui; Fu, Jing-Jing; Wu, Wei; Yan, Kun
2016-07-01
The purpose of this study was to explore the diagnostic value of the arrival time difference between lesions and surrounding lung tissue on contrast-enhanced sonography of subpleural pulmonary lesions. A total of 110 patients with subpleural pulmonary lesions who underwent both conventional and contrast-enhanced sonography and had a definite diagnosis were enrolled. After contrast agent injection, the arrival times in the lesion, lung, and chest wall were recorded. The arrival time differences between various tissues were also calculated. Statistical analysis showed a significant difference in the lesion arrival time, the arrival time difference between the lesion and lung, and the arrival time difference between the chest wall and lesion (all P < .001) for benign and malignant lesions. Receiver operating characteristic curve analysis revealed that the optimal diagnostic criterion was the arrival time difference between the lesion and lung, and that the best cutoff point was 2.5 seconds (later arrival signified malignancy). This new diagnostic criterion showed superior diagnostic accuracy (97.1%) compared to conventional diagnostic criteria. The individualized diagnostic method based on an arrival time comparison using contrast-enhanced sonography had high diagnostic accuracy (97.1%) with good feasibility and could provide useful diagnostic information for subpleural pulmonary lesions.
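The cutoff-selection step can be illustrated with a standard ROC analysis; the synthetic arrival time differences below, and the use of Youden's J to pick the operating point, are assumptions for illustration rather than the study data or its exact procedure.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Synthetic lesion-lung arrival time differences (seconds); later arrival
# in the lesion is taken to signify malignancy, as in the abstract.
rng = np.random.default_rng(2)
dt_benign = rng.normal(loc=0.5, scale=1.0, size=60)
dt_malignant = rng.normal(loc=4.0, scale=1.5, size=50)
delta_t = np.concatenate([dt_benign, dt_malignant])
is_malignant = np.concatenate([np.zeros(60), np.ones(50)])

fpr, tpr, thresholds = roc_curve(is_malignant, delta_t)
best = np.argmax(tpr - fpr)                               # Youden's J statistic
print(f"AUC={roc_auc_score(is_malignant, delta_t):.2f}, "
      f"best cutoff ≈ {thresholds[best]:.1f} s")
```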
Efficient Algorithms for Segmentation of Item-Set Time Series
NASA Astrophysics Data System (ADS)
Chundi, Parvathi; Rosenkrantz, Daniel J.
We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
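A compact sketch of the dynamic-programming scheme is given below for one plausible measure function (the union of the item sets in a segment) and the corresponding segment difference; the cost definition, the function names and the toy series are illustrative, and the paper's own measure functions and efficiency optimisations are not reproduced.

```python
def segment_cost(itemsets, i, j):
    """Cost of a segment covering time points i..j (inclusive) when the
    segment's item set is the union of its time points' item sets and the
    segment difference is the total symmetric difference to that union."""
    union = set().union(*itemsets[i:j + 1])
    return sum(len(union ^ s) for s in itemsets[i:j + 1])

def optimal_segmentation(itemsets, k):
    """Classic O(k*n^2) dynamic program: best[j][m] = minimum cost of
    splitting the first j+1 time points into m segments."""
    n = len(itemsets)
    INF = float("inf")
    best = [[INF] * (k + 1) for _ in range(n)]
    cut = [[-1] * (k + 1) for _ in range(n)]
    for j in range(n):
        best[j][1] = segment_cost(itemsets, 0, j)
    for m in range(2, k + 1):
        for j in range(n):
            for i in range(m - 1, j + 1):
                cand = best[i - 1][m - 1] + segment_cost(itemsets, i, j)
                if cand < best[j][m]:
                    best[j][m], cut[j][m] = cand, i
    # Recover segment boundaries from the stored cut points.
    bounds, j, m = [], n - 1, k
    while m > 1:
        i = cut[j][m]
        bounds.append(i)
        j, m = i - 1, m - 1
    return best[n - 1][k], sorted(bounds)

# Tiny illustrative item-set time series (e.g. files touched per commit).
ts = [{"a", "b"}, {"a"}, {"a", "b"}, {"x"}, {"x", "y"}, {"y"}]
print(optimal_segmentation(ts, k=2))   # expect a boundary at index 3
```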
Changes in occupational class differences in leisure-time physical activity: a follow-up study.
Seiluri, Tina; Lahti, Jouni; Rahkonen, Ossi; Lahelma, Eero; Lallukka, Tea
2011-03-01
Physical activity is known to have health benefits across population groups. However, less is known about changes over time in socioeconomic differences in leisure-time physical activity and the reasons for the changes. We hypothesised that class differences in leisure-time physical activity would widen over time due to declining physical activity among the lower occupational classes. We examined whether occupational class differences in leisure-time physical activity change over time in a cohort of Finnish middle-aged women and men. We also examined whether a set of selected covariates could account for the observed changes. The data were derived from the Helsinki Health Study cohort mail surveys; the respondents were 40-60-year-old employees of the City of Helsinki at baseline in 2000-2002 (n = 8960, response rate 67%). Follow-up questionnaires were sent to the baseline respondents in 2007 (n = 7332, response rate 83%). The outcome measure was leisure-time physical activity, including commuting, converted to metabolic equivalent tasks (MET). Socioeconomic position was measured by occupational class (professionals, semi-professionals, routine non-manual employees and manual workers). The covariates included baseline age, marital status, limiting long-lasting illness, common mental disorders, job strain, physical and mental health functioning, smoking, body mass index, and employment status at follow-up. Firstly the analyses focused on changes over time in age adjusted prevalence of leisure-time physical activity. Secondly, logistic regression analysis was used to adjust for covariates of changes in occupational class differences in leisure-time physical activity. At baseline there were no occupational class differences in leisure-time physical activity. Over the follow-up leisure-time physical activity increased among those in the higher classes and decreased among manual workers, suggesting the emergence of occupational class differences at follow-up. Women in routine non-manual and manual classes and men in the manual class tended to be more often physically inactive in their leisure-time (<14 MET hours/week) and to be less often active (>30 MET hours/week) than those in the top two classes. Adjustment for the covariates did not substantially affect the observed occupational class differences in leisure-time physical activity at follow-up. Occupational class differences in leisure-time physical activity emerged over the follow-up period among both women and men. Leisure-time physical activity needs to be promoted among ageing employees, especially among manual workers.
Labor Supply of Married Women in Part-Time and Full-Time Occupations
ERIC Educational Resources Information Center
Morgenstern, Richard D.; Hamovitch, William
1976-01-01
This study examines differences in the labor supply of married women to part-time and full-time occupations, concluding that there are major differences in the determinants of labor supply for married women in part-time as opposed to full-time occupations. (HD)
Annoyance due to railway vibration at different times of the day.
Peris, Eulalia; Woodcock, James; Sica, Gennaro; Moorhouse, Andrew T; Waddington, David C
2012-02-01
The time of day when vibration occurs is considered as a factor influencing the human response to vibration. The aim of the present paper is to identify the times of day during which railway vibration causes the greatest annoyance, to measure the differences between annoyance responses for different time periods and to obtain estimates of the time of day penalties. This was achieved using data from case studies comprised of face-to-face interviews and internal vibration measurements (N=755). Results indicate that vibration annoyance differs with time of day and that separate time of day weights can be applied when considering exposure-response relationships from railway vibration in residential environments. © 2012 Acoustical Society of America
NASA Technical Reports Server (NTRS)
Zhou, Wei
1993-01-01
In high-accuracy measurement of periodic signals, the greatest common factor frequency and its characteristics play a special role. A method of time difference measurement, the time difference method by dual 'phase coincidence point' detection, is described. This method utilizes the characteristics of the greatest common factor frequency to measure the time or phase difference between periodic signals. It is suitable for a very wide frequency range. Measurement precision and potential accuracy of several picoseconds were demonstrated with this new method. The instrument based on this method is very simple, and the demands on the common oscillator are low. This method and instrument can be used widely.
ERIC Educational Resources Information Center
Henson, James M.; Reise, Steven P.; Kim, Kevin H.
2007-01-01
The accuracy of structural model parameter estimates in latent variable mixture modeling was explored with a 3 (sample size) × 3 (exogenous latent mean difference) × 3 (endogenous latent mean difference) × 3 (correlation between factors) × 3 (mixture proportions) factorial design. In addition, the efficacy of several…
A High Stability Time Difference Readout Technique of RTD-Fluxgate Sensors
Pang, Na; Cheng, Defu; Wang, Yanzhang
2017-01-01
The performance of Residence Times Difference (RTD)-fluxgate sensors is closely related to the time difference readout technique. The noise of the induction signal affects the quality of the output signal of the following circuit and the time difference detection, so the stability of the sensor is limited. Based on the analysis of the uncertainty of the RTD-fluxgate using the Bidirectional Magnetic Saturation Time Difference (BMSTD) readout scheme, the relationship between the saturation state of the magnetic core and the target (DC) magnetic field is studied in this article. It is proposed that combining the excitation and induction signals can provide the Negative Magnetic Saturation Time (NMST), which is a detection quantity used to measure the target magnetic field. Also, a mathematical model of output response between NMST and the target magnetic field is established, which analyzes the output NMST and sensitivity of the RTD-fluxgate sensor under different excitation conditions and is compared to the BMSTD readout scheme. The experiment results indicate that this technique can effectively reduce the noise influence. The fluctuation of time difference is less than ±0.1 μs in a target magnetic field range of ±5 × 10^4 nT. The accuracy and stability of the sensor are improved, so the RTD-fluxgate using the readout technique of high stability time difference is suitable for detecting weak magnetic fields. PMID:29023409
[Psychological time, definition and challenges].
Droit-Volet, Sylvie
2012-10-01
Psychological time comprises different forms of time. Each form of time corresponds to different psychological mechanisms. The human being is subject to distortions of time under the effect of emotions. The effectiveness of social interaction depends on our aptitude to synchronise ourselves with others.
Timing Calibration in PET Using a Time Alignment Probe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moses, William W.; Thompson, Christopher J.
2006-05-05
We evaluate the Scanwell Time Alignment Probe for performing the timing calibration for the LBNL Prostate-Specific PET Camera. We calibrate the time delay correction factors for each detector module in the camera using two methods--using the Time Alignment Probe (which measures the time difference between the probe and each detector module) and using the conventional method (which measures the timing difference between all module-module combinations in the camera). These correction factors, which are quantized in 2 ns steps, are compared on a module-by-module basis. The values are in excellent agreement--of the 80 correction factors, 62 agree exactly, 17 differ by 1 step, and 1 differs by 2 steps. We also measure on-time and off-time counting rates when the two sets of calibration factors are loaded into the camera and find that they agree within statistical error. We conclude that the performance using the Time Alignment Probe and the conventional method is equivalent.
Liu, Wensong; Yang, Jie; Zhao, Jinqi; Shi, Hongtao; Yang, Le
2018-02-12
The traditional unsupervised change detection methods based on the pixel level can only detect changes between two different times with the same sensor, and the results are easily affected by speckle noise. In this paper, a novel method is proposed to detect change based on time-series data from different sensors. Firstly, the overall difference image of the time-series PolSAR is calculated by omnibus test statistics, and difference images between any two images at different times are acquired by the Rj test statistic. Secondly, the difference images are segmented with a Generalized Statistical Region Merging (GSRM) algorithm, which can suppress the effect of speckle noise. A Generalized Gaussian Mixture Model (GGMM) is then used to obtain the time-series change detection maps in the final step of the proposed method. To verify the effectiveness of the proposed method, we carried out a change detection experiment using time-series PolSAR images acquired by Radarsat-2 and Gaofen-3 over the city of Wuhan, in China. Results show that the proposed method can not only detect the time-series change from different sensors, but can also better suppress the influence of speckle noise and improve the overall accuracy and Kappa coefficient.
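The final mixture-model step can be sketched with an ordinary two-component Gaussian mixture as a stand-in for the generalized Gaussian mixture model (GGMM) named above; the synthetic difference image and the labelling rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def change_map_from_difference(diff_image, n_components=2):
    """Threshold a SAR difference image by fitting a two-component mixture
    and labelling the component with the larger mean as 'change'.
    sklearn's GaussianMixture is a Gaussian stand-in for the GGMM."""
    values = diff_image.reshape(-1, 1).astype(float)
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(values)
    labels = gmm.predict(values)
    change_label = int(np.argmax(gmm.means_.ravel()))
    return (labels == change_label).reshape(diff_image.shape)

# Synthetic difference image: mostly low values (no change) plus a bright
# changed patch; purely illustrative.
rng = np.random.default_rng(3)
diff = rng.gamma(shape=2.0, scale=1.0, size=(128, 128))
diff[40:60, 40:60] += 8.0
mask = change_map_from_difference(diff)
print(f"changed pixels: {mask.sum()} of {mask.size}")
```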
ERIC Educational Resources Information Center
Chang, Shu-Ren; Plake, Barbara S.; Kramer, Gene A.; Lien, Shu-Mei
2011-01-01
This study examined the amount of time that different ability-level examinees spend on questions they answer correctly or incorrectly across different pretest item blocks presented on a fixed-length, time-restricted computerized adaptive testing (CAT). Results indicate that different ability-level examinees require different amounts of time to…
Integrated method for chaotic time series analysis
Hively, Lee M.; Ng, Esmond G.
1998-01-01
Methods and apparatus for automatically detecting differences between similar but different states in a nonlinear process monitor nonlinear data. Steps include: acquiring the data; digitizing the data; obtaining nonlinear measures of the data via chaotic time series analysis; obtaining time serial trends in the nonlinear measures; and determining by comparison whether differences between similar but different states are indicated.
Oxalate content of different drinkable dilutions of tea infusions after different brewing times.
Lotfi Yagin, Neda; Mahdavi, Reza; Nikniaz, Zeinab
2012-01-01
The aims of this study were to determine the effect of different brewing times and diluting on oxalate content of loose-packed black teas consumed in Tabriz, Iran. The oxalate content of black teas after brewing for 5, 10, 15, 30, 60 minutes was measured in triplicate by enzymatic assay. In order to attain the most acceptable dilution of tea infusions, tea samples which were brewed for 15, 30 and 60 minutes were diluted two (120 ml), three (80 ml) and four (60 ml) times respectively. There was a stepwise increase in oxalate concentrations associated with increased brewing times (P< 0.001) with oxalate contents ranging from 4.4 mg/240 ml for the 5 min to 6.3 mg/240 ml for 60 min brewing times, respectively. There were significant differences between the mean oxalate content of different dilutions after brewing for 15, 30 and 60 minutes (P< 0.001). The oxalate content of Iranian consumed black tea after different brewing times and different dilution was below the recommended levels. Therefore, it seems that consumption of black tea several times per day would not pose significant health risk in kidney stone patients and susceptible individuals.
Oxalate Content of Different Drinkable Dilutions of Tea Infusions after Different Brewing Times
Lotfi Yagin, Neda; Mahdavi, Reza; Nikniaz, Zeinab
2012-01-01
Background: The aims of this study were to determine the effect of different brewing times and diluting on oxalate content of loose-packed black teas consumed in Tabriz, Iran. Methods: The oxalate content of black teas after brewing for 5, 10, 15, 30, 60 minutes was measured in triplicate by enzymatic assay. In order to attain the most acceptable dilution of tea infusions, tea samples which were brewed for 15, 30 and 60 minutes were diluted two (120 ml), three (80 ml) and four (60 ml) times respectively. Results: There was a stepwise increase in oxalate concentrations associated with increased brewing times (P< 0.001) with oxalate contents ranging from 4.4 mg/240 ml for the 5 min to 6.3 mg/240 ml for 60 min brewing times, respectively. There were significant differences between the mean oxalate content of different dilutions after brewing for 15, 30 and 60 minutes (P< 0.001). Conclusion: The oxalate content of Iranian consumed black tea after different brewing times and different dilution was below the recommended levels. Therefore, it seems that consumption of black tea several times per day would not pose significant health risk in kidney stone patients and susceptible individuals. PMID:24688937
Han, Hui; Wang, Gengfu; Su, Puyu
2016-01-01
To explore the relationship between pubertal timing and aggressive behaviors, stratified random sampling was used to select 5760 students from one junior high school and one high school. The Pubertal Development Scale (PDS) questionnaire and perceived pubertal timing were used to evaluate pubertal timing, and the Buss-Perry questionnaire was used to assess the students' aggressive behaviors. Aggressive behavior scores differed significantly among junior high school students with different perceived pubertal timing: the score was highest for early pubertal timing and lowest for delayed pubertal timing, and the physical and verbal aggression scores of high school boys with early or normal pubertal timing were higher than those of boys with delayed pubertal timing (P < 0.05). Physical aggression, anger and hostility scores were highest in girls with early pubertal timing, with significant differences between groups. For perceived pubertal timing, physical aggression, anger and hostility scores were highest in girls, both in junior high school and high school, and verbal aggression scores were higher in boys with normal or early pubertal timing (P < 0.05), with significant differences between groups. There is a close relationship between early pubertal timing, as assessed with the PDS questionnaire, and aggressive behaviors, and perceived pubertal timing has a relatively large impact on girls' aggressive behaviors.
A wavelet based approach to measure and manage contagion at different time scales
NASA Astrophysics Data System (ADS)
Berger, Theo
2015-10-01
We decompose financial return series of US stocks into different time scales with respect to different market regimes. First, we examine dependence structure of decomposed financial return series and analyze the impact of the current financial crisis on contagion and changing interdependencies as well as upper and lower tail dependence for different time scales. Second, we demonstrate to which extent the information of different time scales can be used in the context of portfolio management. As a result, minimizing the variance of short-run noise outperforms a portfolio that minimizes the variance of the return series.
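A rough sketch of the scale-by-scale idea is shown below using the PyWavelets package: reconstruct the finest-scale (short-run) component of each return series by zeroing all other coefficients, then compare minimum-variance weights computed from the raw returns with weights computed from that component. The wavelet choice, decomposition depth and synthetic returns are illustrative assumptions, not the paper's setup.

```python
import numpy as np
import pywt   # assumes the PyWavelets package is installed

def scale_component(series, level, wavelet="db4", max_level=4):
    """Reconstruct the detail component of `series` at one decomposition
    level (level=1 is the finest/short-run scale) by zeroing all other
    DWT coefficients and inverting the transform."""
    coeffs = pywt.wavedec(series, wavelet, level=max_level)
    kept = [np.zeros_like(c) for c in coeffs]
    kept[-level] = coeffs[-level]          # coefficients are ordered coarse -> fine
    return pywt.waverec(kept, wavelet)[: len(series)]

def min_variance_weights(returns):
    """Unconstrained minimum-variance weights, proportional to inv(cov) @ ones."""
    inv_cov = np.linalg.inv(np.cov(returns, rowvar=False))
    w = inv_cov @ np.ones(inv_cov.shape[0])
    return w / w.sum()

# Illustrative: two synthetic return series; compare weights that minimise
# the variance of the raw returns vs. of their finest-scale (short-run) part.
rng = np.random.default_rng(4)
rets = rng.normal(size=(1024, 2)) @ np.array([[1.0, 0.3], [0.0, 0.8]])
fine = np.column_stack([scale_component(rets[:, k], level=1) for k in range(2)])
print("raw-return weights:     ", min_variance_weights(rets))
print("short-run noise weights:", min_variance_weights(fine))
```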
Effects of time of day on shopping behavior.
Chebat, J C
1999-04-01
Shoppers interviewed in a shopping mall at different times of the day show different activities within the mall and attitudes toward the products. Prices also vary with the time of the day. These results can be explained in terms of shopping values and the related demographic characteristics of the population visiting the shopping center at different times of the day.
Gravelle, Hugh; Siciliani, Luigi
2009-08-01
In many public healthcare systems treatments are rationed by waiting time. We examine the optimal allocation of a fixed supply of a given treatment between different groups of patients. Even in the absence of any distributional aims, welfare is increased by third degree waiting time discrimination: setting different waiting times for different groups waiting for the same treatment. Because waiting time imposes dead weight losses on patients, lower waiting times should be offered to groups with higher marginal waiting time costs and with less elastic demand for the treatment.
Sensitivity of Aerosol Multi-Sensor Daily Data Intercomparison to the Level 3 Dataday Definition
NASA Technical Reports Server (NTRS)
Leptoukh, Gregory; Lary, David; Shen, Suhung; Lynnes, Christopher
2010-01-01
Topics include: why people use Level 3 products, why someone might go wrong with Level 3 products, differences in L3 from different sensors, Level 3 data day definition, MODIS vs. MODIS, AOD MODIS Terra vs. Aqua in Pacific, AOD Aqua MODIS vs. MISR correlation map, MODIS vs MISR on Terra, MODIS atmospheric data day definition, orbit time difference for Terra and Aqua 2009-01-06, maximum time difference for Terra (Calendar day), artifact explains, data day definitions, local time distribution, spatial (local time) data day definition, maximum time difference between Terra and Aqua, Removing the artifact in 16-day AOD correlation, MODIS cloud top pressure, and MODIS Terra and Aqua vs. AIRS cloud top pressure.
Decoherence in quantum systems in a static gravitational field
NASA Astrophysics Data System (ADS)
Shariati, Ahmad; Khorrami, Mohammad; Loran, Farhang
2016-09-01
A small quantum system is studied which is a superposition of states localized in different positions in a static gravitational field. The time evolution of the correlation between different positions is investigated, and it is seen that there are two time scales for such an evolution (decoherence). Both time scales are inversely proportional to the red shift difference between the two points. These time scales correspond to decoherences which are linear and quadratic, respectively, in time.
NASA Astrophysics Data System (ADS)
Saturnino, Diana; Olsen, Nils; Finlay, Chris
2017-04-01
High-precision magnetic measurements collected by satellites such as Swarm or CHAMP, flying at altitudes between 300 and 800 km, allow for improved geomagnetic field modelling. An accurate description of the internal (core and crust) field must account for contributions from other sources, such as the ionosphere and magnetosphere. However, the description of the rapidly changing external field contributions, particularly during the quiet times from which the data are selected, constitutes a major challenge in the construction of such models. Our study attempts to obtain improved knowledge of ionospheric field contributions during quiet-time conditions, in particular during night local times. We use two different datasets: ground magnetic observatory time series (obtained below the ionospheric E-layer currents), and Swarm satellite measurements acquired above these currents. First, we remove from the data estimates of the core, lithospheric and large-scale magnetospheric magnetic contributions as given by the CHAOS-6 model, to obtain corrected time series. Then, we focus on the differences of the corrected time series: for a pair of ground magnetic observatories, we determine the time series of the difference, and similarly we determine time series differences at satellite altitude, given by the difference between the Swarm Alpha and Charlie satellites taken in the vicinity of the ground observatory locations. The obtained difference time series are analysed regarding their temporal and spatial scale variations, with emphasis on measurements during night local times.
Efficacy of cold light bleaching using different bleaching times and their effects on human enamel.
Wang, Wei; Zhu, Yuhe; Li, Jiajia; Liao, Susan; Ai, Hongjun
2013-01-01
This study investigated the efficacy of cold light bleaching using different bleaching times and the effects thereof on tooth enamel. Before and after bleaching, stained tooth specimens were subjected to visual and instrumental colorimetric assessments using Vita Shade Guide and spectrophotometric shade matching. Enamel surface alterations were examined using scanning electron microscopy (SEM) to analyze surface morphology, surface microhardness (SMH) measurement to determine changes in mechanical properties, and X-ray diffraction (XRD) to characterize post-bleaching enamel composition. Cold light bleaching successfully improved tooth color, with optimal efficacy when bleaching time was beyond 10 min. Significant differences in surface morphology were observed among the different bleaching times, but no significant differences were observed for enamel composition and surface microhardness among the different bleaching times. Results of this study revealed an association between the bleaching time of cold light bleaching and its whitening efficacy. Together with the results on enamel surface changes, this study provided positive evidence to support cold light bleaching as an in-office bleaching treatment.
Integrated method for chaotic time series analysis
Hively, L.M.; Ng, E.G.
1998-09-29
Methods and apparatus for automatically detecting differences between similar but different states in a nonlinear process monitor nonlinear data are disclosed. Steps include: acquiring the data; digitizing the data; obtaining nonlinear measures of the data via chaotic time series analysis; obtaining time serial trends in the nonlinear measures; and determining by comparison whether differences between similar but different states are indicated. 8 figs.
Franco-Watkins, Ana M; Davis, Matthew E; Johnson, Joseph G
2016-11-01
Many decisions are made under suboptimal circumstances, such as time constraints. We examined how different experiences of time constraints affected decision strategies on a probabilistic inference task and whether individual differences in working memory accounted for complex strategy use across different levels of time. To examine information search and attentional processing, we used an interactive eye-tracking paradigm where task information was occluded and only revealed by an eye fixation to a given cell. Our results indicate that although participants change search strategies during the most restricted times, the occurrence of the shift in strategies depends both on how the constraints are applied as well as individual differences in working memory. This suggests that, in situations that require making decisions under time constraints, one can influence performance by being sensitive to working memory and, potentially, by acclimating people to the task time gradually.
Vergauwe, Evie; Hardman, Kyle O; Rouder, Jeffrey N; Roemer, Emily; McAllaster, Sara; Cowan, Nelson
2016-12-01
One popular idea is that, to support the maintenance of a set of elements over brief periods of time, the focus of attention rotates among the different elements, thereby serially refreshing the content of working memory (WM). In the research reported here, probe letters were presented between to-be-remembered letters, and response times to these probes were used to infer the status of the different items in WM. If the focus of attention cycles from one item to the next, its content should be different at different points in time, and this should be reflected in a change in the response time patterns over time. Across a set of four experiments, we demonstrated a striking pattern of invariance in the response time patterns over time, suggesting either that the content of the focus of attention did not change over time or that response times cannot be used to infer the content of the focus of attention. We discuss how this pattern constrains models of WM, attention, and human information processing.
Factors Affecting the Turnover of Different Groups of Part-Time Workers
ERIC Educational Resources Information Center
Senter, Jenell L.; Martin, James E.
2007-01-01
Past research on employee attitudes and behavior has focused mainly on full-time employees. When part-time employees have been studied, the research has concentrated on the differences between full-time and part-time employees. Recent research has suggested that part-time employees should not be viewed as a single, undifferentiated group. Instead…
The evolving block universe and the meshing together of times.
Ellis, George F R
2014-10-01
It has been proposed that spacetime should be regarded as an evolving block universe, bounded to the future by the present time, which continually extends to the future. This future boundary is defined at each time by measuring proper time along Ricci eigenlines from the start of the universe. A key point, then, is that physical reality can be represented at many different scales: hence, the passage of time may be seen as different at different scales, with quantum gravity determining the evolution of spacetime itself at the Planck scale, but quantum field theory and classical physics determining the evolution of events within spacetime at larger scales. The fundamental issue then arises as to how the effective times at different scales mesh together, leading to the concepts of global and local times. © 2014 New York Academy of Sciences.
NASA Astrophysics Data System (ADS)
Bienau, Miriam J.; Kröncke, Michael; Eiserhardt, Wolf L.; Otte, Annette; Graae, Bente J.; Hagen, Dagmar; Milbau, Ann; Durka, Walter; Eckstein, R. Lutz
2015-11-01
The topography within arctic-alpine landscapes is very heterogeneous, resulting in diverse snow distribution patterns, with different snowmelt timing in spring. This may influence the phenological development of arctic and alpine plant species, and asynchronous flowering may promote adaptation of plants to their local environments. We studied how the flowering phenology of the dominant dwarf shrub Empetrum hermaphroditum varied among three habitats (exposed ridges, sheltered depressions and birch forest) differing in winter snow depth and thus snowmelt timing in spring, and whether the observed patterns were consistent across three different study areas. Despite significant differences in snowmelt timing between habitats, full flowering of E. hermaphroditum was nearly synchronous between the habitats, implying a high flowering overlap. Our data show that exposed ridges, which had a long lag phase between snowmelt and flowering, experienced different temperature and light conditions than the two late-melting habitats between snowmelt and flowering. Our study demonstrates that small-scale variation seems to matter less to flowering of Empetrum than interannual differences in snowmelt timing.
A Research Note on Time With Children in Different- and Same-Sex Two-Parent Families.
Prickett, Kate C; Martin-Storey, Alexa; Crosnoe, Robert
2015-06-01
Public debate on same-sex marriage often focuses on the disadvantages that children raised by same-sex couples may face. On one hand, little evidence suggests any difference in the outcomes of children raised by same-sex parents and different-sex parents. On the other hand, most studies are limited by problems of sample selection and size, and few directly measure the parenting practices thought to influence child development. This research note demonstrates how the 2003-2013 American Time Use Survey (n=44,188) may help to address these limitations. Two-tier Cragg's Tobit alternative models estimated the amount of time that parents in different-sex and same-sex couples engaged in child-focused time. Women in same-sex couples were more likely than either women or men in different-sex couples to spend such time with children. Overall, women (regardless of the gender of their partners) and men coupled with other men spent significantly more time with children than men coupled with women, conditional on spending any child-focused time. These results support prior research that different-sex couples do not invest in children at appreciably different levels than same-sex couples. We highlight the potential for existing nationally representative data sets to provide preliminary insights into the developmental experiences of children in nontraditional families.
Liu, Xue-Li; Gai, Shuang-Shuang; Zhang, Shi-Le; Wang, Pu
2015-01-01
Background: An important attribute of the traditional impact factor was the controversial 2-year citation window. So far, several scholars have proposed using different citation time windows for evaluating journals. However, there is no confirmation of whether a longer citation time window would be better. How do the journal evaluation effects of 3IF, 4IF, and 6IF compare with those of 2IF and 5IF? To answer these questions, we made a comparative study of impact factors with different citation time windows against the peer-reviewed scores of ophthalmologic journals indexed by the Science Citation Index Expanded (SCIE) database. Methods: The peer-reviewed scores of 28 ophthalmologic journals were obtained through a self-designed survey questionnaire. Impact factors with different citation time windows (including 2IF, 3IF, 4IF, 5IF, and 6IF) of the 28 ophthalmologic journals were computed and compared in accordance with each impact factor's definition and formula, using the citation analysis function of the Web of Science (WoS) database. An analysis of the correlation between impact factors with different citation time windows and peer-reviewed scores was carried out. Results: Although impact factor values with different citation time windows were different, there was a high level of correlation between them when it came to evaluating journals. In the current study, for ophthalmologic journals' impact factors with different time windows in 2013, 3IF and 4IF seemed the ideal ranges for comparison, when assessed in relation to peer-reviewed scores. In addition, the 3-year and 4-year windows were quite consistent with the cited peak age of documents published by ophthalmologic journals. Research Limitations: Our study is based on ophthalmology journals and we only analyze the impact factors with different citation time windows in 2013, so it has yet to be ascertained whether other disciplines (especially those with a later cited peak) or other years would follow the same or similar patterns. Originality/Value: We designed the survey questionnaire ourselves, specifically to assess the real influence of journals. We used peer-reviewed scores to judge the journal evaluation effect of impact factors with different citation time windows. The main purpose of this study was to help researchers better understand the role of impact factors with different citation time windows in journal evaluation. PMID:26295157
Liu, Xue-Li; Gai, Shuang-Shuang; Zhang, Shi-Le; Wang, Pu
2015-01-01
An important attribute of the traditional impact factor was the controversial 2-year citation window. So far, several scholars have proposed using different citation time windows for evaluating journals. However, there is no confirmation of whether a longer citation time window would be better. How do the journal evaluation effects of 3IF, 4IF, and 6IF compare with those of 2IF and 5IF? To answer these questions, we made a comparative study of impact factors with different citation time windows against the peer-reviewed scores of ophthalmologic journals indexed by the Science Citation Index Expanded (SCIE) database. The peer-reviewed scores of 28 ophthalmologic journals were obtained through a self-designed survey questionnaire. Impact factors with different citation time windows (including 2IF, 3IF, 4IF, 5IF, and 6IF) of the 28 ophthalmologic journals were computed and compared in accordance with each impact factor's definition and formula, using the citation analysis function of the Web of Science (WoS) database. An analysis of the correlation between impact factors with different citation time windows and peer-reviewed scores was carried out. Although impact factor values with different citation time windows were different, there was a high level of correlation between them when it came to evaluating journals. In the current study, for ophthalmologic journals' impact factors with different time windows in 2013, 3IF and 4IF seemed the ideal ranges for comparison, when assessed in relation to peer-reviewed scores. In addition, the 3-year and 4-year windows were quite consistent with the cited peak age of documents published by ophthalmologic journals. Our study is based on ophthalmology journals and we only analyze the impact factors with different citation time windows in 2013, so it has yet to be ascertained whether other disciplines (especially those with a later cited peak) or other years would follow the same or similar patterns. We designed the survey questionnaire ourselves, specifically to assess the real influence of journals. We used peer-reviewed scores to judge the journal evaluation effect of impact factors with different citation time windows. The main purpose of this study was to help researchers better understand the role of impact factors with different citation time windows in journal evaluation.
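For reference, the n-year impact factor compared in both of the records above can be computed from citation and publication counts as in the sketch below; the counts are invented for illustration.

```python
def impact_factor(citations_in_year, items_published, year, window):
    """Impact factor of `year` with an n-year citation window: citations
    received in `year` by items published in the previous `window` years,
    divided by the number of citable items published in those years.
    `citations_in_year[y]` holds citations received in `year` to items
    published in year y; `items_published[y]` counts items from year y."""
    prev_years = range(year - window, year)
    cites = sum(citations_in_year.get(y, 0) for y in prev_years)
    items = sum(items_published.get(y, 0) for y in prev_years)
    return cites / items if items else float("nan")

# Toy journal data (purely illustrative counts).
cites_2013 = {2007: 90, 2008: 110, 2009: 130, 2010: 150, 2011: 180, 2012: 160}
items = {2007: 100, 2008: 100, 2009: 110, 2010: 120, 2011: 120, 2012: 130}
for n in (2, 3, 4, 5, 6):
    print(f"{n}IF(2013) = {impact_factor(cites_2013, items, 2013, n):.2f}")
```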
ERIC Educational Resources Information Center
Nebraska Univ., Lincoln. Dept. of Agricultural Education.
A study was conducted to determine how vocational teachers used their time, whether teachers in different vocational areas used their time differently, whether a workshop on time management would improve their use of time, and if such factors as marital status, sex, and extended contracts influenced how vocational teachers used their time. Random…
Frequency-phase analysis of resting-state functional MRI
Goelman, Gadi; Dan, Rotem; Růžička, Filip; Bezdicek, Ondrej; Růžička, Evžen; Roth, Jan; Vymazal, Josef; Jech, Robert
2017-01-01
We describe an analysis method that characterizes the correlation between coupled time-series functions by their frequencies and phases. It provides a unified framework for simultaneous assessment of frequency and latency of a coupled time-series. The analysis is demonstrated on resting-state functional MRI data of 34 healthy subjects. Interactions between fMRI time-series are represented by cross-correlation (with time-lag) functions. A general linear model is used on the cross-correlation functions to obtain the frequencies and phase-differences of the original time-series. We define symmetric, antisymmetric and asymmetric cross-correlation functions that correspond respectively to in-phase, 90° out-of-phase and any phase difference between a pair of time-series, where the last two were never introduced before. Seed maps of the motor system were calculated to demonstrate the strength and capabilities of the analysis. Unique types of functional connections, their dominant frequencies and phase-differences have been identified. The relation between phase-differences and time-delays is shown. The phase-differences are speculated to inform transfer-time and/or to reflect a difference in the hemodynamic response between regions that are modulated by neurotransmitters concentration. The analysis can be used with any coupled functions in many disciplines including electrophysiology, EEG or MEG in neuroscience. PMID:28272522
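The building block of the analysis, a lagged cross-correlation function together with its symmetric (in-phase) and antisymmetric (90-degree out-of-phase) parts, can be sketched as follows; the sampling parameters and synthetic signals are illustrative, and the GLM fitting step is not reproduced.

```python
import numpy as np

def lagged_crosscorr(x, y, max_lag):
    """Normalized cross-correlation of two z-scored time series for lags
    -max_lag..+max_lag, plus its even (symmetric) and odd (antisymmetric)
    parts around zero lag."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    lags = np.arange(-max_lag, max_lag + 1)
    cc = np.array([
        np.mean(x[lag:] * y[:len(y) - lag]) if lag >= 0
        else np.mean(x[:len(x) + lag] * y[-lag:])
        for lag in lags
    ])
    symmetric = 0.5 * (cc + cc[::-1])       # in-phase component
    antisymmetric = 0.5 * (cc - cc[::-1])   # 90-degree out-of-phase component
    return lags, cc, symmetric, antisymmetric

# Two synthetic "BOLD" signals sharing a 0.05 Hz rhythm with a phase offset.
t = np.arange(0, 600, 2.0)                  # TR = 2 s, purely illustrative
x = np.sin(2 * np.pi * 0.05 * t) + 0.3 * np.random.randn(t.size)
y = np.sin(2 * np.pi * 0.05 * t - 0.6) + 0.3 * np.random.randn(t.size)
lags, cc, sym, anti = lagged_crosscorr(x, y, max_lag=10)
print("peak correlation at lag (TRs):", lags[np.argmax(cc)])
```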
Measurement of oxidative metabolism of the working human muscles by near-infrared spectroscopy
NASA Astrophysics Data System (ADS)
Yücetaş, Akin; Şayli, Ömer; Karahan, Mustafa; Akin, Ata
2006-02-01
Monitoring the oxygenation of skeletal muscle tissue during the rest-to-work transient provides valuable information about the performance of a particular tissue in adapting to aerobic glycolysis. In this paper we analyze the temporal relation of O2 consumption to the deoxy-hemoglobin (Hb) signals measured by the functional near-infrared spectroscopy (fNIRS) technique during moderate isotonic forearm finger joint flexion exercise under ischemic conditions, and model it with a mono-exponential equation with delay. The time constants of the fitted equation are examined under two different work loads and between subjects differing in gender. Ten subjects (6 men and 4 women) performed isotonic forearm finger joint flexion exercise with two different loads. It is shown that under the same load, men and women generate similar time constants and time delays. However, an apparent change in time constants and time delays was observed when the exercise was performed under different loads. When a t-test is applied to compare the time constants between the 0.41202 W and 0.90252 W loads, a P value of 9.3445x10^-4 < 0.05 is obtained, which implies that the difference between the time constants is statistically significant. When the same procedure is applied to the time delays, a P value of 0.027 < 0.05 is obtained, which implies that the difference between the time delays is also statistically significant.
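A minimal sketch of fitting a mono-exponential model with delay, of the general form used above, is shown below; the parameter names, the simulated data and the starting values are illustrative assumptions rather than values from the study.

    import numpy as np
    from scipy.optimize import curve_fit

    def mono_exp_with_delay(t, amplitude, tau, delay, baseline):
        # Exponential rise with time constant `tau`, starting after `delay` seconds.
        rise = amplitude * (1.0 - np.exp(-(t - delay) / tau))
        return baseline + np.where(t >= delay, rise, 0.0)

    t = np.arange(0.0, 60.0, 0.1)                        # seconds
    hhb = mono_exp_with_delay(t, 5.0, 8.0, 3.0, 1.0)     # simulated deoxy-hemoglobin response
    hhb += 0.2 * np.random.randn(t.size)                 # simulated measurement noise
    popt, _ = curve_fit(mono_exp_with_delay, t, hhb, p0=[4.0, 5.0, 2.0, 0.0])
    amplitude_hat, tau_hat, delay_hat, baseline_hat = popt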
NASA Astrophysics Data System (ADS)
Godsey, S. E.; Kirchner, J. W.
2008-12-01
The mean residence time - the average time that it takes rainfall to reach the stream - is a basic parameter used to characterize catchment processes. Heterogeneities in these processes lead to a distribution of travel times around the mean residence time. By examining this travel time distribution, we can better predict catchment response to contamination events. A catchment system with shorter residence times or narrower distributions will respond quickly to contamination events, whereas systems with longer residence times or longer-tailed distributions will respond more slowly to those same contamination events. The travel time distribution of a catchment is typically inferred from time series of passive tracers (e.g., water isotopes or chloride) in precipitation and streamflow. Variations in the tracer concentration in streamflow are usually damped compared to those in precipitation, because precipitation inputs from different storms (with different tracer signatures) are mixed within the catchment. Mathematically, this mixing process is represented by the convolution of the travel time distribution and the precipitation tracer inputs to generate the stream tracer outputs. Because convolution in the time domain is equivalent to multiplication in the frequency domain, it is relatively straightforward to estimate the parameters of the travel time distribution in either domain. In the time domain, the parameters describing the travel time distribution are typically estimated by maximizing the goodness of fit between the modeled and measured tracer outputs. In the frequency domain, the travel time distribution parameters can be estimated by fitting a power-law curve to the ratio of precipitation spectral power to stream spectral power. Differences between the methods of parameter estimation in the time and frequency domain mean that these two methods may respond differently to variations in data quality, record length and sampling frequency. Here we evaluate how well these two methods of travel time parameter estimation respond to different sources of uncertainty and compare the methods to one another. We do this by generating synthetic tracer input time series of different lengths, and convolve these with specified travel-time distributions to generate synthetic output time series. We then sample both the input and output time series at various sampling intervals and corrupt the time series with realistic error structures. Using these 'corrupted' time series, we infer the apparent travel time distribution, and compare it to the known distribution that was used to generate the synthetic data in the first place. This analysis allows us to quantify how different record lengths, sampling intervals, and error structures in the tracer measurements affect the apparent mean residence time and the apparent shape of the travel time distribution.
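The convolution step described above can be sketched as follows, assuming (purely for illustration) a gamma-shaped travel time distribution and a synthetic precipitation tracer series; none of the numerical values are from the study.

    import numpy as np
    from scipy.stats import gamma

    dt = 1.0                                            # days
    t = np.arange(0.0, 365.0, dt)
    c_precip = 10.0 + 2.0 * np.random.randn(t.size)     # synthetic input tracer series

    ttd = gamma.pdf(t, a=1.5, scale=30.0)               # assumed travel time distribution
    ttd /= ttd.sum() * dt                               # normalize to unit area

    # Stream output: damped version of the input, by convolution with the distribution.
    c_stream = np.convolve(c_precip, ttd)[: t.size] * dt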
Adjei, Nicholas Kofi; Brand, Tilman; Zeeb, Hajo
2017-01-01
Background Paradoxically, despite their longer life expectancy, women report poorer health than men. Time devoted to differing social roles could be an explanation for the observed gender differences in health among the elderly. The objective of this study was to explain gender differences in self-reported health among the elderly by taking time use activities, socio-economic positions, family characteristics and cross-national differences into account. Methods Data from the Multinational Time Use Study (MTUS) on 13,223 men and 18,192 women from Germany, Italy, Spain, UK and the US were analyzed. Multiple binary logistic regression models were used to examine the association between social factors and health for men and women separately. We further identified the relative contribution of different factors to total gender inequality in health using the Blinder-Oaxaca decomposition method. Results Whereas time allocated to paid work, housework and active leisure activities were positively associated with health, time devoted to passive leisure and personal activities were negatively associated with health among both men and women, but the magnitude of the association varied by gender and country. We found significant gender differences in health in Germany, Italy and Spain, but not in the other countries. The decomposition showed that differences in the time allocated to active leisure and level of educational attainment accounted for the largest health gap. Conclusions Our study represents a first step in understanding cross-national differences in the association between health status and time devoted to role-related activities among elderly men and women. The results, therefore, demonstrate the need of using an integrated framework of social factors in analyzing and explaining the gender and cross-national differences in the health of the elderly population. PMID:28949984
Statistical inference methods for sparse biological time series data.
Ndukum, Juliet; Fonseca, Luís L; Santos, Helena; Voit, Eberhard O; Datta, Susmita
2011-04-25
Comparing metabolic profiles under different biological perturbations has become a powerful approach to investigating the functioning of cells. The profiles can be taken as single snapshots of a system, but more information is gained if they are measured longitudinally over time. The results are short time series consisting of relatively sparse data that cannot be analyzed effectively with standard time series techniques, such as autocorrelation and frequency domain methods. In this work, we study longitudinal time series profiles of glucose consumption in the yeast Saccharomyces cerevisiae under different temperatures and preconditioning regimens, which we obtained with methods of in vivo nuclear magnetic resonance (NMR) spectroscopy. For the statistical analysis we first fit several nonlinear mixed effect regression models to the longitudinal profiles and then used an ANOVA likelihood ratio method in order to test for significant differences between the profiles. The proposed methods are capable of distinguishing metabolic time trends resulting from different treatments and associate significance levels to these differences. Among several nonlinear mixed-effects regression models tested, a three-parameter logistic function represents the data with highest accuracy. ANOVA and likelihood ratio tests suggest that there are significant differences between the glucose consumption rate profiles for cells that had been--or had not been--preconditioned by heat during growth. Furthermore, pair-wise t-tests reveal significant differences in the longitudinal profiles for glucose consumption rates between optimal conditions and heat stress, optimal and recovery conditions, and heat stress and recovery conditions (p-values <0.0001). We have developed a nonlinear mixed effects model that is appropriate for the analysis of sparse metabolic and physiological time profiles. The model permits sound statistical inference procedures, based on ANOVA likelihood ratio tests, for testing the significance of differences between short time course data under different biological perturbations.
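As a simplified illustration of the three-parameter logistic fit mentioned above, the sketch below fits such a curve to a short, sparse time course using fixed effects only; it is not the full nonlinear mixed-effects model, and all data values and starting guesses are invented.

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic3(t, asymptote, rate, t_half):
        # Three-parameter logistic: upper asymptote, growth rate, half-maximum time.
        return asymptote / (1.0 + np.exp(-rate * (t - t_half)))

    t = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0])               # sparse sampling times
    glucose_consumed = np.array([0.2, 0.8, 2.5, 5.5, 8.0, 9.2, 9.7])  # invented values
    popt, pcov = curve_fit(logistic3, t, glucose_consumed, p0=[10.0, 1.0, 6.0])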
Dynamics of squirrel monkey linear vestibuloocular reflex and interactions with fixation distance.
Telford, L; Seidman, S H; Paige, G D
1997-10-01
Horizontal, vertical, and torsional eye movements were recorded using the magnetic search-coil technique during linear accelerations along the interaural (IA) and dorsoventral (DV) head axes. Four squirrel monkeys were translated sinusoidally over a range of frequencies (0.5-4.0 Hz) and amplitudes (0.1-0.7 g peak acceleration). The linear vestibuloocular reflex (LVOR) was recorded in darkness after brief presentations of visual targets at various distances from the subject. With subjects positioned upright or nose-up relative to gravity, IA translations generated conjugate horizontal (IA horizontal) eye movements, whereas DV translations with the head nose-up or right-side down generated conjugate vertical (DV vertical) responses. Both were compensatory for linear head motion and are thus translational LVOR responses. In concert with geometric requirements, both IA-horizontal and DV-vertical response sensitivities (in deg eye rotation/cm head translation) were related linearly to reciprocal fixation distance as measured by vergence (in m-1, or meter-angles, MA). The relationship was characterized by linear regressions, yielding sensitivity slopes (in deg.cm-1.MA-1) and intercepts (sensitivity at 0 vergence). Sensitivity slopes were greatest at 4.0 Hz, but were only slightly more than half the ideal required to maintain fixation. Slopes declined with decreasing frequency, becoming negligible at 0.5 Hz. Small responses were observed when vergence was zero (intercept), although no response is required. Like sensitivity slope, the intercept was largest at 4.0 Hz and declined with decreasing frequency. Phase lead was near zero (compensatory) at 4.0 Hz, but increased as frequency declined. Changes in head orientation, motion axis (IA vs. DV), and acceleration amplitude produced slight and sporadic changes in LVOR parameters. Translational LVOR response characteristics are consistent with high-pass filtering within LVOR pathways. Along with horizontal eye movements, IA translation generated small torsional responses. In contrast to the translational LVORs, IA-torsional responses were not systematically modulated by vergence angle. The IA-torsional LVOR is not compensatory for translation because it cannot maintain image stability. Rather, it likely compensates for the effective head tilt simulated by translation. When analyzed in terms of effective head tilt, torsional responses were greatest at the lowest frequency and declined as frequency increased, consistent with low-pass filtering of otolith input. It is unlikely that IA-torsional responses compensate for actual head tilt, however, because they were similar for both upright and nose-up head orientations. The IA-torsional and -horizontal LVORs seem to respond only to linear acceleration along the IA head axis, and the DV-vertical LVOR to acceleration along the head's DV axis, regardless of gravity.
The Role of Perspective in Mental Time Travel.
Ansuini, Caterina; Cavallo, Andrea; Pia, Lorenzo; Becchio, Cristina
2016-01-01
Recent years have seen accumulating evidence for the proposition that people process time by mapping it onto a linear spatial representation and automatically "project" themselves on an imagined mental time line. Here, we ask whether people can adopt the temporal perspective of another person when travelling through time. To elucidate similarities and differences between time travelling from one's own perspective or from the perspective of another person, we asked participants to mentally project themselves or someone else (i.e., a coexperimenter) to different time points. Three basic properties of mental time travel were manipulated: temporal location (i.e., where in time the travel originates: past, present, and future), motion direction (either backwards or forwards), and temporal duration (i.e., the distance to travel: one, three, or five years). We found that time travels originating in the present lasted longer in the self- than in the other-perspective. Moreover, for self-perspective, but not for other-perspective, time was differently scaled depending on where in time the travel originated. In contrast, when considering the direction and the duration of time travelling, no dissimilarities between the self- and the other-perspective emerged. These results suggest that self- and other-projection, despite some differences, share important similarities in structure.
Jern, Patrick; Santtila, Pekka; Johansson, Ada; Varjonen, Markus; Witting, Katarina; von der Pahlen, Bettina; Sandnabba, Kenneth
2009-09-01
Recently, attempts to formulate valid and suitable definitions for (different subcategories of) premature ejaculation have resulted in substantial progress in the pursuit to gain knowledge about ejaculatory function. However, the association between ejaculatory dysfunction and different types of sexual activities has yet to be thoroughly investigated, and (due to conflicting results between studies) the potential effects of age and relationship length still need to be taken into account. The aim of this study is to investigate the associations of age, relationship length, frequency of different sexual activities, and different modes of achieving ejaculation with self-reported ejaculation latency time. The main outcome is establishing associations between age, relationship length, self-reported ejaculation latency time, and frequency of different kinds of sexual activities and different modes of achieving ejaculation (such as achieving ejaculation through oral or vaginal sex). Statistical analyses of data on age, relationship length, self-reported ejaculation latency time, and frequency of different sexual activities and different modes of achieving ejaculation were conducted on a population-based sample of 3,189 males aged 18-48 years (mean = 29.9 years, standard deviation = 6.94). Age and relationship length were significantly negatively associated with self-reported ejaculation latency time. Frequency of different kinds of sexual behavior generally had a positive association with self-reported ejaculation latency time, as had different modes of achieving ejaculation. The findings highlight the need for more extensive studies on and increased knowledge of different aspects of ejaculatory function before a valid and suitable definition for premature ejaculation can be formulated.
NASA Astrophysics Data System (ADS)
Gholibeigian, Hassan
2015-03-01
The Iranian philosopher Mulla Sadra (1571-1640), in his theory of ``substantial motion'', emphasized that ``the universe moves in its entity'' and that ``time is the fourth dimension of the universe''. This definition of space-time was proposed by him three hundred years before Einstein. He argued that time is the magnitude of the motion (momentum) of matter in its entity. In other words, the time for each atom (body) is the sum of the momentums of its involved fundamental particles. The momentum for each atom is different from that of other atoms. In this methodology, by proposing some formulas, we can calculate the involved particles' momentum (time) for each atom during one second of the Eastern Time Zone (ETZ). Because of the differences between these momentums during a second in the ETZ, the time for each atom will be different from that of other atoms. This is the relativity in quantum physics. On the other hand, God communicates with elementary particles via sub-particles (see my next paper) and transfers packages (bits) of information and laws to them for processing and selection of their next step. Differences between packages, such as their complexity and processing velocity over time, constitute the second variable in the relativity of time for each atom, which may also affect this factor.
Agranovich, Anna V; Panter, A T; Puente, Antonio E; Touradji, Pegah
2011-07-01
Cultural differences in time attitudes and their effect on timed neuropsychological test performance were examined in matched non-clinical samples of 100 Russian and American adult volunteers using 8 tests that were previously reported to be relatively free of cultural bias: the Color Trails Test (CTT), Ruff Figural Fluency Test (RFFT), Symbol Digit Modalities Test (SDMT), and Tower of London-Drexel Edition (ToL(Dx)). A measure of time attitudes, the Culture of Time Inventory (COTI-33), was used to assess time attitudes potentially affecting time-limited testing. Americans significantly outscored Russians on the CTT, SDMT, and ToL(Dx) (p < .05), while differences in RFFT scores only approached statistical significance. Group differences also emerged in COTI-33 factor scores, which partially mediated differences in performance on CTT-1, SDMT, and ToL(Dx) initiation time, but did not account for the effect of culture on CTT-2. A significant effect of culture was also revealed in ratings of familiarity with testing procedures, which were negatively related to CTT, ToL(Dx), and SDMT scores. The current findings indicate that attitudes toward time may influence the results of time-limited testing and suggest that individuals who lack familiarity with timed testing procedures tend to obtain lower scores on timed tests.
Children's self-allocation and use of classroom curricular time.
Ingram, J; Worrall, N
1992-02-01
A class of 9-10 year-olds (N = 12) in a British primary school was observed as it moved over a one-year period through three types of classroom environment: traditional directive, transitional negotiative and established negotiative. Each environment offered the children a different relationship with curricular time and its control and allocation, moving from teacher-allocated time to child allocation. Pupil self-report and classroom observation indicated differences in the balance of curricular spread and in the time allocated to curricular subjects in relation to the type of classroom organisation and who controlled classroom time. These differences were evident at both class and individual child level. The established negotiative environment recorded the most equitable curricular balance, the traditional directive the least. While individual children responded differently within and across the three classroom environments, the established negotiative environment, in which time was under child control, recorded a preference for longer activity periods compared with settings where the teacher controlled time allocations.
Sugimoto, Yumi; Kajiwara, Yoshinobu; Hirano, Kazufumi; Yamada, Shizuo; Tagawa, Noriko; Kobayashi, Yoshiharu; Hotta, Yoshihiro; Yamada, Jun
2008-09-11
Strain differences in immobility time in the forced swimming test were investigated in five strains of mice, namely, ICR, ddY, C57BL/6, DBA/2 and BALB/c mice. There were significant strain differences. The immobility times of ICR, ddY and C57BL/6 mice were longer than those of DBA/2 and BALB/c mice. Immobility times were not significantly related to locomotor activity in these strains. There were also differences in sensitivity to the selective serotonin reuptake inhibitor (SSRI) fluvoxamine. In ICR, ddY and C57BL/6 mice, fluvoxamine did not affect immobility time, while it reduced the immobility time of DBA/2 and BALB/c mice dose-dependently. The noradrenaline reuptake inhibitor desipramine decreased immobility time in all strains of mice. Serotonin (5-HT) transporter binding in the brains of all five strains of mice was also investigated. Analysis of 5-HT transporter binding revealed significant strain differences, being lower in DBA/2 and BALB/c mice than in other strains of mice. The amount of 5-HT transporter binding was correlated to baseline immobility time. However, there was no significant relation between noradrenaline transporter binding and immobility time. These results suggest that the duration of baseline immobility depends on the levels of 5-HT transporter binding, leading to apparent strain differences in immobility time in the forced swimming test. Furthermore, differences in 5-HT transporter binding may cause variations in responses to fluvoxamine.
Typewriting rate as a function of reaction time.
Hayes, V; Wilson, G D; Schafer, R L
1977-12-01
This study was designed to determine the relationship between reaction time and typewriting rate. Subjects were 24 typists ranging in age from 19 to 39 yr. Reaction times (.001 sec) to a light were recorded for each finger and to each alphabetic character and three punctuation marks. Analysis of variance yielded significant differences in reaction time among subjects and fingers. Correlation between typewriting rate and average reaction time to the alphabetic characters and three punctuation marks was -.75. Correlation between typewriting rate and the difference between the reaction time of the hands was -.42. Factors influencing typewriting rate may include reaction time of the fingers, difference between the reaction time of the hands, and reaction time to individual keys on the typewriter. Implications exist for instructional methodology and further research.
Ayala, Francisco; De Ste Croix, Mark; Sainz de Baranda, Pilar; Santonja, Fernando
2014-04-01
The purposes were twofold: (a) to ascertain the inter-session reliability of hamstrings total reaction time, pre-motor time and motor time; and (b) to examine sex-related differences in the hamstrings reaction times profile. Twenty-four men and 24 women completed the study. Biceps femoris and semitendinosus total reaction time, pre-motor time and motor time measured during eccentric isokinetic contractions were recorded on three different occasions. Inter-session reliability was examined through typical percentage error (CVTE), percentage change in the mean (CM) and intraclass correlations (ICC). For both biceps femoris and semitendinosus, total reaction time, pre-motor time and motor time measures demonstrated moderate inter-session reliability (CVTE<10%; CM<3%; ICC>0.7). The results also indicated that, although not statistically significant, women reported consistently longer hamstrings total reaction time (23.5ms), pre-motor time (12.7ms) and motor time (7.5ms) values than men. Therefore, an observed change larger than 5%, 9% and 8% for total reaction time, pre-motor time and motor time respectively from baseline scores after performing a training program would indicate that a real change was likely. Furthermore, while not statistically significant, sex differences were noted in the hamstrings reaction time profile which may play a role in the greater incidence of ACL injuries in women. Copyright © 2013 Elsevier Ltd. All rights reserved.
Sex difference in Double Iron ultra-triathlon performance
2013-01-01
Background The present study examined the sex difference in swimming (7.8 km), cycling (360 km), running (84 km), and overall race times for Double Iron ultra-triathletes. Methods Sex differences in split times and overall race times of 1,591 men and 155 women finishing a Double Iron ultra-triathlon between 1985 and 2012 were analyzed. Results The annual number of finishes increased linearly for women and exponentially for men. Men achieved race times of 1,716 ± 243 min compared to 1,834 ± 261 min for women and were 118 ± 18 min (6.9%) faster (p < 0.01). Men finished swimming within 156 ± 63 min compared to women with 163 ± 31 min and were 8 ± 32 min (5.1 ± 5.0%) faster (p < 0.01). For cycling, men (852 ± 196 min) were 71 ± 70 min (8.3 ± 3.5%) faster than women (923 ± 126 min) (p < 0.01). Men completed the run split within 710 ± 145 min compared to 739 ± 150 min for women and were 30 ± 5 min (4.2 ± 3.4%) faster (p = 0.03). The annual three fastest men improved race time from 1,650 ± 114 min in 1985 to 1,339 ± 33 min in 2012 (p < 0.01). Overall race time for women remained unchanged at 1,593 ± 173 min with an unchanged sex difference of 27.1 ± 8.6%. In swimming, the split times for the annual three fastest women (148 ± 14 min) and men (127 ± 20 min) remained unchanged with an unchanged sex difference of 26.8 ± 13.5%. In cycling, the annual three fastest men improved the split time from 826 ± 60 min to 666 ± 18 min (p = 0.02). For women, the split time in cycling remained unchanged at 844 ± 54 min with an unchanged sex difference of 25.2 ± 7.3%. In running, the annual fastest three men improved split times from 649 ± 77 min to 532 ± 16 min (p < 0.01). For women, however, the split times remained unchanged at 657 ± 70 min with a stable sex difference of 32.4 ± 12.5%. Conclusions To summarize, the present findings showed that men were faster than women in Double Iron ultra-triathlon, men improved overall race times, cycling and running split times, and the sex difference remained unchanged across years for overall race time and split times. The sex differences for overall race times and split times were higher than reported for Ironman triathlon. PMID:23849631
Booth, John N; Muntner, Paul; Abdalla, Marwah; Diaz, Keith M; Viera, Anthony J; Reynolds, Kristi; Schwartz, Joseph E; Shimbo, Daichi
2016-02-01
To determine whether defining diurnal periods by self-report, fixed-time, or actigraphy produce different estimates of night-time and daytime ambulatory blood pressure (ABP). Over a median of 28 days, 330 participants completed two 24-h ABP and actigraphy monitoring periods with sleep diaries. Fixed night-time and daytime periods were defined as 0000-0600 h and 1000-2000 h, respectively. Using the first ABP period, within-individual differences for mean night-time and daytime ABP and kappa statistics for night-time and daytime hypertension (systolic/diastolic ABP≥120/70 mmHg and ≥135/85 mmHg, respectively) were estimated comparing self-report, fixed-time, or actigraphy for defining diurnal periods. Reproducibility of ABP was also estimated. Within-individual mean differences in night-time systolic ABP were small, suggesting little bias, when comparing the three approaches used to define diurnal periods. The distribution of differences, represented by 95% confidence intervals (CI), in night-time systolic and diastolic ABP and daytime systolic and diastolic ABP was narrowest for self-report versus actigraphy. For example, mean differences (95% CI) in night-time systolic ABP for self-report versus fixed-time was -0.53 (-6.61, +5.56) mmHg, self-report versus actigraphy was 0.91 (-3.61, +5.43) mmHg, and fixed-time versus actigraphy was 1.43 (-5.59, +8.46) mmHg. Agreement for night-time and daytime hypertension was highest for self-report versus actigraphy: kappa statistic (95% CI) = 0.91 (0.86,0.96) and 1.00 (0.98,1.00), respectively. The reproducibility of mean ABP and hypertension categories was similar using each approach. Given the high agreement with actigraphy, these data support using self-report to define diurnal periods on ABP monitoring. Further, the use of fixed-time periods may be a reasonable alternative approach.
Path integral for equities: Dynamic correlation and empirical analysis
NASA Astrophysics Data System (ADS)
Baaquie, Belal E.; Cao, Yang; Lau, Ada; Tang, Pan
2012-02-01
This paper develops a model to describe the unequal time correlation between rate of returns of different stocks. A non-trivial fourth order derivative Lagrangian is defined to provide an unequal time propagator, which can be fitted to the market data. A calibration algorithm is designed to find the empirical parameters for this model and different de-noising methods are used to capture the signals concealed in the rate of return. The detailed results of this Gaussian model show that the different stocks can have strong correlation and the empirical unequal time correlator can be described by the model's propagator. This preliminary study provides a novel model for the correlator of different instruments at different times.
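To make the modelled quantity concrete, the sketch below estimates the empirical unequal-time correlator of two stocks' rates of return from price series. It covers only the data side of the problem, not the fourth-order-derivative Lagrangian or the calibration algorithm, and the function names are illustrative.

    import numpy as np

    def rate_of_return(prices):
        # Log returns from a price series.
        return np.diff(np.log(prices))

    def unequal_time_correlator(r_i, r_j, max_lag):
        # C_ij(tau) ~ <r_i(t) r_j(t + tau)>, normalized by the standard deviations.
        r_i = r_i - r_i.mean()
        r_j = r_j - r_j.mean()
        corr = []
        for tau in range(max_lag + 1):
            corr.append(np.mean(r_i[: len(r_i) - tau] * r_j[tau:]) / (r_i.std() * r_j.std()))
        return np.array(corr)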
White, Eric J; Emanuelsson, Olof; Scalzo, David; Royce, Thomas; Kosak, Steven; Oakeley, Edward J; Weissman, Sherman; Gerstein, Mark; Groudine, Mark; Snyder, Michael; Schübeler, Dirk
2004-12-21
Duplication of the genome during the S phase of the cell cycle does not occur simultaneously; rather, different sequences are replicated at different times. The replication timing of specific sequences can change during development; however, the determinants of this dynamic process are poorly understood. To gain insights into the contribution of developmental state, genomic sequence, and transcriptional activity to replication timing, we investigated the timing of DNA replication at high resolution along an entire human chromosome (chromosome 22) in two different cell types. The pattern of replication timing was correlated with respect to annotated genes, gene expression, novel transcribed regions of unknown function, sequence composition, and cytological features. We observed that chromosome 22 contains regions of early- and late-replicating domains of 100 kb to 2 Mb, many (but not all) of which are associated with previously described chromosomal bands. In both cell types, expressed sequences are replicated earlier than nontranscribed regions. However, several highly transcribed regions replicate late. Overall, the DNA replication-timing profiles of the two different cell types are remarkably similar, with only nine regions of difference observed. In one case, this difference reflects the differential expression of an annotated gene that resides in this region. Novel transcribed regions with low coding potential exhibit a strong propensity for early DNA replication. Although the cellular function of such transcripts is poorly understood, our results suggest that their activity is linked to the replication-timing program.
A Single-Unit Design Structure and Gender Differences in the Swimming World Championships
Pushkar, Svetlana; Issurin, Vladimir B.; Verbitsky, Oleg
2014-01-01
Four 50 meter male/female finals - the freestyle, butterfly, breaststroke, and backstroke - swum during individual events at the Swimming World Championships (SWCs) can be defined in four clusters. The aim of the present study was to use a single-unit design structure, in which the swimmer was defined at only one scale, to evaluate gender differences in start reaction times among elite swimmers in 50 m events. The top six male and female swimmers in the finals of four swimming stroke final events in six SWCs were analyzed. An unpaired t-test was used. The p-values were evaluated using Neo-Fisherian significance assessments (Hurlbert and Lombardi, 2012). For the freestyle, gender differences in the start reaction times were positively identified for five of the six SWCs. For the backstroke, gender differences in the start reaction times could be dismissed for five of the six SWCs. For both the butterfly and breaststroke, gender differences in the start reaction times yielded inconsistent statistical differences. Pooling all swimmers together (df = 286) showed that an overall gender difference in the start reaction times could be positively identified: p = 0.00004. The contrast between the gender differences in start reaction times between the freestyle and backstroke may be associated with different types of gender adaptations to swimming performances. When the natural groupings of swimming stroke final events were ignored, sacrificial pseudoreplication occurred, which may lead to erroneous statistical differences. PMID:25414754
Practicality of performing medical procedures in chemical protective ensembles.
Garner, Alan; Laurence, Helen; Lee, Anna
2004-04-01
To determine whether certain life saving medical procedures can be successfully performed while wearing different levels of personal protective equipment (PPE), and whether these procedures can be performed in a clinically useful time frame. We assessed the capability of eight medical personnel to perform airway maintenance and antidote administration procedures on manikins, in all four described levels of PPE. The levels are: Level A, a fully encapsulated chemically resistant suit; Level B, a chemically resistant suit, gloves and boots with a full-faced positive pressure supplied air respirator; Level C, a chemically resistant splash suit, boots and gloves with an air-purifying positive or negative pressure respirator; and Level D, a work uniform. Time in seconds to inflate the lungs of the manikin with a bag-valve-mask, laryngeal mask airway (LMA) and endotracheal tube (ETT) was determined, as was the time to secure LMAs and ETTs with either tape or linen ties. Time to insert a cannula in a manikin was also determined. There was a significant difference in time taken to perform procedures in differing levels of personal protective equipment (F(21,72) = 1.75, P = 0.04). Significant differences were found in: time to lung inflation using an endotracheal tube (A vs. C mean difference and standard error 75.6 +/- 23.9 s, P = 0.03; A vs. D mean difference and standard error 78.6 +/- 23.9 s, P = 0.03); and time to insert a cannula (A vs. D mean difference and standard error 63.6 +/- 11.1 s, P < 0.001; C vs. D mean difference and standard error 40.0 +/- 11.1 s, P = 0.01). A significantly greater time to complete procedures was documented in Level A PPE (fully encapsulated suits) compared with Levels C and D. There was, however, no significant difference in times between Level B and Level C. The common practice of equipping hospital and medical staff with only Level C protection should be re-evaluated.
[The research of Valeriana amurensis seed germination characteristics].
Liu, Juan; Yang, Chun-Rong; Jiang, Bo; Fang, Min; Du, Juan
2011-10-01
To study the effect of different treatments on the Valeriana amurensis seed germination rate. Different chemical reagents and seed-soaking treatments were used in a routine germination test and an orthogonal test of Valeriana amurensis seed, and the germination rate was calculated under different germination conditions. Valeriana amurensis seeds treated with different chemical reagents had different germination rates. A suitable immersion time could enhance the Valeriana amurensis seed germination rate. Different treatment times, disposal temperatures and germination temperatures all had an impact on the Valeriana amurensis seed germination rate. To raise the Valeriana amurensis seed germination rate, an appropriate treatment should be applied to the seed before planting, and the seed should be grown at a suitable time and temperature.
Improvements in brain activation detection using time-resolved diffuse optical means
NASA Astrophysics Data System (ADS)
Montcel, Bruno; Chabrier, Renee; Poulet, Patrick
2005-08-01
An experimental method based on time-resolved absorbance differences is described. The absorbance difference is calculated over each temporal step of the optical signal with the time-resolved Beer-Lambert law. Finite element simulations show that each step corresponds to a different scanned zone and that the cerebral contribution increases with the arrival time of photons. Experiments are conducted at 690 and 830 nm with a time-resolved system consisting of picosecond laser diodes, a micro-channel plate photo-multiplier tube and photon counting modules. The hemodynamic response to a short finger tapping stimulus is measured over the motor cortex. Time-resolved absorbance difference maps show that variations in the optical signals are not localized in superficial regions of the head, which testifies to their cerebral origin. Furthermore, an improvement in the detection of cerebral activation is achieved through an increase of the variations in absorbance by a factor of almost 5 for time-resolved measurements as compared to non-time-resolved measurements.
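A minimal sketch of the gate-by-gate absorbance difference described above is given below, assuming two photon time-of-flight histograms (task and rest) with identical binning and non-zero counts in every gate; the function and variable names are illustrative.

    import numpy as np

    def absorbance_difference(tpsf_task, tpsf_rest):
        # tpsf_*: photon counts per temporal gate (same binning, non-zero counts assumed).
        # Returns one absorbance-difference value per temporal gate; late gates
        # correspond to photons that have, on average, travelled deeper.
        return -np.log(np.asarray(tpsf_task, float) / np.asarray(tpsf_rest, float))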
Value of information of repair times for offshore wind farm maintenance planning
NASA Astrophysics Data System (ADS)
Seyr, Helene; Muskulus, Michael
2016-09-01
A large contribution to the total cost of energy in offshore wind farms is due to maintenance costs. In recent years, research has therefore focused on lowering maintenance costs using different approaches. Decision support models for scheduling maintenance already exist, dealing with different factors that influence the scheduling. Our contribution deals with the uncertainty in repair times. Given the mean repair times for different turbine components, we make some assumptions regarding the underlying repair time distribution. We compare the results of a decision support model for the mean times to repair with those for the assumed repair time distributions. Additionally, distributions with the same mean but different variances are compared under the same conditions. The value of lowering the uncertainty in the repair time is calculated, and we find that using distributions significantly decreases the availability when scheduling maintenance for multiple turbines in a wind park. Having detailed information about the repair time distribution may influence the results of maintenance modelling and might help identify cost factors.
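The sensitivity to the repair-time distribution can be illustrated with the following sketch, which compares two lognormal distributions that share the same mean but differ in variance; the mean repair time, the sigma values and the weather-window length are invented for illustration and are not from the study.

    import numpy as np

    rng = np.random.default_rng(0)
    mean_ttr = 50.0                                   # hours, assumed mean time to repair

    def lognormal_with_mean(mean, sigma, n):
        mu = np.log(mean) - 0.5 * sigma ** 2          # keeps the distribution mean at `mean`
        return rng.lognormal(mu, sigma, n)

    low_var = lognormal_with_mean(mean_ttr, 0.3, 100000)
    high_var = lognormal_with_mean(mean_ttr, 1.0, 100000)

    window = 60.0                                     # assumed accessible weather window, hours
    p_exceed_low = np.mean(low_var > window)
    p_exceed_high = np.mean(high_var > window)        # same mean, but more long repairs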
Dodzi, Madodana S; Muchenje, Voster
2012-10-01
The time budgets and daily milk yield of Jersey and Friesland cows and their crosses were compared in a pasture-based system by recording the time spent grazing, drinking, lying, standing and walking in four seasons of the year (cool-dry, hot-dry, hot-wet and post-rainy). Observations were made from 0800 to 1400 hours on seven cows per breed. Seven observers monitored the cows at 10-min intervals for 6 h using stop watches. Time spent standing was higher (P < 0.05) for Friesland compared to Jersey cows and the crossbred cows during the hot-wet season. Time spent walking differed among the three genotypes with the Jersey spending more time (P < 0.05) in both hot-wet and cool-dry seasons. No differences were noted on time spent lying down (P > 0.05) across the genotypes in the hot-wet season. In the cool-dry season, differences in time spent grazing (P < 0.05) were noted with the Jersey cows spending more time. The Friesland and the crossbred spent more time lying down (P < 0.05) than the Jersey cows in the cool-dry season. No time differences were noted for time spent standing (P > 0.05) in the same season. The Jersey cows spent the longest time walking (P < 0.05) during the cool-dry period. There were seasonal differences in time spent in all activities (P < 0.05). Time spent on grazing was longest in post-rainy season and lowest in hot-wet season. Differences were observed in the time spent lying down (P < 0.05). The longest period was observed in the hot-dry season and lowest in the hot-wet season. Daily milk yield varied (P < 0.05) with breed with the Friesland and Jersey producing higher yields than the crosses. The highest amount was produced in hot-dry and the least in hot-wet season. Milk yield and lying down were positively correlated (P < 0.05) in Jersey and Friesland cows. Standing was negatively correlated with milk yield (P < 0.05) in both Friesland and Jersey cows. No significant relationship was observed for the crossbred cows. It was concluded that the genotypes show different levels of sensitivity to seasons and that a relationship exists between milk yield and time budgets.
Time to Criterion: An Experimental Study.
ERIC Educational Resources Information Center
Anderson, Lorin W.
The purpose of the study was to investigate the magnitude of individual differences in time-to-criterion and the stability of these differences. Time-to-criterion was defined in two ways: the amount of elapsed time required to attain the criterion level and the amount of on-task time required to attain the criterion level. Ninety students were…
Time in Early Childhood: Creative Possibilities with Different Conceptions of Time
ERIC Educational Resources Information Center
Farquhar, Sandy
2016-01-01
Time is an important driver of pedagogy which is often overlooked in the busy atmosphere of an early childhood centre. Engaging philosophically with three different concepts of time, and drawing examples from literature and art to focus attention on how time is constituted in early childhood centres, this article argues that we inhabit the…
The effect of premilking udder preparation on Holstein cows milked 3 times daily.
Watters, R D; Schuring, N; Erb, H N; Schukken, Y H; Galton, D M
2012-03-01
Premilking udder preparation (including forestripping and duration of lag time-the time between first tactile stimulation and attachment of milking unit) might influence milking measures such as milking unit on-time, incidence of bimodality, and milk flow rates in Holstein cows milked 3 times daily. Holstein cows (n=786) from an 1,800-cow commercial dairy herd were enrolled under a restricted randomized design to determine the effect of 9 different premilking routines. Lag times were 0, 60, 90, 120, and 240s and included forestripping or no forestripping for a total of 9 treatments (no forestripping for 0 lag time); the study was conducted from February to November 2008. All cow-treatment combinations were compared with the control: predipping plus forestripping and drying with 90s of lag time. Cows were initially assigned to 1 of 3 treatments for a period of 7d and upon completion of the first 7-d period were reassigned to a different treatment until all treatments had been completed. From one treatment period to the next, cows had to switch stimulation method with no restriction on lag time. Cows did not receive all treatments during the duration of the trial. Early- to mid-lactation cows (EML; 17-167 DIM) and late-lactation cows (LL; 174-428 DIM) were housed in 2 different pens. Milk yield was significantly different between dip + forestrip and dip+dry for 2 of the treatments for EML cows compared with dip + forestrip and 90 s of lag-time (DF90); however, this was not thought to be due to treatment because the significant lag times were very different (60 and 240 s) and neither was an extreme value. Milk yield did not differ with treatment for the LL cows. Milking unit on-time did not differ when comparing all treatments for EML with treatment DF90; however, an increase in milking unit on-time occurred when lag time was 60s or less for LL cows. The highest incidence of bimodal milk curves was when lag time = 0 and this was independent of stage of lactation; a lag time of 240 s had the second-highest incidence of bimodal milk curves for EML and LL cows. Milk harvested in the first 2 min was lower for lag times of 0 and 240 s when compared with DF90. Increasing the lag time for all cows appeared to improve overall milking time efficiency (although lag time had no effect on EML cows). Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
All the stereotypes confirmed: differences in how Australian boys and girls use their time.
Ferrar, Katia E; Olds, Tim S; Walters, Julie L
2012-10-01
To influence adolescent health, a greater understanding of time use and covariates such as gender is required. To explore gender-specific time use patterns in Australian adolescents using high-resolution time use data. This study analyzed 24-hour recall time use data collected as part of the 2007 Australian National Children's Nutrition and Physical Activity Survey (n = 2,200). Univariate analyses to determine gender differences in time use were conducted. Boys spent more (p < .0001) time participating in screen-based (17.7% vs. 14.2% of daily time) and physical activities (10.7% vs. 9.2%). Girls spent more (p < .0001) time being social (4.7% vs. 3.4% of daily time), studying (2.0% vs. 1.7%), and doing household chores (4.7% vs. 3.4%). There are gender-specific differences in time use behavior among Australian adolescents. The results reinforce existing gender-based time use stereotypes. Implications: the gender-specific time use behaviors offer possibilities for intervention design.
A time-spectral approach to numerical weather prediction
NASA Astrophysics Data System (ADS)
Scheffel, Jan; Lindvall, Kristoffer; Yik, Hiu Fai
2018-05-01
Finite difference methods are traditionally used for modelling the time domain in numerical weather prediction (NWP). Time-spectral solution is an attractive alternative for reasons of accuracy and efficiency and because time step limitations associated with causal CFL-like criteria, typical for explicit finite difference methods, are avoided. In this work, the Lorenz 1984 chaotic equations are solved using the time-spectral algorithm GWRM (Generalized Weighted Residual Method). Comparisons of accuracy and efficiency are carried out for both explicit and implicit time-stepping algorithms. It is found that the efficiency of the GWRM compares well with these methods, in particular at high accuracy. For perturbative scenarios, the GWRM was found to be as much as four times faster than the finite difference methods. A primary reason is that the GWRM time intervals typically are two orders of magnitude larger than those of the finite difference methods. The GWRM has the additional advantage to produce analytical solutions in the form of Chebyshev series expansions. The results are encouraging for pursuing further studies, including spatial dependence, of the relevance of time-spectral methods for NWP modelling.
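For reference, the Lorenz 1984 system used as the test case above can be integrated with a simple explicit finite-difference scheme as sketched below; the parameter values are the commonly used ones rather than necessarily those of the paper, and the GWRM itself is not reproduced here.

    import numpy as np

    a, b, F, G = 0.25, 4.0, 8.0, 1.0                  # commonly used parameter values

    def lorenz84(state):
        x, y, z = state
        dx = -y * y - z * z - a * x + a * F
        dy = x * y - b * x * z - y + G
        dz = b * x * y + x * z - z
        return np.array([dx, dy, dz])

    dt, n_steps = 0.005, 20000                        # small step, as explicit schemes require
    state = np.array([1.0, 1.0, 0.0])
    trajectory = np.empty((n_steps, 3))
    for i in range(n_steps):
        state = state + dt * lorenz84(state)          # forward Euler update
        trajectory[i] = state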
The Reading Habits of Developmental College Students at Different Levels of Reading Proficiency.
ERIC Educational Resources Information Center
Sheorey, Ravi; Mokhtari, Kouider
1994-01-01
Examines differences in reading habits of developmental college students with varying levels of reading proficiency. Finds that subjects spent an unusually low amount of time on academic reading and even less time on nonacademic reading. Finds no significant differences between high- and low-proficient readers with regard to amount of time spent…
NASA Technical Reports Server (NTRS)
Lansing, Faiza S.; Rascoe, Daniel L.
1993-01-01
This paper presents a modified Finite-Difference Time-Domain (FDTD) technique using a generalized conformed orthogonal grid. The use of the Conformed Orthogonal Grid, Finite Difference Time Domain (GFDTD) technique enables the designer to match all the circuit dimensions, hence eliminating a major source of error in the analysis.
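As background, a generic one-dimensional FDTD update loop (standard Yee leapfrog scheme with normalized fields) is sketched below to illustrate the finite-difference time-domain framework that the conformal-grid modification builds on; it is not the GFDTD algorithm itself, and all grid and source parameters are illustrative.

    import numpy as np

    nz, nt = 200, 500                       # grid cells, time steps (illustrative)
    courant = 0.5                           # c0*dt/dz, within the 1-D stability limit
    Ex = np.zeros(nz)                       # fields in normalized units
    Hy = np.zeros(nz)

    for n in range(nt):
        Hy[:-1] += courant * (Ex[1:] - Ex[:-1])            # update H from the curl of E
        Ex[1:] += courant * (Hy[1:] - Hy[:-1])             # update E from the curl of H
        Ex[nz // 2] += np.exp(-((n - 30) ** 2) / 100.0)    # soft Gaussian source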
NASA Astrophysics Data System (ADS)
Ficchì, Andrea; Perrin, Charles; Andréassian, Vazken
2016-07-01
Hydro-climatic data at short time steps are considered essential to model the rainfall-runoff relationship, especially for short-duration hydrological events, typically flash floods. Also, using fine time step information may be beneficial when using or analysing model outputs at larger aggregated time scales. However, the actual gain in prediction efficiency using short time-step data is not well understood or quantified. In this paper, we investigate the extent to which the performance of hydrological modelling is improved by short time-step data, using a large set of 240 French catchments, for which 2400 flood events were selected. Six-minute rain gauge data were available and the GR4 rainfall-runoff model was run with precipitation inputs at eight different time steps ranging from 6 min to 1 day. Then model outputs were aggregated at seven different reference time scales ranging from sub-hourly to daily for a comparative evaluation of simulations at different target time steps. Three classes of model performance behaviour were found for the 240 test catchments: (i) significant improvement of performance with shorter time steps; (ii) performance insensitivity to the modelling time step; (iii) performance degradation as the time step becomes shorter. The differences between these groups were analysed based on a number of catchment and event characteristics. A statistical test highlighted the most influential explanatory variables for model performance evolution at different time steps, including flow auto-correlation, flood and storm duration, flood hydrograph peakedness, rainfall-runoff lag time and precipitation temporal variability.
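The time-step comparison setup can be illustrated with the following sketch, which aggregates a fine-resolution precipitation series to coarser model time steps; the synthetic 6-minute series and the aggregation factors are illustrative only.

    import numpy as np

    def aggregate(series, factor):
        # Sum consecutive blocks of `factor` samples (drop any incomplete block).
        n = (len(series) // factor) * factor
        return series[:n].reshape(-1, factor).sum(axis=1)

    p_6min = np.random.gamma(shape=0.3, scale=2.0, size=240)   # one synthetic day of 6-min data
    p_hourly = aggregate(p_6min, 10)      # 10 x 6 min = 1 h
    p_daily = aggregate(p_6min, 240)      # 240 x 6 min = 24 h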
Woody, Carol Ann; Olsen, Jeffrey B.; Reynolds, Joel H.; Bentzen, Paul
2000-01-01
Sockeye salmon Oncorhynchus nerka in two tributary streams (about 20 km apart) of the same lake were compared for temporal variation in phenotypic (length, depth adjusted for length) and genotypic (six microsatellite loci) traits. Peak run time (16 July versus 11 August) and run duration (43 versus 26 d) differed between streams. Populations were sampled twice, including an overlapping point in time. Divergence at microsatellite loci followed a temporal cline: population sample groups collected at the same time were not different (FST = 0), whereas those most separated in time were different (FST = 0.011, P = 0.001). Although contemporaneous sample groups did not differ significantly in microsatellite genotypes (FST = 0), phenotypic traits did differ significantly (MANOVA, P < 0.001). Fish from the larger stream were larger; fish from the smaller stream were smaller, suggesting differential fitness related to size. Results indicate run time differences among and within sockeye salmon populations may strongly influence levels of gene flow.
Time perception and time perspective differences between adolescents and adults.
Siu, Nicolson Y F; Lam, Heidi H Y; Le, Jacqueline J Y; Przepiorka, Aneta M
2014-09-01
The present experiment aimed to investigate the differences in time perception and time perspective between subjects representing two developmental stages, namely adolescence and middle adulthood. Twenty Chinese adolescents aged 15-25 and twenty Chinese adults aged 35-55 participated in the study. A time discrimination task and a time reproduction task were implemented to measure the accuracy of their time perception. The Zimbardo Time Perspective Inventory (Short-Form) was adopted to assess their time orientation. It was found that adolescents performed better than adults in both the time discrimination task and the time reproduction task. Adolescents were able to differentiate different time intervals with greater accuracy and reproduce the target duration more precisely. For the time reproduction task, it was also found that adults tended to overestimate the duration of the target stimuli while adolescents were more likely to underestimate it. As regards time perspective, adults were more future-oriented than adolescents, whereas adolescents were more present-oriented than adults. No significant relationship was found between time perspective and time perception. Copyright © 2014 Elsevier B.V. All rights reserved.
Estimating crustal heterogeneity from double-difference tomography
Got, J.-L.; Monteiller, V.; Virieux, J.; Okubo, P.
2006-01-01
Seismic velocity parameters in limited but heterogeneous volumes can be inferred using a double-difference tomographic algorithm, but to obtain meaningful results accuracy must be maintained at every step of the computation. Monteiller et al. (2005) devised a double-difference tomographic algorithm that takes full advantage of the accuracy of cross-spectral time-delays of large correlated event sets. This algorithm performs an accurate computation of theoretical travel-time delays in heterogeneous media and applies a suitable inversion scheme based on optimization theory. When applied to Kilauea Volcano, in Hawaii, the double-difference tomography approach shows significant and coherent changes to the velocity model in the well-resolved volumes beneath the Kilauea caldera and the upper east rift. In this paper, we first compare the results obtained using Monteiller et al.'s algorithm with those obtained using the classic travel-time tomographic approach. Then, we evaluate the effect of using data series of different accuracies, such as handpicked arrival-time differences ("picking differences"), on the results produced by double-difference tomographic algorithms. We show that picking differences have a non-Gaussian probability density function (pdf). Using a hyperbolic secant pdf instead of a Gaussian pdf allows improvement of the double-difference tomographic result when using picking difference data. We completed our study by investigating the use of spatially discontinuous time-delay data. © Birkhäuser Verlag, Basel, 2006.
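As an illustration of the kind of delay data that feed such an inversion, the sketch below estimates a cross-spectral time delay between two correlated waveforms from the slope of the cross-spectrum phase; it is a generic estimator with illustrative thresholds, not the algorithm of Monteiller et al.

    import numpy as np

    def cross_spectral_delay(x, y, dt):
        # Delay of y relative to x, from the slope of the cross-spectrum phase.
        X, Y = np.fft.rfft(x), np.fft.rfft(y)
        cross = X * np.conj(Y)
        freqs = np.fft.rfftfreq(len(x), d=dt)
        band = (freqs > 0) & (np.abs(cross) > 0.1 * np.abs(cross).max())  # keep strong bins
        phase = np.unwrap(np.angle(cross[band]))
        slope = np.polyfit(2.0 * np.pi * freqs[band], phase, 1)[0]
        return slope                                   # seconds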
INDIVIDUAL DIFFERENCES IN IMPULSIVE CHOICE AND TIMING IN RATS
Galtress, Tiffany; Garcia, Ana; Kirkpatrick, Kimberly
2012-01-01
Individual differences in impulsive choice behavior have been linked to a variety of behavioral problems including substance abuse, smoking, gambling, and poor financial decision-making. Given the potential importance of individual differences in impulsive choice as a predictor of behavioral problems, the present study sought to measure the extent of individual differences in a normal sample of hooded Lister rats. Three experiments utilized variations of a delay discounting task to measure the degree of variation in impulsive choice behavior across individual rats. The individual differences accounted for 22–55% of the variance in choice behavior across the three experiments. In Experiments 2 and 3, the individual differences were still apparent when behavior was measured across multiple choice points. Large individual differences in the rate of responding, and modest individual differences in timing of responding were also observed during occasional peak trials. The individual differences in timing and rate, however, did not correlate consistently with individual differences in choice behavior. This suggests that a variety of factors may affect choice behavior, response rate, and response timing. PMID:22851792
Individual Differences in Motor Timing and Its Relation to Cognitive and Fine Motor Skills
Lorås, Håvard; Stensdotter, Ann-Katrin; Öhberg, Fredrik; Sigmundsson, Hermundur
2013-01-01
The present study investigated the relationship between individual differences in timing movements at the level of milliseconds and performance on selected cognitive and fine motor skills. For this purpose, young adult participants (N = 100) performed a repetitive movement task paced by an auditory metronome at different rates. Psychometric measures included the digit-span and symbol search subtasks from the Wechsler battery as well as the Raven SPM. Fine motor skills were assessed with the Purdue Pegboard test. Motor timing performance was significantly related (mean r = .3) to cognitive measures, and explained both unique and shared variance with information-processing speed of Raven's scores. No significant relations were found between motor timing measures and fine motor skills. These results show that individual differences in cognitive and motor timing performance is to some extent dependent upon shared processing not associated with individual differences in manual dexterity. PMID:23874952
Al-Sobayel, Hana; Al-Hazzaa, Hazzaa M; Abahussain, Nanda A; Qahwaji, Dina M; Musaiger, Abdulrahman O
2015-01-01
The aim of the study was to examine the gender differences and predictors of leisure versus non-leisure time physical activities among Saudi adolescents aged 14-19 years. The multistage stratified cluster random sampling technique was used. A sample of 1,388 males and 1,500 females enrolled in secondary schools in three major cities in Saudi Arabia was included. Anthropometric measurements were performed and Body Mass Index was calculated. Physical activity, sedentary behaviours and dietary habits were measured using a self-reported validated questionnaire. The total time spent in leisure and non-leisure physical activity per week was 90 and 77 minutes, respectively. The males spent more time per week in leisure-time physical activities than females. Females in private schools spent more time during the week in leisure-time physical activities, compared to females in state schools. There was a significant gender-by-obesity-status interaction in leisure-time physical activity. Gender and other factors predicted the total duration spent in leisure-time and non-leisure-time physical activity. The study showed that female adolescents are much less active than males, especially in leisure-time physical activities. Programmes to promote physical activity among adolescents are urgently needed, with consideration of gender differences.
Lee, Sewon; Lee, Kiyoung
2017-06-22
Time location patterns are a significant input to exposure assessment models of air pollutants. Time location patterns of urban populations are of particular interest because air pollution levels in urban areas are typically high. The objective of this study was to determine the seasonal differences in time location patterns in two urban cities. The Time Use Survey of Korean Statistics (KOSTAT) was conducted in the summer, fall, and winter of 2014. Time location data from Seoul and Busan were collected, together with demographic information obtained from diaries and questionnaires. Determinants of the time spent at each location were analyzed by multiple linear regression with stepwise selection. Seoul and Busan participants had similar time location profiles over the three seasons. The time spent at one's own home, at other locations, at the workplace/school, and while walking was similar over the three seasons in both the Seoul and Busan participants. The most significant time location pattern factors were employment status, age, gender, monthly income, and presence of a spouse. Season affected the time spent at the workplace/school and at other locations for the Seoul participants, but not for the Busan participants. The seasons affected each time location pattern of the urban population slightly differently, but overall there were few differences.
Demographic Group Differences in Adolescents' Time Attitudes
ERIC Educational Resources Information Center
Andretta, James R.; Worrell, Frank C.; Mello, Zena R.; Dixson, Dante D.; Baik, Sharon H.
2013-01-01
In the present study, we examined demographic differences in time attitudes in a sample of 293 adolescents. Time attitudes were measured using the Adolescent Time Attitude Scale (Mello & Worrell, 2007; Worrell, Mello, & Buhl, 2011), which assesses positive and negative attitudes toward the past, the present, and the future. Generally, African…
The Joint Staff Officer’s Guide 2000
2000-01-01
sociological problems created by differences in customs, religions, and standards of living. These factors point to the need for a different mental...deployment data. See also time-phased force and deployment data. (JP 1-02) times. (DOD) (C-, D-, M-days end at 2400 hours Universal Time (zulu time
Time Translation of Quantum Properties
NASA Astrophysics Data System (ADS)
Laura, Roberto; Vanni, Leonardo
2009-02-01
Based on the notion of time translation, we develop a formalism to deal with the logic of quantum properties at different times. In our formalism it is possible to enlarge the usual notion of context to include composed properties involving properties at different times. We compare our results with the theory of consistent histories.
NASA Astrophysics Data System (ADS)
Régis, J.-M.; Jolie, J.; Mach, H.; Simpson, G. S.; Blazhev, A.; Pascovici, G.; Pfeiffer, M.; Rudigier, M.; Saed-Samii, N.; Warr, N.; Blanc, A.; de France, G.; Jentschel, M.; Köster, U.; Mutti, P.; Soldner, T.; Ur, C. A.; Urban, W.; Bruce, A. M.; Drouet, F.; Fraile, L. M.; Ilieva, S.; Korten, W.; Kröll, T.; Lalkovski, S.; Mărginean, S.; Paziy, V.; Podolyák, Zs.; Regan, P. H.; Stezowski, O.; Vancraeyenest, A.
2015-05-01
A novel method for direct electronic "fast-timing" lifetime measurements of nuclear excited states via γ-γ coincidences using an array equipped with N very fast high-resolution LaBr3(Ce) scintillator detectors is presented. The generalized centroid difference method provides two independent "start" and "stop" time spectra obtained without any correction by a superposition of the N(N - 1)/2 calibrated γ-γ time difference spectra of the N detector fast-timing system. The two fast-timing array time spectra correspond to a forward and reverse gating of a specific γ-γ cascade and the centroid difference as the time shift between the centroids of the two time spectra provides a picosecond-sensitive mirror-symmetric observable of the set-up. The energy-dependent mean prompt response difference between the start and stop events is calibrated and used as a single correction for lifetime determination. These combined fast-timing array mean γ-γ zero-time responses can be determined for 40 keV < Eγ < 1.4 MeV with a precision better than 10 ps using a 152Eu γ-ray source. The new method is described with examples of (n,γ) and (n,f,γ) experiments performed at the intense cold-neutron beam facility PF1B of the Institut Laue-Langevin in Grenoble, France, using 16 LaBr3(Ce) detectors within the EXILL&FATIMA campaign in 2013. The results are discussed with respect to possible systematic errors induced by background contributions.
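As a rough illustration of the arithmetic behind the generalized centroid difference method, the sketch below computes the centroids of two toy "start" and "stop" time spectra and extracts a lifetime from ΔC = PRD + 2τ, where PRD is the calibrated mean prompt response difference. The spectral shapes, the PRD value, and the time resolution are hypothetical simplifications, not the EXILL&FATIMA analysis chain.

```python
# Minimal sketch: centroid difference of two time spectra and the resulting
# lifetime estimate tau = (Delta_C - PRD) / 2. All numbers are illustrative.
import numpy as np

def centroid(time_axis_ps, counts):
    """First moment of a time spectrum (counts-weighted mean time)."""
    counts = np.asarray(counts, dtype=float)
    return np.sum(time_axis_ps * counts) / np.sum(counts)

def lifetime_from_centroid_difference(time_axis_ps, start_spectrum, stop_spectrum, prd_ps):
    """Delta_C = C(stop) - C(start) = PRD + 2*tau  =>  tau = (Delta_C - PRD) / 2."""
    delta_c = centroid(time_axis_ps, stop_spectrum) - centroid(time_axis_ps, start_spectrum)
    return 0.5 * (delta_c - prd_ps)

# Toy spectra: simple Gaussians shifted symmetrically by +/- (PRD/2 + tau);
# a real delayed spectrum is a prompt response convolved with an exponential
# decay, but the centroid arithmetic is the same.
t = np.arange(-2000.0, 2000.0, 10.0)          # time axis in ps
true_tau, prd = 35.0, 20.0                    # hypothetical lifetime and PRD (ps)
sigma = 300.0                                 # hypothetical time resolution (ps)
start = np.exp(-0.5 * ((t + prd / 2 + true_tau) / sigma) ** 2)
stop = np.exp(-0.5 * ((t - prd / 2 - true_tau) / sigma) ** 2)
print(f"tau ~ {lifetime_from_centroid_difference(t, start, stop, prd):.1f} ps")
```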
Sonnby-Borgström, Marianne; Jönsson, Peter; Svensson, Owe
2008-04-01
Previous studies on gender differences in facial imitation and verbally reported emotional contagion have investigated emotional responses to pictures of facial expressions at supraliminal exposure times. The aim of the present study was to investigate how gender differences are related to different exposure times, representing information processing levels from subliminal (spontaneous) to supraliminal (emotionally regulated). Further, the study aimed at exploring correlations between verbally reported emotional contagion and facial responses for men and women. Masked pictures of angry, happy and sad facial expressions were presented to 102 participants (51 men) at exposure times from subliminal (23 ms) to clearly supraliminal (2500 ms). Myoelectric activity (EMG) from the corrugator and the zygomaticus was measured and the participants reported their hedonic tone (verbally reported emotional contagion) after stimulus exposures. The results showed an effect of exposure time on gender differences in facial responses as well as in verbally reported emotional contagion. Women amplified imitative responses towards happy vs. angry faces and verbally reported emotional contagion with prolonged exposure times, whereas men did not. No gender differences were detected at the subliminal or borderliminal exposure times, but at the supraliminal exposure gender differences were found in imitation as well as in verbally reported emotional contagion. Women showed correspondence between their facial responses and their verbally reported emotional contagion to a greater extent than men. The results were interpreted in terms of gender differences in emotion regulation, rather than as differences in biologically prepared emotional reactivity.
Circuit for measuring time differences among events
Romrell, Delwin M.
1977-01-01
An electronic circuit has a plurality of input terminals. Application of a first input signal to any one of the terminals initiates a timing sequence. Later inputs to the same terminal are ignored but a later input to any other terminal of the plurality generates a signal which can be used to measure the time difference between the later input and the first input signal. Also, such time differences may be measured between the first input signal and an input signal to any other terminal of the plurality or the circuit may be reset at any time by an external reset signal.
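A minimal software analogue of the behavior described in the abstract, for illustration only (it is not the patented hardware): the first input on any terminal arms the timer, repeats on the same terminal are ignored, an input on any other terminal yields a time difference to the first input, and an external reset rearms the unit.

```python
# Illustrative model of the multi-terminal time-difference circuit's logic.
class TimeDifferenceUnit:
    def __init__(self):
        self.first_terminal = None
        self.first_time = None

    def event(self, terminal, timestamp):
        """Register an event; return a time difference when one is measured."""
        if self.first_terminal is None:          # first input arms the timer
            self.first_terminal, self.first_time = terminal, timestamp
            return None
        if terminal == self.first_terminal:      # later inputs to same terminal ignored
            return None
        return timestamp - self.first_time       # difference to the first input

    def reset(self):
        """External reset signal: clear the stored first event."""
        self.first_terminal = None
        self.first_time = None

# Usage with hypothetical terminals and timestamps (e.g. microseconds):
unit = TimeDifferenceUnit()
print(unit.event("A", 100.0))   # None  (arms the timer)
print(unit.event("A", 150.0))   # None  (same terminal, ignored)
print(unit.event("C", 260.0))   # 160.0 (difference to the first input on A)
unit.reset()                    # rearms the unit for the next measurement
```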
Partecke, Jesko; Van't Hof, Thomas; Gwinner, Eberhard
2004-01-01
Species which have settled in urban environments are exposed to different conditions from their wild conspecifics. A previous comparative study of an urban and a forest-living European blackbird population had revealed an onset of gonadal growth three weeks earlier in urban individuals. These physiological adjustments are either the result of genetic differences that have evolved during the urbanization process, or of phenotypic flexibility resulting from the birds' exposure to the different environmental conditions of town or forest. To identify which of these two mechanisms causes the differences in reproductive timing, hand-reared birds originating from the urban and the forest populations were kept in identical conditions. The substantial differences in the timing of reproduction between urban and forest birds known from the field did not persist under laboratory conditions, indicating that temporal differences in reproductive timing between these two populations are mainly a result of phenotypic flexibility. Nevertheless, urban males initiated plasma luteinizing hormone (LH) secretion and testicular development earlier than forest males in their first reproductive season. Moreover, plasma LH concentration and follicle size declined earlier in urban females than in forest females, suggesting that genetic differences are also involved and might contribute to the variations in the timing of reproduction in the wild. PMID:15451688
P.C. Stoy; M.C. Dietze; A.D. Richardson; R. Vargas; A.G. Barr; R.S. Anderson; M.A. Arain; I.T. Baker; T.A. Black; J.M. Chen; R.B. Cook; C.M. Gough; R.F. Grant; D.Y. Hollinger; R.C. Izaurralde; C.J. Kucharik; P. Lafleur; B.E. Law; S. Liu; E. Lokupitiya; Y. Luo; J. W. Munger; C. Peng; B. Poulter; D.T. Price; D. M. Ricciuto; W. J. Riley; A. K. Sahoo; K. Schaefer; C.R. Schwalm; H. Tian; H. Verbeeck; E. Weng
2013-01-01
Earth system processes exhibit complex patterns across time, as do the models that seek to replicate these processes. Model output may or may not be significantly related to observations at different times and on different frequencies. Conventional model diagnostics provide an aggregate view of model-data agreement, but usually do not identify the time and frequency...
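A hedged illustration of the kind of frequency-resolved model-data comparison this abstract alludes to: magnitude-squared coherence shows at which time scales a model tracks the observations, information an aggregate statistic such as overall RMSE hides. The diagnostic choice, data, and sampling here are assumptions for illustration, not the authors' method.

```python
# Minimal sketch: compare synthetic "observed" and "modeled" daily series by
# coherence per frequency band. The model captures the annual cycle but misses
# the weekly variability, so coherence is high near 365 days and low near 7 days.
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(1)
fs = 1.0                                   # one sample per day (hypothetical)
t = np.arange(0, 4 * 365)                  # four years of daily values
obs = (np.sin(2 * np.pi * t / 365) + 0.5 * np.sin(2 * np.pi * t / 7)
       + 0.3 * rng.standard_normal(t.size))
model = np.sin(2 * np.pi * t / 365) + 0.3 * rng.standard_normal(t.size)

f, cxy = coherence(obs, model, fs=fs, nperseg=365)
for freq, c in zip(f, cxy):
    if freq > 0 and (abs(1 / freq - 365) < 30 or abs(1 / freq - 7) < 1):
        print(f"period ~{1 / freq:6.1f} days  coherence = {c:.2f}")
```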
Same Hours, Different Time Distribution: Any Difference in EFL?
ERIC Educational Resources Information Center
Serrano, Raquel; Munoz, Carmen
2007-01-01
The effects of the distribution of instructional time on the acquisition of a second or foreign language are still not well known. This paper will analyze the performance of adult students enrolled in three different types of EFL programs in which the distribution of time varies. The first one, called "extensive", distributes a total of 110 h in 7…
1992-01-01
As this statement describes, institutional healthcare providers have recorded timing differences and continue to be confronted with circumstances warranting their recording. Medicare arrangements for paying for capital-related cost are the most frequent basis of recording or changing previously recorded timing differences, but other bases for recording or changing previously recorded timing differences are described in this statement. As this statement was being prepared for publication, another significant change in Medicare arrangements for paying for capital-related cost was under consideration. This statement is intended to describe the issue of timing differences generally and not to focus only on changes called for by a single legislative or regulatory action. Furthermore, the disposition of the currently proposed change is uncertain. Accordingly, the examples included in the appendices of this statement are the same as were included in the exposure draft of this statement released in 1990. HFMA has indicated its intent to prepare examples of the recording of changes in timing differences called for by legislation or regulations when the terms are sufficiently certain to warrant action. Those examples will be based on the conclusions included in this statement.
Steps Toward Optimal Competitive Scheduling
NASA Technical Reports Server (NTRS)
Frank, Jeremy; Crawford, James; Khatib, Lina; Brafman, Ronen
2006-01-01
This paper is concerned with the problem of allocating a unit capacity resource to multiple users within a pre-defined time period. The resource is indivisible, so that at most one user can use it at each time instance. However, different users may use it at different times. The users have independent, selfish preferences for when and for how long they are allocated this resource. Thus, they value different resource access durations differently, and they value different time slots differently. We seek an optimal allocation schedule for this resource. This problem arises in many institutional settings where, e.g., different departments, agencies, or personnel compete for a single resource. We are particularly motivated by the problem of scheduling NASA's Deep Space Satellite Network (DSN) among different users within NASA. Access to DSN is needed for transmitting data from various space missions to Earth. Each mission has different needs for DSN time, depending on satellite and planetary orbits. Typically, the DSN is over-subscribed, in that not all missions will be allocated as much time as they want. This leads to various inefficiencies - missions spend much time and resources lobbying for their time, often exaggerating their needs. NASA, on the other hand, would like to make optimal use of this resource, ensuring that the good for NASA is maximized. This raises the thorny problem of how to measure the utility to NASA of each allocation. In the typical case, it is difficult for the central agency, NASA in our case, to assess the value of each interval to each user - this is really only known to the users who understand their needs. Thus, our problem is more precisely formulated as follows: find an allocation schedule for the resource that maximizes the sum of users' preferences, when the preference values are private information of the users. We bypass this problem by assuming that one can assign money to customers. This assumption is reasonable; a committee is usually in charge of deciding the priority of each mission competing for access to the DSN within a time period while scheduling. Instead, we can assume that the committee assigns a budget to each mission.
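To make the budget-based formulation concrete, here is a hedged sketch of a per-slot, budget-constrained bidding allocation for a single indivisible resource. The mission names, bids, and budgets are hypothetical, and the greedy rule is only an illustration of the problem setup, not the scheduling mechanism developed in the paper.

```python
# Minimal sketch: each user declares a per-slot value and has a fixed budget;
# each time slot of the single, indivisible resource goes to the highest
# affordable bid. All names and numbers are hypothetical.
from typing import Dict, List

def allocate_slots(bids: Dict[str, List[float]], budgets: Dict[str, float]) -> List[str]:
    """bids[user][t] = declared value of slot t; accepted bids must fit the
    user's budget. Returns the winner (or 'idle') for each slot."""
    n_slots = len(next(iter(bids.values())))
    spent = {user: 0.0 for user in bids}
    schedule = []
    for t in range(n_slots):
        # Only users who can still afford their bid for this slot are eligible.
        candidates = [(user, vals[t]) for user, vals in bids.items()
                      if spent[user] + vals[t] <= budgets[user]]
        winner, price = max(candidates, key=lambda uv: uv[1], default=("idle", 0.0))
        spent[winner] = spent.get(winner, 0.0) + price
        schedule.append(winner)
    return schedule

# Three hypothetical missions competing for six DSN passes:
bids = {"missionA": [5, 5, 1, 0, 0, 4],
        "missionB": [2, 6, 6, 2, 1, 1],
        "missionC": [0, 0, 3, 5, 5, 0]}
budgets = {"missionA": 10, "missionB": 9, "missionC": 12}
print(allocate_slots(bids, budgets))   # e.g. ['missionA', 'missionB', ...]
```

A per-slot greedy auction like this does not, in general, maximize the sum of users' preferences; an optimal schedule requires solving the underlying optimization problem the paper addresses, but the sketch shows how budgets stand in for private preference values.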