Science.gov

Sample records for auditory time coding

  1. Coding space-time stimulus dynamics in auditory brain maps

    PubMed Central

    Wang, Yunyan; Gutfreund, Yoram; Peña, José L.

    2014-01-01

    Sensory maps are often distorted representations of the environment, where ethologically important ranges are magnified. The implication of a biased representation extends beyond the increased acuity that comes from dedicating more neurons to a certain range. Because neurons are functionally interconnected, non-uniform representations influence the processing of high-order features that rely on comparison across areas of the map. Among these features are time-dependent changes of the auditory scene generated by moving objects. How sensory representation affects high-order processing can be approached in the map of auditory space of the owl's midbrain, where locations in the front are over-represented. In this map, neurons are selective not only to location but also to location over time. The tuning to space over time leads to direction selectivity, which is also topographically organized. Across the population, neurons tuned to peripheral space are more selective to sounds moving into the front. The distribution of direction selectivity can be explained by spatial and temporal integration on the non-uniform map of space. Thus, the representation of space can induce biased computation of a second-order stimulus feature. This phenomenon is likely observed in other sensory maps and may be relevant for behavior. PMID:24782781

  2. Precise Feature Based Time Scales and Frequency Decorrelation Lead to a Sparse Auditory Code

    PubMed Central

    Chen, Chen; Read, Heather L.; Escabí, Monty A.

    2012-01-01

    Sparse redundancy reducing codes have been proposed as efficient strategies for representing sensory stimuli. A prevailing hypothesis suggests that sensory representations shift from dense redundant codes in the periphery to selective sparse codes in cortex. We propose an alternative framework where sparseness and redundancy depend on sensory integration time scales and demonstrate that the central nucleus of the inferior colliculus (ICC) of cats encodes sound features by precise sparse spike trains. Direct comparisons with auditory cortical neurons demonstrate that ICC responses were sparse and uncorrelated as long as the spike train time scales were matched to the sensory integration time scales relevant to ICC neurons. Intriguingly, correlated spiking in the ICC was substantially lower than predicted by linear or nonlinear models and strictly observed for neurons with best frequencies within a “critical band,” the hallmark of perceptual frequency resolution in mammals. This is consistent with a sparse asynchronous code throughout much of the ICC and a complementary correlation code within a critical band that may allow grouping of perceptually relevant cues. PMID:22723685
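
    The sparseness of a code, as discussed in this entry, is commonly quantified with the Treves-Rolls index. The sketch below uses invented response vectors, not data from the study:

```python
import numpy as np

def treves_rolls_sparseness(r):
    """Treves-Rolls sparseness: 0 for a uniform response, 1 for a one-hot response."""
    r = np.asarray(r, dtype=float)
    n = r.size
    a = r.mean() ** 2 / np.mean(r ** 2)      # "activity ratio", in (1/n, 1]
    return (1 - a) / (1 - 1 / n)             # normalized to [0, 1]

dense = np.ones(100)                         # every neuron fires equally (dense code)
sparse = np.zeros(100)
sparse[0] = 1.0                              # a single neuron fires (sparse code)

print(treves_rolls_sparseness(dense))        # -> 0.0
print(treves_rolls_sparseness(sparse))       # -> 1.0
```

    A cortical (or, per this study, collicular) population is called sparse when this index, computed over responses to natural stimuli, is close to 1.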

  3. Refractoriness enhances temporal coding by auditory nerve fibers.

    PubMed

    Avissar, Michael; Wittig, John H; Saunders, James C; Parsons, Thomas D

    2013-05-01

    A universal property of spiking neurons is refractoriness, a transient decrease in discharge probability immediately following an action potential (spike). The refractory period lasts only one to a few milliseconds, but has the potential to affect temporal coding of acoustic stimuli by auditory neurons, which are capable of submillisecond spike-time precision. Here this possibility was investigated systematically by recording spike times from chicken auditory nerve fibers in vivo while stimulating with repeated pure tones at characteristic frequency. Refractory periods were tightly distributed, with a mean of 1.58 ms. A statistical model was developed to recapitulate each fiber's responses and then used to predict the effect of removing the refractory period on a cell-by-cell basis for two largely independent facets of temporal coding: faithful entrainment of interspike intervals to the stimulus frequency and precise synchronization of spike times to the stimulus phase. The ratio of the refractory period to the stimulus period predicted the impact of refractoriness on entrainment and synchronization. For ratios less than ∼0.9, refractoriness enhanced entrainment and this enhancement was often accompanied by an increase in spike-time precision. At higher ratios, little or no change in entrainment or synchronization was observed. Given the tight distribution of refractory periods, the ability of refractoriness to improve temporal coding is restricted to neurons responding to low-frequency stimuli. Enhanced encoding of low frequencies likely affects sound localization and pitch perception in the auditory system, as well as perception in nonauditory sensory modalities, because all spiking neurons exhibit refractoriness. PMID:23637161
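
    Spike-time synchronization to stimulus phase, one of the two facets of temporal coding analyzed here, is conventionally quantified with the Goldberg-Brown vector strength. The spike trains below are invented for illustration, not taken from the recordings:

```python
import numpy as np

def vector_strength(spike_times, freq):
    """Goldberg-Brown vector strength: 1 = perfect phase locking, 0 = no locking."""
    phases = 2 * np.pi * freq * np.asarray(spike_times)
    return np.hypot(np.cos(phases).sum(), np.sin(phases).sum()) / len(spike_times)

f = 500.0                                   # stimulus frequency (Hz)
locked = np.arange(100) / f                 # one spike per cycle, identical phase
unlocked = np.random.default_rng(0).uniform(0, 0.2, 100)  # random spike times

print(vector_strength(locked, f))           # -> 1.0 (up to rounding)
print(vector_strength(unlocked, f))         # near 0
```

    Entrainment, by contrast, would be measured from the interspike-interval distribution (the fraction of intervals equal to one stimulus period), which is why the two measures are largely independent.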

  4. Changing Auditory Time with Prismatic Goggles

    ERIC Educational Resources Information Center

    Magnani, Barbara; Pavani, Francesco; Frassinetti, Francesca

    2012-01-01

    The aim of the present study was to explore the spatial organization of auditory time and the effects of the manipulation of spatial attention on such a representation. In two experiments, we asked 28 adults to classify the duration of auditory stimuli as "short" or "long". Stimuli were tones of high or low pitch, delivered left or right of the…

  5. How the owl resolves auditory coding ambiguity.

    PubMed

    Mazer, J A

    1998-09-01

    The barn owl (Tyto alba) uses interaural time difference (ITD) cues to localize sounds in the horizontal plane. Low-order binaural auditory neurons with sharp frequency tuning act as narrow-band coincidence detectors; such neurons respond equally well to sounds with a particular ITD and its phase equivalents and are said to be phase ambiguous. Higher-order neurons with broad frequency tuning are unambiguously selective for single ITDs in response to broad-band sounds and show little or no response to phase equivalents. Selectivity for single ITDs is thought to arise from the convergence of parallel, narrow-band frequency channels that originate in the cochlea. ITD tuning to variable bandwidth stimuli was measured in higher-order neurons of the owl's inferior colliculus to examine the rules that govern the relationship between frequency channel convergence and the resolution of phase ambiguity. Ambiguity decreased as stimulus bandwidth increased, reaching a minimum at 2-3 kHz. Two independent mechanisms appear to contribute to the elimination of ambiguity: one suppressive and one facilitative. The integration of information carried by parallel, distributed processing channels is a common theme of sensory processing that spans both modality and species boundaries. The principles underlying the resolution of phase ambiguity and frequency channel convergence in the owl may have implications for other sensory systems, such as electrolocation in electric fish and the computation of binocular disparity in the avian and mammalian visual systems. PMID:9724807
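
    The resolution of phase ambiguity by frequency-channel convergence can be sketched numerically: each narrow-band coincidence channel has a periodic (ambiguous) ITD curve, but the sum across channels peaks only at the true ITD. The channel frequencies and ITD below are illustrative assumptions, not the owl's actual tuning:

```python
import numpy as np

true_itd = 100e-6                           # true interaural time difference: 100 us
itds = np.linspace(-1e-3, 1e-3, 2001)       # candidate ITDs (s), 1 us grid
freqs = np.linspace(3000, 6000, 31)         # parallel narrow-band channels (Hz)

# each narrow-band channel responds periodically: peaks at true_itd + k/f
channels = np.cos(2 * np.pi * freqs[:, None] * (itds[None, :] - true_itd))
summed = channels.mean(axis=0)              # convergence across frequency channels

ambiguous = np.sum(channels[0] > 0.99)      # many near-maximal ITDs in one channel
unambiguous = np.sum(summed > 0.99)         # few in the cross-frequency sum
print(itds[np.argmax(summed)])              # -> ~1e-4: the true ITD
```

    The phase equivalents of different channels fall at different ITDs, so they cancel in the sum, while the true ITD is reinforced by every channel — the facilitative side of the convergence described above.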

  6. Central auditory conduction time in the rat.

    PubMed

    Shaw, N A

    1990-01-01

    Central conduction time is the time for an afferent volley to traverse the central pathways of a sensory system. In the present study, central auditory conduction time (CACT) was calculated for the rat, the first such formal measurement in any animal. Brainstem auditory evoked potentials (BAEPs) were recorded simultaneously with the primary response of the auditory cortex (P1). The latency of wave II of the BAEP, which arises in the cochlear nucleus, was subtracted from that of P1. This yielded a mean CACT of 6.6 ms. The results confirm a previous theoretical estimate that CACT in the rat is at least twice as long as central somatosensory conduction time. PMID:2311700

  7. Temporal asymmetries in auditory coding and perception reflect multi-layered nonlinearities.

    PubMed

    Deneux, Thomas; Kempf, Alexandre; Daret, Aurélie; Ponsot, Emmanuel; Bathellier, Brice

    2016-01-01

    Sound recognition relies not only on spectral cues, but also on temporal cues, as demonstrated by the profound impact of time reversals on perception of common sounds. To address the coding principles underlying such auditory asymmetries, we recorded a large sample of auditory cortex neurons using two-photon calcium imaging in awake mice, while playing sounds ramping up or down in intensity. We observed clear asymmetries in cortical population responses, including stronger cortical activity for up-ramping sounds, which matches perceptual saliency assessments in mice and previous measures in humans. Analysis of cortical activity patterns revealed that auditory cortex implements a map of spatially clustered neuronal ensembles, detecting specific combinations of spectral and intensity modulation features. Comparing different models, we show that cortical responses result from multi-layered nonlinearities, which, contrary to standard receptive field models of auditory cortex function, build divergent representations of sounds with similar spectral content, but different temporal structure. PMID:27580932

  8. Auditory Inspection Time, Intelligence and Pitch Discrimination.

    ERIC Educational Resources Information Center

    Deary, Ian J.; And Others

    1989-01-01

    An auditory inspection time (AIT) test, pitch discrimination tests, and verbal and non-verbal mental ability tests were administered to 59 undergraduates and 119 12-year-old school children. Results indicate that AIT correlations with intelligence are due to AIT being an index of information intake speeds. (TJH)

  9. Codes for sound-source location in nontonotopic auditory cortex.

    PubMed

    Middlebrooks, J C; Xu, L; Eddins, A C; Green, D M

    1998-08-01

    We evaluated two hypothetical codes for sound-source location in the auditory cortex. The topographical code assumed that single neurons are selective for particular locations and that sound-source locations are coded by the cortical location of small populations of maximally activated neurons. The distributed code assumed that the responses of individual neurons can carry information about locations throughout 360 degrees of azimuth and that accurate sound localization derives from information that is distributed across large populations of such panoramic neurons. We recorded from single units in the anterior ectosylvian sulcus area (area AES) and in area A2 of alpha-chloralose-anesthetized cats. Results obtained in the two areas were essentially equivalent. Noise bursts were presented from loudspeakers spaced in 20 degrees intervals of azimuth throughout 360 degrees of the horizontal plane. Spike counts of the majority of units were modulated >50% by changes in sound-source azimuth. Nevertheless, sound-source locations that produced greater than half-maximal spike counts often spanned >180 degrees of azimuth. The spatial selectivity of units tended to broaden and, often, to shift in azimuth as sound pressure levels (SPLs) were increased to a moderate level. We sometimes saw systematic changes in spatial tuning along segments of electrode tracks as long as 1.5 mm but such progressions were not evident at higher sound levels. Moderate-level sounds presented anywhere in the contralateral hemifield produced greater than half-maximal activation of nearly all units. These results are not consistent with the hypothesis of a topographic code. We used an artificial-neural-network algorithm to recognize spike patterns and, thereby, infer the locations of sound sources. Network input consisted of spike density functions formed by averages of responses to eight stimulus repetitions. 
Information carried in the responses of single units permitted reasonable estimates of sound
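
    The decoding logic of this entry can be illustrated with a deliberately simpler stand-in for the artificial neural network the study used: a nearest-template decoder over invented spike-density patterns. The azimuth grid matches the 20-degree loudspeaker spacing described above; the patterns and noise level are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
azimuths = np.arange(0, 360, 20)            # 18 speaker positions, 20-degree steps

# invented spike-density "templates", one characteristic pattern per azimuth
templates = rng.normal(size=(azimuths.size, 50))

def decode(pattern):
    """Nearest-template decoder: return the azimuth with the closest template."""
    return azimuths[np.argmin(np.linalg.norm(templates - pattern, axis=1))]

# a noisy single-trial response recorded at 140 degrees still decodes correctly
trial = templates[7] + 0.1 * rng.normal(size=50)
print(decode(trial))                        # -> 140
```

    The point of the distributed-code hypothesis is exactly this: even a broadly tuned "panoramic" neuron can support accurate localization if its full response pattern, rather than its peak location, carries the information.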

  10. Improving Hearing Performance Using Natural Auditory Coding Strategies

    NASA Astrophysics Data System (ADS)

    Rattay, Frank

    Sound transfer from the human ear to the brain rests on three quite different neural coding principles, by which the continuous temporal auditory signal is transmitted in excellent quality, as a binary spike code, via some 30,000 nerve fibers per ear. Cochlear implants are well-accepted neural prostheses for people with sensory hearing loss, but current devices are inspired only by the tonotopic principle. According to this principle, every sound frequency is mapped to a specific place along the cochlea. By electrical stimulation, the frequency content of the acoustic signal is distributed via a few contacts of the prosthesis to the corresponding places, generating spikes there. In contrast to the natural situation, the artificially evoked information content in the auditory nerve is quite poor, especially because the richness of the temporal fine structure of the neural pattern is replaced by a firing pattern strongly synchronized with an artificial cycle duration. Improvement in hearing performance is expected from involving more of the ingenious coding strategies developed during evolution.
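
    The tonotopic place principle described here is commonly modeled with the Greenwood place-frequency function. The sketch below maps a sound frequency to a cochlear place and then to an implant contact; the human Greenwood constants are standard, but the electrode count and the uniform quantization are illustrative assumptions, not a description of any particular device:

```python
import numpy as np

def greenwood_place(f_hz):
    """Greenwood map (human constants): fractional distance from apex (0) to base (1)."""
    A, a, k = 165.4, 2.1, 0.88
    return np.log10(f_hz / A + k) / a

def electrode_for(f_hz, n_electrodes=22):
    """Quantize cochlear place onto a small number of implant contacts."""
    x = greenwood_place(f_hz)
    return int(np.clip(x * n_electrodes, 0, n_electrodes - 1))

# higher frequencies map to more basal (higher-index) contacts
print(electrode_for(250), electrode_for(1000), electrode_for(4000))  # -> 3 8 14
```

    Collapsing the whole tonotopic axis onto ~22 contacts is precisely the loss of resolution, alongside the loss of temporal fine structure, that the entry argues against.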

  11. Diverse cortical codes for scene segmentation in primate auditory cortex

    PubMed Central

    Semple, Malcolm N.

    2015-01-01

    The temporal coherence of amplitude fluctuations is a critical cue for segmentation of complex auditory scenes. The auditory system must accurately demarcate the onsets and offsets of acoustic signals. We explored how and how well the timing of onsets and offsets of gated tones is encoded by auditory cortical neurons in awake rhesus macaques. Temporal features of this representation were isolated by presenting otherwise identical pure tones of differing durations. Cortical response patterns were diverse, including selective encoding of onset and offset transients, tonic firing, and sustained suppression. Spike train classification methods revealed that many neurons robustly encoded tone duration despite substantial diversity in the encoding process. Excellent discrimination performance was achieved by neurons whose responses were primarily phasic at tone offset and by those that responded robustly while the tone persisted. Although diverse cortical response patterns converged on effective duration discrimination, this diversity significantly constrained the utility of decoding models referenced to a spiking pattern averaged across all responses or averaged within the same response category. Using maximum likelihood-based decoding models, we demonstrated that the spike train recorded in a single trial could support direct estimation of stimulus onset and offset. Comparisons between different decoding models established the substantial contribution of bursts of activity at sound onset and offset to demarcating the temporal boundaries of gated tones. Our results indicate that relatively few neurons suffice to provide temporally precise estimates of such auditory “edges,” particularly for models that assume and exploit the heterogeneity of neural responses in awake cortex. PMID:25695655
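
    Maximum-likelihood decoding of tone duration from a single spike train, as used in this entry, can be sketched with a Poisson rate-step model: the firing rate is assumed high while the tone is on and low afterward, and the decoder picks the candidate offset time that best explains the observed spike counts. The rates and the toy spike train below are assumptions for illustration:

```python
import numpy as np

dt, T = 0.001, 0.5                          # 1 ms bins, 500 ms analysis window
t = np.arange(0, T, dt)
r_on, r_off = 100.0, 2.0                    # assumed firing rates (Hz): tone on / off

def ml_duration(counts, candidates):
    """Pick the candidate tone duration maximizing the Poisson log-likelihood."""
    def loglik(d):
        rate = np.where(t < d, r_on, r_off) * dt    # expected spikes per bin
        return np.sum(counts * np.log(rate) - rate)
    return max(candidates, key=loglik)

# toy spike train: one spike every 10 ms while a 200 ms tone is on, then silence
counts = np.zeros(t.size, dtype=int)
counts[np.arange(0, 200, 10)] = 1
candidates = np.round(np.arange(1, 10) * 0.05, 2)   # 50 ms .. 450 ms
print(ml_duration(counts, candidates))      # -> 0.2
```

    Neurons with phasic offset bursts would correspond to a rate model with a sharp transient at offset, which sharpens the likelihood peak — consistent with the contribution of onset/offset bursts reported above.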

  12. Brain-Generated Estradiol Drives Long-Term Optimization of Auditory Coding to Enhance the Discrimination of Communication Signals

    PubMed Central

    Tremere, Liisa A.; Pinaud, Raphael

    2011-01-01

    Auditory processing and hearing-related pathologies are heavily influenced by steroid hormones in a variety of vertebrate species including humans. The hormone estradiol has been recently shown to directly modulate the gain of central auditory neurons, in real-time, by controlling the strength of inhibitory transmission via a non-genomic mechanism. The functional relevance of this modulation, however, remains unknown. Here we show that estradiol generated in the songbird homologue of the mammalian auditory association cortex, rapidly enhances the effectiveness of the neural coding of complex, learned acoustic signals in awake zebra finches. Specifically, estradiol increases mutual information rates, coding efficiency and the neural discrimination of songs. These effects are mediated by estradiol’s modulation of both rate and temporal coding of auditory signals. Interference with the local action or production of estradiol in the auditory forebrain of freely-behaving animals disrupts behavioral responses to songs, but not to other behaviorally-relevant communication signals. Our findings directly show that estradiol is a key regulator of auditory function in the adult vertebrate brain. PMID:21368039
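
    Mutual information between song identity and neural response, one of the quantities this entry reports estradiol to increase, can be computed from a joint stimulus-response probability table. The two toy tables below are invented extremes, not the study's data:

```python
import numpy as np

def mutual_information(joint):
    """Mutual information (bits) from a joint stimulus-response probability table."""
    joint = joint / joint.sum()
    ps = joint.sum(axis=1, keepdims=True)    # stimulus marginal
    pr = joint.sum(axis=0, keepdims=True)    # response marginal
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (ps * pr)[nz])))

perfect = np.array([[0.5, 0.0],              # each song evokes its own response
                    [0.0, 0.5]])
useless = np.array([[0.25, 0.25],            # responses independent of song
                    [0.25, 0.25]])
print(mutual_information(perfect), mutual_information(useless))  # -> 1.0 0.0
```

    In practice the "response" axis is a binned description of spike rate or spike timing, which is how modulation of both rate and temporal coding shows up in a single information-theoretic measure.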

  13. Subthreshold resonance properties contribute to the efficient coding of auditory spatial cues.

    PubMed

    Remme, Michiel W H; Donato, Roberta; Mikiel-Hunter, Jason; Ballestero, Jimena A; Foster, Simon; Rinzel, John; McAlpine, David

    2014-06-01

    Neurons in the medial superior olive (MSO) and lateral superior olive (LSO) of the auditory brainstem code for sound-source location in the horizontal plane, extracting interaural time differences (ITDs) from the stimulus fine structure and interaural level differences (ILDs) from the stimulus envelope. Here, we demonstrate a postsynaptic gradient in temporal processing properties across the presumed tonotopic axis; neurons in the MSO and the low-frequency limb of the LSO exhibit fast intrinsic electrical resonances and low input impedances, consistent with their processing of ITDs in the temporal fine structure. Neurons in the high-frequency limb of the LSO show low-pass electrical properties, indicating they are better suited to extracting information from the slower, modulated envelopes of sounds. Using a modeling approach, we assess ITD and ILD sensitivity of the neural filters to natural sounds, demonstrating that the transformation in temporal processing along the tonotopic axis contributes to efficient extraction of auditory spatial cues. PMID:24843153
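
    The contrast between resonant and low-pass subthreshold properties can be sketched with membrane impedances: a passive RC membrane is low-pass, while adding a phenomenological inductive branch (mimicking, e.g., a low-threshold potassium current) produces a band-pass resonance. All component values below are illustrative, not fits to MSO/LSO data:

```python
import numpy as np

f = np.logspace(0, 3.5, 400)                # 1 Hz .. ~3 kHz
w = 2j * np.pi * f                          # jw

# passive RC membrane: low-pass impedance (high-frequency LSO limb analogue)
R, C = 50e6, 20e-12                         # 50 MOhm, 20 pF (illustrative)
Z_lowpass = 1 / (1 / R + w * C)

# adding an inductive branch yields a band-pass (resonant) impedance (MSO analogue)
L, R_L = 5e3, 10e6                          # "inductance" and its series resistance
Z_resonant = 1 / (1 / R + w * C + 1 / (R_L + w * L))

f_peak = f[np.argmax(np.abs(Z_resonant))]
print(round(f_peak))                        # interior resonance, hundreds of Hz here
```

    A resonant impedance selectively amplifies inputs near its peak frequency, which is the sense in which fast intrinsic resonances suit fine-structure ITD coding while low-pass membranes suit slower envelope (ILD) cues.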

  14. State-Dependent Population Coding in Primary Auditory Cortex

    PubMed Central

    Pachitariu, Marius; Lyamzin, Dmitry R.; Sahani, Maneesh

    2015-01-01

    Sensory function is mediated by interactions between external stimuli and intrinsic cortical dynamics that are evident in the modulation of evoked responses by cortical state. A number of recent studies across different modalities have demonstrated that the patterns of activity in neuronal populations can vary strongly between synchronized and desynchronized cortical states, i.e., in the presence or absence of intrinsically generated up and down states. Here we investigated the impact of cortical state on the population coding of tones and speech in the primary auditory cortex (A1) of gerbils, and found that responses were qualitatively different in synchronized and desynchronized cortical states. Activity in synchronized A1 was only weakly modulated by sensory input, and the spike patterns evoked by tones and speech were unreliable and constrained to a small range of patterns. In contrast, responses to tones and speech in desynchronized A1 were temporally precise and reliable across trials, and different speech tokens evoked diverse spike patterns with extremely weak noise correlations, allowing responses to be decoded with nearly perfect accuracy. Restricting the analysis of synchronized A1 to activity within up states yielded similar results, suggesting that up states are not equivalent to brief periods of desynchronization. These findings demonstrate that the representational capacity of A1 depends strongly on cortical state, and suggest that cortical state should be considered as an explicit variable in all studies of sensory processing. PMID:25653363

  15. Auditory Speech Perception Tests in Relation to the Coding Strategy in Cochlear Implant

    PubMed Central

    Bazon, Aline Cristine; Mantello, Erika Barioni; Gonçales, Alina Sanches; Isaac, Myriam de Lima; Hyppolito, Miguel Angelo; Reis, Ana Cláudia Mirândola Barbosa

    2015-01-01

    Introduction  The objective of evaluating auditory perception in cochlear implant users is to determine how the acoustic signal is processed, leading to the recognition and understanding of sound. Objective  To investigate differences in auditory speech perception in individuals with postlingual hearing loss wearing a cochlear implant, using two different speech coding strategies, and to analyze speech perception and handicap perception in relation to the strategy used. Methods  This is a descriptive, prospective, cross-sectional cohort study. We selected ten cochlear implant users, characterized their hearing thresholds, and applied speech perception tests and the Hearing Handicap Inventory for Adults. Results  There was no significant difference when comparing the variables subject age, age at acquisition of hearing loss, etiology, time of hearing deprivation, time of cochlear implant use, and mean hearing threshold with the cochlear implant against the shift in speech coding strategy. There was no relationship between lack of handicap perception and improvement in speech perception for either speech coding strategy. Conclusion  There was no significant difference between the strategies evaluated, and no relation was observed between them and the variables studied. PMID:27413409

  16. Distinct Subthreshold Mechanisms Underlying Rate-Coding Principles in Primate Auditory Cortex.

    PubMed

    Gao, Lixia; Kostlan, Kevin; Wang, Yunyan; Wang, Xiaoqin

    2016-08-17

    A key computational principle for encoding time-varying signals in auditory and somatosensory cortices of monkeys is the opponent model of rate coding by two distinct populations of neurons. However, the subthreshold mechanisms that give rise to this computation have not been revealed. Because the rate-coding neurons are only observed in awake conditions, it is especially challenging to probe their underlying cellular mechanisms. Using a novel intracellular recording technique that we developed in awake marmosets, we found that the two types of rate-coding neurons in auditory cortex exhibited distinct subthreshold responses. While the positive-monotonic neurons (monotonically increasing firing rate with increasing stimulus repetition frequency) displayed sustained depolarization at high repetition frequency, the negative-monotonic neurons (opposite trend) instead exhibited hyperpolarization at high repetition frequency but sustained depolarization at low repetition frequency. The combination of excitatory and inhibitory subthreshold events allows the cortex to represent time-varying signals through these two opponent neuronal populations. PMID:27478016

  17. Predictive coding of visual-auditory and motor-auditory events: An electrophysiological study.

    PubMed

    Stekelenburg, Jeroen J; Vroomen, Jean

    2015-11-11

    The amplitude of auditory components of the event-related potential (ERP) is attenuated when sounds are self-generated compared to externally generated sounds. This effect has been ascribed to internal forward models predicting the sensory consequences of one's own motor actions. Auditory potentials are also attenuated when a sound is accompanied by a video of anticipatory visual motion that reliably predicts the sound. Here, we investigated whether the neural underpinnings of prediction of upcoming auditory stimuli are similar for motor-auditory (MA) and visual-auditory (VA) events using a stimulus omission paradigm. In the MA condition, a finger tap triggered the sound of a handclap, whereas in the VA condition the same sound was accompanied by a video showing the handclap. In both conditions, the auditory stimulus was omitted in either 50% or 12% of the trials. These auditory omissions induced early and mid-latency ERP components (oN1 and oN2, presumably reflecting prediction and prediction error), and subsequent higher-order error evaluation processes. The oN1 and oN2 of MA and VA were alike in amplitude, topography, and neural sources, even though the predictions originate in different brain areas (motor versus visual cortex). This suggests that MA and VA predictions activate a sensory template of the sound in auditory cortex. This article is part of a Special Issue entitled SI: Prediction and Attention. PMID:25641042

  18. Norepinephrine Modulates Coding of Complex Vocalizations in the Songbird Auditory Cortex Independent of Local Neuroestrogen Synthesis.

    PubMed

    Ikeda, Maaya Z; Jeon, Sung David; Cowell, Rosemary A; Remage-Healey, Luke

    2015-06-24

    The catecholamine norepinephrine plays a significant role in auditory processing. Most studies to date have examined the effects of norepinephrine on the neuronal response to relatively simple stimuli, such as tones and calls. It is less clear how norepinephrine shapes the detection of complex syntactical sounds, as well as the coding properties of sensory neurons. Songbirds provide an opportunity to understand how auditory neurons encode complex, learned vocalizations, and the potential role of norepinephrine in modulating the neuronal computations for acoustic communication. Here, we infused norepinephrine into the zebra finch auditory cortex and performed extracellular recordings to study the modulation of song representations in single neurons. Consistent with its proposed role in enhancing signal detection, norepinephrine decreased spontaneous activity and firing during stimuli, yet it significantly enhanced the auditory signal-to-noise ratio. These effects were all mimicked by clonidine, an α-2 receptor agonist. Moreover, a pattern classifier analysis indicated that norepinephrine enhanced the ability of single neurons to accurately encode complex auditory stimuli. Because neuroestrogens are also known to enhance auditory processing in the songbird brain, we tested the hypothesis that norepinephrine actions depend on local estrogen synthesis. Neither norepinephrine nor adrenergic receptor antagonist infusion into the auditory cortex had detectable effects on local estradiol levels. Moreover, pretreatment with fadrozole, a specific aromatase inhibitor, did not block norepinephrine's neuromodulatory effects. Together, these findings indicate that norepinephrine enhances signal detection and information encoding for complex auditory stimuli by suppressing spontaneous "noise" activity and that these actions are independent of local neuroestrogen synthesis. PMID:26109659

  19. Differential Coding of Conspecific Vocalizations in the Ventral Auditory Cortical Stream

    PubMed Central

    Fukushima, Makoto; Saunders, Richard C.; Leopold, David A.; Mishkin, Mortimer; Averbeck, Bruno B.

    2014-01-01

    The mammalian auditory cortex integrates spectral and temporal acoustic features to support the perception of complex sounds, including conspecific vocalizations. Here we investigate coding of vocal stimuli in different subfields in macaque auditory cortex. We simultaneously measured auditory evoked potentials over a large swath of primary and higher order auditory cortex along the supratemporal plane in three animals chronically using high-density microelectrocorticographic arrays. To evaluate the capacity of neural activity to discriminate individual stimuli in these high-dimensional datasets, we applied a regularized multivariate classifier to evoked potentials to conspecific vocalizations. We found a gradual decrease in the level of overall classification performance along the caudal to rostral axis. Furthermore, the performance in the caudal sectors was similar across individual stimuli, whereas the performance in the rostral sectors significantly differed for different stimuli. Moreover, the information about vocalizations in the caudal sectors was similar to the information about synthetic stimuli that contained only the spectral or temporal features of the original vocalizations. In the rostral sectors, however, the classification for vocalizations was significantly better than that for the synthetic stimuli, suggesting that conjoined spectral and temporal features were necessary to explain differential coding of vocalizations in the rostral areas. We also found that this coding in the rostral sector was carried primarily in the theta frequency band of the response. These findings illustrate a progression in neural coding of conspecific vocalizations along the ventral auditory pathway. PMID:24672012

  1. Auditory-motor entrainment and phonological skills: precise auditory timing hypothesis (PATH).

    PubMed

    Tierney, Adam; Kraus, Nina

    2014-01-01

    Phonological skills are enhanced by music training, but the mechanisms enabling this cross-domain enhancement remain unknown. To explain this cross-domain transfer, we propose a precise auditory timing hypothesis (PATH) whereby entrainment practice is the core mechanism underlying enhanced phonological abilities in musicians. Both rhythmic synchronization and language skills such as consonant discrimination, detection of word and phrase boundaries, and conversational turn-taking rely on the perception of extremely fine-grained timing details in sound. Auditory-motor timing is an acoustic feature which meets all five of the pre-conditions necessary for cross-domain enhancement to occur (Patel, 2011, 2012, 2014). There is overlap between the neural networks that process timing in the context of both music and language. Entrainment to music demands more precise timing sensitivity than does language processing. Moreover, auditory-motor timing integration captures the emotion of the trainee, is repeatedly practiced, and demands focused attention. The PATH predicts that musical training emphasizing entrainment will be particularly effective in enhancing phonological skills. PMID:25505879

  3. Flexible information coding in human auditory cortex during perception, imagery, and STM of complex sounds.

    PubMed

    Linke, Annika C; Cusack, Rhodri

    2015-07-01

    Auditory cortex is the first cortical region of the human brain to process sounds. However, it has recently been shown that its neurons also fire in the absence of direct sensory input, during memory maintenance and imagery. This has commonly been taken to reflect neural coding of the same acoustic information as during the perception of sound. However, the results of the current study suggest that the type of information encoded in auditory cortex is highly flexible. During perception and memory maintenance, neural activity patterns are stimulus specific, reflecting individual sound properties. Auditory imagery of the same sounds evokes similar overall activity in auditory cortex as perception. However, during imagery abstracted, categorical information is encoded in the neural patterns, particularly when individuals are experiencing more vivid imagery. This highlights the necessity to move beyond traditional "brain mapping" inference in human neuroimaging, which assumes common regional activation implies similar mental representations. PMID:25603030

  4. The effect of real-time auditory feedback on learning new characters.

    PubMed

    Danna, Jérémy; Fontaine, Maureen; Paz-Villagrán, Vietminh; Gondre, Charles; Thoret, Etienne; Aramaki, Mitsuko; Kronland-Martinet, Richard; Ystad, Sølvi; Velay, Jean-Luc

    2015-10-01

The present study investigated the effect of handwriting sonification on graphomotor learning. Thirty-two adults, distributed in two groups, learned four new characters with their non-dominant hand. The experimental design included a pre-test, a training session, and two post-tests, one just after the training session and another 24 h later. Two characters were learned with and two without real-time auditory feedback (FB). The first group learned the two non-sonified characters first and then the two sonified characters, whereas the reverse order was adopted for the second group. Results revealed that auditory FB improved the speed and fluency of handwriting movements but reduced, in the short term only, the spatial accuracy of the trace. Transforming kinematic variables into sounds allows the writer to perceive his/her movement in addition to the written trace, and this might facilitate handwriting learning. However, there were no differential effects of auditory FB, either short-term or long-term, for the subjects who first learned the characters with auditory FB. We hypothesize that the positive effect on handwriting kinematics was transferred to characters learned without FB. This transfer effect of the auditory FB is discussed in light of the Theory of Event Coding. PMID:25533208

  5. Visual Timing of Structured Dance Movements Resembles Auditory Rhythm Perception.

    PubMed

    Su, Yi-Huang; Salazar-López, Elvira

    2016-01-01

    Temporal mechanisms for processing auditory musical rhythms are well established, in which a perceived beat is beneficial for timing purposes. It is yet unknown whether such beat-based timing would also underlie visual perception of temporally structured, ecological stimuli connected to music: dance. In this study, we investigated whether observers extracted a visual beat when watching dance movements to assist visual timing of these movements. Participants watched silent videos of dance sequences and reproduced the movement duration by mental recall. We found better visual timing for limb movements with regular patterns in the trajectories than without, similar to the beat advantage for auditory rhythms. When movements involved both the arms and the legs, the benefit of a visual beat relied only on the latter. The beat-based advantage persisted despite auditory interferences that were temporally incongruent with the visual beat, arguing for the visual nature of these mechanisms. Our results suggest that visual timing principles for dance parallel their auditory counterparts for music, which may be based on common sensorimotor coupling. These processes likely yield multimodal rhythm representations in the scenario of music and dance. PMID:27313900

  7. Odors Bias Time Perception in Visual and Auditory Modalities

    PubMed Central

    Yue, Zhenzhu; Gao, Tianyu; Chen, Lihan; Wu, Jiashuang

    2016-01-01

Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in visual and auditory modalities. Participants either saw a visual dot or heard a continuous tone for 1000 or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for both visual and auditory modalities. The analysis of the interaction effect showed that participants produced longer durations than the actual duration in the short interval condition, but shorter durations in the long interval condition. The effect sizes were larger for the auditory modality than for the visual modality. Moreover, by comparing performance across the initial and final blocks of the experiment, we found that odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, with a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, constrained by sensory modality, the valence of the emotional events, and target duration. Biases in time perception could be accounted for by a framework of
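The over/under-reproduction pattern described above (longer reproductions for the short interval, shorter for the long one) can be expressed as a signed relative bias. A minimal Python sketch; the durations below are hypothetical illustrations, not the study's data:

```python
def reproduction_bias(actual_ms, reproduced_ms):
    """Signed bias of a reproduced duration, as a fraction of the
    actual duration (positive = overestimate, negative = underestimate)."""
    return (reproduced_ms - actual_ms) / actual_ms

# Hypothetical values following the pattern reported above:
short_bias = reproduction_bias(1000.0, 1150.0)   # ~ +0.15: short interval over-reproduced
long_bias = reproduction_bias(4000.0, 3600.0)    # ~ -0.10: long interval under-reproduced
```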

  9. Auditory training improves neural timing in the human brainstem.

    PubMed

    Russo, Nicole M; Nicol, Trent G; Zecker, Steven G; Hayes, Erin A; Kraus, Nina

    2005-01-01

The auditory brainstem response reflects neural encoding of the acoustic characteristics of a speech syllable with remarkable precision. Some children with learning impairments demonstrate abnormalities in this preconscious measure of neural encoding, especially in background noise. This study investigated whether auditory training targeted to remediate perceptually based learning problems would alter the neural brainstem encoding of the acoustic sound structure of speech in such children. Nine subjects, clinically diagnosed with a language-based learning problem (e.g., dyslexia), worked with auditory perceptual training software. Prior to beginning and within three months after completing the training program, brainstem responses to the syllable /da/ were recorded in quiet and in background noise. Subjects underwent additional auditory neurophysiological, perceptual, and cognitive testing. Ten control subjects, who did not participate in any remediation program, underwent the same battery of tests at time intervals equivalent to those of the trained subjects. Transient and sustained (frequency-following response) components of the brainstem response were evaluated. The primary pathway afferent volley -- neural events occurring earlier than 11 ms after stimulus onset -- did not demonstrate plasticity. However, quiet-to-noise inter-response correlations of the sustained response (approximately 11-50 ms) increased significantly in the trained children, reflecting improved stimulus encoding precision, whereas control subjects did not exhibit this change. Thus, auditory training can alter the preconscious neural encoding of complex sounds by improving neural synchrony in the auditory brainstem. Additionally, several measures of brainstem response timing were related to changes in cortical physiology, as well as to perceptual, academic, and cognitive measures, from pre- to post-training. PMID:15474654
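The quiet-to-noise inter-response correlation used above as a measure of encoding precision amounts to a Pearson correlation between the two response waveforms. A minimal sketch with synthetic waveforms (the 100 Hz signal and noise component below are illustrative, not recorded data):

```python
import numpy as np

def quiet_to_noise_correlation(resp_quiet, resp_noise):
    """Pearson correlation between a response recorded in quiet and the
    response to the same stimulus in background noise; values closer to 1
    indicate more noise-robust encoding."""
    return float(np.corrcoef(resp_quiet, resp_noise)[0, 1])

# Synthetic example over the ~11-50 ms sustained-response window
t = np.linspace(0.011, 0.050, 400)
quiet = np.sin(2 * np.pi * 100 * t)              # idealized frequency-following response
noisy = quiet + 0.1 * np.cos(2 * np.pi * 700 * t)  # same response with residual noise
r = quiet_to_noise_correlation(quiet, noisy)       # close to 1: encoding survives noise
```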

  10. Time-sharing visual and auditory tracking tasks

    NASA Technical Reports Server (NTRS)

    Tsang, Pamela S.; Vidulich, Michael A.

    1987-01-01

    An experiment is described which examined the benefits of distributing the input demands of two tracking tasks as a function of task integrality. Visual and auditory compensatory tracking tasks were utilized. Results indicate that presenting the two tracking signals in two input modalities did not improve time-sharing efficiency. This was attributed to the difficulty insensitivity phenomenon.

  11. Speech Compensation for Time-Scale-Modified Auditory Feedback

    ERIC Educational Resources Information Center

    Ogane, Rintaro; Honda, Masaaki

    2014-01-01

    Purpose: The purpose of this study was to examine speech compensation in response to time-scale-modified auditory feedback during the transition of the semivowel for a target utterance of /ija/. Method: Each utterance session consisted of 10 control trials in the normal feedback condition followed by 20 perturbed trials in the modified auditory…

  12. The Time Course of Neural Changes Underlying Auditory Perceptual Learning

    PubMed Central

    Atienza, Mercedes; Cantero, Jose L.; Dominguez-Marin, Elena

    2002-01-01

    Improvement in perception takes place within the training session and from one session to the next. The present study aims at determining the time course of perceptual learning as revealed by changes in auditory event-related potentials (ERPs) reflecting preattentive processes. Subjects were trained to discriminate two complex auditory patterns in a single session. ERPs were recorded just before and after training, while subjects read a book and ignored stimulation. ERPs showed a negative wave called mismatch negativity (MMN)—which indexes automatic detection of a change in a homogeneous auditory sequence—just after subjects learned to consciously discriminate the two patterns. ERPs were recorded again 12, 24, 36, and 48 h later, just before testing performance on the discrimination task. Additional behavioral and neurophysiological changes were found several hours after the training session: an enhanced P2 at 24 h followed by shorter reaction times, and an enhanced MMN at 36 h. These results indicate that gains in performance on the discrimination of two complex auditory patterns are accompanied by different learning-dependent neurophysiological events evolving within different time frames, supporting the hypothesis that fast and slow neural changes underlie the acquisition of improved perception. PMID:12075002

  13. Burst Firing is a Neural Code in an Insect Auditory System

    PubMed Central

    Eyherabide, Hugo G.; Rokem, Ariel; Herz, Andreas V. M.; Samengo, Inés

    2008-01-01

    Various classes of neurons alternate between high-frequency discharges and silent intervals. This phenomenon is called burst firing. To analyze burst activity in an insect system, grasshopper auditory receptor neurons were recorded in vivo for several distinct stimulus types. The experimental data show that both burst probability and burst characteristics are strongly influenced by temporal modulations of the acoustic stimulus. The tendency to burst, hence, is not only determined by cell-intrinsic processes, but also by their interaction with the stimulus time course. We study this interaction quantitatively and observe that bursts containing a certain number of spikes occur shortly after stimulus deflections of specific intensity and duration. Our findings suggest a sparse neural code where information about the stimulus is represented by the number of spikes per burst, irrespective of the detailed interspike-interval structure within a burst. This compact representation cannot be interpreted as a firing-rate code. An information-theoretical analysis reveals that the number of spikes per burst reliably conveys information about the amplitude and duration of sound transients, whereas their time of occurrence is reflected by the burst onset time. The investigated neurons encode almost half of the total transmitted information in burst activity. PMID:18946533
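A spikes-per-burst code like the one described above presupposes a way of segmenting a spike train into bursts; a common approach groups consecutive spikes whose interspike intervals fall below a fixed criterion. A minimal sketch, where the 6 ms criterion is a hypothetical choice rather than the paper's:

```python
def spikes_per_burst(spike_times_s, isi_criterion_s=0.006):
    """Segment a sorted spike train (seconds) into bursts and return the
    number of spikes in each burst: consecutive spikes separated by less
    than the ISI criterion belong to the same burst."""
    counts = []
    current = 0
    prev = None
    for t in spike_times_s:
        if prev is not None and t - prev > isi_criterion_s:
            counts.append(current)  # gap exceeded: close the current burst
            current = 0
        current += 1
        prev = t
    if current:
        counts.append(current)
    return counts

# Three spikes at ~2 ms intervals, then two more after a long gap
spikes_per_burst([0.000, 0.002, 0.004, 0.100, 0.102])  # [3, 2]
```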

  14. Development of Visuo-Auditory Integration in Space and Time

    PubMed Central

    Gori, Monica; Sandini, Giulio; Burr, David

    2012-01-01

Adults integrate multisensory information optimally (e.g., Ernst and Banks, 2002), while children do not integrate multisensory visual-haptic cues until 8–10 years of age (e.g., Gori et al., 2008). Before that age, strong unisensory dominance occurs for size and orientation visual-haptic judgments, possibly reflecting a process of cross-sensory calibration between modalities. It is widely recognized that audition dominates time perception, while vision dominates space perception. Within the framework of the cross-sensory calibration hypothesis, we investigated visual-auditory integration in both space and time with child-friendly spatial and temporal bisection tasks. Unimodal and bimodal (conflictual and not) audio-visual thresholds and PSEs were measured and compared with the Bayesian predictions. In the temporal domain, we found that in both children and adults, audition dominates the bimodal visuo-auditory task, both in perceived time and in precision thresholds. On the contrary, in the visual-auditory spatial task, children younger than 12 years of age show clear visual dominance (for PSEs) and bimodal thresholds higher than the Bayesian prediction. Only in the adult group did bimodal thresholds become optimal. In agreement with previous studies, our results suggest that adult-like visual-auditory behavior also develops late. We suggest that the visual dominance for space and the auditory dominance for time could reflect a cross-sensory comparison of vision in the spatial visuo-audio task and of audition in the temporal visuo-audio task. PMID:23060759
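The Bayesian predictions referred to above come from the standard maximum-likelihood cue-combination rule, in which each cue is weighted by its reliability (inverse variance) and the fused estimate is never less precise than the better single cue. A minimal sketch with hypothetical numbers:

```python
def integrate_cues(mu_v, var_v, mu_a, var_a):
    """Optimal (maximum-likelihood) fusion of two Gaussian cue estimates:
    reliability-weighted mean, and a fused variance smaller than either
    unimodal variance."""
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_a)  # visual weight
    mu = w_v * mu_v + (1.0 - w_v) * mu_a               # fused estimate
    var = (var_v * var_a) / (var_v + var_a)            # fused variance
    return mu, var

# Hypothetical temporal bisection: audition (var 1.0) is more reliable than
# vision (var 4.0), so the fused estimate sits close to the auditory one.
mu, var = integrate_cues(mu_v=0.0, var_v=4.0, mu_a=1.0, var_a=1.0)  # ~ (0.8, 0.8)
```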

  15. Interactive coding of visual spatial frequency and auditory amplitude-modulation rate.

    PubMed

    Guzman-Martinez, Emmanuel; Ortega, Laura; Grabowecky, Marcia; Mossbridge, Julia; Suzuki, Satoru

    2012-03-01

    Spatial frequency is a fundamental visual feature coded in primary visual cortex, relevant for perceiving textures, objects, hierarchical structures, and scenes, as well as for directing attention and eye movements. Temporal amplitude-modulation (AM) rate is a fundamental auditory feature coded in primary auditory cortex, relevant for perceiving auditory objects, scenes, and speech. Spatial frequency and temporal AM rate are thus fundamental building blocks of visual and auditory perception. Recent results suggest that crossmodal interactions are commonplace across the primary sensory cortices and that some of the underlying neural associations develop through consistent multisensory experience such as audio-visually perceiving speech, gender, and objects. We demonstrate that people consistently and absolutely (rather than relatively) match specific auditory AM rates to specific visual spatial frequencies. We further demonstrate that this crossmodal mapping allows amplitude-modulated sounds to guide attention to and modulate awareness of specific visual spatial frequencies. Additional results show that the crossmodal association is approximately linear, based on physical spatial frequency, and generalizes to tactile pulses, suggesting that the association develops through multisensory experience during manual exploration of surfaces. PMID:22326023

  16. Auditory Inspection Time and Intelligence: What Is the Direction of Causation?

    ERIC Educational Resources Information Center

    Deary, Ian J.

    1995-01-01

    Tested three competing structural equation models concerning auditory inspection time (AIT) and cognitive ability. Found that auditory inspection times near age 11 correlate most strongly with later high IQ. (ET)

  17. Getting back on the beat: links between auditory-motor integration and precise auditory processing at fast time scales.

    PubMed

    Tierney, Adam; Kraus, Nina

    2016-03-01

    The auditory system is unique in its ability to precisely detect the timing of perceptual events and use this information to update motor plans, a skill that is crucial for language. However, the characteristics of the auditory system that enable this temporal precision are only beginning to be understood. Previous work has shown that participants who can tap consistently to a metronome have neural responses to sound with greater phase coherence from trial to trial. We hypothesized that this relationship is driven by a link between the updating of motor output by auditory feedback and neural precision. Moreover, we hypothesized that neural phase coherence at both fast time scales (reflecting subcortical processing) and slow time scales (reflecting cortical processing) would be linked to auditory-motor timing integration. To test these hypotheses, we asked participants to synchronize to a pacing stimulus, and then changed either the tempo or the timing of the stimulus to assess whether they could rapidly adapt. Participants who could rapidly and accurately resume synchronization had neural responses to sound with greater phase coherence. However, this precise timing was limited to the time scale of 10 ms (100 Hz) or faster; neural phase coherence at slower time scales was unrelated to performance on this task. Auditory-motor adaptation therefore specifically depends upon consistent auditory processing at fast, but not slow, time scales. PMID:26750313
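The trial-to-trial phase coherence invoked above can be computed as the resultant length of per-trial phase angles at a given analysis frequency. A minimal Python sketch; the sampling rate and 100 Hz analysis frequency are illustrative choices, not the study's recording parameters:

```python
import numpy as np

def phase_coherence(trials, sfreq, freq_hz):
    """Inter-trial phase coherence at one frequency.
    trials: (n_trials, n_samples) array of single-trial responses.
    Returns a value in [0, 1]; 1 means identical phase on every trial."""
    n = trials.shape[1]
    t = np.arange(n) / sfreq
    # complex projection of each trial onto the analysis frequency
    coeffs = trials @ np.exp(-2j * np.pi * freq_hz * t)
    phases = np.angle(coeffs)
    return float(np.abs(np.mean(np.exp(1j * phases))))

# Identical trials at 100 Hz (the fast time scale discussed above)
t = np.arange(1000) / 10000.0
trials = np.tile(np.sin(2 * np.pi * 100 * t), (20, 1))
c = phase_coherence(trials, 10000.0, 100.0)  # ~1.0: perfectly phase-locked
```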

  18. Sound coding in the auditory nerve of gerbils.

    PubMed

    Huet, Antoine; Batrel, Charlène; Tang, Yong; Desmadryl, Gilles; Wang, Jing; Puel, Jean-Luc; Bourien, Jérôme

    2016-08-01

Gerbils possess a very specialized cochlea in which the low-frequency inner hair cells (IHCs) are contacted by auditory nerve fibers (ANFs) having a high spontaneous rate (SR), whereas high-frequency IHCs are innervated by ANFs with a greater SR-based diversity. This specificity makes this animal a unique model for investigating, in the same cochlea, the functional role of different pools of ANFs. The distribution of the characteristic frequencies of fibers shows a clear bimodal shape (with a first mode around 1.5 kHz and a second around 12 kHz) and a notch in the histogram near 3.5 kHz. Whereas the mean thresholds did not significantly differ in the two frequency regions, the shape of the rate-intensity functions does vary significantly with the fiber characteristic frequency. Above 3.5 kHz, the sound-driven rate is greater and the slope of the rate-intensity function is steeper. Interestingly, high-SR fibers show a very well-synchronized onset response in quiet (small first-spike latency jitter) but a weak response under noisy conditions. The low-SR fibers exhibit the opposite behavior, with poor onset synchronization in quiet but a robust response in noise. Finally, the greater vulnerability of low-SR fibers to various injuries, including noise- and age-related hearing loss, is discussed with regard to patients with poor speech intelligibility in noisy environments. Together, these results emphasize the need to perform relevant clinical tests to probe the distribution of ANFs in humans, and to develop appropriate techniques of rehabilitation. This article is part of a Special Issue. PMID:27220483
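Rate-intensity functions like those compared above are commonly summarized by a sigmoid rising from the spontaneous rate toward a saturated driven rate, with a slope parameter capturing how steeply firing grows with sound level. A sketch with hypothetical parameter values, not fits to the gerbil data:

```python
import math

def rate_intensity(level_db, spont=60.0, driven=180.0,
                   threshold_db=20.0, slope=0.2):
    """Sigmoid rate-intensity function: firing rate (spikes/s) rises from
    the spontaneous rate toward saturation; `slope` controls how steeply
    the driven rate grows with sound level."""
    return spont + driven / (1.0 + math.exp(-slope * (level_db - threshold_db)))

# A steeper slope (as reported above 3.5 kHz) approaches saturation sooner
shallow = rate_intensity(30.0, slope=0.2)
steep = rate_intensity(30.0, slope=0.6)   # larger driven rate at the same level
```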

  19. Studies in auditory timing: 1. Simple patterns.

    PubMed

    Hirsh, I J; Monahan, C B; Grant, K W; Singh, P G

    1990-03-01

    Listeners' accuracy in discriminating one temporal pattern from another was measured in three psychophysical experiments. When the standard pattern consisted of equally timed (isochronic) brief tones, whose interonset intervals (IOIs) were 50, 100, or 200 msec, the accuracy in detecting an asynchrony or deviation of one tone in the sequence was about as would be predicted from older research on the discrimination of single time intervals (6%-8% at an IOI of 200 msec, 11%-12% at an IOI of 100 msec, and almost 20% at an IOI of 50 msec). In a series of 6 or 10 tones, this accuracy was independent of position of delay for IOIs of 100 and 200 msec. At 50 msec, however, accuracy depended on position, being worst in initial positions and best in final positions. When one tone in a series of six has a frequency different from the others, there is some evidence (at IOI = 200 msec) that interval discrimination is relatively poorer for the tone with the different frequency. Similarly, even if all tones have the same frequency but one interval in the series is made twice as long as the others, temporal discrimination is poorer for the tones bordering the longer interval, although this result is dependent on tempo or IOI. Results with these temporally more complex patterns may be interpreted in part by applying the relative Weber ratio to the intervals before and after the delayed tone. Alternatively, these experiments may show the influence of accent on the temporal discrimination of individual tones. PMID:2326145
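The accuracy figures quoted above are relative (Weber) thresholds: the just-detectable delay of one tone expressed as a fraction of the interonset interval. A minimal sketch; the absolute delays below are illustrative values chosen to match the reported percentage ranges:

```python
def weber_fraction(delta_t_ms, ioi_ms):
    """Relative temporal discrimination threshold: the smallest detectable
    shift of one tone, as a fraction of the interonset interval (IOI)."""
    return delta_t_ms / ioi_ms

# Illustrative thresholds in the ranges reported above:
weber_fraction(14.0, 200.0)   # 0.07  -> ~7% at a 200 ms IOI
weber_fraction(11.5, 100.0)   # 0.115 -> ~11.5% at a 100 ms IOI
weber_fraction(10.0, 50.0)    # 0.2   -> ~20% at a 50 ms IOI
```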

  20. Studies in auditory timing: 2. Rhythm patterns.

    PubMed

    Monahan, C B; Hirsh, I J

    1990-03-01

    Listeners discriminated between 6-tone rhythmic patterns that differed only in the delay of the temporal position of one of the tones. On each trial, feedback was given and the subject's performance determined the amount of delay on the next trial. The 6 tones of the patterns marked off 5 intervals. In the first experiment, patterns comprised 3 "short" and 2 "long" intervals: 12121, 21121, and so forth, where the long (2) was twice the length of a short (1). In the second experiment, patterns were the complements of the patterns in the first experiment and comprised 2 shorts and 3 longs: 21212, 12212, and so forth. Each pattern was tested 45 times (5 positions of the delayed tone x 3 tempos x 3 replications). Consistent with previous work on simple interval discrimination, absolute discrimination (delta t in milliseconds) was poorer the longer the intervals (i.e., the slower the tempo). Measures of relative discrimination (delta t/t, where t was the short interval, the long interval, or the average of 2 intervals surrounding the delayed tone) were better the slower the tempo. Beyond these global results, large interactions of pattern with position of the delayed tone and tempo suggest that different models of performance are needed to explain behavior at the different tempos. A Weber's law model fit the slow-tempo data better than did a model based on positions of "natural accent" (Povel & Essens, 1985). PMID:2326146

  1. Seasonal Plasticity of Precise Spike Timing in the Avian Auditory System

    PubMed Central

Caras, Melissa L.; Sen, Kamal; Rubel, Edwin W.; Brenowitz, Eliot A.

    2015-01-01

    Vertebrate audition is a dynamic process, capable of exhibiting both short- and long-term adaptations to varying listening conditions. Precise spike timing has long been known to play an important role in auditory encoding, but its role in sensory plasticity remains largely unexplored. We addressed this issue in Gambel's white-crowned sparrow (Zonotrichia leucophrys gambelii), a songbird that shows pronounced seasonal fluctuations in circulating levels of sex-steroid hormones, which are known to be potent neuromodulators of auditory function. We recorded extracellular single-unit activity in the auditory forebrain of males and females under different breeding conditions and used a computational approach to explore two potential strategies for the neural discrimination of sound level: one based on spike counts and one based on spike timing reliability. We report that breeding condition has robust sex-specific effects on spike timing. Specifically, in females, breeding condition increases the proportion of cells that rely solely on spike timing information and increases the temporal resolution required for optimal intensity encoding. Furthermore, in a functionally distinct subset of cells that are particularly well suited for amplitude encoding, female breeding condition enhances spike timing-based discrimination accuracy. No effects of breeding condition were observed in males. Our results suggest that high-resolution temporal discharge patterns may provide a plastic neural substrate for sensory coding. PMID:25716843

  3. A real-time auditory feedback system for retraining gait.

    PubMed

    Maulucci, Ruth A; Eckhouse, Richard H

    2011-01-01

    Stroke is the third leading cause of death in the United States and the principal cause of major long-term disability, incurring substantial distress as well as medical cost. Abnormal and inefficient gait patterns are widespread in survivors of stroke, yet gait is a major determinant of independent living. It is not surprising, therefore, that improvement of walking function is the most commonly stated priority of the survivors. Although many such individuals achieve the goal of walking, the caliber of their walking performance often limits endurance and quality of life. The ultimate goal of the research presented here is to use real-time auditory feedback to retrain gait in patients with chronic stroke. The strategy is to convert the motion of the foot into an auditory signal, and then use this auditory signal as feedback to inform the subject of the existence as well as the magnitude of error during walking. The initial stage of the project is described in this paper. The design and implementation of the new feedback method for lower limb training is explained. The question of whether the patient is physically capable of handling such training is explored. PMID:22255509
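The error-to-sound conversion described above can be illustrated with a simple parameter mapping from movement error to pitch. All constants below (base pitch, pitch span, clipping range) are hypothetical illustrations, not the system's actual design:

```python
def error_to_frequency(error_m, f_base_hz=440.0, f_span_hz=880.0,
                       max_error_m=0.20):
    """Map the magnitude of a foot-trajectory error (meters) to a tone
    frequency: zero error plays the base pitch, and pitch rises linearly
    with error magnitude up to a clipping range."""
    e = min(abs(error_m), max_error_m) / max_error_m  # normalize to [0, 1]
    return f_base_hz + f_span_hz * e

error_to_frequency(0.0)    # 440.0 Hz: on target
error_to_frequency(0.05)   # 660.0 Hz: a quarter of the maximum error
error_to_frequency(0.5)    # 1320.0 Hz: clipped at the maximum error
```

Mapping both the existence and the magnitude of error to a single audible dimension, as the paper describes, keeps the feedback interpretable while the patient walks.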

  4. GOES satellite time code dissemination

    NASA Technical Reports Server (NTRS)

    Beehler, R. E.

    1983-01-01

    The GOES time code system, the performance achieved to date, and some potential improvements in the future are discussed. The disseminated time code is originated from a triply redundant set of atomic standards, time code generators and related equipment maintained by NBS at NOAA's Wallops Island, VA satellite control facility. It is relayed by two GOES satellites located at 75 W and 135 W longitude on a continuous basis to users within North and South America (with overlapping coverage) and well out into the Atlantic and Pacific ocean areas. Downlink frequencies are near 468 MHz. The signals from both satellites are monitored and controlled from the NBS labs at Boulder, CO with additional monitoring input from geographically separated receivers in Washington, D.C. and Hawaii. Performance experience with the received time codes for periods ranging from several years to one day is discussed. Results are also presented for simultaneous, common-view reception by co-located receivers and by receivers separated by several thousand kilometers.

  5. Adaptation to shifted interaural time differences changes encoding of sound location in human auditory cortex.

    PubMed

    Trapeau, Régis; Schönwiesner, Marc

    2015-09-01

The auditory system infers the location of sound sources from the processing of different acoustic cues. These cues change during development and when assistive hearing devices are worn. Previous studies have found behavioral recalibration to modified localization cues in human adults, but very little is known about the neural correlates and mechanisms of this plasticity. We equipped participants with digital devices, worn in the ear canal, that allowed us to delay sound input to one ear and thus modify interaural time differences, a major cue for horizontal sound localization. Participants wore the digital earplugs continuously for nine days while engaged in day-to-day activities. Daily psychoacoustical testing showed rapid recalibration to the manipulation and confirmed that adults can adapt to shifted interaural time differences in their daily multisensory environment. High-resolution functional MRI scans performed before and after recalibration showed that recalibration was accompanied by changes in hemispheric lateralization of auditory cortex activity. These changes corresponded to a shift in spatial coding of sound direction comparable to the observed behavioral recalibration. Fitting the imaging results with a model of auditory spatial processing also revealed small shifts in voxel-wise spatial tuning within each hemisphere. PMID:26054873

  6. Auditory Stimuli Coding by Postsynaptic Potential and Local Field Potential Features

    PubMed Central

    de Assis, Juliana M.; Santos, Mikaelle O.; de Assis, Francisco M.

    2016-01-01

The relation between physical stimuli and neurophysiological responses, such as action potentials (spikes) and Local Field Potentials (LFP), has recently been investigated to explain how neurons encode auditory information. However, none of these experiments presented analyses with postsynaptic potentials (PSPs). In the present study, we have estimated information values between auditory stimuli and amplitudes/latencies of PSPs and LFPs in anesthetized rats in vivo. To obtain these values, a new method of information estimation was used. This method produced more accurate estimates than those obtained by using the traditional binning method, a fact that was corroborated by simulated data. The traditional binning method could not attain such accuracy even when adjusted by quadratic extrapolation. We found that the information obtained from LFP amplitude variation was significantly greater than the information obtained from PSP amplitude variation. This confirms the fact that LFP reflects the action of many PSPs. Results have shown that the auditory cortex codes more information of stimuli frequency with slow oscillations in groups of neurons than it does with slow oscillations in neurons separately. PMID:27513950
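The "traditional binning method" referred to here is the plug-in estimator of mutual information computed from discretized stimulus-response pairs. The sketch below is a generic illustration of that estimator, not the authors' improved method; it also hints at why corrections such as quadratic extrapolation are needed, since the plug-in estimate is biased upward for small samples.

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Plug-in ('binning') estimate of mutual information I(X;Y) in bits,
    from a list of (x, y) samples already discretized into bins.
    Biased upward for small sample counts, which motivates bias
    corrections such as quadratic extrapolation."""
    n = len(pairs)
    pxy = Counter(pairs)                    # joint bin counts
    px = Counter(x for x, _ in pairs)       # marginal counts for X
    py = Counter(y for _, y in pairs)       # marginal counts for Y
    mi = 0.0
    for (x, y), c in pxy.items():
        mi += (c / n) * math.log2(c * n / (px[x] * py[y]))
    return mi
```

For perfectly dependent binary data the estimate is 1 bit; for independent data with these exact counts it is 0 bits.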

  7. Auditory reafferences: the influence of real-time feedback on movement control

    PubMed Central

    Kennel, Christian; Streese, Lukas; Pizzera, Alexandra; Justen, Christoph; Hohmann, Tanja; Raab, Markus

    2015-01-01

    Auditory reafferences are real-time auditory products created by a person’s own movements. Whereas the interdependency of action and perception is generally well studied, the auditory feedback channel and the influence of perceptual processes during movement execution remain largely unconsidered. We argue that movements have a rhythmic character that is closely connected to sound, making it possible to manipulate auditory reafferences online to understand their role in motor control. We examined if step sounds, occurring as a by-product of running, have an influence on the performance of a complex movement task. Twenty participants completed a hurdling task in three auditory feedback conditions: a control condition with normal auditory feedback, a white noise condition in which sound was masked, and a delayed auditory feedback condition. Overall time and kinematic data were collected. Results show that delayed auditory feedback led to a significantly slower overall time and changed kinematic parameters. Our findings complement previous investigations in a natural movement situation with non-artificial auditory cues. Our results support the existing theoretical understanding of action–perception coupling and hold potential for applied work, where naturally occurring movement sounds can be implemented in the motor learning processes. PMID:25688230

  8. The Neural Code for Auditory Space Depends on Sound Frequency and Head Size in an Optimal Manner

    PubMed Central

    Harper, Nicol S.; Scott, Brian H.; Semple, Malcolm N.; McAlpine, David

    2014-01-01

    A major cue to the location of a sound source is the interaural time difference (ITD)–the difference in sound arrival time at the two ears. The neural representation of this auditory cue is unresolved. The classic model of ITD coding, dominant for a half-century, posits that the distribution of best ITDs (the ITD evoking a neuron’s maximal response) is unimodal and largely within the range of ITDs permitted by head-size. This is often interpreted as a place code for source location. An alternative model, based on neurophysiology in small mammals, posits a bimodal distribution of best ITDs with exquisite sensitivity to ITDs generated by means of relative firing rates between the distributions. Recently, an optimal-coding model was proposed, unifying the disparate features of these two models under the framework of efficient coding by neural populations. The optimal-coding model predicts that distributions of best ITDs depend on head size and sound frequency: for high frequencies and large heads it resembles the classic model, for low frequencies and small head sizes it resembles the bimodal model. The optimal-coding model makes key, yet unobserved, predictions: for many species, including humans, both forms of neural representation are employed, depending on sound frequency. Furthermore, novel representations are predicted for intermediate frequencies. Here, we examine these predictions in neurophysiological data from five mammalian species: macaque, guinea pig, cat, gerbil and kangaroo rat. We present the first evidence supporting these untested predictions, and demonstrate that different representations appear to be employed at different sound frequencies in the same species. PMID:25372405
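The "range of ITDs permitted by head size" can be approximated with a spherical-head (Woodworth) model, which also makes the predicted head-size and frequency dependence concrete: once the physiological ITD range exceeds half the stimulus period, interaural phase becomes ambiguous. A sketch under that model (head radii illustrative, not taken from the paper):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, in air

def max_itd_us(head_radius_m):
    """Maximum interaural time difference in microseconds for a spherical
    head (Woodworth model, source at 90 degrees azimuth):
    ITD = r * (theta + sin(theta)) / c with theta = pi/2."""
    return 1e6 * head_radius_m * (math.pi / 2 + 1.0) / SPEED_OF_SOUND

def itd_exceeds_half_cycle(head_radius_m, freq_hz):
    """True when the physiological ITD range exceeds half the stimulus
    period, so interaural phase alone no longer determines the ITD --
    one reason the optimal code is predicted to depend on both head
    size and sound frequency."""
    half_period_us = 0.5e6 / freq_hz
    return max_itd_us(head_radius_m) > half_period_us
```

With an illustrative human head radius of about 9 cm the maximum ITD is roughly 675 microseconds, so phase becomes ambiguous near 1 kHz; for a much smaller head (around 1.5 cm) it does not.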

  10. Impairment of Auditory-Motor Timing and Compensatory Reorganization after Ventral Premotor Cortex Stimulation

    PubMed Central

    Kornysheva, Katja; Schubotz, Ricarda I.

    2011-01-01

Integrating auditory and motor information often requires precise timing as in speech and music. In humans, the position of the ventral premotor cortex (PMv) in the dorsal auditory stream renders this area a node for auditory-motor integration. Yet, it remains unknown whether the PMv is critical for auditory-motor timing and which activity increases help to preserve task performance following its disruption. Sixteen healthy volunteers participated in two sessions with fMRI measured at baseline and following repetitive transcranial magnetic stimulation (rTMS) of either the left PMv or a control region. Subjects synchronized left or right finger tapping to sub-second beat rates of auditory rhythms in the experimental task, and produced self-paced tapping during spectrally matched auditory stimuli in the control task. Left PMv rTMS impaired auditory-motor synchronization accuracy in the first sub-block following stimulation (p<0.01, Bonferroni corrected), but spared motor timing and attention to task. Task-related activity increased in the homologue right PMv, but did not predict the behavioral effect of rTMS. In contrast, anterior midline cerebellum revealed the most pronounced activity increase in less impaired subjects. The present findings suggest a critical role of the left PMv in feed-forward computations enabling accurate auditory-motor timing, which can be compensated by activity modulations in the cerebellum, but not in the homologue region contralateral to stimulation. PMID:21738657

  11. Fractionated Reaction Time Responses to Auditory and Electrocutaneous Stimuli.

    ERIC Educational Resources Information Center

    Beehler, Pamela J. Hoyes; Kamen, Gary

    1986-01-01

    An investigation was conducted to equate auditory and electrocutaneous stimuli. These equated stimuli were used in a second investigation examining neuromotor responses to stimuli of varying intensity. Results are provided. (Author/MT)

  12. A temporal predictive code for voice motor control: Evidence from ERP and behavioral responses to pitch-shifted auditory feedback.

    PubMed

    Behroozmand, Roozbeh; Sangtian, Stacey; Korzyukov, Oleg; Larson, Charles R

    2016-04-01

The predictive coding model suggests that voice motor control is regulated by a process in which the mismatch (error) between feedforward predictions and sensory feedback is detected and used to correct vocal motor behavior. In this study, we investigated how predictions about timing of pitch perturbations in voice auditory feedback would modulate ERP and behavioral responses during vocal production. We designed six counterbalanced blocks in which a +100 cents pitch-shift stimulus perturbed voice auditory feedback during vowel sound vocalizations. In three blocks, there was a fixed delay (500, 750 or 1000 ms) between voice and pitch-shift stimulus onset (predictable), whereas in the other three blocks, stimulus onset delay was randomized between 500, 750 and 1000 ms (unpredictable). We found that subjects produced compensatory (opposing) vocal responses that started at 80 ms after the onset of the unpredictable stimuli. However, for predictable stimuli, subjects initiated vocal responses at 20 ms before and followed the direction of pitch shifts in voice feedback. Analysis of ERPs showed that the amplitudes of the N1 and P2 components were significantly reduced in response to predictable compared with unpredictable stimuli. These findings indicate that predictions about temporal features of sensory feedback can modulate vocal motor behavior. In the context of the predictive coding model, temporally-predictable stimuli are learned and reinforced by the internal feedforward system, and as indexed by the ERP suppression, the sensory feedback contribution is reduced for their processing. These findings provide new insights into the neural mechanisms of vocal production and motor control. PMID:26835556
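The +100 cent perturbation corresponds to a fixed frequency ratio: 100 cents is one equal-tempered semitone, so the shifted feedback multiplies the voice fundamental by 2^(100/1200). A minimal sketch of the conversion:

```python
def cents_to_ratio(cents):
    """Frequency ratio corresponding to a pitch shift in cents
    (100 cents = one equal-tempered semitone, 1200 cents = one octave)."""
    return 2.0 ** (cents / 1200.0)

def shift_pitch(f0_hz, cents):
    """Apply a pitch shift to a fundamental frequency, e.g. the +100 cent
    feedback perturbation used in such experiments."""
    return f0_hz * cents_to_ratio(cents)
```

For a 200 Hz voice fundamental, a +100 cent shift raises the feedback pitch to about 211.9 Hz.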

  13. Opposite Distortions in Interval Timing Perception for Visual and Auditory Stimuli with Temporal Modulations

    PubMed Central

    Yuasa, Kenichi; Yotsumoto, Yuko

    2015-01-01

    When an object is presented visually and moves or flickers, the perception of its duration tends to be overestimated. Such an overestimation is called time dilation. Perceived time can also be distorted when a stimulus is presented aurally as an auditory flutter, but the mechanisms and their relationship to visual processing remains unclear. In the present study, we measured interval timing perception while modulating the temporal characteristics of visual and auditory stimuli, and investigated whether the interval times of visually and aurally presented objects shared a common mechanism. In these experiments, participants compared the durations of flickering or fluttering stimuli to standard stimuli, which were presented continuously. Perceived durations for auditory flutters were underestimated, while perceived durations of visual flickers were overestimated. When auditory flutters and visual flickers were presented simultaneously, these distortion effects were cancelled out. When auditory flutters were presented with a constantly presented visual stimulus, the interval timing perception of the visual stimulus was affected by the auditory flutters. These results indicate that interval timing perception is governed by independent mechanisms for visual and auditory processing, and that there are some interactions between the two processing systems. PMID:26292285

  14. Bat auditory cortex – model for general mammalian auditory computation or special design solution for active time perception?

    PubMed

    Kössl, Manfred; Hechavarria, Julio; Voss, Cornelia; Schaefer, Markus; Vater, Marianne

    2015-03-01

Audition in bats serves passive orientation, alerting functions and communication as it does in other vertebrates. In addition, bats have evolved echolocation for orientation and prey detection and capture. This has placed selective pressure on the auditory system with regard to echolocation-relevant temporal computation and frequency analysis. The present review attempts to evaluate in which respects the processing modules of bat auditory cortex (AC) are a model for typical mammalian AC function or are designed for echolocation-unique purposes. We conclude that, while cortical area arrangement and cortical frequency processing do not deviate greatly from those of other mammals, the echo delay time-sensitive dorsal cortex regions contain special designs for very powerful time perception. Different bat species have either a unique chronotopic cortex topography or a distributed salt-and-pepper representation of echo delay. The two designs seem to enable similar behavioural performance. PMID:25728173

  15. Time-dependent Neural Processing of Auditory Feedback during Voice Pitch Error Detection

    PubMed Central

    Behroozmand, Roozbeh; Liu, Hanjun; Larson, Charles R.

    2012-01-01

    The neural responses to sensory consequences of a self-produced motor act are suppressed compared with those in response to a similar but externally generated stimulus. Previous studies in the somatosensory and auditory systems have shown that the motor-induced suppression of the sensory mechanisms is sensitive to delays between the motor act and the onset of the stimulus. The present study investigated time-dependent neural processing of auditory feedback in response to self-produced vocalizations. ERPs were recorded in response to normal and pitch-shifted voice auditory feedback during active vocalization and passive listening to the playback of the same vocalizations. The pitch-shifted stimulus was delivered to the subjects’ auditory feedback after a randomly chosen time delay between the vocal onset and the stimulus presentation. Results showed that the neural responses to delayed feedback perturbations were significantly larger than those in response to the pitch-shifted stimulus occurring at vocal onset. Active vocalization was shown to enhance neural responsiveness to feedback alterations only for nonzero delays compared with passive listening to the playback. These findings indicated that the neural mechanisms of auditory feedback processing are sensitive to timing between the vocal motor commands and the incoming auditory feedback. Time-dependent neural processing of auditory feedback may be an important feature of the audio-vocal integration system that helps to improve the feedback-based monitoring and control of voice structure through vocal error detection and correction. PMID:20146608

  16. Predicted effects of sensorineural hearing loss on across-fiber envelope coding in the auditory nerve

    PubMed Central

    Swaminathan, Jayaganesh; Heinz, Michael G.

    2011-01-01

    Cross-channel envelope correlations are hypothesized to influence speech intelligibility, particularly in adverse conditions. Acoustic analyses suggest speech envelope correlations differ for syllabic and phonemic ranges of modulation frequency. The influence of cochlear filtering was examined here by predicting cross-channel envelope correlations in different speech modulation ranges for normal and impaired auditory-nerve (AN) responses. Neural cross-correlation coefficients quantified across-fiber envelope coding in syllabic (0–5 Hz), phonemic (5–64 Hz), and periodicity (64–300 Hz) modulation ranges. Spike trains were generated from a physiologically based AN model. Correlations were also computed using the model with selective hair-cell damage. Neural predictions revealed that envelope cross-correlation decreased with increased characteristic-frequency separation for all modulation ranges (with greater syllabic-envelope correlation than phonemic or periodicity). Syllabic envelope was highly correlated across many spectral channels, whereas phonemic and periodicity envelopes were correlated mainly between adjacent channels. Outer-hair-cell impairment increased the degree of cross-channel correlation for phonemic and periodicity ranges for speech in quiet and in noise, thereby reducing the number of independent neural information channels for envelope coding. In contrast, outer-hair-cell impairment was predicted to decrease cross-channel correlation for syllabic envelopes in noise, which may partially account for the reduced ability of hearing-impaired listeners to segregate speech in complex backgrounds. PMID:21682421
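The across-fiber comparison described here reduces, in simplified form, to correlating band-limited envelopes between frequency channels. The sketch below uses a plain Pearson correlation as a stand-in for the paper's neural cross-correlation coefficients (which are computed from model spike trains, not shown here); the envelope arrays are assumed to be pre-filtered into one of the modulation ranges.

```python
import math

def envelope_correlation(env_a, env_b):
    """Pearson correlation between the (band-limited) envelopes of two
    frequency channels. Values near 1 mean the channels carry redundant
    envelope information; values near 0 mean independent channels."""
    n = len(env_a)
    ma = sum(env_a) / n
    mb = sum(env_b) / n
    cov = sum((a - ma) * (b - mb) for a, b in zip(env_a, env_b))
    va = sum((a - ma) ** 2 for a in env_a)
    vb = sum((b - mb) ** 2 for b in env_b)
    return cov / math.sqrt(va * vb)
```

Under this measure, the paper's prediction that outer-hair-cell impairment increases cross-channel correlation corresponds to fewer effectively independent envelope channels.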

  17. Rate and synchronization measures of periodicity coding in cat primary auditory cortex.

    PubMed

    Eggermont, J J

    1991-11-01

    Periodicity coding was studied in primary auditory cortex of the ketamine anesthetized cat by simultaneously recording with two electrodes from up to 6 neural units in response to one second long click trains presented once per 3 s. Trains with click rates of 1, 2, 4, 8, 16 and 32/s were used and the responses of the single units were quantified by both rate measures (entrainment and rate modulation transfer function, rMTF) and synchronization measures (vector strength VS and temporal modulation transfer functions, tMTF). The rate measures resulted in low-pass functions of click rate and the synchrony measures resulted in band-pass functions of click rate. Limiting rates (-6 dB point of maximum response) were in the range of 3-24 Hz depending on the measure used. Best modulating frequencies were in the range of 5-8 Hz again depending on the synchrony measure used. It appeared that especially the VS was highly sensitive to spontaneous firing rate, duration of the post click suppression and the size of the rebound response after the suppression. These factors were dominantly responsible for the band-pass character of the VS-rate function and the peak VS frequency was nearly identical to the inverse of the suppression period. It is concluded that the use of the VS and to a lesser extent also the tMTF as the sole measure for the characterization of periodicity coding is not recommended in cases where there is a strong suppression of spontaneous activity. The combination of entrainment and tMTF appeared to characterize the periodicity coding in an unambiguous way. PMID:1769910
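Vector strength (VS), the synchronization measure at issue in this abstract, treats each spike as a unit phasor at its stimulus phase and takes the length of the mean phasor: VS = 1 for perfect phase locking, 0 for uniformly distributed spike phases. A minimal sketch of the standard formula:

```python
import cmath

def vector_strength(spike_times_s, stim_freq_hz):
    """Vector strength of spike synchronization to a periodic stimulus:
    VS = |sum_k exp(i * 2*pi * f * t_k)| / n.
    Each spike contributes a unit phasor at its phase within the stimulus
    cycle; the result is the length of the mean phasor in [0, 1]."""
    n = len(spike_times_s)
    phasor_sum = sum(cmath.exp(2j * cmath.pi * stim_freq_hz * t)
                     for t in spike_times_s)
    return abs(phasor_sum) / n
```

Spikes landing at the same phase every cycle give VS near 1; spikes alternating between opposite phases cancel and give VS near 0, which illustrates why VS alone can mischaracterize responses shaped by post-click suppression and rebound firing.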

  18. Auditory Spatial Coding Flexibly Recruits Anterior, but Not Posterior, Visuotopic Parietal Cortex.

    PubMed

    Michalka, Samantha W; Rosen, Maya L; Kong, Lingqiang; Shinn-Cunningham, Barbara G; Somers, David C

    2016-03-01

    Audition and vision both convey spatial information about the environment, but much less is known about mechanisms of auditory spatial cognition than visual spatial cognition. Human cortex contains >20 visuospatial map representations but no reported auditory spatial maps. The intraparietal sulcus (IPS) contains several of these visuospatial maps, which support visuospatial attention and short-term memory (STM). Neuroimaging studies also demonstrate that parietal cortex is activated during auditory spatial attention and working memory tasks, but prior work has not demonstrated that auditory activation occurs within visual spatial maps in parietal cortex. Here, we report both cognitive and anatomical distinctions in the auditory recruitment of visuotopically mapped regions within the superior parietal lobule. An auditory spatial STM task recruited anterior visuotopic maps (IPS2-4, SPL1), but an auditory temporal STM task with equivalent stimuli failed to drive these regions significantly. Behavioral and eye-tracking measures rule out task difficulty and eye movement explanations. Neither auditory task recruited posterior regions IPS0 or IPS1, which appear to be exclusively visual. These findings support the hypothesis of multisensory spatial processing in the anterior, but not posterior, superior parietal lobule and demonstrate that recruitment of these maps depends on auditory task demands. PMID:26656996

  20. Impaired timing adjustments in response to time-varying auditory perturbation during connected speech production in persons who stutter.

    PubMed

    Cai, Shanqing; Beal, Deryk S; Ghosh, Satrajit S; Guenther, Frank H; Perkell, Joseph S

    2014-02-01

    Auditory feedback (AF), the speech signal received by a speaker's own auditory system, contributes to the online control of speech movements. Recent studies based on AF perturbation provided evidence for abnormalities in the integration of auditory error with ongoing articulation and phonation in persons who stutter (PWS), but stopped short of examining connected speech. This is a crucial limitation considering the importance of sequencing and timing in stuttering. In the current study, we imposed time-varying perturbations on AF while PWS and fluent participants uttered a multisyllabic sentence. Two distinct types of perturbations were used to separately probe the control of the spatial and temporal parameters of articulation. While PWS exhibited only subtle anomalies in the AF-based spatial control, their AF-based fine-tuning of articulatory timing was substantially weaker than normal, especially in early parts of the responses, indicating slowness in the auditory-motor integration for temporal control. PMID:24486601

  1. Speech Enhancement for Listeners with Hearing Loss Based on a Model for Vowel Coding in the Auditory Midbrain

    PubMed Central

    Rao, Akshay; Carney, Laurel H.

    2015-01-01

    A novel signal-processing strategy is proposed to enhance speech for listeners with hearing loss. The strategy focuses on improving vowel perception based on a recent hypothesis for vowel coding in the auditory system. Traditionally, studies of neural vowel encoding have focused on the representation of formants (peaks in vowel spectra) in the discharge patterns of the population of auditory-nerve (AN) fibers. A recent hypothesis focuses instead on vowel encoding in the auditory midbrain, and suggests a robust representation of formants. AN fiber discharge rates are characterized by pitch-related fluctuations having frequency-dependent modulation depths. Fibers tuned to frequencies near formants exhibit weaker pitch-related fluctuations than those tuned to frequencies between formants. Many auditory midbrain neurons show tuning to amplitude modulation frequency in addition to audio frequency. According to the auditory midbrain vowel encoding hypothesis, the response map of a population of midbrain neurons tuned to modulations near voice pitch exhibits minima near formant frequencies, due to the lack of strong pitch-related fluctuations at their inputs. This representation is robust over the range of noise conditions in which speech intelligibility is also robust for normal-hearing listeners. Based on this hypothesis, a vowel-enhancement strategy has been proposed that aims to restore vowel encoding at the level of the auditory midbrain. The signal processing consists of pitch tracking, formant tracking and formant enhancement. The novel formant-tracking method proposed here estimates the first two formant frequencies by modeling characteristics of the auditory periphery, such as saturated discharge rates of AN fibers and modulation tuning properties of auditory midbrain neurons. The formant enhancement stage aims to restore the representation of formants at the level of the midbrain by increasing the dominance of a single harmonic near each formant and saturating

  3. Using Spatial Manipulation to Examine Interactions between Visual and Auditory Encoding of Pitch and Time

    PubMed Central

    McLachlan, Neil M.; Greco, Loretta J.; Toner, Emily C.; Wilson, Sarah J.

    2010-01-01

    Music notations use both symbolic and spatial representation systems. Novice musicians do not have the training to associate symbolic information with musical identities, such as chords or rhythmic and melodic patterns. They provide an opportunity to explore the mechanisms underpinning multimodal learning when spatial encoding strategies of feature dimensions might be expected to dominate. In this study, we applied a range of transformations (such as time reversal) to short melodies and rhythms and asked novice musicians to identify them with or without the aid of notation. Performance using a purely spatial (graphic) notation was contrasted with the more symbolic, traditional western notation over a series of weekly sessions. The results showed learning effects for both notation types, but performance improved more for graphic notation. This points to greater compatibility of auditory and visual neural codes for novice musicians when using spatial notation, suggesting that pitch and time may be spatially encoded in multimodal associative memory. The findings also point to new strategies for training novice musicians. PMID:21833287

  5. Plasticity in the neural coding of auditory space in the mammalian brain

    NASA Astrophysics Data System (ADS)

    King, Andrew J.; Parsons, Carl H.; Moore, David R.

    2000-10-01

Sound localization relies on the neural processing of monaural and binaural spatial cues that arise from the way sounds interact with the head and external ears. Neurophysiological studies of animals raised with abnormal sensory inputs show that the map of auditory space in the superior colliculus is shaped during development by both auditory and visual experience. An example of this plasticity is provided by monaural occlusion during infancy, which leads to compensatory changes in auditory spatial tuning that tend to preserve the alignment between the neural representations of visual and auditory space. Adaptive changes also take place in sound localization behavior, as demonstrated by the fact that ferrets raised and tested with one ear plugged learn to localize as accurately as control animals. In both cases, these adjustments may involve greater use of monaural spectral cues provided by the other ear. Although plasticity in the auditory space map seems to be restricted to development, adult ferrets show some recovery of sound localization behavior after long-term monaural occlusion. The capacity for behavioral adaptation is, however, task dependent, because auditory spatial acuity and binaural unmasking (a measure of the spatial contribution to the "cocktail party effect") are permanently impaired by chronically plugging one ear, in infancy and especially in adulthood. Experience-induced plasticity allows the neural circuitry underlying sound localization to be customized to individual characteristics, such as the size and shape of the head and ears, and to compensate for natural conductive hearing losses, including those associated with middle ear disease in infancy.

  6. Temporal envelope of time-compressed speech represented in the human auditory cortex

    PubMed Central

    Nourski, Kirill V.; Reale, Richard A.; Oya, Hiroyuki; Kawasaki, Hiroto; Kovach, Christopher K.; Chen, Haiming; Howard, Matthew A.; Brugge, John F.

    2010-01-01

    Speech comprehension relies on temporal cues contained in the speech envelope, and the auditory cortex has been implicated as playing a critical role in encoding this temporal information. We investigated auditory cortical responses to speech stimuli in subjects undergoing invasive electrophysiological monitoring for pharmacologically refractory epilepsy. Recordings were made from multi-contact electrodes implanted in Heschl’s gyrus (HG). Speech sentences, time-compressed from 0.75 to 0.20 of natural speaking rate, elicited average evoked potentials (AEPs) and increases in event-related band power (ERBP) of cortical high frequency (70–250 Hz) activity. Cortex of posteromedial HG, the presumed core of human auditory cortex, represented the envelope of speech stimuli in the AEP and ERBP. Envelope-following in ERBP, but not in AEP, was evident in both language dominant and non-dominant hemispheres for relatively high degrees of compression where speech was not comprehensible. Compared to posteromedial HG, responses from anterolateral HG — an auditory belt field — exhibited longer latencies, lower amplitudes and little or no time locking to the speech envelope. The ability of the core auditory cortex to follow the temporal speech envelope over a wide range of speaking rates leads us to conclude that such capacity in itself is not a limiting factor for speech comprehension. PMID:20007480
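
    The temporal envelope discussed in this record is, in signal-processing terms, the magnitude of the analytic signal. As a generic, hedged illustration (not the study's analysis pipeline), the envelope of a waveform can be computed with an FFT-based Hilbert transform:

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal (same algorithm as scipy.signal.hilbert)."""
    n = len(x)
    spectrum = np.fft.fft(x)
    weights = np.zeros(n)
    weights[0] = 1.0
    if n % 2 == 0:                      # zero out negative frequencies,
        weights[n // 2] = 1.0           # double the positive ones
        weights[1:n // 2] = 2.0
    else:
        weights[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spectrum * weights)

def temporal_envelope(x):
    """Slowly varying amplitude envelope of a real 1-D signal."""
    return np.abs(analytic_signal(x))
```

    Applied to an amplitude-modulated tone, the recovered envelope tracks the modulator; applied to speech, the same operation yields the slow envelope that the cortical responses described above were shown to follow.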

  7. Brain Correlates of Early Auditory Processing Are Attenuated by Expectations for Time and Pitch

    ERIC Educational Resources Information Center

    Lange, Kathrin

    2009-01-01

    The present study investigated how auditory processing is modulated by expectations for time and pitch by analyzing reaction times and event-related potentials (ERPs). In two experiments, tone sequences were presented to the participants, who had to discriminate whether the last tone of the sequence contained a short gap or was continuous…

  8. Multiplicative auditory spatial receptive fields created by a hierarchy of population codes.

    PubMed

    Fischer, Brian J; Anderson, Charles H; Peña, José Luis

    2009-01-01

A multiplicative combination of tuning to interaural time difference (ITD) and interaural level difference (ILD) contributes to the generation of spatially selective auditory neurons in the owl's midbrain. Previous analyses of multiplicative responses in the owl have not taken into consideration the frequency-dependence of ITD and ILD cues that occur under natural listening conditions. Here, we present a model for the responses of ITD- and ILD-sensitive neurons in the barn owl's inferior colliculus that satisfies constraints raised by experimental data on frequency convergence, multiplicative interaction of ITD and ILD, and response properties of afferent neurons. We propose that multiplication between ITD- and ILD-dependent signals occurs only within frequency channels and that frequency integration occurs using a linear-threshold mechanism. The model reproduces the experimentally observed nonlinear responses to ITD and ILD in the inferior colliculus, with greater accuracy than previous models. We show that linear-threshold frequency integration allows the system to represent multiple sound sources with natural sound localization cues, whereas multiplicative frequency integration does not. Nonlinear responses in the owl's inferior colliculus can thus be generated using a combination of cellular and network mechanisms, showing that multiple elements of previous theories can be combined in a single system. PMID:19956693
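
    The proposed architecture, within-channel multiplication of ITD and ILD signals followed by linear-threshold integration across frequency channels, can be sketched in a few lines. The Gaussian tuning shapes and all parameter values below are illustrative assumptions, not the paper's fitted model:

```python
import math

def gaussian_tuning(x, best, width):
    """Illustrative bell-shaped tuning curve (assumed shape, not fitted)."""
    return math.exp(-0.5 * ((x - best) / width) ** 2)

def space_map_response(itd_us, ild_db, channels, threshold=0.5):
    """Space-map response: per-channel ITD x ILD products are summed across
    frequency channels, then passed through a linear-threshold nonlinearity.
    channels: list of (best_itd_us, best_ild_db) tuples, one per channel."""
    total = 0.0
    for best_itd, best_ild in channels:
        r_itd = gaussian_tuning(itd_us, best_itd, 30.0)   # ITD tuning, microseconds
        r_ild = gaussian_tuning(ild_db, best_ild, 5.0)    # ILD tuning, dB
        total += r_itd * r_ild                            # within-channel multiplication
    return max(0.0, total - threshold)                    # linear-threshold integration
```

    With all channels tuned to the same cue pair, the response is large at the matched (ITD, ILD) combination and zero far from it, reproducing the multiplicative spatial selectivity described above in toy form.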

  9. Developing and Selecting Auditory Warnings for a Real-Time Behavioral Intervention

    PubMed Central

    Bellettiere, John; Hughes, Suzanne C.; Liles, Sandy; Boman-Davis, Marie; Klepeis, Neil; Blumberg, Elaine; Mills, Jeff; Berardi, Vincent; Obayashi, Saori; Allen, T. Tracy; Hovell, Melbourne F.

    2015-01-01

Real-time sensing and computing technologies are increasingly used in the delivery of real-time health behavior interventions. Auditory signals play a critical role in many of these interventions, impacting not only behavioral response but also treatment adherence and participant retention. Yet, few behavioral interventions that employ auditory feedback report the characteristics of sounds used and even fewer design signals specifically for their intervention. This paper describes a four-step process used in developing and selecting auditory warnings for a behavioral trial designed to reduce indoor secondhand smoke exposure. In step one, relevant information was gathered from ergonomic and behavioral science literature to assist a panel of research assistants in developing criteria for intervention-specific auditory feedback. In step two, multiple sounds were identified through internet searches and modified in accordance with the developed criteria, and two sounds were selected that best met those criteria. In step three, a survey was conducted among 64 persons from the primary sampling frame of the larger behavioral trial to compare the relative aversiveness of sounds, determine respondents' reported behavioral reactions to those signals, and assess participants' preference between sounds. In the final step, survey results were used to select the appropriate sound for auditory warnings. Ultimately, a single-tone pulse, 500 milliseconds (ms) in length that repeats every 270 ms for 3 cycles was chosen for the behavioral trial. The methods described herein represent one example of steps that can be followed to develop and select auditory feedback tailored for a given behavioral intervention. PMID:25745633

  10. The shape of ears to come: dynamic coding of auditory space.

    PubMed

    King, A J.; Schnupp, J W.H.; Doubell, T P.

    2001-06-01

    In order to pinpoint the location of a sound source, we make use of a variety of spatial cues that arise from the direction-dependent manner in which sounds interact with the head, torso and external ears. Accurate sound localization relies on the neural discrimination of tiny differences in the values of these cues and requires that the brain circuits involved be calibrated to the cues experienced by each individual. There is growing evidence that the capacity for recalibrating auditory localization continues well into adult life. Many details of how the brain represents auditory space and of how those representations are shaped by learning and experience remain elusive. However, it is becoming increasingly clear that the task of processing auditory spatial information is distributed over different regions of the brain, some working hierarchically, others independently and in parallel, and each apparently using different strategies for encoding sound source location. PMID:11390297

  11. Frequency tuning and intensity coding of sound in the auditory periphery of the lake sturgeon, Acipenser fulvescens

    PubMed Central

    Meyer, Michaela; Fay, Richard R.; Popper, Arthur N.

    2010-01-01

    Acipenser fulvescens, the lake sturgeon, belongs to one of the few extant non-teleost ray-finned (bony) fishes. The sturgeons (family Acipenseridae) have a phylogenetic history that dates back about 250 million years. The study reported here is the first investigation of peripheral coding strategies for spectral analysis in the auditory system in a non-teleost bony fish. We used a shaker system to simulate the particle motion component of sound during electrophysiological recordings of isolated single units from the eighth nerve innervating the saccule and lagena. Background activity and response characteristics of saccular and lagenar afferents (such as thresholds, response–level functions and temporal firing) resembled the ones found in teleosts. The distribution of best frequencies also resembled data in teleosts (except for Carassius auratus, goldfish) tested with the same stimulation method. The saccule and lagena in A. fulvescens contain otoconia, in contrast to the solid otoliths found in teleosts, however, this difference in otolith structure did not appear to affect threshold, frequency tuning, intensity- or temporal responses of auditory afferents. In general, the physiological characteristics common to A. fulvescens, teleosts and land vertebrates reflect important functions of the auditory system that may have been conserved throughout the evolution of vertebrates. PMID:20400642

  12. Auditory and motor contributions to the timing of melodies under cognitive load.

    PubMed

    Maes, Pieter-Jan; Giacofci, Madison; Leman, Marc

    2015-10-01

    Current theoretical models and empirical research suggest that sensorimotor control and feedback processes may guide time perception and production. In the current study, we investigated the role of motor control and auditory feedback in an interval-production task performed under heightened cognitive load. We hypothesized that general associative learning mechanisms enable the calibration of time against patterns of dynamic change in motor control processes and auditory feedback information. In Experiment 1, we applied a dual-task interference paradigm consisting of a finger-tapping (continuation) task in combination with a working memory task. Participants (nonmusicians) had to either perform or avoid arm movements between successive key presses (continuous vs. discrete). Auditory feedback from a key press (a piano tone) filled either the complete duration of the target interval or only a small part (long vs. short). Results suggested that both continuous movement control and long piano feedback tones contributed to regular timing production. In Experiment 2, we gradually adjusted the duration of the long auditory feedback tones throughout the duration of a trial. The results showed that a gradual shortening of tones throughout time increased the rate at which participants performed tone onsets. Overall, our findings suggest that the human perceptual-motor system may be important in guiding temporal behavior under cognitive load. PMID:26098119

  13. Auditory Imagery Shapes Movement Timing and Kinematics: Evidence from a Musical Task

    ERIC Educational Resources Information Center

    Keller, Peter E.; Dalla Bella, Simone; Koch, Iring

    2010-01-01

    The role of anticipatory auditory imagery in music-like sequential action was investigated by examining timing accuracy and kinematics using a motion capture system. Musicians responded to metronomic pacing signals by producing three unpaced taps on three vertically aligned keys at the given tempo. Taps triggered tones in two out of three blocked…

  14. Reaction Time and Accuracy in Individuals with Aphasia during Auditory Vigilance Tasks

    ERIC Educational Resources Information Center

    Laures, Jacqueline S.

    2005-01-01

    Research indicates that attentional deficits exist in aphasic individuals. However, relatively little is known about auditory vigilance performance in individuals with aphasia. The current study explores reaction time (RT) and accuracy in 10 aphasic participants and 10 nonbrain-damaged controls during linguistic and nonlinguistic auditory…

  15. Time-Varying Vocal Folds Vibration Detection Using a 24 GHz Portable Auditory Radar.

    PubMed

    Hong, Hong; Zhao, Heng; Peng, Zhengyu; Li, Hui; Gu, Chen; Li, Changzhi; Zhu, Xiaohua

    2016-01-01

Time-varying vocal folds vibration information is of crucial importance in speech processing, and traditional devices for acquiring speech signals are easily smeared by high background noise and voice interference. In this paper, we present a non-acoustic way to capture human vocal folds vibration using a 24-GHz portable auditory radar. Since the vocal folds vibration displacement reaches only several millimeters, a high operating frequency and 4 × 4 array antennas are applied to achieve high sensitivity. A Variational Mode Decomposition (VMD) based algorithm is proposed that first decomposes the radar-detected auditory signal into a sequence of intrinsic modes and then extracts the time-varying vocal folds vibration frequency from the corresponding mode. Feasibility demonstration, evaluation, and comparison are conducted with tonal and non-tonal languages, and the low relative errors show a high consistency between the radar-detected auditory time-varying vocal folds vibration and the acoustic fundamental frequency, with the auditory radar significantly improving the frequency-resolving power. PMID:27483261
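
    Variational Mode Decomposition itself is too involved for a short sketch, but the final step described above, tracking the time-varying frequency of one extracted narrowband mode, can be illustrated with a short-time FFT peak search. The window sizes and names here are hypothetical, not the paper's parameters:

```python
import numpy as np

def track_frequency(mode, fs, win=1024, hop=512):
    """Per-window dominant-frequency estimates (Hz) of a narrowband 1-D mode."""
    estimates = []
    for start in range(0, len(mode) - win + 1, hop):
        segment = mode[start:start + win] * np.hanning(win)  # taper the window
        magnitude = np.abs(np.fft.rfft(segment))
        magnitude[0] = 0.0                                   # ignore the DC bin
        estimates.append(np.argmax(magnitude) * fs / win)    # peak bin -> Hz
    return np.array(estimates)
```

    On a steady synthetic 200 Hz mode sampled at 8 kHz, the estimates stay within one FFT bin (about 8 Hz here) of the true frequency; finer resolution would require longer windows or interpolation around the peak.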

  16. Inattentional Deafness: Visual Load Leads to Time-Specific Suppression of Auditory Evoked Responses

    PubMed Central

    Molloy, Katharine; Griffiths, Timothy D.; Lavie, Nilli

    2015-01-01

Due to capacity limits on perception, conditions of high perceptual load lead to reduced processing of unattended stimuli (Lavie et al., 2014). Accumulating work demonstrates the effects of visual perceptual load on visual cortex responses, but the effects on auditory processing remain poorly understood. Here we establish the neural mechanisms underlying “inattentional deafness”—the failure to perceive auditory stimuli under high visual perceptual load. Participants performed a visual search task of low (target dissimilar to nontarget items) or high (target similar to nontarget items) load. On a random subset (50%) of trials, irrelevant tones were presented concurrently with the visual stimuli. Brain activity was recorded with magnetoencephalography, and time-locked responses to the visual search array and to the incidental presence of unattended tones were assessed. High, compared to low, perceptual load led to increased early visual evoked responses (within 100 ms from onset). This was accompanied by reduced early (∼100 ms from tone onset) auditory evoked activity in superior temporal sulcus and posterior middle temporal gyrus. A later suppression of the P3 “awareness” response to the tones was also observed under high load. A behavioral experiment revealed reduced tone detection sensitivity under high visual load, indicating that the reduction in neural responses was indeed associated with reduced awareness of the sounds. These findings support a neural account of shared audiovisual resources, which, when depleted under load, leads to failures of sensory perception and awareness. SIGNIFICANCE STATEMENT The present work clarifies the neural underpinning of inattentional deafness under high visual load. The findings of near-simultaneous load effects on both visual and auditory evoked responses suggest shared audiovisual processing capacity. Temporary depletion of shared capacity in perceptually demanding visual tasks leads to a momentary reduction in auditory awareness.

  17. Time coded distribution via broadcasting stations

    NASA Technical Reports Server (NTRS)

    Leschiutta, S.; Pettiti, V.; Detoma, E.

    1979-01-01

The distribution of standard time signals via AM and FM broadcasting stations offers the distinct advantages of wide-area coverage and inexpensive receivers, but the signals are radiated only a limited number of times per day, are not usually available during the night, and do not allow full, automatic synchronization of a remote clock. As an attempt to overcome some of these problems, a time-coded signal carrying complete date information is diffused by the IEN via the national broadcasting networks in Italy. These signals are radiated by some 120 AM and about 3000 FM and TV transmitters around the country. In this way, a time-ordered system with an accuracy of a couple of milliseconds is easily achieved.
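
    The exact IEN code format is not given in this summary. As a hedged illustration of the general idea, complete date-and-time information can be packed into a short binary word using binary-coded-decimal (BCD) digits, the scheme used by many broadcast time codes; all field choices below are hypothetical:

```python
def bcd(value, n_digits):
    """Encode a non-negative integer as BCD, 4 bits per decimal digit."""
    return "".join(format(int(d), "04b") for d in str(value).zfill(n_digits))

def encode_timestamp(hour, minute, day, month, year2):
    """Pack HH MM DD MM YY into a 40-bit BCD string (illustrative layout)."""
    return (bcd(hour, 2) + bcd(minute, 2) + bcd(day, 2)
            + bcd(month, 2) + bcd(year2, 2))

def decode_timestamp(bits):
    """Recover the five two-digit fields from a 40-bit BCD string."""
    fields = [bits[i:i + 8] for i in range(0, 40, 8)]
    return tuple(int(f[:4], 2) * 10 + int(f[4:], 2) for f in fields)
```

    A real broadcast format would add sync markers and parity bits on top of such fields, but the round trip above captures how a receiver can recover a full date from the code.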

  18. Real-time neural coding of memory.

    PubMed

    Tsien, Joe Z

    2007-01-01

    Recent identification of network-level functional coding units, termed neural cliques, in the hippocampus has allowed real-time patterns of memory traces to be mathematically described, intuitively visualized, and dynamically deciphered. Any given episodic event is represented and encoded by the activation of a set of neural clique assemblies that are organized in a categorical and hierarchical manner. This hierarchical feature-encoding pyramid is invariantly composed of the general feature-encoding clique at the bottom, sub-general feature-encoding cliques in the middle, and highly specific feature-encoding cliques at the top. This hierarchical and categorical organization of neural clique assemblies provides the network-level mechanism the capability of not only achieving vast storage capacity, but also generating commonalities from the individual behavioral episodes and converting them to the abstract concepts and generalized knowledge that are essential for intelligence and adaptive behaviors. Furthermore, activation patterns of the neural clique assemblies can be mathematically converted to strings of binary codes that would permit universal categorizations of the brain's internal representations across individuals and species. Such universal brain codes can also potentially facilitate the unprecedented brain-machine interface communications. PMID:17925242

  19. Working memory for time intervals in auditory rhythmic sequences

    PubMed Central

    Teki, Sundeep; Griffiths, Timothy D.

    2014-01-01

The brain can hold information about multiple objects in working memory. It is not known, however, whether intervals of time can be stored in memory as distinct items. Here, we developed a novel paradigm to examine temporal memory where listeners were required to reproduce the duration of a single probed interval from a sequence of intervals. We demonstrate that memory performance significantly varies as a function of temporal structure (better memory in regular vs. irregular sequences), interval size (better memory for sub- vs. supra-second intervals), and memory load (poor memory for higher load). In contrast, memory performance is invariant to attentional cueing. Our data represent the first systematic investigation of temporal memory in sequences that goes beyond previous work based on single intervals. The results support the emerging hypothesis that time intervals are allocated a working memory resource that varies with the amount of other temporal information in a sequence. PMID:25477849

  20. Uncertainty in visual and auditory series is coded by modality-general and modality-specific neural systems.

    PubMed

    Nastase, Samuel; Iacovella, Vittorio; Hasson, Uri

    2014-04-01

    Coding for the degree of disorder in a temporally unfolding sensory input allows for optimized encoding of these inputs via information compression and predictive processing. Prior neuroimaging work has examined sensitivity to statistical regularities within single sensory modalities and has associated this function with the hippocampus, anterior cingulate, and lateral temporal cortex. Here we investigated to what extent sensitivity to input disorder, quantified by Markov entropy, is subserved by modality-general or modality-specific neural systems when participants are not required to monitor the input. Participants were presented with rapid (3.3 Hz) auditory and visual series varying over four levels of entropy, while monitoring an infrequently changing fixation cross. For visual series, sensitivity to the magnitude of disorder was found in early visual cortex, the anterior cingulate, and the intraparietal sulcus. For auditory series, sensitivity was found in inferior frontal, lateral temporal, and supplementary motor regions implicated in speech perception and sequencing. Ventral premotor and central cingulate cortices were identified as possible candidates for modality-general uncertainty processing, exhibiting marginal sensitivity to disorder in both modalities. The right temporal pole differentiated the highest and lowest levels of disorder in both modalities, but did not show general sensitivity to the parametric manipulation of disorder. Our results indicate that neural sensitivity to input disorder relies largely on modality-specific systems embedded in extended sensory cortices, though uncertainty-related processing in frontal regions may be driven by both input modalities. PMID:23408389
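
    The disorder measure named in this record, first-order Markov entropy, is H = -Σ_i p(i) Σ_j p(j|i) log2 p(j|i) over symbol transitions. The sketch below computes the quantity itself from a symbol sequence; it is illustrative and not the authors' stimulus-generation code:

```python
import math
from collections import Counter, defaultdict

def markov_entropy(series):
    """First-order Markov entropy (bits/symbol) of a symbol sequence."""
    transitions = defaultdict(Counter)
    for a, b in zip(series, series[1:]):   # count symbol-to-symbol transitions
        transitions[a][b] += 1
    n_pairs = len(series) - 1
    h = 0.0
    for a, nexts in transitions.items():
        p_a = sum(nexts.values()) / n_pairs          # p(i)
        total = sum(nexts.values())
        for count in nexts.values():
            p_b_given_a = count / total              # p(j|i)
            h -= p_a * p_b_given_a * math.log2(p_b_given_a)
    return h
```

    A strictly alternating sequence such as "ABAB..." has zero Markov entropy (each symbol fully predicts the next), while a sequence whose transitions are maximally uncertain over two symbols approaches 1 bit/symbol, matching the parametric levels of disorder manipulated in the study.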

  1. Code for Calculating Regional Seismic Travel Time

    SciTech Connect

    BALLARD, SANFORD; HIPP, JAMES; & BARKER, GLENN

    2009-07-10

The RSTT software computes predictions of the travel time of seismic energy traveling from a source to a receiver through 2.5D models of the seismic velocity distribution within the Earth. The two primary applications for the RSTT library are tomographic inversion studies and seismic event location calculations. In tomographic inversion studies, a seismologist begins with a number of source-receiver travel time observations and an initial starting model of the velocity distribution within the Earth. A forward travel time calculator, such as the RSTT library, is used to compute predictions of each observed travel time and all of the residuals (observed minus predicted travel time) are calculated. The Earth model is then modified in some systematic way with the goal of minimizing the residuals. The Earth model obtained in this way is assumed to be a better model than the starting model if it has lower residuals. The other major application for the RSTT library is seismic event location. Given an Earth model, an initial estimate of the location of a seismic event, and some number of observations of seismic travel time thought to have originated from that event, location codes systematically modify the estimate of the location of the event with the goal of minimizing the difference between the observed and predicted travel times. The second application, seismic event location, is routinely implemented by the military as part of its effort to monitor the Earth for nuclear tests conducted by foreign countries.
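
    The residual-driven loop described above can be reduced to a toy example in which the "Earth model" is a single average slowness (travel time = distance × slowness). RSTT itself is not used here; the learning rate is tuned for the kilometer-scale distances in the test values, and everything in this sketch is illustrative:

```python
def predict_travel_time(distance_km, slowness_s_per_km):
    """Forward calculator for a one-parameter 'Earth model': t = d * s."""
    return distance_km * slowness_s_per_km

def invert_slowness(observations, slowness=0.2, lr=1e-6, n_iter=500):
    """Fit the average slowness by gradient descent on squared residuals.
    observations: list of (distance_km, observed_time_s) pairs."""
    for _ in range(n_iter):
        grad = 0.0
        for d, t_obs in observations:
            residual = t_obs - predict_travel_time(d, slowness)  # observed - predicted
            grad += -2.0 * residual * d   # d/ds of the squared residual
        slowness -= lr * grad             # modify the model to shrink residuals
    return slowness
```

    Real tomography updates thousands of velocity parameters from raypath geometry rather than a single scalar, but the observe-predict-residual-update cycle is the same.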

  2. Code for Calculating Regional Seismic Travel Time

    Energy Science and Technology Software Center (ESTSC)

    2009-07-10

The RSTT software computes predictions of the travel time of seismic energy traveling from a source to a receiver through 2.5D models of the seismic velocity distribution within the Earth. The two primary applications for the RSTT library are tomographic inversion studies and seismic event location calculations. In tomographic inversion studies, a seismologist begins with a number of source-receiver travel time observations and an initial starting model of the velocity distribution within the Earth. A forward travel time calculator, such as the RSTT library, is used to compute predictions of each observed travel time and all of the residuals (observed minus predicted travel time) are calculated. The Earth model is then modified in some systematic way with the goal of minimizing the residuals. The Earth model obtained in this way is assumed to be a better model than the starting model if it has lower residuals. The other major application for the RSTT library is seismic event location. Given an Earth model, an initial estimate of the location of a seismic event, and some number of observations of seismic travel time thought to have originated from that event, location codes systematically modify the estimate of the location of the event with the goal of minimizing the difference between the observed and predicted travel times. The second application, seismic event location, is routinely implemented by the military as part of its effort to monitor the Earth for nuclear tests conducted by foreign countries.

  3. Using Reaction Time and Equal Latency Contours to Derive Auditory Weighting Functions in Sea Lions and Dolphins.

    PubMed

    Finneran, James J; Mulsow, Jason; Schlundt, Carolyn E

    2016-01-01

Subjective loudness measurements are used to create equal-loudness contours and auditory weighting functions for human noise-mitigation criteria; however, comparable direct measurements of subjective loudness with animal subjects are difficult to conduct. In this study, simple reaction time to pure tones was measured as a proxy for subjective loudness in Tursiops truncatus and Zalophus californianus. Contours fit to equal reaction-time curves were then used to estimate the shapes of auditory weighting functions. PMID:26610970
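
    One way to turn reaction-time-versus-level data into an equal-latency contour is to interpolate, at each test frequency, the sound level at which reaction time crosses a criterion value. The function and the data in the test are hypothetical illustrations of that step, not the authors' analysis:

```python
def equal_latency_level(levels_db, rts_ms, criterion_ms):
    """Interpolate the level (dB) at which reaction time falls to criterion_ms.
    Assumes rts_ms decreases as levels_db increases; returns None if the
    criterion is never crossed."""
    for i in range(len(levels_db) - 1):
        l0, r0 = levels_db[i], rts_ms[i]
        l1, r1 = levels_db[i + 1], rts_ms[i + 1]
        if r0 >= criterion_ms >= r1:                 # criterion crossed in this segment
            frac = (r0 - criterion_ms) / (r0 - r1)
            return l0 + frac * (l1 - l0)             # linear interpolation
    return None
```

    Repeating this at each frequency yields one (frequency, level) point per frequency; the resulting contour, inverted and normalized, sketches the shape of a weighting function.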

  4. Auditory attention to frequency and time: an analogy to visual local–global stimuli

    PubMed Central

    Justus, Timothy; List, Alexandra

    2007-01-01

    Two priming experiments demonstrated exogenous attentional persistence to the fundamental auditory dimensions of frequency (Experiment 1) and time (Experiment 2). In a divided-attention task, participants responded to an independent dimension, the identification of three-tone sequence patterns, for both prime and probe stimuli. The stimuli were specifically designed to parallel the local–global hierarchical letter stimuli of [Navon D. (1977). Forest before trees: The precedence of global features in visual perception. Cognitive Psychology, 9, 353–383] and the task was designed to parallel subsequent work in visual attention using Navon stimuli [Robertson, L. C. (1996). Attentional persistence for features of hierarchical patterns. Journal of Experimental Psychology: General, 125, 227–249; Ward, L. M. (1982). Determinants of attention to local and global features of visual forms. Journal of Experimental Psychology: Human Perception and Performance, 8, 562–581]. The results are discussed in terms of previous work in auditory attention and previous approaches to auditory local–global processing. PMID:16297675

  5. Neural mechanisms underlying auditory feedback control of speech

    PubMed Central

    Reilly, Kevin J.; Guenther, Frank H.

    2013-01-01

    The neural substrates underlying auditory feedback control of speech were investigated using a combination of functional magnetic resonance imaging (fMRI) and computational modeling. Neural responses were measured while subjects spoke monosyllabic words under two conditions: (i) normal auditory feedback of their speech, and (ii) auditory feedback in which the first formant frequency of their speech was unexpectedly shifted in real time. Acoustic measurements showed compensation to the shift within approximately 135 ms of onset. Neuroimaging revealed increased activity in bilateral superior temporal cortex during shifted feedback, indicative of neurons coding mismatches between expected and actual auditory signals, as well as right prefrontal and Rolandic cortical activity. Structural equation modeling revealed increased influence of bilateral auditory cortical areas on right frontal areas during shifted speech, indicating that projections from auditory error cells in posterior superior temporal cortex to motor correction cells in right frontal cortex mediate auditory feedback control of speech. PMID:18035557

  6. Some optimal partial-unit-memory codes. [time-invariant binary convolutional codes

    NASA Technical Reports Server (NTRS)

    Lauer, G. S.

    1979-01-01

    A class of time-invariant binary convolutional codes is defined, called partial-unit-memory codes. These codes are optimal in the sense of having maximum free distance for given values of R, k (the number of encoder inputs), and mu (the number of encoder memory cells). Optimal codes are given for rates R = 1/4, 1/3, 1/2, and 2/3, with mu not greater than 4 and k not greater than mu + 3, whenever such a code is better than previously known codes. An infinite class of optimal partial-unit-memory codes is also constructed based on equidistant block codes.
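
    To make the notions of rate R, encoder inputs k, and memory mu concrete, here is a standard textbook rate-1/2 binary convolutional encoder (generators 7 and 5 in octal, mu = 2). It is shown for illustration only and is not one of the partial-unit-memory codes constructed in the paper:

```python
def conv_encode(bits, g1=0b111, g2=0b101, memory=2):
    """Rate-1/2 convolutional encoder; returns interleaved parity bits.
    The 3-bit mask below matches memory=2 (register length memory + 1)."""
    state = 0
    out = []
    for b in bits + [0] * memory:               # zero tail flushes the register
        state = ((state << 1) | b) & 0b111      # shift the input into the register
        out.append(bin(state & g1).count("1") % 2)   # parity against generator 1
        out.append(bin(state & g2).count("1") % 2)   # parity against generator 2
    return out
```

    The minimum-weight codeword produced by a single 1 input has weight 5, the free distance of this code, which is the quantity the paper's partial-unit-memory constructions maximize for given R, k, and mu.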

  7. Brainstem auditory evoked responses in man. 1: Effect of stimulus rise-fall time and duration

    NASA Technical Reports Server (NTRS)

    Hecox, K.; Squires, N.; Galambos, R.

    1975-01-01

    Short latency (under 10 msec) responses elicited by bursts of white noise were recorded from the scalps of human subjects. Response alterations produced by changes in the noise burst duration (on-time), inter-burst interval (off-time), and onset and offset shapes were analyzed. The latency of the most prominent response component, wave V, was markedly delayed with increases in stimulus rise time but was unaffected by changes in fall time. Increases in stimulus duration, and therefore in loudness, resulted in a systematic increase in latency. This was probably due to response recovery processes, since the effect was eliminated with increases in stimulus off-time. The amplitude of wave V was insensitive to changes in signal rise and fall times, while increasing signal on-time produced smaller amplitude responses only for sufficiently short off-times. It was concluded that wave V of the human auditory brainstem evoked response is solely an onset response.

  8. Coding of sound direction in the auditory periphery of the lake sturgeon, Acipenser fulvescens

    PubMed Central

    Popper, Arthur N.; Fay, Richard R.

    2012-01-01

The lake sturgeon, Acipenser fulvescens, belongs to one of the few extant nonteleost ray-finned fishes and diverged from the main vertebrate lineage about 250 million years ago. The aim of this study was to use this species to explore the peripheral neural coding strategies for sound direction and compare these results to modern bony fishes (teleosts). Extracellular recordings were made from afferent neurons innervating the saccule and lagena of the inner ear while the fish was stimulated using a shaker system. Afferents were highly directional and strongly phase locked to the stimulus. Directional response profiles resembled cosine functions, and directional preferences occurred at a wide range of stimulus intensities (spanning at least 60 dB re 1 nm displacement). Seventy-six percent of afferents were directionally selective for stimuli in the vertical plane near 90° (up-down) and did not respond to horizontal stimulation. Sixty-two percent of afferents responsive to horizontal stimulation had their best axis in azimuths near 0° (front-back). These findings suggest that in the lake sturgeon, in contrast to teleosts, the saccule and lagena may convey more limited information about the direction of a sound source, raising the possibility that this species uses a different mechanism for localizing sound. For azimuth, a mechanism could involve the utricle or perhaps the computation of arrival time differences. For elevation, behavioral strategies such as directing the head to maximize input to the area of best sensitivity may be used. Alternatively, the lake sturgeon may have a more limited ability for sound source localization compared with teleosts. PMID:22031776
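
    The cosine-shaped directional response profiles reported here can be written as r(θ) = r_spont + r_max·cos(θ − θ_best), rectified at zero. The parameter values below are illustrative, not fitted to the paper's data:

```python
import math

def directional_response(theta_deg, theta_best_deg, r_max=100.0, r_spont=5.0):
    """Cosine directional tuning of an afferent, rectified at zero (spikes/s)."""
    delta = math.radians(theta_deg - theta_best_deg)
    return max(0.0, r_spont + r_max * math.cos(delta))
```

    The response peaks at the best axis, falls to the spontaneous rate at orthogonal directions, and is silenced for anti-preferred directions, which is the profile an experimenter would fit to the shaker-stimulation data described above.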

  9. The time-course of distractor processing in auditory spatial negative priming.

    PubMed

    Möller, Malte; Mayr, Susanne; Buchner, Axel

    2016-09-01

    The spatial negative priming effect denotes slowed-down and sometimes more error-prone responding to a location that previously contained a distractor as compared with a previously unoccupied location. In vision, this effect has been attributed to the inhibition of irrelevant locations, and recently, of their task-assigned responses. Interestingly, auditory versions of the task did not yield evidence for inhibitory processing of task-irrelevant events which might suggest modality-specific distractor processing in vision and audition. Alternatively, the inhibitory processes may differ in how they develop over time. If this were the case, the absence of inhibitory after-effects might be due to an inappropriate timing of successive presentations in previous auditory spatial negative priming tasks. Specifically, the distractor may not yet have been inhibited or inhibition may already have dissipated at the time performance is assessed. The present study was conducted to test these alternatives. Participants indicated the location of a target sound in the presence of a concurrent distractor sound. Performance was assessed between two successive prime-probe presentations. The time between the prime response and the probe sounds (response-stimulus interval, RSI) was systematically varied between three groups (600, 1250, 1900 ms). For all RSI groups, the results showed no evidence for inhibitory distractor processing but conformed to the predictions of the feature mismatching hypothesis. The results support the assumption that auditory distractor processing does not recruit an inhibitory mechanism but involves the integration of spatial and sound identity features into common representations. PMID:26233234

  10. Brainstem auditory evoked responses in man. 1: Effect of stimulus rise-fall time and duration

    NASA Technical Reports Server (NTRS)

    Hecox, K.; Squires, N.; Galambos, R.

    1975-01-01

Short latency (under 10 msec) evoked responses elicited by bursts of white noise were recorded from the scalp of human subjects. Response alterations produced by changes in the noise burst duration (on-time), inter-burst interval (off-time), and onset and offset shapes are reported and evaluated. The latency of the most prominent response component, wave V, was markedly delayed with increases in stimulus rise-time but was unaffected by changes in fall-time. The amplitude of wave V was insensitive to changes in signal rise and fall times, while increasing signal on-time produced smaller amplitude responses only for sufficiently short off-times. It is concluded that wave V of the human auditory brainstem evoked response is solely an onset response.

  11. Improved temporal coding of sinusoids in electric stimulation of the auditory nerve using desynchronizing pulse trains

    NASA Astrophysics Data System (ADS)

    Litvak, Leonid M.; Delgutte, Bertrand; Eddington, Donald K.

    2003-10-01

Rubinstein et al. [Hearing Res. 127, 108-118 (1999)] suggested that the representation of electric stimulus waveforms in the temporal discharge patterns of auditory-nerve fibers (ANFs) might be improved by introducing an ongoing, high-rate, desynchronizing pulse train (DPT). To test this hypothesis, activity of ANFs was studied in acutely deafened, anesthetized cats in response to 10-min-long, 5-kpps electric pulse trains that were sinusoidally modulated for 400 ms every second. Two classes of responses to sinusoidal modulations of the DPT were observed. Fibers that only responded transiently to the unmodulated DPT showed hypersynchronization and narrow dynamic ranges to sinusoidal modulators, much as in responses to electric sinusoids presented without a DPT. In contrast, fibers that exhibited sustained responses to the DPT were sensitive to modulation depths as low as 0.25% for a modulation frequency of 417 Hz. Over a 20-dB range of modulation depths, responses of these fibers resembled responses to tones in a healthy ear in both discharge rate and synchronization index. This range is much wider than the dynamic range typically found with electrical stimulation without a DPT, and comparable to the dynamic range for acoustic stimulation. These results suggest that a stimulation strategy that uses small signals superimposed upon a large DPT to encode sounds may evoke temporal discharge patterns in some ANFs that resemble responses to sound in a healthy ear.

  12. The Visual and Auditory Reaction Time of Adolescents with Respect to Their Academic Achievements

    ERIC Educational Resources Information Center

    Taskin, Cengiz

    2016-01-01

The aim of this study was to examine the visual and auditory reaction times of adolescents with respect to their academic achievement level. Five hundred adolescent children from Turkey (age=15.24±0.78 years; height=168.80±4.89 cm; weight=65.24±4.30 kg) for the two hundred fifty males and (age=15.28±0.74; height=160.40±5.77 cm; weight=55.32±4.13 kg)…

  13. Audiological changes over time in adolescents and young adults with auditory neuropathy spectrum disorder.

    PubMed

    Chandan, Hunsur Suresh; Prabhu, Prashanth

    2015-07-01

Auditory neuropathy spectrum disorder (ANSD) describes a condition in which a patient's otoacoustic emissions (OAE) are (or were at one time) present and auditory brainstem responses (ABR) are abnormal or absent. ANSD is also diagnosed based on the presence of cochlear microphonics and abnormal or absent ABRs with or without abnormalities of OAE. We noted the changes in audiological characteristics over time with respect to pure tone thresholds, OAEs and Speech Identification Scores (SIS) in seven individuals with ANSD. The results indicated that all the individuals with ANSD had decreased SIS over time, whereas pure tone thresholds subsequently worsened in only nine out of the fourteen ears. There was absence of OAEs for two individuals in both ears during the follow-up evaluations. There was no regular pattern of changes in pure tone thresholds or SIS across all individuals. This indicates that there may be gradual worsening of hearing abilities in individuals with ANSD. Thus, regular follow-up and monitoring of audiological changes are necessary for individuals with ANSD. Also, longitudinal studies need to be done to further add evidence on the audiological changes over time in individuals with ANSD. PMID:25577995

  14. System Measures Errors Between Time-Code Signals

    NASA Technical Reports Server (NTRS)

    Cree, David; Venkatesh, C. N.

    1993-01-01

System measures timing errors between signals produced by three asynchronous time-code generators. Errors between 1-second clock pulses are resolved to 2 microseconds. The basic principle of the timing-error computation is as follows: the central processing unit in a microcontroller constantly monitors the time data received from the time-code generators for changes in the 1-second time-code intervals. In response to any such change, the microprocessor buffers the count of a 16-bit internal timer.
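The measurement principle in this abstract (latch a 16-bit timer count at each generator's 1-second rollover, then difference the latched counts) can be sketched as follows. This is an illustrative Python reconstruction, not the microcontroller firmware; the pulse times are invented, and the assumption that one timer count equals the stated 2-microsecond resolution is made only for the example.

```python
TICK_US = 2  # assumed: one timer count = 2 microseconds of resolution

def latch_counts(pulse_times_us):
    """Convert each generator's 1-second pulse arrival time into the
    16-bit timer count the microcontroller would buffer."""
    return [int(t // TICK_US) & 0xFFFF for t in pulse_times_us]

def pairwise_errors_us(counts):
    """Timing error between every pair of generators, in microseconds,
    handling 16-bit counter wraparound."""
    errors = {}
    for i in range(len(counts)):
        for j in range(i + 1, len(counts)):
            diff = (counts[i] - counts[j]) & 0xFFFF
            if diff > 0x7FFF:      # large positive value means negative offset
                diff -= 0x10000
            errors[(i, j)] = diff * TICK_US
    return errors

# Three generators whose 1-second pulses arrive 0, 14, and 40 us apart:
counts = latch_counts([1_000_000, 1_000_014, 1_000_040])
print(pairwise_errors_us(counts))  # {(0, 1): -14, (0, 2): -40, (1, 2): -26}
```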

  15. Auditory Processing of Amplitude Envelope Rise Time in Adults Diagnosed with Developmental Dyslexia

    ERIC Educational Resources Information Center

    Pasquini, Elisabeth S.; Corriveau, Kathleen H.; Goswami, Usha

    2007-01-01

    Studies of basic (nonspeech) auditory processing in adults thought to have developmental dyslexia have yielded a variety of data. Yet there has been little consensus regarding the explanatory value of auditory processing in accounting for reading difficulties. Recently, however, a number of studies of basic auditory processing in children with…

  16. A grouped binary time code for telemetry and space applications

    NASA Technical Reports Server (NTRS)

    Chi, A. R.

    1979-01-01

A computer-oriented time code designed for users with various time-resolution requirements is presented. It is intended as a time code for spacecraft and ground applications where direct code compatibility with automatic data-processing equipment is of primary consideration. The principal features of this time code are a byte-oriented format, selectable resolution options (from seconds to nanoseconds), and a long ambiguity period. The time code is compatible with new data handling and management concepts such as the NASA End-to-End Data System and the Telemetry Data Packetization format.
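The byte-oriented, selectable-resolution idea can be sketched with a small encoder. The field widths below (a 4-byte seconds count plus optional two-byte millisecond, microsecond, and nanosecond groups) are assumptions for illustration; the abstract does not give the actual layout of the grouped binary code.

```python
import struct

def encode_time(seconds, nanoseconds=0, resolution="s"):
    """Pack a timestamp as a big-endian seconds count plus optional
    sub-second byte groups, depending on the requested resolution."""
    code = struct.pack(">I", seconds)            # 4-byte seconds field
    if resolution in ("ms", "us", "ns"):
        code += struct.pack(">H", nanoseconds // 1_000_000)        # milliseconds
    if resolution in ("us", "ns"):
        code += struct.pack(">H", (nanoseconds // 1_000) % 1_000)  # microseconds
    if resolution == "ns":
        code += struct.pack(">H", nanoseconds % 1_000)             # nanoseconds
    return code

# A seconds-only user reads 4 bytes; a nanosecond user reads 10.
print(len(encode_time(1_234_567)))                      # 4
print(len(encode_time(1_234_567, 987_654_321, "ns")))   # 10
```

Because each resolution option simply appends byte groups, a low-resolution user can ignore trailing bytes without re-parsing the code, which is the point of a byte-oriented format for data-processing equipment.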

  17. Auditory Time-Interval Perception as Causal Inference on Sound Sources

    PubMed Central

    Sawai, Ken-ichi; Sato, Yoshiyuki; Aihara, Kazuyuki

    2012-01-01

Perception of a temporal pattern on a sub-second time scale is fundamental to conversation, music perception, and other kinds of sound communication. However, its mechanism is not fully understood. A simple example is hearing three successive sounds with short time intervals. The following misperception of the latter interval is known: underestimation of the latter interval when the former is a little shorter or much longer than the latter, and overestimation of the latter when the former is a little longer or much shorter than the latter. Although this misperception of auditory time intervals for simple stimuli might be a cue to understanding the mechanism of time-interval perception, no model comprehensively explains it. Considering a previous experiment demonstrating that the illusory perception does not occur for stimulus sounds with different frequencies, it is plausible that the underlying mechanism of time-interval perception involves a causal inference on sound sources: different frequencies provide cues for different causes. We construct a Bayesian observer model of this time-interval perception. We introduce a probabilistic variable representing the causality of sounds in the model. As prior knowledge, the observer assumes that a single sound source produces periodic and short time intervals, which is consistent with several previous works. We conducted numerical simulations and confirmed that our model can reproduce the misperception of auditory time intervals. A similar phenomenon has also been reported in the visual and tactile modalities, though over wider time ranges. Because the wider ranges can be interpreted as a difference in time resolution (the time resolutions for vision and touch being lower than that for audition), this suggests a common mechanism for temporal pattern perception across modalities. PMID:23226136
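The flavor of such a causal-inference model can be sketched with a grid-based Bayesian estimator: the second interval is estimated under a mixture of a "single periodic source" hypothesis, whose prior pulls the estimate toward the first interval, and an "independent causes" hypothesis with a flat prior. All noise widths, the prior strength, and the 50/50 cause probability are assumptions for illustration, not the paper's parameters.

```python
import numpy as np

def estimate_t2(t1, t2, sigma=40.0, sigma_periodic=60.0, p_single=0.5):
    """Posterior-mean estimate of the second interval (ms), mixing a
    'single periodic source' cause with an 'independent causes' cause."""
    grid = np.arange(50.0, 1000.0, 1.0)                    # candidate t2 values
    lik = np.exp(-(grid - t2) ** 2 / (2 * sigma ** 2))     # noisy measurement of t2
    periodic = np.exp(-(grid - t1) ** 2 / (2 * sigma_periodic ** 2))
    post_single = lik * periodic / (lik * periodic).sum()  # pulled toward t1
    post_indep = lik / lik.sum()                           # flat prior on t2
    post = p_single * post_single + (1 - p_single) * post_indep
    return float((grid * post).sum())

# A slightly shorter first interval drags the estimate of the second
# interval downward (underestimation); a slightly longer one drags it upward.
print(estimate_t2(t1=250.0, t2=300.0) < 300.0)  # True
print(estimate_t2(t1=350.0, t2=300.0) > 300.0)  # True
```

A fuller model would also infer the cause probability from the data (so that very discrepant or different-frequency intervals decouple), which is how the "much longer/much shorter" cases escape the attraction.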

  18. Norm-Based Coding of Voice Identity in Human Auditory Cortex

    PubMed Central

    Latinus, Marianne; McAleer, Phil; Bestelmeyer, Patricia E.G.; Belin, Pascal

    2013-01-01

    Summary Listeners exploit small interindividual variations around a generic acoustical structure to discriminate and identify individuals from their voice—a key requirement for social interactions. The human brain contains temporal voice areas (TVA) [1] involved in an acoustic-based representation of voice identity [2–6], but the underlying coding mechanisms remain unknown. Indirect evidence suggests that identity representation in these areas could rely on a norm-based coding mechanism [4, 7–11]. Here, we show by using fMRI that voice identity is coded in the TVA as a function of acoustical distance to two internal voice prototypes (one male, one female)—approximated here by averaging a large number of same-gender voices by using morphing [12]. Voices more distant from their prototype are perceived as more distinctive and elicit greater neuronal activity in voice-sensitive cortex than closer voices—a phenomenon not merely explained by neuronal adaptation [13, 14]. Moreover, explicit manipulations of distance-to-mean by morphing voices toward (or away from) their prototype elicit reduced (or enhanced) neuronal activity. These results indicate that voice-sensitive cortex integrates relevant acoustical features into a complex representation referenced to idealized male and female voice prototypes. More generally, they shed light on remarkable similarities in cerebral representations of facial and vocal identity. PMID:23707425

  19. Effect of red bull energy drink on auditory reaction time and maximal voluntary contraction.

    PubMed

    Goel, Vartika; Manjunatha, S; Pai, Kirtana M

    2014-01-01

The use of "Energy Drinks" (ED) is increasing in India. Students especially use these drinks to rejuvenate after strenuous exercise or as a stimulant during exam times. The most common ingredient in EDs is caffeine, and a popular and commonly used ED is Red Bull, containing 80 mg of caffeine in a 250 ml bottle. The primary aim of this study was to investigate the effects of Red Bull energy drink on auditory reaction time and maximal voluntary contraction. A homogeneous group of twenty medical students (10 males, 10 females) participated in a crossover study in which they were randomized to supplement with Red Bull (2 mg/kg body weight of caffeine) or an isoenergetic, isovolumetric, noncaffeinated control drink (a combination of Appy Fizz, cranberry juice and soda) separated by 7 days. Maximal voluntary contraction (MVC) was recorded as the highest of 3 values of maximal isometric force generated from the dominant hand using a hand grip dynamometer (Biopac systems). Auditory reaction time (ART) was the average of 10 values of the time interval between a click sound and the response of pressing a push button using a hand-held switch (Biopac systems). One hour after consumption, both the energy and control drinks significantly reduced the auditory reaction time in males (ED 232 ± 59 vs 204 ± 34 ms and Control 223 ± 57 vs 210 ± 51 ms; p < 0.05) as well as in females (ED 227 ± 56 vs 214 ± 48 ms and Control 224 ± 45 vs 215 ± 36 ms; p < 0.05) but had no effect on MVC in either sex (males ED 381 ± 37 vs 371 ± 36 and Control 375 ± 61 vs 363 ± 36 Newton; females ED 227 ± 23 vs 227 ± 32 and Control 234 ± 46 vs 228 ± 37 Newton). When compared across the gender groups, there was no significant difference between males and females in the effects of either drink on ART, but there was an overall significantly lower MVC in females compared to males. Both the energy drink and the control drink significantly improved reaction time but may not have any effect

  1. Ability of primary auditory cortical neurons to detect amplitude modulation with rate and temporal codes: neurometric analysis

    PubMed Central

    Johnson, Jeffrey S.; Yin, Pingbo; O'Connor, Kevin N.

    2012-01-01

    Amplitude modulation (AM) is a common feature of natural sounds, and its detection is biologically important. Even though most sounds are not fully modulated, the majority of physiological studies have focused on fully modulated (100% modulation depth) sounds. We presented AM noise at a range of modulation depths to awake macaque monkeys while recording from neurons in primary auditory cortex (A1). The ability of neurons to detect partial AM with rate and temporal codes was assessed with signal detection methods. On average, single-cell synchrony was as or more sensitive than spike count in modulation detection. Cells are less sensitive to modulation depth if tested away from their best modulation frequency, particularly for temporal measures. Mean neural modulation detection thresholds in A1 are not as sensitive as behavioral thresholds, but with phase locking the most sensitive neurons are more sensitive, suggesting that for temporal measures the lower-envelope principle cannot account for thresholds. Three methods of preanalysis pooling of spike trains (multiunit, similar to convergence from a cortical column; within cell, similar to convergence of cells with matched response properties; across cell, similar to indiscriminate convergence of cells) all result in an increase in neural sensitivity to modulation depth for both temporal and rate codes. For the across-cell method, pooling of a few dozen cells can result in detection thresholds that approximate those of the behaving animal. With synchrony measures, indiscriminate pooling results in sensitive detection of modulation frequencies between 20 and 60 Hz, suggesting that differences in AM response phase are minor in A1. PMID:22422997
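The ROC-based neurometric and the across-cell pooling idea in this abstract can be sketched as follows. This is not the study's analysis code: the Poisson firing rates, trial counts, and the 30-cell pool are invented, and pooling is modeled simply as summing per-trial spike counts before computing the ROC area.

```python
import numpy as np

rng = np.random.default_rng(0)

def roc_area(signal, noise):
    """Probability that a random signal-trial count exceeds a random
    noise-trial count (ties split), i.e., the area under the ROC."""
    s = np.asarray(signal)[:, None]
    n = np.asarray(noise)[None, :]
    return (s > n).mean() + 0.5 * (s == n).mean()

def simulate_counts(rate, trials=200):
    """Per-trial spike counts for a hypothetical Poisson-firing cell."""
    return rng.poisson(rate, size=trials)

# One weakly modulation-sensitive cell: 10 vs 11 spikes per trial.
single = roc_area(simulate_counts(11), simulate_counts(10))
# Indiscriminate pooling of 30 such cells sums their counts per trial.
pooled = roc_area(sum(simulate_counts(11) for _ in range(30)),
                  sum(simulate_counts(10) for _ in range(30)))
print(single < pooled)  # True: pooling sharpens the neurometric
```

The same machinery applied to a vector-strength (synchrony) statistic instead of spike count gives the temporal-code version of the neurometric.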

  2. The Dynamics of Disruption from Altered Auditory Feedback: Further Evidence for a Dissociation of Sequencing and Timing

    ERIC Educational Resources Information Center

    Pfordresher, Peter Q.; Kulpa, J. D.

    2011-01-01

    Three experiments were designed to test whether perception and action are coordinated in a way that distinguishes sequencing from timing (Pfordresher, 2003). Each experiment incorporated a trial design in which altered auditory feedback (AAF) was presented for varying lengths of time and then withdrawn. Experiments 1 and 2 included AAF that…

  3. SYMTRAN - A Time-dependent Symmetric Tandem Mirror Transport Code

    SciTech Connect

    Hua, D; Fowler, T

    2004-06-15

    A time-dependent version of the steady-state radial transport model in symmetric tandem mirrors in Ref. [1] has been coded up and first tests performed. Our code, named SYMTRAN, is an adaptation of the earlier SPHERE code for spheromaks, now modified for tandem mirror physics. Motivated by Post's new concept of kinetic stabilization of symmetric mirrors, it is an extension of the earlier TAMRAC rate-equation code omitting radial transport [2], which successfully accounted for experimental results in TMX. The SYMTRAN code differs from the earlier tandem mirror radial transport code TMT in that our code is focused on axisymmetric tandem mirrors and classical diffusion, whereas TMT emphasized non-ambipolar transport in TMX and MFTF-B due to yin-yang plugs and non-symmetric transitions between the plugs and axisymmetric center cell. Both codes exhibit interesting but different non-linear behavior.

  4. Code extraction from encoded signal in time-spreading optical code division multiple access.

    PubMed

    Si, Zhijian; Yin, Feifei; Xin, Ming; Chen, Hongwei; Chen, Minghua; Xie, Shizhong

    2010-01-15

    A vulnerability that allows eavesdroppers to extract the code from the waveform of the noiselike encoded signal of an isolated user in a standard time-spreading optical code division multiple access communication system using bipolar phase code is experimentally demonstrated. The principle is based on fine structure in the encoded signal. Each dip in the waveform corresponds to a transition of the bipolar code. Eavesdroppers can get the code by analyzing the chip numbers between any two transitions; then a decoder identical to the legal user's can be fabricated, and they can get the properly decoded signal. PMID:20081977
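The extraction principle is simple enough to sketch: each dip in the encoded waveform marks a phase transition of the bipolar code, so counting the chips between successive dips reconstructs the code up to an overall sign. The helper below and its chip-index inputs are hypothetical, written only to illustrate the idea.

```python
def code_from_transitions(n_chips, transition_chips, first_chip=+1):
    """Rebuild a bipolar (+1/-1) code of length n_chips from the chip
    indices at which phase transitions (waveform dips) occur."""
    code, level = [], first_chip
    transitions = set(transition_chips)
    for chip in range(n_chips):
        if chip in transitions:      # a dip: the phase flips here
            level = -level
        code.append(level)
    return code

# Dips observed at chips 2 and 5 of an 8-chip code:
print(code_from_transitions(8, [2, 5]))
# [1, 1, -1, -1, -1, 1, 1, 1]
```

The sign ambiguity does not matter to the eavesdropper, since a decoder built from either sign of the recovered code despreads the signal.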

  5. Effects of location and timing of co-activated neurons in the auditory midbrain on cortical activity: implications for a new central auditory prosthesis

    NASA Astrophysics Data System (ADS)

    Straka, Małgorzata M.; McMahon, Melissa; Markovitz, Craig D.; Lim, Hubert H.

    2014-08-01

Objective. An increasing number of deaf individuals are being implanted with central auditory prostheses, but their performance has generally been poorer than for cochlear implant users. The goal of this study is to investigate stimulation strategies for improving hearing performance with a new auditory midbrain implant (AMI). Previous studies have shown that repeated electrical stimulation of a single site in each isofrequency lamina of the central nucleus of the inferior colliculus (ICC) causes strong suppressive effects in elicited responses within the primary auditory cortex (A1). Here we investigate if improved cortical activity can be achieved by co-activating neurons with different timing and locations across an ICC lamina and if this cortical activity varies across A1. Approach. We electrically stimulated two sites at different locations across an isofrequency ICC lamina using varying delays in ketamine-anesthetized guinea pigs. We recorded and analyzed spike activity and local field potentials across different layers and locations of A1. Results. Co-activating two sites within an isofrequency lamina with short inter-pulse intervals (<5 ms) could elicit cortical activity that is enhanced beyond a linear summation of activity elicited by the individual sites. A significantly greater extent of normalized cortical activity was observed for stimulation of the rostral-lateral region of an ICC lamina compared to the caudal-medial region. We did not identify any location trends across A1, but the most cortical enhancement was observed in supragranular layers, suggesting further integration of the stimuli through the cortical layers. Significance. The topographic organization identified by this study provides further evidence for the presence of functional zones across an ICC lamina with locations consistent with those identified by previous studies. Clinically, these results suggest that co-activating different neural populations in the rostral-lateral ICC rather

  6. Auditory and Visual Differences in Time Perception? An Investigation from a Developmental Perspective with Neuropsychological Tests

    ERIC Educational Resources Information Center

    Zelanti, Pierre S.; Droit-Volet, Sylvie

    2012-01-01

    Adults and children (5- and 8-year-olds) performed a temporal bisection task with either auditory or visual signals and either a short (0.5-1.0s) or long (4.0-8.0s) duration range. Their working memory and attentional capacities were assessed by a series of neuropsychological tests administered in both the auditory and visual modalities. Results…

  7. Adaptation to visual or auditory time intervals modulates the perception of visual apparent motion

    PubMed Central

    Zhang, Huihui; Chen, Lihan; Zhou, Xiaolin

    2012-01-01

    It is debated whether sub-second timing is subserved by a centralized mechanism or by the intrinsic properties of task-related neural activity in specific modalities (Ivry and Schlerf, 2008). By using a temporal adaptation task, we investigated whether adapting to different time intervals conveyed through stimuli in different modalities (i.e., frames of a visual Ternus display, visual blinking discs, or auditory beeps) would affect the subsequent implicit perception of visual timing, i.e., inter-stimulus interval (ISI) between two frames in a Ternus display. The Ternus display can induce two percepts of apparent motion (AM), depending on the ISI between the two frames: “element motion” for short ISIs, in which the endmost disc is seen as moving back and forth while the middle disc at the overlapping or central position remains stationary; “group motion” for longer ISIs, in which both discs appear to move in a manner of lateral displacement as a whole. In Experiment 1, participants adapted to either the typical “element motion” (ISI = 50 ms) or the typical “group motion” (ISI = 200 ms). In Experiments 2 and 3, participants adapted to a time interval of 50 or 200 ms through observing a series of two paired blinking discs at the center of the screen (Experiment 2) or hearing a sequence of two paired beeps (with pitch 1000 Hz). In Experiment 4, participants adapted to sequences of paired beeps with either low pitches (500 Hz) or high pitches (5000 Hz). After adaptation in each trial, participants were presented with a Ternus probe in which the ISI between the two frames was equal to the transitional threshold of the two types of motions, as determined by a pretest. Results showed that adapting to the short time interval in all the situations led to more reports of “group motion” in the subsequent Ternus probes; adapting to the long time interval, however, caused no aftereffect for visual adaptation but significantly more reports of group motion for

  8. Asynchrony adaptation reveals neural population code for audio-visual timing

    PubMed Central

    Roach, Neil W.; Heron, James; Whitaker, David; McGraw, Paul V.

    2011-01-01

The relative timing of auditory and visual stimuli is a critical cue for determining whether sensory signals relate to a common source and for making inferences about causality. However, the way in which the brain represents temporal relationships remains poorly understood. Recent studies indicate that our perception of multisensory timing is flexible—adaptation to a regular inter-modal delay alters the point at which subsequent stimuli are judged to be simultaneous. Here, we measure the effect of audio-visual asynchrony adaptation on the perception of a wide range of sub-second temporal relationships. We find distinctive patterns of induced biases that are inconsistent with previous explanations based on changes in perceptual latency. Instead, our results can be well accounted for by a neural population coding model in which: (i) relative audio-visual timing is represented by the distributed activity across a relatively small number of neurons tuned to different delays; (ii) the algorithm for reading out this population code is efficient, but subject to biases owing to under-sampling; and (iii) the effect of adaptation is to modify neuronal response gain. These results suggest that multisensory timing information is represented by a dedicated population code and that shifts in perceived simultaneity following asynchrony adaptation arise from analogous neural processes to well-known perceptual after-effects. PMID:20961905
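The population-coding account can be illustrated with a minimal sketch: a small bank of neurons tuned to different audio-visual delays, a centroid read-out, and adaptation modeled as a gain reduction of channels tuned near the adapting delay. The channel count, spacing, tuning width, and gain-change form are all assumptions for the example, not the paper's fitted model.

```python
import numpy as np

prefs = np.linspace(-300.0, 300.0, 7)   # preferred audio-visual delays (ms)
width = 120.0                           # assumed tuning width (ms)

def decoded_delay(delay, gains):
    """Centroid read-out of the delay-tuned population response."""
    r = gains * np.exp(-(delay - prefs) ** 2 / (2 * width ** 2))
    return float(np.sum(prefs * r) / np.sum(r))

baseline = np.ones_like(prefs)
# Adaptation as a response-gain reduction in channels tuned near the
# adapting delay (here, audio leading by 200 ms):
adapted = 1.0 - 0.5 * np.exp(-(prefs + 200.0) ** 2 / (2 * width ** 2))

print(abs(decoded_delay(0.0, baseline)) < 1e-6)  # True: veridical before adaptation
print(decoded_delay(0.0, adapted) > 0.0)         # True: simultaneity point shifts
```

The gain change repels the decoded delay away from the adapted value, which is the after-effect pattern the abstract attributes to modified neuronal response gain.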

  9. Real-time pseudocolor coding thermal ghost imaging.

    PubMed

    Duan, Deyang; Xia, Yunjie

    2014-01-01

    In this work, a color ghost image of a black-and-white object is obtained by a real-time pseudocolor coding technique that includes equal spatial frequency pseudocolor coding and equal density pseudocolor coding. This method makes the black-and-white ghost image more conducive to observation. Furthermore, since the ghost imaging comes from the intensity cross-correlations of the two beams, ghost imaging with the real-time pseudocolor coding technique is better than classical optical imaging with the same technique in overcoming the effects of light interference. PMID:24561954

  10. Precise inhibition is essential for microsecond interaural time difference coding

    NASA Astrophysics Data System (ADS)

    Brand, Antje; Behrend, Oliver; Marquardt, Torsten; McAlpine, David; Grothe, Benedikt

    2002-05-01

Microsecond differences in the arrival time of a sound at the two ears (interaural time differences, ITDs) are the main cue for localizing low-frequency sounds in space. Traditionally, ITDs are thought to be encoded by an array of coincidence-detector neurons, receiving excitatory inputs from the two ears via axons of variable length ('delay lines'), to create a topographic map of azimuthal auditory space. Compelling evidence for the existence of such a map in the mammalian ITD detector, the medial superior olive (MSO), however, is lacking. Equally puzzling is the role of a (temporally very precise) glycine-mediated inhibitory input to MSO neurons. Using in vivo recordings from the MSO of the Mongolian gerbil, we found the responses of ITD-sensitive neurons to be inconsistent with the idea of a topographic map of auditory space. Moreover, local application of glycine and its antagonist strychnine by iontophoresis (through glass pipette electrodes, by means of an electric current) revealed that precisely timed glycine-controlled inhibition is a critical part of the mechanism by which the physiologically relevant range of ITDs is encoded in the MSO. A computer model, simulating the response of a coincidence-detector neuron with bilateral excitatory inputs and a temporally precise contralateral inhibitory input, supports this conclusion.
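The modeling result in the last sentence can be illustrated with a toy coincidence detector: alpha-function PSPs, bilateral excitation, and a contralateral inhibition that slightly leads the contralateral excitation. The time constants, the 0.4 ms inhibitory lead, and the weights below are invented for illustration, not the paper's values.

```python
import numpy as np

t = np.arange(0.0, 5.0, 0.01)                    # time axis (ms)

def epsp(t0, tau=0.3):
    """Alpha-function postsynaptic potential starting at t0 (ms)."""
    s = np.clip(t - t0, 0.0, None)
    return (s / tau) * np.exp(1.0 - s / tau)

def peak_response(itd, inhibition=True):
    """Peak depolarization; negative itd means the contralateral input leads."""
    v = epsp(1.0) + epsp(1.0 + itd)              # ipsi + contra excitation
    if inhibition:
        v -= epsp(1.0 + itd - 0.4)               # IPSP leading the contra EPSP
    return v.max()

itds = np.linspace(-0.5, 0.5, 21)
best_no_inh = itds[np.argmax([peak_response(i, False) for i in itds])]
best_inh = itds[np.argmax([peak_response(i, True) for i in itds])]
print(float(best_no_inh))        # 0.0: best ITD sits at zero without inhibition
print(float(best_inh) < 0.0)     # True: leading inhibition shifts the best ITD
```

Without inhibition the detector peaks at zero ITD; the leading inhibition makes the ITD tuning asymmetric, moving the best ITD off zero into the contralateral-leading range, which is the qualitative point of the model.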

  11. Secular Slowing of Auditory Simple Reaction Time in Sweden (1959–1985)

    PubMed Central

    Madison, Guy; Woodley of Menie, Michael A.; Sänger, Justus

    2016-01-01

    There are indications that simple reaction time might have slowed in Western populations, based on both cohort- and multi-study comparisons. A possible limitation of the latter method in particular is measurement error stemming from methods variance, which results from the fact that instruments and experimental conditions change over time and between studies. We therefore set out to measure the simple auditory reaction time (SRT) of 7,081 individuals (2,997 males and 4,084 females) born in Sweden 1959–1985 (subjects were aged between 27 and 54 years at time of measurement). Depending on age cut-offs and adjustment for aging related slowing of SRT, the data indicate that SRT has increased by between 3 and 16 ms in the 27 birth years covered in the present sample. This slowing is unlikely to be explained by attrition, which was evaluated by comparing the general intelligence × birth-year interactions and standard deviations for both male participants and dropouts, utilizing military conscript cognitive ability data. The present result is consistent with previous studies employing alternative methods, and may indicate the operation of several synergistic factors, such as recent micro-evolutionary trends favoring lower g in Sweden and the effects of industrially produced neurotoxic substances on peripheral nerve conduction velocity. PMID:27588000

  12. Secular Slowing of Auditory Simple Reaction Time in Sweden (1959-1985).

    PubMed

    Madison, Guy; Woodley Of Menie, Michael A; Sänger, Justus

    2016-01-01

    There are indications that simple reaction time might have slowed in Western populations, based on both cohort- and multi-study comparisons. A possible limitation of the latter method in particular is measurement error stemming from methods variance, which results from the fact that instruments and experimental conditions change over time and between studies. We therefore set out to measure the simple auditory reaction time (SRT) of 7,081 individuals (2,997 males and 4,084 females) born in Sweden 1959-1985 (subjects were aged between 27 and 54 years at time of measurement). Depending on age cut-offs and adjustment for aging related slowing of SRT, the data indicate that SRT has increased by between 3 and 16 ms in the 27 birth years covered in the present sample. This slowing is unlikely to be explained by attrition, which was evaluated by comparing the general intelligence × birth-year interactions and standard deviations for both male participants and dropouts, utilizing military conscript cognitive ability data. The present result is consistent with previous studies employing alternative methods, and may indicate the operation of several synergistic factors, such as recent micro-evolutionary trends favoring lower g in Sweden and the effects of industrially produced neurotoxic substances on peripheral nerve conduction velocity. PMID:27588000

  13. Emergent Auditory Feature Tuning in a Real-Time Neuromorphic VLSI System

    PubMed Central

    Sheik, Sadique; Coath, Martin; Indiveri, Giacomo; Denham, Susan L.; Wennekers, Thomas; Chicca, Elisabetta

    2011-01-01

    Many sounds of ecological importance, such as communication calls, are characterized by time-varying spectra. However, most neuromorphic auditory models to date have focused on distinguishing mainly static patterns, under the assumption that dynamic patterns can be learned as sequences of static ones. In contrast, the emergence of dynamic feature sensitivity through exposure to formative stimuli has been recently modeled in a network of spiking neurons based on the thalamo-cortical architecture. The proposed network models the effect of lateral and recurrent connections between cortical layers, distance-dependent axonal transmission delays, and learning in the form of Spike Timing Dependent Plasticity (STDP), which effects stimulus-driven changes in the pattern of network connectivity. In this paper we demonstrate how these principles can be efficiently implemented in neuromorphic hardware. In doing so we address two principal problems in the design of neuromorphic systems: real-time event-based asynchronous communication in multi-chip systems, and the realization in hybrid analog/digital VLSI technology of neural computational principles that we propose underlie plasticity in neural processing of dynamic stimuli. The result is a hardware neural network that learns in real time and shows preferential responses, after exposure, to stimuli exhibiting particular spectro-temporal patterns. The availability of hardware on which the model can be implemented makes this a significant step toward the development of adaptive, neurobiologically plausible, spike-based, artificial sensory systems. PMID:22347163
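    The pair-based STDP rule named in this abstract can be sketched as an exponentially decaying weight update: pre-before-post spike pairs potentiate a synapse, post-before-pre pairs depress it. The amplitudes and time constants below are illustrative assumptions, not the parameters of the cited VLSI implementation.

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for a pre/post spike-time difference.

    dt = t_post - t_pre in ms. Pre-before-post (dt > 0) potentiates,
    post-before-pre (dt < 0) depresses; the magnitude decays
    exponentially with |dt|. Parameter values are illustrative.
    """
    if dt > 0:
        return a_plus * math.exp(-dt / tau_plus)
    elif dt < 0:
        return -a_minus * math.exp(dt / tau_minus)
    return 0.0
```

    Accumulating these updates over stimulus-driven spike pairs is what lets the connectivity pattern come to reflect the spectro-temporal structure of the exposure stimuli.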

  14. Neural Basis of the Time Window for Subjective Motor-Auditory Integration

    PubMed Central

    Toida, Koichi; Ueno, Kanako; Shimada, Sotaro

    2016-01-01

    Temporal contiguity between an action and corresponding auditory feedback is crucial to the perception of self-generated sound. However, the neural mechanisms underlying motor–auditory temporal integration are unclear. Here, we conducted four experiments with an oddball paradigm to examine the specific event-related potentials (ERPs) elicited by delayed auditory feedback for a self-generated action. The first experiment confirmed that a pitch-deviant auditory stimulus elicits mismatch negativity (MMN) and P300, both when it is generated passively and by the participant’s action. In our second and third experiments, we investigated the ERP components elicited by delayed auditory feedback for a self-generated action. We found that delayed auditory feedback elicited an enhancement of P2 (enhanced-P2) and a N300 component, which were apparently different from the MMN and P300 components observed in the first experiment. We further investigated the sensitivity of the enhanced-P2 and N300 to delay length in our fourth experiment. Strikingly, the amplitude of the N300 increased as a function of the delay length. Additionally, the N300 amplitude was significantly correlated with the conscious detection of the delay (the 50% detection point was around 200 ms), and hence reduction in the feeling of authorship of the sound (the sense of agency). In contrast, the enhanced-P2 was most prominent in short-delay (≤200 ms) conditions and diminished in long-delay conditions. Our results suggest that different neural mechanisms are employed for the processing of temporally deviant and pitch-deviant auditory feedback. Additionally, the temporal window for subjective motor–auditory integration is likely about 200 ms, as indicated by these auditory ERP components. PMID:26779000

  15. Hearing aid gain prescriptions balance restoration of auditory nerve mean-rate and spike-timing representations of speech.

    PubMed

    Dinath, Faheem; Bruce, Ian C

    2008-01-01

    Linear and nonlinear amplification schemes for hearing aids have thus far been developed and evaluated based on perceptual criteria such as speech intelligibility, sound comfort, and loudness equalization. Finding amplification schemes that optimize all of these perceptual metrics has proven difficult. Using a physiological model, Bruce et al. [1] investigated the effects of single-band gain adjustments to linear amplification prescriptions. Optimal gain adjustments for model auditory-nerve fiber responses to speech sentences from the TIMIT database were dependent on whether the error metric included the spike timing information (i.e., a time-resolution of several microseconds) or the mean firing rates (i.e., a time-resolution of several milliseconds). Results showed that positive gain adjustments are required to optimize the mean firing rate responses, whereas negative gain adjustments tend to optimize spike timing information responses. In this paper we examine the results in more depth using a similar optimization scheme applied to a synthetic vowel /E/. It is found that negative gain adjustments (i.e., below the linear gain prescriptions) minimize the spread of synchrony and deviation of the phase response to vowel formants in responses containing spike-timing information. In contrast, positive gain adjustments (i.e., above the linear gain prescriptions) normalize the distribution of mean discharge rates in the auditory nerve responses. Thus, linear amplification prescriptions appear to find a balance between restoring the spike-timing and mean-rate information in auditory-nerve responses. PMID:19163029

  16. Average discharge rate representation of voice onset time in the chinchilla auditory nerve

    SciTech Connect

    Sinex, D.G.; McDonald, L.P.

    1988-05-01

    Responses of chinchilla auditory-nerve fibers to synthesized stop consonants differing in voice onset time (VOT) were obtained. The syllables, heard as /ga/--/ka/ or /da/--/ta/, were similar to those previously used by others in psychophysical experiments with human and with chinchilla subjects. Average discharge rates of neurons tuned to the frequency region near the first formant generally increased at the onset of voicing, for VOTs longer than 20 ms. These rate increases were closely related to spectral amplitude changes associated with the onset of voicing and with the activation of the first formant; as a result, they provided accurate information about VOT. Neurons tuned to frequency regions near the second and third formants did not encode VOT in their average discharge rates. Modulations in the average rates of these neurons reflected spectral variations that were independent of VOT. The results are compared to other measurements of the peripheral encoding of speech sounds and to psychophysical observations suggesting that syllables with large variations in VOT are heard as belonging to one of only two phonemic categories.

  17. Spike timing precision changes with spike rate adaptation in the owl's auditory space map.

    PubMed

    Keller, Clifford H; Takahashi, Terry T

    2015-10-01

    Spike rate adaptation (SRA) is a continuing change of responsiveness to ongoing stimuli, which is ubiquitous across species and levels of sensory systems. Under SRA, auditory responses to constant stimuli change over time, relaxing toward a long-term rate often over multiple timescales. With more variable stimuli, SRA causes the dependence of spike rate on sound pressure level to shift toward the mean level of recent stimulus history. A model based on subtractive adaptation (Benda J, Hennig RM. J Comput Neurosci 24: 113-136, 2008) shows that changes in spike rate and level dependence are mechanistically linked. Space-specific neurons in the barn owl's midbrain, when recorded under ketamine-diazepam anesthesia, showed these classical characteristics of SRA, while at the same time exhibiting changes in spike timing precision. Abrupt level increases of sinusoidally amplitude-modulated (SAM) noise initially led to spiking at higher rates with lower temporal precision. Spike rate and precision relaxed toward their long-term values with a time course similar to SRA, results that were also replicated by the subtractive model. Stimuli whose amplitude modulations (AMs) were not synchronous across carrier frequency evoked spikes in response to stimulus envelopes of a particular shape, characterized by the spectrotemporal receptive field (STRF). Again, abrupt stimulus level changes initially disrupted the temporal precision of spiking, which then relaxed along with SRA. We suggest that shifts in latency associated with stimulus level changes may differ between carrier frequency bands and underlie decreased spike precision. Thus SRA is manifest not simply as a change in spike rate but also as a change in the temporal precision of spiking. PMID:26269555
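    The subtractive adaptation idea this abstract builds on (Benda and Hennig, 2008) can be caricatured in a few lines: the firing rate is the drive minus an adaptation variable, and the adaptation variable relaxes toward a rate-dependent level. This is a minimal sketch with made-up parameters, not the authors' fitted model.

```python
def subtractive_adaptation(stim, gain=1.0, alpha=0.5, tau=100.0, dt=1.0):
    """Rate model with a subtractive adaptation variable A:

        f(t) = max(0, gain * I(t) - A(t))
        tau * dA/dt = -A + alpha * f

    A step in I produces an initial high rate that relaxes toward a
    lower adapted rate, the signature of SRA. Parameters illustrative.
    """
    a, rates = 0.0, []
    for i_t in stim:
        f = max(0.0, gain * i_t - a)
        a += dt * (-a + alpha * f) / tau
        rates.append(f)
    return rates
```

    For a unit step the adapted rate settles at I / (1 + alpha), below the onset rate, mirroring the relaxation toward a long-term rate described above.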

  18. Alamouti-type polarization-time coding in coded-modulation schemes with coherent detection.

    PubMed

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2008-09-01

    We present an Alamouti-type polarization-time (PT) coding scheme suitable for use in multilevel (M ≥ 2) block-coded modulation schemes with coherent detection. The PT decoder is found to be similar to the Alamouti combiner. We also describe how to determine the symbols' log-likelihood ratios in the presence of laser phase noise. We show that the proposed scheme is able to compensate for even 800 ps of differential group delay, for a system operating at 10 Gb/s, with negligible penalty. The proposed scheme outperforms the equal-gain combining polarization-diversity OFDM scheme. However, the polarization-diversity coded-OFDM and PT-coding-based coded-OFDM schemes perform comparably. The proposed scheme has the potential to double the spectral efficiency compared to polarization-diversity schemes. PMID:18773025
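    For reference, the underlying Alamouti scheme (written here for two polarization tributaries and a single coherent receiver, without the paper's phase-noise handling) encodes two symbols across two time slots and recovers them with a linear combiner. This is a generic textbook sketch, not the authors' PT-coded OFDM system.

```python
def alamouti_encode(s1, s2):
    """Map two complex symbols onto two time slots of the two
    polarizations (x, y): slot 1 sends (s1, s2), slot 2 sends
    (-s2*, s1*)."""
    return [(s1, s2), (-s2.conjugate(), s1.conjugate())]

def alamouti_combine(r1, r2, h1, h2):
    """Linear Alamouti combiner for received samples r1, r2 and
    channel gains h1, h2 (one per polarization). With a noiseless
    channel each estimate equals (|h1|^2 + |h2|^2) * symbol."""
    s1_hat = h1.conjugate() * r1 + h2 * r2.conjugate()
    s2_hat = h2.conjugate() * r1 - h1 * r2.conjugate()
    return s1_hat, s2_hat
```

    Dividing each estimate by |h1|² + |h2|² recovers the transmitted symbols exactly in the noiseless case, which is why the decoder reduces to a simple symbol-by-symbol slicer.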

  19. Coding for Communication Channels with Dead-Time Constraints

    NASA Technical Reports Server (NTRS)

    Moision, Bruce; Hamkins, Jon

    2004-01-01

    Coding schemes have been designed and investigated specifically for optical and electronic data-communication channels in which information is conveyed via pulse-position modulation (PPM) subject to dead-time constraints. These schemes involve the use of error-correcting codes concatenated with codes denoted constrained codes. These codes are decoded using an iterative method. In pulse-position modulation, time is partitioned into frames of M slots of equal duration. Each frame contains one pulsed slot (all others are non-pulsed). For a given channel, the dead-time constraints are defined as a maximum and a minimum on the allowable time between pulses. For example, if a Q-switched laser is used to transmit the pulses, then the minimum allowable dead time is the time needed to recharge the laser for the next pulse. In the case of bits recorded on a magnetic medium, the minimum allowable time between pulses depends on the recording/playback speed and the minimum distance between pulses needed to prevent interference between adjacent bits during readout. The maximum allowable dead time for a given channel is the maximum time for which it is possible to satisfy the requirement to synchronize slots. In mathematical shorthand, the dead-time constraints for a given channel are represented by the pair of integers (d,k), where d is the minimum allowable number of zeroes between ones and k is the maximum allowable number of zeroes between ones. A system of the type to which the present schemes apply is represented by a binary-input, real-valued-output channel model illustrated in the figure. At the transmitting end, information bits are first encoded by use of an error-correcting code, then further encoded by use of a constrained code. Several constrained codes for channels subject to constraints of (d,infinity) have been investigated theoretically and computationally. The baseline codes chosen for purposes of comparison were simple PPM codes characterized by M-slot PPM
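    The (d,k) run-length constraint described above is easy to state in code. A minimal sketch (the function name and the choice to ignore leading/trailing zero runs are my assumptions, not part of the cited schemes) that checks a pulse sequence against the constraint:

```python
def satisfies_dk(bits, d, k=None):
    """Return True if every run of zeros *between* consecutive ones in
    `bits` has length in [d, k]; k=None means k = infinity, i.e. a
    (d, inf) constraint. Leading and trailing zero runs are ignored
    in this sketch."""
    ones = [i for i, b in enumerate(bits) if b == 1]
    for a, b in zip(ones, ones[1:]):
        gap = b - a - 1  # number of zeros separating the two pulses
        if gap < d or (k is not None and gap > k):
            return False
    return True

# An M-slot PPM frame is M bits with exactly one 1; concatenated frames
# can violate (d, k) at frame boundaries unless a constrained code
# restricts the allowed pulse positions, which is the role of the
# constrained code in the concatenation above.
```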

  20. Neuronal code for extended time in the hippocampus

    PubMed Central

    Mankin, Emily A.; Sparks, Fraser T.; Slayyeh, Begum; Sutherland, Robert J.; Leutgeb, Stefan; Leutgeb, Jill K.

    2012-01-01

    The time when an event occurs can become part of autobiographical memories. In brain structures that support such memories, a neural code should exist that represents when or how long ago events occurred. Here we describe a neuronal coding mechanism in hippocampus that can be used to represent the recency of an experience over intervals of hours to days. When the same event is repeated after such time periods, the activity patterns of hippocampal CA1 cell populations progressively differ with increasing temporal distances. Coding for space and context is nonetheless preserved. Compared with CA1, the firing patterns of hippocampal CA3 cell populations are highly reproducible, irrespective of the time interval, and thus provide a stable memory code over time. Therefore, the neuronal activity patterns in CA1 but not CA3 include a code that can be used to distinguish between time intervals on an extended scale, consistent with behavioral studies showing that the CA1 area is selectively required for temporal coding over such periods. PMID:23132944

  1. Neural coding of interaural time differences with bilateral cochlear implants: effects of congenital deafness.

    PubMed

    Hancock, Kenneth E; Noel, Victor; Ryugo, David K; Delgutte, Bertrand

    2010-10-20

    Human bilateral cochlear implant users do poorly on tasks involving interaural time differences (ITD), a cue that provides important benefits to the normal hearing, especially in challenging acoustic environments, yet the precision of neural ITD coding in acutely deafened, bilaterally implanted cats is essentially normal (Smith and Delgutte, 2007a). One explanation for this discrepancy is that the extended periods of binaural deprivation typically experienced by cochlear implant users degrades neural ITD sensitivity, by either impeding normal maturation of the neural circuitry or altering it later in life. To test this hypothesis, we recorded from single units in inferior colliculus of two groups of bilaterally implanted, anesthetized cats that contrast maximally in binaural experience: acutely deafened cats, which had normal binaural hearing until experimentation, and congenitally deaf white cats, which received no auditory inputs until the experiment. Rate responses of only half as many neurons showed significant ITD sensitivity to low-rate pulse trains in congenitally deaf cats compared with acutely deafened cats. For neurons that were ITD sensitive, ITD tuning was broader and best ITDs were more variable in congenitally deaf cats, leading to poorer ITD coding within the naturally occurring range. A signal detection model constrained by the observed physiology supports the idea that the degraded neural ITD coding resulting from deprivation of binaural experience contributes to poor ITD discrimination by human implantees. PMID:20962228

  2. Bimodal stimulus timing-dependent plasticity in primary auditory cortex is altered after noise exposure with and without tinnitus.

    PubMed

    Basura, Gregory J; Koehler, Seth D; Shore, Susan E

    2015-12-01

    Central auditory circuits are influenced by the somatosensory system, a relationship that may underlie tinnitus generation. In the guinea pig dorsal cochlear nucleus (DCN), pairing spinal trigeminal nucleus (Sp5) stimulation with tones at specific intervals and orders facilitated or suppressed subsequent tone-evoked neural responses, reflecting spike timing-dependent plasticity (STDP). Furthermore, after noise-induced tinnitus, bimodal responses in DCN were shifted from Hebbian to anti-Hebbian timing rules with less discrete temporal windows, suggesting a role for bimodal plasticity in tinnitus. Here, we aimed to determine if multisensory STDP principles like those in DCN also exist in primary auditory cortex (A1), and whether they change following noise-induced tinnitus. Tone-evoked and spontaneous neural responses were recorded before and 15 min after bimodal stimulation in which the intervals and orders of auditory-somatosensory stimuli were randomized. Tone-evoked and spontaneous firing rates were influenced by the interval and order of the bimodal stimuli, and in sham-controls Hebbian-like timing rules predominated as was seen in DCN. In noise-exposed animals with and without tinnitus, timing rules shifted away from those found in sham-controls to more anti-Hebbian rules. Only those animals with evidence of tinnitus showed increased spontaneous firing rates, a purported neurophysiological correlate of tinnitus in A1. Together, these findings suggest that bimodal plasticity is also evident in A1 following noise damage and may have implications for tinnitus generation and therapeutic intervention across the central auditory circuit. PMID:26289461

  3. Perceptual Distortions in Pitch and Time Reveal Active Prediction and Support for an Auditory Pitch-Motion Hypothesis

    PubMed Central

    Henry, Molly J.; McAuley, J. Devin

    2013-01-01

    A number of accounts of human auditory perception assume that listeners use prior stimulus context to generate predictions about future stimulation. Here, we tested an auditory pitch-motion hypothesis that was developed from this perspective. Listeners judged either the time change (i.e., duration) or pitch change of a comparison frequency glide relative to a standard (referent) glide. Under a constant-velocity assumption, listeners were hypothesized to use the pitch velocity (Δf/Δt) of the standard glide to generate predictions about the pitch velocity of the comparison glide, leading to perceptual distortions along the to-be-judged dimension when the velocities of the two glides differed. These predictions were borne out in the pattern of relative points of subjective equality by a significant three-way interaction between the velocities of the two glides and task. In general, listeners’ judgments along the task-relevant dimension (pitch or time) were affected by expectations generated by the constant-velocity standard, but in an opposite manner for the two stimulus dimensions. When the comparison glide velocity was faster than the standard, listeners overestimated time change, but underestimated pitch change, whereas when the comparison glide velocity was slower than the standard, listeners underestimated time change, but overestimated pitch change. Perceptual distortions were least evident when the velocities of the standard and comparison glides were matched. Fits of an imputed velocity model further revealed increasingly larger distortions at faster velocities. The present findings provide support for the auditory pitch-motion hypothesis and add to a larger body of work revealing a role for active prediction in human auditory perception. PMID:23936462

  4. Censored Distributed Space-Time Coding for Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Yiu, S.; Schober, R.

    2007-12-01

    We consider the application of distributed space-time coding in wireless sensor networks (WSNs). In particular, sensors use a common noncoherent distributed space-time block code (DSTBC) to forward their local decisions to the fusion center (FC) which makes the final decision. We show that the performance of distributed space-time coding is negatively affected by erroneous sensor decisions caused by observation noise. To overcome this problem of error propagation, we introduce censored distributed space-time coding where only reliable decisions are forwarded to the FC. The optimum noncoherent maximum-likelihood and a low-complexity, suboptimum generalized likelihood ratio test (GLRT) FC decision rules are derived and the performance of the GLRT decision rule is analyzed. Based on this performance analysis we derive a gradient algorithm for optimization of the local decision/censoring threshold. Numerical and simulation results show the effectiveness of the proposed censoring scheme making distributed space-time coding a prime candidate for signaling in WSNs.

  5. The GOES Time Code Service, 1974-2004: A Retrospective.

    PubMed

    Lombardi, Michael A; Hanson, D Wayne

    2005-01-01

    NIST ended its Geostationary Operational Environmental Satellites (GOES) time code service at 0 hours, 0 minutes Coordinated Universal Time (UTC) on January 1, 2005. To commemorate the end of this historically significant service, this article provides a retrospective look at the GOES service and the important role it played in the history of satellite timekeeping. PMID:27308105

  6. The GOES Time Code Service, 1974–2004: A Retrospective

    PubMed Central

    Lombardi, Michael A.; Hanson, D. Wayne

    2005-01-01

    NIST ended its Geostationary Operational Environmental Satellites (GOES) time code service at 0 hours, 0 minutes Coordinated Universal Time (UTC) on January 1, 2005. To commemorate the end of this historically significant service, this article provides a retrospective look at the GOES service and the important role it played in the history of satellite timekeeping. PMID:27308105

  7. Auditory Distance Coding in Rabbit Midbrain Neurons and Human Perception: Monaural Amplitude Modulation Depth as a Cue

    PubMed Central

    Zahorik, Pavel; Carney, Laurel H.; Bishop, Brian B.; Kuwada, Shigeyuki

    2015-01-01

    Mechanisms underlying sound source distance localization are not well understood. Here we tested the hypothesis that a novel mechanism can create monaural distance sensitivity: a combination of auditory midbrain neurons' sensitivity to amplitude modulation (AM) depth and distance-dependent loss of AM in reverberation. We used virtual auditory space (VAS) methods for sounds at various distances in anechoic and reverberant environments. Stimulus level was constant across distance. With increasing modulation depth, some rabbit inferior colliculus neurons increased firing rates whereas others decreased. These neurons exhibited monotonic relationships between firing rates and distance for monaurally presented noise when two conditions were met: (1) the sound had AM, and (2) the environment was reverberant. The firing rates as a function of distance remained approximately constant without AM in either environment and, in an anechoic condition, even with AM. We corroborated this finding by reproducing the distance sensitivity using a neural model. We also conducted a human psychophysical study using similar methods. Normal-hearing listeners reported perceived distance in response to monaural 1 octave 4 kHz noise source sounds presented at distances of 35–200 cm. We found parallels between the rabbit neural and human responses. In both, sound distance could be discriminated only if the monaural sound in reverberation had AM. These observations support the hypothesis. When other cues are available (e.g., in binaural hearing), how much the auditory system actually uses the AM as a distance cue remains to be determined. PMID:25834060

  8. Auditory distance coding in rabbit midbrain neurons and human perception: monaural amplitude modulation depth as a cue.

    PubMed

    Kim, Duck O; Zahorik, Pavel; Carney, Laurel H; Bishop, Brian B; Kuwada, Shigeyuki

    2015-04-01

    Mechanisms underlying sound source distance localization are not well understood. Here we tested the hypothesis that a novel mechanism can create monaural distance sensitivity: a combination of auditory midbrain neurons' sensitivity to amplitude modulation (AM) depth and distance-dependent loss of AM in reverberation. We used virtual auditory space (VAS) methods for sounds at various distances in anechoic and reverberant environments. Stimulus level was constant across distance. With increasing modulation depth, some rabbit inferior colliculus neurons increased firing rates whereas others decreased. These neurons exhibited monotonic relationships between firing rates and distance for monaurally presented noise when two conditions were met: (1) the sound had AM, and (2) the environment was reverberant. The firing rates as a function of distance remained approximately constant without AM in either environment and, in an anechoic condition, even with AM. We corroborated this finding by reproducing the distance sensitivity using a neural model. We also conducted a human psychophysical study using similar methods. Normal-hearing listeners reported perceived distance in response to monaural 1 octave 4 kHz noise source sounds presented at distances of 35-200 cm. We found parallels between the rabbit neural and human responses. In both, sound distance could be discriminated only if the monaural sound in reverberation had AM. These observations support the hypothesis. When other cues are available (e.g., in binaural hearing), how much the auditory system actually uses the AM as a distance cue remains to be determined. PMID:25834060
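    The amplitude-modulation depth manipulated in this study can be computed from an envelope's extrema, and the proposed distance cue rests on reverberation smoothing the envelope. A toy illustration follows; the moving-average "room" is a crude stand-in assumption, not the authors' virtual-auditory-space simulation.

```python
import math

def am_depth(envelope):
    """Modulation depth index m = (Emax - Emin) / (Emax + Emin)."""
    emax, emin = max(envelope), min(envelope)
    return (emax - emin) / (emax + emin)

# SAM envelope with 80% modulation depth.
env = [1 + 0.8 * math.sin(2 * math.pi * i / 32) for i in range(256)]

# A moving average stands in for the envelope smearing that
# reverberation produces as source distance grows.
smeared = [sum(env[max(0, i - 7):i + 1]) / (i + 1 - max(0, i - 7))
           for i in range(len(env))]
```

    The smeared envelope has a smaller depth than the original, mirroring the distance-dependent loss of AM that the midbrain neurons' modulation-depth sensitivity could read out as a monaural distance cue.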

  9. The role of GABAergic inhibition in processing of interaural time difference in the owl's auditory system.

    PubMed

    Fujita, I; Konishi, M

    1991-03-01

    The barn owl uses interaural time differences (ITDs) to localize the azimuthal position of sound. ITDs are processed by an anatomically distinct pathway in the brainstem. Neuronal selectivity for ITD is generated in the nucleus laminaris (NL) and conveyed to both the anterior portion of the ventral nucleus of the lateral lemniscus (VLVa) and the central (ICc) and external (ICx) nuclei of the inferior colliculus. With tonal stimuli, neurons in all regions are found to respond maximally not only to the real ITD, but also to ITDs that differ by integer multiples of the tonal period. This phenomenon, phase ambiguity, does not occur when ICx neurons are stimulated with noise. The main aim of this study was to determine the role of GABAergic inhibition in the processing of ITDs. Selectivity for ITD is similar in the NL and VLVa and improves in the ICc and ICx. Iontophoresis of bicuculline methiodide (BMI), a selective GABAA antagonist, decreased the ITD selectivity of ICc and ICx neurons, but did not affect that of VLVa neurons. Responses of VLVa and ICc neurons to unfavorable ITDs were below the monaural response levels. BMI raised both binaural responses to unfavorable ITDs and monaural responses, though the former remained smaller than the latter. During BMI application, ICx neurons showed phase ambiguity to noise stimuli and no longer responded to a unique ITD. BMI increased the response magnitude and changed the temporal discharge patterns in the VLVa, ICc, and ICx. Iontophoretically applied GABA exerted effects opposite to those of BMI, and the effects could be antagonized with simultaneous application of BMI. These results suggest that GABAergic inhibition (1) sharpens ITD selectivity in the ICc and ICx, (2) contributes to the elimination of phase ambiguity in the ICx, and (3) controls response magnitude and temporal characteristics in the VLVa, ICc, and ICx. Through these actions, GABAergic inhibition shapes the horizontal dimension of the auditory receptive

  10. Effect of Auditory Motion Velocity on Reaction Time and Cortical Processes

    ERIC Educational Resources Information Center

    Getzmann, Stephan

    2009-01-01

    The study investigated the processing of sound motion, employing a psychophysical motion discrimination task in combination with electroencephalography. Following stationary auditory stimulation from a central space position, the onset of left- and rightward motion elicited a specific cortical response that was lateralized to the hemisphere…

  11. The Time-Course of Auditory and Visual Distraction Effects in a New Crossmodal Paradigm

    ERIC Educational Resources Information Center

    Bendixen, Alexandra; Grimm, Sabine; Deouell, Leon Y.; Wetzel, Nicole; Madebach, Andreas; Schroger, Erich

    2010-01-01

    Vision often dominates audition when attentive processes are involved (e.g., the ventriloquist effect), yet little is known about the relative potential of the two modalities to initiate a "break through of the unattended". The present study was designed to systematically compare the capacity of task-irrelevant auditory and visual events to…

  12. A Latent Consolidation Phase in Auditory Identification Learning: Time in the Awake State Is Sufficient

    ERIC Educational Resources Information Center

    Roth, Daphne Ari-Even; Kishon-Rabin, Liat; Hildesheimer, Minka; Karni, Avi

    2005-01-01

    Large gains in performance, evolving hours after practice has terminated, were reported in a number of visual and some motor learning tasks, as well as recently in an auditory nonverbal discrimination task. It was proposed that these gains reflect a latent phase of experience-triggered memory consolidation in human skill learning. It is not clear,…

  13. Auditory Learning Using a Portable Real-Time Vocoder: Preliminary Findings

    ERIC Educational Resources Information Center

    Casserly, Elizabeth D.; Pisoni, David B.

    2015-01-01

    Purpose: Although traditional study of auditory training has been in controlled laboratory settings, interest has been increasing in more interactive options. The authors examine whether such interactive training can result in short-term perceptual learning, and the range of perceptual skills it impacts. Method: Experiments 1 (N = 37) and 2 (N =…

  14. Coding and Centering of Time in Latent Curve Models in the Presence of Interindividual Time Heterogeneity

    ERIC Educational Resources Information Center

    Blozis, Shelley A.; Cho, Young Il

    2008-01-01

    The coding of time in latent curve models has been shown to have important implications in the interpretation of growth parameters. Centering time is often done to improve interpretation but may have consequences for estimated parameters. This article studies the effects of coding and centering time when there is interindividual heterogeneity in…
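    The interpretive effect of coding and centering time that this abstract describes can be seen even in a plain regression: re-centering time moves the intercept to the newly coded zero point while leaving the slope unchanged. A minimal sketch, with ordinary least squares standing in for the latent curve model's fixed effects and made-up data:

```python
def fit_line(t, y):
    """OLS intercept and slope: y ~ intercept + slope * t."""
    n = len(t)
    tbar = sum(t) / n
    ybar = sum(y) / n
    slope = (sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
             / sum((ti - tbar) ** 2 for ti in t))
    return ybar - slope * tbar, slope

# Same outcomes, two codings of time: the slope (growth rate) is
# invariant, but the intercept refers to whichever occasion is coded 0.
y = [2.0, 3.1, 3.9, 5.2]
t0 = [0, 1, 2, 3]            # intercept = expected value at first wave
tc = [-1.5, -0.5, 0.5, 1.5]  # centered: intercept = value at mean time
```

    With individually varying measurement times the same logic applies occasion by occasion, which is why centering choices matter for interpreting growth parameters.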

  15. Method for run time hardware code profiling for algorithm acceleration

    NASA Astrophysics Data System (ADS)

    Matev, Vladimir; de la Torre, Eduardo; Riesgo, Teresa

    2009-05-01

    In this paper we propose a method for run-time profiling of applications at the instruction level by analysis of loops. Instead of looking for coarse-grain blocks, we concentrate on fine-grain blocks that are nevertheless costly in terms of execution time. Most code profiling is done in software by inserting instrumentation code into the application under profile, which incurs time overhead; in this work, data on a loop's position, body, size, and number of executions are stored and analysed by a small, non-intrusive hardware block. The paper describes the mapping of the system to run-time reconfigurable systems. Synthesis results for the fine-grain code detector block and verification of its functionality are also presented. To demonstrate the concept, the MediaBench multimedia benchmark is run on the chosen development platform.

  16. Subcortical modulation in auditory processing and auditory hallucinations.

    PubMed

    Ikuta, Toshikazu; DeRosse, Pamela; Argyelan, Miklos; Karlsgodt, Katherine H; Kingsley, Peter B; Szeszko, Philip R; Malhotra, Anil K

    2015-12-15

    Hearing perception in individuals with auditory hallucinations has not been well studied. Auditory hallucinations have previously been shown to involve primary auditory cortex activation. This activation suggests that auditory hallucinations activate the terminal of the auditory pathway as if auditory signals are submitted from the cochlea, and that a hallucinatory event is therefore perceived as hearing. The primary auditory cortex is stimulated by some unknown source that is outside of the auditory pathway. The current study aimed to assess the outcomes of stimulating the primary auditory cortex through the auditory pathway in individuals who have experienced auditory hallucinations. Sixteen patients with schizophrenia underwent functional magnetic resonance imaging (fMRI) sessions, as well as hallucination assessments. During the fMRI session, auditory stimuli were presented in one-second intervals at times when scanner noise was absent. Participants listened to auditory stimuli of sine waves (SW) (4-5.5 kHz), English words (EW), and acoustically reversed English words (arEW) in a block design fashion. The arEW were employed to deliver the sound of a human voice with minimal linguistic components. Patients' auditory hallucination severity was assessed by the auditory hallucination item of the Brief Psychiatric Rating Scale (BPRS). During perception of arEW when compared with perception of SW, bilateral activation of the globus pallidus correlated with severity of auditory hallucinations. EW when compared with arEW did not correlate with auditory hallucination severity. Our findings suggest that the sensitivity of the globus pallidus to the human voice is associated with the severity of auditory hallucination. PMID:26275927

  17. Transformation from a pure time delay to a mixed time and phase delay representation in the auditory forebrain pathway.

    PubMed

    Vonderschen, Katrin; Wagner, Hermann

    2012-04-25

    Birds and mammals exploit interaural time differences (ITDs) for sound localization. Subsequent to ITD detection by brainstem neurons, ITD processing continues in parallel midbrain and forebrain pathways. In the barn owl, both ITD detection and processing in the midbrain are specialized to extract ITDs independent of frequency, which amounts to a pure time delay representation. Recent results have elucidated different mechanisms of ITD detection in mammals, which lead to a representation of small ITDs in high-frequency channels and large ITDs in low-frequency channels, resembling a phase delay representation. However, the detection mechanism does not prevent a change in ITD representation at higher processing stages. Here we analyze ITD tuning across frequency channels with pure tone and noise stimuli in neurons of the barn owl's auditory arcopallium, a nucleus at the endpoint of the forebrain pathway. To extend the analysis of ITD representation across frequency bands to a large neural population, we employed Fourier analysis for the spectral decomposition of ITD curves recorded with noise stimuli. This method was validated using physiological as well as model data. We found that low frequencies convey sensitivity to large ITDs, whereas high frequencies convey sensitivity to small ITDs. Moreover, different linear phase frequency regimes in the high-frequency and low-frequency ranges suggested an independent convergence of inputs from these frequency channels. Our results are consistent with ITD being remodeled toward a phase delay representation along the forebrain pathway. This indicates that sensory representations may undergo substantial reorganization, presumably in relation to specific behavioral output. PMID:22539852

  18. Using LDPC Code Constraints to Aid Recovery of Symbol Timing

    NASA Technical Reports Server (NTRS)

    Jones, Christopher; Villasenor, John; Lee, Dong-U; Valles, Esteban

    2008-01-01

    A method of utilizing information available in the constraints imposed by a low-density parity-check (LDPC) code has been proposed as a means of aiding the recovery of symbol timing in the reception of a binary-phase-shift-keying (BPSK) signal representing such a code in the presence of noise, timing error, and/or Doppler shift between the transmitter and the receiver. This method and the receiver architecture in which it would be implemented belong to a class of timing-recovery methods and corresponding receiver architectures characterized as pilotless in that they do not require transmission and reception of pilot signals. Acquisition and tracking of a signal of the type described above have traditionally been performed upstream of, and independently of, decoding and have typically involved utilization of a phase-locked loop (PLL). However, the LDPC decoding process, which is iterative, provides information that can be fed back to the timing-recovery receiver circuits to improve performance significantly over that attainable in the absence of such feedback. Prior methods of coupling LDPC decoding with timing recovery had focused on the use of output code words produced as the iterations progress. In contrast, in the present method, one exploits the information available from the metrics computed for the constraint nodes of an LDPC code during the decoding process. In addition, the method involves the use of a waveform model that captures, better than do the waveform models of the prior methods, distortions introduced by receiver timing errors and transmitter/receiver motions. An LDPC code is commonly represented by use of a bipartite graph containing two sets of nodes. In the graph corresponding to an (n,k) code, the n variable nodes correspond to the code word symbols and the n-k constraint nodes represent the constraints that the code places on the variable nodes in order for them to form a valid code word. The decoding procedure involves iterative computation
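The bipartite-graph description above can be made concrete with a toy parity-check matrix. This is a minimal sketch, not the report's implementation; the (7,4)-style matrix and codewords below are invented for illustration.

```python
# Bipartite-graph view of an (n, k) LDPC code: each of the n-k rows of the
# parity-check matrix H is a constraint node, and the columns it touches are
# the variable nodes it constrains. A word is a valid codeword iff every
# constraint's XOR (syndrome bit) is 0. Toy (7,4) matrix, not from the report.

H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def constraint_metrics(word):
    """One syndrome bit per constraint node (0 = constraint satisfied)."""
    return [sum(h * b for h, b in zip(row, word)) % 2 for row in H]

valid = [1, 0, 1, 0, 1, 0, 1]    # satisfies all three constraints
flipped = [1, 1, 1, 0, 1, 0, 1]  # same word with bit 1 flipped

print(constraint_metrics(valid))    # [0, 0, 0]
print(constraint_metrics(flipped))  # [1, 0, 1]: rows 1 and 3 are violated
```

In the method described above, it is per-constraint information of this kind, computed during the decoding iterations, that would be fed back to the timing-recovery circuits rather than the final output code word.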

  19. A Neural Mechanism for Time-Window Separation Resolves Ambiguity of Adaptive Coding

    PubMed Central

    Hildebrandt, K. Jannis; Ronacher, Bernhard; Hennig, R. Matthias; Benda, Jan

    2015-01-01

    The senses of animals are confronted with changing environments and different contexts. Neural adaptation is one important tool to adjust sensitivity to varying intensity ranges. For instance, in a quiet night outdoors, our hearing is more sensitive than when we are confronted with the plurality of sounds in a large city during the day. However, adaptation also removes available information on absolute sound levels and may thus cause ambiguity. Experimental data on the trade-off between benefits and loss through adaptation is scarce and very few mechanisms have been proposed to resolve it. We present an example where adaptation is beneficial for one task—namely, the reliable encoding of the pattern of an acoustic signal—but detrimental for another—the localization of the same acoustic stimulus. With a combination of neurophysiological data, modeling, and behavioral tests, we show that adaptation in the periphery of the auditory pathway of grasshoppers enables intensity-invariant coding of amplitude modulations, but at the same time, degrades information available for sound localization. We demonstrate how focusing the response of localization neurons to the onset of relevant signals separates processing of localization and pattern information temporally. In this way, the ambiguity of adaptive coding can be circumvented and both absolute and relative levels can be processed using the same set of peripheral neurons. PMID:25761097

  20. A neural mechanism for time-window separation resolves ambiguity of adaptive coding.

    PubMed

    Hildebrandt, K Jannis; Ronacher, Bernhard; Hennig, R Matthias; Benda, Jan

    2015-03-01

    The senses of animals are confronted with changing environments and different contexts. Neural adaptation is one important tool to adjust sensitivity to varying intensity ranges. For instance, in a quiet night outdoors, our hearing is more sensitive than when we are confronted with the plurality of sounds in a large city during the day. However, adaptation also removes available information on absolute sound levels and may thus cause ambiguity. Experimental data on the trade-off between benefits and loss through adaptation is scarce and very few mechanisms have been proposed to resolve it. We present an example where adaptation is beneficial for one task—namely, the reliable encoding of the pattern of an acoustic signal—but detrimental for another—the localization of the same acoustic stimulus. With a combination of neurophysiological data, modeling, and behavioral tests, we show that adaptation in the periphery of the auditory pathway of grasshoppers enables intensity-invariant coding of amplitude modulations, but at the same time, degrades information available for sound localization. We demonstrate how focusing the response of localization neurons to the onset of relevant signals separates processing of localization and pattern information temporally. In this way, the ambiguity of adaptive coding can be circumvented and both absolute and relative levels can be processed using the same set of peripheral neurons. PMID:25761097

  1. Time Shifted PN Codes for CW Lidar, Radar, and Sonar

    NASA Technical Reports Server (NTRS)

    Campbell, Joel F. (Inventor); Prasad, Narasimha S. (Inventor); Harrison, Fenton W. (Inventor); Flood, Michael A. (Inventor)

    2013-01-01

    A continuous wave Light Detection and Ranging (CW LiDAR) system utilizes two or more laser frequencies and time- or range-shifted pseudorandom noise (PN) codes to discriminate between the laser frequencies. The performance of these codes can be improved by subtracting out the bias before processing. The CW LiDAR system may be mounted to an artificial satellite orbiting the Earth, and the relative strength of the return signal for each frequency can be utilized to determine the concentration of selected gases or other substances in the atmosphere.
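A minimal sketch of the time-shifted-PN idea, under stated assumptions (the LFSR taps, bias value, and delay below are invented; the patented system's details are not reproduced here): the receiver subtracts the bias from the return and then correlates against time-shifted copies of the code to recover the round-trip delay.

```python
# Generate a 127-chip maximal-length PN sequence (+/-1 chips) from a simple
# Fibonacci LFSR with taps at bits 7 and 6 (x^7 + x^6 + 1, a primitive polynomial).
def lfsr_pn(nbits=7, taps=(7, 6)):
    state = [1] * nbits
    out = []
    for _ in range(2 ** nbits - 1):
        out.append(1 if state[-1] else -1)
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]
        state = [fb] + state[:-1]
    return out

def correlate_shifts(code, rx):
    """Correlate the return against time-shifted copies of the code,
    subtracting out the bias (mean) before processing."""
    n = len(code)
    mean = sum(rx) / n
    centered = [v - mean for v in rx]
    return [sum(code[(i + lag) % n] * centered[i] for i in range(n))
            for lag in range(n)]

code = lfsr_pn()
delay = 23  # unknown round-trip delay, in chips
rx = [code[(i + delay) % len(code)] + 0.4 for i in range(len(code))]  # biased return
peaks = correlate_shifts(code, rx)
print(peaks.index(max(peaks)))  # correlation peak sits at the delay: 23
```

Because the m-sequence autocorrelation is sharply peaked (n at zero shift, -1 elsewhere), the bias-removed correlation identifies the delay unambiguously; without the mean subtraction the constant offset smears energy across all lags.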

  2. EEG alpha spindles and prolonged brake reaction times during auditory distraction in an on-road driving study.

    PubMed

    Sonnleitner, Andreas; Treder, Matthias Sebastian; Simon, Michael; Willmann, Sven; Ewald, Arne; Buchner, Axel; Schrauf, Michael

    2014-01-01

    Driver distraction is responsible for a substantial number of traffic accidents. This paper describes the impact of an auditory secondary task on drivers' mental states during a primary driving task. N=20 participants performed the test procedure in a car-following task with repeated forced braking on a non-public test track. Performance measures (provoked reaction time to brake lights) and brain activity (EEG alpha spindles) were analyzed to describe distracted drivers. Further, a classification approach was used to investigate whether alpha spindles can predict drivers' mental states. Results show that reaction times and alpha spindle rate increased with time-on-task. Moreover, brake reaction times and alpha spindle rate were significantly higher while driving with the auditory secondary task as opposed to driving only. In single-trial classification, a combination of spindle parameters yielded a median classification error of about 8% in discriminating distracted from alert driving. Reduced driving performance (i.e., prolonged brake reaction times) during increased cognitive load is assumed to be indicated by EEG alpha spindles, enabling the quantification of driver distraction in experiments on public roads without verbally assessing the drivers' mental states. PMID:24144496

  3. Change in Speech Perception and Auditory Evoked Potentials over Time after Unilateral Cochlear Implantation in Postlingually Deaf Adults.

    PubMed

    Purdy, Suzanne C; Kelly, Andrea S

    2016-02-01

    Speech perception varies widely across cochlear implant (CI) users and typically improves over time after implantation. There is also some evidence for improved auditory evoked potentials (shorter latencies, larger amplitudes) after implantation but few longitudinal studies have examined the relationship between behavioral and evoked potential measures after implantation in postlingually deaf adults. The relationship between speech perception and auditory evoked potentials was investigated in newly implanted cochlear implant users from the day of implant activation to 9 months postimplantation, on five occasions, in 10 adults aged 27 to 57 years who had been bilaterally profoundly deaf for 1 to 30 years prior to receiving a unilateral CI24 cochlear implant. Changes over time in middle latency response (MLR), mismatch negativity, and obligatory cortical auditory evoked potentials and word and sentence speech perception scores were examined. Speech perception improved significantly over the 9-month period. MLRs varied and showed no consistent change over time. Three participants aged in their 50s had absent MLRs. The pattern of change in N1 amplitudes over the five visits varied across participants. P2 area increased significantly for 1,000- and 4,000-Hz tones but not for 250 Hz. The greatest change in P2 area occurred after 6 months of implant experience. Although there was a trend for mismatch negativity peak latency to reduce and width to increase after 3 months of implant experience, there was considerable variability and these changes were not significant. Only 60% of participants had a detectable mismatch initially; this increased to 100% at 9 months. The continued change in P2 area over the period evaluated, with a trend for greater change for right hemisphere recordings, is consistent with the pattern of incremental change in speech perception scores over time. MLR, N1, and mismatch negativity changes were inconsistent and hence P2 may be a more robust measure

  4. Cross-Modal Stimulus Conflict: The Behavioral Effects of Stimulus Input Timing in a Visual-Auditory Stroop Task

    PubMed Central

    Donohue, Sarah E.; Appelbaum, Lawrence G.; Park, Christina J.; Roberts, Kenneth C.; Woldorff, Marty G.

    2013-01-01

    Cross-modal processing depends strongly on the compatibility between different sensory inputs, the relative timing of their arrival to brain processing components, and on how attention is allocated. In this behavioral study, we employed a cross-modal audio-visual Stroop task in which we manipulated the within-trial stimulus-onset-asynchronies (SOAs) of the stimulus-component inputs, the grouping of the SOAs (blocked vs. random), the attended modality (auditory or visual), and the congruency of the Stroop color-word stimuli (congruent, incongruent, neutral) to assess how these factors interact within a multisensory context. One main result was that visual distractors produced larger incongruency effects on auditory targets than vice versa. Moreover, as revealed by both overall shorter response times (RTs) and relative shifts in the psychometric incongruency-effect functions, visual-information processing was faster and produced stronger and longer-lasting incongruency effects than did auditory. When attending to either modality, stimulus incongruency from the other modality interacted with SOA, yielding larger effects when the irrelevant distractor occurred prior to the attended target, but no interaction with SOA grouping. Finally, relative to neutral-stimuli, and across the wide range of the SOAs employed, congruency led to substantially more behavioral facilitation than did incongruency to interference, in contrast to findings that within-modality stimulus-compatibility effects tend to be more evenly split between facilitation and interference. In sum, the present findings reveal several key characteristics of how we process the stimulus compatibility of cross-modal sensory inputs, reflecting stimulus processing patterns that are critical for successfully navigating our complex multisensory world. PMID:23638149

  5. Time code dissemination experiment via the SIRIO-1 VHF transponder

    NASA Technical Reports Server (NTRS)

    Detoma, E.; Gobbo, G.; Leschiutta, S.; Pettiti, V.

    1982-01-01

    An experiment to evaluate the possibility of disseminating a time code via the SIRIO-1 satellite, by using the onboard VHF repeater is described. The precision in the synchronization of remote clocks was expected to be of the order of 0.1 to 1 ms. The RF carrier was in the VHF band, so that low cost receivers could be used and then a broader class of users could be served. An already existing repeater, even if not designed specifically for communications could be utilized; the operation of this repeater was not intended to affect any other function of the spacecraft (both the SHF repeater and the VHF telemetry link were active during the time code dissemination via the VHF transponder).

  6. Reducing EnergyPlus Run Time For Code Compliance Tools

    SciTech Connect

    Athalye, Rahul A.; Gowri, Krishnan; Schultz, Robert W.; Glazer, Jason

    2014-09-12

    Integration of the EnergyPlus™ simulation engine into performance-based code compliance software raises a concern about simulation run time, which impacts timely feedback of compliance results to the user. EnergyPlus annual simulations for proposed and code baseline building models, and mechanical equipment sizing, result in simulation run times beyond acceptable limits. This paper presents a study that compares the results of a shortened simulation period using 4 weeks of hourly weather data (one week per quarter) to those of an annual simulation using the full 52 weeks of hourly weather data. Three representative building types based on DOE Prototype Building Models and three climate zones were used for determining the validity of using a shortened simulation run period. Further sensitivity analysis and run time comparisons were made to evaluate the robustness and run time savings of this approach. The results of this analysis show that the shortened simulation run period provides compliance index calculations within 1% of those predicted using annual simulation results, and typically saves about 75% of simulation run time.
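The scaling idea behind the shortened run period can be sketched as follows. This is a hedged illustration on synthetic data: the sinusoidal "load" and the particular week choices are stand-ins, not EnergyPlus output or the paper's selection method.

```python
import math

# One representative week per quarter, scaled by 13 weeks/quarter, approximates
# the 52-week annual total. The hourly load below is a synthetic stand-in with
# a seasonal and a diurnal component.

HOURS_PER_WEEK = 168

def synthetic_hourly_load(hour):
    day_of_year = hour / 24.0
    seasonal = 1.0 + 0.3 * math.cos(2 * math.pi * (day_of_year - 15) / 365)
    diurnal = 1.0 + 0.5 * math.sin(2 * math.pi * (hour % 24) / 24)
    return seasonal * diurnal

annual = [synthetic_hourly_load(h) for h in range(8760)]
annual_total = sum(annual)

# Mid-quarter weeks (indices 6, 19, 32, 45), each scaled up by 13.
quarter_weeks = [6, 19, 32, 45]
short_total = 13 * sum(
    annual[w * HOURS_PER_WEEK + h]
    for w in quarter_weeks
    for h in range(HOURS_PER_WEEK)
)

error_pct = 100 * (short_total - annual_total) / annual_total
print(f"estimate within {abs(error_pct):.2f}% of the annual total")
```

In an actual compliance tool the four weeks would be chosen in the simulation input (e.g., via EnergyPlus RunPeriod objects) rather than by slicing annual results; the point of the sketch is only that quarter-spaced sampling of a seasonal profile scales up with small error.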

  7. Code-Time Diversity for Direct Sequence Spread Spectrum Systems

    PubMed Central

    Hassan, A. Y.

    2014-01-01

    Time diversity is achieved in direct sequence spread spectrum by receiving different faded delayed copies of the transmitted symbols from different uncorrelated channel paths when the transmission signal bandwidth is greater than the coherence bandwidth of the channel. In this paper, a new time diversity scheme is proposed for spread spectrum systems. It is called code-time diversity. In this new scheme, N spreading codes are used to transmit one data symbol over N successive symbol intervals. The diversity order in the proposed scheme equals the number of spreading codes N multiplied by the number of uncorrelated channel paths L. The paper presents the transmitted signal model. Two demodulator structures are proposed based on the received signal models for Rayleigh flat and frequency-selective fading channels. Probability of error in the proposed diversity scheme is also calculated for the same two fading channels. Finally, simulation results are presented and compared with those of maximal ratio combining (MRC) and multiple-input multiple-output (MIMO) systems. PMID:24982925
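A toy sketch of the counting argument above: one BPSK symbol spread by N codes over N symbol intervals, received over L resolvable paths, gives the combiner N*L despread branches (diversity order N*L). The codes, path gains, and noiseless channel here are illustrative assumptions, not the paper's signal model.

```python
import random

random.seed(1)
N, L = 2, 3     # spreading codes / resolvable channel paths
chips = 8
symbol = -1     # BPSK data symbol

# One random +/-1 spreading code per symbol interval, one gain per (code, path).
codes = [[random.choice((-1, 1)) for _ in range(chips)] for _ in range(N)]
path_gains = [[random.uniform(0.2, 1.0) for _ in range(L)] for _ in range(N)]

# Receiver: despread each (code, path) branch, then MRC-combine all N*L branches.
branches = []
for n in range(N):
    for l in range(L):
        rx = [path_gains[n][l] * symbol * c for c in codes[n]]  # faded copy
        branches.append(sum(r * c for r, c in zip(rx, codes[n])) / chips)

assert len(branches) == N * L  # the claimed diversity order
weights = [g for row in path_gains for g in row]
decision = 1 if sum(w * b for w, b in zip(weights, branches)) > 0 else -1
print(decision)  # recovers the transmitted symbol: -1
```

Each despread branch reduces to gain × symbol, so the gain-weighted sum has the sign of the symbol; with noise added, having N*L independently faded branches is what yields the diversity gain.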

  8. A novel 2D wavelength-time chaos code in optical CDMA system

    NASA Astrophysics Data System (ADS)

    Zhang, Qi; Xin, Xiangjun; Wang, Yongjun; Zhang, Lijia; Yu, Chongxiu; Meng, Nan; Wang, Houtian

    2012-11-01

    A two-dimensional wavelength-time chaos code is proposed and constructed for a synchronous optical code division multiple access system. Access performance is compared among the one-dimensional chaos code, the WDM/chaos code, and the proposed code. The comparison shows that the two-dimensional wavelength-time chaos code offers larger capacity, better spectral efficiency, and a lower bit-error ratio than the WDM/chaos combination and the one-dimensional chaos code.
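One common way to build such a code, sketched here as an assumption since the paper's exact construction is not given in the abstract, is to iterate a chaotic map, threshold it to bits, and arrange the bits as a wavelength × time matrix.

```python
# Hypothetical chaos-code construction: logistic-map iterates thresholded to
# bits, one row per wavelength and one column per time slot (2D OCDMA pattern).

def logistic_bits(x0, n, r=3.99):
    """Iterate the logistic map from seed x0 and threshold each iterate at 0.5."""
    x, bits = x0, []
    for _ in range(n):
        x = r * x * (1 - x)
        bits.append(1 if x >= 0.5 else 0)
    return bits

wavelengths, slots = 4, 8
code2d = [logistic_bits(0.1 + 0.07 * w, slots) for w in range(wavelengths)]
for row in code2d:
    print(row)  # one row per wavelength, one column per time slot
```

The sensitivity of the map to its seed is what gives different users (different seeds) low cross-correlation; the wavelength dimension adds the capacity that the abstract credits the 2D code with.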

  9. The topography of frequency and time representation in primate auditory cortices

    PubMed Central

    Baumann, Simon; Joly, Olivier; Rees, Adrian; Petkov, Christopher I; Sun, Li; Thiele, Alexander; Griffiths, Timothy D

    2015-01-01

    Natural sounds can be characterised by their spectral content and temporal modulation, but how the brain is organized to analyse these two critical sound dimensions remains uncertain. Using functional magnetic resonance imaging, we demonstrate a topographical representation of amplitude modulation rate in the auditory cortex of awake macaques. The representation of this temporal dimension is organized in approximately concentric bands of equal rates across the superior temporal plane in both hemispheres, progressing from high rates in the posterior core to low rates in the anterior core and lateral belt cortex. In A1 the resulting gradient of modulation rate runs approximately perpendicular to the axis of the tonotopic gradient, suggesting an orthogonal organisation of spectral and temporal sound dimensions. In auditory belt areas this relationship is more complex. The data suggest a continuous representation of modulation rate across several physiological areas, in contradistinction to a separate representation of frequency within each area. DOI: http://dx.doi.org/10.7554/eLife.03256.001 PMID:25590651

  10. Auditory temporal preparation induced by rhythmic cues during concurrent auditory working memory tasks.

    PubMed

    Cutanda, Diana; Correa, Ángel; Sanabria, Daniel

    2015-06-01

    The present study investigated whether participants can develop temporal preparation driven by auditory isochronous rhythms when concurrently performing an auditory working memory (WM) task. In Experiment 1, participants had to respond to an auditory target presented after a regular or an irregular sequence of auditory stimuli while concurrently performing a Sternberg-type WM task. Results showed that participants responded faster after regular compared with irregular rhythms and that this effect was not affected by WM load; however, the lack of a significant main effect of WM load made it difficult to draw any conclusion regarding the influence of the dual-task manipulation in Experiment 1. In order to enhance dual-task interference, Experiment 2 combined the auditory rhythm procedure with an auditory N-Back task, which required WM updating (monitoring and coding of the information) and was presumably more demanding than the mere rehearsal of the WM task used in Experiment 1. Results now clearly showed dual-task interference effects (slower reaction times [RTs] in the high- vs. the low-load condition). However, such interference did not affect temporal preparation induced by rhythms, with faster RTs after regular than after irregular sequences in the high-load and low-load conditions. These results revealed that secondary tasks demanding memory updating, relative to tasks just demanding rehearsal, produced larger interference effects on overall RTs in the auditory rhythm task. Nevertheless, rhythm regularity exerted a strong temporal preparation effect that survived the interference of the WM task even when both tasks competed for processing resources within the auditory modality. PMID:25893682

  11. Development of the auditory system

    PubMed Central

    Litovsky, Ruth

    2015-01-01

    Auditory development involves changes in the peripheral and central nervous system along the auditory pathways, and these occur naturally, and in response to stimulation. Human development occurs along a trajectory that can last decades, and is studied using behavioral psychophysics, as well as physiologic measurements with neural imaging. The auditory system constructs a perceptual space that takes information from objects and groups, segregates sounds, and provides meaning and access to communication tools such as language. Auditory signals are processed in a series of analysis stages, from peripheral to central. Coding of information has been studied for features of sound, including frequency, intensity, loudness, and location, in quiet and in the presence of maskers. In the latter case, the ability of the auditory system to perform an analysis of the scene becomes highly relevant. While some basic abilities are well developed at birth, there is a clear prolonged maturation of auditory development well into the teenage years. Maturation involves auditory pathways. However, non-auditory changes (attention, memory, cognition) play an important role in auditory development. The ability of the auditory system to adapt in response to novel stimuli is a key feature of development throughout the nervous system, known as neural plasticity. PMID:25726262

  12. Potassium conductance dynamics confer robust spike-time precision in a neuromorphic model of the auditory brain stem

    PubMed Central

    Boahen, Kwabena

    2013-01-01

    A fundamental question in neuroscience is how neurons perform precise operations despite inherent variability. This question also applies to neuromorphic engineering, where low-power microchips emulate the brain using large populations of diverse silicon neurons. Biological neurons in the auditory pathway display precise spike timing, critical for sound localization and interpretation of complex waveforms such as speech, even though they are a heterogeneous population. Silicon neurons are also heterogeneous, due to a key design constraint in neuromorphic engineering: smaller transistors offer lower power consumption and more neurons per unit area of silicon, but also more variability between transistors and thus between silicon neurons. Utilizing this variability in a neuromorphic model of the auditory brain stem with 1,080 silicon neurons, we found that a low-voltage-activated potassium conductance (gKL) enables precise spike timing via two mechanisms: statically reducing the resting membrane time constant and dynamically suppressing late synaptic inputs. The relative contribution of these two mechanisms is unknown because blocking gKL in vitro eliminates dynamic adaptation but also lengthens the membrane time constant. We replaced gKL with a static leak in silico to recover the short membrane time constant and found that silicon neurons could mimic the spike-time precision of their biological counterparts, but only over a narrow range of stimulus intensities and biophysical parameters. The dynamics of gKL were required for precise spike timing robust to stimulus variation across a heterogeneous population of silicon neurons, thus explaining how neural and neuromorphic systems may perform precise operations despite inherent variability. PMID:23554436

  13. Long-range correlation properties in timing of skilled piano performance: the influence of auditory feedback and deep brain stimulation

    PubMed Central

    Herrojo Ruiz, María; Hong, Sang Bin; Hennig, Holger; Altenmüller, Eckart; Kühn, Andrea A.

    2014-01-01

    Unintentional timing deviations during musical performance can be conceived of as timing errors. However, recent research on humanizing computer-generated music has demonstrated that timing fluctuations that exhibit long-range temporal correlations (LRTC) are preferred by human listeners. This preference can be accounted for by the ubiquitous presence of LRTC in human tapping and rhythmic performances. Interestingly, the manifestation of LRTC in tapping behavior seems to be driven in a subject-specific manner by the LRTC properties of resting-state background cortical oscillatory activity. In this framework, the current study aimed to investigate whether propagation of timing deviations during the skilled, memorized piano performance (without metronome) of 17 professional pianists exhibits LRTC and whether the structure of the correlations is influenced by the presence or absence of auditory feedback. As an additional goal, we set out to investigate the influence of altering the dynamics along the cortico-basal-ganglia-thalamo-cortical network via deep brain stimulation (DBS) on the LRTC properties of musical performance. Specifically, we investigated temporal deviations during the skilled piano performance of a non-professional pianist who was treated with subthalamic-deep brain stimulation (STN-DBS) due to severe Parkinson's disease, with predominant tremor affecting his right upper extremity. In the tremor-affected right hand, the timing fluctuations of the performance exhibited random correlations with DBS OFF. By contrast, DBS restored long-range dependency in the temporal fluctuations, corresponding with the general motor improvement on DBS. Overall, the present investigations demonstrate the presence of LRTC in skilled piano performances, indicating that unintentional temporal deviations are correlated over a wide range of time scales. This phenomenon is stable after removal of the auditory feedback, but is altered by STN-DBS, which suggests that cortico

  14. Long-range correlation properties in timing of skilled piano performance: the influence of auditory feedback and deep brain stimulation.

    PubMed

    Herrojo Ruiz, María; Hong, Sang Bin; Hennig, Holger; Altenmüller, Eckart; Kühn, Andrea A

    2014-01-01

    Unintentional timing deviations during musical performance can be conceived of as timing errors. However, recent research on humanizing computer-generated music has demonstrated that timing fluctuations that exhibit long-range temporal correlations (LRTC) are preferred by human listeners. This preference can be accounted for by the ubiquitous presence of LRTC in human tapping and rhythmic performances. Interestingly, the manifestation of LRTC in tapping behavior seems to be driven in a subject-specific manner by the LRTC properties of resting-state background cortical oscillatory activity. In this framework, the current study aimed to investigate whether propagation of timing deviations during the skilled, memorized piano performance (without metronome) of 17 professional pianists exhibits LRTC and whether the structure of the correlations is influenced by the presence or absence of auditory feedback. As an additional goal, we set out to investigate the influence of altering the dynamics along the cortico-basal-ganglia-thalamo-cortical network via deep brain stimulation (DBS) on the LRTC properties of musical performance. Specifically, we investigated temporal deviations during the skilled piano performance of a non-professional pianist who was treated with subthalamic-deep brain stimulation (STN-DBS) due to severe Parkinson's disease, with predominant tremor affecting his right upper extremity. In the tremor-affected right hand, the timing fluctuations of the performance exhibited random correlations with DBS OFF. By contrast, DBS restored long-range dependency in the temporal fluctuations, corresponding with the general motor improvement on DBS. Overall, the present investigations demonstrate the presence of LRTC in skilled piano performances, indicating that unintentional temporal deviations are correlated over a wide range of time scales. This phenomenon is stable after removal of the auditory feedback, but is altered by STN-DBS, which suggests that cortico

  15. Auditory agnosia.

    PubMed

    Slevc, L Robert; Shell, Alison R

    2015-01-01

    Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. PMID:25726291

  16. Auditory brainstem response to complex sounds: a tutorial

    PubMed Central

    Skoe, Erika; Kraus, Nina

    2010-01-01

    This tutorial provides a comprehensive overview of the methodological approach to collecting and analyzing auditory brainstem responses to complex sounds (cABRs). cABRs provide a window into how behaviorally relevant sounds such as speech and music are processed in the brain. Because temporal and spectral characteristics of sounds are preserved in this subcortical response, cABRs can be used to assess specific impairments and enhancements in auditory processing. Notably, subcortical function is neither passive nor hardwired but dynamically interacts with higher-level cognitive processes to refine how sounds are transcribed into neural code. This experience-dependent plasticity, which can occur on a number of time scales (e.g., life-long experience with speech or music, short-term auditory training, online auditory processing), helps shape sensory perception. Thus, by being an objective and non-invasive means for examining cognitive function and experience-dependent processes in sensory activity, cABRs have considerable utility in the study of populations where auditory function is of interest (e.g., auditory experts such as musicians, persons with hearing loss, auditory processing and language disorders). This tutorial is intended for clinicians and researchers seeking to integrate cABRs into their clinical and/or research programs. PMID:20084007

  17. Auditory presentation and synchronization in Adobe Flash and HTML5/JavaScript Web experiments.

    PubMed

    Reimers, Stian; Stewart, Neil

    2016-09-01

Substantial recent research has examined the accuracy of presentation durations and response time measurements for visually presented stimuli in Web-based experiments, with a general conclusion that accuracy is acceptable for most kinds of experiments. However, many areas of behavioral research use auditory stimuli instead of, or in addition to, visual stimuli. Much less is known about auditory accuracy using standard Web-based testing procedures. We used a millisecond-accurate Black Box Toolkit to measure the actual durations of auditory stimuli and the synchronization of auditory and visual presentation onsets. We examined the distribution of timings for 100 presentations of auditory and visual stimuli across two computers with different specs, three commonly used browsers, and code written in either Adobe Flash or JavaScript. We also examined different coding options for attempting to synchronize the auditory and visual onsets. Overall, we found that auditory durations were very consistent, but that the lags between visual and auditory onsets varied substantially across browsers and computer systems. PMID:27421976

  18. Electrical stimulation of the auditory nerve: the coding of frequency, the perception of pitch and the development of cochlear implant speech processing strategies for profoundly deaf people.

    PubMed

    Clark, G M

    1996-09-01

1. The development of speech processing strategies for multiple-channel cochlear implants has depended on encoding sound frequencies and intensities as temporal and spatial patterns of electrical stimulation of the auditory nerve fibres so that the speech information of most importance to intelligibility could be transmitted. 2. Initial physiological studies showed that rate encoding of electrical stimulation above 200 pulses/s could not reproduce the normal response patterns in auditory neurons for acoustic stimulation in the speech frequency range above 200 Hz, and suggested that place coding was appropriate for the higher frequencies. 3. Rate difference limens in the experimental animal were only similar to those for sound up to 200 Hz. 4. Rate difference limens in implant patients were similar to those obtained in the experimental animal. 5. Satisfactory rate discrimination could be made for durations of 50 and 100 ms, but not 25 ms. This made rate suitable for encoding longer-duration suprasegmental speech information, but not segmental information such as consonants. The rate of stimulation could also be perceived as pitch, discriminated at different electrode sites along the cochlea and discriminated for stimuli across electrodes. 6. Place pitch could be scaled according to the site of stimulation in the cochlea so that a frequency scale was preserved; it also had a different quality from rate pitch and was described as tonality. Place pitch could also be discriminated at the shorter durations (25 ms) required for identifying consonants. 7. The inaugural speech processing strategy encoded the second formant frequencies (concentrations of frequency energy in the mid frequency range, of most importance for speech intelligibility) as place of stimulation, the voicing frequency as rate of stimulation and the intensity as current level. Our further speech processing strategies have extracted additional frequency information and coded this as place of stimulation.

  19. Recursive time-varying filter banks for subband image coding

    NASA Technical Reports Server (NTRS)

    Smith, Mark J. T.; Chung, Wilson C.

    1992-01-01

Filter banks and wavelet decompositions that employ recursive filters have been considered previously and are recognized for their efficiency in partitioning the frequency spectrum. This paper presents an analysis of a new infinite impulse response (IIR) filter bank in which these computationally efficient filters may be changed adaptively in response to the input. The filter bank is presented and discussed in the context of finite-support signals with the intended application in subband image coding. In the absence of quantization errors, exact reconstruction can be achieved, and it is shown that, with a proper choice of adaptation scheme, IIR time-varying filter banks can yield improvement over conventional ones.

  20. Time transfer by IRIG-B time code via dedicated telephone link

    NASA Technical Reports Server (NTRS)

    Missout, G.; Beland, J.; Label, D.; Bedard, G.; Bussiere, P.

    1982-01-01

Measurements were made of the stability of time transfer by the IRIG-B code over a dedicated telephone link on a microwave system. The short- and long-term Allan variance was measured on both types of microwave system, one of which is synchronized, the other having free local oscillators. The results promise a time transfer accuracy of 10 microseconds. The paper also describes a prototype slave clock designed to detect interference in the IRIG-B code and to ensure that local time is kept during such interference.
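
    The stability metric used in these measurements, the Allan variance, can be computed directly from clock phase (time-error) samples. Below is a minimal sketch of the overlapping estimator; the sampling interval, noise level, and seed are illustrative assumptions, not values from the paper.

```python
import numpy as np

def allan_variance(phase, tau0, m):
    """Overlapping Allan variance from phase (time-error) samples.

    phase: time-error samples x_i in seconds, spaced tau0 seconds apart
    m:     averaging factor; the averaging time is tau = m * tau0
    """
    x = np.asarray(phase, dtype=float)
    tau = m * tau0
    # Second differences of the phase at lag m (overlapping estimator)
    d2 = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]
    return np.mean(d2 ** 2) / (2.0 * tau ** 2)

# Illustrative data: white frequency noise (random-walk phase) makes the
# Allan variance fall as the averaging time grows.
rng = np.random.default_rng(0)
phase = np.cumsum(rng.normal(0.0, 1e-9, 10_000))  # nanosecond-level noise
avar_1s = allan_variance(phase, 1.0, 1)
avar_10s = allan_variance(phase, 1.0, 10)
```

    For white frequency noise the Allan variance scales as 1/tau, so `avar_10s` comes out roughly an order of magnitude smaller than `avar_1s`; comparing such curves at short and long averaging times is what distinguishes the two microwave-link configurations.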

  1. Parallel Processing of Distributed Video Coding to Reduce Decoding Time

    NASA Astrophysics Data System (ADS)

    Tonomura, Yoshihide; Nakachi, Takayuki; Fujii, Tatsuya; Kiya, Hitoshi

This paper proposes a parallelized DVC framework that treats each bitplane independently to reduce the decoding time. Unfortunately, simple parallelization generates inaccurate bit probabilities because additional side information is not available for the decoding of subsequent bitplanes, which degrades coding efficiency. Our solution is an effective estimation method that calculates the bit probability as accurately as possible by index assignment, without recourse to side information. Moreover, we improve the coding performance of Rate-Adaptive LDPC (RA-LDPC), which is used in the parallelized DVC framework. This proposal selects a suitable sparse matrix for each bitplane according to the syndrome rate estimation results at the encoder side. Simulations show that our parallelization method reduces the decoding time by up to 35% and achieves a bit rate reduction of about 10%.
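
    The per-bitplane decomposition that the framework parallelizes over can be illustrated in a few lines. The paper's probability-estimation and RA-LDPC machinery is not reproduced here; this is only a sketch of the bitplane split and reassembly.

```python
import numpy as np

def to_bitplanes(img, nbits=8):
    """Split an unsigned 8-bit image into nbits binary planes (MSB first)."""
    return np.stack([(img >> b) & 1 for b in range(nbits - 1, -1, -1)])

def from_bitplanes(planes):
    """Reassemble the image by weighting each plane by its bit value."""
    nbits = planes.shape[0]
    weights = 1 << np.arange(nbits - 1, -1, -1)   # 128, 64, ..., 1
    return np.tensordot(weights, planes, axes=1)

img = np.arange(256, dtype=np.uint8).reshape(16, 16)
planes = to_bitplanes(img)          # shape (8, 16, 16)
restored = from_bitplanes(planes)   # round-trips exactly
```

    Each plane in `planes` could then be handed to an independent decoder, which is what removes the bitplane-to-bitplane dependency, at the cost of the side-information accuracy discussed in the abstract.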

  2. Auditory Learning Using a Portable Real-Time Vocoder: Preliminary Findings

    PubMed Central

    Pisoni, David B.

    2015-01-01

Purpose Although traditional study of auditory training has been in controlled laboratory settings, interest has been increasing in more interactive options. The authors examine whether such interactive training can result in short-term perceptual learning, and the range of perceptual skills it impacts. Method Experiments 1 (N = 37) and 2 (N = 21) used pre- and posttest measures of speech and nonspeech recognition to find evidence of learning (within subject) and to compare the effects of 3 kinds of training (between subject) on the perceptual abilities of adults with normal hearing listening to simulations of cochlear implant processing. Subjects were given interactive, standard lab-based, or control training experience for 1 hr between the pre- and posttest tasks (unique sets across Experiments 1 & 2). Results Subjects receiving interactive training showed significant learning on a sentence-recognition-in-quiet task (Experiment 1), outperforming controls but not lab-trained subjects following training. Training groups did not differ significantly on any other task, even those directly involved in the interactive training experience. Conclusions Interactive training has the potential to produce learning in 1 domain (sentence recognition in quiet), but the particulars of the present training method (short duration, high complexity) may have limited benefits to this single criterion task. PMID:25674884

  3. Focal manipulations of formant trajectories reveal a role of auditory feedback in the online control of both within-syllable and between-syllable speech timing.

    PubMed

    Cai, Shanqing; Ghosh, Satrajit S; Guenther, Frank H; Perkell, Joseph S

    2011-11-01

Within the human motor repertoire, speech production has a uniquely high level of spatiotemporal complexity. The production of running speech comprises the traversing of spatial positions with precisely coordinated articulator movements to produce 10-15 sounds/s. How does the brain use auditory feedback, namely the self-perception of produced speech sounds, in the online control of spatial and temporal parameters of multisyllabic articulation? This question has an important bearing on the organizational principles of sequential actions, yet its answer remains controversial due to the long latency of the auditory feedback pathway and technical challenges involved in manipulating auditory feedback in precisely controlled ways during running speech. In this study, we developed a novel technique for introducing time-varying, focal perturbations in the auditory feedback during multisyllabic, connected speech. Manipulations of spatial and temporal parameters of the formant trajectory were tested separately on two groups of subjects as they uttered "I owe you a yo-yo." Under these perturbations, significant and specific changes were observed in both the spatial and temporal parameters of the produced formant trajectories. Compensations to spatial perturbations were bidirectional and opposed the perturbations. Furthermore, under perturbations that manipulated the timing of auditory feedback trajectory (slow-down or speed-up), significant adjustments in syllable timing were observed in the subjects' productions. These results highlight the systematic roles of auditory feedback in the online control of a highly over-learned action such as connected speech articulation and provide a first look at the properties of this type of sensorimotor interaction in sequential movements. PMID:22072698

  4. Suboptimal Use of Neural Information in a Mammalian Auditory System

    PubMed Central

    Zilany, Muhammad S. A.; Huang, Nicholas J.; Abrams, Kristina S.; Idrobo, Fabio

    2014-01-01

    Establishing neural determinants of psychophysical performance requires both behavioral and neurophysiological metrics amenable to correlative analyses. It is often assumed that organisms use neural information optimally, such that any information available in a neural code that could improve behavioral performance is used. Studies have shown that detection of amplitude-modulated (AM) auditory tones by humans is correlated to neural synchrony thresholds, as recorded in rabbit at the level of the inferior colliculus, the first level of the ascending auditory pathway where neurons are tuned to AM stimuli. Behavioral thresholds in rabbit, however, are ∼10 dB higher (i.e., 3 times less sensitive) than in humans, and are better correlated to rate-based than temporal coding schemes in the auditory midbrain. The behavioral and physiological results shown here illustrate an unexpected, suboptimal utilization of available neural information that could provide new insights into the mechanisms that link neuronal function to behavior. PMID:24453321

  5. Time-Dependent, Parallel Neutral Particle Transport Code System.

    Energy Science and Technology Software Center (ESTSC)

    2009-09-10

Version 00 PARTISN (PARallel, TIme-Dependent SN) is the evolutionary successor to CCC-547/DANTSYS. The PARTISN code package is a modular computer program package designed to solve the time-independent or dependent multigroup discrete ordinates form of the Boltzmann transport equation in several different geometries. The modular construction of the package separates the input processing, the transport equation solving, and the post processing (or edit) functions into distinct code modules: the Input Module, the Solver Module, and the Edit Module, respectively. The Input and Edit Modules in PARTISN are very similar to those in DANTSYS. However, unlike DANTSYS, the Solver Module in PARTISN contains one, two, and three-dimensional solvers in a single module. In addition to the diamond-differencing method, the Solver Module also has Adaptive Weighted Diamond-Differencing (AWDD), Linear Discontinuous (LD), and Exponential Discontinuous (ED) spatial differencing methods. The spatial mesh may consist of either a standard orthogonal mesh or a block adaptive orthogonal mesh. The Solver Module may be run in parallel for two and three dimensional problems. One can now run 1-D problems in parallel using Energy Domain Decomposition (triggered by Block 5 input keyword npeg>0). EDD can also be used in 2-D/3-D with or without our standard Spatial Domain Decomposition. Both the static (fixed source or eigenvalue) and time-dependent forms of the transport equation are solved in forward or adjoint mode. In addition, PARTISN now has a probabilistic mode for Probability of Initiation (static) and Probability of Survival (dynamic) calculations. Vacuum, reflective, periodic, white, or inhomogeneous boundary conditions are solved. General anisotropic scattering and inhomogeneous sources are permitted. PARTISN solves the transport equation on orthogonal (single level or block-structured AMR) grids in 1-D

  6. Time and Category Information in Pattern-Based Codes

    PubMed Central

    Eyherabide, Hugo Gabriel; Samengo, Inés

    2010-01-01

    Sensory stimuli are usually composed of different features (the what) appearing at irregular times (the when). Neural responses often use spike patterns to represent sensory information. The what is hypothesized to be encoded in the identity of the elicited patterns (the pattern categories), and the when, in the time positions of patterns (the pattern timing). However, this standard view is oversimplified. In the real world, the what and the when might not be separable concepts, for instance, if they are correlated in the stimulus. In addition, neuronal dynamics can condition the pattern timing to be correlated with the pattern categories. Hence, timing and categories of patterns may not constitute independent channels of information. In this paper, we assess the role of spike patterns in the neural code, irrespective of the nature of the patterns. We first define information-theoretical quantities that allow us to quantify the information encoded by different aspects of the neural response. We also introduce the notion of synergy/redundancy between time positions and categories of patterns. We subsequently establish the relation between the what and the when in the stimulus with the timing and the categories of patterns. To that aim, we quantify the mutual information between different aspects of the stimulus and different aspects of the response. This formal framework allows us to determine the precise conditions under which the standard view holds, as well as the departures from this simple case. Finally, we study the capability of different response aspects to represent the what and the when in the neural response. PMID:21151371
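
    The mutual-information quantities this framework is built on can be illustrated for discrete variables. The function below and its toy tables are a generic sketch, not the estimators of the paper; the paper's synergy/redundancy between pattern timing and pattern categories would then be a difference of such terms.

```python
import numpy as np

def mutual_info(joint):
    """Mutual information in bits from a joint probability table p(x, y)."""
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)   # marginal p(x)
    py = joint.sum(axis=0, keepdims=True)   # marginal p(y)
    nz = joint > 0                          # skip zero-probability cells
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px * py)[nz])))

# Perfectly correlated binary variables carry 1 bit about each other;
# independent ones carry 0 bits.
mi_correlated = mutual_info([[0.5, 0.0], [0.0, 0.5]])
mi_independent = mutual_info([[0.25, 0.25], [0.25, 0.25]])
```

    With joint tables over (stimulus, pattern timing) and (stimulus, pattern category), comparing I(stimulus; timing, category) against the sum of the two single-aspect terms gives the synergy (positive) or redundancy (negative) the authors define.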

  7. Speech motor learning changes the neural response to both auditory and somatosensory signals

    PubMed Central

    Ito, Takayuki; Coppola, Joshua H.; Ostry, David J.

    2016-01-01

In the present paper, we present evidence for the idea that speech motor learning is accompanied by changes to the neural coding of both auditory and somatosensory stimuli. Participants in our experiments undergo adaptation to altered auditory feedback, an experimental model of speech motor learning which, like visuo-motor adaptation in limb movement, requires that participants change their speech movements and associated somatosensory inputs to correct for systematic real-time changes to auditory feedback. We measure the sensory effects of adaptation by examining changes to auditory and somatosensory event-related responses. We find that adaptation results in progressive changes to speech acoustical outputs that serve to correct for the perturbation. We also observe changes in both auditory and somatosensory event-related responses that are correlated with the magnitude of adaptation. These results indicate that sensory change occurs in conjunction with the processes involved in speech motor adaptation. PMID:27181603

  8. Application of satellite time transfer in autonomous spacecraft clocks. [binary time code

    NASA Technical Reports Server (NTRS)

    Chi, A. R.

    1979-01-01

The conceptual design of a spacecraft clock that will provide a standard time scale for experimenters in future spacecraft, and that can be synchronized to a time scale without the need for additional calibration and validation, is described. The time distribution to the users is handled through onboard computers, without human intervention for extended periods. A group parallel binary code, under consideration for onboard use, is discussed. Each group in the code can easily be truncated. The autonomously operated clock not only achieves simpler procedures and shorter lead times for data processing, but also contributes to spacecraft autonomy for onboard navigation and data packetization. The clock can be used to control the sensors in a spacecraft, to compare with another time signal such as that from the global positioning system, and, if cost is not a consideration, can be used on the ground at remote sites for timekeeping and control.

  9. Modeling neural adaptation in the frog auditory system

    NASA Astrophysics Data System (ADS)

    Wotton, Janine; McArthur, Kimberly; Bohara, Amit; Ferragamo, Michael; Megela Simmons, Andrea

    2005-09-01

Extracellular recordings from the auditory midbrain, Torus semicircularis, of the leopard frog reveal a wide diversity of tuning patterns. Some cells seem to be well suited for time-based coding of signal envelope, and others for rate-based coding of signal frequency. Adaptation to ongoing stimuli plays a significant role in shaping the frequency-dependent response rate at different levels of the frog auditory system. Anuran auditory-nerve fibers are unusual in that they reveal frequency-dependent adaptation [A. L. Megela, J. Acoust. Soc. Am. 75, 1155-1162 (1984)], and therefore provide rate-based input. In order to examine the influence of these peripheral inputs on central responses, three layers of auditory neurons were modeled to examine short-term neural adaptation to pure tones and complex signals. The response of each neuron was simulated with a leaky integrate and fire model, and adaptation was implemented by means of an increasing threshold. Auditory-nerve fibers, dorsal medullary nucleus neurons, and toral cells were simulated and connected in three ascending layers. Modifying the adaptation properties of the peripheral fibers dramatically alters the response at the midbrain. [Work supported by NOHR to M.J.F.; Gustavus Presidential Scholarship to K.McA.; NIH DC05257 to A.M.S.]
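
    A threshold-adaptation scheme of the kind described, a leaky integrate-and-fire unit whose threshold increments at each spike and decays back between spikes, can be sketched as follows. All parameter values are illustrative assumptions, not those of the model in the abstract.

```python
import numpy as np

def lif_adapting(stim, dt=1e-4, tau_m=0.01, theta0=1.0,
                 d_theta=0.5, tau_theta=0.05):
    """Leaky integrate-and-fire neuron with an adapting threshold.

    Each spike raises the threshold by d_theta; between spikes the
    threshold decays back to theta0 with time constant tau_theta.
    Returns the spike times in seconds.
    """
    v, theta, spikes = 0.0, theta0, []
    for i, drive in enumerate(stim):
        v += dt * (-v / tau_m + drive)               # leak plus input drive
        theta += dt * (theta0 - theta) / tau_theta   # threshold relaxes
        if v >= theta:
            spikes.append(i * dt)
            v = 0.0              # reset membrane
            theta += d_theta     # spike-triggered threshold increment
    return spikes

# Constant 0.5 s drive: inter-spike intervals lengthen as the threshold adapts
stim = np.full(5000, 200.0)
spikes = lif_adapting(stim)
early = sum(1 for t in spikes if t < 0.25)   # spikes in the first half
late = len(spikes) - early                   # spikes in the second half
```

    Driving the cell with a constant-amplitude input yields a firing rate that declines over the course of the stimulus, the signature of short-term adaptation; chaining such units in layers is the structure the abstract describes.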

  10. Interference between postural control and spatial vs. non-spatial auditory reaction time tasks in older adults.

    PubMed

    Fuhrman, Susan I; Redfern, Mark S; Jennings, J Richard; Furman, Joseph M

    2015-01-01

This study investigated whether spatial aspects of an information processing task influence dual-task interference. Two groups (Older/Young) of healthy adults participated in dual-task experiments. Two auditory information processing tasks included a frequency discrimination choice reaction time task (non-spatial task) and a lateralization choice reaction time task (spatial task). Postural tasks included combinations of standing with eyes open or eyes closed on either a fixed floor or a sway-referenced floor. Reaction times and postural sway via center of pressure were recorded. Baseline measures of reaction time and sway were subtracted from the corresponding dual-task results to calculate reaction time task costs and postural task costs. Reaction time task cost increased with eye closure (p = 0.01), sway-referenced flooring (p < 0.0001), and the spatial task (p = 0.04). Additionally, a significant (p = 0.05) task × vision × age interaction indicated that older subjects had a significant vision × task interaction whereas young subjects did not. However, when analyzed by age group, the young group showed minimal differences in interference for the spatial and non-spatial tasks with eyes open, but showed increased interference on the spatial relative to non-spatial task with eyes closed. In contrast, older subjects demonstrated increased interference on the spatial relative to the non-spatial task with eyes open, but not with eyes closed. These findings suggest that visual-spatial interference may occur in older subjects when vision is used to maintain posture. PMID:26410669

  11. Coded acoustic wave sensors and system using time diversity

    NASA Technical Reports Server (NTRS)

    Solie, Leland P. (Inventor); Hines, Jacqueline H. (Inventor)

    2012-01-01

    An apparatus and method for distinguishing between sensors that are to be wirelessly detected is provided. An interrogator device uses different, distinct time delays in the sensing signals when interrogating the sensors. The sensors are provided with different distinct pedestal delays. Sensors that have the same pedestal delay as the delay selected by the interrogator are detected by the interrogator whereas other sensors with different pedestal delays are not sensed. Multiple sensors with a given pedestal delay are provided with different codes so as to be distinguished from one another by the interrogator. The interrogator uses a signal that is transmitted to the sensor and returned by the sensor for combination and integration with the reference signal that has been processed by a function. The sensor may be a surface acoustic wave device having a differential impulse response with a power spectral density consisting of lobes. The power spectral density of the differential response is used to determine the value of the sensed parameter or parameters.
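
    The delay-plus-code discrimination idea can be sketched with a simple correlation: the interrogator correlates the returned trace against a reference code at the pedestal delay it has selected, so only the matching sensor produces a large response. The codes, delays, and sample counts below are invented for illustration and are not taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(1)
code_a = rng.choice([-1.0, 1.0], 64)   # sensor A's code
code_b = rng.choice([-1.0, 1.0], 64)   # sensor B's code
delay_a, delay_b = 100, 300            # distinct pedestal delays (samples)

# Superpose the two sensors' returns in a single received trace
received = np.zeros(600)
received[delay_a:delay_a + 64] += code_a
received[delay_b:delay_b + 64] += code_b

def correlate_at(signal, code, delay):
    """Correlate the trace against a reference code at one pedestal delay."""
    return float(np.dot(signal[delay:delay + len(code)], code))

resp_match = correlate_at(received, code_a, delay_a)     # large peak
resp_mismatch = correlate_at(received, code_a, delay_b)  # near zero
```

    Sensors sharing a pedestal delay are separated by their codes in the same way: a matched code integrates coherently to a large value, while a mismatched code averages toward zero.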

  12. From ear to body: the auditory-motor loop in spatial cognition

    PubMed Central

    Viaud-Delmon, Isabelle; Warusfel, Olivier

    2014-01-01

Spatial memory is mainly studied through the visual sensory modality: navigation tasks in humans rarely integrate dynamic and spatial auditory information. In order to study how a spatial scene can be memorized on the basis of auditory and idiothetic cues only, we constructed an auditory equivalent of the Morris water maze, a task widely used to assess spatial learning and memory in rodents. Participants were equipped with wireless headphones, which delivered a soundscape updated in real time according to their movements in 3D space. A wireless tracking system (video infrared with passive markers) was used to send the coordinates of the subject's head to the sound rendering system. The rendering system used advanced HRTF-based synthesis of directional cues and room acoustic simulation for the auralization of a realistic acoustic environment. Participants were guided blindfolded in an experimental room. Their task was to explore a delimited area in order to find a hidden auditory target, i.e., a sound that was only triggered when walking on a precise location of the area. The position of this target could be coded in relationship to auditory landmarks constantly rendered during the exploration of the area. The task was composed of a practice trial, 6 acquisition trials during which they had to memorize the localization of the target, and 4 test trials in which some aspects of the auditory scene were modified. The task ended with a probe trial in which the auditory target was removed. The configuration of the search paths made it possible to observe how auditory information was coded to memorize the position of the target, and suggested that space can be efficiently coded without visual information in normally sighted subjects. In conclusion, space representation can be based on sensorimotor and auditory cues only, providing another argument in favor of the hypothesis that the brain has access to a modality-invariant representation of external space. PMID:25249933

  14. Cues for auditory stream segregation of birdsong in budgerigars and zebra finches: Effects of location, timing, amplitude, and frequency.

    PubMed

    Dent, Micheal L; Martin, Amanda K; Flaherty, Mary M; Neilans, Erikson G

    2016-02-01

Deciphering the auditory scene is a problem faced by many organisms. Yet even when confronted with numerous overlapping sounds from multiple locations, listeners are able to attribute the individual sound objects to their individual sound-producing sources. Here, the characteristics of sounds important for integration versus segregation in birds were determined. Budgerigars and zebra finches were trained using operant conditioning procedures on an identification task to peck one key when they heard a whole zebra finch song and to peck another when they heard a zebra finch song missing a middle syllable. Once the birds were trained to a criterion performance level on those stimuli, probe trials were introduced on a small proportion of trials. The probe songs contained modifications of the incomplete training song's missing syllable. When the bird responded as if the probe was a whole song, it suggests they streamed together the altered syllable and the rest of the song. When the bird responded as if the probe was a non-whole song, it suggests they segregated the altered probe from the rest of the song. Results show that some features, such as location and intensity, are more important for segregation than other features, such as timing and frequency. PMID:26936551

  15. Spiking Neurons Learning Phase Delays: How Mammals May Develop Auditory Time-Difference Sensitivity

    NASA Astrophysics Data System (ADS)

    Leibold, Christian; van Hemmen, J. Leo

    2005-04-01

    Time differences between the two ears are an important cue for animals to azimuthally locate a sound source. The first binaural brainstem nucleus, in mammals the medial superior olive, is generally believed to perform the necessary computations. Its cells are sensitive to variations of interaural time differences of about 10 μs. The classical explanation of such a neuronal time-difference tuning is based on the physical concept of delay lines. Recent data, however, are inconsistent with a temporal delay and rather favor a phase delay. By means of a biophysical model we show how spike-timing-dependent synaptic learning explains precise interplay of excitation and inhibition and, hence, accounts for a physical realization of a phase delay.
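
    The spike-timing-dependent synaptic learning invoked here is conventionally modeled with an exponential window: pre-before-post pairings potentiate a synapse and post-before-pre pairings depress it. The sketch below uses generic textbook parameters, not the values of the biophysical model in the paper.

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=0.02):
    """Exponential STDP window: weight change for dt = t_post - t_pre (s)."""
    dt = np.asarray(dt, dtype=float)
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau),    # pre leads post: potentiate
                    -a_minus * np.exp(dt / tau))   # post leads pre: depress

dw_pre_leads = stdp_dw(0.005)    # positive (strengthened)
dw_post_leads = stdp_dw(-0.005)  # negative (weakened)
```

    Repeatedly applying such a rule to inputs arriving at different phases of a periodic sound selects the synapses whose timing matches the postsynaptic response, which is the mechanism by which the model arrives at a phase delay rather than a hard-wired temporal delay line.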

  16. Feature Assignment in Perception of Auditory Figure

    ERIC Educational Resources Information Center

    Gregg, Melissa K.; Samuel, Arthur G.

    2012-01-01

    Because the environment often includes multiple sounds that overlap in time, listeners must segregate a sound of interest (the auditory figure) from other co-occurring sounds (the unattended auditory ground). We conducted a series of experiments to clarify the principles governing the extraction of auditory figures. We distinguish between auditory…

  17. MEG dual scanning: a procedure to study real-time auditory interaction between two persons

    PubMed Central

    Baess, Pamela; Zhdanov, Andrey; Mandel, Anne; Parkkonen, Lauri; Hirvenkari, Lotta; Mäkelä, Jyrki P.; Jousmäki, Veikko; Hari, Riitta

    2012-01-01

    Social interactions fill our everyday life and put strong demands on our brain function. However, the possibilities for studying the brain basis of social interaction are still technically limited, and even modern brain imaging studies of social cognition typically monitor just one participant at a time. We present here a method to connect and synchronize two faraway neuromagnetometers. With this method, two participants at two separate sites can interact with each other through a stable real-time audio connection with minimal delay and jitter. The magnetoencephalographic (MEG) and audio recordings of both laboratories are accurately synchronized for joint offline analysis. The concept can be extended to connecting multiple MEG devices around the world. As a proof of concept of the MEG-to-MEG link, we report the results of time-sensitive recordings of cortical evoked responses to sounds delivered at laboratories separated by 5 km. PMID:22514530

  18. Development of a test for recording both visual and auditory reaction times, potentially useful for future studies in patients on opioids therapy

    PubMed Central

    Miceli, Luca; Bednarova, Rym; Rizzardo, Alessandro; Samogin, Valentina; Della Rocca, Giorgio

    2015-01-01

Objective Italian Road Law limits driving while undergoing treatment with certain kinds of medication. Here, we report the results of a test, run as a smartphone application (app), assessing auditory and visual reflexes in a sample of 300 drivers. The aim of the test is to provide both the police force and medication-taking drivers with a tool that can evaluate an individual's capacity to drive safely. Methods The test is run as an app for Apple iOS and Android mobile operating systems and allows four different reaction times to be assessed: simple visual and auditory reaction times and complex visual and auditory reaction times. Reference deciles were created for the test results obtained from a sample of 300 Italian subjects. Results lying within the first three deciles were considered incompatible with safe driving capabilities. Results Performance is both age-related (r>0.5) and sex-related (female reaction times were significantly slower than those recorded for male subjects, P<0.05). Only 21% of the subjects were able to perform all four tests correctly. Conclusion We developed and fine-tuned a test called Safedrive that measures visual and auditory reaction times through a smartphone mobile device; the aim of the test is two-fold: to provide a clinical tool for the assessment of the driving capacity of individuals taking pain relief medication, and to promote a sense of social responsibility in drivers who are on medication, providing these individuals with a means of testing their own capacity to drive safely. PMID:25709406
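
    The decile-based scoring can be sketched as follows, under our assumption (not stated in the abstract) that slower reaction times correspond to the lowest performance deciles; the normative sample here is simulated, not the study's data.

```python
import numpy as np

# Simulated normative sample of reaction times in seconds (illustrative only)
rng = np.random.default_rng(2)
norm_rt = rng.normal(0.45, 0.08, 300)

# Reference deciles: the 10th, 20th, ..., 90th percentiles of the sample
deciles = np.percentile(norm_rt, np.arange(10, 100, 10))

def unsafe(rt, deciles):
    """Flag a reaction time in the slowest 30% of the normative sample."""
    return bool(rt > deciles[6])   # deciles[6] is the 70th percentile
```

    A markedly slow response such as 0.70 s is flagged, while a typical 0.40 s response passes; the cutoff at three deciles mirrors the "first three deciles incompatible with safe driving" criterion above.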

  19. The Effect of Dopaminergic Medication on Beat-Based Auditory Timing in Parkinson’s Disease

    PubMed Central

    Cameron, Daniel J.; Pickett, Kristen A.; Earhart, Gammon M.; Grahn, Jessica A.

    2016-01-01

    Parkinson’s disease (PD) adversely affects timing abilities. Beat-based timing is a mechanism that times events relative to a regular interval, such as the “beat” in musical rhythm, and is impaired in PD. It is unknown if dopaminergic medication influences beat-based timing in PD. Here, we tested beat-based timing over two sessions in participants with PD (OFF then ON dopaminergic medication) and in unmedicated control participants. People with PD and control participants completed two tasks. The first was a discrimination task in which participants compared two rhythms and determined whether they were the same or different. Rhythms either had a beat structure (metric simple rhythms) or did not (metric complex rhythms), as in previous studies. Discrimination accuracy was analyzed to test for the effects of beat structure, as well as differences between participants with PD and controls, and effects of medication (PD group only). The second task was the Beat Alignment Test (BAT), in which participants listened to music with regular tones superimposed, and responded as to whether the tones were “ON” or “OFF” the beat of the music. Accuracy was analyzed to test for differences between participants with PD and controls, and for an effect of medication in patients. Both patients and controls discriminated metric simple rhythms better than metric complex rhythms. Controls also improved at the discrimination task in the second vs. first session, whereas people with PD did not. For participants with PD, the difference in performance between metric simple and metric complex rhythms was greater (sensitivity to changes in simple rhythms increased and sensitivity to changes in complex rhythms decreased) when ON vs. OFF medication. Performance also worsened with disease severity. For the BAT, no group differences or effects of medication were found. Overall, these findings suggest that timing is impaired in PD, and that dopaminergic medication influences beat-based timing.
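    The "sensitivity to changes" reported for the same/different discrimination task is conventionally quantified with the signal-detection index d′, computed from hit and false-alarm rates. The sketch below is a generic illustration of that computation, not the authors' exact analysis pipeline; the rates are made up.

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity: d' = z(hits) - z(false alarms)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Illustrative rates: 'different' trials correctly detected (hits)
# vs. 'same' trials wrongly called different (false alarms).
print(round(d_prime(0.85, 0.20), 2))
```

    In practice, hit and false-alarm rates of 0 or 1 are adjusted (e.g., by a 1/(2N) correction) before the z-transform, since the inverse normal CDF is undefined at those extremes.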

  20. The Effect of Dopaminergic Medication on Beat-Based Auditory Timing in Parkinson's Disease.

    PubMed

    Cameron, Daniel J; Pickett, Kristen A; Earhart, Gammon M; Grahn, Jessica A

    2016-01-01

    Parkinson's disease (PD) adversely affects timing abilities. Beat-based timing is a mechanism that times events relative to a regular interval, such as the "beat" in musical rhythm, and is impaired in PD. It is unknown if dopaminergic medication influences beat-based timing in PD. Here, we tested beat-based timing over two sessions in participants with PD (OFF then ON dopaminergic medication) and in unmedicated control participants. People with PD and control participants completed two tasks. The first was a discrimination task in which participants compared two rhythms and determined whether they were the same or different. Rhythms either had a beat structure (metric simple rhythms) or did not (metric complex rhythms), as in previous studies. Discrimination accuracy was analyzed to test for the effects of beat structure, as well as differences between participants with PD and controls, and effects of medication (PD group only). The second task was the Beat Alignment Test (BAT), in which participants listened to music with regular tones superimposed, and responded as to whether the tones were "ON" or "OFF" the beat of the music. Accuracy was analyzed to test for differences between participants with PD and controls, and for an effect of medication in patients. Both patients and controls discriminated metric simple rhythms better than metric complex rhythms. Controls also improved at the discrimination task in the second vs. first session, whereas people with PD did not. For participants with PD, the difference in performance between metric simple and metric complex rhythms was greater (sensitivity to changes in simple rhythms increased and sensitivity to changes in complex rhythms decreased) when ON vs. OFF medication. Performance also worsened with disease severity. For the BAT, no group differences or effects of medication were found. Overall, these findings suggest that timing is impaired in PD, and that dopaminergic medication influences beat-based and non-beat-based timing.

  1. Interactions across Multiple Stimulus Dimensions in Primary Auditory Cortex

    PubMed Central

    Zhuo, Ran; Xue, Hongbo; Chambers, Anna R.; Kolaczyk, Eric; Polley, Daniel B.

    2016-01-01

    Although sensory cortex is thought to be important for the perception of complex objects, its specific role in representing complex stimuli remains unknown. Complex objects are rich in information along multiple stimulus dimensions. The position of cortex in the sensory hierarchy suggests that cortical neurons may integrate across these dimensions to form a more gestalt representation of auditory objects. Yet, studies of cortical neurons typically explore single or few dimensions due to the difficulty of determining optimal stimuli in a high dimensional stimulus space. Evolutionary algorithms (EAs) provide a potentially powerful approach for exploring multidimensional stimulus spaces based on real-time spike feedback, but two important issues arise in their application. First, it is unclear whether it is necessary to characterize cortical responses to multidimensional stimuli or whether it suffices to characterize cortical responses to a single dimension at a time. Second, quantitative methods for analyzing complex multidimensional data from an EA are lacking. Here, we apply a statistical method for nonlinear regression, the generalized additive model (GAM), to address these issues. The GAM quantitatively describes the dependence between neural response and all stimulus dimensions. We find that auditory cortical neurons in mice are sensitive to interactions across dimensions. These interactions are diverse across the population, indicating significant integration across stimulus dimensions in auditory cortex. This result strongly motivates using multidimensional stimuli in auditory cortex. Together, the EA and the GAM provide a novel quantitative paradigm for investigating neural coding of complex multidimensional stimuli in auditory and other sensory cortices. PMID:27622211
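    The GAM analysis above asks whether the neural response depends on stimulus dimensions jointly rather than additively. A full GAM fits penalized smooth functions; the minimal sketch below illustrates only the core idea, detecting a cross-dimension interaction by comparing an additive linear model against one with a product term, on synthetic data invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two stimulus dimensions (e.g., frequency and modulation rate) and a
# synthetic neural response containing a multiplicative interaction.
x1 = rng.uniform(-1, 1, 200)
x2 = rng.uniform(-1, 1, 200)
y = 1.0 + 0.5 * x1 - 0.8 * x2 + 1.5 * x1 * x2 + rng.normal(0, 0.1, 200)

def sse(X, y):
    """Residual sum of squares of an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

ones = np.ones_like(x1)
additive = np.column_stack([ones, x1, x2])            # f1(x1) + f2(x2) only
interact = np.column_stack([ones, x1, x2, x1 * x2])   # adds an interaction term

print(sse(additive, y) > 10 * sse(interact, y))  # True: interaction term matters
```

    A real GAM would replace the raw columns with spline bases and test the interaction term with a formal deviance comparison rather than a raw SSE ratio.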

  2. Interactions across Multiple Stimulus Dimensions in Primary Auditory Cortex.

    PubMed

    Sloas, David C; Zhuo, Ran; Xue, Hongbo; Chambers, Anna R; Kolaczyk, Eric; Polley, Daniel B; Sen, Kamal

    2016-01-01

    Although sensory cortex is thought to be important for the perception of complex objects, its specific role in representing complex stimuli remains unknown. Complex objects are rich in information along multiple stimulus dimensions. The position of cortex in the sensory hierarchy suggests that cortical neurons may integrate across these dimensions to form a more gestalt representation of auditory objects. Yet, studies of cortical neurons typically explore single or few dimensions due to the difficulty of determining optimal stimuli in a high dimensional stimulus space. Evolutionary algorithms (EAs) provide a potentially powerful approach for exploring multidimensional stimulus spaces based on real-time spike feedback, but two important issues arise in their application. First, it is unclear whether it is necessary to characterize cortical responses to multidimensional stimuli or whether it suffices to characterize cortical responses to a single dimension at a time. Second, quantitative methods for analyzing complex multidimensional data from an EA are lacking. Here, we apply a statistical method for nonlinear regression, the generalized additive model (GAM), to address these issues. The GAM quantitatively describes the dependence between neural response and all stimulus dimensions. We find that auditory cortical neurons in mice are sensitive to interactions across dimensions. These interactions are diverse across the population, indicating significant integration across stimulus dimensions in auditory cortex. This result strongly motivates using multidimensional stimuli in auditory cortex. Together, the EA and the GAM provide a novel quantitative paradigm for investigating neural coding of complex multidimensional stimuli in auditory and other sensory cortices. PMID:27622211

  3. Auditory Reserve and the Legacy of Auditory Experience

    PubMed Central

    Skoe, Erika; Kraus, Nina

    2014-01-01

    Musical training during childhood has been linked to more robust encoding of sound later in life. We take this as evidence for an auditory reserve: a mechanism by which individuals capitalize on earlier life experiences to promote auditory processing. We assert that early auditory experiences guide how the reserve develops and is maintained over the lifetime. Experiences that occur after childhood, or which are limited in nature, are theorized to affect the reserve, although their influence on sensory processing may be less long-lasting and may potentially fade over time if not repeated. This auditory reserve may help to explain differences in how individuals cope with auditory impoverishment or loss of sensorineural function. PMID:25405381

  4. One hundred ways to process time, frequency, rate and scale in the central auditory system: a pattern-recognition meta-analysis.

    PubMed

    Hemery, Edgar; Aucouturier, Jean-Julien

    2015-01-01

    The mammalian auditory system extracts features from the acoustic environment based on the responses of spatially distributed sets of neurons in the subcortical and cortical auditory structures. The characteristic responses of these neurons (linearly approximated by their spectro-temporal receptive fields, or STRFs) suggest that auditory representations are formed, as early as in the inferior colliculi, on the basis of a time, frequency, rate (temporal modulations) and scale (spectral modulations) analysis of sound. However, how these four dimensions are integrated and processed in subsequent neural networks remains unclear. In this work, we present a new methodology to generate computational insights into the functional organization of such processes. We first propose a systematic framework to explore more than a hundred different computational strategies proposed in the literature to process the output of a generic STRF model. We then evaluate these strategies on their ability to compute perceptual distances between pairs of environmental sounds. Finally, we conduct a meta-analysis of the dataset of all these algorithms' accuracies to examine whether certain combinations of dimensions and certain ways to treat such dimensions are, on the whole, more computationally effective than others. We present an application of this methodology to a dataset of ten environmental sound categories, in which the analysis reveals that (1) models are most effective when they organize STRF data into frequency groupings (which is consistent with the known tonotopic organization of receptive fields in auditory structures), and that (2) models that treat STRF data as time series are no more effective than models that rely only on summary statistics along time (which corroborates recent experimental evidence on texture discrimination by summary statistics). PMID:26190996
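    Finding (2), that summary statistics along time perform as well as full time series, can be made concrete with a toy distance measure: collapse each model channel to time-averaged statistics and compare sounds in that reduced space. This is a minimal sketch of the general idea, not the paper's evaluated algorithms; the data and function names are invented.

```python
import math

def summary_stats(series):
    """Collapse a time series to time-averaged summary statistics (mean, std)."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series) / n
    return (mean, math.sqrt(var))

def summary_distance(a, b):
    """Euclidean distance between per-channel summary statistics of two sounds."""
    d = 0.0
    for ch_a, ch_b in zip(a, b):
        for s, t in zip(summary_stats(ch_a), summary_stats(ch_b)):
            d += (s - t) ** 2
    return math.sqrt(d)

# Two 'sounds', each represented as two frequency-channel envelopes over time.
sound1 = [[0.1, 0.3, 0.2, 0.4], [0.9, 0.8, 1.0, 0.7]]
sound2 = [[0.1, 0.2, 0.3, 0.4], [0.2, 0.1, 0.3, 0.2]]
print(summary_distance(sound1, sound1) == 0.0)  # identical sounds -> zero distance
print(summary_distance(sound1, sound2) > 0.0)   # different sounds -> positive
```

    Note that such a measure is, by construction, blind to the temporal order of samples within a channel, which is exactly the information the meta-analysis found dispensable for this task.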

  5. A High-Rate Space-Time Block Code with Full Diversity

    NASA Astrophysics Data System (ADS)

    Gao, Zhenzhen; Zhu, Shihua; Zhong, Zhimeng

    A new high-rate space-time block code (STBC) with full transmit diversity gain for four transmit antennas based on a generalized Alamouti code structure is proposed. The proposed code has lower Maximum Likelihood (ML) decoding complexity than the Double ABBA scheme does. Constellation rotation is used to maximize the diversity product. With the optimal rotated constellations, the proposed code significantly outperforms some known high-rate STBCs in the literature with similar complexity and the same spectral efficiency.
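    The generalized Alamouti structure the proposed code builds on is easiest to see in the original two-antenna case: two symbols are sent over two time slots so that simple linear combining decouples them at the receiver, giving full diversity with low-complexity ML decoding. The sketch below shows that classic 2x2 building block (not the paper's four-antenna code); channel values are illustrative.

```python
import cmath

def alamouti_encode(s1, s2):
    """Alamouti block: rows are time slots, columns are transmit antennas."""
    return [[s1, s2],
            [-s2.conjugate(), s1.conjugate()]]

def alamouti_combine(r1, r2, h1, h2):
    """Linear combining that decouples the two symbols (one receive antenna)."""
    g = abs(h1) ** 2 + abs(h2) ** 2
    s1_hat = (h1.conjugate() * r1 + h2 * r2.conjugate()) / g
    s2_hat = (h2.conjugate() * r1 - h1 * r2.conjugate()) / g
    return s1_hat, s2_hat

# Noise-free check: QPSK-like symbols over an arbitrary complex channel.
s1, s2 = (1 + 1j) / cmath.sqrt(2), (1 - 1j) / cmath.sqrt(2)
h1, h2 = 0.8 + 0.3j, -0.5 + 0.9j
X = alamouti_encode(s1, s2)
r1 = h1 * X[0][0] + h2 * X[0][1]   # received signal in slot 1
r2 = h1 * X[1][0] + h2 * X[1][1]   # received signal in slot 2
est1, est2 = alamouti_combine(r1, r2, h1, h2)
print(abs(est1 - s1) < 1e-12 and abs(est2 - s2) < 1e-12)  # True: symbols recovered
```

    Because the combined estimates scale with |h1|^2 + |h2|^2, each symbol enjoys both channel paths, which is the full transmit diversity the proposed four-antenna code preserves at a higher rate.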

  6. Speakers' acceptance of real-time speech exchange indicates that we use auditory feedback to specify the meaning of what we say.

    PubMed

    Lind, Andreas; Hall, Lars; Breidegard, Björn; Balkenius, Christian; Johansson, Petter

    2014-06-01

    Speech is usually assumed to start with a clearly defined preverbal message, which provides a benchmark for self-monitoring and a robust sense of agency for one's utterances. However, an alternative hypothesis states that speakers often have no detailed preview of what they are about to say, and that they instead use auditory feedback to infer the meaning of their words. In the experiment reported here, participants performed a Stroop color-naming task while we covertly manipulated their auditory feedback in real time so that they said one thing but heard themselves saying something else. Under ideal timing conditions, two thirds of these semantic exchanges went undetected by the participants, and in 85% of all nondetected exchanges, the inserted words were experienced as self-produced. These findings indicate that the sense of agency for speech has a strong inferential component, and that auditory feedback of one's own voice acts as a pathway for semantic monitoring, potentially overriding other feedback loops. PMID:24777489

  7. Auditory synesthesias.

    PubMed

    Afra, Pegah

    2015-01-01

    Synesthesia is experienced when sensory stimulation of one sensory modality (the inducer) elicits an involuntary or automatic sensation in another sensory modality or different aspect of the same sensory modality (the concurrent). Auditory synesthesias (AS) occur when auditory stimuli trigger a variety of concurrents, or when non-auditory sensory stimulations trigger auditory synesthetic perception. The AS are divided into three types: developmental, acquired, and induced. Developmental AS are not a neurologic disorder but a different way of experiencing one's environment. They are involuntary and highly consistent experiences throughout one's life. Acquired AS have been reported in association with neurologic diseases that cause deafferentation of anterior optic pathways, with pathologic lesions affecting the central nervous system (CNS) outside of the optic pathways, as well as non-lesional cases associated with migraine, and epilepsy. It also has been reported with mood disorders as well as a single idiopathic case. Induced AS have been reported in experimental and postsurgical blindfolding, as well as intake of hallucinogenics or psychedelics. In this chapter the three different types of synesthesia, their characteristics, and phenomenologic differences, as well as their possible neural mechanisms are discussed. PMID:25726281

  8. Auditory system

    NASA Technical Reports Server (NTRS)

    Ades, H. W.

    1973-01-01

    The physical correlations of hearing, i.e. the acoustic stimuli, are reported. The auditory system, consisting of external ear, middle ear, inner ear, organ of Corti, basilar membrane, hair cells, inner hair cells, outer hair cells, innervation of hair cells, and transducer mechanisms, is discussed. Both conductive and sensorineural hearing losses are also examined.

  9. The Perception of Auditory Motion.

    PubMed

    Carlile, Simon; Leung, Johahn

    2016-01-01

    The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control as well as in entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotations and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception. PMID:27094029

  10. The Perception of Auditory Motion

    PubMed Central

    Leung, Johahn

    2016-01-01

    The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control as well as in entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotations and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception. PMID:27094029

  11. 14 CFR 234.8 - Calculation of on-time performance codes.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 4 2011-01-01 2011-01-01 false Calculation of on-time performance codes. 234.8 Section 234.8 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION... of on-time performance codes. (a) Each reporting carrier shall calculate an on-time performance...

  12. 14 CFR 234.8 - Calculation of on-time performance codes.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Calculation of on-time performance codes. 234.8 Section 234.8 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION... of on-time performance codes. (a) Each reporting carrier shall calculate an on-time performance...

  13. 14 CFR 234.8 - Calculation of on-time performance codes.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 4 2012-01-01 2012-01-01 false Calculation of on-time performance codes. 234.8 Section 234.8 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION... of on-time performance codes. (a) Each reporting carrier shall calculate an on-time performance...

  14. 14 CFR 234.8 - Calculation of on-time performance codes.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 4 2014-01-01 2014-01-01 false Calculation of on-time performance codes. 234.8 Section 234.8 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION... of on-time performance codes. (a) Each reporting carrier shall calculate an on-time performance...

  15. 14 CFR 234.8 - Calculation of on-time performance codes.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 4 2013-01-01 2013-01-01 false Calculation of on-time performance codes. 234.8 Section 234.8 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION... of on-time performance codes. (a) Each reporting carrier shall calculate an on-time performance...

  16. One hundred ways to process time, frequency, rate and scale in the central auditory system: a pattern-recognition meta-analysis

    PubMed Central

    Hemery, Edgar; Aucouturier, Jean-Julien

    2015-01-01

    The mammalian auditory system extracts features from the acoustic environment based on the responses of spatially distributed sets of neurons in the subcortical and cortical auditory structures. The characteristic responses of these neurons (linearly approximated by their spectro-temporal receptive fields, or STRFs) suggest that auditory representations are formed, as early as in the inferior colliculi, on the basis of a time, frequency, rate (temporal modulations) and scale (spectral modulations) analysis of sound. However, how these four dimensions are integrated and processed in subsequent neural networks remains unclear. In this work, we present a new methodology to generate computational insights into the functional organization of such processes. We first propose a systematic framework to explore more than a hundred different computational strategies proposed in the literature to process the output of a generic STRF model. We then evaluate these strategies on their ability to compute perceptual distances between pairs of environmental sounds. Finally, we conduct a meta-analysis of the dataset of all these algorithms' accuracies to examine whether certain combinations of dimensions and certain ways to treat such dimensions are, on the whole, more computationally effective than others. We present an application of this methodology to a dataset of ten environmental sound categories, in which the analysis reveals that (1) models are most effective when they organize STRF data into frequency groupings (which is consistent with the known tonotopic organization of receptive fields in auditory structures), and that (2) models that treat STRF data as time series are no more effective than models that rely only on summary statistics along time (which corroborates recent experimental evidence on texture discrimination by summary statistics). PMID:26190996

  17. The Role of Animacy in the Real Time Comprehension of Mandarin Chinese: Evidence from Auditory Event-Related Brain Potentials

    ERIC Educational Resources Information Center

    Philipp, Markus; Bornkessel-Schlesewsky, Ina; Bisang, Walter; Schlesewsky, Matthias

    2008-01-01

    Two auditory ERP studies examined the role of animacy in sentence comprehension in Mandarin Chinese by comparing active and passive sentences in simple verb-final (Experiment 1) and relative clause constructions (Experiment 2). In addition to the voice manipulation (which modulated the assignment of actor and undergoer roles to the arguments),…

  18. Time-dependent recycling modeling with edge plasma transport codes

    NASA Astrophysics Data System (ADS)

    Pigarov, A.; Krasheninnikov, S.; Rognlien, T.; Taverniers, S.; Hollmann, E.

    2013-10-01

    First, we discuss extensions to the Macroblob approach that allow more accurate simulation of the dynamics of ELMs, the pedestal, and edge transport with the UEDGE code. Second, we present UEDGE modeling results for an H-mode discharge with infrequent ELMs and large pedestal losses on DIII-D. In the modeled sequence of ELMs this discharge attains a dynamic equilibrium. Temporal evolution of pedestal plasma profiles, spectral line emission, and surface temperature matching experimental data over the ELM cycle is discussed. Analysis of dynamic gas balance highlights the important role of material surfaces. We quantified the wall outgassing between ELMs as 3X the NBI fueling and the recycling coefficient as 0.8 for wall pumping via macroblob-wall interactions. Third, we present results from a multiphysics version of UEDGE with built-in, reduced, 1-D wall models and analyze the role of various PMI processes. Progress on the framework-coupled UEDGE/WALLPSI code is discussed. Finally, implicit coupling schemes are an important feature of multiphysics codes; we report results of a parametric analysis of convergence and performance for Picard and Newton iterations in a system of coupled deterministic-stochastic ODEs, and propose modifications that enhance convergence.
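    The closing comparison of Picard versus Newton iterations for implicitly coupled systems can be illustrated on a toy backward-Euler step (a single deterministic scalar ODE stands in here for the coupled deterministic-stochastic system; this sketch is not the UEDGE/WALLPSI scheme itself).

```python
def picard(u_old, dt, tol=1e-12, max_iter=200):
    """Fixed-point (Picard) iteration for the backward-Euler step of du/dt = -u^3,
    i.e. repeatedly evaluate u <- u_old - dt*u^3. Converges linearly."""
    u = u_old
    for i in range(1, max_iter + 1):
        u_next = u_old - dt * u ** 3
        if abs(u_next - u) < tol:
            return u_next, i
        u = u_next
    raise RuntimeError("Picard iteration did not converge")

def newton(u_old, dt, tol=1e-12, max_iter=50):
    """Newton iteration on the residual f(u) = u - u_old + dt*u^3,
    using f'(u) = 1 + 3*dt*u^2. Converges quadratically near the root."""
    u = u_old
    for i in range(1, max_iter + 1):
        f = u - u_old + dt * u ** 3
        if abs(f) < tol:
            return u, i
        u -= f / (1.0 + 3.0 * dt * u ** 2)
    raise RuntimeError("Newton iteration did not converge")

u_p, n_p = picard(1.0, 0.1)
u_n, n_n = newton(1.0, 0.1)
print(abs(u_p - u_n) < 1e-10, n_n <= n_p)  # same root; Newton needs fewer iterations
```

    The trade-off motivating such parametric studies: Picard avoids Jacobian evaluations but may stall for stiff coupling, while Newton converges fast only when the Jacobian of the coupled system is available and well-conditioned.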

  19. 14 CFR 234.10 - Voluntary disclosure of on-time performance codes.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 4 2011-01-01 2011-01-01 false Voluntary disclosure of on-time performance codes. 234.10 Section 234.10 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION... disclosure of on-time performance codes. (a) Any air carrier may determine, in accordance with the...

  20. 14 CFR 234.10 - Voluntary disclosure of on-time performance codes.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Voluntary disclosure of on-time performance codes. 234.10 Section 234.10 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION... disclosure of on-time performance codes. (a) Any air carrier may determine, in accordance with the...

  1. 14 CFR 234.10 - Voluntary disclosure of on-time performance codes.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 4 2013-01-01 2013-01-01 false Voluntary disclosure of on-time performance codes. 234.10 Section 234.10 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION... disclosure of on-time performance codes. (a) Any air carrier may determine, in accordance with the...

  2. 14 CFR 234.10 - Voluntary disclosure of on-time performance codes.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 4 2012-01-01 2012-01-01 false Voluntary disclosure of on-time performance codes. 234.10 Section 234.10 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION... disclosure of on-time performance codes. (a) Any air carrier may determine, in accordance with the...

  3. 14 CFR 234.10 - Voluntary disclosure of on-time performance codes.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 4 2014-01-01 2014-01-01 false Voluntary disclosure of on-time performance codes. 234.10 Section 234.10 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION... disclosure of on-time performance codes. (a) Any air carrier may determine, in accordance with the...

  4. Robust Timing Synchronization for Aviation Communications, and Efficient Modulation and Coding Study for Quantum Communication

    NASA Technical Reports Server (NTRS)

    Xiong, Fugin

    2003-01-01

    One half of Professor Xiong's effort will investigate robust timing synchronization schemes for the dynamically varying characteristics of aviation communication channels. The other half of his time will focus on efficient modulation and coding for emerging quantum communications.

  5. Incidental Auditory Category Learning

    PubMed Central

    Gabay, Yafit; Dick, Frederic K.; Zevin, Jason D.; Holt, Lori L.

    2015-01-01

    Very little is known about how auditory categories are learned incidentally, without instructions to search for category-diagnostic dimensions, overt category decisions, or experimenter-provided feedback. This is an important gap because learning in the natural environment does not arise from explicit feedback and there is evidence that the learning systems engaged by traditional tasks are distinct from those recruited by incidental category learning. We examined incidental auditory category learning with a novel paradigm, the Systematic Multimodal Associations Reaction Time (SMART) task, in which participants rapidly detect and report the appearance of a visual target in one of four possible screen locations. Although the overt task is rapid visual detection, a brief sequence of sounds precedes each visual target. These sounds are drawn from one of four distinct sound categories that predict the location of the upcoming visual target. These many-to-one auditory-to-visuomotor correspondences support incidental auditory category learning. Participants incidentally learn categories of complex acoustic exemplars and generalize this learning to novel exemplars and tasks. Further, learning is facilitated when category exemplar variability is more tightly coupled to the visuomotor associations than when the same stimulus variability is experienced across trials. We relate these findings to phonetic category learning. PMID:26010588

  6. CoCoNuT: General relativistic hydrodynamics code with dynamical space-time evolution

    NASA Astrophysics Data System (ADS)

    Dimmelmeier, Harald; Novak, Jérôme; Cerdá-Durán, Pablo

    2012-02-01

    CoCoNuT is a general relativistic hydrodynamics code with dynamical space-time evolution. The main aim of this numerical code is the study of several astrophysical scenarios in which general relativity can play an important role, namely the collapse of rapidly rotating stellar cores and the evolution of isolated neutron stars. The code has two flavors: CoCoA, the axisymmetric (2D) magnetized version, and CoCoNuT, the 3D non-magnetized version.

  7. Perception and coding of interaural time differences with bilateral cochlear implants.

    PubMed

    Laback, Bernhard; Egger, Katharina; Majdak, Piotr

    2015-04-01

    Bilateral cochlear implantation is increasingly becoming the standard in the clinical treatment of bilateral deafness. The main motivation is to provide users of bilateral cochlear implants (CIs) access to binaural cues essential for localizing sound sources and understanding speech in environments of interfering sounds. One of those cues, interaural level differences, can be perceived well by CI users to allow some basic left versus right localization. However, interaural time differences (ITDs), which are important for localization of low-frequency sounds and spatial release from masking, are not adequately represented by clinical envelope-based CI systems. Here, we first review the basic ITD sensitivity of CI users, particularly its dependence on stimulation parameters like stimulation rate and place, modulation rate, and envelope shape in single-electrode stimulation, as well as stimulation level, electrode spacing, and monaural across-electrode timing in multiple-electrode stimulation. Then, we discuss factors involved in ITD perception in electric hearing including the match between highly phase-locked electric auditory nerve response properties and binaural cell properties, the restricted stimulation of apical tonotopic pathways, channel interactions in multiple-electrode stimulation, and the onset age of binaural auditory input. Finally, we present clinically available CI stimulation strategies and experimental strategies aiming at improving listeners' access to ITD cues. This article is part of a Special Issue. PMID:25456088
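    For readers unfamiliar with the magnitudes involved: the ITD cue that CI strategies try to restore is bounded by head geometry. A classic back-of-the-envelope estimate (not given in the review itself) is Woodworth's spherical-head formula, sketched below with typical values for head radius and the speed of sound.

```python
import math

def woodworth_itd(azimuth_deg, head_radius=0.0875, c=343.0):
    """Woodworth spherical-head estimate of the interaural time difference
    (seconds) for a distant source; 0 deg = straight ahead, 90 deg = full lateral.
    ITD = (a/c) * (theta + sin(theta)) with theta in radians."""
    theta = math.radians(azimuth_deg)
    return (head_radius / c) * (theta + math.sin(theta))

for az in (0, 30, 60, 90):
    print(f"{az:2d} deg -> {woodworth_itd(az) * 1e6:6.1f} us")
```

    With these parameter values the maximum ITD comes out near 660 microseconds at 90 degrees, which is why sub-millisecond timing fidelity in bilateral CI stimulation matters for localization.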

  8. Driving-Simulator-Based Test on the Effectiveness of Auditory Red-Light Running Vehicle Warning System Based on Time-To-Collision Sensor

    PubMed Central

    Yan, Xuedong; Xue, Qingwan; Ma, Lu; Xu, Yongcun

    2014-01-01

    The collision avoidance warning system is an emerging technology designed to assist drivers in avoiding red-light running (RLR) collisions at intersections. The aim of this paper is to evaluate the effect of auditory warning information on collision avoidance behaviors in RLR pre-crash scenarios and further to examine the causal relationships among the relevant factors. A driving-simulator-based experiment was designed and conducted with 50 participants. The data from the experiments were analyzed by ANOVA and structural equation modeling (SEM) approaches. The collision-avoidance-related variables were measured in terms of brake reaction time (BRT), maximum deceleration and lane deviation in this study. It was found that the collision avoidance warning system can result in lower collision rates compared to the without-warning condition and lead to shorter reaction times, larger maximum deceleration and less lane deviation. Furthermore, the SEM analysis illustrates that the audio warning information in fact has both direct and indirect effects on the occurrence of collisions, and the indirect effect plays a more important role in collision avoidance than the direct effect. Essentially, the auditory warning information can assist drivers in detecting the RLR vehicles in a timely manner, thus providing drivers more adequate time and space to decelerate to avoid collisions with the conflicting vehicles. PMID:24566631
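    A time-to-collision (TTC) sensor of the kind named in the title typically triggers the auditory warning when the estimated TTC (gap divided by closing speed) drops below a threshold. The sketch below shows that generic logic; the threshold value and function names are illustrative assumptions, not parameters from the study.

```python
def time_to_collision(gap_m, closing_speed_mps):
    """Time to collision (s) with a conflicting vehicle;
    infinite when the gap is not closing."""
    if closing_speed_mps <= 0:
        return float("inf")
    return gap_m / closing_speed_mps

def should_warn(gap_m, closing_speed_mps, ttc_threshold_s=3.0):
    """Trigger the auditory warning when TTC drops below a threshold
    (the 3 s threshold here is illustrative, not from the study)."""
    return time_to_collision(gap_m, closing_speed_mps) < ttc_threshold_s

print(should_warn(40.0, 20.0))  # TTC = 2.0 s -> True (warn)
print(should_warn(90.0, 20.0))  # TTC = 4.5 s -> False (no warning)
```

    The choice of threshold trades off earlier warnings (more time to brake, per the BRT findings above) against nuisance alarms for vehicles that would have cleared the conflict point anyway.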

  9. Auditory spatial attention representations in the human cerebral cortex.

    PubMed

    Kong, Lingqiang; Michalka, Samantha W; Rosen, Maya L; Sheremata, Summer L; Swisher, Jascha D; Shinn-Cunningham, Barbara G; Somers, David C

    2014-03-01

    Auditory spatial attention serves important functions in auditory source separation and selection. Although auditory spatial attention mechanisms have been generally investigated, the neural substrates encoding spatial information acted on by attention have not been identified in the human neocortex. We performed functional magnetic resonance imaging experiments to identify cortical regions that support auditory spatial attention and to test 2 hypotheses regarding the coding of auditory spatial attention: 1) auditory spatial attention might recruit the visuospatial maps of the intraparietal sulcus (IPS) to create multimodal spatial attention maps; 2) auditory spatial information might be encoded without explicit cortical maps. We mapped visuotopic IPS regions in individual subjects and measured auditory spatial attention effects within these regions of interest. Contrary to the multimodal map hypothesis, we observed that auditory spatial attentional modulations spared the visuotopic maps of IPS; the parietal regions activated by auditory attention lacked map structure. However, multivoxel pattern analysis revealed that the superior temporal gyrus and the supramarginal gyrus contained significant information about the direction of spatial attention. These findings support the hypothesis that auditory spatial information is coded without a cortical map representation. Our findings suggest that audiospatial and visuospatial attention utilize distinctly different spatial coding schemes. PMID:23180753

  10. Quantitative Characterization of Super-Resolution Infrared Imaging Based on Time-Varying Focal Plane Coding

    NASA Astrophysics Data System (ADS)

    Wang, X.; Yuan, Y.; Zhang, J.; Chen, Y.; Cheng, Y.

    2014-10-01

    High-resolution images have long been the goal of infrared imaging systems. In this paper, a super-resolution infrared imaging method using a time-varying coded mask is proposed, based on focal plane coding and compressed sensing theory. The basic idea of this method is to place a coded mask on the focal plane of the optical system so that the same scene can be sampled repeatedly under a time-varying coding strategy; the super-resolution image is then reconstructed by a sparse optimization algorithm. The simulation results are quantitatively evaluated using the Peak Signal-to-Noise Ratio (PSNR) and the Modulation Transfer Function (MTF), which illustrate the effect of the compressed measurement coefficient r and the coded mask resolution m on reconstructed image quality. The results show that the proposed method can effectively improve infrared imaging quality, which should be helpful for the practical design of new high-resolution infrared imaging systems.
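
    The PSNR metric used for the quantitative evaluation can be computed directly from the mean squared error between the reference and reconstructed images. A minimal sketch (the `peak` default assumes 8-bit imagery; names are illustrative):

```python
import numpy as np

def psnr(reference, reconstructed, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two same-size images."""
    diff = np.asarray(reference, float) - np.asarray(reconstructed, float)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```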

  11. Naturalistic Stimuli Increase the Rate and Efficiency of Information Transmission by Primary Auditory Afferents

    NASA Astrophysics Data System (ADS)

    Rieke, F.; Bodnar, D. A.; Bialek, W.

    1995-12-01

    Natural sounds, especially communication sounds, have highly structured amplitude and phase spectra. We have quantified how structure in the amplitude spectrum of natural sounds affects coding in primary auditory afferents. Auditory afferents encode stimuli with naturalistic amplitude spectra dramatically better than broad-band stimuli (approximating white noise); the rate at which the spike train carries information about the stimulus is 2-6 times higher for naturalistic sounds. Furthermore, the information rates can reach 90% of the fundamental limit to information transmission set by the statistics of the spike response. These results indicate that the coding strategy of the auditory nerve is matched to the structure of natural sounds; this 'tuning' allows afferent spike trains to provide higher processing centres with a more complete description of the sensory world.

  12. Intercept Centering and Time Coding in Latent Difference Score Models

    ERIC Educational Resources Information Center

    Grimm, Kevin J.

    2012-01-01

    Latent difference score (LDS) models combine benefits derived from autoregressive and latent growth curve models allowing for time-dependent influences and systematic change. The specification and descriptions of LDS models include an initial level of ability or trait plus an accumulation of changes. A limitation of this specification is that the…

  13. Further Development and Implementation of Implicit Time Marching in the CAA Code

    NASA Technical Reports Server (NTRS)

    Golubev, Vladimir V.

    2003-01-01

    The fellowship research project continued the previous year's work on implementing implicit time-marching concepts in the Broadband Aeroacoustic System Simulator (BASS) code. This code is being developed at NASA Glenn for the analysis of unsteady flow and sources of noise in propulsion systems, including jet noise and fan noise.
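
    The appeal of implicit time marching is stability at large time steps. The abstract does not describe BASS's actual scheme, but the generic motivation can be illustrated with backward versus forward Euler on a stiff test equation dy/dt = lam*y:

```python
def implicit_euler(y0, lam, dt, steps):
    """Backward Euler: each step solves y_new = y_old + dt*lam*y_new,
    i.e. y_new = y_old / (1 - lam*dt); stable for any dt when lam < 0."""
    y = y0
    for _ in range(steps):
        y = y / (1.0 - lam * dt)
    return y

def explicit_euler(y0, lam, dt, steps):
    """Forward Euler: y_new = y_old * (1 + lam*dt); diverges if |1 + lam*dt| > 1."""
    y = y0
    for _ in range(steps):
        y = y * (1.0 + lam * dt)
    return y
```

    With lam = -50 and dt = 0.1, the implicit solution decays like the true solution while the explicit one blows up, which is why implicit marching pays off for stiff aeroacoustic problems.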

  14. Assessment of effectiveness of signal-code constructions in time division-multi-access satellite systems

    NASA Astrophysics Data System (ADS)

    Portnoy, S. L.; Ankudinov, D. R.

    1985-01-01

    Energy losses in TDMA satellite circuits are investigated on the basis of a Gaussian memoryless channel model incorporating a signal-code construction. The signal-code construction is a consolidated two-stage construction with a modulation system as the inner stage and correcting codes as the outer stage. Signal-code constructions employing Gray codes, cascade codes and M-ary block codes are considered. Real TDMA systems are analyzed on the assumptions that the calculations are made using an audio-frequency equivalent of the circuit, the relay carries a single trunk, the timing and carrier frequency synchronization are ideal, the signal is transmitted in a continuous stream, and there is no noise at the input of the receiving filter. The effectiveness of a signal-code construction employing cascade codes on a real satellite link incorporating MDVU-40 equipment is modeled. The method can be used to select the signal-code construction in a communications channel for the required data rate, and to maximize the energy gain and attainable transmission rate over the relay trunk.
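
    Of the codes mentioned, Gray codes are the simplest to illustrate: they reorder binary words so that adjacent values differ in a single bit, which limits the bit errors caused by small demodulation errors. A standard conversion sketch:

```python
def binary_to_gray(n):
    """Reflected binary (Gray) code: adjacent integers differ in exactly one bit."""
    return n ^ (n >> 1)

def gray_to_binary(g):
    """Invert the Gray mapping by folding the shifted bits back down."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```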

  15. Selective processing of auditory evoked responses with iterative-randomized stimulation and averaging: A strategy for evaluating the time-invariant assumption.

    PubMed

    Valderrama, Joaquin T; de la Torre, Angel; Medina, Carlos; Segura, Jose C; Thornton, A Roger D

    2016-03-01

    The recording of auditory evoked potentials (AEPs) at fast rates allows the study of neural adaptation, improves accuracy in estimating hearing threshold and may help in diagnosing certain pathologies. Stimulation sequences used to record AEPs at fast rates need to be designed with a certain jitter, i.e., they must not be periodic. Some authors believe that stimuli from wide-jittered sequences may evoke auditory responses of different morphology, in which case the time-invariant assumption would not hold. This paper describes a methodology that can be used to analyze the time-invariant assumption in jittered stimulation sequences. The proposed method [Split-IRSA] is based on an extended version of the iterative randomized stimulation and averaging (IRSA) technique, including selective processing of sweeps according to a predefined criterion. The fundamentals, the mathematical basis and relevant implementation guidelines of this technique are presented in this paper. The results of this study show that Split-IRSA presents adequate performance and that both fast and slow mechanisms of adaptation influence the evoked-response morphology; thus both mechanisms should be considered when time-invariance is assumed. The significance of these findings is discussed. PMID:26778545
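
    The recovery problem underlying IRSA-style techniques can be sketched as linear deconvolution: with jittered onsets, the overlapping responses form a least-squares system that is solvable precisely because the stimulation is not periodic. A toy illustration with synthetic data (names invented; this is not the authors' implementation):

```python
import numpy as np

def recover_response(recording, onsets, resp_len):
    """Least-squares estimate of a single evoked response from a recording
    in which overlapping responses to jittered stimuli were summed."""
    M = np.zeros((len(recording), resp_len))
    for t in onsets:                      # each onset adds a shifted identity block
        n = min(resp_len, len(recording) - t)
        M[t:t + n, :n] += np.eye(n)
    est, *_ = np.linalg.lstsq(M, recording, rcond=None)
    return est
```

    With periodic stimulation the shifted blocks would be linearly dependent and the system ill-posed; the jitter is what makes the estimate unique.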

  16. Seeing the Song: Left Auditory Structures May Track Auditory-Visual Dynamic Alignment

    PubMed Central

    Mossbridge, Julia A.; Grabowecky, Marcia; Suzuki, Satoru

    2013-01-01

    Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment. PMID:24194873

  18. Gene-Auto: Automatic Software Code Generation for Real-Time Embedded Systems

    NASA Astrophysics Data System (ADS)

    Rugina, A.-E.; Thomas, D.; Olive, X.; Veran, G.

    2008-08-01

    This paper gives an overview of the Gene-Auto ITEA European project, which aims at building a qualified C code generator from mathematical models under Matlab-Simulink and Scilab-Scicos. The project is driven by major European industry partners, active in the real-time embedded systems domains. The Gene-Auto code generator will significantly improve the current development processes in such domains by shortening the time to market and by guaranteeing the quality of the generated code through the use of formal methods. The first version of the Gene-Auto code generator has already been released and has gone through a validation phase on real-life case studies defined by each project partner. The validation results are taken into account in the implementation of the second version of the code generator. The partners aim at introducing the Gene-Auto results into industrial development by 2010.

  19. Quantum circuit for optimal eavesdropping in quantum key distribution using phase-time coding

    SciTech Connect

    Kronberg, D. A.; Molotkov, S. N.

    2010-07-15

    A quantum circuit is constructed for optimal eavesdropping on quantum key distribution protocols using phase-time coding, and its physical implementation based on linear and nonlinear fiber-optic components is proposed.

  20. Auditory sequence analysis and phonological skill.

    PubMed

    Grube, Manon; Kumar, Sukhbinder; Cooper, Freya E; Turton, Stuart; Griffiths, Timothy D

    2012-11-01

    This work tests the relationship between auditory and phonological skill in a non-selected cohort of 238 school students (age 11) with the specific hypothesis that sound-sequence analysis would be more relevant to phonological skill than the analysis of basic, single sounds. Auditory processing was assessed across the domains of pitch, time and timbre; a combination of six standard tests of literacy and language ability was used to assess phonological skill. A significant correlation between general auditory and phonological skill was demonstrated, plus a significant, specific correlation between measures of phonological skill and the auditory analysis of short sequences in pitch and time. The data support a limited but significant link between auditory and phonological ability with a specific role for sound-sequence analysis, and provide a possible new focus for auditory training strategies to aid language development in early adolescence. PMID:22951739

  1. Auditory short-term memory in the primate auditory cortex.

    PubMed

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory. PMID:26541581

  2. Electrophysiological study of auditory development.

    PubMed

    Lippé, S; Martinez-Montes, E; Arcand, C; Lassonde, M

    2009-12-15

    Cortical auditory evoked potential (CAEP) testing, a non-invasive technique, is widely employed to study auditory brain development. The aim of this study was to investigate the development of the auditory electrophysiological signal without addressing specific abilities such as speech or music discrimination. We were interested in the temporal and spectral domains of conventional auditory evoked potentials. We analyzed cerebral responses to auditory stimulation (broadband noises) in 40 infants and children (1 month to 5 years 6 months) and 10 adults using high-density electrophysiological recording. We hypothesized that the adult auditory response has precursors that can be identified in infant and child responses. Results confirm that complex adult CAEP responses and spectral activity patterns appear after 5 years, showing decreased involvement of lower frequencies and increased involvement of higher frequencies. In addition, the time-locked response to the stimulus and event-related spectral perturbation across frequencies revealed alpha and beta band contributions to the CAEP of infants and toddlers before maturing into the beta and gamma band activity of the adult response. A detailed analysis of electrophysiological responses to a perceptual stimulation revealed general development patterns and developmental precursors of the adult response. PMID:19665050

  3. Time-frequency analysis of transient evoked-otoacoustic emissions in individuals with auditory neuropathy spectrum disorder.

    PubMed

    Narne, Vijaya Kumar; Prabhu, P Prashanth; Chatni, Suma

    2014-07-01

    The aim of the study was to describe and quantify the cochlear active mechanisms in individuals with Auditory Neuropathy Spectrum Disorders (ANSD). Transient Evoked Otoacoustic Emissions (TEOAEs) were recorded in 15 individuals with ANSD and 22 individuals with normal hearing. TEOAEs were analyzed by the wavelet transform method to describe and quantify the characteristics of TEOAEs in narrow-band frequency regions. It was noted that the amplitude of TEOAEs was higher and latency slightly shorter in individuals with ANSD compared to normal-hearing individuals at low and mid frequencies. The increased amplitude and reduced latencies of TEOAEs in the ANSD group could be attributed to efferent system damage in individuals with ANSD, especially at low and mid frequencies. Thus, wavelet analysis of TEOAEs proves to be another important tool for understanding the pathophysiology in individuals with ANSD. PMID:24768764

  5. Change in the coding of interaural time difference along the tonotopic axis of the chicken nucleus laminaris

    PubMed Central

    Palanca-Castan, Nicolas; Köppl, Christine

    2015-01-01

    Interaural time differences (ITDs) are an important cue for the localization of sounds in azimuthal space. Both birds and mammals have specialized, tonotopically organized nuclei in the brain stem for the processing of ITD: medial superior olive in mammals and nucleus laminaris (NL) in birds. The specific way in which ITDs are derived was long assumed to conform to a delay-line model in which arrays of systematically arranged cells create a representation of auditory space with different cells responding maximally to specific ITDs. This model was supported by data from barn owl NL taken from regions above 3 kHz and from chicken above 1 kHz. However, data from mammals often do not show defining features of the Jeffress model such as a systematic topographic representation of best ITDs or the presence of axonal delay lines, and an alternative has been proposed in which neurons are not topographically arranged with respect to ITD and coding occurs through the assessment of the overall response of two large neuron populations, one in each hemisphere. Modeling studies have suggested that the presence of different coding systems could be related to the animal’s head size and frequency range rather than their phylogenetic group. Testing this hypothesis requires data from across the tonotopic range of both birds and mammals. The aim of this study was to obtain in vivo recordings from neurons in the low-frequency range (<1000 Hz) of chicken NL. Our data argues for the presence of a modified Jeffress system that uses the slopes of ITD-selective response functions instead of their peaks to topographically represent ITD at mid- to high frequencies. At low frequencies, below several 100 Hz, the data did not support any current model of ITD coding. This is different to what was previously shown in the barn owl and suggests that constraints in optimal ITD processing may be associated with the particular demands on sound localization determined by the animal’s ecological niche
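
    The delay-line idea behind the Jeffress model can be sketched as a bank of coincidence detectors, one per candidate internal delay; the detector whose delay compensates the acoustic ITD responds most strongly, giving a topographic readout of ITD. A toy illustration with made-up numbers (real ITDs span only tens to hundreds of microseconds):

```python
import numpy as np

def jeffress_itd(left, right, fs, max_lag=10):
    """Toy delay-line model: one 'coincidence detector' per candidate lag,
    each scoring how well the left input matches a delayed right input.
    Returns the best-scoring detector's delay as an ITD estimate (seconds)."""
    lags = np.arange(-max_lag, max_lag + 1)
    n = len(left)
    scores = [np.dot(left[max_lag:n - max_lag],
                     right[max_lag + k:n - max_lag + k]) for k in lags]
    return lags[int(np.argmax(scores))] / fs
```

    Note the ambiguity the abstract alludes to: for a pure tone, detectors tuned one full period apart score equally, which is why peak-based coding becomes unreliable at low frequencies relative to the usable delay range.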

  6. The influence of cochlear spectral processing on the timing and amplitude of the speech-evoked auditory brain stem response

    PubMed Central

    Nuttall, Helen E.; Moore, David R.; Barry, Johanna G.; Krumbholz, Katrin

    2015-01-01

    The speech-evoked auditory brain stem response (speech ABR) is widely considered to provide an index of the quality of neural temporal encoding in the central auditory pathway. The aim of the present study was to evaluate the extent to which the speech ABR is shaped by spectral processing in the cochlea. High-pass noise masking was used to record speech ABRs from delimited octave-wide frequency bands between 0.5 and 8 kHz in normal-hearing young adults. The latency of the frequency-delimited responses decreased from the lowest to the highest frequency band by up to 3.6 ms. The observed frequency-latency function was compatible with model predictions based on wave V of the click ABR. The frequency-delimited speech ABR amplitude was largest in the 2- to 4-kHz frequency band and decreased toward both higher and lower frequency bands despite the predominance of low-frequency energy in the speech stimulus. We argue that the frequency dependence of speech ABR latency and amplitude results from the decrease in cochlear filter width with decreasing frequency. The results suggest that the amplitude and latency of the speech ABR may reflect interindividual differences in cochlear, as well as central, processing. The high-pass noise-masking technique provides a useful tool for differentiating between peripheral and central effects on the speech ABR. It can be used for further elucidating the neural basis of the perceptual speech deficits that have been associated with individual differences in speech ABR characteristics. PMID:25787954

  7. Neural Representation of Concurrent Vowels in Macaque Primary Auditory Cortex.

    PubMed

    Fishman, Yonatan I; Micheyl, Christophe; Steinschneider, Mitchell

    2016-01-01

    Successful speech perception in real-world environments requires that the auditory system segregate competing voices that overlap in frequency and time into separate streams. Vowels are major constituents of speech and are comprised of frequencies (harmonics) that are integer multiples of a common fundamental frequency (F0). The pitch and identity of a vowel are determined by its F0 and spectral envelope (formant structure), respectively. When two spectrally overlapping vowels differing in F0 are presented concurrently, they can be readily perceived as two separate "auditory objects" with pitches at their respective F0s. A difference in pitch between two simultaneous vowels provides a powerful cue for their segregation, which in turn, facilitates their individual identification. The neural mechanisms underlying the segregation of concurrent vowels based on pitch differences are poorly understood. Here, we examine neural population responses in macaque primary auditory cortex (A1) to single and double concurrent vowels (/a/ and /i/) that differ in F0 such that they are heard as two separate auditory objects with distinct pitches. We find that neural population responses in A1 can resolve, via a rate-place code, lower harmonics of both single and double concurrent vowels. Furthermore, we show that the formant structures, and hence the identities, of single vowels can be reliably recovered from the neural representation of double concurrent vowels. We conclude that A1 contains sufficient spectral information to enable concurrent vowel segregation and identification by downstream cortical areas. PMID:27294198
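
    As a purely acoustic illustration (not the neural rate-place code itself), two harmonic complexes with different F0s remain separable in a spectral representation because their lower harmonics fall in distinct frequency bins; values below are illustrative:

```python
import numpy as np

fs, dur = 16000, 0.2
t = np.arange(int(fs * dur)) / fs

def harmonic_complex(f0, n_harmonics=10):
    """Vowel-like source: equal-amplitude harmonics at integer multiples of F0."""
    return sum(np.sin(2 * np.pi * f0 * k * t) for k in range(1, n_harmonics + 1))

mix = harmonic_complex(100.0) + harmonic_complex(125.0)   # two 'vowels', F0s 100 and 125 Hz
spectrum = np.abs(np.fft.rfft(mix))
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)
peaks = set(freqs[spectrum > 0.5 * len(t) / 2])           # bins containing a full harmonic
```

    The detected peaks are exactly the union of the two harmonic series, so a downstream stage reading this representation could, in principle, assign each resolved harmonic to one of the two sources by its F0.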

  8. Real-time transmission of digital video using variable-length coding

    NASA Astrophysics Data System (ADS)

    Bizon, Thomas P.; Shalkhauser, Mary Jo; Whyte, Wayne A., Jr.

    1993-03-01

    Huffman coding is a variable-length lossless compression technique where data with a high probability of occurrence is represented with short codewords, while 'not-so-likely' data is assigned longer codewords. Compression is achieved when the high-probability levels occur so frequently that their benefit outweighs any penalty paid when a less likely input occurs. One instance where Huffman coding is extremely effective occurs when data is highly predictable and differential coding can be applied (as with a digital video signal). For that reason, it is desirable to apply this compression technique to digital video transmission; however, special care must be taken in order to implement a communication protocol utilizing Huffman coding. This paper addresses several of the issues relating to the real-time transmission of Huffman-coded digital video over a constant-rate serial channel. Topics discussed include data rate conversion (from variable to a fixed rate), efficient data buffering, channel coding, recovery from communication errors, decoder synchronization, and decoder architectures. A description of the hardware developed to execute Huffman coding and serial transmission is also included. Although this paper focuses on matters relating to Huffman-coded digital video, the techniques discussed can easily be generalized for a variety of applications which require transmission of variable-length data.
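
    A minimal sketch of the Huffman construction described above (it assumes at least two distinct input symbols; a production codec would also need the decoder, buffering, and the channel-protocol machinery the paper discusses):

```python
import heapq
from collections import Counter

def huffman_code(data):
    """Map each symbol to a prefix-free codeword; frequent symbols get
    shorter codewords, so predictable data compresses well."""
    heap = [(count, i, {sym: ""})
            for i, (sym, count) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        c1, _, code1 = heapq.heappop(heap)   # two least-frequent subtrees
        c2, _, code2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in code1.items()}
        merged.update({s: "1" + w for s, w in code2.items()})
        heapq.heappush(heap, (c1 + c2, tie, merged))
        tie += 1
    return heap[0][2]

code = huffman_code("abracadabra")
encoded = "".join(code[ch] for ch in "abracadabra")
```

    For "abracadabra" the five distinct symbols would need 3 bits each under a fixed-length code (33 bits total); the Huffman code is shorter because 'a' dominates.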


  10. Investigate Methods to Decrease Compilation Time-AX-Program Code Group Computer Science R&D Project

    SciTech Connect

    Cottom, T

    2003-06-11

    Large simulation codes can take on the order of hours to compile from scratch. In Kull, which uses generic programming techniques, a significant portion of the time is spent generating and compiling template instantiations. I would like to investigate methods that would decrease the overall compilation time for large codes. These would be methods which could then be applied, hopefully, as standard practice to any large code. Success is measured by the overall decrease in wall clock time a developer spends waiting for an executable. Analyzing the make system of a slow-to-build project can benefit all developers on the project. Taking the time to analyze the number of processors used over the life of the build and restructuring the system to maximize the parallelization can significantly reduce build times. Distributing the build across multiple machines with the same configuration can increase the number of available processors for building and can help evenly balance the load. Becoming familiar with compiler options can have its benefits as well. The time improvements of the sum can be significant. Initial compilation time for Kull on OSF1 was approximately 3 hours. Final time on OSF1 after completion is 16 minutes. Initial compilation time for Kull on AIX was approximately 2 hours. Final time on AIX after completion is 25 minutes. Developers now spend 3 hours less waiting for a Kull executable on OSF1, and 2 hours less on AIX platforms. In the eyes of many Kull code developers, the project was a huge success.

  11. Real-time minimal bit error probability decoding of convolutional codes

    NASA Technical Reports Server (NTRS)

    Lee, L. N.

    1973-01-01

    A recursive procedure is derived for decoding of rate R=1/n binary convolutional codes which minimizes the probability of the individual decoding decisions for each information bit subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e. fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications such as in the inner coding system for concatenated coding.
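
    For comparison, the standard Viterbi algorithm against which the new scheme is benchmarked can be sketched for a small rate-1/2, constraint-length-3 code (generators 7 and 5 octal). This sketch decodes the whole received block at once rather than using a fixed decoding delay, and uses hard decisions; the paper's real-time, soft-decision setting is more involved:

```python
def conv_encode(bits, g1=0b111, g2=0b101):
    """Rate-1/2 convolutional encoder, constraint length 3 (generators 7, 5)."""
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & 0b111
        out += [bin(state & g1).count("1") & 1, bin(state & g2).count("1") & 1]
    return out

def viterbi_decode(received, g1=0b111, g2=0b101):
    """Maximum-likelihood sequence decoding over the 4-state trellis."""
    INF = float("inf")
    metric = {0: 0, 1: INF, 2: INF, 3: INF}      # encoder starts in state 0
    path = {s: [] for s in range(4)}
    for i in range(0, len(received), 2):
        r0, r1 = received[i], received[i + 1]
        new_metric = {s: INF for s in range(4)}
        new_path = {s: [] for s in range(4)}
        for s in range(4):                        # survivor selection per next state
            for b in (0, 1):
                reg = ((s << 1) | b) & 0b111
                cost = (metric[s]
                        + ((bin(reg & g1).count("1") & 1) != r0)
                        + ((bin(reg & g2).count("1") & 1) != r1))
                ns = reg & 0b11
                if cost < new_metric[ns]:
                    new_metric[ns], new_path[ns] = cost, path[s] + [b]
        metric, path = new_metric, new_path
    return path[min(metric, key=metric.get)]
```

    The minimal-bit-error algorithm of the paper differs in that it minimizes the error probability of each decoded bit individually, rather than of the whole sequence, which is why it can outperform Viterbi as an inner decoder in concatenated systems.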

  12. Digital coherent detection research on Brillouin optical time domain reflectometry with simplex pulse codes

    NASA Astrophysics Data System (ADS)

    Hao, Yun-Qi; Ye, Qing; Pan, Zheng-Qing; Cai, Hai-Wen; Qu, Rong-Hui

    2014-11-01

    The digital coherent detection technique has been investigated without any frequency-scanning device in Brillouin optical time domain reflectometry (BOTDR), where simplex pulse codes are applied in the sensing system. The time-domain signal of every code sequence is collected by the data acquisition card (DAQ). A shift-averaging technique is applied in the frequency domain because the local oscillator (LO) in the coherent detection is offset from the primary source by a fixed frequency. With the 31-bit simplex code, the signal-to-noise ratio (SNR) shows a 3.5-dB enhancement over the same number of single-pulse traces, in accordance with the theoretical analysis. The frequency fluctuation for simplex codes is 14.01 MHz less than that for a single pulse at 4-m spatial resolution. The results are believed to be beneficial for improving BOTDR performance.
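
    The shift-averaging step can be sketched generically (our synthetic illustration, not the authors' processing chain): because each code sequence's spectrum sits at a known, fixed offset from the LO, rolling each spectrum back by its offset before averaging keeps the Brillouin peak aligned while the noise averages down.

```python
import numpy as np

# Synthetic sketch of frequency-domain shift-averaging. Each "spectrum" is a
# Gaussian peak displaced by a known per-sequence offset plus noise; shifting
# each spectrum back before averaging realigns the peak.
rng = np.random.default_rng(0)
nbins, true_peak = 1024, 400
freqs = np.arange(nbins)
spectra, offsets = [], []
for k in range(31):                      # one spectrum per code sequence
    off = 3 * k                          # known per-sequence offset, in bins
    peak = np.exp(-0.5 * ((freqs - (true_peak + off)) / 8.0) ** 2)
    spectra.append(peak + 0.3 * rng.standard_normal(nbins))
    offsets.append(off)

# Shift each spectrum back by its known offset, then average.
avg = np.mean([np.roll(s, -off) for s, off in zip(spectra, offsets)], axis=0)
print(int(np.argmax(avg)))               # recovered peak position, near 400
```

    Without the shift, the 31 displaced peaks would smear into a broad plateau and the fitted Brillouin frequency would be biased; with it, averaging improves the SNR as in the single-peak case.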

  13. Auditory nerve disease and auditory neuropathy spectrum disorders.

    PubMed

    Kaga, Kimitaka

    2016-02-01

    In 1996, a new type of bilateral hearing disorder was discerned and published almost simultaneously by Kaga et al. [1] and Starr et al. [2]. Although the pathophysiology of the disorder as reported by each author was essentially identical, Kaga used the term "auditory nerve disease" and Starr used the term "auditory neuropathy". Auditory neuropathy (AN) in adults is an acquired disorder characterized by mild-to-moderate pure-tone hearing loss, poor speech discrimination, and absence of the auditory brainstem response (ABR), all in the presence of normal cochlear outer hair cell function as indicated by normal distortion product otoacoustic emissions (DPOAEs) and evoked summating potentials (SPs) on electrocochleography (ECoG). A variety of processes and etiologies are thought to be involved in its pathophysiology, including mutations of the OTOF and/or OPA1 genes. Most of the subsequent reports in the literature discuss the various auditory profiles of patients with AN [3,4], and in this report we present the profiles of an additional 17 cases of adult AN. Cochlear implants are useful for the reacquisition of hearing in adult AN, although hearing aids are ineffective. In 2008, the new term Auditory Neuropathy Spectrum Disorders (ANSD) was proposed by the Colorado Children's Hospital group following a comprehensive study of newborn hearing test results: when ABRs were absent and DPOAEs were present during newborn screening, the cases were classified as ANSD. In 2013, our group at the Tokyo Medical Center classified ANSD into three types by following changes in ABRs and DPOAEs over time with development. In Type I there is normalization of hearing over time, Type II shows a change into profound hearing loss, and Type III is true auditory neuropathy (AN). We emphasize that, in adults, ANSD is not the same as AN. PMID:26209259

  14. A New Quaternion Design for Space-Time-Polarization Block Code with Full Diversity

    NASA Astrophysics Data System (ADS)

    Ma, Huanfei; Kan, Haibin; Imai, Hideki

    Construction of quaternion designs for Space-Time-Polarization Block Codes (STPBCs) is a hot but difficult topic. This letter introduces a novel way to construct high-dimensional quaternion designs for STPBC based on any existing low-dimensional quaternion orthogonal designs (QODs), while preserving the merits of the original QODs such as full diversity and simple decoding. Furthermore, it also provides a specific scheme to reach full diversity and maximized coding gain by signal constellation rotation on the polarization plane.

  15. Fast minimum-redundancy prefix coding for real-time space data compression

    NASA Astrophysics Data System (ADS)

    Huang, Bormin

    2007-09-01

    The minimum-redundancy prefix-free code problem is to determine an array l = {l_1, ..., l_n} of n integer codeword lengths, given an array f = {f_1, ..., f_n} of n symbol occurrence frequencies, such that the Kraft-McMillan inequality sum_{i=1}^{n} 2^(-l_i) <= 1 holds and the total number of coded bits sum_{i=1}^{n} f_i * l_i is minimized. Previous minimum-redundancy prefix-free coding based on Huffman's greedy algorithm solves this problem in O(n) time if the input array f is sorted, but in O(n log n) time if f is unsorted. In this paper a fast algorithm is proposed that solves the problem in linear time even when f is unsorted, which makes it suitable for real-time applications in satellite communication and consumer electronics. We also develop its VLSI architecture, which consists of four modules: the frequency table builder, the codeword length table builder, the codeword table builder, and the input-to-codeword mapper.
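
    The classical route to the sorted case is the two-queue Huffman merge, sketched below (our illustration, not the paper's algorithm; for clarity this version carries explicit symbol sets, which costs more than the O(n) bookkeeping a production version would use, and the paper's linear-time method for unsorted input is not reproduced here):

```python
from collections import deque

# Two-queue Huffman merge: leaves arrive pre-sorted, and merged internal
# nodes are appended to a second queue in nondecreasing weight order, so
# each minimum is always found at one of the two queue fronts.
def huffman_lengths_sorted(freqs):
    """Codeword lengths for frequencies sorted in nondecreasing order."""
    n = len(freqs)
    if n == 1:
        return [1]                       # single symbol: one 1-bit codeword
    leaves = deque((f, [i]) for i, f in enumerate(freqs))
    internal = deque()
    lengths = [0] * n

    def pop_min():
        if internal and (not leaves or internal[0][0] < leaves[0][0]):
            return internal.popleft()
        return leaves.popleft()

    while len(leaves) + len(internal) > 1:
        f1, s1 = pop_min()
        f2, s2 = pop_min()
        for i in s1 + s2:                # every symbol under the new node
            lengths[i] += 1              # moves one level deeper in the tree
        internal.append((f1 + f2, s1 + s2))
    return lengths
```

    For f = {1, 1, 2, 3, 5} this yields lengths {4, 4, 3, 2, 1}; the Kraft-McMillan sum 2^-4 + 2^-4 + 2^-3 + 2^-2 + 2^-1 equals exactly 1, as it must for an optimal full code tree, and the coded size is sum f_i * l_i = 25 bits.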

  16. Development of a variable time-step transient NEM code: SPANDEX

    SciTech Connect

    Aviles, B.N.

    1993-01-01

    This paper describes a three-dimensional, variable time-step transient multigroup diffusion theory code, SPANDEX (space-time nodal expansion method). SPANDEX is based on the static nodal expansion method (NEM) code, NODEX (Ref. 1), and employs a nonlinear algorithm and a fifth-order expansion of the transverse-integrated fluxes. The time integration scheme in SPANDEX is a fourth-order implicit generalized Runge-Kutta method (GRK) with on-line error control and variable time-step selection. This Runge-Kutta method has been applied previously to point kinetics and one-dimensional finite difference transient analysis. This paper describes the application of the Runge-Kutta method to three-dimensional reactor transient analysis in a multigroup NEM code.

  17. 14 CFR 234.9 - Reporting of on-time performance codes.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... deliver, or arrange to have delivered, to each system vendor, as defined in 14 CFR part 255, the on-time... 14 Aeronautics and Space 4 2011-01-01 2011-01-01 false Reporting of on-time performance codes. 234.9 Section 234.9 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF...

  18. 14 CFR 234.9 - Reporting of on-time performance codes.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... deliver, or arrange to have delivered, to each system vendor, as defined in 14 CFR part 255, the on-time... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Reporting of on-time performance codes. 234.9 Section 234.9 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF...

  19. 14 CFR 234.9 - Reporting of on-time performance codes.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... deliver, or arrange to have delivered, to each system vendor, as defined in 14 CFR part 255, the on-time... 14 Aeronautics and Space 4 2014-01-01 2014-01-01 false Reporting of on-time performance codes. 234.9 Section 234.9 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF...

  20. 14 CFR 234.9 - Reporting of on-time performance codes.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... deliver, or arrange to have delivered, to each system vendor, as defined in 14 CFR part 255, the on-time... 14 Aeronautics and Space 4 2013-01-01 2013-01-01 false Reporting of on-time performance codes. 234.9 Section 234.9 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF...

  1. 14 CFR 234.9 - Reporting of on-time performance codes.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... deliver, or arrange to have delivered, to each system vendor, as defined in 14 CFR part 255, the on-time... 14 Aeronautics and Space 4 2012-01-01 2012-01-01 false Reporting of on-time performance codes. 234.9 Section 234.9 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF...

  2. Bat's auditory system: Corticofugal feedback and plasticity

    NASA Astrophysics Data System (ADS)

    Suga, Nobuo

    2001-05-01

    The auditory system of the mustached bat consists of physiologically distinct subdivisions for processing different types of biosonar information. It was found that the corticofugal (descending) auditory system plays an important role in improving and adjusting auditory signal processing. Repetitive acoustic stimulation, cortical electrical stimulation, or auditory fear conditioning evokes plastic changes in the central auditory system. The changes are based upon egocentric selection evoked by focused positive feedback associated with lateral inhibition. Focal electric stimulation of the auditory cortex evokes short-term changes in the auditory cortex and subcortical auditory nuclei. An increase in the cortical acetylcholine level during the electric stimulation converts the cortical changes from short-term to long-term. There are two types of plastic changes (reorganizations): centripetal best-frequency shifts for expanded reorganization of a neural frequency map, and centrifugal best-frequency shifts for compressed reorganization of the map. Which change occurs depends on the balance between inhibition and facilitation. Expanded reorganization has been found in different sensory systems and different species of mammals, whereas compressed reorganization has thus far been found only in the auditory subsystems highly specialized for echolocation. The two types of reorganization occur in both the frequency and time domains. [Work supported by NIDCD DC00175.]

  3. Rejection positivity predicts trial-to-trial reaction times in an auditory selective attention task: a computational analysis of inhibitory control

    PubMed Central

    Chen, Sufen; Melara, Robert D.

    2014-01-01

    A series of computer simulations using variants of a formal model of attention (Melara and Algom, 2003) probed the role of rejection positivity (RP), a slow-wave electroencephalographic (EEG) component, in the inhibitory control of distraction. Behavioral and EEG data were recorded as participants performed auditory selective attention tasks. Simulations that modulated processes of distractor inhibition accounted well for reaction-time (RT) performance, whereas those that modulated target excitation did not. A model that incorporated RP from actual EEG recordings in estimating distractor inhibition was superior in predicting changes in RT as a function of distractor salience across conditions. A model that additionally incorporated momentary fluctuations in EEG as the source of trial-to-trial variation in performance precisely predicted individual RTs within each condition. The results lend support to the linking proposition that RP controls the speed of responding to targets through the inhibitory control of distractors. PMID:25191244

  4. Neural processing of auditory signals in the time domain: delay-tuned coincidence detectors in the mustached bat.

    PubMed

    Suga, Nobuo

    2015-06-01

    The central auditory system produces combination-sensitive neurons tuned to a specific combination of multiple signal elements. Some of these neurons act as coincidence detectors with delay lines for the extraction of spectro-temporal information from sounds. "Delay-tuned" neurons of mustached bats are tuned to a combination of up to four signal elements with a specific delay between them and form a delay map. They are produced in the inferior colliculus by the coincidence of the rebound response following glycinergic inhibition to the first harmonic of a biosonar pulse with the short-latency response to the 2nd-4th harmonics of its echo. Compared with collicular delay-tuned neurons, thalamic and cortical ones respond more to pulse-echo pairs than to individual sounds. Cortical delay-tuned neurons are clustered in three separate areas. They interact with each other through a circuit mediating positive feedback and lateral inhibition for adjustment and improvement of the delay tuning of cortical and subcortical neurons. The current article reviews the mechanisms for delay tuning and the response properties of collicular, thalamic and cortical delay-tuned neurons in relation to hierarchical signal processing. PMID:25752443
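
    The coincidence mechanism can be reduced to a toy model (our illustration, not the paper's; all latencies and the coincidence window below are made-up values): the pulse evokes a rebound at a fixed latency after glycinergic inhibition, the echo evokes a short-latency response, and the unit fires only when the two arrive together.

```python
# Toy delay-tuned coincidence detector. The unit's "best delay" is the
# pulse-echo delay at which the delayed rebound to the pulse coincides
# with the short-latency response to the echo. All times in ms; the
# specific numbers are hypothetical.
def responds(pulse_t, echo_t, rebound_latency, echo_latency=1.0, window=0.5):
    """True if the rebound and the echo response coincide within `window`."""
    rebound = pulse_t + rebound_latency       # rebound following inhibition
    echo_response = echo_t + echo_latency     # short-latency echo response
    return abs(rebound - echo_response) <= window
```

    A unit with a 6 ms rebound latency and a 1 ms echo latency is thus tuned to pulse-echo delays near 5 ms, i.e., targets within roughly a metre of range; units with longer rebound latencies tile longer delays, producing the delay map.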

  5. Performance of asynchronous fiber-optic code division multiple access system based on three-dimensional wavelength/time/space codes and its link analysis.

    PubMed

    Singh, Jaswinder

    2010-03-10

    A novel family of three-dimensional (3-D) wavelength/time/space codes for asynchronous optical code-division multiple-access (CDMA) systems with "zero" off-peak autocorrelation and "unity" cross-correlation is reported. Antipodal signaling and differential detection are employed in the system. A maximum of (W × T + 1) × W codes are generated for unity cross-correlation, where W and T are the number of wavelengths and time chips used in the code and are prime. The conditions for violation of the cross-correlation constraint are discussed. The expressions for the number of generated codes are determined for various code dimensions. It is found that the maximum number of codes is generated for S ≤ min(W, T), where W and T are prime and S is the number of space channels. The performance of these codes is compared to the earlier reported two-dimensional (2-D)/3-D codes for asynchronous systems. The codes have a code-set-size to code-size ratio greater than W/S. For instance, with a code size of 2065 (59 × 7 × 5), a total of 12,213 users can be supported, and 130 simultaneous users at a bit-error rate (BER) of 10^-9. An arrayed-waveguide-grating-based reconfigurable encoder/decoder design for 2-D implementation of the 3-D codes is presented, so that the need for multiple star couplers and fiber ribbons is eliminated. The hardware requirements of the coders used for various modulation/detection schemes are given. The effect of insertion loss in the coders is shown to be significantly reduced with loss compensation by using an amplifier after encoding. An optical CDMA system for four users is simulated, and the results presented show the improvement in performance with the use of loss compensation. PMID:20220892

  6. Neural code alterations and abnormal time patterns in Parkinson’s disease

    NASA Astrophysics Data System (ADS)

    Andres, Daniela Sabrina; Cerquetti, Daniel; Merello, Marcelo

    2015-04-01

    Objective. The neural code used by the basal ganglia is a current question in neuroscience, relevant for the understanding of the pathophysiology of Parkinson’s disease. While a rate code is known to participate in the communication between the basal ganglia and the motor thalamus/cortex, different lines of evidence have also favored the presence of complex time patterns in the discharge of the basal ganglia. To gain insight into the way the basal ganglia code information, we studied the activity of the globus pallidus pars interna (GPi), an output node of the circuit. Approach. We implemented the 6-hydroxydopamine model of Parkinsonism in Sprague-Dawley rats, and recorded the spontaneous discharge of single GPi neurons, in head-restrained conditions at full alertness. Analyzing the temporal structure function, we looked for characteristic scales in the neuronal discharge of the GPi. Main results. At a low-scale, we observed the presence of dynamic processes, which allow the transmission of time patterns. Conversely, at a middle-scale, stochastic processes force the use of a rate code. Regarding the time patterns transmitted, we measured the word length and found that it is increased in Parkinson’s disease. Furthermore, it showed a positive correlation with the frequency of discharge, indicating that an exacerbation of this abnormal time pattern length can be expected, as the dopamine depletion progresses. Significance. We conclude that a rate code and a time pattern code can co-exist in the basal ganglia at different temporal scales. However, their normal balance is progressively altered and replaced by pathological time patterns in Parkinson’s disease.
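
    The scale analysis described above can be illustrated with a generic first-order temporal structure function (our sketch, not the authors' pipeline): SF(tau) = <|x(t + tau) - x(t)|>, whose growth with lag reveals characteristic time scales in a signal.

```python
import numpy as np

# Generic first-order temporal structure function. For a signal with an
# underlying rhythm, SF(tau) grows with lag up to about half the period,
# which is how a characteristic scale shows up in this kind of analysis.
def structure_function(x, taus):
    return np.array([np.mean(np.abs(x[tau:] - x[:-tau])) for tau in taus])

rng = np.random.default_rng(1)
t = np.arange(5000)
# Toy "rate" signal: a 200-sample rhythm plus noise (synthetic, not GPi data).
x = np.sin(2 * np.pi * t / 200) + 0.1 * rng.standard_normal(t.size)
sf = structure_function(x, [1, 10, 50, 100])
# sf grows monotonically over these lags, up to the 100-sample half-period.
```

    On real spike-train data the interesting feature is where such growth changes character: dynamic (pattern-carrying) behavior at low scales versus stochastic (rate-like) behavior at middle scales, as the abstract describes.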

  7. Interaural intensity and latency difference in the dolphin's auditory system.

    PubMed

    Popov, V V; Supin, A Ya

    1991-12-01

    Binaural hearing mechanisms were measured in dolphins (Inia geoffrensis) by recording the auditory nerve evoked response from the body surface. An azimuthal sound-source position of 10-15 degrees from the longitudinal axis elicited an interaural intensity disparity of up to 20 dB and an interaural latency difference as large as 250 microseconds. The latter was many times greater than the acoustical interaural time delay and appears to be caused by the intensity disparity; such a latency difference seems to be an effective way of coding intensity disparity. PMID:1816509

  8. Timing group delay and differential code bias corrections for BeiDou positioning

    NASA Astrophysics Data System (ADS)

    Guo, Fei; Zhang, Xiaohong; Wang, Jinling

    2015-05-01

    This article first clarifies the relationship between the timing group delay (TGD) and differential code bias (DCB) parameters for BDS, and demonstrates the equivalence of the TGD and DCB correction models, combining theory with practice. The TGD/DCB correction models have been extended to various occasions for BDS positioning, and such models have been evaluated with real triple-frequency datasets. To test the effectiveness of the broadcast TGDs in the navigation message and the DCBs provided by the Multi-GNSS Experiment (MGEX), both standard point positioning (SPP) and precise point positioning (PPP) tests are carried out for BDS signals with different schemes. Furthermore, the influence of differential code biases on BDS positioning estimates such as coordinates, receiver clock biases, tropospheric delays and carrier phase ambiguities is investigated comprehensively. Comparative analyses show that unmodeled differential code biases degrade the performance of BDS SPP by a factor of two or more, whereas the PPP estimates are subject to varying degrees of influence. For SPP, the accuracy of dual-frequency combinations is slightly worse than that of single-frequency positioning, and the combinations are much more sensitive to the differential code biases, particularly the B2B3 combination. For PPP, the uncorrected differential code biases are mostly absorbed into the receiver clock bias and carrier phase ambiguities, resulting in a much longer convergence time. Even though the influence of the differential code biases can be mitigated over time and comparable positioning accuracy can be achieved after convergence, it is suggested that the differential code biases be handled properly, since they are vital for PPP convergence and integer ambiguity resolution.
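
    The kind of correction being evaluated can be sketched as follows (our sketch under standard assumptions, not the authors' software: the BDS broadcast clock is taken as referenced to B3, so a B3 user needs no TGD, B1/B2 users apply TGD1/TGD2, and the B1B3 ionosphere-free user applies a frequency-scaled TGD1; the TGD value below is made up for illustration):

```python
# Sketch of applying broadcast TGD parameters to BDS pseudoranges, under the
# common convention that the broadcast satellite clock is referenced to B3.
# The TGD value used below is hypothetical.
C = 299792458.0                    # speed of light, m/s
F1, F3 = 1561.098e6, 1268.520e6    # BDS B1I and B3I carrier frequencies, Hz

def tgd_range_correction(signal, tgd1, tgd2=0.0):
    """Range-level correction in metres implied by the broadcast TGDs."""
    if signal == "B1":
        return C * tgd1
    if signal == "B2":
        return C * tgd2
    if signal == "B3":
        return 0.0                 # clock already referenced to B3
    if signal == "B1B3":           # ionosphere-free combination: scaled TGD1
        return C * F1**2 * tgd1 / (F1**2 - F3**2)
    raise ValueError(signal)

tgd1 = 4.6e-9                      # seconds; hypothetical broadcast value
print(round(tgd_range_correction("B1", tgd1), 3))   # about 1.4 m of range
```

    A few nanoseconds of unapplied TGD thus maps directly to metre-level range error, which is why ignoring these biases visibly degrades SPP and slows PPP convergence.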

  9. AUDITORY CORTICAL PLASTICITY: DOES IT PROVIDE EVIDENCE FOR COGNITIVE PROCESSING IN THE AUDITORY CORTEX?

    PubMed Central

    Irvine, Dexter R. F.

    2007-01-01

    The past 20 years have seen substantial changes in our view of the nature of the processing carried out in auditory cortex. Some processing of a cognitive nature, previously attributed to higher order “association” areas, is now considered to take place in auditory cortex itself. One argument adduced in support of this view is the evidence indicating a remarkable degree of plasticity in the auditory cortex of adult animals. Such plasticity has been demonstrated in a wide range of paradigms, in which auditory input or the behavioural significance of particular inputs is manipulated. Changes over the same time period in our conceptualization of the receptive fields of cortical neurons, and well-established mechanisms for use-related changes in synaptic function, can account for many forms of auditory cortical plasticity. On the basis of a review of auditory cortical plasticity and its probable mechanisms, it is argued that only plasticity associated with learning tasks provides a strong case for cognitive processing in auditory cortex. Even in this case the evidence is indirect, in that it has not yet been established that the changes in auditory cortex are necessary for behavioural learning and memory. Although other lines of evidence provide convincing support for cognitive processing in auditory cortex, that provided by auditory cortical plasticity remains equivocal. PMID:17303356

  10. Development of the N1-P2 auditory evoked response to amplitude rise time and rate of formant transition of speech sounds.

    PubMed

    Carpenter, Allen L; Shahin, Antoine J

    2013-06-01

    We investigated the development of weighting strategies for acoustic cues by examining the morphology of the N1-P2 auditory evoked potential (AEP) to changes in amplitude rise time (ART) and rate of formant transition (RFT) of consonant-vowel (CV) pairs in 4-6-year-olds and adults. In the AEP session, individuals listened passively to the CVs /ba/, /wa/, and a /ba/ with a superimposed slower-rising /wa/ envelope (/ba/(wa)). In the behavioral session, individuals listened to the same stimuli and judged whether they heard a /ba/ or /wa/. We hypothesized that a developmental shift in weighting strategies should be reflected in a change in the morphology of the N1-P2 AEP. In 6-year-olds and adults, the N1-P2 amplitude at the vertex reflected a change in RFT but not in ART. In contrast, in the 4-5-year-olds, the vertex N1-P2 did not show specificity to changes in ART or RFT. In all groups, the N1-P2 amplitude at channel C4 (right hemisphere) reflected a change in ART but not in RFT. Behaviorally, 6-year-olds and adults predominantly utilized RFT cues (classified /ba/(wa) as /ba/) during phonetic judgments, as opposed to 4-5-year-olds, who utilized both cues equally. Our findings suggest that both ART and RFT are encoded in the auditory cortex, but an N1-P2 shift toward the vertex after age 4-5 indicates a shift toward an adult-like weighting strategy, one that utilizes RFT to a greater extent. PMID:23570734