Science.gov

Sample records for auditory time coding

  1. Coding space-time stimulus dynamics in auditory brain maps

    PubMed Central

    Wang, Yunyan; Gutfreund, Yoram; Peña, José L.

    2014-01-01

    Sensory maps are often distorted representations of the environment, where ethologically-important ranges are magnified. The implication of a biased representation extends beyond increased acuity for having more neurons dedicated to a certain range. Because neurons are functionally interconnected, non-uniform representations influence the processing of high-order features that rely on comparison across areas of the map. Among these features are time-dependent changes of the auditory scene generated by moving objects. How sensory representation affects high order processing can be approached in the map of auditory space of the owl's midbrain, where locations in the front are over-represented. In this map, neurons are selective not only to location but also to location over time. The tuning to space over time leads to direction selectivity, which is also topographically organized. Across the population, neurons tuned to peripheral space are more selective to sounds moving into the front. The distribution of direction selectivity can be explained by spatial and temporal integration on the non-uniform map of space. Thus, the representation of space can induce biased computation of a second-order stimulus feature. This phenomenon is likely observed in other sensory maps and may be relevant for behavior. PMID:24782781

  2. Precise Feature Based Time Scales and Frequency Decorrelation Lead to a Sparse Auditory Code

    PubMed Central

    Chen, Chen; Read, Heather L.; Escabí, Monty A.

    2012-01-01

    Sparse redundancy reducing codes have been proposed as efficient strategies for representing sensory stimuli. A prevailing hypothesis suggests that sensory representations shift from dense redundant codes in the periphery to selective sparse codes in cortex. We propose an alternative framework where sparseness and redundancy depend on sensory integration time scales and demonstrate that the central nucleus of the inferior colliculus (ICC) of cats encodes sound features by precise sparse spike trains. Direct comparisons with auditory cortical neurons demonstrate that ICC responses were sparse and uncorrelated as long as the spike train time scales were matched to the sensory integration time scales relevant to ICC neurons. Intriguingly, correlated spiking in the ICC was substantially lower than predicted by linear or nonlinear models and strictly observed for neurons with best frequencies within a “critical band,” the hallmark of perceptual frequency resolution in mammals. This is consistent with a sparse asynchronous code throughout much of the ICC and a complementary correlation code within a critical band that may allow grouping of perceptually relevant cues. PMID:22723685
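A toy simulation can illustrate the abstract's central point that correlation between spike trains depends on the analysis time scale (all rates, durations, and the 2 Hz envelope below are hypothetical, not the ICC data): two trains share a slow rate envelope but spike independently at fine time scales, so binned-count correlations are weak at 1 ms resolution and strong at 50 ms resolution.

```python
import math, random

def spike_train(rate_env, rng):
    """Bernoulli spiking driven by a shared slow rate envelope (prob per 1 ms bin)."""
    return [1 if rng.random() < r else 0 for r in rate_env]

def rebin(counts, factor):
    """Sum fine bins into coarser bins."""
    return [sum(counts[i:i + factor]) for i in range(0, len(counts) - factor + 1, factor)]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return num / (sx * sy)

rng = random.Random(0)
# shared 2 Hz envelope over 60 s (1 ms bins): slow comodulation, independent fine timing
env = [0.02 * (1 + math.sin(2 * math.pi * 2.0 * t / 1000.0)) for t in range(60000)]
a, b = spike_train(env, rng), spike_train(env, rng)
r_fine = pearson(a, b)                          # correlation at 1 ms bins
r_coarse = pearson(rebin(a, 50), rebin(b, 50))  # correlation at 50 ms bins
```

With fine bins the counts are dominated by independent spike timing, so the correlation is near zero; with coarse bins the counts track the shared envelope and the correlation is substantial.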

  3. Refractoriness enhances temporal coding by auditory nerve fibers.

    PubMed

    Avissar, Michael; Wittig, John H; Saunders, James C; Parsons, Thomas D

    2013-05-01

    A universal property of spiking neurons is refractoriness, a transient decrease in discharge probability immediately following an action potential (spike). The refractory period lasts only one to a few milliseconds, but has the potential to affect temporal coding of acoustic stimuli by auditory neurons, which are capable of submillisecond spike-time precision. Here this possibility was investigated systematically by recording spike times from chicken auditory nerve fibers in vivo while stimulating with repeated pure tones at characteristic frequency. Refractory periods were tightly distributed, with a mean of 1.58 ms. A statistical model was developed to recapitulate each fiber's responses and then used to predict the effect of removing the refractory period on a cell-by-cell basis for two largely independent facets of temporal coding: faithful entrainment of interspike intervals to the stimulus frequency and precise synchronization of spike times to the stimulus phase. The ratio of the refractory period to the stimulus period predicted the impact of refractoriness on entrainment and synchronization. For ratios less than ∼0.9, refractoriness enhanced entrainment and this enhancement was often accompanied by an increase in spike-time precision. At higher ratios, little or no change in entrainment or synchronization was observed. Given the tight distribution of refractory periods, the ability of refractoriness to improve temporal coding is restricted to neurons responding to low-frequency stimuli. Enhanced encoding of low frequencies likely affects sound localization and pitch perception in the auditory system, as well as perception in nonauditory sensory modalities, because all spiking neurons exhibit refractoriness. PMID:23637161
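The two ingredients of this study — a refractory period and phase-locked spiking — can be sketched in a toy model (all probabilities, jitter, and spike counts below are hypothetical, not the chicken fiber data; only the 1.58 ms mean refractory period and the ~0.9 ratio come from the abstract). Vector strength, the standard synchronization index, measures how tightly spikes cluster at one stimulus phase.

```python
import math, random

def candidate_spikes(freq_hz, n_cycles, p_locked, p_random, jitter_ms, rng):
    """Per stimulus cycle: a phase-locked candidate spike near phase zero, and
    possibly a poorly timed random-phase candidate. Times in ms."""
    period = 1000.0 / freq_hz
    times = []
    for c in range(n_cycles):
        if rng.random() < p_locked:
            times.append(c * period + rng.gauss(0.0, jitter_ms))
        if rng.random() < p_random:
            times.append(c * period + rng.uniform(0.0, period))
    return sorted(times)

def apply_refractoriness(times, refractory_ms):
    """Discard any spike that falls within the refractory period of the last kept spike."""
    kept, last = [], float("-inf")
    for t in times:
        if t - last >= refractory_ms:
            kept.append(t)
            last = t
    return kept

def vector_strength(times, freq_hz):
    """Synchronization index in [0, 1]: length of the mean phase vector."""
    ang = [2 * math.pi * freq_hz * t / 1000.0 for t in times]
    return math.hypot(sum(map(math.cos, ang)), sum(map(math.sin, ang))) / len(ang)

rng = random.Random(7)
cand = candidate_spikes(500.0, 5000, 0.8, 0.5, 0.1, rng)  # 500 Hz tone, 2 ms period
no_refrac = apply_refractoriness(cand, 0.0)
with_refrac = apply_refractoriness(cand, 1.58)  # mean refractory period from the study
# ratio of refractory period to stimulus period = 1.58 / 2.0 = 0.79, below the ~0.9 regime
```

The refractory pass necessarily removes spikes, and the removed spikes are disproportionately the poorly timed ones that arrive shortly after a phase-locked spike — the intuition behind refractoriness enhancing temporal coding at low ratios.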

  4. Changing Auditory Time with Prismatic Goggles

    ERIC Educational Resources Information Center

    Magnani, Barbara; Pavani, Francesco; Frassinetti, Francesca

    2012-01-01

    The aim of the present study was to explore the spatial organization of auditory time and the effects of the manipulation of spatial attention on such a representation. In two experiments, we asked 28 adults to classify the duration of auditory stimuli as "short" or "long". Stimuli were tones of high or low pitch, delivered left or right of the…

  5. How the owl resolves auditory coding ambiguity.

    PubMed

    Mazer, J A

    1998-09-01

    The barn owl (Tyto alba) uses interaural time difference (ITD) cues to localize sounds in the horizontal plane. Low-order binaural auditory neurons with sharp frequency tuning act as narrow-band coincidence detectors; such neurons respond equally well to sounds with a particular ITD and its phase equivalents and are said to be phase ambiguous. Higher-order neurons with broad frequency tuning are unambiguously selective for single ITDs in response to broad-band sounds and show little or no response to phase equivalents. Selectivity for single ITDs is thought to arise from the convergence of parallel, narrow-band frequency channels that originate in the cochlea. ITD tuning to variable bandwidth stimuli was measured in higher-order neurons of the owl's inferior colliculus to examine the rules that govern the relationship between frequency channel convergence and the resolution of phase ambiguity. Ambiguity decreased as stimulus bandwidth increased, reaching a minimum at 2-3 kHz. Two independent mechanisms appear to contribute to the elimination of ambiguity: one suppressive and one facilitative. The integration of information carried by parallel, distributed processing channels is a common theme of sensory processing that spans both modality and species boundaries. The principles underlying the resolution of phase ambiguity and frequency channel convergence in the owl may have implications for other sensory systems, such as electrolocation in electric fish and the computation of binocular disparity in the avian and mammalian visual systems. PMID:9724807
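The frequency-channel convergence described here can be sketched with an idealized rate model (a cosine ITD curve per channel; the channel list and true ITD below are illustrative choices, not the owl recordings): each narrow-band channel responds periodically in ITD at its best frequency, so averaging across channels preserves the one peak all channels share — the true ITD — while the phase-equivalent side peaks land at different ITDs and average away.

```python
import math

def narrowband_rate(itd_us, true_itd_us, freq_hz):
    """Idealized narrow-band coincidence detector: periodic in ITD with period 1/f,
    hence it responds equally to the true ITD and to its phase equivalents."""
    return 0.5 * (1.0 + math.cos(2 * math.pi * freq_hz * (itd_us - true_itd_us) * 1e-6))

def broadband_rate(itd_us, true_itd_us, freqs_hz):
    """Higher-order neuron pooling converging narrow-band frequency channels."""
    return sum(narrowband_rate(itd_us, true_itd_us, f) for f in freqs_hz) / len(freqs_hz)

true_itd = 100.0                                     # microseconds
channels = [4000.0 + 150.0 * k for k in range(21)]   # 4-7 kHz, ~3 kHz of bandwidth
# a single 5 kHz channel (200 us period) cannot tell 100 us from -100 us:
ambiguous = narrowband_rate(-100.0, true_itd, 5000.0)
# pooling across channels suppresses the phase-equivalent response:
resolved = broadband_rate(-100.0, true_itd, channels)
peak = broadband_rate(true_itd, true_itd, channels)
```

Shrinking the channel list toward a single frequency restores the ambiguity, mirroring the finding that ambiguity decreases as stimulus bandwidth grows toward 2-3 kHz.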

  6. Central auditory conduction time in the rat.

    PubMed

    Shaw, N A

    1990-01-01

    Central conduction time is the time for an afferent volley to traverse the central pathways of a sensory system. In the present study, central auditory conduction time (CACT) was calculated for the rat, the first such formal measurement in any animal. Brainstem auditory evoked potentials (BAEPs) were recorded simultaneously with the primary response of the auditory cortex (P1). The latency of wave II of the BAEP, which arises in the cochlear nucleus, was subtracted from that of P1. This yielded a mean CACT of 6.6 ms. The results confirm a previous theoretical estimate that CACT in the rat is at least twice as long as central somatosensory conduction time. PMID:2311700

  7. Temporal asymmetries in auditory coding and perception reflect multi-layered nonlinearities.

    PubMed

    Deneux, Thomas; Kempf, Alexandre; Daret, Aurélie; Ponsot, Emmanuel; Bathellier, Brice

    2016-01-01

    Sound recognition relies not only on spectral cues, but also on temporal cues, as demonstrated by the profound impact of time reversals on perception of common sounds. To address the coding principles underlying such auditory asymmetries, we recorded a large sample of auditory cortex neurons using two-photon calcium imaging in awake mice, while playing sounds ramping up or down in intensity. We observed clear asymmetries in cortical population responses, including stronger cortical activity for up-ramping sounds, which matches perceptual saliency assessments in mice and previous measures in humans. Analysis of cortical activity patterns revealed that auditory cortex implements a map of spatially clustered neuronal ensembles, detecting specific combinations of spectral and intensity modulation features. Comparing different models, we show that cortical responses result from multi-layered nonlinearities, which, contrary to standard receptive field models of auditory cortex function, build divergent representations of sounds with similar spectral content, but different temporal structure. PMID:27580932

  8. Auditory Inspection Time, Intelligence and Pitch Discrimination.

    ERIC Educational Resources Information Center

    Deary, Ian J.; And Others

    1989-01-01

    An auditory inspection time (AIT) test, pitch discrimination tests, and verbal and non-verbal mental ability tests were administered to 59 undergraduates and 119 12-year-old school children. Results indicate that AIT correlations with intelligence are due to AIT being an index of information intake speed. (TJH)

  9. Codes for sound-source location in nontonotopic auditory cortex.

    PubMed

    Middlebrooks, J C; Xu, L; Eddins, A C; Green, D M

    1998-08-01

    We evaluated two hypothetical codes for sound-source location in the auditory cortex. The topographical code assumed that single neurons are selective for particular locations and that sound-source locations are coded by the cortical location of small populations of maximally activated neurons. The distributed code assumed that the responses of individual neurons can carry information about locations throughout 360 degrees of azimuth and that accurate sound localization derives from information that is distributed across large populations of such panoramic neurons. We recorded from single units in the anterior ectosylvian sulcus area (area AES) and in area A2 of alpha-chloralose-anesthetized cats. Results obtained in the two areas were essentially equivalent. Noise bursts were presented from loudspeakers spaced in 20 degrees intervals of azimuth throughout 360 degrees of the horizontal plane. Spike counts of the majority of units were modulated >50% by changes in sound-source azimuth. Nevertheless, sound-source locations that produced greater than half-maximal spike counts often spanned >180 degrees of azimuth. The spatial selectivity of units tended to broaden and, often, to shift in azimuth as sound pressure levels (SPLs) were increased to a moderate level. We sometimes saw systematic changes in spatial tuning along segments of electrode tracks as long as 1.5 mm but such progressions were not evident at higher sound levels. Moderate-level sounds presented anywhere in the contralateral hemifield produced greater than half-maximal activation of nearly all units. These results are not consistent with the hypothesis of a topographic code. We used an artificial-neural-network algorithm to recognize spike patterns and, thereby, infer the locations of sound sources. Network input consisted of spike density functions formed by averages of responses to eight stimulus repetitions. Information carried in the responses of single units permitted reasonable estimates of sound
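A distributed code of this kind can be sketched with a toy decoder. Everything below is hypothetical (8 cosine-tuned "panoramic" units, Gaussian noise, a nearest-centroid classifier standing in for the paper's artificial neural network); only the 20-degree azimuth spacing and the averaging over 8 repetitions echo the study's design.

```python
import math, random

def population_response(azimuth_deg, rng, noise_sd=0.05):
    """Hypothetical panoramic units: broadly tuned, but modulated by azimuth
    across the full 360 degrees (cosine tuning plus trial noise)."""
    prefs = [i * 45.0 for i in range(8)]
    return [0.5 + 0.5 * math.cos(math.radians(azimuth_deg - p)) + rng.gauss(0.0, noise_sd)
            for p in prefs]

rng = random.Random(3)
azimuths = list(range(0, 360, 20))  # 20-degree spacing, as in the study

# "training": average 8 repetitions per azimuth, mirroring the averaged spike
# density functions used as network input in the paper
centroids = {az: [sum(v) / 8.0 for v in zip(*[population_response(az, rng) for _ in range(8)])]
             for az in azimuths}

def classify(pattern):
    """Nearest-centroid readout over the 18 candidate azimuths."""
    return min(centroids, key=lambda az: sum((a - b) ** 2 for a, b in zip(pattern, centroids[az])))

trials = [(az, population_response(az, rng)) for az in azimuths for _ in range(20)]
accuracy = sum(classify(p) == az for az, p in trials) / len(trials)
```

Even though every unit here responds over far more than 180 degrees, the population pattern identifies azimuth reliably — the essence of a distributed rather than topographic code.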

  10. Improving Hearing Performance Using Natural Auditory Coding Strategies

    NASA Astrophysics Data System (ADS)

    Rattay, Frank

    Sound transfer from the human ear to the brain is based on three quite different neural coding principles, by which the continuous temporal auditory source signal is sent as a binary code, in excellent quality, via 30,000 nerve fibers per ear. Cochlear implants are well-accepted neural prostheses for people with sensory hearing loss, but current devices are inspired only by the tonotopic principle. According to this principle, every sound frequency is mapped to a specific place along the cochlea. By electrical stimulation, the frequency content of the acoustic signal is distributed via a few contacts of the prosthesis to the corresponding places, where it generates spikes. In contrast to the natural situation, the artificially evoked information content in the auditory nerve is quite poor, especially because the richness of the temporal fine structure of the neural pattern is replaced by a firing pattern strongly synchronized with an artificial cycle duration. Improvement in hearing performance is expected from incorporating more of the ingenious coding strategies developed during evolution.
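The tonotopic principle can be made concrete with the standard human Greenwood place-frequency function. The electrode assignment below is a toy illustration (the 12-electrode quantization is an assumption, not any manufacturer's actual frequency-allocation table).

```python
import math

def greenwood_place(f_hz, A=165.4, a=2.1, k=0.88):
    """Inverse Greenwood function with standard human constants: maps a
    characteristic frequency to normalized cochlear place, 0.0 at the apex
    (low frequencies) to 1.0 at the base (high frequencies)."""
    return math.log10(f_hz / A + k) / a

def electrode_for_frequency(f_hz, n_electrodes=12):
    """Toy tonotopic assignment: quantize cochlear place into an electrode index."""
    x = min(max(greenwood_place(f_hz), 0.0), 1.0)
    return min(int(x * n_electrodes), n_electrodes - 1)
```

Low frequencies map to apical electrodes and high frequencies to basal ones, so the assignment is monotonic in frequency — the place code the prosthesis reproduces, while the temporal fine structure discussed in the abstract is lost.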

  11. Diverse cortical codes for scene segmentation in primate auditory cortex

    PubMed Central

    Semple, Malcolm N.

    2015-01-01

    The temporal coherence of amplitude fluctuations is a critical cue for segmentation of complex auditory scenes. The auditory system must accurately demarcate the onsets and offsets of acoustic signals. We explored how and how well the timing of onsets and offsets of gated tones is encoded by auditory cortical neurons in awake rhesus macaques. Temporal features of this representation were isolated by presenting otherwise identical pure tones of differing durations. Cortical response patterns were diverse, including selective encoding of onset and offset transients, tonic firing, and sustained suppression. Spike train classification methods revealed that many neurons robustly encoded tone duration despite substantial diversity in the encoding process. Excellent discrimination performance was achieved by neurons whose responses were primarily phasic at tone offset and by those that responded robustly while the tone persisted. Although diverse cortical response patterns converged on effective duration discrimination, this diversity significantly constrained the utility of decoding models referenced to a spiking pattern averaged across all responses or averaged within the same response category. Using maximum likelihood-based decoding models, we demonstrated that the spike train recorded in a single trial could support direct estimation of stimulus onset and offset. Comparisons between different decoding models established the substantial contribution of bursts of activity at sound onset and offset to demarcating the temporal boundaries of gated tones. Our results indicate that relatively few neurons suffice to provide temporally precise estimates of such auditory “edges,” particularly for models that assume and exploit the heterogeneity of neural responses in awake cortex. PMID:25695655

  12. Brain-Generated Estradiol Drives Long-Term Optimization of Auditory Coding to Enhance the Discrimination of Communication Signals

    PubMed Central

    Tremere, Liisa A.; Pinaud, Raphael

    2011-01-01

    Auditory processing and hearing-related pathologies are heavily influenced by steroid hormones in a variety of vertebrate species including humans. The hormone estradiol has been recently shown to directly modulate the gain of central auditory neurons, in real-time, by controlling the strength of inhibitory transmission via a non-genomic mechanism. The functional relevance of this modulation, however, remains unknown. Here we show that estradiol generated in the songbird homologue of the mammalian auditory association cortex, rapidly enhances the effectiveness of the neural coding of complex, learned acoustic signals in awake zebra finches. Specifically, estradiol increases mutual information rates, coding efficiency and the neural discrimination of songs. These effects are mediated by estradiol’s modulation of both rate and temporal coding of auditory signals. Interference with the local action or production of estradiol in the auditory forebrain of freely-behaving animals disrupts behavioral responses to songs, but not to other behaviorally-relevant communication signals. Our findings directly show that estradiol is a key regulator of auditory function in the adult vertebrate brain. PMID:21368039
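Mutual information rates of the kind quantified in this study can be illustrated with a generic plug-in estimator over discretized (stimulus, response) pairs. This is a textbook sketch, not the paper's estimator, and the "song"/"pattern" labels below are placeholders.

```python
import math
from collections import Counter

def mutual_information_bits(pairs):
    """Plug-in estimate of I(S;R) in bits from observed (stimulus, response) pairs:
    sum over observed cells of p(s,r) * log2[ p(s,r) / (p(s) p(r)) ]."""
    n = len(pairs)
    p_sr = Counter(pairs)
    p_s = Counter(s for s, _ in pairs)
    p_r = Counter(r for _, r in pairs)
    return sum((c / n) * math.log2((c / n) / ((p_s[s] / n) * (p_r[r] / n)))
               for (s, r), c in p_sr.items())

# a response that perfectly discriminates two songs carries exactly 1 bit ...
perfect = [("songA", "pattern1"), ("songB", "pattern2")] * 50
# ... while a response independent of the stimulus carries none
independent = [("songA", "pattern1"), ("songA", "pattern2"),
               ("songB", "pattern1"), ("songB", "pattern2")] * 25
```

In this framing, the abstract's claim that estradiol "increases mutual information rates" means it moves neural responses away from the independent case and toward the discriminative one.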

  13. Subthreshold resonance properties contribute to the efficient coding of auditory spatial cues.

    PubMed

    Remme, Michiel W H; Donato, Roberta; Mikiel-Hunter, Jason; Ballestero, Jimena A; Foster, Simon; Rinzel, John; McAlpine, David

    2014-06-01

    Neurons in the medial superior olive (MSO) and lateral superior olive (LSO) of the auditory brainstem code for sound-source location in the horizontal plane, extracting interaural time differences (ITDs) from the stimulus fine structure and interaural level differences (ILDs) from the stimulus envelope. Here, we demonstrate a postsynaptic gradient in temporal processing properties across the presumed tonotopic axis; neurons in the MSO and the low-frequency limb of the LSO exhibit fast intrinsic electrical resonances and low input impedances, consistent with their processing of ITDs in the temporal fine structure. Neurons in the high-frequency limb of the LSO show low-pass electrical properties, indicating they are better suited to extracting information from the slower, modulated envelopes of sounds. Using a modeling approach, we assess ITD and ILD sensitivity of the neural filters to natural sounds, demonstrating that the transformation in temporal processing along the tonotopic axis contributes to efficient extraction of auditory spatial cues. PMID:24843153
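The contrast between resonant and low-pass membranes can be sketched with phenomenological impedance curves (all resistances, time constants, and the 200 Hz resonance below are hypothetical, not the measured MSO/LSO values).

```python
import math

def lowpass_impedance(f_hz, r_mohm=100.0, tau_ms=10.0):
    """RC-like membrane: impedance falls monotonically with frequency, suited to
    following slow envelopes (the ILD pathway, high-frequency LSO limb)."""
    return r_mohm / math.sqrt(1.0 + (2 * math.pi * f_hz * tau_ms * 1e-3) ** 2)

def resonant_impedance(f_hz, f0_hz=200.0, q=2.0, r_mohm=20.0):
    """Band-pass (resonant) membrane: impedance peaks near f0, suited to fast
    temporal fine structure (the ITD pathway, MSO and low-frequency LSO limb)."""
    x = f_hz / f0_hz
    return r_mohm / math.sqrt((1.0 - x * x) ** 2 + (x / q) ** 2)
```

The low-pass curve attenuates everything above its corner frequency, while the resonant curve amplifies inputs near its preferred frequency relative to both slower and faster inputs — the gradient in temporal processing the abstract describes along the tonotopic axis.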

  14. State-Dependent Population Coding in Primary Auditory Cortex

    PubMed Central

    Pachitariu, Marius; Lyamzin, Dmitry R.; Sahani, Maneesh

    2015-01-01

    Sensory function is mediated by interactions between external stimuli and intrinsic cortical dynamics that are evident in the modulation of evoked responses by cortical state. A number of recent studies across different modalities have demonstrated that the patterns of activity in neuronal populations can vary strongly between synchronized and desynchronized cortical states, i.e., in the presence or absence of intrinsically generated up and down states. Here we investigated the impact of cortical state on the population coding of tones and speech in the primary auditory cortex (A1) of gerbils, and found that responses were qualitatively different in synchronized and desynchronized cortical states. Activity in synchronized A1 was only weakly modulated by sensory input, and the spike patterns evoked by tones and speech were unreliable and constrained to a small range of patterns. In contrast, responses to tones and speech in desynchronized A1 were temporally precise and reliable across trials, and different speech tokens evoked diverse spike patterns with extremely weak noise correlations, allowing responses to be decoded with nearly perfect accuracy. Restricting the analysis of synchronized A1 to activity within up states yielded similar results, suggesting that up states are not equivalent to brief periods of desynchronization. These findings demonstrate that the representational capacity of A1 depends strongly on cortical state, and suggest that cortical state should be considered as an explicit variable in all studies of sensory processing. PMID:25653363
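The state dependence of response reliability can be caricatured in a toy model (the templates, gains, and noise levels below are hypothetical, not the gerbil A1 data): in the "synchronized" regime responses are weakly stimulus-driven and dominated by intrinsic fluctuations, while in the "desynchronized" regime the stimulus dominates, so trial-to-trial correlations are far higher.

```python
import math, random

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def trial(template, signal_gain, noise_sd, rng):
    """One trial's response pattern: scaled stimulus template plus intrinsic noise."""
    return [signal_gain * v + rng.gauss(0.0, noise_sd) for v in template]

rng = random.Random(11)
template = [math.sin(0.3 * i) for i in range(100)]  # hypothetical stimulus-driven pattern

# synchronized state: weak stimulus drive, large intrinsic fluctuations
sync = [trial(template, 0.2, 1.0, rng) for _ in range(20)]
# desynchronized state: strong stimulus drive, small intrinsic noise
desync = [trial(template, 1.0, 0.2, rng) for _ in range(20)]

def reliability(trials_):
    """Mean pairwise correlation between single-trial responses to the same stimulus."""
    rs = [pearson(trials_[i], trials_[j])
          for i in range(len(trials_)) for j in range(i + 1, len(trials_))]
    return sum(rs) / len(rs)

rel_sync, rel_desync = reliability(sync), reliability(desync)
```

High trial-to-trial reliability is what allows the near-perfect decoding the abstract reports for desynchronized A1.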

  15. Distinct Subthreshold Mechanisms Underlying Rate-Coding Principles in Primate Auditory Cortex.

    PubMed

    Gao, Lixia; Kostlan, Kevin; Wang, Yunyan; Wang, Xiaoqin

    2016-08-17

    A key computational principle for encoding time-varying signals in auditory and somatosensory cortices of monkeys is the opponent model of rate coding by two distinct populations of neurons. However, the subthreshold mechanisms that give rise to this computation have not been revealed. Because the rate-coding neurons are only observed in awake conditions, it is especially challenging to probe their underlying cellular mechanisms. Using a novel intracellular recording technique that we developed in awake marmosets, we found that the two types of rate-coding neurons in auditory cortex exhibited distinct subthreshold responses. While the positive-monotonic neurons (monotonically increasing firing rate with increasing stimulus repetition frequency) displayed sustained depolarization at high repetition frequency, the negative-monotonic neurons (opposite trend) instead exhibited hyperpolarization at high repetition frequency but sustained depolarization at low repetition frequency. The combination of excitatory and inhibitory subthreshold events allows the cortex to represent time-varying signals through these two opponent neuronal populations. PMID:27478016
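The opponent model of rate coding mentioned here can be sketched with two idealized monotonic populations whose rate difference is a linear readout of stimulus repetition frequency (all baseline rates and gains below are hypothetical).

```python
def positive_monotonic_rate(rep_hz, base=5.0, gain=0.5):
    """Firing rate rises monotonically with repetition frequency (spikes/s, hypothetical)."""
    return base + gain * rep_hz

def negative_monotonic_rate(rep_hz, base=45.0, gain=0.5):
    """Firing rate falls monotonically with repetition frequency (floored at zero)."""
    return max(0.0, base - gain * rep_hz)

def decode_repetition_hz(pos_rate, neg_rate, pos_base=5.0, neg_base=45.0, gain=0.5):
    """Opponent readout: the difference of the two population rates is linear in
    repetition frequency, so it can be inverted directly."""
    return (pos_rate - neg_rate - (pos_base - neg_base)) / (2.0 * gain)

# round-trip: encode a 20 Hz repetition rate in the two populations, then decode
rep = 20.0
decoded = decode_repetition_hz(positive_monotonic_rate(rep), negative_monotonic_rate(rep))
```

The subthreshold findings in the abstract supply the mechanism behind the two curves: sustained depolarization drives the rising limb, and hyperpolarization at high repetition frequencies produces the falling one.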

  16. Auditory Speech Perception Tests in Relation to the Coding Strategy in Cochlear Implant

    PubMed Central

    Bazon, Aline Cristine; Mantello, Erika Barioni; Gonçales, Alina Sanches; Isaac, Myriam de Lima; Hyppolito, Miguel Angelo; Reis, Ana Cláudia Mirândola Barbosa

    2015-01-01

    Introduction  The objective of the evaluation of auditory perception of cochlear implant users is to determine how the acoustic signal is processed, leading to the recognition and understanding of sound. Objective  To investigate differences in the process of auditory speech perception in individuals with postlingual hearing loss wearing a cochlear implant, using two different speech coding strategies, and to analyze speech perception and handicap perception in relation to the strategy used. Methods  This is a prospective, descriptive, cross-sectional cohort study. We selected ten cochlear implant users, who were characterized by hearing threshold and by the application of speech perception tests and the Hearing Handicap Inventory for Adults. Results  There was no significant difference when comparing the variables subject age, age at acquisition of hearing loss, etiology, time of hearing deprivation, time of cochlear implant use, and mean hearing threshold with the cochlear implant with respect to the change in speech coding strategy. There was no relationship between lack of handicap perception and improvement in speech perception in both speech coding strategies used. Conclusion  There was no significant difference between the strategies evaluated, and no relation was observed between them and the variables studied. PMID:27413409

  17. Predictive coding of visual-auditory and motor-auditory events: An electrophysiological study.

    PubMed

    Stekelenburg, Jeroen J; Vroomen, Jean

    2015-11-11

    The amplitude of auditory components of the event-related potential (ERP) is attenuated when sounds are self-generated compared to externally generated sounds. This effect has been ascribed to internal forward models predicting the sensory consequences of one's own motor actions. Auditory potentials are also attenuated when a sound is accompanied by a video of anticipatory visual motion that reliably predicts the sound. Here, we investigated whether the neural underpinnings of prediction of upcoming auditory stimuli are similar for motor-auditory (MA) and visual-auditory (VA) events using a stimulus omission paradigm. In the MA condition, a finger tap triggered the sound of a handclap whereas in the VA condition the same sound was accompanied by a video showing the handclap. In both conditions, the auditory stimulus was omitted in either 50% or 12% of the trials. These auditory omissions induced early and mid-latency ERP components (oN1 and oN2, presumably reflecting prediction and prediction error), and subsequent higher-order error evaluation processes. The oN1 and oN2 of MA and VA were alike in amplitude, topography, and neural sources, even though the prediction originates in different brain areas (motor versus visual cortex). This suggests that MA and VA predictions activate a sensory template of the sound in auditory cortex. This article is part of a Special Issue entitled SI: Prediction and Attention. PMID:25641042

  18. Norepinephrine Modulates Coding of Complex Vocalizations in the Songbird Auditory Cortex Independent of Local Neuroestrogen Synthesis.

    PubMed

    Ikeda, Maaya Z; Jeon, Sung David; Cowell, Rosemary A; Remage-Healey, Luke

    2015-06-24

    The catecholamine norepinephrine plays a significant role in auditory processing. Most studies to date have examined the effects of norepinephrine on the neuronal response to relatively simple stimuli, such as tones and calls. It is less clear how norepinephrine shapes the detection of complex syntactical sounds, as well as the coding properties of sensory neurons. Songbirds provide an opportunity to understand how auditory neurons encode complex, learned vocalizations, and the potential role of norepinephrine in modulating the neuronal computations for acoustic communication. Here, we infused norepinephrine into the zebra finch auditory cortex and performed extracellular recordings to study the modulation of song representations in single neurons. Consistent with its proposed role in enhancing signal detection, norepinephrine decreased spontaneous activity and firing during stimuli, yet it significantly enhanced the auditory signal-to-noise ratio. These effects were all mimicked by clonidine, an α-2 receptor agonist. Moreover, a pattern classifier analysis indicated that norepinephrine enhanced the ability of single neurons to accurately encode complex auditory stimuli. Because neuroestrogens are also known to enhance auditory processing in the songbird brain, we tested the hypothesis that norepinephrine actions depend on local estrogen synthesis. Neither norepinephrine nor adrenergic receptor antagonist infusion into the auditory cortex had detectable effects on local estradiol levels. Moreover, pretreatment with fadrozole, a specific aromatase inhibitor, did not block norepinephrine's neuromodulatory effects. Together, these findings indicate that norepinephrine enhances signal detection and information encoding for complex auditory stimuli by suppressing spontaneous "noise" activity and that these actions are independent of local neuroestrogen synthesis. PMID:26109659
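The seemingly paradoxical finding that norepinephrine lowers both spontaneous and stimulus-driven firing yet raises the signal-to-noise ratio follows from simple arithmetic whenever the spontaneous "noise" falls proportionally more. The firing rates below are hypothetical illustrations, not the recorded values.

```python
# Hypothetical firing rates (spikes/s): both quantities decrease, but the
# spontaneous rate decreases by a larger factor, so the driven/spontaneous
# ratio (a simple signal-to-noise measure) increases.
spont_before, driven_before = 10.0, 30.0
spont_after, driven_after = 2.0, 12.0

snr_before = driven_before / spont_before  # 3.0
snr_after = driven_after / spont_after     # 6.0
```

Here firing during stimuli drops by more than half, yet the SNR doubles — the signature of enhanced signal detection through suppressed background activity.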

  19. Differential Coding of Conspecific Vocalizations in the Ventral Auditory Cortical Stream

    PubMed Central

    Fukushima, Makoto; Saunders, Richard C.; Leopold, David A.; Mishkin, Mortimer; Averbeck, Bruno B.

    2014-01-01

    The mammalian auditory cortex integrates spectral and temporal acoustic features to support the perception of complex sounds, including conspecific vocalizations. Here we investigate coding of vocal stimuli in different subfields in macaque auditory cortex. We simultaneously measured auditory evoked potentials over a large swath of primary and higher order auditory cortex along the supratemporal plane in three animals chronically using high-density microelectrocorticographic arrays. To evaluate the capacity of neural activity to discriminate individual stimuli in these high-dimensional datasets, we applied a regularized multivariate classifier to evoked potentials to conspecific vocalizations. We found a gradual decrease in the level of overall classification performance along the caudal to rostral axis. Furthermore, the performance in the caudal sectors was similar across individual stimuli, whereas the performance in the rostral sectors significantly differed for different stimuli. Moreover, the information about vocalizations in the caudal sectors was similar to the information about synthetic stimuli that contained only the spectral or temporal features of the original vocalizations. In the rostral sectors, however, the classification for vocalizations was significantly better than that for the synthetic stimuli, suggesting that conjoined spectral and temporal features were necessary to explain differential coding of vocalizations in the rostral areas. We also found that this coding in the rostral sector was carried primarily in the theta frequency band of the response. These findings illustrate a progression in neural coding of conspecific vocalizations along the ventral auditory pathway. PMID:24672012

  1. Auditory-motor entrainment and phonological skills: precise auditory timing hypothesis (PATH).

    PubMed

    Tierney, Adam; Kraus, Nina

    2014-01-01

    Phonological skills are enhanced by music training, but the mechanisms enabling this cross-domain enhancement remain unknown. To explain this cross-domain transfer, we propose a precise auditory timing hypothesis (PATH) whereby entrainment practice is the core mechanism underlying enhanced phonological abilities in musicians. Both rhythmic synchronization and language skills such as consonant discrimination, detection of word and phrase boundaries, and conversational turn-taking rely on the perception of extremely fine-grained timing details in sound. Auditory-motor timing is an acoustic feature which meets all five of the pre-conditions necessary for cross-domain enhancement to occur (Patel, 2011, 2012, 2014). There is overlap between the neural networks that process timing in the context of both music and language. Entrainment to music demands more precise timing sensitivity than does language processing. Moreover, auditory-motor timing integration captures the emotion of the trainee, is repeatedly practiced, and demands focused attention. The PATH predicts that musical training emphasizing entrainment will be particularly effective in enhancing phonological skills. PMID:25505879

  3. Flexible information coding in human auditory cortex during perception, imagery, and STM of complex sounds.

    PubMed

    Linke, Annika C; Cusack, Rhodri

    2015-07-01

    Auditory cortex is the first cortical region of the human brain to process sounds. However, it has recently been shown that its neurons also fire in the absence of direct sensory input, during memory maintenance and imagery. This has commonly been taken to reflect neural coding of the same acoustic information as during the perception of sound. However, the results of the current study suggest that the type of information encoded in auditory cortex is highly flexible. During perception and memory maintenance, neural activity patterns are stimulus specific, reflecting individual sound properties. Auditory imagery of the same sounds evokes similar overall activity in auditory cortex as perception. However, during imagery abstracted, categorical information is encoded in the neural patterns, particularly when individuals are experiencing more vivid imagery. This highlights the necessity to move beyond traditional "brain mapping" inference in human neuroimaging, which assumes common regional activation implies similar mental representations. PMID:25603030

  4. The effect of real-time auditory feedback on learning new characters.

    PubMed

    Danna, Jérémy; Fontaine, Maureen; Paz-Villagrán, Vietminh; Gondre, Charles; Thoret, Etienne; Aramaki, Mitsuko; Kronland-Martinet, Richard; Ystad, Sølvi; Velay, Jean-Luc

    2015-10-01

    The present study investigated the effect of handwriting sonification on graphomotor learning. Thirty-two adults, distributed in two groups, learned four new characters with their non-dominant hand. The experimental design included a pre-test, a training session, and two post-tests, one just after the training session and another 24 h later. Two characters were learned with and two without real-time auditory feedback (FB). The first group first learned the two non-sonified characters and then the two sonified characters, whereas the reverse order was adopted for the second group. Results revealed that auditory FB improved the speed and fluency of handwriting movements but reduced, in the short term only, the spatial accuracy of the trace. Transforming kinematic variables into sounds allows the writer to perceive his/her movement in addition to the written trace, and this might facilitate handwriting learning. However, there were no differential effects of auditory FB, either long-term or short-term, for the subjects who first learned the characters with auditory FB. We hypothesize that the positive effect on the handwriting kinematics was transferred to characters learned without FB. This transfer effect of the auditory FB is discussed in light of the Theory of Event Coding. PMID:25533208
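
    The core idea, converting handwriting kinematics into real-time sound, can be sketched as a speed-to-pitch mapping. The function name, frequency range, and speed cap below are illustrative assumptions, not the synthesis model used in the study:

```python
import math

def sonify_velocity(vx, vy, f_min=220.0, f_max=880.0, v_max=0.5):
    """Map instantaneous pen speed (m/s) to a tone frequency (Hz).
    Fast, fluent strokes sound higher; slow, hesitant strokes sound lower."""
    speed = math.hypot(vx, vy)
    frac = min(speed / v_max, 1.0)  # clip at the assumed maximum speed
    return f_min + frac * (f_max - f_min)

slow = sonify_velocity(0.05, 0.0)  # hesitant stroke -> low tone
fast = sonify_velocity(0.3, 0.4)   # fluent stroke -> capped at f_max
```

    In a real-time system this frequency would drive an oscillator at the tablet's sampling rate, so the writer hears movement fluency as pitch in addition to seeing the trace.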

  5. Visual Timing of Structured Dance Movements Resembles Auditory Rhythm Perception.

    PubMed

    Su, Yi-Huang; Salazar-López, Elvira

    2016-01-01

    Temporal mechanisms for processing auditory musical rhythms are well established, in which a perceived beat is beneficial for timing purposes. It is yet unknown whether such beat-based timing would also underlie visual perception of temporally structured, ecological stimuli connected to music: dance. In this study, we investigated whether observers extracted a visual beat when watching dance movements to assist visual timing of these movements. Participants watched silent videos of dance sequences and reproduced the movement duration by mental recall. We found better visual timing for limb movements with regular patterns in the trajectories than without, similar to the beat advantage for auditory rhythms. When movements involved both the arms and the legs, the benefit of a visual beat relied only on the latter. The beat-based advantage persisted despite auditory interferences that were temporally incongruent with the visual beat, arguing for the visual nature of these mechanisms. Our results suggest that visual timing principles for dance parallel their auditory counterparts for music, which may be based on common sensorimotor coupling. These processes likely yield multimodal rhythm representations in the scenario of music and dance. PMID:27313900

  6. Visual Timing of Structured Dance Movements Resembles Auditory Rhythm Perception

    PubMed Central

    Su, Yi-Huang; Salazar-López, Elvira

    2016-01-01

    Temporal mechanisms for processing auditory musical rhythms are well established, in which a perceived beat is beneficial for timing purposes. It is yet unknown whether such beat-based timing would also underlie visual perception of temporally structured, ecological stimuli connected to music: dance. In this study, we investigated whether observers extracted a visual beat when watching dance movements to assist visual timing of these movements. Participants watched silent videos of dance sequences and reproduced the movement duration by mental recall. We found better visual timing for limb movements with regular patterns in the trajectories than without, similar to the beat advantage for auditory rhythms. When movements involved both the arms and the legs, the benefit of a visual beat relied only on the latter. The beat-based advantage persisted despite auditory interferences that were temporally incongruent with the visual beat, arguing for the visual nature of these mechanisms. Our results suggest that visual timing principles for dance parallel their auditory counterparts for music, which may be based on common sensorimotor coupling. These processes likely yield multimodal rhythm representations in the scenario of music and dance. PMID:27313900

  7. Odors Bias Time Perception in Visual and Auditory Modalities.

    PubMed

    Yue, Zhenzhu; Gao, Tianyu; Chen, Lihan; Wu, Jiashuang

    2016-01-01

    Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in visual and auditory modalities. Participants either saw a visual dot or heard a continuous tone for 1000 or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for both visual and auditory modalities. The analysis of the interaction effect showed that participants produced longer durations than the actual duration in the short interval condition, but they produced shorter durations in the long interval condition. The effect sizes were larger for the auditory modality than those for the visual modality. Moreover, by comparing performance across the initial and the final blocks of the experiment, we found that odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, and there was a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, and they were constrained by different sensory modalities, valence of the emotional events, and target durations. Biases in time perception could be accounted for by a framework of

  8. Odors Bias Time Perception in Visual and Auditory Modalities

    PubMed Central

    Yue, Zhenzhu; Gao, Tianyu; Chen, Lihan; Wu, Jiashuang

    2016-01-01

    Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in visual and auditory modalities. Participants either saw a visual dot or heard a continuous tone for 1000 or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for both visual and auditory modalities. The analysis of the interaction effect showed that participants produced longer durations than the actual duration in the short interval condition, but they produced shorter durations in the long interval condition. The effect sizes were larger for the auditory modality than those for the visual modality. Moreover, by comparing performance across the initial and the final blocks of the experiment, we found that odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, and there was a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, and they were constrained by different sensory modalities, valence of the emotional events, and target durations. Biases in time perception could be accounted for by a framework of

  9. Auditory training improves neural timing in the human brainstem.

    PubMed

    Russo, Nicole M; Nicol, Trent G; Zecker, Steven G; Hayes, Erin A; Kraus, Nina

    2005-01-01

    The auditory brainstem response reflects neural encoding of the acoustic characteristic of a speech syllable with remarkable precision. Some children with learning impairments demonstrate abnormalities in this preconscious measure of neural encoding especially in background noise. This study investigated whether auditory training targeted to remediate perceptually-based learning problems would alter the neural brainstem encoding of the acoustic sound structure of speech in such children. Nine subjects, clinically diagnosed with a language-based learning problem (e.g., dyslexia), worked with auditory perceptual training software. Prior to beginning and within three months after completing the training program, brainstem responses to the syllable /da/ were recorded in quiet and background noise. Subjects underwent additional auditory neurophysiological, perceptual, and cognitive testing. Ten control subjects, who did not participate in any remediation program, underwent the same battery of tests at time intervals equivalent to the trained subjects. Transient and sustained (frequency-following response) components of the brainstem response were evaluated. The primary pathway afferent volley -- neural events occurring earlier than 11 ms after stimulus onset -- did not demonstrate plasticity. However, quiet-to-noise inter-response correlations of the sustained response ( approximately 11-50 ms) increased significantly in the trained children, reflecting improved stimulus encoding precision, whereas control subjects did not exhibit this change. Thus, auditory training can alter the preconscious neural encoding of complex sounds by improving neural synchrony in the auditory brainstem. Additionally, several measures of brainstem response timing were related to changes in cortical physiology, as well as perceptual, academic, and cognitive measures from pre- to post-training. PMID:15474654
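
    The precision measure described above, the quiet-to-noise inter-response correlation over the sustained (~11-50 ms) window, reduces to a Pearson correlation between two response waveforms. A minimal sketch with a toy sinusoid standing in for the frequency-following response (function name and parameters are assumptions):

```python
import numpy as np

def quiet_noise_correlation(resp_quiet, resp_noise, fs, t_start=0.011, t_end=0.050):
    """Pearson correlation between the sustained (~11-50 ms) portions of
    brainstem responses recorded in quiet and in background noise.
    Higher values indicate more noise-robust stimulus encoding."""
    i0, i1 = int(t_start * fs), int(t_end * fs)
    return np.corrcoef(resp_quiet[i0:i1], resp_noise[i0:i1])[0, 1]

fs = 10000  # Hz
t = np.arange(0, 0.06, 1 / fs)
rng = np.random.default_rng(0)
quiet = np.sin(2 * np.pi * 100 * t)                # toy stand-in for an FFR
noisy = quiet + 0.3 * rng.standard_normal(t.size)  # same response, degraded
r = quiet_noise_correlation(quiet, noisy, fs)
```

    Training-related gains would appear as this correlation rising from pre- to post-test, as the noise response comes to resemble the quiet response more closely.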

  10. Time-sharing visual and auditory tracking tasks

    NASA Technical Reports Server (NTRS)

    Tsang, Pamela S.; Vidulich, Michael A.

    1987-01-01

    An experiment is described which examined the benefits of distributing the input demands of two tracking tasks as a function of task integrality. Visual and auditory compensatory tracking tasks were utilized. Results indicate that presenting the two tracking signals in two input modalities did not improve time-sharing efficiency. This was attributed to the difficulty insensitivity phenomenon.

  11. Speech Compensation for Time-Scale-Modified Auditory Feedback

    ERIC Educational Resources Information Center

    Ogane, Rintaro; Honda, Masaaki

    2014-01-01

    Purpose: The purpose of this study was to examine speech compensation in response to time-scale-modified auditory feedback during the transition of the semivowel for a target utterance of /ija/. Method: Each utterance session consisted of 10 control trials in the normal feedback condition followed by 20 perturbed trials in the modified auditory…

  12. The Time Course of Neural Changes Underlying Auditory Perceptual Learning

    PubMed Central

    Atienza, Mercedes; Cantero, Jose L.; Dominguez-Marin, Elena

    2002-01-01

    Improvement in perception takes place within the training session and from one session to the next. The present study aims at determining the time course of perceptual learning as revealed by changes in auditory event-related potentials (ERPs) reflecting preattentive processes. Subjects were trained to discriminate two complex auditory patterns in a single session. ERPs were recorded just before and after training, while subjects read a book and ignored stimulation. ERPs showed a negative wave called mismatch negativity (MMN)—which indexes automatic detection of a change in a homogeneous auditory sequence—just after subjects learned to consciously discriminate the two patterns. ERPs were recorded again 12, 24, 36, and 48 h later, just before testing performance on the discrimination task. Additional behavioral and neurophysiological changes were found several hours after the training session: an enhanced P2 at 24 h followed by shorter reaction times, and an enhanced MMN at 36 h. These results indicate that gains in performance on the discrimination of two complex auditory patterns are accompanied by different learning-dependent neurophysiological events evolving within different time frames, supporting the hypothesis that fast and slow neural changes underlie the acquisition of improved perception. PMID:12075002

  13. Burst Firing is a Neural Code in an Insect Auditory System

    PubMed Central

    Eyherabide, Hugo G.; Rokem, Ariel; Herz, Andreas V. M.; Samengo, Inés

    2008-01-01

    Various classes of neurons alternate between high-frequency discharges and silent intervals. This phenomenon is called burst firing. To analyze burst activity in an insect system, grasshopper auditory receptor neurons were recorded in vivo for several distinct stimulus types. The experimental data show that both burst probability and burst characteristics are strongly influenced by temporal modulations of the acoustic stimulus. The tendency to burst, hence, is not only determined by cell-intrinsic processes, but also by their interaction with the stimulus time course. We study this interaction quantitatively and observe that bursts containing a certain number of spikes occur shortly after stimulus deflections of specific intensity and duration. Our findings suggest a sparse neural code where information about the stimulus is represented by the number of spikes per burst, irrespective of the detailed interspike-interval structure within a burst. This compact representation cannot be interpreted as a firing-rate code. An information-theoretical analysis reveals that the number of spikes per burst reliably conveys information about the amplitude and duration of sound transients, whereas their time of occurrence is reflected by the burst onset time. The investigated neurons encode almost half of the total transmitted information in burst activity. PMID:18946533
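
    A spikes-per-burst code of the kind described can be made concrete with a simple interspike-interval grouping rule; the 6 ms criterion below is an assumed threshold, not the value used for the grasshopper data:

```python
def spikes_per_burst(spike_times, max_isi=0.006):
    """Group spikes (times in seconds) into bursts whenever consecutive
    interspike intervals fall at or below max_isi, returning a list of
    (burst_onset_time, n_spikes) pairs. Under the proposed code, the spike
    count conveys the transient's amplitude/duration and the onset time
    conveys when it occurred."""
    bursts = []
    onset, count = spike_times[0], 1
    for prev, cur in zip(spike_times, spike_times[1:]):
        if cur - prev <= max_isi:
            count += 1
        else:
            bursts.append((onset, count))
            onset, count = cur, 1
    bursts.append((onset, count))
    return bursts

# Two bursts: three spikes near 10 ms, two spikes near 50 ms.
bursts = spikes_per_burst([0.010, 0.012, 0.014, 0.050, 0.053])
```

    Note that the detailed intervals inside each burst are deliberately discarded, which is what distinguishes this compact representation from a firing-rate code.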

  14. Interactive coding of visual spatial frequency and auditory amplitude-modulation rate.

    PubMed

    Guzman-Martinez, Emmanuel; Ortega, Laura; Grabowecky, Marcia; Mossbridge, Julia; Suzuki, Satoru

    2012-03-01

    Spatial frequency is a fundamental visual feature coded in primary visual cortex, relevant for perceiving textures, objects, hierarchical structures, and scenes, as well as for directing attention and eye movements. Temporal amplitude-modulation (AM) rate is a fundamental auditory feature coded in primary auditory cortex, relevant for perceiving auditory objects, scenes, and speech. Spatial frequency and temporal AM rate are thus fundamental building blocks of visual and auditory perception. Recent results suggest that crossmodal interactions are commonplace across the primary sensory cortices and that some of the underlying neural associations develop through consistent multisensory experience such as audio-visually perceiving speech, gender, and objects. We demonstrate that people consistently and absolutely (rather than relatively) match specific auditory AM rates to specific visual spatial frequencies. We further demonstrate that this crossmodal mapping allows amplitude-modulated sounds to guide attention to and modulate awareness of specific visual spatial frequencies. Additional results show that the crossmodal association is approximately linear, based on physical spatial frequency, and generalizes to tactile pulses, suggesting that the association develops through multisensory experience during manual exploration of surfaces. PMID:22326023

  15. Development of Visuo-Auditory Integration in Space and Time

    PubMed Central

    Gori, Monica; Sandini, Giulio; Burr, David

    2012-01-01

    Adults integrate multisensory information optimally (e.g., Ernst and Banks, 2002) while children do not integrate multisensory visual-haptic cues until 8–10 years of age (e.g., Gori et al., 2008). Before that age strong unisensory dominance occurs for size and orientation visual-haptic judgments, possibly reflecting a process of cross-sensory calibration between modalities. It is widely recognized that audition dominates time perception, while vision dominates space perception. Within the framework of the cross-sensory calibration hypothesis, we investigate visual-auditory integration in both space and time with child-friendly spatial and temporal bisection tasks. Unimodal and bimodal (conflictual and not) audio-visual thresholds and PSEs were measured and compared with the Bayesian predictions. In the temporal domain, we found that both in children and adults, audition dominates the bimodal visuo-auditory task both in perceived time and precision thresholds. On the contrary, in the visual-auditory spatial task, children younger than 12 years of age show clear visual dominance (for PSEs), and bimodal thresholds higher than the Bayesian prediction. Only in the adult group did bimodal thresholds become optimal. In agreement with previous studies, our results suggest that adult-like visual-auditory behavior also develops late. We suggest that the visual dominance for space and the auditory dominance for time could reflect a cross-sensory comparison of vision in the spatial visuo-audio task and a cross-sensory comparison of audition in the temporal visuo-audio task. PMID:23060759

  16. Auditory Inspection Time and Intelligence: What Is the Direction of Causation?

    ERIC Educational Resources Information Center

    Deary, Ian J.

    1995-01-01

    Tested three competing structural equation models concerning auditory inspection time (AIT) and cognitive ability. Found that auditory inspection times near age 11 correlate most strongly with later high IQ. (ET)

  17. Sound coding in the auditory nerve of gerbils.

    PubMed

    Huet, Antoine; Batrel, Charlène; Tang, Yong; Desmadryl, Gilles; Wang, Jing; Puel, Jean-Luc; Bourien, Jérôme

    2016-08-01

    Gerbils possess a very specialized cochlea in which the low-frequency inner hair cells (IHCs) are contacted by auditory nerve fibers (ANFs) having a high spontaneous rate (SR), whereas high-frequency IHCs are innervated by ANFs with a greater SR-based diversity. This specificity makes this animal a unique model to investigate, in the same cochlea, the functional role of different pools of ANFs. The distribution of the characteristic frequencies of fibers shows a clear bimodal shape (with a first mode around 1.5 kHz and a second around 12 kHz) and a notch in the histogram near 3.5 kHz. Whereas the mean thresholds did not significantly differ in the two frequency regions, the shape of the rate-intensity functions does vary significantly with the fiber characteristic frequency. Above 3.5 kHz, the sound-driven rate is greater and the slope of the rate-intensity function is steeper. Interestingly, high-SR fibers show a very well synchronized onset response in quiet (small first-spike latency jitter) but a weak response under noisy conditions. The low-SR fibers exhibit the opposite behavior, with poor onset synchronization in quiet but a robust response in noise. Finally, the greater vulnerability of low-SR fibers to various injuries including noise- and age-related hearing loss is discussed with regard to patients with poor speech intelligibility in noisy environments. Together, these results emphasize the need to perform relevant clinical tests to probe the distribution of ANFs in humans, and to develop appropriate techniques of rehabilitation. This article is part of a Special Issue. PMID:27220483
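
    The rate-intensity functions contrasted above are often summarized with a sigmoidal descriptive model. A sketch under assumed parameters (the thresholds, slopes, and rates below are illustrative choices, not the gerbil fits):

```python
import math

def rate_intensity(level_db, spont_rate, threshold_db, max_driven, slope):
    """Sigmoidal rate-intensity function: firing rate (spikes/s) as a
    function of sound level (dB SPL). A common descriptive model."""
    driven = max_driven / (1.0 + math.exp(-slope * (level_db - threshold_db)))
    return spont_rate + driven

# High-SR fibers: low threshold, early saturation. Low-SR fibers: higher
# threshold, growth over a wider dynamic range. All parameters assumed.
levels = (0, 40, 80)
high_sr = [rate_intensity(L, spont_rate=60, threshold_db=20,
                          max_driven=150, slope=0.4) for L in levels]
low_sr = [rate_intensity(L, spont_rate=1, threshold_db=45,
                         max_driven=200, slope=0.15) for L in levels]
```

    The complementary division of labor follows from these shapes: high-SR fibers saturate at moderate levels (and in noise), while low-SR fibers keep coding level changes at high intensities.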

  18. Getting back on the beat: links between auditory-motor integration and precise auditory processing at fast time scales.

    PubMed

    Tierney, Adam; Kraus, Nina

    2016-03-01

    The auditory system is unique in its ability to precisely detect the timing of perceptual events and use this information to update motor plans, a skill that is crucial for language. However, the characteristics of the auditory system that enable this temporal precision are only beginning to be understood. Previous work has shown that participants who can tap consistently to a metronome have neural responses to sound with greater phase coherence from trial to trial. We hypothesized that this relationship is driven by a link between the updating of motor output by auditory feedback and neural precision. Moreover, we hypothesized that neural phase coherence at both fast time scales (reflecting subcortical processing) and slow time scales (reflecting cortical processing) would be linked to auditory-motor timing integration. To test these hypotheses, we asked participants to synchronize to a pacing stimulus, and then changed either the tempo or the timing of the stimulus to assess whether they could rapidly adapt. Participants who could rapidly and accurately resume synchronization had neural responses to sound with greater phase coherence. However, this precise timing was limited to the time scale of 10 ms (100 Hz) or faster; neural phase coherence at slower time scales was unrelated to performance on this task. Auditory-motor adaptation therefore specifically depends upon consistent auditory processing at fast, but not slow, time scales. PMID:26750313
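
    Trial-to-trial phase coherence of the kind measured here is commonly quantified as the length of the mean resultant vector of per-trial phases (1 = perfectly repeatable, 0 = random). A minimal sketch with made-up phase samples:

```python
import cmath
import math

def phase_coherence(phases):
    """Inter-trial phase coherence at one frequency: the magnitude of the
    average unit phasor across trials."""
    resultant = sum(cmath.exp(1j * p) for p in phases) / len(phases)
    return abs(resultant)

consistent = [0.0, 0.1, -0.1, 0.05]                    # clustered -> high coherence
scattered = [0.0, math.pi / 2, math.pi, -math.pi / 2]  # spread -> ~zero coherence
```

    In the study's terms, the per-trial phases would come from the neural response filtered at a given time scale, e.g. 100 Hz and above for the fast scale that predicted synchronization performance.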

  19. Studies in auditory timing: 1. Simple patterns.

    PubMed

    Hirsh, I J; Monahan, C B; Grant, K W; Singh, P G

    1990-03-01

    Listeners' accuracy in discriminating one temporal pattern from another was measured in three psychophysical experiments. When the standard pattern consisted of equally timed (isochronic) brief tones, whose interonset intervals (IOIs) were 50, 100, or 200 msec, the accuracy in detecting an asynchrony or deviation of one tone in the sequence was about as would be predicted from older research on the discrimination of single time intervals (6%-8% at an IOI of 200 msec, 11%-12% at an IOI of 100 msec, and almost 20% at an IOI of 50 msec). In a series of 6 or 10 tones, this accuracy was independent of position of delay for IOIs of 100 and 200 msec. At 50 msec, however, accuracy depended on position, being worst in initial positions and best in final positions. When one tone in a series of six has a frequency different from the others, there is some evidence (at IOI = 200 msec) that interval discrimination is relatively poorer for the tone with the different frequency. Similarly, even if all tones have the same frequency but one interval in the series is made twice as long as the others, temporal discrimination is poorer for the tones bordering the longer interval, although this result is dependent on tempo or IOI. Results with these temporally more complex patterns may be interpreted in part by applying the relative Weber ratio to the intervals before and after the delayed tone. Alternatively, these experiments may show the influence of accent on the temporal discrimination of individual tones. PMID:2326145
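
    The reported thresholds can be restated as Weber fractions, which makes the trade-off explicit: relative precision worsens at fast tempos even though the absolute threshold (in ms) shrinks. The percentages below are the approximate values from the abstract:

```python
# Approximate relative thresholds from the abstract:
# ~7% at a 200 ms IOI, ~11.5% at 100 ms, ~20% at 50 ms.
weber_fractions = {200: 0.07, 100: 0.115, 50: 0.20}

# Implied absolute thresholds (delta t, in ms) for each interonset interval:
abs_thresholds_ms = {ioi: w * ioi for ioi, w in weber_fractions.items()}
```

    So the relative threshold roughly triples from 200 ms to 50 ms IOIs (7% to 20%), while the absolute threshold actually improves (about 14 ms down to about 10 ms).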

  20. Studies in auditory timing: 2. Rhythm patterns.

    PubMed

    Monahan, C B; Hirsh, I J

    1990-03-01

    Listeners discriminated between 6-tone rhythmic patterns that differed only in the delay of the temporal position of one of the tones. On each trial, feedback was given and the subject's performance determined the amount of delay on the next trial. The 6 tones of the patterns marked off 5 intervals. In the first experiment, patterns comprised 3 "short" and 2 "long" intervals: 12121, 21121, and so forth, where the long (2) was twice the length of a short (1). In the second experiment, patterns were the complements of the patterns in the first experiment and comprised 2 shorts and 3 longs: 21212, 12212, and so forth. Each pattern was tested 45 times (5 positions of the delayed tone x 3 tempos x 3 replications). Consistent with previous work on simple interval discrimination, absolute discrimination (delta t in milliseconds) was poorer the longer the intervals (i.e., the slower the tempo). Measures of relative discrimination (delta t/t, where t was the short interval, the long interval, or the average of 2 intervals surrounding the delayed tone) were better the slower the tempo. Beyond these global results, large interactions of pattern with position of the delayed tone and tempo suggest that different models of performance are needed to explain behavior at the different tempos. A Weber's law model fit the slow-tempo data better than did a model based on positions of "natural accent" (Povel & Essens, 1985). PMID:2326146
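
    The trial-by-trial rule, where feedback and performance set the next delay, is the shape of an adaptive staircase. A generic 2-down/1-up sketch (an assumption; the paper's exact tracking rule may differ), shown converging on a deterministic stand-in listener who detects any delay above 12 ms:

```python
def run_staircase(respond, start_delay=40.0, step=4.0, n_trials=60):
    """2-down/1-up staircase: shrink the delay after two consecutive
    correct responses, grow it after each error. For a probabilistic
    listener this converges near the 70.7%-correct point."""
    delay, streak, history = start_delay, 0, []
    for _ in range(n_trials):
        history.append(delay)
        if respond(delay):        # True = delay detected (correct)
            streak += 1
            if streak == 2:
                delay = max(delay - step, step)
                streak = 0
        else:
            delay += step
            streak = 0
    return history

history = run_staircase(lambda d: d > 12.0)  # deterministic stand-in listener
```

    With this listener the track descends from 40 ms and then oscillates between 12 and 16 ms, bracketing the true 12 ms threshold.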

  1. Seasonal plasticity of precise spike timing in the avian auditory system.

    PubMed

    Caras, Melissa L; Sen, Kamal; Rubel, Edwin W; Brenowitz, Eliot A

    2015-02-25

    Vertebrate audition is a dynamic process, capable of exhibiting both short- and long-term adaptations to varying listening conditions. Precise spike timing has long been known to play an important role in auditory encoding, but its role in sensory plasticity remains largely unexplored. We addressed this issue in Gambel's white-crowned sparrow (Zonotrichia leucophrys gambelii), a songbird that shows pronounced seasonal fluctuations in circulating levels of sex-steroid hormones, which are known to be potent neuromodulators of auditory function. We recorded extracellular single-unit activity in the auditory forebrain of males and females under different breeding conditions and used a computational approach to explore two potential strategies for the neural discrimination of sound level: one based on spike counts and one based on spike timing reliability. We report that breeding condition has robust sex-specific effects on spike timing. Specifically, in females, breeding condition increases the proportion of cells that rely solely on spike timing information and increases the temporal resolution required for optimal intensity encoding. Furthermore, in a functionally distinct subset of cells that are particularly well suited for amplitude encoding, female breeding condition enhances spike timing-based discrimination accuracy. No effects of breeding condition were observed in males. Our results suggest that high-resolution temporal discharge patterns may provide a plastic neural substrate for sensory coding. PMID:25716843
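
    The two decoding strategies compared in the study, spike counts versus spike timing at some temporal resolution, can be contrasted with a toy example (the binning scheme and spike times are hypothetical):

```python
def bin_spikes(spike_times_ms, bin_ms, duration_ms=100.0):
    """Binarize a spike train at a given temporal resolution (ms)."""
    n_bins = int(duration_ms / bin_ms)
    pattern = [0] * n_bins
    for t in spike_times_ms:
        pattern[min(int(t / bin_ms), n_bins - 1)] = 1
    return tuple(pattern)

# Responses to two stimuli with IDENTICAL spike counts but DIFFERENT timing.
resp_a = [10.0, 30.0, 50.0]
resp_b = [20.0, 40.0, 60.0]

count_code_differs = len(resp_a) != len(resp_b)                                 # False
fine_timing_differs = bin_spikes(resp_a, 5.0) != bin_spikes(resp_b, 5.0)        # True
coarse_timing_differs = bin_spikes(resp_a, 100.0) != bin_spikes(resp_b, 100.0)  # False
```

    Only the timing code, read at sufficiently fine resolution, separates the two responses, which is why discrimination analyses of this kind sweep the temporal resolution to find the optimum.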

  2. Seasonal Plasticity of Precise Spike Timing in the Avian Auditory System

    PubMed Central

    Caras, Melissa L; Sen, Kamal; Rubel, Edwin W; Brenowitz, Eliot A.

    2015-01-01

    Vertebrate audition is a dynamic process, capable of exhibiting both short- and long-term adaptations to varying listening conditions. Precise spike timing has long been known to play an important role in auditory encoding, but its role in sensory plasticity remains largely unexplored. We addressed this issue in Gambel's white-crowned sparrow (Zonotrichia leucophrys gambelii), a songbird that shows pronounced seasonal fluctuations in circulating levels of sex-steroid hormones, which are known to be potent neuromodulators of auditory function. We recorded extracellular single-unit activity in the auditory forebrain of males and females under different breeding conditions and used a computational approach to explore two potential strategies for the neural discrimination of sound level: one based on spike counts and one based on spike timing reliability. We report that breeding condition has robust sex-specific effects on spike timing. Specifically, in females, breeding condition increases the proportion of cells that rely solely on spike timing information and increases the temporal resolution required for optimal intensity encoding. Furthermore, in a functionally distinct subset of cells that are particularly well suited for amplitude encoding, female breeding condition enhances spike timing-based discrimination accuracy. No effects of breeding condition were observed in males. Our results suggest that high-resolution temporal discharge patterns may provide a plastic neural substrate for sensory coding. PMID:25716843

  3. GOES satellite time code dissemination

    NASA Technical Reports Server (NTRS)

    Beehler, R. E.

    1983-01-01

    The GOES time code system, the performance achieved to date, and some potential improvements in the future are discussed. The disseminated time code originates from a triply redundant set of atomic standards, time code generators, and related equipment maintained by NBS at NOAA's Wallops Island, VA satellite control facility. It is relayed by two GOES satellites located at 75° W and 135° W longitude on a continuous basis to users within North and South America (with overlapping coverage) and well out into the Atlantic and Pacific ocean areas. Downlink frequencies are near 468 MHz. The signals from both satellites are monitored and controlled from the NBS labs at Boulder, CO, with additional monitoring input from geographically separated receivers in Washington, D.C. and Hawaii. Performance experience with the received time codes for periods ranging from several years to one day is discussed. Results are also presented for simultaneous, common-view reception by co-located receivers and by receivers separated by several thousand kilometers.

  4. A real-time auditory feedback system for retraining gait.

    PubMed

    Maulucci, Ruth A; Eckhouse, Richard H

    2011-01-01

    Stroke is the third leading cause of death in the United States and the principal cause of major long-term disability, incurring substantial distress as well as medical cost. Abnormal and inefficient gait patterns are widespread in survivors of stroke, yet gait is a major determinant of independent living. It is not surprising, therefore, that improvement of walking function is the most commonly stated priority of the survivors. Although many such individuals achieve the goal of walking, the caliber of their walking performance often limits endurance and quality of life. The ultimate goal of the research presented here is to use real-time auditory feedback to retrain gait in patients with chronic stroke. The strategy is to convert the motion of the foot into an auditory signal, and then use this auditory signal as feedback to inform the subject of the existence as well as the magnitude of error during walking. The initial stage of the project is described in this paper. The design and implementation of the new feedback method for lower limb training is explained. The question of whether the patient is physically capable of handling such training is explored. PMID:22255509

  5. Adaptation to shifted interaural time differences changes encoding of sound location in human auditory cortex.

    PubMed

    Trapeau, Régis; Schönwiesner, Marc

    2015-09-01

    The auditory system infers the location of sound sources from the processing of different acoustic cues. These cues change during development and when assistive hearing devices are worn. Previous studies have found behavioral recalibration to modified localization cues in human adults, but very little is known about the neural correlates and mechanisms of this plasticity. We equipped participants with digital devices, worn in the ear canal, that allowed us to delay the sound input to one ear and thus modify interaural time differences, a major cue for horizontal sound localization. Participants wore the digital earplugs continuously for nine days while engaged in day-to-day activities. Daily psychoacoustical testing showed rapid recalibration to the manipulation and confirmed that adults can adapt to shifted interaural time differences in their daily multisensory environment. High-resolution functional MRI scans performed before and after recalibration showed that recalibration was accompanied by changes in the hemispheric lateralization of auditory cortex activity. These changes corresponded to a shift in the spatial coding of sound direction comparable to the observed behavioral recalibration. Fitting the imaging results with a model of auditory spatial processing also revealed small shifts in voxel-wise spatial tuning within each hemisphere. PMID:26054873
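    Conceptually, the earplug manipulation amounts to delaying the signal at one ear. A minimal sketch (integer-sample delay only; the study's in-ear digital devices are not reproduced here):

```python
import numpy as np

def delay_one_ear(stereo, delay_ms, fs=44100, ear=1):
    """Shift the interaural time difference of a stereo signal by delaying
    one channel by a whole number of samples - a simplified stand-in for
    the in-ear delay devices described above."""
    n = int(round(delay_ms * 1e-3 * fs))
    out = stereo.copy()
    if n > 0:
        # Prepend zeros to the delayed channel, dropping its tail.
        out[:, ear] = np.concatenate([np.zeros(n), stereo[:-n, ear]])
    return out

rng = np.random.default_rng(0)
sig = rng.standard_normal((1000, 2))
shifted = delay_one_ear(sig, 0.5)   # 0.5 ms ~ 22 samples at 44.1 kHz
```

A real device would need fractional-delay filtering and continuous operation on live input, but the localization cue being shifted is the same.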

  6. Auditory Stimuli Coding by Postsynaptic Potential and Local Field Potential Features

    PubMed Central

    de Assis, Juliana M.; Santos, Mikaelle O.; de Assis, Francisco M.

    2016-01-01

    The relation between physical stimuli and neurophysiological responses, such as action potentials (spikes) and Local Field Potentials (LFP), has recently been investigated in order to explain how neurons encode auditory information. However, none of these experiments presented analyses with postsynaptic potentials (PSPs). In the present study, we estimated information values between auditory stimuli and the amplitudes/latencies of PSPs and LFPs in anesthetized rats in vivo. To obtain these values, a new method of information estimation was used. This method produced more accurate estimates than those obtained with the traditional binning method, a fact that was corroborated by simulated data. The traditional binning method could not reach this accuracy even when corrected by quadratic extrapolation. We found that the information obtained from LFP amplitude variation was significantly greater than the information obtained from PSP amplitude variation, consistent with the fact that the LFP reflects the action of many PSPs. The results also showed that the auditory cortex codes more information about stimulus frequency in the slow oscillations of groups of neurons than in those of individual neurons. PMID:27513950
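    The "traditional binning method" the abstract compares against is the plug-in histogram estimate of mutual information. A minimal sketch (the bin count and data are illustrative; the authors' improved estimator is not reproduced here):

```python
import numpy as np

def binned_mutual_information(x, y, bins=8):
    """Plug-in ('binning') estimate of the mutual information in bits
    between a stimulus variable x and a response variable y. This classic
    estimator is biased upward for small samples, which corrections such
    as quadratic extrapolation attempt to remove."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return float(np.sum(p_xy[nz] * np.log2(p_xy[nz] / (p_x @ p_y)[nz])))

x = np.linspace(0.0, 1.0, 10000)
mi_dependent = binned_mutual_information(x, x)   # fully dependent: log2(8) = 3 bits
rng = np.random.default_rng(0)
mi_independent = binned_mutual_information(rng.random(5000), rng.random(5000))
```

Note that `mi_independent` comes out slightly above zero even though the variables carry no shared information; that residual is exactly the small-sample bias the abstract refers to.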

  7. Auditory reafferences: the influence of real-time feedback on movement control

    PubMed Central

    Kennel, Christian; Streese, Lukas; Pizzera, Alexandra; Justen, Christoph; Hohmann, Tanja; Raab, Markus

    2015-01-01

    Auditory reafferences are real-time auditory products created by a person’s own movements. Whereas the interdependency of action and perception is generally well studied, the auditory feedback channel and the influence of perceptual processes during movement execution remain largely unconsidered. We argue that movements have a rhythmic character that is closely connected to sound, making it possible to manipulate auditory reafferences online to understand their role in motor control. We examined whether step sounds, occurring as a by-product of running, have an influence on the performance of a complex movement task. Twenty participants completed a hurdling task in three auditory feedback conditions: a control condition with normal auditory feedback, a white noise condition in which sound was masked, and a delayed auditory feedback condition. Overall time and kinematic data were collected. Results show that delayed auditory feedback led to a significantly slower overall time and changed kinematic parameters. Our findings complement previous investigations in a natural movement situation with non-artificial auditory cues. Our results support the existing theoretical understanding of action–perception coupling and hold potential for applied work, where naturally occurring movement sounds can be implemented in the motor learning processes. PMID:25688230

  8. The neural code for auditory space depends on sound frequency and head size in an optimal manner.

    PubMed

    Harper, Nicol S; Scott, Brian H; Semple, Malcolm N; McAlpine, David

    2014-01-01

    A major cue to the location of a sound source is the interaural time difference (ITD): the difference in sound arrival time at the two ears. The neural representation of this auditory cue is unresolved. The classic model of ITD coding, dominant for a half-century, posits that the distribution of best ITDs (the ITD evoking a neuron's maximal response) is unimodal and largely within the range of ITDs permitted by head size. This is often interpreted as a place code for source location. An alternative model, based on neurophysiology in small mammals, posits a bimodal distribution of best ITDs with exquisite sensitivity to ITDs generated by means of relative firing rates between the distributions. Recently, an optimal-coding model was proposed, unifying the disparate features of these two models under the framework of efficient coding by neural populations. The optimal-coding model predicts that distributions of best ITDs depend on head size and sound frequency: for high frequencies and large heads it resembles the classic model, for low frequencies and small head sizes it resembles the bimodal model. The optimal-coding model makes key, yet unobserved, predictions: for many species, including humans, both forms of neural representation are employed, depending on sound frequency. Furthermore, novel representations are predicted for intermediate frequencies. Here, we examine these predictions in neurophysiological data from five mammalian species: macaque, guinea pig, cat, gerbil and kangaroo rat. We present the first evidence supporting these untested predictions, and demonstrate that different representations appear to be employed at different sound frequencies in the same species. PMID:25372405
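    The head-size dependence at the core of the model comes from the fact that a larger head produces a wider range of ITDs. A common back-of-the-envelope estimate is the Woodworth spherical-head approximation; the radii below are rough illustrative values, not measurements from the study:

```python
import numpy as np

def max_itd_us(head_radius_cm, c=343.0):
    """Largest ITD a spherical head of the given radius can generate,
    via the Woodworth approximation ITD = (r/c) * (theta + sin(theta))
    evaluated at theta = 90 degrees (sound from the side)."""
    r = head_radius_cm / 100.0          # cm -> m
    theta = np.pi / 2
    return (r / c) * (theta + np.sin(theta)) * 1e6   # seconds -> microseconds

# Illustrative head radii (assumed values):
human = max_itd_us(8.75)    # roughly 650 us
gerbil = max_itd_us(1.5)    # roughly 110 us
```

The much smaller ITD range of a gerbil relative to a human, combined with the sound frequency, is what moves the optimal best-ITD distribution between the unimodal and bimodal regimes described in the abstract.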

  10. Impairment of Auditory-Motor Timing and Compensatory Reorganization after Ventral Premotor Cortex Stimulation

    PubMed Central

    Kornysheva, Katja; Schubotz, Ricarda I.

    2011-01-01

    Integrating auditory and motor information often requires precise timing, as in speech and music. In humans, the position of the ventral premotor cortex (PMv) in the dorsal auditory stream renders this area a node for auditory-motor integration. Yet, it remains unknown whether the PMv is critical for auditory-motor timing and which activity increases help to preserve task performance following its disruption. Sixteen healthy volunteers participated in two sessions with fMRI measured at baseline and following repetitive transcranial magnetic stimulation (rTMS) of either the left PMv or a control region. Subjects synchronized left or right finger tapping to sub-second beat rates of auditory rhythms in the experimental task, and produced self-paced tapping during spectrally matched auditory stimuli in the control task. Left PMv rTMS impaired auditory-motor synchronization accuracy in the first sub-block following stimulation (p<0.01, Bonferroni corrected), but spared motor timing and attention to task. Task-related activity increased in the homologue right PMv, but did not predict the behavioral effect of rTMS. In contrast, the anterior midline cerebellum showed the most pronounced activity increase in less impaired subjects. The present findings suggest a critical role of the left PMv in feed-forward computations enabling accurate auditory-motor timing, which can be compensated for by activity modulations in the cerebellum, but not in the homologue region contralateral to stimulation. PMID:21738657

  11. Fractionated Reaction Time Responses to Auditory and Electrocutaneous Stimuli.

    ERIC Educational Resources Information Center

    Beehler, Pamela J. Hoyes; Kamen, Gary

    1986-01-01

    An investigation was conducted to equate auditory and electrocutaneous stimuli. These equated stimuli were used in a second investigation examining neuromotor responses to stimuli of varying intensity. Results are provided. (Author/MT)

  12. A temporal predictive code for voice motor control: Evidence from ERP and behavioral responses to pitch-shifted auditory feedback.

    PubMed

    Behroozmand, Roozbeh; Sangtian, Stacey; Korzyukov, Oleg; Larson, Charles R

    2016-04-01

    The predictive coding model suggests that voice motor control is regulated by a process in which the mismatch (error) between feedforward predictions and sensory feedback is detected and used to correct vocal motor behavior. In this study, we investigated how predictions about the timing of pitch perturbations in voice auditory feedback would modulate ERP and behavioral responses during vocal production. We designed six counterbalanced blocks in which a +100 cents pitch-shift stimulus perturbed voice auditory feedback during vowel sound vocalizations. In three blocks, there was a fixed delay (500, 750 or 1000 ms) between voice and pitch-shift stimulus onset (predictable), whereas in the other three blocks, stimulus onset delay was randomized between 500, 750 and 1000 ms (unpredictable). We found that subjects produced compensatory (opposing) vocal responses that started 80 ms after the onset of the unpredictable stimuli. However, for predictable stimuli, subjects initiated vocal responses 20 ms before stimulus onset and followed the direction of the pitch shifts in voice feedback. Analysis of ERPs showed that the amplitudes of the N1 and P2 components were significantly reduced in response to predictable compared with unpredictable stimuli. These findings indicate that predictions about temporal features of sensory feedback can modulate vocal motor behavior. In the context of the predictive coding model, temporally predictable stimuli are learned and reinforced by the internal feedforward system, and, as indexed by the ERP suppression, the sensory feedback contribution to their processing is reduced. These findings provide new insights into the neural mechanisms of vocal production and motor control. PMID:26835556
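    The cents scale used for the perturbation is logarithmic in frequency: 100 cents is one equal-tempered semitone, 1200 cents is one octave. The conversion is a one-liner (the 220 Hz fundamental below is an illustrative value, not one from the study):

```python
def shift_by_cents(f_hz, cents):
    """Frequency after a pitch shift expressed in cents
    (100 cents = one semitone, 1200 cents = one octave)."""
    return f_hz * 2.0 ** (cents / 1200.0)

# A +100-cent perturbation applied to a 220 Hz voice fundamental:
shifted = shift_by_cents(220.0, 100)   # about 233.1 Hz
```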

  13. Opposite Distortions in Interval Timing Perception for Visual and Auditory Stimuli with Temporal Modulations

    PubMed Central

    Yuasa, Kenichi; Yotsumoto, Yuko

    2015-01-01

    When an object is presented visually and moves or flickers, the perception of its duration tends to be overestimated. Such an overestimation is called time dilation. Perceived time can also be distorted when a stimulus is presented aurally as an auditory flutter, but the mechanisms and their relationship to visual processing remain unclear. In the present study, we measured interval timing perception while modulating the temporal characteristics of visual and auditory stimuli, and investigated whether interval timing for visually and aurally presented objects shares a common mechanism. In these experiments, participants compared the durations of flickering or fluttering stimuli to standard stimuli, which were presented continuously. Perceived durations of auditory flutters were underestimated, while perceived durations of visual flickers were overestimated. When auditory flutters and visual flickers were presented simultaneously, these distortion effects cancelled each other out. When auditory flutters were presented with a constantly presented visual stimulus, the interval timing perception of the visual stimulus was affected by the auditory flutters. These results indicate that interval timing perception is governed by independent mechanisms for visual and auditory processing, and that there are some interactions between the two processing systems. PMID:26292285

  14. Bat auditory cortex – model for general mammalian auditory computation or special design solution for active time perception?

    PubMed

    Kössl, Manfred; Hechavarria, Julio; Voss, Cornelia; Schaefer, Markus; Vater, Marianne

    2015-03-01

    Audition in bats serves passive orientation, alerting functions and communication, as it does in other vertebrates. In addition, bats have evolved echolocation for orientation and prey detection and capture. This placed selective pressure on the auditory system with regard to echolocation-relevant temporal computation and frequency analysis. The present review attempts to evaluate in which respects the processing modules of bat auditory cortex (AC) are a model for typical mammalian AC function or are designed for echolocation-unique purposes. We conclude that, while cortical area arrangement and cortical frequency processing do not deviate greatly from those of other mammals, the echo-delay-sensitive dorsal cortex regions contain special designs for very powerful time perception. Different bat species have either a unique chronotopic cortex topography or a distributed salt-and-pepper representation of echo delay. The two designs seem to enable similar behavioural performance. PMID:25728173

  15. Time-dependent Neural Processing of Auditory Feedback during Voice Pitch Error Detection

    PubMed Central

    Behroozmand, Roozbeh; Liu, Hanjun; Larson, Charles R.

    2012-01-01

    The neural responses to sensory consequences of a self-produced motor act are suppressed compared with those in response to a similar but externally generated stimulus. Previous studies in the somatosensory and auditory systems have shown that the motor-induced suppression of the sensory mechanisms is sensitive to delays between the motor act and the onset of the stimulus. The present study investigated time-dependent neural processing of auditory feedback in response to self-produced vocalizations. ERPs were recorded in response to normal and pitch-shifted voice auditory feedback during active vocalization and passive listening to the playback of the same vocalizations. The pitch-shifted stimulus was delivered to the subjects’ auditory feedback after a randomly chosen time delay between the vocal onset and the stimulus presentation. Results showed that the neural responses to delayed feedback perturbations were significantly larger than those in response to the pitch-shifted stimulus occurring at vocal onset. Active vocalization was shown to enhance neural responsiveness to feedback alterations only for nonzero delays compared with passive listening to the playback. These findings indicated that the neural mechanisms of auditory feedback processing are sensitive to timing between the vocal motor commands and the incoming auditory feedback. Time-dependent neural processing of auditory feedback may be an important feature of the audio-vocal integration system that helps to improve the feedback-based monitoring and control of voice structure through vocal error detection and correction. PMID:20146608

  16. Predicted effects of sensorineural hearing loss on across-fiber envelope coding in the auditory nerve

    PubMed Central

    Swaminathan, Jayaganesh; Heinz, Michael G.

    2011-01-01

    Cross-channel envelope correlations are hypothesized to influence speech intelligibility, particularly in adverse conditions. Acoustic analyses suggest speech envelope correlations differ for syllabic and phonemic ranges of modulation frequency. The influence of cochlear filtering was examined here by predicting cross-channel envelope correlations in different speech modulation ranges for normal and impaired auditory-nerve (AN) responses. Neural cross-correlation coefficients quantified across-fiber envelope coding in syllabic (0–5 Hz), phonemic (5–64 Hz), and periodicity (64–300 Hz) modulation ranges. Spike trains were generated from a physiologically based AN model. Correlations were also computed using the model with selective hair-cell damage. Neural predictions revealed that envelope cross-correlation decreased with increased characteristic-frequency separation for all modulation ranges (with greater syllabic-envelope correlation than phonemic or periodicity). Syllabic envelope was highly correlated across many spectral channels, whereas phonemic and periodicity envelopes were correlated mainly between adjacent channels. Outer-hair-cell impairment increased the degree of cross-channel correlation for phonemic and periodicity ranges for speech in quiet and in noise, thereby reducing the number of independent neural information channels for envelope coding. In contrast, outer-hair-cell impairment was predicted to decrease cross-channel correlation for syllabic envelopes in noise, which may partially account for the reduced ability of hearing-impaired listeners to segregate speech in complex backgrounds. PMID:21682421
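    At the signal level, the across-channel comparison in the abstract amounts to restricting two channel envelopes to one modulation band and correlating them. The sketch below uses a brick-wall FFT filter on simulated envelopes; it illustrates the idea only, not the spike-train-based neural cross-correlation coefficients the study actually computes:

```python
import numpy as np

def band_envelope_correlation(env_a, env_b, fs, f_lo, f_hi):
    """Correlation between two channel envelopes restricted to one
    modulation band (e.g. syllabic 0-5 Hz or phonemic 5-64 Hz),
    using a simple FFT brick-wall bandpass filter."""
    def bandpass(x):
        X = np.fft.rfft(x - x.mean())
        f = np.fft.rfftfreq(len(x), 1.0 / fs)
        X[(f < f_lo) | (f > f_hi)] = 0.0          # zero out-of-band bins
        return np.fft.irfft(X, len(x))
    a, b = bandpass(env_a), bandpass(env_b)
    return float(np.corrcoef(a, b)[0, 1])

# Two simulated channels sharing a slow 2 Hz envelope plus independent noise:
fs = 1000
t = np.arange(2 * fs) / fs
rng = np.random.default_rng(0)
shared = np.sin(2 * np.pi * 2 * t)
env_a = shared + 0.3 * rng.standard_normal(t.size)
env_b = shared + 0.3 * rng.standard_normal(t.size)

c_syllabic = band_envelope_correlation(env_a, env_b, fs, 0.5, 5.0)   # high
c_fast = band_envelope_correlation(env_a, env_b, fs, 50.0, 200.0)    # near zero
```

This mirrors the abstract's pattern: a shared slow component yields high cross-channel correlation in the syllabic band while independent fast fluctuations remain uncorrelated.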

  17. Rate and synchronization measures of periodicity coding in cat primary auditory cortex.

    PubMed

    Eggermont, J J

    1991-11-01

    Periodicity coding was studied in primary auditory cortex of the ketamine-anesthetized cat by simultaneously recording with two electrodes from up to 6 neural units in response to one-second-long click trains presented once per 3 s. Trains with click rates of 1, 2, 4, 8, 16 and 32/s were used, and the responses of the single units were quantified by both rate measures (entrainment and rate modulation transfer function, rMTF) and synchronization measures (vector strength, VS, and temporal modulation transfer function, tMTF). The rate measures yielded low-pass functions of click rate, and the synchrony measures yielded band-pass functions. Limiting rates (-6 dB point of maximum response) ranged from 3 to 24 Hz, depending on the measure used. Best modulating frequencies ranged from 5 to 8 Hz, again depending on the synchrony measure used. The VS in particular was highly sensitive to the spontaneous firing rate, the duration of the post-click suppression, and the size of the rebound response after the suppression. These factors were chiefly responsible for the band-pass character of the VS-rate function, and the peak VS frequency was nearly identical to the inverse of the suppression period. It is concluded that using the VS, and to a lesser extent the tMTF, as the sole measure of periodicity coding is not recommended when there is strong suppression of spontaneous activity. The combination of entrainment and tMTF characterized periodicity coding unambiguously. PMID:1769910
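    Vector strength, the synchronization measure discussed above, treats each spike as a unit vector at its phase within the stimulus period and takes the length of the mean vector. A minimal implementation (the spike times below are made up for illustration):

```python
import numpy as np

def vector_strength(spike_times, period):
    """Vector strength (VS): 1 means perfect phase locking to the
    stimulus period, values near 0 mean no locking."""
    phases = 2 * np.pi * (np.asarray(spike_times, dtype=float) % period) / period
    # Length of the mean resultant vector of the spike phases.
    return float(np.hypot(np.cos(phases).mean(), np.sin(phases).mean()))

# Spikes perfectly locked to an 8/s click train (period 125 ms):
vs_locked = vector_strength([0.010, 0.135, 0.260, 0.385], 0.125)
# Uniformly random spike times give a VS near zero:
vs_random = vector_strength(np.random.default_rng(1).random(1000), 0.125)
```

The abstract's caveat follows directly from this definition: if spontaneous activity is suppressed after each click, the surviving spikes are necessarily clustered in phase, inflating VS regardless of genuine stimulus locking.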

  18. Auditory Spatial Coding Flexibly Recruits Anterior, but Not Posterior, Visuotopic Parietal Cortex.

    PubMed

    Michalka, Samantha W; Rosen, Maya L; Kong, Lingqiang; Shinn-Cunningham, Barbara G; Somers, David C

    2016-03-01

    Audition and vision both convey spatial information about the environment, but much less is known about mechanisms of auditory spatial cognition than visual spatial cognition. Human cortex contains >20 visuospatial map representations but no reported auditory spatial maps. The intraparietal sulcus (IPS) contains several of these visuospatial maps, which support visuospatial attention and short-term memory (STM). Neuroimaging studies also demonstrate that parietal cortex is activated during auditory spatial attention and working memory tasks, but prior work has not demonstrated that auditory activation occurs within visual spatial maps in parietal cortex. Here, we report both cognitive and anatomical distinctions in the auditory recruitment of visuotopically mapped regions within the superior parietal lobule. An auditory spatial STM task recruited anterior visuotopic maps (IPS2-4, SPL1), but an auditory temporal STM task with equivalent stimuli failed to drive these regions significantly. Behavioral and eye-tracking measures rule out task difficulty and eye movement explanations. Neither auditory task recruited posterior regions IPS0 or IPS1, which appear to be exclusively visual. These findings support the hypothesis of multisensory spatial processing in the anterior, but not posterior, superior parietal lobule and demonstrate that recruitment of these maps depends on auditory task demands. PMID:26656996

  20. Impaired timing adjustments in response to time-varying auditory perturbation during connected speech production in persons who stutter.

    PubMed

    Cai, Shanqing; Beal, Deryk S; Ghosh, Satrajit S; Guenther, Frank H; Perkell, Joseph S

    2014-02-01

    Auditory feedback (AF), the speech signal received by a speaker's own auditory system, contributes to the online control of speech movements. Recent studies based on AF perturbation provided evidence for abnormalities in the integration of auditory error with ongoing articulation and phonation in persons who stutter (PWS), but stopped short of examining connected speech. This is a crucial limitation considering the importance of sequencing and timing in stuttering. In the current study, we imposed time-varying perturbations on AF while PWS and fluent participants uttered a multisyllabic sentence. Two distinct types of perturbations were used to separately probe the control of the spatial and temporal parameters of articulation. While PWS exhibited only subtle anomalies in the AF-based spatial control, their AF-based fine-tuning of articulatory timing was substantially weaker than normal, especially in early parts of the responses, indicating slowness in the auditory-motor integration for temporal control. PMID:24486601

  1. Speech enhancement for listeners with hearing loss based on a model for vowel coding in the auditory midbrain.

    PubMed

    Rao, Akshay; Carney, Laurel H

    2014-07-01

    A novel signal-processing strategy is proposed to enhance speech for listeners with hearing loss. The strategy focuses on improving vowel perception based on a recent hypothesis for vowel coding in the auditory system. Traditionally, studies of neural vowel encoding have focused on the representation of formants (peaks in vowel spectra) in the discharge patterns of the population of auditory-nerve (AN) fibers. A recent hypothesis focuses instead on vowel encoding in the auditory midbrain, and suggests a robust representation of formants. AN fiber discharge rates are characterized by pitch-related fluctuations having frequency-dependent modulation depths. Fibers tuned to frequencies near formants exhibit weaker pitch-related fluctuations than those tuned to frequencies between formants. Many auditory midbrain neurons show tuning to amplitude modulation frequency in addition to audio frequency. According to the auditory midbrain vowel encoding hypothesis, the response map of a population of midbrain neurons tuned to modulations near voice pitch exhibits minima near formant frequencies, due to the lack of strong pitch-related fluctuations at their inputs. This representation is robust over the range of noise conditions in which speech intelligibility is also robust for normal-hearing listeners. Based on this hypothesis, a vowel-enhancement strategy has been proposed that aims to restore vowel encoding at the level of the auditory midbrain. The signal processing consists of pitch tracking, formant tracking, and formant enhancement. The novel formant-tracking method proposed here estimates the first two formant frequencies by modeling characteristics of the auditory periphery, such as saturated discharge rates of AN fibers and modulation tuning properties of auditory midbrain neurons. The formant enhancement stage aims to restore the representation of formants at the level of the midbrain by increasing the dominance of a single harmonic near each formant and saturating
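    The first stage of the processing chain named in the abstract is pitch tracking. A generic autocorrelation pitch tracker serves as a stand-in sketch here; the authors' physiologically based tracker is not reproduced, and the signal is synthetic:

```python
import numpy as np

def autocorr_pitch(x, fs, f_min=80.0, f_max=300.0):
    """Estimate voice pitch by picking the strongest autocorrelation lag
    within the plausible pitch range. A generic stand-in for the
    pitch-tracking stage, not the method used in the study."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # non-negative lags
    lo, hi = int(fs / f_max), int(fs / f_min)
    lag = lo + int(np.argmax(ac[lo:hi + 1]))
    return fs / lag

# A synthetic voiced segment with a 120 Hz fundamental:
fs = 8000
t = np.arange(int(0.1 * fs)) / fs
voiced = np.sin(2 * np.pi * 120 * t) + 0.5 * np.sin(2 * np.pi * 240 * t)
pitch = autocorr_pitch(voiced, fs)   # close to 120 Hz
```

The tracked pitch would then feed the formant-enhancement stage, which boosts a single harmonic near each tracked formant as described above.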

  3. Using Spatial Manipulation to Examine Interactions between Visual and Auditory Encoding of Pitch and Time

    PubMed Central

    McLachlan, Neil M.; Greco, Loretta J.; Toner, Emily C.; Wilson, Sarah J.

    2010-01-01

    Music notations use both symbolic and spatial representation systems. Novice musicians do not have the training to associate symbolic information with musical identities, such as chords or rhythmic and melodic patterns. They provide an opportunity to explore the mechanisms underpinning multimodal learning when spatial encoding strategies of feature dimensions might be expected to dominate. In this study, we applied a range of transformations (such as time reversal) to short melodies and rhythms and asked novice musicians to identify them with or without the aid of notation. Performance using a purely spatial (graphic) notation was contrasted with the more symbolic, traditional western notation over a series of weekly sessions. The results showed learning effects for both notation types, but performance improved more for graphic notation. This points to greater compatibility of auditory and visual neural codes for novice musicians when using spatial notation, suggesting that pitch and time may be spatially encoded in multimodal associative memory. The findings also point to new strategies for training novice musicians. PMID:21833287

  5. Plasticity in the neural coding of auditory space in the mammalian brain

    NASA Astrophysics Data System (ADS)

    King, Andrew J.; Parsons, Carl H.; Moore, David R.

    2000-10-01

    Sound localization relies on the neural processing of monaural and binaural spatial cues that arise from the way sounds interact with the head and external ears. Neurophysiological studies of animals raised with abnormal sensory inputs show that the map of auditory space in the superior colliculus is shaped during development by both auditory and visual experience. An example of this plasticity is provided by monaural occlusion during infancy, which leads to compensatory changes in auditory spatial tuning that tend to preserve the alignment between the neural representations of visual and auditory space. Adaptive changes also take place in sound localization behavior, as demonstrated by the fact that ferrets raised and tested with one ear plugged learn to localize as accurately as control animals. In both cases, these adjustments may involve greater use of monaural spectral cues provided by the other ear. Although plasticity in the auditory space map seems to be restricted to development, adult ferrets show some recovery of sound localization behavior after long-term monaural occlusion. The capacity for behavioral adaptation is, however, task dependent, because auditory spatial acuity and binaural unmasking (a measure of the spatial contribution to the "cocktail party effect") are permanently impaired by chronically plugging one ear, both in infancy and, especially, in adulthood. Experience-induced plasticity allows the neural circuitry underlying sound localization to be customized to individual characteristics, such as the size and shape of the head and ears, and to compensate for natural conductive hearing losses, including those associated with middle ear disease in infancy.

  6. Temporal envelope of time-compressed speech represented in the human auditory cortex

    PubMed Central

    Nourski, Kirill V.; Reale, Richard A.; Oya, Hiroyuki; Kawasaki, Hiroto; Kovach, Christopher K.; Chen, Haiming; Howard, Matthew A.; Brugge, John F.

    2010-01-01

    Speech comprehension relies on temporal cues contained in the speech envelope, and the auditory cortex has been implicated as playing a critical role in encoding this temporal information. We investigated auditory cortical responses to speech stimuli in subjects undergoing invasive electrophysiological monitoring for pharmacologically refractory epilepsy. Recordings were made from multi-contact electrodes implanted in Heschl’s gyrus (HG). Speech sentences, time-compressed from 0.75 to 0.20 of natural speaking rate, elicited average evoked potentials (AEPs) and increases in event-related band power (ERBP) of cortical high frequency (70–250 Hz) activity. Cortex of posteromedial HG, the presumed core of human auditory cortex, represented the envelope of speech stimuli in the AEP and ERBP. Envelope-following in ERBP, but not in AEP, was evident in both language dominant and non-dominant hemispheres for relatively high degrees of compression where speech was not comprehensible. Compared to posteromedial HG, responses from anterolateral HG — an auditory belt field — exhibited longer latencies, lower amplitudes and little or no time locking to the speech envelope. The ability of the core auditory cortex to follow the temporal speech envelope over a wide range of speaking rates leads us to conclude that such capacity in itself is not a limiting factor for speech comprehension. PMID:20007480

  7. Brain Correlates of Early Auditory Processing Are Attenuated by Expectations for Time and Pitch

    ERIC Educational Resources Information Center

    Lange, Kathrin

    2009-01-01

    The present study investigated how auditory processing is modulated by expectations for time and pitch by analyzing reaction times and event-related potentials (ERPs). In two experiments, tone sequences were presented to the participants, who had to discriminate whether the last tone of the sequence contained a short gap or was continuous…

  8. Multiplicative auditory spatial receptive fields created by a hierarchy of population codes.

    PubMed

    Fischer, Brian J; Anderson, Charles H; Peña, José Luis

    2009-01-01

    A multiplicative combination of tuning to interaural time difference (ITD) and interaural level difference (ILD) contributes to the generation of spatially selective auditory neurons in the owl's midbrain. Previous analyses of multiplicative responses in the owl have not taken into consideration the frequency-dependence of ITD and ILD cues that occur under natural listening conditions. Here, we present a model for the responses of ITD- and ILD-sensitive neurons in the barn owl's inferior colliculus which satisfies constraints raised by experimental data on frequency convergence, multiplicative interaction of ITD and ILD, and response properties of afferent neurons. We propose that multiplication between ITD- and ILD-dependent signals occurs only within frequency channels and that frequency integration occurs using a linear-threshold mechanism. The model reproduces the experimentally observed nonlinear responses to ITD and ILD in the inferior colliculus, with greater accuracy than previous models. We show that linear-threshold frequency integration allows the system to represent multiple sound sources with natural sound localization cues, whereas multiplicative frequency integration does not. Nonlinear responses in the owl's inferior colliculus can thus be generated using a combination of cellular and network mechanisms, showing that multiple elements of previous theories can be combined in a single system. PMID:19956693
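    The core computation the abstract describes — multiplication of ITD- and ILD-dependent signals within each frequency channel, followed by linear-threshold integration across channels — can be caricatured in a few lines. This is an illustrative sketch, not the authors' model: the tuning values, channel count, and threshold below are all hypothetical.

    ```python
    import numpy as np

    def spatial_response(itd_tuning, ild_tuning, threshold=0.0):
        """Toy within-channel multiplication model: an ITD-dependent signal is
        multiplied by an ILD-dependent signal in each frequency channel, and the
        channels are then combined by a linear-threshold (rectified sum) stage.
        Arrays are per-channel tuning values, shape (n_channels,)."""
        per_channel = itd_tuning * ild_tuning      # multiplication within channels
        summed = np.sum(per_channel)               # linear integration across frequency
        return max(summed - threshold, 0.0)        # output rectification

    # Hypothetical tuning of three frequency channels to one sound location
    itd = np.array([0.9, 0.7, 0.8])
    ild = np.array([0.8, 0.9, 0.6])
    print(spatial_response(itd, ild, threshold=0.5))  # ≈ 1.33
    ```

    Because the multiplication happens before cross-frequency summation, two concurrent sources activating different channels add linearly at the integration stage, which is the property the paper argues supports representing multiple sound sources.
    
    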

  9. The shape of ears to come: dynamic coding of auditory space.

    PubMed

    King, A J.; Schnupp, J W.H.; Doubell, T P.

    2001-06-01

    In order to pinpoint the location of a sound source, we make use of a variety of spatial cues that arise from the direction-dependent manner in which sounds interact with the head, torso and external ears. Accurate sound localization relies on the neural discrimination of tiny differences in the values of these cues and requires that the brain circuits involved be calibrated to the cues experienced by each individual. There is growing evidence that the capacity for recalibrating auditory localization continues well into adult life. Many details of how the brain represents auditory space and of how those representations are shaped by learning and experience remain elusive. However, it is becoming increasingly clear that the task of processing auditory spatial information is distributed over different regions of the brain, some working hierarchically, others independently and in parallel, and each apparently using different strategies for encoding sound source location. PMID:11390297

  10. Frequency tuning and intensity coding of sound in the auditory periphery of the lake sturgeon, Acipenser fulvescens

    PubMed Central

    Meyer, Michaela; Fay, Richard R.; Popper, Arthur N.

    2010-01-01

    Acipenser fulvescens, the lake sturgeon, belongs to one of the few extant non-teleost ray-finned (bony) fishes. The sturgeons (family Acipenseridae) have a phylogenetic history that dates back about 250 million years. The study reported here is the first investigation of peripheral coding strategies for spectral analysis in the auditory system in a non-teleost bony fish. We used a shaker system to simulate the particle motion component of sound during electrophysiological recordings of isolated single units from the eighth nerve innervating the saccule and lagena. Background activity and response characteristics of saccular and lagenar afferents (such as thresholds, response–level functions and temporal firing) resembled the ones found in teleosts. The distribution of best frequencies also resembled data in teleosts (except for Carassius auratus, goldfish) tested with the same stimulation method. The saccule and lagena in A. fulvescens contain otoconia, in contrast to the solid otoliths found in teleosts, however, this difference in otolith structure did not appear to affect threshold, frequency tuning, intensity- or temporal responses of auditory afferents. In general, the physiological characteristics common to A. fulvescens, teleosts and land vertebrates reflect important functions of the auditory system that may have been conserved throughout the evolution of vertebrates. PMID:20400642

  11. Developing and Selecting Auditory Warnings for a Real-Time Behavioral Intervention

    PubMed Central

    Bellettiere, John; Hughes, Suzanne C.; Liles, Sandy; Boman-Davis, Marie; Klepeis, Neil; Blumberg, Elaine; Mills, Jeff; Berardi, Vincent; Obayashi, Saori; Allen, T. Tracy; Hovell, Melbourne F.

    2015-01-01

    Real-time sensing and computing technologies are increasingly used in the delivery of real-time health behavior interventions. Auditory signals play a critical role in many of these interventions, impacting not only behavioral response but also treatment adherence and participant retention. Yet, few behavioral interventions that employ auditory feedback report the characteristics of sounds used and even fewer design signals specifically for their intervention. This paper describes a four-step process used in developing and selecting auditory warnings for a behavioral trial designed to reduce indoor secondhand smoke exposure. In step one, relevant information was gathered from ergonomic and behavioral science literature to assist a panel of research assistants in developing criteria for intervention-specific auditory feedback. In step two, multiple sounds were identified through internet searches and modified in accordance with the developed criteria, and two sounds were selected that best met those criteria. In step three, a survey was conducted among 64 persons from the primary sampling frame of the larger behavioral trial to compare the relative aversiveness of sounds, determine respondents' reported behavioral reactions to those signals, and assess participant's preference between sounds. In the final step, survey results were used to select the appropriate sound for auditory warnings. Ultimately, a single-tone pulse, 500 milliseconds (ms) in length that repeats every 270 ms for 3 cycles was chosen for the behavioral trial. The methods described herein represent one example of steps that can be followed to develop and select auditory feedback tailored for a given behavioral intervention. PMID:25745633
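    The final signal specification in the abstract (a 500 ms single-tone pulse repeated for 3 cycles with a 270 ms inter-pulse interval) is concrete enough to synthesize. The sketch below assumes a 1 kHz tone and a 44.1 kHz sampling rate, neither of which is stated in the abstract.

    ```python
    import numpy as np

    SAMPLE_RATE = 44100  # Hz, assumed

    def warning_signal(tone_hz=1000.0, pulse_ms=500, gap_ms=270, cycles=3):
        """Synthesize the warning pattern described in the abstract: a single-tone
        pulse of 500 ms, repeated for 3 cycles with a 270 ms inter-pulse gap.
        The 1 kHz tone frequency is an assumption, not from the paper."""
        t = np.arange(int(SAMPLE_RATE * pulse_ms / 1000)) / SAMPLE_RATE
        pulse = np.sin(2 * np.pi * tone_hz * t)
        gap = np.zeros(int(SAMPLE_RATE * gap_ms / 1000))
        cycle = np.concatenate([pulse, gap])
        return np.concatenate([cycle] * cycles)

    signal = warning_signal()
    print(len(signal) / SAMPLE_RATE)  # total duration: (0.5 + 0.27) * 3 = 2.31 s
    ```

    The resulting array can be written to a WAV file or fed directly to an audio device for playback during the intervention.
    
    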

  12. Auditory Imagery Shapes Movement Timing and Kinematics: Evidence from a Musical Task

    ERIC Educational Resources Information Center

    Keller, Peter E.; Dalla Bella, Simone; Koch, Iring

    2010-01-01

    The role of anticipatory auditory imagery in music-like sequential action was investigated by examining timing accuracy and kinematics using a motion capture system. Musicians responded to metronomic pacing signals by producing three unpaced taps on three vertically aligned keys at the given tempo. Taps triggered tones in two out of three blocked…

  13. Reaction Time and Accuracy in Individuals with Aphasia during Auditory Vigilance Tasks

    ERIC Educational Resources Information Center

    Laures, Jacqueline S.

    2005-01-01

    Research indicates that attentional deficits exist in aphasic individuals. However, relatively little is known about auditory vigilance performance in individuals with aphasia. The current study explores reaction time (RT) and accuracy in 10 aphasic participants and 10 nonbrain-damaged controls during linguistic and nonlinguistic auditory…

  14. Auditory and motor contributions to the timing of melodies under cognitive load.

    PubMed

    Maes, Pieter-Jan; Giacofci, Madison; Leman, Marc

    2015-10-01

    Current theoretical models and empirical research suggest that sensorimotor control and feedback processes may guide time perception and production. In the current study, we investigated the role of motor control and auditory feedback in an interval-production task performed under heightened cognitive load. We hypothesized that general associative learning mechanisms enable the calibration of time against patterns of dynamic change in motor control processes and auditory feedback information. In Experiment 1, we applied a dual-task interference paradigm consisting of a finger-tapping (continuation) task in combination with a working memory task. Participants (nonmusicians) had to either perform or avoid arm movements between successive key presses (continuous vs. discrete). Auditory feedback from a key press (a piano tone) filled either the complete duration of the target interval or only a small part (long vs. short). Results suggested that both continuous movement control and long piano feedback tones contributed to regular timing production. In Experiment 2, we gradually adjusted the duration of the long auditory feedback tones throughout the duration of a trial. The results showed that a gradual shortening of tones throughout time increased the rate at which participants performed tone onsets. Overall, our findings suggest that the human perceptual-motor system may be important in guiding temporal behavior under cognitive load. PMID:26098119

  15. Time-Varying Vocal Folds Vibration Detection Using a 24 GHz Portable Auditory Radar.

    PubMed

    Hong, Hong; Zhao, Heng; Peng, Zhengyu; Li, Hui; Gu, Chen; Li, Changzhi; Zhu, Xiaohua

    2016-01-01

    Time-varying vocal folds vibration information is of crucial importance in speech processing, and traditional devices for acquiring speech signals are easily corrupted by high background noise and voice interference. In this paper, we present a non-acoustic way to capture human vocal folds vibration using a 24-GHz portable auditory radar. Since the vocal folds vibration amplitude reaches only several millimeters, a high operating frequency and a 4 × 4 antenna array are applied to achieve high sensitivity. A Variational Mode Decomposition (VMD) based algorithm is proposed to first decompose the radar-detected auditory signal into a sequence of intrinsic modes and then extract the time-varying vocal folds vibration frequency from the corresponding mode. Feasibility demonstration, evaluation, and comparison are conducted with tonal and non-tonal languages, and the low relative errors show a high consistency between the radar-detected time-varying vocal folds vibration and the acoustic fundamental frequency, with the auditory radar significantly improving the frequency-resolving power. PMID:27483261
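    The last step of the pipeline — estimating the vibration frequency from a decomposed mode — can be illustrated with a simple FFT-peak estimator. A full VMD implementation is beyond this sketch, so a synthetic sinusoidal mode stands in for the radar-detected component; the 2 kHz sampling rate and 120 Hz vibration frequency are assumptions.

    ```python
    import numpy as np

    def dominant_frequency(mode, fs):
        """Estimate the vibration frequency of one decomposed mode from the
        location of its FFT magnitude peak."""
        spectrum = np.abs(np.fft.rfft(mode))
        freqs = np.fft.rfftfreq(len(mode), d=1.0 / fs)
        return freqs[np.argmax(spectrum)]

    fs = 2000                                # Hz, assumed radar signal sampling rate
    t = np.arange(fs) / fs                   # one second of samples
    mode = np.sin(2 * np.pi * 120.0 * t)     # 120 Hz stand-in for vocal folds vibration
    print(dominant_frequency(mode, fs))      # ≈ 120.0 Hz
    ```

    For the time-varying estimate the paper requires, the same measurement would be applied over short sliding windows of the mode rather than the whole signal at once.
    
    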

  16. Inattentional Deafness: Visual Load Leads to Time-Specific Suppression of Auditory Evoked Responses

    PubMed Central

    Molloy, Katharine; Griffiths, Timothy D.; Lavie, Nilli

    2015-01-01

    Due to capacity limits on perception, conditions of high perceptual load lead to reduced processing of unattended stimuli (Lavie et al., 2014). Accumulating work demonstrates the effects of visual perceptual load on visual cortex responses, but the effects on auditory processing remain poorly understood. Here we establish the neural mechanisms underlying “inattentional deafness”—the failure to perceive auditory stimuli under high visual perceptual load. Participants performed a visual search task of low (target dissimilar to nontarget items) or high (target similar to nontarget items) load. On a random subset (50%) of trials, irrelevant tones were presented concurrently with the visual stimuli. Brain activity was recorded with magnetoencephalography, and time-locked responses to the visual search array and to the incidental presence of unattended tones were assessed. High, compared to low, perceptual load led to increased early visual evoked responses (within 100 ms from onset). This was accompanied by reduced early (∼100 ms from tone onset) auditory evoked activity in superior temporal sulcus and posterior middle temporal gyrus. A later suppression of the P3 “awareness” response to the tones was also observed under high load. A behavioral experiment revealed reduced tone detection sensitivity under high visual load, indicating that the reduction in neural responses was indeed associated with reduced awareness of the sounds. These findings support a neural account of shared audiovisual resources, which, when depleted under load, leads to failures of sensory perception and awareness. SIGNIFICANCE STATEMENT The present work clarifies the neural underpinning of inattentional deafness under high visual load. The findings of near-simultaneous load effects on both visual and auditory evoked responses suggest shared audiovisual processing capacity. Temporary depletion of shared capacity in perceptually demanding visual tasks leads to a momentary reduction in

  17. Time coded distribution via broadcasting stations

    NASA Technical Reports Server (NTRS)

    Leschiutta, S.; Pettiti, V.; Detoma, E.

    1979-01-01

    The distribution of standard time signals via AM and FM broadcasting stations offers the distinct advantages of wide-area coverage and inexpensive receivers, but the signals are radiated only a limited number of times per day, are not usually available during the night, and do not permit full, automatic synchronization of a remote clock. As an attempt to overcome some of these problems, a time-coded signal carrying complete date information is diffused by the IEN via the national broadcasting networks in Italy. These signals are radiated by some 120 AM and about 3000 FM and TV transmitters around the country. In this way, a time-ordered system with an accuracy of a couple of milliseconds is easily achieved.
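    Broadcast time codes of this kind typically pack the date and time into fixed-width binary-coded-decimal (BCD) fields that a cheap receiver can decode bit by bit. The layout below (BCD hour, minute, and day-of-year) is a generic illustration; the actual IEN code format is not described in the abstract and certainly differs.

    ```python
    def bcd_time_code(hour, minute, day_of_year):
        """Pack a time/date stamp into a bit string using 4-bit BCD digits.
        Field layout is hypothetical, for illustration only."""
        def bcd(value, digits):
            bits = ""
            for ch in str(value).zfill(digits):
                bits += format(int(ch), "04b")  # one BCD nibble per decimal digit
            return bits
        return bcd(hour, 2) + bcd(minute, 2) + bcd(day_of_year, 3)

    code = bcd_time_code(13, 45, 321)
    print(code, len(code))  # 28 bits: 8 for hour, 8 for minute, 12 for day-of-year
    ```
    
    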

  18. Real-time neural coding of memory.

    PubMed

    Tsien, Joe Z

    2007-01-01

    Recent identification of network-level functional coding units, termed neural cliques, in the hippocampus has allowed real-time patterns of memory traces to be mathematically described, intuitively visualized, and dynamically deciphered. Any given episodic event is represented and encoded by the activation of a set of neural clique assemblies that are organized in a categorical and hierarchical manner. This hierarchical feature-encoding pyramid is invariantly composed of the general feature-encoding clique at the bottom, sub-general feature-encoding cliques in the middle, and highly specific feature-encoding cliques at the top. This hierarchical and categorical organization of neural clique assemblies provides the network-level mechanism the capability of not only achieving vast storage capacity, but also generating commonalities from the individual behavioral episodes and converting them to the abstract concepts and generalized knowledge that are essential for intelligence and adaptive behaviors. Furthermore, activation patterns of the neural clique assemblies can be mathematically converted to strings of binary codes that would permit universal categorizations of the brain's internal representations across individuals and species. Such universal brain codes can also potentially facilitate the unprecedented brain-machine interface communications. PMID:17925242

  19. Working memory for time intervals in auditory rhythmic sequences

    PubMed Central

    Teki, Sundeep; Griffiths, Timothy D.

    2014-01-01

    The brain can hold information about multiple objects in working memory. It is not known, however, whether intervals of time can be stored in memory as distinct items. Here, we developed a novel paradigm to examine temporal memory where listeners were required to reproduce the duration of a single probed interval from a sequence of intervals. We demonstrate that memory performance significantly varies as a function of temporal structure (better memory in regular vs. irregular sequences), interval size (better memory for sub- vs. supra-second intervals), and memory load (poor memory for higher load). In contrast memory performance is invariant to attentional cueing. Our data represent the first systematic investigation of temporal memory in sequences that goes beyond previous work based on single intervals. The results support the emerging hypothesis that time intervals are allocated a working memory resource that varies with the amount of other temporal information in a sequence. PMID:25477849

  20. Uncertainty in visual and auditory series is coded by modality-general and modality-specific neural systems.

    PubMed

    Nastase, Samuel; Iacovella, Vittorio; Hasson, Uri

    2014-04-01

    Coding for the degree of disorder in a temporally unfolding sensory input allows for optimized encoding of these inputs via information compression and predictive processing. Prior neuroimaging work has examined sensitivity to statistical regularities within single sensory modalities and has associated this function with the hippocampus, anterior cingulate, and lateral temporal cortex. Here we investigated to what extent sensitivity to input disorder, quantified by Markov entropy, is subserved by modality-general or modality-specific neural systems when participants are not required to monitor the input. Participants were presented with rapid (3.3 Hz) auditory and visual series varying over four levels of entropy, while monitoring an infrequently changing fixation cross. For visual series, sensitivity to the magnitude of disorder was found in early visual cortex, the anterior cingulate, and the intraparietal sulcus. For auditory series, sensitivity was found in inferior frontal, lateral temporal, and supplementary motor regions implicated in speech perception and sequencing. Ventral premotor and central cingulate cortices were identified as possible candidates for modality-general uncertainty processing, exhibiting marginal sensitivity to disorder in both modalities. The right temporal pole differentiated the highest and lowest levels of disorder in both modalities, but did not show general sensitivity to the parametric manipulation of disorder. Our results indicate that neural sensitivity to input disorder relies largely on modality-specific systems embedded in extended sensory cortices, though uncertainty-related processing in frontal regions may be driven by both input modalities. PMID:23408389
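    The disorder measure named in the abstract, Markov entropy, quantifies how unpredictable the next symbol in a series is given the current one. A minimal first-order estimator is sketched below; the exact estimator and series construction used in the study may differ.

    ```python
    import math
    from collections import defaultdict

    def markov_entropy(sequence):
        """First-order Markov entropy in bits per symbol: the entropy of the
        next symbol conditioned on the current one, weighted by how often
        each current symbol occurs in the series."""
        transitions = defaultdict(lambda: defaultdict(int))
        for a, b in zip(sequence, sequence[1:]):
            transitions[a][b] += 1
        total = len(sequence) - 1
        entropy = 0.0
        for a, nexts in transitions.items():
            n_a = sum(nexts.values())
            for count in nexts.values():
                p = count / n_a
                entropy += (n_a / total) * (-p * math.log2(p))
        return entropy

    print(markov_entropy("ABABABAB"))  # fully predictable series: 0.0
    print(markov_entropy("AABBABBA"))  # partially predictable series: > 0
    ```

    Varying this quantity over four levels, as the study did, amounts to constructing transition matrices that range from deterministic (entropy 0) to uniform (maximum entropy).
    
    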

  1. Code for Calculating Regional Seismic Travel Time

    SciTech Connect

    BALLARD, SANFORD; HIPP, JAMES; & BARKER, GLENN

    2009-07-10

    The RSTT software computes predictions of the travel time of seismic energy traveling from a source to a receiver through 2.5D models of the seismic velocity distribution within the Earth. The two primary applications for the RSTT library are tomographic inversion studies and seismic event location calculations. In tomographic inversion studies, a seismologist begins with a number of source-receiver travel time observations and an initial starting model of the velocity distribution within the Earth. A forward travel time calculator, such as the RSTT library, is used to compute predictions of each observed travel time and all of the residuals (observed minus predicted travel time) are calculated. The Earth model is then modified in some systematic way with the goal of minimizing the residuals. The Earth model obtained in this way is assumed to be a better model than the starting model if it has lower residuals. The other major application for the RSTT library is seismic event location. Given an Earth model, an initial estimate of the location of a seismic event, and some number of observations of seismic travel time thought to have originated from that event, location codes systematically modify the estimate of the location of the event with the goal of minimizing the difference between the observed and predicted travel times. The second application, seismic event location, is routinely implemented by the military as part of its effort to monitor the Earth for nuclear tests conducted by foreign countries.
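    The inversion loop described above (predict, compute residuals, nudge the model, repeat) can be caricatured in a few lines. In a real inversion the model is a 2.5D velocity distribution and the forward calculator is the RSTT library itself; here the model is a single slowness scalar and `predict` is a stand-in toy forward model.

    ```python
    def invert(observed, predict, model, step=0.01, iterations=100):
        """Caricature of a tomographic inversion loop: compute residuals
        (observed minus predicted travel times), then adjust the model to
        reduce the systematic misfit."""
        for _ in range(iterations):
            residuals = [obs - predict(model, i) for i, obs in enumerate(observed)]
            mean_residual = sum(residuals) / len(residuals)
            model += step * mean_residual  # move model to shrink residuals
        return model

    # Toy forward model: travel time = slowness * known source-receiver distance
    distances = [10.0, 20.0, 30.0]
    def predict(slowness, i):
        return slowness * distances[i]

    true_slowness = 0.25
    observed = [true_slowness * d for d in distances]
    print(invert(observed, predict, model=0.1))  # converges toward 0.25
    ```

    The accepted stopping criterion in practice is the one the abstract states: the updated model is preferred over the starting model if its residuals are lower.
    
    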

  2. Code for Calculating Regional Seismic Travel Time

    Energy Science and Technology Software Center (ESTSC)

    2009-07-10

    The RSTT software computes predictions of the travel time of seismic energy traveling from a source to a receiver through 2.5D models of the seismic velocity distribution within the Earth. The two primary applications for the RSTT library are tomographic inversion studies and seismic event location calculations. In tomographic inversion studies, a seismologist begins with a number of source-receiver travel time observations and an initial starting model of the velocity distribution within the Earth. A forward travel time calculator, such as the RSTT library, is used to compute predictions of each observed travel time and all of the residuals (observed minus predicted travel time) are calculated. The Earth model is then modified in some systematic way with the goal of minimizing the residuals. The Earth model obtained in this way is assumed to be a better model than the starting model if it has lower residuals. The other major application for the RSTT library is seismic event location. Given an Earth model, an initial estimate of the location of a seismic event, and some number of observations of seismic travel time thought to have originated from that event, location codes systematically modify the estimate of the location of the event with the goal of minimizing the difference between the observed and predicted travel times. The second application, seismic event location, is routinely implemented by the military as part of its effort to monitor the Earth for nuclear tests conducted by foreign countries.

  3. Using Reaction Time and Equal Latency Contours to Derive Auditory Weighting Functions in Sea Lions and Dolphins.

    PubMed

    Finneran, James J; Mulsow, Jason; Schlundt, Carolyn E

    2016-01-01

    Subjective loudness measurements are used to create equal-loudness contours and auditory weighting functions for human noise-mitigation criteria; however, comparable direct measurements of subjective loudness with animal subjects are difficult to conduct. In this study, simple reaction time to pure tones was measured as a proxy for subjective loudness in a Tursiops truncatus and Zalophus californianus. Contours fit to equal reaction-time curves were then used to estimate the shapes of auditory weighting functions. PMID:26610970

  4. Auditory attention to frequency and time: an analogy to visual local–global stimuli

    PubMed Central

    Justus, Timothy; List, Alexandra

    2007-01-01

    Two priming experiments demonstrated exogenous attentional persistence to the fundamental auditory dimensions of frequency (Experiment 1) and time (Experiment 2). In a divided-attention task, participants responded to an independent dimension, the identification of three-tone sequence patterns, for both prime and probe stimuli. The stimuli were specifically designed to parallel the local–global hierarchical letter stimuli of [Navon D. (1977). Forest before trees: The precedence of global features in visual perception. Cognitive Psychology, 9, 353–383] and the task was designed to parallel subsequent work in visual attention using Navon stimuli [Robertson, L. C. (1996). Attentional persistence for features of hierarchical patterns. Journal of Experimental Psychology: General, 125, 227–249; Ward, L. M. (1982). Determinants of attention to local and global features of visual forms. Journal of Experimental Psychology: Human Perception and Performance, 8, 562–581]. The results are discussed in terms of previous work in auditory attention and previous approaches to auditory local–global processing. PMID:16297675

  5. Some optimal partial-unit-memory codes. [time-invariant binary convolutional codes

    NASA Technical Reports Server (NTRS)

    Lauer, G. S.

    1979-01-01

    A class of time-invariant binary convolutional codes is defined, called partial-unit-memory codes. These codes are optimal in the sense of having maximum free distance for given values of R, k (the number of encoder inputs), and mu (the number of encoder memory cells). Optimal codes are given for rates R = 1/4, 1/3, 1/2, and 2/3, with mu not greater than 4 and k not greater than mu + 3, whenever such a code is better than previously known codes. An infinite class of optimal partial-unit-memory codes is also constructed based on equidistant block codes.
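    The flavor of a unit-memory convolutional encoder (mu = 1: the encoder remembers only the previous input) can be shown with a rate-1/2 example. The generator connections below (first output = current bit, second output = current XOR previous bit) are illustrative only and are not one of the optimal codes tabulated in the paper.

    ```python
    def encode_rate_half(bits):
        """Rate-1/2 binary convolutional encoder with a single memory cell
        (mu = 1). Each input bit produces two output bits, so the output is
        twice the length of the input."""
        memory = 0
        out = []
        for b in bits:
            out.append(b)            # systematic output: the input bit itself
            out.append(b ^ memory)   # parity from current and previous bit
            memory = b
        return out

    print(encode_rate_half([1, 0, 1, 1]))  # [1, 1, 0, 1, 1, 1, 1, 0]
    ```

    The free distance of such a code is the minimum Hamming weight over all nonzero output sequences; the paper's codes are optimal in maximizing this quantity for given R, k, and mu.
    
    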

  6. Neural mechanisms underlying auditory feedback control of speech

    PubMed Central

    Reilly, Kevin J.; Guenther, Frank H.

    2013-01-01

    The neural substrates underlying auditory feedback control of speech were investigated using a combination of functional magnetic resonance imaging (fMRI) and computational modeling. Neural responses were measured while subjects spoke monosyllabic words under two conditions: (i) normal auditory feedback of their speech, and (ii) auditory feedback in which the first formant frequency of their speech was unexpectedly shifted in real time. Acoustic measurements showed compensation to the shift within approximately 135 ms of onset. Neuroimaging revealed increased activity in bilateral superior temporal cortex during shifted feedback, indicative of neurons coding mismatches between expected and actual auditory signals, as well as right prefrontal and Rolandic cortical activity. Structural equation modeling revealed increased influence of bilateral auditory cortical areas on right frontal areas during shifted speech, indicating that projections from auditory error cells in posterior superior temporal cortex to motor correction cells in right frontal cortex mediate auditory feedback control of speech. PMID:18035557

  7. Brainstem auditory evoked responses in man. 1: Effect of stimulus rise-fall time and duration

    NASA Technical Reports Server (NTRS)

    Hecox, K.; Squires, N.; Galambos, R.

    1975-01-01

    Short latency (under 10 msec) responses elicited by bursts of white noise were recorded from the scalps of human subjects. Response alterations produced by changes in the noise burst duration (on-time), inter-burst interval (off-time), and onset and offset shapes were analyzed. The latency of the most prominent response component, wave V, was markedly delayed with increases in stimulus rise time but was unaffected by changes in fall time. Increases in stimulus duration, and therefore in loudness, resulted in a systematic increase in latency. This was probably due to response recovery processes, since the effect was eliminated with increases in stimulus off-time. The amplitude of wave V was insensitive to changes in signal rise and fall times, while increasing signal on-time produced smaller amplitude responses only for sufficiently short off-times. It was concluded that wave V of the human auditory brainstem evoked response is solely an onset response.

  8. Coding of sound direction in the auditory periphery of the lake sturgeon, Acipenser fulvescens

    PubMed Central

    Popper, Arthur N.; Fay, Richard R.

    2012-01-01

    The lake sturgeon, Acipenser fulvescens, belongs to one of the few extant nonteleost ray-finned fishes and diverged from the main vertebrate lineage about 250 million years ago. The aim of this study was to use this species to explore the peripheral neural coding strategies for sound direction and compare these results to modern bony fishes (teleosts). Extracellular recordings were made from afferent neurons innervating the saccule and lagena of the inner ear while the fish was stimulated using a shaker system. Afferents were highly directional and strongly phase locked to the stimulus. Directional response profiles resembled cosine functions, and directional preferences occurred at a wide range of stimulus intensities (spanning at least 60 dB re 1 nm displacement). Seventy-six percent of afferents were directionally selective for stimuli in the vertical plane near 90° (up-down) and did not respond to horizontal stimulation. Sixty-two percent of afferents responsive to horizontal stimulation had their best axis in azimuths near 0° (front-back). These findings suggest that in the lake sturgeon, in contrast to teleosts, the saccule and lagena may convey more limited information about the direction of a sound source, raising the possibility that this species uses a different mechanism for localizing sound. For azimuth, a mechanism could involve the utricle or perhaps the computation of arrival time differences. For elevation, behavioral strategies such as directing the head to maximize input to the area of best sensitivity may be used. Alternatively, the lake sturgeon may have a more limited ability for sound source localization compared with teleosts. PMID:22031776

  9. The time-course of distractor processing in auditory spatial negative priming.

    PubMed

    Möller, Malte; Mayr, Susanne; Buchner, Axel

    2016-09-01

    The spatial negative priming effect denotes slowed-down and sometimes more error-prone responding to a location that previously contained a distractor as compared with a previously unoccupied location. In vision, this effect has been attributed to the inhibition of irrelevant locations, and recently, of their task-assigned responses. Interestingly, auditory versions of the task did not yield evidence for inhibitory processing of task-irrelevant events which might suggest modality-specific distractor processing in vision and audition. Alternatively, the inhibitory processes may differ in how they develop over time. If this were the case, the absence of inhibitory after-effects might be due to an inappropriate timing of successive presentations in previous auditory spatial negative priming tasks. Specifically, the distractor may not yet have been inhibited or inhibition may already have dissipated at the time performance is assessed. The present study was conducted to test these alternatives. Participants indicated the location of a target sound in the presence of a concurrent distractor sound. Performance was assessed between two successive prime-probe presentations. The time between the prime response and the probe sounds (response-stimulus interval, RSI) was systematically varied between three groups (600, 1250, 1900 ms). For all RSI groups, the results showed no evidence for inhibitory distractor processing but conformed to the predictions of the feature mismatching hypothesis. The results support the assumption that auditory distractor processing does not recruit an inhibitory mechanism but involves the integration of spatial and sound identity features into common representations. PMID:26233234

  10. Brainstem auditory evoked responses in man. 1: Effect of stimulus rise-fall time and duration

    NASA Technical Reports Server (NTRS)

    Hecox, K.; Squires, N.; Galambos, R.

    1975-01-01

    Short latency (under 10 msec) evoked responses elicited by bursts of white noise were recorded from the scalp of human subjects. Response alterations produced by changes in the noise burst duration (on-time), inter-burst interval (off-time), and onset and offset shapes are reported and evaluated. The latency of the most prominent response component, wave V, was markedly delayed with increases in stimulus rise-time but was unaffected by changes in fall-time. The amplitude of wave V was insensitive to changes in signal rise and fall times, while increasing signal on-time produced smaller amplitude responses only for sufficiently short off-times. It is concluded that wave V of the human auditory brainstem evoked response is solely an onset response.

  11. Improved temporal coding of sinusoids in electric stimulation of the auditory nerve using desynchronizing pulse trains

    NASA Astrophysics Data System (ADS)

    Litvak, Leonid M.; Delgutte, Bertrand; Eddington, Donald K.

    2003-10-01

    Rubinstein et al. [Hearing Res. 127, 108-118 (1999)] suggested that the representation of electric stimulus waveforms in the temporal discharge patterns of auditory-nerve fibers (ANFs) might be improved by introducing an ongoing, high-rate, desynchronizing pulse train (DPT). To test this hypothesis, activity of ANFs was studied in acutely deafened, anesthetized cats in response to 10-min-long, 5-kpps electric pulse trains that were sinusoidally modulated for 400 ms every second. Two classes of responses to sinusoidal modulations of the DPT were observed. Fibers that only responded transiently to the unmodulated DPT showed hypersynchronization and narrow dynamic ranges to sinusoidal modulators, much as responses to electric sinusoids presented without a DPT. In contrast, fibers that exhibited sustained responses to the DPT were sensitive to modulation depths as low as 0.25% for a modulation frequency of 417 Hz. Over a 20-dB range of modulation depths, responses of these fibers resembled responses to tones in a healthy ear in both discharge rate and synchronization index. This range is much wider than the dynamic range typically found with electrical stimulation without a DPT, and comparable to the dynamic range for acoustic stimulation. These results suggest that a stimulation strategy that uses small signals superimposed upon a large DPT to encode sounds may evoke temporal discharge patterns in some ANFs that resemble responses to sound in a healthy ear.

  12. The Visual and Auditory Reaction Time of Adolescents with Respect to Their Academic Achievements

    ERIC Educational Resources Information Center

    Taskin, Cengiz

    2016-01-01

    The aim of this study was to examine the visual and auditory reaction times of adolescents with respect to their academic achievement level. Five hundred adolescent children from Turkey (age=15.24±0.78 years; height=168.80±4.89 cm; weight=65.24±4.30 kg) for two hundred fifty male and (age=15.28±0.74; height=160.40±5.77 cm; weight=55.32±4.13 kg)…

  13. System Measures Errors Between Time-Code Signals

    NASA Technical Reports Server (NTRS)

    Cree, David; Venkatesh, C. N.

    1993-01-01

    System measures timing errors between signals produced by three asynchronous time-code generators. Errors between 1-second clock pulses resolved to 2 microseconds. Basic principle of computation of timing errors as follows: central processing unit in microcontroller constantly monitors time data received from time-code generators for changes in 1-second time-code intervals. In response to any such change, microprocessor buffers count of 16-bit internal timer.
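
The measurement principle this record describes (latching a free-running 16-bit timer on each generator's 1-second tick and differencing the latched counts) can be sketched in software. This is a hypothetical reconstruction, not the actual microcontroller code; the 2 µs tick follows the stated resolution, and the wraparound handling is an assumption:

```python
# Sketch (hypothetical, not the original system): measure the offset between
# two 1-pps time-code signals by latching a free-running 16-bit timer at
# each signal's 1-second tick and differencing the latched counts.

TIMER_BITS = 16
TIMER_WRAP = 1 << TIMER_BITS
TICK_US = 2  # assumed timer resolution of 2 microseconds per count

def latch_error_us(count_a: int, count_b: int) -> int:
    """Signed error (microseconds) between two latched timer counts,
    accounting for 16-bit wraparound."""
    diff = (count_b - count_a) % TIMER_WRAP
    if diff >= TIMER_WRAP // 2:  # interpret large positive diffs as negative
        diff -= TIMER_WRAP
    return diff * TICK_US

print(latch_error_us(100, 130))            # 60: B ticked 30 counts after A
print(latch_error_us(10, TIMER_WRAP - 5))  # -30: B ticked earlier, via wraparound
```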

  14. Audiological changes over time in adolescents and young adults with auditory neuropathy spectrum disorder.

    PubMed

    Chandan, Hunsur Suresh; Prabhu, Prashanth

    2015-07-01

    Auditory neuropathy spectrum disorder (ANSD) describes a condition in which a patient's otoacoustic emissions (OAE) are (or were at one time) present and auditory brainstem responses (ABR) are abnormal or absent. ANSD is also diagnosed based on the presence of cochlear microphonics and abnormal or absent ABRs with or without abnormalities of OAE. We noted the changes in audiological characteristics over time with respect to pure tone thresholds, OAEs and Speech Identification Scores (SIS) in seven individuals with ANSD. The results indicated that all the individuals with ANSD had decreased SIS over time, whereas there was subsequent reduction in pure tone thresholds only in nine out of fourteen ears. There was absence of OAEs for two individuals in both ears during the follow-up evaluations. There was no regular pattern of changes in pure tone thresholds or SIS across all individuals. This indicates that there may be gradual worsening of hearing abilities in individuals with ANSD. Thus, regular follow-up and monitoring of audiological changes are necessary for individuals with ANSD. Also, longitudinal studies need to be done to further add evidence to the audiological changes over time in individuals with ANSD. PMID:25577995

  15. A grouped binary time code for telemetry and space applications

    NASA Technical Reports Server (NTRS)

    Chi, A. R.

    1979-01-01

    A computer-oriented time code designed for users with various time resolution requirements is presented. It is intended as a time code for spacecraft and ground applications where direct code compatibility with automatic data processing equipment is of primary consideration. The principal features of this time code are a byte-oriented format; selectable resolution options (from seconds to nanoseconds); and a long ambiguity period. The time code is compatible with new data handling and management concepts such as the NASA End-to-End Data System and the Telemetry Data Packetization format.
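
A byte-oriented code with selectable resolution can be illustrated as follows. The field layout here is our own invented example, not the actual format from this record; it only demonstrates the idea of appending sub-second bytes as finer resolution is requested:

```python
# Hypothetical sketch of a byte-oriented time code with selectable
# resolution, in the spirit of the grouped binary code described above
# (the actual field layout is not reproduced here).

import struct

def encode_time(seconds: int, nanoseconds: int = 0, resolution: str = "s") -> bytes:
    """Pack a timestamp as a 4-byte seconds field plus optional
    sub-second byte pairs, depending on the requested resolution."""
    out = struct.pack(">I", seconds)  # 32-bit seconds: long ambiguity period
    if resolution in ("ms", "us", "ns"):
        out += struct.pack(">H", nanoseconds // 1_000_000)        # milliseconds
    if resolution in ("us", "ns"):
        out += struct.pack(">H", (nanoseconds // 1_000) % 1_000)  # microseconds
    if resolution == "ns":
        out += struct.pack(">H", nanoseconds % 1_000)             # nanoseconds
    return out

print(len(encode_time(1_000, resolution="s")))     # 4 bytes
print(len(encode_time(1_000, 123_456_789, "ns")))  # 10 bytes
```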

  16. Auditory Processing of Amplitude Envelope Rise Time in Adults Diagnosed with Developmental Dyslexia

    ERIC Educational Resources Information Center

    Pasquini, Elisabeth S.; Corriveau, Kathleen H.; Goswami, Usha

    2007-01-01

    Studies of basic (nonspeech) auditory processing in adults thought to have developmental dyslexia have yielded a variety of data. Yet there has been little consensus regarding the explanatory value of auditory processing in accounting for reading difficulties. Recently, however, a number of studies of basic auditory processing in children with…

  17. Auditory Time-Interval Perception as Causal Inference on Sound Sources

    PubMed Central

    Sawai, Ken-ichi; Sato, Yoshiyuki; Aihara, Kazuyuki

    2012-01-01

    Perception of a temporal pattern in a sub-second time scale is fundamental to conversation, music perception, and other kinds of sound communication. However, its mechanism is not fully understood. A simple example is hearing three successive sounds with short time intervals. The following misperception of the latter interval is known: underestimation of the latter interval when the former is a little shorter or much longer than the latter, and overestimation of the latter when the former is a little longer or much shorter than the latter. Although this misperception of auditory time intervals for simple stimuli might be a cue to understanding the mechanism of time-interval perception, there exists no model that comprehensively explains it. Considering a previous experiment demonstrating that illusory perception does not occur for stimulus sounds with different frequencies, it is plausible to think that the underlying mechanism of time-interval perception involves a causal inference on sound sources: here, different frequencies provide cues for different causes. We construct a Bayesian observer model of this time-interval perception. We introduce a probabilistic variable representing the causality of sounds in the model. As prior knowledge, the observer assumes that a single sound source produces periodic and short time intervals, which is consistent with several previous works. We conducted numerical simulations and confirmed that our model can reproduce the misperception of auditory time intervals. A similar phenomenon has also been reported in the visual and tactile modalities, though over wider time ranges. These different time ranges can be interpreted as a difference in time resolution, given that the time resolutions for vision and touch are lower than that for audition; this suggests the existence of a common mechanism for temporal pattern perception across modalities. PMID:23226136
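
The causal-inference idea can be illustrated with a toy Bayesian observer. This is our own simplification with made-up noise and prior parameters, not the authors' model: when two intervals plausibly share a cause, a periodicity prior pulls the estimate of the latter interval toward the former.

```python
# Minimal Bayesian-observer sketch (illustrative, not the authors' exact
# model): P(common cause) is high when intervals are similar; under a
# common (periodic) cause the estimate shrinks toward the former interval.

import math

def estimate_latter(t1: float, t2: float, sigma: float = 30.0,
                    prior_common: float = 0.5) -> float:
    """Estimate of the latter interval (ms) given former t1 and latter t2."""
    like_common = math.exp(-((t1 - t2) ** 2) / (2 * sigma ** 2))
    p_common = (prior_common * like_common) / (
        prior_common * like_common + (1 - prior_common))
    fused = (t1 + t2) / 2  # best estimate under a single periodic source
    return p_common * fused + (1 - p_common) * t2

# Former a little longer than latter -> latter is overestimated:
print(estimate_latter(220.0, 200.0) > 200.0)              # True
# Former much longer -> causes judged separate, little bias:
print(abs(estimate_latter(500.0, 200.0) - 200.0) < 1.0)   # True
```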

  18. Norm-Based Coding of Voice Identity in Human Auditory Cortex

    PubMed Central

    Latinus, Marianne; McAleer, Phil; Bestelmeyer, Patricia E.G.; Belin, Pascal

    2013-01-01

    Summary Listeners exploit small interindividual variations around a generic acoustical structure to discriminate and identify individuals from their voice—a key requirement for social interactions. The human brain contains temporal voice areas (TVA) [1] involved in an acoustic-based representation of voice identity [2–6], but the underlying coding mechanisms remain unknown. Indirect evidence suggests that identity representation in these areas could rely on a norm-based coding mechanism [4, 7–11]. Here, we show by using fMRI that voice identity is coded in the TVA as a function of acoustical distance to two internal voice prototypes (one male, one female)—approximated here by averaging a large number of same-gender voices by using morphing [12]. Voices more distant from their prototype are perceived as more distinctive and elicit greater neuronal activity in voice-sensitive cortex than closer voices—a phenomenon not merely explained by neuronal adaptation [13, 14]. Moreover, explicit manipulations of distance-to-mean by morphing voices toward (or away from) their prototype elicit reduced (or enhanced) neuronal activity. These results indicate that voice-sensitive cortex integrates relevant acoustical features into a complex representation referenced to idealized male and female voice prototypes. More generally, they shed light on remarkable similarities in cerebral representations of facial and vocal identity. PMID:23707425

  19. Effect of red bull energy drink on auditory reaction time and maximal voluntary contraction.

    PubMed

    Goel, Vartika; Manjunatha, S; Pai, Kirtana M

    2014-01-01

    The use of "Energy Drinks" (ED) is increasing in India. Students especially use these drinks to rejuvenate after strenuous exercises or as a stimulant during exam times. The most common ingredient in EDs is caffeine, and a popular ED commonly available and used is Red Bull, containing 80 mg of caffeine in a 250 ml bottle. The primary aim of this study was to investigate the effects of Red Bull energy drink on auditory reaction time and maximal voluntary contraction. A homogeneous group of twenty medical students (10 males, 10 females) participated in a crossover study in which they were randomized to supplement with Red Bull (2 mg/kg body weight of caffeine) or an isoenergetic, isovolumetric, noncaffeinated control drink (a combination of Appy Fizz, Cranberry juice and soda) separated by 7 days. Maximal voluntary contraction (MVC) was recorded as the highest of 3 values of maximal isometric force generated from the dominant hand using a hand grip dynamometer (Biopac systems). Auditory reaction time (ART) was the average of 10 values of the time interval between the click sound and the response of pressing the push button on a handheld switch (Biopac systems). One hour after consumption, both the energy and control drinks significantly reduced the auditory reaction time in males (ED 232 ± 59 vs 204 ± 34 ms and Control 223 ± 57 vs 210 ± 51 ms; p < 0.05) as well as in females (ED 227 ± 56 vs 214 ± 48 ms and Control 224 ± 45 vs 215 ± 36 ms; p < 0.05) but had no effect on MVC in either sex (males ED 381 ± 37 vs 371 ± 36 and Control 375 ± 61 vs 363 ± 36 Newton; females ED 227 ± 23 vs 227 ± 32 and Control 234 ± 46 vs 228 ± 37 Newton). When compared across the gender groups, there was no significant difference between males and females in the effects of any of the drinks on the ART, but there was an overall significantly lower MVC in females compared to males. Both the energy drink and the control drink significantly improved the reaction time but may not have any effect on maximal voluntary contraction.


  1. Ability of primary auditory cortical neurons to detect amplitude modulation with rate and temporal codes: neurometric analysis

    PubMed Central

    Johnson, Jeffrey S.; Yin, Pingbo; O'Connor, Kevin N.

    2012-01-01

    Amplitude modulation (AM) is a common feature of natural sounds, and its detection is biologically important. Even though most sounds are not fully modulated, the majority of physiological studies have focused on fully modulated (100% modulation depth) sounds. We presented AM noise at a range of modulation depths to awake macaque monkeys while recording from neurons in primary auditory cortex (A1). The ability of neurons to detect partial AM with rate and temporal codes was assessed with signal detection methods. On average, single-cell synchrony was as or more sensitive than spike count in modulation detection. Cells are less sensitive to modulation depth if tested away from their best modulation frequency, particularly for temporal measures. Mean neural modulation detection thresholds in A1 are not as sensitive as behavioral thresholds, but with phase locking the most sensitive neurons are more sensitive, suggesting that for temporal measures the lower-envelope principle cannot account for thresholds. Three methods of preanalysis pooling of spike trains (multiunit, similar to convergence from a cortical column; within cell, similar to convergence of cells with matched response properties; across cell, similar to indiscriminate convergence of cells) all result in an increase in neural sensitivity to modulation depth for both temporal and rate codes. For the across-cell method, pooling of a few dozen cells can result in detection thresholds that approximate those of the behaving animal. With synchrony measures, indiscriminate pooling results in sensitive detection of modulation frequencies between 20 and 60 Hz, suggesting that differences in AM response phase are minor in A1. PMID:22422997

  2. The Dynamics of Disruption from Altered Auditory Feedback: Further Evidence for a Dissociation of Sequencing and Timing

    ERIC Educational Resources Information Center

    Pfordresher, Peter Q.; Kulpa, J. D.

    2011-01-01

    Three experiments were designed to test whether perception and action are coordinated in a way that distinguishes sequencing from timing (Pfordresher, 2003). Each experiment incorporated a trial design in which altered auditory feedback (AAF) was presented for varying lengths of time and then withdrawn. Experiments 1 and 2 included AAF that…

  3. SYMTRAN - A Time-dependent Symmetric Tandem Mirror Transport Code

    SciTech Connect

    Hua, D; Fowler, T

    2004-06-15

    A time-dependent version of the steady-state radial transport model in symmetric tandem mirrors in Ref. [1] has been coded up and first tests performed. Our code, named SYMTRAN, is an adaptation of the earlier SPHERE code for spheromaks, now modified for tandem mirror physics. Motivated by Post's new concept of kinetic stabilization of symmetric mirrors, it is an extension of the earlier TAMRAC rate-equation code omitting radial transport [2], which successfully accounted for experimental results in TMX. The SYMTRAN code differs from the earlier tandem mirror radial transport code TMT in that our code is focused on axisymmetric tandem mirrors and classical diffusion, whereas TMT emphasized non-ambipolar transport in TMX and MFTF-B due to yin-yang plugs and non-symmetric transitions between the plugs and axisymmetric center cell. Both codes exhibit interesting but different non-linear behavior.

  4. Code extraction from encoded signal in time-spreading optical code division multiple access.

    PubMed

    Si, Zhijian; Yin, Feifei; Xin, Ming; Chen, Hongwei; Chen, Minghua; Xie, Shizhong

    2010-01-15

    A vulnerability that allows eavesdroppers to extract the code from the waveform of the noiselike encoded signal of an isolated user in a standard time-spreading optical code division multiple access communication system using bipolar phase code is experimentally demonstrated. The principle is based on fine structure in the encoded signal. Each dip in the waveform corresponds to a transition of the bipolar code. Eavesdroppers can get the code by analyzing the chip numbers between any two transitions; then a decoder identical to the legal user's can be fabricated, and they can get the properly decoded signal. PMID:20081977
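
The extraction principle described above is simple to illustrate: each dip marks a transition of the bipolar code, so the chip counts between dips recover the code up to a global sign ambiguity. The chip positions below are hypothetical, and a real attack would operate on measured waveform dips:

```python
# Sketch of the eavesdropping principle (illustrative parameters): rebuild
# a +/-1 bipolar phase code from the chip indices where the sign flips,
# i.e., where dips appear in the noiselike encoded waveform.

def code_from_transitions(n_chips: int, transitions: list[int],
                          first_chip: int = +1) -> list[int]:
    """Rebuild a +/-1 code of length n_chips from the chip indices at
    which the phase flips. The sign of first_chip is unknowable to the
    eavesdropper (global sign ambiguity), but either choice decodes."""
    code, current = [], first_chip
    for chip in range(n_chips):
        if chip in transitions:  # a dip: the bipolar phase flips here
            current = -current
        code.append(current)
    return code

# A 7-chip bipolar code with flips entering chips 2 and 5:
print(code_from_transitions(7, [2, 5]))  # [1, 1, -1, -1, -1, 1, 1]
```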

  5. Effects of location and timing of co-activated neurons in the auditory midbrain on cortical activity: implications for a new central auditory prosthesis

    NASA Astrophysics Data System (ADS)

    Straka, Małgorzata M.; McMahon, Melissa; Markovitz, Craig D.; Lim, Hubert H.

    2014-08-01

    Objective. An increasing number of deaf individuals are being implanted with central auditory prostheses, but their performance has generally been poorer than for cochlear implant users. The goal of this study is to investigate stimulation strategies for improving hearing performance with a new auditory midbrain implant (AMI). Previous studies have shown that repeated electrical stimulation of a single site in each isofrequency lamina of the central nucleus of the inferior colliculus (ICC) causes strong suppressive effects in elicited responses within the primary auditory cortex (A1). Here we investigate if improved cortical activity can be achieved by co-activating neurons with different timing and locations across an ICC lamina and if this cortical activity varies across A1. Approach. We electrically stimulated two sites at different locations across an isofrequency ICC lamina using varying delays in ketamine-anesthetized guinea pigs. We recorded and analyzed spike activity and local field potentials across different layers and locations of A1. Results. Co-activating two sites within an isofrequency lamina with short inter-pulse intervals (<5 ms) could elicit cortical activity that is enhanced beyond a linear summation of activity elicited by the individual sites. A significantly greater extent of normalized cortical activity was observed for stimulation of the rostral-lateral region of an ICC lamina compared to the caudal-medial region. We did not identify any location trends across A1, but the most cortical enhancement was observed in supragranular layers, suggesting further integration of the stimuli through the cortical layers. Significance. The topographic organization identified by this study provides further evidence for the presence of functional zones across an ICC lamina with locations consistent with those identified by previous studies. Clinically, these results suggest that co-activating different neural populations in the rostral-lateral ICC rather

  6. Auditory and Visual Differences in Time Perception? An Investigation from a Developmental Perspective with Neuropsychological Tests

    ERIC Educational Resources Information Center

    Zelanti, Pierre S.; Droit-Volet, Sylvie

    2012-01-01

    Adults and children (5- and 8-year-olds) performed a temporal bisection task with either auditory or visual signals and either a short (0.5-1.0s) or long (4.0-8.0s) duration range. Their working memory and attentional capacities were assessed by a series of neuropsychological tests administered in both the auditory and visual modalities. Results…

  7. Adaptation to visual or auditory time intervals modulates the perception of visual apparent motion

    PubMed Central

    Zhang, Huihui; Chen, Lihan; Zhou, Xiaolin

    2012-01-01

    It is debated whether sub-second timing is subserved by a centralized mechanism or by the intrinsic properties of task-related neural activity in specific modalities (Ivry and Schlerf, 2008). By using a temporal adaptation task, we investigated whether adapting to different time intervals conveyed through stimuli in different modalities (i.e., frames of a visual Ternus display, visual blinking discs, or auditory beeps) would affect the subsequent implicit perception of visual timing, i.e., inter-stimulus interval (ISI) between two frames in a Ternus display. The Ternus display can induce two percepts of apparent motion (AM), depending on the ISI between the two frames: “element motion” for short ISIs, in which the endmost disc is seen as moving back and forth while the middle disc at the overlapping or central position remains stationary; “group motion” for longer ISIs, in which both discs appear to move in a manner of lateral displacement as a whole. In Experiment 1, participants adapted to either the typical “element motion” (ISI = 50 ms) or the typical “group motion” (ISI = 200 ms). In Experiments 2 and 3, participants adapted to a time interval of 50 or 200 ms through observing a series of two paired blinking discs at the center of the screen (Experiment 2) or hearing a sequence of two paired beeps (with pitch 1000 Hz). In Experiment 4, participants adapted to sequences of paired beeps with either low pitches (500 Hz) or high pitches (5000 Hz). After adaptation in each trial, participants were presented with a Ternus probe in which the ISI between the two frames was equal to the transitional threshold of the two types of motions, as determined by a pretest. Results showed that adapting to the short time interval in all the situations led to more reports of “group motion” in the subsequent Ternus probes; adapting to the long time interval, however, caused no aftereffect for visual adaptation but significantly more reports of group motion for

  8. Asynchrony adaptation reveals neural population code for audio-visual timing

    PubMed Central

    Roach, Neil W.; Heron, James; Whitaker, David; McGraw, Paul V.

    2011-01-01

    The relative timing of auditory and visual stimuli is a critical cue for determining whether sensory signals relate to a common source and for making inferences about causality. However, the way in which the brain represents temporal relationships remains poorly understood. Recent studies indicate that our perception of multisensory timing is flexible—adaptation to a regular inter-modal delay alters the point at which subsequent stimuli are judged to be simultaneous. Here, we measure the effect of audio-visual asynchrony adaptation on the perception of a wide range of sub-second temporal relationships. We find distinctive patterns of induced biases that are inconsistent with the previous explanations based on changes in perceptual latency. Instead, our results can be well accounted for by a neural population coding model in which: (i) relative audio-visual timing is represented by the distributed activity across a relatively small number of neurons tuned to different delays; (ii) the algorithm for reading out this population code is efficient, but subject to biases owing to under-sampling; and (iii) the effect of adaptation is to modify neuronal response gain. These results suggest that multisensory timing information is represented by a dedicated population code and that shifts in perceived simultaneity following asynchrony adaptation arise from analogous neural processes to well-known perceptual after-effects. PMID:20961905
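
The three model ingredients of this record (delay-tuned neurons, an efficient but under-sampled readout, and adaptation as a gain change) can be caricatured in a few lines. The tuning curves, gain values, and centroid readout below are our own toy choices, not the authors' fitted model:

```python
# Toy population code for audio-visual delay (illustrative, not the
# fitted model): a small bank of delay-tuned units, centroid readout,
# and adaptation modeled as a gain reduction near the adapted delay.

import math

PREFERRED = [-300 + 50 * i for i in range(13)]  # preferred AV delays (ms)
SIGMA = 100.0                                   # assumed tuning width (ms)

def decode(delay: float, gain: list[float]) -> float:
    """Centroid readout of the population response to a physical delay."""
    resp = [g * math.exp(-((delay - p) ** 2) / (2 * SIGMA ** 2))
            for g, p in zip(gain, PREFERRED)]
    return sum(r * p for r, p in zip(resp, PREFERRED)) / sum(resp)

baseline = [1.0] * len(PREFERRED)
# Adapt to an audio-lagging +100 ms delay: reduce the gain of units tuned near it.
adapted = [1.0 - 0.5 * math.exp(-((p - 100.0) ** 2) / (2 * SIGMA ** 2))
           for p in PREFERRED]

print(abs(decode(0.0, baseline)) < 1e-6)  # True: veridical before adaptation
print(decode(0.0, adapted) < 0.0)         # True: estimate repelled from the adapter
```

After adaptation, a physically simultaneous pair decodes to a negative delay, i.e., perceived simultaneity shifts, qualitatively matching the after-effects the abstract describes.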

  9. Real-time pseudocolor coding thermal ghost imaging.

    PubMed

    Duan, Deyang; Xia, Yunjie

    2014-01-01

    In this work, a color ghost image of a black-and-white object is obtained by a real-time pseudocolor coding technique that includes equal spatial frequency pseudocolor coding and equal density pseudocolor coding. This method makes the black-and-white ghost image more conducive to observation. Furthermore, since the ghost imaging comes from the intensity cross-correlations of the two beams, ghost imaging with the real-time pseudocolor coding technique is better than classical optical imaging with the same technique in overcoming the effects of light interference. PMID:24561954

  10. Precise inhibition is essential for microsecond interaural time difference coding

    NASA Astrophysics Data System (ADS)

    Brand, Antje; Behrend, Oliver; Marquardt, Torsten; McAlpine, David; Grothe, Benedikt

    2002-05-01

    Microsecond differences in the arrival time of a sound at the two ears (interaural time differences, ITDs) are the main cue for localizing low-frequency sounds in space. Traditionally, ITDs are thought to be encoded by an array of coincidence-detector neurons, receiving excitatory inputs from the two ears via axons of variable length ('delay lines'), to create a topographic map of azimuthal auditory space. Compelling evidence for the existence of such a map in the mammalian ITD detector, the medial superior olive (MSO), however, is lacking. Equally puzzling is the role of a temporally very precise, glycine-mediated inhibitory input to MSO neurons. Using in vivo recordings from the MSO of the Mongolian gerbil, we found the responses of ITD-sensitive neurons to be inconsistent with the idea of a topographic map of auditory space. Moreover, local application of glycine and its antagonist strychnine by iontophoresis (through glass pipette electrodes, by means of an electric current) revealed that precisely timed glycine-controlled inhibition is a critical part of the mechanism by which the physiologically relevant range of ITDs is encoded in the MSO. A computer model, simulating the response of a coincidence-detector neuron with bilateral excitatory inputs and a temporally precise contralateral inhibitory input, supports this conclusion.

  11. Secular Slowing of Auditory Simple Reaction Time in Sweden (1959–1985)

    PubMed Central

    Madison, Guy; Woodley of Menie, Michael A.; Sänger, Justus

    2016-01-01

    There are indications that simple reaction time might have slowed in Western populations, based on both cohort- and multi-study comparisons. A possible limitation of the latter method in particular is measurement error stemming from methods variance, which results from the fact that instruments and experimental conditions change over time and between studies. We therefore set out to measure the simple auditory reaction time (SRT) of 7,081 individuals (2,997 males and 4,084 females) born in Sweden 1959–1985 (subjects were aged between 27 and 54 years at time of measurement). Depending on age cut-offs and adjustment for aging related slowing of SRT, the data indicate that SRT has increased by between 3 and 16 ms in the 27 birth years covered in the present sample. This slowing is unlikely to be explained by attrition, which was evaluated by comparing the general intelligence × birth-year interactions and standard deviations for both male participants and dropouts, utilizing military conscript cognitive ability data. The present result is consistent with previous studies employing alternative methods, and may indicate the operation of several synergistic factors, such as recent micro-evolutionary trends favoring lower g in Sweden and the effects of industrially produced neurotoxic substances on peripheral nerve conduction velocity. PMID:27588000


  13. Emergent Auditory Feature Tuning in a Real-Time Neuromorphic VLSI System

    PubMed Central

    Sheik, Sadique; Coath, Martin; Indiveri, Giacomo; Denham, Susan L.; Wennekers, Thomas; Chicca, Elisabetta

    2011-01-01

    Many sounds of ecological importance, such as communication calls, are characterized by time-varying spectra. However, most neuromorphic auditory models to date have focused on distinguishing mainly static patterns, under the assumption that dynamic patterns can be learned as sequences of static ones. In contrast, the emergence of dynamic feature sensitivity through exposure to formative stimuli has been recently modeled in a network of spiking neurons based on the thalamo-cortical architecture. The proposed network models the effect of lateral and recurrent connections between cortical layers, distance-dependent axonal transmission delays, and learning in the form of Spike Timing Dependent Plasticity (STDP), which effects stimulus-driven changes in the pattern of network connectivity. In this paper we demonstrate how these principles can be efficiently implemented in neuromorphic hardware. In doing so we address two principal problems in the design of neuromorphic systems: real-time event-based asynchronous communication in multi-chip systems, and the realization in hybrid analog/digital VLSI technology of neural computational principles that we propose underlie plasticity in neural processing of dynamic stimuli. The result is a hardware neural network that learns in real-time and shows preferential responses, after exposure, to stimuli exhibiting particular spectro-temporal patterns. The availability of hardware on which the model can be implemented makes this a significant step toward the development of adaptive, neurobiologically plausible, spike-based, artificial sensory systems. PMID:22347163
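
    The pair-based STDP rule referenced in this abstract is conventionally written as an exponential window over the pre/post spike-time difference. A minimal sketch follows; the constants (a_plus, a_minus, tau) are illustrative placeholders, not the parameters of the VLSI implementation:

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for one pre/post spike pair, dt_ms = t_post - t_pre.

    Pre-before-post (dt > 0) potentiates; post-before-pre (dt < 0)
    depresses. Magnitudes decay exponentially with |dt| / tau.
    Constants are illustrative placeholders, not hardware values.
    """
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau)
    if dt_ms < 0:
        return -a_minus * math.exp(dt_ms / tau)
    return 0.0
```

    Repeatedly applying such updates to connections driven by a formative stimulus is what lets the network's connectivity come to reflect the stimulus's spectro-temporal structure.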

  14. Neural Basis of the Time Window for Subjective Motor-Auditory Integration

    PubMed Central

    Toida, Koichi; Ueno, Kanako; Shimada, Sotaro

    2016-01-01

    Temporal contiguity between an action and corresponding auditory feedback is crucial to the perception of self-generated sound. However, the neural mechanisms underlying motor–auditory temporal integration are unclear. Here, we conducted four experiments with an oddball paradigm to examine the specific event-related potentials (ERPs) elicited by delayed auditory feedback for a self-generated action. The first experiment confirmed that a pitch-deviant auditory stimulus elicits mismatch negativity (MMN) and P300, both when it is generated passively and by the participant’s action. In our second and third experiments, we investigated the ERP components elicited by delayed auditory feedback for a self-generated action. We found that delayed auditory feedback elicited an enhancement of P2 (enhanced-P2) and a N300 component, which were apparently different from the MMN and P300 components observed in the first experiment. We further investigated the sensitivity of the enhanced-P2 and N300 to delay length in our fourth experiment. Strikingly, the amplitude of the N300 increased as a function of the delay length. Additionally, the N300 amplitude was significantly correlated with the conscious detection of the delay (the 50% detection point was around 200 ms), and hence reduction in the feeling of authorship of the sound (the sense of agency). In contrast, the enhanced-P2 was most prominent in short-delay (≤200 ms) conditions and diminished in long-delay conditions. Our results suggest that different neural mechanisms are employed for the processing of temporally deviant and pitch-deviant auditory feedback. Additionally, the temporal window for subjective motor–auditory integration is likely about 200 ms, as indicated by these auditory ERP components. PMID:26779000

  15. Hearing aid gain prescriptions balance restoration of auditory nerve mean-rate and spike-timing representations of speech.

    PubMed

    Dinath, Faheem; Bruce, Ian C

    2008-01-01

    Linear and nonlinear amplification schemes for hearing aids have thus far been developed and evaluated based on perceptual criteria such as speech intelligibility, sound comfort, and loudness equalization. Finding amplification schemes that optimize all of these perceptual metrics has proven difficult. Using a physiological model, Bruce et al. [1] investigated the effects of single-band gain adjustments to linear amplification prescriptions. Optimal gain adjustments for model auditory-nerve fiber responses to speech sentences from the TIMIT database were dependent on whether the error metric included the spike timing information (i.e., a time-resolution of several microseconds) or the mean firing rates (i.e., a time-resolution of several milliseconds). Results showed that positive gain adjustments are required to optimize the mean firing rate responses, whereas negative gain adjustments tend to optimize spike timing information responses. In this paper we examine the results in more depth using a similar optimization scheme applied to a synthetic vowel /E/. It is found that negative gain adjustments (i.e., below the linear gain prescriptions) minimize the spread of synchrony and deviation of the phase response to vowel formants in responses containing spike-timing information. In contrast, positive gain adjustments (i.e., above the linear gain prescriptions) normalize the distribution of mean discharge rates in the auditory nerve responses. Thus, linear amplification prescriptions appear to find a balance between restoring the spike-timing and mean-rate information in auditory-nerve responses. PMID:19163029

  16. Average discharge rate representation of voice onset time in the chinchilla auditory nerve

    SciTech Connect

    Sinex, D.G.; McDonald, L.P.

    1988-05-01

    Responses of chinchilla auditory-nerve fibers to synthesized stop consonants differing in voice onset time (VOT) were obtained. The syllables, heard as /ga/--/ka/ or /da/--/ta/, were similar to those previously used by others in psychophysical experiments with human and with chinchilla subjects. Average discharge rates of neurons tuned to the frequency region near the first formant generally increased at the onset of voicing, for VOTs longer than 20 ms. These rate increases were closely related to spectral amplitude changes associated with the onset of voicing and with the activation of the first formant; as a result, they provided accurate information about VOT. Neurons tuned to frequency regions near the second and third formants did not encode VOT in their average discharge rates. Modulations in the average rates of these neurons reflected spectral variations that were independent of VOT. The results are compared to other measurements of the peripheral encoding of speech sounds and to psychophysical observations suggesting that syllables with large variations in VOT are heard as belonging to one of only two phonemic categories.

  17. Spike timing precision changes with spike rate adaptation in the owl's auditory space map.

    PubMed

    Keller, Clifford H; Takahashi, Terry T

    2015-10-01

    Spike rate adaptation (SRA) is a continuing change of responsiveness to ongoing stimuli, which is ubiquitous across species and levels of sensory systems. Under SRA, auditory responses to constant stimuli change over time, relaxing toward a long-term rate often over multiple timescales. With more variable stimuli, SRA causes the dependence of spike rate on sound pressure level to shift toward the mean level of recent stimulus history. A model based on subtractive adaptation (Benda J, Hennig RM. J Comput Neurosci 24: 113-136, 2008) shows that changes in spike rate and level dependence are mechanistically linked. Space-specific neurons in the barn owl's midbrain, when recorded under ketamine-diazepam anesthesia, showed these classical characteristics of SRA, while at the same time exhibiting changes in spike timing precision. Abrupt level increases of sinusoidally amplitude-modulated (SAM) noise initially led to spiking at higher rates with lower temporal precision. Spike rate and precision relaxed toward their long-term values with a time course similar to SRA, results that were also replicated by the subtractive model. Stimuli whose amplitude modulations (AMs) were not synchronous across carrier frequency evoked spikes in response to stimulus envelopes of a particular shape, characterized by the spectrotemporal receptive field (STRF). Again, abrupt stimulus level changes initially disrupted the temporal precision of spiking, which then relaxed along with SRA. We suggest that shifts in latency associated with stimulus level changes may differ between carrier frequency bands and underlie decreased spike precision. Thus SRA is manifest not simply as a change in spike rate but also as a change in the temporal precision of spiking. PMID:26269555
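
    The subtractive-adaptation idea attributed to Benda and Hennig can be illustrated with a minimal discrete-time model: the output rate is the input minus an adaptation variable that tracks the recent stimulus level. The time constant here is arbitrary, not a value fitted to the owl data:

```python
import numpy as np

def subtractive_adaptation(stim, tau=100.0, dt=1.0):
    """Rate = max(0, input - A); the adaptation variable A relaxes
    toward the recent input level with time constant tau (ms).
    A toy sketch of subtractive SRA, not the paper's fitted model."""
    a = 0.0
    rates = []
    for s in stim:
        rates.append(max(0.0, s - a))
        a += (dt / tau) * (s - a)   # adaptation tracks recent level
    return np.array(rates)

# An abrupt level increase produces an onset transient that then
# relaxes back toward a long-term rate, as described for SRA.
step = np.concatenate([np.ones(200), 2.0 * np.ones(200)])
r = subtractive_adaptation(step)
```

    Because the adaptation state is subtracted from the input, the level dependence of the output automatically shifts toward the mean of recent stimulus history, mechanistically linking the two effects the abstract describes.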

  18. Alamouti-type polarization-time coding in coded-modulation schemes with coherent detection.

    PubMed

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2008-09-01

    We present the Alamouti-type polarization-time (PT) coding scheme suitable for use in multilevel (M>or=2) block-coded modulation schemes with coherent detection. The PT-decoder is found to be similar to the Alamouti combiner. We also describe how to determine the symbol log-likelihood ratios in the presence of laser phase noise. We show that the proposed scheme is able to compensate even 800 ps of differential group delay, for the system operating at 10 Gb/s, with negligible penalty. The proposed scheme outperforms the equal-gain combining polarization diversity OFDM scheme, while the polarization diversity coded-OFDM and PT-coding based coded-OFDM schemes perform comparably. The proposed scheme has the potential of doubling the spectral efficiency compared to polarization diversity schemes. PMID:18773025
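
    The Alamouti combiner that the PT-decoder resembles has a simple closed form: two symbols are sent over two slots (here, two polarization states play the role of the two antennas), and a linear combiner recovers each symbol scaled by the total channel gain. A noise-free numpy sketch of the generic 2x1 Alamouti scheme, not the paper's optical PT receiver with phase-noise handling:

```python
import numpy as np

def alamouti_encode(s1, s2):
    # Slot 1 sends (s1, s2) on the two branches;
    # slot 2 sends (-conj(s2), conj(s1)).
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

def alamouti_combine(y1, y2, h1, h2):
    # Linear combiner; yields (|h1|^2 + |h2|^2) * (s1, s2).
    s1_hat = np.conj(h1) * y1 + h2 * np.conj(y2)
    s2_hat = np.conj(h2) * y1 - h1 * np.conj(y2)
    return s1_hat, s2_hat

# Noise-free sanity check over a random complex channel.
rng = np.random.default_rng(0)
h1, h2 = rng.normal(size=2) + 1j * rng.normal(size=2)
s1, s2 = 1 + 1j, -1 + 1j
X = alamouti_encode(s1, s2)
y1 = h1 * X[0, 0] + h2 * X[0, 1]   # received in slot 1
y2 = h1 * X[1, 0] + h2 * X[1, 1]   # received in slot 2
g = abs(h1) ** 2 + abs(h2) ** 2
s1_hat, s2_hat = alamouti_combine(y1, y2, h1, h2)
```

    The cross terms cancel exactly, which is why the combiner decouples the two symbols without a matrix inverse; the same algebra underlies the PT-decoder's tolerance of differential group delay between polarizations.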

  19. Coding for Communication Channels with Dead-Time Constraints

    NASA Technical Reports Server (NTRS)

    Moision, Bruce; Hamkins, Jon

    2004-01-01

    Coding schemes have been designed and investigated specifically for optical and electronic data-communication channels in which information is conveyed via pulse-position modulation (PPM) subject to dead-time constraints. These schemes involve the use of error-correcting codes concatenated with codes denoted constrained codes. These codes are decoded using an iterative method. In pulse-position modulation, time is partitioned into frames of M slots of equal duration. Each frame contains one pulsed slot (all others are non-pulsed). For a given channel, the dead-time constraints are defined as a maximum and a minimum on the allowable time between pulses. For example, if a Q-switched laser is used to transmit the pulses, then the minimum allowable dead time is the time needed to recharge the laser for the next pulse. In the case of bits recorded on a magnetic medium, the minimum allowable time between pulses depends on the recording/playback speed and the minimum distance between pulses needed to prevent interference between adjacent bits during readout. The maximum allowable dead time for a given channel is the maximum time for which it is possible to satisfy the requirement to synchronize slots. In mathematical shorthand, the dead-time constraints for a given channel are represented by the pair of integers (d,k), where d is the minimum allowable number of zeroes between ones and k is the maximum allowable number of zeroes between ones. A system of the type to which the present schemes apply is represented by a binary-input, real-valued-output channel model illustrated in the figure. At the transmitting end, information bits are first encoded by use of an error-correcting code, then further encoded by use of a constrained code. Several constrained codes for channels subject to constraints of (d,infinity) have been investigated theoretically and computationally. The baseline codes chosen for purposes of comparison were simple PPM codes characterized by M-slot PPM
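
    The (d,k) constraint defined in the abstract is easy to state in code: every run of zeros between consecutive ones must be at least d and at most k long. A small sketch, with a hypothetical helper `ppm_frame` for building M-slot PPM frames:

```python
def ppm_frame(m, slot):
    """Hypothetical helper: an M-slot PPM frame with one pulsed slot."""
    frame = [0] * m
    frame[slot] = 1
    return frame

def satisfies_dk(bits, d, k=None):
    """True if every run of zeros *between* ones is at least d long
    and (when k is finite) at most k long."""
    ones = [i for i, b in enumerate(bits) if b == 1]
    for i, j in zip(ones, ones[1:]):
        gap = j - i - 1
        if gap < d or (k is not None and gap > k):
            return False
    return True

# Two consecutive 4-slot PPM frames: pulse in slot 0, then slot 3,
# giving a run of 6 zeros between the two pulses.
bits = ppm_frame(4, 0) + ppm_frame(4, 3)
```

    With, say, d=2 and k=8 this sequence is admissible; tightening to k=3 violates the maximum dead time, which is the kind of condition the constrained code must enforce on the PPM stream.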

  20. Neuronal code for extended time in the hippocampus

    PubMed Central

    Mankin, Emily A.; Sparks, Fraser T.; Slayyeh, Begum; Sutherland, Robert J.; Leutgeb, Stefan; Leutgeb, Jill K.

    2012-01-01

    The time when an event occurs can become part of autobiographical memories. In brain structures that support such memories, a neural code should exist that represents when or how long ago events occurred. Here we describe a neuronal coding mechanism in hippocampus that can be used to represent the recency of an experience over intervals of hours to days. When the same event is repeated after such time periods, the activity patterns of hippocampal CA1 cell populations progressively differ with increasing temporal distances. Coding for space and context is nonetheless preserved. Compared with CA1, the firing patterns of hippocampal CA3 cell populations are highly reproducible, irrespective of the time interval, and thus provide a stable memory code over time. Therefore, the neuronal activity patterns in CA1 but not CA3 include a code that can be used to distinguish between time intervals on an extended scale, consistent with behavioral studies showing that the CA1 area is selectively required for temporal coding over such periods. PMID:23132944

  1. Neural coding of interaural time differences with bilateral cochlear implants: effects of congenital deafness.

    PubMed

    Hancock, Kenneth E; Noel, Victor; Ryugo, David K; Delgutte, Bertrand

    2010-10-20

    Human bilateral cochlear implant users do poorly on tasks involving interaural time differences (ITD), a cue that provides important benefits to normal-hearing listeners, especially in challenging acoustic environments, yet the precision of neural ITD coding in acutely deafened, bilaterally implanted cats is essentially normal (Smith and Delgutte, 2007a). One explanation for this discrepancy is that the extended periods of binaural deprivation typically experienced by cochlear implant users degrades neural ITD sensitivity, by either impeding normal maturation of the neural circuitry or altering it later in life. To test this hypothesis, we recorded from single units in inferior colliculus of two groups of bilaterally implanted, anesthetized cats that contrast maximally in binaural experience: acutely deafened cats, which had normal binaural hearing until experimentation, and congenitally deaf white cats, which received no auditory inputs until the experiment. Rate responses of only half as many neurons showed significant ITD sensitivity to low-rate pulse trains in congenitally deaf cats compared with acutely deafened cats. For neurons that were ITD sensitive, ITD tuning was broader and best ITDs were more variable in congenitally deaf cats, leading to poorer ITD coding within the naturally occurring range. A signal detection model constrained by the observed physiology supports the idea that the degraded neural ITD coding resulting from deprivation of binaural experience contributes to poor ITD discrimination by human implantees. PMID:20962228
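
    ITD itself is conventionally estimated as the peak lag of the interaural cross-correlation. A minimal numpy sketch of that textbook computation (illustrative only; it is not the signal detection model used in the study):

```python
import numpy as np

def estimate_itd(left, right, fs):
    """ITD in seconds: the lag maximizing the cross-correlation of
    equal-length left- and right-ear signals. With this convention,
    negative values mean the left-ear signal leads."""
    n = len(left)
    c = np.correlate(left, right, mode="full")
    lag = int(np.argmax(c)) - (n - 1)
    return lag / fs

rng = np.random.default_rng(1)
fs = 48000
sig = rng.normal(size=2000)
d = 24                                            # 0.5 ms in samples
left = sig
right = np.concatenate([np.zeros(d), sig[:-d]])   # right ear delayed
itd = estimate_itd(left, right, fs)               # roughly -0.5 ms
```

    Naturally occurring ITDs are sub-millisecond, so the broadened tuning reported in congenitally deaf cats directly erodes discriminability over exactly this small lag range.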

  2. Perceptual Distortions in Pitch and Time Reveal Active Prediction and Support for an Auditory Pitch-Motion Hypothesis

    PubMed Central

    Henry, Molly J.; McAuley, J. Devin

    2013-01-01

    A number of accounts of human auditory perception assume that listeners use prior stimulus context to generate predictions about future stimulation. Here, we tested an auditory pitch-motion hypothesis that was developed from this perspective. Listeners judged either the time change (i.e., duration) or pitch change of a comparison frequency glide relative to a standard (referent) glide. Under a constant-velocity assumption, listeners were hypothesized to use the pitch velocity (Δf/Δt) of the standard glide to generate predictions about the pitch velocity of the comparison glide, leading to perceptual distortions along the to-be-judged dimension when the velocities of the two glides differed. These predictions were borne out in the pattern of relative points of subjective equality by a significant three-way interaction between the velocities of the two glides and task. In general, listeners’ judgments along the task-relevant dimension (pitch or time) were affected by expectations generated by the constant-velocity standard, but in an opposite manner for the two stimulus dimensions. When the comparison glide velocity was faster than the standard, listeners overestimated time change, but underestimated pitch change, whereas when the comparison glide velocity was slower than the standard, listeners underestimated time change, but overestimated pitch change. Perceptual distortions were least evident when the velocities of the standard and comparison glides were matched. Fits of an imputed velocity model further revealed increasingly larger distortions at faster velocities. The present findings provide support for the auditory pitch-motion hypothesis and add to a larger body of work revealing a role for active prediction in human auditory perception. PMID:23936462
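
    The constant-velocity prediction described above amounts to simple arithmetic: the standard glide's pitch velocity fixes how long a given pitch change "should" take. A toy sketch under that assumption (not the fitted imputed velocity model from the paper):

```python
def predicted_duration(std_df, std_dt, comp_df):
    """Pitch velocity of the standard glide, v = std_df / std_dt
    (Hz per second), predicts the duration a comparison glide
    spanning comp_df should take under constant velocity."""
    return comp_df / (std_df / std_dt)

# A standard glide covering 400 Hz in 1.0 s predicts that a 200 Hz
# comparison glide should last 0.5 s.
expected = predicted_duration(400.0, 1.0, 200.0)
```

    Perceptual distortions then arise when the comparison's actual velocity departs from this prediction, in opposite directions for judged time and judged pitch, as the abstract reports.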

  3. Bimodal stimulus timing-dependent plasticity in primary auditory cortex is altered after noise exposure with and without tinnitus.

    PubMed

    Basura, Gregory J; Koehler, Seth D; Shore, Susan E

    2015-12-01

    Central auditory circuits are influenced by the somatosensory system, a relationship that may underlie tinnitus generation. In the guinea pig dorsal cochlear nucleus (DCN), pairing spinal trigeminal nucleus (Sp5) stimulation with tones at specific intervals and orders facilitated or suppressed subsequent tone-evoked neural responses, reflecting spike timing-dependent plasticity (STDP). Furthermore, after noise-induced tinnitus, bimodal responses in DCN were shifted from Hebbian to anti-Hebbian timing rules with less discrete temporal windows, suggesting a role for bimodal plasticity in tinnitus. Here, we aimed to determine if multisensory STDP principles like those in DCN also exist in primary auditory cortex (A1), and whether they change following noise-induced tinnitus. Tone-evoked and spontaneous neural responses were recorded before and 15 min after bimodal stimulation in which the intervals and orders of auditory-somatosensory stimuli were randomized. Tone-evoked and spontaneous firing rates were influenced by the interval and order of the bimodal stimuli, and in sham-controls Hebbian-like timing rules predominated as was seen in DCN. In noise-exposed animals with and without tinnitus, timing rules shifted away from those found in sham-controls to more anti-Hebbian rules. Only those animals with evidence of tinnitus showed increased spontaneous firing rates, a purported neurophysiological correlate of tinnitus in A1. Together, these findings suggest that bimodal plasticity is also evident in A1 following noise damage and may have implications for tinnitus generation and therapeutic intervention across the central auditory circuit. PMID:26289461

  4. Censored Distributed Space-Time Coding for Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Yiu, S.; Schober, R.

    2007-12-01

    We consider the application of distributed space-time coding in wireless sensor networks (WSNs). In particular, sensors use a common noncoherent distributed space-time block code (DSTBC) to forward their local decisions to the fusion center (FC) which makes the final decision. We show that the performance of distributed space-time coding is negatively affected by erroneous sensor decisions caused by observation noise. To overcome this problem of error propagation, we introduce censored distributed space-time coding where only reliable decisions are forwarded to the FC. The optimum noncoherent maximum-likelihood and a low-complexity, suboptimum generalized likelihood ratio test (GLRT) FC decision rules are derived and the performance of the GLRT decision rule is analyzed. Based on this performance analysis we derive a gradient algorithm for optimization of the local decision/censoring threshold. Numerical and simulation results show the effectiveness of the proposed censoring scheme making distributed space-time coding a prime candidate for signaling in WSNs.

  5. The GOES Time Code Service, 1974–2004: A Retrospective

    PubMed Central

    Lombardi, Michael A.; Hanson, D. Wayne

    2005-01-01

    NIST ended its Geostationary Operational Environmental Satellites (GOES) time code service at 0 hours, 0 minutes Coordinated Universal Time (UTC) on January 1, 2005. To commemorate the end of this historically significant service, this article provides a retrospective look at the GOES service and the important role it played in the history of satellite timekeeping. PMID:27308105

  6. The GOES Time Code Service, 1974-2004: A Retrospective.

    PubMed

    Lombardi, Michael A; Hanson, D Wayne

    2005-01-01

    NIST ended its Geostationary Operational Environmental Satellites (GOES) time code service at 0 hours, 0 minutes Coordinated Universal Time (UTC) on January 1, 2005. To commemorate the end of this historically significant service, this article provides a retrospective look at the GOES service and the important role it played in the history of satellite timekeeping. PMID:27308105

  7. Auditory Distance Coding in Rabbit Midbrain Neurons and Human Perception: Monaural Amplitude Modulation Depth as a Cue

    PubMed Central

    Zahorik, Pavel; Carney, Laurel H.; Bishop, Brian B.; Kuwada, Shigeyuki

    2015-01-01

    Mechanisms underlying sound source distance localization are not well understood. Here we tested the hypothesis that a novel mechanism can create monaural distance sensitivity: a combination of auditory midbrain neurons' sensitivity to amplitude modulation (AM) depth and distance-dependent loss of AM in reverberation. We used virtual auditory space (VAS) methods for sounds at various distances in anechoic and reverberant environments. Stimulus level was constant across distance. With increasing modulation depth, some rabbit inferior colliculus neurons increased firing rates whereas others decreased. These neurons exhibited monotonic relationships between firing rates and distance for monaurally presented noise when two conditions were met: (1) the sound had AM, and (2) the environment was reverberant. The firing rates as a function of distance remained approximately constant without AM in either environment and, in an anechoic condition, even with AM. We corroborated this finding by reproducing the distance sensitivity using a neural model. We also conducted a human psychophysical study using similar methods. Normal-hearing listeners reported perceived distance in response to monaural 1 octave 4 kHz noise source sounds presented at distances of 35–200 cm. We found parallels between the rabbit neural and human responses. In both, sound distance could be discriminated only if the monaural sound in reverberation had AM. These observations support the hypothesis. When other cues are available (e.g., in binaural hearing), how much the auditory system actually uses the AM as a distance cue remains to be determined. PMID:25834060

  8. Auditory distance coding in rabbit midbrain neurons and human perception: monaural amplitude modulation depth as a cue.

    PubMed

    Kim, Duck O; Zahorik, Pavel; Carney, Laurel H; Bishop, Brian B; Kuwada, Shigeyuki

    2015-04-01

    Mechanisms underlying sound source distance localization are not well understood. Here we tested the hypothesis that a novel mechanism can create monaural distance sensitivity: a combination of auditory midbrain neurons' sensitivity to amplitude modulation (AM) depth and distance-dependent loss of AM in reverberation. We used virtual auditory space (VAS) methods for sounds at various distances in anechoic and reverberant environments. Stimulus level was constant across distance. With increasing modulation depth, some rabbit inferior colliculus neurons increased firing rates whereas others decreased. These neurons exhibited monotonic relationships between firing rates and distance for monaurally presented noise when two conditions were met: (1) the sound had AM, and (2) the environment was reverberant. The firing rates as a function of distance remained approximately constant without AM in either environment and, in an anechoic condition, even with AM. We corroborated this finding by reproducing the distance sensitivity using a neural model. We also conducted a human psychophysical study using similar methods. Normal-hearing listeners reported perceived distance in response to monaural 1 octave 4 kHz noise source sounds presented at distances of 35-200 cm. We found parallels between the rabbit neural and human responses. In both, sound distance could be discriminated only if the monaural sound in reverberation had AM. These observations support the hypothesis. When other cues are available (e.g., in binaural hearing), how much the auditory system actually uses the AM as a distance cue remains to be determined. PMID:25834060
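
    The proposed monaural cue, loss of AM depth in reverberation, can be illustrated with a toy envelope computation. The exponentially decaying "room" response below is a crude stand-in for reverberant smearing, not the virtual auditory space simulation used in the study:

```python
import numpy as np

def mod_depth(env):
    """Modulation depth m = (max - min) / (max + min) of an envelope."""
    return (env.max() - env.min()) / (env.max() + env.min())

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
env = 1.0 + 0.8 * np.sin(2 * np.pi * 10 * t)      # 10 Hz AM, depth 0.8

# Crude stand-in for reverberant smearing: convolve the envelope with
# a normalized, exponentially decaying impulse response.
ir = np.exp(-np.arange(0, 0.2, 1 / fs) / 0.05)
ir /= ir.sum()
env_reverb = np.convolve(env, ir, mode="valid")   # steady-state portion
```

    The smeared envelope keeps its mean level (the impulse response sums to one) but loses modulation depth; since reverberant energy grows relative to direct sound with distance, AM depth falls monotonically with distance, which is what lets the model neurons signal distance monaurally.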

  9. The role of GABAergic inhibition in processing of interaural time difference in the owl's auditory system.

    PubMed

    Fujita, I; Konishi, M

    1991-03-01

    The barn owl uses interaural time differences (ITDs) to localize the azimuthal position of sound. ITDs are processed by an anatomically distinct pathway in the brainstem. Neuronal selectivity for ITD is generated in the nucleus laminaris (NL) and conveyed to both the anterior portion of the ventral nucleus of the lateral lemniscus (VLVa) and the central (ICc) and external (ICx) nuclei of the inferior colliculus. With tonal stimuli, neurons in all regions are found to respond maximally not only to the real ITD, but also to ITDs that differ by integer multiples of the tonal period. This phenomenon, phase ambiguity, does not occur when ICx neurons are stimulated with noise. The main aim of this study was to determine the role of GABAergic inhibition in the processing of ITDs. Selectivity for ITD is similar in the NL and VLVa and improves in the ICc and ICx. Iontophoresis of bicuculline methiodide (BMI), a selective GABAA antagonist, decreased the ITD selectivity of ICc and ICx neurons, but did not affect that of VLVa neurons. Responses of VLVa and ICc neurons to unfavorable ITDs were below the monaural response levels. BMI raised both binaural responses to unfavorable ITDs and monaural responses, though the former remained smaller than the latter. During BMI application, ICx neurons showed phase ambiguity to noise stimuli and no longer responded to a unique ITD. BMI increased the response magnitude and changed the temporal discharge patterns in the VLVa, ICc, and ICx. Iontophoretically applied GABA exerted effects opposite to those of BMI, and the effects could be antagonized with simultaneous application of BMI. These results suggest that GABAergic inhibition (1) sharpens ITD selectivity in the ICc and ICx, (2) contributes to the elimination of phase ambiguity in the ICx, and (3) controls response magnitude and temporal characteristics in the VLVa, ICc, and ICx. Through these actions, GABAergic inhibition shapes the horizontal dimension of the auditory receptive

  10. Coding and Centering of Time in Latent Curve Models in the Presence of Interindividual Time Heterogeneity

    ERIC Educational Resources Information Center

    Blozis, Shelley A.; Cho, Young Il

    2008-01-01

    The coding of time in latent curve models has been shown to have important implications in the interpretation of growth parameters. Centering time is often done to improve interpretation but may have consequences for estimated parameters. This article studies the effects of coding and centering time when there is interindividual heterogeneity in…
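
    The interpretive effect of coding and centering time can be seen even in an ordinary least-squares fit to a single linear trajectory; latent curve models add random effects on top of this, but the intercept logic is the same. A toy sketch:

```python
import numpy as np

waves = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # five measurement waves
y = 10.0 + 2.0 * waves                        # one linear trajectory

# Time coded from the first wave: the intercept is status at wave 0.
slope0, icept0 = np.polyfit(waves, y, 1)

# Time centered at the middle wave: the intercept is status at wave 2.
slope_c, icept_c = np.polyfit(waves - 2.0, y, 1)
```

    The slope is unchanged by centering, but the intercept shifts from 10 to 14, so its interpretation moves from "initial status" to "status at the centering point"; with interindividual time heterogeneity, such choices also affect estimated variances and covariances of the growth factors.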

  11. Method for run time hardware code profiling for algorithm acceleration

    NASA Astrophysics Data System (ADS)

    Matev, Vladimir; de la Torre, Eduardo; Riesgo, Teresa

    2009-05-01

    In this paper we propose a method for run-time profiling of applications at the instruction level by analysis of loops. Instead of looking for coarse-grain blocks, we concentrate on fine-grain blocks that are still costly in terms of execution time. Most code profiling is done in software by introducing code into the application under profile, which incurs a time overhead; in this work, the position of a loop, its body, its size, and its number of executions are stored and analysed using a small, non-intrusive hardware block. The paper describes mapping the system to run-time reconfigurable systems. Synthesis results for the fine-grain code detector block and verification of its functionality are also presented. To demonstrate the concept, the MediaBench multimedia benchmark running on the chosen development platform is used.

  12. Effect of Auditory Motion Velocity on Reaction Time and Cortical Processes

    ERIC Educational Resources Information Center

    Getzmann, Stephan

    2009-01-01

    The study investigated the processing of sound motion, employing a psychophysical motion discrimination task in combination with electroencephalography. Following stationary auditory stimulation from a central space position, the onset of left- and rightward motion elicited a specific cortical response that was lateralized to the hemisphere…

  13. The Time-Course of Auditory and Visual Distraction Effects in a New Crossmodal Paradigm

    ERIC Educational Resources Information Center

    Bendixen, Alexandra; Grimm, Sabine; Deouell, Leon Y.; Wetzel, Nicole; Madebach, Andreas; Schroger, Erich

    2010-01-01

    Vision often dominates audition when attentive processes are involved (e.g., the ventriloquist effect), yet little is known about the relative potential of the two modalities to initiate a "break through of the unattended". The present study was designed to systematically compare the capacity of task-irrelevant auditory and visual events to…

  14. A Latent Consolidation Phase in Auditory Identification Learning: Time in the Awake State Is Sufficient

    ERIC Educational Resources Information Center

    Roth, Daphne Ari-Even; Kishon-Rabin, Liat; Hildesheimer, Minka; Karni, Avi

    2005-01-01

    Large gains in performance, evolving hours after practice has terminated, were reported in a number of visual and some motor learning tasks, as well as recently in an auditory nonverbal discrimination task. It was proposed that these gains reflect a latent phase of experience-triggered memory consolidation in human skill learning. It is not clear,…

  15. Auditory Learning Using a Portable Real-Time Vocoder: Preliminary Findings

    ERIC Educational Resources Information Center

    Casserly, Elizabeth D.; Pisoni, David B.

    2015-01-01

    Purpose: Although traditional study of auditory training has been in controlled laboratory settings, interest has been increasing in more interactive options. The authors examine whether such interactive training can result in short-term perceptual learning, and the range of perceptual skills it impacts. Method: Experiments 1 (N = 37) and 2 (N =…

  16. Subcortical modulation in auditory processing and auditory hallucinations.

    PubMed

    Ikuta, Toshikazu; DeRosse, Pamela; Argyelan, Miklos; Karlsgodt, Katherine H; Kingsley, Peter B; Szeszko, Philip R; Malhotra, Anil K

    2015-12-15

    Hearing perception in individuals with auditory hallucinations has not been well studied. Auditory hallucinations have previously been shown to involve primary auditory cortex activation. This activation suggests that auditory hallucinations activate the terminal of the auditory pathway as if auditory signals are submitted from the cochlea, and that a hallucinatory event is therefore perceived as hearing. The primary auditory cortex is stimulated by some unknown source that is outside of the auditory pathway. The current study aimed to assess the outcomes of stimulating the primary auditory cortex through the auditory pathway in individuals who have experienced auditory hallucinations. Sixteen patients with schizophrenia underwent functional magnetic resonance imaging (fMRI) sessions, as well as hallucination assessments. During the fMRI session, auditory stimuli were presented in one-second intervals at times when scanner noise was absent. Participants listened to auditory stimuli of sine waves (SW) (4-5.5kHz), English words (EW), and acoustically reversed English words (arEW) in a block design fashion. The arEW were employed to deliver the sound of a human voice with minimal linguistic components. Patients' auditory hallucination severity was assessed by the auditory hallucination item of the Brief Psychiatric Rating Scale (BPRS). During perception of arEW when compared with perception of SW, bilateral activation of the globus pallidus correlated with severity of auditory hallucinations. EW when compared with arEW did not correlate with auditory hallucination severity. Our findings suggest that the sensitivity of the globus pallidus to the human voice is associated with the severity of auditory hallucination. PMID:26275927

  17. Using LDPC Code Constraints to Aid Recovery of Symbol Timing

    NASA Technical Reports Server (NTRS)

    Jones, Christopher; Villasenor, John; Lee, Dong-U; Valles, Esteban

    2008-01-01

    A method of utilizing information available in the constraints imposed by a low-density parity-check (LDPC) code has been proposed as a means of aiding the recovery of symbol timing in the reception of a binary-phase-shift-keying (BPSK) signal representing such a code in the presence of noise, timing error, and/or Doppler shift between the transmitter and the receiver. This method and the receiver architecture in which it would be implemented belong to a class of timing-recovery methods and corresponding receiver architectures characterized as pilotless in that they do not require transmission and reception of pilot signals. Acquisition and tracking of a signal of the type described above have traditionally been performed upstream of, and independently of, decoding and have typically involved utilization of a phase-locked loop (PLL). However, the LDPC decoding process, which is iterative, provides information that can be fed back to the timing-recovery receiver circuits to improve performance significantly over that attainable in the absence of such feedback. Prior methods of coupling LDPC decoding with timing recovery had focused on the use of output code words produced as the iterations progress. In contrast, in the present method, one exploits the information available from the metrics computed for the constraint nodes of an LDPC code during the decoding process. In addition, the method involves the use of a waveform model that captures, better than do the waveform models of the prior methods, distortions introduced by receiver timing errors and transmitter/receiver motions. An LDPC code is commonly represented by use of a bipartite graph containing two sets of nodes. In the graph corresponding to an (n,k) code, the n variable nodes correspond to the code word symbols and the n-k constraint nodes represent the constraints that the code places on the variable nodes in order for them to form a valid code word. The decoding procedure involves iterative computation
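    The bipartite-graph structure described in the abstract can be sketched in a few lines. The toy parity-check matrix below is purely illustrative (it is not the code used in the proposed receiver); it shows how the n-k constraint nodes, as rows of H, jointly decide whether an assignment of the n variable nodes forms a valid code word.

```python
import numpy as np

# Toy parity-check matrix H for an (n=7, k=4) code:
# rows = n-k = 3 constraint nodes, columns = n = 7 variable nodes.
H = np.array([
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
])

def constraints_satisfied(word):
    """A word is a valid code word iff every constraint node's
    parity check (row of H) sums to zero mod 2."""
    return np.all(H.dot(word) % 2 == 0)

valid = np.array([0, 0, 0, 0, 0, 0, 0])      # all-zero word is always valid
corrupted = np.array([1, 0, 0, 0, 0, 0, 0])  # single bit flip

print(constraints_satisfied(valid))      # True
print(constraints_satisfied(corrupted))  # False
```

    During iterative decoding, per-constraint metrics of this kind (how strongly each parity check is satisfied) are what the proposed method feeds back to the timing-recovery circuits.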

  18. Transformation from a pure time delay to a mixed time and phase delay representation in the auditory forebrain pathway.

    PubMed

    Vonderschen, Katrin; Wagner, Hermann

    2012-04-25

    Birds and mammals exploit interaural time differences (ITDs) for sound localization. Subsequent to ITD detection by brainstem neurons, ITD processing continues in parallel midbrain and forebrain pathways. In the barn owl, both ITD detection and processing in the midbrain are specialized to extract ITDs independent of frequency, which amounts to a pure time delay representation. Recent results have elucidated different mechanisms of ITD detection in mammals, which lead to a representation of small ITDs in high-frequency channels and large ITDs in low-frequency channels, resembling a phase delay representation. However, the detection mechanism does not prevent a change in ITD representation at higher processing stages. Here we analyze ITD tuning across frequency channels with pure tone and noise stimuli in neurons of the barn owl's auditory arcopallium, a nucleus at the endpoint of the forebrain pathway. To extend the analysis of ITD representation across frequency bands to a large neural population, we employed Fourier analysis for the spectral decomposition of ITD curves recorded with noise stimuli. This method was validated using physiological as well as model data. We found that low frequencies convey sensitivity to large ITDs, whereas high frequencies convey sensitivity to small ITDs. Moreover, different linear phase frequency regimes in the high-frequency and low-frequency ranges suggested an independent convergence of inputs from these frequency channels. Our results are consistent with ITD being remodeled toward a phase delay representation along the forebrain pathway. This indicates that sensory representations may undergo substantial reorganization, presumably in relation to specific behavioral output. PMID:22539852
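    The Fourier-based decomposition described above can be illustrated on synthetic data. In this sketch (all frequencies, amplitudes, and best ITDs are invented), an ITD tuning curve is modeled as a sum of two frequency components, and the spectrum of the curve recovers the contributing channels:

```python
import numpy as np

# Illustrative ITD curve: response vs. ITD modeled as a sum of frequency
# components, each with its own best ITD (values invented for the sketch).
dt = 1e-5                                   # ITD step (s)
itd = (np.arange(400) - 200) * dt           # ITD axis, -2 ms to +2 ms
f_low, f_high = 2000.0, 6000.0              # two frequency channels (Hz)
curve = 1.0 * np.cos(2 * np.pi * f_low * (itd - 0.0008)) \
      + 0.5 * np.cos(2 * np.pi * f_high * (itd - 0.0001))

# Spectral decomposition of the ITD curve: peaks mark the contributing
# frequency channels; the phase at each peak encodes that channel's
# preferred ITD.
spectrum = np.fft.rfft(curve - curve.mean())
freqs = np.fft.rfftfreq(itd.size, d=dt)
dominant = freqs[np.argmax(np.abs(spectrum))]
print(int(dominant))  # 2000: the stronger, low-frequency component
```

    The phase of `spectrum` at each peak, divided by 2*pi*frequency, would recover that channel's best ITD, which is how linear phase-frequency regimes can be read off per frequency range.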

  19. A Neural Mechanism for Time-Window Separation Resolves Ambiguity of Adaptive Coding

    PubMed Central

    Hildebrandt, K. Jannis; Ronacher, Bernhard; Hennig, R. Matthias; Benda, Jan

    2015-01-01

    The senses of animals are confronted with changing environments and different contexts. Neural adaptation is one important tool to adjust sensitivity to varying intensity ranges. For instance, in a quiet night outdoors, our hearing is more sensitive than when we are confronted with the plurality of sounds in a large city during the day. However, adaptation also removes available information on absolute sound levels and may thus cause ambiguity. Experimental data on the trade-off between benefits and loss through adaptation is scarce and very few mechanisms have been proposed to resolve it. We present an example where adaptation is beneficial for one task—namely, the reliable encoding of the pattern of an acoustic signal—but detrimental for another—the localization of the same acoustic stimulus. With a combination of neurophysiological data, modeling, and behavioral tests, we show that adaptation in the periphery of the auditory pathway of grasshoppers enables intensity-invariant coding of amplitude modulations, but at the same time, degrades information available for sound localization. We demonstrate how focusing the response of localization neurons to the onset of relevant signals separates processing of localization and pattern information temporally. In this way, the ambiguity of adaptive coding can be circumvented and both absolute and relative levels can be processed using the same set of peripheral neurons. PMID:25761097

  1. Time Shifted PN Codes for CW Lidar, Radar, and Sonar

    NASA Technical Reports Server (NTRS)

    Campbell, Joel F. (Inventor); Prasad, Narasimha S. (Inventor); Harrison, Fenton W. (Inventor); Flood, Michael A. (Inventor)

    2013-01-01

    A continuous wave Light Detection and Ranging (CW LiDAR) system utilizes two or more laser frequencies and time- or range-shifted pseudorandom noise (PN) codes to discriminate between the laser frequencies. The performance of these codes can be improved by subtracting out the bias before processing. The CW LiDAR system may be mounted to an artificial satellite orbiting the earth, and the relative strength of the return signal for each frequency can be utilized to determine the concentration of selected gases or other substances in the atmosphere.
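    How shifted PN codes separate channels can be sketched with a cross-correlation receiver. Code length, shift, channel amplitudes, and noise level below are illustrative choices; the bias (mean) subtraction step mirrors the improvement noted in the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

# One pseudorandom noise (PN) code; each laser frequency gets a
# distinct cyclic (time/range) shift of the same code.
n = 512
pn = rng.integers(0, 2, n).astype(float)   # 0/1 PN chips
code_a = pn                                # channel A: no shift
code_b = np.roll(pn, 137)                  # channel B: shifted copy

# Received signal: both channels summed, with different strengths, plus noise.
rx = 1.0 * code_a + 0.7 * code_b + 0.1 * rng.standard_normal(n)

def correlate_shifts(rx, code):
    """Correlate the received signal against all cyclic shifts of the code,
    subtracting the bias (mean) from both before correlating."""
    c = code - code.mean()
    r = rx - rx.mean()
    return np.array([np.dot(r, np.roll(c, s)) for s in range(len(code))])

corr = correlate_shifts(rx, pn)
peaks = sorted(np.argsort(corr)[-2:].tolist())  # two strongest lags
print(peaks)  # [0, 137]: the two channels' shifts stand out
```

    The relative correlation peak heights (here roughly 1.0 vs. 0.7) play the role of the per-frequency return strengths used for gas-concentration retrieval.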

  2. EEG alpha spindles and prolonged brake reaction times during auditory distraction in an on-road driving study.

    PubMed

    Sonnleitner, Andreas; Treder, Matthias Sebastian; Simon, Michael; Willmann, Sven; Ewald, Arne; Buchner, Axel; Schrauf, Michael

    2014-01-01

    Driver distraction is responsible for a substantial number of traffic accidents. This paper describes the impact of an auditory secondary task on drivers' mental states during a primary driving task. N=20 participants performed the test procedure in a car-following task with repeated forced braking on a non-public test track. Performance measures (provoked reaction time to brake lights) and brain activity (EEG alpha spindles) were analyzed to describe distracted drivers. Further, a classification approach was used to investigate whether alpha spindles can predict drivers' mental states. Results show that reaction times and alpha spindle rate increased with time-on-task. Moreover, brake reaction times and alpha spindle rate were significantly higher while driving with the auditory secondary task as opposed to driving only. In single-trial classification, a combination of spindle parameters yielded a median classification error of about 8% in discriminating distracted from alert driving. Reduced driving performance (i.e., prolonged brake reaction times) during increased cognitive load is assumed to be indicated by EEG alpha spindles, enabling the quantification of driver distraction in experiments on public roads without verbally assessing the drivers' mental states. PMID:24144496

  3. Change in Speech Perception and Auditory Evoked Potentials over Time after Unilateral Cochlear Implantation in Postlingually Deaf Adults.

    PubMed

    Purdy, Suzanne C; Kelly, Andrea S

    2016-02-01

    Speech perception varies widely across cochlear implant (CI) users and typically improves over time after implantation. There is also some evidence for improved auditory evoked potentials (shorter latencies, larger amplitudes) after implantation, but few longitudinal studies have examined the relationship between behavioral and evoked potential measures after implantation in postlingually deaf adults. The relationship between speech perception and auditory evoked potentials was investigated in newly implanted cochlear implant users from the day of implant activation to 9 months postimplantation, on five occasions, in 10 adults aged 27 to 57 years who had been bilaterally profoundly deaf for 1 to 30 years prior to receiving a unilateral CI24 cochlear implant. Changes over time in middle latency response (MLR), mismatch negativity, and obligatory cortical auditory evoked potentials and word and sentence speech perception scores were examined. Speech perception improved significantly over the 9-month period. MLRs varied and showed no consistent change over time. Three participants aged in their 50s had absent MLRs. The pattern of change in N1 amplitudes over the five visits varied across participants. P2 area increased significantly for 1,000- and 4,000-Hz tones but not for 250 Hz. The greatest change in P2 area occurred after 6 months of implant experience. Although there was a trend for mismatch negativity peak latency to reduce and width to increase after 3 months of implant experience, there was considerable variability and these changes were not significant. Only 60% of participants had a detectable mismatch initially; this increased to 100% at 9 months. The continued change in P2 area over the period evaluated, with a trend for greater change for right hemisphere recordings, is consistent with the pattern of incremental change in speech perception scores over time. MLR, N1, and mismatch negativity changes were inconsistent and hence P2 may be a more robust measure

  4. Cross-Modal Stimulus Conflict: The Behavioral Effects of Stimulus Input Timing in a Visual-Auditory Stroop Task

    PubMed Central

    Donohue, Sarah E.; Appelbaum, Lawrence G.; Park, Christina J.; Roberts, Kenneth C.; Woldorff, Marty G.

    2013-01-01

    Cross-modal processing depends strongly on the compatibility between different sensory inputs, the relative timing of their arrival to brain processing components, and on how attention is allocated. In this behavioral study, we employed a cross-modal audio-visual Stroop task in which we manipulated the within-trial stimulus-onset-asynchronies (SOAs) of the stimulus-component inputs, the grouping of the SOAs (blocked vs. random), the attended modality (auditory or visual), and the congruency of the Stroop color-word stimuli (congruent, incongruent, neutral) to assess how these factors interact within a multisensory context. One main result was that visual distractors produced larger incongruency effects on auditory targets than vice versa. Moreover, as revealed by both overall shorter response times (RTs) and relative shifts in the psychometric incongruency-effect functions, visual-information processing was faster and produced stronger and longer-lasting incongruency effects than did auditory. When attending to either modality, stimulus incongruency from the other modality interacted with SOA, yielding larger effects when the irrelevant distractor occurred prior to the attended target, but no interaction with SOA grouping. Finally, relative to neutral stimuli, and across the wide range of the SOAs employed, congruency led to substantially more behavioral facilitation than did incongruency to interference, in contrast to findings that within-modality stimulus-compatibility effects tend to be more evenly split between facilitation and interference. In sum, the present findings reveal several key characteristics of how we process the stimulus compatibility of cross-modal sensory inputs, reflecting stimulus processing patterns that are critical for successfully navigating our complex multisensory world. PMID:23638149

  5. Time code dissemination experiment via the SIRIO-1 VHF transponder

    NASA Technical Reports Server (NTRS)

    Detoma, E.; Gobbo, G.; Leschiutta, S.; Pettiti, V.

    1982-01-01

    An experiment to evaluate the possibility of disseminating a time code via the SIRIO-1 satellite, using the onboard VHF repeater, is described. The precision in the synchronization of remote clocks was expected to be of the order of 0.1 to 1 ms. The RF carrier was in the VHF band, so that low-cost receivers could be used and a broader class of users could be served. An already existing repeater, even if not designed specifically for communications, could be utilized; the operation of this repeater was not intended to affect any other function of the spacecraft (both the SHF repeater and the VHF telemetry link were active during the time code dissemination via the VHF transponder).

  6. Reducing EnergyPlus Run Time For Code Compliance Tools

    SciTech Connect

    Athalye, Rahul A.; Gowri, Krishnan; Schultz, Robert W.; Glazer, Jason

    2014-09-12

    Integration of the EnergyPlus™ simulation engine into performance-based code compliance software raises a concern about simulation run time, which impacts timely feedback of compliance results to the user. EnergyPlus annual simulations for proposed and code-baseline building models, and mechanical equipment sizing, result in simulation run times beyond acceptable limits. This paper presents a study that compares the results of a shortened simulation period using 4 weeks of hourly weather data (one per quarter) with those of an annual simulation using the full 52 weeks of hourly weather data. Three representative building types based on DOE Prototype Building Models and three climate zones were used for determining the validity of using a shortened simulation run period. Further sensitivity analysis and run time comparisons were made to evaluate the robustness and run time savings of this approach. The results of this analysis show that the shortened simulation run period provides compliance index calculations within 1% of those predicted using annual simulation results, and typically saves about 75% of simulation run time.
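    The arithmetic behind the shortened run period can be illustrated with invented weekly energy-use figures: one representative week per quarter is scaled by 13 weeks to estimate the annual result.

```python
# Illustrative arithmetic for the shortened-run-period approach:
# simulate 4 representative weeks (one per quarter) and scale each to
# its quarter, rather than simulating all 52 weeks. The weekly
# energy-use figures below are invented for illustration.

weekly_use = [120.0 + 30.0 * (q % 2) for q in range(52)]  # fake annual profile
annual_true = sum(weekly_use)

# One representative week per quarter, each scaled by 13 weeks/quarter.
rep_weeks = [weekly_use[6], weekly_use[19], weekly_use[32], weekly_use[45]]
annual_est = sum(13.0 * w for w in rep_weeks)

error_pct = 100.0 * abs(annual_est - annual_true) / annual_true
print(round(error_pct, 1))  # 0.0 for this constructed profile
```

    In practice the representative weeks must be chosen so the scaled estimate tracks the annual simulation; the study reports agreement within 1% for its building types and climate zones.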

  7. Code-Time Diversity for Direct Sequence Spread Spectrum Systems

    PubMed Central

    Hassan, A. Y.

    2014-01-01

    Time diversity is achieved in direct sequence spread spectrum by receiving different faded, delayed copies of the transmitted symbols from different uncorrelated channel paths when the transmission signal bandwidth is greater than the coherence bandwidth of the channel. In this paper, a new time diversity scheme is proposed for spread spectrum systems. It is called code-time diversity. In this new scheme, N spreading codes are used to transmit one data symbol over N successive symbol intervals. The diversity order in the proposed scheme equals the number of spreading codes N multiplied by the number of uncorrelated paths of the channel L. The paper presents the transmitted signal model. Two demodulator structures are proposed based on the received signal models from Rayleigh flat and frequency-selective fading channels. The probability of error in the proposed diversity scheme is also calculated for the same two fading channels. Finally, simulation results are presented and compared with those of maximal ratio combining (MRC) and multiple-input multiple-output (MIMO) systems. PMID:24982925
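    A minimal sketch of the code-time idea, assuming BPSK, N = 2 spreading codes, one resolvable path per symbol interval, and fade gains known at the receiver (all simplifications for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Code-time diversity sketch (illustrative): one data symbol is spread
# by N different codes over N successive symbol intervals; the receiver
# despreads each interval and combines the decision statistics.
N, chips = 2, 64                                  # N codes, chips per code
codes = rng.choice([-1.0, 1.0], size=(N, chips))  # N spreading codes
symbol = -1.0                                     # BPSK data symbol

# Transmit: interval i carries symbol * codes[i].
tx = np.concatenate([symbol * codes[i] for i in range(N)])

# Channel: each interval sees an independent fade plus noise.
fades = np.array([0.2, 0.9])                      # deep fade, strong path
rx = np.concatenate([fades[i] * tx[i*chips:(i+1)*chips] for i in range(N)]) \
   + 0.2 * rng.standard_normal(N * chips)

# Receive: despread each interval with its own code, weight by the
# (assumed known) fade, and sum -- a maximal-ratio-style combiner.
stats = [np.dot(rx[i*chips:(i+1)*chips], codes[i]) / chips for i in range(N)]
decision = np.sign(sum(f * s for f, s in zip(fades, stats)))
print(decision)  # -1.0: symbol recovered despite the deep fade on interval 0
```

    The point of the scheme is visible here: even with interval 0 deeply faded, the independently faded interval 1 carries the same symbol under a different code, so the combined decision is still correct.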

  8. A novel 2D wavelength-time chaos code in optical CDMA system

    NASA Astrophysics Data System (ADS)

    Zhang, Qi; Xin, Xiangjun; Wang, Yongjun; Zhang, Lijia; Yu, Chongxiu; Meng, Nan; Wang, Houtian

    2012-11-01

    A two-dimensional wavelength-time chaos code is proposed and constructed for a synchronous optical code division multiple access system. The access performance is compared among the one-dimensional chaos code, the WDM/chaos code, and the proposed code. The comparison shows that the two-dimensional wavelength-time chaos code possesses larger capacity, better spectral efficiency, and a lower bit-error ratio than the WDM/chaos combination and the one-dimensional chaos code.
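    A common way to derive a chaos code is to threshold a logistic-map orbit into bits and arrange them as a wavelength × time matrix. The sketch below takes that generic approach with illustrative dimensions; it is not the construction from the paper.

```python
import numpy as np

# Sketch of a 2D wavelength-time code built from a chaotic sequence.
# The logistic map with r near 4 is chaotic; thresholding its orbit
# at 0.5 yields a pseudorandom bit stream. Sizes are illustrative.
def logistic_bits(x0, n, r=3.99):
    x, bits = x0, []
    for _ in range(n):
        x = r * x * (1 - x)               # logistic-map iteration
        bits.append(1 if x > 0.5 else 0)  # threshold to a chip bit
    return np.array(bits)

wavelengths, time_chips = 4, 8
code = logistic_bits(0.31, wavelengths * time_chips).reshape(wavelengths,
                                                             time_chips)
print(code.shape)  # each row: one wavelength's on/off chip pattern over time
```

    Because the map is sensitive to initial conditions, different values of `x0` give different users effectively uncorrelated code matrices.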

  9. The topography of frequency and time representation in primate auditory cortices

    PubMed Central

    Baumann, Simon; Joly, Olivier; Rees, Adrian; Petkov, Christopher I; Sun, Li; Thiele, Alexander; Griffiths, Timothy D

    2015-01-01

    Natural sounds can be characterised by their spectral content and temporal modulation, but how the brain is organized to analyse these two critical sound dimensions remains uncertain. Using functional magnetic resonance imaging, we demonstrate a topographical representation of amplitude modulation rate in the auditory cortex of awake macaques. The representation of this temporal dimension is organized in approximately concentric bands of equal rates across the superior temporal plane in both hemispheres, progressing from high rates in the posterior core to low rates in the anterior core and lateral belt cortex. In A1 the resulting gradient of modulation rate runs approximately perpendicular to the axis of the tonotopic gradient, suggesting an orthogonal organisation of spectral and temporal sound dimensions. In auditory belt areas this relationship is more complex. The data suggest a continuous representation of modulation rate across several physiological areas, in contradistinction to a separate representation of frequency within each area. DOI: http://dx.doi.org/10.7554/eLife.03256.001 PMID:25590651

  10. Auditory temporal preparation induced by rhythmic cues during concurrent auditory working memory tasks.

    PubMed

    Cutanda, Diana; Correa, Ángel; Sanabria, Daniel

    2015-06-01

    The present study investigated whether participants can develop temporal preparation driven by auditory isochronous rhythms when concurrently performing an auditory working memory (WM) task. In Experiment 1, participants had to respond to an auditory target presented after a regular or an irregular sequence of auditory stimuli while concurrently performing a Sternberg-type WM task. Results showed that participants responded faster after regular compared with irregular rhythms and that this effect was not affected by WM load; however, the lack of a significant main effect of WM load made it difficult to draw any conclusion regarding the influence of the dual-task manipulation in Experiment 1. In order to enhance dual-task interference, Experiment 2 combined the auditory rhythm procedure with an auditory N-Back task, which required WM updating (monitoring and coding of the information) and was presumably more demanding than the mere rehearsal of the WM task used in Experiment 1. Results now clearly showed dual-task interference effects (slower reaction times [RTs] in the high- vs. the low-load condition). However, such interference did not affect temporal preparation induced by rhythms, with faster RTs after regular than after irregular sequences in the high-load and low-load conditions. These results revealed that secondary tasks demanding memory updating, relative to tasks just demanding rehearsal, produced larger interference effects on overall RTs in the auditory rhythm task. Nevertheless, rhythm regularity exerted a strong temporal preparation effect that survived the interference of the WM task even when both tasks competed for processing resources within the auditory modality. PMID:25893682

  11. Development of the auditory system

    PubMed Central

    Litovsky, Ruth

    2015-01-01

    Auditory development involves changes in the peripheral and central nervous system along the auditory pathways, and these occur naturally, and in response to stimulation. Human development occurs along a trajectory that can last decades, and is studied using behavioral psychophysics, as well as physiologic measurements with neural imaging. The auditory system constructs a perceptual space that takes information from objects and groups, segregates sounds, and provides meaning and access to communication tools such as language. Auditory signals are processed in a series of analysis stages, from peripheral to central. Coding of information has been studied for features of sound, including frequency, intensity, loudness, and location, in quiet and in the presence of maskers. In the latter case, the ability of the auditory system to perform an analysis of the scene becomes highly relevant. While some basic abilities are well developed at birth, there is a clear prolonged maturation of auditory development well into the teenage years. Maturation involves auditory pathways. However, non-auditory changes (attention, memory, cognition) play an important role in auditory development. The ability of the auditory system to adapt in response to novel stimuli is a key feature of development throughout the nervous system, known as neural plasticity. PMID:25726262

  12. Potassium conductance dynamics confer robust spike-time precision in a neuromorphic model of the auditory brain stem

    PubMed Central

    Boahen, Kwabena

    2013-01-01

    A fundamental question in neuroscience is how neurons perform precise operations despite inherent variability. This question also applies to neuromorphic engineering, where low-power microchips emulate the brain using large populations of diverse silicon neurons. Biological neurons in the auditory pathway display precise spike timing, critical for sound localization and interpretation of complex waveforms such as speech, even though they are a heterogeneous population. Silicon neurons are also heterogeneous, due to a key design constraint in neuromorphic engineering: smaller transistors offer lower power consumption and more neurons per unit area of silicon, but also more variability between transistors and thus between silicon neurons. Utilizing this variability in a neuromorphic model of the auditory brain stem with 1,080 silicon neurons, we found that a low-voltage-activated potassium conductance (gKL) enables precise spike timing via two mechanisms: statically reducing the resting membrane time constant and dynamically suppressing late synaptic inputs. The relative contribution of these two mechanisms is unknown because blocking gKL in vitro eliminates dynamic adaptation but also lengthens the membrane time constant. We replaced gKL with a static leak in silico to recover the short membrane time constant and found that silicon neurons could mimic the spike-time precision of their biological counterparts, but only over a narrow range of stimulus intensities and biophysical parameters. The dynamics of gKL were required for precise spike timing robust to stimulus variation across a heterogeneous population of silicon neurons, thus explaining how neural and neuromorphic systems may perform precise operations despite inherent variability. PMID:23554436
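    The two mechanisms can be caricatured with a toy conductance-based model (all parameters invented, not the paper's silicon-neuron values): with first-order activation dynamics, gKL lets the membrane respond to the onset of a current step and then sags back, suppressing the late response, whereas a static leak matched to the resting conductance responds monotonically.

```python
import numpy as np

# Toy conductance-based neuron (illustrative units and parameters):
# a low-voltage-activated K conductance gKL with first-order activation
# dynamics, versus a static leak matched to the resting gKL conductance.
def simulate(dynamic, t_end=60.0, dt=0.01):
    gL, EL = 0.3, -65.0                      # fixed leak
    gbar, EK = 1.0, -90.0                    # max gKL, K reversal (mV)
    vh, k, tau_w = -60.0, 5.0, 5.0           # gKL activation parameters
    w_inf = lambda v: 1.0 / (1.0 + np.exp(-(v - vh) / k))
    v = -71.0                                # near resting potential
    w = w_inf(v)
    g_static = gbar * w                      # static-leak control
    vs = []
    for i in range(int(t_end / dt)):
        I = 3.0 if i * dt >= 20.0 else 0.0   # current step from t = 20 ms
        gk = gbar * w if dynamic else g_static
        v += dt * (-gL * (v - EL) - gk * (v - EK) + I)
        if dynamic:
            w += dt * (w_inf(v) - w) / tau_w # gKL activates with depolarization
        vs.append(v)
    return np.array(vs[int(20.0 / dt):])     # membrane potential during step

v_dyn, v_stat = simulate(True), simulate(False)
sag_dyn = v_dyn.max() - v_dyn[-1]            # dynamic gKL: peak, then sag
sag_stat = v_stat.max() - v_stat[-1]         # static leak: monotonic rise
print(round(float(sag_dyn), 1), round(float(sag_stat), 2))
```

    The onset-emphasizing sag in the dynamic case is the "suppression of late synaptic inputs" mechanism; replacing gKL with the static leak recovers the short resting time constant but loses that dynamic suppression.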

  13. Long-range correlation properties in timing of skilled piano performance: the influence of auditory feedback and deep brain stimulation

    PubMed Central

    Herrojo Ruiz, María; Hong, Sang Bin; Hennig, Holger; Altenmüller, Eckart; Kühn, Andrea A.

    2014-01-01

    Unintentional timing deviations during musical performance can be conceived of as timing errors. However, recent research on humanizing computer-generated music has demonstrated that timing fluctuations that exhibit long-range temporal correlations (LRTC) are preferred by human listeners. This preference can be accounted for by the ubiquitous presence of LRTC in human tapping and rhythmic performances. Interestingly, the manifestation of LRTC in tapping behavior seems to be driven in a subject-specific manner by the LRTC properties of resting-state background cortical oscillatory activity. In this framework, the current study aimed to investigate whether propagation of timing deviations during the skilled, memorized piano performance (without metronome) of 17 professional pianists exhibits LRTC and whether the structure of the correlations is influenced by the presence or absence of auditory feedback. As an additional goal, we set out to investigate the influence of altering the dynamics along the cortico-basal-ganglia-thalamo-cortical network via deep brain stimulation (DBS) on the LRTC properties of musical performance. Specifically, we investigated temporal deviations during the skilled piano performance of a non-professional pianist who was treated with subthalamic-deep brain stimulation (STN-DBS) due to severe Parkinson's disease, with predominant tremor affecting his right upper extremity. In the tremor-affected right hand, the timing fluctuations of the performance exhibited random correlations with DBS OFF. By contrast, DBS restored long-range dependency in the temporal fluctuations, corresponding with the general motor improvement on DBS. Overall, the present investigations demonstrate the presence of LRTC in skilled piano performances, indicating that unintentional temporal deviations are correlated over a wide range of time scales. This phenomenon is stable after removal of the auditory feedback, but is altered by STN-DBS, which suggests that cortico
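    Long-range temporal correlations of the kind discussed above are commonly quantified with detrended fluctuation analysis (DFA), whose scaling exponent is near 0.5 for uncorrelated fluctuations and approaches 1 for LRTC; the study's exact pipeline may differ from this sketch.

```python
import numpy as np

# Detrended fluctuation analysis (DFA), a standard estimator of
# long-range temporal correlations in a timing series.
def dfa_exponent(x, scales=(16, 32, 64, 128)):
    y = np.cumsum(x - np.mean(x))              # integrated profile
    flucts = []
    for s in scales:
        n_seg = len(y) // s
        f2 = []
        for i in range(n_seg):
            seg = y[i*s:(i+1)*s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)       # linear detrend per window
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))
    # slope of log F(s) vs. log s is the DFA exponent alpha
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

rng = np.random.default_rng(2)
white = rng.standard_normal(4096)              # uncorrelated timing noise
alpha = dfa_exponent(white)
print(round(float(alpha), 2))                  # near 0.5 for white noise
```

    Applied to inter-keystroke timing deviations, an exponent near 0.5 corresponds to the "random correlations" reported with DBS OFF, and a larger exponent to restored long-range dependency.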

  15. Auditory agnosia.

    PubMed

    Slevc, L Robert; Shell, Alison R

    2015-01-01

    Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. PMID:25726291

  16. Auditory brainstem response to complex sounds: a tutorial

    PubMed Central

    Skoe, Erika; Kraus, Nina

    2010-01-01

    This tutorial provides a comprehensive overview of the methodological approach to collecting and analyzing auditory brainstem responses to complex sounds (cABRs). cABRs provide a window into how behaviorally relevant sounds such as speech and music are processed in the brain. Because temporal and spectral characteristics of sounds are preserved in this subcortical response, cABRs can be used to assess specific impairments and enhancements in auditory processing. Notably, subcortical function is neither passive nor hardwired but dynamically interacts with higher-level cognitive processes to refine how sounds are transcribed into neural code. This experience-dependent plasticity, which can occur on a number of time scales (e.g., life-long experience with speech or music, short-term auditory training, online auditory processing), helps shape sensory perception. Thus, by being an objective and non-invasive means for examining cognitive function and experience-dependent processes in sensory activity, cABRs have considerable utility in the study of populations where auditory function is of interest (e.g., auditory experts such as musicians, persons with hearing loss, auditory processing and language disorders). This tutorial is intended for clinicians and researchers seeking to integrate cABRs into their clinical and/or research programs. PMID:20084007

  17. Auditory presentation and synchronization in Adobe Flash and HTML5/JavaScript Web experiments.

    PubMed

    Reimers, Stian; Stewart, Neil

    2016-09-01

    Substantial recent research has examined the accuracy of presentation durations and response time measurements for visually presented stimuli in Web-based experiments, with a general conclusion that accuracy is acceptable for most kinds of experiments. However, many areas of behavioral research use auditory stimuli instead of, or in addition to, visual stimuli. Much less is known about auditory accuracy using standard Web-based testing procedures. We used a millisecond-accurate Black Box Toolkit to measure the actual durations of auditory stimuli and the synchronization of auditory and visual presentation onsets. We examined the distribution of timings for 100 presentations of auditory and visual stimuli across two computers with different specifications, three commonly used browsers, and code written in either Adobe Flash or JavaScript. We also examined different coding options for attempting to synchronize the auditory and visual onsets. Overall, we found that auditory durations were very consistent, but that the lags between visual and auditory onsets varied substantially across browsers and computer systems. PMID:27421976
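
The study's core measurement, the lag between paired visual and auditory onsets across repeated presentations, reduces to simple summary statistics. A minimal sketch in Python; the function name and the timestamp values are illustrative, not the paper's data:

```python
import statistics

def av_lag_summary(visual_onsets_ms, audio_onsets_ms):
    """Summarize audio-visual onset asynchrony from paired onset timestamps.

    Positive lag = the audio stream started after the visual stimulus.
    """
    lags = [a - v for v, a in zip(visual_onsets_ms, audio_onsets_ms)]
    return {
        "mean_lag_ms": statistics.mean(lags),
        "sd_ms": statistics.stdev(lags),
        "min_ms": min(lags),
        "max_ms": max(lags),
    }

# Hypothetical log: visual onsets at trial starts, audio trailing by a
# browser-dependent amount with some jitter.
visual = [0.0, 1000.0, 2000.0, 3000.0]
audio = [38.2, 1041.7, 2035.9, 3044.1]
print(av_lag_summary(visual, audio))
```

The standard deviation of the lags, not just the mean, is what matters for synchronization claims: a constant offset can be compensated in code, jitter cannot.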

  18. Electrical stimulation of the auditory nerve: the coding of frequency, the perception of pitch and the development of cochlear implant speech processing strategies for profoundly deaf people.

    PubMed

    Clark, G M

    1996-09-01

    1. The development of speech processing strategies for multiple-channel cochlear implants has depended on encoding sound frequencies and intensities as temporal and spatial patterns of electrical stimulation of the auditory nerve fibres so that the speech information of most importance for intelligibility could be transmitted. 2. Initial physiological studies showed that rate encoding of electrical stimulation above 200 pulses/s could not reproduce the normal response patterns in auditory neurons for acoustic stimulation in the speech frequency range above 200 Hz and suggested that place coding was appropriate for the higher frequencies. 3. Rate difference limens in the experimental animal were only similar to those for sound up to 200 Hz. 4. Rate difference limens in implant patients were similar to those obtained in the experimental animal. 5. Satisfactory rate discrimination could be made for durations of 50 and 100 ms, but not 25 ms. This made rate suitable for encoding longer-duration suprasegmental speech information, but not segmental information, such as consonants. The rate of stimulation could also be perceived as pitch, discriminated at different electrode sites along the cochlea and discriminated for stimuli across electrodes. 6. Place pitch could be scaled according to the site of stimulation in the cochlea so that a frequency scale was preserved; it also had a different quality from rate pitch and was described as tonality. Place pitch could also be discriminated for the shorter durations (25 ms) required for identifying consonants. 7. The inaugural speech processing strategy encoded the second formant frequencies (concentrations of frequency energy in the mid frequency range of most importance for speech intelligibility) as place of stimulation, the voicing frequency as rate of stimulation and the intensity as current level. Our further speech processing strategies have extracted additional frequency information and coded this as place of stimulation.

  19. Recursive time-varying filter banks for subband image coding

    NASA Technical Reports Server (NTRS)

    Smith, Mark J. T.; Chung, Wilson C.

    1992-01-01

    Filter banks and wavelet decompositions that employ recursive filters have been considered previously and are recognized for their efficiency in partitioning the frequency spectrum. This paper presents an analysis of a new infinite impulse response (IIR) filter bank in which these computationally efficient filters may be changed adaptively in response to the input. The filter bank is presented and discussed in the context of finite-support signals with the intended application in subband image coding. In the absence of quantization errors, exact reconstruction can be achieved and by the proper choice of an adaptation scheme, it is shown that IIR time-varying filter banks can yield improvement over conventional ones.

  20. Time transfer by IRIG-B time code via dedicated telephone link

    NASA Technical Reports Server (NTRS)

    Missout, G.; Beland, J.; Label, D.; Bedard, G.; Bussiere, P.

    1982-01-01

    Measurements were made of the stability of time transfer by the IRIG-B code over a dedicated telephone link on a microwave system. The short- and long-term Allan variance was measured on both types of microwave system, one of which is synchronized, the other having free local oscillators. The results promise a time transfer accuracy of 10 μs. The paper also describes a prototype slave clock designed to detect interference in the IRIG-B code to ensure local time is kept during such interference.
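
The stability metric used here, the Allan variance, can be computed directly from a series of time-error (phase) samples. A minimal sketch using the standard second-difference-of-phase estimator; the function name and defaults are my own:

```python
def allan_variance(phase, tau0, m=1):
    """Allan variance of a time-error (phase) series.

    phase: time-error samples in seconds, taken every tau0 seconds.
    m: averaging factor; the variance is evaluated at tau = m * tau0.
    Uses the second-difference-of-phase estimator:
      sigma_y^2(tau) = sum (x[i+2m] - 2x[i+m] + x[i])^2 / (2 (N-2m) tau^2)
    """
    n = len(phase)
    if n < 2 * m + 1:
        raise ValueError("need at least 2*m+1 phase samples")
    tau = m * tau0
    diffs = [
        phase[i + 2 * m] - 2 * phase[i + m] + phase[i]
        for i in range(n - 2 * m)
    ]
    return sum(d * d for d in diffs) / (2 * (n - 2 * m) * tau * tau)
```

A clock with a constant frequency offset has a linear phase ramp, whose second differences vanish, so its Allan variance is zero; only frequency *instability* registers.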

  1. Parallel Processing of Distributed Video Coding to Reduce Decoding Time

    NASA Astrophysics Data System (ADS)

    Tonomura, Yoshihide; Nakachi, Takayuki; Fujii, Tatsuya; Kiya, Hitoshi

    This paper proposes a parallelized DVC framework that treats each bitplane independently to reduce the decoding time. Unfortunately, simple parallelization generates inaccurate bit probabilities because additional side information is not available for the decoding of subsequent bitplanes, which degrades encoding efficiency. Our solution is an effective estimation method that can calculate the bit probability as accurately as possible by index assignment without recourse to side information. Moreover, we improve the coding performance of Rate-Adaptive LDPC (RA-LDPC), which is used in the parallelized DVC framework. This proposal selects a fitting sparse matrix for each bitplane according to the syndrome rate estimation results at the encoder side. Simulations show that our parallelization method reduces the decoding time by up to 35% and achieves a bit rate reduction of about 10%.

  2. Auditory Learning Using a Portable Real-Time Vocoder: Preliminary Findings

    PubMed Central

    Pisoni, David B.

    2015-01-01

    Purpose Although traditional study of auditory training has been in controlled laboratory settings, interest has been increasing in more interactive options. The authors examine whether such interactive training can result in short-term perceptual learning, and the range of perceptual skills it impacts. Method Experiments 1 (N = 37) and 2 (N = 21) used pre- and posttest measures of speech and nonspeech recognition to find evidence of learning (within subject) and to compare the effects of 3 kinds of training (between subject) on the perceptual abilities of adults with normal hearing listening to simulations of cochlear implant processing. Subjects were given interactive, standard lab-based, or control training experience for 1 hr between the pre- and posttest tasks (unique sets across Experiments 1 & 2). Results Subjects receiving interactive training showed significant learning on a sentence-recognition-in-quiet task (Experiment 1), outperforming controls but not lab-trained subjects following training. Training groups did not differ significantly on any other task, even those directly involved in the interactive training experience. Conclusions Interactive training has the potential to produce learning in 1 domain (sentence recognition in quiet), but the particulars of the present training method (short duration, high complexity) may have limited benefits to this single criterion task. PMID:25674884

  3. Focal manipulations of formant trajectories reveal a role of auditory feedback in the online control of both within-syllable and between-syllable speech timing.

    PubMed

    Cai, Shanqing; Ghosh, Satrajit S; Guenther, Frank H; Perkell, Joseph S

    2011-11-01

    Within the human motor repertoire, speech production has a uniquely high level of spatiotemporal complexity. The production of running speech comprises the traversing of spatial positions with precisely coordinated articulator movements to produce 10-15 sounds/s. How does the brain use auditory feedback, namely the self-perception of produced speech sounds, in the online control of spatial and temporal parameters of multisyllabic articulation? This question has important bearings on the organizational principles of sequential actions, yet its answer remains controversial due to the long latency of the auditory feedback pathway and technical challenges involved in manipulating auditory feedback in precisely controlled ways during running speech. In this study, we developed a novel technique for introducing time-varying, focal perturbations in the auditory feedback during multisyllabic, connected speech. Manipulations of spatial and temporal parameters of the formant trajectory were tested separately on two groups of subjects as they uttered "I owe you a yo-yo." Under these perturbations, significant and specific changes were observed in both the spatial and temporal parameters of the produced formant trajectories. Compensations to spatial perturbations were bidirectional and opposed the perturbations. Furthermore, under perturbations that manipulated the timing of auditory feedback trajectory (slow-down or speed-up), significant adjustments in syllable timing were observed in the subjects' productions. These results highlight the systematic roles of auditory feedback in the online control of a highly over-learned action as connected speech articulation and provide a first look at the properties of this type of sensorimotor interaction in sequential movements. PMID:22072698

  4. Suboptimal Use of Neural Information in a Mammalian Auditory System

    PubMed Central

    Zilany, Muhammad S. A.; Huang, Nicholas J.; Abrams, Kristina S.; Idrobo, Fabio

    2014-01-01

    Establishing neural determinants of psychophysical performance requires both behavioral and neurophysiological metrics amenable to correlative analyses. It is often assumed that organisms use neural information optimally, such that any information available in a neural code that could improve behavioral performance is used. Studies have shown that detection of amplitude-modulated (AM) auditory tones by humans is correlated to neural synchrony thresholds, as recorded in rabbit at the level of the inferior colliculus, the first level of the ascending auditory pathway where neurons are tuned to AM stimuli. Behavioral thresholds in rabbit, however, are ∼10 dB higher (i.e., 3 times less sensitive) than in humans, and are better correlated to rate-based than temporal coding schemes in the auditory midbrain. The behavioral and physiological results shown here illustrate an unexpected, suboptimal utilization of available neural information that could provide new insights into the mechanisms that link neuronal function to behavior. PMID:24453321

  5. Time-Dependent, Parallel Neutral Particle Transport Code System.

    Energy Science and Technology Software Center (ESTSC)

    2009-09-10

    Version 00 PARTISN (PARallel, TIme-Dependent SN) is the evolutionary successor to CCC-547/DANTSYS. The PARTISN code package is a modular computer program package designed to solve the time-independent or dependent multigroup discrete ordinates form of the Boltzmann transport equation in several different geometries. The modular construction of the package separates the input processing, the transport equation solving, and the post processing (or edit) functions into distinct code modules: the Input Module, the Solver Module, and the Edit Module, respectively. PARTISN is the evolutionary successor to the DANTSYS code system package. The Input and Edit Modules in PARTISN are very similar to those in DANTSYS. However, unlike DANTSYS, the Solver Module in PARTISN contains one, two, and three-dimensional solvers in a single module. In addition to the diamond-differencing method, the Solver Module also has Adaptive Weighted Diamond-Differencing (AWDD), Linear Discontinuous (LD), and Exponential Discontinuous (ED) spatial differencing methods. The spatial mesh may consist of either a standard orthogonal mesh or a block adaptive orthogonal mesh. The Solver Module may be run in parallel for two and three dimensional problems. One can now run 1-D problems in parallel using Energy Domain Decomposition (triggered by Block 5 input keyword npeg>0). EDD can also be used in 2-D/3-D with or without our standard Spatial Domain Decomposition. Both the static (fixed source or eigenvalue) and time-dependent forms of the transport equation are solved in forward or adjoint mode. In addition, PARTISN now has a probabilistic mode for Probability of Initiation (static) and Probability of Survival (dynamic) calculations. Vacuum, reflective, periodic, white, or inhomogeneous boundary conditions are solved. General anisotropic scattering and inhomogeneous sources are permitted. PARTISN solves the transport equation on orthogonal (single level or block-structured AMR) grids in 1-D
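
PARTISN itself is a large production code, but the diamond-differencing discrete-ordinates scheme it builds on can be shown in miniature. A sketch of source iteration for a one-group, 1-D slab with S4 Gauss-Legendre quadrature, vacuum boundaries, and isotropic scattering; all parameter values are illustrative, and the function is not part of PARTISN:

```python
def sn_slab(nx=50, width=10.0, sigma_t=1.0, sigma_s=0.5, q_ext=1.0,
            mus=(-0.8611363116, -0.3399810436, 0.3399810436, 0.8611363116),
            wts=(0.3478548451, 0.6521451549, 0.6521451549, 0.3478548451),
            tol=1e-8, max_iter=500):
    """Source iteration for the 1-D slab transport equation using
    diamond-difference spatial discretization and S4 Gauss-Legendre
    angular quadrature; vacuum boundaries, isotropic scattering."""
    dx = width / nx
    phi = [0.0] * nx                                      # scalar flux
    for _ in range(max_iter):
        src = [0.5 * (sigma_s * p + q_ext) for p in phi]  # isotropic source
        phi_new = [0.0] * nx
        for mu, w in zip(mus, wts):
            psi_edge = 0.0                                # vacuum inflow
            cells = range(nx) if mu > 0 else range(nx - 1, -1, -1)
            for i in cells:
                c = 2.0 * abs(mu) / dx
                psi = (src[i] + c * psi_edge) / (sigma_t + c)
                psi_edge = 2.0 * psi - psi_edge           # diamond closure
                phi_new[i] += w * psi
        err = max(abs(a - b) for a, b in zip(phi_new, phi))
        phi = phi_new
        if err < tol:
            return phi
    return phi
```

For a slab many mean free paths thick, the central flux should approach the infinite-medium value q_ext / (sigma_t - sigma_s) = 2.0 here, with leakage depressing the flux near the vacuum edges.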

  6. Time and Category Information in Pattern-Based Codes

    PubMed Central

    Eyherabide, Hugo Gabriel; Samengo, Inés

    2010-01-01

    Sensory stimuli are usually composed of different features (the what) appearing at irregular times (the when). Neural responses often use spike patterns to represent sensory information. The what is hypothesized to be encoded in the identity of the elicited patterns (the pattern categories), and the when, in the time positions of patterns (the pattern timing). However, this standard view is oversimplified. In the real world, the what and the when might not be separable concepts, for instance, if they are correlated in the stimulus. In addition, neuronal dynamics can condition the pattern timing to be correlated with the pattern categories. Hence, timing and categories of patterns may not constitute independent channels of information. In this paper, we assess the role of spike patterns in the neural code, irrespective of the nature of the patterns. We first define information-theoretical quantities that allow us to quantify the information encoded by different aspects of the neural response. We also introduce the notion of synergy/redundancy between time positions and categories of patterns. We subsequently establish the relation between the what and the when in the stimulus with the timing and the categories of patterns. To that aim, we quantify the mutual information between different aspects of the stimulus and different aspects of the response. This formal framework allows us to determine the precise conditions under which the standard view holds, as well as the departures from this simple case. Finally, we study the capability of different response aspects to represent the what and the when in the neural response. PMID:21151371
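
The synergy/redundancy notion introduced here compares the information that pattern timing and pattern category carry jointly about the stimulus against the sum of their individual contributions. A plug-in estimate from discrete samples; this is a toy sketch, and the variable names are mine rather than the paper's notation:

```python
import math
from collections import Counter

def mutual_info(pairs):
    """Mutual information (bits) between two discrete variables,
    estimated from a list of (x, y) samples via plug-in probabilities."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum(
        (c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )

def synergy(samples):
    """Synergy/redundancy of pattern timing T and category C about
    stimulus S, from (s, t, c) triples.

    Positive -> synergy, negative -> redundancy,
    zero -> timing and categories act as independent channels.
    """
    i_joint = mutual_info([(s, (t, c)) for s, t, c in samples])
    i_t = mutual_info([(s, t) for s, t, _ in samples])
    i_c = mutual_info([(s, c) for s, _, c in samples])
    return i_joint - i_t - i_c
```

An XOR-like stimulus dependence (the stimulus is recoverable only from timing and category together) gives maximal synergy; fully correlated timing and category give pure redundancy.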

  7. Speech motor learning changes the neural response to both auditory and somatosensory signals

    PubMed Central

    Ito, Takayuki; Coppola, Joshua H.; Ostry, David J.

    2016-01-01

    In the present paper, we present evidence for the idea that speech motor learning is accompanied by changes to the neural coding of both auditory and somatosensory stimuli. Participants in our experiments undergo adaptation to altered auditory feedback, an experimental model of speech motor learning which like visuo-motor adaptation in limb movement, requires that participants change their speech movements and associated somatosensory inputs to correct for systematic real-time changes to auditory feedback. We measure the sensory effects of adaptation by examining changes to auditory and somatosensory event-related responses. We find that adaptation results in progressive changes to speech acoustical outputs that serve to correct for the perturbation. We also observe changes in both auditory and somatosensory event-related responses that are correlated with the magnitude of adaptation. These results indicate that sensory change occurs in conjunction with the processes involved in speech motor adaptation. PMID:27181603

  8. Application of satellite time transfer in autonomous spacecraft clocks. [binary time code

    NASA Technical Reports Server (NTRS)

    Chi, A. R.

    1979-01-01

    The conceptual design of a spacecraft clock that will provide a standard time scale for experimenters in future spacecraft., and can be sychronized to a time scale without the need for additional calibration and validation is described. The time distribution to the users is handled through onboard computers, without human intervention for extended periods. A group parallel binary code, under consideration for onboard use, is discussed. Each group in the code can easily be truncated. The autonomously operated clock not only achieves simpler procedures and shorter lead times for data processing, but also contributes to spacecraft autonomy for onboard navigation and data packetization. The clock can be used to control the sensor in a spacecraft, compare another time signal such as that from the global positioning system, and, if the cost is not a consideration, can be used on the ground in remote sites for timekeeping and control.
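
The truncatable group-parallel binary code described above can be sketched as independent fixed-width binary fields: dropping the highest-order group shortens the word without disturbing the remaining fields. The field widths and function names below are hypothetical, not the flight format:

```python
def encode_group_binary(days, hours, minutes, seconds):
    """Hypothetical group-parallel binary time code: each field is an
    independent fixed-width binary group, so a higher-order group can
    be truncated (dropped) without affecting the rest."""
    groups = [(days, 16), (hours, 5), (minutes, 6), (seconds, 6)]
    return "".join(format(v, f"0{w}b") for v, w in groups)

def decode_group_binary(code, keep_days=True):
    """Decode; if the day group was truncated, the code is 16 bits shorter."""
    widths = ([16] if keep_days else []) + [5, 6, 6]
    pos, out = 0, []
    for w in widths:
        out.append(int(code[pos:pos + w], 2))
        pos += w
    return out
```

A user needing only time-of-day reads the last three groups; a user needing the full epoch keeps the day group as well, which is the sense in which "each group in the code can easily be truncated."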

  9. Modeling neural adaptation in the frog auditory system

    NASA Astrophysics Data System (ADS)

    Wotton, Janine; McArthur, Kimberly; Bohara, Amit; Ferragamo, Michael; Megela Simmons, Andrea

    2005-09-01

    Extracellular recordings from the auditory midbrain, Torus semicircularis, of the leopard frog reveal a wide diversity of tuning patterns. Some cells seem to be well suited for time-based coding of signal envelope, and others for rate-based coding of signal frequency. Adaptation for ongoing stimuli plays a significant role in shaping the frequency-dependent response rate at different levels of the frog auditory system. Anuran auditory-nerve fibers are unusual in that they reveal frequency-dependent adaptation [A. L. Megela, J. Acoust. Soc. Am. 75, 1155-1162 (1984)], and therefore provide rate-based input. In order to examine the influence of these peripheral inputs on central responses, three layers of auditory neurons were modeled to examine short-term neural adaptation to pure tones and complex signals. The response of each neuron was simulated with a leaky integrate and fire model, and adaptation was implemented by means of an increasing threshold. Auditory-nerve fibers, dorsal medullary nucleus neurons, and toral cells were simulated and connected in three ascending layers. Modifying the adaptation properties of the peripheral fibers dramatically alters the response at the midbrain. [Work supported by NOHR to M.J.F.; Gustavus Presidential Scholarship to K.McA.; NIH DC05257 to A.M.S.]
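
The model class described, a leaky integrate-and-fire unit whose adaptation is implemented by an increasing threshold, fits in a few lines. The parameter values below are illustrative, not those of the frog model:

```python
def lif_adapting(input_current, dt=1e-4, tau_m=0.01, r_m=1.0,
                 v_rest=0.0, theta0=1.0, d_theta=0.5, tau_theta=0.05):
    """Leaky integrate-and-fire neuron with an adapting threshold.

    Each spike raises the threshold by d_theta; between spikes the
    threshold decays back to theta0 with time constant tau_theta,
    producing firing-rate adaptation to a sustained stimulus.
    Returns spike times in seconds.
    """
    v, theta = v_rest, theta0
    spikes = []
    for k, i_in in enumerate(input_current):
        v += dt / tau_m * (v_rest - v + r_m * i_in)       # membrane leak + drive
        theta += dt / tau_theta * (theta0 - theta)        # threshold decay
        if v >= theta:
            spikes.append(k * dt)
            v = v_rest
            theta += d_theta                              # threshold jump
    return spikes

# A sustained step current elicits spikes whose inter-spike intervals
# lengthen as the threshold accumulates -- the adaptation signature.
step = [2.0] * 5000   # 0.5 s of constant drive
times = lif_adapting(step)
isis = [b - a for a, b in zip(times, times[1:])]
```

Making the threshold jump or its decay time constant frequency-dependent is one way to mimic the frequency-dependent adaptation the abstract attributes to anuran auditory-nerve fibers.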

  10. Coded acoustic wave sensors and system using time diversity

    NASA Technical Reports Server (NTRS)

    Solie, Leland P. (Inventor); Hines, Jacqueline H. (Inventor)

    2012-01-01

    An apparatus and method for distinguishing between sensors that are to be wirelessly detected is provided. An interrogator device uses different, distinct time delays in the sensing signals when interrogating the sensors. The sensors are provided with different distinct pedestal delays. Sensors that have the same pedestal delay as the delay selected by the interrogator are detected by the interrogator whereas other sensors with different pedestal delays are not sensed. Multiple sensors with a given pedestal delay are provided with different codes so as to be distinguished from one another by the interrogator. The interrogator uses a signal that is transmitted to the sensor and returned by the sensor for combination and integration with the reference signal that has been processed by a function. The sensor may be a surface acoustic wave device having a differential impulse response with a power spectral density consisting of lobes. The power spectral density of the differential response is used to determine the value of the sensed parameter or parameters.

  11. Interference between postural control and spatial vs. non-spatial auditory reaction time tasks in older adults.

    PubMed

    Fuhrman, Susan I; Redfern, Mark S; Jennings, J Richard; Furman, Joseph M

    2015-01-01

    This study investigated whether spatial aspects of an information processing task influence dual-task interference. Two groups (Older/Young) of healthy adults participated in dual-task experiments. Two auditory information processing tasks included a frequency discrimination choice reaction time task (non-spatial task) and a lateralization choice reaction time task (spatial task). Postural tasks included combinations of standing with eyes open or eyes closed on either a fixed floor or a sway-referenced floor. Reaction times and postural sway via center of pressure were recorded. Baseline measures of reaction time and sway were subtracted from the corresponding dual-task results to calculate reaction time task costs and postural task costs. Reaction time task cost increased with eye closure (p = 0.01), sway-referenced flooring (p < 0.0001), and the spatial task (p = 0.04). Additionally, a significant (p = 0.05) task × vision × age interaction indicated that older subjects had a significant vision × task interaction whereas young subjects did not. However, when analyzed by age group, the young group showed minimal differences in interference for the spatial and non-spatial tasks with eyes open, but showed increased interference on the spatial relative to non-spatial task with eyes closed. In contrast, older subjects demonstrated increased interference on the spatial relative to the non-spatial task with eyes open, but not with eyes closed. These findings suggest that visual-spatial interference may occur in older subjects when vision is used to maintain posture. PMID:26410669

  12. From ear to body: the auditory-motor loop in spatial cognition.

    PubMed

    Viaud-Delmon, Isabelle; Warusfel, Olivier

    2014-01-01

    Spatial memory is mainly studied through the visual sensory modality: navigation tasks in humans rarely integrate dynamic and spatial auditory information. In order to study how a spatial scene can be memorized on the basis of auditory and idiothetic cues only, we constructed an auditory equivalent of the Morris water maze, a task widely used to assess spatial learning and memory in rodents. Participants were equipped with wireless headphones, which delivered a soundscape updated in real time according to their movements in 3D space. A wireless tracking system (video infrared with passive markers) was used to send the coordinates of the subject's head to the sound rendering system. The rendering system used advanced HRTF-based synthesis of directional cues and room acoustic simulation for the auralization of a realistic acoustic environment. Participants were guided blindfolded in an experimental room. Their task was to explore a delimitated area in order to find a hidden auditory target, i.e., a sound that was only triggered when walking on a precise location of the area. The position of this target could be coded in relationship to auditory landmarks constantly rendered during the exploration of the area. The task was composed of a practice trial, 6 acquisition trials during which they had to memorize the localization of the target, and 4 test trials in which some aspects of the auditory scene were modified. The task ended with a probe trial in which the auditory target was removed. The configuration of searching paths allowed observing how auditory information was coded to memorize the position of the target. They suggested that space can be efficiently coded without visual information in normal sighted subjects. In conclusion, space representation can be based on sensorimotor and auditory cues only, providing another argument in favor of the hypothesis that the brain has access to a modality-invariant representation of external space. PMID:25249933

  14. Cues for auditory stream segregation of birdsong in budgerigars and zebra finches: Effects of location, timing, amplitude, and frequency.

    PubMed

    Dent, Micheal L; Martin, Amanda K; Flaherty, Mary M; Neilans, Erikson G

    2016-02-01

    Deciphering the auditory scene is a problem faced by many organisms. However, when faced with numerous overlapping sounds from multiple locations, listeners are still able to attribute the individual sound objects to their individual sound-producing sources. Here, the characteristics of sounds important for integrating versus segregating in birds were determined. Budgerigars and zebra finches were trained using operant conditioning procedures on an identification task to peck one key when they heard a whole zebra finch song and to peck another when they heard a zebra finch song missing a middle syllable. Once the birds were trained to a criterion performance level on those stimuli, probe trials were introduced on a small proportion of trials. The probe songs contained modifications of the incomplete training song's missing syllable. When the bird responded as if the probe was a whole song, it suggests they streamed together the altered syllable and the rest of the song. When the bird responded as if the probe was a non-whole song, it suggests they segregated the altered probe from the rest of the song. Results show that some features, such as location and intensity, are more important for segregating than other features, such as timing and frequency. PMID:26936551

  15. Spiking Neurons Learning Phase Delays: How Mammals May Develop Auditory Time-Difference Sensitivity

    NASA Astrophysics Data System (ADS)

    Leibold, Christian; van Hemmen, J. Leo

    2005-04-01

    Time differences between the two ears are an important cue for animals to azimuthally locate a sound source. The first binaural brainstem nucleus, in mammals the medial superior olive, is generally believed to perform the necessary computations. Its cells are sensitive to variations of interaural time differences of about 10 μs. The classical explanation of such a neuronal time-difference tuning is based on the physical concept of delay lines. Recent data, however, are inconsistent with a temporal delay and rather favor a phase delay. By means of a biophysical model we show how spike-timing-dependent synaptic learning explains precise interplay of excitation and inhibition and, hence, accounts for a physical realization of a phase delay.
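
The learning mechanism invoked here, spike-timing-dependent plasticity selecting inputs whose conduction delays amount to a phase (cyclic) delay relative to the postsynaptic spike, can be caricatured with the classic exponential STDP window. This is a toy sketch; the paper's biophysical model is far more detailed, and all parameter values below are illustrative:

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Classic STDP window; dt_ms = t_post - t_pre.

    Pre-before-post (dt >= 0) potentiates, post-before-pre depresses:
    inputs arriving just before the postsynaptic spike win the
    competition, selecting an effective phase delay."""
    if dt_ms >= 0:
        return a_plus * math.exp(-dt_ms / tau_plus)
    return -a_minus * math.exp(dt_ms / tau_minus)

def train(delays_ms, post_phase_ms, period_ms=1000.0 / 500.0, epochs=200):
    """Toy competition among cyclic input delays for a 500-Hz tone:
    weights of inputs arriving in phase with the postsynaptic spike grow,
    antiphase inputs are pruned."""
    w = {d: 0.5 for d in delays_ms}
    for _ in range(epochs):
        for d in delays_ms:
            # timing difference wrapped into (-period/2, period/2]
            dt = (post_phase_ms - d + period_ms / 2) % period_ms - period_ms / 2
            w[d] = min(1.0, max(0.0, w[d] + stdp_dw(dt)))
    return w
```

Because the timing difference is wrapped modulo the stimulus period, the rule cannot distinguish a delay from that delay plus a whole cycle, which is precisely what makes the learned tuning a phase delay rather than a temporal one.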

  16. Feature Assignment in Perception of Auditory Figure

    ERIC Educational Resources Information Center

    Gregg, Melissa K.; Samuel, Arthur G.

    2012-01-01

    Because the environment often includes multiple sounds that overlap in time, listeners must segregate a sound of interest (the auditory figure) from other co-occurring sounds (the unattended auditory ground). We conducted a series of experiments to clarify the principles governing the extraction of auditory figures. We distinguish between auditory…

  17. MEG dual scanning: a procedure to study real-time auditory interaction between two persons

    PubMed Central

    Baess, Pamela; Zhdanov, Andrey; Mandel, Anne; Parkkonen, Lauri; Hirvenkari, Lotta; Mäkelä, Jyrki P.; Jousmäki, Veikko; Hari, Riitta

    2012-01-01

    Social interactions fill our everyday life and put strong demands on our brain function. However, the possibilities for studying the brain basis of social interaction are still technically limited, and even modern brain imaging studies of social cognition typically monitor just one participant at a time. We present here a method to connect and synchronize two faraway neuromagnetometers. With this method, two participants at two separate sites can interact with each other through a stable real-time audio connection with minimal delay and jitter. The magnetoencephalographic (MEG) and audio recordings of both laboratories are accurately synchronized for joint offline analysis. The concept can be extended to connecting multiple MEG devices around the world. As a proof of concept of the MEG-to-MEG link, we report the results of time-sensitive recordings of cortical evoked responses to sounds delivered at laboratories separated by 5 km. PMID:22514530

  18. Development of a test for recording both visual and auditory reaction times, potentially useful for future studies in patients on opioids therapy

    PubMed Central

    Miceli, Luca; Bednarova, Rym; Rizzardo, Alessandro; Samogin, Valentina; Della Rocca, Giorgio

    2015-01-01

    Objective Italian Road Law limits driving while undergoing treatment with certain kinds of medication. Here, we report the results of a test, run as a smartphone application (app), assessing auditory and visual reflexes in a sample of 300 drivers. The aim of the test is to provide both the police force and medication-taking drivers with a tool that can evaluate the individual’s capacity to drive safely. Methods The test is run as an app for Apple iOS and Android mobile operating systems and facilitates four different reaction times to be assessed: simple visual and auditory reaction times and complex visual and auditory reaction times. Reference deciles were created for the test results obtained from a sample of 300 Italian subjects. Results lying within the first three deciles were considered as incompatible with safe driving capabilities. Results Performance is both age-related (r>0.5) and sex-related (female reaction times were significantly slower than those recorded for male subjects, P<0.05). Only 21% of the subjects were able to perform all four tests correctly. Conclusion We developed and fine-tuned a test called Safedrive that measures visual and auditory reaction times through a smartphone mobile device; the aim of the test is two-fold: to provide a clinical tool for the assessment of the driving capacity of individuals taking pain relief medication; to promote the sense of social responsibility in drivers who are on medication and provide these individuals with a means of testing their own capacity to drive safely. PMID:25709406
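
The decile-based pass/fail criterion described can be sketched with the standard library. This assumes "first three deciles" means the three slowest deciles of performance, i.e. the top 30% of reaction times; the function names and the normative sample are illustrative, not Safedrive's actual data:

```python
import statistics

def decile_cutoffs(normative_rts_ms):
    """Nine cut points splitting a normative reaction-time sample
    into ten deciles."""
    return statistics.quantiles(normative_rts_ms, n=10)

def unsafe_to_drive(rt_ms, cutoffs):
    """Flag a reaction time falling in the three slowest deciles
    (assumed reading of the paper's criterion): slow RT = poor
    performance, so the cut is the 70th percentile of RT."""
    return rt_ms >= cutoffs[6]   # 7th of 9 cut points = 70th percentile
```

In practice the reference deciles would be stratified by age and sex, since the paper reports both as significant predictors of reaction time.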

  19. The Effect of Dopaminergic Medication on Beat-Based Auditory Timing in Parkinson’s Disease

    PubMed Central

    Cameron, Daniel J.; Pickett, Kristen A.; Earhart, Gammon M.; Grahn, Jessica A.

    2016-01-01

    Parkinson’s disease (PD) adversely affects timing abilities. Beat-based timing is a mechanism that times events relative to a regular interval, such as the “beat” in musical rhythm, and is impaired in PD. It is unknown if dopaminergic medication influences beat-based timing in PD. Here, we tested beat-based timing over two sessions in participants with PD (OFF then ON dopaminergic medication) and in unmedicated control participants. People with PD and control participants completed two tasks. The first was a discrimination task in which participants compared two rhythms and determined whether they were the same or different. Rhythms either had a beat structure (metric simple rhythms) or did not (metric complex rhythms), as in previous studies. Discrimination accuracy was analyzed to test for the effects of beat structure, as well as differences between participants with PD and controls, and effects of medication (PD group only). The second task was the Beat Alignment Test (BAT), in which participants listened to music with regular tones superimposed, and responded as to whether the tones were “ON” or “OFF” the beat of the music. Accuracy was analyzed to test for differences between participants with PD and controls, and for an effect of medication in patients. Both patients and controls discriminated metric simple rhythms better than metric complex rhythms. Controls also improved at the discrimination task in the second vs. first session, whereas people with PD did not. For participants with PD, the difference in performance between metric simple and metric complex rhythms was greater (sensitivity to changes in simple rhythms increased and sensitivity to changes in complex rhythms decreased) when ON vs. OFF medication. Performance also worsened with disease severity. For the BAT, no group differences or effects of medication were found. Overall, these findings suggest that timing is impaired in PD, and that dopaminergic medication influences beat

  20. The Effect of Dopaminergic Medication on Beat-Based Auditory Timing in Parkinson's Disease.

    PubMed

    Cameron, Daniel J; Pickett, Kristen A; Earhart, Gammon M; Grahn, Jessica A

    2016-01-01

    Parkinson's disease (PD) adversely affects timing abilities. Beat-based timing is a mechanism that times events relative to a regular interval, such as the "beat" in musical rhythm, and is impaired in PD. It is unknown if dopaminergic medication influences beat-based timing in PD. Here, we tested beat-based timing over two sessions in participants with PD (OFF then ON dopaminergic medication) and in unmedicated control participants. People with PD and control participants completed two tasks. The first was a discrimination task in which participants compared two rhythms and determined whether they were the same or different. Rhythms either had a beat structure (metric simple rhythms) or did not (metric complex rhythms), as in previous studies. Discrimination accuracy was analyzed to test for the effects of beat structure, as well as differences between participants with PD and controls, and effects of medication (PD group only). The second task was the Beat Alignment Test (BAT), in which participants listened to music with regular tones superimposed, and responded as to whether the tones were "ON" or "OFF" the beat of the music. Accuracy was analyzed to test for differences between participants with PD and controls, and for an effect of medication in patients. Both patients and controls discriminated metric simple rhythms better than metric complex rhythms. Controls also improved at the discrimination task in the second vs. first session, whereas people with PD did not. For participants with PD, the difference in performance between metric simple and metric complex rhythms was greater (sensitivity to changes in simple rhythms increased and sensitivity to changes in complex rhythms decreased) when ON vs. OFF medication. Performance also worsened with disease severity. For the BAT, no group differences or effects of medication were found. Overall, these findings suggest that timing is impaired in PD, and that dopaminergic medication influences beat-based and non

  1. Interactions across Multiple Stimulus Dimensions in Primary Auditory Cortex.

    PubMed

    Sloas, David C; Zhuo, Ran; Xue, Hongbo; Chambers, Anna R; Kolaczyk, Eric; Polley, Daniel B; Sen, Kamal

    2016-01-01

    Although sensory cortex is thought to be important for the perception of complex objects, its specific role in representing complex stimuli remains unknown. Complex objects are rich in information along multiple stimulus dimensions. The position of cortex in the sensory hierarchy suggests that cortical neurons may integrate across these dimensions to form a more gestalt representation of auditory objects. Yet, studies of cortical neurons typically explore single or few dimensions due to the difficulty of determining optimal stimuli in a high dimensional stimulus space. Evolutionary algorithms (EAs) provide a potentially powerful approach for exploring multidimensional stimulus spaces based on real-time spike feedback, but two important issues arise in their application. First, it is unclear whether it is necessary to characterize cortical responses to multidimensional stimuli or whether it suffices to characterize cortical responses to a single dimension at a time. Second, quantitative methods for analyzing complex multidimensional data from an EA are lacking. Here, we apply a statistical method for nonlinear regression, the generalized additive model (GAM), to address these issues. The GAM quantitatively describes the dependence between neural response and all stimulus dimensions. We find that auditory cortical neurons in mice are sensitive to interactions across dimensions. These interactions are diverse across the population, indicating significant integration across stimulus dimensions in auditory cortex. This result strongly motivates using multidimensional stimuli in auditory cortex. Together, the EA and the GAM provide a novel quantitative paradigm for investigating neural coding of complex multidimensional stimuli in auditory and other sensory cortices. PMID:27622211
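
The additive-versus-interaction comparison underlying the GAM analysis can be illustrated with a deliberately simplified least-squares analogue on synthetic data (not the authors' model, stimuli, or recordings): if adding a cross term between two stimulus dimensions improves the fit, the dimensions interact.

```python
import numpy as np

# Synthetic "neural responses" on a 2-D stimulus grid with a built-in
# multiplicative interaction between the two dimensions.
rng = np.random.default_rng(0)
g1, g2 = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
x1, x2 = g1.ravel(), g2.ravel()
y = np.sin(3 * x1) + x2 + 0.5 * x1 * x2 + 0.05 * rng.standard_normal(x1.size)

def r_squared(design, y):
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    resid = y - design @ beta
    return 1.0 - resid.var() / y.var()

# Additive model: separate polynomial terms per dimension (plus intercept).
additive = np.column_stack([np.ones_like(x1), x1, x1**2, x1**3, x2, x2**2])
# Interaction model: add a cross term, analogous to a GAM tensor term.
interact = np.column_stack([additive, x1 * x2])

gain = r_squared(interact, y) - r_squared(additive, y)
print(f"R^2 gain from the interaction term: {gain:.4f}")  # positive here
```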

  2. Interactions across Multiple Stimulus Dimensions in Primary Auditory Cortex

    PubMed Central

    Zhuo, Ran; Xue, Hongbo; Chambers, Anna R.; Kolaczyk, Eric; Polley, Daniel B.

    2016-01-01

    Although sensory cortex is thought to be important for the perception of complex objects, its specific role in representing complex stimuli remains unknown. Complex objects are rich in information along multiple stimulus dimensions. The position of cortex in the sensory hierarchy suggests that cortical neurons may integrate across these dimensions to form a more gestalt representation of auditory objects. Yet, studies of cortical neurons typically explore single or few dimensions due to the difficulty of determining optimal stimuli in a high dimensional stimulus space. Evolutionary algorithms (EAs) provide a potentially powerful approach for exploring multidimensional stimulus spaces based on real-time spike feedback, but two important issues arise in their application. First, it is unclear whether it is necessary to characterize cortical responses to multidimensional stimuli or whether it suffices to characterize cortical responses to a single dimension at a time. Second, quantitative methods for analyzing complex multidimensional data from an EA are lacking. Here, we apply a statistical method for nonlinear regression, the generalized additive model (GAM), to address these issues. The GAM quantitatively describes the dependence between neural response and all stimulus dimensions. We find that auditory cortical neurons in mice are sensitive to interactions across dimensions. These interactions are diverse across the population, indicating significant integration across stimulus dimensions in auditory cortex. This result strongly motivates using multidimensional stimuli in auditory cortex. Together, the EA and the GAM provide a novel quantitative paradigm for investigating neural coding of complex multidimensional stimuli in auditory and other sensory cortices. PMID:27622211

  3. Auditory Reserve and the Legacy of Auditory Experience

    PubMed Central

    Skoe, Erika; Kraus, Nina

    2014-01-01

    Musical training during childhood has been linked to more robust encoding of sound later in life. We take this as evidence for an auditory reserve: a mechanism by which individuals capitalize on earlier life experiences to promote auditory processing. We assert that early auditory experiences guide how the reserve develops and is maintained over the lifetime. Experiences that occur after childhood, or which are limited in nature, are theorized to affect the reserve, although their influence on sensory processing may be less long-lasting and may potentially fade over time if not repeated. This auditory reserve may help to explain individual differences in how individuals cope with auditory impoverishment or loss of sensorineural function. PMID:25405381

  4. One hundred ways to process time, frequency, rate and scale in the central auditory system: a pattern-recognition meta-analysis.

    PubMed

    Hemery, Edgar; Aucouturier, Jean-Julien

    2015-01-01

    The mammalian auditory system extracts features from the acoustic environment based on the responses of spatially distributed sets of neurons in the subcortical and cortical auditory structures. The characteristic responses of these neurons (linearly approximated by their spectro-temporal receptive fields, or STRFs) suggest that auditory representations are formed, as early as in the inferior colliculi, on the basis of a time, frequency, rate (temporal modulations) and scale (spectral modulations) analysis of sound. However, how these four dimensions are integrated and processed in subsequent neural networks remains unclear. In this work, we present a new methodology to generate computational insights into the functional organization of such processes. We first propose a systematic framework to explore more than a hundred different computational strategies proposed in the literature to process the output of a generic STRF model. We then evaluate these strategies on their ability to compute perceptual distances between pairs of environmental sounds. Finally, we conduct a meta-analysis of the dataset of all these algorithms' accuracies to examine whether certain combinations of dimensions and certain ways to treat such dimensions are, on the whole, more computationally effective than others. We present an application of this methodology to a dataset of ten environmental sound categories, in which the analysis reveals that (1) models are most effective when they organize STRF data into frequency groupings (which is consistent with the known tonotopic organization of receptive fields in auditory structures), and that (2) models that treat STRF data as time series are no more effective than models that rely only on summary statistics along time (which corroborates recent experimental evidence on texture discrimination by summary statistics). PMID:26190996
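
Finding (2) contrasts two feature representations of the same STRF-like output, which can be made concrete with a small sketch (the array shapes are illustrative assumptions, not the paper's):

```python
import numpy as np

# A (frequency x time) STRF-like output: 48 channels, 200 time frames.
rng = np.random.default_rng(0)
strf_out = rng.standard_normal((48, 200))

# (1) Keep the full time series: flatten everything.
as_time_series = strf_out.ravel()                    # 9600 features

# (2) Summary statistics along time only: mean and std per channel.
as_summary = np.concatenate([strf_out.mean(axis=1),
                             strf_out.std(axis=1)])  # 96 features
```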

  5. A High-Rate Space-Time Block Code with Full Diversity

    NASA Astrophysics Data System (ADS)

    Gao, Zhenzhen; Zhu, Shihua; Zhong, Zhimeng

    A new high-rate space-time block code (STBC) with full transmit diversity gain for four transmit antennas based on a generalized Alamouti code structure is proposed. The proposed code has lower Maximum Likelihood (ML) decoding complexity than the Double ABBA scheme does. Constellation rotation is used to maximize the diversity product. With the optimal rotated constellations, the proposed code significantly outperforms some known high-rate STBCs in the literature with similar complexity and the same spectral efficiency.
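
The Alamouti structure that the proposed code generalizes can be sketched directly. This is the textbook two-antenna block with linear ML combining at a single receive antenna, not the paper's four-antenna construction, which is not reproduced here.

```python
import numpy as np

def alamouti_encode(s1, s2):
    """Classic 2x2 Alamouti block: rows = time slots, columns = antennas."""
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

def alamouti_decode(r, h):
    """Linear ML combining for one receive antenna over a flat channel h.
    r holds the received samples for the two time slots."""
    h1, h2 = h
    g = np.abs(h1) ** 2 + np.abs(h2) ** 2
    s1_hat = (np.conj(h1) * r[0] + h2 * np.conj(r[1])) / g
    s2_hat = (np.conj(h2) * r[0] - h1 * np.conj(r[1])) / g
    return s1_hat, s2_hat

# Noiseless sanity check with QPSK symbols and a random channel.
rng = np.random.default_rng(1)
h = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)
X = alamouti_encode(s1, s2)
r = X @ h                      # one received sample per time slot
print(alamouti_decode(r, h))   # recovers (s1, s2) exactly without noise
```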

  6. Auditory system

    NASA Technical Reports Server (NTRS)

    Ades, H. W.

    1973-01-01

    The physical correlates of hearing, i.e., the acoustic stimuli, are reported. The auditory system, consisting of the external ear, middle ear, inner ear, organ of Corti, basilar membrane, inner and outer hair cells, the innervation of hair cells, and transducer mechanisms, is discussed. Both conductive and sensorineural hearing losses are also examined.

  7. Auditory synesthesias.

    PubMed

    Afra, Pegah

    2015-01-01

    Synesthesia is experienced when sensory stimulation of one sensory modality (the inducer) elicits an involuntary or automatic sensation in another sensory modality or different aspect of the same sensory modality (the concurrent). Auditory synesthesias (AS) occur when auditory stimuli trigger a variety of concurrents, or when non-auditory sensory stimulations trigger auditory synesthetic perception. The AS are divided into three types: developmental, acquired, and induced. Developmental AS are not a neurologic disorder but a different way of experiencing one's environment. They are involuntary and highly consistent experiences throughout one's life. Acquired AS have been reported in association with neurologic diseases that cause deafferentation of anterior optic pathways, with pathologic lesions affecting the central nervous system (CNS) outside of the optic pathways, as well as non-lesional cases associated with migraine, and epilepsy. They also have been reported with mood disorders, as well as a single idiopathic case. Induced AS have been reported in experimental and postsurgical blindfolding, as well as with intake of hallucinogens or psychedelics. In this chapter the three different types of synesthesia, their characteristics and phenomenologic differences, as well as their possible neural mechanisms, are discussed. PMID:25726281

  8. 14 CFR 234.8 - Calculation of on-time performance codes.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 4 2011-01-01 2011-01-01 false Calculation of on-time performance codes. 234.8 Section 234.8 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION... of on-time performance codes. (a) Each reporting carrier shall calculate an on-time performance...

  9. 14 CFR 234.8 - Calculation of on-time performance codes.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Calculation of on-time performance codes. 234.8 Section 234.8 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION... of on-time performance codes. (a) Each reporting carrier shall calculate an on-time performance...

  10. 14 CFR 234.8 - Calculation of on-time performance codes.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 4 2012-01-01 2012-01-01 false Calculation of on-time performance codes. 234.8 Section 234.8 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION... of on-time performance codes. (a) Each reporting carrier shall calculate an on-time performance...

  11. 14 CFR 234.8 - Calculation of on-time performance codes.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 4 2014-01-01 2014-01-01 false Calculation of on-time performance codes. 234.8 Section 234.8 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION... of on-time performance codes. (a) Each reporting carrier shall calculate an on-time performance...

  12. 14 CFR 234.8 - Calculation of on-time performance codes.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 4 2013-01-01 2013-01-01 false Calculation of on-time performance codes. 234.8 Section 234.8 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION... of on-time performance codes. (a) Each reporting carrier shall calculate an on-time performance...

  13. Speakers' acceptance of real-time speech exchange indicates that we use auditory feedback to specify the meaning of what we say.

    PubMed

    Lind, Andreas; Hall, Lars; Breidegard, Björn; Balkenius, Christian; Johansson, Petter

    2014-06-01

    Speech is usually assumed to start with a clearly defined preverbal message, which provides a benchmark for self-monitoring and a robust sense of agency for one's utterances. However, an alternative hypothesis states that speakers often have no detailed preview of what they are about to say, and that they instead use auditory feedback to infer the meaning of their words. In the experiment reported here, participants performed a Stroop color-naming task while we covertly manipulated their auditory feedback in real time so that they said one thing but heard themselves saying something else. Under ideal timing conditions, two thirds of these semantic exchanges went undetected by the participants, and in 85% of all nondetected exchanges, the inserted words were experienced as self-produced. These findings indicate that the sense of agency for speech has a strong inferential component, and that auditory feedback of one's own voice acts as a pathway for semantic monitoring, potentially overriding other feedback loops. PMID:24777489

  14. The Perception of Auditory Motion

    PubMed Central

    Leung, Johahn

    2016-01-01

    The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control as well as in entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotations and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception. PMID:27094029

  15. The Perception of Auditory Motion.

    PubMed

    Carlile, Simon; Leung, Johahn

    2016-01-01

    The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control as well as in entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotations and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception. PMID:27094029

  16. Time-dependent recycling modeling with edge plasma transport codes

    NASA Astrophysics Data System (ADS)

    Pigarov, A.; Krasheninnikov, S.; Rognlien, T.; Taverniers, S.; Hollmann, E.

    2013-10-01

    First, we discuss extensions to the Macroblob approach that allow more accurate simulation of the dynamics of ELMs, the pedestal, and edge transport with the UEDGE code. Second, we present UEDGE modeling results for an H-mode discharge on DIII-D with infrequent ELMs and large pedestal losses. In the modeled sequence of ELMs, this discharge attains a dynamic equilibrium. The temporal evolution of pedestal plasma profiles, spectral line emission, and surface temperature matching experimental data over the ELM cycle is discussed. Analysis of the dynamic gas balance highlights the important role of material surfaces: we quantified the wall outgassing between ELMs as 3X the NBI fueling and the recycling coefficient as 0.8 for wall pumping via macroblob-wall interactions. Third, we present results from a multiphysics version of UEDGE with built-in, reduced, 1-D wall models and analyze the role of various PMI processes. Progress on the framework-coupled UEDGE/WALLPSI code is discussed. Finally, since implicit coupling schemes are an important feature of multiphysics codes, we report the results of a parametric analysis of convergence and performance for Picard and Newton iterations in a system of coupled deterministic-stochastic ODEs, and propose modifications enhancing convergence.

  17. One hundred ways to process time, frequency, rate and scale in the central auditory system: a pattern-recognition meta-analysis

    PubMed Central

    Hemery, Edgar; Aucouturier, Jean-Julien

    2015-01-01

    The mammalian auditory system extracts features from the acoustic environment based on the responses of spatially distributed sets of neurons in the subcortical and cortical auditory structures. The characteristic responses of these neurons (linearly approximated by their spectro-temporal receptive fields, or STRFs) suggest that auditory representations are formed, as early as in the inferior colliculi, on the basis of a time, frequency, rate (temporal modulations) and scale (spectral modulations) analysis of sound. However, how these four dimensions are integrated and processed in subsequent neural networks remains unclear. In this work, we present a new methodology to generate computational insights into the functional organization of such processes. We first propose a systematic framework to explore more than a hundred different computational strategies proposed in the literature to process the output of a generic STRF model. We then evaluate these strategies on their ability to compute perceptual distances between pairs of environmental sounds. Finally, we conduct a meta-analysis of the dataset of all these algorithms' accuracies to examine whether certain combinations of dimensions and certain ways to treat such dimensions are, on the whole, more computationally effective than others. We present an application of this methodology to a dataset of ten environmental sound categories, in which the analysis reveals that (1) models are most effective when they organize STRF data into frequency groupings (which is consistent with the known tonotopic organization of receptive fields in auditory structures), and that (2) models that treat STRF data as time series are no more effective than models that rely only on summary statistics along time (which corroborates recent experimental evidence on texture discrimination by summary statistics). PMID:26190996

  18. The Role of Animacy in the Real Time Comprehension of Mandarin Chinese: Evidence from Auditory Event-Related Brain Potentials

    ERIC Educational Resources Information Center

    Philipp, Markus; Bornkessel-Schlesewsky, Ina; Bisang, Walter; Schlesewsky, Matthias

    2008-01-01

    Two auditory ERP studies examined the role of animacy in sentence comprehension in Mandarin Chinese by comparing active and passive sentences in simple verb-final (Experiment 1) and relative clause constructions (Experiment 2). In addition to the voice manipulation (which modulated the assignment of actor and undergoer roles to the arguments),…

  19. 14 CFR 234.10 - Voluntary disclosure of on-time performance codes.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 4 2011-01-01 2011-01-01 false Voluntary disclosure of on-time performance codes. 234.10 Section 234.10 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION... disclosure of on-time performance codes. (a) Any air carrier may determine, in accordance with the...

  20. 14 CFR 234.10 - Voluntary disclosure of on-time performance codes.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Voluntary disclosure of on-time performance codes. 234.10 Section 234.10 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION... disclosure of on-time performance codes. (a) Any air carrier may determine, in accordance with the...

  1. 14 CFR 234.10 - Voluntary disclosure of on-time performance codes.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 4 2013-01-01 2013-01-01 false Voluntary disclosure of on-time performance codes. 234.10 Section 234.10 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION... disclosure of on-time performance codes. (a) Any air carrier may determine, in accordance with the...

  2. 14 CFR 234.10 - Voluntary disclosure of on-time performance codes.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 4 2012-01-01 2012-01-01 false Voluntary disclosure of on-time performance codes. 234.10 Section 234.10 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION... disclosure of on-time performance codes. (a) Any air carrier may determine, in accordance with the...

  3. 14 CFR 234.10 - Voluntary disclosure of on-time performance codes.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 4 2014-01-01 2014-01-01 false Voluntary disclosure of on-time performance codes. 234.10 Section 234.10 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION... disclosure of on-time performance codes. (a) Any air carrier may determine, in accordance with the...

  4. Robust Timing Synchronization for Aviation Communications, and Efficient Modulation and Coding Study for Quantum Communication

    NASA Technical Reports Server (NTRS)

    Xiong, Fugin

    2003-01-01

    One half of Professor Xiong's effort will investigate robust timing synchronization schemes for dynamically varying characteristics of aviation communication channels. The other half of his time will focus on efficient modulation and coding study for the emerging quantum communications.

  5. Incidental Auditory Category Learning

    PubMed Central

    Gabay, Yafit; Dick, Frederic K.; Zevin, Jason D.; Holt, Lori L.

    2015-01-01

    Very little is known about how auditory categories are learned incidentally, without instructions to search for category-diagnostic dimensions, overt category decisions, or experimenter-provided feedback. This is an important gap because learning in the natural environment does not arise from explicit feedback and there is evidence that the learning systems engaged by traditional tasks are distinct from those recruited by incidental category learning. We examined incidental auditory category learning with a novel paradigm, the Systematic Multimodal Associations Reaction Time (SMART) task, in which participants rapidly detect and report the appearance of a visual target in one of four possible screen locations. Although the overt task is rapid visual detection, a brief sequence of sounds precedes each visual target. These sounds are drawn from one of four distinct sound categories that predict the location of the upcoming visual target. These many-to-one auditory-to-visuomotor correspondences support incidental auditory category learning. Participants incidentally learn categories of complex acoustic exemplars and generalize this learning to novel exemplars and tasks. Further, learning is facilitated when category exemplar variability is more tightly coupled to the visuomotor associations than when the same stimulus variability is experienced across trials. We relate these findings to phonetic category learning. PMID:26010588
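
The many-to-one sound-to-location mapping that drives the SMART task can be sketched as a trial generator. Category names, exemplar counts, and sequence length below are hypothetical placeholders, not the stimuli used in the study.

```python
import random

# Four sound categories, each with several exemplars, each category
# predicting one of four screen locations (the many-to-one mapping).
CATEGORY_TO_LOCATION = {"cat_A": 0, "cat_B": 1, "cat_C": 2, "cat_D": 3}
EXEMPLARS = {c: [f"{c}_ex{i}" for i in range(6)] for c in CATEGORY_TO_LOCATION}

def make_trial(rng):
    """One trial: a brief sound sequence drawn from a single category,
    followed by a visual target at that category's location."""
    category = rng.choice(list(CATEGORY_TO_LOCATION))
    sounds = rng.sample(EXEMPLARS[category], k=5)
    return {"sounds": sounds, "target_location": CATEGORY_TO_LOCATION[category]}

rng = random.Random(0)
trial = make_trial(rng)
print(trial)
```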

  6. CoCoNuT: General relativistic hydrodynamics code with dynamical space-time evolution

    NASA Astrophysics Data System (ADS)

    Dimmelmeier, Harald; Novak, Jérôme; Cerdá-Durán, Pablo

    2012-02-01

    CoCoNuT is a general relativistic hydrodynamics code with dynamical space-time evolution. The main aim of this numerical code is the study of several astrophysical scenarios in which general relativity can play an important role, namely the collapse of rapidly rotating stellar cores and the evolution of isolated neutron stars. The code has two flavors: CoCoA, the axisymmetric (2D) magnetized version, and CoCoNuT, the 3D non-magnetized version.

  7. Perception and coding of interaural time differences with bilateral cochlear implants.

    PubMed

    Laback, Bernhard; Egger, Katharina; Majdak, Piotr

    2015-04-01

    Bilateral cochlear implantation is increasingly becoming the standard in the clinical treatment of bilateral deafness. The main motivation is to provide users of bilateral cochlear implants (CIs) access to binaural cues essential for localizing sound sources and understanding speech in environments of interfering sounds. One of those cues, interaural level differences, can be perceived well by CI users to allow some basic left versus right localization. However, interaural time differences (ITDs) which are important for localization of low-frequency sounds and spatial release from masking are not adequately represented by clinical envelope-based CI systems. Here, we first review the basic ITD sensitivity of CI users, particularly their dependence on stimulation parameters like stimulation rate and place, modulation rate, and envelope shape in single-electrode stimulation, as well as stimulation level, electrode spacing, and monaural across-electrode timing in multiple-electrode stimulation. Then, we discuss factors involved in ITD perception in electric hearing including the match between highly phase-locked electric auditory nerve response properties and binaural cell properties, the restricted stimulation of apical tonotopic pathways, channel interactions in multiple-electrode stimulation, and the onset age of binaural auditory input. Finally, we present clinically available CI stimulation strategies and experimental strategies aiming at improving listeners' access to ITD cues. This article is part of a Special Issue. PMID:25456088
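
The ITD cue itself can be illustrated with a minimal textbook sketch: estimate the lag that maximizes the cross-correlation between the two ear signals. This is illustrative only and is not how clinical CI strategies process sound.

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference (seconds) as the lag that
    maximizes the cross-correlation of the two ear signals."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)  # lag in samples
    return lag / fs  # positive: sound arrives later at the left ear

fs = 44100
rng = np.random.default_rng(0)
sig = rng.standard_normal(fs // 20)        # 50 ms noise burst
d = 20                                     # ~454 microseconds at 44.1 kHz
left = np.concatenate([np.zeros(d), sig])  # delayed copy at the left ear
right = np.concatenate([sig, np.zeros(d)])
print(estimate_itd(left, right, fs) * 1e6)  # ≈ 453.5 (µs)
```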

  8. Driving-Simulator-Based Test on the Effectiveness of Auditory Red-Light Running Vehicle Warning System Based on Time-To-Collision Sensor

    PubMed Central

    Yan, Xuedong; Xue, Qingwan; Ma, Lu; Xu, Yongcun

    2014-01-01

    The collision avoidance warning system is an emerging technology designed to assist drivers in avoiding red-light running (RLR) collisions at intersections. The aim of this paper is to evaluate the effect of auditory warning information on collision avoidance behaviors in RLR pre-crash scenarios and further to examine the causal relationships among the relevant factors. A driving-simulator-based experiment was designed and conducted with 50 participants. The data from the experiments were analyzed using ANOVA and structural equation modeling (SEM). The collision-avoidance-related variables were measured in terms of brake reaction time (BRT), maximum deceleration, and lane deviation in this study. It was found that the collision avoidance warning system can result in lower collision rates compared to the without-warning condition and lead to shorter reaction times, larger maximum deceleration, and less lane deviation. Furthermore, the SEM analysis illustrates that the audio warning information in fact has both direct and indirect effects on the occurrence of collisions, and the indirect effect plays a more important role in collision avoidance than the direct effect. Essentially, the auditory warning information can assist drivers in detecting RLR vehicles in a timely manner, thus providing drivers more adequate time and space to decelerate to avoid collisions with the conflicting vehicles. PMID:24566631

  9. Quantitative Characterization of Super-Resolution Infrared Imaging Based on Time-Varying Focal Plane Coding

    NASA Astrophysics Data System (ADS)

    Wang, X.; Yuan, Y.; Zhang, J.; Chen, Y.; Cheng, Y.

    2014-10-01

    High-resolution infrared imaging has long been a goal of infrared imaging systems. In this paper, a super-resolution infrared imaging method using a time-varying coded mask is proposed based on focal plane coding and compressed sensing theory. The basic idea of this method is to set a coded mask on the focal plane of the optical system so that the same scene can be sampled repeatedly under a time-varying coding strategy; the super-resolution image is then reconstructed by a sparse optimization algorithm. The simulation results are quantitatively evaluated using the Peak Signal-to-Noise Ratio (PSNR) and Modulation Transfer Function (MTF), which illustrate the effect of the compressed measurement coefficient r and the coded mask resolution m on the reconstructed image quality. The results show that the proposed method improves infrared imaging quality effectively, which will be helpful for the practical design of new types of high-resolution infrared imaging systems.
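
    PSNR, one of the two evaluation metrics named above, is computed directly from the mean squared error between the reference and reconstructed images. A self-contained sketch on flat pixel lists, assuming an 8-bit peak value:

```python
import math

def psnr(reference, reconstructed, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB; higher means a closer reconstruction."""
    mse = sum((r - x) ** 2 for r, x in zip(reference, reconstructed)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)

# An all-255 image reconstructed as all-254 has MSE = 1, i.e. about 48.13 dB.
```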

  10. Auditory spatial attention representations in the human cerebral cortex.

    PubMed

    Kong, Lingqiang; Michalka, Samantha W; Rosen, Maya L; Sheremata, Summer L; Swisher, Jascha D; Shinn-Cunningham, Barbara G; Somers, David C

    2014-03-01

    Auditory spatial attention serves important functions in auditory source separation and selection. Although auditory spatial attention mechanisms have been generally investigated, the neural substrates encoding spatial information acted on by attention have not been identified in the human neocortex. We performed functional magnetic resonance imaging experiments to identify cortical regions that support auditory spatial attention and to test 2 hypotheses regarding the coding of auditory spatial attention: 1) auditory spatial attention might recruit the visuospatial maps of the intraparietal sulcus (IPS) to create multimodal spatial attention maps; 2) auditory spatial information might be encoded without explicit cortical maps. We mapped visuotopic IPS regions in individual subjects and measured auditory spatial attention effects within these regions of interest. Contrary to the multimodal map hypothesis, we observed that auditory spatial attentional modulations spared the visuotopic maps of IPS; the parietal regions activated by auditory attention lacked map structure. However, multivoxel pattern analysis revealed that the superior temporal gyrus and the supramarginal gyrus contained significant information about the direction of spatial attention. These findings support the hypothesis that auditory spatial information is coded without a cortical map representation. Our findings suggest that audiospatial and visuospatial attention utilize distinctly different spatial coding schemes. PMID:23180753

  11. Intercept Centering and Time Coding in Latent Difference Score Models

    ERIC Educational Resources Information Center

    Grimm, Kevin J.

    2012-01-01

    Latent difference score (LDS) models combine benefits derived from autoregressive and latent growth curve models allowing for time-dependent influences and systematic change. The specification and descriptions of LDS models include an initial level of ability or trait plus an accumulation of changes. A limitation of this specification is that the…

  12. Further Development and Implementation of Implicit Time Marching in the CAA Code

    NASA Technical Reports Server (NTRS)

    Golubev, Vladimir V.

    2003-01-01

    The fellowship research project continued last year's work on implementing implicit time-marching concepts in the Broadband Aeroacoustic System Simulator (BASS) code. This code is being developed at NASA Glenn for analysis of unsteady flow and sources of noise in propulsion systems, including jet noise and fan noise.

  13. Naturalistic Stimuli Increase the Rate and Efficiency of Information Transmission by Primary Auditory Afferents

    NASA Astrophysics Data System (ADS)

    Rieke, F.; Bodnar, D. A.; Bialek, W.

    1995-12-01

    Natural sounds, especially communication sounds, have highly structured amplitude and phase spectra. We have quantified how structure in the amplitude spectrum of natural sounds affects coding in primary auditory afferents. Auditory afferents encode stimuli with naturalistic amplitude spectra dramatically better than broad-band stimuli (approximating white noise); the rate at which the spike train carries information about the stimulus is 2-6 times higher for naturalistic sounds. Furthermore, the information rates can reach 90% of the fundamental limit to information transmission set by the statistics of the spike response. These results indicate that the coding strategy of the auditory nerve is matched to the structure of natural sounds; this `tuning' allows afferent spike trains to provide higher processing centres with a more complete description of the sensory world.

  14. Assessment of effectiveness of signal-code constructions in time division-multi-access satellite systems

    NASA Astrophysics Data System (ADS)

    Portnoy, S. L.; Ankudinov, D. R.

    1985-01-01

    Energy losses in TDMA satellite circuits are investigated on the basis of the model of a Gaussian memoryless channel incorporating a signal code construction. The signal code construction is a consolidated two stage construction with a modulation system as the inner stage and correcting codes as the outer stage. Signal code constructions employing Gray codes, cascade codes and M-ary block codes are considered. Real TDMA systems are analyzed on the assumptions that the calculations are made using an audio frequency equivalent of the circuit, the relay carries a single trunk, the timing and carrier frequency synchronization is ideal, the signal is transmitted in the continuous stream, and there is no noise at the input of the receiving filter. The effectiveness of a signal code construction employing cascade codes on a real satellite link incorporating MDVU-40 equipment is modeled. The method can be used to select the signal code construction in a communications channel for the required data rate, and to maximize the energy gain and attainable transmission rate over the relay trunk.
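
    Gray codes, listed above among the inner-stage options, map symbol indices so that adjacent modulation symbols differ in exactly one bit; the most likely demodulation error (picking a neighboring symbol) then corrupts only a single bit. A generic sketch of the standard binary-reflected mapping, not the specific construction analyzed in the paper:

```python
def to_gray(n):
    """Binary-reflected Gray code of n: adjacent integers differ in one bit."""
    return n ^ (n >> 1)

def from_gray(g):
    """Invert the Gray mapping by folding the bits back down."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# 8-PSK indices 0..7 map to 0, 1, 3, 2, 6, 7, 5, 4.
```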

  15. Selective processing of auditory evoked responses with iterative-randomized stimulation and averaging: A strategy for evaluating the time-invariant assumption.

    PubMed

    Valderrama, Joaquin T; de la Torre, Angel; Medina, Carlos; Segura, Jose C; Thornton, A Roger D

    2016-03-01

    The recording of auditory evoked potentials (AEPs) at fast rates allows the study of neural adaptation, improves accuracy in estimating hearing threshold and may help in diagnosing certain pathologies. Stimulation sequences used to record AEPs at fast rates need to be designed with a certain jitter, i.e., they cannot be strictly periodic. Some authors believe that stimuli from wide-jittered sequences may evoke auditory responses of different morphology, in which case the time-invariant assumption would not hold. This paper describes a methodology that can be used to analyze the time-invariant assumption in jittered stimulation sequences. The proposed method (Split-IRSA) is based on an extended version of the iterative randomized stimulation and averaging (IRSA) technique, including selective processing of sweeps according to a predefined criterion. The fundamentals, the mathematical basis and relevant implementation guidelines of this technique are presented in this paper. The results of this study show that Split-IRSA performs adequately and that both fast and slow mechanisms of adaptation influence evoked-response morphology; thus both mechanisms should be considered when time-invariance is assumed. The significance of these findings is discussed. PMID:26778545
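
    The time-invariant assumption being probed is the premise of response averaging itself: epochs locked to each stimulus onset are averaged, which recovers the evoked response only if every stimulus evokes the same waveform. A minimal sketch of plain onset-locked epoch averaging; the full IRSA technique additionally deconvolves overlapping responses, which is omitted here:

```python
def average_sweeps(recording, onsets, sweep_len):
    """Average fixed-length epochs extracted at each stimulus onset."""
    sweeps = [recording[t:t + sweep_len] for t in onsets
              if t + sweep_len <= len(recording)]
    return [sum(s[i] for s in sweeps) / len(sweeps) for i in range(sweep_len)]
```

    If the response morphology drifts across sweeps, the average smears; selective processing of sweep subsets, as in Split-IRSA, is one way to make such drift visible.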

  16. Seeing the song: left auditory structures may track auditory-visual dynamic alignment.

    PubMed

    Mossbridge, Julia A; Grabowecky, Marcia; Suzuki, Satoru

    2013-01-01

    Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment. PMID:24194873

  18. Gene-Auto: Automatic Software Code Generation for Real-Time Embedded Systems

    NASA Astrophysics Data System (ADS)

    Rugina, A.-E.; Thomas, D.; Olive, X.; Veran, G.

    2008-08-01

    This paper gives an overview of the Gene-Auto ITEA European project, which aims at building a qualified C code generator from mathematical models under Matlab-Simulink and Scilab-Scicos. The project is driven by major European industry partners, active in the real-time embedded systems domains. The Gene-Auto code generator will significantly improve the current development processes in such domains by shortening the time to market and by guaranteeing the quality of the generated code through the use of formal methods. The first version of the Gene-Auto code generator has already been released and has gone through a validation phase on real-life case studies defined by each project partner. The validation results are taken into account in the implementation of the second version of the code generator. The partners aim at introducing the Gene-Auto results into industrial development by 2010.

  19. Quantum circuit for optimal eavesdropping in quantum key distribution using phase-time coding

    SciTech Connect

    Kronberg, D. A.; Molotkov, S. N.

    2010-07-15

    A quantum circuit is constructed for optimal eavesdropping on quantum key distribution protocols using phase-time coding, and its physical implementation based on linear and nonlinear fiber-optic components is proposed.

  20. Auditory sequence analysis and phonological skill.

    PubMed

    Grube, Manon; Kumar, Sukhbinder; Cooper, Freya E; Turton, Stuart; Griffiths, Timothy D

    2012-11-01

    This work tests the relationship between auditory and phonological skill in a non-selected cohort of 238 school students (age 11) with the specific hypothesis that sound-sequence analysis would be more relevant to phonological skill than the analysis of basic, single sounds. Auditory processing was assessed across the domains of pitch, time and timbre; a combination of six standard tests of literacy and language ability was used to assess phonological skill. A significant correlation between general auditory and phonological skill was demonstrated, plus a significant, specific correlation between measures of phonological skill and the auditory analysis of short sequences in pitch and time. The data support a limited but significant link between auditory and phonological ability with a specific role for sound-sequence analysis, and provide a possible new focus for auditory training strategies to aid language development in early adolescence. PMID:22951739

  1. Auditory short-term memory in the primate auditory cortex.

    PubMed

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory. PMID:26541581

  2. Change in the coding of interaural time difference along the tonotopic axis of the chicken nucleus laminaris

    PubMed Central

    Palanca-Castan, Nicolas; Köppl, Christine

    2015-01-01

    Interaural time differences (ITDs) are an important cue for the localization of sounds in azimuthal space. Both birds and mammals have specialized, tonotopically organized nuclei in the brain stem for the processing of ITD: medial superior olive in mammals and nucleus laminaris (NL) in birds. The specific way in which ITDs are derived was long assumed to conform to a delay-line model in which arrays of systematically arranged cells create a representation of auditory space with different cells responding maximally to specific ITDs. This model was supported by data from barn owl NL taken from regions above 3 kHz and from chicken above 1 kHz. However, data from mammals often do not show defining features of the Jeffress model such as a systematic topographic representation of best ITDs or the presence of axonal delay lines, and an alternative has been proposed in which neurons are not topographically arranged with respect to ITD and coding occurs through the assessment of the overall response of two large neuron populations, one in each hemisphere. Modeling studies have suggested that the presence of different coding systems could be related to the animal’s head size and frequency range rather than their phylogenetic group. Testing this hypothesis requires data from across the tonotopic range of both birds and mammals. The aim of this study was to obtain in vivo recordings from neurons in the low-frequency range (<1000 Hz) of chicken NL. Our data argues for the presence of a modified Jeffress system that uses the slopes of ITD-selective response functions instead of their peaks to topographically represent ITD at mid- to high frequencies. At low frequencies, below several 100 Hz, the data did not support any current model of ITD coding. This is different to what was previously shown in the barn owl and suggests that constraints in optimal ITD processing may be associated with the particular demands on sound localization determined by the animal’s ecological niche
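
    The delay-line account can be illustrated by treating each ITD-tuned neuron as a coincidence detector at one internal delay: the best ITD is then the lag that maximizes the cross-correlation of the two ears' signals. A minimal sketch in samples rather than microseconds, for illustration only and not the analysis used in the study:

```python
def xcorr_at(left, right, lag):
    """Correlation of left[i] with right[i + lag], zero-padded at the edges."""
    return sum(left[i] * right[i + lag]
               for i in range(len(left)) if 0 <= i + lag < len(right))

def best_itd(left, right, max_lag):
    """Lag (in samples) with the strongest coincidence count."""
    return max(range(-max_lag, max_lag + 1),
               key=lambda lag: xcorr_at(left, right, lag))

# A click reaching the right ear 3 samples later yields a best ITD of 3.
```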

  4. Time-frequency analysis of transient evoked-otoacoustic emissions in individuals with auditory neuropathy spectrum disorder.

    PubMed

    Narne, Vijaya Kumar; Prabhu, P Prashanth; Chatni, Suma

    2014-07-01

    The aim of the study was to describe and quantify the cochlear active mechanisms in individuals with Auditory Neuropathy Spectrum Disorders (ANSD). Transient Evoked Otoacoustic Emissions (TEOAEs) were recorded in 15 individuals with ANSD and 22 individuals with normal hearing. TEOAEs were analyzed by Wavelet transform method to describe and quantify the characteristics of TEOAEs in narrow-band frequency regions. It was noted that the amplitude of TEOAEs was higher and latency slightly shorter in individuals with ANSD compared to normal hearing individuals at low and mid frequencies. The increased amplitude and reduced latencies of TEOAEs in ANSD group could be attributed to the efferent system damage, especially at low and mid frequencies seen in individuals with ANSD. Thus, wavelet analysis of TEOAEs proves to be another important tool to understand the patho-physiology in individuals with ANSD. PMID:24768764

  5. Electrophysiological study of auditory development.

    PubMed

    Lippé, S; Martinez-Montes, E; Arcand, C; Lassonde, M

    2009-12-15

    Cortical auditory evoked potential (CAEP) testing, a non-invasive technique, is widely employed to study auditory brain development. The aim of this study was to investigate the development of the auditory electrophysiological signal without addressing specific abilities such as speech or music discrimination. We were interested in the temporal and spectral domains of conventional auditory evoked potentials. We analyzed cerebral responses to auditory stimulation (broadband noises) in 40 infants and children (1 month to 5 years 6 months) and 10 adults using high-density electrophysiological recording. We hypothesized that the adult auditory response has precursors that can be identified in infant and child responses. Results confirm that complex adult CAEP responses and spectral activity patterns appear after 5 years, showing decreased involvement of lower frequencies and increased involvement of higher frequencies. In addition, time-locked response to stimulus and event-related spectral perturbation across frequencies revealed alpha and beta band contributions to the CAEP of infants and toddlers before maturation into the beta and gamma band activity of the adult response. A detailed analysis of electrophysiological responses to a perceptual stimulation revealed general development patterns and developmental precursors of the adult response. PMID:19665050

  6. The influence of cochlear spectral processing on the timing and amplitude of the speech-evoked auditory brain stem response

    PubMed Central

    Nuttall, Helen E.; Moore, David R.; Barry, Johanna G.; Krumbholz, Katrin

    2015-01-01

    The speech-evoked auditory brain stem response (speech ABR) is widely considered to provide an index of the quality of neural temporal encoding in the central auditory pathway. The aim of the present study was to evaluate the extent to which the speech ABR is shaped by spectral processing in the cochlea. High-pass noise masking was used to record speech ABRs from delimited octave-wide frequency bands between 0.5 and 8 kHz in normal-hearing young adults. The latency of the frequency-delimited responses decreased from the lowest to the highest frequency band by up to 3.6 ms. The observed frequency-latency function was compatible with model predictions based on wave V of the click ABR. The frequency-delimited speech ABR amplitude was largest in the 2- to 4-kHz frequency band and decreased toward both higher and lower frequency bands despite the predominance of low-frequency energy in the speech stimulus. We argue that the frequency dependence of speech ABR latency and amplitude results from the decrease in cochlear filter width with decreasing frequency. The results suggest that the amplitude and latency of the speech ABR may reflect interindividual differences in cochlear, as well as central, processing. The high-pass noise-masking technique provides a useful tool for differentiating between peripheral and central effects on the speech ABR. It can be used for further elucidating the neural basis of the perceptual speech deficits that have been associated with individual differences in speech ABR characteristics. PMID:25787954

  7. Real-time transmission of digital video using variable-length coding

    NASA Technical Reports Server (NTRS)

    Bizon, Thomas P.; Shalkhauser, Mary Jo; Whyte, Wayne A., Jr.

    1993-01-01

    Huffman coding is a variable-length lossless compression technique where data with a high probability of occurrence is represented with short codewords, while 'not-so-likely' data is assigned longer codewords. Compression is achieved when the high-probability levels occur so frequently that their benefit outweighs any penalty paid when a less likely input occurs. One instance where Huffman coding is extremely effective occurs when data is highly predictable and differential coding can be applied (as with a digital video signal). For that reason, it is desirable to apply this compression technique to digital video transmission; however, special care must be taken in order to implement a communication protocol utilizing Huffman coding. This paper addresses several of the issues relating to the real-time transmission of Huffman-coded digital video over a constant-rate serial channel. Topics discussed include data rate conversion (from variable to a fixed rate), efficient data buffering, channel coding, recovery from communication errors, decoder synchronization, and decoder architectures. A description of the hardware developed to execute Huffman coding and serial transmission is also included. Although this paper focuses on matters relating to Huffman-coded digital video, the techniques discussed can easily be generalized for a variety of applications which require transmission of variable-length data.
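
    The codeword-length asymmetry described above is easy to see in a concrete Huffman construction. A minimal sketch of the generic textbook algorithm, not the hardware codebook of the paper:

```python
import heapq

def huffman_code(freqs):
    """Build a prefix-free code; frequent symbols receive short codewords."""
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)  # merge the two least likely subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

# For frequencies a:5 b:2 c:1 d:1, the common symbol gets a 1-bit codeword
# while the rare ones get 3 bits.
```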

  9. Neural Representation of Concurrent Vowels in Macaque Primary Auditory Cortex.

    PubMed

    Fishman, Yonatan I; Micheyl, Christophe; Steinschneider, Mitchell

    2016-01-01

    Successful speech perception in real-world environments requires that the auditory system segregate competing voices that overlap in frequency and time into separate streams. Vowels are major constituents of speech and are comprised of frequencies (harmonics) that are integer multiples of a common fundamental frequency (F0). The pitch and identity of a vowel are determined by its F0 and spectral envelope (formant structure), respectively. When two spectrally overlapping vowels differing in F0 are presented concurrently, they can be readily perceived as two separate "auditory objects" with pitches at their respective F0s. A difference in pitch between two simultaneous vowels provides a powerful cue for their segregation, which in turn, facilitates their individual identification. The neural mechanisms underlying the segregation of concurrent vowels based on pitch differences are poorly understood. Here, we examine neural population responses in macaque primary auditory cortex (A1) to single and double concurrent vowels (/a/ and /i/) that differ in F0 such that they are heard as two separate auditory objects with distinct pitches. We find that neural population responses in A1 can resolve, via a rate-place code, lower harmonics of both single and double concurrent vowels. Furthermore, we show that the formant structures, and hence the identities, of single vowels can be reliably recovered from the neural representation of double concurrent vowels. We conclude that A1 contains sufficient spectral information to enable concurrent vowel segregation and identification by downstream cortical areas. PMID:27294198
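
    The harmonic relationship exploited by the rate-place code is plain arithmetic: each vowel's components sit at integer multiples of its F0, so two vowels with different F0s coincide only at multiples of the two F0s' least common multiple. A small worked example; the F0 values are illustrative, not the stimuli of the study:

```python
def harmonics(f0, fmax):
    """Harmonic frequencies of a periodic sound: k * F0 up to fmax."""
    return [k * f0 for k in range(1, int(fmax // f0) + 1)]

# Vowels at F0 = 100 Hz and 125 Hz share components only at multiples of
# lcm(100, 125) = 500 Hz, so their lower harmonics remain separable by place.
shared = sorted(set(harmonics(100, 2000)) & set(harmonics(125, 2000)))
```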

  10. Investigate Methods to Decrease Compilation Time - AX-Program Code Group Computer Science R&D Project

    SciTech Connect

    Cottom, T

    2003-06-11

    Large simulation codes can take on the order of hours to compile from scratch. In Kull, which uses generic programming techniques, a significant portion of the time is spent generating and compiling template instantiations. I would like to investigate methods that would decrease the overall compilation time for large codes. These would be methods which could then be applied, hopefully, as standard practice to any large code. Success is measured by the overall decrease in wall clock time a developer spends waiting for an executable. Analyzing the make system of a slow-to-build project can benefit all developers on the project. Taking the time to analyze the number of processors used over the life of the build and restructuring the system to maximize the parallelization can significantly reduce build times. Distributing the build across multiple machines with the same configuration can increase the number of available processors for building and can help evenly balance the load. Becoming familiar with compiler options can have its benefits as well. Together, these time improvements can be significant. Initial compilation time for Kull on OSF1 was approximately 3 hours. Final time on OSF1 after completion is 16 minutes. Initial compilation time for Kull on AIX was approximately 2 hours. Final time on AIX after completion is 25 minutes. Developers now spend 3 hours less waiting for a Kull executable on OSF1, and 2 hours less on AIX platforms. In the eyes of many Kull code developers, the project was a huge success.

  11. Real-time minimal bit error probability decoding of convolutional codes

    NASA Technical Reports Server (NTRS)

    Lee, L. N.

    1973-01-01

    A recursive procedure is derived for decoding of rate R = 1/n binary convolutional codes which minimizes the probability of error in the individual decoding decisions for each information bit, subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e. fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications such as in the inner coding system for concatenated coding.
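
    The Viterbi baseline referred to above can be sketched compactly for the textbook rate-1/2, constraint-length-3 code with generators (7, 5) in octal. This is a generic hard-decision illustration, not the paper's specific code or its new minimal-bit-error-probability algorithm:

```python
G = (0b111, 0b101)  # generator polynomials (7, 5) in octal

def parity(x):
    return bin(x).count("1") & 1

def conv_encode(bits):
    """Rate-1/2 convolutional encoder; emits two output bits per input bit."""
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state          # [current bit, two previous bits]
        out += [parity(reg & g) for g in G]
        state = reg >> 1                # slide the register
    return out

def viterbi_decode(received, n_bits):
    """Hard-decision Viterbi: keep the minimum-Hamming-cost path per state."""
    metrics, paths = {0: 0}, {0: []}
    for t in range(n_bits):
        r = received[2 * t: 2 * t + 2]
        new_metrics, new_paths = {}, {}
        for state, m in metrics.items():
            for b in (0, 1):
                reg = (b << 2) | state
                cost = m + sum(parity(reg & g) != x for g, x in zip(G, r))
                ns = reg >> 1
                if ns not in new_metrics or cost < new_metrics[ns]:
                    new_metrics[ns] = cost
                    new_paths[ns] = paths[state] + [b]
        metrics, paths = new_metrics, new_paths
    best = min(metrics, key=metrics.get)
    return paths[best]

# With free distance 5, a single flipped channel bit is always corrected.
```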

  12. Digital coherent detection research on Brillouin optical time domain reflectometry with simplex pulse codes

    NASA Astrophysics Data System (ADS)

    Hao, Yun-Qi; Ye, Qing; Pan, Zheng-Qing; Cai, Hai-Wen; Qu, Rong-Hui

    2014-11-01

    The digital coherent detection technique has been investigated without any frequency-scanning device in Brillouin optical time domain reflectometry (BOTDR), where simplex pulse codes are applied in the sensing system. The time-domain signal of every code sequence is collected by a data acquisition card (DAQ). A shift-averaging technique is applied in the frequency domain because the local oscillator (LO) in the coherent detection is offset from the primary source by a fixed frequency. With the 31-bit simplex code, the signal-to-noise ratio (SNR) shows a 3.5-dB enhancement for the same number of single-pulse traces, consistent with the theoretical analysis. The frequency fluctuation for simplex codes is 14.01 MHz less than that for a single pulse at 4-m spatial resolution. The results are believed to be beneficial for improving BOTDR performance.
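    Simplex pulse codes of the kind used here are conventionally built from a Sylvester Hadamard matrix, and the single-pulse traces are recovered by a linear inverse. The sketch below shows that generic construction and decode step only; the 31-bit size, the paper's shift-averaging, and all instrument details are omitted.

    ```python
    def hadamard(n):
        """Sylvester Hadamard matrix of order n (n a power of two), entries +/-1."""
        H = [[1]]
        while len(H) < n:
            H = [row + row for row in H] + [row + [-x for x in row] for row in H]
        return H

    def simplex(n):
        """L x L simplex (S) matrix with L = n - 1: drop the first row and column
        of H_n and map +1 -> 0, -1 -> 1 (pulse off/on)."""
        H = hadamard(n)
        return [[(1 - H[i][j]) // 2 for j in range(1, n)] for i in range(1, n)]

    def simplex_decode(S, y):
        """Recover single-pulse responses x from coded measurements y = S x,
        using the closed-form inverse S^-1 = (2 / (L + 1)) * (2 S^T - J)."""
        L = len(S)
        return [2.0 / (L + 1) * sum((2 * S[j][i] - 1) * y[j] for j in range(L))
                for i in range(L)]
    ```

    Because each coded trace sums roughly half the pulses, averaging noise is reduced relative to launching single pulses one at a time, which is the source of the coding gain.
    
    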

  13. Auditory nerve disease and auditory neuropathy spectrum disorders.

    PubMed

    Kaga, Kimitaka

    2016-02-01

    In 1996, a new type of bilateral hearing disorder was discerned and published almost simultaneously by Kaga et al. [1] and Starr et al. [2]. Although the pathophysiology of this disorder as reported by each author was essentially identical, Kaga used the term "auditory nerve disease" and Starr used the term "auditory neuropathy". Auditory neuropathy (AN) in adults is an acquired disorder characterized by mild-to-moderate pure-tone hearing loss, poor speech discrimination, and absence of the auditory brainstem response (ABR), all in the presence of normal cochlear outer hair cell function as indicated by normal distortion product otoacoustic emissions (DPOAEs) and evoked summating potentials (SPs) by electrocochleography (ECoG). A variety of processes and etiologies are thought to be involved in its pathophysiology, including mutations of the OTOF and/or OPA1 genes. Most of the subsequent reports in the literature discuss the various auditory profiles of patients with AN [3,4], and in this report we present the profiles of an additional 17 cases of adult AN. Cochlear implants are useful for the reacquisition of hearing in adult AN, although hearing aids are ineffective. In 2008, the new term Auditory Neuropathy Spectrum Disorders (ANSD) was proposed by the Colorado Children's Hospital group following a comprehensive study of newborn hearing test results. When ABRs were absent and DPOAEs were present in particular cases during newborn screening, they were classified as ANSD. In 2013, our group at the Tokyo Medical Center classified ANSD into three types by following changes in ABRs and DPOAEs over time with development. In Type I, hearing normalizes over time; in Type II, hearing progresses to profound loss; and Type III is true auditory neuropathy (AN). We emphasize that, in adults, ANSD is not the same as AN. PMID:26209259

  14. A New Quaternion Design for Space-Time-Polarization Block Code with Full Diversity

    NASA Astrophysics Data System (ADS)

    Ma, Huanfei; Kan, Haibin; Imai, Hideki

    Construction of quaternion designs for Space-Time-Polarization Block Codes (STPBCs) is an active but difficult research topic. This letter introduces a novel way to construct high-dimensional quaternion designs based on any existing low-dimensional quaternion orthogonal designs (QODs) for STPBC, while preserving the merits of the original QODs such as full diversity and simple decoding. Furthermore, it also provides a specific scheme to reach full diversity and maximize the coding gain by signal constellation rotation on the polarization plane.

  15. Fast minimum-redundancy prefix coding for real-time space data compression

    NASA Astrophysics Data System (ADS)

    Huang, Bormin

    2007-09-01

    The minimum-redundancy prefix-free code problem is to determine an array l = {l_1, ..., l_n} of n integer codeword lengths, given an array f = {f_1, ..., f_n} of n symbol occurrence frequencies, such that the Kraft-McMillan inequality sum_{i=1}^{n} 2^(-l_i) <= 1 holds and the total number of coded bits sum_{i=1}^{n} f_i * l_i is minimized. Previous minimum-redundancy prefix-free coding based on Huffman's greedy algorithm solves this problem in O(n) time if the input array f is sorted, but in O(n log n) time if f is unsorted. In this paper a fast algorithm is proposed to solve this problem in linear time when f is unsorted. It is suitable for real-time applications in satellite communication and consumer electronics. We also develop its VLSI architecture, which consists of four modules, namely, the frequency table builder, the codeword length table builder, the codeword table builder, and the input-to-codeword mapper.
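    For reference, the classical O(n log n) baseline that the paper improves on can be written directly: Huffman's greedy merging yields minimum-redundancy codeword lengths, which satisfy the Kraft-McMillan inequality with equality. This is a generic sketch, not the paper's linear-time algorithm or its VLSI architecture.

    ```python
    import heapq

    def huffman_lengths(freqs):
        """Minimum-redundancy codeword lengths via Huffman's greedy algorithm.
        Each merge pushes every symbol in the two merged subtrees one level deeper."""
        heap = [(f, [i]) for i, f in enumerate(freqs)]
        heapq.heapify(heap)
        lengths = [0] * len(freqs)
        while len(heap) > 1:
            f1, s1 = heapq.heappop(heap)
            f2, s2 = heapq.heappop(heap)
            for i in s1 + s2:
                lengths[i] += 1
            heapq.heappush(heap, (f1 + f2, s1 + s2))
        return lengths
    ```

    For the textbook frequency set {45, 13, 12, 16, 9, 5} this produces lengths {1, 3, 3, 3, 4, 4}, a total of 224 coded bits, with the Kraft sum equal to exactly 1.
    
    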

  16. Development of a variable time-step transient NEM code: SPANDEX

    SciTech Connect

    Aviles, B.N. )

    1993-01-01

    This paper describes a three-dimensional, variable time-step transient multigroup diffusion theory code, SPANDEX (space-time nodal expansion method). SPANDEX is based on the static nodal expansion method (NEM) code, NODEX (Ref. 1), and employs a nonlinear algorithm and a fifth-order expansion of the transverse-integrated fluxes. The time integration scheme in SPANDEX is a fourth-order implicit generalized Runge-Kutta method (GRK) with on-line error control and variable time-step selection. This Runge-Kutta method has been applied previously to point kinetics and one-dimensional finite difference transient analysis. This paper describes the application of the Runge-Kutta method to three-dimensional reactor transient analysis in a multigroup NEM code.
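    The idea of on-line error control with variable time-step selection can be illustrated with a much simpler explicit scheme than SPANDEX's implicit GRK method: classical RK4 with step doubling, where a Richardson error estimate drives the step size. Function names and tolerances below are illustrative, not taken from the paper.

    ```python
    def rk4_step(f, t, y, h):
        """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    def integrate(f, t0, y0, t_end, h=0.1, tol=1e-8):
        """Variable time-step integration with on-line error control: compare one
        full step against two half steps (Richardson estimate for an order-4
        method) and accept/resize the step to keep the local error near tol."""
        t, y = t0, y0
        while t_end - t > 1e-12:
            h = min(h, t_end - t)
            y_full = rk4_step(f, t, y, h)
            y_half = rk4_step(f, t + h / 2, rk4_step(f, t, y, h / 2), h / 2)
            err = abs(y_half - y_full) / 15.0
            if err <= tol:                 # accept the more accurate half-step result
                t, y = t + h, y_half
            # standard step-size controller for a fourth-order method
            h *= min(2.0, max(0.1, 0.9 * (tol / max(err, 1e-16)) ** 0.2))
        return y
    ```

    An implicit scheme such as the GRK method in SPANDEX additionally solves a linear system per stage, which is what makes it robust for stiff reactor kinetics; the step-selection logic, however, has the same shape.
    
    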

  17. 14 CFR 234.9 - Reporting of on-time performance codes.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... deliver, or arrange to have delivered, to each system vendor, as defined in 14 CFR part 255, the on-time... 14 Aeronautics and Space 4 2011-01-01 2011-01-01 false Reporting of on-time performance codes. 234.9 Section 234.9 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF...

  18. 14 CFR 234.9 - Reporting of on-time performance codes.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... deliver, or arrange to have delivered, to each system vendor, as defined in 14 CFR part 255, the on-time... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Reporting of on-time performance codes. 234.9 Section 234.9 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF...

  19. 14 CFR 234.9 - Reporting of on-time performance codes.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... deliver, or arrange to have delivered, to each system vendor, as defined in 14 CFR part 255, the on-time... 14 Aeronautics and Space 4 2014-01-01 2014-01-01 false Reporting of on-time performance codes. 234.9 Section 234.9 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF...

  20. 14 CFR 234.9 - Reporting of on-time performance codes.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... deliver, or arrange to have delivered, to each system vendor, as defined in 14 CFR part 255, the on-time... 14 Aeronautics and Space 4 2013-01-01 2013-01-01 false Reporting of on-time performance codes. 234.9 Section 234.9 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF...

  1. 14 CFR 234.9 - Reporting of on-time performance codes.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... deliver, or arrange to have delivered, to each system vendor, as defined in 14 CFR part 255, the on-time... 14 Aeronautics and Space 4 2012-01-01 2012-01-01 false Reporting of on-time performance codes. 234.9 Section 234.9 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF...

  2. Bat's auditory system: Corticofugal feedback and plasticity

    NASA Astrophysics Data System (ADS)

    Suga, Nobuo

    2001-05-01

    The auditory system of the mustached bat consists of physiologically distinct subdivisions for processing different types of biosonar information. It was found that the corticofugal (descending) auditory system plays an important role in improving and adjusting auditory signal processing. Repetitive acoustic stimulation, cortical electrical stimulation or auditory fear conditioning evokes plastic changes in the central auditory system. The changes are based upon egocentric selection evoked by focused positive feedback associated with lateral inhibition. Focal electric stimulation of the auditory cortex evokes short-term changes in the auditory cortex and subcortical auditory nuclei. An increase in the cortical acetylcholine level during the electric stimulation converts the cortical changes from short-term to long-term. There are two types of plastic changes (reorganizations): centripetal best frequency shifts for expanded reorganization of a neural frequency map, and centrifugal best frequency shifts for compressed reorganization of the map. Which change occurs depends on the balance between inhibition and facilitation. Expanded reorganization has been found in different sensory systems and different species of mammals, whereas compressed reorganization has thus far been found only in auditory subsystems highly specialized for echolocation. The two types of reorganization occur in both the frequency and time domains. [Work supported by NIDCD DC00175.]

  3. Rejection positivity predicts trial-to-trial reaction times in an auditory selective attention task: a computational analysis of inhibitory control

    PubMed Central

    Chen, Sufen; Melara, Robert D.

    2014-01-01

    A series of computer simulations using variants of a formal model of attention (Melara and Algom, 2003) probed the role of rejection positivity (RP), a slow-wave electroencephalographic (EEG) component, in the inhibitory control of distraction. Behavioral and EEG data were recorded as participants performed auditory selective attention tasks. Simulations that modulated processes of distractor inhibition accounted well for reaction-time (RT) performance, whereas those that modulated target excitation did not. A model that incorporated RP from actual EEG recordings in estimating distractor inhibition was superior in predicting changes in RT as a function of distractor salience across conditions. A model that additionally incorporated momentary fluctuations in EEG as the source of trial-to-trial variation in performance precisely predicted individual RTs within each condition. The results lend support to the linking proposition that RP controls the speed of responding to targets through the inhibitory control of distractors. PMID:25191244

  4. Neural processing of auditory signals in the time domain: delay-tuned coincidence detectors in the mustached bat.

    PubMed

    Suga, Nobuo

    2015-06-01

    The central auditory system produces combination-sensitive neurons tuned to a specific combination of multiple signal elements. Some of these neurons act as coincidence detectors with delay lines for the extraction of spectro-temporal information from sounds. "Delay-tuned" neurons of mustached bats are tuned to a combination of up to four signal elements with a specific delay between them and form a delay map. They are produced in the inferior colliculus by the coincidence of the rebound response following glycinergic inhibition to the first harmonic of a biosonar pulse with the short-latency response to the 2nd-4th harmonics of its echo. Compared with collicular delay-tuned neurons, thalamic and cortical ones respond more to pulse-echo pairs than to individual sounds. Cortical delay-tuned neurons are clustered in three separate areas. They interact with each other through a circuit mediating positive feedback and lateral inhibition for adjustment and improvement of the delay tuning of cortical and subcortical neurons. The current article reviews the mechanisms of delay tuning and the response properties of collicular, thalamic and cortical delay-tuned neurons in relation to hierarchical signal processing. PMID:25752443

  5. Performance of asynchronous fiber-optic code division multiple access system based on three-dimensional wavelength/time/space codes and its link analysis.

    PubMed

    Singh, Jaswinder

    2010-03-10

    A novel family of three-dimensional (3-D) wavelength/time/space codes for asynchronous optical code-division multiple-access (CDMA) systems with "zero" off-peak autocorrelation and "unity" cross-correlation is reported. Antipodal signaling and differential detection are employed in the system. A maximum of [(W x T + 1) x W] codes are generated for unity cross-correlation, where W and T are the numbers of wavelengths and time chips used in the code and are prime. The conditions for violation of the cross-correlation constraint are discussed. Expressions for the number of generated codes are determined for various code dimensions. It is found that the maximum number of codes is generated for S <= min(W, T), where W and T are prime and S is the number of space channels. The performance of these codes is compared to the earlier reported two-dimensional (2-D) and 3-D codes for asynchronous systems. The codes have a code-set-size to code-size ratio greater than W/S. For instance, with a code size of 2065 (59 x 7 x 5), a total of 12,213 users can be supported, and 130 simultaneous users at a bit-error rate (BER) of 10^-9. An arrayed-waveguide-grating-based reconfigurable encoder/decoder design for 2-D implementation of the 3-D codes is presented so that the need for multiple star couplers and fiber ribbons is eliminated. The hardware requirements of the coders used for various modulation/detection schemes are given. The effect of insertion loss in the coders is shown to be significantly reduced with loss compensation by using an amplifier after encoding. An optical CDMA system for four users is simulated and the results presented show the improvement in performance with the use of loss compensation. PMID:20220892

  6. Neural code alterations and abnormal time patterns in Parkinson’s disease

    NASA Astrophysics Data System (ADS)

    Andres, Daniela Sabrina; Cerquetti, Daniel; Merello, Marcelo

    2015-04-01

    Objective. The neural code used by the basal ganglia is a current question in neuroscience, relevant for the understanding of the pathophysiology of Parkinson’s disease. While a rate code is known to participate in the communication between the basal ganglia and the motor thalamus/cortex, different lines of evidence have also favored the presence of complex time patterns in the discharge of the basal ganglia. To gain insight into the way the basal ganglia code information, we studied the activity of the globus pallidus pars interna (GPi), an output node of the circuit. Approach. We implemented the 6-hydroxydopamine model of Parkinsonism in Sprague-Dawley rats, and recorded the spontaneous discharge of single GPi neurons, in head-restrained conditions at full alertness. Analyzing the temporal structure function, we looked for characteristic scales in the neuronal discharge of the GPi. Main results. At a low-scale, we observed the presence of dynamic processes, which allow the transmission of time patterns. Conversely, at a middle-scale, stochastic processes force the use of a rate code. Regarding the time patterns transmitted, we measured the word length and found that it is increased in Parkinson’s disease. Furthermore, it showed a positive correlation with the frequency of discharge, indicating that an exacerbation of this abnormal time pattern length can be expected, as the dopamine depletion progresses. Significance. We conclude that a rate code and a time pattern code can co-exist in the basal ganglia at different temporal scales. However, their normal balance is progressively altered and replaced by pathological time patterns in Parkinson’s disease.

  7. Interaural intensity and latency difference in the dolphin's auditory system.

    PubMed

    Popov, V. V.; Supin, A. Ya.

    1991-12-01

    Binaural hearing mechanisms were measured in dolphins (Inia geoffrensis) by recording the auditory nerve evoked response from the body surface. The azimuthal position of a sound source at 10-15 degrees from the longitudinal axis elicited an interaural intensity disparity of up to 20 dB and an interaural latency difference as large as 250 microseconds. The latter was many times greater than the acoustic interaural time delay. This latency difference seems to be caused by the intensity disparity and appears to be an effective way of coding that disparity. PMID:1816509

  8. Timing group delay and differential code bias corrections for BeiDou positioning

    NASA Astrophysics Data System (ADS)

    Guo, Fei; Zhang, Xiaohong; Wang, Jinling

    2015-05-01

    This article first clarifies the relationship between the timing group delay (TGD) and differential code bias (DCB) parameters for BDS, and demonstrates the equivalence of the TGD and DCB correction models both in theory and in practice. The TGD/DCB correction models have been extended to various scenarios for BDS positioning, and such models have been evaluated with real triple-frequency datasets. To test the effectiveness of broadcast TGDs in the navigation message and DCBs provided by the Multi-GNSS Experiment (MGEX), both standard point positioning (SPP) and precise point positioning (PPP) tests are carried out for BDS signals with different schemes. Furthermore, the influence of differential code biases on BDS positioning estimates such as coordinates, receiver clock biases, tropospheric delays and carrier phase ambiguities is investigated comprehensively. Comparative analyses show that unmodeled differential code biases degrade the performance of BDS SPP by a factor of two or more, whereas the PPP estimates are subject to varying degrees of influence. For SPP, the accuracy of dual-frequency combinations is slightly worse than that of single-frequency ones, and they are much more sensitive to the differential code biases, particularly the B2B3 combination. For PPP, the uncorrected differential code biases are mostly absorbed into the receiver clock bias and carrier phase ambiguities, resulting in a much longer convergence time. Even though the influence of the differential code biases can be mitigated over time and comparable positioning accuracy can be achieved after convergence, it is suggested to properly handle the differential code biases since they are vital for PPP convergence and integer ambiguity resolution.
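    As a minimal illustration of what a TGD correction does (not the article's model), a single-frequency pseudorange is corrected by subtracting the broadcast group delay scaled by the speed of light. The sign convention and which TGD term applies depend on the signal and the interface control document; the values below are purely illustrative.

    ```python
    C = 299_792_458.0  # speed of light in vacuum, m/s

    def apply_tgd(pseudorange_m, tgd_s):
        """Apply a broadcast timing group delay (seconds) to a pseudorange (meters).
        Illustrative sign convention only; consult the BDS ICD for each signal."""
        return pseudorange_m - C * tgd_s
    ```

    A few nanoseconds of uncorrected group delay translate to decimeters-to-meters of range error, which is why neglecting TGD/DCB visibly degrades SPP and slows PPP convergence.
    
    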

  9. AUDITORY CORTICAL PLASTICITY: DOES IT PROVIDE EVIDENCE FOR COGNITIVE PROCESSING IN THE AUDITORY CORTEX?

    PubMed Central

    Irvine, Dexter R. F.

    2007-01-01

    The past 20 years have seen substantial changes in our view of the nature of the processing carried out in auditory cortex. Some processing of a cognitive nature, previously attributed to higher order “association” areas, is now considered to take place in auditory cortex itself. One argument adduced in support of this view is the evidence indicating a remarkable degree of plasticity in the auditory cortex of adult animals. Such plasticity has been demonstrated in a wide range of paradigms, in which auditory input or the behavioural significance of particular inputs is manipulated. Changes over the same time period in our conceptualization of the receptive fields of cortical neurons, and well-established mechanisms for use-related changes in synaptic function, can account for many forms of auditory cortical plasticity. On the basis of a review of auditory cortical plasticity and its probable mechanisms, it is argued that only plasticity associated with learning tasks provides a strong case for cognitive processing in auditory cortex. Even in this case the evidence is indirect, in that it has not yet been established that the changes in auditory cortex are necessary for behavioural learning and memory. Although other lines of evidence provide convincing support for cognitive processing in auditory cortex, that provided by auditory cortical plasticity remains equivocal. PMID:17303356

  10. Zipf's Law in Short-Time Timbral Codings of Speech, Music, and Environmental Sound Signals

    PubMed Central

    Haro, Martín; Serrà, Joan; Herrera, Perfecto; Corral, Álvaro

    2012-01-01

    Timbre is a key perceptual feature that allows discrimination between different sounds. Timbral sensations are highly dependent on the temporal evolution of the power spectrum of an audio signal. In order to quantitatively characterize such sensations, the shape of the power spectrum has to be encoded in a way that preserves certain physical and perceptual properties. Therefore, it is common practice to encode short-time power spectra using psychoacoustical frequency scales. In this paper, we study and characterize the statistical properties of such encodings, here called timbral code-words. In particular, we report on rank-frequency distributions of timbral code-words extracted from 740 hours of audio coming from disparate sources such as speech, music, and environmental sounds. Analogously to text corpora, we find a heavy-tailed Zipfian distribution with exponent close to one. Importantly, this distribution is found independently of different encoding decisions and regardless of the audio source. Further analysis of the intrinsic characteristics of the most and least frequent code-words reveals that the most frequent code-words tend to have a more homogeneous structure. We also find that the speech and music databases have specific, distinctive code-words while, in the case of the environmental sounds, these database-specific code-words are not present. Finally, we find that a Yule-Simon process with memory provides a reasonable quantitative approximation for our data, suggesting the existence of a common simple generative mechanism for all considered sound sources. PMID:22479497
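    A rank-frequency exponent of the kind reported here can be estimated with a simple log-log least-squares fit. The sketch below is generic and assumes nothing about the paper's timbral encoding; maximum-likelihood estimators are preferable for careful tail analysis.

    ```python
    import math
    from collections import Counter

    def zipf_exponent(tokens):
        """Estimate the Zipf exponent as minus the least-squares slope of
        log(frequency) versus log(rank)."""
        freqs = sorted(Counter(tokens).values(), reverse=True)
        xs = [math.log(rank) for rank in range(1, len(freqs) + 1)]
        ys = [math.log(f) for f in freqs]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
        return -slope
    ```

    On synthetic data whose frequencies fall off as 1/rank, the estimate recovers an exponent close to one, the value the paper reports for timbral code-words.
    
    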

  11. Just in time? Using QR codes for multi-professional learning in clinical practice.

    PubMed

    Jamu, Joseph Tawanda; Lowi-Jones, Hannah; Mitchell, Colin

    2016-07-01

    Clinical guidelines and policies are widely available on the hospital intranet or from the internet, but can be difficult to access at the required time and place. Clinical staff with smartphones could use Quick Response (QR) codes for contemporaneous access to relevant information to support the Just in Time Learning (JIT-L) paradigm. There are several studies that advocate the use of smartphones to enhance learning amongst medical students and junior doctors in UK. However, these participants are already technologically orientated. There are limited studies that explore the use of smartphones in nursing practice. QR Codes were generated for each topic and positioned at relevant locations on a medical ward. Support and training were provided for staff. Website analytics and semi-structured interviews were performed to evaluate the efficacy, acceptability and feasibility of using QR codes to facilitate Just in Time learning. Use was intermittently high but not sustained. Thematic analysis of interviews revealed a positive assessment of the Just in Time learning paradigm and context-sensitive clinical information. However, there were notable barriers to acceptance, including usability of QR codes and appropriateness of smartphone use in a clinical environment. The use of Just in Time learning for education and reference may be beneficial to healthcare professionals. However, alternative methods of access for less technologically literate users and a change in culture of mobile device use in clinical areas may be needed. PMID:27428702

  12. VizieR Online Data Catalog: ynogkm: code for calculating time-like geodesics (Yang+, 2014)

    NASA Astrophysics Data System (ADS)

    Yang, X.-L.; Wang, J.-C.

    2013-11-01

    Here we present the source file for a new public code named ynogkm, aimed at fast calculation of time-like geodesics in a Kerr-Newman spacetime. In the code the four Boyer-Lindquist coordinates and the proper time are expressed as functions of a parameter p semi-analytically, i.e., r(p), μ(p), φ(p), t(p), and σ(p), by using Weierstrass' and Jacobi's elliptic functions and integrals. All of the elliptic integrals are computed by Carlson's elliptic integral method, which guarantees the fast speed of the code. The source Fortran file ynogkm.f90 contains the modules constants, rootfind, ellfunction, and blcoordinates. (3 data files).

  13. Development of the N1-P2 auditory evoked response to amplitude rise time and rate of formant transition of speech sounds.

    PubMed

    Carpenter, Allen L; Shahin, Antoine J

    2013-06-01

    We investigated the development of weighting strategies for acoustic cues by examining the morphology of the N1-P2 auditory evoked potential (AEP) in response to changes in amplitude rise time (ART) and rate of formant transition (RFT) of consonant-vowel (CV) pairs in 4-6-year-olds and adults. In the AEP session, individuals listened passively to the CVs /ba/, /wa/, and a /ba/ with a superimposed slower-rising /wa/ envelope (/ba/(wa)). In the behavioral session, individuals listened to the same stimuli and judged whether they heard a /ba/ or /wa/. We hypothesized that a developmental shift in weighting strategies should be reflected in a change in the morphology of the N1-P2 AEP. In 6-year-olds and adults, the N1-P2 amplitude at the vertex reflected a change in RFT but not in ART. In contrast, in the 4-5-year-olds, the vertex N1-P2 did not show specificity to changes in ART or RFT. In all groups, the N1-P2 amplitude at channel C4 (right hemisphere) reflected a change in ART but not in RFT. Behaviorally, 6-year-olds and adults predominantly utilized RFT cues (classified /ba/(wa) as /ba/) during phonetic judgments, as opposed to 4-5-year-olds, who utilized both cues equally. Our findings suggest that both ART and RFT are encoded in the auditory cortex, but an N1-P2 shift toward the vertex after age 4-5 indicates a shift toward an adult-like weighting strategy in which RFT is utilized to a greater extent. PMID:23570734

  14. On the effect of timing errors in run length codes. [redundancy removal algorithms for digital channels

    NASA Technical Reports Server (NTRS)

    Wilkins, L. C.; Wintz, P. A.

    1975-01-01

    Many redundancy removal algorithms employ some sort of run-length code. Blocks of timing words are coded with synchronization words inserted between blocks. The probability of incorrectly reconstructing a sample because of a channel error in the timing data is a monotonically nondecreasing function of time since the last synchronization word. In this paper we compute the probability that the accumulated magnitude of timing errors equals zero as a function of time since the last synchronization word for a zero-order predictor (ZOP). The result is valid for any data source that can be modeled by a first-order Markov chain and any digital channel that can be modeled by a channel transition matrix. An example is presented.
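    A toy version of the quantity computed in the paper, the probability that the accumulated timing error is still zero k words after the last synchronization word, can be obtained by repeatedly convolving a per-word error distribution. The i.i.d. error model below is a deliberate simplification of the paper's first-order Markov-chain formulation.

    ```python
    def p_zero_error(err, n_words):
        """Probability that the accumulated timing error equals zero after each of
        n_words code words, given an i.i.d. per-word error distribution `err`
        (a dict mapping error value -> probability)."""
        dist = {0: 1.0}
        history = []
        for _ in range(n_words):
            new = {}
            for v, p in dist.items():          # convolve the running distribution
                for e, q in err.items():       # with the per-word error distribution
                    new[v + e] = new.get(v + e, 0.0) + p * q
            dist = new
            history.append(dist.get(0, 0.0))
        return history
    ```

    The resulting sequence is nonincreasing, mirroring the paper's observation that reconstruction reliability decays monotonically with distance from the last synchronization word.
    
    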

  15. Auditory Imagery: Empirical Findings

    ERIC Educational Resources Information Center

    Hubbard, Timothy L.

    2010-01-01

    The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d)…

  16. Computer code for space-time diagnostics of nuclear safety parameters

    SciTech Connect

    Solovyev, D. A.; Semenov, A. A.; Gruzdov, F. V.; Druzhaev, A. A.; Shchukin, N. V.; Dolgenko, S. G.; Solovyeva, I. V.; Ovchinnikova, E. A.

    2012-07-01

    The computer code ECRAN 3D (Experimental and Calculation Reactor Analysis) is designed for continuous monitoring and diagnostics of reactor cores and databases for RBMK-1000, on the basis of analytical methods for the interrelation of nuclear safety parameters. The code algorithms are based on the analysis of deviations between physically measured figures and the results of neutron-physical and thermal-hydraulic calculations. Discrepancies between the measured and calculated signals correspond to a mismatch between the performance of the physical device and that of its simulator. The diagnostics system can solve the following problems: identification of the occurrence and timing of inconsistent results, localization of failures, and identification and quantification of the causes of the inconsistencies. These problems can be effectively solved only when the computer code is working in real-time mode, which places greater demands on code performance. As false operations can lead to significant economic losses, the diagnostics system must be based on certified software tools. POLARIS version 4.2.1 is used for the neutron-physical calculation in the computer code ECRAN 3D. (authors)

  17. Auditory Training for Central Auditory Processing Disorder.

    PubMed

    Weihing, Jeffrey; Chermak, Gail D; Musiek, Frank E

    2015-11-01

    Auditory training (AT) is an important component of rehabilitation for patients with central auditory processing disorder (CAPD). The present article identifies and describes aspects of AT as they relate to applications in this population. A description of the types of auditory processes along with information on relevant AT protocols that can be used to address these specific deficits is included. Characteristics and principles of effective AT procedures also are detailed in light of research that reflects on their value. Finally, research investigating AT in populations who show CAPD or present with auditory complaints is reported. Although efficacy data in this area are still emerging, current findings support the use of AT for treatment of auditory difficulties. PMID:27587909

  18. A novel repetition space-time coding scheme for mobile FSO systems

    NASA Astrophysics Data System (ADS)

    Li, Ming; Cao, Yang; Li, Shu-ming; Yang, Shao-wen

    2015-03-01

    Considering the influence of stronger atmospheric turbulence, worse pointing errors and a highly dynamic link on the transmission performance of mobile multiple-input multiple-output (MIMO) free space optics (FSO) communication systems, this paper establishes a channel model for the mobile platform. Based on the combination of the Alamouti space-time code and time hopping ultra-wide band (TH-UWB) communications, a novel repetition space-time coding (RSTC) method for mobile 2×2 free-space optical communications with pulse position modulation (PPM) is developed. In particular, two decoding methods, equal gain combining (EGC) maximum likelihood detection (MLD) and correlation matrix detection (CMD), are derived. When a quasi-static fading and weak-turbulence channel model is considered, simulation results show that whether the channel state information (CSI) is known or not, the coded system achieves significantly better symbol error rate (SER) performance than the uncoded one. In other words, transmit diversity can be achieved while conveying the information only through the time delays of the modulated signals transmitted from different antennas. CMD has almost the same signal-combining effect as maximal ratio combining (MRC). However, when the channel correlation increases, the SER performance of the coded 2×2 system degrades significantly.
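    The Alamouti block code that the proposed RSTC scheme builds on can be sketched for the classic two-transmit-antenna case with a known channel; PPM, TH-UWB, and the optical channel model are omitted here. The combiner restores orthogonality, so each symbol estimate is the transmitted symbol scaled by |h1|^2 + |h2|^2, which is where the diversity gain comes from.

    ```python
    def alamouti_encode(s1, s2):
        """Two complex symbols over two antennas and two time slots:
        slot 1 sends (s1, s2), slot 2 sends (-s2*, s1*)."""
        return [(s1, s2), (-s2.conjugate(), s1.conjugate())]

    def alamouti_combine(r1, r2, h1, h2):
        """Linear combining with known channel gains h1, h2; returns symbol
        estimates scaled by |h1|^2 + |h2|^2."""
        s1_hat = h1.conjugate() * r1 + h2 * r2.conjugate()
        s2_hat = h2.conjugate() * r1 - h1 * r2.conjugate()
        return s1_hat, s2_hat
    ```

    In a noiseless run the combiner output, divided by the channel gain, reproduces the transmitted symbols exactly.
    
    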

  19. The Role of Coding Time in Estimating and Interpreting Growth Curve Models.

    ERIC Educational Resources Information Center

    Biesanz, Jeremy C.; Deeb-Sossa, Natalia; Papadakis, Alison A.; Bollen, Kenneth A.; Curran, Patrick J.

    2004-01-01

    The coding of time in growth curve models has important implications for the interpretation of the resulting model that are sometimes not transparent. The authors develop a general framework that includes predictors of growth curve components to illustrate how parameter estimates and their standard errors are exactly determined as a function of…
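
The central point, that recoding time changes the intercept's meaning but not the growth rate, can be illustrated with a minimal least-squares sketch (ordinary polynomial fitting, not the authors' general framework):

```python
import numpy as np

# Four measurement waves of an exactly linear trajectory y = 10 + 2*t.
waves = np.arange(4.0)
y = 10 + 2 * waves

# Coding A: time = 0,1,2,3 -> the intercept is status at the FIRST wave.
slope_a, icept_a = np.polyfit(waves, y, 1)

# Coding B: time = -3,...,0 -> the intercept is status at the LAST wave.
slope_b, icept_b = np.polyfit(waves - 3, y, 1)

assert np.isclose(slope_a, slope_b)                  # growth rate is invariant
assert np.isclose(icept_a, 10) and np.isclose(icept_b, 16)
```

The same data yield intercepts of 10 and 16 under the two codings, since the intercept is always "predicted status where time equals zero."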

  20. Process timing and its relation to the coding of tonal harmony.

    PubMed

    Aksentijevic, Aleksandar; Barber, Paul J; Elliott, Mark A

    2011-10-01

    Advances in auditory research suggest that gamma-band synchronization of frequency-specific cortical loci could be responsible for the integration of pure tones (harmonics) into harmonic complex tones. Thus far, evidence for such a mechanism has been revealed in neurophysiological studies, with little corroborative psychophysical evidence. In six experiments, we observed a rate- and time-specific response-time advantage for a sequence of target pips when the defining frequency of the target was a fractional multiple of a priming frequency. The effect was only observed when the prime and target sequences were presented at 33 pips per second and when the interstimulus interval was approximately 100 and 250 ms. This evidence implicates oscillatory gamma-band activity in the representation of harmonic complex tones and suggests that synchronization with precise temporal characteristics is important for disambiguating related harmonic templates. An outline of a model is presented, which accounts for these findings in terms of fast resynchronization of relevant neuronal assemblies. PMID:21500937

  1. Solution of 3-dimensional time-dependent viscous flows. Part 2: Development of the computer code

    NASA Technical Reports Server (NTRS)

    Weinberg, B. C.; Mcdonald, H.

    1980-01-01

    There is considerable interest in developing a numerical scheme for solving the time-dependent viscous compressible three-dimensional flow equations to aid in the design of helicopter rotors. The development of a computer code to solve a three-dimensional unsteady approximate form of the Navier-Stokes equations employing a linearized block implicit technique in conjunction with a QR operator scheme is described. Results of calculations of several Cartesian test cases are presented. The computer code can be applied to more complex flow fields such as those encountered on rotating airfoils.

  2. A Design of Low Frequency Time-Code Receiver Based on DSP and FPGA

    NASA Astrophysics Data System (ADS)

    Li, Guo-Dong; Xu, Lin-Sheng

    2006-06-01

    The hardware of a low frequency time-code receiver which was designed with FPGA (field programmable gate array) and DSP (digital signal processor) is introduced. The method of realizing the time synchronization for the receiver system is described. The software developed for DSP and FPGA is expounded, and the results of test and simulation are presented. The design is characterized by high accuracy, good reliability, and fair extensibility.

  3. Enhanced auditory temporal gap detection in listeners with musical training.

    PubMed

    Mishra, Srikanta K; Panda, Manas R; Herbert, Carolyn

    2014-08-01

    Many features of auditory perception are positively altered in musicians. Traditionally, auditory mechanisms in musicians have been investigated using the Western-classical musician model. The objective of the present study was to adopt an alternative model, Indian-classical music, to further investigate auditory temporal processing in musicians. This study shows that musicians have significantly lower across-channel gap detection thresholds than nonmusicians. Use of the South Indian musician model provides increased external validity for the prediction, from studies on Western-classical musicians, that auditory temporal coding is enhanced in musicians. PMID:25096143

  4. Dynamic Divisive Normalization Predicts Time-Varying Value Coding in Decision-Related Circuits

    PubMed Central

    Louie, Kenway; LoFaro, Thomas; Webb, Ryan; Glimcher, Paul W.

    2014-01-01

    Normalization is a widespread neural computation, mediating divisive gain control in sensory processing and implementing a context-dependent value code in decision-related frontal and parietal cortices. Although decision-making is a dynamic process with complex temporal characteristics, most models of normalization are time-independent and little is known about the dynamic interaction of normalization and choice. Here, we show that a simple differential equation model of normalization explains the characteristic phasic-sustained pattern of cortical decision activity and predicts specific normalization dynamics: value coding during initial transients, time-varying value modulation, and delayed onset of contextual information. Empirically, we observe these predicted dynamics in saccade-related neurons in monkey lateral intraparietal cortex. Furthermore, such models naturally incorporate a time-weighted average of past activity, implementing an intrinsic reference-dependence in value coding. These results suggest that a single network mechanism can explain both transient and sustained decision activity, emphasizing the importance of a dynamic view of normalization in neural coding. PMID:25429145


  6. Three-dimensional subband coding implemented in programmable real-time hardware

    NASA Astrophysics Data System (ADS)

    Glenn, William E.

    1995-02-01

    Pyramid codes such as 3-D subband coding, wavelet, and fractal coding can be implemented with much simpler encoders and decoders than DCT-based compression systems. Since they do not use blocks, they do not suffer from blocking artifacts. They are particularly useful when the cost of the encoder is a concern, as in picture phones, teleconferencing, and camcorders. If used in production, post-production processing can be implemented without degradation. A programmable real-time encoder and decoder have been constructed for evaluating color 3-D subband coding algorithms. The results of tests with this hardware are presented for a wide range of compression ratios, as well as for post-production processing of compressed video.
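
A one-level 2-D subband split, sketched here with the simplest (Haar) filter pair, shows the analysis/synthesis structure such coders repeat to build a pyramid; this is purely illustrative, not the hardware's filter bank:

```python
import numpy as np

def analyze(x):
    """One-level 2-D Haar split into LL, LH, HL, HH subbands."""
    lo, hi = (x[0::2] + x[1::2]) / 2, (x[0::2] - x[1::2]) / 2       # rows
    ll, lh = (lo[:, 0::2] + lo[:, 1::2]) / 2, (lo[:, 0::2] - lo[:, 1::2]) / 2
    hl, hh = (hi[:, 0::2] + hi[:, 1::2]) / 2, (hi[:, 0::2] - hi[:, 1::2]) / 2
    return ll, lh, hl, hh

def synthesize(ll, lh, hl, hh):
    """Invert analyze(): columns first, then rows (a = avg, d = diff)."""
    lo = np.empty((ll.shape[0], ll.shape[1] * 2))
    hi = np.empty_like(lo)
    lo[:, 0::2], lo[:, 1::2] = ll + lh, ll - lh
    hi[:, 0::2], hi[:, 1::2] = hl + hh, hl - hh
    x = np.empty((lo.shape[0] * 2, lo.shape[1]))
    x[0::2], x[1::2] = lo + hi, lo - hi
    return x

rng = np.random.default_rng(1)
img = rng.random((8, 8))
bands = analyze(img)
assert np.allclose(synthesize(*bands), img)   # perfect reconstruction, no blocks
```

Because every sample participates in overlapping averages and differences rather than independent blocks, quantization errors smear smoothly instead of producing blocking artifacts.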

  7. Delayed Auditory Feedback and Movement

    ERIC Educational Resources Information Center

    Pfordresher, Peter Q.; Dalla Bella, Simone

    2011-01-01

    It is well known that timing of rhythm production is disrupted by delayed auditory feedback (DAF), and that disruption varies with delay length. We tested the hypothesis that disruption depends on the state of the movement trajectory at the onset of DAF. Participants tapped isochronous rhythms at a rate specified by a metronome while hearing DAF…

  8. Reliable Wireless Broadcast with Linear Network Coding for Multipoint-to-Multipoint Real-Time Communications

    NASA Astrophysics Data System (ADS)

    Kondo, Yoshihisa; Yomo, Hiroyuki; Yamaguchi, Shinji; Davis, Peter; Miura, Ryu; Obana, Sadao; Sampei, Seiichi

    This paper proposes multipoint-to-multipoint (MPtoMP) real-time broadcast transmission using network coding for ad-hoc networks such as video game networks. We aim to achieve highly reliable MPtoMP broadcasting using IEEE 802.11 media access control (MAC), which does not include a retransmission mechanism. When each node detects packets from the other nodes in a sequence, the correctly detected packets are network-encoded, and the encoded packet is broadcast in the next sequence as a piggy-back to its native packet. To prevent an increase in per-packet overhead due to piggy-back packet transmission, the network coding vector for each node is exchanged among all nodes in the negotiation phase. Each user keeps using the same coding vector generated in the negotiation phase, and only coding information that represents which user signals are included in the network coding process is transmitted along with the piggy-back packet. Our simulation results show that the proposed method can provide higher reliability than other schemes using multipoint relay (MPR) or redundant transmissions such as forward error correction (FEC). We also implement the proposed method in a wireless testbed, and show that it achieves high reliability in a real-world environment with a practical degree of complexity when installed on current wireless devices.
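
The piggy-back recovery idea rests on linear network coding over GF(2); a minimal XOR sketch (illustrative only, not the paper's protocol or vector negotiation):

```python
# Linear network coding over GF(2): a node that heard packets from users A
# and B rebroadcasts their XOR as a piggy-back; a receiver that has A but
# missed B recovers B from the coded packet alone.
def xor_packets(p, q):
    return bytes(a ^ b for a, b in zip(p, q))

pkt_a, pkt_b = b"hello", b"world"          # equal-length native packets
coded = xor_packets(pkt_a, pkt_b)          # broadcast as the piggy-back

recovered_b = xor_packets(coded, pkt_a)    # receiver: has A + coded, lost B
assert recovered_b == pkt_b
```

One coded transmission can thus repair different losses at different receivers, which is what makes the scheme attractive when the MAC layer offers no retransmissions.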

  9. A Real-Time Earthquake Moment Tensor Scanning Code for the Antelope System (BRTT, Inc.)

    NASA Astrophysics Data System (ADS)

    Macpherson, K. A.; Ruppert, N. A.; Freymueller, J. T.; Lindquist, K.; Harvey, D.; Dreger, D. S.; Lombard, P. N.; Guilhem, A.

    2015-12-01

    While all seismic observatories routinely determine hypocentral location and local magnitude within a few minutes of an earthquake's occurrence, the ability to estimate seismic moment and sense of slip in a similar time frame is less widespread. This is unfortunate, because moment and mechanism are critical parameters for rapid hazard assessment; for larger events, moment magnitude is more reliable due to the tendency of local magnitude to saturate, and certain mechanisms such as off-shore thrust events might indicate earthquakes with tsunamigenic potential. In order to increase access to this capability, we have developed a continuous moment tensor scanning code for Antelope, the ubiquitous open-architecture seismic acquisition and processing software in use around the world. The scanning code, which uses an algorithm that has previously been employed for real-time monitoring at the University of California, Berkeley, is able to produce full moment tensor solutions for moderate events from regional seismic data. The algorithm monitors a grid of potential sources by continuously cross-correlating pre-computed synthetic seismograms with long-period recordings from a sparse network of broad-band stations. The code package consists of 3 modules. One module is used to create a monitoring grid by constructing source-receiver geometry, calling a frequency-wavenumber code to produce synthetics, and computing the generalized linear inverse of the array of synthetics. There is a real-time scanning module that correlates streaming data with pre-inverted synthetics, monitors the variance reduction, and writes the moment tensor solution to a database if an earthquake detection occurs. Finally, there is an 'off-line' module that is very similar to the real-time scanner, with the exception that it utilizes pre-recorded data stored in Antelope databases and is useful for testing purposes or for quickly producing moment tensor catalogs for long time series. The code is open source.
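
The variance-reduction monitoring step can be sketched as follows; the synthetic waveform and the thresholds here are illustrative assumptions, not values from the Antelope modules:

```python
import numpy as np

def variance_reduction(data, synthetic):
    """VR = 1 - sum((d - s)^2) / sum(d^2); 1.0 means a perfect fit."""
    return 1.0 - np.sum((data - synthetic) ** 2) / np.sum(data ** 2)

# A pre-computed long-period synthetic for one grid source (illustrative).
t = np.linspace(0, 10, 500)
synth = np.sin(2 * np.pi * 0.2 * t) * np.exp(-0.1 * t)

rng = np.random.default_rng(2)
matching = synth + 0.05 * rng.normal(size=t.size)   # event near this source
unrelated = rng.normal(size=t.size)                 # noise-only data window

assert variance_reduction(matching, synth) > 0.9    # high VR -> detection
assert variance_reduction(unrelated, synth) < 0.5   # low VR -> no detection
```

Scanning amounts to evaluating this fit continuously for every grid source and declaring a detection when the variance reduction exceeds a threshold.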

  10. Visual task enhances spatial selectivity in the human auditory cortex.

    PubMed

    Salminen, Nelli H; Aho, Joanna; Sams, Mikko

    2013-01-01

    The auditory cortex represents spatial locations differently from other sensory modalities. While the visual and tactile cortices utilize topographical space maps, for audition no such cortical map has been found. Instead, auditory cortical neurons have wide spatial receptive fields and together form a population rate code of sound source location. Recent studies have shown that this code is modulated by task conditions, so that during auditory tasks it provides better selectivity to sound source location than during idle listening. The goal of this study was to establish whether the neural representation of auditory space can also be influenced by task conditions involving sensory modalities other than hearing. Therefore, we conducted magnetoencephalography (MEG) recordings in which the auditory spatial selectivity of the human cortex was probed with an adaptation paradigm while subjects performed a visual task. Engaging in the task led to an increase in neural selectivity to sound source location compared to when no task was performed. This suggests that an enhancement of the population rate code of auditory space took place during task performance. This enhancement in auditory spatial selectivity was independent of the direction of visual orientation. Together with previous studies, these findings suggest that performing any demanding task, even one in which sounds and their source locations are irrelevant, can lead to enhancements in the neural representation of auditory space. Such mechanisms may have great survival value, as sounds can provide location information about potentially relevant events in all directions and over long distances. PMID:23543781

  11. A generalized time-frequency subtraction method for robust speech enhancement based on wavelet filter banks modeling of human auditory system.

    PubMed

    Shao, Yu; Chang, Chip-Hong

    2007-08-01

    We present a new speech enhancement scheme for a single-microphone system to meet the demand for quality noise reduction algorithms capable of operating at a very low signal-to-noise ratio. A psychoacoustic model is incorporated into the generalized perceptual wavelet denoising method to reduce the residual noise and improve the intelligibility of speech. The proposed method is a generalized time-frequency subtraction algorithm, which advantageously exploits the wavelet multirate signal representation to preserve the critical transient information. Simultaneous masking and temporal masking of the human auditory system are modeled by the perceptual wavelet packet transform via the frequency and temporal localization of speech components. The wavelet coefficients are used to calculate the Bark spreading energy and temporal spreading energy, from which a time-frequency masking threshold is deduced to adaptively adjust the subtraction parameters of the proposed method. An unvoiced speech enhancement algorithm is also integrated into the system to improve the intelligibility of speech. Through rigorous objective and subjective evaluations, it is shown that the proposed speech enhancement system is capable of reducing noise with little speech degradation in adverse noise environments and the overall performance is superior to several competitive methods. PMID:17702286
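
A minimal sketch of classical magnitude spectral subtraction, the baseline that such generalized time-frequency methods extend, using plain non-overlapping FFT frames rather than the paper's perceptual wavelet packets:

```python
import numpy as np

rng = np.random.default_rng(3)
fs, n, frame = 8000, 8192, 256
t = np.arange(n) / fs
clean = np.sin(2 * np.pi * 440 * t)          # stand-in for the speech signal
noise = 0.5 * rng.normal(size=n)
noisy = clean + noise

def snr(sig, ref):
    return 10 * np.log10(np.sum(ref ** 2) / np.sum((sig - ref) ** 2))

# Noise magnitude spectrum estimated from noise-only frames (in practice
# this estimate comes from detected speech pauses).
noise_mag = np.abs(np.fft.rfft(noise.reshape(-1, frame), axis=1)).mean(axis=0)

out = np.zeros(n)
for start in range(0, n, frame):
    spec = np.fft.rfft(noisy[start:start + frame])
    # Subtract the noise magnitude, half-wave rectify, keep the noisy phase.
    mag = np.maximum(np.abs(spec) - noise_mag, 0.0)
    out[start:start + frame] = np.fft.irfft(mag * np.exp(1j * np.angle(spec)))

assert snr(out, clean) > snr(noisy, clean)   # enhancement improves SNR
```

The crude rectification here is exactly what produces residual "musical noise"; the perceptual masking thresholds in the paper exist to adjust the subtraction so that such residuals stay inaudible.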

  12. Auditory imagery: empirical findings.

    PubMed

    Hubbard, Timothy L

    2010-03-01

    The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d) auditory imagery's relationship to perception and memory (detection, encoding, recall, mnemonic properties, phonological loop), and (e) individual differences in auditory imagery (in vividness, musical ability and experience, synesthesia, musical hallucinosis, schizophrenia, amusia) are considered. It is concluded that auditory imagery (a) preserves many structural and temporal properties of auditory stimuli, (b) can facilitate auditory discrimination but interfere with auditory detection, (c) involves many of the same brain areas as auditory perception, (d) is often but not necessarily influenced by subvocalization, (e) involves semantically interpreted information and expectancies, (f) involves depictive components and descriptive components, (g) can function as a mnemonic but is distinct from rehearsal, and (h) is related to musical ability and experience (although the mechanisms of that relationship are not clear). PMID:20192565

  13. Auditory color constancy

    NASA Astrophysics Data System (ADS)

    Kluender, Keith R.; Kiefte, Michael

    2003-10-01

    It is both true and efficient that sensorineural systems respond to change and little else. Perceptual systems do not record absolute level, be it loudness, pitch, brightness, or color. This fact has been demonstrated in every sensory domain. For example, the visual system is remarkable at maintaining color constancy over widely varying illumination such as sunlight and varieties of artificial light (incandescent, fluorescent, etc.) for which the spectra reflected from objects differ dramatically. Results will be reported for a series of experiments demonstrating how auditory systems similarly compensate for reliable characteristics of spectral shape in acoustic signals. Specifically, listeners' perception of vowel sounds, characterized by both local (e.g., formants) and broad (e.g., tilt) spectral composition, changes radically depending upon the reliable spectral composition of precursor signals. These experiments have been conducted using a variety of precursor signals consisting of meaningful and time-reversed vocoded sentences, as well as novel nonspeech precursors consisting of multiple filter poles modulating sinusoidally across a source spectrum with specific local and broad spectral characteristics. Constancy across widely varying spectral compositions shares much in common with visual color constancy. However, auditory spectral constancy appears to be more effective than visual constancy in compensating for local spectral fluctuations. [Work supported by NIDCD DC-04072.]

  14. A Brain System for Auditory Working Memory

    PubMed Central

    Joseph, Sabine; Gander, Phillip E.; Barascud, Nicolas; Halpern, Andrea R.; Griffiths, Timothy D.

    2016-01-01

    The brain basis for auditory working memory, the process of actively maintaining sounds in memory over short periods of time, is controversial. Using functional magnetic resonance imaging in human participants, we demonstrate that the maintenance of single tones in memory is associated with activation in auditory cortex. In addition, sustained activation was observed in hippocampus and inferior frontal gyrus. Multivoxel pattern analysis showed that patterns of activity in auditory cortex and left inferior frontal gyrus distinguished the tone that was maintained in memory. Functional connectivity during maintenance was demonstrated between auditory cortex and both the hippocampus and inferior frontal cortex. The data support a system for auditory working memory based on the maintenance of sound-specific representations in auditory cortex by projections from higher-order areas, including the hippocampus and frontal cortex. SIGNIFICANCE STATEMENT In this work, we demonstrate a system for maintaining sound in working memory based on activity in auditory cortex, hippocampus, and frontal cortex, and functional connectivity among them. Specifically, our work makes three advances from the previous work. First, we robustly demonstrate hippocampal involvement in all phases of auditory working memory (encoding, maintenance, and retrieval): the role of hippocampus in working memory is controversial. Second, using a pattern classification technique, we show that activity in the auditory cortex and inferior frontal gyrus is specific to the maintained tones in working memory. Third, we show long-range connectivity of auditory cortex to hippocampus and frontal cortex, which may be responsible for keeping such representations active during working memory maintenance. PMID:27098693

  15. Optogenetic stimulation of the auditory pathway

    PubMed Central

    Hernandez, Victor H.; Gehrt, Anna; Reuter, Kirsten; Jing, Zhizi; Jeschke, Marcus; Mendoza Schulz, Alejandro; Hoch, Gerhard; Bartels, Matthias; Vogt, Gerhard; Garnham, Carolyn W.; Yawo, Hiromu; Fukazawa, Yugo; Augustine, George J.; Bamberg, Ernst; Kügler, Sebastian; Salditt, Tim; de Hoz, Livia; Strenzke, Nicola; Moser, Tobias

    2014-01-01

    Auditory prostheses can partially restore speech comprehension when hearing fails. Sound coding with current prostheses is based on electrical stimulation of auditory neurons and has limited frequency resolution due to broad current spread within the cochlea. In contrast, optical stimulation can be spatially confined, which may improve frequency resolution. Here, we used animal models to characterize optogenetic stimulation, which is the optical stimulation of neurons genetically engineered to express the light-gated ion channel channelrhodopsin-2 (ChR2). Optogenetic stimulation of spiral ganglion neurons (SGNs) activated the auditory pathway, as demonstrated by recordings of single neuron and neuronal population responses. Furthermore, optogenetic stimulation of SGNs restored auditory activity in deaf mice. Approximation of the spatial spread of cochlear excitation by recording local field potentials (LFPs) in the inferior colliculus in response to suprathreshold optical, acoustic, and electrical stimuli indicated that optogenetic stimulation achieves better frequency resolution than monopolar electrical stimulation. Virus-mediated expression of a ChR2 variant with greater light sensitivity in SGNs reduced the amount of light required for responses and allowed neuronal spiking following stimulation up to 60 Hz. Our study demonstrates a strategy for optogenetic stimulation of the auditory pathway in rodents and lays the groundwork for future applications of cochlear optogenetics in auditory research and prosthetics. PMID:24509078

  16. Investigating bottom-up auditory attention

    PubMed Central

    Kaya, Emine Merve; Elhilali, Mounya

    2014-01-01

    Bottom-up attention is a sensory-driven selection mechanism that directs perception toward a subset of the stimulus that is considered salient, or attention-grabbing. Most studies of bottom-up auditory attention have adapted frameworks similar to visual attention models whereby local or global “contrast” is a central concept in defining salient elements in a scene. In the current study, we take a more fundamental approach to modeling auditory attention: providing the first examination of the space of auditory saliency spanning pitch, intensity, and timbre, and shedding light on complex interactions among these features. Informed by psychoacoustic results, we develop a computational model of auditory saliency implementing a novel attentional framework, guided by processes hypothesized to take place in the auditory pathway. In particular, the model tests the hypothesis that perception tracks the evolution of sound events in a multidimensional feature space, and flags any deviation from background statistics as salient. Predictions from the model corroborate the relationship between bottom-up auditory attention and statistical inference, and argue for a potential role of predictive coding as a mechanism for saliency detection in acoustic scenes. PMID:24904367

  17. Imaginary time propagation code for large-scale two-dimensional eigenvalue problems in magnetic fields

    NASA Astrophysics Data System (ADS)

    Luukko, P. J. J.; Räsänen, E.

    2013-03-01

    We present a code for solving the single-particle, time-independent Schrödinger equation in two dimensions. Our program utilizes the imaginary time propagation (ITP) algorithm, and it includes the most recent developments in the ITP method: the arbitrary order operator factorization and the exact inclusion of a (possibly very strong) magnetic field. Our program is able to solve thousands of eigenstates of a two-dimensional quantum system in reasonable time with commonly available hardware. The main motivation behind our work is to allow the study of highly excited states and energy spectra of two-dimensional quantum dots and billiard systems with a single versatile code, e.g., in quantum chaos research. In our implementation we emphasize a modern and easily extensible design, simple and user-friendly interfaces, and an open-source development philosophy. Catalogue identifier: AENR_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENR_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License version 3 No. of lines in distributed program, including test data, etc.: 11310 No. of bytes in distributed program, including test data, etc.: 97720 Distribution format: tar.gz Programming language: C++ and Python. Computer: Tested on x86 and x86-64 architectures. Operating system: Tested under Linux with the g++ compiler. Any POSIX-compliant OS with a C++ compiler and the required external routines should suffice. Has the code been vectorised or parallelized?: Yes, with OpenMP. RAM: 1 MB or more, depending on system size. Classification: 7.3. External routines: FFTW3 (http://www.fftw.org), CBLAS (http://netlib.org/blas), LAPACK (http://www.netlib.org/lapack), HDF5 (http://www.hdfgroup.org/HDF5), OpenMP (http://openmp.org), TCLAP (http://tclap.sourceforge.net), Python (http://python.org), Google Test (http://code.google.com/p/googletest/) Nature of problem: Numerical calculation
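
The ITP iteration can be sketched for a 1-D harmonic oscillator using split-operator Fourier steps; this is a minimal sketch of the method (first-order factorization, 1-D, no magnetic field), far simpler than the distributed 2-D code:

```python
import numpy as np

# 1-D grid and harmonic potential, in units where hbar = m = omega = 1.
n, L = 256, 20.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
V = 0.5 * x ** 2

dtau = 0.01                              # imaginary-time step
expV = np.exp(-0.5 * dtau * V)           # half-step potential factor
expT = np.exp(-dtau * 0.5 * k ** 2)      # full-step kinetic factor (k-space)

psi = np.exp(-x ** 2)                    # arbitrary initial guess
for _ in range(2000):
    # Strang-split propagation in imaginary time damps excited states.
    psi = expV * psi
    psi = np.fft.ifft(expT * np.fft.fft(psi))
    psi = expV * psi
    psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)   # renormalize each step

# Energy expectation value <psi|H|psi>; exact ground-state energy is 1/2.
Tpsi = np.fft.ifft(0.5 * k ** 2 * np.fft.fft(psi))
E = np.real(np.sum(np.conj(psi) * (Tpsi + V * psi)) * dx)
assert abs(E - 0.5) < 1e-2
```

Each step multiplies every eigencomponent by exp(-E dtau), so repeated propagation plus renormalization converges to the lowest state with nonzero overlap; higher-order operator factorizations, as in the published code, reduce the step error.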

  18. Application of a VLSI vector quantization processor to real-time speech coding

    NASA Technical Reports Server (NTRS)

    Davidson, G.; Gersho, A.

    1986-01-01

    Attention is given to a working vector quantization processor for speech coding that is based on a first-generation VLSI chip which efficiently performs the pattern-matching operation needed for the codebook search process (CPS). Using this chip, the CPS architecture has been successfully incorporated into a compact, single-board Vector PCM implementation operating at 7-18 kbits/sec. A real time Adaptive Vector Predictive Coder system using the CPS has also been implemented.
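
The pattern-matching operation the chip accelerates is a full-search nearest-neighbor codebook lookup; a minimal software sketch with a hypothetical random codebook:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical codebook: 64 codevectors of dimension 8.
codebook = rng.normal(size=(64, 8))

def quantize(vec, codebook):
    """Full-search VQ: return the index of the nearest codevector."""
    dists = np.sum((codebook - vec) ** 2, axis=1)
    return int(np.argmin(dists))

# Quantizing a slightly perturbed codevector recovers its index; only the
# index is transmitted, which is where the bit-rate saving comes from.
idx = 17
noisy_vec = codebook[idx] + 0.01 * rng.normal(size=8)
assert quantize(noisy_vec, codebook) == idx
```

The distance computation is an inner loop of multiply-accumulates over every codevector, exactly the kind of regular arithmetic that maps well onto a dedicated VLSI datapath.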

  19. Real-time speech encoding based on Code-Excited Linear Prediction (CELP)

    NASA Technical Reports Server (NTRS)

    Leblanc, Wilfrid P.; Mahmoud, S. A.

    1988-01-01

    This paper reports on the work proceeding with regard to the development of a real-time voice codec for the terrestrial and satellite mobile radio environments. The codec is based on a complexity reduced version of code-excited linear prediction (CELP). The codebook search complexity was reduced to only 0.5 million floating point operations per second (MFLOPS) while maintaining excellent speech quality. Novel methods to quantize the residual and the long and short term model filters are presented.

  20. The oscillatory activities and its synchronization in auditory-visual integration as revealed by event-related potentials to bimodal stimuli

    NASA Astrophysics Data System (ADS)

    Guo, Jia; Xu, Peng; Yao, Li; Shu, Hua; Zhao, Xiaojie

    2012-03-01

    The neural mechanism of auditory-visual speech integration is a much-studied problem in multimodal perception. Articulation conveys speech information that helps detect and disambiguate the auditory speech. As important characteristics of the EEG, oscillations and their synchronization are increasingly applied in cognition research. This study analyzed EEG data acquired under unimodal and bimodal stimulation using time-frequency and phase-synchrony approaches, and investigated the oscillatory activities and synchrony modes underlying the evoked potentials during auditory-visual integration, in order to reveal the neural integration mechanisms behind these modes. Beta activity and its synchronization differences were found to be related to the gesture N1-P2, which occurs in the early stage of coding speech from pronouncing actions. Alpha oscillation and its synchronization, related to the auditory N1-P2, might be mainly responsible for auditory speech processing driven by anticipation from gesture to sound features. Changing visual gestures enhanced the interaction of auditory brain regions. These results help explain the changes in power and connectivity of event-evoked oscillatory activities that match the ERPs during auditory-visual speech integration.

  1. Coded throughput performance simulations for the time-varying satellite channel. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Han, Li

    1995-01-01

    The design of a reliable satellite communication link involving the data transfer from a small, low-orbit satellite to a ground station, but through a geostationary satellite, was examined. In such a scenario, the received signal power to noise density ratio increases as the transmitting low-orbit satellite comes into view, and then decreases as it then departs, resulting in a short-duration, time-varying communication link. The optimal values of the small satellite antenna beamwidth, signaling rate, modulation scheme and the theoretical link throughput (in bits per day) have been determined. The goal of this thesis is to choose a practical coding scheme which maximizes the daily link throughput while satisfying a prescribed probability of error requirement. We examine the throughput of both fixed rate and variable rate concatenated forward error correction (FEC) coding schemes for the additive white Gaussian noise (AWGN) channel, and then examine the effect of radio frequency interference (RFI) on the best coding scheme among them. Interleaving is used to mitigate degradation due to RFI. It was found that the variable rate concatenated coding scheme could achieve 74 percent of the theoretical throughput, equivalent to 1.11 Gbits/day based on the cutoff rate R(sub 0). For comparison, 87 percent is achievable for AWGN-only case.
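
A toy illustration of why adapting the code rate to the time-varying link recovers throughput; the pass profile, code rates, and Eb/N0 thresholds below are hypothetical, not values from the thesis:

```python
import numpy as np

# Toy pass profile: received Eb/N0 (dB) rises and falls as the low-orbit
# satellite comes into view; one sample per minute of visibility.
t = np.linspace(-1, 1, 21)
ebno_db = 8 - 6 * t ** 2                    # peaks at 8 dB mid-pass

# Hypothetical code rates and the minimum Eb/N0 (dB) each needs to meet
# the probability-of-error requirement (illustrative numbers only).
rates = [(0.875, 7.0), (0.75, 5.0), (0.5, 3.0)]
symbol_rate = 1.0                           # normalized symbols per minute

def best_rate(snr):
    usable = [r for r, need in rates if snr >= need]
    return max(usable) if usable else 0.0

variable = sum(best_rate(s) * symbol_rate for s in ebno_db)
fixed = sum(rates[-1][0] * symbol_rate for s in ebno_db if s >= rates[-1][1])

assert variable > fixed   # adapting the rate recovers extra throughput
```

A fixed-rate scheme must use the lowest rate that survives the worst usable part of the pass, wasting margin mid-pass; the variable-rate scheme spends that margin on information bits, which is the mechanism behind the 74-percent-of-theoretical result.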

  2. TART98 a coupled neutron-photon 3-D, combinatorial geometry time dependent Monte Carlo Transport code

    SciTech Connect

    Cullen, D E

    1998-11-22

    TART98 is a coupled neutron-photon, 3 Dimensional, combinatorial geometry, time dependent Monte Carlo radiation transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART98 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART98 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART98 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART98 and its data files.
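
The core of any Monte Carlo transport code is sampling particle free paths from an exponential distribution; a minimal one-dimensional, absorption-only sketch (vastly simpler than TART98, shown only to illustrate the method):

```python
import math
import random

random.seed(5)

def transmitted_fraction(mu, thickness, n_particles=200_000):
    """Fraction of particles crossing a purely absorbing slab.

    Free paths are sampled from the exponential distribution with
    attenuation coefficient mu (mean free path 1/mu).
    """
    hits = 0
    for _ in range(n_particles):
        path = -math.log(1.0 - random.random()) / mu
        if path > thickness:
            hits += 1
    return hits / n_particles

mu, d = 1.0, 2.0
estimate = transmitted_fraction(mu, d)
exact = math.exp(-mu * d)                # Beer-Lambert law
assert abs(estimate - exact) < 0.005     # agrees within statistical error
```

Real codes like TART98 add 3-D combinatorial geometry, scattering physics, energy dependence, and time dependence on top of exactly this sampling loop.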

  3. Hearing the light: neural and perceptual encoding of optogenetic stimulation in the central auditory pathway

    PubMed Central

    Guo, Wei; Hight, Ariel E.; Chen, Jenny X.; Klapoetke, Nathan C.; Hancock, Kenneth E.; Shinn-Cunningham, Barbara G.; Boyden, Edward S.; Lee, Daniel J.; Polley, Daniel B.

    2015-01-01

    Optogenetics provides a means to dissect the organization and function of neural circuits. Optogenetics also offers the translational promise of restoring sensation, enabling movement or supplanting abnormal activity patterns in pathological brain circuits. However, the inherent sluggishness of evoked photocurrents in conventional channelrhodopsins has hampered the development of optoprostheses that adequately mimic the rate and timing of natural spike patterning. Here, we explore the feasibility and limitations of a central auditory optoprosthesis by photoactivating mouse auditory midbrain neurons that either express channelrhodopsin-2 (ChR2) or Chronos, a channelrhodopsin with ultra-fast channel kinetics. Chronos-mediated spike fidelity surpassed ChR2 and natural acoustic stimulation to support a superior code for the detection and discrimination of rapid pulse trains. Interestingly, this midbrain coding advantage did not translate to a perceptual advantage, as behavioral detection of midbrain activation was equivalent with both opsins. Auditory cortex recordings revealed that the precisely synchronized midbrain responses had been converted to a simplified rate code that was indistinguishable between opsins and less robust overall than acoustic stimulation. These findings demonstrate the temporal coding benefits that can be realized with next-generation channelrhodopsins, but also highlight the challenge of inducing variegated patterns of forebrain spiking activity that support adaptive perception and behavior. PMID:26000557

  4. Aging effects on the binaural interaction component of the auditory brainstem response in the Mongolian gerbil: Effects of interaural time and level differences.

    PubMed

    Laumen, Geneviève; Tollin, Daniel J; Beutelmann, Rainer; Klump, Georg M

    2016-07-01

    The effect of interaural time difference (ITD) and interaural level difference (ILD) on wave 4 of the binaural and summed monaural auditory brainstem responses (ABRs) as well as on the DN1 component of the binaural interaction component (BIC) of the ABR in young and old Mongolian gerbils (Meriones unguiculatus) was investigated. Measurements were made at a fixed sound pressure level (SPL) and a fixed level above visually detected ABR threshold to compensate for individual hearing threshold differences. In both stimulation modes (fixed SPL and fixed level above visually detected ABR threshold) an effect of ITD on the latency and the amplitude of wave 4 as well as of the BIC was observed. With increasing absolute ITD values BIC latencies were increased and amplitudes were decreased. ILD had a much smaller effect on these measures. Old animals showed a reduced amplitude of the DN1 component. This difference was due to a smaller wave 4 in the summed monaural ABRs of old animals compared to young animals whereas wave 4 in the binaural-evoked ABR showed no age-related difference. In old animals the small amplitude of the DN1 component was correlated with small binaural-evoked wave 1 and wave 3 amplitudes. This suggests that the reduced peripheral input affects central binaural processing which is reflected in the BIC. PMID:27173973

  5. A Unified Mathematical Framework for Coding Time, Space, and Sequences in the Hippocampal Region

    PubMed Central

    MacDonald, Christopher J.; Tiganj, Zoran; Shankar, Karthik H.; Du, Qian; Hasselmo, Michael E.; Eichenbaum, Howard

    2014-01-01

    The medial temporal lobe (MTL) is believed to support episodic memory, vivid recollection of a specific event situated in a particular place at a particular time. There is ample neurophysiological evidence that the MTL computes location in allocentric space and more recent evidence that the MTL also codes for time. Space and time represent a similar computational challenge; both are variables that cannot be simply calculated from the immediately available sensory information. We introduce a simple mathematical framework that computes functions of both spatial location and time as special cases of a more general computation. In this framework, experience unfolding in time is encoded via a set of leaky integrators. These leaky integrators encode the Laplace transform of their input. The information contained in the transform can be recovered using an approximation to the inverse Laplace transform. In the temporal domain, the resulting representation reconstructs the temporal history. By integrating movements, the equations give rise to a representation of the path taken to arrive at the present location. By modulating the transform with information about allocentric velocity, the equations code for position of a landmark. Simulated cells show a close correspondence to neurons observed in various regions for all three cases. In the temporal domain, novel secondary analyses of hippocampal time cells verified several qualitative predictions of the model. An integrated representation of spatiotemporal context can be computed by taking conjunctions of these elemental inputs, leading to a correspondence with conjunctive neural representations observed in dorsal CA1. PMID:24672015
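    The leaky-integrator encoding described above can be sketched in a few lines; the step size and the decay rates s are illustrative choices, not the paper's parameters. Each node integrates dF/dt = -s*F + f(t), so after a unit impulse at t = 0 it holds exp(-s*T), the Laplace transform of the input history evaluated at s:

```python
import math

def leaky_integrator_bank(signal, dt, s_values):
    """Bank of leaky integrators: a node with decay rate s obeys
    dF/dt = -s*F + f(t), so at time t it holds (a discrete approximation
    of) the Laplace transform of the input history."""
    F = [0.0] * len(s_values)
    for f_t in signal:
        # forward-Euler update of every node in the bank
        F = [Fi + dt * (-s * Fi + f_t) for Fi, s in zip(F, s_values)]
    return F

# Unit-area impulse at t = 0; after time T the node with rate s
# should hold approximately exp(-s * T)
dt, T = 1e-3, 1.0
n = int(T / dt)
signal = [1.0 / dt] + [0.0] * (n - 1)
s_vals = [1.0, 2.0, 4.0]
F = leaky_integrator_bank(signal, dt, s_vals)
```

    Recovering the temporal history then amounts to approximately inverting this transform across the bank, which is the step the framework implements neurally.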

  6. Impairment of auditory spatial localization in congenitally blind human subjects.

    PubMed

    Gori, Monica; Sandini, Giulio; Martinoli, Cristina; Burr, David C

    2014-01-01

    Several studies have demonstrated enhanced auditory processing in the blind, suggesting that they compensate for their visual impairment in part with greater sensitivity of the other senses. However, several physiological studies show that early visual deprivation can impact negatively on auditory spatial localization. Here we report for the first time severely impaired auditory localization in the congenitally blind: thresholds for spatially bisecting three consecutive, spatially distributed sound sources were seriously compromised, on average 4.2-fold worse than typical thresholds, with half of the subjects performing at random. In agreement with previous studies, these subjects showed no deficits on simpler auditory spatial tasks or with auditory temporal bisection, suggesting that the encoding of Euclidean auditory relationships is specifically compromised in the congenitally blind. These results point to the importance of visual experience in the construction and calibration of auditory spatial maps, with implications for rehabilitation strategies for the congenitally blind. PMID:24271326

  8. Performance and optimization of direct implicit time integration schemes for use in electrostatic particle simulation codes

    SciTech Connect

    Procassini, R.J.; Birdsall, C.K.; Morse, E.C.; Cohen, B.I.

    1988-01-01

    Implicit time integration schemes allow for the use of larger time steps than conventional explicit methods, thereby extending the applicability of kinetic particle simulation methods. This paper will describe a study of the performance and optimization of two such direct implicit schemes, which are used to follow the trajectories of charged particles in an electrostatic, particle-in-cell plasma simulation code. The direct implicit method that was used for this study is an alternative to the moment-equation implicit method. 10 refs., 7 figs., 4 tabs.

  9. Attending to auditory memory.

    PubMed

    Zimmermann, Jacqueline F; Moscovitch, Morris; Alain, Claude

    2016-06-01

    Attention to memory describes the process of attending to memory traces when the object is no longer present. It has been studied primarily for representations of visual stimuli, with only a few studies examining attention to sound object representations in short-term memory. Here, we review the interplay of attention and auditory memory with an emphasis on 1) attending to auditory memory in the absence of related external stimuli (i.e., reflective attention) and 2) effects of existing memory on guiding attention. Attention to auditory memory is discussed in the context of change deafness, and we argue that failures to detect changes in our auditory environments are most likely the result of a faulty comparison system for incoming and stored information. Objects are the primary building blocks of auditory attention, but attention can also be directed to individual features (e.g., pitch). We review short-term and long-term memory-guided modulation of attention based on characteristic features, location, and/or semantic properties of auditory objects, and propose that auditory attention-to-memory pathways emerge after sensory memory. A neural model for auditory attention to memory is developed, which comprises two separate pathways in the parietal cortex, one involved in attention to higher-order features and the other involved in attention to sensory information. This article is part of a Special Issue entitled SI: Auditory working memory. PMID:26638836

  10. Dynamics of auditory working memory

    PubMed Central

    Kaiser, Jochen

    2015-01-01

    Working memory denotes the ability to retain stimuli in mind that are no longer physically present and to perform mental operations on them. Electro- and magnetoencephalography allow investigating the short-term maintenance of acoustic stimuli at a high temporal resolution. Studies investigating working memory for non-spatial and spatial auditory information have suggested differential roles of regions along the putative auditory ventral and dorsal streams, respectively, in the processing of the different sound properties. Analyses of event-related potentials have shown sustained, memory load-dependent deflections over the retention periods. The topography of these waves suggested an involvement of modality-specific sensory storage regions. Spectral analysis has yielded information about the temporal dynamics of auditory working memory processing of individual stimuli, showing activation peaks during the delay phase whose timing was related to task performance. Coherence at different frequencies was enhanced between frontal and sensory cortex. In summary, auditory working memory seems to rely on the dynamic interplay between frontal executive systems and sensory representation regions. PMID:26029146

  11. A 2.9 ps equivalent resolution interpolating time counter based on multiple independent coding lines

    NASA Astrophysics Data System (ADS)

    Szplet, R.; Jachna, Z.; Kwiatkowski, P.; Rozyc, K.

    2013-03-01

    We present the design, operation and test results of a time counter that has an equivalent resolution of 2.9 ps, a measurement uncertainty at the level of 6 ps, and a measurement range of 10 s. The time counter has been implemented in a general-purpose reprogrammable device, the Spartan-6 (Xilinx). To obtain both high precision and a wide measurement range, the counting of periods of a reference clock is combined with a two-stage interpolation within a single period of the clock signal. The interpolation involves a four-phase clock in the first interpolation stage (FIS) and an equivalent coding line (ECL) in the second interpolation stage (SIS). The ECL is created as a compound of independent discrete time coding lines (TCL). The number of TCLs used to create the virtual ECL determines its resolution. We tested ECLs made from up to 16 TCLs, but the idea may be extended to a larger number of lines. In the presented time counter the coarse resolution of the counting method, equal to 2 ns (the period of the 500 MHz reference clock), is first improved fourfold in the FIS and then by more than a factor of 400 in the SIS. The proposed solution allows us to overcome the technological limitation in achievable resolution and improve the precision of conversion of integrated interpolators based on tapped delay lines.
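    The construction of an equivalent coding line from independent TCLs can be illustrated by merging quantization bin edges; the randomly placed edges below are a hypothetical stand-in for measured FPGA delay-line characteristics, not the authors' hardware data:

```python
import random

def equivalent_bins(lines):
    """Merge the bin edges of several independent coding lines into one
    'equivalent coding line' (illustration of the idea only, not the
    authors' FPGA implementation)."""
    edges = sorted(set(e for line in lines for e in line))
    return [b - a for a, b in zip(edges, edges[1:])]

rng = random.Random(0)
span = 2.0  # coarse interval, e.g. one 500 MHz clock period in ns
num_lines, bins_per_line = 16, 32
lines = []
for _ in range(num_lines):
    interior = sorted(rng.uniform(0, span) for _ in range(bins_per_line - 1))
    lines.append([0.0] + interior + [span])

single_width = span / bins_per_line       # nominal bin width of one line
merged = equivalent_bins(lines)           # all 16 lines merged
merged_width = sum(merged) / len(merged)  # mean equivalent bin width
```

    Merging 16 lines of 32 bins yields roughly 500 distinct equivalent bins over the same span, so the mean bin width shrinks by about an order of magnitude, which is the mechanism behind the sub-period resolution gain.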

  12. Tracking the Time Course of Word-Frequency Effects in Auditory Word Recognition with Event-Related Potentials

    ERIC Educational Resources Information Center

    Dufour, Sophie; Brunelliere, Angele; Frauenfelder, Ulrich H.

    2013-01-01

    Although the word-frequency effect is one of the most established findings in spoken-word recognition, the precise processing locus of this effect is still a topic of debate. In this study, we used event-related potentials (ERPs) to track the time course of the word-frequency effect. In addition, the neighborhood density effect, which is known to…

  13. Learning Novel Phonological Representations in Developmental Dyslexia: Associations with Basic Auditory Processing of Rise Time and Phonological Awareness

    ERIC Educational Resources Information Center

    Thomson, Jennifer M.; Goswami, Usha

    2010-01-01

    Across languages, children with developmental dyslexia are known to have impaired lexical phonological representations. Here, we explore associations between learning new phonological representations, phonological awareness, and sensitivity to amplitude envelope onsets (rise time). We show that individual differences in learning novel phonological…

  14. The Development of Auditory Perception in Children Following Auditory Brainstem Implantation

    PubMed Central

    Colletti, Liliana; Shannon, Robert V.; Colletti, Vittorio

    2014-01-01

    Auditory brainstem implants (ABI) can provide useful auditory perception and language development in deaf children who are not able to use a cochlear implant (CI). We prospectively followed up a consecutive group of 64 deaf children for up to 12 years following ABI implantation. The etiology of deafness in these children was: cochlear nerve aplasia in 49, auditory neuropathy in 1, cochlear malformations in 8, bilateral cochlear post-meningitic ossification in 3, NF2 in 2, and bilateral cochlear fractures due to a head injury in 1. Thirty-five children had other congenital non-auditory disabilities. Twenty-two children had previous CIs with no benefit. Fifty-eight children were fitted with the Cochlear 24 ABI device and six with the MedEl ABI device, and all children followed the same rehabilitation program. Auditory perceptual abilities were evaluated on the Categories of Auditory Performance (CAP) scale. No child was lost to follow-up and there were no exclusions from the study. All children showed significant improvement in auditory perception with implant experience. Seven children (11%) were able to achieve the highest score on the CAP test; they were able to converse on the telephone within 3 years of implantation. Twenty children (31.3%) achieved open set speech recognition (CAP score of 5 or greater) and 30 (46.9%) achieved a CAP level of 4 or greater. Of the 29 children without non-auditory disabilities, 18 (62%) achieved a CAP score of 5 or greater with the ABI. All children showed continued improvements in auditory skills over time. The long-term results of ABI implantation reveal significant auditory benefit in most children, and open set auditory recognition in many. PMID:25377987

  15. Representation of Reward Feedback in Primate Auditory Cortex

    PubMed Central

    Brosch, Michael; Selezneva, Elena; Scheich, Henning

    2011-01-01

    It is well established that auditory cortex is plastic on different time scales and that this plasticity is driven by the reinforcement that is used to motivate subjects to learn or to perform an auditory task. Motivated by these findings, we study in detail properties of neuronal firing in auditory cortex that is related to reward feedback. We recorded from the auditory cortex of two monkeys while they were performing an auditory categorization task. Monkeys listened to a sequence of tones and had to signal when the frequency of adjacent tones stepped in downward direction, irrespective of the tone frequency and step size. Correct identifications were rewarded with either a large or a small amount of water. The size of reward depended on the monkeys’ performance in the previous trial: it was large after a correct trial and small after an incorrect trial. The rewards served to maintain task performance. During task performance we found three successive periods of neuronal firing in auditory cortex that reflected (1) the reward expectancy for each trial, (2) the reward-size received, and (3) the mismatch between the expected and delivered reward. These results, together with control experiments, suggest that auditory cortex receives reward feedback that could be used to adapt auditory cortex to task requirements. Additionally, the results presented here extend previous observations of non-auditory roles of auditory cortex and show that auditory cortex is even more cognitively influenced than previously recognized. PMID:21369350

  16. Process Timing and Its Relation to the Coding of Tonal Harmony

    ERIC Educational Resources Information Center

    Aksentijevic, Aleksandar; Barber, Paul J.; Elliott, Mark A.

    2011-01-01

    Advances in auditory research suggest that gamma-band synchronization of frequency-specific cortical loci could be responsible for the integration of pure tones (harmonics) into harmonic complex tones. Thus far, evidence for such a mechanism has been revealed in neurophysiological studies, with little corroborative psychophysical evidence. In six…

  17. Auditory memory function in expert chess players

    PubMed Central

    Fattahi, Fariba; Geshani, Ahmad; Jafari, Zahra; Jalaie, Shohreh; Salman Mahini, Mona

    2015-01-01

    Background: Chess is a game that involves many aspects of high-level cognition such as memory, attention, focus and problem solving. Long-term practice of chess can improve cognitive performance and behavioral skills. Auditory memory, as a kind of memory, can be influenced by strengthening processes following long-term chess playing, like other behavioral skills, because of common processing pathways in the brain. The purpose of this study was to evaluate the auditory memory function of expert chess players using the Persian version of the dichotic auditory-verbal memory test. Methods: The Persian version of the dichotic auditory-verbal memory test was performed for 30 expert chess players aged 20-35 years and 30 non-chess players who were matched by different conditions; the participants in both groups were randomly selected. The performance of the two groups was compared by independent samples t-test using SPSS version 21. Results: The mean scores of the dichotic auditory-verbal memory test in the two groups, expert chess players and non-chess players, revealed a significant difference (p ≤ 0.001). The difference between the ears scores for expert chess players (p = 0.023) and non-chess players (p = 0.013) was significant. Gender had no effect on the test results. Conclusion: Auditory memory function in expert chess players was significantly better compared to non-chess players. It seems that increased auditory memory function is related to strengthened cognitive performance due to playing chess for a long time. PMID:26793666
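    A minimal sketch of the independent samples t-test used for the group comparison; the scores below are invented for illustration and are not the study's data:

```python
import math
from statistics import mean, stdev

def independent_t(a, b):
    """Student's two-sample t statistic with pooled variance, as used
    for comparing two independent groups."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    t = (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2  # statistic and degrees of freedom

chess = [18, 20, 19, 21, 22, 20]     # hypothetical memory scores
controls = [15, 16, 17, 15, 18, 16]  # hypothetical memory scores
t_stat, dof = independent_t(chess, controls)
```

    The resulting t statistic is compared against the t distribution with na + nb - 2 degrees of freedom to obtain the p-value reported in such studies.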

  18. Cryptographic robustness of a quantum cryptography system using phase-time coding

    SciTech Connect

    Molotkov, S. N.

    2008-01-15

    A cryptographic analysis is presented of a new quantum key distribution protocol using phase-time coding. An upper bound is obtained for the error rate that guarantees secure key distribution. It is shown that the maximum tolerable error rate for this protocol depends on the counting rate in the control time slot. When no counts are detected in the control time slot, the protocol guarantees secure key distribution if the bit error rate in the sifted key does not exceed 50%. This protocol partially discriminates between errors due to system defects (e.g., imbalance of a fiber-optic interferometer) and eavesdropping. In the absence of eavesdropping, the counts detected in the control time slot are not caused by interferometer imbalance, which reduces the requirements for interferometer stability.

  19. Incorporating Midbrain Adaptation to Mean Sound Level Improves Models of Auditory Cortical Processing

    PubMed Central

    Schoppe, Oliver; King, Andrew J.; Schnupp, Jan W.H.; Harper, Nicol S.

    2016-01-01

    Adaptation to stimulus statistics, such as the mean level and contrast of recently heard sounds, has been demonstrated at various levels of the auditory pathway. It allows the nervous system to operate over the wide range of intensities and contrasts found in the natural world. Yet current standard models of the response properties of auditory neurons do not incorporate such adaptation. Here we present a model of neural responses in the ferret auditory cortex (the IC Adaptation model), which takes into account adaptation to mean sound level at a lower level of processing: the inferior colliculus (IC). The model performs high-pass filtering with frequency-dependent time constants on the sound spectrogram, followed by half-wave rectification, and passes the output to a standard linear–nonlinear (LN) model. We find that the IC Adaptation model consistently predicts cortical responses better than the standard LN model for a range of synthetic and natural stimuli. The IC Adaptation model introduces no extra free parameters, so it improves predictions without sacrificing parsimony. Furthermore, the time constants of adaptation in the IC appear to be matched to the statistics of natural sounds, suggesting that neurons in the auditory midbrain predict the mean level of future sounds and adapt their responses appropriately. SIGNIFICANCE STATEMENT An ability to accurately predict how sensory neurons respond to novel stimuli is critical if we are to fully characterize their response properties. Attempts to model these responses have had a distinguished history, but it has proven difficult to improve their predictive power significantly beyond that of simple, mostly linear receptive field models. Here we show that auditory cortex receptive field models benefit from a nonlinear preprocessing stage that replicates known adaptation properties of the auditory midbrain. This improves their predictive power across a wide range of stimuli but keeps model complexity low as it
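    The preprocessing stage described above (per-channel high-pass filtering with a channel-specific time constant, followed by half-wave rectification) can be sketched as follows; the time constants and the step input are illustrative assumptions, not the fitted model:

```python
import numpy as np

def ic_adaptation_stage(spectrogram, dt, taus):
    """Sketch of the preprocessing idea: in each frequency channel,
    subtract a leaky running estimate of the mean level (a first-order
    high-pass filter with channel-specific time constant tau), then
    half-wave rectify. Time constants here are placeholders."""
    n_freq, n_time = spectrogram.shape
    out = np.zeros_like(spectrogram)
    mean_level = np.zeros(n_freq)
    for t in range(n_time):
        # leaky running estimate of the mean level in each channel
        mean_level += (dt / taus) * (spectrogram[:, t] - mean_level)
        # high-pass plus half-wave rectification
        out[:, t] = np.maximum(spectrogram[:, t] - mean_level, 0.0)
    return out

# A step in sound level: the adapted response decays back toward zero,
# faster in the channel with the shorter time constant
spec = np.ones((2, 500))
resp = ic_adaptation_stage(spec, dt=0.001, taus=np.array([0.05, 0.2]))
```

    The output of such a stage, rather than the raw spectrogram, is what gets passed to the standard linear–nonlinear model in the paper's scheme.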

  20. Coupled Neutron-Photon, 3-D, Combinatorial Geometry, Time Dependent, Monte Carlo Transport Code System.

    Energy Science and Technology Software Center (ESTSC)

    2013-06-24

    Version 07 TART2012 is a coupled neutron-photon Monte Carlo transport code designed to use three-dimensional (3-D) combinatorial geometry. Neutron and/or photon sources as well as neutron induced photon production can be tracked. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART2012 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART2012 extends the general utility of the code to even more areas of application than available in previous releases by concentrating on improving the physics, particularly with regard to improved treatment of neutron fission, resonance self-shielding, molecular binding, and extending input options used by the code. Several utilities are included for creating input files and displaying TART results and data. TART2012 uses the latest ENDF/B-VI, Release 8, data. New for TART2012 is the use of continuous energy neutron cross sections, in addition to its traditional multigroup cross sections. For neutron interaction, the data are derived using ENDF-ENDL2005 and include both continuous energy cross sections and 700 group neutron data derived using a combination of ENDF/B-VI, Release 8, and ENDL data. The 700 group structure extends from 10^-5 eV up to 1 GeV. Presently nuclear data are only available up to 20 MeV, so that only 616 of the groups are currently used. For photon interaction, 701 point photon data were derived using the Livermore EPDL97 file. The new 701 point structure extends from 100 eV up to 1 GeV, and is currently used over this entire energy range. TART2012 completely supersedes all older versions of TART, and it is strongly recommended that one use only the most recent version of TART2012 and its data files. Check author’s homepage for related information: http

  1. Space-Time Coded MC-CDMA: Blind Channel Estimation, Identifiability, and Receiver Design

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Li, Hongbin

    2003-12-01

    Integrating the strengths of multicarrier (MC) modulation and code division multiple access (CDMA), MC-CDMA systems are of great interest for future broadband transmissions. This paper considers the problem of channel identification and signal combining/detection schemes for MC-CDMA systems equipped with multiple transmit antennas and space-time (ST) coding. In particular, a subspace based blind channel identification algorithm is presented. Identifiability conditions are examined and specified which guarantee unique and perfect (up to a scalar) channel estimation when knowledge of the noise subspace is available. Several popular single-user based signal combining schemes, namely the maximum ratio combining (MRC) and the equal gain combining (EGC), which are often utilized in conventional single-transmit-antenna based MC-CDMA systems, are extended to the current ST-coded MC-CDMA (STC-MC-CDMA) system to perform joint combining and decoding. In addition, a linear multiuser minimum mean-squared error (MMSE) detection scheme is also presented, which is shown to outperform the MRC and EGC at some increased computational complexity. Numerical examples are presented to evaluate and compare the proposed channel identification and signal detection/combining techniques.
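    A minimal sketch of the linear MMSE detection step described above, assuming the effective channel/signature matrix H is already known (in the paper it would come from the blind subspace estimate); the random H, symbol values, and noise level below are purely illustrative:

```python
import numpy as np

def mmse_detect(H, r, noise_var):
    """Linear MMSE multiuser detection sketch: for r = H s + n with
    unit-power symbols, s_hat = (H^H H + sigma^2 I)^{-1} H^H r."""
    G = H.conj().T @ H + noise_var * np.eye(H.shape[1])
    return np.linalg.solve(G, H.conj().T @ r)

rng = np.random.default_rng(0)
n_chips, n_users = 32, 4
H = rng.standard_normal((n_chips, n_users))       # effective signatures
s = rng.choice([-1.0, 1.0], size=n_users)         # BPSK symbols
r = H @ s + 0.1 * rng.standard_normal(n_chips)    # noisy received vector
s_hat = np.sign(mmse_detect(H, r, noise_var=0.01))
```

    Unlike MRC or EGC, which combine each user's signal separately, the matrix inverse here jointly suppresses multiuser interference, which is the source of both the performance gain and the extra computational cost noted in the abstract.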

  2. Numerical relativity for D dimensional axially symmetric space-times: Formalism and code tests

    NASA Astrophysics Data System (ADS)

    Zilhão, Miguel; Witek, Helvi; Sperhake, Ulrich; Cardoso, Vitor; Gualtieri, Leonardo; Herdeiro, Carlos; Nerozzi, Andrea

    2010-04-01

    The numerical evolution of Einstein’s field equations in a generic background has the potential to answer a variety of important questions in physics: from applications to the gauge-gravity duality, to modeling black hole production in TeV gravity scenarios, to analysis of the stability of exact solutions, and to tests of cosmic censorship. In order to investigate these questions, we extend numerical relativity to more general space-times than those investigated hitherto, by developing a framework to study the numerical evolution of D dimensional vacuum space-times with an SO(D-2) isometry group for D≥5, or SO(D-3) for D≥6. Performing a dimensional reduction on a (D-4) sphere, the D dimensional vacuum Einstein equations are rewritten as a 3+1 dimensional system with source terms, and presented in the Baumgarte, Shapiro, Shibata, and Nakamura formulation. This allows the use of existing 3+1 dimensional numerical codes with small adaptations. Brill-Lindquist initial data are constructed in D dimensions and a procedure to match them to our 3+1 dimensional evolution equations is given. We have implemented our framework by adapting the Lean code and perform a variety of simulations of nonspinning black hole space-times. Specifically, we present a modified moving puncture gauge, which facilitates long-term stable simulations in D=5. We further demonstrate the internal consistency of the code by studying convergence and comparing numerical versus analytic results in the case of geodesic slicing for D=5, 6.

  3. ZEROD: a zero dimensional, time dependent tokamak plasma simulation code for reactor scoping studies

    SciTech Connect

    Hacker, M.P.

    1980-02-01

    ZEROD integrates in time the volume-averaged electron and ion energy balance equations, as well as particle balance equations for the various ion species. The ion species included are deuterium, tritium, thermal alpha particles, and one impurity species. The code incorporates models for plasma heating via neutral deuterium beam injection, rf heating by waves in the lower-hybrid frequency regime, control of the burn thermal equilibrium, and fueling. Geometric quantities such as the plasma volume, surface area, profile averaging factors, etc., are calculated on the basis of realistic models for the plasma cross-sectional shape, including circles, ellipses, dees, and doublets.
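    The volume-averaged approach can be illustrated with a single toy energy-balance equation, dW/dt = P_heat - W/tau_E, integrated in time; ZEROD's actual system couples several species, balance equations, and heating models, so this is only a sketch of the zero-dimensional idea:

```python
def evolve_stored_energy(p_heat, tau_e, dt, n_steps, w0=0.0):
    """Toy zero-dimensional energy balance dW/dt = P_heat - W/tau_E,
    integrated with forward Euler. Illustrative of the volume-averaged
    approach only, not ZEROD's actual equations."""
    w = w0
    history = []
    for _ in range(n_steps):
        w += dt * (p_heat - w / tau_e)
        history.append(w)
    return history

# The stored energy should relax toward the steady state P_heat * tau_E
hist = evolve_stored_energy(p_heat=10.0, tau_e=0.5, dt=0.001, n_steps=5000)
```

    Because everything is volume-averaged, a scoping run like this costs microseconds, which is what makes zero-dimensional codes attractive for reactor parameter scans.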

  4. Explicit time-reversible orbit integration in Particle In Cell codes with static homogeneous magnetic field

    NASA Astrophysics Data System (ADS)

    Patacchini, L.; Hutchinson, I. H.

    2009-04-01

    A new explicit time-reversible orbit integrator for the equations of motion in a static homogeneous magnetic field - called the Cyclotronic integrator - is presented. Like Spreiter and Walter's Taylor expansion algorithm, for sufficiently weak electric field gradients this second order method does not require a fine resolution of the Larmor motion; it has however the essential advantage of being symplectic, hence time-reversible. The Cyclotronic integrator is only subject to a linear stability constraint (ΩΔt < π, Ω being the Larmor angular frequency), and is therefore particularly suitable for electrostatic Particle In Cell codes with uniform magnetic field, where Ω is larger than any other characteristic frequency yet a resolution of the particles' gyromotion is required. Application examples and a detailed comparison with the well-known (time-reversible) Boris algorithm are presented; it is in particular shown that implementation of the Cyclotronic integrator in the kinetic codes SCEPTIC and Democritus can reduce the cost of orbit integration by up to a factor of ten.
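    The Boris algorithm used as the baseline comparison can be sketched as follows (this is the standard textbook scheme of half electric kick, magnetic rotation, half kick, not the paper's Cyclotronic integrator):

```python
import math

def boris_push(v, e_field, b_field, qm, dt):
    """One step of the Boris algorithm: half electric kick, exact
    magnetic rotation, half electric kick. Vectors are 3-sequences;
    qm is the charge-to-mass ratio."""
    # half acceleration by E
    vm = [v[i] + 0.5 * qm * dt * e_field[i] for i in range(3)]
    # rotation by B: t = qB*dt/(2m), s = 2t/(1 + |t|^2)
    t = [0.5 * qm * dt * b for b in b_field]
    t2 = sum(c * c for c in t)
    s = [2.0 * c / (1.0 + t2) for c in t]
    vp = [vm[0] + (vm[1] * t[2] - vm[2] * t[1]),
          vm[1] + (vm[2] * t[0] - vm[0] * t[2]),
          vm[2] + (vm[0] * t[1] - vm[1] * t[0])]
    vr = [vm[0] + (vp[1] * s[2] - vp[2] * s[1]),
          vm[1] + (vp[2] * s[0] - vp[0] * s[2]),
          vm[2] + (vp[0] * s[1] - vp[1] * s[0])]
    # half acceleration by E
    return [vr[i] + 0.5 * qm * dt * e_field[i] for i in range(3)]

# With E = 0 the Boris rotation is exactly norm-preserving: |v| is conserved
v = [1.0, 0.0, 0.5]
for _ in range(1000):
    v = boris_push(v, (0.0, 0.0, 0.0), (0.0, 0.0, 1.0), qm=1.0, dt=0.1)
speed = math.sqrt(sum(c * c for c in v))
```

    The exact energy conservation in a pure magnetic field is the property that makes Boris the long-standing reference against which new pushers such as the Cyclotronic integrator are judged.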

  5. Instantly decodable network coding for real-time scalable video broadcast over wireless networks

    NASA Astrophysics Data System (ADS)

    Karim, Mohammad S.; Sadeghi, Parastoo; Sorour, Sameh; Aboutorab, Neda

    2016-01-01

    In this paper, we study real-time scalable video broadcast over wireless networks using instantly decodable network coding (IDNC). Such real-time scalable videos have a hard deadline and impose a decoding order on the video layers. We first derive the upper bound on the probability that the individual completion times of all receivers meet the deadline. Using this probability, we design two prioritized IDNC algorithms, namely the expanding window IDNC (EW-IDNC) algorithm and the non-overlapping window IDNC (NOW-IDNC) algorithm. These algorithms provide a high level of protection to the most important video layer, namely the base layer, before considering additional video layers, namely the enhancement layers, in coding decisions. Moreover, in these algorithms, we select an appropriate packet combination over a given number of video layers so that these video layers are decoded by the maximum number of receivers before the deadline. We formulate this packet selection problem as a two-stage maximal clique selection problem over an IDNC graph. Simulation results over a real scalable video sequence show that our proposed EW-IDNC and NOW-IDNC algorithms improve the received video quality compared to the existing IDNC algorithms.

  6. Architecture for time or transform domain decoding of reed-solomon codes

    NASA Technical Reports Server (NTRS)

    Shao, Howard M. (Inventor); Truong, Trieu-Kie (Inventor); Hsu, In-Shek (Inventor); Deutsch, Leslie J. (Inventor)

    1989-01-01

    Two pipeline (255,223) RS decoders, one a time domain decoder and the other a transform domain decoder, use the same first part to develop an errata locator polynomial τ(x) and an errata evaluator polynomial A(x). Both the time domain decoder and transform domain decoder have a modified GCD that uses an input multiplexer and an output demultiplexer to reduce the number of GCD cells required. The time domain decoder uses a Chien search and polynomial evaluator on the GCD outputs τ(x) and A(x) for the final decoding steps, while the transform domain decoder uses a transform error pattern algorithm operating on τ(x) and the initial syndrome computation S(x), followed by an inverse transform algorithm in sequence for the final decoding steps prior to adding the received RS coded message to produce a decoded output message.

  7. Comparison of WDM/Pulse-Position-Modulation (WDM/PPM) with Code/Pulse-Position-Swapping (C/PPS) Based on Wavelength/Time Codes

    SciTech Connect

    Mendez, A J; Hernandez, V J; Gagliardi, R M; Bennett, C V

    2009-06-19

    Pulse position modulation (PPM) signaling is favored in intensity modulated/direct detection (IM/DD) systems that have average power limitations. Combining PPM with WDM over a fiber link (WDM/PPM) enables multiple accessing and increases the link's throughput. Electronic bandwidth and synchronization advantages are further gained by mapping the time slots of PPM onto a code space, or code/pulse-position-swapping (C/PPS). The property of multiple bits per symbol typical of PPM can be combined with multiple accessing by using wavelength/time [W/T] codes in C/PPS. This paper compares the performance of WDM/PPM and C/PPS for equal wavelengths and bandwidth.
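
    C/PPS maps the PPM time slots onto a code space; plain M-ary PPM itself, the starting point for that mapping, can be sketched as follows (slot widths and M are illustrative):

```python
import numpy as np

def ppm_encode(bits, M=4):
    """Map groups of log2(M) bits to M-slot PPM frames (one pulse per frame)."""
    k = int(np.log2(M))
    assert len(bits) % k == 0
    frames = []
    for i in range(0, len(bits), k):
        slot = int("".join(map(str, bits[i:i + k])), 2)  # bit group -> slot index
        frame = np.zeros(M, dtype=int)
        frame[slot] = 1                                  # single pulse per frame
        frames.append(frame)
    return np.concatenate(frames)

def ppm_decode(chips, M=4):
    k = int(np.log2(M))
    bits = []
    for frame in chips.reshape(-1, M):
        slot = int(np.argmax(frame))                     # detected pulse position
        bits += [int(b) for b in format(slot, f"0{k}b")]
    return bits

bits = [1, 0, 0, 1, 1, 1, 0, 0]
chips = ppm_encode(bits, M=4)   # 2 bits/symbol -> 4 frames of 4 slots each
```

    Each frame carries log2(M) bits in a single pulse, which is what gives PPM its average-power advantage in IM/DD links.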

  8. What causes auditory distraction?

    PubMed

    Macken, William J; Phelps, Fiona G; Jones, Dylan M

    2009-02-01

    The role of separating task-relevant from task-irrelevant aspects of the environment is typically assigned to the executive functioning of working memory. However, pervasive aspects of auditory distraction have been shown to be unrelated to working memory capacity in a range of studies of individual differences. We measured individual differences in global pattern matching and deliberate recoding of auditory sequences, and showed that, although deliberate processing was related to short-term memory performance, it did not predict the extent to which that performance was disrupted by task-irrelevant sound. Individual differences in global sequence processing were, however, positively related to the degree to which auditory distraction occurred. We argue that much auditory distraction, rather than being a negative function of working memory capacity, is in fact a positive function of the acuity of obligatory auditory processing. PMID:19145024

  9. TTVFast: An efficient and accurate code for transit timing inversion problems

    SciTech Connect

    Deck, Katherine M.; Agol, Eric; Holman, Matthew J.; Nesvorný, David

    2014-06-01

    Transit timing variations (TTVs) have proven to be a powerful technique for confirming Kepler planet candidates, for detecting non-transiting planets, and for constraining the masses and orbital elements of multi-planet systems. These TTV applications often require the numerical integration of orbits for computation of transit times (as well as impact parameters and durations); frequently tens of millions to billions of simulations are required when running statistical analyses of the planetary system properties. We have created a fast code for transit timing computation, TTVFast, which uses a symplectic integrator with a Keplerian interpolator for the calculation of transit times. The speed comes at the expense of accuracy in the calculated times, but the accuracy lost is largely unnecessary, as transit times do not need to be calculated to accuracies significantly smaller than the measurement uncertainties on the times. The time step can be tuned to give sufficient precision for any particular system. We find a speed-up of at least an order of magnitude relative to dynamical integrations with high precision using a Bulirsch-Stoer integrator.
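
    Downstream of any integrator, TTVs themselves are simply the residuals of observed mid-transit times about a best-fit linear ephemeris. A minimal sketch on synthetic times (the period, epoch, and sinusoidal TTV amplitude are invented for illustration; this is not TTVFast's symplectic machinery):

```python
import numpy as np

# Synthetic mid-transit times: linear ephemeris plus a small sinusoidal TTV.
n = np.arange(20)                                 # transit epochs
P_true, t0_true = 3.5, 2.1                        # days (illustrative values)
ttv_true = 0.004 * np.sin(2 * np.pi * n / 7.0)    # ~6-minute TTV signal
t_obs = t0_true + P_true * n + ttv_true

# Fit a linear ephemeris t = t0 + P*n; the (O - C) residuals are the TTVs.
P_fit, t0_fit = np.polyfit(n, t_obs, 1)
ttv = t_obs - (t0_fit + P_fit * n)
```

    Matching such residuals against dynamically computed transit times is what requires the millions of fast forward models the code provides.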

  10. Auditory-motor learning influences auditory memory for music.

    PubMed

    Brown, Rachel M; Palmer, Caroline

    2012-05-01

    In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features. PMID:22271265

  11. Statistical learning of recurring sound patterns encodes auditory objects in songbird forebrain

    PubMed Central

    Lu, Kai; Vicario, David S.

    2014-01-01

    Auditory neurophysiology has demonstrated how basic acoustic features are mapped in the brain, but it is still not clear how multiple sound components are integrated over time and recognized as an object. We investigated the role of statistical learning in encoding the sequential features of complex sounds by recording neuronal responses bilaterally in the auditory forebrain of awake songbirds that were passively exposed to long sound streams. These streams contained sequential regularities, and were similar to streams used with human infants to demonstrate statistical learning for speech sounds. For stimulus patterns with contiguous transitions and with nonadjacent elements, single and multiunit responses reflected neuronal discrimination of the familiar patterns from novel patterns. In addition, discrimination of nonadjacent patterns was stronger in the right hemisphere than in the left, and may reflect an effect of top-down modulation that is lateralized. Responses to recurring patterns showed stimulus-specific adaptation, a sparsening of neural activity that may contribute to encoding invariants in the sound stream and that appears to increase coding efficiency for the familiar stimuli across the population of neurons recorded. As auditory information about the world must be received serially over time, recognition of complex auditory objects may depend on this type of mnemonic process to create and differentiate representations of recently heard sounds. PMID:25246563
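
    The "sequential regularities" in such streams are transition probabilities: high within a recurring pattern, low across pattern boundaries. A sketch of computing them for a hypothetical syllable stream built from two triplet "words" (the stream and alphabet are invented for illustration):

```python
from collections import Counter

# Hypothetical stream concatenating the triplet words ABC and DEF:
# within-word transitions (A->B, B->C, D->E, E->F) are deterministic,
# across-boundary transitions (e.g. C->D) are not.
stream = "ABCDEFABCABCDEFDEFABCDEF"

pairs = Counter(zip(stream, stream[1:]))   # bigram counts
firsts = Counter(stream[:-1])              # occurrences as a bigram's first element
# Conditional transition probabilities P(next | current).
trans = {(a, b): pairs[(a, b)] / firsts[a] for (a, b) in pairs}
```

    Statistical-learning experiments rely on exactly this contrast: listeners (or neurons) can segment the stream only by tracking the drop in transition probability at word boundaries.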

  13. Capabilities needed for the next generation of thermo-hydraulic codes for use in real time applications

    SciTech Connect

    Arndt, S.A.

    1997-07-01

    The real-time reactor simulation field is currently at a crossroads in terms of the capability to perform real-time analysis using the most sophisticated computer codes. Current generation safety analysis codes are being modified to replace simplified codes that were specifically designed to meet the competing requirements of real-time applications. The next generation of thermo-hydraulic codes will need to have included in their specifications the specific requirement for use in a real-time environment. Use of the codes in real-time applications imposes much stricter requirements on robustness, reliability and repeatability than do design and analysis applications. In addition, the need for code use by a variety of users is a critical issue for real-time users, trainers and emergency planners who currently use real-time simulation, and PRA practitioners who will increasingly use real-time simulation for evaluating PRA success criteria in near real-time to validate PRA results for specific configurations and plant system unavailabilities.

  14. A visual parallel-BCI speller based on the time-frequency coding strategy

    NASA Astrophysics Data System (ADS)

    Xu, Minpeng; Chen, Long; Zhang, Lixin; Qi, Hongzhi; Ma, Lan; Tang, Jiabei; Wan, Baikun; Ming, Dong

    2014-04-01

    Objective. Spelling is one of the most important issues in brain-computer interface (BCI) research. This paper develops a visual parallel-BCI speller system based on the time-frequency coding strategy, in which the sub-speller switching among four simultaneously presented sub-spellers and the character selection are identified in a parallel mode. Approach. The parallel-BCI speller was constituted by four independent P300+SSVEP-B (P300 plus SSVEP blocking) spellers with different flicker frequencies, so that every character had a specific time-frequency code. To verify its effectiveness, 11 subjects were involved in offline and online spellings. A classification strategy was designed to recognize the target character by jointly using canonical correlation analysis and stepwise linear discriminant analysis. Main results. Online spellings showed that the proposed parallel-BCI speller had a high performance, reaching a highest information transfer rate of 67.4 bit min⁻¹, with averages of 54.0 bit min⁻¹ and 43.0 bit min⁻¹ for three rounds and five rounds, respectively. Significance. The results indicated that the proposed parallel-BCI could be effectively controlled by users, with attention shifting fluently among the sub-spellers, and substantially improved BCI spelling performance.
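
    The paper's classifier combines canonical correlation analysis (CCA) with stepwise LDA; the sketch below shows only the standard CCA stage of SSVEP frequency recognition, on synthetic two-channel data (sampling rate, channel weights, and candidate frequencies are illustrative):

```python
import numpy as np

def max_canonical_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    Qx, _ = np.linalg.qr(X - X.mean(0))
    Qy, _ = np.linalg.qr(Y - Y.mean(0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def ssvep_classify(eeg, fs, freqs, n_harm=2):
    """Pick the stimulus frequency whose sine/cosine references best
    correlate (via CCA) with the multichannel EEG segment."""
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in freqs:
        refs = np.column_stack(
            [fn(2 * np.pi * h * f * t) for h in range(1, n_harm + 1)
             for fn in (np.sin, np.cos)])
        scores.append(max_canonical_corr(eeg, refs))
    return freqs[int(np.argmax(scores))]

# Toy check: two-channel "EEG" dominated by a 10 Hz response plus noise.
rng = np.random.default_rng(0)
fs = 250
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 10 * t)
eeg = np.column_stack([sig + 0.5 * rng.standard_normal(fs),
                       0.8 * sig + 0.5 * rng.standard_normal(fs)])
picked = ssvep_classify(eeg, fs, [8.0, 10.0, 12.0])
```

    In the full speller, the CCA score identifies the attended sub-speller's flicker frequency, while the P300 time code selects the character within it.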

  15. Applying an optical space-time coding method to enhance light scattering signals in microfluidic devices.

    PubMed

    Mei, Zhe; Wu, Tsung-Feng; Pion-Tonachini, Luca; Qiao, Wen; Zhao, Chao; Liu, Zhiwen; Lo, Yu-Hwa

    2011-09-01

    An "optical space-time coding method" was applied to microfluidic devices to detect the forward and large angle light scattering signals for unlabelled bead and cell detection. Because of the enhanced sensitivity by this method, silicon pin photoreceivers can be used to detect both forward scattering (FS) and large angle (45-60°) scattering (LAS) signals, the latter of which has been traditionally detected by a photomultiplier tube. This method yields significant improvements in coefficients of variation (CV), producing CVs of 3.95% to 10.05% for FS and 7.97% to 26.12% for LAS with 15 μm, 10 μm, and 5 μm beads. These are among the best values ever demonstrated with microfluidic devices. The optical space-time coding method also enables us to measure the speed and position of each particle, producing valuable information for the design and assessment of microfluidic lab-on-a-chip devices such as flow cytometers and complete blood count devices. PMID:21915241

  16. Robust image transmission over MIMO space-time coded wireless systems

    NASA Astrophysics Data System (ADS)

    Song, Daewon; Chen, Chang W.

    2006-05-01

    We present in this paper an integrated robust image transmission scheme using space-time block codes (STBC) over multi-input multi-output (MIMO) wireless systems. First, in order to achieve an excellent error resilient capability, multiple bitstreams are generated based on wavelet trees along the spatial orientations. The spatial-orientation trees in the wavelet domain are individually encoded using SPIHT. Error propagation is thus limited within each bitstream. Then, Reed-Solomon (R-S) codes as forward error correction (FEC) are adopted to combat transmission errors over error-prone wireless channels and to detect residual errors so as to avoid error propagation in each bitstream. FEC can reduce the bit error rates at the expense of increased data rate. However, it is often difficult to design an optimal FEC scheme for a time-varying multi-path fading channel that may fluctuate beyond the capacity of the adopted FEC scheme. Therefore, in order to overcome such difficulty, we propose an approach to alleviate the effect of multi-path fading by employing the STBC for spatial diversity, with the assumption that channel state information (CSI) is perfectly estimated at the receiver. Experimental results demonstrate that the proposed scheme can achieve much improved performance in terms of PSNR over a Rayleigh flat fading channel as compared with a wireless system without spatial diversity.
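
    The classic two-antenna STBC with perfect CSI at the receiver is the Alamouti code, where two symbols sent over two antennas in two time slots are recovered by simple linear combining. A minimal noiseless sketch (channel gains and symbols are illustrative; the paper's full SPIHT + R-S pipeline is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two QPSK symbols sent over 2 antennas in 2 time slots (Alamouti G2 code).
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)
h1, h2 = rng.standard_normal(2) + 1j * rng.standard_normal(2)  # flat fading

# Slot 1: antennas send (s1, s2); slot 2: (-conj(s2), conj(s1)).
r1 = h1 * s1 + h2 * s2
r2 = -h1 * np.conj(s2) + h2 * np.conj(s1)

# Linear ML combining at the receiver (perfect CSI assumed).
g = abs(h1) ** 2 + abs(h2) ** 2
s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g
s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g
```

    The combining step collapses both channel paths onto each symbol with gain |h1|² + |h2|², which is exactly the order-2 diversity the scheme relies on to ride out fades.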

  17. Effects of an Auditory Lateralization Training in Children Suspected to Central Auditory Processing Disorder

    PubMed Central

    Lotfi, Yones; Moosavi, Abdollah; Bakhshi, Enayatollah; Sadjedi, Hamed

    2016-01-01

    Background and Objectives Central auditory processing disorder [(C)APD] refers to a deficit in the processing of auditory stimuli in the nervous system that is not due to higher-order language or cognitive factors. One of the problems in children with (C)APD is spatial difficulty, which has been overlooked despite its significance. Localization is the auditory ability to detect sound sources in space and can help to differentiate the desired speech from other simultaneous sound sources. The aim of this research was to investigate the effects of an auditory lateralization training on speech perception in the presence of noise/competing signals in children suspected of (C)APD. Subjects and Methods In this analytical interventional study, 60 children suspected of (C)APD were selected based on multiple auditory processing assessment subtests. They were randomly divided into two groups: a control group (mean age 9.07) and a training group (mean age 9.00). The training program consisted of detecting and pointing to sound sources delivered with interaural time differences under headphones for 12 formal sessions (6 weeks). The spatial word recognition score (WRS) and the monaural selective auditory attention test (mSAAT) were used to follow the effects of the auditory lateralization training. Results This study showed that in the training group, the mSAAT score and the spatial WRS in noise improved significantly after the auditory lateralization training (p value≤0.001). Conclusions We used auditory lateralization training for 6 weeks and showed that it can significantly improve speech understanding in noise. Generalization of these results requires further research. PMID:27626084
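
    Lateralization stimuli of the kind described are built by delaying one headphone channel relative to the other by a microsecond-scale interaural time difference (ITD). A sketch of generating such a stimulus (sampling rate, duration, and ITD value are illustrative, not the study's exact protocol):

```python
import numpy as np

def lateralized_noise(itd_us, fs=44100, dur=0.5, rng=None):
    """Stereo noise burst whose left channel is delayed by `itd_us`
    microseconds, heard as lateralized toward the leading (right) ear."""
    if rng is None:
        rng = np.random.default_rng(0)
    n_delay = int(round(abs(itd_us) * 1e-6 * fs))
    mono = rng.standard_normal(int(fs * dur))
    left = np.concatenate([np.zeros(n_delay), mono])   # lagging ear
    right = np.concatenate([mono, np.zeros(n_delay)])  # leading ear
    return np.column_stack([left, right])

sig = lateralized_noise(itd_us=500)   # 500 µs ITD, right-leading
```

    Sweeping the ITD from trial to trial moves the perceived intracranial position, which is what the detection-and-pointing task exercises.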

  18. Design of optoelectronic scalar-relation vector processors with time-pulse coding

    NASA Astrophysics Data System (ADS)

    Krasilenko, Vladimir G.; Bardachenko, Vitaliy F.; Nikolsky, Alexander I.; Lazarev, Alexander A.; Kolesnytsky, Oleg K.

    2005-03-01

    The results of designing optoelectronic scalar-relation vector processors (SRVP) with time-pulse coding as base cells for homogeneous 1D and 2D computing media are considered in the paper. The concept builds on the advantages of time-pulse coding in hardware implementations of multichannel analog neurobiological devices and time-pulse photoconverters. The two-stage structure of the SRVP, mapping a generalized mathematical model of a quasiuniversal relation between two vectors, is designed on a mathematical basis that includes generalized equivalence (nonequivalence) operations and the generalized t-norm and s-norm operations of neuro-fuzzy logic. It is shown that time-pulse coding allows quasiuniversal elements of two-valued logic to be used as base blocks in both cascades of the processor. Four-input universal logical elements of two-valued logic (ULE TVL) with direct and complement outputs are used for processing the analog vector components in the first cascade of the SRVP. In a modified variant, the ULE TVL have direct and inverse digital outputs for direct and complement time-pulse outputs and are supplied with additional optical signal conversion drivers. The ULE TVL of the second cascade has 2n or 4n inputs, where n is the dimension of the treated vectors. The ULE TVL circuits, based on parallel analog-to-digital converters and digital circuits implemented on CMOS transistors, have optical inputs and outputs and the following characteristics: realized in 1.5 µm CMOS technology; input current range of 100 nA...100 µA; supply voltage of 3...15 V; relative error below 0.5%; output voltage delay in the range of 10...100 ns. We consider the structural design and circuitry of the SRVP base blocks and show that all principal components can be implemented on the basis of optoelectronic photocurrent transformers on current-mirror comparators with two-threshold and multi

  19. Coding Odorant Concentration Through Activation Timing Between the Medial and Lateral Olfactory Bulb

    PubMed Central

    Zhou, Zhishang; Belluscio, Leonardo

    2012-01-01

    In mammals, each olfactory bulb (OB) contains a pair of mirror-symmetric glomerular maps organized to reflect odorant receptor identity. The functional implication of maintaining these symmetric medial-lateral maps within each OB remains unclear. Here, using in vivo multi-electrode recordings to simultaneously detect odorant-induced activity across the entire OB, we reveal a timing difference in the odorant-evoked onset latencies between the medial and lateral halves. Interestingly, the latencies in the medial and lateral OB decreased at different rates as odorant concentration increased, causing the timing difference between them to also diminish. As a result, output neurons in the medial and lateral OB fired with greater synchrony at higher odorant concentrations. Thus, we propose that temporal differences in activity between the medial and lateral OB can dynamically code odorant concentration, which is subsequently decoded in the olfactory cortex through the integration of synchronous action potentials. PMID:23168258

  20. A comparison of VLSI architectures for time and transform domain decoding of Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Hsu, I. S.; Truong, T. K.; Deutsch, L. J.; Satorius, E. H.; Reed, I. S.

    1988-01-01

    It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial needed to decode a Reed-Solomon (RS) code. It is shown that this algorithm can be used for both time and transform domain decoding by replacing its initial conditions with the Forney syndromes and the erasure locator polynomial. By this means both the errata locator polynomial and the errata evaluator polynomial can be obtained with the Euclidean algorithm. With these ideas, both time and transform domain Reed-Solomon decoders for correcting errors and erasures are simplified and compared. As a consequence, the architectures of Reed-Solomon decoders for correcting both errors and erasures can be made more modular, regular, simple, and naturally suitable for VLSI implementation.
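
    The core step is an extended Euclidean recursion that stops when the remainder's degree drops below a threshold; in RS decoding the tracked Bezout coefficient becomes the (errata) locator and the final remainder the evaluator. A toy sketch over GF(7) with dense coefficient lists (practical decoders work over GF(2^m) with the Forney-syndrome initial conditions; the polynomials chosen here are illustrative):

```python
# Polynomials are coefficient lists, low degree first, over GF(7).
P = 7

def deg(a):
    return len(a) - 1

def trim(a):
    while len(a) > 1 and a[-1] == 0:
        a = a[:-1]
    return a

def poly_sub(a, b):
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a)); b = b + [0] * (n - len(b))
    return trim([(x - y) % P for x, y in zip(a, b)])

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % P
    return trim(out)

def poly_divmod(a, b):
    q, r = [0] * max(1, len(a) - len(b) + 1), a[:]
    inv = pow(b[-1], P - 2, P)              # inverse of b's leading coefficient
    while deg(r) >= deg(b) and r != [0]:
        shift = deg(r) - deg(b)
        c = (r[-1] * inv) % P
        q[shift] = c
        r = poly_sub(r, poly_mul([0] * shift + [c], b))
    return trim(q), trim(r)

def euclid_stop(a, b, t):
    """Run Euclid on (a, b) until deg(remainder) < t, tracking v with
    v*b = r (mod a); in RS decoding v plays the locator role, r the
    evaluator role."""
    r0, r1 = a[:], b[:]
    v0, v1 = [0], [1]
    while deg(r1) >= t:
        q, r = poly_divmod(r0, r1)
        r0, r1 = r1, r
        v0, v1 = v1, poly_sub(v0, poly_mul(q, v1))
    return v1, r1

a = [0, 0, 0, 0, 1]      # x^4 (the modulus x^(2t) for t = 2)
b = [3, 1, 4, 1]         # 3 + x + 4x^2 + x^3, a toy "syndrome" polynomial
v1, r1 = euclid_stop(a, b, t=2)
```

    The invariant v1·b ≡ r1 (mod a) is exactly the key equation the decoder solves, and the early stop is what distinguishes this from a plain GCD computation.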

  1. Translating Neurocognitive Models of Auditory-Verbal Hallucinations into Therapy: Using Real-time fMRI-Neurofeedback to Treat Voices

    PubMed Central

    Fovet, Thomas; Orlov, Natasza; Dyck, Miriam; Allen, Paul; Mathiak, Klaus; Jardri, Renaud

    2016-01-01

    Auditory-verbal hallucinations (AVHs) are frequent and disabling symptoms, which can be refractory to conventional psychopharmacological treatment in more than 25% of the cases. Recent advances in brain imaging allow for a better understanding of the neural underpinnings of AVHs. These findings strengthened transdiagnostic neurocognitive models that characterize these frequent and disabling experiences. At the same time, technical improvements in real-time functional magnetic resonance imaging (fMRI) enabled the development of innovative and non-invasive methods with the potential to relieve psychiatric symptoms, such as fMRI-based neurofeedback (fMRI-NF). During fMRI-NF, brain activity is measured and fed back in real time to the participant in order to help subjects to progressively achieve voluntary control over their own neural activity. Precisely defining the target brain area/network(s) appears critical in fMRI-NF protocols. After reviewing the available neurocognitive models for AVHs, we elaborate on how recent findings in the field may help to develop strong a priori strategies for fMRI-NF target localization. The first approach relies on imaging-based “trait markers” (i.e., persistent traits or vulnerability markers that can also be detected in the presymptomatic and remitted phases of AVHs). The goal of such strategies is to target areas that show aberrant activations during AVHs or are known to be involved in compensatory activation (or resilience processes). Brain regions, from which the NF signal is derived, can be based on structural MRI and neurocognitive knowledge, or functional MRI information collected during specific cognitive tasks. Because hallucinations are acute and intrusive symptoms, a second strategy focuses more on “state markers.” In this case, the signal of interest relies on fMRI capture of the neural networks exhibiting increased activity during AVHs occurrences, by means of multivariate pattern recognition methods. The fine

  3. Selective adaptation to "oddball" sounds by the human auditory system.

    PubMed

    Simpson, Andrew J R; Harper, Nicol S; Reiss, Joshua D; McAlpine, David

    2014-01-29

    Adaptation to both common and rare sounds has been independently reported in neurophysiological studies using probabilistic stimulus paradigms in small mammals. However, the apparent sensitivity of the mammalian auditory system to the statistics of incoming sound has not yet been generalized to task-related human auditory perception. Here, we show that human listeners selectively adapt to novel sounds within scenes unfolding over minutes. Listeners' performance in an auditory discrimination task remains steady for the most common elements within the scene but, after the first minute, performance improves for distinct and rare (oddball) sound elements, at the expense of rare sounds that are relatively less distinct. Our data provide the first evidence of enhanced coding of oddball sounds in a human auditory discrimination task and suggest the existence of an adaptive mechanism that tracks the long-term statistics of sounds and deploys coding resources accordingly. PMID:24478375

  4. Demodulation processes in auditory perception

    NASA Astrophysics Data System (ADS)

    Feth, Lawrence L.

    1994-08-01

    The long-range goal of this project is the understanding of human auditory processing of information conveyed by complex, time-varying signals such as speech, music, or important environmental sounds. Our work is guided by the assumption that human auditory communication is a 'modulation-demodulation' process. That is, we assume that sound sources produce a complex stream of sound pressure waves with information encoded as variations (modulations) of the signal amplitude and frequency. The listener's task is then one of demodulation. Much of past psychoacoustic work has been based on what we characterize as 'spectrum picture processing.' Complex sounds are Fourier analyzed to produce an amplitude-by-frequency 'picture,' and the perception process is modeled as if the listener were analyzing the spectral picture. This approach leads to studies such as 'profile analysis' and the power-spectrum model of masking. Our approach leads us to investigate time-varying, complex sounds. We refer to them as dynamic signals, and we have developed auditory signal processing models to help guide our experimental work.
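
    The simplest demodulation of this kind is amplitude-envelope extraction, commonly done via the analytic signal (a Hilbert transform). A sketch on a synthetic AM tone (carrier and modulation frequencies are illustrative, not the project's stimuli; the FFT-based analytic signal assumes the window is effectively periodic):

```python
import numpy as np

def envelope(x):
    """Amplitude envelope via the analytic signal (FFT-based Hilbert)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)          # frequency-domain analytic-signal filter
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2      # double positive frequencies, zero negatives
    else:
        h[1:(n + 1) // 2] = 2
    return np.abs(np.fft.ifft(X * h))

# AM tone: 1 kHz carrier modulated at 8 Hz; the "message" is the envelope.
fs = 16000
t = np.arange(fs) / fs
msg = 1 + 0.5 * np.sin(2 * np.pi * 8 * t)
x = msg * np.sin(2 * np.pi * 1000 * t)
env = envelope(x)
```

    Because the 8 Hz message is far below the 1 kHz carrier, the envelope recovers the message essentially exactly; real listeners are modeled as performing a cruder version of this operation within auditory filters.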

  5. Sparse Spectrotemporal Coding of Sounds

    NASA Astrophysics Data System (ADS)

    Klein, David J.; König, Peter; Körding, Konrad P.

    2003-12-01

    Recent studies of biological auditory processing have revealed that sophisticated spectrotemporal analyses are performed by central auditory systems of various animals. The analysis is typically well matched with the statistics of relevant natural sounds, suggesting that it produces an optimal representation of the animal's acoustic biotope. We address this topic using simulated neurons that learn an optimal representation of a speech corpus. As input, the neurons receive a spectrographic representation of sound produced by a peripheral auditory model. The output representation is deemed optimal when the responses of the neurons are maximally sparse. Following optimization, the simulated neurons are similar to real neurons in many respects. Most notably, a given neuron only analyzes the input over a localized region of time and frequency. In addition, multiple subregions either excite or inhibit the neuron, together producing selectivity to spectral and temporal modulation patterns. This suggests that the brain's solution is particularly well suited for coding natural sound; therefore, it may prove useful in the design of new computational methods for processing speech.
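
    Once a dictionary is fixed, "maximally sparse responses" amount to solving an L1-penalized reconstruction problem. A generic sparse-coding sketch using ISTA with a random dictionary (this stands in for, and is not, the authors' learning procedure; dimensions, λ, and the planted coefficients are illustrative):

```python
import numpy as np

def ista(D, x, lam=0.1, n_iter=200):
    """Sparse code: argmin_a 0.5*||x - D a||^2 + lam*||a||_1 via ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ a - x)              # gradient of the quadratic term
        z = a - g / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0)  # soft-threshold
    return a

rng = np.random.default_rng(3)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms
a_true = np.zeros(128)
a_true[[5, 40, 90]] = [1.5, -2.0, 1.0]     # 3-sparse ground truth
x = D @ a_true                             # "spectrogram patch" to encode
a_hat = ista(D, x, lam=0.05, n_iter=500)
```

    Sparse-coding models of audition learn D itself so that codes like `a_hat` are sparse on average over natural sounds, which is what yields the localized, modulation-selective receptive fields described above.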

  6. Novel space-time trellis codes for free-space optical communications using transmit laser selection.

    PubMed

    García-Zambrana, Antonio; Boluda-Ruiz, Rubén; Castillo-Vázquez, Carmen; Castillo-Vázquez, Beatriz

    2015-09-21

    In this paper, the deployment of novel space-time trellis codes (STTCs) with transmit laser selection (TLS) for free-space optical (FSO) communication systems using intensity modulation and direct detection (IM/DD) over atmospheric turbulence and misalignment fading channels is presented. Combining TLS and STTC with rate 1 bit/(s · Hz), a new code design criterion based on the use of the largest order statistics is here proposed for multiple-input/single-output (MISO) FSO systems in order to improve the diversity order gain by properly choosing the transmit lasers out of the available L lasers. Based on a pairwise error probability (PEP) analysis, closed-form asymptotic bit error-rate (BER) expressions in the range from low to high signal-to-noise ratio (SNR) are derived when the irradiance of the transmitted optical beam is susceptible to moderate-to-strong turbulence conditions, following a gamma-gamma (GG) distribution, and pointing error effects, following a misalignment fading model where the effect of beam width, detector size and jitter variance is considered. Obtained results show diversity orders of 2L and 3L when simple two-state and four-state STTCs are considered, respectively. Simulation results are further demonstrated to confirm the analytical results. PMID:26406626

  7. Efficient space-time sampling with pixel-wise coded exposure for high-speed imaging.

    PubMed

    Liu, Dengyu; Gu, Jinwei; Hitomi, Yasunobu; Gupta, Mohit; Mitsunaga, Tomoo; Nayar, Shree K

    2014-02-01

    Cameras face a fundamental trade-off between spatial and temporal resolution. Digital still cameras can capture images with high spatial resolution, but most high-speed video cameras have relatively low spatial resolution. It is hard to overcome this trade-off without incurring a significant increase in hardware costs. In this paper, we propose techniques for sampling, representing, and reconstructing the space-time volume to overcome this trade-off. Our approach has two important distinctions compared to previous works: 1) We achieve sparse representation of videos by learning an overcomplete dictionary on video patches, and 2) we adhere to practical hardware constraints on sampling schemes imposed by architectures of current image sensors, which means that our sampling function can be implemented on CMOS image sensors with modified control units in the future. We evaluate components of our approach, sampling function and sparse representation, by comparing them to several existing approaches. We also implement a prototype imaging system with pixel-wise coded exposure control using a liquid crystal on silicon device. System characteristics such as field of view and modulation transfer function are evaluated for our imaging system. Both simulations and experiments on a wide range of scenes show that our method can effectively reconstruct a video from a single coded image while maintaining high spatial resolution. PMID:24356347
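
    The sampling scheme above reduces to a binary per-pixel exposure function S(x, y, t) multiplying the space-time volume and summing over time within one frame. A toy sketch of that forward model with a single contiguous "on" bump per pixel (one common code family; the volume, code length, and bump width are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
T, H, W = 8, 4, 4                        # time slices per frame, height, width
video = rng.random((T, H, W))            # space-time volume (toy data)

# Pixel-wise exposure code: each pixel integrates light only during its own
# randomly placed run of 3 consecutive "on" slices.
S = np.zeros((T, H, W), dtype=int)
start = rng.integers(0, T - 2, size=(H, W))   # bump start per pixel
rows = np.arange(H)[:, None]
cols = np.arange(W)[None, :]
for t in range(3):
    S[start + t, rows, cols] = 1

coded_image = (S * video).sum(axis=0)    # the single captured frame
```

    Reconstruction then inverts this projection under a learned sparse-dictionary prior; the constraint that S be implementable by per-pixel exposure control is what ties the method to real CMOS sensor architectures.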

  8. [Diagnosis and therapy of auditory synaptopathy/neuropathy].

    PubMed

    Moser, T; Strenzke, N; Meyer, A; Lesinski-Schiedat, A; Lenarz, T; Beutner, D; Foerst, A; Lang-Roth, R; von Wedel, H; Walger, M; Gross, M; Keilmann, A; Limberger, A; Steffens, T; Strutz, J

    2006-11-01

    Pathological auditory brainstem responses (lack of responses, elevated thresholds and perturbed waveforms) in combination with present otoacoustic emissions are typical audiometric findings in patients with a hearing impairment that particularly affects speech comprehension or complete deafness. This heterogeneous group of disorders, first described as "auditory neuropathy", includes dysfunction of peripheral synaptic coding of sound by inner hair cells (synaptopathy) and/or of the generation and propagation of action potentials in the auditory nerve (neuropathy). This joint statement provides prevailing background information as well as recommendations on diagnosis and treatment. The statement focuses on practice in the German-language area but also refers to current international statements. PMID:17041780

  9. Conceptual priming for realistic auditory scenes and for auditory words.

    PubMed

    Frey, Aline; Aramaki, Mitsuko; Besson, Mireille

    2014-02-01

    Two experiments were conducted using both behavioral and Event-Related brain Potentials methods to examine conceptual priming effects for realistic auditory scenes and for auditory words. Prime and target sounds were presented in four stimulus combinations: Sound-Sound, Word-Sound, Sound-Word and Word-Word. Within each combination, targets were conceptually related to the prime, unrelated or ambiguous. In Experiment 1, participants were asked to judge whether the primes and targets fit together (explicit task) and in Experiment 2 they had to decide whether the target was typical or ambiguous (implicit task). In both experiments and in the four stimulus combinations, reaction times and/or error rates were longer/higher and the N400 component was larger to ambiguous targets than to conceptually related targets, thereby pointing to a common conceptual system for processing auditory scenes and linguistic stimuli in both explicit and implicit tasks. However, fine-grained analyses also revealed some differences between experiments and conditions in scalp topography and duration of the priming effects possibly reflecting differences in the integration of perceptual and cognitive attributes of linguistic and nonlinguistic sounds. These results have clear implications for the building-up of virtual environments that need to convey meaning without words. PMID:24378910

  10. Neural Representation of Concurrent Vowels in Macaque Primary Auditory Cortex

    PubMed Central

    Micheyl, Christophe; Steinschneider, Mitchell

    2016-01-01

    Successful speech perception in real-world environments requires that the auditory system segregate competing voices that overlap in frequency and time into separate streams. Vowels are major constituents of speech and are composed of frequencies (harmonics) that are integer multiples of a common fundamental frequency (F0). The pitch and identity of a vowel are determined by its F0 and spectral envelope (formant structure), respectively. When two spectrally overlapping vowels differing in F0 are presented concurrently, they can be readily perceived as two separate “auditory objects” with pitches at their respective F0s. A difference in pitch between two simultaneous vowels provides a powerful cue for their segregation, which, in turn, facilitates their individual identification. The neural mechanisms underlying the segregation of concurrent vowels based on pitch differences are poorly understood. Here, we examine neural population responses in macaque primary auditory cortex (A1) to single and double concurrent vowels (/a/ and /i/) that differ in F0 such that they are heard as two separate auditory objects with distinct pitches. We find that neural population responses in A1 can resolve, via a rate-place code, lower harmonics of both single and double concurrent vowels. Furthermore, we show that the formant structures, and hence the identities, of single vowels can be reliably recovered from the neural representation of double concurrent vowels. We conclude that A1 contains sufficient spectral information to enable concurrent vowel segregation and identification by downstream cortical areas. PMID:27294198

  11. GATOR: A 3-D time-dependent simulation code for helix TWTs

    SciTech Connect

    Zaidman, E.G.; Freund, H.P.

    1996-12-31

    A 3D nonlinear analysis of helix TWTs is presented. The analysis and simulation code is based upon a spectral decomposition using the vacuum sheath helix modes. The field equations are integrated on a grid and advanced in time using a MacCormack predictor-corrector scheme, and the electron orbit equations are integrated using a fourth order Runge-Kutta algorithm. Charge is accumulated on the grid and the field is interpolated to the particle location by a linear map. The effect of dielectric liners on the vacuum sheath helix dispersion is included in the analysis. Several numerical cases are considered. Simulation of the injection of a DC beam and a signal at a single frequency is compared with a linear field theory of the helix TWT interaction, and good agreement is found.
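    The orbit integrator named in the abstract is the classical fourth-order Runge-Kutta scheme. As a hedged illustration of that scheme only (a harmonic oscillator stands in for the actual electron orbit equations, which the abstract does not give):

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# y = [position, velocity], y'' = -y (unit angular frequency)
f = lambda t, y: np.array([y[1], -y[0]])

y = np.array([1.0, 0.0])
h, steps = 0.01, 628                    # integrate to t ~ 2*pi
for i in range(steps):
    y = rk4_step(f, i * h, y, h)

# Compare against the analytic solution cos(t); RK4's global error
# is O(h^4), so the discrepancy here is far below 1e-6.
print(abs(y[0] - np.cos(steps * h)))
```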

  12. Real Time Optimizing Code for Stabilization and Control of Plasma Reactors

    Energy Science and Technology Software Center (ESTSC)

    1995-09-25

    LOOP4 is a flexible real-time control code that acquires signals (input variables) from an array of sensors, computes therefrom the actual state of the reactor system, compares the actual state to the desired state (a goal), and commands changes to reactor controls (output, or manipulated, variables) in order to minimize the difference between the actual state of the reactor and the desired state. The difference between actual and desired states is quantified in terms of a distance metric in the space defined by the sensor measurements. The desired state of the reactor is specified in terms of target values of sensor readings that were obtained previously during development and optimization of a process by an engineer using conventional techniques.
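    The sense-compare-command loop the abstract describes can be sketched in a few lines. This is an illustrative toy, assuming a linear plant and a gain matrix that are entirely invented; LOOP4's actual state computation and control law are not specified in the abstract.

```python
import numpy as np

G = np.array([[1.0, 0.2],
              [0.1, 0.8]])              # hypothetical actuator-to-sensor gains
target = np.array([5.0, 3.0])           # desired sensor readings (the goal)
u = np.zeros(2)                         # manipulated variables

for _ in range(200):
    sensors = G @ u                     # acquire signals (simulated plant)
    error = target - sensors
    distance = np.linalg.norm(error)    # distance metric in sensor space
    u += 0.5 * G.T @ error              # nudge controls to shrink the distance

print(distance)                         # approaches zero as the loop converges
```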

  13. Improved wavelength coded optical time domain reflectometry based on the optical switch.

    PubMed

    Zhu, Ninghua; Tong, Youwan; Chen, Wei; Wang, Sunlong; Sun, Wenhui; Liu, Jianguo

    2014-06-16

    This paper presents an improved wavelength-coded optical time-domain reflectometry scheme based on a 2 × 1 optical switch. To improve the signal-to-noise ratio (SNR) of the beat signal, the improved system uses an optical switch to obtain wavelength-stable, low-noise, narrow optical pulses for the probe and reference. Experiments demonstrated a spatial resolution of 2.5 m within a range of 70 km, and a beat signal with a linewidth narrower than 15 MHz within a range of 50 km in fiber-break detection. A system for wavelength-division-multiplexing passive optical network (WDM-PON) monitoring was also constructed to detect fiber breaks on different channels by tuning the current applied to the gating section of the distributed Bragg reflector (DBR) laser. PMID:24977604

  14. Temperature sensitive auditory neuropathy.

    PubMed

    Zhang, Qiujing; Lan, Lan; Shi, Wei; Yu, Lan; Xie, Lin-Yi; Xiong, Fen; Zhao, Cui; Li, Na; Yin, Zifang; Zong, Liang; Guan, Jing; Wang, Dayong; Sun, Wei; Wang, Qiuju

    2016-05-01

    Temperature-sensitive auditory neuropathy is a very rare and puzzling disorder. In the present study, we report three unrelated 2- to 6-year-old children diagnosed with auditory neuropathy who complained of severe hearing loss when they had fever. Their hearing thresholds varied from the morning to the afternoon. Two of these patients' hearing improved with age, and one patient obtained good results from a cochlear implant. Genetic analysis revealed that these three patients had otoferlin (OTOF) homozygous or compound heterozygous mutations, with the genotypes c.2975_2978delAG/c.4819C>T, c.4819C>T/c.4819C>T, and c.2382_2383delC/c.1621G>A, respectively. Our study suggests that these gene mutations may be the cause of temperature-sensitive auditory neuropathy. The long-term follow-up results suggest that the hearing loss in this type of auditory neuropathy may recover with age. PMID:26778470

  15. A Simple Method for Guaranteeing ECG Quality in Real-Time Wavelet Lossy Coding

    NASA Astrophysics Data System (ADS)

    Alesanco, Álvaro; García, José

    2007-12-01

    Guaranteeing ECG signal quality in wavelet lossy compression methods is essential for clinical acceptability of reconstructed signals. In this paper, we present a simple and efficient method for guaranteeing reconstruction quality, measured using the new distortion index wavelet-weighted PRD (WWPRD), which reflects more accurately the real clinical distortion of the compressed signal. The method is based on the wavelet transform and its subsequent coding using the set partitioning in hierarchical trees (SPIHT) algorithm. By thresholding the WWPRD in the wavelet transform domain, a very precise reconstruction error can be achieved, yielding clinically useful reconstructed signals. Because of its computational efficiency, the method is suitable for real-time operation and thus very useful for real-time telecardiology systems. The method is extensively tested using two different ECG databases. The results show that the method controls quality very accurately, not only in mean value but also with a low standard deviation. The effects of ECG baseline wandering as well as noise on compression are also discussed. Baseline wandering degrades quality control under the WWPRD index because the index is normalized by the signal energy, so it is better to remove it before compression. Noise, on the other hand, increases the signal energy, provoking an artificial increase in the coded signal bit rate. Clinical validation by cardiologists showed that a WWPRD value of 10 preserves the signal quality, and they therefore recommend this value for the compression system.
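    The WWPRD index in the abstract weights the error per wavelet subband; the sketch below shows only the underlying percentage root-mean-square difference (PRD), normalized by the signal energy as the abstract describes, on synthetic stand-in data rather than real ECG.

```python
import numpy as np

def prd(original, reconstructed):
    """PRD in percent, normalized by the energy of the original signal."""
    num = np.sum((original - reconstructed) ** 2)
    den = np.sum(original ** 2)
    return 100.0 * np.sqrt(num / den)

t = np.linspace(0, 1, 500)
ecg = np.sin(2 * np.pi * 5 * t)                  # toy stand-in for an ECG beat
noisy = ecg + 0.01 * np.cos(2 * np.pi * 50 * t)  # small reconstruction error

print(prd(ecg, noisy))    # distortion of about 1 percent
```

    Note how the energy normalization explains the baseline-wander caveat in the abstract: adding a large offset to `ecg` inflates the denominator and artificially shrinks the reported distortion.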

  16. Auditory learning: a developmental method.

    PubMed

    Zhang, Yilu; Weng, Juyang; Hwang, Wey-Shiuan

    2005-05-01

    Motivated by the human autonomous development process from infancy to adulthood, we have built a robot that develops its cognitive and behavioral skills through real-time interactions with the environment. We call such a robot a developmental robot. In this paper, we present the theory and the architecture to implement a developmental robot and discuss the related techniques that address an array of challenging technical issues. As an application, experimental results on a real robot, the self-organizing, autonomous, incremental learner (SAIL), are presented with emphasis on its auditory perception and audition-related action generation. In particular, the SAIL robot performs auditory learning from unsegmented and unlabeled speech streams without any prior knowledge about the auditory signals, such as the designated language or phoneme models. The actions the robot is expected to perform are likewise unavailable before learning starts. SAIL learns the auditory commands and the desired actions from physical contact with the environment, including the trainers. PMID:15940990

  17. Auditory Spatial Layout

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L.; Jenison, Rick

    1995-01-01

    All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.

  18. [Central auditory prosthesis].

    PubMed

    Lenarz, T; Lim, H; Joseph, G; Reuter, G; Lenarz, M

    2009-06-01

    Deaf patients with severe sensory hearing loss can benefit from a cochlear implant (CI), which stimulates the auditory nerve fibers. However, patients who do not have an intact auditory nerve cannot benefit from a CI. The majority of these patients are neurofibromatosis type 2 (NF2) patients who developed neural deafness due to growth or surgical removal of bilateral acoustic neuromas. The only current solution is the auditory brainstem implant (ABI), which stimulates the surface of the cochlear nucleus in the brainstem. Although the ABI provides improvement in environmental awareness and lip-reading capabilities, only a few NF2 patients have achieved some limited open-set speech perception. In the search for alternative procedures, our research group, in collaboration with Cochlear Ltd. (Australia), developed a human prototype auditory midbrain implant (AMI), which is designed to electrically stimulate the inferior colliculus (IC). The IC has potential as a new target for an auditory prosthesis, as it provides access to neural projections necessary for speech perception as well as a systematic map of spectral information. In this paper, the present status of research and development in the field of central auditory prostheses is presented with respect to technology, surgical technique, and hearing results, as well as the background concepts of the ABI and AMI. PMID:19517084

  19. Effective real-time vehicle tracking using discriminative sparse coding on local patches

    NASA Astrophysics Data System (ADS)

    Chen, XiangJun; Ye, Feiyue; Ruan, Yaduan; Chen, Qimei

    2016-01-01

    A visual tracking framework comprising an object detector and tracker, focused on effective and efficient visual tracking in surveillance for real-world intelligent transport system applications, is proposed. The framework casts the tracking task as problems of object detection, feature representation, and classification, which differs from appearance model-matching approaches. Through a feature representation called discriminative sparse coding on local patches (DSCLP), which trains a dictionary on clustered local patches sampled from both positive and negative datasets, discriminative power and robustness are improved markedly, making the method more robust to complex realistic settings with degraded image quality. Moreover, by catching objects through one-time background subtraction, along with offline dictionary training, computation time is dramatically reduced, which enables the framework to achieve real-time tracking even on a high-definition sequence with heavy traffic. Experimental results show that this work outperforms some state-of-the-art methods in terms of speed, accuracy, and robustness, and exhibits increased robustness in complex real-world scenarios with image quality degraded by vehicle occlusion, image blur from rain or fog, and changes in viewpoint or scale.

  20. One-time collision arbitration algorithm in radio-frequency identification based on the Manchester code

    NASA Astrophysics Data System (ADS)

    Liu, Chen-Chung; Chan, Yin-Tsung

    2011-02-01

    In radio-frequency identification (RFID) systems, when multiple tags transmit data to a reader simultaneously, the data may collide, producing unsuccessful identifications; hence, anticollision algorithms are needed to reduce collisions (collision cycles) and improve tag identification speed. We propose a one-time collision arbitration algorithm to reduce both the number of collisions and the time consumed by tag identification in RFID. The proposed algorithm uses Manchester coding to detect the locations of collided bits, uses a divide-and-conquer strategy to find the structure of the colliding bits and generate 96-bit candidate query strings (96BCQSs), and uses query-tree anticollision schemes with the 96BCQSs to identify tags. The performance analysis and experimental results show that the proposed algorithm has three advantages: (i) it reduces the number of collision cycles to one, so that the time complexity of tag identification is O(1); (ii) it stores identified identification numbers (IDs) and the 96BCQSs in a register, saving memory; and (iii) the number of bits transmitted by both the reader and the tags is markedly smaller than in other algorithms, whether identifying one tag or all tags.
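    The key property the algorithm exploits is that Manchester coding lets the reader locate which bit positions collided, not just that a collision occurred. A minimal sketch of that idea, with invented 8-bit tag IDs and a simplified superposition model (real 96-bit IDs and the 96BCQS construction are omitted):

```python
def superimpose(responses):
    """Reader's view of simultaneous replies: a bit where all tags agree,
    'x' where they differ (an invalid Manchester symbol marks a collision)."""
    out = []
    for bits in zip(*responses):
        out.append(bits[0] if len(set(bits)) == 1 else "x")
    return "".join(out)

tags = ["10110010", "10100110"]      # hypothetical colliding tag IDs
seen = superimpose(tags)
print(seen)                          # collided positions appear as 'x'

# A query-tree reader then branches on the first collided bit:
first_collision = seen.index("x")
prefix = seen[:first_collision]
queries = [prefix + "0", prefix + "1"]
print(queries)
```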

  1. Real-time detection of natural objects using AM-coded spectral matching imager

    NASA Astrophysics Data System (ADS)

    Kimachi, Akira

    2004-12-01

    This paper describes the application of the amplitude-modulation (AM)-coded spectral matching imager (SMI) to real-time detection of natural objects such as human beings, animals, vegetables, or geological objects or phenomena, which are much more liable to change with time than artificial products, while often exhibiting characteristic spectral functions associated with specific activity states. The AM-SMI produces the correlation between the spectral functions of the object and a reference at each pixel of the correlation image sensor (CIS) in every frame, based on orthogonal amplitude modulation of each spectral channel and simultaneous demodulation of all channels on the CIS. This principle makes the SMI suitable for monitoring the dynamic behavior of natural objects in real time by looking at a particular spectral reflectance or transmittance function. A twelve-channel multispectral light source was developed with improved spatial uniformity of spectral irradiance compared to a previous one. Experimental results of spectral matching imaging of human skin and vegetable leaves are demonstrated, as well as a preliminary feasibility test of imaging a reflective object using a test color chart.
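    The orthogonal-AM principle behind the SMI can be sketched numerically: each spectral channel rides on an orthogonal carrier, so demodulating the summed light against one carrier recovers that channel's contribution alone. The carrier frequencies and reflectances below are invented for illustration and do not reflect the actual AM-SMI hardware.

```python
import numpy as np

frames = 240
t = np.arange(frames)
# Orthogonal carriers: integer cycle counts over a full demodulation window
carriers = [np.cos(2 * np.pi * f * t / frames) for f in (3, 5, 7)]
reflectance = [0.8, 0.2, 0.5]        # hypothetical per-channel reflectance

# Light at one pixel: sum of the modulated spectral channels
pixel = sum(r * c for r, c in zip(reflectance, carriers))

# Demodulating channel 1 cancels the other channels by orthogonality
recovered = 2.0 / frames * (pixel @ carriers[1])
print(recovered)                     # matches reflectance[1]
```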

  3. The development of auditory perception in children after auditory brainstem implantation.

    PubMed

    Colletti, Liliana; Shannon, Robert V; Colletti, Vittorio

    2014-01-01

    Auditory brainstem implants (ABIs) can provide useful auditory perception and language development in deaf children who are not able to use a cochlear implant (CI). We prospectively followed up a consecutive group of 64 deaf children up to 12 years following ABI surgery. The etiology of deafness in these children was: cochlear nerve aplasia in 49, auditory neuropathy in 1, cochlear malformations in 8, bilateral cochlear postmeningitic ossification in 3, neurofibromatosis type 2 in 2, and bilateral cochlear fractures due to a head injury in 1. Thirty-five children had other congenital nonauditory disabilities. Twenty-two children had previous CIs with no benefit. Fifty-eight children were fitted with the Cochlear 24 ABI device and 6 with the MedEl ABI device, and all children followed the same rehabilitation program. Auditory perceptual abilities were evaluated on the Categories of Auditory Performance (CAP) scale. No child was lost to follow-up, and there were no exclusions from the study. All children showed significant improvement in auditory perception with implant experience. Seven children (11%) were able to achieve the highest score on the CAP test; they were able to converse on the telephone within 3 years of implantation. Twenty children (31.3%) achieved open set speech recognition (CAP score of 5 or greater) and 30 (46.9%) achieved a CAP level of 4 or greater. Of the 29 children without nonauditory disabilities, 18 (62%) achieved a CAP score of 5 or greater with the ABI. All children showed continued improvements in auditory skills over time. The long-term results of ABI surgery reveal significant auditory benefit in most children, and open set auditory recognition in many. PMID:25377987

  4. Sustained firing of model central auditory neurons yields a discriminative spectro-temporal representation for natural sounds.

    PubMed

    Carlin, Michael A; Elhilali, Mounya

    2013-01-01

    The processing characteristics of neurons in the central auditory system are directly shaped by and reflect the statistics of natural acoustic environments, but the principles that govern the relationship between natural sound ensembles and observed responses in neurophysiological studies remain unclear. In particular, accumulating evidence suggests the presence of a code based on sustained neural firing rates, where central auditory neurons exhibit strong, persistent responses to their preferred stimuli. Such a strategy can indicate the presence of ongoing sounds, is involved in parsing complex auditory scenes, and may play a role in matching neural dynamics to varying time scales in acoustic signals. In this paper, we describe a computational framework for exploring the influence of a code based on sustained firing rates on the shape of the spectro-temporal receptive field (STRF), a linear kernel that maps a spectro-temporal acoustic stimulus to the instantaneous firing rate of a central auditory neuron. We demonstrate the emergence of richly structured STRFs that capture the structure of natural sounds over a wide range of timescales, and show how the emergent ensembles resemble those commonly reported in physiological studies. Furthermore, we compare ensembles that optimize a sustained firing code with one that optimizes a sparse code, another widely considered coding strategy, and suggest how the resulting population responses are not mutually exclusive. Finally, we demonstrate how the emergent ensembles contour the high-energy spectro-temporal modulations of natural sounds, forming a discriminative representation that captures the full range of modulation statistics that characterize natural sound ensembles. These findings have direct implications for our understanding of how sensory systems encode the informative components of natural stimuli and potentially facilitate multi-sensory integration. PMID:23555217

  5. Auditory models for speech analysis

    NASA Astrophysics Data System (ADS)

    Maybury, Mark T.

    This paper reviews the psychophysical basis for auditory models and discusses their application to automatic speech recognition. First, an overview of the human auditory system is presented, followed by a review of current knowledge gleaned from neurological and psychoacoustic experimentation. Next, a general framework describes established peripheral auditory models, which are based on well-understood properties of the peripheral auditory system. This is followed by a discussion of current enhancements to those models to include nonlinearities and synchrony information, as well as other higher auditory functions. Finally, the initial performance of auditory models in the task of speech recognition is examined and additional applications are mentioned.

  6. Auditory hallucinations induced by trazodone

    PubMed Central

    Shiotsuki, Ippei; Terao, Takeshi; Ishii, Nobuyoshi; Hatano, Koji

    2014-01-01

    A 26-year-old female outpatient presenting with a depressive state suffered from auditory hallucinations at night. Her auditory hallucinations did not respond to blonanserin or paliperidone, but partially responded to risperidone. In view of the possibility that her auditory hallucinations began after starting trazodone, trazodone was discontinued, leading to a complete resolution of her auditory hallucinations. Furthermore, even after risperidone was decreased and discontinued, her auditory hallucinations did not recur. These findings suggest that trazodone may induce auditory hallucinations in some susceptible patients. PMID:24700048

  7. Cerebellar Nuclear Neurons Use Time and Rate Coding to Transmit Purkinje Neuron Pauses.

    PubMed

    Sudhakar, Shyam Kumar; Torben-Nielsen, Benjamin; De Schutter, Erik

    2015-12-01

    Neurons of the cerebellar nuclei convey the final output of the cerebellum to their targets in various parts of the brain. Within the cerebellum their direct upstream connections originate from inhibitory Purkinje neurons. Purkinje neurons have a complex firing pattern of regular spikes interrupted by intermittent pauses of variable length. How can the cerebellar nucleus process this complex input pattern? In this modeling study, we investigate different forms of Purkinje neuron simple spike pause synchrony and its influence on candidate coding strategies in the cerebellar nuclei. That is, we investigate how different alignments of synchronous pauses in synthetic Purkinje neuron spike trains affect either time-locking or rate-changes in the downstream nuclei. We find that Purkinje neuron synchrony is mainly represented by changes in the firing rate of cerebellar nuclei neurons. Pause beginning synchronization produced a unique effect on nuclei neuron firing, while the effect of pause ending and pause overlapping synchronization could not be distinguished from each other. Pause beginning synchronization produced better time-locking of nuclear neurons for short length pauses. We also characterize the effect of pause length and spike jitter on the nuclear neuron firing. Additionally, we find that the rate of rebound responses in nuclear neurons after a synchronous pause is controlled by the firing rate of Purkinje neurons preceding it. PMID:26630202

  8. Psychology of auditory perception.

    PubMed

    Lotto, Andrew; Holt, Lori

    2011-09-01

    Audition is often treated as a 'secondary' sensory system behind vision in the study of cognitive science. In this review, we focus on three seemingly simple perceptual tasks to demonstrate the complexity of perceptual-cognitive processing involved in everyday audition. After providing a short overview of the characteristics of sound and their neural encoding, we present a description of the perceptual task of segregating multiple sound events that are mixed together in the signal reaching the ears. Then, we discuss the ability to localize the sound source in the environment. Finally, we provide some data and theory on how listeners categorize complex sounds, such as speech. In particular, we present research on how listeners weigh multiple acoustic cues in making a categorization decision. One conclusion of this review is that it is time for auditory cognitive science to be developed to match what has been done in vision, in order for us to better understand how humans communicate with speech and music. WIREs Cogn Sci 2011, 2, 479-489. DOI: 10.1002/wcs.123 PMID:26302301

  9. Auditory Cortical Plasticity in Learning to Discriminate Modulation Rate

    PubMed Central

    van Wassenhove, Virginie; Nagarajan, Srikantan S.

    2014-01-01

    The discrimination of temporal information in acoustic inputs is a crucial aspect of auditory perception, yet very few studies have focused on auditory perceptual learning of timing properties and associated plasticity in adult auditory cortex. Here, we trained participants on a temporal discrimination task. The main task used a base stimulus (four tones separated by intervals of 200 ms) that had to be distinguished from a target stimulus (four tones with intervals down to ~180 ms). We show that participants’ auditory temporal sensitivity improves with a short amount of training (3 d, 1 h/d). Learning to discriminate temporal modulation rates was accompanied by a systematic amplitude increase of the early auditory evoked responses to trained stimuli, as measured by magnetoencephalography. Additionally, learning and auditory cortex plasticity partially generalized to interval discrimination but not to frequency discrimination. Auditory cortex plasticity associated with short-term perceptual learning was manifested as an enhancement of auditory cortical responses to trained acoustic features only in the trained task. Plasticity was also manifested as induced non-phase–locked high gamma-band power increases in inferior frontal cortex during performance in the trained task. Functional plasticity in auditory cortex is here interpreted as the product of bottom-up and top-down modulations. PMID:17344404

  10. 78 FR 34922 - Definition of Auditory Assistance Device

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-11

    ...This document modifies the definition of ``auditory assistance device'' in the Commission's rules to permit these devices to be used by anyone at any location for simultaneous language interpretation (simultaneous translation), where the spoken words are translated continuously in near real time. The revised definition permits unlicensed auditory assistance devices to be used to provide either......

  11. Basic Auditory Processing and Developmental Dyslexia in Chinese

    ERIC Educational Resources Information Center

    Wang, Hsiao-Lan Sharon; Huss, Martina; Hamalainen, Jarmo A.; Goswami, Usha

    2012-01-01

    The present study explores the relationship between basic auditory processing of sound rise time, frequency, duration and intensity, phonological skills (onset-rime and tone awareness, sound blending, RAN, and phonological memory) and reading disability in Chinese. A series of psychometric, literacy, phonological, auditory, and character…

  12. Multimodal Bivariate Thematic Maps: Auditory and Haptic Display.

    ERIC Educational Resources Information Center

    Jeong, Wooseob; Gluck, Myke

    2002-01-01

    Explores the possibility of multimodal bivariate thematic maps by utilizing auditory and haptic (sense of touch) displays. Measured completion time of tasks and the recall (retention) rate in two experiments, and findings confirmed the possibility of using auditory and haptic displays in geographic information systems (GIS). (Author/LRW)

  13. Multimodal Geographic Information Systems: Adding Haptic and Auditory Display.

    ERIC Educational Resources Information Center

    Jeong, Wooseob; Gluck, Myke

    2003-01-01

    Investigated the feasibility of adding haptic and auditory displays to traditional visual geographic information systems (GISs). Explored differences in user performance, including task completion time and accuracy, and user satisfaction with a multimodal GIS which was implemented with a haptic display, auditory display, and combined display.…

  14. How phonetically selective is the human auditory cortex?

    PubMed

    Shamma, Shihab

    2014-08-01

    Responses in the human auditory cortex to natural speech reveal a dual character. Often they are categorically selective to phonetic elements, serving as a gateway to abstract linguistic representations. But at other times they reflect a distributed generalized spectrotemporal analysis of the acoustic features, as seen in early mammalian auditory cortices. PMID:24751358

  15. Design and implementation of an efficient finite-difference, time-domain computer code for large problems

    SciTech Connect

    White, W.T. III; Taflove, A.; Stringer, J.C.; Kluge, R.F.

    1986-12-01

As computers get larger and faster, demands upon electromagnetics codes increase. Ever larger volumes of space must be represented with increasingly more accuracy and detail. This requires continually more efficient EM codes. To meet present and future needs in DOE and DOD, we are developing FDTD3D, a three-dimensional finite-difference, time-domain EM solver. When complete, the code will efficiently solve problems with tens of millions of unknowns. It already operates faster than any other 3D, time-domain EM code, and we are using it to model linear coupling to a generic missile section. At Lawrence Livermore National Laboratory (LLNL), we anticipate the ultimate need for such a code if we are to model EM threats to objects such as airplanes or missiles. This article describes the design and implementation of FDTD3D. The first section, "Design of FDTD3D," contains a brief summary of other 3D time-domain EM codes at LLNL followed by a description of the efficiency of FDTD3D. The second section, "Implementation of FDTD3D," discusses recent work and future plans.
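
The core of any FDTD solver such as FDTD3D is a leapfrog update of electric and magnetic fields on a staggered (Yee) grid. The one-dimensional sketch below is illustrative only: the grid size, Courant factor, and Gaussian soft source are assumptions made for this example, not parameters of the original code.

```python
# Minimal 1D FDTD (Yee) leapfrog update in normalized units (c = 1, dx = 1).
# Grid size, step count, Courant factor, and source are illustrative choices.
import math

def fdtd_1d(n_cells=200, n_steps=400, source_pos=100):
    """Propagate a Gaussian pulse on a staggered 1D grid (PEC ends)."""
    ez = [0.0] * n_cells   # E-field at integer grid points
    hy = [0.0] * n_cells   # H-field at half-integer grid points
    S = 0.5                # Courant factor dt/dx (stable for S <= 1 in 1D)
    for t in range(n_steps):
        for i in range(n_cells - 1):          # H update from the curl of E
            hy[i] += S * (ez[i + 1] - ez[i])
        for i in range(1, n_cells):           # E update from the curl of H
            ez[i] += S * (hy[i] - hy[i - 1])
        # soft Gaussian source injected at a single cell
        ez[source_pos] += math.exp(-((t - 30.0) / 10.0) ** 2)
    return ez

fields = fdtd_1d()
```

In 3D the same two-stage update runs over three field components each for E and H, which is why cell count (tens of millions of unknowns) dominates the cost.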

  16. High-speed two-dimensional bar-code detection system with time-sharing laser light emission method

    NASA Astrophysics Data System (ADS)

    Wakaumi, Hiroo; Nagasawa, Chikao

    2000-12-01

A novel two-dimensional bar-code detection system with time-sharing light-emission laser diodes is proposed. A bias current allowing the laser diode to improve the light output rise time is optimized to slightly below the threshold of the diode, so that channel cross-talk among three-layer bar-code signals caused by the bias light can be kept small and a high-speed pulse-modulation drive operation can be achieved. A prototype system for a three-layer bar code has achieved an effective scanning speed 2.9 times that of conventional scanners. It is estimated from the detection range that the number of time-sharing light-emission laser diodes can be increased to at least four when a current-detection amplifier with a bandwidth of 6.4 MHz is used.

  17. Synchronous auditory nerve activity in the carboplatin-chinchilla model of auditory neuropathy.

    PubMed

    Cowper-Smith, C D; Dingle, R N; Guo, Y; Burkard, R; Phillips, D P

    2010-07-01

Two hallmark features of auditory neuropathy (AN) are normal outer hair cell function and an absent or abnormal auditory brainstem response (ABR). Studies of human AN patients are unable to determine whether disruption of the ABR is the result of a reduction of neural input, a loss of auditory nerve fiber (ANF) synchrony, or both. Neurophysiological data from the carboplatin model of AN reveal intact neural synchrony in the auditory nerve and inferior colliculus, despite significant reductions in neural input. These data suggest that (1) intact neural synchrony is available to support an ABR following carboplatin treatment, and (2) impaired spike timing intrinsic to neurons is required for the disruption of the ABR observed in human AN. PMID:20649190

  18. Effects of Multimodal Presentation and Stimulus Familiarity on Auditory and Visual Processing

    ERIC Educational Resources Information Center

    Robinson, Christopher W.; Sloutsky, Vladimir M.

    2010-01-01

    Two experiments examined the effects of multimodal presentation and stimulus familiarity on auditory and visual processing. In Experiment 1, 10-month-olds were habituated to either an auditory stimulus, a visual stimulus, or an auditory-visual multimodal stimulus. Processing time was assessed during the habituation phase, and discrimination of…

  19. Spatial representation of neural responses to natural and altered conspecific vocalizations in cat auditory cortex.

    PubMed

    Gourévitch, Boris; Eggermont, Jos J

    2007-01-01

This study shows the neural representation of cat vocalizations, natural and altered with respect to carrier and envelope, as well as time-reversed, in four different areas of the auditory cortex. Multiunit activity recorded in primary auditory cortex (AI) of anesthetized cats mainly occurred at onsets (<200-ms latency) and at subsequent major peaks of the vocalization envelope and was significantly inhibited during the stationary course of the stimuli. The first 200 ms of processing appears crucial for discrimination of a vocalization in AI. The dorsal and ventral parts of AI appear to have different roles in coding vocalizations. The dorsal part potentially discriminated carrier-altered meows, whereas the ventral part showed differences primarily in its response to natural and time-reversed meows. In the posterior auditory field, the different temporal response types of neurons, as determined by their poststimulus time histograms, showed discrimination for carrier alterations in the meow. Sustained-firing neurons in the posterior ectosylvian gyrus (EP) could discriminate, by neural synchrony among other measures, temporal envelope alterations of the meow and time reversal thereof. These findings suggest an important role of EP in the detection of information conveyed by the alterations of vocalizations. Discrimination of the neural responses to different alterations of vocalizations could be based on firing rate, type of temporal response, or neural synchrony, suggesting that all of these are likely used simultaneously in the processing of natural and altered conspecific vocalizations. PMID:17021022

  20. Global Time Dependent Solutions of Stochastically Driven Standard Accretion Disks: Development of Hydrodynamical Code

    NASA Astrophysics Data System (ADS)

    Wani, Naveel; Maqbool, Bari; Iqbal, Naseer; Misra, Ranjeev

    2016-07-01

X-ray binaries and AGNs are powered by accretion discs around compact objects, where the X-rays are emitted from the inner regions and UV emission arises from the relatively cooler outer parts. There is increasing evidence that the variability of the X-rays on different timescales is caused by stochastic fluctuations in the accretion disc at different radii. Although these fluctuations arise in the outer parts of the disc, they propagate inwards to give rise to X-ray variability, and hence provide a natural connection between the X-ray and UV variability. There are analytical expressions to qualitatively understand the effect of these stochastic variabilities, but quantitative predictions are only possible through a detailed hydrodynamical study of the global time-dependent solution of the standard accretion disc. We have developed a numerically efficient code incorporating all these effects, which considers gas-pressure-dominated solutions and stochastic fluctuations, with the inclusion of the boundary effect of the last stable orbit.
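
The propagating-fluctuations picture described above can be illustrated with a toy model: each annulus generates small local fluctuations in the accretion rate, and those fluctuations multiply together as they drift inward with a viscous delay. The annulus count, fluctuation amplitude, and delay below are illustrative assumptions, not values from the paper's hydrodynamical code.

```python
# Toy propagating-fluctuations model: local fluctuations generated at each
# annulus multiply into the inward-propagating accretion rate with a fixed
# delay per annulus. All parameter values are illustrative.
import random

def propagating_mdot(n_annuli=10, n_steps=2000, sigma=0.05, delay=5, seed=1):
    rng = random.Random(seed)
    # independent local fluctuations, one time series per annulus (outer first)
    local = [[1.0 + rng.gauss(0.0, sigma) for _ in range(n_steps)]
             for _ in range(n_annuli)]
    mdot = [1.0] * n_steps
    for k, annulus in enumerate(local):
        shift = k * delay   # inward drift time accumulated by annulus k
        mdot = [m * annulus[max(t - shift, 0)] for t, m in enumerate(mdot)]
    return mdot

series = propagating_mdot()
mean = sum(series) / len(series)
rms = (sum((m - mean) ** 2 for m in series) / len(series)) ** 0.5
```

The multiplicative coupling is what makes the inner-disc (X-ray) light curve inherit, with a delay, the slow fluctuations stirred up at the larger UV-emitting radii.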

  1. Detection by real time PCR of walnut allergen coding sequences in processed foods.

    PubMed

    Linacero, Rosario; Ballesteros, Isabel; Sanchiz, Africa; Prieto, Nuria; Iniesto, Elisa; Martinez, Yolanda; Pedrosa, Mercedes M; Muzquiz, Mercedes; Cabanillas, Beatriz; Rovira, Mercè; Burbano, Carmen; Cuadrado, Carmen

    2016-07-01

A quantitative real-time PCR (RT-PCR) method, employing novel primer sets designed on Jug r 1, Jug r 3, and Jug r 4 allergen-coding sequences, was set up and validated. Its specificity, sensitivity, and applicability were evaluated. The DNA extraction method based on CTAB-phenol-chloroform was best for walnut. RT-PCR allowed a specific and accurate amplification of the allergen sequence, and the limit of detection was 2.5 pg of walnut DNA. The method's sensitivity and robustness were confirmed with spiked samples, and Jug r 3 primers detected up to 100 mg/kg of raw walnut (LOD 0.01%, LOQ 0.05%). Thermal treatment combined with pressure (autoclaving) reduced the yield and amplification (integrity and quality) of walnut DNA. High hydrostatic pressure (HHP) did not produce any effect on walnut DNA amplification. This RT-PCR method showed greater sensitivity and reliability in the detection of walnut traces in commercial foodstuffs compared with ELISA assays. PMID:26920302

  2. Evaluation of a thin-slot formalism for finite-difference time-domain electromagnetics codes

    SciTech Connect

    Turner, C.D.; Bacon, L.D.

    1987-03-01

    A thin-slot formalism for use with finite-difference time-domain (FDTD) electromagnetics codes has been evaluated in both two and three dimensions. This formalism allows narrow slots to be modeled in the wall of a scatterer without reducing the space grid size to the gap width. In two dimensions, the evaluation involves the calculation of the total fields near two infinitesimally thin coplanar strips separated by a gap. A method-of-moments (MoM) solution of the same problem is used as a benchmark for comparison. Results in two dimensions show that up to 10% error can be expected in total electric and magnetic fields both near (lambda/40) and far (1 lambda) from the slot. In three dimensions, the evaluation is similar. The finite-length slot is placed in a finite plate and an MoM surface patch solution is used for the benchmark. These results, although less extensive than those in two dimensions, show that slightly larger errors can be expected. Considering the approximations made near the slot in incorporating the formalism, the results are very promising. Possibilities also exist for applying this formalism to walls of arbitrary thickness and to other types of slots, such as overlapping joints. 11 refs., 25 figs., 6 tabs.

  3. The Drosophila Auditory System

    PubMed Central

    Boekhoff-Falk, Grace; Eberl, Daniel F.

    2013-01-01

    Development of a functional auditory system in Drosophila requires specification and differentiation of the chordotonal sensilla of Johnston’s organ (JO) in the antenna, correct axonal targeting to the antennal mechanosensory and motor center (AMMC) in the brain, and synaptic connections to neurons in the downstream circuit. Chordotonal development in JO is functionally complicated by structural, molecular and functional diversity that is not yet fully understood, and construction of the auditory neural circuitry is only beginning to unfold. Here we describe our current understanding of developmental and molecular mechanisms that generate the exquisite functions of the Drosophila auditory system, emphasizing recent progress and highlighting important new questions arising from research on this remarkable sensory system. PMID:24719289

  4. Overriding auditory attentional capture.

    PubMed

    Dalton, Polly; Lavie, Nilli

    2007-02-01

    Attentional capture by color singletons during shape search can be eliminated when the target is not a feature singleton (Bacon & Egeth, 1994). This suggests that a "singleton detection" search strategy must be adopted for attentional capture to occur. Here we find similar effects on auditory attentional capture. Irrelevant high-intensity singletons interfered with an auditory search task when the target itself was also a feature singleton. However, singleton interference was eliminated when the target was not a singleton (i.e., when nontargets were made heterogeneous, or when more than one target sound was presented). These results suggest that auditory attentional capture depends on the observer's attentional set, as does visual attentional capture. The suggestion that hearing might act as an early warning system that would always be tuned to unexpected unique stimuli must therefore be modified to accommodate these strategy-dependent capture effects. PMID:17557587

  5. Event-Related Potential, Time-frequency, and Functional Connectivity Facets of Local and Global Auditory Novelty Processing: An Intracranial Study in Humans.

    PubMed

    El Karoui, Imen; King, Jean-Remi; Sitt, Jacobo; Meyniel, Florent; Van Gaal, Simon; Hasboun, Dominique; Adam, Claude; Navarro, Vincent; Baulac, Michel; Dehaene, Stanislas; Cohen, Laurent; Naccache, Lionel

    2015-11-01

Auditory novelty detection has been associated with different cognitive processes. Bekinschtein et al. (2009) developed an experimental paradigm to dissociate these processes, using local and global novelty, which were associated, respectively, with automatic versus strategic perceptual processing. They have mostly been studied using event-related potentials (ERPs), but local spiking activity as indexed by gamma (60-120 Hz) power and interactions between brain regions as indexed by modulations in beta-band (13-25 Hz) power and functional connectivity have not been explored. We thus recorded from 9 epileptic patients with intracranial electrodes to compare the precise dynamics of the responses to local and global novelty. Local novelty triggered an early response observed as an intracranial mismatch negativity (MMN) contemporary with a strong power increase in the gamma band and an increase in connectivity in the beta band. Importantly, all these responses were strictly confined to the temporal auditory cortex. In contrast, global novelty gave rise to a late ERP response distributed across brain areas, contemporary with a sustained power decrease in the beta band (13-25 Hz) and an increase in connectivity in the alpha band (8-13 Hz) within the frontal lobe. We discuss these multi-faceted signatures in terms of conscious access to perceptual information. PMID:24969472
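
The gamma- and beta-band power measures mentioned above reduce to averaging spectral power over a frequency band. A minimal sketch, assuming a synthetic 80-Hz test signal, a 500-Hz sampling rate, and a naive DFT (a real analysis would use an FFT over short sliding windows):

```python
# Band power from a naive DFT: mean power of the bins whose frequency falls
# inside the band. Signal, sampling rate, and band edges are illustrative.
import math

def band_power(signal, fs, f_lo, f_hi):
    """Mean power of DFT bins with frequency in [f_lo, f_hi] Hz."""
    n = len(signal)
    powers = []
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n)
                     for t in range(n))
            im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n)
                      for t in range(n))
            powers.append((re * re + im * im) / n ** 2)
    return sum(powers) / len(powers)

fs, n = 500, 500                                              # 1 s at 500 Hz
sig = [math.sin(2 * math.pi * 80 * t / fs) for t in range(n)]  # 80 Hz tone
gamma = band_power(sig, fs, 60, 120)   # band containing the tone
beta = band_power(sig, fs, 13, 25)     # band without the tone
```

An 80-Hz tone shows up as high gamma-band power and essentially zero beta-band power, which is the contrast the study exploits between local spiking (gamma) and inter-regional interactions (beta).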

  6. Subthreshold outward currents enhance temporal integration in auditory neurons.

    PubMed

    Svirskis, Gytis; Dodla, Ramana; Rinzel, John

    2003-11-01

Many auditory neurons possess low-threshold potassium currents (I(KLT)) that enhance their responsiveness to rapid and coincident inputs. We present recordings from gerbil medial superior olivary (MSO) neurons in vitro and modeling results that illustrate how I(KLT) improves the detection of brief signals, of weak signals in noise, and of the coincidence of signals (as needed for sound localization). We quantify the enhancing effect of I(KLT) on temporal processing with several measures: signal-to-noise ratio (SNR), reverse correlation or spike-triggered averaging of input currents, and interaural time difference (ITD) tuning curves. To characterize how I(KLT), which activates below spike threshold, influences a neuron's voltage rise toward threshold, i.e., how it filters the inputs, we focus first on the response to weak and noisy signals. Cells and models were stimulated with a computer-generated steady barrage of random inputs, mimicking weak synaptic conductance transients (the "noise"), together with a larger but still subthreshold postsynaptic conductance, EPSG (the "signal"). Reduction of I(KLT) decreased the SNR, mainly due to an increase in spontaneous firing (more "false positives"). The spike-triggered reverse correlation indicated that I(KLT) shortened the integration time for spike generation. I(KLT) also heightened the model's timing selectivity for coincidence detection of simulated binaural inputs. Further, ITD tuning is shifted in favor of a slope code rather than a place code by precise and rapid inhibition onto MSO cells (Brand et al. 2002). In several ways, low-threshold outward currents are seen to shape the integration of weak and strong signals in auditory neurons. PMID:14669013
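
Spike-triggered averaging (reverse correlation), used in the study to measure the integration time for spike generation, averages the stimulus segments that precede each spike. A minimal sketch with a surrogate noisy current and a toy threshold-crossing "neuron" (both assumptions for illustration, not the MSO model itself):

```python
# Spike-triggered average: average the input segments preceding each spike.
# The noisy current and the threshold "neuron" below are surrogate data.
import random

def spike_triggered_average(current, spike_times, window=50):
    """Average the `window` input samples preceding each spike time."""
    sta = [0.0] * window
    n = 0
    for t in spike_times:
        if t >= window:
            for i in range(window):
                sta[i] += current[t - window + i]
            n += 1
    return [s / n for s in sta] if n else sta

rng = random.Random(0)
current = [rng.gauss(0.0, 1.0) for _ in range(10000)]
# toy neuron: spike whenever the running sum of the last 6 samples is large
smoothed = [sum(current[max(0, t - 5):t + 1]) for t in range(10000)]
spikes = [t for t in range(10000) if smoothed[t] > 5.0]
sta = spike_triggered_average(current, spikes)
# sta rises only just before the spike, revealing the integration window
```

A short pre-spike bump in the STA is exactly the signature of a brief integration time; in the study, removing I(KLT) widened this window.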

  7. Auditory neuropathy: a challenge for diagnosis and treatment.

    PubMed

    Declau, F; Boudewyns, A; Van den Ende, J; van de Heyning, P

    2013-01-01

    In current terminology, auditory neuropathy spectrum disorder (ANSD) is a disease involving the disruption of the temporal coding of acoustic signals in auditory nerve fibres, resulting in the impairment of auditory perceptions that rely on temporal cues. There is debate about almost every aspect of the disorder, including aetiology, lesion sites, and the terminology used to describe it. ANSD is a heterogeneous disease despite similar audiological findings. The absence of an auditory brainstem response (ABR) and the presence of otoacoustic emissions (OAE) suggest an ANSD profile. However, to determine the exact anatomical site of the disorder, more in-depth audiological and electrophysiological tests must be combined with imaging, genetics and neurological examinations. Greater diagnostic specificity is therefore needed to provide these patients with more adequate treatment. PMID:24383225

  8. Maintenance of auditory-nonverbal information in working memory.

    PubMed

    Soemer, Alexander; Saito, Satoru

    2015-12-01

    According to the multicomponent view of working memory, both auditory-nonverbal information and auditory-verbal information are stored in a phonological code and are maintained by an articulation-based rehearsal mechanism (Baddeley, 2012). Two experiments have been carried out to investigate this hypothesis using sound materials that are difficult to label verbally and difficult to articulate. Participants were required to maintain 2 to 4 sounds differing in timbre over a delay of up to 12 seconds while performing different secondary tasks. While there was no convincing evidence for articulatory rehearsal as a main maintenance mechanism for auditory-nonverbal information, the results suggest that processes similar or identical to auditory imagery might contribute to maintenance. We discuss the implications of these results for multicomponent models of working memory. PMID:25962688

  9. Auditory perceptual simulation: Simulating speech rates or accents?

    PubMed

    Zhou, Peiyun; Christianson, Kiel

    2016-07-01

When readers engage in Auditory Perceptual Simulation (APS) during silent reading, they mentally simulate characteristics of voices attributed to a particular speaker or a character depicted in the text. Previous research found that auditory perceptual simulation of a faster native English speaker during silent reading led to shorter reading times than auditory perceptual simulation of a slower non-native English speaker. Yet, it was uncertain whether this difference was triggered by the different speech rates of the speakers, or by the difficulty of simulating an unfamiliar accent. The current study investigates this question by comparing faster Indian-English speech and slower American-English speech in the auditory perceptual simulation paradigm. Analyses of reading times of individual words and the full sentence reveal that the auditory perceptual simulation effect again modulated reading rate, and auditory perceptual simulation of the faster Indian-English speech led to faster reading rates compared to auditory perceptual simulation of the slower American-English speech. The comparison between this experiment and the data from Zhou and Christianson (2016) demonstrates further that the "speakers'" speech rates, rather than the difficulty of simulating a non-native accent, are the primary mechanism underlying auditory perceptual simulation effects. PMID:27177077

  10. Spectrotemporal resolution tradeoff in auditory processing as revealed by human auditory brainstem responses and psychophysical indices.

    PubMed

    Bidelman, Gavin M; Syed Khaja, Ameenuddin

    2014-06-20

    Auditory filter theory dictates a physiological compromise between frequency and temporal resolution of cochlear signal processing. We examined neurophysiological correlates of these spectrotemporal tradeoffs in the human auditory system using auditory evoked brain potentials and psychophysical responses. Temporal resolution was assessed using scalp-recorded auditory brainstem responses (ABRs) elicited by paired clicks. The inter-click interval (ICI) between successive pulses was parameterized from 0.7 to 25 ms to map ABR amplitude recovery as a function of stimulus spacing. Behavioral frequency difference limens (FDLs) and auditory filter selectivity (Q10 of psychophysical tuning curves) were obtained to assess relations between behavioral spectral acuity and electrophysiological estimates of temporal resolvability. Neural responses increased monotonically in amplitude with increasing ICI, ranging from total suppression (0.7 ms) to full recovery (25 ms) with a temporal resolution of ∼3-4 ms. ABR temporal thresholds were correlated with behavioral Q10 (frequency selectivity) but not FDLs (frequency discrimination); no correspondence was observed between Q10 and FDLs. Results suggest that finer frequency selectivity, but not discrimination, is associated with poorer temporal resolution. The inverse relation between ABR recovery and perceptual frequency tuning demonstrates a time-frequency tradeoff between the temporal and spectral resolving power of the human auditory system. PMID:24793771
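
The ABR amplitude-recovery curve described above can be summarized by fitting a recovery function of the inter-click interval (ICI) and reading off a temporal threshold. The exponential model, grid-search fit, and 50%-recovery criterion below are assumptions for this sketch, not the exact analysis in the paper:

```python
# Fit A(ICI) = 1 - exp(-ICI / tau) to paired-click ABR amplitude recovery by
# grid search, then read off the ICI at 50% recovery. Model, grid, and the
# noiseless toy data are illustrative assumptions.
import math

def fit_tau(icis, amps):
    """Return the tau minimizing squared error of the recovery model."""
    taus = [0.5 + 0.05 * k for k in range(200)]   # candidates, 0.5-10.45 ms
    def sse(tau):
        return sum((a - (1.0 - math.exp(-t / tau))) ** 2
                   for t, a in zip(icis, amps))
    return min(taus, key=sse)

icis = [0.7, 1.0, 2.0, 4.0, 8.0, 16.0, 25.0]        # inter-click intervals, ms
amps = [1.0 - math.exp(-t / 3.0) for t in icis]      # toy data with tau = 3 ms
tau_hat = fit_tau(icis, amps)
threshold_ms = tau_hat * math.log(2.0)  # ICI at 50% amplitude recovery
```

A per-listener threshold extracted this way is the kind of scalar that can then be correlated against behavioral Q10 or FDL values.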

  11. [Auditory neuropathy (auditory neuropathy spectrum disorders): the approaches to diagnostics and rehabilitation].

    PubMed

    Tavartkiladze, G A

    2014-01-01

Auditory neuropathies (auditory neuropathy spectrum disorders, ANSD) may be a consequence of dysfunction of inner hair cells and/or of synapses between these cells and auditory nerve fibers. Another cause of these disorders is supposed to be pathological changes in the auditory nerve itself. The outcome of the rehabilitative treatment of the patients presenting with this disorder depends on the quality of diagnosis and precise location of the pathological process. The present study involved 82 patients with auditory neuropathies. The audiological data obtained in the course of this work were compared with the results of other authors published during recent years. The objective audiological examination included electrocochleography, registration of auditory brainstem responses (ABR) and otoacoustic emissions, and recording of short-latency and long-latency evoked auditory nerve action potentials. High-amplitude cochlear microphonic and transient evoked otoacoustic emission (TEOAE) potentials were recorded in 82 patients. In 17 (20.7%) patients, otoacoustic emission disappeared in the course of time even though the microphonic potential remained stable. It was shown that the results of electrical acoustic correction in the patients exhibiting long-latency evoked auditory action potentials and positive ABR to electrical stimulation (positive promontory test) were better than in the remaining cases. The outcome of cochlear implantation to a large extent depended on the localization of the pathological process. Specifically, the results of the treatment of the patients with high-amplitude summation potentials, prolonged latency, and positive auditory action potentials in response to electrical stimulation (typical of pre-synaptic localization of the pathological process) were better than in the patients with normal summation potentials, pathological auditory nerve action potentials, TEOAE, and negative ABR to electrical stimulation (indicative of post-synaptic localization of the pathological process).

  12. Spatial Coherence in Auditory Cortical Activity Fluctuations

    NASA Astrophysics Data System (ADS)

    Yoshida, Takamasa; Katura, Takusige; Yamazaki, Kyoko; Tanaka, Shigeru; Iwamoto, Mitsumasa; Tanaka, Naoki

    2007-07-01

We examined activity fluctuations as ongoing and spontaneous activities that were recorded with voltage-sensitive dye imaging in the auditory cortex of guinea pigs. We investigated whether such activities demonstrated spatial coherence, which represents the cortical functional organization. We used independent component analysis to extract neural activities from observed signals and a scaled signal-plus-noise model to estimate ongoing activities from the neural activities including response components. We mapped the correlation between the time courses in each channel and in the others for the whole observed region. Ongoing and spontaneous activities in the auditory cortex were found to have strong spatial coherence corresponding to the tonotopy, which is one form of auditory functional organization.

  13. Leap Frog and Time Step Sub-Cycle Scheme for Coupled Neutronics and Thermal-Hydraulic Codes

    SciTech Connect

    Lu, S.

    2002-07-01

As a result of advancing TCP/IP-based inter-process communication technology, more and more legacy thermal-hydraulic codes have been coupled with neutronics codes to provide best-estimate capabilities for reactivity-related reactor transient analysis. Most of the coupling schemes are based on closely coupled serial or parallel approaches. Therefore, the execution of the coupled codes usually requires significant CPU time when a complicated system is analyzed. The Leap Frog scheme has been used to reduce the run time. The extent of the decoupling is usually determined through a trial-and-error process for a specific analysis. It is the intent of this paper to develop a set of general criteria that can be used to invoke the automatic Leap Frog algorithm. The algorithm will not only reduce the run time but also preserve accuracy. The criteria will also serve as the basis of an automatic time step sub-cycle scheme when a sudden reactivity change is introduced and the thermal-hydraulic code is marching with a relatively large time step. (authors)
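
The Leap Frog and sub-cycle idea can be sketched as a driver loop: the thermal-hydraulic (TH) code marches with a large step, and the neutronics update is sub-cycled whenever the reactivity change across the TH step exceeds a tolerance. The point-kinetics and feedback stand-ins below are toy assumptions, not the coupled codes discussed in the paper:

```python
# Toy coupled march: TH takes one large step; neutronics is sub-cycled when the
# reactivity change across that step exceeds a tolerance. The feedback law,
# kinetics update, and all constants are illustrative.
def reactivity(temp):
    return 0.02 - 1e-4 * temp          # assumed linear temperature feedback

def coupled_march(t_end=1.0, dt_th=0.1, tol=1e-6):
    temp, power, t = 300.0, 1.0, 0.0
    while t < t_end - 1e-12:
        rho0 = reactivity(temp)
        # one large TH step: heat-up from power, cooling toward 300 K
        temp_new = temp + dt_th * (0.5 * power - 0.01 * (temp - 300.0))
        rho1 = reactivity(temp_new)
        # sub-cycle neutronics if reactivity moves too much across the step
        n_sub = max(1, int(abs(rho1 - rho0) / tol) + 1)
        dt_n = dt_th / n_sub
        for k in range(n_sub):
            # leap-frog style: neutronics sees interpolated (frozen) TH data
            rho = rho0 + (rho1 - rho0) * (k + 0.5) / n_sub
            power *= 1.0 + rho * dt_n      # toy point-kinetics update
        temp = temp_new
        t += dt_th
    return power, temp

power, temp = coupled_march()
```

The tolerance `tol` plays the role of the paper's general invocation criterion: a loose value keeps the cheap decoupled march, while a rapid reactivity change automatically triggers finer neutronics steps.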

  14. Context dependence of spectro-temporal receptive fields with implications for neural coding.

    PubMed

    Eggermont, Jos J

    2011-01-01

    The spectro-temporal receptive field (STRF) is frequently used to characterize the linear frequency-time filter properties of the auditory system up to the neuron recorded from. STRFs are extremely stimulus dependent, reflecting the strong non-linearities in the auditory system. Changes in the STRF with stimulus type (tonal, noise-like, vocalizations), sound level and spectro-temporal sound density are reviewed here. Effects on STRF shape of task and attention are also briefly reviewed. Models to account for these changes, potential improvements to STRF analysis, and implications for neural coding are discussed. PMID:20123121

  15. Auditory Memory for Timbre

    ERIC Educational Resources Information Center

    McKeown, Denis; Wellsted, David

    2009-01-01

    Psychophysical studies are reported examining how the context of recent auditory stimulation may modulate the processing of new sounds. The question posed is how recent tone stimulation may affect ongoing performance in a discrimination task. In the task, two complex sounds occurred in successive intervals. A single target component of one complex…

  16. Auditory Channel Problems.

    ERIC Educational Resources Information Center

    Mann, Philip H.; Suiter, Patricia A.

    This teacher's guide contains a list of general auditory problem areas where students have the following problems: (a) inability to find or identify source of sound; (b) difficulty in discriminating sounds of words and letters; (c) difficulty with reproducing pitch, rhythm, and melody; (d) difficulty in selecting important from unimportant sounds;…

  17. Altered auditory function in rats exposed to hypergravic fields

    NASA Technical Reports Server (NTRS)

    Jones, T. A.; Hoffman, L.; Horowitz, J. M.

    1982-01-01

    The effect of an orthodynamic hypergravic field of 6 G on the brainstem auditory projections was studied in rats. The brain temperature and EEG activity were recorded in the rats during 6 G orthodynamic acceleration and auditory brainstem responses were used to monitor auditory function. Results show that all animals exhibited auditory brainstem responses which indicated impaired conduction and transmission of brainstem auditory signals during the exposure to the 6 G acceleration field. Significant increases in central conduction time were observed for peaks 3N, 4P, 4N, and 5P (N = negative, P = positive), while the absolute latency values for these same peaks were also significantly increased. It is concluded that these results, along with those for fields below 4 G (Jones and Horowitz, 1981), indicate that impaired function proceeds in a rostro-caudal progression as field strength is increased.

  18. Auditory Adaptation in Vocal Affect Perception

    ERIC Educational Resources Information Center

    Bestelmeyer, Patricia E. G.; Rouger, Julien; DeBruine, Lisa M.; Belin, Pascal

    2010-01-01

    Previous research has demonstrated perceptual aftereffects for emotionally expressive faces, but the extent to which they can also be obtained in a different modality is unknown. In two experiments we show for the first time that adaptation to affective, non-linguistic vocalisations elicits significant auditory aftereffects. Adaptation to angry…

  19. Developmental Changes in Auditory Temporal Perception.

    ERIC Educational Resources Information Center

    Morrongiello, Barbara A.; And Others

    1984-01-01

    Infants, preschoolers, and adults were tested to determine the shortest time interval at which they would respond to the precedence effect, an auditory phenomenon produced by presenting the same sound through two loudspeakers with the input to one loudspeaker delayed in relation to the other. Results revealed developmental differences in threshold…

  20. Task-irrelevant auditory feedback facilitates motor performance in musicians.

    PubMed

    Conde, Virginia; Altenmüller, Eckart; Villringer, Arno; Ragert, Patrick

    2012-01-01

    An efficient and fast auditory-motor network is a basic resource for trained musicians due to the importance of motor anticipation of sound production in musical performance. When playing an instrument, motor performance always goes along with the production of sounds and the integration between both modalities plays an essential role in the course of musical training. The aim of the present study was to investigate the role of task-irrelevant auditory feedback during motor performance in musicians using a serial reaction time task (SRTT). Our hypothesis was that musicians, due to their extensive auditory-motor practice routine during musical training, have superior performance and learning capabilities when receiving auditory feedback during SRTT relative to musicians performing the SRTT without any auditory feedback. Behaviorally, we found that auditory feedback reinforced SRTT performance of the right hand (referring to absolute response speed) while learning capabilities remained unchanged. This finding highlights a potential important role for task-irrelevant auditory feedback in motor performance in musicians, a finding that might provide further insight into auditory-motor integration independent of the trained musical context. PMID:22623920

  1. Visual-induced expectations modulate auditory cortical responses.

    PubMed

    van Wassenhove, Virginie; Grzeczkowski, Lukasz

    2015-01-01

Active sensing has important consequences on multisensory processing (Schroeder et al., 2010). Here, we asked whether in the absence of saccades, the position of the eyes and the timing of transient color changes of visual stimuli could selectively affect the excitability of auditory cortex by predicting the "where" and the "when" of a sound, respectively. Human participants were recorded with magnetoencephalography (MEG) while maintaining the position of their eyes on the left, right, or center of the screen. Participants counted color changes of the fixation cross while neglecting sounds which could be presented to the left, right, or both ears. First, clear alpha power increases were observed in auditory cortices, consistent with participants' attention directed to visual inputs. Second, color changes elicited robust modulations of auditory cortex responses ("when" prediction) seen as ramping activity, early alpha phase-locked responses, and enhanced high-gamma band responses in the contralateral side of sound presentation. Third, no modulations of auditory evoked or oscillatory activity were found to be specific to eye position. Altogether, our results suggest that visual transience can automatically elicit a prediction of "when" a sound will occur by changing the excitability of auditory cortices irrespective of the attended modality, eye position or spatial congruency of auditory and visual events. In contrast, auditory cortical responses were not significantly affected by eye position, suggesting that "where" predictions may require active sensing or saccadic reset to modulate auditory cortex responses, notably in the absence of spatial orientation to sounds. PMID:25705174

  2. Navajo Code Talker Joe Morris, Sr. shared insights from his time as a secret World War Two messenger

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Navajo Code Talker Joe Morris, Sr. shared insights from his time as a secret World War Two messenger with his audience at NASA's Dryden Flight Research Center on Nov. 26, 2002. NASA Dryden is located on Edwards Air Force Base in California's Mojave Desert.

  3. Estimation of the reaction times in tasks of varying difficulty from the phase coherence of the auditory steady-state response using the least absolute shrinkage and selection operator analysis.

    PubMed

    Yokota, Yusuke; Igarashi, Yasuhiko; Okada, Masato; Naruse, Yasushi

    2015-08-01

    Quantitative estimation of the workload in the brain is an important factor for helping to predict the behavior of humans. The reaction time when performing a difficult task is longer than that when performing an easy task. Thus, the reaction time reflects the workload in the brain. In this study, we employed an N-back task in order to regulate the degree of difficulty of the tasks, and then estimated the reaction times from the brain activity. The brain activity that we used to estimate the reaction time was the auditory steady-state response (ASSR) evoked by a 40-Hz click sound. Fifteen healthy participants took part in the present study, and magnetoencephalogram (MEG) responses were recorded using a 148-channel magnetometer system. The least absolute shrinkage and selection operator (LASSO), which is a type of sparse modeling, was employed to estimate the reaction times from the ASSR recorded by MEG. The LASSO showed higher estimation accuracy than the least squares method. This result indicates that LASSO avoided overfitting the training data. Furthermore, the LASSO selected channels not only in the parietal region but also in the frontal and occipital regions. Since the ASSR is evoked by auditory stimuli, it is usually large in the parietal region. However, since LASSO also selected channels in regions outside the parietal region, this suggests that workload-related neural activity occurs in many brain regions. In the real world, it is more practical to use a wearable electroencephalography device with a limited number of channels than to use MEG. Therefore, determining which brain areas should be measured is essential. The channels selected by the sparse modeling method are informative for determining which brain areas to measure. PMID:26737821
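    The channel-selection property described above can be illustrated with a minimal sketch. The MEG data, channel count, and regularization strength below are hypothetical; LASSO is implemented by plain cyclic coordinate descent so the example stays self-contained.

    ```python
    import numpy as np

    def lasso_cd(X, y, alpha, n_iter=200):
        """LASSO via cyclic coordinate descent:
        minimizes (1/2n)||y - Xw||^2 + alpha*||w||_1."""
        n, p = X.shape
        w = np.zeros(p)
        z = (X ** 2).sum(axis=0) / n              # per-feature curvature
        for _ in range(n_iter):
            for j in range(p):
                r = y - X @ w + X[:, j] * w[j]    # residual excluding feature j
                rho = X[:, j] @ r / n
                w[j] = np.sign(rho) * max(abs(rho) - alpha, 0.0) / z[j]  # soft threshold
        return w

    rng = np.random.default_rng(0)
    n, p = 200, 30                                # trials x (hypothetical) MEG channels
    X = rng.standard_normal((n, p))
    w_true = np.zeros(p)
    w_true[[0, 5, 12]] = [2.0, -1.5, 1.0]         # only a few informative channels
    y = X @ w_true + 0.1 * rng.standard_normal(n) # reaction times (arbitrary units)

    w_lasso = lasso_cd(X, y, alpha=0.2)
    w_ols, *_ = np.linalg.lstsq(X, y, rcond=None) # least-squares baseline
    ```

    With these settings the LASSO solution zeroes out most uninformative channels, whereas the least-squares fit assigns a (small) nonzero weight to essentially every channel, mirroring the overfitting contrast reported in the abstract.
    
    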

  4. On the digital signal processor based programmable real-time Reed-Solomon coding-decoding system

    NASA Astrophysics Data System (ADS)

    Zheng, Ou Yang Jing; Yong, Lin Ze

    1990-04-01

    A real-time high-speed Reed-Solomon coding-decoding system has been built on the basis of a low-cost signal processor TMS 32010. By taking advantage of the TMS 320 16-bit multiplier and barrel shifter, the evaluation of the sum-of-products expression over a prime field can be simplified. Decoding of a (256,228) code with powerful error correcting capability over GF(257) has been implemented in this system, and the information bit rate is at least twice that of the ISDN B-channel basic rate.
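    The appeal of GF(257) is that the sum-of-products reduces to ordinary integer arithmetic modulo a prime, which maps directly onto a DSP multiplier. A minimal sketch of systematic Reed-Solomon encoding and syndrome checking over GF(257) follows; the toy code is much shorter than the (256,228) code in the record, and 3 is taken as a primitive root mod 257 (an assumption of this sketch).

    ```python
    # Systematic Reed-Solomon encoding over the prime field GF(257).
    P, ALPHA = 257, 3          # field size and an assumed primitive root

    def poly_mul(a, b):        # coefficients listed highest degree first
        out = [0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                out[i + j] = (out[i + j] + ai * bj) % P
        return out

    def poly_rem(a, g):        # remainder of a(x) / g(x), g monic
        a = list(a)
        for i in range(len(a) - len(g) + 1):
            f = a[i]
            if f:
                for j, gc in enumerate(g):
                    a[i + j] = (a[i + j] - f * gc) % P
        return a[-(len(g) - 1):]

    def rs_encode(msg, nparity):
        g = [1]
        for i in range(1, nparity + 1):   # generator with roots ALPHA^1..ALPHA^nparity
            g = poly_mul(g, [1, (-pow(ALPHA, i, P)) % P])
        rem = poly_rem(msg + [0] * nparity, g)
        return msg + [(-r) % P for r in rem]   # codeword divisible by g(x)

    def syndrome(cw, i):       # c(ALPHA^i) by Horner's rule (a sum of products)
        x, acc = pow(ALPHA, i, P), 0
        for c in cw:
            acc = (acc * x + c) % P
        return acc

    cw = rs_encode([10, 20, 30, 40, 50], nparity=4)
    ```

    By construction every syndrome of a valid codeword is zero, and corrupting any single symbol makes at least one syndrome nonzero, which is the decoder's starting point.
    
    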

  5. Auditory Brainstem Response Improvements in Hyperbilirubinemic Infants

    PubMed Central

    Abdollahi, Farzaneh Zamiri; Manchaiah, Vinaya; Lotfi, Yones

    2016-01-01

    Background and Objectives Hyperbilirubinemia in infants has been associated with neuronal damage, including damage to the auditory system. Some researchers have suggested that bilirubin-induced auditory neuronal damage may be temporary and reversible. This study was aimed at investigating auditory neuropathy and the reversibility of auditory abnormalities in hyperbilirubinemic infants. Subjects and Methods The study participants included 41 full-term hyperbilirubinemic infants (mean age 39.24 days) with normal birth weight (3,200-3,700 grams) who were admitted to hospital for hyperbilirubinemia, and 39 normal infants (mean age 35.54 days) without hyperbilirubinemia or other hearing-loss risk factors, included to rule out maturational changes. All infants in the hyperbilirubinemic group had serum bilirubin levels above 20 milligrams per deciliter and had undergone one blood exchange transfusion. Hearing was evaluated twice for each infant: first after hyperbilirubinemia treatment and before hospital discharge, and again three months after the first evaluation. Hearing evaluations included transient evoked otoacoustic emission (TEOAE) screening and auditory brainstem response (ABR) threshold tracing. Results The TEOAE and ABR results of the control group and the TEOAE results of the hyperbilirubinemic group did not change significantly from the first to the second evaluation. However, the ABR results of the hyperbilirubinemic group improved significantly from the first to the second assessment (p=0.025). Conclusions The results suggest that bilirubin-induced auditory neuronal damage can be reversible over time; we therefore suggest that infants with hyperbilirubinemia who fail the first hearing tests be reevaluated after 3 months of treatment. PMID:27144228

  6. Resource allocation models of auditory working memory.

    PubMed

    Joseph, Sabine; Teki, Sundeep; Kumar, Sukhbinder; Husain, Masud; Griffiths, Timothy D

    2016-06-01

    Auditory working memory (WM) is the cognitive faculty that allows us to actively hold and manipulate sounds in mind over short periods of time. We develop here a particular perspective on WM for non-verbal, auditory objects as well as for time based on the consideration of possible parallels to visual WM. In vision, there has been a vigorous debate on whether WM capacity is limited to a fixed number of items or whether it represents a limited resource that can be allocated flexibly across items. Resource allocation models predict that the precision with which an item is represented decreases as a function of total number of items maintained in WM because a limited resource is shared among stored objects. We consider here auditory work on sequentially presented objects of different pitch as well as time intervals from the perspective of dynamic resource allocation. We consider whether the working memory resource might be determined by perceptual features such as pitch or timbre, or bound objects comprising multiple features, and we speculate on brain substrates for these behavioural models. This article is part of a Special Issue entitled SI: Auditory working memory. PMID:26835560
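    The core prediction of the resource-allocation account, that per-item precision falls as more items share a fixed budget, can be written as a one-line toy model. The precision budget and the Gaussian-noise assumption below are illustrative, not values from the article.

    ```python
    import numpy as np

    # Toy flexible-resource model: a fixed precision budget J_total is divided
    # equally among N stored items, so per-item recall noise grows with load.
    def recall_sd(n_items, total_precision=16.0):
        j = total_precision / n_items     # precision (1/variance) per item
        return 1.0 / np.sqrt(j)           # standard deviation of recall error

    sds = [recall_sd(n) for n in (1, 2, 4, 8)]
    ```

    Under this equal-split model the recall-error SD scales as the square root of set size, e.g. doubling from one item to four.
    
    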

  7. Intensity modulation and direct detection Alamouti polarization-time coding for optical fiber transmission systems with polarization mode dispersion

    NASA Astrophysics Data System (ADS)

    Reza, Ahmed Galib; Rhee, June-Koo Kevin

    2016-07-01

    Alamouti space-time coding is modified in the form of polarization-time coding to combat polarization mode dispersion (PMD) impairments while exploiting a polarization diversity multiplexing (PDM) gain with simple intensity modulation and direct detection (IM/DD) in optical transmission systems. A theoretical model of the proposed IM/DD Alamouti polarization-time coding (APTC-IM/DD) using a nonreturn-to-zero on-off keying signal shows that it can, surprisingly, eliminate the requirement of channel estimation for decoding in the low-PMD regime when a two-transmitter, two-receiver channel is adopted. Even in the high-PMD regime, the proposed APTC-IM/DD still yields a coding gain, demonstrating its robustness. In addition, this scheme eliminates the need for a polarization state controller, a coherent receiver, and a high-speed analog-to-digital converter at the receiver. Simulation results reveal that the proposed APTC scheme reduces the optical signal-to-noise ratio requirement by ~3 dB and significantly enhances the PMD tolerance of a PDM-based IM/DD system.
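    The transmit-diversity principle the APTC scheme adapts to two polarizations is the classic Alamouti 2x1 block code, sketched below with complex symbols. This does not model the paper's IM/DD or estimation-free aspects; the channel gains and symbols are arbitrary.

    ```python
    # Classic Alamouti space-time block code, noiseless case.
    h1, h2 = 0.8 - 0.3j, 0.5 + 0.6j          # channel gains (arbitrary)
    s1, s2 = 1 + 1j, -1 + 1j                 # two transmitted symbols

    # Slot 1: branches send (s1, s2); slot 2: branches send (-s2*, s1*).
    r1 = h1 * s1 + h2 * s2                   # received sample, slot 1
    r2 = -h1 * s2.conjugate() + h2 * s1.conjugate()  # received sample, slot 2

    gain = abs(h1) ** 2 + abs(h2) ** 2       # diversity combining gain
    s1_hat = (h1.conjugate() * r1 + h2 * r2.conjugate()) / gain
    s2_hat = (h2.conjugate() * r1 - h1 * r2.conjugate()) / gain
    ```

    The linear combining cancels the cross terms exactly, so without noise both symbols are recovered with the full diversity gain |h1|² + |h2|².
    
    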

  8. Dynamic bandwidth allocation algorithm for next-generation time division multiplexing passive optical networks with network coding

    NASA Astrophysics Data System (ADS)

    Wei, Pei; Gu, Rentao; Ji, Yuefeng

    2013-08-01

    An efficient dynamic bandwidth allocation (DBA) algorithm for multiclass services, called MSDBA, is proposed for next-generation time division multiplexing (TDM) passive optical networks with network coding (NC-PON). In MSDBA, a DBA cycle is divided into two subcycles with different coding strategies for differentiated classes of services, and the transmission time of the first subcycle overlaps with the bandwidth allocation calculation time at the optical line terminal. Moreover, according to the quality-of-service (QoS) requirements of services, different scheduling and bandwidth allocation schemes are applied to coded or uncoded services in the corresponding subcycle. Numerical analyses and simulations for performance evaluation are performed in 10 Gbps Ethernet passive optical networks (10G EPON), a standardized solution for next-generation EPON. Evaluation results show that, compared with two existing DBA algorithms deployed in TDM NC-PON, MSDBA not only delivers better delay performance and QoS support for all classes of service, but also achieves maximum end-to-end delay fairness between coded and uncoded lower-class services and guarantees the end-to-end delay bound and fixed polling order of high-class services, at the cost of some of their end-to-end delay fairness.

  9. LUMPED: a Visual Basic code of lumped-parameter models for mean residence time analyses of groundwater systems

    NASA Astrophysics Data System (ADS)

    Ozyurt, N. N.; Bayari, C. S.

    2003-02-01

    A Microsoft® Visual Basic 6.0 (Microsoft Corporation, 1987-1998) code of 15 lumped-parameter models is presented for the analysis of mean residence time in aquifers. Groundwater flow systems obeying plug-flow and exponential-flow models, and their combinations in parallel or serial connection, can be simulated by these steady-state models, which may include complications such as bypass flow and dead volume. Each model accepts tritium, krypton-85, chlorofluorocarbons (CFC-11, CFC-12 and CFC-113) and sulfur hexafluoride (SF6) as environmental tracers. Retardation of gas tracers in the unsaturated zone and their degradation in the flow system may also be accounted for. The executable code has been tested to run under Windows 95 or higher operating systems. The results of comparisons with other comparable codes are discussed and the limitations are indicated.
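    One member of this model family, the exponential (well-mixed reservoir) model, can be sketched in a few lines: the tracer output is the input history convolved with an exponential transit-time distribution, weighted by radioactive decay. The time step, mean residence time, and half-life below are illustrative, not values from the code.

    ```python
    import numpy as np

    def exponential_model(c_in, dt, mrt, half_life=np.inf):
        """Tracer output of a well-mixed reservoir with mean residence time mrt.
        c_out(t) = integral of c_in(t - tau) * g(tau) * exp(-lambda*tau) dtau,
        with g(tau) = exp(-tau/mrt) / mrt."""
        lam = np.log(2) / half_life           # decay constant (0 for a stable tracer)
        tau = np.arange(len(c_in)) * dt
        g = np.exp(-tau / mrt) / mrt          # exponential transit-time distribution
        kernel = g * np.exp(-lam * tau) * dt
        return np.convolve(c_in, kernel)[: len(c_in)]

    c_in = np.ones(2000)                      # constant input concentration
    c_out = exponential_model(c_in, dt=0.1, mrt=10.0)
    ```

    For a stable tracer and constant input, the output converges to the input concentration at steady state, while a decaying tracer (e.g. tritium) settles below it.
    
    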

  10. The MCNP-DSP code for calculations of time and frequency analysis parameters for subcritical systems

    SciTech Connect

    Valentine, T.E.; Mihalczo, J.T.

    1995-12-31

    This paper describes a modified version of the MCNP code, the MCNP-DSP. Variance reduction features were disabled to have strictly analog particle tracking in order to follow fluctuating processes more accurately. Some of the neutron and photon physics routines were modified to better represent the production of particles. Other modifications are discussed.

  11. Surveying Multidisciplinary Aspects in Real-Time Distributed Coding for Wireless Sensor Networks

    PubMed Central

    Braccini, Carlo; Davoli, Franco; Marchese, Mario; Mongelli, Maurizio

    2015-01-01

    Wireless Sensor Networks (WSNs), where a multiplicity of sensors observe a physical phenomenon and transmit their measurements to one or more sinks, pertain to the class of multi-terminal source and channel coding problems of Information Theory. In this category, “real-time” coding is often encountered for WSNs, referring to the problem of finding the minimum distortion (according to a given measure), under transmission power constraints, attainable by encoding and decoding functions, with stringent limits on delay and complexity. On the other hand, the Decision Theory approach seeks to determine the optimal coding/decoding strategies or some of their structural properties. Since encoder(s) and decoder(s) possess different information, though sharing a common goal, the setting here is that of Team Decision Theory. A more pragmatic vision rooted in Signal Processing consists of fixing the form of the coding strategies (e.g., to linear functions) and, consequently, finding the corresponding optimal decoding strategies and the achievable distortion, generally by applying parametric optimization techniques. All approaches have a long history of past investigations and recent results. The goal of the present paper is to provide the taxonomy of the various formulations, a survey of the vast related literature, examples from the authors' own research, and some highlights on the inter-play of the different theories. PMID:25633597
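    The "pragmatic" signal-processing formulation mentioned above can be made concrete with a tiny Gaussian example: fix each sensor's encoder to a linear scaling meeting a power budget, then compute the optimal LMMSE decoder and its distortion in closed form. All variances and the power budget are illustrative.

    ```python
    import numpy as np

    # Scalar source x ~ N(0, var_x); sensor i observes y_i = x + v_i,
    # transmits s_i = a * y_i (power constraint E[s_i^2] = power),
    # and the sink receives r_i = s_i + w_i.
    var_x, var_v, var_w, power = 1.0, 0.2, 0.1, 1.0
    n_sensors = 2

    a = np.sqrt(power / (var_x + var_v))      # linear encoder gain
    c = np.full(n_sensors, a * var_x)         # cross-covariance E[x * r_i]
    C = (a**2 * var_x) * np.ones((n_sensors, n_sensors))
    C += np.diag(np.full(n_sensors, a**2 * var_v + var_w))  # E[r r^T]

    weights = np.linalg.solve(C, c)           # LMMSE decoder
    distortion = var_x - c @ weights          # E[(x - x_hat)^2]
    ```

    The distortion is strictly below the prior variance, and adding the second sensor strictly improves on the single-sensor case, which is the distortion-under-power-constraint trade-off the survey formalizes.
    
    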

  12. Fast wave propagation in auditory cortex of an awake cat using a chronic microelectrode array

    NASA Astrophysics Data System (ADS)

    Witte, Russell S.; Rousche, Patrick J.; Kipke, Daryl R.

    2007-06-01

    We investigated fast wave propagation in auditory cortex of an alert cat using a chronically implanted microelectrode array. A custom, real-time imaging template exhibited wave dynamics within the 33-microwire array (3 mm2) during ten recording sessions spanning 1 month post implant. Images were based on the spatial arrangement of peri-stimulus time histograms at each recording site in response to auditory stimuli consisting of tone pips between 1 and 10 kHz at 75 dB SPL. Functional images portray stimulus-locked spiking activity and exhibit waves of excitation and inhibition that evolve during the onset, sustained and offset period of the tones. In response to 5 kHz, for example, peak excitation occurred at 27 ms after onset and again at 15 ms following tone offset. Variability of the position of the centroid of excitation during ten recording sessions reached a minimum at 31 ms post onset (σ = 125 µm) and 18 ms post offset (σ = 145 µm), suggesting a fine place/time representation of the stimulus in the cortex. The dynamics of these fast waves also depended on stimulus frequency, likely reflecting the tonotopicity in auditory cortex projected from the cochlea. Peak wave velocities of 0.2 m s-1 were also consistent with those purported across horizontal layers of cat visual cortex. The fine resolution offered by microimaging may be critical for delivering optimal coding strategies used with an auditory prosthesis. Based on the initial results, future studies seek to determine the relevance of these waves to sensory perception and behavior. The work was performed at the Department of Bioengineering, Arizona State University, Tempe, AZ 85287-9709, USA.
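    Each pixel of such an imaging template is a peri-stimulus time histogram (PSTH): spike times aligned to stimulus onsets and binned into a firing-rate estimate. A minimal sketch follows, with synthetic spike times, onsets, and bin width (the 27 ms latency is borrowed from the abstract purely as a test value).

    ```python
    import numpy as np

    def psth(spike_times, onsets, window=0.1, bin_width=0.005):
        """Firing rate (spikes/s) in post-onset time bins, averaged over trials."""
        nbins = int(round(window / bin_width))
        edges = np.linspace(0.0, window, nbins + 1)
        counts = np.zeros(nbins)
        for t0 in onsets:
            rel = spike_times - t0                   # align to stimulus onset
            rel = rel[(rel >= 0) & (rel < window)]
            counts += np.histogram(rel, bins=edges)[0]
        return counts / (len(onsets) * bin_width)

    onsets = np.arange(0.0, 10.0, 1.0)               # one stimulus per second
    spikes = onsets + 0.027                          # unit fires 27 ms after each onset
    rate = psth(spikes, onsets)
    ```

    For this perfectly locked synthetic unit the histogram peaks in the 25-30 ms bin at the expected trial-averaged rate.
    
    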

  13. Fast wave propagation in auditory cortex of an awake cat using a chronic microelectrode array.

    PubMed

    Witte, Russell S; Rousche, Patrick J; Kipke, Daryl R

    2007-06-01

    We investigated fast wave propagation in auditory cortex of an alert cat using a chronically implanted microelectrode array. A custom, real-time imaging template exhibited wave dynamics within the 33-microwire array (3 mm2) during ten recording sessions spanning 1 month post implant. Images were based on the spatial arrangement of peri-stimulus time histograms at each recording site in response to auditory stimuli consisting of tone pips between 1 and 10 kHz at 75 dB SPL. Functional images portray stimulus-locked spiking activity and exhibit waves of excitation and inhibition that evolve during the onset, sustained and offset period of the tones. In response to 5 kHz, for example, peak excitation occurred at 27 ms after onset and again at 15 ms following tone offset. Variability of the position of the centroid of excitation during ten recording sessions reached a minimum at 31 ms post onset (σ = 125 µm) and 18 ms post offset (σ = 145 µm), suggesting a fine place/time representation of the stimulus in the cortex. The dynamics of these fast waves also depended on stimulus frequency, likely reflecting the tonotopicity in auditory cortex projected from the cochlea. Peak wave velocities of 0.2 m s-1 were also consistent with those purported across horizontal layers of cat visual cortex. The fine resolution offered by microimaging may be critical for delivering optimal coding strategies used with an auditory prosthesis. Based on the initial results, future studies seek to determine the relevance of these waves to sensory perception and behavior. PMID:17409481

  14. Vision contingent auditory pitch aftereffects.

    PubMed

    Teramoto, Wataru; Kobayashi, Maori; Hidaka, Souta; Sugita, Yoichi

    2013-08-01

    Visual motion aftereffects can occur contingent on arbitrary sounds. Two circles, placed side by side, were alternately presented, and the onsets were accompanied by tone bursts of high and low frequencies, respectively. After a few minutes of exposure to the visual apparent motion with the tones, a circle blinking at a fixed location was perceived as a lateral motion in the same direction as the previously exposed apparent motion (Teramoto et al. in PLoS One 5:e12255, 2010). In the present study, we attempted to reverse this contingency (pitch aftereffects contingent on visual information). Results showed that after prolonged exposure to the audio-visual stimuli, the apparent visual motion systematically affected the perceived pitch of the auditory stimuli. When the leftward apparent visual motion was paired with the high-low-frequency sequence during the adaptation phase, a test tone sequence was more frequently perceived as a high-low-pitch sequence when the leftward apparent visual motion was presented and vice versa. Furthermore, the effect was specific for the exposed visual field and did not transfer to the other side, thus ruling out an explanation in terms of simple response bias. These results suggest that new audiovisual associations can be established within a short time, and visual information processing and auditory processing can mutually influence each other. PMID:23727883

  15. High resolution auditory perception system

    NASA Astrophysics Data System (ADS)

    Alam, Iftekhar; Ghatol, Ashok

    2005-04-01

    Blindness is a sensory disability that is difficult to treat but can, to some extent, be mitigated by artificial aids. The paper describes the design of a high-resolution auditory perception system, based on the principle of air sonar with binaural perception, that serves as a vision-substitution aid for blind persons. The blind person wears ultrasonic eyeglasses with an ultrasonic sensor array embedded in them. The system has been designed to operate in multiresolution modes. Ultrasonic sound from the transmitter array is reflected back by objects falling within the array's beam and is received; the received signal is converted to an audible signal, which is presented stereophonically for auditory perception. Background work for the system implementation included the appropriate range analysis procedure, analysis of space-time signals, a study of acoustic sensors, amplification methods, and the removal of noise using filters. Finally, the system implementation, covering both hardware and software, is described. Experimental results on blind subjects and inferences obtained during the study are also included.
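    The air-sonar principle behind such aids reduces to time-of-flight arithmetic: range follows from the echo's round trip, and range resolution from the timing granularity. The speed of sound and timing step below are typical assumptions, not values from the paper.

    ```python
    # Back-of-envelope air-sonar range calculations.
    SPEED_OF_SOUND = 343.0          # m/s in air at ~20 C

    def echo_range(round_trip_s):
        """Distance to a reflecting object from the echo's round-trip time."""
        return SPEED_OF_SOUND * round_trip_s / 2.0

    def range_resolution(timer_step_s):
        """Smallest distinguishable range difference for a given timing step."""
        return SPEED_OF_SOUND * timer_step_s / 2.0

    d = echo_range(0.01)            # a 10 ms echo places the obstacle ~1.7 m away
    ```

    The factor of two accounts for the out-and-back path; finer timing directly buys finer range resolution, which is what "high resolution" demands of the hardware.
    
    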

  16. Auditory object cognition in dementia.

    PubMed

    Goll, Johanna C; Kim, Lois G; Hailstone, Julia C; Lehmann, Manja; Buckley, Aisling; Crutch, Sebastian J; Warren, Jason D

    2011-07-01

    The cognition of nonverbal sounds in dementia has been relatively little explored. Here we undertook a systematic study of nonverbal sound processing in patient groups with canonical dementia syndromes comprising clinically diagnosed typical amnestic Alzheimer's disease (AD; n=21), progressive nonfluent aphasia (PNFA; n=5), logopenic progressive aphasia (LPA; n=7) and aphasia in association with a progranulin gene mutation (GAA; n=1), and in healthy age-matched controls (n=20). Based on a cognitive framework treating complex sounds as 'auditory objects', we designed a novel neuropsychological battery to probe auditory object cognition at early perceptual (sub-object), object representational (apperceptive) and semantic levels. All patients had assessments of peripheral hearing and general neuropsychological functions in addition to the experimental auditory battery. While a number of aspects of auditory object analysis were impaired across patient groups and were influenced by general executive (working memory) capacity, certain auditory deficits had some specificity for particular dementia syndromes. Patients with AD had a disproportionate deficit of auditory apperception but preserved timbre processing. Patients with PNFA had salient deficits of timbre and auditory semantic processing, but intact auditory size and apperceptive processing. Patients with LPA had a generalised auditory deficit that was influenced by working memory function. In contrast, the patient with GAA showed substantial preservation of auditory function, but a mild deficit of pitch direction processing and a more severe deficit of auditory apperception. The findings provide evidence for separable stages of auditory object analysis and separable profiles of impaired auditory object cognition in different dementia syndromes. PMID:21689671

  17. Using the time Petri net formalism for specification, validation, and code generation in robot-control applications

    SciTech Connect

    Montano, L.; Garcia, F.J.; Villarroel, J.L.

    2000-01-01

    The main objective of this paper is to show the advantages of using the time Petri net formalism for specification, validation, and code generation in robot-control applications. To achieve this objective, the authors consider as application the development of a control system for a mobile robot with a rotating rangefinder laser sensor with two degrees of freedom to be used in navigation tasks with obstacle avoidance. It is shown how the use of the time Petri net formalism in the whole development cycle can fulfill the reliability requirements of real-time systems, make system development easy and quick, and strongly reduce the time for the testing and tuning phases, thereby reducing development cost significantly. It allows verification of functional and temporal requirements, error detection in the early stages of the development cycle, and automatic code generation, avoiding coding mistakes. Experimental tests show that the theoretical results obtained from the analysis of formal system models match the real-time behavior of the robotic system.
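    The core execution cycle of the formalism, a transition is enabled when its input places hold tokens and firing moves tokens after a delay, can be sketched as a toy interpreter. The sensor-read/obstacle-avoidance net below is invented for illustration; real time Petri net tools additionally handle firing intervals, state-class analysis, and code generation.

    ```python
    # Toy time Petri net interpreter (single-server, deterministic delays).
    class TimePetriNet:
        def __init__(self, marking):
            self.marking = dict(marking)          # place -> token count
            self.clock = 0.0

        def enabled(self, inputs):
            return all(self.marking.get(p, 0) > 0 for p in inputs)

        def fire(self, inputs, outputs, delay):
            """Consume one token per input place, produce one per output place."""
            if not self.enabled(inputs):
                raise RuntimeError("transition not enabled")
            for p in inputs:
                self.marking[p] -= 1
            for p in outputs:
                self.marking[p] = self.marking.get(p, 0) + 1
            self.clock += delay                   # time consumed by the firing

    net = TimePetriNet({"idle": 1})
    net.fire(["idle"], ["scanning"], delay=0.02)      # start a laser scan
    net.fire(["scanning"], ["avoiding"], delay=0.05)  # obstacle detected
    ```

    Because enabling is checked before every firing, illegal control-flow sequences are rejected at simulation time, which is the kind of early error detection the paper attributes to the formalism.
    
    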

  18. Stochastic undersampling steepens auditory threshold/duration functions: implications for understanding auditory deafferentation and aging

    PubMed Central

    Marmel, Frédéric; Rodríguez-Mendoza, Medardo A.; Lopez-Poveda, Enrique A.

    2015-01-01

    It has long been known that some listeners experience hearing difficulties out of proportion with their audiometric losses. Notably, some older adults as well as auditory neuropathy patients have temporal-processing and speech-in-noise intelligibility deficits not accountable for by elevated audiometric thresholds. The study of these hearing deficits has been revitalized by recent studies that show that auditory deafferentation comes with aging and can occur even in the absence of an audiometric loss. The present study builds on the stochastic undersampling principle proposed by Lopez-Poveda and Barrios (2013) to account for the perceptual effects of auditory deafferentation. Auditory threshold/duration functions were measured for broadband noises that were stochastically undersampled to varying degrees. Stimuli with and without undersampling were equated for overall energy in order to focus on the changes that undersampling elicited in the stimulus waveforms, rather than on its effects on overall stimulus energy. Stochastic undersampling impaired the detection of short sounds (<20 ms). The detection of long sounds (>50 ms) did not change or improved, depending on the degree of undersampling. The results for short sounds show that stochastic undersampling, and hence presumably deafferentation, can account for the steeper threshold/duration functions observed in auditory neuropathy patients and older adults with (near) normal audiometry. This suggests that deafferentation might be diagnosed using pure-tone audiometry with short tones. It further suggests that the auditory system of audiometrically normal older listeners might not be “slower than normal”, as is commonly thought, but simply less well afferented. Finally, the results for both short and long sounds support the probabilistic theories of detectability that challenge the idea that auditory threshold occurs by integration of sound energy over time. PMID:26029098
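    The stimulus manipulation described above is straightforward to sketch: randomly zero a fraction of a broadband-noise waveform, then rescale so the undersampled and intact stimuli carry equal energy. The sample rate, duration, and undersampling fraction below are illustrative, not the study's parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    fs, dur = 44100, 0.02                     # a 20-ms broadband noise burst
    noise = rng.standard_normal(int(fs * dur))

    def undersample(x, drop_fraction, rng):
        """Stochastically zero a fraction of samples, then equate total energy."""
        keep = rng.random(x.size) >= drop_fraction
        y = x * keep                          # zeroed samples model lost afferents
        return y * np.sqrt(np.sum(x ** 2) / np.sum(y ** 2))

    sparse = undersample(noise, drop_fraction=0.7, rng=rng)
    ```

    Equating energy after zeroing is the step that isolates the waveform change from a simple level change, mirroring the control described in the abstract.
    
    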

  19. Attention Modulates the Auditory Cortical Processing of Spatial and Category Cues in Naturalistic Auditory Scenes

    PubMed Central

    Renvall, Hanna; Staeren, Noël; Barz, Claudia S.; Ley, Anke; Formisano, Elia

    2016-01-01

    This combined fMRI and MEG study investigated brain activations during listening and attending to natural auditory scenes. We first recorded, using in-ear microphones, vocal non-speech sounds, and environmental sounds that were mixed to construct auditory scenes containing two concurrent sound streams. During the brain measurements, subjects attended to one of the streams while spatial acoustic information of the scene was either preserved (stereophonic sounds) or removed (monophonic sounds). Compared to monophonic sounds, stereophonic sounds evoked larger blood-oxygenation-level-dependent (BOLD) fMRI responses in the bilateral posterior superior temporal areas, independent of which stimulus attribute the subject was attending to. This finding is consistent with the functional role of these regions in the (automatic) processing of auditory spatial cues. Additionally, significant differences in the cortical activation patterns depending on the target of attention were observed. Bilateral planum temporale and inferior frontal gyrus were preferentially activated when attending to stereophonic environmental sounds, whereas when subjects attended to stereophonic voice sounds, the BOLD responses were larger at the bilateral middle superior temporal gyrus and sulcus, previously reported to show voice sensitivity. In contrast, the time-resolved MEG responses were stronger for mono- than stereophonic sounds in the bilateral auditory cortices at ~360 ms after the stimulus onset when attending to the voice excerpts within the combined sounds. The observed effects suggest that during the segregation of auditory objects from the auditory background, spatial sound cues together with other relevant temporal and spectral cues are processed in an attention-dependent manner at the cortical locations generally involved in sound recognition. 
More synchronous neuronal activation during monophonic than stereophonic sound processing, as well as (local) neuronal inhibitory mechanisms in

  20. Substructure of Functional Network for Auditory Stream Segregation in Auditory Cortex

    NASA Astrophysics Data System (ADS)

    Noda, Takahiro; Kanzaki, Ryohei; Takahashi, Hirokazu

    Perceptual integration and segregation of an alternating tone sequence differing in frequency (ABA-ABA-…) depend on the frequency differences (ΔFs) between the A and B tones and the inter-tone intervals (ITIs) between successive tones. In the auditory cortex, tonotopic separation, forward suppression and multisecond habituation have been considered as possible neural mechanisms of this perceptual phenomenon. These mechanisms, however, cannot completely account for van Noorden's perceptual boundary and the temporally continuous perception of auditory streaming. Here we examined the temporal changes of the functional network properties in the auditory cortex in response to tone sequences with different ΔFs and ITIs. Specifically, we recorded local field potentials using microelectrode arrays from anesthetized rats and constructed the functional network based on phase synchrony in gamma-band oscillations. Consequently, the networks consisted of sub-networks highly correlated with the place code of frequency, i.e., the tonotopic map, and the sub-network selective to B tones lasted for a prolonged period at large ΔF. Such characteristic substructures of the functional network are a possible neural basis of auditory stream segregation.
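    A standard way to quantify the gamma-band phase synchrony such networks are built from is the phase-locking value (PLV) between two recording sites, sketched below with an FFT-based analytic signal. The signals are synthetic; the 40 Hz carrier and sampling rate are illustrative choices, not the study's recording parameters.

    ```python
    import numpy as np

    def analytic(x):
        """Analytic signal via the FFT (discrete Hilbert transform)."""
        n = x.size
        X = np.fft.fft(x)
        h = np.zeros(n)
        h[0] = 1
        h[1:(n + 1) // 2] = 2             # double positive frequencies
        if n % 2 == 0:
            h[n // 2] = 1                 # keep the Nyquist bin as-is
        return np.fft.ifft(X * h)

    def plv(x, y):
        """Phase-locking value: 1 for a constant phase lag, ~0 for random phase."""
        dphi = np.angle(analytic(x)) - np.angle(analytic(y))
        return np.abs(np.mean(np.exp(1j * dphi)))

    t = np.arange(2048) / 1000.0                   # 1 kHz sampling
    ref = np.sin(2 * np.pi * 40 * t)               # 40 Hz reference channel
    locked = np.sin(2 * np.pi * 40 * t + 0.4)      # same rhythm, fixed phase lag
    noise = np.random.default_rng(2).standard_normal(t.size)

    plv_locked = plv(ref, locked)
    plv_noise = plv(ref, noise)
    ```

    A fixed phase lag yields a PLV near 1, while an unrelated broadband signal yields a much smaller value; thresholding such pairwise PLVs is one common way to define the edges of a functional network.
    
    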

  1. The auditory characteristics of children with inner auditory canal stenosis.

    PubMed

    Ai, Yu; Xu, Lei; Li, Li; Li, Jianfeng; Luo, Jianfen; Wang, Mingming; Fan, Zhaomin; Wang, Haibo

    2016-07-01

    Conclusions This study shows that the prevalence of auditory neuropathy spectrum disorder (ANSD) in children with inner auditory canal (IAC) stenosis is much higher than in those without IAC stenosis, regardless of whether they have other inner ear anomalies. In addition, the auditory characteristics of ANSD with IAC stenosis are significantly different from those of ANSD without any middle or inner ear malformations. Objectives To describe the auditory characteristics of children with IAC stenosis and to examine whether a narrow inner auditory canal is associated with ANSD. Method A total of 21 children with inner auditory canal stenosis participated in this study, and a series of auditory tests was administered. A comparative study of the auditory characteristics of ANSD was then conducted, based on whether the children had isolated IAC stenosis. Results Wave V of the auditory brainstem response (ABR) was not observed in any of the patients, while a cochlear microphonic (CM) response was detected in 81.1% of ears with stenotic IAC. Sixteen of 19 (84.2%) ears with isolated IAC stenosis had a CM response present on ABR waveforms. There was no significant difference in ANSD characteristics between the children with and without isolated IAC stenosis. PMID:26981851

  2. Central auditory disorders: toward a neuropsychology of auditory objects

    PubMed Central

    Goll, Johanna C.; Crutch, Sebastian J.; Warren, Jason D.

    2012-01-01

    Purpose of review Analysis of the auditory environment, source identification and vocal communication all require efficient brain mechanisms for disambiguating, representing and understanding complex natural sounds as ‘auditory objects’. Failure of these mechanisms leads to a diverse spectrum of clinical deficits. Here we review current evidence concerning the phenomenology, mechanisms and brain substrates of auditory agnosias and related disorders of auditory object processing. Recent findings Analysis of lesions causing auditory object deficits has revealed certain broad anatomical correlations: deficient parsing of the auditory scene is associated with lesions involving the parieto-temporal junction, while selective disorders of sound recognition occur with more anterior temporal lobe or extra-temporal damage. Distributed neural networks have been increasingly implicated in the pathogenesis of such disorders as developmental dyslexia, congenital amusia and tinnitus. Auditory category deficits may arise from defective interaction of spectrotemporal encoding and executive and mnestic processes. Dedicated brain mechanisms are likely to process specialised sound objects such as voices and melodies. Summary Emerging empirical evidence suggests a clinically relevant, hierarchical and fractionated neuropsychological model of auditory object processing that provides a framework for understanding auditory agnosias and makes specific predictions to direct future work. PMID:20975559

  3. GASPS: A time-dependent, one-dimensional, planar gas dynamics computer code

    SciTech Connect

    Pierce, R.E.; Sutton, S.B.; Comfort, W.J. III

    1986-12-05

    GASP is a transient, one-dimensional planar gas dynamic computer code that can be used to calculate the propagation of a shock wave. GASP, developed at LLNL, solves the one-dimensional planar equations governing momentum, mass and energy conservation. The equations are cast in an Eulerian formulation where the mesh is fixed in space, and material flows through it. Thus it is necessary to account for convection of material from one cell to its neighbor.
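The Eulerian finite-volume approach the abstract describes (a fixed mesh, conservation of mass, momentum, and energy, with material convected between neighbouring cells) can be illustrated with a minimal sketch. This is not the GASP code itself, which is not publicly available; it is a generic one-dimensional planar gas-dynamics example using the simple, diffusive Lax-Friedrichs scheme on the classic Sod shock-tube problem:

```python
# Minimal 1-D planar Eulerian gas-dynamics sketch (Lax-Friedrichs scheme).
# Illustrative only: shows conservation of mass, momentum and energy on a
# fixed mesh with material convected between neighbouring cells.
GAMMA = 1.4

def flux(U):
    """Euler fluxes for the conserved state U = (rho, rho*u, E)."""
    rho, mom, E = U
    u = mom / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * u * u)
    return (mom, mom * u + p, (E + p) * u)

def step(cells, dt_dx):
    """One Lax-Friedrichs update over a list of conserved-variable tuples."""
    new = [cells[0]]  # fixed boundary cells
    for i in range(1, len(cells) - 1):
        fl, fr = flux(cells[i - 1]), flux(cells[i + 1])
        new.append(tuple(
            0.5 * (cells[i - 1][k] + cells[i + 1][k])
            - 0.5 * dt_dx * (fr[k] - fl[k]) for k in range(3)))
    new.append(cells[-1])
    return new

def sod_tube(n=200, steps=80, dt_dx=0.2):
    """Classic Sod shock tube: high pressure on the left, low on the right."""
    left = (1.0, 0.0, 1.0 / (GAMMA - 1.0))      # rho=1,     u=0, p=1
    right = (0.125, 0.0, 0.1 / (GAMMA - 1.0))   # rho=0.125, u=0, p=0.1
    cells = [left] * (n // 2) + [right] * (n - n // 2)
    for _ in range(steps):
        cells = step(cells, dt_dx)
    return cells
```

Running `sod_tube()` propagates a shock into the low-density half of the mesh; a production code such as GASP would use a less diffusive scheme and proper boundary treatments.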

  4. Space, time, and numbers in the right posterior parietal cortex: Differences between response code associations and congruency effects.

    PubMed

    Riemer, Martin; Diersch, Nadine; Bublatzky, Florian; Wolbers, Thomas

    2016-04-01

    The mental representations of space, time, and number magnitude are inherently linked. The right posterior parietal cortex (PPC) has been suggested to contain a general magnitude system that underlies the overlap between various perceptual dimensions. However, comparative studies including spatial, temporal, and numerical dimensions are missing. In a unified paradigm, we compared the impact of right PPC inhibition on associations with spatial response codes (i.e., Simon, SNARC, and STARC effects) and on congruency effects between space, time, and numbers. Prolonged cortical inhibition was induced by continuous theta-burst stimulation (cTBS), a protocol for transcranial magnetic stimulation (TMS), at the right intraparietal sulcus (IPS). Our results show that congruency effects, but not response code associations, are affected by right PPC inhibition, indicating different neuronal mechanisms underlying these effects. Furthermore, the results demonstrate that interactions between space and time perception are reflected in congruency effects, but not in an association between time and spatial response codes. Taken together, these results implicate that the congruency between purely perceptual dimensions is processed in PPC areas along the IPS, while the congruency between percepts and behavioral responses is independent of this region. PMID:26808331

  5. Early hominin auditory capacities.

    PubMed

    Quam, Rolf; Martínez, Ignacio; Rosa, Manuel; Bonmatí, Alejandro; Lorenzo, Carlos; de Ruiter, Darryl J; Moggi-Cecchi, Jacopo; Conde Valverde, Mercedes; Jarabo, Pilar; Menter, Colin G; Thackeray, J Francis; Arsuaga, Juan Luis

    2015-09-01

    Studies of sensory capacities in past life forms have offered new insights into their adaptations and lifeways. Audition is particularly amenable to study in fossils because it is strongly related to physical properties that can be approached through their skeletal structures. We have studied the anatomy of the outer and middle ear in the early hominin taxa Australopithecus africanus and Paranthropus robustus and estimated their auditory capacities. Compared with chimpanzees, the early hominin taxa are derived toward modern humans in their slightly shorter and wider external auditory canal, smaller tympanic membrane, and lower malleus/incus lever ratio, but they remain primitive in the small size of their stapes footplate. Compared with chimpanzees, both early hominin taxa show a heightened sensitivity to frequencies between 1.5 and 3.5 kHz and an occupied band of maximum sensitivity that is shifted toward slightly higher frequencies. The results have implications for sensory ecology and communication, and suggest that the early hominin auditory pattern may have facilitated an increased emphasis on short-range vocal communication in open habitats. PMID:26601261

  6. Early hominin auditory capacities

    PubMed Central

    Quam, Rolf; Martínez, Ignacio; Rosa, Manuel; Bonmatí, Alejandro; Lorenzo, Carlos; de Ruiter, Darryl J.; Moggi-Cecchi, Jacopo; Conde Valverde, Mercedes; Jarabo, Pilar; Menter, Colin G.; Thackeray, J. Francis; Arsuaga, Juan Luis

    2015-01-01

    Studies of sensory capacities in past life forms have offered new insights into their adaptations and lifeways. Audition is particularly amenable to study in fossils because it is strongly related to physical properties that can be approached through their skeletal structures. We have studied the anatomy of the outer and middle ear in the early hominin taxa Australopithecus africanus and Paranthropus robustus and estimated their auditory capacities. Compared with chimpanzees, the early hominin taxa are derived toward modern humans in their slightly shorter and wider external auditory canal, smaller tympanic membrane, and lower malleus/incus lever ratio, but they remain primitive in the small size of their stapes footplate. Compared with chimpanzees, both early hominin taxa show a heightened sensitivity to frequencies between 1.5 and 3.5 kHz and an occupied band of maximum sensitivity that is shifted toward slightly higher frequencies. The results have implications for sensory ecology and communication, and suggest that the early hominin auditory pattern may have facilitated an increased emphasis on short-range vocal communication in open habitats. PMID:26601261

  7. Using Facebook to Reach People Who Experience Auditory Hallucinations

    PubMed Central

    Brian, Rachel Marie; Ben-Zeev, Dror

    2016-01-01

Background Auditory hallucinations (eg, hearing voices) are relatively common and underreported false sensory experiences that may produce distress and impairment. A large proportion of those who experience auditory hallucinations go unidentified and untreated. Traditional engagement methods oftentimes fall short in reaching the diverse population of people who experience auditory hallucinations. Objective The objective of this proof-of-concept study was to examine the viability of leveraging Web-based social media as a method of engaging people who experience auditory hallucinations and to evaluate their attitudes toward using social media platforms as a resource for Web-based support and technology-based treatment. Methods We used Facebook advertisements to recruit individuals who experience auditory hallucinations to complete an 18-item Web-based survey focused on issues related to auditory hallucinations and technology use in American adults. We systematically tested multiple elements of the advertisement and survey layout including image selection, survey pagination, question ordering, and advertising targeting strategy. Each element was evaluated sequentially and the most cost-effective strategy was implemented in the subsequent steps, eventually deriving an optimized approach. Three open-ended question responses were analyzed using conventional inductive content analysis. Coded responses were quantified into binary codes, and frequencies were then calculated. Results Recruitment netted a total sample of N=264 over a 6-week period. Ninety-seven participants fully completed all measures at a total cost of $8.14 per participant across testing phases. Systematic adjustments to advertisement design, survey layout, and targeting strategies improved data quality and cost efficiency. People were willing to provide information on what triggered their auditory hallucinations along with strategies they use to cope, as well as provide suggestions to others who experience

  8. Auditory interfaces: The human perceiver

    NASA Technical Reports Server (NTRS)

    Colburn, H. Steven

    1991-01-01

    A brief introduction to the basic auditory abilities of the human perceiver with particular attention toward issues that may be important for the design of auditory interfaces is presented. The importance of appropriate auditory inputs to observers with normal hearing is probably related to the role of hearing as an omnidirectional, early warning system and to its role as the primary vehicle for communication of strong personal feelings.

  9. Color-Coded Prefilled Medication Syringes Decrease Time to Delivery and Dosing Error in Simulated Emergency Department Pediatric Resuscitations

    PubMed Central

    Moreira, Maria E.; Hernandez, Caleb; Stevens, Allen D.; Jones, Seth; Sande, Margaret; Blumen, Jason R.; Hopkins, Emily; Bakes, Katherine; Haukoos, Jason S.

    2016-01-01

    Study objective The Institute of Medicine has called on the US health care system to identify and reduce medical errors. Unfortunately, medication dosing errors remain commonplace and may result in potentially life-threatening outcomes, particularly for pediatric patients when dosing requires weight-based calculations. Novel medication delivery systems that may reduce dosing errors resonate with national health care priorities. Our goal was to evaluate novel, prefilled medication syringes labeled with color-coded volumes corresponding to the weight-based dosing of the Broselow Tape, compared with conventional medication administration, in simulated pediatric emergency department (ED) resuscitation scenarios. Methods We performed a prospective, block-randomized, crossover study in which 10 emergency physician and nurse teams managed 2 simulated pediatric arrest scenarios in situ, using either prefilled, color-coded syringes (intervention) or conventional drug administration methods (control). The ED resuscitation room and the intravenous medication port were video recorded during the simulations. Data were extracted from video review by blinded, independent reviewers. Results Median time to delivery of all doses for the conventional and color-coded delivery groups was 47 seconds (95% confidence interval [CI] 40 to 53 seconds) and 19 seconds (95% CI 18 to 20 seconds), respectively (difference=27 seconds; 95% CI 21 to 33 seconds). With the conventional method, 118 doses were administered, with 20 critical dosing errors (17%); with the color-coded method, 123 doses were administered, with 0 critical dosing errors (difference=17%; 95% CI 4% to 30%). Conclusion A novel color-coded, prefilled syringe decreased time to medication administration and significantly reduced critical dosing errors by emergency physician and nurse teams during simulated pediatric ED resuscitations. PMID:25701295

  10. Effects of gait training with rhythmic auditory stimulation on gait ability in stroke patients

    PubMed Central

    Song, Gui-bin; Ryu, Hyo Jeong

    2016-01-01

[Purpose] The purpose of this study was to compare the gait abilities and motor recovery in stroke patients following overground gait training with or without rhythmic auditory stimulation. [Subjects and Methods] Forty patients with hemiplegia resulting from stroke were divided into a rhythmic auditory stimulation gait training group (n=20) and a gait training group (n=20). Both groups performed gait training, with rhythmic auditory stimulation added during gait training in the rhythmic auditory stimulation gait training group. The gait training was performed in 30-minute sessions, five times a week, for a total of four weeks. [Results] Gait ability significantly improved in both groups, and the rhythmic auditory stimulation gait training group showed greater increases in cadence, step length, and Dynamic Gait Index. [Conclusion] The results of this study showed that gait training with rhythmic auditory stimulation was more effective at improving gait ability. PMID:27313339

  11. A Dynamic Compressive Gammachirp Auditory Filterbank

    PubMed Central

    Irino, Toshio; Patterson, Roy D.

    2008-01-01

It is now common to use knowledge about human auditory processing in the development of audio signal processors. Until recently, however, such systems were limited by their linearity. The auditory filter system is known to be level-dependent, as evidenced by psychophysical data on masking, compression, and two-tone suppression. However, there were no analysis/synthesis schemes with nonlinear filterbanks. This paper describes such a scheme based on the compressive gammachirp (cGC) auditory filter. It was developed to extend the gammatone filter concept to accommodate the changes in psychophysical filter shape that are observed to occur with changes in stimulus level in simultaneous, tone-in-noise masking. In models of simultaneous noise masking, the temporal dynamics of the filtering can be ignored. Analysis/synthesis systems, however, are intended for use with speech sounds, where the glottal cycle can be long with respect to auditory time constants, and so they require specification of the temporal dynamics of the auditory filter. In this paper, we describe a fast-acting level control circuit for the cGC filter and show how psychophysical data involving two-tone suppression and compression can be used to estimate the parameter values for this dynamic version of the cGC filter (referred to as the “dcGC” filter). One important advantage of analysis/synthesis systems with a dcGC filterbank is that they can inherit previously refined signal processing algorithms developed with conventional short-time Fourier transforms (STFTs) and linear filterbanks. PMID:19330044
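The gammachirp family underlying the cGC/dcGC filters has a closed-form impulse response: a gamma envelope multiplied by a carrier whose phase includes a logarithmic chirp term; with chirp parameter c = 0 it reduces to the familiar gammatone. A minimal sketch (the parameter values below are conventional defaults for illustration, not the dcGC level-control circuit described in the paper):

```python
import math

def erb(fc_hz):
    """Equivalent rectangular bandwidth, Glasberg & Moore approximation."""
    return 24.7 * (4.37 * fc_hz / 1000.0 + 1.0)

def gammachirp(fc_hz, fs_hz, n=4, b=1.019, c=0.0, dur_s=0.025):
    """Sampled gammachirp impulse response; c=0 yields a plain gammatone.

    g(t) = t^(n-1) * exp(-2*pi*b*ERB(fc)*t) * cos(2*pi*fc*t + c*ln(t))
    """
    out = []
    for k in range(1, int(dur_s * fs_hz) + 1):
        t = k / fs_hz
        env = t ** (n - 1) * math.exp(-2.0 * math.pi * b * erb(fc_hz) * t)
        out.append(env * math.cos(2.0 * math.pi * fc_hz * t + c * math.log(t)))
    return out
```

The gamma envelope peaks a few milliseconds after onset and then decays, which is why such filters are good models of the auditory filter's temporal response.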

  12. A parallel code to calculate rate-state seismicity evolution induced by time dependent, heterogeneous Coulomb stress changes

    NASA Astrophysics Data System (ADS)

    Cattania, C.; Khalid, F.

    2016-09-01

    The estimation of space and time-dependent earthquake probabilities, including aftershock sequences, has received increased attention in recent years, and Operational Earthquake Forecasting systems are currently being implemented in various countries. Physics based earthquake forecasting models compute time dependent earthquake rates based on Coulomb stress changes, coupled with seismicity evolution laws derived from rate-state friction. While early implementations of such models typically performed poorly compared to statistical models, recent studies indicate that significant performance improvements can be achieved by considering the spatial heterogeneity of the stress field and secondary sources of stress. However, the major drawback of these methods is a rapid increase in computational costs. Here we present a code to calculate seismicity induced by time dependent stress changes. An important feature of the code is the possibility to include aleatoric uncertainties due to the existence of multiple receiver faults and to the finite grid size, as well as epistemic uncertainties due to the choice of input slip model. To compensate for the growth in computational requirements, we have parallelized the code for shared memory systems (using OpenMP) and distributed memory systems (using MPI). Performance tests indicate that these parallelization strategies lead to a significant speedup for problems with different degrees of complexity, ranging from those which can be solved on standard multicore desktop computers, to those requiring a small cluster, to a large simulation that can be run using up to 1500 cores.
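The rate-state seismicity evolution law that such codes build on is commonly Dieterich's (1994) closed-form response of seismicity rate to a Coulomb stress step under constant background stressing. A sketch of that single-step, single-receiver case (the code described in the abstract additionally handles time-dependent, heterogeneous stress fields and uncertainties, which this ignores):

```python
import math

def seismicity_rate(t, dcfs, a_sigma, stressing_rate, r_background=1.0):
    """Dieterich (1994) seismicity rate after a single Coulomb stress step.

    t               time since the stress step (same unit as t_a)
    dcfs            Coulomb stress change (e.g. MPa)
    a_sigma         rate-state constitutive parameter A*sigma (MPa)
    stressing_rate  background stressing rate (MPa per time unit)
    """
    t_a = a_sigma / stressing_rate            # characteristic decay time
    gamma = (math.exp(-dcfs / a_sigma) - 1.0) * math.exp(-t / t_a)
    return r_background / (1.0 + gamma)
```

A positive stress step boosts the rate instantaneously by exp(dcfs / a_sigma); the rate then relaxes back to background over the aftershock duration t_a, an Omori-like decay.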

  13. Idealized computational models for auditory receptive fields.

    PubMed

    Lindeberg, Tony; Friberg, Anders

    2015-01-01

    We present a theory by which idealized models of auditory receptive fields can be derived in a principled axiomatic manner, from a set of structural properties to (i) enable invariance of receptive field responses under natural sound transformations and (ii) ensure internal consistency between spectro-temporal receptive fields at different temporal and spectral scales. For defining a time-frequency transformation of a purely temporal sound signal, it is shown that the framework allows for a new way of deriving the Gabor and Gammatone filters as well as a novel family of generalized Gammatone filters, with additional degrees of freedom to obtain different trade-offs between the spectral selectivity and the temporal delay of time-causal temporal window functions. When applied to the definition of a second-layer of receptive fields from a spectrogram, it is shown that the framework leads to two canonical families of spectro-temporal receptive fields, in terms of spectro-temporal derivatives of either spectro-temporal Gaussian kernels for non-causal time or a cascade of time-causal first-order integrators over the temporal domain and a Gaussian filter over the logspectral domain. For each filter family, the spectro-temporal receptive fields can be either separable over the time-frequency domain or be adapted to local glissando transformations that represent variations in logarithmic frequencies over time. Within each domain of either non-causal or time-causal time, these receptive field families are derived by uniqueness from the assumptions. It is demonstrated how the presented framework allows for computation of basic auditory features for audio processing and that it leads to predictions about auditory receptive fields with good qualitative similarity to biological receptive fields measured in the inferior colliculus (ICC) and primary auditory cortex (A1) of mammals. PMID:25822973
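The separable spectro-temporal receptive fields described here, spectro-temporal derivatives of Gaussian kernels, can be sketched as an outer product of a temporal and a spectral Gaussian-derivative kernel. This is a toy illustration of the non-causal Gaussian family only; the time-causal first-order-integrator family and the glissando adaptation are not shown:

```python
import math

def gauss_deriv(x, sigma, order):
    """1-D Gaussian (order 0) or its first/second derivative."""
    g = math.exp(-x * x / (2.0 * sigma * sigma)) / (sigma * math.sqrt(2.0 * math.pi))
    if order == 0:
        return g
    if order == 1:
        return -x / (sigma * sigma) * g
    return (x * x / sigma ** 4 - 1.0 / sigma ** 2) * g

def separable_strf(times, freqs, t0, sigma_t, dt_order, f0, sigma_f, df_order):
    """Separable spectro-temporal receptive field: outer product of a
    temporal and a (log-)spectral Gaussian-derivative kernel."""
    return [[gauss_deriv(t - t0, sigma_t, dt_order) *
             gauss_deriv(f - f0, sigma_f, df_order)
             for f in freqs] for t in times]
```

Choosing derivative orders (dt_order, df_order) selects the receptive field type: (1, 0) gives an onset/offset detector in time, (0, 2) a spectral on-center/off-surround profile, and so on.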

  14. Idealized Computational Models for Auditory Receptive Fields

    PubMed Central

    Lindeberg, Tony; Friberg, Anders

    2015-01-01

    We present a theory by which idealized models of auditory receptive fields can be derived in a principled axiomatic manner, from a set of structural properties to (i) enable invariance of receptive field responses under natural sound transformations and (ii) ensure internal consistency between spectro-temporal receptive fields at different temporal and spectral scales. For defining a time-frequency transformation of a purely temporal sound signal, it is shown that the framework allows for a new way of deriving the Gabor and Gammatone filters as well as a novel family of generalized Gammatone filters, with additional degrees of freedom to obtain different trade-offs between the spectral selectivity and the temporal delay of time-causal temporal window functions. When applied to the definition of a second-layer of receptive fields from a spectrogram, it is shown that the framework leads to two canonical families of spectro-temporal receptive fields, in terms of spectro-temporal derivatives of either spectro-temporal Gaussian kernels for non-causal time or a cascade of time-causal first-order integrators over the temporal domain and a Gaussian filter over the logspectral domain. For each filter family, the spectro-temporal receptive fields can be either separable over the time-frequency domain or be adapted to local glissando transformations that represent variations in logarithmic frequencies over time. Within each domain of either non-causal or time-causal time, these receptive field families are derived by uniqueness from the assumptions. It is demonstrated how the presented framework allows for computation of basic auditory features for audio processing and that it leads to predictions about auditory receptive fields with good qualitative similarity to biological receptive fields measured in the inferior colliculus (ICC) and primary auditory cortex (A1) of mammals. PMID:25822973

  15. Formation of associations in auditory cortex by slow changes of tonic firing.

    PubMed

    Brosch, Michael; Selezneva, Elena; Scheich, Henning

    2011-01-01

    We review event-related slow firing changes in the auditory cortex and related brain structures. Two types of changes can be distinguished, namely increases and decreases of firing, lasting in the order of seconds. Triggering events can be auditory stimuli, reinforcers, and behavioral responses. Slow firing changes terminate with reinforcers and possibly with auditory stimuli and behavioral responses. A necessary condition for the emergence of slow firing changes seems to be that subjects have learnt that consecutive sensory or behavioral events are contingent on reinforcement. They disappear when the contingencies are no longer present. Slow firing changes in auditory cortex bear similarities with slow changes of neuronal activity that have been observed in subcortical parts of the auditory system and in other non-sensory brain structures. We propose that slow firing changes in auditory cortex provide a neuronal mechanism for anticipating, memorizing, and associating events that are related to hearing and of behavioral relevance. This may complement the representation of the timing and types of auditory and auditory-related events which may be provided by phasic responses in auditory cortex. The presence of slow firing changes indicates that many more auditory-related aspects of a behavioral procedure are reflected in the neuronal activity of auditory cortex than previously assumed. PMID:20488230

  16. Negative emotion provides cues for orienting auditory spatial attention

    PubMed Central

    Asutay, Erkin; Västfjäll, Daniel

    2015-01-01

    The auditory stimuli provide information about the objects and events around us. They can also carry biologically significant emotional information (such as unseen dangers and conspecific vocalizations), which provides cues for allocation of attention and mental resources. Here, we investigated whether task-irrelevant auditory emotional information can provide cues for orientation of auditory spatial attention. We employed a covert spatial orienting task: the dot-probe task. In each trial, two task-irrelevant auditory cues were simultaneously presented at two separate locations (left–right or front–back). Environmental sounds were selected to form emotional vs. neutral, emotional vs. emotional, and neutral vs. neutral cue pairs. The participants’ task was to detect the location of an acoustic target that was presented immediately after the task-irrelevant auditory cues. The target was presented at the same location as one of the auditory cues. The results indicated that participants were significantly faster to locate the target when it replaced the negative cue compared to when it replaced the neutral cue. The positive cues did not produce a clear attentional bias. Further, same valence pairs (emotional–emotional or neutral–neutral) did not modulate reaction times due to a lack of spatial attention capture by one cue in the pair. Taken together, the results indicate that negative affect can provide cues for the orientation of spatial attention in the auditory domain. PMID:26029149

  17. Negative emotion provides cues for orienting auditory spatial attention.

    PubMed

    Asutay, Erkin; Västfjäll, Daniel

    2015-01-01

    The auditory stimuli provide information about the objects and events around us. They can also carry biologically significant emotional information (such as unseen dangers and conspecific vocalizations), which provides cues for allocation of attention and mental resources. Here, we investigated whether task-irrelevant auditory emotional information can provide cues for orientation of auditory spatial attention. We employed a covert spatial orienting task: the dot-probe task. In each trial, two task-irrelevant auditory cues were simultaneously presented at two separate locations (left-right or front-back). Environmental sounds were selected to form emotional vs. neutral, emotional vs. emotional, and neutral vs. neutral cue pairs. The participants' task was to detect the location of an acoustic target that was presented immediately after the task-irrelevant auditory cues. The target was presented at the same location as one of the auditory cues. The results indicated that participants were significantly faster to locate the target when it replaced the negative cue compared to when it replaced the neutral cue. The positive cues did not produce a clear attentional bias. Further, same valence pairs (emotional-emotional or neutral-neutral) did not modulate reaction times due to a lack of spatial attention capture by one cue in the pair. Taken together, the results indicate that negative affect can provide cues for the orientation of spatial attention in the auditory domain. PMID:26029149

  18. ACT-ARA: Code System for the Calculation of Changes in Radiological Source Terms with Time

    Energy Science and Technology Software Center (ESTSC)

    1988-02-01

    The program calculates the source term activity as a function of time for parent isotopes as well as daughters. Also, at each time, the "probable release" is produced. Finally, the program determines the time integrated probable release for each isotope over the time period of interest.
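The parent-daughter activity evolution such a code computes follows the Bateman equations; below is a sketch of the simplest two-member chain with no initial daughter inventory (ACT-ARA itself handles longer chains and the probable-release weighting, which this omits):

```python
import math

def bateman_daughter(n_parent0, lam_p, lam_d, t):
    """Number of daughter atoms at time t for a parent -> daughter chain
    (Bateman solution, zero initial daughter inventory)."""
    if lam_p == lam_d:
        # Degenerate case: equal decay constants.
        return n_parent0 * lam_p * t * math.exp(-lam_p * t)
    return (n_parent0 * lam_p / (lam_d - lam_p)
            * (math.exp(-lam_p * t) - math.exp(-lam_d * t)))

def activity(n_atoms, lam):
    """Activity A = lambda * N."""
    return lam * n_atoms
```

The daughter inventory rises from zero, peaks at t = ln(lam_d/lam_p) / (lam_d - lam_p), and then decays; time-integrating `activity(...)` over the period of interest corresponds to the integrated-release step the abstract describes.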

  19. Auditory Discrimination and Auditory Sensory Behaviours in Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Jones, Catherine R. G.; Happe, Francesca; Baird, Gillian; Simonoff, Emily; Marsden, Anita J. S.; Tregay, Jenifer; Phillips, Rebecca J.; Goswami, Usha; Thomson, Jennifer M.; Charman, Tony

    2009-01-01

    It has been hypothesised that auditory processing may be enhanced in autism spectrum disorders (ASD). We tested auditory discrimination ability in 72 adolescents with ASD (39 childhood autism; 33 other ASD) and 57 IQ and age-matched controls, assessing their capacity for successful discrimination of the frequency, intensity and duration…

  20. Ion channel noise can explain firing correlation in auditory nerves.

    PubMed

    Moezzi, Bahar; Iannella, Nicolangelo; McDonnell, Mark D

    2016-10-01

    Neural spike trains are commonly characterized as a Poisson point process. However, the Poisson assumption is a poor model for spiking in auditory nerve fibres because it is known that interspike intervals display positive correlation over long time scales and negative correlation over shorter time scales. We have therefore developed a biophysical model based on the well-known Meddis model of the peripheral auditory system, to produce simulated auditory nerve fibre spiking statistics that more closely match the firing correlations observed in empirical data. We achieve this by introducing biophysically realistic ion channel noise to an inner hair cell membrane potential model that includes fractal fast potassium channels and deterministic slow potassium channels. We succeed in producing simulated spike train statistics that match empirically observed firing correlations. Our model thus replicates macro-scale stochastic spiking statistics in the auditory nerve fibres due to modeling stochasticity at the micro-scale of potassium channels. PMID:27480847
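The interval statistics at issue can be illustrated by comparing a homogeneous Poisson spike train, whose interspike intervals (ISIs) are serially uncorrelated, with a train whose rate drifts slowly, which induces the positive long-time-scale correlation the abstract mentions. This toy model does not reproduce the short-time-scale negative correlation, which in the paper arises from biophysical channel dynamics:

```python
import math
import random

def isis(rate_fn, n, rng):
    """Draw n interspike intervals; interval k is exponential with rate rate_fn(k)."""
    return [rng.expovariate(rate_fn(k)) for k in range(n)]

def serial_correlation(x, lag=1):
    """Pearson correlation between interval k and interval k+lag."""
    a, b = x[:-lag], x[lag:]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    sa = math.sqrt(sum((u - ma) ** 2 for u in a))
    sb = math.sqrt(sum((v - mb) ** 2 for v in b))
    return cov / (sa * sb)

rng = random.Random(1)
# Homogeneous Poisson: constant 100 Hz rate, ISIs uncorrelated.
poisson = isis(lambda k: 100.0, 20000, rng)
# Slowly drifting rate (alternating every 100 intervals): adjacent ISIs
# share the same underlying rate, producing positive serial correlation.
slow = isis(lambda k: 20.0 if (k // 100) % 2 else 200.0, 20000, rng)
```

This captures why a plain Poisson model fails for auditory nerve data: any slow stochastic modulation of excitability (here a caricature of fractal potassium-channel noise) correlates neighbouring intervals.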

  1. Auditory spatial localization: Developmental delay in children with visual impairments.

    PubMed

    Cappagli, Giulia; Gori, Monica

    2016-01-01

For individuals with visual impairments, auditory spatial localization is one of the most important abilities for navigating the environment. Many works suggest that blind adults show similar or even enhanced performance in the localization of auditory cues compared to sighted adults (Collignon, Voss, Lassonde, & Lepore, 2009). To date, the investigation of auditory spatial localization in children with visual impairments has provided contrasting results. Here we report, for the first time, that contrary to visually impaired adults, children with low vision or total blindness show a significant impairment in the localization of static sounds. These results suggest that performance on simple auditory spatial tasks is compromised in visually impaired children, and that this capacity recovers over time. PMID:27002960

  2. Typical BWR/4 MSIV closure ATWS analysis using RAMONA-3B code with space-time neutron kinetics

    SciTech Connect

    Neymotin, L.; Saha, P.

    1984-01-01

A best-estimate analysis of a typical BWR/4 MSIV closure ATWS has been performed using the RAMONA-3B code with three-dimensional neutron kinetics. All safety features, namely, the safety and relief valves, recirculation pump trip, high-pressure safety injections and the standby liquid control system (boron injection), were assumed to work as designed. No other operator action was assumed. The results show a strong spatial dependence of reactor power during the transient. After the initial peak of pressure and reactor power, the reactor vessel pressure oscillated between the relief valve set points, and the reactor power oscillated between 20 and 50% of the steady-state power until the hot shutdown condition was reached at approximately 1400 seconds. The suppression pool bulk water temperature at this time was predicted to be approximately 96°C (205°F). In view of code performance and reasonable computer running time, the RAMONA-3B code is recommended for further best-estimate analyses of ATWS-type events in BWRs.

  3. Detection of almond allergen coding sequences in processed foods by real time PCR.

    PubMed

    Prieto, Nuria; Iniesto, Elisa; Burbano, Carmen; Cabanillas, Beatriz; Pedrosa, Mercedes M; Rovira, Mercè; Rodríguez, Julia; Muzquiz, Mercedes; Crespo, Jesus F; Cuadrado, Carmen; Linacero, Rosario

    2014-06-18

The aim of this work was to develop and analytically validate a quantitative RT-PCR method, using novel primer sets designed on the Pru du 1, Pru du 3, Pru du 4, and Pru du 6 allergen-coding sequences, and to contrast the sensitivity and specificity of these probes. The influence of temperature and/or pressure processing on the ability to detect these almond allergen targets was also analyzed. All primers allowed a specific and accurate amplification of these sequences. The specificity was assessed by amplifying DNA from almond, different Prunus species, and other common plant food ingredients. The detection limit was 1 ppm in unprocessed almond kernels. The method's robustness and sensitivity were confirmed using spiked samples. Thermal treatment under pressure (autoclave) reduced the yield and amplificability of almond DNA; however, high-hydrostatic-pressure treatments did not produce such effects. Compared with ELISA assay outcomes, this RT-PCR showed higher sensitivity in detecting almond traces in commercial foodstuffs. PMID:24857239

  4. Static Network Code DGPS Positioning vs. Carrier Phase Single Baseline Solutions for Short Observation Time and Medium-Long Distances

    NASA Astrophysics Data System (ADS)

    Bakuła, M.

GPS land surveys are usually based on processed GPS carrier-phase data, while code (pseudorange) observations are preferred in navigation and some GIS applications for reasons of accuracy requirements and robustness. The accuracy of code positioning is generally in the range of about 1-2 meters. The main problem in code GPS positioning, however, is estimating the real accuracy of DGPS positions, which is not straightforward when a single reference station is used: most commercial software reports only positions, with no accuracy values. DGPS positions without estimated errors cannot be used for surveying tasks or for most GIS applications, since every point must have a determined accuracy. In static GPS positioning, by contrast, accuracy is determined both during baseline processing and in the subsequent adjustment of the GPS network. These validation steps, with redundancy in classical static phase baseline solutions, allow wide use of static or rapid-static methods in the main land-surveying tasks, although they are not always effective for short observation sessions. This paper presents a new network DGPS positioning approach that uses at least three reference stations and provides reliable accuracy estimation based on the variance-covariance matrix of the least-squares calculations. Finally, network DGPS positioning is compared with static baseline solutions, with five-minute sessions considered for two different rover stations.
It was shown that in a short observation time of GPS positioning, code network DGPS results can give even centimetre accuracy and can be more reliable than static relative phase positioning where gross
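The variance-covariance accuracy estimation described can be sketched in its simplest form: one coordinate estimated by weighted least squares from several reference-station DGPS solutions, with the standard deviation of the estimate read off the inverse normal matrix. This is a toy one-dimensional illustration, not the paper's full network adjustment:

```python
import math

def weighted_position(solutions, sigmas):
    """Least-squares estimate of one coordinate from several DGPS solutions.

    solutions  coordinate values from independent reference stations
    sigmas     a-priori standard deviations of each solution
    Returns (estimate, standard deviation of the estimate), the latter
    taken from the variance-covariance of the least-squares solution,
    i.e. Qxx = (A^T W A)^-1, which here is a 1x1 matrix.
    """
    weights = [1.0 / s ** 2 for s in sigmas]
    n_tw = sum(weights)                      # normal "matrix" (scalar here)
    est = sum(w * x for w, x in zip(weights, solutions)) / n_tw
    return est, math.sqrt(1.0 / n_tw)
```

With three or more reference stations the redundancy makes the estimate self-checking: a station delivering a grossly wrong correction shows up as a large residual, which is exactly the validation property the abstract contrasts with single-station DGPS.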

  5. Hypermnesia using auditory input.

    PubMed

    Allen, J

    1992-07-01

    The author investigated whether hypermnesia would occur with auditory input. In addition, the author examined the effects of subjects' knowledge that they would later be asked to recall the stimuli. Two groups of 26 subjects each were given three successive recall trials after they listened to an audiotape of 59 high-imagery nouns. The subjects in the uninformed group were not told that they would later be asked to remember the words; those in the informed group were. Hypermnesia was evident, but only in the uninformed group. PMID:1447564

  6. Auditory neglect and related disorders.

    PubMed

    Gutschalk, Alexander; Dykstra, Andrew

    2015-01-01

    Neglect is a neurologic disorder, typically associated with lesions of the right hemisphere, in which patients are biased towards their ipsilesional - usually right - side of space while awareness for their contralesional - usually left - side is reduced or absent. Neglect is a multimodal disorder that often includes deficits in the auditory domain. Classically, auditory extinction, in which left-sided sounds that are correctly perceived in isolation are not detected in the presence of synchronous right-sided stimulation, has been considered the primary sign of auditory neglect. However, auditory extinction can also be observed after unilateral auditory cortex lesions and is thus not specific for neglect. Recent research has shown that patients with neglect are also impaired in maintaining sustained attention, on both sides, a fact that is reflected by an impairment of auditory target detection in continuous stimulation conditions. Perhaps the most impressive auditory symptom in full-blown neglect is alloacusis, in which patients mislocalize left-sided sound sources to their right, although even patients with less severe neglect still often show disturbance of auditory spatial perception, most commonly a lateralization bias towards the right. We discuss how these various disorders may be explained by a single model of neglect and review emerging interventions for patient rehabilitation. PMID:25726290

  7. Word Recognition in Auditory Cortex

    ERIC Educational Resources Information Center

    DeWitt, Iain D. J.

    2013-01-01

    Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…

  8. Explaining the high voice superiority effect in polyphonic music: evidence from cortical evoked potentials and peripheral auditory models.

    PubMed

    Trainor, Laurel J; Marie, Céline; Bruce, Ian C; Bidelman, Gavin M

    2014-02-01

    Natural auditory environments contain multiple simultaneously-sounding objects and the auditory system must parse the incoming complex sound wave they collectively create into parts that represent each of these individual objects. Music often similarly requires processing of more than one voice or stream at the same time, and behavioral studies demonstrate that human listeners show a systematic perceptual bias in processing the highest voice in multi-voiced music. Here, we review studies utilizing event-related brain potentials (ERPs), which support the notions that (1) separate memory traces are formed for two simultaneous voices (even without conscious awareness) in auditory cortex and (2) adults show more robust encoding (i.e., larger ERP responses) to deviant pitches in the higher than in the lower voice, indicating better encoding of the former. Furthermore, infants also show this high-voice superiority effect, suggesting that the perceptual dominance observed across studies might result from neurophysiological characteristics of the peripheral auditory system. Although musically untrained adults show smaller responses in general than musically trained adults, both groups similarly show a more robust cortical representation of the higher than of the lower voice. Finally, years of experience playing a bass-range instrument reduces but does not reverse the high voice superiority effect, indicating that although it can be modified, it is not highly neuroplastic. Results of new modeling experiments examined the possibility that characteristics of middle-ear filtering and cochlear dynamics (e.g., suppression) reflected in auditory nerve firing patterns might account for the higher-voice superiority effect. Simulations show that both place and temporal AN coding schemes well-predict a high-voice superiority across a wide range of interval spacings and registers. Collectively, we infer an innate, peripheral origin for the higher-voice superiority observed in human

  9. Saccades create similar mislocalizations in visual and auditory space.

    PubMed

    Krüger, Hannah M; Collins, Thérèse; Englitz, Bernhard; Cavanagh, Patrick

    2016-04-01

    Orienting our eyes to a light, a sound, or a touch occurs effortlessly, despite the fact that sound and touch have to be converted from head- and body-based coordinates to eye-based coordinates to do so. We asked whether the oculomotor representation is also used for localization of sounds even when there is no saccade to the sound source. To address this, we examined whether saccades introduced similar errors of localization judgments for both visual and auditory stimuli. Sixteen subjects indicated the direction of a visual or auditory apparent motion seen or heard between two targets presented either during fixation or straddling a saccade. Compared with the fixation baseline, saccades introduced errors in direction judgments for both visual and auditory stimuli: in both cases, apparent motion judgments were biased in direction of the saccade. These saccade-induced effects across modalities give rise to the possibility of shared, cross-modal location coding for perception and action. PMID:26888101

  10. Temporal factors affecting somatosensory–auditory interactions in speech processing

    PubMed Central

    Ito, Takayuki; Gracco, Vincent L.; Ostry, David J.

    2014-01-01

    Speech perception is known to rely on both auditory and visual information. However, sound-specific somatosensory input has been shown also to influence speech perceptual processing (Ito et al., 2009). In the present study, we addressed further the relationship between somatosensory information and speech perceptual processing by addressing the hypothesis that the temporal relationship between orofacial movement and sound processing contributes to somatosensory–auditory interaction in speech perception. We examined the changes in event-related potentials (ERPs) in response to multisensory synchronous (simultaneous) and asynchronous (90 ms lag and lead) somatosensory and auditory stimulation compared to individual unisensory auditory and somatosensory stimulation alone. We used a robotic device to apply facial skin somatosensory deformations that were similar in timing and duration to those experienced in speech production. Following synchronous multisensory stimulation the amplitude of the ERP was reliably different from the two unisensory potentials. More importantly, the magnitude of the ERP difference varied as a function of the relative timing of the somatosensory–auditory stimulation. Event-related activity change due to stimulus timing was seen between 160 and 220 ms following somatosensory onset, mostly around the parietal area. The results demonstrate a dynamic modulation of somatosensory–auditory convergence and suggest that the contribution of somatosensory information to speech processing is dependent on the specific temporal order of sensory inputs in speech production. PMID:25452733

  11. Learning Dictionaries of Sparse Codes of 3D Movements of Body Joints for Real-Time Human Activity Understanding

    PubMed Central

    Qi, Jin; Yang, Zhiyong

    2014-01-01

    Real-time human activity recognition is essential for human-robot interactions for assisted healthy independent living. Most previous work in this area has been performed on traditional two-dimensional (2D) videos, using both global and local methods. Since 2D videos are sensitive to changes of lighting condition, view angle, and scale, researchers began to explore applications of 3D information in human activity understanding in recent years. Unfortunately, features that work well on 2D videos usually do not perform well on 3D videos, and there is no consensus on what 3D features should be used. Here we propose a model of human activity recognition based on 3D movements of body joints. Our method has three steps: learning dictionaries of sparse codes of 3D movements of joints, sparse coding, and classification. In the first step, space-time volumes of 3D movements of body joints are obtained via dense sampling, and independent component analysis is then performed to construct a dictionary of sparse codes for each activity. In the second step, the space-time volumes are projected to the dictionaries and a set of sparse histograms of the projection coefficients are constructed as feature representations of the activities. Finally, the sparse histograms are used as inputs to a support vector machine to recognize human activities. We tested this model on three databases of human activities and found that it outperforms the state-of-the-art algorithms. Thus, this model can be used for real-time human activity recognition in many applications. PMID:25473850
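    The three-step pipeline described above (dictionary learning, projection-coefficient histograms, classification) can be sketched on synthetic data. The toy below substitutes an SVD for the paper's ICA dictionary step and a nearest-centroid rule for the SVM, so as to stay dependency-free; data, dimensions, and names are illustrative only, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

D_DIM, N_ATOMS, N_TRAIN = 20, 5, 40   # toy "space-time volume" size

def make_class(mean_dir, n):
    # Synthetic stand-in for joint-movement volumes of one activity class.
    return mean_dir + 0.3 * rng.standard_normal((n, D_DIM))

def learn_dictionary(volumes):
    # Leading right-singular vectors serve as dictionary atoms
    # (a stand-in for the ICA step in the paper).
    _, _, vt = np.linalg.svd(volumes - volumes.mean(0), full_matrices=False)
    return vt[:N_ATOMS]

def features(volume, dicts):
    # Histogram of projection coefficients onto every class dictionary.
    coeffs = np.concatenate([d @ volume for d in dicts])
    hist, _ = np.histogram(coeffs, bins=8, range=(-3, 3))
    return hist / hist.sum()

dirs = [np.zeros(D_DIM), np.ones(D_DIM)]          # two toy activities
train = [make_class(d, N_TRAIN) for d in dirs]
dicts = [learn_dictionary(t) for t in train]
centroids = [np.mean([features(v, dicts) for v in t], axis=0) for t in train]

def classify(volume):
    # Nearest-centroid classification (stand-in for the paper's SVM).
    f = features(volume, dicts)
    return int(np.argmin([np.linalg.norm(f - c) for c in centroids]))

acc = float(np.mean([classify(v) == c for c in (0, 1) for v in train[c]]))
```

    The histogram-of-projections feature is the essential idea: activities differ in how their volumes distribute across the learned atoms, not in any single coefficient.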

  13. Auditory Model: Effects on Learning under Blocked and Random Practice Schedules

    ERIC Educational Resources Information Center

    Han, Dong-Wook; Shea, Charles H.

    2008-01-01

    An experiment was conducted to determine the impact of an auditory model on blocked, random, and mixed practice schedules of three five-segment timing sequences (relative time constant). We were interested in whether or not the auditory model differentially affected the learning of relative and absolute timing under blocked and random practice.…

  14. Central auditory development in children with cochlear implants: clinical implications.

    PubMed

    Sharma, Anu; Dorman, Michael F

    2006-01-01

    A common finding in developmental neurobiology is that stimulation must be delivered to a sensory system within a narrow window of time (a sensitive period) during development in order for that sensory system to develop normally. Experiments with congenitally deaf children have allowed us to establish the existence and time limits of a sensitive period for the development of central auditory pathways in humans. Using the latency of cortical auditory evoked potentials (CAEPs) as a measure we have found that central auditory pathways are maximally plastic for a period of about 3.5 years. If the stimulation is delivered within that period CAEP latencies reach age-normal values within 3-6 months after stimulation. However, if stimulation is withheld for more than 7 years, CAEP latencies decrease significantly over a period of approximately 1 month following the onset of stimulation. They then remain constant or change very slowly over months or years. The lack of development of the central auditory system in congenitally deaf children implanted after 7 years is correlated with relatively poor development of speech and language skills [Geers, this vol, pp 50-65]. Animal models suggest that the primary auditory cortex may be functionally decoupled from higher order auditory cortex due to restricted development of inter- and intracortical connections in late-implanted children [Kral and Tillein, this vol, pp 89-108]. Another aspect of plasticity that works against late-implanted children is the reorganization of higher order cortex by other sensory modalities (e.g. vision). The hypothesis of decoupling of primary auditory cortex from higher order auditory cortex in children deprived of sound for a long time may explain the speech perception and oral language learning difficulties of children who receive an implant after the end of the sensitive period. PMID:16891837

  15. Neuromechanistic Model of Auditory Bistability

    PubMed Central

    Rankin, James; Sussman, Elyse; Rinzel, John

    2015-01-01

    Sequences of higher frequency A and lower frequency B tones repeating in an ABA- triplet pattern are widely used to study auditory streaming. One may experience either an integrated percept, a single ABA-ABA- stream, or a segregated percept, separate but simultaneous streams A-A-A-A- and -B---B--. During minutes-long presentations, subjects may report irregular alternations between these interpretations. We combine neuromechanistic modeling and psychoacoustic experiments to study these persistent alternations and to characterize the effects of manipulating stimulus parameters. Unlike many phenomenological models with abstract, percept-specific competition and fixed inputs, our network model comprises neuronal units with sensory feature dependent inputs that mimic the pulsatile-like A1 responses to tones in the ABA- triplets. It embodies a neuronal computation for percept competition thought to occur beyond primary auditory cortex (A1). Mutual inhibition, adaptation and noise are implemented. We include slow NMDA recurrent excitation for local temporal memory that enables linkage across sound gaps from one triplet to the next. Percepts in our model are identified in the firing patterns of the neuronal units. We predict with the model that manipulations of the frequency difference between tones A and B should affect the dominance durations of the stronger percept, the one dominant a larger fraction of time, more than those of the weaker percept—a property that has been previously established and generalized across several visual bistable paradigms. We confirm the qualitative prediction with our psychoacoustic experiments and use the behavioral data to further constrain and improve the model, achieving quantitative agreement between experimental and modeling results. Our work and model provide a platform that can be extended to consider other stimulus conditions, including the effects of context and volition. PMID:26562507
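    The core mechanism named in the abstract (mutual inhibition plus slow adaptation plus noise) is enough to produce spontaneous percept alternations. The minimal two-unit rate model below illustrates that mechanism in the generic textbook form; it is not the authors' network, and all parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Units 0 and 1 stand for the "integrated" and "segregated" percepts.
# Each unit receives fixed drive, inhibition from its competitor,
# negative feedback from its own slow adaptation variable, and noise.

def simulate(T=20000, dt=1.0, beta=1.5, phi=0.6, tau_a=800.0, noise=0.08):
    r = np.array([0.5, 0.3])            # firing rates
    a = np.zeros(2)                     # slow adaptation variables
    dominant = np.empty(T, dtype=int)
    for t in range(T):
        inp = 1.0 - beta * r[::-1] - phi * a + noise * rng.standard_normal(2)
        drive = np.clip(inp, 0.0, None)     # threshold-linear gain
        r += dt / 10.0 * (-r + drive)       # fast rate dynamics
        a += dt / tau_a * (-a + r)          # slow adaptation dynamics
        dominant[t] = int(r[1] > r[0])
    return dominant

dom = simulate()
switches = int(np.sum(dom[1:] != dom[:-1]))
```

    The dominant unit slowly adapts, its inhibition of the competitor weakens, and the suppressed unit eventually escapes: exactly the alternation cycle, with noise making the dominance durations irregular.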

  17. Principles of auditory information-processing derived from neuroethology.

    PubMed

    Suga, N

    1989-09-01

    For auditory imaging, a bat emits orientation sounds (pulses) and listens to echoes. The parameters characterizing a pulse-echo pair each convey particular types of biosonar information. For example, a Doppler shift (a difference in frequency between an emitted pulse and its echo) carries velocity information. For a 61-kHz sound, a 1.0-kHz Doppler shift corresponds to a velocity of 2.8 m/s. The delay of the echo from the pulse conveys distance (range) information. A 1.0-ms echo delay corresponds to a target distance of 17 cm. The auditory system of the mustached bat, Pteronotus parnelli, from Central America solves the computational problems in analyzing these parameters by creating maps in the cerebral cortex. The pulse of the mustached bat is complex. It consists of four harmonics, each of which contains a long constant-frequency (CF) component and a short frequency-modulated (FM) component. Therefore, there are eight components in the emitted pulse (CF1-4 and FM1-4). The CF signal is particularly suited for target velocity measurement, whereas the FM signal is suited for target distance measurement. Since the eight components differ from each other in frequency, they are analyzed in parallel at different regions of the basilar membrane in the inner ear. Then, they are separately coded by primary auditory neurons and are sent up to the auditory cortex through several auditory nuclei. During the ascent of the signals through these auditory nuclei, neurons responding to the FM components process range information, while other neurons responding to the CF components process velocity information. A comparison of the data obtained from the mustached bat with those obtained from other species illustrates both the specialized neural mechanisms specific to the bat's auditory system, and the general neural mechanisms which are probably shared with many different types of animals. PMID:2689566
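    The two numbers quoted above follow from elementary echolocation formulas. The short check below assumes a speed of sound of about 343 m/s in air and the standard two-way (echo) forms of the Doppler and range equations:

```python
SPEED_OF_SOUND = 343.0  # m/s, ordinary air (assumed)

def velocity_from_doppler(f_emitted_hz, doppler_shift_hz):
    # Two-way Doppler: shift = 2 * v * f / c  ->  v = c * shift / (2 * f)
    return SPEED_OF_SOUND * doppler_shift_hz / (2.0 * f_emitted_hz)

def range_from_delay(delay_s):
    # The echo travels out and back: r = c * t / 2
    return SPEED_OF_SOUND * delay_s / 2.0

v = velocity_from_doppler(61_000.0, 1_000.0)   # ~2.8 m/s, as in the abstract
r = range_from_delay(1.0e-3)                   # ~0.17 m, i.e. 17 cm
```

    The factor of two in both formulas reflects that the sound makes a round trip between bat and target.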

  18. Computer Code

    NASA Technical Reports Server (NTRS)

    1985-01-01

    COSMIC MINIVER, a computer code developed by NASA for analyzing aerodynamic heating and heat transfer on the Space Shuttle, has been used by Marquardt Company to analyze heat transfer on Navy/Air Force missile bodies. The code analyzes heat transfer by four different methods which can be compared for accuracy. MINIVER saved Marquardt three months in computer time and $15,000.

  19. A heterogeneous population code for elapsed time in rat medial agranular cortex

    PubMed Central

    Matell, Matthew S.; Shea-Brown, Eric; Gooch, Cindy; Wilson, A. George; Rinzel, John

    2010-01-01

    The neural mechanisms underlying the temporal control of behavior are largely unknown. Here we recorded from the medial agranular cortex in rats trained to respond on a temporal production procedure for probabilistically available food reward. Due to variability in estimating the time of food availability, robust responding typically bracketed the expected duration, starting some time before and ending some time after the signaled delay. This response period provided an analytic “steady-state” window during which the subject actively timed their behavior. Remarkably, during these response periods, a variety of firing patterns were seen which could be broadly described as ramps, peaks, and dips, with different slopes, directions, and times at which maxima or minima occur. Regularized linear discriminant analysis indicated that these patterns provided sufficiently reliable information to discriminate the elapsed duration of responding within these response periods. Modeling this across neuron variability showed that the utilization of ramps, dips, and peaks with different slopes and minimal/maximal rates at different times led to a substantial improvement in temporal prediction errors, suggesting that heterogeneity in the neural representation of elapsed time may facilitate temporally controlled behavior. PMID:21319888
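    The claim that a heterogeneous bank of ramps, peaks, and dips supports reliable decoding of elapsed time can be sketched on synthetic rates. The toy below uses an ordinary least-squares linear readout as a stand-in for the paper's regularized linear discriminant analysis; tuning shapes, counts, and noise levels are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

n_each = 8                                   # ramps, peaks, dips per type
slopes = rng.uniform(-2.0, 2.0, n_each)      # ramp slopes (up and down)
peak_c = rng.uniform(0.0, 1.0, n_each)       # peak times
dip_c = rng.uniform(0.0, 1.0, n_each)        # dip times

def rates(t):
    # Heterogeneous population: linear ramps, Gaussian peaks, and dips.
    ramp = np.outer(t, slopes)
    peak = np.exp(-((t[:, None] - peak_c) ** 2) / 0.02)
    dip = 1.0 - np.exp(-((t[:, None] - dip_c) ** 2) / 0.02)
    return np.hstack([ramp, peak, dip])

ts = np.linspace(0.0, 1.0, 50)               # elapsed time within the window
R = rates(ts) + 0.05 * rng.standard_normal((ts.size, 3 * n_each))
X = np.hstack([R, np.ones((ts.size, 1))])    # population rates + intercept
w, *_ = np.linalg.lstsq(X, ts, rcond=None)   # linear decoder of elapsed time
t_hat = X @ w
err = float(np.sqrt(np.mean((t_hat - ts) ** 2)))
```

    Because the three response shapes have their extrema at different times, their weighted combination pins down elapsed time much better than any single shape could, which is the paper's central point about heterogeneity.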

  20. A new MIMO SAR system based on Alamouti space-time coding scheme and OFDM-LFM waveform design

    NASA Astrophysics Data System (ADS)

    Shi, Xiaojin; Zhang, Yunhua

    2015-10-01

    In recent years, multiple-input multiple-output (MIMO) radar has attracted much attention from researchers and institutions. MIMO radar transmits multiple signals and receives the backscattered signals reflected from the targets. In contrast with conventional phased-array radar and SAR systems, a MIMO radar system has significant potential advantages: higher system signal-to-noise ratio (SNR), more accurate parameter estimation, and higher-resolution radar images. In this paper, we propose a new MIMO SAR system based on the Alamouti space-time coding scheme and orthogonal frequency division multiplexing linear frequency modulated (OFDM-LFM) waveforms, for obtaining higher system SNR and better range resolution of the SAR image.
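    The Alamouti scheme named in the title is the standard 2x1 space-time block code: two symbols are sent from two antennas over two time slots and recovered by simple linear combining. The sketch below shows that textbook encode/decode algebra with generic complex symbols over a noise-free flat-fading channel; it does not reproduce the paper's OFDM-LFM SAR waveforms.

```python
import numpy as np

rng = np.random.default_rng(3)

def alamouti_encode(s1, s2):
    # Rows are time slots, columns are transmit antennas.
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

def alamouti_decode(r1, r2, h1, h2):
    # r1, r2: received samples in slots 1 and 2; h1, h2: channel gains.
    g = abs(h1) ** 2 + abs(h2) ** 2
    s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g
    s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g
    return s1_hat, s2_hat

# Two QPSK symbols through a random flat-fading channel, no noise.
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)
h = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)
tx = alamouti_encode(s1, s2)
r1 = tx[0, 0] * h[0] + tx[0, 1] * h[1]   # slot 1 at the single receiver
r2 = tx[1, 0] * h[0] + tx[1, 1] * h[1]   # slot 2 at the single receiver
s1_hat, s2_hat = alamouti_decode(r1, r2, h[0], h[1])
```

    The orthogonal structure of the code makes the two symbol estimates decouple exactly, which is what gives Alamouti coding full transmit diversity with only linear processing.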

  1. Tally and geometry definition influence on the computing time in radiotherapy treatment planning with MCNP Monte Carlo code.

    PubMed

    Juste, B; Miro, R; Gallardo, S; Santos, A; Verdu, G

    2006-01-01

    The present work has simulated the photon and electron transport in a Theratron 780 (MDS Nordion) (60)Co radiotherapy unit, using the Monte Carlo transport code, MCNP (Monte Carlo N-Particle), version 5. In order to become computationally more efficient in view of taking part in the practical field of radiotherapy treatment planning, this work is focused mainly on the analysis of dose results and on the required computing time of different tallies applied in the model to speed up calculations. PMID:17946330

  2. Simulation of planar integrated photonics devices with the LLNL time-domain finite-difference code suite

    SciTech Connect

    McLeod, R.; Hawkins, R.J.; Kallman, J.S.

    1991-04-01

    Interest has recently grown in applying microwave modeling techniques to optical circuit modeling. One of the simplest, yet most powerful, microwave simulation techniques is the finite-difference time-domain (FDTD) algorithm. In this technique, the differential form of the time-domain Maxwell's equations is discretized and all derivatives are approximated as differences. Minor algebraic manipulations of the resulting equations produce a set of update equations that give the fields at one time step from the fields at the previous time step. The FDTD algorithm, then, is quite simple. Source fields are launched into the discrete grid by some means, and the FDTD equations advance these fields in time. At the boundaries of the grid, special update equations called radiation conditions are applied that approximate a continuing, infinite space. Because virtually no assumptions are made in the development of the FDTD method, the algorithm can represent a wide range of physical effects: waves can propagate in any direction, multiple reflections within structures can cause resonances, and multiple modes of various polarizations can be launched, each of which may generate within the device an infinite spectrum of bound and radiation modes. The ability to model these types of general physical effects is what makes the FDTD method interesting to the field of optics. In this paper, we discuss the application of the finite-difference time-domain technique to integrated optics. Animations will be shown of the simulations of a TE coupler, a TM grating, and a TE integrated detector. 3 refs., 1 fig.

  3. Monte Carlo N-Particle Transport Code System To Simulate Time-Analysis Quantities.

    SciTech Connect

    PADOVANI, ENRICO

    2012-04-15

    Version: 00 US DOE 10CFR810 Jurisdiction. The Monte Carlo simulation of correlation measurements that rely on the detection of fast neutrons and photons from fission requires that particle emissions and interactions following a fission event be described as close to reality as possible. The -PoliMi extension to MCNP and to MCNPX was developed to simulate correlated-particle and the subsequent interactions as close as possible to the physical behavior. Initially, MCNP-PoliMi, a modification of MCNP4C, was developed. The first version was developed in 2001-2002 and released in early 2004 to the Radiation Safety Information Computational Center (RSICC). It was developed for research purposes, to simulate correlated counts in organic scintillation detectors, sensitive to fast neutrons and gamma rays. Originally, the field of application was nuclear safeguards; however subsequent improvements have enhanced the ability to model measurements in other research fields as well. During 2010-2011 the -PoliMi modification was ported into MCNPX-2.7.0, leading to the development of MCNPX-PoliMi. Now the -PoliMi v2.0 modifications are distributed as a patch to MCNPX-2.7.0 which currently is distributed in the RSICC PACKAGE BCC-004 MCNP6_BETA2/MCNP5/MCNPX. Also included in the package is MPPost, a versatile code that provides simulated detector response. By taking advantage of the modifications in MCNPX-PoliMi, MPPost can provide an accurate simulation of the detector response for a variety of detection scenarios.

  5. Visual change detection recruits auditory cortices in early deafness.

    PubMed

    Bottari, Davide; Heimler, Benedetta; Caclin, Anne; Dalmolin, Anna; Giard, Marie-Hélène; Pavani, Francesco

    2014-07-01

    Although cross-modal recruitment of early sensory areas in deafness and blindness is well established, the constraints and limits of these plastic changes remain to be understood. In the case of human deafness, for instance, it is known that visual, tactile or visuo-tactile stimuli can elicit a response within the auditory cortices. Nonetheless, both the timing of these evoked responses and the functional contribution of cross-modally recruited areas remain to be ascertained. In the present study, we examined to what extent auditory cortices of deaf humans participate in high-order visual processes, such as visual change detection. By measuring visual ERPs, in particular the visual MisMatch Negativity (vMMN), and performing source localization, we show that individuals with early deafness (N=12) recruit the auditory cortices when a change in motion direction during shape deformation occurs in a continuous visual motion stream. Remarkably, this "auditory" response for visual events emerged with the same timing as the visual MMN in hearing controls (N=12), between 150 and 300 ms after the visual change. Furthermore, the recruitment of auditory cortices for visual change detection in early deaf was paired with a reduction of response within the visual system, indicating a shift of part of the computational process from visual to auditory cortices. The present study suggests that the deafened auditory cortices participate in extracting and storing visual information and in comparing upcoming visual events on-line, thus indicating that cross-modally recruited auditory cortices can reach this level of computation. PMID:24636881

  6. The importance of individual frequencies of endogenous brain oscillations for auditory cognition - A short review.

    PubMed

    Baltus, Alina; Herrmann, Christoph Siegfried

    2016-06-01

    Oscillatory EEG activity in the human brain with frequencies in the gamma range (approx. 30-80 Hz) is known to be relevant for a large number of cognitive processes. Interestingly, each subject reveals an individual frequency of the auditory gamma-band response (GBR) that coincides with the peak in the auditory steady state response (ASSR). A common resonance frequency of auditory cortex seems to underlie both the individual frequency of the GBR and the peak of the ASSR. This review sheds light on the functional role of oscillatory gamma activity for auditory processing. For successful processing, the auditory system has to track changes in auditory input over time and store information about past events in memory which allows the construction of auditory objects. Recent findings support the idea of gamma oscillations being involved in the partitioning of auditory input into discrete samples to facilitate higher order processing. We review experiments that seem to suggest that inter-individual differences in the resonance frequency are behaviorally relevant for gap detection and speech processing. A possible application of these resonance frequencies for brain computer interfaces is illustrated with regard to optimized individual presentation rates for auditory input to correspond with endogenous oscillatory activity. This article is part of a Special Issue entitled SI: Auditory working memory. PMID:26453287

  7. Pediatric central auditory processing disorder showing elevated threshold on pure tone audiogram.

    PubMed

    Maeda, Yukihide; Nakagawa, Atsuko; Nagayasu, Rie; Sugaya, Akiko; Omichi, Ryotaro; Kariya, Shin; Fukushima, Kunihiro; Nishizaki, Kazunori

    2016-10-01

    Central auditory processing disorder (CAPD) is a condition in which dysfunction in the central auditory system causes difficulty in listening to conversations, particularly under noisy conditions, despite normal peripheral auditory function. Central auditory testing is generally performed in patients with normal hearing on the pure tone audiogram (PTA). This report shows that diagnosis of CAPD is possible even in the presence of an elevated threshold on the PTA, provided that normal function of the peripheral auditory pathway is verified by distortion product otoacoustic emission (DPOAE), auditory brainstem response (ABR), and auditory steady state response (ASSR) testing. Three pediatric cases (9- and 10-year-old girls and an 8-year-old boy) of CAPD with elevated thresholds on PTAs are presented. The chief complaint was difficulty in listening to conversations. PTA showed elevated thresholds, but the responses and thresholds for DPOAE, ABR, and ASSR were normal, showing that peripheral auditory function was intact. Significant findings on central auditory tests such as dichotic speech tests, time-compressed speech tests, and binaural interaction tests confirmed the diagnosis of CAPD. These threshold shifts on the PTA may represent a new clinical manifestation of central auditory dysfunction in CAPD. PMID:26922127

  8. Cortical auditory disorders: clinical and psychoacoustic features.

    PubMed Central

    Mendez, M F; Geehan, G R

    1988-01-01

    The symptoms of two patients with bilateral cortical auditory lesions evolved from cortical deafness to other auditory syndromes: generalised auditory agnosia, amusia and/or pure word deafness, and a residual impairment of temporal sequencing. On investigation, both had dysacusis, absent middle latency evoked responses, acoustic errors in sound recognition and matching, inconsistent auditory behaviours, and similarly disturbed psychoacoustic discrimination tasks. These findings indicate that the different clinical syndromes caused by cortical auditory lesions form a spectrum of related auditory processing disorders. Differences between syndromes may depend on the degree of involvement of a primary cortical processing system, the more diffuse accessory system, and possibly the efferent auditory system. PMID:2450968

  9. Constructing Noise-Invariant Representations of Sound in the Auditory Pathway

    PubMed Central

    Rabinowitz, Neil C.; Willmore, Ben D. B.; King, Andrew J.; Schnupp, Jan W. H.

    2013-01-01

    Identifying behaviorally relevant sounds in the presence of background noise is one of the most important and poorly understood challenges faced by the auditory system. An elegant solution to this problem would be for the auditory system to represent sounds in a noise-invariant fashion. Since a major effect of background noise is to alter the statistics of the sounds reaching the ear, noise-invariant representations could be promoted by neurons adapting to stimulus statistics. Here we investigated the extent of neuronal adaptation to the mean and contrast of auditory stimulation as one ascends the auditory pathway. We measured these forms of adaptation by presenting complex synthetic and natural sounds, recording neuronal responses in the inferior colliculus and primary fields of the auditory cortex of anaesthetized ferrets, and comparing these responses with a sophisticated model of the auditory nerve. We find that the strength of both forms of adaptation increases as one ascends the auditory pathway. To investigate whether this adaptation to stimulus statistics contributes to the construction of noise-invariant sound representations, we also presented complex, natural sounds embedded in stationary noise, and used a decoding approach to assess the noise tolerance of the neuronal population code. We find that the code for complex sounds in the periphery is affected more by the addition of noise than the cortical code. We also find that noise tolerance is correlated with adaptation to stimulus statistics, so that populations that show the strongest adaptation to stimulus statistics are also the most noise-tolerant. This suggests that the increase in adaptation to sound statistics from auditory nerve to midbrain to cortex is an important stage in the construction of noise-invariant sound representations in the higher auditory brain. PMID:24265596

  10. Adaptation to Vocal Expressions Reveals Multistep Perception of Auditory Emotion

    PubMed Central

    Maurage, Pierre; Rouger, Julien; Latinus, Marianne; Belin, Pascal

    2014-01-01

    The human voice carries speech as well as important nonlinguistic signals that influence our social interactions. Among these cues that impact our behavior and communication with other people is the perceived emotional state of the speaker. A theoretical framework for the neural processing stages of emotional prosody has suggested that auditory emotion is perceived in multiple steps (Schirmer and Kotz, 2006) involving low-level auditory analysis and integration of the acoustic information followed by higher-level cognition. Empirical evidence for this multistep processing chain, however, is still sparse. We examined this question using functional magnetic resonance imaging and a continuous carry-over design (Aguirre, 2007) to measure brain activity while volunteers listened to non-speech-affective vocalizations morphed on a continuum between anger and fear. Analyses dissociated neuronal adaptation effects induced by similarity in perceived emotional content between consecutive stimuli from those induced by their acoustic similarity. We found that bilateral voice-sensitive auditory regions as well as right amygdala coded the physical difference between consecutive stimuli. In contrast, activity in bilateral anterior insulae, medial superior frontal cortex, precuneus, and subcortical regions such as bilateral hippocampi depended predominantly on the perceptual difference between morphs. Our results suggest that the processing of vocal affect recognition is a multistep process involving largely distinct neural networks. Amygdala and auditory areas predominantly code emotion-related acoustic information while more anterior insular and prefrontal regions respond to the abstract, cognitive representation of vocal affect. PMID:24920615

  11. Short Time-Scale Sensory Coding in S1 during Discrimination of Whisker Vibrotactile Sequences.

    PubMed

    McGuire, Leah M; Telian, Gregory; Laboy-Juárez, Keven J; Miyashita, Toshio; Lee, Daniel J; Smith, Katherine A; Feldman, Daniel E

    2016-08-01

    Rodent whisker input consists of dense microvibration sequences that are often temporally integrated for perceptual discrimination. Whether primary somatosensory cortex (S1) participates in temporal integration is unknown. We trained rats to discriminate whisker impulse sequences that varied in single-impulse kinematics (5-20-ms time scale) and mean speed (150-ms time scale). Rats appeared to use the integrated feature, mean speed, to guide discrimination in this task, consistent with similar prior studies. Despite this, 52% of S1 units, including 73% of units in L4 and L2/3, encoded sequences at fast time scales (≤20 ms, mostly 5-10 ms), accurately reflecting single impulse kinematics. 17% of units, mostly in L5, showed weaker impulse responses and a slow firing rate increase during sequences. However, these units did not effectively integrate whisker impulses, but instead combined weak impulse responses with a distinct, slow signal correlated to behavioral choice. A neural decoder could identify sequences from fast unit spike trains and behavioral choice from slow units. Thus, S1 encoded fast time scale whisker input without substantial temporal integration across whisker impulses. PMID:27574970
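The decoder mentioned in the abstract is not specified here; as an illustration of the general idea of identifying stimulus sequences from binned spike trains, a hypothetical nearest-neighbor template decoder on synthetic Poisson spike counts might look like the following. The two firing profiles and the 5-ms binning are invented for the example and are not the study's data.

```python
import numpy as np

def decode_sequence(trial, templates):
    """Toy nearest-neighbor decoder: classify a binned spike train by
    Euclidean distance to per-sequence mean templates (illustrative;
    the study's actual decoder may differ)."""
    dists = [np.linalg.norm(trial - t) for t in templates]
    return int(np.argmin(dists))

rng = np.random.default_rng(1)
# two hypothetical stimulus sequences with distinct firing profiles
# (mean spike counts per 5-ms bin)
profiles = np.array([[5, 1, 4, 1, 5, 1],
                     [1, 5, 1, 4, 1, 5]], dtype=float)
train = [rng.poisson(p, size=(50, 6)) for p in profiles]   # 50 trials each
templates = [t.mean(axis=0) for t in train]                # mean templates
test_trial = rng.poisson(profiles[0])                      # held-out trial
label = decode_sequence(test_trial, templates)
```

Decoding accuracy from such fast-time-scale templates versus slow rate templates is the kind of comparison the abstract describes.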

  12. An Effect of Spatial-Temporal Association of Response Codes: Understanding the Cognitive Representations of Time

    ERIC Educational Resources Information Center

    Vallesi, Antonino; Binns, Malcolm A.; Shallice, Tim

    2008-01-01

    The present study addresses the question of how such an abstract concept as time is represented by our cognitive system. Specifically, the aim was to assess whether temporal information is cognitively represented through left-to-right spatial coordinates, as already shown for other ordered sequences (e.g., numbers). In Experiment 1, the…

  14. Multigroup Time-Independent Neutron Transport Code System for Plane or Spherical Geometry.

    Energy Science and Technology Software Center (ESTSC)

    1986-12-01

    Version 00 PALLAS-PL/SP solves multigroup time-independent one-dimensional neutron transport problems in plane or spherical geometry. The problems solved are subject to a variety of boundary conditions or a distributed source. General anisotropic scattering problems are treated for solving deep-penetration problems in which angle-dependent neutron spectra are calculated in detail.

  15. LUMPED Unsteady: a Visual Basic ® code of unsteady-state lumped-parameter models for mean residence time analyses of groundwater systems

    NASA Astrophysics Data System (ADS)

    Ozyurt, N. Nur; Bayari, C. Serdar

    2005-04-01

    A Microsoft® Visual Basic 6.0 (Microsoft Corporation, 1987-1998) code implementing 9 lumped-parameter models of unsteady flow is presented for the analysis of mean residence time in aquifers. Groundwater flow systems obeying plug-flow and well-mixed flow models, and their combinations in parallel or serial connection, can be simulated by the code. Models can use tritium, tritiogenic He-3, oxygen-18, deuterium, krypton-85, chlorofluorocarbons (CFC-11, CFC-12 and CFC-113) and sulfur hexafluoride (SF6) as environmental tracers. The executable code runs under all 32-bit Windows operating systems. Details of the code are explained and its limitations are indicated.
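As a sketch of what one such lumped-parameter model computes, the well-mixed (exponential) model convolves the input tracer history with an exponential transit-time distribution, optionally decay-corrected for radioactive tracers such as tritium. This is the generic textbook formulation, not the LUMPED Unsteady code itself; the time step and residence time below are arbitrary example values.

```python
import numpy as np

def well_mixed_output(c_in, dt, tau, half_life=None):
    """Output tracer concentration for a well-mixed (exponential) reservoir.

    c_in      : array of input concentrations, one per time step
    dt        : time step (years)
    tau       : mean residence time (years)
    half_life : radioactive half-life in years (e.g. ~12.3 for tritium);
                None for a stable tracer
    """
    t = np.arange(len(c_in)) * dt
    g = np.exp(-t / tau) / tau          # exponential transit-time distribution
    if half_life is not None:
        g = g * np.exp(-np.log(2) * t / half_life)  # decay en route
    # discrete convolution of the input history with the weighting function
    return np.convolve(c_in, g)[: len(c_in)] * dt

# constant input of a stable tracer approaches the input value at steady state
c_out = well_mixed_output(np.ones(2000), dt=0.1, tau=5.0)
```

Fitting the modeled output to observed tracer concentrations is what yields the mean residence time estimate.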

  16. Design and simulation of programmable relational optoelectronic time-pulse coded processors as base elements for sorting neural networks

    NASA Astrophysics Data System (ADS)

    Krasilenko, Vladimir G.; Nikolsky, Alexander I.; Lazarev, Alexander A.; Lazareva, Maria V.

    2010-05-01

    In this paper we show that the biologically motivated concept of time-pulse encoding offers a set of advantages (a single methodological basis, universality, simplicity of tuning, learning and programming, among others) for the creation and design of sensor systems with parallel input-output and processing of 2D structures for hybrid and next-generation neuro-fuzzy neurocomputers. We describe design principles for programmable relational optoelectronic time-pulse coded processors based on continuous logic, order logic and temporal wave processes. We consider a structure that performs analog signal extraction and the sorting of analog and time-pulse coded variables. We propose an optoelectronic realization of such a basic relational order-logic element, consisting of time-pulse coded photoconverters (pulse-width and pulse-phase modulators) with direct and complementary outputs, a sorting network built from logic elements, and programmable commutation blocks. Simulation and experimental research yield the following estimates for devices and processors built from these base elements: optical input signal power 0.2 - 20 uW, processing time 1 - 10 us, supply voltage 1 - 3 V, power consumption 10 - 100 uW, with extended functional and learning possibilities. We discuss possible rules and principles for learning and for programmable tuning to a required function or relational operation, and the realization of hardware blocks for modifications of such processors. We show that sorting machines, neural networks and hybrid data-processing systems using untraditional numerical systems and picture operands can be created on the basis of such quasi-universal, simple hardware blocks with flexible programmable tuning.
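The sorting networks referred to above are fixed arrangements of two-input compare-exchange (order-logic) elements. A software analogue for four inputs, using the standard five-comparator network, illustrates the principle; this is a generic sorting-network sketch, not the authors' optoelectronic implementation.

```python
def sort4(values):
    """Sort 4 values with a fixed compare-exchange network -- the software
    analogue of a hardware sorting network built from two-input
    order-logic elements (illustrative only)."""
    v = list(values)
    # each pair (i, j) is one comparator: min stays at i, max goes to j
    for i, j in [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]:
        if v[i] > v[j]:
            v[i], v[j] = v[j], v[i]
    return v
```

Because the comparator sequence is fixed and data-independent, all comparators at the same depth can fire in parallel, which is what makes such networks attractive for optoelectronic hardware.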

  17. Binaural processing by the gecko auditory periphery

    PubMed Central

    Christensen-Dalsgaard, Jakob; Tang, Yezhong

    2011-01-01

    Lizards have highly directional ears, owing to strong acoustical coupling of the eardrums and almost perfect sound transmission from the contralateral ear. To investigate the neural processing of this remarkable tympanic directionality, we combined biophysical measurements of eardrum motion in the Tokay gecko with neurophysiological recordings from the auditory nerve. Laser vibrometry shows that their ear is a two-input system with approximately unity interaural transmission gain at the peak frequency (∼1.6 kHz). Median interaural delays are 260 μs, almost three times larger than predicted from gecko head size, suggesting interaural transmission may be boosted by resonances in the large, open mouth cavity (Vossen et al. 2010). Auditory nerve recordings are sensitive to both interaural time differences (ITD) and interaural level differences (ILD), reflecting the acoustical interactions of direct and indirect sound components at the eardrum. Best ITD and click delays match interaural transmission delays, with a range of 200–500 μs. Inserting a mold in the mouth cavity blocks ITD and ILD sensitivity. Thus the neural response accurately reflects tympanic directionality, and most neurons in the auditory pathway should be directional. PMID:21325679
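As a signal-level illustration of the interaural delay measurement, the ITD between two ear signals can be estimated from the lag of the peak of their cross-correlation. This is a generic sketch on synthetic clicks, not the biophysical model of the gecko's acoustically coupled ears; the 260-us delay and 100 kHz sampling rate are example values taken from the abstract.

```python
import numpy as np

def itd_from_xcorr(left, right, fs):
    """Estimate the interaural time difference (seconds) as the lag of the
    peak cross-correlation between the two ear signals; positive means the
    right-ear signal lags the left (illustrative only)."""
    xc = np.correlate(right, left, mode="full")
    lag = np.argmax(xc) - (len(left) - 1)
    return lag / fs

fs = 100_000                                   # 100 kHz sampling
t = np.arange(0, 0.02, 1 / fs)
delay = 260e-6                                 # the 260-us median interaural delay
click = lambda t0: np.exp(-0.5 * ((t - t0) / 2e-4) ** 2)  # Gaussian click
left, right = click(0.005), click(0.005 + delay)
itd = itd_from_xcorr(left, right, fs)
```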

  18. Auditory perspective taking.

    PubMed

    Martinson, Eric; Brock, Derek

    2013-06-01

    Effective communication with a mobile robot using speech is a difficult problem even when you can control the auditory scene. Robot self-noise or ego noise, echoes and reverberation, and human interference are all common sources of decreased intelligibility. Moreover, in real-world settings, these problems are routinely aggravated by a variety of sources of background noise. Military scenarios can be punctuated by high decibel noise from materiel and weaponry that would easily overwhelm a robot's normal speaking volume. Moreover, in nonmilitary settings, fans, computers, alarms, and transportation noise can cause enough interference to make a traditional speech interface unusable. This work presents and evaluates a prototype robotic interface that uses perspective taking to estimate the effectiveness of its own speech presentation and takes steps to improve intelligibility for human listeners. PMID:23096077

  19. Simulations for Full Unit-memory and Partial Unit-memory Convolutional Codes with Real-time Minimal-byte-error Probability Decoding Algorithm

    NASA Technical Reports Server (NTRS)

    Vo, Q. D.

    1984-01-01

    A program which was written to simulate Real-Time Minimal-Byte-Error Probability (RTMBEP) decoding of full unit-memory (FUM) convolutional codes on a 3-bit quantized AWGN channel is described. This program was used to compute the symbol-error probability of FUM codes and to determine the signal-to-noise ratio (SNR) required to achieve a bit error rate (BER) of 10^-6 for corresponding concatenated systems. A (6,6/30) FUM code, 6-bit Reed-Solomon code combination was found to achieve the required BER at an SNR of 1.886 dB. The RTMBEP algorithm was then modified for decoding partial unit-memory (PUM) convolutional codes. A simulation program was also written to simulate the symbol-error probability of these codes.
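The RTMBEP decoder itself is not reproduced here, but the surrounding Monte Carlo methodology, estimating error rates by sending random bits over a quantized AWGN channel, can be sketched generically. The following assumes uncoded BPSK with a uniform 3-bit (8-level) receiver quantizer; the modulation, quantizer design, and block length are illustrative assumptions, not details from the report.

```python
import numpy as np

rng = np.random.default_rng(0)

def ber_bpsk_awgn_quantized(snr_db, n_bits=200_000, levels=8):
    """Monte Carlo bit-error-rate estimate for BPSK over AWGN with a
    uniform 3-bit (8-level) receiver quantizer (illustrative only)."""
    ebn0 = 10 ** (snr_db / 10)
    sigma = np.sqrt(1 / (2 * ebn0))          # noise std for unit-energy symbols
    bits = rng.integers(0, 2, n_bits)
    symbols = 1 - 2 * bits                   # bit 0 -> +1, bit 1 -> -1
    received = symbols + sigma * rng.normal(size=n_bits)
    # uniform quantizer clipped to [-1, 1] with 8 output levels
    q = np.clip(np.round((received + 1) / 2 * (levels - 1)), 0, levels - 1)
    decisions = (q < levels / 2).astype(int) # lower half of levels -> bit 1
    return np.mean(decisions != bits)

ber_high = ber_bpsk_awgn_quantized(0.0)      # low SNR: many errors
ber_low = ber_bpsk_awgn_quantized(6.0)       # higher SNR: far fewer errors
```

Sweeping `snr_db` until the estimated BER crosses the target is how a required operating SNR, like the 1.886 dB figure above, is read off.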

  20. 2-D Time-Dependent Fuel Element, Thermal Analysis Code System.

    Energy Science and Technology Software Center (ESTSC)

    2001-09-24

    Version 00 WREM-TOODEE2 is a two-dimensional, time-dependent, fuel-element thermal analysis program. Its primary purpose is to evaluate fuel-element thermal response during post-LOCA refill and reflood in a pressurized water reactor (PWR). TOODEE2 calculations are carried out in a two-dimensional mesh region defined in slab or cylindrical geometry by orthogonal grid lines. Coordinates which form ordered pairs are labeled x-y in slab geometry; those in cylindrical geometry are labeled r-z for the axisymmetric case and r-theta for the polar case. Conduction and radiation are the only heat transfer mechanisms assumed within the boundaries of the mesh region. Convective and boiling heat transfer mechanisms are assumed at the boundaries. The program numerically solves the two-dimensional, time-dependent heat conduction equation within the mesh region. KEYWORDS: FUEL MANAGEMENT; HEAT TRANSFER; LOCA; PWR
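The heat conduction solve at the core of such a code can be illustrated with a minimal explicit finite-difference update for the 2-D equation on a slab (x-y) mesh. This toy sketch uses fixed-temperature boundaries and omits TOODEE2's radiation, convective, and boiling boundary models; grid size, diffusivity, and time step are arbitrary example values.

```python
import numpy as np

def heat_step(T, alpha, dx, dt, q=0.0):
    """One explicit finite-difference step of the 2-D heat conduction
    equation dT/dt = alpha * (d2T/dx2 + d2T/dy2) + q on a uniform slab
    mesh with fixed-temperature boundaries.
    Stable for dt <= dx**2 / (4 * alpha)."""
    lap = np.zeros_like(T)
    # five-point Laplacian on interior nodes
    lap[1:-1, 1:-1] = (T[2:, 1:-1] + T[:-2, 1:-1]
                       + T[1:-1, 2:] + T[1:-1, :-2]
                       - 4 * T[1:-1, 1:-1]) / dx**2
    T_new = T.copy()
    T_new[1:-1, 1:-1] += dt * (alpha * lap[1:-1, 1:-1] + q)
    return T_new

# a hot interior spot relaxing toward cold boundaries
T = np.zeros((21, 21))
T[10, 10] = 100.0
for _ in range(200):
    T = heat_step(T, alpha=1.0, dx=1.0, dt=0.2)
```

Production codes use implicit or alternating-direction schemes to escape the explicit stability limit; the physics of the update is the same.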

  1. Response recovery in the locust auditory pathway.

    PubMed

    Wirtssohn, Sarah; Ronacher, Bernhard

    2016-01-01

    Temporal resolution and the time courses of recovery from acute adaptation of neurons in the auditory pathway of the grasshopper Locusta migratoria were investigated with a response recovery paradigm. We stimulated with a series of single-click and click-pair stimuli while performing intracellular recordings from neurons at three processing stages: receptors and first- and second-order interneurons. The response to the second click was expressed relative to the single-click response. This allowed us to uncover the basic temporal resolution of these neurons. The effect of adaptation increased with processing layer. While neurons in the auditory periphery displayed a steady response recovery after a short initial adaptation, many interneurons showed nonlinear effects: most prominently a long-lasting suppression of the response to the second click in a pair, as well as a gain in response if a click was preceded by another click a few milliseconds earlier. Our results reveal a distributed temporal filtering of input at an early auditory processing stage. This set of specified filters is very likely homologous across grasshopper species and thus forms the neurophysiological basis for extracting relevant information from a variety of different temporal signals. Interestingly, in terms of spike timing precision, neurons at all three processing layers recovered very fast, within 20 ms. Spike waveform analysis of several neuron types did not sufficiently explain the response recovery profiles implemented in these neurons, indicating that temporal resolution in neurons located at several processing layers of the auditory pathway is not necessarily limited by the spike duration and refractory period. PMID:26609115

  2. The plastic ear and perceptual relearning in auditory spatial perception

    PubMed Central

    Carlile, Simon

    2014-01-01

    The auditory system of adult listeners has been shown to accommodate to altered spectral cues to sound location, which presumably provides the basis for recalibration to changes in the shape of the ear over a lifetime. Here we review the role of auditory and non-auditory inputs to the perception of sound location and consider a range of recent experiments looking at the role of non-auditory inputs in the process of accommodation to these altered spectral cues. A number of studies have used small ear molds to modify the spectral cues, resulting in significant degradation of localization performance. Following chronic exposure (10–60 days) performance recovers to some extent, and recent work has demonstrated that this occurs for both audio-visual and audio-only regions of space. This raises the question of what teacher signal drives this remarkable functional plasticity in the adult nervous system. Following a brief review of the influence of motor state on auditory localization, we consider the potential role of auditory-motor learning in the perceptual recalibration of the spectral cues. Several recent studies have considered how multi-modal and sensory-motor feedback might influence accommodation to altered spectral cues produced by ear molds or through virtual auditory space stimulation using non-individualized spectral cues. The work with ear molds demonstrates that a relatively short period of training involving audio-motor feedback (5–10 days) significantly improved both the rate and extent of accommodation to altered spectral cues. This has significant implications not only for the mechanisms by which this complex sensory information is encoded to provide spatial cues but also for adaptive training to altered auditory inputs. The review concludes by considering the implications for rehabilitative training with hearing aids and cochlear prostheses. PMID:25147497

  3. Visual-induced expectations modulate auditory cortical responses

    PubMed Central

    van Wassenhove, Virginie; Grzeczkowski, Lukasz

    2015-01-01

    Active sensing has important consequences on multisensory processing (Schroeder et al., 2010). Here, we asked whether in the absence of saccades, the position of the eyes and the timing of transient color changes of visual stimuli could selectively affect the excitability of auditory cortex by predicting the “where” and the “when” of a sound, respectively. Human participants were recorded with magnetoencephalography (MEG) while maintaining the position of their eyes on the left, right, or center of the screen. Participants counted color changes of the fixation cross while neglecting sounds which could be presented to the left, right, or both ears. First, clear alpha power increases were observed in auditory cortices, consistent with participants' attention directed to visual inputs. Second, color changes elicited robust modulations of auditory cortex responses (“when” prediction) seen as ramping activity, early alpha phase-locked responses, and enhanced high-gamma band responses in the contralateral side of sound presentation. Third, no modulations of auditory evoked or oscillatory activity were found to be specific to eye position. Altogether, our results suggest that visual transience can automatically elicit a prediction of “when” a sound will occur by changing the excitability of auditory cortices irrespective of the attended modality, eye position or spatial congruency of auditory and visual events. To the contrary, auditory cortical responses were not significantly affected by eye position suggesting that “where” predictions may require active sensing or saccadic reset to modulate auditory cortex responses, notably in the absence of spatial orientation to sounds. PMID:25705174

  4. Auditory Short-Term Memory Activation during Score Reading

    PubMed Central

    Simoens, Veerle L.; Tervaniemi, Mari

    2013-01-01

    Performing music on the basis of reading a score requires reading ahead of what is being played in order to anticipate the necessary actions to produce the notes. Score reading thus not only involves the decoding of a visual score and the comparison to the auditory feedback, but also short-term storage of the musical information due to the delay of the auditory feedback during reading ahead. This study investigates the mechanisms of encoding of musical information in short-term memory during such a complicated procedure. There were three parts in this study. First, professional musicians participated in an electroencephalographic (EEG) experiment to study the slow wave potentials during a time interval of short-term memory storage in a situation that requires cross-modal translation and short-term storage of visual material to be compared with delayed auditory material, as is the case in music score reading. This delayed visual-to-auditory matching task was compared with delayed visual-visual and auditory-auditory matching tasks in terms of EEG topography and voltage amplitudes. Second, an additional behavioural experiment was performed to determine which type of distractor would be the most interfering with the score reading-like task. Third, the self-reported strategies of the participants were also analyzed. All three parts of this study point towards the same conclusion: during music score reading, the musician most likely first translates the visual score into an auditory cue, probably starting around 700 or 1300 ms, ready for storage and delayed comparison with the auditory feedback. PMID:23326487

  6. A New Model for Real-Time Regional Vertical Total Electron Content and Differential Code Bias Estimation Using IGS Real-Time Service (IGS-RTS) Products

    NASA Astrophysics Data System (ADS)

    Abdelazeem, Mohamed; Çelik, Rahmi N.; El-Rabbany, Ahmed

    2016-04-01

    The international global navigation satellite system (GNSS) real-time service (IGS-RTS) products have been used extensively for real-time precise point positioning and ionosphere modeling applications. In this study, we develop a regional model for real-time vertical total electron content (RT-VTEC) and differential code bias (RT-DCB) estimation over Europe using the IGS-RTS satellite orbit and clock products. The developed model has a spatial resolution of 1°×1° and a temporal resolution of 15 minutes. GPS observations from a regional network consisting of 60 IGS and EUREF reference stations are processed in zero-difference mode using the Bernese 5.2 software package in order to extract the geometry-free linear combination of the smoothed code observations. A spherical harmonic expansion is used to model the VTEC and the receiver and satellite DCBs. To validate the proposed model, the RT-VTEC values are computed and compared with their final IGS global ionospheric map (IGS-GIM) counterparts on three successive days of high solar activity, including one day of extreme geomagnetic activity. The real-time satellite DCBs are also estimated and compared with the IGS-GIM counterparts. Moreover, the real-time receiver DCBs for six IGS stations, located at different latitudes and equipped with different receiver types, are obtained and compared with the IGS-GIM counterparts. The findings reveal that the estimated RT-VTEC values agree with the IGS-GIM counterparts, with root-mean-square error (RMSE) values below 2 TEC units. In addition, the RMSEs of the satellite and receiver DCBs are less than 0.85 ns and 0.65 ns, respectively, in comparison with IGS-GIM.
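The spherical-harmonic fit at the heart of such VTEC modeling can be sketched as an ordinary least-squares problem: evaluate a truncated harmonic basis at each observation point and solve for the coefficients. The toy below truncates at degree/order 1 and uses noise-free synthetic observations on a 1°×1° European grid; the real model uses a higher-degree expansion and simultaneously estimates the DCBs.

```python
import numpy as np

def design_matrix(lat, lon):
    """Real spherical-harmonic basis up to degree/order 1 evaluated at
    latitude/longitude (radians) -- a toy version of the expansion used
    for regional VTEC maps (illustrative only)."""
    s, c = np.sin(lat), np.cos(lat)
    return np.column_stack([
        np.ones_like(lat),       # degree 0 term
        s,                       # degree 1, order 0
        c * np.cos(lon),         # degree 1, order 1 (cosine term)
        c * np.sin(lon),         # degree 1, order 1 (sine term)
    ])

# synthetic "smoothed-code" VTEC observations on a 1 x 1 degree grid
lat, lon = np.meshgrid(np.radians(np.arange(35, 61)),
                       np.radians(np.arange(-10, 31)))
lat, lon = lat.ravel(), lon.ravel()
true_coeffs = np.array([20.0, -5.0, 2.0, 1.0])     # TEC units
vtec_obs = design_matrix(lat, lon) @ true_coeffs
# least-squares recovery of the harmonic coefficients
coeffs, *_ = np.linalg.lstsq(design_matrix(lat, lon), vtec_obs, rcond=None)
```

With real data, the observation equations also carry the receiver and satellite DCB unknowns, which is why both can be estimated alongside the map.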

  7. Auditory function in individuals within Leber's hereditary optic neuropathy pedigrees.

    PubMed

    Rance, Gary; Kearns, Lisa S; Tan, Johanna; Gravina, Anthony; Rosenfeld, Lisa; Henley, Lauren; Carew, Peter; Graydon, Kelley; O'Hare, Fleur; Mackey, David A

    2012-03-01

    The aims of this study are to investigate whether auditory dysfunction is part of the spectrum of neurological abnormalities associated with Leber's hereditary optic neuropathy (LHON) and to determine the perceptual consequences of auditory neuropathy (AN) in affected listeners. Forty-eight subjects confirmed by genetic testing as having one of four mitochondrial mutations associated with LHON (mtDNA11778, mtDNA14484, mtDNA14482 and mtDNA3460) participated. Thirty-two of these had lost vision, and 16 were asymptomatic at the point of data collection. While the majority of individuals showed normal sound detection, >25% (of both symptomatic and asymptomatic participants) showed electrophysiological evidence of AN, with either absent or severely delayed auditory brainstem potentials. Abnormalities were observed for each of the mutations, but subjects with the mtDNA11778 type were the most affected. Auditory perception was also abnormal in both symptomatic and asymptomatic subjects, with >20% of cases showing impaired detection of auditory temporal (timing) cues and >30% showing abnormal speech perception both in quiet and in the presence of background noise. The findings of this study indicate that a relatively high proportion of individuals with the LHON genetic profile may suffer functional hearing difficulties due to neural abnormality in the central auditory pathways. PMID:21887510

  8. Auditory temporal processing skills in musicians with dyslexia.

    PubMed

    Bishop-Liebler, Paula; Welch, Graham; Huss, Martina; Thomson, Jennifer M; Goswami, Usha

    2014-08-01

    The core cognitive difficulty in developmental dyslexia involves phonological processing, but adults and children with dyslexia also have sensory impairments. Impairments in basic auditory processing show particular links with phonological impairments, and recent studies with dyslexic children across languages reveal a relationship between auditory temporal processing and sensitivity to rhythmic timing and speech rhythm. As rhythm is explicit in music, musical training might have a beneficial effect on the auditory perception of acoustic cues to rhythm in dyslexia. Here we took advantage of the presence of musicians with and without dyslexia in musical conservatoires, comparing their auditory temporal processing abilities with those of dyslexic non-musicians matched for cognitive ability. Musicians with dyslexia showed equivalent auditory sensitivity to musicians without dyslexia and also showed equivalent rhythm perception. The data support the view that extensive rhythmic experience initiated during childhood (here in the form of music training) can affect basic auditory processing skills which are found to be deficient in individuals with dyslexia. PMID:25044949

  9. [Analysis of auditory information in the brain of the cetacean].

    PubMed

    Popov, V V; Supin, A Ia

    2006-01-01

    A characteristic feature of the cetacean brain is the exceptional development of the auditory neural centres. The location of the projection sensory areas, including the auditory areas, in the cetacean cerebral cortex differs markedly from that in other mammals. Evoked-potential (EP) characteristics indicate the presence of several functional divisions in the auditory cortex. Physiological studies of the cetacean auditory centres have mainly been performed using the EP technique. Of the several types of EP, the short-latency auditory EP has been studied most thoroughly. In cetaceans it is characterised by exceptionally high temporal resolution, with an integration time of about 0.3 ms, corresponding to a cut-off frequency of about 1700 Hz; this far exceeds the temporal resolution of hearing in terrestrial mammals. The frequency selectivity of hearing in cetaceans was measured using several variants of the masking technique; its acuity exceeds that of most terrestrial mammals (except bats). This acute frequency selectivity allows discrimination of the finest spectral patterns of auditory signals. PMID:16613059

  10. Integrated processing of spatial cues in human auditory cortex.

    PubMed

    Salminen, Nelli H; Takanen, Marko; Santala, Olli; Lamminsalo, Jarkko; Altoè, Alessandro; Pulkki, Ville

    2015-09-01

    Human sound source localization relies on acoustical cues, most importantly, the interaural differences in time and level (ITD and ILD). For reaching a unified representation of auditory space, the auditory nervous system needs to combine the information provided by these two cues. In search of such a unified representation, we conducted a magnetoencephalography (MEG) experiment that took advantage of the location-specific adaptation of the auditory cortical N1 response. In general, the attenuation caused by a preceding adaptor sound to the response elicited by a probe depends on their spatial arrangement: if the two sounds coincide, adaptation is stronger than when the locations differ. Here, we presented adaptor-probe pairs that contained different localization cues, for instance, adaptors with ITD and probes with ILD. We found that the adaptation of the N1 amplitude was location-specific across localization cues. This result can be explained by the existence of auditory cortical neurons that are sensitive to sound source location independently of which cue, ITD or ILD, provides the location information. Such neurons would form a cue-independent, unified representation of auditory space in human auditory cortex. PMID:26074304

  11. SER performance of enhanced spatial multiplexing codes with ZF/MRC receiver in time-varying Rayleigh fading channels.

    PubMed

    Lee, In-Ho

    2014-01-01

    We propose enhanced spatial multiplexing codes (E-SMCs) to enable various encoding rates. The symbol error rate (SER) performance of the E-SMC is investigated when zero-forcing (ZF) and maximal-ratio combining (MRC) techniques are used at a receiver. The proposed E-SMC allows a transmitted symbol to be repeated over time to achieve further diversity gain at the cost of the encoding rate. With the spatial correlation between transmit antennas, SER equations for M-ary QAM and PSK constellations are derived by using a moment generating function (MGF) approximation of a signal-to-noise ratio (SNR), based on the assumption of independent zero-forced SNRs. Analytic and simulated results are compared for time-varying and spatially correlated Rayleigh fading channels that are modelled as first-order Markovian channels. Furthermore, we can find an optimal block length for the E-SMC that meets a required SER. PMID:25114969

  12. SER Performance of Enhanced Spatial Multiplexing Codes with ZF/MRC Receiver in Time-Varying Rayleigh Fading Channels

    PubMed Central

    Lee, In-Ho

    2014-01-01

    We propose enhanced spatial multiplexing codes (E-SMCs) to enable various encoding rates. The symbol error rate (SER) performance of the E-SMC is investigated when zero-forcing (ZF) and maximal-ratio combining (MRC) techniques are used at a receiver. The proposed E-SMC allows a transmitted symbol to be repeated over time to achieve further diversity gain at the cost of the encoding rate. With the spatial correlation between transmit antennas, SER equations for M-ary QAM and PSK constellations are derived by using a moment generating function (MGF) approximation of a signal-to-noise ratio (SNR), based on the assumption of independent zero-forced SNRs. Analytic and simulated results are compared for time-varying and spatially correlated Rayleigh fading channels that are modelled as first-order Markovian channels. Furthermore, we can find an optimal block length for the E-SMC that meets a required SER. PMID:25114969
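    The MGF approach mentioned above is a standard route to fading-averaged error rates. As an illustrative sketch only (plain M-PSK over flat Rayleigh fading, not the paper's E-SMC analysis with ZF/MRC; the function name is hypothetical):

    ```python
    import numpy as np

    def ser_mpsk_rayleigh(M, snr_db):
        """Average SER of M-PSK over flat Rayleigh fading via the MGF approach:
           P_s = (1/pi) * int_0^{(M-1)pi/M} Mgf(-g / sin^2(theta)) d(theta),
        where g = sin^2(pi/M) and the Rayleigh-fading SNR has
        Mgf(s) = 1 / (1 - s * snr_mean)."""
        snr_mean = 10.0 ** (snr_db / 10.0)
        g = np.sin(np.pi / M) ** 2
        theta = np.linspace(1e-6, (M - 1) * np.pi / M, 20001)
        integrand = 1.0 / (1.0 + g * snr_mean / np.sin(theta) ** 2)
        # Trapezoidal integration of the finite-range integral.
        integral = np.sum((integrand[1:] + integrand[:-1]) * np.diff(theta)) / 2.0
        return integral / np.pi
    ```

    For BPSK (M = 2) this reduces to the closed form (1/2)(1 - sqrt(snr/(1 + snr))), which makes a convenient sanity check.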

  13. The influence of auditory-motor coupling on fractal dynamics in human gait

    PubMed Central

    Hunt, Nathaniel; McGrath, Denise; Stergiou, Nicholas

    2014-01-01

    Humans exhibit an innate ability to synchronize their movements to music. The field of gait rehabilitation has sought to capitalize on this phenomenon by invoking patients to walk in time to rhythmic auditory cues with a view to improving pathological gait. However, the temporal structure of the auditory cue, and hence the temporal structure of the target behavior has not been sufficiently explored. This study reveals the plasticity of auditory-motor coupling in human walking in relation to ‘complex' auditory cues. The authors demonstrate that auditory-motor coupling can be driven by different coloured auditory noise signals (e.g. white, brown), shifting the fractal temporal structure of gait dynamics towards the statistical properties of the signals used. This adaptive capability observed in whole-body movement, could potentially be harnessed for targeted neuromuscular rehabilitation in patient groups, depending on the specific treatment goal. PMID:25080936

  14. The influence of auditory-motor coupling on fractal dynamics in human gait.

    PubMed

    Hunt, Nathaniel; McGrath, Denise; Stergiou, Nicholas

    2014-01-01

    Humans exhibit an innate ability to synchronize their movements to music. The field of gait rehabilitation has sought to capitalize on this phenomenon by invoking patients to walk in time to rhythmic auditory cues with a view to improving pathological gait. However, the temporal structure of the auditory cue, and hence the temporal structure of the target behavior has not been sufficiently explored. This study reveals the plasticity of auditory-motor coupling in human walking in relation to 'complex' auditory cues. The authors demonstrate that auditory-motor coupling can be driven by different coloured auditory noise signals (e.g. white, brown), shifting the fractal temporal structure of gait dynamics towards the statistical properties of the signals used. This adaptive capability observed in whole-body movement, could potentially be harnessed for targeted neuromuscular rehabilitation in patient groups, depending on the specific treatment goal. PMID:25080936
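    In such studies the colour of the noise typically governs the series of inter-onset intervals of the auditory cue. A minimal sketch (assumed, not the authors' code) of generating white versus brown series and verifying their fractal character through the exponent beta of the 1/f^beta power spectrum:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 2 ** 14

    white = rng.standard_normal(n)   # beta ~ 0: flat spectrum
    brown = np.cumsum(white)         # beta ~ 2: power falls off as 1/f^2
    brown -= brown.mean()

    def spectral_slope(x):
        """Estimate beta in S(f) ~ 1/f^beta by a log-log least-squares fit
        to the periodogram (DC bin excluded)."""
        f = np.fft.rfftfreq(len(x))[1:]
        psd = np.abs(np.fft.rfft(x))[1:] ** 2
        slope, _ = np.polyfit(np.log(f), np.log(psd), 1)
        return -slope
    ```

    Scaling a noise series of this kind to a comfortable mean interval (e.g. around 1 s) yields a metronome whose beat-to-beat timing carries the chosen fractal structure.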

  15. Using light to tell the time of day: sensory coding in the mammalian circadian visual network.

    PubMed

    Brown, Timothy M

    2016-06-15

    Circadian clocks are a near-ubiquitous feature of biology, allowing organisms to optimise their physiology to make the most efficient use of resources and adjust behaviour to maximise survival over the solar day. To fulfil this role, circadian clocks require information about time in the external world. This is most reliably obtained by measuring the pronounced changes in illumination associated with the earth's rotation. In mammals, these changes are exclusively detected in the retina and are relayed by direct and indirect neural pathways to the master circadian clock in the hypothalamic suprachiasmatic nuclei. Recent work reveals a surprising level of complexity in this sensory control of the circadian system, including the participation of multiple photoreceptive pathways conveying distinct aspects of visual and/or time-of-day information. In this Review, I summarise these important recent advances, present hypotheses as to the functions and neural origins of these sensory signals, highlight key challenges for future research and discuss the implications of our current knowledge for animals and humans in the modern world. PMID:27307539

  16. Using light to tell the time of day: sensory coding in the mammalian circadian visual network

    PubMed Central

    2016-01-01

    Circadian clocks are a near-ubiquitous feature of biology, allowing organisms to optimise their physiology to make the most efficient use of resources and adjust behaviour to maximise survival over the solar day. To fulfil this role, circadian clocks require information about time in the external world. This is most reliably obtained by measuring the pronounced changes in illumination associated with the earth's rotation. In mammals, these changes are exclusively detected in the retina and are relayed by direct and indirect neural pathways to the master circadian clock in the hypothalamic suprachiasmatic nuclei. Recent work reveals a surprising level of complexity in this sensory control of the circadian system, including the participation of multiple photoreceptive pathways conveying distinct aspects of visual and/or time-of-day information. In this Review, I summarise these important recent advances, present hypotheses as to the functions and neural origins of these sensory signals, highlight key challenges for future research and discuss the implications of our current knowledge for animals and humans in the modern world. PMID:27307539

  17. Auditory Processing Disorder in Children

    MedlinePlus


  18. Maps of the Auditory Cortex.

    PubMed

    Brewer, Alyssa A; Barton, Brian

    2016-07-01

    One of the fundamental properties of the mammalian brain is that sensory regions of cortex are formed of multiple, functionally specialized cortical field maps (CFMs). Each CFM comprises two orthogonal topographical representations, reflecting two essential aspects of sensory space. In auditory cortex, auditory field maps (AFMs) are defined by the combination of tonotopic gradients, representing the spectral aspects of sound (i.e., tones), with orthogonal periodotopic gradients, representing the temporal aspects of sound (i.e., period or temporal envelope). Converging evidence from cytoarchitectural and neuroimaging measurements underlies the definition of 11 AFMs across core and belt regions of human auditory cortex, with likely homology to those of macaque. On a macrostructural level, AFMs are grouped into cloverleaf clusters, an organizational structure also seen in visual cortex. Future research can now use these AFMs to investigate specific stages of auditory processing, key for understanding behaviors such as speech perception and multimodal sensory integration. PMID:27145914

  19. Classroom Demonstrations of Auditory Perception.

    ERIC Educational Resources Information Center

    Haws, LaDawn; Oppy, Brian J.

    2002-01-01

    Presents activities to help students gain understanding about auditory perception. Describes demonstrations that cover topics, such as sound localization, wave cancellation, frequency/pitch variation, and the influence of media on sound propagation. (CMK)

  20. Leiomyoma of External Auditory Canal.

    PubMed

    George, M V; Puthiyapurayil, Jamsheeda

    2016-09-01

    This article reports a case of piloleiomyoma of the external auditory canal, the seventh reported leiomyoma at this site and only the second arising from the arrectores pilorum muscles; the other five were angioleiomyomas arising from blood vessels. A 52-year-old male presented with a mass in the right external auditory canal and decreased hearing of 6 months' duration. The tumor was excised via an endaural approach, and histopathological examination confirmed leiomyoma. Leiomyoma of the external auditory canal is extremely rare because of the scarcity of smooth muscle in the external canal, but it should be considered as a very rare differential diagnosis for any tumor or polyp in the ear canal. PMID:27508144

  1. Rhythmic Continuous-Time Coding in the Songbird Analog of Vocal Motor Cortex.

    PubMed

    Lynch, Galen F; Okubo, Tatsuo S; Hanuschkin, Alexander; Hahnloser, Richard H R; Fee, Michale S

    2016-05-18

    Songbirds learn and produce complex sequences of vocal gestures. Adult birdsong requires premotor nucleus HVC, in which projection neurons (PNs) burst sparsely at stereotyped times in the song. It has been hypothesized that PN bursts, as a population, form a continuous sequence, while a different model of HVC function proposes that both HVC PN and interneuron activity is tightly organized around motor gestures. Using a large dataset of PNs and interneurons recorded in singing birds, we test several predictions of these models. We find that PN bursts in adult birds are continuously and nearly uniformly distributed throughout song. However, we also find that PN and interneuron firing rates exhibit significant 10-Hz rhythmicity locked to song syllables, peaking prior to syllable onsets and suppressed prior to offsets, a pattern that dominates PN and interneuron activity in HVC during early stages of vocal learning. PMID:27196977

  2. Stimulator with arbitrary waveform for auditory evoked potentials

    NASA Astrophysics Data System (ADS)

    Martins, H. R.; Romão, M.; Plácido, D.; Provenzano, F.; Tierra-Criollo, C. J.

    2007-11-01

    Technological advances benefit many areas of medicine; audiometric examinations involving auditory evoked potentials enable better diagnosis of auditory disorders. This paper proposes the development of a stimulator based on a digital signal processor. The stimulator is the first stage of an auditory evoked potential system based on the ADSP-BF533 EZ KIT LITE (Analog Devices Company - USA). It can generate arbitrary waveforms such as sine waves, amplitude-modulated tones, pulses, bursts and pips. The waveforms are generated through a graphical interface programmed in C++, in which the user can define the waveform parameters as well as the exam parameters, such as the number of stimuli, the time with stimulation (Time ON) and the time without stimulus (Time OFF). Future work will implement the remaining parts of the system, including acquisition of the electroencephalogram and the signal processing needed to estimate and analyze the evoked potential.
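    The waveform families named above (pips, AM tones) are standard audiometric stimuli. As a sketch of how they can be synthesized (not the authors' C++/DSP code; the helper names and the 48 kHz output rate are assumptions):

    ```python
    import numpy as np

    FS = 48_000  # assumed output sample rate, Hz

    def tone_pip(freq_hz, dur_s, rise_fall_s=0.005, fs=FS):
        """Tone pip: a sinusoid with linear onset/offset ramps, which limit
        spectral splatter at the stimulus edges."""
        t = np.arange(int(dur_s * fs)) / fs
        y = np.sin(2 * np.pi * freq_hz * t)
        n_ramp = int(rise_fall_s * fs)
        ramp = np.linspace(0.0, 1.0, n_ramp)
        y[:n_ramp] *= ramp          # fade in
        y[-n_ramp:] *= ramp[::-1]   # fade out
        return y

    def am_tone(carrier_hz, mod_hz, depth, dur_s, fs=FS):
        """Amplitude-modulated tone, normalised so the peak stays within +/-1."""
        t = np.arange(int(dur_s * fs)) / fs
        env = (1.0 + depth * np.sin(2 * np.pi * mod_hz * t)) / (1.0 + depth)
        return env * np.sin(2 * np.pi * carrier_hz * t)
    ```

    A burst is simply a pip with a longer plateau, and trains of such stimuli can be assembled by interleaving Time ON segments with Time OFF silences.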

  3. Impairments in musical abilities reflected in the auditory brainstem: evidence from congenital amusia.

    PubMed

    Lehmann, Alexandre; Skoe, Erika; Moreau, Patricia; Peretz, Isabelle; Kraus, Nina

    2015-07-01

    Congenital amusia is a neurogenetic condition, characterized by a deficit in music perception and production, not explained by hearing loss, brain damage or lack of exposure to music. Despite inferior musical performance, amusics exhibit normal auditory cortical responses, with abnormal neural correlates suggested to lie beyond auditory cortices. Here we show, using auditory brainstem responses to complex sounds in humans, that fine-grained automatic processing of sounds is impoverished in amusia. Compared with matched non-musician controls, spectral amplitude was decreased in amusics for higher harmonic components of the auditory brainstem response. We also found a delayed response to the early transient aspects of the auditory stimulus in amusics. Neural measures of spectral amplitude and response timing correlated with participants' behavioral assessments of music processing. We demonstrate, for the first time, that amusia affects how complex acoustic signals are processed in the auditory brainstem. This neural signature of amusia mirrors what is observed in musicians, such that the aspects of the auditory brainstem responses that are enhanced in musicians are degraded in amusics. By showing that gradients of music abilities are reflected in the auditory brainstem, our findings have implications not only for current models of amusia but also for auditory functioning in general. PMID:25900043

  4. Analysis of auditory information in the brains of cetaceans.

    PubMed

    Popov, V V; Supin, A Ya

    2007-03-01

    A characteristic feature of the brains of toothed cetaceans is the exclusive development of the auditory neural centers. The location of the projection sensory zones, including the auditory zones, in the cetacean cortex is significantly different from that in other mammals. The characteristics of evoked potentials demonstrate the existence of several functional subdivisions in the auditory cortex. Physiological studies of the auditory neural centers of cetaceans have been performed predominantly using the evoked potentials method. Of the several types of evoked potentials available for non-invasive recording, the most detailed studies have been performed using short-latency auditory evoked potentials (SLAEP). SLAEP in cetaceans are characterized by exclusively high time resolution, with integration times of about 0.3 msec, which on the frequency scale corresponds to a cut-off frequency of 1700 Hz. This is more than an order of magnitude greater than the time resolution of hearing in terrestrial mammals. The frequency selectivity of hearing in cetaceans has been measured using several versions of the masking method. The acuity of frequency selectivity in cetaceans is several times greater than that in most terrestrial mammals (except bats). The acute frequency selectivity allows the discrimination of very fine spectral patterns of sound signals. PMID:17294105
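    The quoted numbers are mutually consistent under the common rule of thumb that an integration window of duration tau passes envelope modulations up to roughly 1/(2*tau); a one-line check (the rule of thumb itself is an assumption about the underlying model):

    ```python
    # Assumed rule of thumb: a rectangular integration window of duration tau
    # passes envelope modulations up to roughly f_c = 1/(2*tau).
    tau = 0.3e-3               # integration time, seconds
    f_c = 1.0 / (2.0 * tau)    # ~1667 Hz, close to the ~1700 Hz quoted above
    ```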

  5. Chimaeric sounds reveal dichotomies in auditory perception

    NASA Astrophysics Data System (ADS)

    Smith, Zachary M.; Delgutte, Bertrand; Oxenham, Andrew J.

    2002-03-01

    By Fourier's theorem, signals can be decomposed into a sum of sinusoids of different frequencies. This is especially relevant for hearing, because the inner ear performs a form of mechanical Fourier transform by mapping frequencies along the length of the cochlear partition. An alternative signal decomposition, originated by Hilbert, is to factor a signal into the product of a slowly varying envelope and a rapidly varying fine time structure. Neurons in the auditory brainstem sensitive to these features have been found in mammalian physiological studies. To investigate the relative perceptual importance of envelope and fine structure, we synthesized stimuli that we call `auditory chimaeras', which have the envelope of one sound and the fine structure of another. Here we show that the envelope is most important for speech reception, and the fine structure is most important for pitch perception and sound localization. When the two features are in conflict, the sound of speech is heard at a location determined by the fine structure, but the words are identified according to the envelope. This finding reveals a possible acoustic basis for the hypothesized `what' and `where' pathways in the auditory cortex.
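    The Hilbert decomposition described above can be computed from the one-sided FFT spectrum. A minimal sketch of building a chimaera from two sounds (helper names are hypothetical; this is the textbook analytic-signal construction, not the authors' synthesis pipeline, which filtered into frequency bands first):

    ```python
    import numpy as np

    def envelope_and_fine_structure(x):
        """Hilbert decomposition: x(t) ~ env(t) * fine(t), where env is the
        magnitude and fine the cosine of the phase of the analytic signal
        (obtained by zeroing the negative-frequency half of the spectrum)."""
        n = len(x)
        spec = np.fft.fft(x)
        h = np.zeros(n)
        h[0] = 1.0
        h[1:(n + 1) // 2] = 2.0
        if n % 2 == 0:
            h[n // 2] = 1.0
        analytic = np.fft.ifft(spec * h)
        return np.abs(analytic), np.cos(np.angle(analytic))

    def chimaera(x, y):
        """Auditory chimaera: envelope of x imposed on the fine structure of y."""
        env_x, _ = envelope_and_fine_structure(x)
        _, fine_y = envelope_and_fine_structure(y)
        return env_x * fine_y
    ```

    For an amplitude-modulated tone this recovers the modulator as the envelope and the carrier as the fine structure, which is a useful sanity check.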

  6. BALDEY: A database of auditory lexical decisions.

    PubMed

    Ernestus, Mirjam; Cutler, Anne

    2015-01-01

    In an auditory lexical decision experiment, 5541 spoken content words and pseudowords were presented to 20 native speakers of Dutch. The words vary in phonological make-up and in number of syllables and stress pattern, and are further representative of the native Dutch vocabulary in that most are morphologically complex, comprising two stems or one stem plus derivational and inflectional suffixes, with inflections representing both regular and irregular paradigms; the pseudowords were matched in these respects to the real words. The BALDEY ("biggest auditory lexical decision experiment yet") data file includes response times and accuracy rates, with, for each item, morphological information plus phonological and acoustic information derived from automatic phonemic segmentation of the stimuli. Two initial analyses illustrate how this data set can be used. First, we discuss several measures of the point at which a word has no further neighbours and compare the degree to which each measure predicts our lexical decision response outcomes. Second, we investigate how well four different measures of frequency of occurrence (from written corpora, spoken corpora, subtitles, and frequency ratings by 75 participants) predict the same outcomes. These analyses motivate general conclusions about the auditory lexical decision task. The (publicly available) BALDEY database lends itself to many further analyses. PMID:25397865

  7. Time-of-flights and traps: from the Histone Code to Mars*

    PubMed Central

    Swatkoski, Stephen; Becker, Luann; Evans-Nguyen, Theresa

    2011-01-01

    Two very different analytical instruments are featured in this perspective paper on mass spectrometer design and development. The first instrument, based upon the curved-field reflectron developed in the Johns Hopkins Middle Atlantic Mass Spectrometry Laboratory, is a tandem time-of-flight mass spectrometer whose performance and practicality are illustrated by applications to a series of research projects addressing the acetylation, deacetylation and ADP-ribosylation of histone proteins. The chemical derivatization of lysine-rich, hyperacetylated histones as their deuteroacetylated analogs enables one to obtain an accurate quantitative assessment of the extent of acetylation at each site. Chemical acetylation of histone mixtures is also used to determine the lysine targets of sirtuins, an important class of histone deacetylases (HDACs), by replacing the deacetylated residues with biotin. Histone deacetylation by sirtuins requires the co-factor NAD+, as does the attachment of ADP-ribose. The second instrument, a low voltage and low power ion trap mass spectrometer known as the Mars Organic Mass Analyzer (MOMA), is a prototype for an instrument expected to be launched in 2018. Like the tandem mass spectrometer, it is also expected to have applicability to environmental and biological analyses and, ultimately, to clinical care. PMID:20530839

  8. Spontaneous activity in the developing auditory system.

    PubMed

    Wang, Han Chin; Bergles, Dwight E

    2015-07-01

    Spontaneous electrical activity is a common feature of sensory systems during early development. This sensory-independent neuronal activity has been implicated in promoting their survival and maturation, as well as growth and refinement of their projections to yield circuits that can rapidly extract information about the external world. Periodic bursts of action potentials occur in auditory neurons of mammals before hearing onset. This activity is induced by inner hair cells (IHCs) within the developing cochlea, which establish functional connections with spiral ganglion neurons (SGNs) several weeks before they are capable of detecting external sounds. During this pre-hearing period, IHCs fire periodic bursts of Ca(2+) action potentials that excite SGNs, triggering brief but intense periods of activity that pass through auditory centers of the brain. Although spontaneous activity requires input from IHCs, there is ongoing debate about whether IHCs are intrinsically active and their firing periodically interrupted by external inhibitory input (IHC-inhibition model), or are intrinsically silent and their firing periodically promoted by an external excitatory stimulus (IHC-excitation model). There is accumulating evidence that inner supporting cells in Kölliker's organ spontaneously release ATP during this time, which can induce bursts of Ca(2+) spikes in IHCs that recapitulate many features of auditory neuron activity observed in vivo. Nevertheless, the role of supporting cells in this process remains to be established in vivo. A greater understanding of the molecular mechanisms responsible for generating IHC activity in the developing cochlea will help reveal how these events contribute to the maturation of nascent auditory circuits. PMID:25296716

  9. Blind and semi-blind ML detection for space-time block-coded OFDM wireless systems

    NASA Astrophysics Data System (ADS)

    Zaib, Alam; Al-Naffouri, Tareq Y.

    2014-12-01

    This paper investigates the joint maximum likelihood (ML) data detection and channel estimation problem for Alamouti space-time block-coded (STBC) orthogonal frequency-division multiplexing (OFDM) wireless systems. Joint ML estimation and data detection is generally considered a hard combinatorial optimization problem. We propose an efficient low-complexity algorithm based on a branch-estimate-and-bound strategy that yields the exact joint ML solution. However, the computational complexity of the blind algorithm becomes critical at low signal-to-noise ratio (SNR) as the number of OFDM carriers and the constellation size increase, especially in multiple-antenna systems. To overcome this problem, a semi-blind algorithm based on a new complexity-reduction framework is proposed, relying on subcarrier reordering and on decoding the carriers with different levels of confidence using a suitable reliability criterion. In addition, it is shown that by exploiting the inherent structure of Alamouti coding, either the estimation performance can be improved or the complexity reduced. The proposed algorithms can reliably track the wireless Rayleigh fading channel without requiring any channel statistics. Simulation results, compared against perfect coherent detection, demonstrate the effectiveness of the blind and semi-blind algorithms over frequency-selective channels with different fading characteristics.
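    The Alamouti code underlying such systems is applied per OFDM subcarrier. As a compact reminder of the classic 2x1 scheme (a sketch with hypothetical names, not the paper's blind detector):

    ```python
    import numpy as np

    def alamouti_encode(s1, s2):
        """Alamouti 2x1 block: rows are time slots, columns are transmit antennas."""
        return np.array([[s1, s2],
                         [-np.conj(s2), np.conj(s1)]])

    def alamouti_combine(r1, r2, h1, h2):
        """Linear combining at the receiver; for a channel (h1, h2) constant over
        the two slots, this returns (|h1|^2 + |h2|^2) * s_k plus noise."""
        s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
        s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
        return s1_hat, s2_hat
    ```

    The orthogonality that makes this combining decouple the two symbols is the same structure the paper exploits to trade estimation performance against complexity.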

  10. Touch activates human auditory cortex.

    PubMed

    Schürmann, Martin; Caetano, Gina; Hlushchuk, Yevhen; Jousmäki, Veikko; Hari, Riitta

    2006-05-01

    Vibrotactile stimuli can facilitate hearing, both in hearing-impaired and in normally hearing people. Accordingly, the sounds of hands exploring a surface contribute to the explorer's haptic percepts. As a possible brain basis of such phenomena, functional brain imaging has identified activations specific to audiotactile interaction in secondary somatosensory cortex, auditory belt area, and posterior parietal cortex, depending on the quality and relative salience of the stimuli. We studied 13 subjects with non-invasive functional magnetic resonance imaging (fMRI) to search for auditory brain areas that would be activated by touch. Vibration bursts of 200 Hz were delivered to the subjects' fingers and palm and tactile pressure pulses to their fingertips. Noise bursts served to identify auditory cortex. Vibrotactile-auditory co-activation, addressed with minimal smoothing to obtain a conservative estimate, was found in an 85-mm3 region in the posterior auditory belt area. This co-activation could be related to facilitated hearing at the behavioral level, reflecting the analysis of sound-like temporal patterns in vibration. However, even tactile pulses (without any vibration) activated parts of the posterior auditory belt area, which therefore might subserve processing of audiotactile events that arise during dynamic contact between hands and environment. PMID:16488157

  11. Neural synchrony in ventral cochlear nucleus neuron populations is not mediated by intrinsic processes but is stimulus induced: implications for auditory brainstem implants

    NASA Astrophysics Data System (ADS)

    Shivdasani, Mohit N.; Mauger, Stefan J.; Rathbone, Graeme D.; Paolini, Antonio G.

    2009-12-01

    The aim of this investigation was to elucidate if neural synchrony forms part of the spike time-based theory for coding of sound information in the ventral cochlear nucleus (VCN) of the auditory brainstem. Previous research attempts to quantify the degree of neural synchrony at higher levels of the central auditory system have indicated that synchronized firing of neurons during presentation of an acoustic stimulus could play an important role in coding complex sound features. However, it is unknown whether this synchrony could in fact arise from the VCN as it is the first station in the central auditory pathway. Cross-correlation analysis was conducted on 499 pairs of multiunit clusters recorded in the urethane-anesthetized rat VCN in response to pure tones and combinations of two tones to determine the presence of neural synchrony. The shift predictor correlogram was used as a measure for determining the synchrony owing to the effects of the stimulus. Without subtraction of the shift predictor, over 65% of the pairs of multiunit clusters exhibited significant correlation in neural firing when the frequencies of the tones presented matched their characteristic frequencies (CFs). In addition, this stimulus-evoked neural synchrony was dependent on the physical distance between electrode sites, and the CF difference between multiunit clusters as the number of correlated pairs dropped significantly for electrode sites greater than 800 µm apart and for multiunit cluster pairs with a CF difference greater than 0.5 octaves. However, subtraction of the shift predictor correlograms from the raw correlograms resulted in no remaining correlation between all VCN pairs. These results suggest that while neural synchrony may be a feature of sound coding in the VCN, it is stimulus induced and not due to intrinsic neural interactions within the nucleus. These data provide important implications for stimulation strategies for the auditory brainstem implant, which is used to
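    The shift-predictor correction described above can be sketched as follows (a simplified illustration with hypothetical names, using binned spike counts and circular lags rather than the authors' multiunit recordings):

    ```python
    import numpy as np

    def correlogram(a, b, max_lag):
        """Trial-averaged cross-correlogram of two binned spike-count arrays
        of shape (n_trials, n_bins), over lags -max_lag..+max_lag bins."""
        a, b = a.astype(float), b.astype(float)
        return np.array([np.mean(a * np.roll(b, lag, axis=1))
                         for lag in range(-max_lag, max_lag + 1)])

    def shift_predictor(a, b, max_lag):
        """Correlogram with unit B's trials rotated by one, so each trial of A is
        paired with a different trial of B: only correlation that is locked to the
        repeated stimulus survives this re-pairing."""
        return correlogram(a, np.roll(b, 1, axis=0), max_lag)

    # Demo: two *independent* units driven by the same stimulus-locked rate.
    rng = np.random.default_rng(0)
    n_trials, n_bins = 200, 200
    rate = 0.1 * (1.0 + np.sin(2 * np.pi * np.arange(n_bins) / 50.0))
    unit_a = rng.random((n_trials, n_bins)) < rate
    unit_b = rng.random((n_trials, n_bins)) < rate

    raw = correlogram(unit_a, unit_b, 20)
    pred = shift_predictor(unit_a, unit_b, 20)
    corrected = raw - pred  # ~flat: the apparent synchrony is stimulus induced
    ```

    This reproduces the paper's logic in miniature: the raw correlogram shows a central peak, but subtracting the shift predictor removes it, identifying the correlation as stimulus induced rather than intrinsic.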

  12. Accuracy of Single Frequency GPS Observations Processing In Near Real-time With Use of Code Predicted Products

    NASA Astrophysics Data System (ADS)

    Wielgosz, P. A.

    This year, a system of active geodetic GPS permanent stations is to be established in Poland. The system should provide GPS observations for a wide spectrum of users; in particular, it will be a great opportunity for surveyors, many of whom still use cheaper, single-frequency receivers. This paper focuses on the processing of single-frequency GPS observations only. In such processing the ionosphere plays an important role, so we concentrated on its influence on the positional coordinates. Twenty consecutive days of GPS data from 2001 were processed to analyze the accuracy of the derived three-dimensional relative vector position between GPS stations. Observations from two Polish EPN/IGS stations, BOGO and JOZE, were used; in addition, a new test station, IGIK, was created. The results of single-frequency GPS processing in near real-time are presented for baselines of 15, 27 and 42 kilometers and sessions of 1, 2, 3, 4 and 6 hours. The processing used CODE (Centre for Orbit Determination in Europe, Bern, Switzerland) predicted products, orbits and ionosphere information, which are available in real time and enable near real-time processing. Bernese v. 4.2 software for Linux in BPE (Bernese Processing Engine) mode was used. The results are shown with reference to the dual-frequency weekly solution (the best solution), and the accuracy obtained with single-frequency observations is presented as a function of observation time and baseline length.

  13. Mercury + VisIt: Integration of a Real-Time Graphical Analysis Capability into a Monte Carlo Transport Code

    SciTech Connect

    O'Brien, M J; Procassini, R J; Joy, K I

    2009-03-09

    Validation of the problem definition and analysis of the results (tallies) produced during a Monte Carlo particle transport calculation can be a complicated, time-intensive process. The time required for a person to create an accurate, validated combinatorial geometry (CG) or mesh-based representation of a complex problem, free of common errors such as gaps and overlapping cells, can range from days to weeks. The ability to interrogate the internal structure of a complex, three-dimensional (3-D) geometry prior to running the transport calculation can improve the user's confidence in the validity of the problem definition. With regard to the analysis of results, extracting tally data from printed tables within a file is laborious and not an intuitive approach to understanding the results. The ability to display tally information overlaid on the problem geometry can decrease the time required for analysis and increase the user's understanding of the results. To this end, our team has integrated VisIt, a parallel, production-quality visualization and data analysis tool, into Mercury, a massively parallel Monte Carlo particle transport code. VisIt provides an API for real-time visualization of a simulation as it is running. The user may select which plots to display from the VisIt GUI, or by sending VisIt a Python script from Mercury. The frequency at which plots are updated can be set, and the user can visualize the simulation results as the calculation runs.

  14. Study of ITER plasma position reflectometer using a two-dimensional full-wave finite-difference time domain code

    SciTech Connect

    Silva, F. da

    2008-10-15

    The EU will supply the plasma position reflectometer for ITER. The system will have channels located at different poloidal positions, some of them obliquely viewing a plasma with poloidal density divergence and curvature, both adverse conditions for profile measurements. To understand the impact of such a topology on the reconstruction of density profiles, a full-wave two-dimensional finite-difference time-domain O-mode code with frequency-sweep capability was used. Simulations show that the reconstructed density profiles still meet the ITER radial accuracy specification for plasma position (1 cm), except at the highest densities. Other adverse effects, such as multireflections induced by the blanket, density fluctuations, and MHD activity, were considered and a first understanding of their impact was obtained.
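
    The leapfrog time stepping at the heart of such full-wave FDTD codes can be illustrated in one dimension (a minimal vacuum sketch in normalized units, not the ITER simulation code, which is 2-D and includes a swept-frequency source and a plasma response):

    ```python
    import numpy as np

    def fdtd_1d(n_cells=200, n_steps=300, source_pos=100):
        """Minimal 1-D FDTD (Yee) update for a wave in vacuum.
        Normalized units: c = 1, dx = 1, dt = 0.5 (Courant-stable).
        E and H live on staggered grids and are updated alternately."""
        ez = np.zeros(n_cells)       # electric field at cell centers
        hy = np.zeros(n_cells - 1)   # magnetic field between cells
        dt = 0.5
        for step in range(n_steps):
            hy += dt * np.diff(ez)                # update H from the curl of E
            ez[1:-1] += dt * np.diff(hy)          # update E from the curl of H
            # soft Gaussian source launched near the start of the run
            ez[source_pos] += np.exp(-((step - 30) / 10.0) ** 2)
        return ez
    ```

    A real reflectometry code adds a second dimension, a dispersive plasma term in the E-update, and absorbing boundaries, but the alternating half-step structure is the same.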

  15. Reliability-aware iterative detection scheme (RAID) for distributed IDM space-time codes in relay systems

    NASA Astrophysics Data System (ADS)

    Lenkeit, Florian; Wübben, Dirk; Dekorsy, Armin

    2013-12-01

    In this article, distributed interleave-division multiplexing space-time codes (dIDM-STCs) are applied to multi-user two-hop decode-and-forward (DF) relay networks. When decoding errors at the relays propagate to the destination, severe performance degradations can occur, as the original detection scheme for common IDM-STCs does not take any reliability information about the first hop into account. Here, a novel reliability-aware iterative detection scheme (RAID) for dIDM-STCs is proposed. This new scheme takes the decoding reliability of the relays for each user into account during detection at the destination. Performance evaluations show that the proposed RAID scheme clearly outperforms the original detection scheme, and that in certain scenarios it even outperforms adaptive relaying schemes.

  16. Auditory Sketches: Very Sparse Representations of Sounds Are Still Recognizable

    PubMed Central

    Isnard, Vincent; Taffou, Marine; Viaud-Delmon, Isabelle; Suied, Clara

    2016-01-01

    Sounds in our environment like voices, animal calls or musical instruments are easily recognized by human listeners. Understanding the key features underlying this robust sound recognition is an important question in auditory science. Here, we studied the recognition by human listeners of new classes of sounds: acoustic and auditory sketches, sounds that are severely impoverished but still recognizable. Starting from a time-frequency representation, a sketch is obtained by keeping only sparse elements of the original signal, here, by means of a simple peak-picking algorithm. Two time-frequency representations were compared: a biologically grounded one, the auditory spectrogram, which simulates peripheral auditory filtering, and a simple acoustic spectrogram, based on a Fourier transform. Three degrees of sparsity were also investigated. Listeners were asked to recognize the category to which a sketch sound belongs: singing voices, bird calls, musical instruments, and vehicle engine noises. Results showed that, with the exception of voice sounds, very sparse representations of sounds (10 features, or energy peaks, per second) could be recognized above chance. No clear differences could be observed between the acoustic and the auditory sketches. For the voice sounds, however, a completely different pattern of results emerged, with at-chance or even below-chance recognition performances, suggesting that the important features of the voice, whatever they are, were removed by the sketch process. Overall, these perceptual results were well correlated with a model of auditory distances, based on spectro-temporal excitation patterns (STEPs). This study confirms the potential of these new classes of sounds, acoustic and auditory sketches, to study sound recognition. PMID:26950589
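
    The peak-picking step described in this abstract can be sketched in a few lines (a simplified illustration on a precomputed magnitude spectrogram; the function name and interface are hypothetical, and the study's auditory-spectrogram variant additionally models peripheral filtering): keep only the N largest time-frequency values per second and zero out everything else.

    ```python
    import numpy as np

    def sketch_mask(spec, duration_s, peaks_per_second=10):
        """Sparsify a magnitude time-frequency representation (freq x time)
        by keeping only the largest peaks_per_second * duration_s values.
        Ties at the threshold may keep a few extra bins."""
        n_keep = max(1, int(round(peaks_per_second * duration_s)))
        flat = np.sort(np.abs(spec).ravel())
        thresh = flat[-n_keep]                  # n_keep-th largest magnitude
        return np.where(np.abs(spec) >= thresh, spec, 0.0)
    ```

    At the sparsest level tested in the study (10 features per second), almost the entire representation is zeroed, yet most non-voice categories remained recognizable.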

  17. Axon Guidance in the Auditory System: Multiple Functions of Eph Receptors

    PubMed Central

    Cramer, Karina S.; Gabriele, Mark L.

    2014-01-01

    The neural pathways of the auditory system underlie our ability to detect sounds and to transform amplitude and frequency information into rich and meaningful perception. While it shares some organizational features with other sensory systems, the auditory system has some unique functions that impose special demands on precision in circuit assembly. In particular, the cochlear epithelium creates a frequency map rather than a space map, and specialized pathways extract information on interaural time and intensity differences to permit sound source localization. The assembly of auditory circuitry requires the coordinated function of multiple molecular cues. Eph receptors and their ephrin ligands constitute a large family of axon guidance molecules with developmentally regulated expression throughout the auditory system. Functional studies of Eph/ephrin signaling have revealed important roles at multiple levels of the auditory pathway, from the cochlea to the auditory cortex. These proteins provide graded cues used in establishing tonotopically ordered connections between auditory areas, as well as discrete cues that enable axons to form connections with appropriate postsynaptic partners within a target area. Throughout the auditory system, Eph proteins help to establish patterning in neural pathways during early development. This early targeting, which is further refined with neuronal activity, establishes the precision needed for auditory perception. PMID:25010398

  18. Efficient population coding of naturalistic whisker motion in the ventro-posterior medial thalamus based on precise spike timing

    PubMed Central

    Bale, Michael R.; Ince, Robin A. A.; Santagata, Greta; Petersen, Rasmus S.

    2015-01-01

    The rodent whisker-associated thalamic nucleus (VPM) contains a somatotopic map where whisker representation is divided into distinct neuronal sub-populations, called “barreloids”. Each barreloid projects to its associated cortical barrel column and so forms a gateway for incoming sensory stimuli to the barrel cortex. We aimed to determine how the population of neurons within one barreloid encodes naturalistic whisker motion. In rats, we recorded the extracellular activity of up to nine single neurons within a single barreloid, by implanting silicon probes parallel to the longitudinal axis of the barreloids. We found that play-back of texture-induced whisker motion evoked sparse responses, timed with millisecond precision. At the population level, there was synchronous activity; however, different subsets of neurons were synchronously active at different times. Mutual information between population responses and whisker motion increased near linearly with population size. When normalized to factor out firing rate differences, we found that texture was encoded with greater informational efficiency than white noise. These results indicate that, within each VPM barreloid, there is a rich and efficient population code for naturalistic whisker motion based on precisely timed, population spike patterns. PMID:26441549
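
    The mutual-information quantity used in such analyses can be estimated with a simple plug-in computation over discrete stimulus labels and population response "words" (for example, tuples of binned spike counts). This is a generic sketch with hypothetical names, not the authors' estimator, which also corrects for sampling bias:

    ```python
    import numpy as np
    from collections import Counter

    def mutual_information(stimuli, responses):
        """Plug-in mutual information (bits) between discrete stimulus labels
        and hashable response words, e.g. tuples of binned spike counts."""
        n = len(stimuli)
        p_s = Counter(stimuli)                 # stimulus counts
        p_r = Counter(responses)               # response-word counts
        p_sr = Counter(zip(stimuli, responses))  # joint counts
        mi = 0.0
        for (s, r), c in p_sr.items():
            p_joint = c / n
            # p(s,r) * log2( p(s,r) / (p(s) p(r)) ), with counts rescaled by n
            mi += p_joint * np.log2(p_joint * n * n / (p_s[s] * p_r[r]))
        return mi
    ```

    Repeating the estimate while growing the response word from one to nine neurons is one way to probe whether information scales near linearly with population size, as reported here.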

  19. Real-time photoacoustic and ultrasound dual-modality imaging system facilitated with graphics processing unit and code parallel optimization

    NASA Astrophysics Data System (ADS)

    Yuan, Jie; Xu, Guan; Yu, Yao; Zhou, Yu; Carson, Paul L.; Wang, Xueding; Liu, Xiaojun

    2013-08-01

    Photoacoustic tomography (PAT) offers structural and functional imaging of living biological tissue with highly sensitive optical absorption contrast and excellent spatial resolution comparable to medical ultrasound (US) imaging. We report the development of a fully integrated PAT and US dual-modality imaging system, which performs signal scanning, image reconstruction, and display for both photoacoustic (PA) and US imaging all in a truly real-time manner. The back-projection (BP) algorithm for PA image reconstruction is optimized to reduce the computational cost and facilitate parallel computation on a state-of-the-art graphics processing unit (GPU) card. For the first time, PAT and US imaging of the same object can be conducted simultaneously and continuously, at a real-time frame rate, presently limited by the laser repetition rate of 10 Hz. Noninvasive PAT and US imaging of human peripheral joints in vivo were achieved, demonstrating the satisfactory image quality realized with this system. Another experiment, simultaneous PAT and US imaging of contrast agent flowing through an artificial vessel, was conducted to verify the performance of this system for imaging fast biological events. The GPU-based image reconstruction software code for this dual-modality system is open source and available for download from http://sourceforge.net/projects/patrealtime.
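
    The back-projection reconstruction that this system parallelizes on the GPU amounts, in its simplest delay-and-sum form, to the following (a naive CPU sketch with hypothetical names and nearest-sample interpolation, not the optimized code released by the authors):

    ```python
    import numpy as np

    def backprojection(rf, sensor_xy, grid_xy, fs, c=1540.0):
        """Naive delay-and-sum back-projection.
        For every image pixel, sum each channel's RF sample at the acoustic
        time-of-flight from pixel to sensor.
        rf: (n_sensors, n_samples) PA signals; sensor_xy, grid_xy: (n, 2)
        positions in meters; fs: sampling rate in Hz; c: sound speed in m/s."""
        img = np.zeros(len(grid_xy))
        n_samples = rf.shape[1]
        for s, (sx, sy) in enumerate(sensor_xy):
            d = np.hypot(grid_xy[:, 0] - sx, grid_xy[:, 1] - sy)
            idx = np.clip((d / c * fs).astype(int), 0, n_samples - 1)
            img += rf[s, idx]                 # nearest-sample delay-and-sum
        return img
    ```

    Every pixel is computed independently of the others, which is why the algorithm maps so naturally onto one GPU thread per pixel and achieves the real-time frame rates reported here.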

  20. A real-time photoacoustic and ultrasound dual-modality imaging system facilitated with GPU and code parallel optimization

    NASA Astrophysics Data System (ADS)

    Yuan, Jie; Xu, Guan; Yu, Yao; Zhou, Yu; Carson, Paul L.; Wang, Xueding; Liu, Xiaojun

    2014-03-01

    Photoacoustic tomography (PAT) offers structural and functional imaging of living biological tissue with highly sensitive optical absorption contrast and excellent spatial resolution comparable to medical ultrasound (US) imaging. We report the development of a fully integrated PAT and US dual-modality imaging system, which performs signal scanning, image reconstruction and display for both photoacoustic (PA) and US imaging all in a truly real-time manner. The backprojection (BP) algorithm for PA image reconstruction is optimized to reduce the computational cost and facilitate parallel computation on a state-of-the-art graphics processing unit (GPU) card. For the first time, PAT and US imaging of the same object can be conducted simultaneously and continuously, at a real-time frame rate, presently limited by the laser repetition rate of 10 Hz. Noninvasive PAT and US imaging of human peripheral joints in vivo were achieved, demonstrating the satisfactory image quality realized with this system. Another experiment, simultaneous PAT and US imaging of contrast agent flowing through an artificial vessel, was conducted to verify the performance of this system for imaging fast biological events. The GPU-based image reconstruction software code for this dual-modality system is open source and available for download from http://sourceforge.net/projects/patrealtime.