Science.gov

Sample records for auditory time coding

  1. Low Somatic Sodium Conductance Enhances Action Potential Precision in Time-Coding Auditory Neurons.

    PubMed

    Yang, Yang; Ramamurthy, Bina; Neef, Andreas; Xu-Friedman, Matthew A

    2016-11-23

    Auditory nerve fibers encode sounds in the precise timing of action potentials (APs), which is used for such computations as sound localization. Timing information is relayed through several cell types in the auditory brainstem that share an unusual property: their APs are not overshooting, suggesting that the cells have very low somatic sodium conductance (gNa). However, it is not clear how gNa influences temporal precision. We addressed this by comparing bushy cells (BCs) in the mouse cochlear nucleus with T-stellate cells (SCs), which do have normal overshooting APs. BCs play a central role in both relaying and refining precise timing information from the auditory nerve, whereas SCs discard precise timing information and encode the envelope of sound amplitude. Nucleated-patch recording at near-physiological temperature indicated that the Na current density was 62% lower in BCs, and the voltage dependence of gNa inactivation was 13 mV hyperpolarized compared with SCs. We endowed BCs with SC-like gNa using two-electrode dynamic clamp and found that synaptic activity at physiologically relevant rates elicited APs with significantly lower probability, through increased activation of delayed rectifier channels. In addition, for two near-simultaneous synaptic inputs, the window of coincidence detection widened significantly with increasing gNa, indicating that refinement of temporal information by BCs is degraded by gNa. Thus, reduced somatic gNa appears to be an adaptation for enhancing fidelity and precision in time-coding neurons.
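
    The coincidence-detection window measured above can be illustrated with a toy model (not the dynamic-clamp experiment itself): two EPSPs must sum above a spike threshold, and the largest input separation that still reaches threshold defines the window. All waveform and threshold values here are illustrative, and the threshold parameter does not map directly onto gNa.

```python
import math

def epsp(t, t0, amp=1.0, tau=0.5):
    """Alpha-function EPSP starting at t0 (times in ms); peak height = amp."""
    if t < t0:
        return 0.0
    x = (t - t0) / tau
    return amp * x * math.exp(1.0 - x)

def reaches_threshold(dt, theta):
    """True if two EPSPs separated by dt ms sum above threshold theta."""
    ts = [0.02 * i for i in range(500)]  # sample 0..10 ms
    return any(epsp(t, 0.0) + epsp(t, dt) >= theta for t in ts)

def coincidence_window(theta):
    """Largest input separation (ms) that still triggers a spike, or None."""
    dts = [0.05 * i for i in range(100)]  # candidate separations 0..5 ms
    passing = [dt for dt in dts if reaches_threshold(dt, theta)]
    return max(passing) if passing else None
```

    In this sketch a higher effective threshold narrows the window and a lower one widens it, so a manipulation that changes the distance to threshold (as the added gNa did, via delayed rectifier recruitment) changes temporal selectivity.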

  2. Are interaural time and level differences represented by independent or integrated codes in the human auditory cortex?

    PubMed

    Edmonds, Barrie A; Krumbholz, Katrin

    2014-02-01

    Sound localization is important for orienting and focusing attention and for segregating sounds from different sources in the environment. In humans, horizontal sound localization mainly relies on interaural differences in sound arrival time and sound level. Despite their perceptual importance, the neural processing of interaural time and level differences (ITDs and ILDs) remains poorly understood. Animal studies suggest that, in the brainstem, ITDs and ILDs are processed independently by different specialized circuits. The aim of the current study was to investigate whether, at higher processing levels, they remain independent or are integrated into a common code of sound laterality. For that, we measured late auditory cortical potentials in response to changes in sound lateralization elicited by perceptually matched changes in ITD and/or ILD. The responses to the ITD and ILD changes exhibited significant morphological differences. At the same time, however, they originated from overlapping areas of the cortex and showed clear evidence for functional coupling. These results suggest that the auditory cortex contains an integrated code of sound laterality, but also retains independent information about ITD and ILD cues. This cue-related information might be used to assess how consistent the cues are, and thus, how likely they would have arisen from the same source.

  3. Conserved mechanisms of vocalization coding in mammalian and songbird auditory midbrain

    PubMed Central

    Portfors, Christine V.

    2013-01-01

    The ubiquity of social vocalization among animals provides the opportunity to identify conserved mechanisms of auditory processing that subserve vocal communication. Identifying auditory coding properties that are shared across vocal communicators will provide insight into how human auditory processing leads to speech perception. Here, we compare auditory response properties and neural coding of social vocalizations in auditory midbrain neurons of mammalian and avian vocal communicators. The auditory midbrain is a nexus of auditory processing because it receives and integrates information from multiple parallel pathways and provides the ascending auditory input to the thalamus. The auditory midbrain is also the first region in the ascending auditory system where neurons show complex tuning properties that are correlated with the acoustics of social vocalizations. Single unit studies in mice, bats and zebra finches reveal shared principles of auditory coding including tonotopy, excitatory and inhibitory interactions that shape responses to vocal signals, nonlinear response properties that are important for auditory coding of social vocalizations and modulation tuning. Additionally, single neuron responses in the mouse and songbird midbrain are reliable, selective for specific syllables, and rely on spike timing for neural discrimination of distinct vocalizations. We propose that future research on auditory coding of vocalizations in mouse and songbird midbrain neurons adopt similar experimental and analytical approaches so that conserved principles of vocalization coding may be distinguished from those that are specialized for each species. PMID:23726970

  4. Conserved mechanisms of vocalization coding in mammalian and songbird auditory midbrain.

    PubMed

    Woolley, Sarah M N; Portfors, Christine V

    2013-11-01

    The ubiquity of social vocalizations among animals provides the opportunity to identify conserved mechanisms of auditory processing that subserve communication. Identifying auditory coding properties that are shared across vocal communicators will provide insight into how human auditory processing leads to speech perception. Here, we compare auditory response properties and neural coding of social vocalizations in auditory midbrain neurons of mammalian and avian vocal communicators. The auditory midbrain is a nexus of auditory processing because it receives and integrates information from multiple parallel pathways and provides the ascending auditory input to the thalamus. The auditory midbrain is also the first region in the ascending auditory system where neurons show complex tuning properties that are correlated with the acoustics of social vocalizations. Single unit studies in mice, bats and zebra finches reveal shared principles of auditory coding including tonotopy, excitatory and inhibitory interactions that shape responses to vocal signals, nonlinear response properties that are important for auditory coding of social vocalizations and modulation tuning. Additionally, single neuron responses in the mouse and songbird midbrain are reliable, selective for specific syllables, and rely on spike timing for neural discrimination of distinct vocalizations. We propose that future research on auditory coding of vocalizations in mouse and songbird midbrain neurons adopt similar experimental and analytical approaches so that conserved principles of vocalization coding may be distinguished from those that are specialized for each species. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives".
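
    Spike-timing-based discrimination of distinct vocalizations, as described above, is often assessed with a template classifier. A minimal sketch (binned spike-count vectors and nearest-template assignment; the syllable labels and spike times, in integer milliseconds, are hypothetical):

```python
def spike_vector(spike_times_ms, t_max_ms=1000, bin_ms=50):
    """Bin a spike train (integer ms) into a spike-count vector."""
    v = [0] * (t_max_ms // bin_ms)
    for s in spike_times_ms:
        if 0 <= s < t_max_ms:
            v[s // bin_ms] += 1
    return v

def classify(trial, templates, **kw):
    """Assign a trial to the template with the nearest count vector."""
    tv = spike_vector(trial, **kw)
    def d2(label):
        return sum((a - b) ** 2
                   for a, b in zip(tv, spike_vector(templates[label], **kw)))
    return min(templates, key=d2)

# hypothetical single-trial responses to two song syllables
templates = {"syllable_A": [100, 200, 300], "syllable_B": [500, 600, 700]}
```

    Shrinking the bin width tests how much of the discrimination depends on precise spike timing rather than total spike count.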

  5. How the owl resolves auditory coding ambiguity

    PubMed Central

    Mazer, James A.

    1998-01-01

    The barn owl (Tyto alba) uses interaural time difference (ITD) cues to localize sounds in the horizontal plane. Low-order binaural auditory neurons with sharp frequency tuning act as narrow-band coincidence detectors; such neurons respond equally well to sounds with a particular ITD and its phase equivalents and are said to be phase ambiguous. Higher-order neurons with broad frequency tuning are unambiguously selective for single ITDs in response to broad-band sounds and show little or no response to phase equivalents. Selectivity for single ITDs is thought to arise from the convergence of parallel, narrow-band frequency channels that originate in the cochlea. ITD tuning to variable bandwidth stimuli was measured in higher-order neurons of the owl’s inferior colliculus to examine the rules that govern the relationship between frequency channel convergence and the resolution of phase ambiguity. Ambiguity decreased as stimulus bandwidth increased, reaching a minimum at 2–3 kHz. Two independent mechanisms appear to contribute to the elimination of ambiguity: one suppressive and one facilitative. The integration of information carried by parallel, distributed processing channels is a common theme of sensory processing that spans both modality and species boundaries. The principles underlying the resolution of phase ambiguity and frequency channel convergence in the owl may have implications for other sensory systems, such as electrolocation in electric fish and the computation of binocular disparity in the avian and mammalian visual systems. PMID:9724807
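
    The effect of frequency-channel convergence on phase ambiguity can be sketched with a linear toy model: each narrow-band channel contributes a cosine ITD tuning curve, and a broadband neuron averages across channels, so side peaks at phase equivalents cancel as bandwidth grows. Real midbrain neurons add the suppressive and facilitative nonlinearities described above, which this sketch omits; all parameters are illustrative.

```python
import math

def itd_response(itd_us, best_itd_us, center_hz, bw_hz, n_ch=21):
    """Average of cosine ITD tuning curves across n_ch channels spanning bw_hz."""
    if bw_hz <= 0:
        freqs = [center_hz]
    else:
        freqs = [center_hz - bw_hz / 2 + bw_hz * k / (n_ch - 1)
                 for k in range(n_ch)]
    d = (itd_us - best_itd_us) * 1e-6  # microseconds -> seconds
    return sum(math.cos(2 * math.pi * f * d) for f in freqs) / len(freqs)

def ambiguity(best_itd_us=100, center_hz=5000, bw_hz=0.0):
    """Largest side-peak response relative to the main peak (1.0 = fully ambiguous)."""
    itds = range(-1000, 1001, 5)
    main = max(itd_response(i, best_itd_us, center_hz, bw_hz) for i in itds)
    half_period = 0.5e6 / center_hz  # half the spacing of phase equivalents
    side = max(itd_response(i, best_itd_us, center_hz, bw_hz)
               for i in itds if abs(i - best_itd_us) > half_period)
    return side / main
```

    With zero bandwidth the tuning is fully periodic (ambiguity near 1); widening the band progressively suppresses the phase equivalents, qualitatively matching the bandwidth dependence reported above.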

  6. Temporal asymmetries in auditory coding and perception reflect multi-layered nonlinearities

    PubMed Central

    Deneux, Thomas; Kempf, Alexandre; Daret, Aurélie; Ponsot, Emmanuel; Bathellier, Brice

    2016-01-01

    Sound recognition relies not only on spectral cues, but also on temporal cues, as demonstrated by the profound impact of time reversals on perception of common sounds. To address the coding principles underlying such auditory asymmetries, we recorded a large sample of auditory cortex neurons using two-photon calcium imaging in awake mice, while playing sounds ramping up or down in intensity. We observed clear asymmetries in cortical population responses, including stronger cortical activity for up-ramping sounds, which matches perceptual saliency assessments in mice and previous measures in humans. Analysis of cortical activity patterns revealed that auditory cortex implements a map of spatially clustered neuronal ensembles, detecting specific combinations of spectral and intensity modulation features. Comparing different models, we show that cortical responses result from multi-layered nonlinearities, which, contrary to standard receptive field models of auditory cortex function, build divergent representations of sounds with similar spectral content, but different temporal structure. PMID:27580932

  7. Coding of melodic gestalt in human auditory cortex.

    PubMed

    Schindler, Andreas; Herdener, Marcus; Bartels, Andreas

    2013-12-01

    The perception of a melody is invariant to the absolute properties of its constituting notes, but depends on the relation between them: the melody's relative pitch profile. In fact, a melody's "Gestalt" is recognized regardless of the instrument or key used to play it. Pitch processing in general is assumed to occur at the level of the auditory cortex. However, it is unknown whether early auditory regions are able to encode pitch sequences integrated over time (i.e., melodies) and whether the resulting representations are invariant to specific keys. Here, we presented participants with different melodies composed of the same 4 harmonic pitches during functional magnetic resonance imaging recordings. Additionally, we played the same melodies transposed in different keys and on different instruments. We found that melodies were invariantly represented by their blood oxygen level-dependent activation patterns in primary and secondary auditory cortices across instruments, and also across keys. Our findings extend common hierarchical models of auditory processing by showing that melodies are encoded independently of absolute pitch and based on their relative pitch profile as early as the primary auditory cortex.
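
    The key-invariance of a relative pitch profile can be made concrete: the sequence of successive intervals is unchanged by transposition. A minimal sketch using MIDI note numbers (an assumption for illustration; the study used harmonic pitches):

```python
def interval_profile(midi_notes):
    """A melody's relative pitch profile: successive intervals in semitones."""
    return [b - a for a, b in zip(midi_notes, midi_notes[1:])]

melody = [60, 64, 67, 64]             # C4 E4 G4 E4
transposed = [n + 5 for n in melody]  # the same melody in another key
```

    The two note sequences differ, yet their interval profiles are identical, which is exactly the representation the fMRI patterns appear to carry.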

  8. Diverse cortical codes for scene segmentation in primate auditory cortex.

    PubMed

    Malone, Brian J; Scott, Brian H; Semple, Malcolm N

    2015-04-01

    The temporal coherence of amplitude fluctuations is a critical cue for segmentation of complex auditory scenes. The auditory system must accurately demarcate the onsets and offsets of acoustic signals. We explored how and how well the timing of onsets and offsets of gated tones are encoded by auditory cortical neurons in awake rhesus macaques. Temporal features of this representation were isolated by presenting otherwise identical pure tones of differing durations. Cortical response patterns were diverse, including selective encoding of onset and offset transients, tonic firing, and sustained suppression. Spike train classification methods revealed that many neurons robustly encoded tone duration despite substantial diversity in the encoding process. Excellent discrimination performance was achieved by neurons whose responses were primarily phasic at tone offset and by those that responded robustly while the tone persisted. Although diverse cortical response patterns converged on effective duration discrimination, this diversity significantly constrained the utility of decoding models referenced to a spiking pattern averaged across all responses or averaged within the same response category. Using maximum likelihood-based decoding models, we demonstrated that the spike train recorded in a single trial could support direct estimation of stimulus onset and offset. Comparisons between different decoding models established the substantial contribution of bursts of activity at sound onset and offset to demarcating the temporal boundaries of gated tones. Our results indicate that relatively few neurons suffice to provide temporally precise estimates of such auditory "edges," particularly for models that assume and exploit the heterogeneity of neural responses in awake cortex.
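
    The maximum likelihood-based decoding of onset and offset can be sketched as a grid search over tone boundaries under a piecewise-homogeneous Poisson model. The rates, grid step, and spike train below are illustrative, not the paper's fitted model:

```python
import math

def loglik(spikes, t_total, on, off, r0=5.0, r1=60.0):
    """Poisson log-likelihood of spike times if the tone spans [on, off)."""
    ll = sum(math.log(r1 if on <= s < off else r0) for s in spikes)
    return ll - (r1 * (off - on) + r0 * (t_total - (off - on)))

def ml_boundaries(spikes, t_total, step=0.02):
    """Grid search for the onset/offset pair maximizing the likelihood."""
    grid = [i * step for i in range(int(t_total / step) + 1)]
    pairs = [(on, off) for on in grid for off in grid if off > on]
    return max(pairs, key=lambda p: loglik(spikes, t_total, p[0], p[1]))
```

    Applied to a single trial with sparse background firing and a dense burst during the tone, the estimator recovers the stimulus boundaries, mirroring the single-trial onset/offset estimation described above.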

  9. Decreasing auditory Simon effects across reaction time distributions.

    PubMed

    Xiong, Aiping; Proctor, Robert W

    2016-01-01

    The Simon effect for left-right visual stimuli previously has been shown to decrease across the reaction time (RT) distribution. This decrease has been attributed to automatic activation of the corresponding response, which then dissipates over time. In contrast, for left-right tone stimuli, the Simon effect has not been found to decrease across the RT distribution but instead tends to increase. It has been proposed that automatic activation occurs through visuomotor information transmission, whereas the auditory Simon effect reflects cognitive coding interference and not automatic activation. In 4 experiments, we examined distributions of the auditory Simon effect for RT, percentage error (PE), and an inverse efficiency score [IES = RT/(1 - PE)] as a function of tone frequency and duration to determine whether the activation-dissipation account is also applicable to auditory stimuli. Consistent decreasing functions were found for the RT Simon effect distribution with short-duration tones of low frequency and for the PE and IES Simon effect distributions for all durations and frequency sets. Together, these findings provide robust evidence that left and right auditory stimuli also produce decreasing Simon effect distribution functions suggestive of automatic activation and dissipation of the corresponding response.
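
    The distributional analysis above can be sketched numerically: compute the Simon effect per RT quartile (a delta plot) and the inverse efficiency score IES = RT/(1 - PE). The trial data below are made up for illustration, and the quartile helper assumes trial counts divisible by four:

```python
def ies(rts, errors):
    """Inverse efficiency score: mean RT divided by accuracy (1 - PE)."""
    pe = sum(errors) / len(errors)
    return (sum(rts) / len(rts)) / (1.0 - pe)

def quartile_simon_effect(congruent_rts, incongruent_rts):
    """Simon effect (incongruent - congruent mean RT, ms) per RT quartile."""
    def quartile_means(xs):
        xs = sorted(xs)
        n = len(xs) // 4
        return [sum(xs[i * n:(i + 1) * n]) / n for i in range(4)]
    return [i - c for c, i in zip(quartile_means(congruent_rts),
                                  quartile_means(incongruent_rts))]
```

    A decreasing sequence of quartile effects is the signature interpreted above as automatic activation followed by dissipation.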

  10. Improving Hearing Performance Using Natural Auditory Coding Strategies

    NASA Astrophysics Data System (ADS)

    Rattay, Frank

    Sound transfer from the human ear to the brain is based on three quite different neural coding principles, by which the continuous temporal auditory signal is sent as binary code, in excellent quality, via 30,000 nerve fibers per ear. Cochlear implants are well-accepted neural prostheses for people with sensory hearing loss, but currently the devices are inspired only by the tonotopic principle. According to this principle, every sound frequency is mapped to a specific place along the cochlea. By electrical stimulation, the frequency content of the acoustic signal is distributed via a few contacts of the prosthesis to corresponding places and generates spikes there. In contrast to the natural situation, the artificially evoked information content in the auditory nerve is quite poor, especially because the richness of the temporal fine structure of the neural pattern is replaced by a firing pattern that is strongly synchronized with an artificial cycle duration. Improvement in hearing performance is expected by involving more of the ingenious strategies developed during evolution.
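
    The tonotopic principle underlying current implants can be sketched with Greenwood's place-frequency map for the human cochlea (parameters quoted from the standard human fit); the electrode count and the fraction of the cochlea spanned by the array are hypothetical values for illustration.

```python
A, ALPHA, K = 165.4, 2.1, 0.88  # Greenwood's parameters for the human cochlea

def place_to_freq(x):
    """Greenwood place-frequency map; x = fractional distance from the apex."""
    return A * (10.0 ** (ALPHA * x) - K)

def electrode_bands(n_electrodes, x_apical=0.2, x_basal=0.9):
    """Split a (hypothetical) electrode-array span into tonotopic analysis bands."""
    span = x_basal - x_apical
    edges = [x_apical + span * i / n_electrodes for i in range(n_electrodes + 1)]
    return [(place_to_freq(lo), place_to_freq(hi))
            for lo, hi in zip(edges, edges[1:])]
```

    Each band of the acoustic spectrum is routed to the contact at the matching cochlear place; the abstract's point is that this mapping alone discards the temporal fine structure of natural spike trains.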

  11. IRIG Serial Time Code Formats

    DTIC Science & Technology

    2016-08-01

    TELECOMMUNICATIONS AND TIMING GROUP, IRIG STANDARD 200-16: IRIG Serial Time Code Formats (RCC 200-16, August 2016). DISTRIBUTION A: APPROVED FOR ... ARNOLD ENGINEERING DEVELOPMENT COMPLEX ... NATIONAL AERONAUTICS AND SPACE ADMINISTRATION.

  12. Auditory spatial attention using interaural time differences.

    PubMed

    Sach, A J; Hill, N I; Bailey, P J

    2000-04-01

    Previous probe-signal studies of auditory spatial attention have shown faster responses to sounds at an expected versus an unexpected location, making no distinction between the use of interaural time difference (ITD) cues and interaural-level difference cues. In 5 experiments, performance on a same-different spatial discrimination task was used in place of the reaction time metric, and sounds, presented over headphones, were lateralized only by an ITD. In all experiments, performance was better for signals lateralized on the expected side of the head, supporting the conclusion that ITDs can be used as a basis for covert orienting. The performance advantage generalized to all sounds within the spatial focus and was not dissipated by a trial-by-trial rove in frequency or by a rove in spectral profile. Successful use by the listeners of a cross-modal, centrally positioned visual cue provided evidence for top-down attentional control.

  13. Interdependence of spatial and temporal coding in the auditory midbrain.

    PubMed

    Koch, U; Grothe, B

    2000-04-01

    To date, most physiological studies that investigated binaural auditory processing have addressed the topic rather exclusively in the context of sound localization. However, there is strong psychophysical evidence that binaural processing serves more than only sound localization. This raises the question of how binaural processing of spatial cues interacts with cues important for feature detection. The temporal structure of a sound is one such feature important for sound recognition. As a first approach, we investigated the influence of binaural cues on temporal processing in the mammalian auditory system. Here, we present evidence that binaural cues, namely interaural intensity differences (IIDs), have profound effects on filter properties for stimulus periodicity of auditory midbrain neurons in the echolocating big brown bat, Eptesicus fuscus. Our data indicate that these effects are partially due to changes in strength and timing of binaural inhibitory inputs. We measured filter characteristics for the periodicity (modulation frequency) of sinusoidally frequency modulated sounds (SFM) under different binaural conditions. As criteria, we used 50% filter cutoff frequencies of modulation transfer functions based on discharge rate as well as synchronicity of discharge to the sound envelope. The binaural conditions were contralateral stimulation only, equal stimulation at both ears (IID = 0 dB), and more intense at the ipsilateral ear (IID = -20, -30 dB). In 32% of neurons, the range of modulation frequencies the neurons responded to changed considerably comparing monaural and binaural (IID = 0) stimulation. Moreover, in approximately 50% of neurons the range of modulation frequencies was narrower when the ipsilateral ear was favored (IID = -20) compared with equal stimulation at both ears (IID = 0). In approximately 10% of the neurons synchronization differed when comparing different binaural cues. Blockade of the GABAergic or glycinergic inputs to the cells recorded
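
    The 50% filter-cutoff criterion applied above to rate-based modulation transfer functions can be sketched as follows; the MTF values in the test are illustrative, not recorded data:

```python
def cutoff_50(mtf):
    """Upper 50% cutoff of a rate-based modulation transfer function.

    mtf: list of (modulation_freq_hz, spike_rate) pairs sorted by frequency.
    Returns the frequency where the rate falls below half the peak,
    linearly interpolated between measured points."""
    peak = max(r for _, r in mtf)
    half = 0.5 * peak
    for (f0, r0), (f1, r1) in zip(mtf, mtf[1:]):
        if r0 >= half > r1:  # falling edge crosses the 50% criterion
            return f0 + (f1 - f0) * (r0 - half) / (r0 - r1)
    return mtf[-1][0]  # no falling crossing within the measured range
```

    Comparing this cutoff across binaural conditions (monaural, IID = 0, IID = -20 dB) quantifies the narrowing of periodicity tuning reported above.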

  14. Auditory Speech Perception Tests in Relation to the Coding Strategy in Cochlear Implant

    PubMed Central

    Bazon, Aline Cristine; Mantello, Erika Barioni; Gonçales, Alina Sanches; Isaac, Myriam de Lima; Hyppolito, Miguel Angelo; Reis, Ana Cláudia Mirândola Barbosa

    2015-01-01

    Introduction: The objective of the evaluation of auditory perception of cochlear implant users is to determine how the acoustic signal is processed, leading to the recognition and understanding of sound. Objective: To investigate the differences in the process of auditory speech perception in individuals with postlingual hearing loss wearing a cochlear implant, using two different speech coding strategies, and to analyze speech perception and handicap perception in relation to the strategy used. Methods: This is a descriptive, prospective, cross-sectional cohort study. We selected ten cochlear implant users, who were characterized by hearing threshold and by the application of speech perception tests and of the Hearing Handicap Inventory for Adults. Results: There was no significant difference when comparing the variables subject age, age at acquisition of hearing loss, etiology, time of hearing deprivation, time of cochlear implant use, and mean hearing threshold with the cochlear implant with the shift in speech coding strategy. There was no relationship between lack of handicap perception and improvement in speech perception with either speech coding strategy. Conclusion: There was no significant difference between the strategies evaluated, and no relation was observed between them and the variables studied. PMID:27413409

  15. Intensity-Invariant Coding in the Auditory System

    PubMed Central

    Barbour, Dennis L.

    2011-01-01

    The auditory system faithfully represents sufficient details from sound sources such that downstream cognitive processes are capable of acting upon this information effectively even in the face of signal uncertainty, degradation or interference. This robust sound source representation leads to an invariance in perception vital for animals to interact effectively with their environment. Due to unique nonlinearities in the cochlea, sound representations early in the auditory system exhibit a large amount of variability as a function of stimulus intensity. In other words, changes in stimulus intensity, such as for sound sources at differing distances, create a unique challenge for the auditory system to encode sounds invariantly across the intensity dimension. This challenge and some strategies available to sensory systems to eliminate intensity as an encoding variable are discussed, with a special emphasis upon sound encoding. PMID:21540053

  16. Norepinephrine Modulates Coding of Complex Vocalizations in the Songbird Auditory Cortex Independent of Local Neuroestrogen Synthesis.

    PubMed

    Ikeda, Maaya Z; Jeon, Sung David; Cowell, Rosemary A; Remage-Healey, Luke

    2015-06-24

    The catecholamine norepinephrine plays a significant role in auditory processing. Most studies to date have examined the effects of norepinephrine on the neuronal response to relatively simple stimuli, such as tones and calls. It is less clear how norepinephrine shapes the detection of complex syntactical sounds, as well as the coding properties of sensory neurons. Songbirds provide an opportunity to understand how auditory neurons encode complex, learned vocalizations, and the potential role of norepinephrine in modulating the neuronal computations for acoustic communication. Here, we infused norepinephrine into the zebra finch auditory cortex and performed extracellular recordings to study the modulation of song representations in single neurons. Consistent with its proposed role in enhancing signal detection, norepinephrine decreased spontaneous activity and firing during stimuli, yet it significantly enhanced the auditory signal-to-noise ratio. These effects were all mimicked by clonidine, an α-2 receptor agonist. Moreover, a pattern classifier analysis indicated that norepinephrine enhanced the ability of single neurons to accurately encode complex auditory stimuli. Because neuroestrogens are also known to enhance auditory processing in the songbird brain, we tested the hypothesis that norepinephrine actions depend on local estrogen synthesis. Neither norepinephrine nor adrenergic receptor antagonist infusion into the auditory cortex had detectable effects on local estradiol levels. Moreover, pretreatment with fadrozole, a specific aromatase inhibitor, did not block norepinephrine's neuromodulatory effects. Together, these findings indicate that norepinephrine enhances signal detection and information encoding for complex auditory stimuli by suppressing spontaneous "noise" activity and that these actions are independent of local neuroestrogen synthesis.
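
    The abstract's key observation, lower firing overall yet a higher signal-to-noise ratio, follows directly from one common rate-based SNR definition (a simplification of the authors' analysis; the firing rates below are hypothetical):

```python
def snr(driven_rate, spont_rate):
    """Stimulus-evoked activity above baseline, relative to baseline firing."""
    return (driven_rate - spont_rate) / spont_rate

# hypothetical rates (spikes/s): norepinephrine suppresses spontaneous
# "noise" proportionally more than stimulus-driven firing
before = snr(20.0, 10.0)
after = snr(15.0, 5.0)
```

    Suppressing both rates, but baseline more strongly, raises the ratio even though absolute firing falls, which is the signal-detection logic invoked above.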

  17. Differential coding of conspecific vocalizations in the ventral auditory cortical stream.

    PubMed

    Fukushima, Makoto; Saunders, Richard C; Leopold, David A; Mishkin, Mortimer; Averbeck, Bruno B

    2014-03-26

    The mammalian auditory cortex integrates spectral and temporal acoustic features to support the perception of complex sounds, including conspecific vocalizations. Here we investigate coding of vocal stimuli in different subfields in macaque auditory cortex. We simultaneously measured auditory evoked potentials over a large swath of primary and higher order auditory cortex along the supratemporal plane in three animals, using chronically implanted high-density microelectrocorticographic arrays. To evaluate the capacity of neural activity to discriminate individual stimuli in these high-dimensional datasets, we applied a regularized multivariate classifier to evoked potentials to conspecific vocalizations. We found a gradual decrease in the level of overall classification performance along the caudal to rostral axis. Furthermore, the performance in the caudal sectors was similar across individual stimuli, whereas the performance in the rostral sectors significantly differed for different stimuli. Moreover, the information about vocalizations in the caudal sectors was similar to the information about synthetic stimuli that contained only the spectral or temporal features of the original vocalizations. In the rostral sectors, however, the classification for vocalizations was significantly better than that for the synthetic stimuli, suggesting that conjoined spectral and temporal features were necessary to explain differential coding of vocalizations in the rostral areas. We also found that this coding in the rostral sector was carried primarily in the theta frequency band of the response. These findings illustrate a progression in neural coding of conspecific vocalizations along the ventral auditory pathway.

  18. Sensorineural hearing loss amplifies neural coding of envelope information in the central auditory system of chinchillas.

    PubMed

    Zhong, Ziwei; Henry, Kenneth S; Heinz, Michael G

    2014-03-01

    People with sensorineural hearing loss often have substantial difficulty understanding speech under challenging listening conditions. Behavioral studies suggest that reduced sensitivity to the temporal structure of sound may be responsible, but underlying neurophysiological pathologies are incompletely understood. Here, we investigate the effects of noise-induced hearing loss on coding of envelope (ENV) structure in the central auditory system of anesthetized chinchillas. ENV coding was evaluated noninvasively using auditory evoked potentials recorded from the scalp surface in response to sinusoidally amplitude modulated tones with carrier frequencies of 1, 2, 4, and 8 kHz and a modulation frequency of 140 Hz. Stimuli were presented in quiet and in three levels of white background noise. The latency of scalp-recorded ENV responses was consistent with generation in the auditory midbrain. Hearing loss amplified neural coding of ENV at carrier frequencies of 2 kHz and above. This result may reflect enhanced ENV coding from the periphery and/or an increase in the gain of central auditory neurons. In contrast to expectations, hearing loss was not associated with a stronger adverse effect of increasing masker intensity on ENV coding. The exaggerated neural representation of ENV information shown here at the level of the auditory midbrain helps to explain previous findings of enhanced sensitivity to amplitude modulation in people with hearing loss under some conditions. Furthermore, amplified ENV coding may potentially contribute to speech perception problems in people with cochlear hearing loss by acting as a distraction from more salient acoustic cues, particularly in fluctuating backgrounds.
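
    Envelope coding strength in scalp recordings like these is typically quantified as the response amplitude at the modulation frequency. A single-bin DFT sketch, with a synthetic test signal and an assumed sampling rate:

```python
import math

def efr_amplitude(signal, fs, f_mod):
    """Amplitude of the envelope-following response at f_mod (single-bin DFT)."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * f_mod * i / fs)
             for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * f_mod * i / fs)
             for i, s in enumerate(signal))
    return 2.0 * math.hypot(re, im) / n

fs = 10000  # assumed sampling rate (Hz)
sig = [math.sin(2 * math.pi * 140.0 * i / fs) for i in range(fs)]  # 1 s at 140 Hz
```

    Comparing this amplitude between normal-hearing and hearing-impaired animals, across carrier frequencies and masker levels, is the kind of measurement the ENV-coding results above summarize.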

  19. Predictive coding of multisensory timing

    PubMed Central

    Shi, Zhuanghua; Burr, David

    2016-01-01

    The sense of time is foundational for perception and action, yet it frequently departs significantly from physical time. In this paper we review recent progress on temporal contextual effects, multisensory temporal integration, temporal recalibration, and related computational models. We suggest that subjective time arises from minimizing prediction errors and adaptive recalibration, which can be unified within predictive coding, a framework rooted in Helmholtz’s ‘perception as inference’. PMID:27695705
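
    The idea that subjective time arises from minimizing prediction errors can be sketched as a two-step model: each percept fuses the current prediction with the sensory measurement, and the prediction is then recalibrated by a fraction of the prediction error. The weights, learning rate, and initial prediction below are arbitrary illustrations, not fitted values.

```python
def perceive_durations(durations, w_prior=0.3, rate=0.5, prediction=0.5):
    """Perceived duration of each stimulus in a sequence (seconds).

    Percept = weighted fusion of prediction and measurement; the prediction
    is then updated toward the measurement (error-driven recalibration)."""
    percepts = []
    for d in durations:
        percepts.append(w_prior * prediction + (1.0 - w_prior) * d)
        prediction += rate * (d - prediction)  # minimize prediction error
    return percepts
```

    This minimal model reproduces two signatures reviewed above: central tendency (short durations are overestimated, long ones underestimated) and recalibration (repeated exposure drags percepts toward the recent stimulus statistics).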

  20. Burst Firing is a Neural Code in an Insect Auditory System

    PubMed Central

    Eyherabide, Hugo G.; Rokem, Ariel; Herz, Andreas V. M.; Samengo, Inés

    2008-01-01

    Various classes of neurons alternate between high-frequency discharges and silent intervals. This phenomenon is called burst firing. To analyze burst activity in an insect system, grasshopper auditory receptor neurons were recorded in vivo for several distinct stimulus types. The experimental data show that both burst probability and burst characteristics are strongly influenced by temporal modulations of the acoustic stimulus. The tendency to burst, hence, is not only determined by cell-intrinsic processes, but also by their interaction with the stimulus time course. We study this interaction quantitatively and observe that bursts containing a certain number of spikes occur shortly after stimulus deflections of specific intensity and duration. Our findings suggest a sparse neural code where information about the stimulus is represented by the number of spikes per burst, irrespective of the detailed interspike-interval structure within a burst. This compact representation cannot be interpreted as a firing-rate code. An information-theoretical analysis reveals that the number of spikes per burst reliably conveys information about the amplitude and duration of sound transients, whereas their time of occurrence is reflected by the burst onset time. The investigated neurons encode almost half of the total transmitted information in burst activity. PMID:18946533
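
    The proposed code, spikes per burst plus burst onset time, can be extracted from a spike train with a simple interspike-interval criterion (the 6 ms threshold and the spike times in the test are arbitrary choices for illustration):

```python
def bursts(spike_times, max_isi=0.006):
    """Group spikes (seconds) into bursts: consecutive spikes within max_isi."""
    if not spike_times:
        return []
    groups = [[spike_times[0]]]
    for prev, cur in zip(spike_times, spike_times[1:]):
        if cur - prev <= max_isi:
            groups[-1].append(cur)
        else:
            groups.append([cur])
    return groups

def burst_code(spike_times, max_isi=0.006):
    """(onset time, spike count) per burst: the candidate neural code."""
    return [(g[0], len(g)) for g in bursts(spike_times, max_isi)]
```

    The count per burst would then carry the amplitude/duration of the sound transient, and the onset time its time of occurrence, as proposed above; the intra-burst interval structure is deliberately discarded.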

  1. Coding of signals in noise by amphibian auditory nerve fibers.

    PubMed

    Narins, P M

    1987-01-01

    Rate-level (R-L) functions for pure tones and for pure tones in broadband noise were obtained for auditory nerve fibers in the treefrog, Eleutherodactylus coqui. Normalized R-L functions for low-frequency, low-threshold fibers exhibit a horizontal rightward shift in the presence of broadband background noise. The magnitude of this shift is directly proportional to the noise spectrum level, and inversely proportional to the fiber's threshold. R-L functions for mid- and high-frequency fibers also show a horizontal shift, but to a lesser degree, consistent with their elevated thresholds relative to the low-frequency fibers. The implications of these findings for the processing of biologically significant sounds in the high levels of background noise in the animal's natural habitat are considered.
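
    The horizontal shift described above can be captured by a sigmoid rate-level function whose effective threshold moves rightward in proportion to noise level and inversely with the fiber's own threshold. All constants (scaling factor, slope, maximum rate) are illustrative, not fitted to the treefrog data.

```python
import math

def rate_level(level_db, fiber_threshold_db, noise_db=0.0,
               c=20.0, slope_db=6.0, r_max=200.0):
    """Firing rate (spikes/s) for a tone at level_db in background noise.

    The effective threshold shifts right by c * noise_db / fiber_threshold_db:
    proportional to noise level, inversely related to the fiber's threshold."""
    shift = c * noise_db / fiber_threshold_db
    x = (level_db - fiber_threshold_db - shift) / slope_db
    return r_max / (1.0 + math.exp(-x))
```

    In this sketch a low-threshold fiber's curve shifts much further than a high-threshold fiber's for the same noise level, matching the pattern reported across frequency groups.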

  2. Norepinephrine Modulates Coding of Complex Vocalizations in the Songbird Auditory Cortex Independent of Local Neuroestrogen Synthesis

    PubMed Central

    Ikeda, Maaya Z.; Jeon, Sung David; Cowell, Rosemary A.

    2015-01-01

    The catecholamine norepinephrine plays a significant role in auditory processing. Most studies to date have examined the effects of norepinephrine on the neuronal response to relatively simple stimuli, such as tones and calls. It is less clear how norepinephrine shapes the detection of complex syntactical sounds, as well as the coding properties of sensory neurons. Songbirds provide an opportunity to understand how auditory neurons encode complex, learned vocalizations, and the potential role of norepinephrine in modulating the neuronal computations for acoustic communication. Here, we infused norepinephrine into the zebra finch auditory cortex and performed extracellular recordings to study the modulation of song representations in single neurons. Consistent with its proposed role in enhancing signal detection, norepinephrine decreased spontaneous activity and firing during stimuli, yet it significantly enhanced the auditory signal-to-noise ratio. These effects were all mimicked by clonidine, an α-2 receptor agonist. Moreover, a pattern classifier analysis indicated that norepinephrine enhanced the ability of single neurons to accurately encode complex auditory stimuli. Because neuroestrogens are also known to enhance auditory processing in the songbird brain, we tested the hypothesis that norepinephrine actions depend on local estrogen synthesis. Neither norepinephrine nor adrenergic receptor antagonist infusion into the auditory cortex had detectable effects on local estradiol levels. Moreover, pretreatment with fadrozole, a specific aromatase inhibitor, did not block norepinephrine's neuromodulatory effects. Together, these findings indicate that norepinephrine enhances signal detection and information encoding for complex auditory stimuli by suppressing spontaneous “noise” activity and that these actions are independent of local neuroestrogen synthesis. PMID:26109659
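The reported effect, lower spontaneous and driven firing yet higher signal-to-noise ratio, follows directly if spontaneous activity is suppressed proportionally more than driven activity. This sketch uses one common ratio-based SNR definition and invented firing rates; it is not the paper's analysis.

```python
import math

def auditory_snr_db(driven_rate, spontaneous_rate):
    """One common definition of neural signal-to-noise ratio:
    stimulus-driven firing rate over spontaneous ('noise') rate, in dB."""
    return 10.0 * math.log10(driven_rate / spontaneous_rate)

# hypothetical rates (spikes/s): norepinephrine lowers both rates,
# but suppresses spontaneous activity proportionally more
baseline = auditory_snr_db(driven_rate=20.0, spontaneous_rate=5.0)
with_ne  = auditory_snr_db(driven_rate=15.0, spontaneous_rate=1.5)
# with_ne > baseline: SNR improves even though both rates fell
```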

  3. Auditory-motor entrainment and phonological skills: precise auditory timing hypothesis (PATH)

    PubMed Central

    Tierney, Adam; Kraus, Nina

    2014-01-01

    Phonological skills are enhanced by music training, but the mechanisms enabling this cross-domain enhancement remain unknown. To explain this cross-domain transfer, we propose a precise auditory timing hypothesis (PATH) whereby entrainment practice is the core mechanism underlying enhanced phonological abilities in musicians. Both rhythmic synchronization and language skills such as consonant discrimination, detection of word and phrase boundaries, and conversational turn-taking rely on the perception of extremely fine-grained timing details in sound. Auditory-motor timing is an acoustic feature which meets all five of the pre-conditions necessary for cross-domain enhancement to occur (Patel, 2011, 2012, 2014). There is overlap between the neural networks that process timing in the context of both music and language. Entrainment to music demands more precise timing sensitivity than does language processing. Moreover, auditory-motor timing integration captures the emotion of the trainee, is repeatedly practiced, and demands focused attention. The PATH predicts that musical training emphasizing entrainment will be particularly effective in enhancing phonological skills. PMID:25505879

  4. The effect of real-time auditory feedback on learning new characters.

    PubMed

    Danna, Jérémy; Fontaine, Maureen; Paz-Villagrán, Vietminh; Gondre, Charles; Thoret, Etienne; Aramaki, Mitsuko; Kronland-Martinet, Richard; Ystad, Sølvi; Velay, Jean-Luc

    2015-10-01

The present study investigated the effect of handwriting sonification on graphomotor learning. Thirty-two adults, distributed in two groups, learned four new characters with their non-dominant hand. The experimental design included a pre-test, a training session, and two post-tests, one just after the training sessions and another 24 h later. Two characters were learned with and two without real-time auditory feedback (FB). The first group first learned the two non-sonified characters and then the two sonified characters, whereas the reverse order was adopted for the second group. Results revealed that auditory FB improved the speed and fluency of handwriting movements but reduced, in the short-term only, the spatial accuracy of the trace. Transforming kinematic variables into sounds allows the writer to perceive his/her movement in addition to the written trace, and this might facilitate handwriting learning. However, for the subjects who first learned the characters with auditory FB, there were no differential effects of the FB, either short- or long-term. We hypothesize that the positive effect on the handwriting kinematics was transferred to characters learned without FB. This transfer effect of the auditory FB is discussed in light of the Theory of Event Coding.

  5. Visual Timing of Structured Dance Movements Resembles Auditory Rhythm Perception

    PubMed Central

    Su, Yi-Huang; Salazar-López, Elvira

    2016-01-01

    Temporal mechanisms for processing auditory musical rhythms are well established, in which a perceived beat is beneficial for timing purposes. It is yet unknown whether such beat-based timing would also underlie visual perception of temporally structured, ecological stimuli connected to music: dance. In this study, we investigated whether observers extracted a visual beat when watching dance movements to assist visual timing of these movements. Participants watched silent videos of dance sequences and reproduced the movement duration by mental recall. We found better visual timing for limb movements with regular patterns in the trajectories than without, similar to the beat advantage for auditory rhythms. When movements involved both the arms and the legs, the benefit of a visual beat relied only on the latter. The beat-based advantage persisted despite auditory interferences that were temporally incongruent with the visual beat, arguing for the visual nature of these mechanisms. Our results suggest that visual timing principles for dance parallel their auditory counterparts for music, which may be based on common sensorimotor coupling. These processes likely yield multimodal rhythm representations in the scenario of music and dance. PMID:27313900

  7. Odors Bias Time Perception in Visual and Auditory Modalities

    PubMed Central

    Yue, Zhenzhu; Gao, Tianyu; Chen, Lihan; Wu, Jiashuang

    2016-01-01

Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in visual and auditory modalities. Participants either saw a visual dot or heard a continuous tone for 1000 or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for both visual and auditory modalities. The analysis of the interaction effect showed that participants produced longer durations than the actual duration in the short interval condition, but they produced shorter durations in the long interval condition. The effect sizes were larger for the auditory modality than those for the visual modality. Moreover, by comparing performance across the initial and the final blocks of the experiment, we found odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, and there was a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, and they were constrained by different sensory modalities, valence of the emotional events, and target durations. Biases in time perception could be accounted for by a framework of

  8. Speech Compensation for Time-Scale-Modified Auditory Feedback

    ERIC Educational Resources Information Center

    Ogane, Rintaro; Honda, Masaaki

    2014-01-01

    Purpose: The purpose of this study was to examine speech compensation in response to time-scale-modified auditory feedback during the transition of the semivowel for a target utterance of /ija/. Method: Each utterance session consisted of 10 control trials in the normal feedback condition followed by 20 perturbed trials in the modified auditory…

  9. Development of Visuo-Auditory Integration in Space and Time

    PubMed Central

    Gori, Monica; Sandini, Giulio; Burr, David

    2012-01-01

Adults integrate multisensory information optimally (e.g., Ernst and Banks, 2002) while children do not integrate multisensory visual-haptic cues until 8–10 years of age (e.g., Gori et al., 2008). Before that age strong unisensory dominance occurs for size and orientation visual-haptic judgments, possibly reflecting a process of cross-sensory calibration between modalities. It is widely recognized that audition dominates time perception, while vision dominates space perception. Within the framework of the cross-sensory calibration hypothesis, we investigate visual-auditory integration in both space and time with child-friendly spatial and temporal bisection tasks. Unimodal and bimodal (conflictual and not) audio-visual thresholds and PSEs were measured and compared with the Bayesian predictions. In the temporal domain, we found that both in children and adults, audition dominates the bimodal visuo-auditory task both in perceived time and precision thresholds. On the contrary, in the visual-auditory spatial task, children younger than 12 years of age show clear visual dominance (for PSEs), and bimodal thresholds higher than the Bayesian prediction. Only in the adult group did bimodal thresholds become optimal. In agreement with previous studies, our results suggest that adult-like visual-auditory behavior also develops late. We suggest that the visual dominance for space and the auditory dominance for time could reflect a cross-sensory comparison of vision in the spatial visuo-audio task and a cross-sensory comparison of audition in the temporal visuo-audio task. PMID:23060759
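The Bayesian prediction against which the bimodal thresholds were compared has a standard closed form (maximum-likelihood cue combination): each cue is weighted inversely to its variance, and the combined variance is lower than either unimodal variance. This sketch uses the textbook formula with made-up unimodal estimates and thresholds, not the study's measured values.

```python
def bayesian_bimodal(est_v, sigma_v, est_a, sigma_a):
    """Optimal (maximum-likelihood) visual-auditory combination:
    weights are inverse variances, and the bimodal standard deviation
    is below both unimodal standard deviations."""
    w_v = (1.0 / sigma_v**2) / (1.0 / sigma_v**2 + 1.0 / sigma_a**2)
    w_a = 1.0 - w_v
    est = w_v * est_v + w_a * est_a
    sigma = (sigma_v**2 * sigma_a**2 / (sigma_v**2 + sigma_a**2)) ** 0.5
    return est, sigma

# spatial-bisection example (hypothetical): vision is the more precise cue,
# so the combined estimate lies closer to the visual one
est, sigma = bayesian_bimodal(est_v=10.0, sigma_v=1.0, est_a=14.0, sigma_a=2.0)
```

Children's bimodal thresholds exceeding this prediction is exactly the signature of non-optimal (non-integrated) cue use reported in the abstract.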

  10. Context-dependent coding and gain control in the auditory system of crickets.

    PubMed

    Clemens, Jan; Rau, Florian; Hennig, R Matthias; Hildebrandt, K Jannis

    2015-10-01

    Sensory systems process stimuli that greatly vary in intensity and complexity. To maintain efficient information transmission, neural systems need to adjust their properties to these different sensory contexts, yielding adaptive or stimulus-dependent codes. Here, we demonstrated adaptive spectrotemporal tuning in a small neural network, i.e. the peripheral auditory system of the cricket. We found that tuning of cricket auditory neurons was sharper for complex multi-band than for simple single-band stimuli. Information theoretical considerations revealed that this sharpening improved information transmission by separating the neural representations of individual stimulus components. A network model inspired by the structure of the cricket auditory system suggested two putative mechanisms underlying this adaptive tuning: a saturating peripheral nonlinearity could change the spectral tuning, whereas broad feed-forward inhibition was able to reproduce the observed adaptive sharpening of temporal tuning. Our study revealed a surprisingly dynamic code usually found in more complex nervous systems and suggested that stimulus-dependent codes could be implemented using common neural computations.
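One of the two candidate mechanisms, broad feed-forward inhibition, can be sketched as a rectified excitation-minus-inhibition stage. The weight and the toy inputs below are assumptions for illustration, not the paper's fitted network model.

```python
def ffi_response(excitation, inhibition_drive, w_inh=0.8):
    """Feed-forward inhibition: the output neuron receives direct
    excitation minus broadly tuned inhibition, then rectifies.
    Temporal tuning sharpens because sustained input is largely
    cancelled while fast transients escape the lagging inhibition."""
    return [max(0.0, e - w_inh * i)
            for e, i in zip(excitation, inhibition_drive)]

# toy example: inhibition is a one-step-delayed copy of excitation,
# so only the stimulus onset survives strongly in the output
exc = [0.0, 1.0, 1.0, 1.0, 0.0]
inh = [0.0, 0.0, 1.0, 1.0, 1.0]
out = ffi_response(exc, inh)
```

In this caricature the output is an onset detector; the abstract's point is that the same circuit motif yields a stimulus-dependent sharpening of temporal tuning.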

  11. Unanesthetized auditory cortex exhibits multiple codes for gaps in cochlear implant pulse trains.

    PubMed

    Kirby, Alana E; Middlebrooks, John C

    2012-02-01

Cochlear implant listeners receive auditory stimulation through amplitude-modulated electric pulse trains. Auditory nerve studies in animals demonstrate qualitatively different patterns of firing elicited by low versus high pulse rates, suggesting that stimulus pulse rate might influence the transmission of temporal information through the auditory pathway. We tested in awake guinea pigs the temporal acuity of auditory cortical neurons for gaps in cochlear implant pulse trains. Consistent with results using anesthetized conditions, temporal acuity improved with increasing pulse rates. Unlike the anesthetized condition, however, cortical neurons responded in the awake state to multiple distinct features of the gap-containing pulse trains, with the dominant features varying with stimulus pulse rate. Responses to the onset of the trailing pulse train (Trail-ON) provided the most sensitive gap detection at 1,017 and 4,069 pulses-per-second (pps) rates, particularly for short (25 ms) leading pulse trains. In contrast, under conditions of 254 pps rate and long (200 ms) leading pulse trains, a sizeable fraction of units demonstrated greater temporal acuity in the form of robust responses to the offsets of the leading pulse train (Lead-OFF). Finally, TONIC responses exhibited decrements in firing rate during gaps, but were rarely the most sensitive feature. Unlike results from anesthetized conditions, temporal acuity of the most sensitive units was nearly as sharp for brief as for long leading bursts. The differences in stimulus coding across pulse rates likely originate from pulse rate-dependent variations in adaptation in the auditory nerve. Two marked differences from responses to acoustic stimulation were: first, Trail-ON responses to 4,069 pps trains encoded substantially shorter gaps than have been observed with acoustic stimuli; and second, the Lead-OFF gap coding seen for <15 ms gaps in 254 pps stimuli is not seen in responses to sounds. The current results may help

  12. Quantifying envelope and fine-structure coding in auditory nerve responses to chimaeric speech.

    PubMed

    Heinz, Michael G; Swaminathan, Jayaganesh

    2009-09-01

Any sound can be separated mathematically into a slowly varying envelope and a rapidly varying fine-structure component. This property has motivated numerous perceptual studies to understand the relative importance of each component for speech and music perception. Specialized acoustic stimuli, such as auditory chimaeras, with the envelope of one sound and the fine structure of another, have been used to separate the perceptual roles for envelope and fine structure. Cochlear narrowband filtering limits the ability to isolate fine structure from envelope; however, envelope recovery from fine structure has been difficult to evaluate physiologically. To evaluate envelope recovery at the output of the cochlea, neural cross-correlation coefficients were developed that quantify the similarity between two sets of spike-train responses. Shuffled auto- and cross-correlogram analyses were used to compute separate correlations for responses to envelope and fine structure based on both model and recorded spike trains from auditory nerve fibers. Previous correlogram analyses were extended to isolate envelope coding more effectively in auditory nerve fibers with low center frequencies, which are particularly important for speech coding. Recovered speech envelopes were present in both model and recorded responses to one- and 16-band speech fine-structure chimaeras and were significantly greater for the one-band case, consistent with perceptual studies. Model predictions suggest that cochlear recovered envelopes are reduced following sensorineural hearing loss due to broadened tuning associated with outer-hair cell dysfunction. In addition to the within-fiber cross-stimulus cases considered here, these neural cross-correlation coefficients can also be used to evaluate spatiotemporal coding by applying them to cross-fiber within-stimulus conditions. Thus, these neural metrics can be used to quantitatively evaluate a wide range of perceptually significant temporal coding issues relevant to
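The shuffled-correlogram machinery in the paper is involved; as a much simplified stand-in, the idea of quantifying similarity between two sets of spike-train responses can be sketched as a Pearson correlation between their trial-averaged peri-stimulus time histograms. The bin width and toy spike trains are assumptions; this is not the authors' metric, only an illustration of the comparison it performs.

```python
import math

def psth(spike_trains, duration, bin_width):
    """Trial-averaged peri-stimulus time histogram (mean spikes per bin)."""
    n_bins = int(duration / bin_width)
    counts = [0.0] * n_bins
    for train in spike_trains:
        for t in train:
            counts[min(int(t / bin_width), n_bins - 1)] += 1.0
    return [c / len(spike_trains) for c in counts]

def response_correlation(trains_a, trains_b, duration=0.1, bin_width=0.005):
    """Pearson correlation between the PSTHs of two response sets: a
    simplified stand-in for the correlogram-based neural
    cross-correlation coefficient described in the abstract."""
    x = psth(trains_a, duration, bin_width)
    y = psth(trains_b, duration, bin_width)
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x)
                    * sum((b - my) ** 2 for b in y))
    return num / den if den else 0.0

# identical response sets correlate perfectly
trials = [[0.010, 0.042, 0.043], [0.011, 0.041]]
r = response_correlation(trials, trials)
```

The real shuffled-correlogram approach additionally removes within-train correlations and corrects for firing rate, which a raw PSTH correlation does not.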

  13. Seasonal plasticity of precise spike timing in the avian auditory system.

    PubMed

    Caras, Melissa L; Sen, Kamal; Rubel, Edwin W; Brenowitz, Eliot A

    2015-02-25

    Vertebrate audition is a dynamic process, capable of exhibiting both short- and long-term adaptations to varying listening conditions. Precise spike timing has long been known to play an important role in auditory encoding, but its role in sensory plasticity remains largely unexplored. We addressed this issue in Gambel's white-crowned sparrow (Zonotrichia leucophrys gambelii), a songbird that shows pronounced seasonal fluctuations in circulating levels of sex-steroid hormones, which are known to be potent neuromodulators of auditory function. We recorded extracellular single-unit activity in the auditory forebrain of males and females under different breeding conditions and used a computational approach to explore two potential strategies for the neural discrimination of sound level: one based on spike counts and one based on spike timing reliability. We report that breeding condition has robust sex-specific effects on spike timing. Specifically, in females, breeding condition increases the proportion of cells that rely solely on spike timing information and increases the temporal resolution required for optimal intensity encoding. Furthermore, in a functionally distinct subset of cells that are particularly well suited for amplitude encoding, female breeding condition enhances spike timing-based discrimination accuracy. No effects of breeding condition were observed in males. Our results suggest that high-resolution temporal discharge patterns may provide a plastic neural substrate for sensory coding.
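The two decoding strategies compared in the abstract, spike counts versus spike timing, can be illustrated with a toy nearest-template classifier. Everything here (features, distances, spike trains) is an invented minimal example, not the study's computational pipeline; it only shows why timing-based features can separate stimuli that count-based features cannot.

```python
def count_feature(train):
    """Spike-count code: a single number per response."""
    return [len(train)]

def timing_feature(train, duration=0.2, bin_width=0.02):
    """Spike-timing code: binned spike vector at a chosen resolution."""
    n = int(duration / bin_width)
    v = [0] * n
    for t in train:
        v[min(int(t / bin_width), n - 1)] += 1
    return v

def nearest_template(feature, templates):
    """Classify a response by the label of the closest template
    (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda label: dist(feature, templates[label]))

# toy responses to two sound levels with identical spike COUNTS but
# different spike TIMING: a count code cannot tell them apart
quiet_train = [0.010, 0.150]   # early + late spike
loud_train  = [0.010, 0.020]   # two early spikes

templates = {"quiet": timing_feature(quiet_train),
             "loud": timing_feature(loud_train)}
probe = [0.012, 0.145]          # quiet-like timing
label = nearest_template(timing_feature(probe), templates)
```

Shrinking `bin_width` raises the temporal resolution of the timing code, which is the knob the study's analysis varies to find the resolution required for optimal intensity encoding.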

  15. Auditory cortical field coding long-lasting tonal offsets in mice

    PubMed Central

    Baba, Hironori; Tsukano, Hiroaki; Hishida, Ryuichi; Takahashi, Kuniyuki; Horii, Arata; Takahashi, Sugata; Shibuki, Katsuei

    2016-01-01

Although temporal information processing is important in auditory perception, the mechanisms for coding tonal offsets are unknown. We investigated cortical responses elicited at the offset of tonal stimuli using flavoprotein fluorescence imaging in mice. Off-responses were clearly observed at the offset of tonal stimuli lasting for 7 s, but not after stimuli lasting for 1 s. Off-responses to the short stimuli appeared in a similar cortical region when conditioning tonal stimuli lasting for 5–20 s preceded the stimuli. MK-801, an inhibitor of NMDA receptors, suppressed the two types of off-responses, suggesting that disinhibition produced by NMDA receptor-dependent synaptic depression might be involved in the off-responses. The peak off-responses were localized in a small region adjacent to the primary auditory cortex, and no frequency-dependent shift of the response peaks was found. Frequency matching of preceding tonal stimuli with short test stimuli was not required for inducing off-responses to short stimuli. Two-photon calcium imaging demonstrated significantly larger neuronal off-responses to stimuli lasting for 7 s in this field, compared with off-responses to stimuli lasting for 1 s. The present results indicate the presence of an auditory cortical field responding to long-lasting tonal offsets, possibly for temporal information processing. PMID:27687766

  16. Adaptation to shifted interaural time differences changes encoding of sound location in human auditory cortex.

    PubMed

    Trapeau, Régis; Schönwiesner, Marc

    2015-09-01

The auditory system infers the location of sound sources from the processing of different acoustic cues. These cues change during development and when assistive hearing devices are worn. Previous studies have found behavioral recalibration to modified localization cues in human adults, but very little is known about the neural correlates and mechanisms of this plasticity. We equipped participants with digital devices, worn in the ear canal, that allowed us to delay sound input to one ear, and thus modify interaural time differences, a major cue for horizontal sound localization. Participants wore the digital earplugs continuously for nine days while engaged in day-to-day activities. Daily psychoacoustical testing showed rapid recalibration to the manipulation and confirmed that adults can adapt to shifted interaural time differences in their daily multisensory environment. High-resolution functional MRI scans performed before and after recalibration showed that recalibration was accompanied by changes in hemispheric lateralization of auditory cortex activity. These changes corresponded to a shift in spatial coding of sound direction comparable to the observed behavioral recalibration. Fitting the imaging results with a model of auditory spatial processing also revealed small shifts in voxel-wise spatial tuning within each hemisphere.

  17. Repetition suppression and expectation suppression are dissociable in time in early auditory evoked fields.

    PubMed

    Todorovic, Ana; de Lange, Floris P

    2012-09-26

    Repetition of a stimulus, as well as valid expectation that a stimulus will occur, both attenuate the neural response to it. These effects, repetition suppression and expectation suppression, are typically confounded in paradigms in which the nonrepeated stimulus is also relatively rare (e.g., in oddball blocks of mismatch negativity paradigms, or in repetition suppression paradigms with multiple repetitions before an alternation). However, recent hierarchical models of sensory processing inspire the hypothesis that the two might be separable in time, with repetition suppression occurring earlier, as a consequence of local transition probabilities, and suppression by expectation occurring later, as a consequence of learnt statistical regularities. Here we test this hypothesis in an auditory experiment by orthogonally manipulating stimulus repetition and stimulus expectation and, using magnetoencephalography, measuring the neural response over time in human subjects. We found that stimulus repetition (but not stimulus expectation) attenuates the early auditory response (40-60 ms), while stimulus expectation (but not stimulus repetition) attenuates the subsequent, intermediate stage of auditory processing (100-200 ms). These findings are well in line with hierarchical predictive coding models, which posit sequential stages of prediction error resolution, contingent on the level at which the hypothesis is generated.

  18. The Neural Code for Auditory Space Depends on Sound Frequency and Head Size in an Optimal Manner

    PubMed Central

    Harper, Nicol S.; Scott, Brian H.; Semple, Malcolm N.; McAlpine, David

    2014-01-01

A major cue to the location of a sound source is the interaural time difference (ITD): the difference in sound arrival time at the two ears. The neural representation of this auditory cue is unresolved. The classic model of ITD coding, dominant for a half-century, posits that the distribution of best ITDs (the ITD evoking a neuron's maximal response) is unimodal and largely within the range of ITDs permitted by head size. This is often interpreted as a place code for source location. An alternative model, based on neurophysiology in small mammals, posits a bimodal distribution of best ITDs with exquisite sensitivity to ITDs generated by means of relative firing rates between the distributions. Recently, an optimal-coding model was proposed, unifying the disparate features of these two models under the framework of efficient coding by neural populations. The optimal-coding model predicts that distributions of best ITDs depend on head size and sound frequency: for high frequencies and large heads it resembles the classic model, for low frequencies and small head sizes it resembles the bimodal model. The optimal-coding model makes key, yet unobserved, predictions: for many species, including humans, both forms of neural representation are employed, depending on sound frequency. Furthermore, novel representations are predicted for intermediate frequencies. Here, we examine these predictions in neurophysiological data from five mammalian species: macaque, guinea pig, cat, gerbil and kangaroo rat. We present the first evidence supporting these untested predictions, and demonstrate that different representations appear to be employed at different sound frequencies in the same species. PMID:25372405
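The head-size dependence of the physiological ITD range, which is central to the optimal-coding prediction above, can be illustrated with the standard spherical-head (Woodworth) approximation. The head radii below are rough textbook values, used only to show the scale of the difference between large- and small-headed species.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air

def woodworth_itd(azimuth_deg, head_radius_m):
    """Frequency-independent (Woodworth) approximation of the interaural
    time difference for a spherical head:
    ITD = (r / c) * (theta + sin(theta)), with azimuth theta in radians."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / SPEED_OF_SOUND) * (theta + math.sin(theta))

# the maximum (90 degree) ITD scales with head size:
human_max  = woodworth_itd(90.0, 0.0875)  # roughly 0.66 ms
gerbil_max = woodworth_itd(90.0, 0.015)   # roughly 0.11 ms
```

This order-of-magnitude gap in the available ITD range is why the model predicts classic-model-like coding for large heads and high frequencies but bimodal coding for small heads and low frequencies.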

  19. Absence of both auditory evoked potentials and auditory percepts dependent on timing cues.

    PubMed

    Starr, A; McPherson, D; Patterson, J; Don, M; Luxford, W; Shannon, R; Sininger, Y; Tonakawa, L; Waring, M

    1991-06-01

    An 11-yr-old girl had an absence of sensory components of auditory evoked potentials (brainstem, middle and long-latency) to click and tone burst stimuli that she could clearly hear. Psychoacoustic tests revealed a marked impairment of those auditory perceptions dependent on temporal cues, that is, lateralization of binaural clicks, change of binaural masked threshold with changes in signal phase, binaural beats, detection of paired monaural clicks, monaural detection of a silent gap in a sound, and monaural threshold elevation for short duration tones. In contrast, auditory functions reflecting intensity or frequency discriminations (difference limens) were only minimally impaired. Pure tone audiometry showed a moderate (50 dB) bilateral hearing loss with a disproportionate severe loss of word intelligibility. Those auditory evoked potentials that were preserved included (1) cochlear microphonics reflecting hair cell activity; (2) cortical sustained potentials reflecting processing of slowly changing signals; and (3) long-latency cognitive components (P300, processing negativity) reflecting endogenous auditory cognitive processes. Both the evoked potential and perceptual deficits are attributed to changes in temporal encoding of acoustic signals perhaps occurring at the synapse between hair cell and eighth nerve dendrites. The results from this patient are discussed in relation to previously published cases with absent auditory evoked potentials and preserved hearing.

  20. Neural evidence for predictive coding in auditory cortex during speech production.

    PubMed

    Okada, Kayoko; Matchin, William; Hickok, Gregory

    2017-04-10

Recent models of speech production suggest that motor commands generate forward predictions of the auditory consequences of those commands, that these forward predictions can be used to monitor and correct speech output, and that this system is hierarchically organized (Hickok, Houde, & Rong, Neuron, 69(3), 407-422, 2011; Pickering & Garrod, Behavior and Brain Sciences, 36(4), 329-347, 2013). Recent psycholinguistic research has shown that internally generated speech (i.e., imagined speech) produces different types of errors than does overt speech (Oppenheim & Dell, Cognition, 106(1), 528-537, 2008; Oppenheim & Dell, Memory & Cognition, 38(8), 1147-1160, 2010). These studies suggest that articulated speech might involve predictive coding at additional levels relative to imagined speech. The current fMRI experiment investigates neural evidence of predictive coding in speech production. Twenty-four participants from UC Irvine were recruited for the study. Participants were scanned while they were visually presented with a sequence of words that they reproduced in sync with a visual metronome. On each trial, they were cued to either silently articulate the sequence or to imagine the sequence without overt articulation. As expected, silent articulation and imagined speech both engaged a left hemisphere network previously implicated in speech production. A contrast of silent articulation with imagined speech revealed greater activation for articulated speech in inferior frontal cortex, premotor cortex and the insula in the left hemisphere, consistent with greater articulatory load. Although both conditions were silent, this contrast also produced significantly greater activation in auditory cortex in dorsal superior temporal gyrus in both hemispheres. We suggest that these activations reflect forward predictions arising from additional levels of the perceptual/motor hierarchy that are involved in monitoring the intended speech output.

  1. A Temporal Predictive Code for Voice Motor Control: Evidence from ERP and Behavioral Responses to Pitch-shifted Auditory Feedback

    PubMed Central

    Behroozmand, Roozbeh; Sangtian, Stacey; Korzyukov, Oleg; Larson, Charles R.

    2016-01-01

    The predictive coding model suggests that voice motor control is regulated by a process in which the mismatch (error) between feedforward predictions and sensory feedback is detected and used to correct vocal motor behavior. In this study, we investigated how predictions about the timing of pitch perturbations in voice auditory feedback modulate ERP and behavioral responses during vocal production. We designed six counterbalanced blocks in which a +100 cents pitch-shift stimulus perturbed voice auditory feedback during vowel sound vocalizations. In three blocks, there was a fixed delay (500, 750 or 1000 ms) between voice and pitch-shift stimulus onset (predictable), whereas in the other three blocks, stimulus onset delay was randomized between 500, 750 and 1000 ms (unpredictable). We found that subjects produced compensatory (opposing) vocal responses that started 80 ms after the onset of the unpredictable stimuli. However, for predictable stimuli, subjects initiated vocal responses 20 ms before stimulus onset and followed the direction of the pitch shifts in voice feedback. Analysis of ERPs showed that the amplitudes of the N1 and P2 components were significantly reduced in response to predictable compared with unpredictable stimuli. These findings indicate that predictions about temporal features of sensory feedback can modulate vocal motor behavior. In the context of the predictive coding model, temporally predictable stimuli are learned and reinforced by the internal feedforward system, and, as indexed by the ERP suppression, the sensory feedback contribution to their processing is reduced. These findings provide new insights into the neural mechanisms of vocal production and motor control. PMID:26835556

  2. A temporal predictive code for voice motor control: Evidence from ERP and behavioral responses to pitch-shifted auditory feedback.

    PubMed

    Behroozmand, Roozbeh; Sangtian, Stacey; Korzyukov, Oleg; Larson, Charles R

    2016-04-01

    The predictive coding model suggests that voice motor control is regulated by a process in which the mismatch (error) between feedforward predictions and sensory feedback is detected and used to correct vocal motor behavior. In this study, we investigated how predictions about the timing of pitch perturbations in voice auditory feedback modulate ERP and behavioral responses during vocal production. We designed six counterbalanced blocks in which a +100 cents pitch-shift stimulus perturbed voice auditory feedback during vowel sound vocalizations. In three blocks, there was a fixed delay (500, 750 or 1000 ms) between voice and pitch-shift stimulus onset (predictable), whereas in the other three blocks, stimulus onset delay was randomized between 500, 750 and 1000 ms (unpredictable). We found that subjects produced compensatory (opposing) vocal responses that started 80 ms after the onset of the unpredictable stimuli. However, for predictable stimuli, subjects initiated vocal responses 20 ms before stimulus onset and followed the direction of the pitch shifts in voice feedback. Analysis of ERPs showed that the amplitudes of the N1 and P2 components were significantly reduced in response to predictable compared with unpredictable stimuli. These findings indicate that predictions about temporal features of sensory feedback can modulate vocal motor behavior. In the context of the predictive coding model, temporally predictable stimuli are learned and reinforced by the internal feedforward system, and, as indexed by the ERP suppression, the sensory feedback contribution to their processing is reduced. These findings provide new insights into the neural mechanisms of vocal production and motor control.

  3. Auditory reafferences: the influence of real-time feedback on movement control

    PubMed Central

    Kennel, Christian; Streese, Lukas; Pizzera, Alexandra; Justen, Christoph; Hohmann, Tanja; Raab, Markus

    2015-01-01

    Auditory reafferences are real-time auditory products created by a person's own movements. Whereas the interdependency of action and perception is generally well studied, the auditory feedback channel and the influence of perceptual processes during movement execution remain largely unconsidered. We argue that movements have a rhythmic character that is closely connected to sound, making it possible to manipulate auditory reafferences online in order to understand their role in motor control. We examined whether step sounds, occurring as a by-product of running, influence the performance of a complex movement task. Twenty participants completed a hurdling task in three auditory feedback conditions: a control condition with normal auditory feedback, a white-noise condition in which sound was masked, and a delayed auditory feedback condition. Overall time and kinematic data were collected. Results show that delayed auditory feedback led to a significantly slower overall time and to changed kinematic parameters. Our findings complement previous investigations in a natural movement situation with non-artificial auditory cues. Our results support the existing theoretical understanding of action–perception coupling and hold potential for applied work, where naturally occurring movement sounds can be implemented in motor learning processes. PMID:25688230

  4. Predicted effects of sensorineural hearing loss on across-fiber envelope coding in the auditory nerve

    PubMed Central

    Swaminathan, Jayaganesh; Heinz, Michael G.

    2011-01-01

    Cross-channel envelope correlations are hypothesized to influence speech intelligibility, particularly in adverse conditions. Acoustic analyses suggest speech envelope correlations differ for syllabic and phonemic ranges of modulation frequency. The influence of cochlear filtering was examined here by predicting cross-channel envelope correlations in different speech modulation ranges for normal and impaired auditory-nerve (AN) responses. Neural cross-correlation coefficients quantified across-fiber envelope coding in syllabic (0–5 Hz), phonemic (5–64 Hz), and periodicity (64–300 Hz) modulation ranges. Spike trains were generated from a physiologically based AN model. Correlations were also computed using the model with selective hair-cell damage. Neural predictions revealed that envelope cross-correlation decreased with increased characteristic-frequency separation for all modulation ranges (with greater syllabic-envelope correlation than phonemic or periodicity). Syllabic envelope was highly correlated across many spectral channels, whereas phonemic and periodicity envelopes were correlated mainly between adjacent channels. Outer-hair-cell impairment increased the degree of cross-channel correlation for phonemic and periodicity ranges for speech in quiet and in noise, thereby reducing the number of independent neural information channels for envelope coding. In contrast, outer-hair-cell impairment was predicted to decrease cross-channel correlation for syllabic envelopes in noise, which may partially account for the reduced ability of hearing-impaired listeners to segregate speech in complex backgrounds. PMID:21682421
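    The cross-channel envelope correlation analysis described above can be approximated acoustically. The sketch below is a simplified stand-in, not the study's neural metric: Butterworth bandpass filters and Hilbert envelopes replace the auditory-nerve model and spike trains, and all function names and parameter values are hypothetical. It correlates the envelopes of two frequency channels within a chosen modulation range (e.g., syllabic 0-5 Hz):

    ```python
    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def band_envelope(x, fs, cf, bw=0.3):
        """Hilbert envelope of a bandpass channel centered at cf (Hz)."""
        lo, hi = cf * (1 - bw), cf * (1 + bw)
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        return np.abs(hilbert(sosfiltfilt(sos, x)))

    def modulation_band(env, fs, f_lo, f_hi):
        """Restrict an envelope to one modulation range (lowpass if f_lo <= 0)."""
        if f_lo <= 0:
            sos = butter(4, f_hi, btype="low", fs=fs, output="sos")
        else:
            sos = butter(4, [f_lo, f_hi], btype="band", fs=fs, output="sos")
        return sosfiltfilt(sos, env)

    def envelope_correlation(x, fs, cf1, cf2, mod_range):
        """Pearson correlation between two channels' envelopes in one modulation range."""
        e1 = modulation_band(band_envelope(x, fs, cf1), fs, *mod_range)
        e2 = modulation_band(band_envelope(x, fs, cf2), fs, *mod_range)
        return np.corrcoef(e1, e2)[0, 1]
    ```

    Under this reading, a shared slow (syllabic) amplitude modulation yields high correlation even between widely separated channels, while faster modulations decorrelate with channel separation.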

  5. Associative learning shapes the neural code for stimulus magnitude in primary auditory cortex.

    PubMed

    Polley, Daniel B; Heiser, Marc A; Blake, David T; Schreiner, Christoph E; Merzenich, Michael M

    2004-11-16

    Since the dawn of experimental psychology, researchers have sought an understanding of the fundamental relationship between the amplitude of sensory stimuli and the magnitudes of their perceptual representations. Contemporary theories support the view that magnitude is encoded by a linear increase in firing rate established in the primary afferent pathways. In the present study, we investigated sound intensity coding in the rat primary auditory cortex (AI) and describe its plasticity following paired stimulus-reinforcement and instrumental conditioning paradigms. In trained animals, population-response strengths in AI became more strongly nonlinear with increasing stimulus intensity. Individual AI responses became selective to more restricted ranges of sound intensities and, as a population, represented a broader range of preferred sound levels. These experiments demonstrate that the representation of stimulus magnitude can be powerfully reshaped by associative learning processes and suggest that the code for sound intensity within AI can be derived from intensity-tuned neurons that change, rather than simply increase, their firing rates in proportion to increases in sound intensity.

  6. The Use of Auditory Output for Time-Critical Information

    DTIC Science & Technology

    1992-12-01

    that uses the auditory sense for alerts in the Combat Information Center (CIC). The immediate goal was to compare operator performance using voice ...tracking task on a scenario simulation of the CIC. Alerts were presented by voice, auditory icons, or buzzers. Four different causes for alerts were...alert so that three related sounds corresponded to three alerts of each alert cause. RESULTS 1. Overall, the results showed that both voice and

  7. Multiple time scales of adaptation in the auditory system as revealed by human evoked potentials.

    PubMed

    Costa-Faidella, Jordi; Grimm, Sabine; Slabu, Lavinia; Díaz-Santaella, Francisco; Escera, Carles

    2011-06-01

    Single neurons in the primary auditory cortex of the cat show faster adaptation time constants for short-term than for long-term stimulus history. This ability to encode complex past auditory stimulation on multiple time scales would enable the auditory system to generate expectations about incoming stimuli. Here, we tested whether large neural populations exhibit this ability as well, by recording human auditory evoked potentials (AEP) to pure tones in a sequence embedding short- and long-term aspects of stimulus history. Our results yielded dynamic amplitude modulations of the P2 AEP to stimulus repetition spanning from milliseconds to tens of seconds concurrently, as well as amplitude modulations of the mismatch negativity AEP to regularity violations. A simple linear model of expectancy accounting for both short- and long-term stimulus history described our results, paralleling the behavior of neurons in the primary auditory cortex.
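    The "simple linear model of expectancy" is not fully specified in the abstract; one plausible reading, sketched below under that assumption (hypothetical function name, time constants, and weights), sums exponentially decaying memory traces of past stimuli at a short and a long time constant, so each tone's expectancy reflects both recent repetitions and the longer stimulation history:

    ```python
    import numpy as np

    def expectancy_trace(onsets, taus=(2.0, 30.0), weights=(0.5, 0.5)):
        """Linear expectancy for each stimulus in a sequence.

        Every previous stimulus leaves a memory trace that decays
        exponentially; traces at a short and a long time constant
        (taus, in seconds) are summed with fixed weights."""
        onsets = np.asarray(onsets, dtype=float)
        e = np.zeros(len(onsets))
        for i in range(1, len(onsets)):
            dt = onsets[i] - onsets[:i]  # lags to all previous stimuli
            e[i] = sum(w * np.exp(-dt / tau).sum() for w, tau in zip(weights, taus))
        return e
    ```

    The observed response amplitude (e.g., of the P2 AEP) would then be modeled as a linear function of this expectancy value.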

  8. Opposite Distortions in Interval Timing Perception for Visual and Auditory Stimuli with Temporal Modulations

    PubMed Central

    Yuasa, Kenichi; Yotsumoto, Yuko

    2015-01-01

    When an object is presented visually and moves or flickers, the perception of its duration tends to be overestimated. Such an overestimation is called time dilation. Perceived time can also be distorted when a stimulus is presented aurally as an auditory flutter, but the underlying mechanisms and their relationship to visual processing remain unclear. In the present study, we measured interval timing perception while modulating the temporal characteristics of visual and auditory stimuli, and investigated whether the interval timing of visually and aurally presented objects shares a common mechanism. In these experiments, participants compared the durations of flickering or fluttering stimuli to standard stimuli, which were presented continuously. Perceived durations of auditory flutters were underestimated, while perceived durations of visual flickers were overestimated. When auditory flutters and visual flickers were presented simultaneously, these distortion effects cancelled each other out. When auditory flutters were presented with a constantly presented visual stimulus, the interval timing perception of the visual stimulus was affected by the auditory flutters. These results indicate that interval timing perception is governed by independent mechanisms for visual and auditory processing, and that there are some interactions between the two processing systems. PMID:26292285

  9. Opposite Distortions in Interval Timing Perception for Visual and Auditory Stimuli with Temporal Modulations.

    PubMed

    Yuasa, Kenichi; Yotsumoto, Yuko

    2015-01-01

    When an object is presented visually and moves or flickers, the perception of its duration tends to be overestimated. Such an overestimation is called time dilation. Perceived time can also be distorted when a stimulus is presented aurally as an auditory flutter, but the underlying mechanisms and their relationship to visual processing remain unclear. In the present study, we measured interval timing perception while modulating the temporal characteristics of visual and auditory stimuli, and investigated whether the interval timing of visually and aurally presented objects shares a common mechanism. In these experiments, participants compared the durations of flickering or fluttering stimuli to standard stimuli, which were presented continuously. Perceived durations of auditory flutters were underestimated, while perceived durations of visual flickers were overestimated. When auditory flutters and visual flickers were presented simultaneously, these distortion effects cancelled each other out. When auditory flutters were presented with a constantly presented visual stimulus, the interval timing perception of the visual stimulus was affected by the auditory flutters. These results indicate that interval timing perception is governed by independent mechanisms for visual and auditory processing, and that there are some interactions between the two processing systems.

  10. Bat auditory cortex – model for general mammalian auditory computation or special design solution for active time perception?

    PubMed

    Kössl, Manfred; Hechavarria, Julio; Voss, Cornelia; Schaefer, Markus; Vater, Marianne

    2015-03-01

    Audition in bats serves passive orientation, alerting functions and communication, as it does in other vertebrates. In addition, bats have evolved echolocation for orientation and for prey detection and capture. This has placed selective pressure on the auditory system with regard to echolocation-relevant temporal computation and frequency analysis. The present review attempts to evaluate in which respects the processing modules of bat auditory cortex (AC) are a model for typical mammalian AC function or are designed for echolocation-unique purposes. We conclude that, while cortical area arrangement and cortical frequency processing do not deviate greatly from those of other mammals, the echo-delay-sensitive dorsal cortex regions contain special designs for very powerful time perception. Different bat species have either a unique chronotopic cortex topography or a distributed salt-and-pepper representation of echo delay. The two designs seem to enable similar behavioural performance.

  11. Timing predictability enhances regularity encoding in the human subcortical auditory pathway.

    PubMed

    Gorina-Careta, Natàlia; Zarnowiec, Katarzyna; Costa-Faidella, Jordi; Escera, Carles

    2016-11-17

    The encoding of temporal regularities is a critical property of the auditory system, as short-term neural representations of environmental statistics serve auditory object formation and the detection of potentially relevant novel stimuli. A putative neural mechanism underlying regularity encoding is repetition suppression, the reduction of neural activity to repeated stimulation. Although repetitive stimulation per se has been shown to reduce auditory neural activity at cortical and subcortical levels in animals and in the human cerebral cortex, other factors such as timing may influence the encoding of statistical regularities. This study set out to investigate whether temporal predictability in the ongoing auditory input modulates repetition suppression in subcortical stages of the auditory processing hierarchy. Human auditory frequency-following responses (FFR) were recorded to a repeating consonant-vowel stimulus (/wa/) delivered in temporally predictable and unpredictable conditions. FFR amplitude was attenuated by repetition independently of temporal predictability, yet we observed an accentuated suppression when the incoming stimulation was temporally predictable. These findings support the view that regularity encoding spans the auditory hierarchy and point to temporal predictability as a modulatory factor of regularity encoding in early stages of the auditory pathway.

  12. Timing predictability enhances regularity encoding in the human subcortical auditory pathway

    PubMed Central

    Gorina-Careta, Natàlia; Zarnowiec, Katarzyna; Costa-Faidella, Jordi; Escera, Carles

    2016-01-01

    The encoding of temporal regularities is a critical property of the auditory system, as short-term neural representations of environmental statistics serve auditory object formation and the detection of potentially relevant novel stimuli. A putative neural mechanism underlying regularity encoding is repetition suppression, the reduction of neural activity to repeated stimulation. Although repetitive stimulation per se has been shown to reduce auditory neural activity at cortical and subcortical levels in animals and in the human cerebral cortex, other factors such as timing may influence the encoding of statistical regularities. This study set out to investigate whether temporal predictability in the ongoing auditory input modulates repetition suppression in subcortical stages of the auditory processing hierarchy. Human auditory frequency-following responses (FFR) were recorded to a repeating consonant-vowel stimulus (/wa/) delivered in temporally predictable and unpredictable conditions. FFR amplitude was attenuated by repetition independently of temporal predictability, yet we observed an accentuated suppression when the incoming stimulation was temporally predictable. These findings support the view that regularity encoding spans the auditory hierarchy and point to temporal predictability as a modulatory factor of regularity encoding in early stages of the auditory pathway. PMID:27853313

  13. Time-dependent neural processing of auditory feedback during voice pitch error detection.

    PubMed

    Behroozmand, Roozbeh; Liu, Hanjun; Larson, Charles R

    2011-05-01

    The neural responses to sensory consequences of a self-produced motor act are suppressed compared with those in response to a similar but externally generated stimulus. Previous studies in the somatosensory and auditory systems have shown that the motor-induced suppression of the sensory mechanisms is sensitive to delays between the motor act and the onset of the stimulus. The present study investigated time-dependent neural processing of auditory feedback in response to self-produced vocalizations. ERPs were recorded in response to normal and pitch-shifted voice auditory feedback during active vocalization and passive listening to the playback of the same vocalizations. The pitch-shifted stimulus was delivered to the subjects' auditory feedback after a randomly chosen time delay between the vocal onset and the stimulus presentation. Results showed that the neural responses to delayed feedback perturbations were significantly larger than those in response to the pitch-shifted stimulus occurring at vocal onset. Active vocalization was shown to enhance neural responsiveness to feedback alterations only for nonzero delays compared with passive listening to the playback. These findings indicated that the neural mechanisms of auditory feedback processing are sensitive to timing between the vocal motor commands and the incoming auditory feedback. Time-dependent neural processing of auditory feedback may be an important feature of the audio-vocal integration system that helps to improve the feedback-based monitoring and control of voice structure through vocal error detection and correction.

  14. [A comparison of time resolution among auditory, tactile and promontory electrical stimulation--superiority of cochlear implants as human communication aids].

    PubMed

    Matsushima, J; Kumagai, M; Harada, C; Takahashi, K; Inuyama, Y; Ifukube, T

    1992-09-01

    Our previous reports showed that second formant information could be transmitted through an electrode on the promontory using our speech coding method. However, second formant information can also be transmitted by tactile stimulation. Therefore, to find out whether electrical stimulation of the auditory nerve would be superior to tactile stimulation for our speech coding method, the time resolutions of the two modes of stimulation were compared. The results showed that the time resolution of electrical promontory stimulation was three times better than that of tactile stimulation of the finger. This indicates that electrical stimulation of the auditory nerve is much better suited to our speech coding method than tactile stimulation of the finger.

  15. Time-window-of-integration (TWIN) model for saccadic reaction time: effect of auditory masker level on visual-auditory spatial interaction in elevation.

    PubMed

    Colonius, Hans; Diederich, Adele; Steenken, Rike

    2009-05-01

    Saccadic reaction time (SRT) to a visual target tends to be shorter when auditory stimuli are presented in close temporal and spatial proximity, even when subjects are instructed to ignore the auditory non-target (focused attention paradigm). Previous studies using pairs of visual and auditory stimuli differing in both azimuth and vertical position suggest that the amount of SRT facilitation decreases not with the physical but with the perceivable distance between the visual target and the auditory non-target. Steenken et al. (Brain Res 1220:150-156, 2008) presented an additional three-second white-noise masker in the background. Increasing the masker level had opposite effects on SRTs in spatially coincident versus disparate stimulus configurations: saccadic responses to coincident visual-auditory stimuli were slowed down, whereas saccadic responses to disparate stimuli were speeded up. Here we show that the time-window-of-integration model accounts for this observation through variation of a perceivable-distance parameter in the second stage of the model, whose value does not depend on the stimulus onset asynchrony between target and non-target.
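    The two-stage TWIN model lends itself to a small Monte-Carlo sketch. The version below is a simplified illustration, not the authors' fitted model: stage one races exponentially distributed peripheral processing times, and integration occurs when the auditory non-target wins and the visual target terminates within the window omega; integration then shifts second-stage processing by delta (negative for facilitation, positive for inhibition). All names and parameter values are hypothetical:

    ```python
    import numpy as np

    def twin_srt(lam_v, lam_a, soa, omega, delta, n=100_000, base=100.0, rng=None):
        """Monte-Carlo mean SRT under a simplified TWIN model (times in ms).

        lam_v, lam_a : rates of the exponential peripheral processing times
                       for the visual target and auditory non-target
        soa          : onset asynchrony of the auditory non-target
        omega        : width of the time window of integration
        delta        : second-stage shift applied on integration trials
        """
        rng = rng or np.random.default_rng(0)
        v = rng.exponential(1.0 / lam_v, n)              # visual peripheral time
        a = soa + rng.exponential(1.0 / lam_a, n)        # auditory peripheral time
        integrate = (a < v) & (v < a + omega)            # non-target wins, window open
        return np.mean(base + v + np.where(integrate, delta, 0.0))
    ```

    With a negative delta (spatially coincident configuration) the predicted mean SRT falls below the unimodal baseline by the integration probability times |delta|; the masker-level effect in the abstract corresponds to moving delta through the perceivable-distance parameter.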

  16. Onset timing of cross-sensory activations and multisensory interactions in auditory and visual sensory cortices.

    PubMed

    Raij, Tommi; Ahveninen, Jyrki; Lin, Fa-Hsuan; Witzel, Thomas; Jääskeläinen, Iiro P; Letham, Benjamin; Israeli, Emily; Sahyoun, Cherif; Vasios, Christos; Stufflebeam, Steven; Hämäläinen, Matti; Belliveau, John W

    2010-05-01

    Here we report early cross-sensory activations and audiovisual interactions at the visual and auditory cortices using magnetoencephalography (MEG) to obtain accurate timing information. Data from an identical fMRI experiment were employed to support MEG source localization results. Simple auditory and visual stimuli (300-ms noise bursts and checkerboards) were presented to seven healthy humans. MEG source analysis suggested generators in the auditory and visual sensory cortices for both within-modality and cross-sensory activations. fMRI cross-sensory activations were strong in the visual but almost absent in the auditory cortex; this discrepancy with MEG possibly reflects the influence of acoustical scanner noise in fMRI. In the primary auditory cortices (Heschl's gyrus) the onset of activity to auditory stimuli was observed at 23 ms in both hemispheres, and to visual stimuli at 82 ms in the left and at 75 ms in the right hemisphere. In the primary visual cortex (Calcarine fissure) the activations to visual stimuli started at 43 ms and to auditory stimuli at 53 ms. Cross-sensory activations thus started later than sensory-specific activations, by 55 ms in the auditory cortex and by 10 ms in the visual cortex, suggesting that the origins of the cross-sensory activations may be in the primary sensory cortices of the opposite modality, with conduction delays (from one sensory cortex to another) of 30-35 ms. Audiovisual interactions started at 85 ms in the left auditory, 80 ms in the right auditory and 74 ms in the visual cortex, i.e., 3-21 ms after inputs from the two modalities converged.

  17. Onset timing of cross-sensory activations and multisensory interactions in auditory and visual sensory cortices

    PubMed Central

    Raij, Tommi; Ahveninen, Jyrki; Lin, Fa-Hsuan; Witzel, Thomas; Jääskeläinen, Iiro P.; Letham, Benjamin; Israeli, Emily; Sahyoun, Cherif; Vasios, Christos; Stufflebeam, Steven; Hämäläinen, Matti; Belliveau, John W.

    2010-01-01

    Here we report early cross-sensory activations and audiovisual interactions at the visual and auditory cortices using magnetoencephalography (MEG) to obtain accurate timing information. Data from an identical fMRI experiment were employed to support MEG source localization results. Simple auditory and visual stimuli (300-ms noise bursts and checkerboards) were presented to seven healthy humans. MEG source analysis suggested generators in the auditory and visual sensory cortices for both within-modality and cross-sensory activations. fMRI cross-sensory activations were strong in the visual but almost absent in the auditory cortex; this discrepancy with MEG possibly reflects influence of acoustical scanner noise in fMRI. In the primary auditory cortices (Heschl’s gyrus) onset of activity to auditory stimuli was observed at 23 ms in both hemispheres, and to visual stimuli at 82 ms in the left and at 75 ms in the right hemisphere. In the primary visual cortex (Calcarine fissure) the activations to visual stimuli started at 43 ms and to auditory stimuli at 53 ms. Cross-sensory activations thus started later than sensory-specific activations, by 55 ms in the auditory cortex and by 10 ms in the visual cortex, suggesting that the origins of the cross-sensory activations may be in the primary sensory cortices of the opposite modality, with conduction delays (from one sensory cortex to another) of 30–35 ms. Audiovisual interactions started at 85 ms in the left auditory, 80 ms in the right auditory, and 74 ms in the visual cortex, i.e., 3–21 ms after inputs from both modalities converged. PMID:20584181

  18. Speech enhancement for listeners with hearing loss based on a model for vowel coding in the auditory midbrain.

    PubMed

    Rao, Akshay; Carney, Laurel H

    2014-07-01

    A novel signal-processing strategy is proposed to enhance speech for listeners with hearing loss. The strategy focuses on improving vowel perception based on a recent hypothesis for vowel coding in the auditory system. Traditionally, studies of neural vowel encoding have focused on the representation of formants (peaks in vowel spectra) in the discharge patterns of the population of auditory-nerve (AN) fibers. A recent hypothesis focuses instead on vowel encoding in the auditory midbrain, and suggests a robust representation of formants. AN fiber discharge rates are characterized by pitch-related fluctuations having frequency-dependent modulation depths. Fibers tuned to frequencies near formants exhibit weaker pitch-related fluctuations than those tuned to frequencies between formants. Many auditory midbrain neurons show tuning to amplitude modulation frequency in addition to audio frequency. According to the auditory midbrain vowel encoding hypothesis, the response map of a population of midbrain neurons tuned to modulations near voice pitch exhibits minima near formant frequencies, due to the lack of strong pitch-related fluctuations at their inputs. This representation is robust over the range of noise conditions in which speech intelligibility is also robust for normal-hearing listeners. Based on this hypothesis, a vowel-enhancement strategy has been proposed that aims to restore vowel encoding at the level of the auditory midbrain. The signal processing consists of pitch tracking, formant tracking, and formant enhancement. The novel formant-tracking method proposed here estimates the first two formant frequencies by modeling characteristics of the auditory periphery, such as saturated discharge rates of AN fibers and modulation tuning properties of auditory midbrain neurons. The formant enhancement stage aims to restore the representation of formants at the level of the midbrain by increasing the dominance of a single harmonic near each formant and saturating
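    The idea of "increasing the dominance of a single harmonic near each formant" can be illustrated with a minimal FFT-domain sketch. This is not the published algorithm (which operates on tracked pitch and formants with additional processing); the function name, gain, and framing are hypothetical:

    ```python
    import numpy as np

    def enhance_harmonic_near_formant(frame, fs, f0, formants, gain_db=6.0):
        """Boost the single harmonic of f0 (Hz) closest to each formant (Hz).

        frame : one windowed speech frame (1-D array)
        Works per-frame; a real system would track f0 and formants over time."""
        spec = np.fft.rfft(frame)
        freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)
        g = 10 ** (gain_db / 20)
        for f in formants:
            k = max(1, round(f / f0))            # nearest harmonic number
            target = k * f0                      # frequency of that harmonic
            idx = np.argmin(np.abs(freqs - target))
            spec[idx] *= g                       # amplify that harmonic's bin
        return np.fft.irfft(spec, len(frame))
    ```

    Making one harmonic dominant near each formant deepens the pitch-related fluctuation contrast that, under the midbrain hypothesis, marks formant locations.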

  19. Anodal transcranial direct current stimulation over auditory cortex degrades frequency discrimination by affecting temporal, but not place, coding.

    PubMed

    Tang, Matthew F; Hammond, Geoffrey R

    2013-09-01

    We report three studies of the effects of anodal transcranial direct current stimulation (tDCS) over auditory cortex on audition in humans. Experiment 1 examined whether tDCS enhances rapid frequency discrimination learning. Human subjects were trained on a frequency discrimination task for two days, with anodal tDCS applied during the first day; the second day was used to assess the effects of stimulation on retention. This revealed that tDCS did not affect learning but did degrade frequency discrimination on both days. Follow-up testing 2-3 months after stimulation showed no long-term effects. Following these unexpected results, two additional experiments examined the effects of tDCS on the underlying mechanisms of frequency discrimination: place and temporal coding. Place coding underlies frequency selectivity and was measured using psychophysical tuning curves, with broader curves indicating poorer frequency selectivity. Temporal coding was assessed by measuring the ability to discriminate sounds with different fine temporal structure. We found that tDCS did not broaden frequency selectivity but instead degraded the ability to discriminate tones with different fine temporal structure. Overall, the results suggest that anodal tDCS applied over auditory cortex degrades frequency discrimination by affecting temporal, but not place, coding mechanisms.

  20. Contributions of Sensory Coding and Attentional Control to Individual Differences in Performance in Spatial Auditory Selective Attention Tasks

    PubMed Central

    Dai, Lengshi; Shinn-Cunningham, Barbara G.

    2016-01-01

Listeners with normal hearing thresholds (NHTs) differ in their ability to steer attention to whatever sound source is important. This ability depends on top-down executive control, which modulates the sensory representation of sound in the cortex. Yet, this sensory representation also depends on the coding fidelity of the peripheral auditory system. Both of these factors may thus contribute to the individual differences in performance. We designed a selective auditory attention paradigm in which we could simultaneously measure envelope following responses (EFRs, reflecting peripheral coding), onset event-related potentials (ERPs) from the scalp (reflecting cortical responses to sound) and behavioral scores. We performed two experiments that varied stimulus conditions to alter the degree to which performance might be limited due to fine stimulus details vs. due to control of attentional focus. Consistent with past work, in both experiments we find that attention strongly modulates cortical ERPs. Importantly, in Experiment I, where coding fidelity limits the task, individual behavioral performance correlates with subcortical coding strength (derived by computing how the EFR is degraded for fully masked tones compared to partially masked tones); however, in this experiment, the effects of attention on cortical ERPs were unrelated to individual subject performance. In contrast, in Experiment II, where sensory cues for segregation are robust (and thus less of a limiting factor on task performance), inter-subject behavioral differences correlate with subcortical coding strength. In addition, after factoring out the influence of subcortical coding strength, behavioral differences are also correlated with the strength of attentional modulation of ERPs. These results support the hypothesis that behavioral abilities amongst listeners with NHTs can arise due to both subcortical coding differences and differences in attentional control, depending on stimulus characteristics.
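The "subcortical coding strength" measure described above (how much the EFR degrades under full vs. partial masking) can be sketched as a simple spectral comparison. This is only an illustration, not the paper's actual pipeline: the function names, the 100 Hz modulation rate, and the synthetic responses are all invented for the demo.

```python
import numpy as np

def efr_amplitude(response, fs, f0):
    # Spectral amplitude of the envelope-following response at the
    # stimulus modulation frequency f0 (Hz), given sampling rate fs
    spectrum = np.abs(np.fft.rfft(response)) / len(response)
    freqs = np.fft.rfftfreq(len(response), d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - f0))]

def coding_strength(efr_partial, efr_full, fs, f0):
    # Hypothetical index: fractional degradation of the EFR at f0 when
    # the tone is fully masked, relative to the partially masked case
    a_partial = efr_amplitude(efr_partial, fs, f0)
    a_full = efr_amplitude(efr_full, fs, f0)
    return (a_partial - a_full) / a_partial

# Synthetic demo: a 100 Hz EFR that weakens under full masking
fs, f0 = 5000, 100.0
t = np.arange(0, 1, 1 / fs)
rng = np.random.default_rng(0)
partial = np.sin(2 * np.pi * f0 * t) + 0.2 * rng.standard_normal(t.size)
full = 0.3 * np.sin(2 * np.pi * f0 * t) + 0.2 * rng.standard_normal(t.size)
strength = coding_strength(partial, full, fs, f0)
print(round(strength, 2))
```

A value near 0 would indicate no degradation under full masking; values near 1 indicate the EFR is largely abolished.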

  1. Linear coding of voice onset time.

    PubMed

    Frye, Richard E; Fisher, Janet McGraw; Coty, Alexis; Zarella, Melissa; Liederman, Jacqueline; Halgren, Eric

    2007-09-01

    Voice onset time (VOT) provides an important auditory cue for recognizing spoken consonant-vowel syllables. Although changes in the neuromagnetic response to consonant-vowel syllables with different VOT have been examined, such experiments have only manipulated VOT with respect to voicing. We utilized the characteristics of a previously developed asymmetric VOT continuum [Liederman, J., Frye, R. E., McGraw Fisher, J., Greenwood, K., & Alexander, R. A temporally dynamic contextual effect that disrupts voice onset time discrimination of rapidly successive stimuli. Psychonomic Bulletin and Review, 12, 380-386, 2005] to determine if changes in the prominent M100 neuromagnetic response were linearly modulated by VOT. Eight right-handed, English-speaking, normally developing participants performed a VOT discrimination task during a whole-head neuromagnetic recording. The M100 was identified in the gradiometers overlying the right and left temporal cortices and single dipoles were fit to each M100 waveform. A repeated measures analysis of variance with post hoc contrast test for linear trend was used to determine whether characteristics of the M100 were linearly modulated by VOT. The morphology of the M100 gradiometer waveform and the peak latency of the dipole waveform were linearly modulated by VOT. This modulation was much greater in the left, as compared to the right, hemisphere. The M100 dipole moved in a linear fashion as VOT increased in both hemispheres, but along different axes in each hemisphere. This study suggests that VOT may linearly modulate characteristics of the M100, predominately in the left hemisphere, and suggests that the VOT of consonant-vowel syllables, instead of, or in addition to, voicing, should be examined in future experiments.

  2. Neural coding of sound intensity and loudness in the human auditory system.

    PubMed

    Röhl, Markus; Uppenkamp, Stefan

    2012-06-01

Inter-individual differences in loudness sensation of 45 young normal-hearing participants were employed to investigate how and at what stage of the auditory pathway perceived loudness, the perceptual correlate of sound intensity, is transformed into neural activation. Loudness sensation was assessed by categorical loudness scaling, a psychoacoustical scaling procedure, whereas neural activation in the auditory cortex, inferior colliculi, and medial geniculate bodies was investigated with functional magnetic resonance imaging (fMRI). We observed an almost linear increase of perceived loudness and percent signal change from baseline (PSC) in all examined stages of the upper auditory pathway. Across individuals, the slope of the underlying growth function for perceived loudness was significantly correlated with the slope of the growth function for the PSC in the auditory cortex, but not in subcortical structures. In conclusion, the fMRI correlate of neural activity in the auditory cortex as measured by the blood oxygen level-dependent effect appears to be a linear reflection of subjective loudness sensation rather than of physical sound pressure level, as measured using a sound-level meter.
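The across-subject correlation of growth-function slopes reported here can be illustrated with synthetic data. Everything below (the per-subject gain factor, noise levels, and the 0.02 scaling between loudness and PSC) is invented for the sketch; only the analysis shape, fitting a slope per subject and correlating the two slope sets, mirrors the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
levels = np.arange(30, 91, 10)        # stimulus levels in dB SPL
n_subjects = 45

# Invented per-subject growth functions: loudness ratings and cortical
# percent signal change (PSC) share a subject-specific gain factor
gain = rng.uniform(0.5, 1.5, n_subjects)
loudness = gain[:, None] * levels + rng.normal(0, 2, (n_subjects, levels.size))
psc = 0.02 * gain[:, None] * levels + rng.normal(0, 0.1, (n_subjects, levels.size))

# Slope of each subject's growth function via least-squares fit vs. level
slope_loud = np.polyfit(levels, loudness.T, 1)[0]
slope_psc = np.polyfit(levels, psc.T, 1)[0]

# Across-subject correlation of the two slope sets
r = np.corrcoef(slope_loud, slope_psc)[0, 1]
print(round(r, 2))
```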

  3. Time coded distribution via broadcasting stations

    NASA Technical Reports Server (NTRS)

    Leschiutta, S.; Pettiti, V.; Detoma, E.

    1979-01-01

The distribution of standard time signals via AM and FM broadcasting stations offers the distinct advantages of wide-area coverage and inexpensive receivers, but the signals are radiated a limited number of times per day, are not usually available during the night, and no fully automatic synchronization of a remote clock is possible. In an attempt to overcome some of these problems, a time-coded signal with complete date information is broadcast by the IEN via the national broadcasting networks in Italy. These signals are radiated by some 120 AM and about 3000 FM and TV transmitters around the country. In this way, a time-ordered system with an accuracy of a couple of milliseconds is easily achieved.
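As a toy illustration of how complete date information can be packed into a broadcast time-code frame, each calendar field can be serialized as a fixed-width binary field. The layout below is purely illustrative and does not reproduce the actual IEN code format.

```python
from datetime import datetime

def field_bits(value, width):
    # Serialize an integer as `width` bits, most significant bit first
    return [(value >> i) & 1 for i in range(width - 1, -1, -1)]

def encode_timecode(ts):
    # Illustrative frame layout: hour(5) minute(6) day(5) month(4) year%100(7)
    return (field_bits(ts.hour, 5) + field_bits(ts.minute, 6) +
            field_bits(ts.day, 5) + field_bits(ts.month, 4) +
            field_bits(ts.year % 100, 7))

frame = encode_timecode(datetime(1979, 1, 15, 12, 30))
print(len(frame), frame[:5])   # 27 bits; hour 12 -> 01100
```

A receiver that knows the layout can decode the frame and set a remote clock automatically, which is the capability the plain time pips lack.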

  4. Interactive rhythmic auditory stimulation reinstates natural 1/f timing in gait of Parkinson's patients.

    PubMed

    Hove, Michael J; Suzuki, Kazuki; Uchitomi, Hirotaka; Orimo, Satoshi; Miyake, Yoshihiro

    2012-01-01

    Parkinson's disease (PD) and basal ganglia dysfunction impair movement timing, which leads to gait instability and falls. Parkinsonian gait consists of random, disconnected stride times--rather than the 1/f structure observed in healthy gait--and this randomness of stride times (low fractal scaling) predicts falling. Walking with fixed-tempo Rhythmic Auditory Stimulation (RAS) can improve many aspects of gait timing; however, it lowers fractal scaling (away from healthy 1/f structure) and requires attention. Here we show that interactive rhythmic auditory stimulation reestablishes healthy gait dynamics in PD patients. In the experiment, PD patients and healthy participants walked with a) no auditory stimulation, b) fixed-tempo RAS, and c) interactive rhythmic auditory stimulation. The interactive system used foot sensors and nonlinear oscillators to track and mutually entrain with the human's step timing. Patients consistently synchronized with the interactive system, their fractal scaling returned to levels of healthy participants, and their gait felt more stable to them. Patients and healthy participants rarely synchronized with fixed-tempo RAS, and when they did synchronize their fractal scaling declined from healthy 1/f levels. Five minutes after removing the interactive rhythmic stimulation, the PD patients' gait retained high fractal scaling, suggesting that the interaction stabilized the internal rhythm generating system and reintegrated timing networks. The experiment demonstrates that complex interaction is important in the (re)emergence of 1/f structure in human behavior and that interactive rhythmic auditory stimulation is a promising therapeutic tool for improving gait of PD patients.
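The "fractal scaling" of stride-time series referred to above is commonly estimated with detrended fluctuation analysis (DFA). Below is a minimal DFA sketch (window sizes and the white-noise demo series are arbitrary choices, not taken from the paper): an exponent alpha near 0.5 indicates random, uncorrelated timing, as in parkinsonian gait, while alpha near 1.0 indicates healthy 1/f structure.

```python
import numpy as np

def dfa_alpha(x, scales=(4, 8, 16, 32, 64)):
    # Detrended fluctuation analysis: slope of log(fluctuation) vs log(scale)
    y = np.cumsum(x - np.mean(x))          # integrated profile
    flucts = []
    for s in scales:
        n = len(y) // s
        segs = y[:n * s].reshape(n, s)
        t = np.arange(s)
        f2 = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)   # linear detrend per window
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

# White noise as a stand-in for random, disconnected stride times
rng = np.random.default_rng(0)
white = rng.standard_normal(2000)
alpha = dfa_alpha(white)
print(round(alpha, 2))
```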

  5. Slow Cholinergic Modulation of Spike Probability in Ultra-Fast Time-Coding Sensory Neurons

    PubMed Central

    Goyer, David; Kurth, Stefanie; Rübsamen, Rudolf

    2016-01-01

Sensory processing in the lower auditory pathway is generally considered to be rigid and thus less subject to modulation than central processing. However, in addition to the powerful bottom-up excitation by auditory nerve fibers, the ventral cochlear nucleus also receives efferent cholinergic innervation from both auditory and nonauditory top–down sources. We thus tested the influence of cholinergic modulation on highly precise time-coding neurons in the cochlear nucleus of the Mongolian gerbil. By combining electrophysiological recordings with pharmacological application in vitro and in vivo, we found 55–72% of spherical bushy cells (SBCs) to be depolarized by carbachol on two time scales, ranging from hundreds of milliseconds to minutes. These effects were mediated by nicotinic and muscarinic acetylcholine receptors, respectively. Pharmacological block of muscarinic receptors hyperpolarized the resting membrane potential, suggesting a novel mechanism of setting the resting membrane potential for SBCs. The cholinergic depolarization led to an increase of spike probability in SBCs without compromising the temporal precision of the SBC output in vitro. In vivo, iontophoretic application of carbachol resulted in an increase in spontaneous SBC activity. The inclusion of cholinergic modulation in an SBC model predicted an expansion of the dynamic range of sound responses and increased temporal acuity. Our results thus suggest a top–down modulatory system, mediated by acetylcholine, that influences temporally precise information processing in the lower auditory pathway. PMID:27699207

  6. I can see what you are saying: Auditory labels reduce visual search times.

    PubMed

    Cho, Kit W

    2016-10-01

    The present study explored the self-directed-speech effect, the finding that relative to silent reading of a label (e.g., DOG), saying it aloud reduces visual search reaction times (RTs) for locating a target picture among distractors. Experiment 1 examined whether this effect is due to a confound in the differences in the number of cues in self-directed speech (two) vs. silent reading (one) and tested whether self-articulation is required for the effect. The results showed that self-articulation is not required and that merely hearing the auditory label reduces visual search RTs relative to silent reading. This finding also rules out the number of cues confound. Experiment 2 examined whether hearing an auditory label activates more prototypical features of the label's referent and whether the auditory-label benefit is moderated by the target's imagery concordance (the degree to which the target picture matches the mental picture that is activated by a written label for the target). When the target imagery concordance was high, RTs following the presentation of a high prototypicality picture or auditory cue were comparable and shorter than RTs following a visual label or low prototypicality picture cue. However, when the target imagery concordance was low, RTs following an auditory cue were shorter than the comparable RTs following the picture cues and visual-label cue. The results suggest that an auditory label activates both prototypical and atypical features of a concept and can facilitate visual search RTs even when compared to picture primes.

  7. Code for Calculating Regional Seismic Travel Time

    SciTech Connect

    BALLARD, SANFORD; HIPP, JAMES; & BARKER, GLENN

    2009-07-10

The RSTT software computes predictions of the travel time of seismic energy traveling from a source to a receiver through 2.5D models of the seismic velocity distribution within the Earth. The two primary applications for the RSTT library are tomographic inversion studies and seismic event location calculations. In tomographic inversion studies, a seismologist begins with a number of source-receiver travel time observations and an initial starting model of the velocity distribution within the Earth. A forward travel time calculator, such as the RSTT library, is used to compute predictions of each observed travel time and all of the residuals (observed minus predicted travel time) are calculated. The Earth model is then modified in some systematic way with the goal of minimizing the residuals. The Earth model obtained in this way is assumed to be a better model than the starting model if it has lower residuals. The other major application for the RSTT library is seismic event location. Given an Earth model, an initial estimate of the location of a seismic event, and some number of observations of seismic travel time thought to have originated from that event, location codes systematically modify the estimate of the location of the event with the goal of minimizing the difference between the observed and predicted travel times. The second application, seismic event location, is routinely implemented by the military as part of its effort to monitor the Earth for nuclear tests conducted by foreign countries.
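The residual-minimization loop described above can be sketched in miniature. The forward calculator here is a straight-ray stand-in (distance divided by a single velocity), not the RSTT library's 2.5D model; the observations and the grid search over one velocity parameter are likewise invented to show the shape of the inversion.

```python
import numpy as np

def predicted_time(distance_km, velocity_km_s):
    # Stand-in forward calculator: straight-ray travel time
    return distance_km / velocity_km_s

# Hypothetical observations generated by a "true" velocity of 8.0 km/s
distances = np.array([200.0, 450.0, 800.0, 1200.0])
observed = predicted_time(distances, 8.0)

# Systematically modify the model (here: one velocity parameter) to
# minimize the summed squared residuals (observed minus predicted)
candidates = np.arange(6.0, 10.01, 0.05)
sse = [np.sum((observed - predicted_time(distances, v)) ** 2) for v in candidates]
best = candidates[int(np.argmin(sse))]
print(round(best, 2))
```

A real inversion adjusts thousands of model parameters and must regularize the problem, but the accept-the-model-with-lower-residuals logic is the same.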

  8. [Development of auditory-visual spatial integration using saccadic response time as the index].

    PubMed

    Kato, Masaharu; Konishi, Kaoru; Kurosawa, Makiko; Konishi, Yukuo

    2006-05-01

    We measured saccadic response time (SRT) to investigate developmental changes related to spatially aligned or misaligned auditory and visual stimuli responses. We exposed 4-, 5-, and 11-month-old infants to ipsilateral or contralateral auditory-visual stimuli and monitored their eye movements using an electro-oculographic (EOG) system. The SRT analyses revealed four main results. First, saccades were triggered by visual stimuli but not always triggered by auditory stimuli. Second, SRTs became shorter as the children grew older. Third, SRTs for the ipsilateral and visual-only conditions were the same in all infants. Fourth, SRTs for the contralateral condition were longer than for the ipsilateral and visual-only conditions in 11-month-old infants but were the same for all three conditions in 4- and 5-month-old infants. These findings suggest that infants acquire the function of auditory-visual spatial integration underlying saccadic eye movement between the ages of 5 and 11 months. The dependency of SRTs on the spatial configuration of auditory and visual stimuli can be explained by cortical control of the superior colliculus. Our finding of no differences in SRTs between the ipsilateral and visual-only conditions suggests that there are multiple pathways for controlling the superior colliculus and that these pathways have different developmental time courses.

  9. A Comparative Study of Simple Auditory Reaction Time in Blind (Congenitally) and Sighted Subjects

    PubMed Central

    Gandhi, Pritesh Hariprasad; Gokhale, Pradnya A.; Mehta, H. B.; Shah, C. J.

    2013-01-01

Background: Reaction time is the time interval between the application of a stimulus and the appearance of an appropriate voluntary response by a subject. It involves stimulus processing, decision making, and response programming. Reaction time studies have been popular due to their implications in sports physiology. Reaction time has been widely studied as its practical implications may be of great consequence, e.g., a slower-than-normal reaction time while driving can have grave results. Objective: To study simple auditory reaction time in congenitally blind subjects and in age- and sex-matched sighted subjects, and to compare the simple auditory reaction time between congenitally blind subjects and healthy control subjects. Materials and Methods: The study was carried out in two groups: the first comprised 50 congenitally blind subjects and the second 50 healthy controls. It was carried out on a Multiple Choice Reaction Time Apparatus, Inco Ambala Ltd. (accuracy ±0.001 s), in a sitting position at Government Medical College and Hospital, Bhavnagar and at a Blind School, PNR campus, Bhavnagar, Gujarat, India. Observations/Results: Simple auditory reaction time responses to four different types of sound (horn, bell, ring, and whistle) were recorded in both groups. According to our study, there is no significant difference in reaction time between congenitally blind and normal healthy persons. Conclusion: Blind individuals commonly utilize tactual and auditory cues for information and orientation; their reliance on touch and audition, together with more practice in using these modalities to guide behavior, is often reflected in better performance of blind relative to sighted participants in tactile or auditory discrimination tasks, but there is no difference in reaction time between congenitally blind and sighted people. PMID:24249930

  10. Onset coding is degraded in auditory nerve fibers from mutant mice lacking synaptic ribbons.

    PubMed

    Buran, Bradley N; Strenzke, Nicola; Neef, Andreas; Gundelfinger, Eckart D; Moser, Tobias; Liberman, M Charles

    2010-06-02

    Synaptic ribbons, found at the presynaptic membrane of sensory cells in both ear and eye, have been implicated in the vesicle-pool dynamics of synaptic transmission. To elucidate ribbon function, we characterized the response properties of single auditory nerve fibers in mice lacking Bassoon, a scaffolding protein involved in anchoring ribbons to the membrane. In bassoon mutants, immunohistochemistry showed that fewer than 3% of the hair cells' afferent synapses retained anchored ribbons. Auditory nerve fibers from mutants had normal threshold, dynamic range, and postonset adaptation in response to tone bursts, and they were able to phase lock with normal precision to amplitude-modulated tones. However, spontaneous and sound-evoked discharge rates were reduced, and the reliability of spikes, particularly at stimulus onset, was significantly degraded as shown by an increased variance of first-spike latencies. Modeling based on in vitro studies of normal and mutant hair cells links these findings to reduced release rates at the synapse. The degradation of response reliability in these mutants suggests that the ribbon and/or Bassoon normally facilitate high rates of exocytosis and that its absence significantly compromises the temporal resolving power of the auditory system.
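The "increased variance of first-spike latencies" used above as a reliability measure can be sketched directly. The two synthetic trial sets below (tight wild-type-like latencies vs. jittery mutant-like latencies) are stand-ins invented for the demo, not the paper's data.

```python
import numpy as np

def first_spike_latencies(trials, onset=0.0):
    # Latency of the first spike at or after stimulus onset, per trial;
    # trials with no post-onset spike are skipped
    lats = []
    for spikes in trials:
        post = [t for t in spikes if t >= onset]
        if post:
            lats.append(min(post) - onset)
    return np.array(lats)

# Synthetic single-spike trials (times in seconds)
rng = np.random.default_rng(2)
wt = [[rng.normal(0.005, 0.0005)] for _ in range(100)]   # tight onset timing
mut = [[rng.normal(0.008, 0.0030)] for _ in range(100)]  # degraded timing

var_wt = np.var(first_spike_latencies(wt))
var_mut = np.var(first_spike_latencies(mut))
print(var_mut > var_wt)
```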

  11. Moving in time: Bayesian causal inference explains movement coordination to auditory beats.

    PubMed

    Elliott, Mark T; Wing, Alan M; Welchman, Andrew E

    2014-07-07

    Many everyday skilled actions depend on moving in time with signals that are embedded in complex auditory streams (e.g. musical performance, dancing or simply holding a conversation). Such behaviour is apparently effortless; however, it is not known how humans combine auditory signals to support movement production and coordination. Here, we test how participants synchronize their movements when there are potentially conflicting auditory targets to guide their actions. Participants tapped their fingers in time with two simultaneously presented metronomes of equal tempo, but differing in phase and temporal regularity. Synchronization therefore depended on integrating the two timing cues into a single-event estimate or treating the cues as independent and thereby selecting one signal over the other. We show that a Bayesian inference process explains the situations in which participants choose to integrate or separate signals, and predicts motor timing errors. Simulations of this causal inference process demonstrate that this model provides a better description of the data than other plausible models. Our findings suggest that humans exploit a Bayesian inference process to control movement timing in situations where the origin of auditory signals needs to be resolved.
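The integrate-vs-segregate decision described above can be sketched with a standard Bayesian causal-inference model over two timing cues (in the style of Kording-type models; all numerical parameters below, such as the prior width and cue noise, are illustrative, not fitted values from the paper).

```python
import numpy as np

def causal_inference(t1, t2, s1, s2, sp=0.05, p_common=0.5):
    # Posterior probability that two timing cues (seconds) share a common
    # cause, and the resulting timing estimate: fused if "common" wins,
    # otherwise the more reliable cue is kept
    v1, v2, vp = s1**2, s2**2, sp**2
    # Likelihood of both cues under ONE latent event (prior N(0, vp))
    d = v1 * v2 + v1 * vp + v2 * vp
    Lc = np.exp(-0.5 * ((t1 - t2)**2 * vp + t1**2 * v2 + t2**2 * v1) / d) \
         / (2 * np.pi * np.sqrt(d))
    # Likelihood under TWO independent events
    Ls = np.exp(-0.5 * (t1**2 / (v1 + vp) + t2**2 / (v2 + vp))) \
         / (2 * np.pi * np.sqrt((v1 + vp) * (v2 + vp)))
    pc = Lc * p_common / (Lc * p_common + Ls * (1 - p_common))
    fused = (t1 / v1 + t2 / v2) / (1 / v1 + 1 / v2)   # reliability-weighted
    separate = t1 if s1 <= s2 else t2                 # more reliable cue
    return pc, fused if pc > 0.5 else separate

pc_near, est_near = causal_inference(0.0, 0.01, 0.01, 0.01)  # 10 ms apart
pc_far, est_far = causal_inference(0.0, 0.20, 0.01, 0.01)    # 200 ms apart
print(round(pc_near, 2), round(pc_far, 2))
```

With nearly coincident cues the model integrates (fused estimate between the cues); with a large discrepancy it infers separate causes and selects one signal, matching the behavioral pattern the abstract reports.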

  12. Probing the time course of head-motion cues integration during auditory scene analysis.

    PubMed

    Kondo, Hirohito M; Toshima, Iwaki; Pressnitzer, Daniel; Kashino, Makio

    2014-01-01

    The perceptual organization of auditory scenes is a hard but important problem to solve for human listeners. It is thus likely that cues from several modalities are pooled for auditory scene analysis, including sensory-motor cues related to the active exploration of the scene. We previously reported a strong effect of head motion on auditory streaming. Streaming refers to an experimental paradigm where listeners hear sequences of pure tones, and rate their perception of one or more subjective sources called streams. To disentangle the effects of head motion (changes in acoustic cues at the ear, subjective location cues, and motor cues), we used a robotic telepresence system, Telehead. We found that head motion induced perceptual reorganization even when the acoustic scene had not changed. Here we reanalyzed the same data to probe the time course of sensory-motor integration. We show that motor cues had a different time course compared to acoustic or subjective location cues: motor cues impacted perceptual organization earlier and for a shorter time than other cues, with successive positive and negative contributions to streaming. An additional experiment controlled for the effects of volitional anticipatory components, and found that arm or leg movements did not have any impact on scene analysis. These data provide a first investigation of the time course of the complex integration of sensory-motor cues in an auditory scene analysis task, and they suggest a loose temporal coupling between the different mechanisms involved.

  13. Auditory Imagery Shapes Movement Timing and Kinematics: Evidence from a Musical Task

    ERIC Educational Resources Information Center

    Keller, Peter E.; Dalla Bella, Simone; Koch, Iring

    2010-01-01

    The role of anticipatory auditory imagery in music-like sequential action was investigated by examining timing accuracy and kinematics using a motion capture system. Musicians responded to metronomic pacing signals by producing three unpaced taps on three vertically aligned keys at the given tempo. Taps triggered tones in two out of three blocked…

  14. Auditory Attention to Frequency and Time: An Analogy to Visual Local-Global Stimuli

    ERIC Educational Resources Information Center

    Justus, Timothy; List, Alexandra

    2005-01-01

    Two priming experiments demonstrated exogenous attentional persistence to the fundamental auditory dimensions of frequency (Experiment 1) and time (Experiment 2). In a divided-attention task, participants responded to an independent dimension, the identification of three-tone sequence patterns, for both prime and probe stimuli. The stimuli were…

  15. System Measures Errors Between Time-Code Signals

    NASA Technical Reports Server (NTRS)

    Cree, David; Venkatesh, C. N.

    1993-01-01

    System measures timing errors between signals produced by three asynchronous time-code generators. Errors between 1-second clock pulses resolved to 2 microseconds. Basic principle of computation of timing errors as follows: central processing unit in microcontroller constantly monitors time data received from time-code generators for changes in 1-second time-code intervals. In response to any such change, microprocessor buffers count of 16-bit internal timer.
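The measurement principle, latching a free-running counter on each 1-second pulse and differencing the counts, can be simulated in a few lines. The 2-microsecond tick matches the resolution stated above; the pulse trains and drift values are invented for the demo.

```python
def edge_offsets(ref_edges, dut_edges, tick=2e-6):
    # For each reference 1-s pulse, the offset (in 2-us timer ticks) to the
    # nearest pulse of the generator under test, as a latched counter would see
    offsets = []
    for r in ref_edges:
        nearest = min(dut_edges, key=lambda d: abs(d - r))
        offsets.append(round((nearest - r) / tick))
    return offsets

# Generator B starts 50 us ahead of A and drifts 4 us per second
a = [i * 1.0 for i in range(5)]
b = [i * 1.0 - 50e-6 + i * 4e-6 for i in range(5)]
print(edge_offsets(a, b))
```

The monotonically shrinking offsets expose the relative drift between the two generators, which is exactly the change in 1-second intervals the microcontroller monitors.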

  16. Time-Varying Vocal Folds Vibration Detection Using a 24 GHz Portable Auditory Radar

    PubMed Central

    Hong, Hong; Zhao, Heng; Peng, Zhengyu; Li, Hui; Gu, Chen; Li, Changzhi; Zhu, Xiaohua

    2016-01-01

Time-varying vocal folds vibration information is of crucial importance in speech processing, but speech signals acquired by traditional devices are easily smeared by high background noise and voice interference. In this paper, we present a non-acoustic way to capture the human vocal folds vibration using a 24-GHz portable auditory radar. Since the vocal folds vibration displacement only reaches several millimeters, a high operating frequency and a 4 × 4 antenna array are applied to achieve high sensitivity. A Variational Mode Decomposition (VMD) based algorithm is proposed to first decompose the radar-detected auditory signal into a sequence of intrinsic modes and then extract the time-varying vocal folds vibration frequency from the corresponding mode. Feasibility demonstration, evaluation, and comparison are conducted with tonal and non-tonal languages, and the low relative errors show a high consistency between the radar-detected time-varying vocal folds vibration and the acoustic fundamental frequency, with the auditory radar significantly improving the frequency-resolving power. PMID:27483261

  17. Time-Varying Vocal Folds Vibration Detection Using a 24 GHz Portable Auditory Radar.

    PubMed

    Hong, Hong; Zhao, Heng; Peng, Zhengyu; Li, Hui; Gu, Chen; Li, Changzhi; Zhu, Xiaohua

    2016-07-28

Time-varying vocal folds vibration information is of crucial importance in speech processing, but speech signals acquired by traditional devices are easily smeared by high background noise and voice interference. In this paper, we present a non-acoustic way to capture the human vocal folds vibration using a 24-GHz portable auditory radar. Since the vocal folds vibration displacement only reaches several millimeters, a high operating frequency and a 4 × 4 antenna array are applied to achieve high sensitivity. A Variational Mode Decomposition (VMD) based algorithm is proposed to first decompose the radar-detected auditory signal into a sequence of intrinsic modes and then extract the time-varying vocal folds vibration frequency from the corresponding mode. Feasibility demonstration, evaluation, and comparison are conducted with tonal and non-tonal languages, and the low relative errors show a high consistency between the radar-detected time-varying vocal folds vibration and the acoustic fundamental frequency, with the auditory radar significantly improving the frequency-resolving power.
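Tracking a time-varying vibration frequency from a demodulated radar signal can be illustrated with a sliding-window FFT peak tracker. This is a deliberately simple stand-in for the paper's VMD-based mode extraction; the chirp test signal (pitch gliding from 120 Hz to 180 Hz) and all parameters are invented for the demo.

```python
import numpy as np

def track_vibration_freq(sig, fs, win=0.1):
    # Dominant frequency in successive non-overlapping windows of `win` s,
    # via a Hann-windowed FFT (a toy substitute for VMD mode tracking)
    n = int(win * fs)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    track = []
    for start in range(0, len(sig) - n + 1, n):
        seg = sig[start:start + n] * np.hanning(n)
        track.append(freqs[np.argmax(np.abs(np.fft.rfft(seg)))])
    return np.array(track)

# Synthetic "vocal folds" signal: instantaneous frequency 120 + 60*t Hz
fs = 2000
t = np.arange(0, 1, 1 / fs)
sig = np.sin(2 * np.pi * (120 * t + 30 * t**2))
est = track_vibration_freq(sig, fs)
print(est[0], est[-1])
```

The estimated track rises across the utterance, recovering the glide; a real system would trade off window length against the frequency-resolving power discussed in the abstract.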

  18. Activity in the left auditory cortex is associated with individual impulsivity in time discounting.

    PubMed

    Han, Ruokang; Takahashi, Taiki; Miyazaki, Akane; Kadoya, Tomoka; Kato, Shinya; Yokosawa, Koichi

    2015-01-01

Impulsivity dictates individual decision-making behavior. Therefore, it can reflect consumption behavior and risk of addiction and thus underlies social activities as well. Neuroscience has been applied to explain social activities; however, the brain function controlling impulsivity has remained unclear. It is known that impulsivity is related to individual time perception, i.e., a person who perceives a certain physical time as being longer is impulsive. Here we show that activity of the left auditory cortex is related to individual impulsivity. Individual impulsivity was evaluated by a self-answered questionnaire in twelve healthy right-handed adults, and activities of the auditory cortices of bilateral hemispheres when listening to continuous tones were recorded by magnetoencephalography. Sustained activity of the left auditory cortex was significantly correlated to impulsivity, that is, larger sustained activity indicated stronger impulsivity. The results suggest that the left auditory cortex represents time perception, probably because the area is involved in speech perception, and that it represents impulsivity indirectly.

  19. Auditory-evoked spike firing in the lateral amygdala and Pavlovian fear conditioning: mnemonic code or fear bias?

    PubMed

    Goosens, Ki A; Hobin, Jennifer A; Maren, Stephen

    2003-12-04

    Amygdala neuroplasticity has emerged as a candidate substrate for Pavlovian fear memory. By this view, conditional stimulus (CS)-evoked activity represents a mnemonic code that initiates the expression of fear behaviors. However, a fear state may nonassociatively enhance sensory processing, biasing CS-evoked activity in amygdala neurons. Here we describe experiments that dissociate auditory CS-evoked spike firing in the lateral amygdala (LA) and both conditional fear behavior and LA excitability in rats. We found that the expression of conditional freezing and increased LA excitability was neither necessary nor sufficient for the expression of conditional increases in CS-evoked spike firing. Rather, conditioning-related changes in CS-evoked spike firing were solely determined by the associative history of the CS. Thus, our data support a model in which associative activity in the LA encodes fear memory and contributes to the expression of learned fear behaviors.

  20. Space-Time Network Codes Utilizing Transform-Based Coding

    DTIC Science & Technology

    2010-12-01

1 − p_rn if β_rn = 1; p_rn if β_rn = 0, (17) where p_rn is the symbol error rate (SER) for detecting x_n at U_r. For M-QAM modulation, it can be shown...time, time-division multiple access (TDMA) would be the most commonly-used technique in many applications. However, TDMA is extremely inefficient in...r ≠ n, where x_n is from an M-QAM constellation X. At the end of this phase, each client node U_r for r = 1, 2, ..., N possesses a set of N symbols

  1. Inattentional Deafness: Visual Load Leads to Time-Specific Suppression of Auditory Evoked Responses

    PubMed Central

    Molloy, Katharine; Griffiths, Timothy D.; Lavie, Nilli

    2015-01-01

Due to capacity limits on perception, conditions of high perceptual load lead to reduced processing of unattended stimuli (Lavie et al., 2014). Accumulating work demonstrates the effects of visual perceptual load on visual cortex responses, but the effects on auditory processing remain poorly understood. Here we establish the neural mechanisms underlying “inattentional deafness”—the failure to perceive auditory stimuli under high visual perceptual load. Participants performed a visual search task of low (target dissimilar to nontarget items) or high (target similar to nontarget items) load. On a random subset (50%) of trials, irrelevant tones were presented concurrently with the visual stimuli. Brain activity was recorded with magnetoencephalography, and time-locked responses to the visual search array and to the incidental presence of unattended tones were assessed. High, compared to low, perceptual load led to increased early visual evoked responses (within 100 ms from onset). This was accompanied by reduced early (∼100 ms from tone onset) auditory evoked activity in superior temporal sulcus and posterior middle temporal gyrus. A later suppression of the P3 “awareness” response to the tones was also observed under high load. A behavioral experiment revealed reduced tone detection sensitivity under high visual load, indicating that the reduction in neural responses was indeed associated with reduced awareness of the sounds. These findings support a neural account of shared audiovisual resources, which, when depleted under load, leads to failures of sensory perception and awareness. SIGNIFICANCE STATEMENT The present work clarifies the neural underpinning of inattentional deafness under high visual load. The findings of near-simultaneous load effects on both visual and auditory evoked responses suggest shared audiovisual processing capacity. Temporary depletion of shared capacity in perceptually demanding visual tasks leads to a momentary reduction in awareness of auditory input.

  2. Neural codes for perceptual discrimination of acoustic flutter in the primate auditory cortex

    PubMed Central

    Lemus, Luis; Hernández, Adrián; Romo, Ranulfo

    2009-01-01

    We recorded from single neurons of the primary auditory cortex (A1), while trained monkeys reported a decision based on the comparison of 2 acoustic flutter stimuli. Crucially, to form the decision, monkeys had to compare the second stimulus rate to the memory trace of the first stimulus rate. We found that the responses of A1 neurons encode stimulus rates both through their periodicity and through their firing rates during the stimulation periods, but not during the working memory and decision components of this task. Neurometric thresholds based on firing rate were very similar to the monkey's discrimination thresholds, whereas neurometric thresholds based on periodicity were lower than the experimental thresholds. Thus, an observer could solve this task with a precision similar to that of the monkey based only on the firing rates evoked by the stimuli. These results suggest that the A1 is exclusively associated with the sensory and not with the cognitive components of this task. PMID:19458263
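The "neurometric threshold based on firing rate" idea above rests on an ideal observer discriminating two stimulus rates from single-trial spike counts, commonly quantified as the area under the ROC curve. The sketch below uses synthetic Poisson firing rates (the 20 vs. 26 spikes/s means and the flutter rates in the comments are invented for the demo).

```python
import numpy as np

def neurometric_pcorrect(rates_a, rates_b):
    # Ideal-observer performance discriminating two stimuli from firing
    # rates: P(random B trial > random A trial), i.e. ROC area, with
    # ties counted as chance
    a = np.asarray(rates_a)[:, None]
    b = np.asarray(rates_b)[None, :]
    return np.mean(b > a) + 0.5 * np.mean(b == a)

# Synthetic trials: firing rate grows with acoustic flutter rate
rng = np.random.default_rng(3)
trials_10hz = rng.poisson(20, 50)   # spike counts at a 10 Hz flutter
trials_14hz = rng.poisson(26, 50)   # spike counts at a 14 Hz flutter
auc = neurometric_pcorrect(trials_10hz, trials_14hz)
print(round(auc, 2))
```

Sweeping the rate difference and finding where this probability crosses a criterion (e.g. 75% correct) yields the neurometric threshold that the study compares against the monkey's behavioral threshold.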

  3. Auditory forebrain neurons track temporal features of time-warped natural stimuli.

    PubMed

    Maddox, Ross K; Sen, Kamal; Billimoria, Cyrus P

    2014-02-01

    A fundamental challenge for sensory systems is to recognize natural stimuli despite stimulus variations. A compelling example occurs in speech, where the auditory system can recognize words spoken at a wide range of speeds. To date, there have been more computational models for time-warp invariance than experimental studies that investigate responses to time-warped stimuli at the neural level. Here, we address this problem in the model system of zebra finches anesthetized with urethane. In behavioral experiments, we found high discrimination accuracy well beyond the observed natural range of song variations. We artificially sped up or slowed down songs (preserving pitch) and recorded auditory responses from neurons in field L, the avian primary auditory cortex homolog. We found that field L neurons responded robustly to time-warped songs, tracking the temporal features of the stimuli over a broad range of warp factors. Time-warp invariance was not observed per se, but there was sufficient information in the neural responses to reliably classify which of two songs was presented. Furthermore, the average spike rate was close to constant over the range of time warps, contrary to recent modeling predictions. We discuss how this response pattern is surprising given current computational models of time-warp invariance and how such a response could be decoded downstream to achieve time-warp-invariant recognition of sounds.

  4. Effect of vestibular stimulation on auditory and visual reaction time in relation to stress

    PubMed Central

    Rajagopalan, Archana; Kumar, Sai Sailesh; Mukkadan, Joseph Kurien

    2017-01-01

    The present study was undertaken to provide scientific evidence for the beneficial effects of vestibular stimulation in managing stress-induced changes in auditory and visual reaction time (RT). A total of 240 healthy college students aged 18–24 years, of either gender, took part in this research after giving written consent. RT for right and left responses to two auditory stimuli (low and high pitch) and two visual stimuli (red and green) was recorded. A significant decrease in visual RT for green and red light was observed, and stress-induced changes were effectively prevented, following vestibular stimulation. Auditory RT for high-pitch right and left responses was likewise significantly decreased, and stress-induced changes were effectively prevented, following vestibular stimulation. Vestibular stimulation is thus effective in shortening auditory and visual RT and in preventing stress-induced changes in RT in both males and females. We recommend incorporating vestibular stimulation, such as swinging, into the daily routine to improve cognitive function. PMID:28217553

  5. Space-Time Code Designs for Broadband Wireless Communications

    DTIC Science & Technology

    2005-03-01

    Decoding Algorithms (i). Fast iterative decoding algorithms for lattice based space-time coded MIMO systems and single antenna vector OFDM systems: We...Information Theory, vol. 49, p.313, Jan. 2003. 5. G. Fan and X.-G. Xia, " Wavelet - Based Texture Analysis and Synthesis Using Hidden Markov Models," IEEE...PSK, and CPM signals, lattice based space-time codes, and unitary differential space-time codes for large number of transmit antennas. We want to

  6. Event-related EEG time-frequency analysis and the Orienting Reflex to auditory stimuli.

    PubMed

    Barry, Robert J; Steiner, Genevieve Z; De Blasio, Frances M

    2012-06-01

    Sokolov's classic works discussed electroencephalogram (EEG) alpha desynchronization as a measure of the Orienting Reflex (OR). Early studies confirmed that this reduced with repeated auditory stimulation, but without reliable stimulus-significance effects. We presented an auditory habituation series with counterbalanced indifferent and significant (counting) instructions. Time-frequency analysis of electrooculogram (EOG)-corrected EEG was used to explore prestimulus levels and the timing and amplitude of event-related increases and decreases in 4 classic EEG bands. Decrement over trials and response recovery were substantial for the transient increase (in delta, theta, and alpha) and subsequent desynchronization (in theta, alpha, and beta). There was little evidence of dishabituation and few effects of counting. Expected effects in stimulus-induced alpha desynchronization were confirmed. Two EEG response patterns over trials and conditions, distinct from the full OR pattern, warrant further research.

  7. SYMTRAN - A Time-dependent Symmetric Tandem Mirror Transport Code

    SciTech Connect

    Hua, D; Fowler, T

    2004-06-15

    A time-dependent version of the steady-state radial transport model in symmetric tandem mirrors in Ref. [1] has been coded up and first tests performed. Our code, named SYMTRAN, is an adaptation of the earlier SPHERE code for spheromaks, now modified for tandem mirror physics. Motivated by Post's new concept of kinetic stabilization of symmetric mirrors, it is an extension of the earlier TAMRAC rate-equation code omitting radial transport [2], which successfully accounted for experimental results in TMX. The SYMTRAN code differs from the earlier tandem mirror radial transport code TMT in that our code is focused on axisymmetric tandem mirrors and classical diffusion, whereas TMT emphasized non-ambipolar transport in TMX and MFTF-B due to yin-yang plugs and non-symmetric transitions between the plugs and axisymmetric center cell. Both codes exhibit interesting but different non-linear behavior.

  8. Divided multimodal attention: sensory trace and context coding strategies in spatially congruent auditory and visual presentation.

    PubMed

    Kristjánsson, Tómas; Thorvaldsson, Tómas Páll; Kristjánsson, Arni

    2014-01-01

    Previous research involving both unimodal and multimodal studies suggests that single-response change detection is a capacity-free process while a discriminatory up or down identification is capacity-limited. The trace/context model assumes that this reflects different memory strategies rather than inherent differences between identification and detection. To perform such tasks, one of two strategies is used, a sensory trace or a context coding strategy, and if one is blocked, people will automatically use the other. A drawback to most preceding studies is that stimuli are presented at separate locations, creating the possibility of a spatial confound, which invites alternative interpretations of the results. We describe a series of experiments, investigating divided multimodal attention, without the spatial confound. The results challenge the trace/context model. Our critical experiment involved a gap before a change in volume and brightness, which according to the trace/context model blocks the sensory trace strategy, simultaneously with a roaming pedestal, which should block the context coding strategy. The results clearly show that people can use strategies other than sensory trace and context coding in the tasks and conditions of these experiments, necessitating changes to the trace/context model.

  9. Timing of cochlear responses inferred from frequency-threshold tuning curves of auditory-nerve fibers

    PubMed Central

    Temchin, Andrei N.; Recio-Spinoso, Alberto; Ruggero, Mario A.

    2010-01-01

    Links between frequency tuning and timing were explored in the responses to sound of auditory-nerve fibers. Synthetic transfer functions were constructed by combining filter functions, derived via minimum-phase computations from average frequency-threshold tuning curves of chinchilla auditory-nerve fibers with high spontaneous activity (A. N. Temchin et al., J. Neurophysiol. 100: 2889–2898, 2008), and signal-front delays specified by the latencies of basilar-membrane and auditory-nerve fiber responses to intense clicks (A. N. Temchin et al., J. Neurophysiol. 93: 3635–3648, 2005). The transfer functions predict several features of the phase-frequency curves of cochlear responses to tones, including their shape transitions in the regions with characteristic frequencies of 1 kHz and 3–4 kHz (A. N. Temchin and M. A. Ruggero, JARO 11: 297–318, 2010). The transfer functions also predict the shapes of cochlear impulse responses, including the polarities of their frequency sweeps and their transition at characteristic frequencies around 1 kHz. Predictions are especially accurate for characteristic frequencies < 1 kHz. PMID:20951191
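
    The minimum-phase computation mentioned above (deriving a phase response from a measured magnitude response) is commonly done via the real cepstrum. The following is a generic sketch of that standard relation, not the authors' exact procedure, and the band-pass magnitude shape is illustrative:

```python
import numpy as np

def minimum_phase_from_magnitude(mag):
    """Minimum-phase transfer function with the given magnitude response,
    computed via the real cepstrum (the Hilbert-transform relation between
    log magnitude and minimum phase). `mag` must be sampled on a full,
    Hermitian-symmetric FFT grid of length n."""
    n = len(mag)
    log_mag = np.log(np.maximum(mag, 1e-12))   # floor avoids log(0)
    cep = np.fft.ifft(log_mag).real            # real cepstrum (even sequence)
    fold = np.zeros(n)
    fold[0] = 1.0
    fold[1:(n + 1) // 2] = 2.0                 # double positive quefrencies
    if n % 2 == 0:
        fold[n // 2] = 1.0
    return np.exp(np.fft.fft(fold * cep))      # H_min(f), complex

# Sanity check on a smooth band-pass-like magnitude (illustrative shape):
f = np.fft.fftfreq(512)
mag = np.exp(-((np.abs(f) - 0.1) ** 2) / 0.002)
H = minimum_phase_from_magnitude(mag)
print(np.allclose(np.abs(H), np.maximum(mag, 1e-12), rtol=1e-6))
```

The recovered magnitude matches the input exactly; the phase of `H` is the minimum phase implied by that magnitude, to which a frequency-independent signal-front delay can then be appended as in the abstract.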

  10. Changes across time in the temporal responses of auditory nerve fibers stimulated by electric pulse trains.

    PubMed

    Miller, Charles A; Hu, Ning; Zhang, Fawen; Robinson, Barbara K; Abbas, Paul J

    2008-03-01

    Most auditory prostheses use modulated electric pulse trains to excite the auditory nerve. There are, however, scant data regarding the effects of pulse trains on auditory nerve fiber (ANF) responses across the duration of such stimuli. We examined how temporal ANF properties changed with level and pulse rate across 300-ms pulse trains. Four measures were examined: (1) first-spike latency, (2) interspike interval (ISI), (3) vector strength (VS), and (4) Fano factor (FF, an index of the temporal variability of responsiveness). Data were obtained using 250-, 1,000-, and 5,000-pulse/s stimuli. First-spike latency decreased with increasing spike rate, with relatively small decrements observed for 5,000-pulse/s trains, presumably reflecting integration. ISIs to low-rate (250 pulse/s) trains were strongly locked to the stimuli, whereas ISIs evoked with 5,000-pulse/s trains were dominated by refractory and adaptation effects. Across time, VS decreased for low-rate trains but not for 5,000-pulse/s stimuli. At relatively high spike rates (>200 spike/s), VS values for 5,000-pulse/s trains were lower than those obtained with 250-pulse/s stimuli (even after accounting for the smaller periods of the 5,000-pulse/s stimuli), indicating a desynchronizing effect of high-rate stimuli. FF measures also indicated a desynchronizing effect of high-rate trains. Across a wide range of response rates, FF underwent relatively fast increases (i.e., within 100 ms) for 5,000-pulse/s stimuli. With a few exceptions, ISI, VS, and FF measures approached asymptotic values within the 300-ms duration of the low- and high-rate trains. These findings may have implications for designs of cochlear implant stimulus protocols, understanding electrically evoked compound action potentials, and interpretation of neural measures obtained at central nuclei, which depend on understanding the output of the auditory nerve.
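
    Vector strength (VS) and the Fano factor (FF) used above are standard metrics; a minimal sketch of both, with illustrative spike trains (times in ms):

```python
import numpy as np

def vector_strength(spike_times, period):
    """Vector strength: 1 = perfect phase locking to the period, 0 = none."""
    phases = 2 * np.pi * (np.asarray(spike_times, dtype=float) % period) / period
    return float(np.abs(np.mean(np.exp(1j * phases))))

def fano_factor(spike_counts):
    """Fano factor: variance/mean of repeated spike counts (1 for Poisson)."""
    c = np.asarray(spike_counts, dtype=float)
    return float(c.var() / c.mean())

# One spike per pulse of a 250 pulse/s train (4 ms period):
locked = np.arange(0.0, 300.0, 4.0)
rng = np.random.default_rng(0)
jittered = locked + rng.uniform(0.0, 4.0, locked.size)  # fully desynchronized

print(vector_strength(locked, 4.0))    # 1.0
print(vector_strength(jittered, 4.0))  # near 0
print(fano_factor([8, 12, 10, 9, 11])) # 0.2
```

Note that when comparing VS across pulse rates, as the abstract does, the analysis period must match each stimulus (4 ms at 250 pulse/s, 0.2 ms at 5,000 pulse/s).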

  11. Brainstem auditory evoked responses in man. 1: Effect of stimulus rise-fall time and duration

    NASA Technical Reports Server (NTRS)

    Hecox, K.; Squires, N.; Galambos, R.

    1975-01-01

    Short latency (under 10 msec) responses elicited by bursts of white noise were recorded from the scalps of human subjects. Response alterations produced by changes in the noise burst duration (on-time), inter-burst interval (off-time), and onset and offset shapes were analyzed. The latency of the most prominent response component, wave V, was markedly delayed with increases in stimulus rise time but was unaffected by changes in fall time. Increases in stimulus duration, and therefore in loudness, resulted in a systematic increase in latency. This was probably due to response recovery processes, since the effect was eliminated with increases in stimulus off-time. The amplitude of wave V was insensitive to changes in signal rise and fall times, while increasing signal on-time produced smaller amplitude responses only for sufficiently short off-times. It was concluded that wave V of the human auditory brainstem evoked response is solely an onset response.

  12. Auditory detection of ultrasonic coded transmitters by seals and sea lions.

    PubMed

    Cunningham, Kane A; Hayes, Sean A; Michelle Wargo Rub, A; Reichmuth, Colleen

    2014-04-01

    Ultrasonic coded transmitters (UCTs) are high-frequency acoustic tags that are often used to conduct survivorship studies of vulnerable fish species. Recent observations of differential mortality in tag control studies suggest that fish instrumented with UCTs may be selectively targeted by marine mammal predators, thereby skewing valuable survivorship data. In order to better understand the ability of pinnipeds to detect UCT outputs, behavioral high-frequency hearing thresholds were obtained from a trained harbor seal (Phoca vitulina) and a trained California sea lion (Zalophus californianus). Thresholds were measured for extended (500 ms) and brief (10 ms) 69 kHz narrowband stimuli, as well as for a stimulus recorded directly from a Vemco V16-3H UCT, which consisted of eight 10 ms, 69 kHz pure-tone pulses. Detection thresholds for the harbor seal were as expected based on existing audiometric data for this species, while the California sea lion was much more sensitive than predicted. Given measured detection thresholds of 113 dB re 1 μPa and 124 dB re 1 μPa, respectively, both species are likely able to detect acoustic outputs of the Vemco V16-3H under water from distances exceeding 200 m in typical natural conditions, suggesting that these species are capable of using UCTs to detect free-ranging fish.
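
    A back-of-the-envelope version of the detection-distance argument: under spherical spreading (received level = source level − 20·log10 r, absorption neglected), the maximum detection range has a closed form. The 160 dB re 1 μPa @ 1 m source level below is a hypothetical value for illustration, not the Vemco V16-3H specification:

```python
def detection_range(source_level_db, threshold_db):
    """Range (m) at which the received level drops to the detection
    threshold under spherical spreading, RL = SL - 20*log10(r);
    absorption is neglected for simplicity."""
    return 10.0 ** ((source_level_db - threshold_db) / 20.0)

# Hypothetical 160 dB re 1 uPa @ 1 m source level (NOT the V16-3H spec):
print(round(detection_range(160.0, 113.0)))  # harbor seal threshold: ~224 m
print(round(detection_range(160.0, 124.0)))  # sea lion threshold: ~63 m
```

Real propagation in shallow water deviates from spherical spreading, and absorption at 69 kHz is non-negligible over hundreds of meters, so published range estimates use more complete models.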

  13. Brainstem auditory evoked responses in man. 1: Effect of stimulus rise-fall time and duration

    NASA Technical Reports Server (NTRS)

    Hecox, K.; Squires, N.; Galambos, R.

    1975-01-01

    Short latency (under 10 msec) evoked responses elicited by bursts of white noise were recorded from the scalp of human subjects. Response alterations produced by changes in the noise burst duration (on-time) inter-burst interval (off-time), and onset and offset shapes are reported and evaluated. The latency of the most prominent response component, wave V, was markedly delayed with increases in stimulus rise-time but was unaffected by changes in fall-time. The amplitude of wave V was insensitive to changes in signal rise-and-fall times, while increasing signal on-time produced smaller amplitude responses only for sufficiently short off-times. It is concluded that wave V of the human auditory brainstem evoked response is solely an onset response.

  14. Neural spike-timing patterns vary with sound shape and periodicity in three auditory cortical fields.

    PubMed

    Lee, Christopher M; Osman, Ahmad F; Volgushev, Maxim; Escabí, Monty A; Read, Heather L

    2016-04-01

    Mammals perceive a wide range of temporal cues in natural sounds, and the auditory cortex is essential for their detection and discrimination. The rat primary (A1), ventral (VAF), and caudal suprarhinal (cSRAF) auditory cortical fields have separate thalamocortical pathways that may support unique temporal cue sensitivities. To explore this, we record responses of single neurons in the three fields to variations in envelope shape and modulation frequency of periodic noise sequences. Spike rate, relative synchrony, and first-spike latency metrics have previously been used to quantify neural sensitivities to temporal sound cues; however, such metrics do not measure absolute spike timing of sustained responses to sound shape. To address this, here we quantify two complementary measures of spike-timing precision: jitter and reliability. In all three fields, we find that jitter decreases logarithmically with increasing basis spline (B-spline) cutoff frequency of the sound envelope. In contrast, reliability decreases logarithmically with increasing sound envelope modulation frequency. In A1, jitter and reliability vary independently, whereas in ventral cortical fields, jitter and reliability covary. Jitter time scales increase (A1 < VAF < cSRAF) and modulation frequency upper cutoffs decrease (A1 > VAF > cSRAF) with ventral progression from A1. These results suggest a transition from independent encoding of shape and periodicity sound cues on short time scales in A1 to a joint encoding of these same cues on longer time scales in ventral nonprimary cortices.
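
    Jitter and reliability in this sense (trial-to-trial timing precision and probability of spiking) can be computed along these lines; the first-spike-in-window convention below is an illustrative assumption, not the authors' exact definition:

```python
import numpy as np

def jitter_and_reliability(trial_spike_times, event_time, window):
    """For one stimulus event: jitter = SD of the first spike time inside
    the window across trials; reliability = fraction of trials with at
    least one spike in the window. Times in ms."""
    firsts = []
    for spikes in trial_spike_times:
        s = np.asarray(spikes, dtype=float)
        in_win = s[(s >= event_time) & (s < event_time + window)]
        if in_win.size:
            firsts.append(in_win[0])
    reliability = len(firsts) / len(trial_spike_times)
    jitter = float(np.std(firsts)) if len(firsts) > 1 else float("nan")
    return jitter, reliability

trials = [[10.1, 50.0], [10.3], [9.9, 30.2], []]  # four illustrative trials
j, r = jitter_and_reliability(trials, event_time=8.0, window=5.0)
print(j, r)  # jitter ~0.163 ms, reliability 0.75
```

Under this scheme, jitter degrades as envelope shapes sharpen more slowly, while reliability degrades as events repeat faster, mirroring the dissociation the abstract reports.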

  15. The Visual and Auditory Reaction Time of Adolescents with Respect to Their Academic Achievements

    ERIC Educational Resources Information Center

    Taskin, Cengiz

    2016-01-01

    The aim of this study was to examine the visual and auditory reaction times of adolescents with respect to their academic achievement level. Five hundred adolescent children from Turkey took part: two hundred fifty males (age=15.24±0.78 years; height=168.80±4.89 cm; weight=65.24±4.30 kg) and two hundred fifty females (age=15.28±0.74; height=160.40±5.77 cm; weight=55.32±4.13 kg)…

  16. Auditory time-interval perception as causal inference on sound sources.

    PubMed

    Sawai, Ken-Ichi; Sato, Yoshiyuki; Aihara, Kazuyuki

    2012-01-01

    Perception of a temporal pattern on a sub-second time scale is fundamental to conversation, music perception, and other kinds of sound communication. However, its mechanism is not fully understood. A simple example is hearing three successive sounds with short time intervals. The following misperception of the latter interval is known: underestimation of the latter interval when the former is a little shorter or much longer than the latter, and overestimation of the latter when the former is a little longer or much shorter than the latter. Although this misperception of auditory time intervals for simple stimuli might be a cue to understanding the mechanism of time-interval perception, no model has comprehensively explained it. Given a previous experiment demonstrating that the illusory perception does not occur for stimulus sounds with different frequencies, it is plausible that the underlying mechanism of time-interval perception involves causal inference on sound sources: different frequencies provide cues for different causes. We construct a Bayesian observer model of this time-interval perception, introducing a probabilistic variable representing the causality of the sounds. As prior knowledge, the observer assumes that a single sound source produces periodic and short time intervals, consistent with several previous works. We conducted numerical simulations and confirmed that our model can reproduce the misperception of auditory time intervals. A similar phenomenon has also been reported in the visual and tactile modalities, though over wider time ranges; this still suggests a common mechanism for temporal pattern perception across modalities, because the different time ranges can be interpreted as differences in time resolution, the temporal resolutions of vision and touch being lower than that of audition.
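
    A heavily simplified sketch of the causal-inference idea (not the paper's actual Bayesian observer): the observer weighs a "single periodic source" hypothesis, under which the two intervals share a common period, against an "independent sources" hypothesis, and the estimate of the second interval is pulled toward the first when the two are similar. All noise and prior parameters are made-up illustration values:

```python
import numpy as np

def estimate_second_interval(t1, t2, sigma=20.0, prior_common=0.5):
    """Toy Bayesian observer for two measured intervals t1, t2 (ms).
    C=1: one periodic source (intervals share a common period).
    C=2: independent sources. Gaussian sensory noise with SD `sigma`;
    the reported estimate is the posterior-weighted mixture."""
    # Marginal likelihoods under each causal structure (flat priors on
    # the true intervals; constants that cancel are dropped).
    like_common = np.exp(-((t1 - t2) ** 2) / (4 * sigma ** 2))
    like_indep = 1.0
    post_common = (prior_common * like_common /
                   (prior_common * like_common + (1 - prior_common) * like_indep))
    fused = (t1 + t2) / 2.0          # best estimate if one periodic source
    return post_common * fused + (1 - post_common) * t2

# A somewhat longer second interval is pulled toward the first:
print(estimate_second_interval(100.0, 120.0))  # between 110 and 120
```

When t1 and t2 differ greatly, the independent-cause hypothesis dominates and no contraction occurs, qualitatively matching the frequency-cue result described above.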

  17. Neural time and movement time in choice of whistle or pulse burst responses to different auditory stimuli by dolphins.

    PubMed

    Ridgway, Sam H

    2011-02-01

    Echolocating dolphins emit trains of clicks and receive echoes from ocean targets. They often emit each successive ranging click about 20 ms after arrival of the target echo. In echolocation, decisions must be made about the target--fish or fowl, predator or food. In the first test of dolphin auditory decision speed, three bottlenose dolphins (Tursiops truncatus) chose whistle or pulse burst responses to different auditory stimuli randomly presented without warning in rapid succession under computer control. The animals were trained to hold pressure catheters in the nasal cavity so that pressure increases required for sound production could be used to split response time (RT) into neural time and movement time. Mean RT in the youngest and fastest dolphin ranged from 175 to 213 ms when responding to tones and from 213 to 275 ms responding to pulse trains. The fastest neural times and movement times were around 60 ms. The results suggest that echolocating dolphins tune to a rhythm so that succeeding pulses in a train are produced about 20 ms over target round-trip travel time. The dolphin nervous system has evolved for rapid processing of acoustic stimuli to accommodate for the more rapid sound speed in water compared to air.

  18. Coding for Communication Channels with Dead-Time Constraints

    NASA Technical Reports Server (NTRS)

    Moision, Bruce; Hamkins, Jon

    2004-01-01

    Coding schemes have been designed and investigated specifically for optical and electronic data-communication channels in which information is conveyed via pulse-position modulation (PPM) subject to dead-time constraints. These schemes involve the use of error-correcting codes concatenated with codes denoted constrained codes, decoded by an iterative method. In pulse-position modulation, time is partitioned into frames of M slots of equal duration. Each frame contains one pulsed slot (all others are non-pulsed). For a given channel, the dead-time constraints are defined as a maximum and a minimum on the allowable time between pulses. For example, if a Q-switched laser is used to transmit the pulses, then the minimum allowable dead time is the time needed to recharge the laser for the next pulse. In the case of bits recorded on a magnetic medium, the minimum allowable time between pulses depends on the recording/playback speed and the minimum distance between pulses needed to prevent interference between adjacent bits during readout. The maximum allowable dead time for a given channel is the maximum time for which it is possible to satisfy the requirement to synchronize slots. In mathematical shorthand, the dead-time constraints for a given channel are represented by the pair of integers (d,k), where d is the minimum allowable number of zeroes between ones and k is the maximum allowable number of zeroes between ones. A system of the type to which the present schemes apply is represented by a binary-input, real-valued-output channel model illustrated in the figure. At the transmitting end, information bits are first encoded by use of an error-correcting code, then further encoded by use of a constrained code. Several constrained codes for channels subject to constraints of (d,infinity) have been investigated theoretically and computationally. The baseline codes chosen for purposes of comparison were simple PPM codes characterized by M-slot PPM
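
    The (d,k) dead-time constraint can be checked directly on a binary sequence, and the capacity of a (d,∞)-constrained channel follows from the largest root of its characteristic polynomial; a small sketch:

```python
import numpy as np

def satisfies_dk(bits, d, k):
    """True iff every run of zeros between consecutive ones has a length
    in [d, k] -- the run-length (dead-time) constraint."""
    ones = [i for i, b in enumerate(bits) if b == 1]
    return all(d <= b - a - 1 <= k for a, b in zip(ones, ones[1:]))

def dk_capacity(d):
    """Shannon capacity (bits/slot) of the (d, inf) constraint: log2 of
    the largest root of x**(d+1) - x**d - 1 = 0."""
    roots = np.roots([1, -1] + [0] * (d - 1) + [-1])
    return float(np.log2(max(abs(r) for r in roots)))

print(satisfies_dk([1, 0, 0, 1, 0, 0, 0, 1], 2, 3))  # True: runs of 2 and 3
print(satisfies_dk([1, 1], 1, 3))                    # False: run of 0 < d
print(dk_capacity(1))  # ~0.694 bits/slot for the (1, inf) constraint
```

For d = 1 the largest root is the golden ratio, giving the well-known capacity of about 0.694 bits per slot, the upper bound any (1,∞) constrained code can approach.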

  19. Perceptual consequences of disrupted auditory nerve activity.

    PubMed

    Zeng, Fan-Gang; Kong, Ying-Yee; Michalewski, Henry J; Starr, Arnold

    2005-06-01

    Perceptual consequences of disrupted auditory nerve activity were systematically studied in 21 subjects who had been clinically diagnosed with auditory neuropathy (AN), a recently defined disorder characterized by normal outer hair cell function but disrupted auditory nerve function. Neurological and electrophysiological evidence suggests that disrupted auditory nerve activity is due to desynchronized or reduced neural activity, or both. Psychophysical measures showed that the disrupted neural activity has minimal effects on intensity-related perception, such as loudness discrimination, pitch discrimination at high frequencies, and sound localization using interaural level differences. In contrast, the disrupted neural activity significantly impairs timing-related perception, such as pitch discrimination at low frequencies, temporal integration, gap detection, temporal modulation detection, backward and forward masking, signal detection in noise, binaural beats, and sound localization using interaural time differences. These perceptual consequences are the opposite of what is typically observed in cochlear-impaired subjects, who have impaired intensity perception but relatively normal temporal processing once their impaired intensity perception is taken into account. These differences in perceptual consequences between auditory neuropathy and cochlear damage suggest the use of different neural codes in auditory perception: a suboptimal spike count code for intensity processing, a synchronized spike code for temporal processing, and a duplex code for frequency processing. We also propose two underlying physiological models, based on desynchronized and reduced discharge in the auditory nerve, that successfully account for the observed neurological and behavioral data. These methods and measures cannot differentiate between the two AN models, but future studies using electric stimulation of the auditory nerve via a cochlear implant might. These results not only show the unique

  20. Event-related EEG time-frequency PCA and the orienting reflex to auditory stimuli.

    PubMed

    Barry, Robert J; De Blasio, Frances M; Bernat, Edward M; Steiner, Genevieve Z

    2015-04-01

    We recently reported an auditory habituation series with counterbalanced indifferent and significant (counting) instructions. Time-frequency (t-f) analysis of electrooculogram-corrected EEG was used to explore event-related synchronization (ERS)/desynchronization (ERD) in four EEG bands using arbitrarily selected time epochs and traditional frequency ranges. ERS in delta, theta, and alpha, and subsequent ERD in theta, alpha, and beta, showed substantial decrement over trials, yet effects of stimulus significance (count vs. no-task) were minimal. Here, we used principal components analysis (PCA) of the t-f data to investigate the natural frequency and time combinations involved in such stimulus processing. We identified four ERS and four ERD t-f components: six showed decrement over trials, four showed count > no-task effects, and six showed Significance × Trial interactions. This increased sensitivity argues for the wider use of our data-driven t-f PCA approach.
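
    The t-f PCA approach (decomposing single-trial time-frequency maps into data-driven time-frequency components rather than fixed bands and epochs) can be sketched with a plain SVD; the data below are random stand-ins for real EEG t-f power:

```python
import numpy as np

# PCA of single-trial time-frequency matrices: each trial's t-f power map
# is flattened into a vector; SVD of the centered data yields components
# whose loadings are themselves time-frequency patterns.
rng = np.random.default_rng(1)
n_trials, n_freqs, n_times = 60, 30, 100
X = rng.normal(size=(n_trials, n_freqs * n_times))  # stand-in for real t-f data

Xc = X - X.mean(axis=0)                   # center across trials
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S ** 2 / np.sum(S ** 2)       # variance explained per component
components = Vt.reshape(-1, n_freqs, n_times)  # t-f loading maps
scores = U * S                            # per-trial component amplitudes

print(components.shape, explained[:3])
```

With real data, each loading map localizes a natural frequency-by-time pattern (e.g., an early theta ERS or a later alpha ERD), and the per-trial scores are what decrement-over-trials and significance effects are tested on.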

  1. Effect of red bull energy drink on auditory reaction time and maximal voluntary contraction.

    PubMed

    Goel, Vartika; Manjunatha, S; Pai, Kirtana M

    2014-01-01

    The use of "Energy Drinks" (ED) is increasing in India. Students especially use these drinks to rejuvenate after strenuous exercise or as a stimulant during exam times. The most common ingredient in EDs is caffeine, and a popular ED commonly used is Red Bull, containing 80 mg of caffeine in a 250 ml bottle. The primary aim of this study was to investigate the effects of Red Bull energy drink on auditory reaction time and maximal voluntary contraction. A homogeneous group of twenty medical students (10 males, 10 females) participated in a crossover study in which they were randomized to supplement with Red Bull (2 mg/kg body weight of caffeine) or an isoenergetic, isovolumetric, non-caffeinated control drink (a combination of Appy Fizz, cranberry juice, and soda) separated by 7 days. Maximal voluntary contraction (MVC) was recorded as the highest of 3 values of maximal isometric force generated by the dominant hand using a hand grip dynamometer (Biopac Systems). Auditory reaction time (ART) was the average of 10 values of the time interval between a click sound and the response, made by pressing a hand-held push button (Biopac Systems). One hour after consumption, both the energy and control drinks significantly reduced ART in males (ED 232 ± 59 vs. 204 ± 34 ms and Control 223 ± 57 vs. 210 ± 51 ms; p < 0.05) as well as in females (ED 227 ± 56 vs. 214 ± 48 ms and Control 224 ± 45 vs. 215 ± 36 ms; p < 0.05) but had no effect on MVC in either sex (males: ED 381 ± 37 vs. 371 ± 36 and Control 375 ± 61 vs. 363 ± 36 N; females: ED 227 ± 23 vs. 227 ± 32 and Control 234 ± 46 vs. 228 ± 37 N). When compared across the gender groups, there was no significant difference between males and females in the effects of any of the drinks on ART, but there was an overall significantly lower MVC in females compared to males. Both the energy drink and the control drink significantly improve the reaction time but may not have any effect

  2. The GOES Time Code Service, 1974–2004: A Retrospective

    PubMed Central

    Lombardi, Michael A.; Hanson, D. Wayne

    2005-01-01

    NIST ended its Geostationary Operational Environmental Satellites (GOES) time code service at 0 hours, 0 minutes Coordinated Universal Time (UTC) on January 1, 2005. To commemorate the end of this historically significant service, this article provides a retrospective look at the GOES service and the important role it played in the history of satellite timekeeping. PMID:27308105

  3. BAASTA: Battery for the Assessment of Auditory Sensorimotor and Timing Abilities.

    PubMed

    Dalla Bella, Simone; Farrugia, Nicolas; Benoit, Charles-Etienne; Begel, Valentin; Verga, Laura; Harding, Eleanor; Kotz, Sonja A

    2016-07-21

    The Battery for the Assessment of Auditory Sensorimotor and Timing Abilities (BAASTA) is a new tool for the systematic assessment of perceptual and sensorimotor timing skills. It spans a broad range of timing skills aimed at differentiating individual timing profiles. BAASTA consists of sensitive time perception and production tasks. Perceptual tasks include duration discrimination, anisochrony detection (with tones and music), and a version of the Beat Alignment Task. Perceptual thresholds for duration discrimination and anisochrony detection are estimated with a maximum likelihood procedure (MLP) algorithm. Production tasks use finger tapping and include unpaced and paced tapping (with tones and music), synchronization-continuation, and adaptive tapping to a sequence with a tempo change. BAASTA was tested in a proof-of-concept study with 20 non-musicians (Experiment 1). To validate the results of the MLP procedure, which is less widespread than standard staircase methods, three perceptual tasks of the battery (duration discrimination and anisochrony detection with tones and with music) were further tested in a second group of non-musicians using 2-down/1-up and 3-down/1-up staircase paradigms (n = 24) (Experiment 2). The results show that the timing profiles provided by BAASTA allow the detection of timing/rhythm disorders. In addition, perceptual thresholds yielded by the MLP algorithm, although generally comparable to those provided by the standard staircases, tend to be slightly lower. In sum, BAASTA provides a comprehensive battery for testing perceptual and sensorimotor timing skills and detecting timing/rhythm deficits.
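
    For comparison with the MLP procedure, a 2-down/1-up staircase of the kind used in Experiment 2 can be sketched as follows; the step size, reversal count, and simulated listener are illustrative assumptions:

```python
import numpy as np

def staircase_2down1up(respond, start, step, n_reversals=8):
    """2-down/1-up adaptive staircase: the level decreases after two
    consecutive correct responses and increases after each error,
    converging on the 70.7%-correct point of the psychometric function.
    `respond(level)` returns True for a correct trial."""
    level, correct_run, direction = start, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if respond(level):
            correct_run += 1
            if correct_run < 2:
                continue          # need two in a row before stepping down
            correct_run, new_dir = 0, -1
        else:
            correct_run, new_dir = 0, +1
        if direction and new_dir != direction:
            reversals.append(level)     # direction change = reversal
        direction = new_dir
        level = max(level + new_dir * step, step)
    return float(np.mean(reversals))    # threshold estimate

# Simulated listener with a logistic psychometric function centered at 40
# (arbitrary units); all numbers here are illustrative.
rng = np.random.default_rng(2)
listener = lambda level: rng.random() < 1.0 / (1.0 + np.exp(-(level - 40.0) / 5.0))
print(staircase_2down1up(listener, start=80, step=4))
```

A 3-down/1-up variant (tracking 79.4% correct) differs only in requiring three consecutive correct responses before stepping down.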

  4. Discrimination of time intervals presented in sequences: spatial effects with multiple auditory sources.

    PubMed

    Grondin, Simon; Plourde, Marilyn

    2007-10-01

    This article discusses two experiments on the discrimination of time intervals presented in sequences marked by brief auditory signals. Participants had to indicate whether the last interval in a series of three intervals marked by four auditory signals was shorter or longer than the previous intervals. Three base durations were investigated: 75, 150, and 225 ms. In Experiment 1, sounds were presented through headphones, from a single speaker in front of the participants, or from four equally spaced speakers. In all three presentation modes, the highest difference threshold was obtained in the lowest base duration condition (75 ms), indicating an impairment of temporal processing when sounds are presented too rapidly. The results also indicate the presence, in each presentation mode, of a 'time-shrinking effect' (i.e., the last interval being perceived as briefer than the preceding ones) at 75 ms, but not at 225 ms. Lastly, using different sound sources to mark time did not significantly impair discrimination. In Experiment 2, three signals were presented from the same source, and the last signal was presented at one of two locations, either close or far. The perceived duration was not influenced by the location of the fourth signal when the participant knew before each trial where the sounds would be delivered. However, when the participant was uncertain about its location, more space between markers resulted in longer perceived duration, a finding that applied only at 150 and 225 ms. Moreover, the perceived duration was affected by the direction of the sequences (left-right vs. right-left).

  5. Using LDPC Code Constraints to Aid Recovery of Symbol Timing

    NASA Technical Reports Server (NTRS)

    Jones, Christopher; Villasenor, John; Lee, Dong-U; Vales, Esteban

    2008-01-01

    A method of utilizing information available in the constraints imposed by a low-density parity-check (LDPC) code has been proposed as a means of aiding the recovery of symbol timing in the reception of a binary-phase-shift-keying (BPSK) signal representing such a code in the presence of noise, timing error, and/or Doppler shift between the transmitter and the receiver. This method and the receiver architecture in which it would be implemented belong to a class of timing-recovery methods and corresponding receiver architectures characterized as pilotless in that they do not require transmission and reception of pilot signals. Acquisition and tracking of a signal of the type described above have traditionally been performed upstream of, and independently of, decoding and have typically involved utilization of a phase-locked loop (PLL). However, the LDPC decoding process, which is iterative, provides information that can be fed back to the timing-recovery receiver circuits to improve performance significantly over that attainable in the absence of such feedback. Prior methods of coupling LDPC decoding with timing recovery had focused on the use of output code words produced as the iterations progress. In contrast, in the present method, one exploits the information available from the metrics computed for the constraint nodes of an LDPC code during the decoding process. In addition, the method involves the use of a waveform model that captures, better than do the waveform models of the prior methods, distortions introduced by receiver timing errors and transmitter/ receiver motions. An LDPC code is commonly represented by use of a bipartite graph containing two sets of nodes. In the graph corresponding to an (n,k) code, the n variable nodes correspond to the code word symbols and the n-k constraint nodes represent the constraints that the code places on the variable nodes in order for them to form a valid code word. 
The decoding procedure involves iterative computation
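
    The constraint-node structure described above is easy to state in code. Below is a minimal sketch, assuming a small hypothetical parity-check matrix (not the code from the article): each row of H is one constraint node, and a word is a valid code word only if every row's parity check is satisfied. An iterative decoder computes metrics from exactly these per-constraint checks (or soft versions of them), which is the information the timing-recovery method feeds back.

```python
import numpy as np

# Hypothetical parity-check matrix H for a toy (7,4) code: 3 constraint
# nodes (rows) over 7 variable nodes (columns). Illustrative only.
H = np.array([
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
])

def constraint_checks(word):
    """Per constraint node, is the parity check satisfied?
    A word is a valid code word iff H @ word == 0 (mod 2)."""
    return (H @ np.asarray(word)) % 2 == 0

zero = np.zeros(7, dtype=int)            # the all-zero word is always valid
flipped = zero.copy()
flipped[3] = 1                           # a single symbol error

print(constraint_checks(zero).all())     # True
print(constraint_checks(flipped).all())  # False: column 3 touches all rows
```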

  6. Weighted adaptively grouped multilevel space time trellis codes

    NASA Astrophysics Data System (ADS)

    Jain, Dharmvir; Sharma, Sanjay

    2015-05-01

    In existing grouped multilevel space-time trellis codes (GMLSTTCs), the groups of transmit antennas are predefined, and the transmit power is equally distributed across all transmit antennas. When the channel parameters are perfectly known at the transmitter, an adaptive antenna grouping and beamforming scheme can achieve better performance by optimally grouping the transmit antennas and properly weighting the transmitted signals based on the available channel information. In this paper, we present a new code designed by combining GMLSTTCs, adaptive antenna grouping and beamforming using the channel state information at the transmitter (CSIT), henceforth referred to as weighted adaptively grouped multilevel space time trellis codes (WAGMLSTTCs). The CSIT is used to adaptively group the transmitting antennas and provide a beamforming scheme by allocating different powers to the transmit antennas. Simulation results show that WAGMLSTTCs provide an improvement in error performance of 2.6 dB over GMLSTTCs.

  7. Auditory Time-Frequency Masking for Spectrally and Temporally Maximally-Compact Stimuli

    PubMed Central

    Laback, Bernhard; Savel, Sophie; Ystad, Sølvi; Balazs, Peter; Meunier, Sabine; Kronland-Martinet, Richard

    2016-01-01

    Many audio applications perform perception-based time-frequency (TF) analysis by decomposing sounds into a set of functions with good TF localization (i.e. with a small essential support in the TF domain) using TF transforms and applying psychoacoustic models of auditory masking to the transform coefficients. To accurately predict masking interactions between coefficients, the TF properties of the model should match those of the transform. This involves having masking data for stimuli with good TF localization. However, little is known about TF masking for mathematically well-localized signals. Most existing masking studies used stimuli that are broad in time and/or frequency and few studies involved TF conditions. Consequently, the present study had two goals. The first was to collect TF masking data for well-localized stimuli in humans. Masker and target were 10-ms Gaussian-shaped sinusoids with a bandwidth of approximately one critical band. The overall pattern of results is qualitatively similar to existing data for long maskers. To facilitate implementation in audio processing algorithms, a dataset provides the measured TF masking function. The second goal was to assess the potential effect of auditory efferents on TF masking using a modeling approach. The temporal window model of masking was used to predict present and existing data in two configurations: (1) with standard model parameters (i.e. without efferents), (2) with cochlear gain reduction to simulate the activation of efferents. The ability of the model to predict the present data was quite good with the standard configuration but highly degraded with gain reduction. Conversely, the ability of the model to predict existing data for long maskers was better with than without gain reduction. Overall, the model predictions suggest that TF masking can be affected by efferent (or other) effects that reduce cochlear gain. Such effects were avoided in the experiment of this study by using maximally
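
    As a concrete illustration of such maximally compact stimuli, the sketch below generates a Gaussian-windowed sinusoid (a Gabor-like atom). The carrier frequency, window width, and sample rate here are illustrative choices, not the parameters used in the study.

```python
import numpy as np

def gaussian_tone(f0=4000.0, dur=0.010, fs=44100.0):
    """A Gaussian-windowed sinusoid: a stimulus with small essential
    support in both time and frequency. Parameters are illustrative."""
    t = np.arange(int(dur * fs)) / fs
    t0 = dur / 2.0                 # center of the Gaussian window
    sigma = dur / 6.0              # window width (assumed, not from the study)
    env = np.exp(-0.5 * ((t - t0) / sigma) ** 2)
    return env * np.sin(2 * np.pi * f0 * t)

s = gaussian_tone()
print(len(s))  # 441 samples (10 ms at 44.1 kHz)
```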

  8. Coded cause of death and timing of COPD diagnosis.

    PubMed

    Pickard, A Simon; Jung, Eunmi; Bartle, Brian; Weiss, Kevin B; Lee, Todd A

    2009-02-01

    The aims of this study were to characterize causes of death among veterans with COPD using multiple cause of death coding, and to examine whether causes of death differed according to timing of COPD diagnosis. Veterans with COPD who died during a five-year follow-up period were identified from national VA databases linked to National Death Index files. Primary, secondary, underlying, and all-coded causes of death were compared between recent and preexistent COPD cohorts using proportional mortality ratios (PMRs), which compare the proportion dying from specific causes rather than the absolute risk of death. Of 26,357 decedents, 7,729 were categorized as preexistent and 18,628 as recent COPD cases. Unspecified COPD was listed as the underlying cause of death in a significantly greater proportion of preexistent COPD cases compared to recent cases, 20% vs 10%, PMR = 2.0 (95% CI: 1.9-2.1). A relatively higher proportion of recently diagnosed cases died from lung/bronchus, prostate, and site-unspecified cancers. Respiratory failure (J969) was rarely coded as an underlying or primary cause (< 1%), but was a second-code cause of death in 9% of recent and 12% of preexistent cases. Differences in coded causes of death between patients with a recent diagnosis of COPD and those with a preexistent diagnosis suggest either coding-related bias or true differences in cause of death related to length of time with the diagnosis. Thus, the methods used to identify cohorts of COPD patients, i.e., incidence- versus prevalence-based approaches, and the coded cause of death can affect estimates of cause-specific mortality.
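
    As a reminder of how a PMR is computed: it is the proportion of deaths from a given cause in one cohort divided by the same proportion in a reference cohort. A minimal sketch, using made-up counts chosen only to reproduce the 20% vs. 10% proportions quoted above:

```python
def pmr(cause_a, total_a, cause_b, total_b):
    """Proportional mortality ratio: the proportion of deaths from a given
    cause in cohort A divided by the same proportion in cohort B."""
    return (cause_a / total_a) / (cause_b / total_b)

# Cause-specific counts below are hypothetical, chosen to match the
# quoted 20% (preexistent) vs. 10% (recent) proportions.
print(round(pmr(1546, 7729, 1863, 18628), 2))  # 2.0
```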

  9. Auditory and Visual Differences in Time Perception? An Investigation from a Developmental Perspective with Neuropsychological Tests

    ERIC Educational Resources Information Center

    Zelanti, Pierre S.; Droit-Volet, Sylvie

    2012-01-01

    Adults and children (5- and 8-year-olds) performed a temporal bisection task with either auditory or visual signals and either a short (0.5-1.0s) or long (4.0-8.0s) duration range. Their working memory and attentional capacities were assessed by a series of neuropsychological tests administered in both the auditory and visual modalities. Results…

  10. Effects of location and timing of co-activated neurons in the auditory midbrain on cortical activity: implications for a new central auditory prosthesis

    NASA Astrophysics Data System (ADS)

    Straka, Małgorzata M.; McMahon, Melissa; Markovitz, Craig D.; Lim, Hubert H.

    2014-08-01

    Objective. An increasing number of deaf individuals are being implanted with central auditory prostheses, but their performance has generally been poorer than for cochlear implant users. The goal of this study is to investigate stimulation strategies for improving hearing performance with a new auditory midbrain implant (AMI). Previous studies have shown that repeated electrical stimulation of a single site in each isofrequency lamina of the central nucleus of the inferior colliculus (ICC) causes strong suppressive effects in elicited responses within the primary auditory cortex (A1). Here we investigate if improved cortical activity can be achieved by co-activating neurons with different timing and locations across an ICC lamina and if this cortical activity varies across A1. Approach. We electrically stimulated two sites at different locations across an isofrequency ICC lamina using varying delays in ketamine-anesthetized guinea pigs. We recorded and analyzed spike activity and local field potentials across different layers and locations of A1. Results. Co-activating two sites within an isofrequency lamina with short inter-pulse intervals (<5 ms) could elicit cortical activity that is enhanced beyond a linear summation of activity elicited by the individual sites. A significantly greater extent of normalized cortical activity was observed for stimulation of the rostral-lateral region of an ICC lamina compared to the caudal-medial region. We did not identify any location trends across A1, but the most cortical enhancement was observed in supragranular layers, suggesting further integration of the stimuli through the cortical layers. Significance. The topographic organization identified by this study provides further evidence for the presence of functional zones across an ICC lamina with locations consistent with those identified by previous studies. 
Clinically, these results suggest that co-activating different neural populations in the rostral-lateral ICC rather

  11. Time Shifted PN Codes for CW Lidar, Radar, and Sonar

    NASA Technical Reports Server (NTRS)

    Campbell, Joel F. (Inventor); Prasad, Narasimha S. (Inventor); Harrison, Fenton W. (Inventor); Flood, Michael A. (Inventor)

    2013-01-01

    A continuous wave Light Detection and Ranging (CW LiDAR) system utilizes two or more laser frequencies and time or range shifted pseudorandom noise (PN) codes to discriminate between the laser frequencies. The performance of these codes can be improved by subtracting out the bias before processing. The CW LiDAR system may be mounted to an artificial satellite orbiting the earth, and the relative strength of the return signal for each frequency can be utilized to determine the concentration of selected gases or other substances in the atmosphere.
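
    The role of the time-shifted PN code and of the bias subtraction can be sketched as follows. The code length, delay, and bias value here are illustrative, not system parameters from the patent: the correlation peak against shifted copies of the code recovers the delay, and removing the constant bias first keeps the correlation floor clean.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1024
pn = rng.choice([-1.0, 1.0], size=N)   # pseudorandom +/-1 code (illustrative)

def circ_corr(x, ref):
    """Circular cross-correlation of x against every shift of ref."""
    return np.array([np.dot(x, np.roll(ref, k)) for k in range(len(x))]) / len(x)

# Received signal: the code delayed by the round-trip time (in chips)
# plus a constant bias; the bias is subtracted before processing.
delay = 37
rx = np.roll(pn, delay) + 0.3
rx = rx - rx.mean()

peak = int(np.argmax(circ_corr(rx, pn)))
print(peak)  # 37: the correlation peak recovers the code delay
```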

  12. Independent or integrated processing of interaural time and level differences in human auditory cortex?

    PubMed

    Altmann, Christian F; Terada, Satoshi; Kashino, Makio; Goto, Kazuhiro; Mima, Tatsuya; Fukuyama, Hidenao; Furukawa, Shigeto

    2014-06-01

    Sound localization in the horizontal plane is mainly determined by interaural time differences (ITD) and interaural level differences (ILD). Both cues result in an estimate of sound source location, and in many real-life situations the two cues are roughly congruent. When stimulating listeners with headphones it is possible to counterbalance the two cues, so-called ITD/ILD trading. This phenomenon speaks for integrated ITD/ILD processing at the behavioral level. However, it is unclear at what stages of the auditory processing stream ITD and ILD cues are integrated to provide a unified percept of sound lateralization. Therefore, we set out to test with human electroencephalography for integrated versus independent ITD/ILD processing at the level of preattentive cortical processing by measuring the mismatch negativity (MMN) to changes in sound lateralization. We presented a series of diotic standards (perceived at a midline position) that were interrupted by deviants that entailed either a change in a) ITD only, b) ILD only, c) congruent ITD and ILD, or d) counterbalanced ITD/ILD (ITD/ILD trading). The sound stimuli were either i) pure tones with a frequency of 500 Hz, or ii) amplitude-modulated tones with a carrier frequency of 4000 Hz and a modulation frequency of 125 Hz. We observed significant MMN for the ITD/ILD traded deviants for the 500 Hz pure tones and for the 4000 Hz amplitude-modulated tone. This speaks for independent processing of ITD and ILD at the level of the MMN within auditory cortex. However, the combined ITD/ILD cues elicited smaller MMN than the sum of the MMN induced in response to ITD and ILD cues presented in isolation for 500 Hz, but not 4000 Hz, suggesting independent processing for the higher frequency only. Thus, the two markers for independent processing - additivity and cue-conflict - resulted in contradictory conclusions, with a dissociation between the lower (500 Hz) and higher (4000 Hz) frequency bands.

  13. On the Application of Time-Reversed Space-Time Block Code to Aeronautical Telemetry

    DTIC Science & Technology

    2014-06-01

    Keying (SOQPSK), bit error rate (BER), Orthogonal Frequency Division Multiplexing (OFDM), Generalized time-reversed space-time block codes (GTR-STBC) ...Alamouti code [4]) is optimum [2]. Although OFDM is generally applied on a per subcarrier basis in frequency selective fading, it is not a viable

  14. A Neural Mechanism for Time-Window Separation Resolves Ambiguity of Adaptive Coding

    PubMed Central

    Hildebrandt, K. Jannis; Ronacher, Bernhard; Hennig, R. Matthias; Benda, Jan

    2015-01-01

    The senses of animals are confronted with changing environments and different contexts. Neural adaptation is one important tool to adjust sensitivity to varying intensity ranges. For instance, in a quiet night outdoors, our hearing is more sensitive than when we are confronted with the plurality of sounds in a large city during the day. However, adaptation also removes available information on absolute sound levels and may thus cause ambiguity. Experimental data on the trade-off between benefits and loss through adaptation are scarce, and very few mechanisms have been proposed to resolve it. We present an example where adaptation is beneficial for one task—namely, the reliable encoding of the pattern of an acoustic signal—but detrimental for another—the localization of the same acoustic stimulus. With a combination of neurophysiological data, modeling, and behavioral tests, we show that adaptation in the periphery of the auditory pathway of grasshoppers enables intensity-invariant coding of amplitude modulations, but at the same time, degrades information available for sound localization. We demonstrate how focusing the response of localization neurons to the onset of relevant signals separates processing of localization and pattern information temporally. In this way, the ambiguity of adaptive coding can be circumvented and both absolute and relative levels can be processed using the same set of peripheral neurons. PMID:25761097

  15. Secular Slowing of Auditory Simple Reaction Time in Sweden (1959–1985)

    PubMed Central

    Madison, Guy; Woodley of Menie, Michael A.; Sänger, Justus

    2016-01-01

    There are indications that simple reaction time might have slowed in Western populations, based on both cohort- and multi-study comparisons. A possible limitation of the latter method in particular is measurement error stemming from methods variance, which results from the fact that instruments and experimental conditions change over time and between studies. We therefore set out to measure the simple auditory reaction time (SRT) of 7,081 individuals (2,997 males and 4,084 females) born in Sweden 1959–1985 (subjects were aged between 27 and 54 years at time of measurement). Depending on age cut-offs and adjustment for aging-related slowing of SRT, the data indicate that SRT has increased by between 3 and 16 ms in the 27 birth years covered in the present sample. This slowing is unlikely to be explained by attrition, which was evaluated by comparing the general intelligence × birth-year interactions and standard deviations for both male participants and dropouts, utilizing military conscript cognitive ability data. The present result is consistent with previous studies employing alternative methods, and may indicate the operation of several synergistic factors, such as recent micro-evolutionary trends favoring lower g in Sweden and the effects of industrially produced neurotoxic substances on peripheral nerve conduction velocity. PMID:27588000

  16. Inhibitory and Excitatory Spike-Timing-Dependent Plasticity in the Auditory Cortex

    PubMed Central

    D'amour, James A.; Froemke, Robert C.

    2015-01-01

    Summary Synapses are plastic and can be modified by changes of spike timing. While most studies of long-term synaptic plasticity focus on excitation, inhibitory plasticity may be critical for controlling information processing, memory storage, and overall excitability in neural circuits. Here we examine spike-timing-dependent plasticity (STDP) of inhibitory synapses onto layer 5 neurons in slices of mouse auditory cortex, together with concomitant STDP of excitatory synapses. Pairing pre- and postsynaptic spikes potentiated inhibitory inputs irrespective of precise temporal order within ~10 msec. This was in contrast to excitatory inputs, which displayed an asymmetrical STDP time window. These combined synaptic modifications both required NMDA receptor activation, and adjusted the excitatory-inhibitory ratio of events paired together with postsynaptic spiking. Finally, subthreshold events became suprathreshold, and the time window between excitation and inhibition became more precise. These findings demonstrate that cortical inhibitory plasticity requires interactions with co-activated excitatory synapses to properly regulate excitatory-inhibitory balance. PMID:25843405

  17. Neural Basis of the Time Window for Subjective Motor-Auditory Integration

    PubMed Central

    Toida, Koichi; Ueno, Kanako; Shimada, Sotaro

    2016-01-01

    Temporal contiguity between an action and corresponding auditory feedback is crucial to the perception of self-generated sound. However, the neural mechanisms underlying motor–auditory temporal integration are unclear. Here, we conducted four experiments with an oddball paradigm to examine the specific event-related potentials (ERPs) elicited by delayed auditory feedback for a self-generated action. The first experiment confirmed that a pitch-deviant auditory stimulus elicits mismatch negativity (MMN) and P300, both when it is generated passively and by the participant’s action. In our second and third experiments, we investigated the ERP components elicited by delayed auditory feedback for a self-generated action. We found that delayed auditory feedback elicited an enhancement of P2 (enhanced-P2) and a N300 component, which were apparently different from the MMN and P300 components observed in the first experiment. We further investigated the sensitivity of the enhanced-P2 and N300 to delay length in our fourth experiment. Strikingly, the amplitude of the N300 increased as a function of the delay length. Additionally, the N300 amplitude was significantly correlated with the conscious detection of the delay (the 50% detection point was around 200 ms), and hence reduction in the feeling of authorship of the sound (the sense of agency). In contrast, the enhanced-P2 was most prominent in short-delay (≤200 ms) conditions and diminished in long-delay conditions. Our results suggest that different neural mechanisms are employed for the processing of temporally deviant and pitch-deviant auditory feedback. Additionally, the temporal window for subjective motor–auditory integration is likely about 200 ms, as indicated by these auditory ERP components. PMID:26779000

  18. Effects of sensorineural hearing loss on temporal coding of harmonic and inharmonic tone complexes in the auditory nerve.

    PubMed

    Kale, Sushrut; Micheyl, Christophe; Heinz, Michael G

    2013-01-01

    Listeners with sensorineural hearing loss (SNHL) often show poorer thresholds for fundamental-frequency (F0) discrimination, and poorer discrimination between harmonic and frequency-shifted (inharmonic) complex tones, than normal-hearing (NH) listeners, especially when these tones contain resolved or partially resolved components. It has been suggested that these perceptual deficits reflect reduced access to temporal-fine-structure (TFS) information and could be due to degraded phase locking in the auditory nerve (AN) with SNHL. In the present study, TFS and temporal-envelope (ENV) cues in single AN-fiber responses to band-pass-filtered harmonic and inharmonic complex tones were measured in chinchillas with either normal hearing or noise-induced SNHL. The stimuli were comparable to those used in recent psychophysical studies of F0 and harmonic/inharmonic discrimination. As in those studies, the rank of the center component was manipulated to produce different resolvability conditions, different phase relationships (cosine and random phase) were tested, and background noise was present. Neural TFS and ENV cues were quantified using cross-correlation coefficients computed using shuffled cross correlograms between neural responses to REF (harmonic) and TEST (F0- or frequency-shifted) stimuli. In animals with SNHL, AN-fiber tuning curves showed elevated thresholds, broadened tuning, best-frequency shifts, and downward shifts in the dominant TFS response component; however, no significant degradation in the ability of AN fibers to encode TFS or ENV cues was found. Consistent with optimal-observer analyses, the results indicate that TFS and ENV cues depended only on the relevant frequency shift in Hz and thus were not degraded because phase locking remained intact. These results suggest that perceptual "TFS-processing" deficits do not simply reflect degraded phase locking at the level of the AN.
To the extent that performance in F0- and harmonic/inharmonic discrimination

  19. Auditory Distance Coding in Rabbit Midbrain Neurons and Human Perception: Monaural Amplitude Modulation Depth as a Cue

    PubMed Central

    Zahorik, Pavel; Carney, Laurel H.; Bishop, Brian B.; Kuwada, Shigeyuki

    2015-01-01

    Mechanisms underlying sound source distance localization are not well understood. Here we tested the hypothesis that a novel mechanism can create monaural distance sensitivity: a combination of auditory midbrain neurons' sensitivity to amplitude modulation (AM) depth and distance-dependent loss of AM in reverberation. We used virtual auditory space (VAS) methods for sounds at various distances in anechoic and reverberant environments. Stimulus level was constant across distance. With increasing modulation depth, some rabbit inferior colliculus neurons increased firing rates whereas others decreased. These neurons exhibited monotonic relationships between firing rates and distance for monaurally presented noise when two conditions were met: (1) the sound had AM, and (2) the environment was reverberant. The firing rates as a function of distance remained approximately constant without AM in either environment and, in an anechoic condition, even with AM. We corroborated this finding by reproducing the distance sensitivity using a neural model. We also conducted a human psychophysical study using similar methods. Normal-hearing listeners reported perceived distance in response to monaural 1 octave 4 kHz noise source sounds presented at distances of 35–200 cm. We found parallels between the rabbit neural and human responses. In both, sound distance could be discriminated only if the monaural sound in reverberation had AM. These observations support the hypothesis. When other cues are available (e.g., in binaural hearing), how much the auditory system actually uses the AM as a distance cue remains to be determined. PMID:25834060

  20. Static Enforcement of Timing Policies Using Code Certification

    DTIC Science & Technology

    2006-08-07

    ...each its due in space and time. —Guy L. Steele Jr. [65] Computers are useful precisely because they can be programmed. The success of programming...pattern for defining the patterns that programmers can use for their real work and their main goal. —Guy Steele [65] The code certification machinery

  1. Code-Time Diversity for Direct Sequence Spread Spectrum Systems

    PubMed Central

    Hassan, A. Y.

    2014-01-01

    Time diversity is achieved in direct sequence spread spectrum by receiving different faded delayed copies of the transmitted symbols from different uncorrelated channel paths when the transmission signal bandwidth is greater than the coherence bandwidth of the channel. In this paper, a new time diversity scheme is proposed for spread spectrum systems, called code-time diversity. In this new scheme, N spreading codes are used to transmit one data symbol over N successive symbol intervals. The diversity order of the proposed scheme equals the number of spreading codes N multiplied by the number of uncorrelated channel paths L. The paper presents the transmitted signal model. Two demodulator structures are proposed based on the received-signal models for Rayleigh flat and frequency-selective fading channels. The probability of error of the proposed diversity scheme is also calculated for the same two fading channels. Finally, simulation results are presented and compared with those of the maximal ratio combiner (MRC) and multiple-input multiple-output (MIMO) systems. PMID:24982925

  2. A scalable population code for time in the striatum.

    PubMed

    Mello, Gustavo B M; Soares, Sofia; Paton, Joseph J

    2015-05-04

    To guide behavior and learn from its consequences, the brain must represent time over many scales. Yet, the neural signals used to encode time in the seconds-to-minute range are not known. The striatum is a major input area of the basal ganglia associated with learning and motor function. Previous studies have also shown that the striatum is necessary for normal timing behavior. To address how striatal signals might be involved in timing, we recorded from striatal neurons in rats performing an interval timing task. We found that neurons fired at delays spanning tens of seconds and that this pattern of responding reflected the interaction between time and the animals' ongoing sensorimotor state. Surprisingly, cells rescaled responses in time when intervals changed, indicating that striatal populations encoded relative time. Moreover, time estimates decoded from activity predicted timing behavior as animals adjusted to new intervals, and disrupting striatal function led to a decrease in timing performance. These results suggest that striatal activity forms a scalable population code for time, providing timing signals that animals use to guide their actions.

  3. Predicting spike timing in highly synchronous auditory neurons at different sound levels

    PubMed Central

    Fontaine, Bertrand; Benichoux, Victor; Joris, Philip X.

    2013-01-01

    A challenge for sensory systems is to encode natural signals that vary in amplitude by orders of magnitude. The spike trains of neurons in the auditory system must represent the fine temporal structure of sounds despite a tremendous variation in sound level in natural environments. It has been shown in vitro that the transformation from dynamic signals into precise spike trains can be accurately captured by simple integrate-and-fire models. In this work, we show that the in vivo responses of cochlear nucleus bushy cells to sounds across a wide range of levels can be precisely predicted by deterministic integrate-and-fire models with adaptive spike threshold. Our model can predict both the spike timings and the firing rate in response to novel sounds, across a large input level range. A noisy version of the model accounts for the statistical structure of spike trains, including the reliability and temporal precision of responses. Spike threshold adaptation was critical to ensure that predictions remain accurate at different levels. These results confirm that simple integrate-and-fire models provide an accurate phenomenological account of spike train statistics and emphasize the functional relevance of spike threshold adaptation. PMID:23864375
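
    A deterministic integrate-and-fire model with an adaptive spike threshold, of the general kind described above, can be sketched in a few lines. All parameter values here are illustrative, not the fitted values from the study: the threshold jumps after each spike and relaxes back, which stretches later inter-spike intervals under sustained drive.

```python
import numpy as np

def lif_adaptive(drive, dt=1e-4, tau_m=5e-3, tau_theta=20e-3,
                 theta0=1.0, dtheta=0.5, v_reset=0.0):
    """Deterministic leaky integrate-and-fire neuron with an adaptive
    spike threshold. All parameters are illustrative, not fitted."""
    v, theta, spike_times = 0.0, theta0, []
    for i, inp in enumerate(drive):
        v += dt / tau_m * (-v + inp)                # leaky integration
        theta += dt / tau_theta * (theta0 - theta)  # threshold relaxes back
        if v >= theta:
            spike_times.append(i * dt)
            v = v_reset                             # reset after a spike
            theta += dtheta                         # threshold jumps up
    return spike_times

# Constant suprathreshold drive for 200 ms: threshold adaptation makes
# the second inter-spike interval longer than the first.
times = lif_adaptive(np.full(2000, 2.0))
```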

  4. Interaural timing difference circuits in the auditory brainstem of the emu (Dromaius novaehollandiae)

    PubMed Central

    MacLeod, Katrina M.; Soares, Daphne; Carr, Catherine E.

    2010-01-01

    In the auditory system, precise encoding of temporal information is critical for sound localization, a task with direct behavioral relevance. Interaural timing differences are computed using axonal delay lines and cellular coincidence detectors in nucleus laminaris (NL). We present morphological and physiological data on the timing circuits in the emu, Dromaius novaehollandiae, and compare these results with those from the barn owl (Tyto alba) and the domestic chick (Gallus gallus). Emu NL was composed of a compact monolayer of bitufted neurons whose two thick primary dendrites were oriented dorsoventrally. They showed a gradient in dendritic length along the presumed tonotopic axis. The NL and nucleus magnocellularis (NM) neurons were strongly immunoreactive for parvalbumin, a calcium-binding protein. Antibodies against synaptic vesicle protein 2 and glutamic acid decarboxylase revealed that excitatory synapses terminated heavily on the dendritic tufts, while inhibitory terminals were distributed more uniformly. Physiological recordings from brainstem slices demonstrated contralateral delay lines from NM to NL. During whole-cell patch-clamp recordings, NM and NL neurons fired single spikes and were doubly-rectifying. NL and NM neurons had input resistances of 30.0 ± 19.9 MΩ and 49.0 ± 25.6 MΩ, respectively, and membrane time constants of 12.8 ± 3.8 ms and 3.9 ± 0.2 ms. These results provide further support for the Jeffress model for sound localization in birds. The emu timing circuits showed the ancestral (plesiomorphic) pattern in their anatomy and physiology, while differences in dendritic structure compared to chick and owl may indicate specialization for encoding ITDs at low best frequencies. PMID:16435285

  5. Learning impaired children exhibit timing deficits and training-related improvements in auditory cortical responses to speech in noise.

    PubMed

    Warrier, Catherine M; Johnson, Krista L; Hayes, Erin A; Nicol, Trent; Kraus, Nina

    2004-08-01

    The physiological mechanisms that contribute to abnormal encoding of speech in children with learning problems are not yet well understood. Furthermore, speech perception problems appear to be particularly exacerbated by background noise in this population. This study compared speech-evoked cortical responses recorded in a noisy background to those recorded in quiet in normal children (NL) and children with learning problems (LP). Timing differences between responses recorded in quiet and in background noise were assessed by cross-correlating the responses with each other. Overall response magnitude was measured with root-mean-square (RMS) amplitude. Cross-correlation scores indicated that 23% of LP children exhibited cortical neural timing abnormalities such that their neurophysiological representation of speech sounds became distorted in the presence of background noise. The latency of the N2 response in noise was isolated as the root of this distortion. RMS amplitudes in these children did not differ from those of NL children, indicating that this result was not due to a difference in response magnitude. LP children who participated in a commercial auditory training program and exhibited improved cortical timing also showed improvements in phonological perception. Consequently, auditory pathway timing deficits can be objectively observed in LP children, and auditory training can diminish these deficits.

  6. Bimodal stimulus timing-dependent plasticity in primary auditory cortex is altered after noise exposure with and without tinnitus.

    PubMed

    Basura, Gregory J; Koehler, Seth D; Shore, Susan E

    2015-12-01

    Central auditory circuits are influenced by the somatosensory system, a relationship that may underlie tinnitus generation. In the guinea pig dorsal cochlear nucleus (DCN), pairing spinal trigeminal nucleus (Sp5) stimulation with tones at specific intervals and orders facilitated or suppressed subsequent tone-evoked neural responses, reflecting spike timing-dependent plasticity (STDP). Furthermore, after noise-induced tinnitus, bimodal responses in DCN were shifted from Hebbian to anti-Hebbian timing rules with less discrete temporal windows, suggesting a role for bimodal plasticity in tinnitus. Here, we aimed to determine if multisensory STDP principles like those in DCN also exist in primary auditory cortex (A1), and whether they change following noise-induced tinnitus. Tone-evoked and spontaneous neural responses were recorded before and 15 min after bimodal stimulation in which the intervals and orders of auditory-somatosensory stimuli were randomized. Tone-evoked and spontaneous firing rates were influenced by the interval and order of the bimodal stimuli, and in sham-controls Hebbian-like timing rules predominated as was seen in DCN. In noise-exposed animals with and without tinnitus, timing rules shifted away from those found in sham-controls to more anti-Hebbian rules. Only those animals with evidence of tinnitus showed increased spontaneous firing rates, a purported neurophysiological correlate of tinnitus in A1. Together, these findings suggest that bimodal plasticity is also evident in A1 following noise damage and may have implications for tinnitus generation and therapeutic intervention across the central auditory circuit.

  7. Bimodal stimulus timing-dependent plasticity in primary auditory cortex is altered after noise exposure with and without tinnitus

    PubMed Central

    Koehler, Seth D.; Shore, Susan E.

    2015-01-01

    Central auditory circuits are influenced by the somatosensory system, a relationship that may underlie tinnitus generation. In the guinea pig dorsal cochlear nucleus (DCN), pairing spinal trigeminal nucleus (Sp5) stimulation with tones at specific intervals and orders facilitated or suppressed subsequent tone-evoked neural responses, reflecting spike timing-dependent plasticity (STDP). Furthermore, after noise-induced tinnitus, bimodal responses in DCN were shifted from Hebbian to anti-Hebbian timing rules with less discrete temporal windows, suggesting a role for bimodal plasticity in tinnitus. Here, we aimed to determine if multisensory STDP principles like those in DCN also exist in primary auditory cortex (A1), and whether they change following noise-induced tinnitus. Tone-evoked and spontaneous neural responses were recorded before and 15 min after bimodal stimulation in which the intervals and orders of auditory-somatosensory stimuli were randomized. Tone-evoked and spontaneous firing rates were influenced by the interval and order of the bimodal stimuli, and in sham-controls Hebbian-like timing rules predominated as was seen in DCN. In noise-exposed animals with and without tinnitus, timing rules shifted away from those found in sham-controls to more anti-Hebbian rules. Only those animals with evidence of tinnitus showed increased spontaneous firing rates, a purported neurophysiological correlate of tinnitus in A1. Together, these findings suggest that bimodal plasticity is also evident in A1 following noise damage and may have implications for tinnitus generation and therapeutic intervention across the central auditory circuit. PMID:26289461

  8. Perceptual distortions in pitch and time reveal active prediction and support for an auditory pitch-motion hypothesis.

    PubMed

    Henry, Molly J; McAuley, J Devin

    2013-01-01

    A number of accounts of human auditory perception assume that listeners use prior stimulus context to generate predictions about future stimulation. Here, we tested an auditory pitch-motion hypothesis that was developed from this perspective. Listeners judged either the time change (i.e., duration) or pitch change of a comparison frequency glide relative to a standard (referent) glide. Under a constant-velocity assumption, listeners were hypothesized to use the pitch velocity (Δf/Δt) of the standard glide to generate predictions about the pitch velocity of the comparison glide, leading to perceptual distortions along the to-be-judged dimension when the velocities of the two glides differed. These predictions were borne out in the pattern of relative points of subjective equality by a significant three-way interaction between the velocities of the two glides and task. In general, listeners' judgments along the task-relevant dimension (pitch or time) were affected by expectations generated by the constant-velocity standard, but in an opposite manner for the two stimulus dimensions. When the comparison glide velocity was faster than the standard, listeners overestimated time change, but underestimated pitch change, whereas when the comparison glide velocity was slower than the standard, listeners underestimated time change, but overestimated pitch change. Perceptual distortions were least evident when the velocities of the standard and comparison glides were matched. Fits of an imputed velocity model further revealed increasingly larger distortions at faster velocities. The present findings provide support for the auditory pitch-motion hypothesis and add to a larger body of work revealing a role for active prediction in human auditory perception.

  9. Effect of Eight Weekly Aerobic Training Program on Auditory Reaction Time and MaxVO[subscript 2] in Visual Impairments

    ERIC Educational Resources Information Center

    Taskin, Cengiz

    2016-01-01

    The aim of this study was to examine the effect of an eight-week aerobic training program on auditory reaction time and MaxVO[subscript 2] in children with visual impairments. Forty children with visual impairments from Turkey, with a blind 3 classification, participated. Experimental group: (age = 15.60 ± 1.10 years; height = 164.15 ± 4.88 cm; weight = 66.60 ± 4.77 kg) for twenty…

  10. Recursive time-varying filter banks for subband image coding

    NASA Technical Reports Server (NTRS)

    Smith, Mark J. T.; Chung, Wilson C.

    1992-01-01

    Filter banks and wavelet decompositions that employ recursive filters have been considered previously and are recognized for their efficiency in partitioning the frequency spectrum. This paper presents an analysis of a new infinite impulse response (IIR) filter bank in which these computationally efficient filters may be changed adaptively in response to the input. The filter bank is presented and discussed in the context of finite-support signals with the intended application in subband image coding. In the absence of quantization errors, exact reconstruction can be achieved, and it is shown that, with the proper choice of adaptation scheme, IIR time-varying filter banks can yield improvements over conventional ones.

  11. Effects of Location, Frequency Region, and Time Course of Selective Attention on Auditory Scene Analysis

    ERIC Educational Resources Information Center

    Cusack, Rhodri; Deeks, John; Aikman, Genevieve; Carlyon, Robert P.

    2004-01-01

    Often, the sound arriving at the ears is a mixture from many different sources, but only 1 is of interest. To assist with selection, the auditory system structures the incoming input into streams, each of which ideally corresponds to a single source. Some authors have argued that this process of streaming is automatic and invariant, but recent…

  12. A Latent Consolidation Phase in Auditory Identification Learning: Time in the Awake State Is Sufficient

    ERIC Educational Resources Information Center

    Roth, Daphne Ari-Even; Kishon-Rabin, Liat; Hildesheimer, Minka; Karni, Avi

    2005-01-01

    Large gains in performance, evolving hours after practice has terminated, were reported in a number of visual and some motor learning tasks, as well as recently in an auditory nonverbal discrimination task. It was proposed that these gains reflect a latent phase of experience-triggered memory consolidation in human skill learning. It is not clear,…

  13. Rapid Increase in Neural Conduction Time in the Adult Human Auditory Brainstem Following Sudden Unilateral Deafness.

    PubMed

    Maslin, M R D; Lloyd, S K; Rutherford, S; Freeman, S; King, A; Moore, D R; Munro, K J

    2015-10-01

    Individuals with sudden unilateral deafness offer a unique opportunity to study plasticity of the binaural auditory system in adult humans. Stimulation of the intact ear results in increased activity in the auditory cortex. However, there are no reports of changes at sub-cortical levels in humans. Therefore, the aim of the present study was to investigate changes in sub-cortical activity immediately before and after the onset of surgically induced unilateral deafness in adult humans. Click-evoked auditory brainstem responses (ABRs) to stimulation of the healthy ear were recorded from ten adults during the course of translabyrinthine surgery for the removal of a unilateral acoustic neuroma. This surgical technique always results in abrupt deafferentation of the affected ear. The results revealed a rapid (within minutes) reduction in latency of wave V (mean pre = 6.55 ms; mean post = 6.15 ms; p < 0.001). A latency reduction was also observed for wave III (mean pre = 4.40 ms; mean post = 4.13 ms; p < 0.001). These reductions in response latency are consistent with functional changes including disinhibition and/or more rapid intra-cellular signalling affecting binaurally sensitive neurons in the central auditory system. The results are highly relevant for improved understanding of putative physiological mechanisms underlying perceptual disorders such as tinnitus and hyperacusis.

  14. Effect of Auditory Motion Velocity on Reaction Time and Cortical Processes

    ERIC Educational Resources Information Center

    Getzmann, Stephan

    2009-01-01

    The study investigated the processing of sound motion, employing a psychophysical motion discrimination task in combination with electroencephalography. Following stationary auditory stimulation from a central space position, the onset of left- and rightward motion elicited a specific cortical response that was lateralized to the hemisphere…

  15. Dependency Structures in Differentially Coded Cardiovascular Time Series

    PubMed Central

    Tasic, Tatjana; Jovanovic, Sladjana; Mohamoud, Omer; Skoric, Tamara; Japundzic-Zigon, Nina

    2017-01-01

    Objectives. This paper analyses temporal dependency in the time series recorded from aging rats, the healthy ones and those with early developed hypertension. The aim is to explore effects of age and hypertension on mutual sample relationship along the time axis. Methods. A copula method is applied to raw and to differentially coded signals. The latter ones were additionally binary encoded for a joint conditional entropy application. The signals were recorded from freely moving male Wistar rats and from spontaneous hypertensive rats, aged 3 months and 12 months. Results. The highest level of comonotonic behavior of pulse interval with respect to systolic blood pressure is observed at time lags τ = 0, 3, and 4, while a strong counter-monotonic behavior occurs at time lags τ = 1 and 2. Conclusion. Dynamic range of aging rats is considerably reduced in hypertensive groups. Conditional entropy of systolic blood pressure signal, compared to unconditional, shows an increased level of discrepancy, except for a time lag 1, where the equality is preserved in spite of the memory of differential coder. The antiparallel streams play an important role at single beat time lag. PMID:28127384
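
    The differential coding, binary encoding, and conditional-entropy comparison described above can be approximated with a generic plug-in estimator (a simplified sketch, not the paper's copula-based pipeline; all names are illustrative):

```python
import numpy as np
from collections import Counter

def binarize_diff(signal):
    """Differential coding followed by binary encoding:
    1 where the signal rose from the previous sample, else 0."""
    return (np.diff(signal) > 0).astype(int)

def entropy(seq):
    """Shannon entropy (bits) of a discrete symbol sequence."""
    counts = np.array(list(Counter(seq).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def conditional_entropy(x, y):
    """H(X | Y) = H(X, Y) - H(Y); zero when Y fully predicts X."""
    return entropy(list(zip(x, y))) - entropy(list(y))

def lagged_conditional_entropy(x, y, lag):
    """H(x_t | y_{t-lag}): the lag-dependent comparison of one coded
    cardiovascular series against another."""
    if lag == 0:
        return conditional_entropy(x, y)
    return conditional_entropy(x[lag:], y[:-lag])
```

    Comparing the lagged conditional entropy to the unconditional entropy of the same series indicates, lag by lag, how much one signal's ups and downs predict the other's.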

  16. Time-Dependent, Parallel Neutral Particle Transport Code System.

    SciTech Connect

    BAKER, RANDAL S.

    2009-09-10

    Version 00 PARTISN (PARallel, TIme-Dependent SN) is the evolutionary successor to CCC-547/DANTSYS. The PARTISN code package is a modular computer program package designed to solve the time-independent or dependent multigroup discrete ordinates form of the Boltzmann transport equation in several different geometries. The modular construction of the package separates the input processing, the transport equation solving, and the post processing (or edit) functions into distinct code modules: the Input Module, the Solver Module, and the Edit Module, respectively. PARTISN is the evolutionary successor to the DANTSYS code system package. The Input and Edit Modules in PARTISN are very similar to those in DANTSYS. However, unlike DANTSYS, the Solver Module in PARTISN contains one, two, and three-dimensional solvers in a single module. In addition to the diamond-differencing method, the Solver Module also has Adaptive Weighted Diamond-Differencing (AWDD), Linear Discontinuous (LD), and Exponential Discontinuous (ED) spatial differencing methods. The spatial mesh may consist of either a standard orthogonal mesh or a block adaptive orthogonal mesh. The Solver Module may be run in parallel for two and three dimensional problems. One can now run 1-D problems in parallel using Energy Domain Decomposition (triggered by Block 5 input keyword npeg>0). EDD can also be used in 2-D/3-D with or without our standard Spatial Domain Decomposition. Both the static (fixed source or eigenvalue) and time-dependent forms of the transport equation are solved in forward or adjoint mode. In addition, PARTISN now has a probabilistic mode for Probability of Initiation (static) and Probability of Survival (dynamic) calculations. Vacuum, reflective, periodic, white, or inhomogeneous boundary conditions are solved. General anisotropic scattering and inhomogeneous sources are permitted.
PARTISN solves the transport equation on orthogonal (single level or block-structured AMR) grids in 1-D (slab, two
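
    PARTISN itself is a large production package, but the discrete ordinates method at its core can be shown in miniature. The following toy one-group, 1-D slab solver uses diamond differencing, Gauss-Legendre angular quadrature, source iteration on isotropic scattering, and vacuum boundaries; it is a pedagogical sketch under those simplifying assumptions, not PARTISN's implementation:

```python
import numpy as np

def solve_slab_sn(nx=50, width=10.0, sigma_t=1.0, sigma_s=0.5, q_ext=1.0,
                  n_angles=8, tol=1e-8, max_iter=500):
    """Toy one-group, 1-D slab discrete-ordinates (SN) solver:
    diamond differencing in space, Gauss-Legendre quadrature in angle,
    source iteration on the scattering term, vacuum boundaries."""
    dx = width / nx
    mu, w = np.polynomial.legendre.leggauss(n_angles)  # angles and weights
    phi = np.zeros(nx)                                 # scalar flux
    for _ in range(max_iter):
        source = 0.5 * (sigma_s * phi + q_ext)         # isotropic emission
        phi_new = np.zeros(nx)
        for m in range(n_angles):
            am = abs(mu[m])
            psi_edge = 0.0                             # vacuum boundary
            cells = range(nx) if mu[m] > 0 else range(nx - 1, -1, -1)
            for i in cells:
                # diamond-difference relation for the cell-average flux
                psi = (source[i] + 2.0 * am / dx * psi_edge) / (sigma_t + 2.0 * am / dx)
                psi_edge = 2.0 * psi - psi_edge        # outgoing edge flux
                phi_new[i] += w[m] * psi
        if np.max(np.abs(phi_new - phi)) < tol:
            phi = phi_new
            break
        phi = phi_new
    return phi
```

    For a uniform source in a thick slab, the flux deep inside approaches the infinite-medium value q/(sigma_t - sigma_s), and the profile is symmetric about the midplane, which makes the toy solver easy to sanity-check.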

  17. Time and Category Information in Pattern-Based Codes

    PubMed Central

    Eyherabide, Hugo Gabriel; Samengo, Inés

    2010-01-01

    Sensory stimuli are usually composed of different features (the what) appearing at irregular times (the when). Neural responses often use spike patterns to represent sensory information. The what is hypothesized to be encoded in the identity of the elicited patterns (the pattern categories), and the when, in the time positions of patterns (the pattern timing). However, this standard view is oversimplified. In the real world, the what and the when might not be separable concepts, for instance, if they are correlated in the stimulus. In addition, neuronal dynamics can condition the pattern timing to be correlated with the pattern categories. Hence, timing and categories of patterns may not constitute independent channels of information. In this paper, we assess the role of spike patterns in the neural code, irrespective of the nature of the patterns. We first define information-theoretical quantities that allow us to quantify the information encoded by different aspects of the neural response. We also introduce the notion of synergy/redundancy between time positions and categories of patterns. We subsequently establish the relation between the what and the when in the stimulus with the timing and the categories of patterns. To that aim, we quantify the mutual information between different aspects of the stimulus and different aspects of the response. This formal framework allows us to determine the precise conditions under which the standard view holds, as well as the departures from this simple case. Finally, we study the capability of different response aspects to represent the what and the when in the neural response. PMID:21151371

  18. Interval Timing in Children: Effects of Auditory and Visual Pacing Stimuli and Relationships with Reading and Attention Variables

    PubMed Central

    Birkett, Emma E.; Talcott, Joel B.

    2012-01-01

    Motor timing tasks have been employed in studies of neurodevelopmental disorders such as developmental dyslexia and ADHD, where they provide an index of temporal processing ability. Investigations of these disorders have used different stimulus parameters within the motor timing tasks that are likely to affect performance measures. Here we assessed the effect of auditory and visual pacing stimuli on synchronised motor timing performance and its relationship with cognitive and behavioural predictors that are commonly used in the diagnosis of these highly prevalent developmental disorders. Twenty-one children (mean age 9.6 years) completed a finger tapping task in two stimulus conditions, together with additional psychometric measures. As anticipated, synchronisation to the beat (ISI 329 ms) was less accurate in the visually paced condition. Decomposition of timing variance indicated that this effect resulted from differences in the way that visual and auditory paced tasks are processed by central timekeeping and associated peripheral implementation systems. The ability to utilise an efficient processing strategy on the visual task correlated with both reading and sustained attention skills. Dissociations between these patterns of relationship across task modality suggest that not all timing tasks are equivalent. PMID:22900054

  19. Interval timing in children: effects of auditory and visual pacing stimuli and relationships with reading and attention variables.

    PubMed

    Birkett, Emma E; Talcott, Joel B

    2012-01-01

    Motor timing tasks have been employed in studies of neurodevelopmental disorders such as developmental dyslexia and ADHD, where they provide an index of temporal processing ability. Investigations of these disorders have used different stimulus parameters within the motor timing tasks that are likely to affect performance measures. Here we assessed the effect of auditory and visual pacing stimuli on synchronised motor timing performance and its relationship with cognitive and behavioural predictors that are commonly used in the diagnosis of these highly prevalent developmental disorders. Twenty-one children (mean age 9.6 years) completed a finger tapping task in two stimulus conditions, together with additional psychometric measures. As anticipated, synchronisation to the beat (ISI 329 ms) was less accurate in the visually paced condition. Decomposition of timing variance indicated that this effect resulted from differences in the way that visual and auditory paced tasks are processed by central timekeeping and associated peripheral implementation systems. The ability to utilise an efficient processing strategy on the visual task correlated with both reading and sustained attention skills. Dissociations between these patterns of relationship across task modality suggest that not all timing tasks are equivalent.

  20. EEG alpha spindles and prolonged brake reaction times during auditory distraction in an on-road driving study.

    PubMed

    Sonnleitner, Andreas; Treder, Matthias Sebastian; Simon, Michael; Willmann, Sven; Ewald, Arne; Buchner, Axel; Schrauf, Michael

    2014-01-01

    Driver distraction is responsible for a substantial number of traffic accidents. This paper describes the impact of an auditory secondary task on drivers' mental states during a primary driving task. N=20 participants performed the test procedure in a car-following task with repeated forced braking on a non-public test track. Performance measures (provoked reaction time to brake lights) and brain activity (EEG alpha spindles) were analyzed to describe distracted drivers. Further, a classification approach was used to investigate whether alpha spindles can predict drivers' mental states. Results show that reaction times and alpha spindle rate increased with time-on-task. Moreover, brake reaction times and alpha spindle rate were significantly higher while driving with the auditory secondary task as opposed to driving only. In single-trial classification, a combination of spindle parameters yielded a median classification error of about 8% in discriminating the distracted from the alert driving. Reduced driving performance (i.e., prolonged brake reaction times) during increased cognitive load is assumed to be indicated by EEG alpha spindles, enabling the quantification of driver distraction in experiments on public roads without verbally assessing the drivers' mental states.

  1. Performance of a space-time block coded code division multiple access system over Nakagami-m fading channels

    NASA Astrophysics Data System (ADS)

    Yu, Xiangbin; Dong, Tao; Xu, Dazhuan; Bi, Guangguo

    2010-09-01

    By introducing an orthogonal space-time coding scheme, multiuser code division multiple access (CDMA) systems with different space time codes are given, and corresponding system performance is investigated over a Nakagami-m fading channel. A low-complexity multiuser receiver scheme is developed for space-time block coded CDMA (STBC-CDMA) systems. The scheme can make full use of the complex orthogonality of space-time block coding to simplify the high decoding complexity of the existing scheme. Compared to the existing scheme with exponential decoding complexity, it has linear decoding complexity. Based on the performance analysis and mathematical calculation, the average bit error rate (BER) of the system is derived in detail for integer m and non-integer m, respectively. As a result, a tight closed-form BER expression is obtained for STBC-CDMA with an orthogonal spreading code, and an approximate closed-form BER expression is attained for STBC-CDMA with a quasi-orthogonal spreading code. Simulation results show that the proposed scheme can achieve almost the same performance as the existing scheme with low complexity. Moreover, the simulation results for average BER are consistent with the theoretical analysis.
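
    The complex orthogonality that the proposed receiver exploits comes from the underlying Alamouti block code. A minimal sketch of the two-antenna encoder and the linear combiner for one receive antenna follows (the textbook Alamouti scheme over a flat, noiseless channel, not the paper's full STBC-CDMA receiver):

```python
import numpy as np

def alamouti_encode(s1, s2):
    """Alamouti orthogonal STBC for two transmit antennas: row t of the
    returned matrix holds the symbols sent from (antenna 1, antenna 2)
    in time slot t."""
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

def alamouti_decode(r1, r2, h1, h2):
    """Linear combining at a single receive antenna with known, constant
    channel gains h1, h2; the code's complex orthogonality makes the two
    symbol estimates decouple."""
    g = abs(h1) ** 2 + abs(h2) ** 2
    s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g
    s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g
    return s1_hat, s2_hat
```

    Because the combiner is linear in the received samples, the decoding cost grows linearly with the number of symbols, which is the property the proposed receiver carries over to the CDMA setting.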

  2. Change in Speech Perception and Auditory Evoked Potentials over Time after Unilateral Cochlear Implantation in Postlingually Deaf Adults.

    PubMed

    Purdy, Suzanne C; Kelly, Andrea S

    2016-02-01

    Speech perception varies widely across cochlear implant (CI) users and typically improves over time after implantation. There is also some evidence for improved auditory evoked potentials (shorter latencies, larger amplitudes) after implantation but few longitudinal studies have examined the relationship between behavioral and evoked potential measures after implantation in postlingually deaf adults. The relationship between speech perception and auditory evoked potentials was investigated in newly implanted cochlear implant users from the day of implant activation to 9 months postimplantation, on five occasions, in 10 adults age 27 to 57 years who had been bilaterally profoundly deaf for 1 to 30 years prior to receiving a unilateral CI24 cochlear implant. Changes over time in middle latency response (MLR), mismatch negativity, and obligatory cortical auditory evoked potentials and word and sentence speech perception scores were examined. Speech perception improved significantly over the 9-month period. MLRs varied and showed no consistent change over time. Three participants aged in their 50s had absent MLRs. The pattern of change in N1 amplitudes over the five visits varied across participants. P2 area increased significantly for 1,000- and 4,000-Hz tones but not for 250 Hz. The greatest change in P2 area occurred after 6 months of implant experience. Although there was a trend for mismatch negativity peak latency to reduce and width to increase after 3 months of implant experience, there was considerable variability and these changes were not significant. Only 60% of participants had a detectable mismatch initially; this increased to 100% at 9 months. The continued change in P2 area over the period evaluated, with a trend for greater change for right hemisphere recordings, is consistent with the pattern of incremental change in speech perception scores over time. 
MLR, N1, and mismatch negativity changes were inconsistent and hence P2 may be a more robust measure

  3. Auditory spatial processing in the human cortex.

    PubMed

    Salminen, Nelli H; Tiitinen, Hannu; May, Patrick J C

    2012-12-01

    The auditory system codes spatial locations in a way that deviates from the spatial representations found in other modalities. This difference is especially striking in the cortex, where neurons form topographical maps of visual and tactile space but where auditory space is represented through a population rate code. In this hemifield code, sound source location is represented in the activity of two widely tuned opponent populations, one tuned to the right and the other to the left side of auditory space. Scientists are only beginning to uncover how this coding strategy adapts to various spatial processing demands. This review presents the current understanding of auditory spatial processing in the cortex. To this end, the authors consider how various implementations of the hemifield code may exist within the auditory cortex and how these may be modulated by the stimulation and task context. As a result, a coherent set of neural strategies for auditory spatial processing emerges.
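
    A toy version of such a hemifield code can make the idea concrete: two broadly tuned opponent channels whose comparison encodes azimuth (the sigmoidal tuning and the decoding rule are assumptions for illustration, not a model from the review):

```python
import numpy as np

RATE_SLOPE = 0.05  # per degree of azimuth; illustrative tuning steepness

def population_rates(azimuth_deg):
    """Two opponent channels broadly tuned to the right and left hemifields
    (normalized firing rates; sigmoidal tuning assumed for illustration)."""
    right = 1.0 / (1.0 + np.exp(-RATE_SLOPE * azimuth_deg))
    left = 1.0 - right  # mirror-symmetric opponent channel
    return right, left

def decode_azimuth(right, left):
    """Read out source azimuth from the comparison of the two channel
    activities (here, the log rate ratio inverts the sigmoid exactly)."""
    return float(np.log(right / left) / RATE_SLOPE)
```

    Location is thus carried not by any single neuron's peak but by the balance of activity between the two widely tuned populations.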

  4. Coded acoustic wave sensors and system using time diversity

    NASA Technical Reports Server (NTRS)

    Solie, Leland P. (Inventor); Hines, Jacqueline H. (Inventor)

    2012-01-01

    An apparatus and method for distinguishing between sensors that are to be wirelessly detected is provided. An interrogator device uses different, distinct time delays in the sensing signals when interrogating the sensors. The sensors are provided with different distinct pedestal delays. Sensors that have the same pedestal delay as the delay selected by the interrogator are detected by the interrogator whereas other sensors with different pedestal delays are not sensed. Multiple sensors with a given pedestal delay are provided with different codes so as to be distinguished from one another by the interrogator. The interrogator uses a signal that is transmitted to the sensor and returned by the sensor for combination and integration with the reference signal that has been processed by a function. The sensor may be a surface acoustic wave device having a differential impulse response with a power spectral density consisting of lobes. The power spectral density of the differential response is used to determine the value of the sensed parameter or parameters.
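
    The delay-selective interrogation idea can be sketched with a simple baseband correlation model (sizes, delays, and codes below are invented for the example, not taken from the patent):

```python
import numpy as np

def sensor_return(code, pedestal, total_len=512):
    """One sensor's reply: its chip code placed after its pedestal delay."""
    out = np.zeros(total_len)
    out[pedestal:pedestal + len(code)] = code
    return out

def interrogate(received, reference_code, probe_delay):
    """Correlate the received signal against a reference code at the
    selected pedestal delay; only sensors whose delay (and code) match
    the interrogator's selection score highly."""
    seg = received[probe_delay:probe_delay + len(reference_code)]
    return float(seg @ reference_code) / len(reference_code)
```

    Sensors sharing the same pedestal delay would be separated in the same way by their distinct codes, via the code correlation rather than the delay.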

  5. Incorporating Auditory Models in Speech/Audio Applications

    NASA Astrophysics Data System (ADS)

    Krishnamoorthi, Harish

    2011-12-01

    Following the success in incorporating perceptual models in audio coding algorithms, their application in other speech/audio processing systems is expanding. In general, all perceptual speech/audio processing algorithms involve minimization of an objective function that directly/indirectly incorporates properties of human perception. This dissertation primarily investigates the problems associated with directly embedding an auditory model in the objective function formulation and proposes possible solutions to overcome high complexity issues for use in real-time speech/audio algorithms. Specific problems addressed in this dissertation include: 1) the development of approximate but computationally efficient auditory model implementations that are consistent with the principles of psychoacoustics, 2) the development of a mapping scheme that allows synthesizing a time/frequency domain representation from its equivalent auditory model output. The first problem is aimed at addressing the high computational complexity involved in solving perceptual objective functions that require repeated application of auditory model for evaluation of different candidate solutions. In this dissertation, a frequency pruning and a detector pruning algorithm is developed that efficiently implements the various auditory model stages. The performance of the pruned model is compared to that of the original auditory model for different types of test signals in the SQAM database. Experimental results indicate only a 4-7% relative error in loudness while attaining up to 80-90 % reduction in computational complexity. Similarly, a hybrid algorithm is developed specifically for use with sinusoidal signals and employs the proposed auditory pattern combining technique together with a look-up table to store representative auditory patterns. 
The second problem obtains an estimate of the auditory representation that minimizes a perceptual objective function and transforms the auditory pattern back to

  6. Low Complexity Receiver Based Space-Time Codes for Broadband Wireless Communications

    DTIC Science & Technology

    2011-01-31

    STBC family is a combination/overlay between orthogonal STBC and Toeplitz codes, which could be viewed as a generalization of overlapped Alamouti...codes (OAC) and Toeplitz codes recently proposed in the literature. It is shown that the newly proposed STBC may outperform the existing codes when...mixed asynchronous signals in the first time-slot by a Toeplitz matrix, and then broadcasts them back to the terminals in the second time-slot. A

  7. Development of the auditory system.

    PubMed

    Litovsky, Ruth

    2015-01-01

    Auditory development involves changes in the peripheral and central nervous system along the auditory pathways, and these occur naturally, and in response to stimulation. Human development occurs along a trajectory that can last decades, and is studied using behavioral psychophysics, as well as physiologic measurements with neural imaging. The auditory system constructs a perceptual space that takes information from objects and groups, segregates sounds, and provides meaning and access to communication tools such as language. Auditory signals are processed in a series of analysis stages, from peripheral to central. Coding of information has been studied for features of sound, including frequency, intensity, loudness, and location, in quiet and in the presence of maskers. In the latter case, the ability of the auditory system to perform an analysis of the scene becomes highly relevant. While some basic abilities are well developed at birth, there is a clear prolonged maturation of auditory development well into the teenage years. Maturation involves auditory pathways. However, non-auditory changes (attention, memory, cognition) play an important role in auditory development. The ability of the auditory system to adapt in response to novel stimuli is a key feature of development throughout the nervous system, known as neural plasticity.

  8. Development of the auditory system

    PubMed Central

    Litovsky, Ruth

    2015-01-01

    Auditory development involves changes in the peripheral and central nervous system along the auditory pathways, and these occur naturally, and in response to stimulation. Human development occurs along a trajectory that can last decades, and is studied using behavioral psychophysics, as well as physiologic measurements with neural imaging. The auditory system constructs a perceptual space that takes information from objects and groups, segregates sounds, and provides meaning and access to communication tools such as language. Auditory signals are processed in a series of analysis stages, from peripheral to central. Coding of information has been studied for features of sound, including frequency, intensity, loudness, and location, in quiet and in the presence of maskers. In the latter case, the ability of the auditory system to perform an analysis of the scene becomes highly relevant. While some basic abilities are well developed at birth, there is a clear prolonged maturation of auditory development well into the teenage years. Maturation involves auditory pathways. However, non-auditory changes (attention, memory, cognition) play an important role in auditory development. The ability of the auditory system to adapt in response to novel stimuli is a key feature of development throughout the nervous system, known as neural plasticity. PMID:25726262

  9. Long-range correlation properties in timing of skilled piano performance: the influence of auditory feedback and deep brain stimulation

    PubMed Central

    Herrojo Ruiz, María; Hong, Sang Bin; Hennig, Holger; Altenmüller, Eckart; Kühn, Andrea A.

    2014-01-01

    Unintentional timing deviations during musical performance can be conceived of as timing errors. However, recent research on humanizing computer-generated music has demonstrated that timing fluctuations that exhibit long-range temporal correlations (LRTC) are preferred by human listeners. This preference can be accounted for by the ubiquitous presence of LRTC in human tapping and rhythmic performances. Interestingly, the manifestation of LRTC in tapping behavior seems to be driven in a subject-specific manner by the LRTC properties of resting-state background cortical oscillatory activity. In this framework, the current study aimed to investigate whether propagation of timing deviations during the skilled, memorized piano performance (without metronome) of 17 professional pianists exhibits LRTC and whether the structure of the correlations is influenced by the presence or absence of auditory feedback. As an additional goal, we set out to investigate the influence of altering the dynamics along the cortico-basal-ganglia-thalamo-cortical network via deep brain stimulation (DBS) on the LRTC properties of musical performance. Specifically, we investigated temporal deviations during the skilled piano performance of a non-professional pianist who was treated with subthalamic-deep brain stimulation (STN-DBS) due to severe Parkinson's disease, with predominant tremor affecting his right upper extremity. In the tremor-affected right hand, the timing fluctuations of the performance exhibited random correlations with DBS OFF. By contrast, DBS restored long-range dependency in the temporal fluctuations, corresponding with the general motor improvement on DBS. Overall, the present investigations demonstrate the presence of LRTC in skilled piano performances, indicating that unintentional temporal deviations are correlated over a wide range of time scales. This phenomenon is stable after removal of the auditory feedback, but is altered by STN-DBS, which suggests that cortico
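
    Long-range temporal correlations in series of this kind are commonly quantified with detrended fluctuation analysis (DFA), whose scaling exponent is about 0.5 for uncorrelated fluctuations and rises toward (or above) 1 for long-range correlated ones. A minimal generic sketch, assuming that method rather than reproducing the study's code:

```python
import numpy as np

def dfa_exponent(x, scales=(16, 32, 64, 128, 256)):
    """Detrended fluctuation analysis: slope of log fluctuation versus
    log window size; ~0.5 for white noise, higher for long-range
    temporally correlated series."""
    y = np.cumsum(np.asarray(x, dtype=float) - np.mean(x))  # integrated profile
    flucts = []
    for s in scales:
        n_win = len(y) // s
        t = np.arange(s)
        sq = []
        for k in range(n_win):
            seg = y[k * s:(k + 1) * s]
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # detrend each window
            sq.append(np.mean((seg - trend) ** 2))
        flucts.append(np.sqrt(np.mean(sq)))
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return float(slope)
```

    Applied to inter-keystroke timing deviations, an exponent near 0.5 would correspond to the random correlations seen with DBS OFF, and a higher exponent to the restored long-range dependency with DBS ON.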

  10. Potassium conductance dynamics confer robust spike-time precision in a neuromorphic model of the auditory brain stem

    PubMed Central

    Boahen, Kwabena

    2013-01-01

    A fundamental question in neuroscience is how neurons perform precise operations despite inherent variability. This question also applies to neuromorphic engineering, where low-power microchips emulate the brain using large populations of diverse silicon neurons. Biological neurons in the auditory pathway display precise spike timing, critical for sound localization and interpretation of complex waveforms such as speech, even though they are a heterogeneous population. Silicon neurons are also heterogeneous, due to a key design constraint in neuromorphic engineering: smaller transistors offer lower power consumption and more neurons per unit area of silicon, but also more variability between transistors and thus between silicon neurons. Utilizing this variability in a neuromorphic model of the auditory brain stem with 1,080 silicon neurons, we found that a low-voltage-activated potassium conductance (gKL) enables precise spike timing via two mechanisms: statically reducing the resting membrane time constant and dynamically suppressing late synaptic inputs. The relative contribution of these two mechanisms is unknown because blocking gKL in vitro eliminates dynamic adaptation but also lengthens the membrane time constant. We replaced gKL with a static leak in silico to recover the short membrane time constant and found that silicon neurons could mimic the spike-time precision of their biological counterparts, but only over a narrow range of stimulus intensities and biophysical parameters. The dynamics of gKL were required for precise spike timing robust to stimulus variation across a heterogeneous population of silicon neurons, thus explaining how neural and neuromorphic systems may perform precise operations despite inherent variability. PMID:23554436
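The static mechanism described above follows directly from the passive membrane equation: the resting time constant is τ = C / g_total, so a gKL that is partly open at rest shortens τ and narrows the window for synaptic integration. A back-of-the-envelope sketch with illustrative values (not measurements from the paper):

```python
# tau = C / g_total: adding a low-voltage-activated K+ conductance (gKL)
# that is open at rest shortens the membrane time constant.
# All values below are illustrative, not taken from the study.
C = 20e-12       # membrane capacitance, 20 pF
g_leak = 5e-9    # resting leak conductance, 5 nS
g_KL = 45e-9     # gKL open at rest, 45 nS

tau_without = C / g_leak            # static leak only
tau_with = C / (g_leak + g_KL)      # leak plus resting gKL
print(tau_without * 1e3, tau_with * 1e3)   # ~4 ms vs ~0.4 ms
```

This captures only the static mechanism; the paper's point is that the voltage-dependent (dynamic) part of gKL, which this arithmetic omits, is what makes spike-time precision robust across stimulus intensities.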

  11. Auditory presentation and synchronization in Adobe Flash and HTML5/JavaScript Web experiments.

    PubMed

    Reimers, Stian; Stewart, Neil

    2016-09-01

Substantial recent research has examined the accuracy of presentation durations and response time measurements for visually presented stimuli in Web-based experiments, with a general conclusion that accuracy is acceptable for most kinds of experiments. However, many areas of behavioral research use auditory stimuli instead of, or in addition to, visual stimuli. Much less is known about auditory accuracy using standard Web-based testing procedures. We used a millisecond-accurate Black Box Toolkit to measure the actual durations of auditory stimuli and the synchronization of auditory and visual presentation onsets. We examined the distribution of timings for 100 presentations of auditory and visual stimuli across two computers with different specifications, three commonly used browsers, and code written in either Adobe Flash or JavaScript. We also examined different coding options for attempting to synchronize the auditory and visual onsets. Overall, we found that auditory durations were very consistent, but that the lags between visual and auditory onsets varied substantially across browsers and computer systems.

  12. Auditory brainstem response to complex sounds: a tutorial

    PubMed Central

    Skoe, Erika; Kraus, Nina

    2010-01-01

    This tutorial provides a comprehensive overview of the methodological approach to collecting and analyzing auditory brainstem responses to complex sounds (cABRs). cABRs provide a window into how behaviorally relevant sounds such as speech and music are processed in the brain. Because temporal and spectral characteristics of sounds are preserved in this subcortical response, cABRs can be used to assess specific impairments and enhancements in auditory processing. Notably, subcortical function is neither passive nor hardwired but dynamically interacts with higher-level cognitive processes to refine how sounds are transcribed into neural code. This experience-dependent plasticity, which can occur on a number of time scales (e.g., life-long experience with speech or music, short-term auditory training, online auditory processing), helps shape sensory perception. Thus, by being an objective and non-invasive means for examining cognitive function and experience-dependent processes in sensory activity, cABRs have considerable utility in the study of populations where auditory function is of interest (e.g., auditory experts such as musicians, persons with hearing loss, auditory processing and language disorders). This tutorial is intended for clinicians and researchers seeking to integrate cABRs into their clinical and/or research programs. PMID:20084007

  13. Sources of variability in auditory brain stem evoked potential measures over time.

    PubMed

    Edwards, R M; Buchwald, J S; Tanguay, P E; Schwafel, J A

    1982-02-01

Auditory brain stem EPs elicited in 10 normal adults by monaural clicks delivered at 72 dB HL, 20/sec showed no significant change in wave latencies or in the ratio of wave I to wave V amplitude across 250 trial subsets, across 1500 trial blocks within a test session, or across two test sessions separated by several months. Sources of maximum variability were determined by using mean squared differences with all but one condition held constant. 'Subjects' was shown to contribute the most variability, followed by 'ears', 'sessions' and 'runs'; collapsing across conditions, wave III latencies were found to be the least variable, while wave II showed the most variability. Some EP morphologies showed extra peaks between waves II and IV, a missing wave IV, or a wave IV fused with wave V. Such variations in waveform morphology were independent of EMG amplitude and were characteristic of certain individuals.

  14. Transformation of binaural response properties in the ascending auditory pathway: influence of time-varying interaural phase disparity.

    PubMed

    Spitzer, M W; Semple, M N

    1998-12-01

Transformation of binaural response properties in the ascending auditory pathway: influence of time-varying interaural phase disparity. J. Neurophysiol. 80: 3062-3076, 1998. Previous studies demonstrated that tuning of inferior colliculus (IC) neurons to interaural phase disparity (IPD) is often profoundly influenced by temporal variation of IPD, which simulates the binaural cue produced by a moving sound source. To determine whether sensitivity to simulated motion arises in IC or at an earlier stage of binaural processing we compared responses in IC with those of two major IPD-sensitive neuronal classes in the superior olivary complex (SOC), neurons whose discharges were phase locked (PL) to tonal stimuli and those that were nonphase locked (NPL). Time-varying IPD stimuli consisted of binaural beats, generated by presenting tones of slightly different frequencies to the two ears, and interaural phase modulation (IPM), generated by presenting a pure tone to one ear and a phase modulated tone to the other. IC neurons and NPL-SOC neurons were more sharply tuned to time-varying than to static IPD, whereas PL-SOC neurons were essentially uninfluenced by the mode of stimulus presentation. Preferred IPD was generally similar in responses to static and time-varying IPD for all unit populations. A few IC neurons were highly influenced by the direction and rate of simulated motion, but the major effect for most IC neurons and all SOC neurons was a linear shift of preferred IPD at high rates, attributable to response latency. Most IC and NPL-SOC neurons were strongly influenced by IPM stimuli simulating motion through restricted ranges of azimuth; simulated motion through partially overlapping azimuthal ranges elicited discharge profiles that were highly discontiguous, indicating that the response associated with a particular IPD is dependent on preceding portions of the stimulus.
In contrast, PL-SOC responses tracked instantaneous IPD throughout the trajectory of simulated
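The binaural-beat stimulus described in this record is easy to state concretely: a small interaural frequency mismatch makes the interaural phase disparity sweep through a full cycle once per beat period. A minimal NumPy sketch of such a stimulus (parameter values are illustrative, not the study's):

```python
import numpy as np

fs = 48000                                # sample rate (Hz), illustrative
f_left, beat = 500.0, 2.0                 # 500 Hz left ear, 502 Hz right ear
t = np.arange(int(fs * 1.0)) / fs         # 1 s of signal
left = np.sin(2 * np.pi * f_left * t)
right = np.sin(2 * np.pi * (f_left + beat) * t)
# The instantaneous IPD sweeps a full cycle every 1/beat seconds
ipd = (2 * np.pi * beat * t) % (2 * np.pi)
print(ipd[0], ipd.max())                  # starts at 0, approaches 2*pi
```

A neuron tuned to a preferred IPD is thus probed at every phase disparity within each beat cycle, which is what lets the experimenters compare time-varying with static IPD tuning.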

  15. Auditory agnosia.

    PubMed

    Slevc, L Robert; Shell, Alison R

    2015-01-01

    Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition.

  16. Comparisons of time explicit hybrid kinetic-fluid code Architect for Plasma Wakefield Acceleration with a full PIC code

    NASA Astrophysics Data System (ADS)

    Massimo, F.; Atzeni, S.; Marocchino, A.

    2016-12-01

Architect, a time-explicit hybrid code designed to perform quick simulations of electron-driven plasma wakefield acceleration, is described. In order to obtain beam quality acceptable for applications, control of the beam-plasma dynamics is necessary. Particle-in-cell (PIC) codes represent the state-of-the-art technique for investigating the underlying physics and possible experimental scenarios; however, PIC codes demand heavy computational resources. Architect substantially reduces this need by using a hybrid approach: relativistic electron bunches are treated kinetically, as in a PIC code, while the background plasma is treated as a fluid. Cylindrical symmetry is assumed for the solution of the electromagnetic fields and the fluid equations. In this paper, both the underlying algorithms and a comparison with a fully three-dimensional PIC code are reported. The comparison highlights the good agreement between the two models up to weakly nonlinear regimes. In highly nonlinear regimes the two models disagree only in a localized region, where the plasma electrons expelled by the bunch close up at the end of the first plasma oscillation.
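The fluid half of such a hybrid model can be illustrated with the textbook one-dimensional linear wakefield response, (d²/dξ² + kp²) δn = −kp² n_b(ξ) in the co-moving coordinate ξ (normalized units; sign conventions vary with the choice of ξ, and this is a sketch of the linearized fluid side only, not Architect's actual cylindrical algorithm):

```python
import numpy as np

kp = 1.0                                      # plasma wavenumber, normalized
xi = np.linspace(0.0, 40.0, 4000)             # co-moving coordinate behind driver
dxi = xi[1] - xi[0]
nb = np.exp(-0.5 * ((xi - 5.0) / 1.0) ** 2)   # Gaussian drive bunch (illustrative)

# Symplectic-Euler integration of (d2/dxi2 + kp^2) dn = -kp^2 * nb
dn = np.zeros_like(xi)
v = 0.0
for i in range(1, len(xi)):
    v += -kp ** 2 * (dn[i - 1] + nb[i - 1]) * dxi
    dn[i] = dn[i - 1] + v * dxi

# Behind the driver the wake oscillates at the plasma wavelength 2*pi/kp;
# estimate the period from zero crossings of the tail
tail_xi = xi[xi > 15.0]
tail = dn[xi > 15.0]
cross = np.where(np.diff(np.sign(tail)) != 0)[0]
period = 2.0 * np.mean(np.diff(tail_xi[cross]))
print(period)                                 # close to 2*pi/kp ~ 6.28
```

The trailing oscillation at the plasma wavelength is the linear precursor of the nonlinear "closing up" of expelled plasma electrons where the abstract reports the two models diverging.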

  17. Design Report for the Synchronized Position, Velocity, and Time Code Generator

    DTIC Science & Technology

    2015-08-01

ARL-MR-0901 ● AUG 2015 ● US Army Research Laboratory. Design Report for the Synchronized Position, Velocity, and Time Code Generator, by Brian T Mays, Sensors and Electron Devices Directorate, ARL.

  18. Persistent perceptual delay for head movement onset relative to auditory stimuli of different durations and rise times.

    PubMed

    Barnett-Cowan, Michael; Raeder, Sophie M; Bülthoff, Heinrich H

    2012-07-01

The perception of simultaneity between auditory and vestibular information is crucially important for maintaining a coherent representation of the acoustic environment whenever the head moves. It has recently been reported, however, that despite having similar transduction latencies, vestibular stimuli are perceived significantly later than auditory stimuli when simultaneously generated. This suggests that the perceptual latency of a head movement is longer than that of a co-occurring sound. However, these studies paired a vestibular stimulation of long duration (~1 s) and of a continuously changing temporal envelope with a brief (10-50 ms) sound pulse. In the present study, the stimuli were matched for temporal envelope duration and shape. Participants judged the temporal order of the two stimuli, the onset of an active head movement and the onset of brief (50 ms) or long (1,400 ms) sounds with a square- or raised-cosine-shaped envelope. Consistent with previous reports, head movement onset had to precede the onset of a brief sound by about 73 ms in order for the stimuli to be perceived as simultaneous. Head movements paired with long square sounds (~100 ms) did not differ significantly from those paired with brief sounds. Surprisingly, head movements paired with long raised-cosine sounds (~115 ms) had to be presented even earlier than with brief stimuli. This additional lead time could not be accounted for by differences in the comparison stimulus characteristics (temporal envelope duration and shape). Rather, differences between sound conditions were found to be attributable to variability in the time for head movement to reach peak velocity: the head moved faster when paired with a brief sound. The persistent lead time required for vestibular stimulation provides further evidence that the perceptual latency of vestibular stimulation is greater than that of the other senses.

  19. Auditory Learning Using a Portable Real-Time Vocoder: Preliminary Findings

    PubMed Central

    Pisoni, David B.

    2015-01-01

    Purpose Although traditional study of auditory training has been in controlled laboratory settings, interest has been increasing in more interactive options. The authors examine whether such interactive training can result in short-term perceptual learning, and the range of perceptual skills it impacts. Method Experiments 1 (N = 37) and 2 (N = 21) used pre- and posttest measures of speech and nonspeech recognition to find evidence of learning (within subject) and to compare the effects of 3 kinds of training (between subject) on the perceptual abilities of adults with normal hearing listening to simulations of cochlear implant processing. Subjects were given interactive, standard lab-based, or control training experience for 1 hr between the pre- and posttest tasks (unique sets across Experiments 1 & 2). Results Subjects receiving interactive training showed significant learning on sentence recognition in quiet task (Experiment 1), outperforming controls but not lab-trained subjects following training. Training groups did not differ significantly on any other task, even those directly involved in the interactive training experience. Conclusions Interactive training has the potential to produce learning in 1 domain (sentence recognition in quiet), but the particulars of the present training method (short duration, high complexity) may have limited benefits to this single criterion task. PMID:25674884

  20. The time course of auditory and language-specific mechanisms in compensation for sibilant assimilation.

    PubMed

    Clayards, Meghan; Niebuhr, Oliver; Gaskell, M Gareth

    2015-01-01

    Models of spoken-word recognition differ on whether compensation for assimilation is language-specific or depends on general auditory processing. English and French participants were taught words that began or ended with the sibilants /s/ and /∫/. Both languages exhibit some assimilation in sibilant sequences (e.g., /s/ becomes like [∫] in dress shop and classe chargée), but they differ in the strength and predominance of anticipatory versus carryover assimilation. After training, participants were presented with novel words embedded in sentences, some of which contained an assimilatory context either preceding or following. A continuum of target sounds ranging from [s] to [∫] was spliced into the novel words, representing a range of possible assimilation strengths. Listeners' perceptions were examined using a visual-world eyetracking paradigm in which the listener clicked on pictures matching the novel words. We found two distinct language-general context effects: a contrastive effect when the assimilating context preceded the target, and flattening of the sibilant categorization function (increased ambiguity) when the assimilating context followed. Furthermore, we found that English but not French listeners were able to resolve the ambiguity created by the following assimilatory context, consistent with their greater experience with assimilation in this context. The combination of these mechanisms allows listeners to deal flexibly with variability in speech forms.

  1. Auditory Imagination.

    ERIC Educational Resources Information Center

    Croft, Martyn

    Auditory imagination is used in this paper to describe a number of issues and activities related to sound and having to do with listening, thinking, recalling, imagining, reshaping, creating, and uttering sounds and words. Examples of auditory imagination in religious and literary works are cited that indicate a belief in an imagined, expected, or…

  2. The cortical modulation of stimulus-specific adaptation in the auditory midbrain and thalamus: a potential neuronal correlate for predictive coding

    PubMed Central

    Malmierca, Manuel S.; Anderson, Lucy A.; Antunes, Flora M.

    2015-01-01

    To follow an ever-changing auditory scene, the auditory brain is continuously creating a representation of the past to form expectations about the future. Unexpected events will produce an error in the predictions that should “trigger” the network’s response. Indeed, neurons in the auditory midbrain, thalamus and cortex, respond to rarely occurring sounds while adapting to frequently repeated ones, i.e., they exhibit stimulus specific adaptation (SSA). SSA cannot be explained solely by intrinsic membrane properties, but likely involves the participation of the network. Thus, SSA is envisaged as a high order form of adaptation that requires the influence of cortical areas. However, present research supports the hypothesis that SSA, at least in its simplest form (i.e., to frequency deviants), can be transmitted in a bottom-up manner through the auditory pathway. Here, we briefly review the underlying neuroanatomy of the corticofugal projections before discussing state of the art studies which demonstrate that SSA present in the medial geniculate body (MGB) and inferior colliculus (IC) is not inherited from the cortex but can be modulated by the cortex via the corticofugal pathways. By modulating the gain of neurons in the thalamus and midbrain, the auditory cortex (AC) would refine SSA subcortically, preventing irrelevant information from reaching the cortex. PMID:25805974

  3. Robust Timing Synchronization for Aviation Communications, and Efficient Modulation and Coding Study for Quantum Communication

    NASA Technical Reports Server (NTRS)

    Xiong, Fugin

    2003-01-01

    One half of Professor Xiong's effort will investigate robust timing synchronization schemes for dynamically varying characteristics of aviation communication channels. The other half of his time will focus on efficient modulation and coding study for the emerging quantum communications.

  4. Central projections of auditory nerve fibers in the barn owl.

    PubMed

    Carr, C E; Boudreau, R E

    1991-12-08

    The central projections of the auditory nerve were examined in the barn owl. Each auditory nerve fiber enters the brain and divides to terminate in both the cochlear nucleus angularis and the cochlear nucleus magnocellularis. This division parallels a functional division into intensity and time coding in the auditory system. The lateral branch of the auditory nerve innervates the nucleus angularis and gives rise to a major and a minor terminal field. The terminals range in size and shape from small boutons to large irregular boutons with thorn-like appendages. The medial branch of the auditory nerve conveys phase information to the cells of the nucleus magnocellularis via large axosomatic endings or end bulbs of Held. Each medial branch divides to form 3-6 end bulbs along the rostrocaudal orientation of a single tonotopic band, and each magnocellular neuron receives 1-4 end bulbs. The end bulb envelops the postsynaptic cell body and forms large numbers of synapses. The auditory nerve profiles contain round clear vesicles and form punctate asymmetric synapses on both somatic spines and the cell body.

  5. Auditory pathways: anatomy and physiology.

    PubMed

    Pickles, James O

    2015-01-01

This chapter outlines the anatomy and physiology of the auditory pathways. After a brief overview of the external ear, middle ear, and cochlea, the responses of auditory nerve fibers are described. The central nervous system is analyzed in more detail. A scheme is provided to help understand the complex and multiple auditory pathways running through the brainstem. The multiple pathways are based on the need to preserve accurate timing while extracting complex spectral patterns in the auditory input. The auditory nerve fibers branch to give two pathways, a ventral sound-localizing stream and a dorsal, mainly pattern-recognition stream, which innervate the different divisions of the cochlear nucleus. The outputs of the two streams, with their two types of analysis, are progressively combined in the inferior colliculus and onwards, to produce the representation of what can be called the "auditory objects" in the external world. The progressive extraction of critical features in the auditory stimulus in the different levels of the central auditory system, from cochlear nucleus to auditory cortex, is described. In addition, the auditory centrifugal system, running from cortex in multiple stages to the organ of Corti of the cochlea, is described.

  6. Quantitative Electromyographic Analysis of Reaction Time to External Auditory Stimuli in Drug-Naïve Parkinson's Disease

    PubMed Central

    Kwon, Do-Young; Park, Byung Kyu; Kim, Ji Won; Eom, Gwang-Moon; Hong, Junghwa; Koh, Seong-Beom; Park, Kun-Woo

    2014-01-01

Evaluation of motor symptoms in Parkinson's disease (PD) is still based on clinical rating scales by clinicians. Reaction time (RT) is the time interval between a specific stimulus and the start of muscle response. The aim of this study was to identify the characteristics of RT responses in PD patients using electromyography (EMG) and to elucidate the relationship between RT and clinical features of PD. The EMG activity of 31 PD patients was recorded during isometric muscle contraction. RT was defined as the time latency between an auditory beep and responsive EMG activity. PD patients demonstrated significant delays in both initiation and termination of muscle contraction compared with controls. Cardinal motor symptoms of PD were closely correlated with RT. RT was longer on the more-affected side and in more-advanced PD stages. Frontal cognitive function, which is indicative of motor programming and movement regulation and perseveration, was also closely related with RT. In conclusion, prolonged RT is a characteristic motor feature of PD, and it could be used as a sensitive tool for motor function assessment in PD patients. Further investigations are required to clarify the clinical impact of RT on the activities of daily living of patients with PD. PMID:24724037

  7. Time sequence of auditory nerve and spiral ganglion cell degeneration following chronic kanamycin-induced deafness in the guinea pig.

    PubMed

    Kong, W J; Yin, Z D; Fan, G R; Li, D; Huang, X

    2010-05-17

    We investigated the time sequence of morphological changes of the spiral ganglion cell (SGC) and auditory nerve (AN) following chronic kanamycin-induced deafness. Guinea pigs were treated with kanamycin by subcutaneous injection at 500 mg/kg per day for 7 days. Histological changes in hair cells, SGCs, Schwann cells and the area of the cross-sectional of the AN with vestibular ganglion (VG) in the internal acoustic meatus were quantified at 1, 7, 14, 28, 56 and 70 days after kanamycin treatment. Outer hair cells decreased at 7 and 14 days. Loss of inner hair cells occurred at 14 and 28 days. The cross-sectional area of the AN with VG increased at 1 day and decreased shortly following loss of SGCs and Schwann cells at 7, 14 and 28 days after deafening. There was a similar time course of morphological changes in the overall cochlea and the basal turn. Thus, the effects of kanamycin on hair cells, spiral ganglion and Schwann cells are progressive. Early degeneration of SGC and Schwann cell mainly results from the direct toxic effect of kanamycin. However, multiple factors such as loss of hair cell, degeneration of Schwann cell and the progressive damage of kanamycin, may participate in the late degeneration process of SGCs. The molecular mechanism of the degeneration of SGC and Schwann cell should be investigated in the future. Moreover, there is a different time sequence of cell degeneration between acute and chronic deafness by kanamycin.

  8. juwvid: Julia code for time-frequency analysis

    NASA Astrophysics Data System (ADS)

    Kawahara, Hajime

    2017-02-01

    Juwvid performs time-frequency analysis. Written in Julia, it uses a modified version of the Wigner distribution, the pseudo Wigner distribution, and the short-time Fourier transform from MATLAB GPL programs, tftb-0.2. The modification includes the zero-padding FFT, the non-uniform FFT, the adaptive algorithm by Stankovic, Dakovic, Thayaparan 2013, the S-method, the L-Wigner distribution, and the polynomial Wigner-Ville distribution.
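Of the analyses listed, the short-time Fourier transform is the simplest to sketch. A minimal NumPy version (illustrative only; juwvid itself is written in Julia and adds the Wigner-class refinements listed above), applied to a chirp whose time-frequency ridge should climb from frame to frame:

```python
import numpy as np

def stft_mag(x, win_len=128, hop=32):
    """Magnitude short-time Fourier transform: frames x frequency bins."""
    win = np.hanning(win_len)
    frames = [x[i:i + win_len] * win
              for i in range(0, len(x) - win_len, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))

fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
# Linear chirp: instantaneous frequency 50 + 200*t Hz (50 -> 250 Hz)
chirp = np.sin(2 * np.pi * (50.0 * t + 100.0 * t ** 2))
S = stft_mag(chirp)
ridge = S.argmax(axis=1)       # dominant bin per frame: should rise over time
print(ridge[0], ridge[-1])
```

Window length trades time against frequency resolution in the STFT; the Wigner-class distributions that juwvid also implements are designed to sharpen exactly this trade-off.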

  9. From ear to body: the auditory-motor loop in spatial cognition

    PubMed Central

    Viaud-Delmon, Isabelle; Warusfel, Olivier

    2014-01-01

Spatial memory is mainly studied through the visual sensory modality: navigation tasks in humans rarely integrate dynamic and spatial auditory information. In order to study how a spatial scene can be memorized on the basis of auditory and idiothetic cues only, we constructed an auditory equivalent of the Morris water maze, a task widely used to assess spatial learning and memory in rodents. Participants were equipped with wireless headphones, which delivered a soundscape updated in real time according to their movements in 3D space. A wireless tracking system (video infrared with passive markers) was used to send the coordinates of the subject's head to the sound rendering system. The rendering system used advanced HRTF-based synthesis of directional cues and room acoustic simulation for the auralization of a realistic acoustic environment. Participants were guided blindfolded in an experimental room. Their task was to explore a delimitated area in order to find a hidden auditory target, i.e., a sound that was only triggered when walking on a precise location of the area. The position of this target could be coded in relationship to auditory landmarks constantly rendered during the exploration of the area. The task was composed of a practice trial, 6 acquisition trials during which they had to memorize the location of the target, and 4 test trials in which some aspects of the auditory scene were modified. The task ended with a probe trial in which the auditory target was removed. The configuration of the search paths showed how auditory information was coded to memorize the position of the target and suggested that space can be efficiently coded without visual information in normally sighted subjects. In conclusion, space representation can be based on sensorimotor and auditory cues only, providing another argument in favor of the hypothesis that the brain has access to a modality-invariant representation of external space. PMID:25249933

  10. Subjective and Real Time: Coding Under Different Drug States

    PubMed Central

    Sanchez-Castillo, Hugo; Taylor, Kathleen M.; Ward, Ryan D.; Paz-Trejo, Diana B.; Arroyo-Araujo, Maria; Castillo, Oscar Galicia; Balsam, Peter D.

    2016-01-01

Organisms are constantly extracting information from the temporal structure of the environment, which allows them to select appropriate actions and predict impending changes. Several lines of research have suggested that interval timing is modulated by the dopaminergic system. It has been proposed that higher levels of dopamine cause an internal clock to speed up, whereas less dopamine causes a deceleration of the clock. In most experiments the subjects are first trained to perform a timing task while drug free. Consequently, most of what is known about the influence of dopaminergic modulation of timing concerns well-established timing performance. In the current study, the impact of altered dopamine on the acquisition of temporal control was the focal question. Thirty male Sprague-Dawley rats were distributed randomly into three different groups (haloperidol, d-amphetamine or vehicle). Each animal received an injection 15 min prior to the start of every session from the beginning of interval training. The subjects were trained on a Fixed Interval (FI) 16s schedule followed by training on a peak procedure in which 64s non-reinforced peak trials were intermixed with FI trials. In a final test session all subjects were given vehicle injections and 10 consecutive non-reinforced peak trials to see if training under drug conditions altered the encoding of time. The current study suggests that administration of drugs that modulate dopamine does not alter the encoding of temporal durations but does acutely affect the initiation of responding. PMID:27087743

  11. Interference between postural control and spatial vs. non-spatial auditory reaction time tasks in older adults.

    PubMed

    Fuhrman, Susan I; Redfern, Mark S; Jennings, J Richard; Furman, Joseph M

    2015-01-01

This study investigated whether spatial aspects of an information processing task influence dual-task interference. Two groups (Older/Young) of healthy adults participated in dual-task experiments. Two auditory information processing tasks included a frequency discrimination choice reaction time task (non-spatial task) and a lateralization choice reaction time task (spatial task). Postural tasks included combinations of standing with eyes open or eyes closed on either a fixed floor or a sway-referenced floor. Reaction times and postural sway via center of pressure were recorded. Baseline measures of reaction time and sway were subtracted from the corresponding dual-task results to calculate reaction time task costs and postural task costs. Reaction time task cost increased with eye closure (p = 0.01), sway-referenced flooring (p < 0.0001), and the spatial task (p = 0.04). Additionally, a significant (p = 0.05) task × vision × age interaction indicated that older subjects had a significant vision × task interaction whereas young subjects did not. However, when analyzed by age group, the young group showed minimal differences in interference for the spatial and non-spatial tasks with eyes open, but showed increased interference on the spatial relative to the non-spatial task with eyes closed. In contrast, older subjects demonstrated increased interference on the spatial relative to the non-spatial task with eyes open, but not with eyes closed. These findings suggest that visual-spatial interference may occur in older subjects when vision is used to maintain posture.

  12. 14 CFR 234.10 - Voluntary disclosure of on-time performance codes.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... codes. 234.10 Section 234.10 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION PROCEEDINGS) ECONOMIC REGULATIONS AIRLINE SERVICE QUALITY PERFORMANCE REPORTS § 234.10 Voluntary disclosure of on-time performance codes. (a) Any air carrier may determine, in accordance with the...

  13. Differential Space-Time Coding Scheme Using Star Quadrature Amplitude Modulation Method

    NASA Astrophysics Data System (ADS)

    Yu, Xiangbin; Xu, DaZhuan; Bi, Guangguo

    2006-12-01

    Differential space-time coding (DSTC) has received much interest because it obviates the need for channel state information at the receiver while maintaining the desirable properties of space-time coding techniques. In this paper, by introducing the star quadrature amplitude modulation (star QAM) method, two multiple-amplitude DSTC schemes are proposed. One is based on the differential unitary space-time coding (DUSTC) scheme, and the other on the differential orthogonal space-time coding (DOSTC) scheme. Corresponding bit-error-rate (BER) performance and coding-gain analyses are given. By using multiple-amplitude modulation, the proposed schemes avoid the performance loss that conventional DSTC schemes based on phase-shift keying (PSK) modulation suffer at high spectral efficiency. Compared with conventional PSK-based DSTC schemes, the developed schemes achieve higher spectral efficiency, by carrying information not only on phases but also on amplitudes, and higher coding gain. Moreover, the first scheme admits low-complexity differential modulation and different code rates and can be applied to any number of transmit antennas, while the second scheme has a simple decoder and a high code rate in the case of 3 and 4 antennas. The simulation results show that our schemes have lower BER when compared with conventional DUSTC and DOSTC schemes.

  14. Contextual modulation of primary visual cortex by auditory signals

    PubMed Central

    Paton, A. T.

    2017-01-01

    Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195–201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256–1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044015

  15. Contextual modulation of primary visual cortex by auditory signals.

    PubMed

    Petro, L S; Paton, A T; Muckli, L

    2017-02-19

    Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195-201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256-1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame. This article is part of the themed issue 'Auditory and visual scene analysis'.

  16. Auditory neglect.

    PubMed Central

    De Renzi, E; Gentilini, M; Barbieri, C

    1989-01-01

    Auditory neglect was investigated in normal controls and in patients with a recent unilateral hemispheric lesion, by requiring them to detect the interruptions that occurred in one ear in a sound delivered through earphones either mono-aurally or binaurally. Control patients accurately detected interruptions. One left brain damaged (LBD) patient missed only once in the ipsilateral ear while seven of the 30 right brain damaged (RBD) patients missed more than one signal in the monoaural test and nine patients did the same in the binaural test. Omissions were always more marked in the left ear and in the binaural test with a significant ear by test interaction. The lesion of these patients was in the parietal lobe (five patients) and the thalamus (four patients). The relation of auditory neglect to auditory extinction was investigated and found to be equivocal, in that there were seven RBD patients who showed extinction, but not neglect and, more importantly, two patients who exhibited the opposite pattern, thus challenging the view that extinction is a minor form of neglect. Also visual and auditory neglect were not consistently correlated, the former being present in nine RBD patients without auditory neglect and the latter in two RBD patients without visual neglect. The finding that in some RBD patients with auditory neglect omissions also occurred, though with less frequency, in the right ear, points to a right hemisphere participation in the deployment of attention not only to the contralateral, but also to the ipsilateral space. PMID:2732732

  17. Multiplexed fluorescence readout using time responses of color coded signals for biomolecular detection.

    PubMed

    Nishimura, Takahiro; Ogura, Yusuke; Tanida, Jun

    2016-12-01

    Fluorescence readout is an important technique for detecting biomolecules. In this paper, we present a multiplexed fluorescence readout method using time-varied fluorescence signals. To generate the fluorescence signals, coded strands and a set of universal molecular beacons are introduced. Each coded strand represents the existence of an assigned target molecule. The coded strands have coded sequences to generate temporary fluorescence signals through binding to the molecular beacons. The signal generating processes are modeled based on the reaction kinetics between the coded strands and molecular beacons. The model is used to decode the detected fluorescence signals using maximum likelihood estimation. Multiplexed fluorescence readout was experimentally demonstrated with three molecular beacons. Numerical analysis showed that the readout accuracy was enhanced by the use of time-varied fluorescence signals.

  18. Multiplexed fluorescence readout using time responses of color coded signals for biomolecular detection

    PubMed Central

    Nishimura, Takahiro; Ogura, Yusuke; Tanida, Jun

    2016-01-01

    Fluorescence readout is an important technique for detecting biomolecules. In this paper, we present a multiplexed fluorescence readout method using time-varied fluorescence signals. To generate the fluorescence signals, coded strands and a set of universal molecular beacons are introduced. Each coded strand represents the existence of an assigned target molecule. The coded strands have coded sequences to generate temporary fluorescence signals through binding to the molecular beacons. The signal generating processes are modeled based on the reaction kinetics between the coded strands and molecular beacons. The model is used to decode the detected fluorescence signals using maximum likelihood estimation. Multiplexed fluorescence readout was experimentally demonstrated with three molecular beacons. Numerical analysis showed that the readout accuracy was enhanced by the use of time-varied fluorescence signals. PMID:28018742

  19. Differential Cooperative Communications with Space-Time Network Coding

    DTIC Science & Technology

    2010-01-01

    The received signal at U_n in the m-th time slot of Phase I is

        y_{mn}^k = sqrt(P_t) g_{mn}^k v_m^k + w_{mn}^k,   (1)

    where P_t is the power constraint of the user nodes, w... rate (SER) at U_n for the symbols from U_m is p_mn; the β_mn are independent Bernoulli random variables with distribution

        P(β_mn = 1) = 1 − p_mn,   P(β_mn = 0) = p_mn.   (17)

    The SER for M-QAM modulation can be expressed as [12]

        p_mn = F_2(1 + b_q γ_mn / sin²θ),   (18)

    where b_q = b_QAM / 2 = 3 / (2(M + 1)) and γ_mn

  20. Quantum circuit for optimal eavesdropping in quantum key distribution using phase-time coding

    SciTech Connect

    Kronberg, D. A.; Molotkov, S. N.

    2010-07-15

    A quantum circuit is constructed for optimal eavesdropping on quantum key distribution proto- cols using phase-time coding, and its physical implementation based on linear and nonlinear fiber-optic components is proposed.

  1. MEG dual scanning: a procedure to study real-time auditory interaction between two persons

    PubMed Central

    Baess, Pamela; Zhdanov, Andrey; Mandel, Anne; Parkkonen, Lauri; Hirvenkari, Lotta; Mäkelä, Jyrki P.; Jousmäki, Veikko; Hari, Riitta

    2012-01-01

    Social interactions fill our everyday life and put strong demands on our brain function. However, the possibilities for studying the brain basis of social interaction are still technically limited, and even modern brain imaging studies of social cognition typically monitor just one participant at a time. We present here a method to connect and synchronize two faraway neuromagnetometers. With this method, two participants at two separate sites can interact with each other through a stable real-time audio connection with minimal delay and jitter. The magnetoencephalographic (MEG) and audio recordings of both laboratories are accurately synchronized for joint offline analysis. The concept can be extended to connecting multiple MEG devices around the world. As a proof of concept of the MEG-to-MEG link, we report the results of time-sensitive recordings of cortical evoked responses to sounds delivered at laboratories separated by 5 km. PMID:22514530

  2. Behind the scenes of auditory perception.

    PubMed

    Shamma, Shihab A; Micheyl, Christophe

    2010-06-01

    'Auditory scenes' often contain contributions from multiple acoustic sources. These are usually heard as separate auditory 'streams', which can be selectively followed over time. How and where these auditory streams are formed in the auditory system is one of the most fascinating questions facing auditory scientists today. Findings published within the past two years indicate that both cortical and subcortical processes contribute to the formation of auditory streams, and they raise important questions concerning the roles of primary and secondary areas of auditory cortex in this phenomenon. In addition, these findings underline the importance of taking into account the relative timing of neural responses, and the influence of selective attention, in the search for neural correlates of the perception of auditory streams.

  3. Real-time C Code Generation in Ptolemy II for the Giotto Model of Computation

    DTIC Science & Technology

    2009-05-20

    Real-time C Code Generation in Ptolemy II for the Giotto Model of Computation. Shanna-Shaye Forbes, Electrical Engineering and Computer Sciences... periodic and there are multiple modes of operation. Ptolemy II is a university-based open source modeling and simulation framework that supports model

  4. The Effect of Dopaminergic Medication on Beat-Based Auditory Timing in Parkinson’s Disease

    PubMed Central

    Cameron, Daniel J.; Pickett, Kristen A.; Earhart, Gammon M.; Grahn, Jessica A.

    2016-01-01

    Parkinson’s disease (PD) adversely affects timing abilities. Beat-based timing is a mechanism that times events relative to a regular interval, such as the “beat” in musical rhythm, and is impaired in PD. It is unknown if dopaminergic medication influences beat-based timing in PD. Here, we tested beat-based timing over two sessions in participants with PD (OFF then ON dopaminergic medication) and in unmedicated control participants. People with PD and control participants completed two tasks. The first was a discrimination task in which participants compared two rhythms and determined whether they were the same or different. Rhythms either had a beat structure (metric simple rhythms) or did not (metric complex rhythms), as in previous studies. Discrimination accuracy was analyzed to test for the effects of beat structure, as well as differences between participants with PD and controls, and effects of medication (PD group only). The second task was the Beat Alignment Test (BAT), in which participants listened to music with regular tones superimposed, and responded as to whether the tones were “ON” or “OFF” the beat of the music. Accuracy was analyzed to test for differences between participants with PD and controls, and for an effect of medication in patients. Both patients and controls discriminated metric simple rhythms better than metric complex rhythms. Controls also improved at the discrimination task in the second vs. first session, whereas people with PD did not. For participants with PD, the difference in performance between metric simple and metric complex rhythms was greater (sensitivity to changes in simple rhythms increased and sensitivity to changes in complex rhythms decreased) when ON vs. OFF medication. Performance also worsened with disease severity. For the BAT, no group differences or effects of medication were found. Overall, these findings suggest that timing is impaired in PD, and that dopaminergic medication influences beat

  5. Feature Assignment in Perception of Auditory Figure

    ERIC Educational Resources Information Center

    Gregg, Melissa K.; Samuel, Arthur G.

    2012-01-01

    Because the environment often includes multiple sounds that overlap in time, listeners must segregate a sound of interest (the auditory figure) from other co-occurring sounds (the unattended auditory ground). We conducted a series of experiments to clarify the principles governing the extraction of auditory figures. We distinguish between auditory…

  6. Central auditory imperception.

    PubMed

    Snow, J B; Rintelmann, W F; Miller, J M; Konkle, D F

    1977-09-01

    The development of clinically applicable techniques for the evaluation of hearing impairment caused by lesions of the central auditory pathways has increased clinical interest in the anatomy and physiology of these pathways. A conceptualization of present understanding of the anatomy and physiology of the central auditory pathways is presented. Clinical tests based on reduction of redundancy of the speech message, degradation of speech, and binaural interactions are presented. Specifically, performance-intensity functions, filtered speech tests, competing message tests, and time-compressed speech tests are presented, with the emphasis on our experience with time-compressed speech tests. With proper use of these tests, not only can central auditory impairments be detected, but brain stem lesions can be distinguished from cortical lesions.

  7. Accuracy and time requirements of a bar-code inventory system for medical supplies.

    PubMed

    Hanson, L B; Weinswig, M H; De Muth, J E

    1988-02-01

    The effects of implementing a bar-code system for issuing medical supplies to nursing units at a university teaching hospital were evaluated. Data on the time required to issue medical supplies to three nursing units at a 480-bed, tertiary-care teaching hospital were collected (1) before the bar-code system was implemented (i.e., when the manual system was in use), (2) one month after implementation, and (3) four months after implementation. At the same times, the accuracy of the central supply perpetual inventory was monitored using 15 selected items. One-way analysis of variance tests were done to determine any significant differences between the bar-code and manual systems. Using the bar-code system took longer than using the manual system because of a significant difference in the time required for order entry into the computer. Multiple-use requirements of the central supply computer system made entering bar-code data a much slower process. There was, however, a significant improvement in the accuracy of the perpetual inventory. Using the bar-code system for issuing medical supplies to the nursing units takes longer than using the manual system. However, the accuracy of the perpetual inventory was significantly improved with the implementation of the bar-code system.

  8. Alamouti-Type Space-Time Coding for Free-Space Optical Communication with Direct Detection

    NASA Astrophysics Data System (ADS)

    Simon, M. K.; Vilnrotter, V.

    2003-11-01

    In optical communication systems employing direct detection at the receiver, intensity modulations such as on-off keying (OOK) or pulse-position modulation (PPM) are commonly used to convey the information. Consider the possibility of applying space-time coding in such a scenario, using, for example, an Alamouti-type coding scheme [1]. Implicit in the Alamouti code is the fact that the modulation that defines the signal set is such that it is meaningful to transmit and detect both the signal and its negative. While modulations such as phase-shift keying (PSK) and quadrature amplitude modulation (QAM) naturally fall into this class, OOK and PPM do not since the signal polarity (phase) would not be detected at the receiver. We investigate a modification of the Alamouti code to be used with such modulations that has the same desirable properties as the conventional Alamouti code but does not rely on the necessity of transmitting the negative of a signal.
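    To make the contrast concrete, the following sketch shows the conventional Alamouti scheme that the abstract's modification starts from, not the paper's modified code for OOK/PPM. The channel gains and symbol values are illustrative. Two symbols are spread across two antennas and two time slots, and a linear combiner recovers them exactly in a noiseless flat-fading channel.

    ```python
    # Conventional Alamouti space-time block code: a minimal sketch with
    # illustrative values (not the paper's modified scheme for direct detection).

    def alamouti_encode(s1, s2):
        """Return the 2x2 transmission matrix: rows = time slots, cols = antennas."""
        return [[s1, s2],
                [-s2.conjugate(), s1.conjugate()]]

    def alamouti_decode(r1, r2, h1, h2):
        """Linear combining for one receive antenna with channel gains h1, h2."""
        norm = abs(h1) ** 2 + abs(h2) ** 2
        s1_hat = (h1.conjugate() * r1 + h2 * r2.conjugate()) / norm
        s2_hat = (h2.conjugate() * r1 - h1 * r2.conjugate()) / norm
        return s1_hat, s2_hat

    # Noiseless check with illustrative channel gains and QPSK symbols.
    h1, h2 = 0.8 + 0.3j, -0.5 + 0.9j
    s1, s2 = 1 + 1j, -1 + 1j
    X = alamouti_encode(s1, s2)
    r1 = h1 * X[0][0] + h2 * X[0][1]   # received sample, slot 1
    r2 = h1 * X[1][0] + h2 * X[1][1]   # received sample, slot 2
    s1_hat, s2_hat = alamouti_decode(r1, r2, h1, h2)
    assert abs(s1_hat - s1) < 1e-9 and abs(s2_hat - s2) < 1e-9
    ```

    The orthogonal structure, which relies on transmitting `-s2*`, is exactly what breaks down for intensity modulations that cannot convey a signal's negative; the paper's modification must achieve a comparable separation without it.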

  9. Interactions across Multiple Stimulus Dimensions in Primary Auditory Cortex

    PubMed Central

    Zhuo, Ran; Xue, Hongbo; Chambers, Anna R.; Kolaczyk, Eric; Polley, Daniel B.

    2016-01-01

    Although sensory cortex is thought to be important for the perception of complex objects, its specific role in representing complex stimuli remains unknown. Complex objects are rich in information along multiple stimulus dimensions. The position of cortex in the sensory hierarchy suggests that cortical neurons may integrate across these dimensions to form a more gestalt representation of auditory objects. Yet, studies of cortical neurons typically explore single or few dimensions due to the difficulty of determining optimal stimuli in a high dimensional stimulus space. Evolutionary algorithms (EAs) provide a potentially powerful approach for exploring multidimensional stimulus spaces based on real-time spike feedback, but two important issues arise in their application. First, it is unclear whether it is necessary to characterize cortical responses to multidimensional stimuli or whether it suffices to characterize cortical responses to a single dimension at a time. Second, quantitative methods for analyzing complex multidimensional data from an EA are lacking. Here, we apply a statistical method for nonlinear regression, the generalized additive model (GAM), to address these issues. The GAM quantitatively describes the dependence between neural response and all stimulus dimensions. We find that auditory cortical neurons in mice are sensitive to interactions across dimensions. These interactions are diverse across the population, indicating significant integration across stimulus dimensions in auditory cortex. This result strongly motivates using multidimensional stimuli in auditory cortex. Together, the EA and the GAM provide a novel quantitative paradigm for investigating neural coding of complex multidimensional stimuli in auditory and other sensory cortices. PMID:27622211

  10. Auditory adaptation in voice perception.

    PubMed

    Schweinberger, Stefan R; Casper, Christoph; Hauthal, Nadine; Kaufmann, Jürgen M; Kawahara, Hideki; Kloth, Nadine; Robertson, David M C; Simpson, Adrian P; Zäske, Romi

    2008-05-06

    Perceptual aftereffects following adaptation to simple stimulus attributes (e.g., motion, color) have been studied for hundreds of years. A striking recent discovery was that adaptation also elicits contrastive aftereffects in visual perception of complex stimuli and faces [1-6]. Here, we show for the first time that adaptation to nonlinguistic information in voices elicits systematic auditory aftereffects. Prior adaptation to male voices causes a voice to be perceived as more female (and vice versa), and these auditory aftereffects were measurable even minutes after adaptation. By contrast, crossmodal adaptation effects were absent, both when male or female first names and when silently articulating male or female faces were used as adaptors. When sinusoidal tones (with frequencies matched to male and female voice fundamental frequencies) were used as adaptors, no aftereffects on voice perception were observed. This excludes explanations for the voice aftereffect in terms of both pitch adaptation and postperceptual adaptation to gender concepts and suggests that contrastive voice-coding mechanisms may routinely influence voice perception. The role of adaptation in calibrating properties of high-level voice representations indicates that adaptation is not confined to vision but is a ubiquitous mechanism in the perception of nonlinguistic social information from both faces and voices.

  11. Effects of spatial response coding on distractor processing: evidence from auditory spatial negative priming tasks with keypress, joystick, and head movement responses.

    PubMed

    Möller, Malte; Mayr, Susanne; Buchner, Axel

    2015-01-01

    Prior studies of spatial negative priming indicate that distractor-assigned keypress responses are inhibited as part of visual, but not auditory, processing. However, recent evidence suggests that static keypress responses are not directly activated by spatially presented sounds and, therefore, might not call for an inhibitory process. In order to investigate the role of response inhibition in auditory processing, we used spatially directed responses that have been shown to result in direct response activation to irrelevant sounds. Participants localized a target sound by performing manual joystick responses (Experiment 1) or head movements (Experiment 2B) while ignoring a concurrent distractor sound. Relations between prime distractor and probe target were systematically manipulated (repeated vs. changed) with respect to identity and location. Experiment 2A investigated the influence of distractor sounds on spatial parameters of head movements toward target locations and showed that distractor-assigned responses are immediately inhibited to prevent false responding in the ongoing trial. Interestingly, performance in Experiments 1 and 2B was not generally impaired when the probe target appeared at the location of the former prime distractor and required a previously withheld and presumably inhibited response. Instead, performance was impaired only when prime distractor and probe target mismatched in terms of location or identity, which fully conforms to the feature-mismatching hypothesis. Together, the results suggest that response inhibition operates in auditory processing when response activation is provided but is presumably too short-lived to affect responding on the subsequent trial.

  12. Real-time transmission of digital video using variable-length coding

    NASA Technical Reports Server (NTRS)

    Bizon, Thomas P.; Shalkhauser, Mary JO; Whyte, Wayne A., Jr.

    1993-01-01

    Huffman coding is a variable-length lossless compression technique where data with a high probability of occurrence is represented with short codewords, while 'not-so-likely' data is assigned longer codewords. Compression is achieved when the high-probability levels occur so frequently that their benefit outweighs any penalty paid when a less likely input occurs. One instance where Huffman coding is extremely effective occurs when data is highly predictable and differential coding can be applied (as with a digital video signal). For that reason, it is desirable to apply this compression technique to digital video transmission; however, special care must be taken in order to implement a communication protocol utilizing Huffman coding. This paper addresses several of the issues relating to the real-time transmission of Huffman-coded digital video over a constant-rate serial channel. Topics discussed include data rate conversion (from variable to a fixed rate), efficient data buffering, channel coding, recovery from communication errors, decoder synchronization, and decoder architectures. A description of the hardware developed to execute Huffman coding and serial transmission is also included. Although this paper focuses on matters relating to Huffman-coded digital video, the techniques discussed can easily be generalized for a variety of applications which require transmission of variable-length data.
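    As a minimal illustration of the variable-length coding described above (a generic Huffman sketch, not the paper's hardware codec), the following builds a prefix-free code in which frequent symbols get short codewords; the sample data and frequencies are illustrative.

    ```python
    # Minimal Huffman coding sketch: greedy merging of the two least
    # frequent subtrees yields a prefix-free, variable-length code.
    import heapq
    from collections import Counter

    def huffman_codes(freqs):
        """Map each symbol to a bit string; frequent symbols get shorter ones."""
        heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freqs.items())]
        heapq.heapify(heap)
        tie = len(heap)  # unique tie-breaker so dicts are never compared
        while len(heap) > 1:
            f1, _, c1 = heapq.heappop(heap)
            f2, _, c2 = heapq.heappop(heap)
            merged = {s: "0" + c for s, c in c1.items()}
            merged.update({s: "1" + c for s, c in c2.items()})
            heapq.heappush(heap, (f1 + f2, tie, merged))
            tie += 1
        return heap[0][2]

    def encode(data, codes):
        return "".join(codes[s] for s in data)

    def decode(bits, codes):
        """Prefix-freeness lets a symbol be emitted as soon as its codeword matches."""
        rev = {v: k for k, v in codes.items()}
        out, cur = [], ""
        for b in bits:
            cur += b
            if cur in rev:
                out.append(rev[cur])
                cur = ""
        return "".join(out)

    data = "aaaaaabbbcc d"          # skewed symbol distribution (illustrative)
    codes = huffman_codes(Counter(data))
    assert decode(encode(data, codes), codes) == data
    ```

    In the differential video setting described above, the "symbols" would be pixel differences, whose distribution is sharply peaked near zero; that skew is exactly where Huffman coding pays off.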

  13. One hundred ways to process time, frequency, rate and scale in the central auditory system: a pattern-recognition meta-analysis.

    PubMed

    Hemery, Edgar; Aucouturier, Jean-Julien

    2015-01-01

    The mammalian auditory system extracts features from the acoustic environment based on the responses of spatially distributed sets of neurons in the subcortical and cortical auditory structures. The characteristic responses of these neurons (linearly approximated by their spectro-temporal receptive fields, or STRFs) suggest that auditory representations are formed, as early as in the inferior colliculi, on the basis of a time, frequency, rate (temporal modulations) and scale (spectral modulations) analysis of sound. However, how these four dimensions are integrated and processed in subsequent neural networks remains unclear. In this work, we present a new methodology to generate computational insights into the functional organization of such processes. We first propose a systematic framework to explore more than a hundred different computational strategies proposed in the literature to process the output of a generic STRF model. We then evaluate these strategies on their ability to compute perceptual distances between pairs of environmental sounds. Finally, we conduct a meta-analysis of the dataset of all these algorithms' accuracies to examine whether certain combinations of dimensions and certain ways to treat such dimensions are, on the whole, more computationally effective than others. We present an application of this methodology to a dataset of ten environmental sound categories, in which the analysis reveals that (1) models are most effective when they organize STRF data into frequency groupings, which is consistent with the known tonotopic organization of receptive fields in auditory structures, and that (2) models that treat STRF data as time series are no more effective than models that rely only on summary statistics along time, which corroborates recent experimental evidence on texture discrimination by summary statistics.

  14. EBR-II time constant calculation using the EROS kinetics code

    SciTech Connect

    Grimm, K.N.; Meneghetti, D.

    1986-01-01

    System time constants are important parameters in determining the dynamic behavior of reactors. One method of determining basic time constants is to apply a step change in power level and determine the resulting temperature change. This method can be applied with any computer code that calculates temperature versus time given either a power input or a reactivity input; in the current analysis this is done using the reactor kinetics code EROS. As an example of this methodology, the time constant is calculated for an Experimental Breeder Reactor II (EBR-II) fuel pin.
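    The step-response method described above can be sketched generically: apply a power step, integrate a thermal model, and read off the time constant as the time to reach about 63.2% of the final temperature change. The sketch below uses a first-order lumped model with illustrative parameters in place of the EROS code; none of the numbers are EBR-II data.

    ```python
    # Hedged sketch of the step-response time-constant method, using a
    # generic first-order lumped thermal model (illustrative parameters,
    # not EROS or EBR-II data).

    def step_response_time_constant(tau_true=2.0, dt=0.001, t_end=50.0):
        """Integrate dT/dt = (T_final - T) / tau after a power step, then
        estimate tau as the time at which T reaches 63.2% of T_final."""
        T_final = 100.0          # asymptotic temperature rise (arbitrary units)
        T, t = 0.0, 0.0
        while T < 0.632 * T_final and t < t_end:
            T += dt * (T_final - T) / tau_true   # forward-Euler step
            t += dt
        return t

    print(round(step_response_time_constant(), 2))   # ≈ 2.0 for tau_true = 2.0
    ```

    The same read-off works on the output of any transient code: only the temperature-versus-time trace is needed, not the model internals.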

  15. Power optimization of wireless media systems with space-time block codes.

    PubMed

    Yousefi'zadeh, Homayoun; Jafarkhani, Hamid; Moshfeghi, Mehran

    2004-07-01

    We present analytical and numerical solutions to the problem of power control in wireless media systems with multiple antennas. We formulate a set of optimization problems aimed at minimizing total power consumption of wireless media systems subject to a given level of QoS and an available bit rate. Our formulation takes into consideration the power consumption related to source coding, channel coding, and transmission of multiple-transmit antennas. In our study, we consider Gauss-Markov and video source models, Rayleigh fading channels along with the Bernoulli/Gilbert-Elliott loss models, and space-time block codes.

  16. Auditory Reserve and the Legacy of Auditory Experience

    PubMed Central

    Skoe, Erika; Kraus, Nina

    2014-01-01

    Musical training during childhood has been linked to more robust encoding of sound later in life. We take this as evidence for an auditory reserve: a mechanism by which individuals capitalize on earlier life experiences to promote auditory processing. We assert that early auditory experiences guide how the reserve develops and is maintained over the lifetime. Experiences that occur after childhood, or which are limited in nature, are theorized to affect the reserve, although their influence on sensory processing may be less long-lasting and may potentially fade over time if not repeated. This auditory reserve may help to explain individual differences in how individuals cope with auditory impoverishment or loss of sensorineural function. PMID:25405381

  17. One hundred ways to process time, frequency, rate and scale in the central auditory system: a pattern-recognition meta-analysis

    PubMed Central

    Hemery, Edgar; Aucouturier, Jean-Julien

    2015-01-01

    The mammalian auditory system extracts features from the acoustic environment based on the responses of spatially distributed sets of neurons in the subcortical and cortical auditory structures. The characteristic responses of these neurons (linearly approximated by their spectro-temporal receptive fields, or STRFs) suggest that auditory representations are formed, as early as in the inferior colliculi, on the basis of a time, frequency, rate (temporal modulations) and scale (spectral modulations) analysis of sound. However, how these four dimensions are integrated and processed in subsequent neural networks remains unclear. In this work, we present a new methodology to generate computational insights into the functional organization of such processes. We first propose a systematic framework to explore more than a hundred different computational strategies proposed in the literature to process the output of a generic STRF model. We then evaluate these strategies on their ability to compute perceptual distances between pairs of environmental sounds. Finally, we conduct a meta-analysis of the dataset of all these algorithms' accuracies to examine whether certain combinations of dimensions and certain ways to treat such dimensions are, on the whole, more computationally effective than others. We present an application of this methodology to a dataset of ten environmental sound categories, in which the analysis reveals that (1) models are most effective when they organize STRF data into frequency groupings, which is consistent with the known tonotopic organization of receptive fields in auditory structures, and that (2) models that treat STRF data as time series are no more effective than models that rely only on summary statistics along time, which corroborates recent experimental evidence on texture discrimination by summary statistics. PMID:26190996

  18. Change in the coding of interaural time difference along the tonotopic axis of the chicken nucleus laminaris.

    PubMed

    Palanca-Castan, Nicolas; Köppl, Christine

    2015-01-01

    Interaural time differences (ITDs) are an important cue for the localization of sounds in azimuthal space. Both birds and mammals have specialized, tonotopically organized nuclei in the brain stem for the processing of ITD: medial superior olive in mammals and nucleus laminaris (NL) in birds. The specific way in which ITDs are derived was long assumed to conform to a delay-line model in which arrays of systematically arranged cells create a representation of auditory space with different cells responding maximally to specific ITDs. This model was supported by data from barn owl NL taken from regions above 3 kHz and from chicken above 1 kHz. However, data from mammals often do not show defining features of the Jeffress model such as a systematic topographic representation of best ITDs or the presence of axonal delay lines, and an alternative has been proposed in which neurons are not topographically arranged with respect to ITD and coding occurs through the assessment of the overall response of two large neuron populations, one in each hemisphere. Modeling studies have suggested that the presence of different coding systems could be related to the animal's head size and frequency range rather than its phylogenetic group. Testing this hypothesis requires data from across the tonotopic range of both birds and mammals. The aim of this study was to obtain in vivo recordings from neurons in the low-frequency range (<1000 Hz) of chicken NL. Our data argue for the presence of a modified Jeffress system that uses the slopes of ITD-selective response functions instead of their peaks to topographically represent ITD at mid- to high frequencies. At low frequencies, below a few hundred Hz, the data did not support any current model of ITD coding. This is different from what was previously shown in the barn owl and suggests that constraints on optimal ITD processing may be associated with the particular demands on sound localization determined by the animal's ecological niche
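
    The peak-versus-slope distinction at the heart of this modified Jeffress account can be illustrated with a toy tuning curve. The cosine cell model and all parameter values below are hypothetical, not taken from the study.

```python
import numpy as np

# Hypothetical ITD tuning curve: firing rate as a function of interaural time
# difference, idealized as a cosine at the stimulus frequency.
def itd_tuning(itd_us, best_itd_us=200.0, freq_hz=500.0):
    """Firing rate (arbitrary units) peaking at best_itd_us."""
    phase = 2 * np.pi * freq_hz * (itd_us - best_itd_us) * 1e-6
    return 0.5 * (1 + np.cos(phase))

itds = np.linspace(-500, 500, 1001)  # microseconds
rates = itd_tuning(itds)

# Peak code (classic Jeffress): the represented ITD is where the rate is maximal.
peak_itd = itds[np.argmax(rates)]

# Slope code (the modified Jeffress account): the most informative point is
# where |d(rate)/d(ITD)| is largest, i.e. on the flank of the tuning curve.
slope = np.gradient(rates, itds)
steepest_itd = itds[np.argmax(np.abs(slope))]

print(peak_itd)      # at the cell's best ITD
print(steepest_itd)  # offset from the peak, on the tuning-curve flank
```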

  19. Change in the coding of interaural time difference along the tonotopic axis of the chicken nucleus laminaris

    PubMed Central

    Palanca-Castan, Nicolas; Köppl, Christine

    2015-01-01

    Interaural time differences (ITDs) are an important cue for the localization of sounds in azimuthal space. Both birds and mammals have specialized, tonotopically organized nuclei in the brain stem for the processing of ITD: medial superior olive in mammals and nucleus laminaris (NL) in birds. The specific way in which ITDs are derived was long assumed to conform to a delay-line model in which arrays of systematically arranged cells create a representation of auditory space with different cells responding maximally to specific ITDs. This model was supported by data from barn owl NL taken from regions above 3 kHz and from chicken above 1 kHz. However, data from mammals often do not show defining features of the Jeffress model such as a systematic topographic representation of best ITDs or the presence of axonal delay lines, and an alternative has been proposed in which neurons are not topographically arranged with respect to ITD and coding occurs through the assessment of the overall response of two large neuron populations, one in each hemisphere. Modeling studies have suggested that the presence of different coding systems could be related to the animal’s head size and frequency range rather than its phylogenetic group. Testing this hypothesis requires data from across the tonotopic range of both birds and mammals. The aim of this study was to obtain in vivo recordings from neurons in the low-frequency range (<1000 Hz) of chicken NL. Our data argue for the presence of a modified Jeffress system that uses the slopes of ITD-selective response functions instead of their peaks to topographically represent ITD at mid- to high frequencies. At low frequencies, below a few hundred Hz, the data did not support any current model of ITD coding. This is different from what was previously shown in the barn owl and suggests that constraints on optimal ITD processing may be associated with the particular demands on sound localization determined by the animal’s ecological niche

  20. The Perception of Auditory Motion

    PubMed Central

    Leung, Johahn

    2016-01-01

    The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control as well as in entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotational and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception. PMID:27094029

  1. Auditory system

    NASA Technical Reports Server (NTRS)

    Ades, H. W.

    1973-01-01

    The physical correlations of hearing, i.e. the acoustic stimuli, are reported. The auditory system, consisting of external ear, middle ear, inner ear, organ of Corti, basilar membrane, hair cells, inner hair cells, outer hair cells, innervation of hair cells, and transducer mechanisms, is discussed. Both conductive and sensorineural hearing losses are also examined.

  2. A comparative study of time-marching and space-marching numerical methods. [for flowfield codes

    NASA Technical Reports Server (NTRS)

    Gupta, R. N.; Moss, J. N.; Simmonds, A. L.

    1983-01-01

    Menees (1981) conducted an evaluation of three different flowfield codes for Jupiter entry conditions. However, comparison of the codes has been hampered by the fact that the three codes use different solution procedures, different computational mesh sizes, and different convergence criteria, among other differences. For an objective evaluation of the different numerical solution methods employed by the codes, it is desirable to select a simple no-blowing perfect-gas flowfield case for which the turbulence models are well established. The present investigation reports the results of such a study. It is found that the choice of numerical method is rather problem dependent. The time-marching and space-marching methods both provide comparable results if care is taken in selecting an appropriate mesh size near the body surface.

  3. Identification of Dynamic Patterns of Speech-Evoked Auditory Brainstem Response Based on Ensemble Empirical Mode Decomposition and Nonlinear Time Series Analysis Methods

    NASA Astrophysics Data System (ADS)

    Mozaffarilegha, Marjan; Esteki, Ali; Ahadi, Mohsen; Nazeri, Ahmadreza

    The speech-evoked auditory brainstem response (sABR) shows how complex sounds such as speech and music are processed in the auditory system. Speech-ABR could be used to evaluate particular impairments and improvements in the auditory processing system. Many researchers have used linear approaches for characterizing different components of the sABR signal, whereas nonlinear techniques have been applied far less commonly. The primary aim of the present study is to examine the underlying dynamics of normal sABR signals; the secondary goal is to evaluate whether chaotic features exist in this signal. We present a methodology for characterizing the various components of sABR signals by performing Ensemble Empirical Mode Decomposition (EEMD) to obtain the intrinsic mode functions (IMFs). Then, composite multiscale entropy (CMSE), the largest Lyapunov exponent (LLE) and deterministic nonlinear prediction are computed for each extracted IMF. EEMD decomposes the sABR signal into five modes and a residue. The CMSE results of sABR signals obtained from 40 healthy people showed that the 1st and 2nd IMFs were similar to white noise, the 3rd IMF to a synthetic chaotic time series, and the 4th and 5th IMFs to sine waveforms. LLE analysis showed positive values for the 3rd IMF. Moreover, the 1st and 2nd IMFs showed overlap with surrogate data, while the 3rd, 4th and 5th IMFs showed no overlap with their corresponding surrogate data. These results indicate the presence of noisy, chaotic and deterministic components in the signal, corresponding to the 1st and 2nd IMFs, the 3rd IMF, and the 4th and 5th IMFs, respectively. While these findings provide supportive evidence of the chaos conjecture for the 3rd IMF, they do not confirm any such claim. However, they provide a first step towards an understanding of the nonlinear behavior of auditory system dynamics at the brainstem level.
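
    The composite multiscale entropy step can be sketched generically: at each scale, sample entropy is averaged over all coarse-grained versions of the series. This is a textbook-style implementation with parameter values of my choosing, not the authors' code; the synthetic signals stand in for noise-like and sine-like IMFs.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Plain sample entropy: -ln(A/B), with tolerance r relative to std(x)."""
    x = np.asarray(x, float)
    tol = r * np.std(x)
    n = len(x)
    def count(mm):
        # Count template pairs of length mm matching within tol (Chebyshev).
        templ = np.array([x[i:i + mm] for i in range(n - mm)])
        c = 0
        for i in range(len(templ)):
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            c += np.sum(d <= tol)
        return c
    b, a = count(m), count(m + 1)
    return np.inf if a == 0 or b == 0 else -np.log(a / b)

def composite_mse(x, scale, m=2, r=0.2):
    """CMSE: average sample entropy over all coarse-grainings at a given scale."""
    ents = []
    for offset in range(scale):
        n = (len(x) - offset) // scale
        cg = np.asarray(x[offset:offset + n * scale], float).reshape(n, scale).mean(axis=1)
        ents.append(sample_entropy(cg, m, r))
    return float(np.mean(ents))

rng = np.random.default_rng(0)
noise = rng.standard_normal(600)                 # white-noise-like component
tone = np.sin(np.linspace(0, 24 * np.pi, 600))   # sine-like component
print(composite_mse(noise, scale=2) > composite_mse(tone, scale=2))  # True
```

    The irregular signal yields a higher entropy than the deterministic one, which is the qualitative pattern used to sort the IMFs.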

  4. Adapted wavelet transform improves time-frequency representations: a study of auditory elicited P300-like event-related potentials in rats

    NASA Astrophysics Data System (ADS)

    Richard, Nelly; Laursen, Bettina; Grupe, Morten; Drewes, Asbjørn M.; Graversen, Carina; Sørensen, Helge B. D.; Bastlund, Jesper F.

    2017-04-01

    Objective. Active auditory oddball paradigms are simple tone discrimination tasks used to study the P300 deflection of event-related potentials (ERPs). These ERPs may be quantified by time-frequency analysis. As auditory stimuli cause early high frequency and late low frequency ERP oscillations, the continuous wavelet transform (CWT) is often chosen for decomposition due to its multi-resolution properties. However, as the conventional CWT traditionally applies only one mother wavelet to represent the entire spectrum, the time-frequency resolution is not optimal across all scales. To account for this, we developed and validated a novel method specifically refined to analyse P300-like ERPs in rats. Approach. An adapted CWT (aCWT) was implemented to preserve high time-frequency resolution across all scales by commissioning multiple wavelets operating at different scales. First, decomposition of simulated ERPs was illustrated using the classical CWT and the aCWT. Next, the two methods were applied to EEG recordings obtained from prefrontal cortex in rats performing a two-tone auditory discrimination task. Main results. While only early ERP frequency changes between responses to target and non-target tones were detected by the CWT, both early and late changes were successfully described with strong accuracy by the aCWT in rat ERPs. Increased frontal gamma power and phase synchrony were observed particularly within theta and gamma frequency bands during deviant tones. Significance. The study suggests superior performance of the aCWT over the CWT in terms of detailed quantification of time-frequency properties of ERPs. Our methodological investigation indicates that accurate and complete assessment of time-frequency components of short-time neural signals is feasible with the novel analysis approach, which may be advantageous for characterisation of several types of evoked potentials, particularly in rodents.
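
    The core idea, using different mother wavelets at different scales rather than one wavelet for the whole spectrum, can be sketched generically. Here the "different wavelets" are simply complex Morlet wavelets with band-dependent cycle counts; this is an illustrative stand-in, not the authors' aCWT implementation.

```python
import numpy as np

def morlet(fs, freq, n_cycles):
    """Complex Morlet wavelet at a given center frequency and cycle count."""
    sigma_t = n_cycles / (2 * np.pi * freq)
    t = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / fs)
    return np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))

def adapted_cwt(sig, fs, freqs, cycles_for):
    """CWT where the wavelet (its cycle count) is chosen per frequency band,
    mimicking the multi-wavelet idea of the aCWT."""
    out = np.empty((len(freqs), len(sig)))
    for i, f in enumerate(freqs):
        w = morlet(fs, f, cycles_for(f))
        out[i] = np.abs(np.convolve(sig, w, mode="same"))
    return out

fs = 500.0
t = np.arange(0, 2, 1 / fs)
sig = np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)  # theta + gamma
freqs = np.array([6.0, 40.0])
# Few cycles at low frequencies (better time resolution), more at high ones.
power = adapted_cwt(sig, fs, freqs, cycles_for=lambda f: 3 if f < 15 else 7)
print(power.shape)  # (2, 1000)
```

    The per-band cycle count is the knob that trades time against frequency resolution; fixing it globally is what the conventional CWT does.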

  5. Neural code alterations and abnormal time patterns in Parkinson’s disease

    NASA Astrophysics Data System (ADS)

    Andres, Daniela Sabrina; Cerquetti, Daniel; Merello, Marcelo

    2015-04-01

    Objective. The neural code used by the basal ganglia is a current question in neuroscience, relevant for the understanding of the pathophysiology of Parkinson’s disease. While a rate code is known to participate in the communication between the basal ganglia and the motor thalamus/cortex, different lines of evidence have also favored the presence of complex time patterns in the discharge of the basal ganglia. To gain insight into the way the basal ganglia code information, we studied the activity of the globus pallidus pars interna (GPi), an output node of the circuit. Approach. We implemented the 6-hydroxydopamine model of Parkinsonism in Sprague-Dawley rats, and recorded the spontaneous discharge of single GPi neurons in head-restrained conditions at full alertness. Analyzing the temporal structure function, we looked for characteristic scales in the neuronal discharge of the GPi. Main results. At low scales, we observed the presence of dynamic processes, which allow the transmission of time patterns. Conversely, at intermediate scales, stochastic processes force the use of a rate code. Regarding the time patterns transmitted, we measured the word length and found that it is increased in Parkinson’s disease. Furthermore, it showed a positive correlation with the frequency of discharge, indicating that an exacerbation of this abnormal time pattern length can be expected as the dopamine depletion progresses. Significance. We conclude that a rate code and a time pattern code can co-exist in the basal ganglia at different temporal scales. However, their normal balance is progressively altered and replaced by pathological time patterns in Parkinson’s disease.
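
    A temporal structure function of the kind used to separate these scale regimes can be illustrated with a minimal generic sketch (my own implementation on synthetic series, not the authors' analysis code):

```python
import numpy as np

def structure_function(x, max_lag, order=2):
    """Temporal structure function S(tau) = <|x(t+tau) - x(t)|**order>,
    a scale-by-scale measure separating dynamic from stochastic regimes."""
    x = np.asarray(x, float)
    return np.array([np.mean(np.abs(x[lag:] - x[:-lag]) ** order)
                     for lag in range(1, max_lag + 1)])

rng = np.random.default_rng(1)
white = rng.standard_normal(5000)             # uncorrelated: flat S(tau)
walk = np.cumsum(rng.standard_normal(5000))   # correlated: S(tau) grows with tau

s_white = structure_function(white, 50)
s_walk = structure_function(walk, 50)
print(s_white[-1] / s_white[0])  # near 1: no characteristic scale
print(s_walk[-1] / s_walk[0])    # much larger: a scaling (dynamic) regime
```

    A crossover between such regimes as a function of lag is the kind of characteristic scale the study looks for in the GPi discharge.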

  6. Timing group delay and differential code bias corrections for BeiDou positioning

    NASA Astrophysics Data System (ADS)

    Guo, Fei; Zhang, Xiaohong; Wang, Jinling

    2015-05-01

    This article first clarifies the relationship between the timing group delay (TGD) and differential code bias (DCB) parameters for BDS, and demonstrates the equivalence of the TGD and DCB correction models, combining theory with practice. The TGD/DCB correction models have been extended to various scenarios for BDS positioning, and such models have been evaluated with real triple-frequency datasets. To test the effectiveness of broadcast TGDs in the navigation message and DCBs provided by the Multi-GNSS Experiment (MGEX), both standard point positioning (SPP) and precise point positioning (PPP) tests are carried out for BDS signals with different schemes. Furthermore, the influence of differential code biases on BDS positioning estimates such as coordinates, receiver clock biases, tropospheric delays and carrier phase ambiguities is investigated comprehensively. Comparative analysis shows that unmodeled differential code biases degrade the performance of BDS SPP by a factor of two or more, whereas the PPP estimates are subject to varying degrees of influence. For SPP, the accuracy of dual-frequency combinations is slightly worse than that of single-frequency solutions, and they are much more sensitive to the differential code biases, particularly the B2B3 combination. For PPP, uncorrected differential code biases are mostly absorbed into the receiver clock bias and carrier phase ambiguities, resulting in a much longer convergence time. Even though the influence of the differential code biases can be mitigated over time and comparable positioning accuracy can be achieved after convergence, it is suggested that the differential code biases be handled properly, since this is vital for PPP convergence and integer ambiguity resolution.
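
    The metre-level size of the effect is easy to see: a group delay of a few nanoseconds maps to a range bias of c times the delay. The TGD value below is hypothetical, and the sign convention for applying the correction follows the relevant interface control document rather than this sketch.

```python
# Illustrative sketch: equivalent range bias of a broadcast timing group
# delay (TGD), the bias that degrades SPP when left unmodeled.
C = 299792458.0  # speed of light, m/s

def tgd_range_bias_m(tgd_s):
    """Equivalent range bias of a satellite group delay: c * TGD."""
    return C * tgd_s

print(round(tgd_range_bias_m(4.6e-9), 2))  # a ~4.6 ns TGD is ~1.38 m of range
```

    Biases of this size on every pseudorange are consistent with the factor-of-two SPP degradation the study reports when corrections are omitted.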

  7. GOTHIC: Gravitational oct-tree code accelerated by hierarchical time step controlling

    NASA Astrophysics Data System (ADS)

    Miki, Yohei; Umemura, Masayuki

    2017-04-01

    The tree method is a widely implemented algorithm for collisionless N-body simulations in astrophysics, well suited for GPUs. Adopting hierarchical time stepping can accelerate N-body simulations; however, it is infrequently implemented and its potential remains untested in GPU implementations. We have developed a Gravitational Oct-Tree code accelerated by HIerarchical time step Controlling, named GOTHIC, which adopts both the tree method and the hierarchical time step. The code adopts some adaptive optimizations by monitoring the execution time of each function on the fly and minimizes the time-to-solution by balancing the measured times of multiple functions. Results of performance measurements with realistic particle distributions, performed on the NVIDIA Tesla M2090, K20X, and GeForce GTX TITAN X, which are representative GPUs of the Fermi, Kepler, and Maxwell generations, show that the hierarchical time step achieves a speedup of around 3-5 times compared with the shared time step. The measured elapsed time per step of GOTHIC is 0.30 s or 0.44 s on the GTX TITAN X when the particle distribution represents the Andromeda galaxy or the NFW sphere, respectively, with 2^24 = 16,777,216 particles. The averaged performance of the code corresponds to 10-30% of the theoretical single-precision peak performance of the GPU.
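
    The source of the speedup can be shown with a minimal sketch of hierarchical (block) time stepping: each particle gets dt = dt_max / 2**level, so slowly evolving particles are updated far less often than fast ones. This is a generic illustration, not GOTHIC's actual implementation.

```python
def advancing(levels, substep, max_level):
    """Particles that take a step at this fine substep: a particle at level k
    (dt = dt_max / 2**k) advances every 2**(max_level - k) fine substeps."""
    return [i for i, k in enumerate(levels)
            if substep % (2 ** (max_level - k)) == 0]

levels = [0, 1, 2, 2]  # particle 0 is slow (largest dt); 2 and 3 are fast
max_level = max(levels)
updates = sum(len(advancing(levels, s, max_level))
              for s in range(2 ** max_level))
shared = len(levels) * 2 ** max_level  # cost if all used the smallest dt
print(updates, shared)  # 11 16: fewer force evaluations than a shared step
```

    The saving grows with the spread of time scales in the particle distribution, which is why realistic galaxy models benefit by the reported factor of 3-5.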

  8. Zipf's law in short-time timbral codings of speech, music, and environmental sound signals.

    PubMed

    Haro, Martín; Serrà, Joan; Herrera, Perfecto; Corral, Alvaro

    2012-01-01

    Timbre is a key perceptual feature that allows discrimination between different sounds. Timbral sensations are highly dependent on the temporal evolution of the power spectrum of an audio signal. In order to quantitatively characterize such sensations, the shape of the power spectrum has to be encoded in a way that preserves certain physical and perceptual properties. Therefore, it is common practice to encode short-time power spectra using psychoacoustical frequency scales. In this paper, we study and characterize the statistical properties of such encodings, here called timbral code-words. In particular, we report on rank-frequency distributions of timbral code-words extracted from 740 hours of audio coming from disparate sources such as speech, music, and environmental sounds. Analogously to text corpora, we find a heavy-tailed Zipfian distribution with exponent close to one. Importantly, this distribution is found independently of different encoding decisions and regardless of the audio source. Further analysis of the intrinsic characteristics of the most and least frequent code-words reveals that the most frequent code-words tend to have a more homogeneous structure. We also find that speech and music databases have specific, distinctive code-words while, in the case of the environmental sounds, such database-specific code-words are not present. Finally, we find that a Yule-Simon process with memory provides a reasonable quantitative approximation for our data, suggesting the existence of a common simple generative mechanism for all considered sound sources.
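
    The rank-frequency analysis can be reproduced generically: count token frequencies, sort by rank, and fit the log-log slope, which Zipf's law predicts to be close to -1. The synthetic tokens below stand in for timbral code-words, and the plain least-squares fit is my choice, not necessarily the paper's estimator.

```python
import numpy as np
from collections import Counter

def zipf_exponent(tokens):
    """Fit the rank-frequency slope on log-log axes; Zipf's law predicts ~ -1."""
    freqs = np.sort(list(Counter(tokens).values()))[::-1]
    ranks = np.arange(1, len(freqs) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(freqs), 1)
    return slope

# Hypothetical "code-words": symbols drawn from an ideal Zipfian source.
rng = np.random.default_rng(0)
n_types = 500
p = 1.0 / np.arange(1, n_types + 1)
tokens = rng.choice(n_types, size=50_000, p=p / p.sum())
print(zipf_exponent(tokens))  # close to -1 for a Zipfian sample
```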

  9. Zipf's Law in Short-Time Timbral Codings of Speech, Music, and Environmental Sound Signals

    PubMed Central

    Haro, Martín; Serrà, Joan; Herrera, Perfecto; Corral, Álvaro

    2012-01-01

    Timbre is a key perceptual feature that allows discrimination between different sounds. Timbral sensations are highly dependent on the temporal evolution of the power spectrum of an audio signal. In order to quantitatively characterize such sensations, the shape of the power spectrum has to be encoded in a way that preserves certain physical and perceptual properties. Therefore, it is common practice to encode short-time power spectra using psychoacoustical frequency scales. In this paper, we study and characterize the statistical properties of such encodings, here called timbral code-words. In particular, we report on rank-frequency distributions of timbral code-words extracted from 740 hours of audio coming from disparate sources such as speech, music, and environmental sounds. Analogously to text corpora, we find a heavy-tailed Zipfian distribution with exponent close to one. Importantly, this distribution is found independently of different encoding decisions and regardless of the audio source. Further analysis of the intrinsic characteristics of the most and least frequent code-words reveals that the most frequent code-words tend to have a more homogeneous structure. We also find that speech and music databases have specific, distinctive code-words while, in the case of the environmental sounds, such database-specific code-words are not present. Finally, we find that a Yule-Simon process with memory provides a reasonable quantitative approximation for our data, suggesting the existence of a common simple generative mechanism for all considered sound sources. PMID:22479497

  10. Just in time? Using QR codes for multi-professional learning in clinical practice.

    PubMed

    Jamu, Joseph Tawanda; Lowi-Jones, Hannah; Mitchell, Colin

    2016-07-01

    Clinical guidelines and policies are widely available on the hospital intranet or from the internet, but can be difficult to access at the required time and place. Clinical staff with smartphones could use Quick Response (QR) codes for contemporaneous access to relevant information to support the Just in Time Learning (JIT-L) paradigm. Several studies advocate the use of smartphones to enhance learning amongst medical students and junior doctors in the UK. However, these participants are already technologically orientated, and few studies have explored the use of smartphones in nursing practice. QR codes were generated for each topic and positioned at relevant locations on a medical ward. Support and training were provided for staff. Website analytics were collected and semi-structured interviews conducted to evaluate the efficacy, acceptability and feasibility of using QR codes to facilitate Just in Time learning. Use was intermittently high but not sustained. Thematic analysis of the interviews revealed a positive assessment of the Just in Time learning paradigm and of context-sensitive clinical information. However, there were notable barriers to acceptance, including the usability of QR codes and the appropriateness of smartphone use in a clinical environment. The use of Just in Time learning for education and reference may be beneficial to healthcare professionals. However, alternative methods of access for less technologically literate users and a change in the culture of mobile device use in clinical areas may be needed.

  11. Computer code for space-time diagnostics of nuclear safety parameters

    SciTech Connect

    Solovyev, D. A.; Semenov, A. A.; Gruzdov, F. V.; Druzhaev, A. A.; Shchukin, N. V.; Dolgenko, S. G.; Solovyeva, I. V.; Ovchinnikova, E. A.

    2012-07-01

    The computer code ECRAN 3D (Experimental and Calculation Reactor Analysis) is designed for continuous monitoring and diagnostics of reactor cores and databases for the RBMK-1000, on the basis of analytical methods relating nuclear safety parameters. The code algorithms are based on the analysis of deviations between physically measured values and the results of neutron-physical and thermal-hydraulic calculations. Discrepancies between the measured and calculated signals are equivalent to a mismatch between the behavior of the physical device and its simulator. The diagnostics system can solve the following problems: identification of the occurrence and timing of inconsistent results, localization of failures, and identification and quantification of the causes of inconsistencies. These problems can be solved effectively only when the computer code works in real time, which raises the requirements on code performance. As false operations can lead to significant economic losses, the diagnostics system must be based on certified software tools. POLARIS, version 4.2.1, is used for the neutron-physical calculation in the computer code ECRAN 3D. (authors)

  12. The Role of Coding Time in Estimating and Interpreting Growth Curve Models.

    ERIC Educational Resources Information Center

    Biesanz, Jeremy C.; Deeb-Sossa, Natalia; Papadakis, Alison A.; Bollen, Kenneth A.; Curran, Patrick J.

    2004-01-01

    The coding of time in growth curve models has important implications for the interpretation of the resulting model that are sometimes not transparent. The authors develop a general framework that includes predictors of growth curve components to illustrate how parameter estimates and their standard errors are exactly determined as a function of…

  13. A novel repetition space-time coding scheme for mobile FSO systems

    NASA Astrophysics Data System (ADS)

    Li, Ming; Cao, Yang; Li, Shu-ming; Yang, Shao-wen

    2015-03-01

    Considering the influence of stronger random atmospheric turbulence, more severe pointing errors and highly dynamic links on the transmission performance of mobile multiple-input multiple-output (MIMO) free space optics (FSO) communication systems, this paper establishes a channel model for the mobile platform. Based on the combination of the Alamouti space-time code and time hopping ultra-wideband (TH-UWB) communications, a novel repetition space-time coding (RSTC) method for mobile 2×2 free-space optical communications with pulse position modulation (PPM) is developed. In particular, two decoding methods, equal gain combining (EGC) maximum likelihood detection (MLD) and correlation matrix detection (CMD), are derived. When a quasi-static fading and weak turbulence channel model is considered, simulation results show that whether the channel state information (CSI) is known or not, the coded system achieves significantly better symbol error rate (SER) performance than the uncoded one. In other words, transmit diversity can be achieved while conveying the information only through the time delays of the modulated signals transmitted from different antennas. CMD has almost the same signal-combining effect as maximal ratio combining (MRC). However, when the channel correlation increases, the SER performance of the coded 2×2 system degrades significantly.
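
    The Alamouti code that the RSTC scheme builds on can be sketched in its textbook form over a quasi-static flat channel. This is a generic noiseless illustration of the encoding and linear combining, not the paper's PPM/FSO decoder.

```python
import numpy as np

# Textbook Alamouti space-time block code with perfect CSI at the receiver.
rng = np.random.default_rng(7)
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)   # two QPSK symbols
h1, h2 = rng.normal(size=2) + 1j * rng.normal(size=2)   # quasi-static gains

# Two transmit slots: antennas send (s1, s2), then (-conj(s2), conj(s1)).
r1 = h1 * s1 + h2 * s2
r2 = -h1 * np.conj(s2) + h2 * np.conj(s1)

# Linear combining decouples the symbols (ML for this code with perfect CSI).
gain = abs(h1) ** 2 + abs(h2) ** 2
s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / gain
s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / gain
print(np.allclose([s1_hat, s2_hat], [s1, s2]))  # True in the noiseless case
```

    The same orthogonal structure is what lets the RSTC scheme obtain transmit diversity from simple linear combining at the receiver.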

  14. 14 CFR 234.8 - Calculation of on-time performance codes.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Calculation of on-time performance codes. 234.8 Section 234.8 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION PROCEEDINGS) ECONOMIC REGULATIONS AIRLINE SERVICE QUALITY PERFORMANCE REPORTS § 234.8...

  15. 14 CFR 234.9 - Reporting of on-time performance codes.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Reporting of on-time performance codes. 234.9 Section 234.9 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION PROCEEDINGS) ECONOMIC REGULATIONS AIRLINE SERVICE QUALITY PERFORMANCE REPORTS § 234.9 Reporting...

  16. Auditory spatial attention representations in the human cerebral cortex.

    PubMed

    Kong, Lingqiang; Michalka, Samantha W; Rosen, Maya L; Sheremata, Summer L; Swisher, Jascha D; Shinn-Cunningham, Barbara G; Somers, David C

    2014-03-01

    Auditory spatial attention serves important functions in auditory source separation and selection. Although auditory spatial attention mechanisms have been generally investigated, the neural substrates encoding spatial information acted on by attention have not been identified in the human neocortex. We performed functional magnetic resonance imaging experiments to identify cortical regions that support auditory spatial attention and to test 2 hypotheses regarding the coding of auditory spatial attention: 1) auditory spatial attention might recruit the visuospatial maps of the intraparietal sulcus (IPS) to create multimodal spatial attention maps; 2) auditory spatial information might be encoded without explicit cortical maps. We mapped visuotopic IPS regions in individual subjects and measured auditory spatial attention effects within these regions of interest. Contrary to the multimodal map hypothesis, we observed that auditory spatial attentional modulations spared the visuotopic maps of IPS; the parietal regions activated by auditory attention lacked map structure. However, multivoxel pattern analysis revealed that the superior temporal gyrus and the supramarginal gyrus contained significant information about the direction of spatial attention. These findings support the hypothesis that auditory spatial information is coded without a cortical map representation. Our findings suggest that audiospatial and visuospatial attention utilize distinctly different spatial coding schemes.

  17. Incidental Auditory Category Learning

    PubMed Central

    Gabay, Yafit; Dick, Frederic K.; Zevin, Jason D.; Holt, Lori L.

    2015-01-01

    Very little is known about how auditory categories are learned incidentally, without instructions to search for category-diagnostic dimensions, overt category decisions, or experimenter-provided feedback. This is an important gap because learning in the natural environment does not arise from explicit feedback and there is evidence that the learning systems engaged by traditional tasks are distinct from those recruited by incidental category learning. We examined incidental auditory category learning with a novel paradigm, the Systematic Multimodal Associations Reaction Time (SMART) task, in which participants rapidly detect and report the appearance of a visual target in one of four possible screen locations. Although the overt task is rapid visual detection, a brief sequence of sounds precedes each visual target. These sounds are drawn from one of four distinct sound categories that predict the location of the upcoming visual target. These many-to-one auditory-to-visuomotor correspondences support incidental auditory category learning. Participants incidentally learn categories of complex acoustic exemplars and generalize this learning to novel exemplars and tasks. Further, learning is facilitated when category exemplar variability is more tightly coupled to the visuomotor associations than when the same stimulus variability is experienced across trials. We relate these findings to phonetic category learning. PMID:26010588

  18. Solution of 3-dimensional time-dependent viscous flows. Part 2: Development of the computer code

    NASA Technical Reports Server (NTRS)

    Weinberg, B. C.; Mcdonald, H.

    1980-01-01

    There is considerable interest in developing a numerical scheme for solving the time-dependent viscous compressible three-dimensional flow equations to aid in the design of helicopter rotors. The development of a computer code to solve a three-dimensional unsteady approximate form of the Navier-Stokes equations, employing a linearized block implicit technique in conjunction with a QR operator scheme, is described. Results of calculations of several Cartesian test cases are presented. The computer code can be applied to more complex flow fields such as those encountered on rotating airfoils.

  19. Motivation for Using Generalized Geometry in the Time Dependent Transport Code TDKENO

    SciTech Connect

    Dustin Popp; Zander Mausolff; Sedat Goluoglu

    2016-04-01

    We propose to use the code TDKENO to model TREAT. TDKENO solves the time-dependent, three-dimensional Boltzmann transport equation with explicit representation of delayed neutrons. Instead of directly integrating this equation, the neutron flux is factored into two components: a rapidly varying amplitude function and a slowly varying shape function, each solved separately on a different time scale. The shape equation is solved using the 3D Monte Carlo transport code KENO, from Oak Ridge National Laboratory’s SCALE code package. Using the Monte Carlo method to solve the shape equation is still computationally intensive, but the operation is only performed when needed. The amplitude equation is solved deterministically and frequently, so the solution gives an accurate time-dependent result without having to repeatedly perform the expensive Monte Carlo shape calculation. We have modified TDKENO to incorporate KENO-VI so that we may accurately represent the geometries within TREAT. This paper explains the motivation behind using generalized geometry and provides the results of our modifications. TDKENO uses the Improved Quasi-Static method to accomplish this. In this method, the neutron flux is factored into two components: a purely time-dependent and rapidly varying amplitude function, which is solved deterministically and very frequently (small time steps), and a slowly varying flux shape function that weakly depends on time and is only solved when needed (significantly larger time steps).
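
    The amplitude side of this factorization can be illustrated on its own with a point-kinetics-style integration on fine time steps, of the kind solved deterministically between shape updates. One delayed-neutron group, illustrative parameter values, and a plain explicit Euler step; this is not TDKENO's actual solver.

```python
# Sketch of the amplitude half of an Improved Quasi-Static factorization.
beta, lam, Lam = 0.007, 0.08, 1e-4  # delayed fraction, decay const (1/s), generation time (s)

def integrate_amplitude(rho, n0, c0, dt, steps):
    """Explicit-Euler point kinetics: dn/dt = ((rho - beta)/Lam) n + lam c,
    dc/dt = (beta/Lam) n - lam c."""
    n, c = n0, c0
    for _ in range(steps):
        dn = ((rho - beta) / Lam) * n + lam * c
        dc = (beta / Lam) * n - lam * c
        n, c = n + dt * dn, c + dt * dc
    return n

# At zero reactivity and precursor equilibrium, the amplitude stays constant.
n0 = 1.0
c0 = beta * n0 / (lam * Lam)  # equilibrium precursor concentration
print(integrate_amplitude(0.0, n0, c0, dt=1e-6, steps=1000))  # stays ~1.0
```

    Because only this small system is advanced on the fine time steps, the expensive Monte Carlo shape solve can be deferred to much larger intervals.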

  20. Driving-Simulator-Based Test on the Effectiveness of Auditory Red-Light Running Vehicle Warning System Based on Time-To-Collision Sensor

    PubMed Central

    Yan, Xuedong; Xue, Qingwan; Ma, Lu; Xu, Yongcun

    2014-01-01

    The collision avoidance warning system is an emerging technology designed to assist drivers in avoiding red-light running (RLR) collisions at intersections. The aim of this paper is to evaluate the effect of auditory warning information on collision avoidance behaviors in RLR pre-crash scenarios and further to examine the causal relationships among the relevant factors. A driving-simulator-based experiment was designed and conducted with 50 participants. The data from the experiments were analyzed by ANOVA and structural equation modeling (SEM). The collision-avoidance-related variables were measured in terms of brake reaction time (BRT), maximum deceleration, and lane deviation. It was found that the collision avoidance warning system can result in lower collision rates compared to the no-warning condition and lead to shorter reaction times, larger maximum deceleration, and less lane deviation. Furthermore, the SEM analysis illustrates that the audio warning information has both direct and indirect effects on the occurrence of collisions, and the indirect effect plays a more important role in collision avoidance than the direct effect. Essentially, the auditory warning information can assist drivers in detecting RLR vehicles in a timely manner, thus providing drivers more adequate time and space to decelerate to avoid collisions with the conflicting vehicles. PMID:24566631

  1. Driving-simulator-based test on the effectiveness of auditory red-light running vehicle warning system based on time-to-collision sensor.

    PubMed

    Yan, Xuedong; Xue, Qingwan; Ma, Lu; Xu, Yongcun

    2014-02-21

    The collision avoidance warning system is an emerging technology designed to assist drivers in avoiding red-light running (RLR) collisions at intersections. The aim of this paper is to evaluate the effect of auditory warning information on collision avoidance behaviors in RLR pre-crash scenarios and further to examine the causal relationships among the relevant factors. A driving-simulator-based experiment was designed and conducted with 50 participants. The data from the experiments were analyzed by ANOVA and structural equation modeling (SEM). The collision-avoidance-related variables were measured in terms of brake reaction time (BRT), maximum deceleration, and lane deviation. It was found that the collision avoidance warning system can result in lower collision rates compared to the no-warning condition and lead to shorter reaction times, larger maximum deceleration, and less lane deviation. Furthermore, the SEM analysis illustrates that the audio warning information has both direct and indirect effects on the occurrence of collisions, and the indirect effect plays a more important role in collision avoidance than the direct effect. Essentially, the auditory warning information can assist drivers in detecting RLR vehicles in a timely manner, thus providing drivers more adequate time and space to decelerate to avoid collisions with the conflicting vehicles.

  2. Organizing principles of real-time memory encoding: neural clique assemblies and universal neural codes.

    PubMed

    Lin, Longnian; Osan, Remus; Tsien, Joe Z

    2006-01-01

    Recent identification of network-level coding units, termed neural cliques, in the hippocampus has enabled real-time patterns of memory traces to be mathematically described, directly visualized, and dynamically deciphered. These memory coding units are functionally organized in a categorical and hierarchical manner, suggesting that internal representations of external events in the brain are achieved not by recording the exact details of those events, but rather by recreating the brain's own selective pictures based on cognitive importance. This neural-clique-based hierarchical-extraction and parallel-binding process enables the brain to acquire not only large storage capacity but also abstraction and generalization capability. In addition, activation patterns of the neural clique assemblies can be converted to strings of binary codes that would permit universal categorizations of internal brain representations across individuals and species.
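
The final step described above, converting clique activation patterns to binary strings, can be sketched very simply. This is a hypothetical illustration, not the authors' procedure: each clique contributes a 1 if its firing rate during an event exceeds a threshold, and the resulting bit string serves as a categorical label. The rates and threshold here are fabricated.

```python
import numpy as np

# Hypothetical conversion of clique activation patterns to binary codes:
# one bit per clique, set when its event-evoked rate crosses a threshold.
rng = np.random.default_rng(0)
n_cliques, n_events = 8, 3
rates = rng.uniform(0, 20, size=(n_events, n_cliques))   # Hz, invented data
threshold = 10.0

codes = ["".join("1" if r > threshold else "0" for r in event) for event in rates]
for i, c in enumerate(codes):
    print(f"event {i}: {c}")   # an 8-bit categorical label per event
```

Because the code depends only on which cliques are active, not on exact rates, such strings could in principle be compared across individuals, which is the paper's point about universality.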

  3. A Design of Low Frequency Time-Code Receiver Based on DSP and FPGA

    NASA Astrophysics Data System (ADS)

    Li, Guo-Dong; Xu, Lin-Sheng

    2006-06-01

    The hardware of a low frequency time-code receiver which was designed with FPGA (field programmable gate array) and DSP (digital signal processor) is introduced. The method of realizing the time synchronization for the receiver system is described. The software developed for DSP and FPGA is expounded, and the results of test and simulation are presented. The design is characterized by high accuracy, good reliability, fair extensibility, etc.

  4. Reliable Wireless Broadcast with Linear Network Coding for Multipoint-to-Multipoint Real-Time Communications

    NASA Astrophysics Data System (ADS)

    Kondo, Yoshihisa; Yomo, Hiroyuki; Yamaguchi, Shinji; Davis, Peter; Miura, Ryu; Obana, Sadao; Sampei, Seiichi

    This paper proposes multipoint-to-multipoint (MPtoMP) real-time broadcast transmission using network coding for ad-hoc networks like video game networks. We aim to achieve highly reliable MPtoMP broadcasting using IEEE 802.11 media access control (MAC), which does not include a retransmission mechanism. When each node detects packets from the other nodes in a sequence, the correctly detected packets are network-encoded, and the encoded packet is broadcast in the next sequence as a piggy-back for its native packet. To prevent an increase in per-packet overhead due to piggy-back packet transmission, the network coding vector for each node is exchanged between all nodes in the negotiation phase. Each user keeps using the same coding vector generated in the negotiation phase, and only coding information that represents which user signals are included in the network coding process is transmitted along with the piggy-back packet. Our simulation results show that the proposed method can provide higher reliability than other schemes using multi-point relay (MPR) or redundant transmissions such as forward error correction (FEC). We also implement the proposed method in a wireless testbed, and show that the proposed method achieves high reliability in a real-world environment with a practical degree of complexity when installed on current wireless devices.
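
The core operation, combining native packets with linear network coding and recovering them at a receiver, can be sketched over GF(2), the field used when packets are mixed by XOR. This is a minimal illustration of the principle, not the paper's protocol; the packets and coding vectors are invented, and a receiver that has collected enough linearly independent combinations decodes by solving a linear system over GF(2).

```python
import numpy as np

# Three nodes' native packets (bit vectors), invented for illustration.
payloads = np.array([[1, 0, 1, 1],                    # node A
                     [0, 1, 1, 0],                    # node B
                     [1, 1, 0, 1]], dtype=np.uint8)   # node C

# Coding vectors: which native packets are XORed into each encoded packet.
vectors = np.array([[1, 1, 0],                        # pkt 1 = A xor B
                    [0, 1, 1],                        # pkt 2 = B xor C
                    [1, 1, 1]], dtype=np.uint8)       # pkt 3 = A xor B xor C

def gf2_solve(A, B):
    """Solve A X = B over GF(2) by Gauss-Jordan elimination."""
    A, B = A.copy(), B.copy()
    n = A.shape[0]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r, col])
        A[[col, piv]], B[[col, piv]] = A[[piv, col]], B[[piv, col]]
        for r in range(n):
            if r != col and A[r, col]:
                A[r] ^= A[col]          # row operations are XORs in GF(2)
                B[r] ^= B[col]
    return B

encoded = (vectors @ payloads % 2).astype(np.uint8)   # what gets broadcast
decoded = gf2_solve(vectors, encoded)                 # receiver's recovery
print(decoded)                                        # equals the native packets
```

Exchanging the coding vectors once, in a negotiation phase, is what lets each packet carry only a short membership indicator instead of a full vector, which is the overhead-reduction idea in the abstract.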

  5. Auditory and audio-visual processing in patients with cochlear, auditory brainstem, and auditory midbrain implants: An EEG study.

    PubMed

    Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale

    2017-04-01

    There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times to auditory and audio-visual stimuli compared with normal-hearing (NH) listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may provide important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.

  6. Selective processing of auditory evoked responses with iterative-randomized stimulation and averaging: A strategy for evaluating the time-invariant assumption.

    PubMed

    Valderrama, Joaquin T; de la Torre, Angel; Medina, Carlos; Segura, Jose C; Thornton, A Roger D

    2016-03-01

    The recording of auditory evoked potentials (AEPs) at fast rates allows the study of neural adaptation, improves accuracy in estimating the hearing threshold, and may help in diagnosing certain pathologies. Stimulation sequences used to record AEPs at fast rates must be designed with a certain jitter, i.e., they cannot be periodic. Some authors believe that stimuli from wide-jittered sequences may evoke auditory responses of different morphology, and therefore that the time-invariant assumption would not hold. This paper describes a methodology that can be used to analyze the time-invariant assumption in jittered stimulation sequences. The proposed method [Split-IRSA] is based on an extended version of the iterative randomized stimulation and averaging (IRSA) technique, including selective processing of sweeps according to a predefined criterion. The fundamentals, the mathematical basis, and relevant implementation guidelines of this technique are presented in this paper. The results of this study show that Split-IRSA performs adequately and that both fast and slow mechanisms of adaptation influence the evoked-response morphology; thus both mechanisms should be considered when time-invariance is assumed. The significance of these findings is discussed.
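
The selective-processing idea can be sketched on synthetic data. This is only a toy version of the principle, not the authors' Split-IRSA implementation: sweeps from a jittered fast-rate sequence are split by a predefined criterion, here the preceding inter-stimulus interval (ISI), and each subset is averaged separately so that any ISI-dependent change in response morphology would become visible. All signals below are synthetic.

```python
import numpy as np

fs = 1000                                   # sampling rate, Hz
rng = np.random.default_rng(1)
isis = rng.uniform(0.08, 0.16, size=400)    # jittered ISIs, 80-160 ms
onsets = (np.cumsum(isis) * fs).astype(int)

t = np.arange(int(0.05 * fs)) / fs
template = np.sin(2 * np.pi * 30 * t) * np.exp(-t / 0.01)   # 50 ms evoked response

rec = np.zeros(onsets[-1] + t.size)         # build the continuous recording
for on in onsets:
    rec[on:on + t.size] += template
rec += rng.normal(0, 0.5, size=rec.size)    # background EEG noise

def selective_average(criterion):
    """Average only the sweeps whose preceding ISI satisfies the criterion."""
    epochs = [rec[on:on + t.size]
              for on, isi in zip(onsets[1:], isis[1:]) if criterion(isi)]
    return np.mean(epochs, axis=0)

short = selective_average(lambda isi: isi < 0.12)
long_ = selective_average(lambda isi: isi >= 0.12)
print(np.corrcoef(short, template)[0, 1])   # both subsets recover the response
```

In this synthetic case the response is time-invariant by construction, so both subset averages match the template; in real data, a difference between the two averages is exactly the evidence of adaptation the method is designed to expose.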

  7. Auditory Perception of Complex Sounds.

    DTIC Science & Technology

    1987-10-30

    processes that underlie several aspects of complex pattern recognition -- whether of speech, of music, or of environmental sounds. These patterns differ… quality or timbre can play similar grouping roles in auditory streams. Most of the experimental work has concerned timing of successive sounds in sequences…

  8. Neural Coding of Interaural Time Differences with Bilateral Cochlear Implants in Unanesthetized Rabbits

    PubMed Central

    Hancock, Kenneth E.; Delgutte, Bertrand

    2016-01-01

    …[interaural time difference (ITD)] to identify where the sound is coming from. This problem is especially acute at the high stimulation rates used in clinical CI processors. This study provides a better understanding of ITD processing with bilateral CIs and shows a parallel between human performance in ITD discrimination and neural responses in the auditory midbrain. The present study is the first report on binaural properties of auditory neurons with CIs in unanesthetized animals. PMID:27194332

  9. Process timing and its relation to the coding of tonal harmony.

    PubMed

    Aksentijevic, Aleksandar; Barber, Paul J; Elliott, Mark A

    2011-10-01

    Advances in auditory research suggest that gamma-band synchronization of frequency-specific cortical loci could be responsible for the integration of pure tones (harmonics) into harmonic complex tones. Thus far, evidence for such a mechanism has been revealed in neurophysiological studies, with little corroborative psychophysical evidence. In six experiments, we observed a rate- and time-specific response-time advantage for a sequence of target pips when the defining frequency of the target was a fractional multiple of a priming frequency. The effect was only observed when the prime and target sequences were presented at 33 pips per second and when the interstimulus interval was approximately 100 and 250 ms. This evidence implicates oscillatory gamma-band activity in the representation of harmonic complex tones and suggests that synchronization with precise temporal characteristics is important for disambiguating related harmonic templates. An outline of a model is presented, which accounts for these findings in terms of fast resynchronization of relevant neuronal assemblies.

  10. Automation from pictures: Producing real time code from a state transition diagram

    SciTech Connect

    Kozubal, A.J.

    1991-01-01

    The state transition diagram (STD) model has been helpful in the design of real time software, especially with the emergence of graphical computer aided software engineering (CASE) tools. Nevertheless, the translation of the STD to real time code has in the past been primarily a manual task. At Los Alamos we have automated this process. The designer constructs the STD using a CASE tool (Cadre Teamwork) using a special notation for events and actions. A translator converts the STD into an intermediate state notation language (SNL), and this SNL is compiled directly into C code (a state program). Execution of the state program is driven by external events, allowing multiple state programs to effectively share the resources of the host processor. Since the design and the code are tightly integrated through the CASE tool, the design and code never diverge, and we avoid design obsolescence. Furthermore, the CASE tool automates the production of formal technical documents from the graphic description encapsulated by the CASE tool. 10 refs., 3 figs.
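
The kind of event-driven "state program" such a translator emits can be sketched as a transition table mapping (state, event) pairs to an action and a next state. This is a hypothetical illustration of the pattern, not the Los Alamos SNL compiler's output; all state, event, and action names are invented.

```python
# Event-driven state machine: execution advances only when an external
# event arrives, so many state programs can share one host processor.

def log(msg):
    """Return an action that just prints; stands in for real control actions."""
    return lambda: print(msg)

# (state, event) -> (action, next_state); derived by hand here, but in the
# scheme described above this table would be generated from the STD.
TRANSITIONS = {
    ("idle",    "start"): (log("powering up"),   "running"),
    ("running", "fault"): (log("shutting down"), "idle"),
    ("running", "stop"):  (log("normal stop"),   "idle"),
}

def run(events, state="idle"):
    for ev in events:
        action, nxt = TRANSITIONS.get((state, ev), (None, state))  # ignore undefined events
        if action:
            action()
        state = nxt
    return state

print(run(["start", "fault", "stop"]))   # -> idle ("stop" while idle is ignored)
```

Keeping the table as data rather than as nested conditionals is what makes the design-to-code mapping mechanical, which is the property the CASE-tool integration exploits to keep design and code from diverging.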

  11. Role of precise spike timing in coding of dynamic vibrissa stimuli in somatosensory thalamus.

    PubMed

    Montemurro, Marcelo A; Panzeri, Stefano; Maravall, Miguel; Alenda, Andrea; Bale, Michael R; Brambilla, Marco; Petersen, Rasmus S

    2007-10-01

    Rats discriminate texture by whisking their vibrissae across the surfaces of objects. This process induces corresponding vibrissa vibrations, which must be accurately represented by neurons in the somatosensory pathway. In this study, we investigated the neural code for vibrissa motion in the ventroposterior medial (VPm) nucleus of the thalamus by single-unit recording. We found that neurons conveyed a great deal of information (up to 77.9 bits/s) about vibrissa dynamics. The key was precise spike timing, which typically varied by less than a millisecond from trial to trial. The neural code was sparse, the average spike being remarkably informative (5.8 bits/spike). This implies that as few as four VPm spikes, coding independent information, might reliably differentiate between 10^6 textures. To probe the mechanism of information transmission, we compared the role of time-varying firing rate to that of temporally correlated spike patterns in two ways: 93.9% of the information encoded by a neuron could be accounted for by a hypothetical neuron with the same time-dependent firing rate but no correlations between spikes; moreover, ≥93.4% of the information in the spike trains could be decoded even if temporal correlations were ignored. Taken together, these results suggest that the essence of the VPm code for vibrissa motion is firing rate modulation on a submillisecond timescale. The significance of such a code may be that it enables a small number of neurons, firing only a few spikes, to convey distinctions between very many different textures to the barrel cortex.
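
The claim that four independent spikes suffice for a million textures is just channel-capacity arithmetic, which is easy to verify: four spikes at 5.8 bits each carry 23.2 bits, and 2 to that power comfortably exceeds 10^6.

```python
# Capacity check: bits carried by 4 independent spikes at 5.8 bits/spike.
bits = 4 * 5.8
patterns = 2 ** bits
print(f"{bits} bits -> about {patterns:.2e} distinguishable stimuli")
```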

  12. Adaptation in the auditory system: an overview.

    PubMed

    Pérez-González, David; Malmierca, Manuel S

    2014-01-01

    The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already experience adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of stimuli. However, it is at higher levels in the auditory hierarchy where more sophisticated types of neuronal processing take place. One example is stimulus-specific adaptation, in which neurons adapt to frequent, repetitive stimuli but maintain their responsiveness to stimuli with different physical characteristics, a distinct kind of processing that may play a role in change and deviance detection. In the auditory cortex, adaptation takes more elaborate forms, and contributes to the processing of complex sequences, auditory scene analysis and attention. Here we review the multiple types of adaptation that occur in the auditory system, which are part of the pool of resources that the neurons employ to process the auditory scene, and are critical to a proper understanding of the neuronal mechanisms that govern auditory perception.

  13. Auditory sequence analysis and phonological skill

    PubMed Central

    Grube, Manon; Kumar, Sukhbinder; Cooper, Freya E.; Turton, Stuart; Griffiths, Timothy D.

    2012-01-01

    This work tests the relationship between auditory and phonological skill in a non-selected cohort of 238 school students (age 11) with the specific hypothesis that sound-sequence analysis would be more relevant to phonological skill than the analysis of basic, single sounds. Auditory processing was assessed across the domains of pitch, time and timbre; a combination of six standard tests of literacy and language ability was used to assess phonological skill. A significant correlation between general auditory and phonological skill was demonstrated, plus a significant, specific correlation between measures of phonological skill and the auditory analysis of short sequences in pitch and time. The data support a limited but significant link between auditory and phonological ability with a specific role for sound-sequence analysis, and provide a possible new focus for auditory training strategies to aid language development in early adolescence. PMID:22951739

  14. Imaginary time propagation code for large-scale two-dimensional eigenvalue problems in magnetic fields

    NASA Astrophysics Data System (ADS)

    Luukko, P. J. J.; Räsänen, E.

    2013-03-01

    We present a code for solving the single-particle, time-independent Schrödinger equation in two dimensions. Our program utilizes the imaginary time propagation (ITP) algorithm, and it includes the most recent developments in the ITP method: the arbitrary order operator factorization and the exact inclusion of a (possibly very strong) magnetic field. Our program is able to solve thousands of eigenstates of a two-dimensional quantum system in reasonable time with commonly available hardware. The main motivation behind our work is to allow the study of highly excited states and energy spectra of two-dimensional quantum dots and billiard systems with a single versatile code, e.g., in quantum chaos research. In our implementation we emphasize a modern and easily extensible design, simple and user-friendly interfaces, and an open-source development philosophy. Catalogue identifier: AENR_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENR_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License version 3 No. of lines in distributed program, including test data, etc.: 11310 No. of bytes in distributed program, including test data, etc.: 97720 Distribution format: tar.gz Programming language: C++ and Python. Computer: Tested on x86 and x86-64 architectures. Operating system: Tested under Linux with the g++ compiler. Any POSIX-compliant OS with a C++ compiler and the required external routines should suffice. Has the code been vectorised or parallelized?: Yes, with OpenMP. RAM: 1 MB or more, depending on system size. Classification: 7.3. External routines: FFTW3 (http://www.fftw.org), CBLAS (http://netlib.org/blas), LAPACK (http://www.netlib.org/lapack), HDF5 (http://www.hdfgroup.org/HDF5), OpenMP (http://openmp.org), TCLAP (http://tclap.sourceforge.net), Python (http://python.org), Google Test (http://code.google.com/p/googletest/) Nature of problem: Numerical calculation
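
The core ITP idea the abstract describes, propagating in imaginary time so that excited states decay and the ground state survives, can be shown in a few lines. This is only a minimal sketch (second-order split-operator, no magnetic field, no high-order factorizations, unlike the program above), using the 2D harmonic oscillator with hbar = m = omega = 1, whose exact ground-state energy is 1.

```python
import numpy as np

# Imaginary time propagation: repeatedly apply exp(-tau*H) and renormalize;
# components with higher energy decay fastest, leaving the ground state.
n, L = 64, 12.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
V = 0.5 * (X**2 + Y**2)                        # harmonic potential
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
KX, KY = np.meshgrid(k, k, indexing="ij")
T = 0.5 * (KX**2 + KY**2)                      # kinetic energy in Fourier space

tau = 0.01
psi = np.exp(-((X - 1)**2 + Y**2))             # arbitrary starting state
for _ in range(2000):                          # second-order split-operator step
    psi = psi * np.exp(-0.5 * tau * V)
    psi = np.fft.ifft2(np.exp(-tau * T) * np.fft.fft2(psi))
    psi = psi * np.exp(-0.5 * tau * V)
    psi = psi / np.sqrt(np.sum(np.abs(psi)**2))  # renormalize each step

Hpsi = np.fft.ifft2(T * np.fft.fft2(psi)) + V * psi
E0 = float(np.real(np.sum(np.conj(psi) * Hpsi)))
print(f"ground-state energy ~ {E0:.4f}")       # exact value is 1.0
```

Excited states can be obtained the same way by orthogonalizing against already-converged states after each step, which is how ITP codes reach the thousands of eigenstates mentioned above.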

  15. A Parallel Code for Solving the Molecular Time Dependent Schroedinger Equation in Cartesian Coordinates

    SciTech Connect

    Suarez, J.; Stamatiadis, S.; Farantos, S. C.; Lathouwers, L.

    2009-08-13

    Reproducing molecular dynamics is at the root of the basic principles of chemical change and the physical properties of matter. New insight into molecular encounters can be gained by solving the Schroedinger equation in Cartesian coordinates, provided one can overcome the massive calculations that it implies. We have developed a parallel code for solving the molecular Time Dependent Schroedinger Equation (TDSE) in Cartesian coordinates. Variable order Finite Difference methods result in sparse Hamiltonian matrices which can make the large scale problem solving feasible.
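
The sparsity argument is easy to make concrete. The sketch below (a generic illustration with an arbitrary harmonic potential, not the authors' code) builds a second-order finite-difference Hamiltonian on a 2D Cartesian grid with SciPy sparse matrices and takes one short real-time TDSE step; the nonzero fraction shows why sparse storage makes the large-scale problem feasible.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import expm_multiply

# Second-order central differences on an n x n Cartesian grid, hbar = m = 1.
n, L = 40, 10.0
dx = L / n
x = np.linspace(-L / 2, L / 2, n, endpoint=False)

lap1d = sp.diags([1, -2, 1], [-1, 0, 1], shape=(n, n)) / dx**2
I = sp.identity(n)
lap2d = sp.kron(lap1d, I) + sp.kron(I, lap1d)          # 2D Laplacian
X, Y = np.meshgrid(x, x, indexing="ij")
V = sp.diags(0.5 * (X**2 + Y**2).ravel())              # example potential
H = (-0.5 * lap2d + V).tocsr()

frac = H.nnz / n**4
print(f"nonzeros: {H.nnz} of {n**4} ({100 * frac:.2f}%)")

# One real-time step psi(t+dt) = exp(-i H dt) psi(t); H is Hermitian, so the
# propagation is unitary and the norm is preserved.
psi = np.exp(-(X**2 + Y**2)).ravel().astype(complex)
psi /= np.linalg.norm(psi)
psi = expm_multiply(-1j * 0.01 * H, psi)
print(f"norm after step: {np.linalg.norm(psi):.6f}")
```

Each row of H couples a grid point only to its few finite-difference neighbours, so the nonzero count grows linearly with the number of grid points rather than quadratically, and the matrix-vector products that dominate the time stepping parallelize naturally.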

  16. Application of a VLSI vector quantization processor to real-time speech coding

    NASA Technical Reports Server (NTRS)

    Davidson, G.; Gersho, A.

    1986-01-01

    Attention is given to a working vector quantization processor for speech coding that is based on a first-generation VLSI chip which efficiently performs the pattern-matching operation needed for the codebook search process (CPS). Using this chip, the CPS architecture has been successfully incorporated into a compact, single-board Vector PCM implementation operating at 7-18 kbits/sec. A real-time Adaptive Vector Predictive Coder system using the CPS has also been implemented.
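
The pattern-matching operation the chip accelerates is a full codebook search: for each input frame, find the codevector with the smallest squared error and transmit only its index. The toy below uses random stand-ins for a trained codebook and speech frames, purely to show the operation's shape.

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(256, 10))        # 256 codevectors, dimension 10
frames = rng.normal(size=(5, 10))            # input vectors to encode

# Full-search pattern matching: argmin over squared distances.
d2 = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
indices = d2.argmin(axis=1)                  # transmitted codes, 8 bits each
print(indices)

reconstructed = codebook[indices]            # the decoder's table lookup
print(f"mean distortion: {((frames - reconstructed) ** 2).mean():.3f}")
```

The search cost is one distance per codevector per frame, which is exactly the regular multiply-accumulate workload that maps well onto dedicated VLSI hardware.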

  17. Flexible Radiation Codes for Numerical Weather Prediction Across Space and Time Scales

    DTIC Science & Technology

    2013-09-30

    time and space scales, especially from regional models to global models. OBJECTIVES We are adapting radiation codes developed for climate … PSrad is now complete, thoroughly tested and debugged, and is functioning as the radiation scheme in the climate model ECHAM 6.2 developed at the Max Planck… statistically significant change at most stations, indicating that errors in most places are not primarily driven by radiation errors. We are working

  18. Network Coded Cooperative Communication in a Real-Time Wireless Hospital Sensor Network.

    PubMed

    Prakash, R; Balaji Ganesh, A; Sivabalan, Somu

    2017-05-01

    The paper presents a network coded cooperative communication (NC-CC) enabled wireless hospital sensor network architecture for monitoring the health as well as the postural activities of a patient. A wearable device, referred to as a smartband, is interfaced with pulse rate and body temperature sensors and an accelerometer, along with wireless protocol services such as Bluetooth, a Radio-Frequency transceiver, and Wi-Fi. The energy efficiency of the wearable device is improved by embedding a linear-acceleration-based transmission duty cycling algorithm (LA-TDC). A real-time demonstration is carried out in a hospital environment to evaluate performance characteristics such as power spectral density, energy consumption, signal-to-noise ratio, packet delivery ratio, and transmission offset. The resource sharing and energy efficiency features of the network coding technique are improved by an algorithm referred to as network coding based dynamic retransmit/rebroadcast decision control (NC-DRDC). From the experimental results, it is observed that the proposed algorithms reduce network traffic and end-to-end delay by an average of 27.8% and 21.6%, respectively, compared with traditional network coded wireless transmission. The wireless architecture is deployed in a hospital environment and the results are then successfully validated.

  19. Surveying multidisciplinary aspects in real-time distributed coding for Wireless Sensor Networks.

    PubMed

    Braccini, Carlo; Davoli, Franco; Marchese, Mario; Mongelli, Maurizio

    2015-01-27

    Wireless Sensor Networks (WSNs), where a multiplicity of sensors observe a physical phenomenon and transmit their measurements to one or more sinks, pertain to the class of multi-terminal source and channel coding problems of Information Theory. In this category, "real-time" coding is often encountered for WSNs, referring to the problem of finding the minimum distortion (according to a given measure), under transmission power constraints, attainable by encoding and decoding functions, with stringent limits on delay and complexity. On the other hand, the Decision Theory approach seeks to determine the optimal coding/decoding strategies or some of their structural properties. Since encoder(s) and decoder(s) possess different information, though sharing a common goal, the setting here is that of Team Decision Theory. A more pragmatic vision rooted in Signal Processing consists of fixing the form of the coding strategies (e.g., to linear functions) and, consequently, finding the corresponding optimal decoding strategies and the achievable distortion, generally by applying parametric optimization techniques. All approaches have a long history of past investigations and recent results. The goal of the present paper is to provide a taxonomy of the various formulations, a survey of the vast related literature, examples from the authors' own research, and some highlights on the interplay of the different theories.

  20. Coded throughput performance simulations for the time-varying satellite channel. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Han, LI

    1995-01-01

    The design of a reliable satellite communication link involving the data transfer from a small, low-orbit satellite to a ground station, but through a geostationary satellite, was examined. In such a scenario, the received signal power to noise density ratio increases as the transmitting low-orbit satellite comes into view, and then decreases as it then departs, resulting in a short-duration, time-varying communication link. The optimal values of the small satellite antenna beamwidth, signaling rate, modulation scheme and the theoretical link throughput (in bits per day) have been determined. The goal of this thesis is to choose a practical coding scheme which maximizes the daily link throughput while satisfying a prescribed probability of error requirement. We examine the throughput of both fixed rate and variable rate concatenated forward error correction (FEC) coding schemes for the additive white Gaussian noise (AWGN) channel, and then examine the effect of radio frequency interference (RFI) on the best coding scheme among them. Interleaving is used to mitigate degradation due to RFI. It was found that the variable rate concatenated coding scheme could achieve 74 percent of the theoretical throughput, equivalent to 1.11 Gbits/day based on the cutoff rate R0. For comparison, 87 percent is achievable for the AWGN-only case.
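
The quoted figures pin down the cutoff-rate bound itself: if 1.11 Gbits/day is 74 percent of the theoretical throughput, the theoretical value is about 1.5 Gbits/day.

```python
# Back out the theoretical (cutoff-rate) daily throughput from the abstract.
achieved, fraction = 1.11e9, 0.74          # bits/day, fraction achieved
theoretical = achieved / fraction
print(f"theoretical throughput ~ {theoretical / 1e9:.2f} Gbits/day")
```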

  1. TART 2000: A Coupled Neutron-Photon, 3-D, Combinatorial Geometry, Time Dependent, Monte Carlo Transport Code

    SciTech Connect

    Cullen, D.E

    2000-11-22

    TART2000 is a coupled neutron-photon, 3 Dimensional, combinatorial geometry, time dependent Monte Carlo radiation transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART2000 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART2000 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART2000 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART2000 and its data files.

  2. TART98 a coupled neutron-photon 3-D, combinatorial geometry time dependent Monte Carlo Transport code

    SciTech Connect

    Cullen, D E

    1998-11-22

    TART98 is a coupled neutron-photon, 3 Dimensional, combinatorial geometry, time dependent Monte Carlo radiation transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART98 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART98 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART98 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART98 and its data files.

  3. Power Allocation Strategies for Distributed Space-Time Codes in Amplify-and-Forward Mode

    NASA Astrophysics Data System (ADS)

    Maham, Behrouz; Hjørungnes, Are

    2009-12-01

    We consider a wireless relay network with Rayleigh fading channels and apply distributed space-time coding (DSTC) in amplify-and-forward (AF) mode. It is assumed that the relays have statistical channel state information (CSI) of the local source-relay channels, while the destination has full instantaneous CSI of the channels. It turns out that, combined with the minimum SNR based power allocation in the relays, AF DSTC results in a new opportunistic relaying scheme, in which the best relay is selected to retransmit the source's signal. Furthermore, we have derived the optimum power allocation between two cooperative transmission phases by maximizing the average received SNR at the destination. Next, assuming M-PSK and M-QAM modulations, we analyze the performance of cooperative diversity wireless networks using AF opportunistic relaying. We also derive an approximate formula for the symbol error rate (SER) of AF DSTC. Assuming the use of full-diversity space-time codes, we derive two power allocation strategies minimizing the approximate SER expressions, for constrained transmit power. Our analytical results have been confirmed by simulation results, using full-rate, full-diversity distributed space-time codes.
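
The two-phase power allocation problem can be illustrated numerically. This is a toy version under standard textbook assumptions, not the paper's derivation: the total power P is split between the source broadcast (phase 1) and the relay retransmission (phase 2), and the end-to-end amplify-and-forward SNR is taken as the common relation gamma = g1*g2/(g1 + g2 + 1); the channel gains are invented for illustration.

```python
import numpy as np

P, g_sr, g_rd = 10.0, 1.0, 4.0               # total power; source-relay, relay-dest gains

p = np.linspace(0.01, 0.99, 999)             # fraction of P spent in phase 1
g1 = p * P * g_sr                            # SNR of source-relay hop
g2 = (1 - p) * P * g_rd                      # SNR of relay-destination hop
snr = g1 * g2 / (g1 + g2 + 1)                # end-to-end AF SNR

best = p[snr.argmax()]
print(f"optimal phase-1 fraction ~ {best:.2f}, end-to-end SNR {snr.max():.2f}")
```

With the stronger relay-destination channel in this example, the optimum spends roughly two thirds of the power on the weaker source-relay phase, matching the intuition that the hop with the lower gain limits the cascade.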

  4. Auditory short-term memory in the primate auditory cortex.

    PubMed

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory.

  5. Auditory adaptation improves tactile frequency perception.

    PubMed

    Crommett, Lexi E; Pérez-Bellido, Alexis; Yau, Jeffrey M

    2017-01-11

    Our ability to process temporal frequency information by touch underlies our capacity to perceive and discriminate surface textures. Auditory signals, which also provide extensive temporal frequency information, can systematically alter the perception of vibrations on the hand. How auditory signals shape tactile processing is unclear: perceptual interactions between contemporaneous sounds and vibrations are consistent with multiple neural mechanisms. Here we used a crossmodal adaptation paradigm, which separated auditory and tactile stimulation in time, to test the hypothesis that tactile frequency perception depends on neural circuits that also process auditory frequency. We reasoned that auditory adaptation effects would transfer to touch only if signals from both senses converge on common representations. We found that auditory adaptation can improve tactile frequency discrimination thresholds. This occurred only when adaptor and test frequencies overlapped. In contrast, auditory adaptation did not influence tactile intensity judgments. Thus, auditory adaptation enhances touch in a frequency- and feature-specific manner. A simple network model in which tactile frequency information is decoded from sensory neurons that are susceptible to auditory adaptation recapitulates these behavioral results. Our results imply that the neural circuits supporting tactile frequency perception also process auditory signals. This finding is consistent with the notion of supramodal operators performing canonical operations, like temporal frequency processing, regardless of input modality.

  6. A unified mathematical framework for coding time, space, and sequences in the hippocampal region.

    PubMed

    Howard, Marc W; MacDonald, Christopher J; Tiganj, Zoran; Shankar, Karthik H; Du, Qian; Hasselmo, Michael E; Eichenbaum, Howard

    2014-03-26

    The medial temporal lobe (MTL) is believed to support episodic memory, vivid recollection of a specific event situated in a particular place at a particular time. There is ample neurophysiological evidence that the MTL computes location in allocentric space and more recent evidence that the MTL also codes for time. Space and time represent a similar computational challenge; both are variables that cannot be simply calculated from the immediately available sensory information. We introduce a simple mathematical framework that computes functions of both spatial location and time as special cases of a more general computation. In this framework, experience unfolding in time is encoded via a set of leaky integrators. These leaky integrators encode the Laplace transform of their input. The information contained in the transform can be recovered using an approximation to the inverse Laplace transform. In the temporal domain, the resulting representation reconstructs the temporal history. By integrating movements, the equations give rise to a representation of the path taken to arrive at the present location. By modulating the transform with information about allocentric velocity, the equations code for position of a landmark. Simulated cells show a close correspondence to neurons observed in various regions for all three cases. In the temporal domain, novel secondary analyses of hippocampal time cells verified several qualitative predictions of the model. An integrated representation of spatiotemporal context can be computed by taking conjunctions of these elemental inputs, leading to a correspondence with conjunctive neural representations observed in dorsal CA1.
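
    The first stage of this framework is concrete enough to sketch: a bank of leaky integrators, each with its own decay rate s, whose state after stimulation equals the Laplace transform of the input history. The sketch below (a hedged illustration; step sizes and rates are hypothetical) drives such a bank with a single impulse and checks the state against the known transform exp(-s·lag) of a delayed delta.

```python
import numpy as np

# Bank of leaky integrators dF/dt = -s*F + f(t); each unit's state is the
# Laplace transform, at its own rate s, of the input history so far.
s_grid = np.linspace(0.5, 5.0, 10)     # decay rates (hypothetical values)
dt = 1e-3
steps = 2000                           # simulate 2 s of experience
impulse_step = 500                     # unit-area impulse delivered at t = 0.5 s

F = np.zeros_like(s_grid)
for n in range(steps):
    f_t = 1.0 / dt if n == impulse_step else 0.0
    F = F + dt * (-s_grid * F + f_t)   # forward-Euler update of every unit

lag = (steps - impulse_step) * dt      # time elapsed since the impulse
# For a delta input, the exact transform of the history is exp(-s * lag).
```

    Recovering the timeline itself, as the framework describes, would apply an approximation to the inverse Laplace transform (e.g., the Post inversion formula) across this bank; spatial paths follow by modulating the same equations with velocity signals.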

  7. A Unified Mathematical Framework for Coding Time, Space, and Sequences in the Hippocampal Region

    PubMed Central

    MacDonald, Christopher J.; Tiganj, Zoran; Shankar, Karthik H.; Du, Qian; Hasselmo, Michael E.; Eichenbaum, Howard

    2014-01-01

    The medial temporal lobe (MTL) is believed to support episodic memory, vivid recollection of a specific event situated in a particular place at a particular time. There is ample neurophysiological evidence that the MTL computes location in allocentric space and more recent evidence that the MTL also codes for time. Space and time represent a similar computational challenge; both are variables that cannot be simply calculated from the immediately available sensory information. We introduce a simple mathematical framework that computes functions of both spatial location and time as special cases of a more general computation. In this framework, experience unfolding in time is encoded via a set of leaky integrators. These leaky integrators encode the Laplace transform of their input. The information contained in the transform can be recovered using an approximation to the inverse Laplace transform. In the temporal domain, the resulting representation reconstructs the temporal history. By integrating movements, the equations give rise to a representation of the path taken to arrive at the present location. By modulating the transform with information about allocentric velocity, the equations code for position of a landmark. Simulated cells show a close correspondence to neurons observed in various regions for all three cases. In the temporal domain, novel secondary analyses of hippocampal time cells verified several qualitative predictions of the model. An integrated representation of spatiotemporal context can be computed by taking conjunctions of these elemental inputs, leading to a correspondence with conjunctive neural representations observed in dorsal CA1. PMID:24672015

  8. Bearing performance degradation assessment based on time-frequency code features and SOM network

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Tang, Baoping; Han, Yan; Deng, Lei

    2017-04-01

    Bearing performance degradation assessment and prognostics are extremely important in supporting maintenance decisions and guaranteeing the system's reliability. To achieve this goal, this paper proposes a novel feature extraction method for the degradation assessment and prognostics of bearings. Features of time-frequency codes (TFCs) are extracted from the time-frequency distribution using a hybrid procedure based on short-time Fourier transform (STFT) and non-negative matrix factorization (NMF) theory. An alternative way to design the health indicator is investigated by quantifying the similarity between feature vectors using a self-organizing map (SOM) network. On the basis of this idea, a new health indicator called time-frequency code quantification error (TFCQE) is proposed to assess the performance degradation of the bearing. This indicator is constructed based on the bearing's real-time behavior and a SOM model previously trained with only the TFC vectors from the normal condition. Vibration signals collected from bearing run-to-failure tests are used to validate the developed method. The comparison results demonstrate the superiority of the proposed TFCQE indicator over many other traditional features in terms of feature quality metrics, incipient degradation identification, and achieving accurate prediction.
    Highlights:
    • Time-frequency codes are extracted to reflect the signals' characteristics.
    • The SOM network serves as a tool to quantify the similarity between feature vectors.
    • A new health indicator is proposed to demonstrate the whole stage of degradation development.
    • The method is useful for extracting degradation features and detecting incipient degradation.
    • The superiority of the proposed method is verified using experimental data.
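
    The TFCQE idea, a SOM trained only on normal-condition feature vectors whose quantization error then serves as the health indicator, can be sketched with a toy self-organizing map. This is an illustrative reimplementation on synthetic features, not the authors' code; all parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, n_units=8, epochs=30, lr0=0.5, sigma0=2.0):
    """Train a tiny 1-D self-organizing map and return its codebook."""
    W = data[rng.integers(len(data), size=n_units)].astype(float).copy()
    coords = np.arange(n_units)
    for e in range(epochs):
        lr = lr0 * (1.0 - e / epochs)                  # learning rate decays
        sigma = max(sigma0 * (1.0 - e / epochs), 0.5)  # neighborhood shrinks
        for x in data[rng.permutation(len(data))]:
            bmu = int(np.argmin(np.linalg.norm(W - x, axis=1)))  # best matching unit
            h = np.exp(-((coords - bmu) ** 2) / (2.0 * sigma ** 2))
            W += lr * h[:, None] * (x - W)
    return W

def quantization_error(W, x):
    """Distance to the best matching unit: the health-indicator value for x."""
    return float(np.min(np.linalg.norm(W - x, axis=1)))

normal_features = rng.normal(0.0, 0.1, size=(200, 4))  # normal-condition TFC-like vectors
degraded_feature = rng.normal(1.0, 0.1, size=4)        # vector after degradation sets in

som = train_som(normal_features)
qe_normal = quantization_error(som, normal_features[0])
qe_degraded = quantization_error(som, degraded_feature)
```

    Because the codebook is fitted only to the normal condition, vectors from a degrading bearing fall far from every unit, so the quantization error rises with degradation.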

  9. Lab-on-a-chip flow cytometer employing color-space-time coding.

    PubMed

    Cho, Sung Hwan; Qiao, Wen; Tsai, Frank S; Yamashita, Kenichi; Lo, Yu-Hwa

    2010-08-30

    We describe a fluorescent detection technique for a lab-on-a-chip flow cytometer. Fluorescent emission is encoded into a time-dependent signal as a fluorescent cell or bead traverses a waveguide array with integrated spatial filters and color filters. Unlike conventional color filters with well-defined transmission spectral windows, the integrated color filters are designed to have broad transmission characteristics, similar to the red-green-blue photoreceptors in the retina of the human eye. This unique design allows us to detect multiple fluorescent colors with only three color filters, based on the technique of color-space-time coding, using only a single photomultiplier tube or avalanche photodetector.

  10. MINVAR: a local optimization criterion for rate-distortion tradeoff in real time video coding

    NASA Astrophysics Data System (ADS)

    Chen, Zhenzhong; Ngan, King Ngi

    2005-10-01

    In this paper, we propose a minimum variation (MINVAR) distortion criterion based approach for the rate distortion tradeoff in video coding. The MINVAR based rate distortion tradeoff framework provides a local optimization strategy as a rate control mechanism in real time video coding applications by minimizing the distortion variation while the corresponding bit rate fluctuation is limited by utilizing the encoder buffer. We use the H.264 video codec to evaluate the performance of the proposed method. As shown in the simulation results, the decoded picture quality of the proposed approach is smoother than that of the traditional H.264 joint model (JM) rate control algorithm. The global video quality, the average PSNR, is maintained while a better subjective visual quality is guaranteed.

  11. A 2.9 ps equivalent resolution interpolating time counter based on multiple independent coding lines

    NASA Astrophysics Data System (ADS)

    Szplet, R.; Jachna, Z.; Kwiatkowski, P.; Rozyc, K.

    2013-03-01

    We present the design, operation, and test results of a time counter that has an equivalent resolution of 2.9 ps, a measurement uncertainty at the level of 6 ps, and a measurement range of 10 s. The time counter has been implemented in a general-purpose reprogrammable device (Xilinx Spartan-6). To obtain both high precision and a wide measurement range, the counting of periods of a reference clock is combined with a two-stage interpolation within a single period of the clock signal. The interpolation involves a four-phase clock in the first interpolation stage (FIS) and an equivalent coding line (ECL) in the second interpolation stage (SIS). The ECL is created as a compound of independent discrete time coding lines (TCLs). The number of TCLs used to create the virtual ECL determines its resolution. We tested ECLs made from up to 16 TCLs, but the idea may be extended to a larger number of lines. In the presented time counter, the coarse resolution of the counting method, equal to 2 ns (the period of the 500 MHz reference clock), is improved first fourfold in the FIS and then by more than a factor of 400 in the SIS. The proposed solution allows us to overcome technological limitations on achievable resolution and improve the conversion precision of integrated interpolators based on tapped delay lines.
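
    The benefit of combining several independent time coding lines into one equivalent coding line can be illustrated with a toy quantization model (an assumption-laden sketch, not the counter's actual circuit): lines with staggered offsets sample the same interval, and averaging their codes yields a finer equivalent resolution than any single line.

```python
import numpy as np

q = 40.0                                    # one line's quantization step (ps, hypothetical)
n_lines = 16
offsets = q * np.arange(n_lines) / n_lines  # staggered propagation offsets per line

def ecl_measure(t_true):
    """Each TCL quantizes (t + offset) to its own grid; averaging the per-line
    codes (plus a half-step bias correction) gives the equivalent-coding-line
    estimate, whose effective step is q / n_lines."""
    codes = np.floor((t_true + offsets) / q) * q - offsets
    return codes.mean() + q / 2.0

t = 137.0                                   # interval to convert (ps)
err_single = abs(np.floor(t / q) * q + q / 2.0 - t)  # one line, same bias correction
err_ecl = abs(ecl_measure(t) - t)
```

    In this model the worst-case error drops from about q/2 for a single line to about q/(2·n_lines) for the averaged ensemble, which is the sense in which the virtual ECL has a finer equivalent resolution.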

  12. Timing Precision in Population Coding of Natural Scenes in the Early Visual System

    PubMed Central

    Desbordes, Gaëlle; Jin, Jianzhong; Weng, Chong; Lesica, Nicholas A; Stanley, Garrett B; Alonso, Jose-Manuel

    2008-01-01

    The timing of spiking activity across neurons is a fundamental aspect of the neural population code. Individual neurons in the retina, thalamus, and cortex can have very precise and repeatable responses but exhibit degraded temporal precision in response to suboptimal stimuli. To investigate the functional implications for neural populations in natural conditions, we recorded in vivo the simultaneous responses, to movies of natural scenes, of multiple thalamic neurons likely converging to a common neuronal target in primary visual cortex. We show that the response of individual neurons is less precise at lower contrast, but that spike timing precision across neurons is relatively insensitive to global changes in visual contrast. Overall, spike timing precision within and across cells is on the order of 10 ms. Since closely timed spikes are more efficient in inducing a spike in downstream cortical neurons, and since fine temporal precision is necessary to represent the more slowly varying natural environment, we argue that preserving relative spike timing at a ∼10-ms resolution is a crucial property of the neural code entering cortex. PMID:19090624

  13. Estimating the relative weights of visual and auditory tau versus heuristic-based cues for time-to-contact judgments in realistic, familiar scenes by older and younger adults.

    PubMed

    Keshavarz, Behrang; Campos, Jennifer L; DeLucia, Patricia R; Oberfeld, Daniel

    2017-04-01

    Estimating time to contact (TTC) involves multiple sensory systems, including vision and audition. Previous findings suggested that the ratio of an object's instantaneous optical size/sound intensity to its instantaneous rate of change in optical size/sound intensity (τ) drives TTC judgments. Other evidence has shown that heuristic-based cues are used, including final optical size or final sound pressure level. Most previous studies have used decontextualized and unfamiliar stimuli (e.g., geometric shapes on a blank background). Here we evaluated TTC estimates by using a traffic scene with an approaching vehicle to evaluate the weights of visual and auditory TTC cues under more realistic conditions. Younger (18-39 years) and older (65+ years) participants made TTC estimates in three sensory conditions: visual-only, auditory-only, and audio-visual. Stimuli were presented within an immersive virtual-reality environment, and cue weights were calculated for both visual cues (e.g., visual τ, final optical size) and auditory cues (e.g., auditory τ, final sound pressure level). The results demonstrated the use of visual τ as well as heuristic cues in the visual-only condition. TTC estimates in the auditory-only condition, however, were primarily based on an auditory heuristic cue (final sound pressure level), rather than on auditory τ. In the audio-visual condition, the visual cues dominated overall, with the highest weight being assigned to visual τ by younger adults, and a more equal weighting of visual τ and heuristic cues in older adults. Overall, better characterizing the effects of combined sensory inputs, stimulus characteristics, and age on the cues used to estimate TTC will provide important insights into how these factors may affect everyday behavior.
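
    The τ variable itself is easy to make concrete: it is the instantaneous optical size divided by its rate of change, and for a constant-velocity approach it equals the true time to contact. A minimal sketch, with all scene parameters hypothetical:

```python
import numpy as np

# Object of physical size S approaching at constant speed v from distance d0;
# under the small-angle approximation its optical size is theta(t) = S / d(t).
S, d0, v = 2.0, 50.0, 10.0            # metres and m/s (hypothetical scene)
t = np.linspace(0.0, 3.0, 3001)
d = d0 - v * t
theta = S / d

theta_dot = np.gradient(theta, t)     # rate of optical expansion
tau = theta / theta_dot               # the tau variable from the abstract

i = 1000                              # index of t = 1.0 s
true_ttc = d[i] / v                   # remaining time to contact at that moment
```

    The same ratio applied to sound intensity and its rate of change gives the auditory τ discussed in the abstract; the heuristic alternatives (final optical size, final sound pressure level) use only the instantaneous magnitude, not the ratio.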

  14. A two-level space-time color-coding method for 3D measurements using structured light

    NASA Astrophysics Data System (ADS)

    Xue, Qi; Wang, Zhao; Huang, Junhui; Gao, Jianmin; Qi, Zhaoshuai

    2015-11-01

    Color-coding methods have significantly improved the measurement efficiency of structured light systems. However, some problems, such as color crosstalk and chromatic aberration, decrease the measurement accuracy of the system. A two-level space-time color-coding method is thus proposed in this paper. The method, which includes a space-code level and a time-code level, is shown to be reliable and efficient. The influence of chromatic aberration is completely mitigated when using this method. Additionally, a self-adaptive windowed Fourier transform is used to eliminate all color crosstalk components. Theoretical analyses and experiments have shown that the proposed coding method solves the problems of color crosstalk and chromatic aberration effectively. Additionally, the method guarantees high measurement accuracy which is very close to the measurement accuracy using monochromatic coded patterns.

  15. Two Novel Space-Time Coding Techniques Designed for UWB MISO Systems Based on Wavelet Transform

    PubMed Central

    Zaki, Amira Ibrahim; El-Khamy, Said E.

    2016-01-01

    In this paper, two novel space-time coding multi-input single-output (STC MISO) schemes, designed especially for Ultra-Wideband (UWB) systems, are introduced. The proposed schemes are referred to as wavelet space-time coding (WSTC) schemes. The WSTC schemes are based on two types of multiplexing: spatial and wavelet-domain multiplexing. In WSTC schemes, four symbols are transmitted on the same UWB transmission pulse with the same bandwidth, symbol duration, and number of transmitting antennas as the conventional STC MISO scheme. The mother wavelet (MW) is selected to be highly correlated with the transmitted pulse shape and such that the multiplexed signal has almost the same spectral characteristics as those of the original UWB pulse. The two WSTC techniques increase the data rate to four times that of the conventional STC. The first WSTC scheme increases the data rate with a simple combination process. The second scheme achieves the increase in data rate with a less complex receiver and better performance than the first, due to the spatial diversity introduced by the structure of its transmitter and receiver. Both schemes use Rake receivers to collect the energy in the dense multipath channel components. The simulation results show that the proposed WSTC schemes outperform the conventional scheme, in addition to increasing the data rate to four times that of the conventional STC scheme. PMID:27959939

  16. General relativistic radiative transfer code in rotating black hole space-time: ARTIST

    NASA Astrophysics Data System (ADS)

    Takahashi, Rohta; Umemura, Masayuki

    2017-02-01

    We present a general relativistic radiative transfer code, ARTIST (Authentic Radiative Transfer In Space-Time), a perfectly causal scheme that pursues the propagation of radiation with absorption and scattering around a Kerr black hole. The code explicitly solves the invariant radiation intensity along null geodesics in the Kerr-Schild coordinates, and therefore properly includes light bending, Doppler boosting, frame dragging, and gravitational redshifts. The notable aspect of ARTIST is that it conserves the radiative energy with high accuracy and is not subject to numerical diffusion, since the transfer is solved on long characteristics along null geodesics. We first solve the wavefront propagation around a Kerr black hole that was originally explored by Hanni. This demonstrates repeated wavefront collisions, light bending, and causal propagation of radiation with the speed of light. We show that the decay rate of the total energy of wavefronts near a black hole is determined solely by the black hole spin in late phases, in agreement with analytic expectations. ARTIST thus correctly solves the general relativistic radiation fields into late phases, t ˜ 90 M. We also explore the effects of absorption and scattering, and apply the code to a photon wall problem and an orbiting hotspot problem. All the simulations in this study are performed in the equatorial plane around a Kerr black hole. ARTIST is a first step toward general relativistic radiation hydrodynamics.

  17. Comparison of WDM/Pulse-Position-Modulation (WDM/PPM) with Code/Pulse-Position-Swapping (C/PPS) Based on Wavelength/Time Codes

    SciTech Connect

    Mendez, A J; Hernandez, V J; Gagliardi, R M; Bennett, C V

    2009-06-19

    Pulse position modulation (PPM) signaling is favored in intensity modulated/direct detection (IM/DD) systems that have average power limitations. Combining PPM with WDM over a fiber link (WDM/PPM) enables multiple accessing and increases the link's throughput. Electronic bandwidth and synchronization advantages are further gained by mapping the time slots of PPM onto a code space, or code/pulse-position-swapping (C/PPS). The property of multiple bits per symbol typical of PPM can be combined with multiple accessing by using wavelength/time [W/T] codes in C/PPS. This paper compares the performance of WDM/PPM and C/PPS for equal wavelengths and bandwidth.
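
    The mapping at the heart of PPM, log2(M) bits selecting one of M pulse positions per frame, can be sketched as follows (an illustrative toy codec; frame size and bit pattern are arbitrary):

```python
import math

def ppm_encode(bits, M=16):
    """Map each log2(M)-bit group to a single pulse position in a frame of M slots."""
    k = int(math.log2(M))
    assert len(bits) % k == 0, "bit string must divide into k-bit symbols"
    frames = []
    for i in range(0, len(bits), k):
        pos = int("".join(str(b) for b in bits[i:i + k]), 2)
        frame = [0] * M
        frame[pos] = 1                      # one pulse per frame carries k bits
        frames.append(frame)
    return frames

def ppm_decode(frames, M=16):
    """Recover the bits from the pulse position in each frame."""
    k = int(math.log2(M))
    bits = []
    for frame in frames:
        pos = frame.index(1)
        bits.extend(int(b) for b in format(pos, f"0{k}b"))
    return bits

payload = [1, 0, 1, 1, 0, 0, 1, 0]          # 8 bits -> 2 frames of 16 slots
frames = ppm_encode(payload)
```

    The single pulse per frame is what makes PPM attractive under an average-power limit; C/PPS, as described above, replaces the time-slot index with a codeword index in a wavelength/time code space, keeping the multiple-bits-per-symbol property.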

  18. Rejection positivity predicts trial-to-trial reaction times in an auditory selective attention task: a computational analysis of inhibitory control

    PubMed Central

    Chen, Sufen; Melara, Robert D.

    2014-01-01

    A series of computer simulations using variants of a formal model of attention (Melara and Algom, 2003) probed the role of rejection positivity (RP), a slow-wave electroencephalographic (EEG) component, in the inhibitory control of distraction. Behavioral and EEG data were recorded as participants performed auditory selective attention tasks. Simulations that modulated processes of distractor inhibition accounted well for reaction-time (RT) performance, whereas those that modulated target excitation did not. A model that incorporated RP from actual EEG recordings in estimating distractor inhibition was superior in predicting changes in RT as a function of distractor salience across conditions. A model that additionally incorporated momentary fluctuations in EEG as the source of trial-to-trial variation in performance precisely predicted individual RTs within each condition. The results lend support to the linking proposition that RP controls the speed of responding to targets through the inhibitory control of distractors. PMID:25191244

  19. Numerical relativity for D dimensional axially symmetric space-times: Formalism and code tests

    NASA Astrophysics Data System (ADS)

    Zilhão, Miguel; Witek, Helvi; Sperhake, Ulrich; Cardoso, Vitor; Gualtieri, Leonardo; Herdeiro, Carlos; Nerozzi, Andrea

    2010-04-01

    The numerical evolution of Einstein’s field equations in a generic background has the potential to answer a variety of important questions in physics: from applications to the gauge-gravity duality, to modeling black hole production in TeV gravity scenarios, to analysis of the stability of exact solutions, and to tests of cosmic censorship. In order to investigate these questions, we extend numerical relativity to more general space-times than those investigated hitherto, by developing a framework to study the numerical evolution of D dimensional vacuum space-times with an SO(D-2) isometry group for D≥5, or SO(D-3) for D≥6. Performing a dimensional reduction on a (D-4) sphere, the D dimensional vacuum Einstein equations are rewritten as a 3+1 dimensional system with source terms, and presented in the Baumgarte, Shapiro, Shibata, and Nakamura formulation. This allows the use of existing 3+1 dimensional numerical codes with small adaptations. Brill-Lindquist initial data are constructed in D dimensions and a procedure to match them to our 3+1 dimensional evolution equations is given. We have implemented our framework by adapting the Lean code and perform a variety of simulations of nonspinning black hole space-times. Specifically, we present a modified moving puncture gauge, which facilitates long-term stable simulations in D=5. We further demonstrate the internal consistency of the code by studying convergence and comparing numerical versus analytic results in the case of geodesic slicing for D=5, 6.

  20. Process Timing and Its Relation to the Coding of Tonal Harmony

    ERIC Educational Resources Information Center

    Aksentijevic, Aleksandar; Barber, Paul J.; Elliott, Mark A.

    2011-01-01

    Advances in auditory research suggest that gamma-band synchronization of frequency-specific cortical loci could be responsible for the integration of pure tones (harmonics) into harmonic complex tones. Thus far, evidence for such a mechanism has been revealed in neurophysiological studies, with little corroborative psychophysical evidence. In six…

  1. Spectro-temporal shaping of supercontinuum for subnanosecond time-coded M-CARS spectroscopy.

    PubMed

    Shalaby, Badr M; Louot, Christophe; Capitaine, Erwan; Krupa, Katarzyna; Labruyère, Alexis; Tonello, Alessandro; Pagnoux, Dominique; Leproux, Philippe; Couderc, Vincent

    2016-11-01

    A supercontinuum laser source was designed for multiplex-coherent anti-Stokes Raman scattering spectroscopy. This source was based on the use of a germanium-doped standard optical fiber with a zero dispersion wavelength at 1600 nm and pumped at 1064 nm. We analyzed the nonlinear spectro-temporal interrelations of a subnanosecond pulse propagating in a normal dispersion regime in the presence of a multiple Raman cascading process and strong conversion. The multiple Raman orders permitted the generation of a high-power flat spectrum with a specific nonlinear dynamics that can open the way to subnanosecond time-coded multiplex CARS systems.

  2. Just-in-time coding of the problem list in a clinical environment.

    PubMed Central

    Warren, J. J.; Collins, J.; Sorrentino, C.; Campbell, J. R.

    1998-01-01

    Clinically useful problem lists are essential to the computer-based patient record (CPR). Providing a terminology that is standardized and understood by all clinicians is a major challenge. UNMC has developed a lexicon to support its problem list. Using a just-in-time coding strategy, the lexicon is maintained and extended prospectively in a dynamic clinical environment. The terms in the lexicon are mapped to the ICD-9-CM, NANDA, and SNOMED International classification schemes. Currently, the lexicon contains 12,000 terms. The process of developing and maintaining the lexicon is described. PMID:9929226

  3. A real-time chirp-coded imaging system with tissue attenuation compensation.

    PubMed

    Ramalli, A; Guidi, F; Boni, E; Tortoli, P

    2015-07-01

    In ultrasound imaging, pulse compression methods based on the transmission (TX) of long coded pulses and matched receive filtering can be used to improve the penetration depth while preserving the axial resolution (coded-imaging). The performance of most of these methods is affected by the frequency dependent attenuation of tissue, which causes mismatch of the receiver filter. This, together with the involved additional computational load, has probably so far limited the implementation of pulse compression methods in real-time imaging systems. In this paper, a real-time low-computational-cost coded-imaging system operating on the beamformed and demodulated data received by a linear array probe is presented. The system has been implemented by extending the firmware and the software of the ULA-OP research platform. In particular, pulse compression is performed by exploiting the computational resources of a single digital signal processor. Each image line is produced in less than 20 μs, so that, e.g., 192-line frames can be generated at up to 200 fps. Although the system may work with a large class of codes, this paper has been focused on the test of linear frequency modulated chirps. The new system has been used to experimentally investigate the effects of tissue attenuation so that the design of the receive compression filter can be accordingly guided. Tests made with different chirp signals confirm that, although the attainable compression gain in attenuating media is lower than the theoretical value expected for a given TX Time-Bandwidth product (BT), good SNR gains can be obtained. For example, by using a chirp signal having BT=19, a 13 dB compression gain has been measured. By adapting the frequency band of the receiver to the band of the received echo, the signal-to-noise ratio and the penetration depth have been further increased, as shown by real-time tests conducted on phantoms and in vivo. In particular, a 2.7 dB SNR increase has been measured through a
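
    The pulse-compression principle behind such chirp-coded imaging, matched filtering of a received linear-frequency-modulated echo against the transmitted chirp, can be sketched in a few lines. This is a simplified, noise-free illustration; the sample rate, sweep, BT product, and delay are hypothetical, and no tissue attenuation or filter mismatch is modeled.

```python
import numpy as np

fs = 40e6                       # sample rate, Hz
T = 10e-6                       # chirp duration
B = 2e6                         # swept bandwidth -> BT = 20
f0 = 1e6                        # start frequency
n = int(T * fs)                 # 400 samples per chirp
t = np.arange(n) / fs
chirp = np.sin(2 * np.pi * (f0 * t + 0.5 * (B / T) * t ** 2))  # LFM sweep 1 -> 3 MHz

# Received trace: the transmitted chirp delayed by 400 samples.
delay = 400
rx = np.zeros(2048)
rx[delay:delay + n] = chirp

# Matched filter = correlation with the transmitted chirp; the long coded
# pulse compresses to a sharp peak at the echo's arrival time.
compressed = np.correlate(rx, chirp, mode="valid")
peak = int(np.argmax(np.abs(compressed)))
```

    In an attenuating medium the received band shifts and narrows with depth, which mismatches this fixed filter; adapting the receiver band to the echo band, as the paper does, recovers part of the lost gain.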

  4. Imaging of human tooth using ultrasound based chirp-coded nonlinear time reversal acoustics.

    PubMed

    Dos Santos, Serge; Prevorovsky, Zdenek

    2011-08-01

    Human tooth imaging sonography is investigated experimentally with a noncoupling acousto-optic set-up based on the chirp-coded nonlinear time reversal acoustic concept. The complexity of the tooth's internal structure (enamel-dentine interface, cracks between internal tubules) is analyzed by adapting nonlinear elastic wave spectroscopy (NEWS) with the objective of tomography of damage. Optimization of excitations using intrinsic symmetries, such as time reversal (TR) invariance, reciprocity, and correlation properties, is then proposed and implemented experimentally. The proposed medical application of this TR-NEWS approach is implemented on a third molar human tooth and constitutes an alternative to noncoupling echodentography techniques. A 10 MHz bandwidth ultrasonic instrumentation has been developed, including a laser vibrometer and a 20 MHz contact piezoelectric transducer. The calibrated chirp-coded TR-NEWS imaging of the tooth is obtained using symmetrized excitations, pre- and post-signal processing, and the highly sensitive 14-bit-resolution TR-NEWS instrumentation previously calibrated. A nonlinear signature arising from the symmetry properties is observed experimentally in the tooth using this bi-modal TR-NEWS imaging, before and after the focusing induced by the time-compression process. The TR-NEWS polar B-scan of the tooth is described and suggested as a potential application for modern echodentography. It constitutes the basis of self-consistent harmonic imaging sonography for monitoring crack propagation in the dentine, which is responsible for the tooth's structural health.

  5. Architecture for time or transform domain decoding of reed-solomon codes

    NASA Technical Reports Server (NTRS)

    Shao, Howard M. (Inventor); Truong, Trieu-Kie (Inventor); Hsu, In-Shek (Inventor); Deutsch, Leslie J. (Inventor)

    1989-01-01

    Two pipeline (255,223) RS decoders, one a time domain decoder and the other a transform domain decoder, use the same first part to develop an errata locator polynomial τ(x) and an errata evaluator polynomial A(x). Both the time domain decoder and the transform domain decoder have a modified GCD that uses an input multiplexer and an output demultiplexer to reduce the number of GCD cells required. The time domain decoder uses a Chien search and polynomial evaluator on the GCD outputs τ(x) and A(x) for the final decoding steps, while the transform domain decoder uses a transform error pattern algorithm operating on τ(x) and the initial syndrome computation S(x), followed by an inverse transform algorithm in sequence for the final decoding steps, prior to adding the received RS coded message to produce a decoded output message.

  6. Audio Signal Processing Using Time-Frequency Approaches: Coding, Classification, Fingerprinting, and Watermarking

    NASA Astrophysics Data System (ADS)

    Umapathy, K.; Ghoraani, B.; Krishnan, S.

    2010-12-01

    Audio signals are information-rich nonstationary signals that play an important role in our day-to-day communication, perception of the environment, and entertainment. Due to their nonstationary nature, time-only or frequency-only approaches are inadequate for analyzing these signals. A joint time-frequency (TF) approach is a better choice for processing these signals efficiently. In this digital era, compression, intelligent indexing for content-based retrieval, classification, and protection of digital audio content are a few of the areas that encapsulate a majority of audio signal processing applications. In this paper, we present a comprehensive array of TF methodologies that successfully address applications in all of the above-mentioned areas. A TF-based audio coding scheme with a novel psychoacoustics model, music classification, audio classification of environmental sounds, audio fingerprinting, and audio watermarking will be presented to demonstrate the advantages of using time-frequency approaches in analyzing and extracting information from audio signals.

  7. A really complicated problem: Auditory scene analysis

    NASA Astrophysics Data System (ADS)

    Yost, William A.

    2004-05-01

    It has been more than a decade since Al Bregman and other authors brought the challenge of auditory scene analysis back to the attention of auditory science. While a great deal of research has been done on and around this topic, an accepted theory of auditory scene analysis has not evolved. Auditory science has little, if any, information about how the nervous system solves this problem, and there have not been any major successes in developing computational methods that solve the problem for most real-world auditory scenes. I will argue that the major reason more has not been accomplished is that auditory scene analysis is a really hard problem. If one starts with a single sound source and tries to understand how the auditory system determines this single source, the problem is already very complicated, even before adding other sources that occur at the same time, as in the typical depiction of the auditory scene. In this paper I will illustrate some of the challenges involved in determining the auditory scene that have not received much attention, as well as some of the more widely discussed aspects of the challenge. [Work supported by NIDCD.]

  8. Auditory and audiovisual inhibition of return.

    PubMed

    Spence, C; Driver, J

    1998-01-01

    Two experiments examined inhibition-of-return (IOR) effects from auditory cues and from preceding auditory targets upon reaction times (RTs) for detecting subsequent auditory targets. Auditory RT was delayed if the preceding auditory cue was on the same side as the target, but was unaffected by the location of the auditory target from the preceding trial, suggesting that response inhibition for the cue may have produced its effects. By contrast, visual detection RT was inhibited by the ipsilateral presentation of a visual target on the preceding trial. In a third experiment, targets could be unpredictably auditory or visual, and no peripheral cues intervened. Both auditory and visual detection RTs were now delayed following an ipsilateral versus contralateral target in either modality on the preceding trial, even when eye position was monitored to ensure central fixation throughout. These data suggest that auditory target-target IOR arises only when target modality is unpredictable. They also provide the first unequivocal evidence for cross-modal IOR, since, unlike other recent studies (e.g., Reuter-Lorenz, Jha, & Rosenquist, 1996; Tassinari & Berlucchi, 1995; Tassinari & Campara, 1996), the present cross-modal effects cannot be explained in terms of response inhibition for the cue. The results are discussed in relation to neurophysiological studies and audiovisual links in saccade programming.

  9. TTVFast: An efficient and accurate code for transit timing inversion problems

    SciTech Connect

    Deck, Katherine M.; Agol, Eric; Holman, Matthew J.; Nesvorný, David

    2014-06-01

    Transit timing variations (TTVs) have proven to be a powerful technique for confirming Kepler planet candidates, for detecting non-transiting planets, and for constraining the masses and orbital elements of multi-planet systems. These TTV applications often require the numerical integration of orbits for computation of transit times (as well as impact parameters and durations); frequently tens of millions to billions of simulations are required when running statistical analyses of the planetary system properties. We have created a fast code for transit timing computation, TTVFast, which uses a symplectic integrator with a Keplerian interpolator for the calculation of transit times. The speed comes at the expense of accuracy in the calculated times, but the accuracy lost is largely unnecessary, as transit times do not need to be calculated to accuracies significantly smaller than the measurement uncertainties on the times. The time step can be tuned to give sufficient precision for any particular system. We find a speed-up of at least an order of magnitude relative to dynamical integrations with high precision using a Bulirsch-Stoer integrator.
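The core idea, a symplectic integrator whose per-step accuracy is traded for speed while transit times are recovered by interpolation, can be sketched for a toy one-planet system (star fixed at the origin, units with GM = 1; this is an illustration of the approach, not TTVFast's actual algorithm, which uses a Keplerian interpolator and handles multi-planet interactions):

```python
import math

def transit_times(gm=1.0, x0=1.0, vy0=1.0, dt=1e-3, t_max=20.0):
    """Leapfrog (symplectic) integration of a planet about a fixed star.

    A 'transit' is recorded when the planet crosses the y = 0 plane on the
    observer's side (x > 0); the crossing time is refined by linear
    interpolation, mimicking how a fast integrator trades per-step
    accuracy for speed while keeping transit times precise enough.
    """
    x, y = x0, 0.0
    vx, vy = 0.0, vy0

    def acc(x, y):
        r3 = (x * x + y * y) ** 1.5
        return -gm * x / r3, -gm * y / r3

    ax, ay = acc(x, y)
    times, t = [], 0.0
    while t < t_max:
        # kick-drift-kick leapfrog step
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
        y_prev = y
        x += dt * vx; y += dt * vy
        ax, ay = acc(x, y)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
        t += dt
        if y_prev < 0.0 <= y and x > 0.0:   # upward crossing in front of star
            frac = -y_prev / (y - y_prev)   # linear refinement of crossing
            times.append(t - dt + frac * dt)
    return times

tt = transit_times()
# for a circular orbit with gm = a = 1 the period is 2*pi
print([round(t, 3) for t in tt])
```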

  11. Is Auditory Discrimination Mature by Middle Childhood? A Study Using Time-Frequency Analysis of Mismatch Responses from 7 Years to Adulthood

    ERIC Educational Resources Information Center

    Bishop, Dorothy V. M.; Hardiman, Mervyn J.; Barry, Johanna G.

    2011-01-01

    Behavioural and electrophysiological studies give differing impressions of when auditory discrimination is mature. Ability to discriminate frequency and speech contrasts reaches adult levels only around 12 years of age, yet an electrophysiological index of auditory discrimination, the mismatch negativity (MMN), is reported to be as large in…

  12. 78 FR 34922 - Definition of Auditory Assistance Device

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-11

    ... COMMISSION 47 CFR Part 15 Definition of Auditory Assistance Device AGENCY: Federal Communications Commission. ACTION: Final rule. SUMMARY: This document modifies the definition of ``auditory assistance device'' in... near real time. The revised definition permits unlicensed auditory assistance devices to be used...

  13. Capabilities needed for the next generation of thermo-hydraulic codes for use in real time applications

    SciTech Connect

    Arndt, S.A.

    1997-07-01

    The real-time reactor simulation field is currently at a crossroads in terms of the capability to perform real-time analysis using the most sophisticated computer codes. Current generation safety analysis codes are being modified to replace simplified codes that were specifically designed to meet the competing requirement for real-time applications. The next generation of thermo-hydraulic codes will need to have included in their specifications the specific requirement for use in a real-time environment. Use of the codes in real-time applications imposes much stricter requirements on robustness, reliability and repeatability than do design and analysis applications. In addition, the need for code use by a variety of users is a critical issue for real-time users, trainers and emergency planners who currently use real-time simulation, and PRA practitioners who will increasingly use real-time simulation for evaluating PRA success criteria in near real-time to validate PRA results for specific configurations and plant system unavailabilities.

  14. A visual parallel-BCI speller based on the time-frequency coding strategy

    NASA Astrophysics Data System (ADS)

    Xu, Minpeng; Chen, Long; Zhang, Lixin; Qi, Hongzhi; Ma, Lan; Tang, Jiabei; Wan, Baikun; Ming, Dong

    2014-04-01

    Objective. Spelling is one of the most important issues in brain-computer interface (BCI) research. This paper develops a visual parallel-BCI speller system based on a time-frequency coding strategy in which sub-speller switching among four simultaneously presented sub-spellers and the character selection are identified in parallel. Approach. The parallel-BCI speller was constituted by four independent P300+SSVEP-B (P300 plus SSVEP blocking) spellers with different flicker frequencies, so that every character had a specific time-frequency code. To verify its effectiveness, 11 subjects participated in offline and online spelling. A classification strategy was designed to recognize the target character by jointly using canonical correlation analysis and stepwise linear discriminant analysis. Main results. Online spelling showed that the proposed parallel-BCI speller performed well, reaching a highest information transfer rate of 67.4 bit min^-1, with averages of 54.0 bit min^-1 and 43.0 bit min^-1 over three rounds and five rounds, respectively. Significance. The results indicated that the proposed parallel BCI could be controlled effectively by users shifting attention fluently among the sub-spellers, and markedly improved BCI spelling performance.
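The frequency-recognition half of such a speller can be hinted at with a simplified, single-channel stand-in for canonical correlation analysis: correlate the recorded signal against sine and cosine references at each candidate flicker frequency and pick the best-matching one (real CCA generalizes this to multi-channel EEG and harmonics; the sampling rate and frequencies below are hypothetical):

```python
import math

def detect_ssvep_frequency(signal, fs, candidates):
    """Simplified frequency detection for an SSVEP-style speller.

    For each candidate flicker frequency, correlate the (single-channel)
    signal with sine and cosine references and combine the two scores.
    """
    n = len(signal)
    mean = sum(signal) / n
    centered = [v - mean for v in signal]
    norm = math.sqrt(sum(v * v for v in centered)) or 1.0
    best_f, best_score = None, -1.0
    for f in candidates:
        ref_s = [math.sin(2 * math.pi * f * i / fs) for i in range(n)]
        ref_c = [math.cos(2 * math.pi * f * i / fs) for i in range(n)]
        cs = sum(a * b for a, b in zip(centered, ref_s))
        cc = sum(a * b for a, b in zip(centered, ref_c))
        score = math.sqrt(cs * cs + cc * cc) / norm  # phase-independent match
        if score > best_score:
            best_f, best_score = f, score
    return best_f

# a noiseless 12 Hz "response" sampled at 250 Hz for one second
fs = 250
sig = [math.sin(2 * math.pi * 12 * i / fs + 0.7) for i in range(fs)]
best = detect_ssvep_frequency(sig, fs, [8, 10, 12, 15])
print(best)  # 12
```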

  15. A population coding account for systematic variation in saccadic dead time.

    PubMed

    Ludwig, Casimir J H; Mildinhall, John W; Gilchrist, Iain D

    2007-01-01

    During movement programming, there is a point in time at which the movement system is committed to executing an action with certain parameters even though new information may render this action obsolete. For saccades programmed to a visual target this period is termed the dead time. Using a double-step paradigm, we examined potential variability in the dead time with variations in overall saccade latency and spatiotemporal configuration of two sequential targets. In experiment 1, we varied overall saccade latency by manipulating the presence or absence of a central fixation point. Despite a large and robust gap effect, decreasing the saccade latency in this way did not alter the dead time. In experiment 2, we varied the separation between the two targets. The dead time increased with separation up to a point and then leveled off. A stochastic accumulator model of the oculomotor decision mechanism accounts comprehensively for our findings. The model predicts a gap effect through changes in baseline activity without producing variations in the dead time. Variations in dead time with separation between the two target locations are a natural consequence of the population coding assumption in the model.
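A toy rise-to-threshold accumulator reproduces the gap effect the authors describe, shorter latencies from a raised baseline, without any change to a dead-time parameter (all numbers below are illustrative, not fitted to the paper's data):

```python
import random

def simulate_latencies(n_trials, baseline, threshold=1.0,
                       mean_rate=5.0, rate_sd=1.0, seed=1):
    """Mean latency from a toy rise-to-threshold (LATER-style) accumulator.

    Evidence starts at `baseline` and climbs linearly at a rate drawn per
    trial from a normal distribution; the saccade is triggered when the
    threshold is reached. Raising the baseline (a 'gap' condition) shortens
    latency without touching any dead-time parameter.
    """
    rng = random.Random(seed)
    latencies = []
    for _ in range(n_trials):
        rate = max(rng.gauss(mean_rate, rate_sd), 0.1)  # clip at a floor
        latencies.append((threshold - baseline) / rate)
    return sum(latencies) / n_trials

no_gap = simulate_latencies(5000, baseline=0.0)
gap = simulate_latencies(5000, baseline=0.2)   # higher baseline activity
print(round(no_gap, 3), round(gap, 3))         # gap condition is faster
```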

  16. Functional dissociation of transient and sustained fMRI BOLD components in human auditory cortex revealed with a streaming paradigm based on interaural time differences.

    PubMed

    Schadwinkel, Stefan; Gutschalk, Alexander

    2010-12-01

    A number of physiological studies suggest that feature-selective adaptation is relevant to the pre-processing for auditory streaming, the perceptual separation of overlapping sound sources. Most of these studies are focused on spectral differences between streams, which are considered most important for streaming. However, spatial cues also support streaming, alone or in combination with spectral cues, but physiological studies of spatial cues for streaming remain scarce. Here, we investigate whether the tuning of selective adaptation for interaural time differences (ITD) coincides with the range where streaming perception is observed. FMRI activation that has been shown to adapt depending on the repetition rate was studied with a streaming paradigm where two tones were differently lateralized by ITD. Listeners were presented with five different ΔITD conditions (62.5, 125, 187.5, 343.75, or 687.5 μs) out of an active baseline with no ΔITD during fMRI. The results showed reduced adaptation for conditions with ΔITD ≥ 125 μs, reflected by enhanced sustained BOLD activity. The percentage of streaming perception for these stimuli increased from approximately 20% for ΔITD = 62.5 μs to > 60% for ΔITD = 125 μs. No further sustained BOLD enhancement was observed when the ΔITD was increased beyond ΔITD = 125 μs, whereas the streaming probability continued to increase up to 90% for ΔITD = 687.5 μs. Conversely, the transient BOLD response, at the transition from baseline to ΔITD blocks, increased most prominently as ΔITD was increased from 187.5 to 343.75 μs. These results demonstrate a clear dissociation of transient and sustained components of the BOLD activity in auditory cortex.

  17. A comparison of VLSI architectures for time and transform domain decoding of Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Hsu, I. S.; Truong, T. K.; Deutsch, L. J.; Satorius, E. H.; Reed, I. S.

    1988-01-01

    It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial needed to decode a Reed-Solomon (RS) code. It is shown that this algorithm can be used for both time and transform domain decoding by replacing its initial conditions with the Forney syndromes and the erasure locator polynomial. By this means both the errata locator polynomial and the errata evaluator polynomial can be obtained with the Euclidean algorithm. With these ideas, both time and transform domain Reed-Solomon decoders for correcting errors and erasures are simplified and compared. As a consequence, the architectures of Reed-Solomon decoders for correcting both errors and erasures can be made more modular, regular, simple, and naturally suitable for VLSI implementation.
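The key step, running the Euclidean algorithm on x^(2t) and the syndrome polynomial until the remainder degree drops below t, can be sketched over a toy prime field (GF(7) here for readability; practical RS decoders work over GF(2^m), and the syndrome below is made up rather than derived from a received word). The returned pair satisfies the key equation locator * S = evaluator (mod x^(2t)):

```python
P = 7  # a toy prime field GF(7); real RS codes use GF(2^m)

def trim(a):
    while len(a) > 1 and a[-1] == 0:
        a.pop()
    return a

def p_add(a, b):
    n = max(len(a), len(b))
    return trim([((a[i] if i < len(a) else 0) +
                  (b[i] if i < len(b) else 0)) % P for i in range(n)])

def p_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return trim(out)

def p_divmod(a, b):
    a = a[:]
    q = [0] * max(len(a) - len(b) + 1, 1)
    inv = pow(b[-1], P - 2, P)                 # inverse of leading coeff
    while len(a) >= len(b) and any(a):
        shift = len(a) - len(b)
        c = (a[-1] * inv) % P
        q[shift] = c
        for i, bi in enumerate(b):
            a[shift + i] = (a[shift + i] - c * bi) % P
        trim(a)
    return trim(q), trim(a)

def euclid_key_equation(syndrome, t):
    """Partial extended Euclid on (x^(2t), S(x)): stop once deg r < t.

    Returns (locator, evaluator) with locator * S = evaluator mod x^(2t).
    """
    a = [0] * (2 * t) + [1]                    # x^(2t)
    b = trim(syndrome[:])
    v_prev, v = [0], [1]                       # multipliers of S(x)
    while len(b) - 1 >= t:
        q, r = p_divmod(a, b)
        v_prev, v = v, p_add(v_prev, [(-c) % P for c in p_mul(q, v)])
        a, b = b, r
    return v, b                                # (errata locator, evaluator)

S = [3, 1, 4, 1]                               # a made-up syndrome polynomial
lam, omega = euclid_key_equation(S, t=2)
# verify the key equation: lam * S mod x^4 equals omega (over GF(7))
_, check = p_divmod(p_mul(lam, S), [0, 0, 0, 0, 1])
print(check == omega)  # True
```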

  18. A multi-layer VLC imaging system based on space-time trace-orthogonal coding

    NASA Astrophysics Data System (ADS)

    Li, Peng-Xu; Yang, Yu-Hong; Zhu, Yi-Jun; Zhang, Yan-Yu

    2017-02-01

    In visible light communication (VLC) imaging systems, different properties of data are usually demanded for transmission with different priorities in terms of reliability and/or validity. For this consideration, a novel transmission scheme called space-time trace-orthogonal coding (STTOC) for VLC is proposed in this paper by taking full advantage of the characteristics of time-domain transmission and space-domain orthogonality. Then, several constellation designs for different priority strategies subject to the total power constraint are presented. One significant advantage of this novel scheme is that the inter-layer interference (ILI) can be eliminated completely and the computation complexity of maximum likelihood (ML) detection is linear. Computer simulations verify the correctness of our theoretical analysis, and demonstrate that both transmission rate and error performance of the proposed scheme greatly outperform the conventional multi-layer transmission system.

  19. Experience and information loss in auditory and visual memory.

    PubMed

    Gloede, Michele E; Paulauskas, Emily E; Gregg, Melissa K

    2017-07-01

    Recent studies show that recognition memory for sounds is inferior to memory for pictures. Four experiments were conducted to examine the nature of auditory and visual memory. Experiments 1-3 were conducted to evaluate the role of experience in auditory and visual memory. Participants received a study phase with pictures/sounds, followed by a recognition memory test. Participants then completed auditory training with each of the sounds, followed by a second memory test. Despite auditory training in Experiments 1 and 2, visual memory was superior to auditory memory. In Experiment 3, we found that it is possible to improve auditory memory, but only after 3 days of specific auditory training and 3 days of visual memory decay. We examined the time course of information loss in auditory and visual memory in Experiment 4 and found a trade-off between visual and auditory recognition memory: Visual memory appears to have a larger capacity, while auditory memory is more enduring. Our results indicate that visual and auditory memory are inherently different memory systems and that differences in visual and auditory recognition memory performance may be due to the different amounts of experience with visual and auditory information, as well as structurally different neural circuitry specialized for information retention.

  20. Biomedical time series clustering based on non-negative sparse coding and probabilistic topic model.

    PubMed

    Wang, Jin; Liu, Ping; F H She, Mary; Nahavandi, Saeid; Kouzani, Abbas

    2013-09-01

    Biomedical time series clustering that groups a set of unlabelled temporal signals according to their underlying similarity is very useful for biomedical records management and analysis such as biosignals archiving and diagnosis. In this paper, a new framework for clustering of long-term biomedical time series such as electrocardiography (ECG) and electroencephalography (EEG) signals is proposed. Specifically, local segments extracted from the time series are projected as a combination of a small number of basis elements in a trained dictionary by non-negative sparse coding. A Bag-of-Words (BoW) representation is then constructed by summing up all the sparse coefficients of local segments in a time series. Based on the BoW representation, a probabilistic topic model that was originally developed for text document analysis is extended to discover the underlying similarity of a collection of time series. The underlying similarity of biomedical time series is well captured owing to the statistical nature of the probabilistic topic model. Experiments on three datasets constructed from publicly available EEG and ECG signals demonstrate that the proposed approach achieves better accuracy than existing state-of-the-art methods, and is insensitive to model parameters such as the length of local segments and the dictionary size.
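The segment-coding and histogram steps can be sketched with a tiny hand-built dictionary and a projected-gradient solver for the non-negative sparse codes (the atoms, penalty, and step size below are illustrative; the paper trains its dictionary from data):

```python
import math

def nn_sparse_code(segment, dictionary, lam=0.1, steps=200, lr=0.05):
    """Non-negative sparse coding of one segment by projected gradient:
    minimize ||x - D c||^2 + lam * sum(c) subject to c >= 0."""
    k = len(dictionary)
    c = [0.0] * k
    for _ in range(steps):
        recon = [sum(c[j] * dictionary[j][i] for j in range(k))
                 for i in range(len(segment))]
        for j in range(k):
            grad = sum(2 * (recon[i] - segment[i]) * dictionary[j][i]
                       for i in range(len(segment))) + lam
            c[j] = max(c[j] - lr * grad, 0.0)   # gradient step + projection
    return c

def bag_of_words(series, dictionary, seg_len=8, hop=4):
    """Sum sparse codes of overlapping local segments into one histogram."""
    bow = [0.0] * len(dictionary)
    for s in range(0, len(series) - seg_len + 1, hop):
        code = nn_sparse_code(series[s:s + seg_len], dictionary)
        bow = [b + c for b, c in zip(bow, code)]
    return bow

# two hand-built atoms: a slow wave and a fast wave (hypothetical dictionary)
atom_slow = [math.sin(2 * math.pi * i / 8) for i in range(8)]
atom_fast = [math.sin(2 * math.pi * i / 4) for i in range(8)]
series = [math.sin(2 * math.pi * i / 8) for i in range(64)]  # slow signal
bow = bag_of_words(series, [atom_slow, atom_fast])
print(bow[0] > bow[1])  # the slow atom dominates the histogram: True
```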

  1. Auditory Imagery: Empirical Findings

    ERIC Educational Resources Information Center

    Hubbard, Timothy L.

    2010-01-01

    The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d)…

  2. Noise-induced hearing loss alters the temporal dynamics of auditory-nerve responses.

    PubMed

    Scheidt, Ryan E; Kale, Sushrut; Heinz, Michael G

    2010-10-01

    Auditory-nerve fibers demonstrate dynamic response properties in that they adapt to rapid changes in sound level, both at the onset and offset of a sound. These dynamic response properties affect temporal coding of stimulus modulations that are perceptually relevant for many sounds such as speech and music. Temporal dynamics have been well characterized in auditory-nerve fibers from normal-hearing animals, but little is known about the effects of sensorineural hearing loss on these dynamics. This study examined the effects of noise-induced hearing loss on the temporal dynamics in auditory-nerve fiber responses from anesthetized chinchillas. Post-stimulus-time histograms were computed from responses to 50-ms tones presented at characteristic frequency and 30 dB above fiber threshold. Several response metrics related to temporal dynamics were computed from post-stimulus-time histograms and were compared between normal-hearing and noise-exposed animals. Results indicate that noise-exposed auditory-nerve fibers show significantly reduced response latency, increased onset response and percent adaptation, faster adaptation after onset, and slower recovery after offset. The decrease in response latency only occurred in noise-exposed fibers with significantly reduced frequency selectivity. These changes in temporal dynamics have important implications for temporal envelope coding in hearing-impaired ears, as well as for the design of dynamic compression algorithms for hearing aids.
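A post-stimulus-time histogram of the kind these metrics are computed from is straightforward to build: bin spike times across repeated stimulus presentations and normalize to a rate (the spike trains below are made up for illustration):

```python
def psth(spike_trains, bin_width, duration):
    """Post-stimulus-time histogram: average spike rate per time bin
    across repeated presentations of the same stimulus."""
    n_bins = int(duration / bin_width)
    counts = [0] * n_bins
    for train in spike_trains:
        for t in train:
            b = int(t / bin_width)
            if 0 <= b < n_bins:
                counts[b] += 1
    n_reps = len(spike_trains)
    # spikes / (repetitions * bin width) -> rate in spikes per unit time
    return [c / (n_reps * bin_width) for c in counts]

# hypothetical trains (times in ms): a strong onset response near 10 ms
trains = [[10.2, 11.0, 25.1, 40.3],
          [10.8, 24.0, 41.5],
          [9.9, 10.5, 26.2]]
rates = psth(trains, bin_width=5.0, duration=50.0)
peak_bin = max(range(len(rates)), key=lambda i: rates[i])
print(peak_bin)  # bin 2 covers 10-15 ms, where the onset response falls
```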

  3. Novel space-time trellis codes for free-space optical communications using transmit laser selection.

    PubMed

    García-Zambrana, Antonio; Boluda-Ruiz, Rubén; Castillo-Vázquez, Carmen; Castillo-Vázquez, Beatriz

    2015-09-21

    In this paper, the deployment of novel space-time trellis codes (STTCs) with transmit laser selection (TLS) for free-space optical (FSO) communication systems using intensity modulation and direct detection (IM/DD) over atmospheric turbulence and misalignment fading channels is presented. Combining TLS and STTC with rate 1 bit/(s · Hz), a new code design criterion based on the use of the largest order statistics is here proposed for multiple-input/single-output (MISO) FSO systems in order to improve the diversity order gain by properly choosing the transmit lasers out of the available L lasers. Based on a pairwise error probability (PEP) analysis, closed-form asymptotic bit error-rate (BER) expressions in the range from low to high signal-to-noise ratio (SNR) are derived when the irradiance of the transmitted optical beam is susceptible to moderate-to-strong turbulence conditions, following a gamma-gamma (GG) distribution, and pointing error effects, following a misalignment fading model where the effect of beam width, detector size and jitter variance is considered. Obtained results show diversity orders of 2L and 3L when simple two-state and four-state STTCs are considered, respectively. Simulation results are further demonstrated to confirm the analytical results.

  4. Accuracy of rate coding: When shorter time window and higher spontaneous activity help

    NASA Astrophysics Data System (ADS)

    Levakova, Marie; Tamborrino, Massimiliano; Kostal, Lubomir; Lansky, Petr

    2017-02-01

    It is widely accepted that neuronal firing rates contain a significant amount of information about the stimulus intensity. Nevertheless, theoretical studies on the coding accuracy inferred from the exact spike counting distributions are rare. We present an analysis based on the number of observed spikes assuming the stochastic perfect integrate-and-fire model with a change point, representing the stimulus onset, for which we calculate the corresponding Fisher information to investigate the accuracy of rate coding. We analyze the effect of changing the duration of the time window and the influence of several parameters of the model, in particular the level of the presynaptic spontaneous activity and the level of random fluctuation of the membrane potential, which can be interpreted as noise of the system. The results show that the Fisher information is nonmonotonic with respect to the length of the observation period. This counterintuitive result is caused by the discrete nature of the count of spikes. We observe also that the signal can be enhanced by noise, since the Fisher information is nonmonotonic with respect to the level of spontaneous activity and, in some cases, also with respect to the level of fluctuation of the membrane potential.
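For the plain homogeneous-Poisson rate-coding case (without the paper's change point and discrete-count effects, which are what produce the nonmonotonicities reported above), the Fisher information in the spike count is exactly J(s) = T f'(s)^2 / (f(s) + r0), so added spontaneous activity r0 can only dilute it in this simplified setting. A minimal computation with a logistic tuning curve (parameter values are illustrative):

```python
import math

def fisher_information(stimulus, rate_fn, drate_fn, window, baseline=0.0):
    """Exact Fisher information carried by a Poisson spike count N about
    the stimulus s, when N ~ Poisson((rate_fn(s) + baseline) * window):
        J(s) = window * rate_fn'(s)^2 / (rate_fn(s) + baseline)."""
    lam = rate_fn(stimulus) + baseline
    return window * drate_fn(stimulus) ** 2 / lam

def logistic_rate(s, r_max=50.0, slope=1.0, s0=0.0):
    return r_max / (1.0 + math.exp(-slope * (s - s0)))

def logistic_drate(s, r_max=50.0, slope=1.0, s0=0.0):
    r = logistic_rate(s, r_max, slope, s0)
    return slope * r * (1.0 - r / r_max)   # derivative of the logistic

j_quiet = fisher_information(0.5, logistic_rate, logistic_drate, window=0.2)
j_noisy = fisher_information(0.5, logistic_rate, logistic_drate, window=0.2,
                             baseline=20.0)
print(j_quiet > j_noisy)  # True: here spontaneous activity only dilutes J
```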

  5. Auditory Training for Central Auditory Processing Disorder

    PubMed Central

    Weihing, Jeffrey; Chermak, Gail D.; Musiek, Frank E.

    2015-01-01

    Auditory training (AT) is an important component of rehabilitation for patients with central auditory processing disorder (CAPD). The present article identifies and describes aspects of AT as they relate to applications in this population. A description of the types of auditory processes along with information on relevant AT protocols that can be used to address these specific deficits is included. Characteristics and principles of effective AT procedures also are detailed in light of research that reflects on their value. Finally, research investigating AT in populations who show CAPD or present with auditory complaints is reported. Although efficacy data in this area are still emerging, current findings support the use of AT for treatment of auditory difficulties. PMID:27587909

  6. Oscillations, phase-of-firing coding, and spike timing-dependent plasticity: an efficient learning scheme.

    PubMed

    Masquelier, Timothée; Hugues, Etienne; Deco, Gustavo; Thorpe, Simon J

    2009-10-28

    Recent experiments have established that information can be encoded in the spike times of neurons relative to the phase of a background oscillation in the local field potential, a phenomenon referred to as "phase-of-firing coding" (PoFC). These firing phase preferences could result from combining an oscillation in the input current with a stimulus-dependent static component that would produce the variations in preferred phase, but it remains unclear whether these phases are an epiphenomenon or really affect neuronal interactions; only then could they have a functional role. Here we show that PoFC has a major impact on downstream learning and decoding with the now well established spike timing-dependent plasticity (STDP). To be precise, we demonstrate with simulations how a single neuron equipped with STDP robustly detects a pattern of input currents automatically encoded in the phases of a subset of its afferents, and repeating at random intervals. Remarkably, learning is possible even when only a small fraction of the afferents (approximately 10%) exhibits PoFC. The ability of STDP to detect repeating patterns had been noted before in continuous activity, but it turns out that oscillations greatly facilitate learning. A benchmark with more conventional rate-based codes demonstrates the superiority of oscillations and PoFC for both STDP-based learning and the speed of decoding: the oscillation partially formats the input spike times, so that they depend mainly on the currently applied input currents, and can be efficiently learned by STDP and then recognized in just one oscillation cycle. This suggests a major functional role for oscillatory brain activity that has been widely reported experimentally.
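The pair-based STDP rule such simulations rely on is compact: potentiate when the presynaptic spike precedes the postsynaptic one, depress otherwise, with exponentially decaying windows. The sketch below (parameter values are typical textbook choices, not the paper's) shows a phase-locked "causal" afferent strengthening while an "acausal" one weakens:

```python
import math

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight change for delta_t = t_post - t_pre (ms):
    pre-before-post (delta_t > 0) potentiates, post-before-pre depresses."""
    if delta_t > 0:
        return a_plus * math.exp(-delta_t / tau)
    return -a_minus * math.exp(delta_t / tau)

def apply_stdp(weight, pre_times, post_times, w_min=0.0, w_max=1.0):
    """Accumulate all-to-all pairwise updates, then clip the weight."""
    dw = sum(stdp_dw(tp - tq) for tq in pre_times for tp in post_times)
    return min(max(weight + dw, w_min), w_max)

# an afferent that reliably fires ~5 ms before each postsynaptic spike
# (as a phase-locked input would, once per oscillation cycle) strengthens,
# while one firing just after each postsynaptic spike weakens
w_causal = apply_stdp(0.5, pre_times=[10, 60, 110], post_times=[15, 65, 115])
w_acausal = apply_stdp(0.5, pre_times=[20, 70, 120], post_times=[15, 65, 115])
print(w_causal > 0.5 > w_acausal)  # True
```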

  7. Real-time video coding under power constraint based on H.264 codec

    NASA Astrophysics Data System (ADS)

    Su, Li; Lu, Yan; Wu, Feng; Li, Shipeng; Gao, Wen

    2007-01-01

    In this paper, we propose a joint power-distortion optimization scheme for real-time H.264 video encoding under a power constraint. Firstly, the power constraint is translated to a complexity constraint based on DVS technology. Secondly, a computation allocation model (CAM) with virtual buffers is proposed to facilitate the optimal allocation of the constrained computational resource to each frame. Thirdly, a complexity-adjustable encoder based on optimal motion estimation and mode decision is proposed to meet the allocated resource. The proposed scheme takes advantage of some new features of H.264/AVC video coding tools such as the early termination strategy in fast ME. Moreover, it can avoid suffering from the high overhead of parametric power control algorithms and achieve fine complexity scalability in a wide range with stable rate-distortion performance. The proposed scheme also shows the potential of a further reduction of computation and power consumption in decoding without any change to existing decoders.

  8. Manipulation of BK channel expression is sufficient to alter auditory hair cell thresholds in larval zebrafish

    PubMed Central

    Rohmann, Kevin N.; Tripp, Joel A.; Genova, Rachel M.; Bass, Andrew H.

    2014-01-01

    Non-mammalian vertebrates rely on electrical resonance for frequency tuning in auditory hair cells. A key component of the resonance exhibited by these cells is an outward calcium-activated potassium current that flows through large-conductance calcium-activated potassium (BK) channels. Previous work in midshipman fish (Porichthys notatus) has shown that BK expression correlates with seasonal changes in hearing sensitivity and that pharmacologically blocking these channels replicates the natural decreases in sensitivity during the winter non-reproductive season. To test the hypothesis that reducing BK channel function is sufficient to change auditory thresholds in fish, morpholino oligonucleotides (MOs) were used in larval zebrafish (Danio rerio) to alter expression of slo1a and slo1b, duplicate genes coding for the pore-forming α-subunits of BK channels. Following MO injection, microphonic potentials were recorded from the inner ear of larvae. Quantitative real-time PCR was then used to determine the MO effect on slo1a and slo1b expression in these same fish. Knockdown of either slo1a or slo1b resulted in disrupted gene expression and increased auditory thresholds across the same range of frequencies of natural auditory plasticity observed in midshipman. We conclude that interference with the normal expression of individual slo1 genes is sufficient to increase auditory thresholds in zebrafish larvae and that changes in BK channel expression are a direct mechanism for regulation of peripheral hearing sensitivity among fishes. PMID:24803460

  9. Auditory Temporal Conditioning in Neonates.

    ERIC Educational Resources Information Center

    Franz, W. K.; And Others

    Twenty normal newborns, approximately 36 hours old, were tested using an auditory temporal conditioning paradigm which consisted of a slow rise, 75 db tone played for five seconds every 25 seconds, ten times. Responses to the tones were measured by instantaneous, beat-to-beat heartrate; and the test trial was designated as the 2 1/2-second period…

  10. Delayed Auditory Feedback and Movement

    ERIC Educational Resources Information Center

    Pfordresher, Peter Q.; Dalla Bella, Simone

    2011-01-01

    It is well known that timing of rhythm production is disrupted by delayed auditory feedback (DAF), and that disruption varies with delay length. We tested the hypothesis that disruption depends on the state of the movement trajectory at the onset of DAF. Participants tapped isochronous rhythms at a rate specified by a metronome while hearing DAF…

  11. Hearing the light: neural and perceptual encoding of optogenetic stimulation in the central auditory pathway

    PubMed Central

    Guo, Wei; Hight, Ariel E.; Chen, Jenny X.; Klapoetke, Nathan C.; Hancock, Kenneth E.; Shinn-Cunningham, Barbara G.; Boyden, Edward S.; Lee, Daniel J.; Polley, Daniel B.

    2015-01-01

    Optogenetics provides a means to dissect the organization and function of neural circuits. Optogenetics also offers the translational promise of restoring sensation, enabling movement or supplanting abnormal activity patterns in pathological brain circuits. However, the inherent sluggishness of evoked photocurrents in conventional channelrhodopsins has hampered the development of optoprostheses that adequately mimic the rate and timing of natural spike patterning. Here, we explore the feasibility and limitations of a central auditory optoprosthesis by photoactivating mouse auditory midbrain neurons that either express channelrhodopsin-2 (ChR2) or Chronos, a channelrhodopsin with ultra-fast channel kinetics. Chronos-mediated spike fidelity surpassed ChR2 and natural acoustic stimulation to support a superior code for the detection and discrimination of rapid pulse trains. Interestingly, this midbrain coding advantage did not translate to a perceptual advantage, as behavioral detection of midbrain activation was equivalent with both opsins. Auditory cortex recordings revealed that the precisely synchronized midbrain responses had been converted to a simplified rate code that was indistinguishable between opsins and less robust overall than acoustic stimulation. These findings demonstrate the temporal coding benefits that can be realized with next-generation channelrhodopsins, but also highlight the challenge of inducing variegated patterns of forebrain spiking activity that support adaptive perception and behavior. PMID:26000557

  12. Optimum neural tuning curves for information efficiency with rate coding and finite-time window.

    PubMed

    Han, Fang; Wang, Zhijie; Fan, Hong; Sun, Xiaojuan

    2015-01-01

    An important question for neural encoding is what kind of neural system can convey more information with less energy within a finite-time coding window. This paper first proposes a finite-time neural encoding system, where the neurons in the system respond to a stimulus by a sequence of spikes that is assumed to be a Poisson process and the external stimuli obey a normal distribution. A method for calculating the mutual information of the finite-time neural encoding system is proposed and the definition of information efficiency is introduced. The values of the mutual information and the information efficiency obtained by using the Logistic function are compared with those obtained by using other functions, and the Logistic function is found to be the best. It is further found that the parameter representing the steepness of the Logistic function is closely related to the full entropy, and that the parameter representing the translation of the function is tightly associated with the energy consumption and the noise entropy. The optimum parameter combinations for the Logistic function to maximize the information efficiency are calculated when the stimuli and the properties of the encoding system are varied respectively. Some explanations for the results are given. The model and the method we propose could be useful for studying neural encoding systems, and the optimum neural tuning curves obtained in this paper might exhibit some characteristics of a real neural system.
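The quantity being optimized can be computed directly for the setup the paper describes, a normally distributed stimulus encoded by a Poisson spike count through a logistic tuning curve, by discretizing the stimulus range (the rate ceiling, slope, and window lengths below are illustrative, not the paper's optima):

```python
import math

def poisson_pmf(k, lam):
    return math.exp(k * math.log(lam) - lam - math.lgamma(k + 1))

def mutual_information(rate_fn, t_window, sigma=1.0, n_s=121, k_max=80):
    """Mutual information (bits) between a normal(0, sigma) stimulus and the
    Poisson spike count N ~ Poisson(rate_fn(s) * t_window), estimated by
    discretizing the stimulus over +/- 4 sigma."""
    ss = [-4 * sigma + 8 * sigma * i / (n_s - 1) for i in range(n_s)]
    prior = [math.exp(-s * s / (2 * sigma * sigma)) for s in ss]
    z = sum(prior)
    prior = [p / z for p in prior]             # discretized prior p(s)
    cond = [[poisson_pmf(k, rate_fn(s) * t_window) for k in range(k_max)]
            for s in ss]                       # p(n | s)
    marg = [sum(prior[i] * cond[i][k] for i in range(n_s))
            for k in range(k_max)]             # p(n)
    mi = 0.0
    for i in range(n_s):
        for k in range(k_max):
            joint = prior[i] * cond[i][k]
            if joint > 0 and marg[k] > 0:
                mi += joint * math.log2(cond[i][k] / marg[k])
    return mi

def logistic_rate(s, r_max=40.0, slope=2.0):
    return 1e-9 + r_max / (1.0 + math.exp(-slope * s))  # floor avoids log(0)

mi_long = mutual_information(logistic_rate, t_window=0.5)
mi_short = mutual_information(logistic_rate, t_window=0.1)
print(mi_long > mi_short > 0.0)  # a longer window can only add information
```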

  13. Effective real-time vehicle tracking using discriminative sparse coding on local patches

    NASA Astrophysics Data System (ADS)

    Chen, XiangJun; Ye, Feiyue; Ruan, Yaduan; Chen, Qimei

    2016-01-01

    A visual tracking framework that provides an object detector and tracker, focused on effective and efficient visual tracking in surveillance for real-world intelligent transport system applications, is proposed. The framework casts the tracking task as problems of object detection, feature representation, and classification, which differs from appearance model-matching approaches. Through a feature representation of discriminative sparse coding on local patches, called DSCLP, which trains a dictionary on local clustered patches sampled from both positive and negative datasets, the discriminative power and robustness are improved remarkably, making the method more robust in complex realistic settings with various kinds of degraded image quality. Moreover, by detecting objects through one-time background subtraction, along with offline dictionary training, computation time is dramatically reduced, which enables the framework to achieve real-time tracking performance even in a high-definition sequence with heavy traffic. Experimental results show that this work outperforms some state-of-the-art methods in terms of speed, accuracy, and robustness, and exhibits increased robustness in complex real-world scenarios with image quality degraded by vehicle occlusion, image blur from rain or fog, and changes in viewpoint or scale.
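    The sparse-coding step in the abstract can be illustrated with a generic orthogonal matching pursuit routine. This is a minimal stand-in, not the authors' DSCLP implementation: the dictionary here is random rather than trained on clustered positive/negative patches, and the sparse codes would feed a classifier in the full framework.

    ```python
    import numpy as np

    def omp(D, x, n_nonzero):
        """Orthogonal matching pursuit: find a sparse code a with D @ a ~ x.
        D is a (d, K) dictionary with unit-norm columns (atoms)."""
        residual = x.copy()
        support = []
        a = np.zeros(D.shape[1])
        for _ in range(n_nonzero):
            j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
            if j not in support:
                support.append(j)
            coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
            residual = x - D[:, support] @ coef          # refit, update residual
        a[support] = coef
        return a

    rng = np.random.default_rng(0)
    D = rng.normal(size=(64, 128))        # random dictionary over 8x8 patches
    D /= np.linalg.norm(D, axis=0)        # unit-norm atoms
    x = 2.0 * D[:, 3] - 1.5 * D[:, 40]    # a "patch" built from two atoms
    a = omp(D, x, n_nonzero=2)
    reconstruction_error = np.linalg.norm(D @ a - x)
    ```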

  14. Real-time distributed video coding for 1K-pixel visual sensor networks

    NASA Astrophysics Data System (ADS)

    Hanca, Jan; Deligiannis, Nikos; Munteanu, Adrian

    2016-07-01

    Many applications in visual sensor networks (VSNs) demand the low-cost wireless transmission of video data. In this context, distributed video coding (DVC) has proven its potential to achieve state-of-the-art compression performance while maintaining low computational complexity of the encoder. Despite their proven capabilities, current DVC solutions overlook hardware constraints, and this renders them unsuitable for practical implementations. This paper introduces a DVC architecture that offers highly efficient wireless communication in real-world VSNs. The design takes into account the severe computational and memory constraints imposed by practical implementations on low-resolution visual sensors. We study performance-complexity trade-offs for feedback-channel removal, propose learning-based techniques for rate allocation, and investigate various simplifications of side information generation yielding real-time decoding. The proposed system is evaluated against H.264/AVC intra, Motion-JPEG, and our previously designed DVC prototype for low-resolution visual sensors. Extensive experimental results on various data show significant improvements in multiple configurations. The proposed encoder achieves real-time performance on a 1k-pixel visual sensor mote. Real-time decoding is performed on a Raspberry Pi single-board computer or a low-end notebook PC. To the best of our knowledge, the proposed codec is the first practical DVC deployment on low-resolution VSNs.

  15. A generalized time-frequency subtraction method for robust speech enhancement based on wavelet filter banks modeling of human auditory system.

    PubMed

    Shao, Yu; Chang, Chip-Hong

    2007-08-01

    We present a new speech enhancement scheme for a single-microphone system to meet the demand for quality noise reduction algorithms capable of operating at a very low signal-to-noise ratio. A psychoacoustic model is incorporated into the generalized perceptual wavelet denoising method to reduce the residual noise and improve the intelligibility of speech. The proposed method is a generalized time-frequency subtraction algorithm, which advantageously exploits the wavelet multirate signal representation to preserve the critical transient information. Simultaneous masking and temporal masking of the human auditory system are modeled by the perceptual wavelet packet transform via the frequency and temporal localization of speech components. The wavelet coefficients are used to calculate the Bark spreading energy and temporal spreading energy, from which a time-frequency masking threshold is deduced to adaptively adjust the subtraction parameters of the proposed method. An unvoiced speech enhancement algorithm is also integrated into the system to improve the intelligibility of speech. Through rigorous objective and subjective evaluations, it is shown that the proposed speech enhancement system is capable of reducing noise with little speech degradation in adverse noise environments and the overall performance is superior to several competitive methods.
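    A minimal sketch of plain magnitude spectral subtraction conveys the core operation that the paper generalizes to the wavelet domain. The fixed over-subtraction factor `alpha` below stands in for the parameter the paper adapts per time-frequency cell from a masking threshold; all signals and parameter values are illustrative assumptions.

    ```python
    import numpy as np

    def spectral_subtract(noisy, noise_only, frame=256, hop=128, alpha=2.0, beta=0.01):
        """Plain magnitude spectral subtraction with over-subtraction factor alpha
        and spectral floor beta (both fixed here; the paper adapts them from a
        time-frequency masking threshold)."""
        win = np.hanning(frame)
        frames = [noise_only[i:i + frame] * win
                  for i in range(0, len(noise_only) - frame, hop)]
        noise_mag = np.mean([np.abs(np.fft.rfft(f)) for f in frames], axis=0)

        out = np.zeros(len(noisy))
        norm = np.zeros(len(noisy))
        for i in range(0, len(noisy) - frame, hop):
            spec = np.fft.rfft(noisy[i:i + frame] * win)
            mag, phase = np.abs(spec), np.angle(spec)
            sub = np.maximum(mag - alpha * noise_mag, beta * mag)  # floored subtraction
            out[i:i + frame] += np.fft.irfft(sub * np.exp(1j * phase)) * win
            norm[i:i + frame] += win ** 2
        return out / np.maximum(norm, 1e-8)       # overlap-add normalization

    rng = np.random.default_rng(1)
    t = np.arange(16000) / 8000.0
    clean = np.sin(2 * np.pi * 440 * t)           # stand-in for a speech signal
    noise = 0.3 * rng.normal(size=t.size)
    enhanced = spectral_subtract(clean + noise, noise)
    snr_before = 10 * np.log10(np.mean(clean ** 2) / np.mean(noise ** 2))
    snr_after = 10 * np.log10(np.mean(clean ** 2) / np.mean((enhanced - clean) ** 2))
    ```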

  16. TIME-DEPENDENT MULTI-GROUP MULTI-DIMENSIONAL RELATIVISTIC RADIATIVE TRANSFER CODE BASED ON SPHERICAL HARMONIC DISCRETE ORDINATE METHOD

    SciTech Connect

    Tominaga, Nozomu; Shibata, Sanshiro; Blinnikov, Sergei I. E-mail: sshibata@post.kek.jp

    2015-08-15

    We develop a time-dependent, multi-group, multi-dimensional relativistic radiative transfer code, which is required to numerically investigate radiation from relativistic fluids that are involved in, e.g., gamma-ray bursts and active galactic nuclei. The code is based on the spherical harmonic discrete ordinate method (SHDOM) which evaluates a source function including anisotropic scattering in spherical harmonics and implicitly solves the static radiative transfer equation with ray tracing in discrete ordinates. We implement treatments of time dependence, multi-frequency bins, Lorentz transformation, and elastic Thomson and inelastic Compton scattering to the publicly available SHDOM code. Our code adopts a mixed-frame approach; the source function is evaluated in the comoving frame, whereas the radiative transfer equation is solved in the laboratory frame. This implementation is validated using various test problems and comparisons with the results from a relativistic Monte Carlo code. These validations confirm that the code correctly calculates the intensity and its evolution in the computational domain. The code enables us to obtain an Eddington tensor that relates the first and third moments of intensity (energy density and radiation pressure) and is frequently used as a closure relation in radiation hydrodynamics calculations.

  17. The oscillatory activities and its synchronization in auditory-visual integration as revealed by event-related potentials to bimodal stimuli

    NASA Astrophysics Data System (ADS)

    Guo, Jia; Xu, Peng; Yao, Li; Shu, Hua; Zhao, Xiaojie

    2012-03-01

    The neural mechanism of auditory-visual speech integration is a central topic in the study of multi-modal perception. Articulation conveys speech information that helps detect and disambiguate the auditory speech. As important characteristics of EEG, oscillations and their synchronization have been applied to cognition research more and more. This study analyzed EEG data acquired with unimodal and bimodal stimuli using time-frequency and phase-synchrony approaches, and investigated the oscillatory activities and their synchrony modes underlying the evoked potentials during auditory-visual integration, in order to reveal the neural integration mechanisms behind these modes. It was found that beta activity and its synchronization differences were related to the gesture N1-P2, which occurred in the earlier stage of coding speech from the pronouncing action. Alpha oscillation and its synchronization, related to the auditory N1-P2, might be mainly responsible for auditory speech processing driven by anticipation from gesture to sound features. The changing visual gesture enhanced the interaction of auditory brain regions. These results provide explanations for the changes in power and connectivity of event-evoked oscillatory activities that matched the ERPs during auditory-visual speech integration.

  18. Optogenetic stimulation of the auditory pathway

    PubMed Central

    Hernandez, Victor H.; Gehrt, Anna; Reuter, Kirsten; Jing, Zhizi; Jeschke, Marcus; Mendoza Schulz, Alejandro; Hoch, Gerhard; Bartels, Matthias; Vogt, Gerhard; Garnham, Carolyn W.; Yawo, Hiromu; Fukazawa, Yugo; Augustine, George J.; Bamberg, Ernst; Kügler, Sebastian; Salditt, Tim; de Hoz, Livia; Strenzke, Nicola; Moser, Tobias

    2014-01-01

    Auditory prostheses can partially restore speech comprehension when hearing fails. Sound coding with current prostheses is based on electrical stimulation of auditory neurons and has limited frequency resolution due to broad current spread within the cochlea. In contrast, optical stimulation can be spatially confined, which may improve frequency resolution. Here, we used animal models to characterize optogenetic stimulation, which is the optical stimulation of neurons genetically engineered to express the light-gated ion channel channelrhodopsin-2 (ChR2). Optogenetic stimulation of spiral ganglion neurons (SGNs) activated the auditory pathway, as demonstrated by recordings of single neuron and neuronal population responses. Furthermore, optogenetic stimulation of SGNs restored auditory activity in deaf mice. Approximation of the spatial spread of cochlear excitation by recording local field potentials (LFPs) in the inferior colliculus in response to suprathreshold optical, acoustic, and electrical stimuli indicated that optogenetic stimulation achieves better frequency resolution than monopolar electrical stimulation. Virus-mediated expression of a ChR2 variant with greater light sensitivity in SGNs reduced the amount of light required for responses and allowed neuronal spiking following stimulation up to 60 Hz. Our study demonstrates a strategy for optogenetic stimulation of the auditory pathway in rodents and lays the groundwork for future applications of cochlear optogenetics in auditory research and prosthetics. PMID:24509078

  19. Investigating bottom-up auditory attention

    PubMed Central

    Kaya, Emine Merve; Elhilali, Mounya

    2014-01-01

    Bottom-up attention is a sensory-driven selection mechanism that directs perception toward a subset of the stimulus that is considered salient, or attention-grabbing. Most studies of bottom-up auditory attention have adapted frameworks similar to visual attention models whereby local or global "contrast" is a central concept in defining salient elements in a scene. In the current study, we take a more fundamental approach to modeling auditory attention, providing the first examination of the space of auditory saliency spanning pitch, intensity, and timbre, and shedding light on complex interactions among these features. Informed by psychoacoustic results, we develop a computational model of auditory saliency implementing a novel attentional framework, guided by processes hypothesized to take place in the auditory pathway. In particular, the model tests the hypothesis that perception tracks the evolution of sound events in a multidimensional feature space, and flags any deviation from background statistics as salient. Predictions from the model corroborate the relationship between bottom-up auditory attention and statistical inference, and argue for a potential role of predictive coding as a mechanism for saliency detection in acoustic scenes. PMID:24904367

  20. WESSEL: Code for Numerical Simulation of Two-Dimensional Time-Dependent Width-Averaged Flows with Arbitrary Boundaries.

    DTIC Science & Technology

    1985-08-01

    This report should be cited as follows: Thompson, J. F., and Bernard, R. S. 1985. "WESSEL: Code for Numerical Simulation of Two-Dimensional Time...Bodies," Ph. D. Dissertation, Mississippi State University, Mississippi State, Miss. Thompson, J. F. 1983. "A Boundary-Fitted Coordinate Code for General...Vicksburg, Miss. Thompson, J. F., and Bernard, R. S. 1985. "Numerical Modeling of Two-Dimensional Width-Averaged Flows Using Boundary-Fitted Coordinate

  1. Cerebellar Nuclear Neurons Use Time and Rate Coding to Transmit Purkinje Neuron Pauses.

    PubMed

    Sudhakar, Shyam Kumar; Torben-Nielsen, Benjamin; De Schutter, Erik

    2015-12-01

    Neurons of the cerebellar nuclei convey the final output of the cerebellum to their targets in various parts of the brain. Within the cerebellum, their direct upstream connections originate from inhibitory Purkinje neurons. Purkinje neurons have a complex firing pattern of regular spikes interrupted by intermittent pauses of variable length. How can the cerebellar nuclei process this complex input pattern? In this modeling study, we investigate different forms of Purkinje neuron simple-spike pause synchrony and their influence on candidate coding strategies in the cerebellar nuclei. That is, we investigate how different alignments of synchronous pauses in synthetic Purkinje neuron spike trains affect either time-locking or rate changes in the downstream nuclei. We find that Purkinje neuron synchrony is mainly represented by changes in the firing rate of cerebellar nuclei neurons. Pause-beginning synchronization produced a unique effect on nuclei neuron firing, while the effects of pause-ending and pause-overlapping synchronization could not be distinguished from each other. Pause-beginning synchronization produced better time-locking of nuclear neurons for short pauses. We also characterize the effects of pause length and spike jitter on nuclear neuron firing. Additionally, we find that the rate of rebound responses in nuclear neurons after a synchronous pause is controlled by the firing rate of the Purkinje neurons preceding it.
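    A toy version of the study's setup, with all parameters assumed rather than taken from the paper: Poisson stand-ins for Purkinje simple-spike trains are silenced during a synchronized pause, and a leaky integrate-and-fire "nuclear" neuron receiving their summed inhibition shows the rate increase during the pause that the abstract describes.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    dt, T = 1e-4, 1.0                       # 0.1 ms step, 1 s simulation
    steps = int(T / dt)
    t = np.arange(steps) * dt
    n_pc, pc_rate = 40, 80.0                # convergent Purkinje cells, rate (Hz)
    pause = (0.5, 0.53)                     # synchronized 30 ms pause in all inputs

    # Poisson stand-ins for Purkinje simple-spike trains, silenced during the pause
    active = ~((t >= pause[0]) & (t < pause[1]))
    pc_spikes = (rng.random((n_pc, steps)) < pc_rate * dt) & active

    # Leaky integrate-and-fire "nuclear" neuron: intrinsic drive minus
    # Purkinje-driven inhibitory conductance (all parameters illustrative)
    tau_m, v_th, v_reset = 0.01, 1.0, 0.0
    drive, w_inh, tau_inh = 160.0, 3.0, 0.005
    v, g_inh = 0.0, 0.0
    out_spikes = np.zeros(steps, dtype=bool)
    for i in range(steps):
        g_inh += w_inh * pc_spikes[:, i].sum() - g_inh * dt / tau_inh
        v += dt * (-v / tau_m + drive - g_inh)
        if v >= v_th:
            out_spikes[i] = True
            v = v_reset

    in_pause = (t >= pause[0]) & (t < pause[1])
    rate_in_pause = out_spikes[in_pause].mean() / dt      # Hz
    rate_baseline = out_spikes[t < pause[0]].mean() / dt  # Hz
    ```

    Shifting the pause window independently per Purkinje train would let one contrast pause-beginning versus pause-ending alignment, as the study does.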

  2. Differential representation of spectral and temporal information by primary auditory cortex neurons in awake cats: relevance to auditory scene analysis.

    PubMed

    Sakai, Masashi; Chimoto, Sohei; Qin, Ling; Sato, Yu

    2009-04-10

    We investigated how primary auditory cortex (AI) neurons encode the two major requisites for auditory scene analysis, i.e., spectral and temporal information. Single-unit activities in the AI of awake cats were studied by presenting 0.5-s-long tone bursts and click trains. First, the neurons (n=92) were classified into 3 types based on the time course of excitatory responses to tone bursts: 1) phasic cells (P-cells; 26%), giving only transient responses; 2) tonic cells (T-cells; 34%), giving sustained responses with little or no adaptation; and 3) phasic-tonic cells (PT-cells; 40%), giving sustained responses with some tendency toward adaptation. Other tone-response variables differed among cell types. For example, P-cells showed the shortest latency and smallest spiking jitter, while T-cells had the sharpest frequency tuning. PT-cells generally fell in the intermediate range between the two extremes. Click trains also revealed between-neuron-type differences in the probability of excitatory responses (P-cells>PT-cells>T-cells) and their temporal features. For example, a substantial fraction of P-cells produced stimulus-locking responses, but none of the T-cells did. The f(r) (repetition rate) dependency of the stimulus locking resembled that reported for "comodulation masking release," a behavioral model of auditory scene analysis. Neurons of each type were found throughout the AI, and none of them showed intrinsic oscillation. These findings suggest that: 1) T-cells preferentially encode spectral information with a rate-place code and 2) P-cells preferentially encode acoustic transients with a temporal code, whereby rate-place coded information is potentially bound for scene analysis.

  3. Coding and decoding with adapting neurons: a population approach to the peri-stimulus time histogram.

    PubMed

    Naud, Richard; Gerstner, Wulfram

    2012-01-01

    The response of a neuron to a time-dependent stimulus, as measured in a Peri-Stimulus-Time-Histogram (PSTH), exhibits an intricate temporal structure that reflects potential temporal coding principles. Here we analyze the encoding and decoding of PSTHs for spiking neurons with arbitrary refractoriness and adaptation. As a modeling framework, we use the spike response model, also known as the generalized linear neuron model. Because of refractoriness, the effect of the most recent spike on the spiking probability a few milliseconds later is very strong. The influence of the last spike therefore needs to be described with high precision, while the rest of the neuronal spiking history merely introduces an average self-inhibition or adaptation that depends on the expected number of past spikes but not on the exact spike timings. Based on these insights, we derive a 'quasi-renewal equation', which is shown to yield an excellent description of the firing rate of adapting neurons. We explore the domain of validity of the quasi-renewal equation and compare it with other rate equations for populations of spiking neurons. The problem of decoding the stimulus from the population response (or PSTH) is addressed analogously. We find that for small levels of activity and weak adaptation, a simple accumulator of the past activity is sufficient to decode the original input, but when refractory effects become large, decoding becomes a non-linear function of the past activity. The results presented here can be applied to the mean-field analysis of coupled neuron networks, but also to arbitrary point processes with negative self-interaction.
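    A minimal simulation in the spirit of the spike response model / generalized linear neuron model described above, with an assumed spike-history filter (strong fast refractoriness plus weak slow adaptation): simulating many independent trials and averaging the spike trains yields the PSTH.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    dt, T, n_trials = 1e-3, 1.0, 500
    steps = int(T / dt)
    t = np.arange(steps) * dt
    stimulus = 5.0 + 4.0 * np.sin(2 * np.pi * 2 * t)   # time-dependent drive (Hz)

    # Assumed history filter: strong fast refractoriness + weak slow adaptation
    h_len = 300
    lag = np.arange(1, h_len + 1) * dt
    history = -8.0 * np.exp(-lag / 0.005) - 0.8 * np.exp(-lag / 0.2)

    spikes = np.zeros((n_trials, steps), dtype=bool)
    buffers = np.zeros((n_trials, h_len))       # recent spikes, oldest first
    for i in range(steps):
        h_term = buffers @ history[::-1]        # history kernel applied to past spikes
        rate = stimulus[i] * np.exp(h_term)     # conditional intensity (Hz)
        fired = rng.random(n_trials) < rate * dt
        spikes[:, i] = fired
        buffers = np.roll(buffers, -1, axis=1)
        buffers[:, -1] = fired

    psth = spikes.mean(axis=0) / dt             # trial-averaged rate (Hz)
    peak_rate = psth[(t > 0.1) & (t < 0.15)].mean()     # around a stimulus peak
    trough_rate = psth[(t > 0.35) & (t < 0.4)].mean()   # around a stimulus trough
    ```

    The quasi-renewal equation of the paper predicts this PSTH analytically; the brute-force trial average here serves as the ground truth such a rate equation would be checked against.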

  4. Multi-fluid transport code modeling of time-dependent recycling in ELMy H-mode

    SciTech Connect

    Pigarov, A. Yu.; Krasheninnikov, S. I.; Rognlien, T. D.; Hollmann, E. M.; Lasnier, C. J.; Unterberg, Ezekial A

    2014-01-01

    Simulations of a high-confinement-mode (H-mode) tokamak discharge with infrequent giant type-I ELMs are performed by the multi-fluid, multi-species, two-dimensional transport code UEDGE-MB, which incorporates the Macro-Blob approach for intermittent non-diffusive transport due to filamentary coherent structures observed during the Edge Localized Modes (ELMs) and simple time-dependent multi-parametric models for cross-field plasma transport coefficients and working gas inventory in material surfaces. Temporal evolutions of pedestal plasma profiles, divertor recycling, and wall inventory in a sequence of ELMs are studied and compared to the experimental time-dependent data. Short- and long-time-scale variations of the pedestal and divertor plasmas where the ELM is described as a sequence of macro-blobs are discussed. It is shown that the ELM recovery includes the phase of relatively dense and cold post-ELM divertor plasma evolving on a several ms scale, which is set by the transport properties of H-mode barrier. The global gas balance in the discharge is also analyzed. The calculated rates of working gas deposition during each ELM and wall outgassing between ELMs are compared to the ELM particle losses from the pedestal and neutral-beam-injection fueling rate, correspondingly. A sensitivity study of the pedestal and divertor plasmas to model assumptions for gas deposition and release on material surfaces is presented. The performed simulations show that the dynamics of pedestal particle inventory is dominated by the transient intense gas deposition into the wall during each ELM followed by continuous gas release between ELMs at roughly a constant rate.

  6. Tracking the Time Course of Word-Frequency Effects in Auditory Word Recognition with Event-Related Potentials

    ERIC Educational Resources Information Center

    Dufour, Sophie; Brunelliere, Angele; Frauenfelder, Ulrich H.

    2013-01-01

    Although the word-frequency effect is one of the most established findings in spoken-word recognition, the precise processing locus of this effect is still a topic of debate. In this study, we used event-related potentials (ERPs) to track the time course of the word-frequency effect. In addition, the neighborhood density effect, which is known to…

  7. Learning Novel Phonological Representations in Developmental Dyslexia: Associations with Basic Auditory Processing of Rise Time and Phonological Awareness

    ERIC Educational Resources Information Center

    Thomson, Jennifer M.; Goswami, Usha

    2010-01-01

    Across languages, children with developmental dyslexia are known to have impaired lexical phonological representations. Here, we explore associations between learning new phonological representations, phonological awareness, and sensitivity to amplitude envelope onsets (rise time). We show that individual differences in learning novel phonological…

  8. Implementation and evaluation of a simulation curriculum for paediatric residency programs including just-in-time in situ mock codes

    PubMed Central

    Sam, Jonathan; Pierse, Michael; Al-Qahtani, Abdullah; Cheng, Adam

    2012-01-01

    OBJECTIVE: To develop, implement and evaluate a simulation-based acute care curriculum in a paediatric residency program using an integrated and longitudinal approach. DESIGN: Curriculum framework consisting of three modular, year-specific courses and longitudinal just-in-time, in situ mock codes. SETTING: Paediatric residency program at BC Children’s Hospital, Vancouver, British Columbia. INTERVENTIONS: The three year-specific courses focused on the critical first 5 min, complex medical management and crisis resource management, respectively. The just-in-time in situ mock codes simulated the acute deterioration of an existing ward patient, prepared the actual multidisciplinary code team, and primed the surrounding crisis support systems. Each curriculum component was evaluated with surveys using a five-point Likert scale. RESULTS: A total of 40 resident surveys were completed after each of the modular courses, and an additional 28 surveys were completed for the overall simulation curriculum. The highest Likert scores were for hands-on skill stations, immersive simulation environment and crisis resource management teaching. Survey results also suggested that just-in-time mock codes were realistic, reinforced learning, and prepared ward teams for patient deterioration. CONCLUSIONS: A simulation-based acute care curriculum was successfully integrated into a paediatric residency program. It provides a model for integrating simulation-based learning into other training programs, as well as a model for any hospital that wishes to improve paediatric resuscitation outcomes using just-in-time in situ mock codes. PMID:23372405

  9. Auditory agnosia due to long-term severe hydrocephalus caused by spina bifida - specific auditory pathway versus nonspecific auditory pathway.

    PubMed

    Zhang, Qing; Kaga, Kimitaka; Hayashi, Akimasa

    2011-07-01

    A 27-year-old female showed auditory agnosia after long-term severe hydrocephalus due to congenital spina bifida. After years of hydrocephalus, she gradually suffered from hearing loss in her right ear at 19 years of age, followed by her left ear. During the time when she retained some ability to hear, she experienced severe difficulty in distinguishing verbal, environmental, and musical instrumental sounds. However, her auditory brainstem response and distortion product otoacoustic emissions were largely intact in the left ear. Her bilateral auditory cortices were preserved, as shown by neuroimaging, whereas her auditory radiations were severely damaged owing to progressive hydrocephalus. Although she had a complete bilateral hearing loss, she felt great pleasure when exposed to music. After years of self-training to read lips, she regained fluent ability to communicate. Clinical manifestations of this patient indicate that auditory agnosia can occur after long-term hydrocephalus due to spina bifida; the secondary auditory pathway may play a role in both auditory perception and hearing rehabilitation.

  10. Global Time Dependent Solutions of Stochastically Driven Standard Accretion Disks: Development of Hydrodynamical Code

    NASA Astrophysics Data System (ADS)

    Wani, Naveel; Maqbool, Bari; Iqbal, Naseer; Misra, Ranjeev

    2016-07-01

    X-ray binaries and AGNs are powered by accretion discs around compact objects, where the X-rays are emitted from the inner regions and UV emission arises from the relatively cooler outer parts. There is increasing evidence that the variability of the X-rays on different timescales is caused by stochastic fluctuations in the accretion disc at different radii. Although these fluctuations arise in the outer parts of the disc, they propagate inwards to give rise to X-ray variability, and hence provide a natural connection between the X-ray and UV variability. There are analytical expressions to qualitatively understand the effect of these stochastic variabilities, but quantitative predictions are only possible through a detailed hydrodynamical study of the global time-dependent solution of the standard accretion disc. We have developed a numerically efficient code (incorporating all these effects) that considers gas-pressure-dominated solutions and stochastic fluctuations with the inclusion of the boundary effect of the last stable orbit.
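    A toy numpy sketch of the propagating-fluctuations picture the abstract describes (not the authors' hydrodynamical code): slow stochastic fluctuations generated at successive annuli compound multiplicatively as accretion moves inward, yielding a positively skewed, lognormal-like light curve. Every scaling and parameter here is an illustrative assumption.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    dt, n = 0.01, 20000                       # time step and number of samples

    def ou(tau, sigma):
        """Ornstein-Uhlenbeck noise: local fluctuations on timescale tau."""
        x = np.zeros(n)
        for i in range(1, n):
            x[i] = x[i-1] * (1 - dt / tau) + sigma * np.sqrt(2 * dt / tau) * rng.normal()
        return x

    radii = np.geomspace(1.0, 10.0, 10)       # annuli, inner to outer (arbitrary units)
    mdot = np.ones(n)                         # accretion rate reaching the inner edge
    for r in radii[::-1]:                     # start at the outer edge, move inward
        tau = 0.02 * r ** 1.5                 # slower (viscous) fluctuations farther out
        mdot *= np.exp(0.2 * ou(tau, 1.0))    # multiplicative coupling of fluctuations
        # (a fuller model would also delay each annulus's signal on its way inward)

    # multiplicative compounding gives a positively skewed, lognormal-like light curve
    skewness = np.mean((mdot - mdot.mean()) ** 3) / mdot.std() ** 3
    ```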

  11. Evaluation of a thin-slot formalism for finite-difference time-domain electromagnetics codes

    SciTech Connect

    Turner, C.D.; Bacon, L.D.

    1987-03-01

    A thin-slot formalism for use with finite-difference time-domain (FDTD) electromagnetics codes has been evaluated in both two and three dimensions. This formalism allows narrow slots to be modeled in the wall of a scatterer without reducing the space grid size to the gap width. In two dimensions, the evaluation involves the calculation of the total fields near two infinitesimally thin coplanar strips separated by a gap. A method-of-moments (MoM) solution of the same problem is used as a benchmark for comparison. Results in two dimensions show that up to 10% error can be expected in total electric and magnetic fields both near (lambda/40) and far (1 lambda) from the slot. In three dimensions, the evaluation is similar. The finite-length slot is placed in a finite plate and an MoM surface patch solution is used for the benchmark. These results, although less extensive than those in two dimensions, show that slightly larger errors can be expected. Considering the approximations made near the slot in incorporating the formalism, the results are very promising. Possibilities also exist for applying this formalism to walls of arbitrary thickness and to other types of slots, such as overlapping joints. 11 refs., 25 figs., 6 tabs.

  12. Detection by real time PCR of walnut allergen coding sequences in processed foods.

    PubMed

    Linacero, Rosario; Ballesteros, Isabel; Sanchiz, Africa; Prieto, Nuria; Iniesto, Elisa; Martinez, Yolanda; Pedrosa, Mercedes M; Muzquiz, Mercedes; Cabanillas, Beatriz; Rovira, Mercè; Burbano, Carmen; Cuadrado, Carmen

    2016-07-01

    A quantitative real-time PCR (RT-PCR) method, employing novel primer sets designed on Jug r 1, Jug r 3, and Jug r 4 allergen-coding sequences, was set up and validated. Its specificity, sensitivity, and applicability were evaluated. The DNA extraction method based on CTAB-phenol-chloroform was best for walnut. RT-PCR allowed a specific and accurate amplification of the allergen sequence, and the limit of detection was 2.5 pg of walnut DNA. The method's sensitivity and robustness were confirmed with spiked samples, and Jug r 3 primers detected up to 100 mg/kg of raw walnut (LOD 0.01%, LOQ 0.05%). Thermal treatment combined with pressure (autoclaving) reduced the yield and amplification (integrity and quality) of walnut DNA. High hydrostatic pressure (HHP) did not produce any effect on the walnut DNA amplification. This RT-PCR method showed greater sensitivity and reliability in the detection of walnut traces in commercial foodstuffs compared with ELISA assays.

  13. Impairment of auditory spatial localization in congenitally blind human subjects.

    PubMed

    Gori, Monica; Sandini, Giulio; Martinoli, Cristina; Burr, David C

    2014-01-01

    Several studies have demonstrated enhanced auditory processing in the blind, suggesting that they compensate for their visual impairment in part with greater sensitivity of the other senses. However, several physiological studies show that early visual deprivation can impact negatively on auditory spatial localization. Here we report for the first time severely impaired auditory localization in the congenitally blind: thresholds for spatially bisecting three consecutive, spatially distributed sound sources were seriously compromised, on average 4.2-fold worse than typical thresholds, with half of the subjects performing at random. In agreement with previous studies, these subjects showed no deficits on simpler auditory spatial tasks or on auditory temporal bisection, suggesting that the encoding of Euclidean auditory relationships is specifically compromised in the congenitally blind. This points to the importance of visual experience in the construction and calibration of auditory spatial maps, with implications for rehabilitation strategies for the congenitally blind.

  14. Transient human auditory cortex activation during volitional attention shifting

    PubMed Central

    Uhlig, Christian Harm; Gutschalk, Alexander

    2017-01-01

    While strong activation of auditory cortex is generally found for exogenous orienting of attention, endogenous, intra-modal shifting of auditory attention has not yet been demonstrated to evoke transient activation of the auditory cortex. Here, we used fMRI to test if endogenous shifting of attention is also associated with transient activation of the auditory cortex. In contrast to previous studies, attention shifts were completely self-initiated and not cued by transient auditory or visual stimuli. Stimuli were two dichotic, continuous streams of tones, whose perceptual grouping was not ambiguous. Participants were instructed to continuously focus on one of the streams and switch between the two after a while, indicating the time and direction of each attentional shift by pressing one of two response buttons. The BOLD response around the time of the button presses revealed robust activation of the auditory cortex, along with activation of a distributed task network. To test if the transient auditory cortex activation was specifically related to auditory orienting, a self-paced motor task was added, where participants were instructed to ignore the auditory stimulation while they pressed the response buttons in alternation and at a similar pace. Results showed that attentional orienting produced stronger activity in auditory cortex, but auditory cortex activation was also observed for button presses without focused attention to the auditory stimulus. The response related to attention shifting was stronger contralateral to the side to which attention was shifted. Contralateral-dominant activation was also observed in dorsal parietal cortex areas, confirming previous observations for auditory attention shifting in studies that used auditory cues. PMID:28273110

  15. A Real-Time SAR Processor using One-Bit Raw Signal Coding for SRTM

    DTIC Science & Technology

    2000-10-01


  16. Statistical learning of an auditory sequence and reorganization of acquired knowledge: A time course of word segmentation and ordering.

    PubMed

    Daikoku, Tatsuya; Yatomi, Yutaka; Yumoto, Masato

    2017-01-27

    Previous neural studies have supported the hypothesis that statistical learning mechanisms are used broadly across different domains such as language and music. However, these studies have only investigated a single aspect of statistical learning at a time, such as recognizing word boundaries or learning word order patterns. In this study, we investigated how the two levels of statistical learning, recognizing word boundaries and word ordering, are reflected in neuromagnetic responses, and how acquired statistical knowledge is reorganised when the syntactic rules are revised. Neuromagnetic responses to a Japanese-vowel sequence (a, e, i, o, and u), presented every 0.45 s, were recorded from 14 right-handed Japanese participants. The vowel order was constrained by a Markov stochastic model such that five nonsense words (aue, eao, iea, oiu, and uoi) were chained with an either-or rule: the probability of the forthcoming word was statistically defined (80% for one word; 20% for the other word) by the most recent two words. All of the word transition probabilities (80% and 20%) were switched in the middle of the sequence. In the first and second quarters of the sequence, the neuromagnetic responses to the words that appeared with higher transitional probability were significantly reduced compared with those that appeared with lower transitional probability. After switching the word transition probabilities, the response reduction was replicated in the last quarter of the sequence. The responses to the final vowels in the words were significantly reduced compared with those to the initial vowels in the last quarter of the sequence. The results suggest that both within-word and between-word statistical learning are reflected in neural responses. The present study supports the hypothesis that listeners learn larger structures, such as phrases, first, and subsequently extract smaller structures, such as words, from the learned phrases. The present
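The stimulus design above can be sketched as a second-order Markov word chain. Only the five nonsense words, the either-or rule, and the 80%/20% split are taken from the abstract; the actual word-pair-to-successor table is not given there, so the mapping below is hypothetical.

```python
import random

WORDS = ["aue", "eao", "iea", "oiu", "uoi"]

def successors(w1, w2):
    """Hypothetical either-or rule: the two most recent words select two
    distinct candidate successors. The study's actual table is not stated
    in the abstract; this mapping is for illustration only."""
    i = (WORDS.index(w1) + 2 * WORDS.index(w2)) % len(WORDS)
    return WORDS[i], WORDS[(i + 1) % len(WORDS)]

def make_stream(n_words, p_likely=0.8, seed=0):
    """Chain n_words nonsense words; each successor is the 'likely'
    candidate with probability p_likely, else the 'unlikely' one."""
    rng = random.Random(seed)
    seq = [WORDS[0], WORDS[1]]
    for _ in range(n_words - 2):
        likely, unlikely = successors(seq[-2], seq[-1])
        seq.append(likely if rng.random() < p_likely else unlikely)
    return seq

# One vowel every 0.45 s in the experiment; switching the 80%/20%
# probabilities mid-sequence corresponds to swapping the roles of the
# two candidates returned by successors() halfway through the stream.
stream = make_stream(500)
```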

  17. Phase-amplitude cross-frequency coupling in EEG-derived cortical time series upon an auditory perception task.

    PubMed

    Papadaniil, Chrysa D; Kosmidou, Vasiliki E; Tsolaki, Anthoula; Tsolaki, Magda; Kompatsiaris, Ioannis Yiannis; Hadjileontiadis, Leontios J

    2015-01-01

    Recent evidence suggests that cross-frequency coupling (CFC) plays an essential role in multi-scale communication across the brain. The amplitude of high-frequency oscillations, responsible for local activity, is modulated by the phase of lower-frequency activity in a task- and region-relevant way. In this paper, we examine this phase-amplitude coupling in a two-tone oddball paradigm for the low frequency bands (delta, theta, alpha, and beta) and determine the most prominent CFCs. Data consisted of cortical time series, extracted by applying three-dimensional vector field tomography (3D-VFT) to high-density (256-channel) electroencephalography (HD-EEG), and CFC analysis was based on the phase-amplitude coupling metric PAC. Our findings suggest CFC spanning all brain regions and low frequencies. Stronger coupling was observed in the delta band, which is closely linked to sensory processing. However, theta coupling was reinforced in the target tone response, revealing a task-dependent CFC and its role in brain network communication.
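A minimal version of the phase-amplitude coupling computation described above can be written with a mean-vector-length (MVL) estimator: the fast-band amplitude is weighted by the slow-band phase, and strong coupling yields a long mean complex vector. The abstract's PAC metric may differ in detail (normalization, surrogate correction), so this is a sketch of the general idea, not the authors' exact measure.

```python
import numpy as np

def analytic(x):
    """Analytic signal via FFT (numpy-only stand-in for scipy.signal.hilbert)."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spec * h)

def pac_mvl(slow, fast):
    """Normalized mean-vector-length PAC: fast-band amplitude weighted by
    the slow-band phase; ~0 for no coupling, larger for strong coupling."""
    phase = np.angle(analytic(slow))
    amp = np.abs(analytic(fast))
    return np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp)

# Synthetic demo: a 2 Hz (delta-band) modulator whose phase drives the
# amplitude of a 40 Hz (gamma-band) oscillation.
fs = 500.0
t = np.arange(5000) / fs
slow = np.sin(2 * np.pi * 2 * t)
coupled = (1 + slow) * np.sin(2 * np.pi * 40 * t)   # gamma tracks delta phase
uncoupled = np.sin(2 * np.pi * 40 * t)              # constant gamma amplitude
```

With this synthetic signal the coupled case yields an MVL near 0.5 while the uncoupled case is near zero.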

  18. Leap Frog and Time Step Sub-Cycle Scheme for Coupled Neutronics and Thermal-Hydraulic Codes

    SciTech Connect

    Lu, S.

    2002-07-01

    As a result of advancing TCP/IP-based inter-process communication technology, more and more legacy thermal-hydraulic codes have been coupled with neutronics codes to provide best-estimate capabilities for reactivity-related reactor transient analysis. Most of the coupling schemes are based on closely coupled serial or parallel approaches; the execution of the coupled codes therefore usually requires significant CPU time when a complicated system is analyzed. The Leap Frog scheme has been used to reduce the run time. The extent of the decoupling is usually determined by a trial-and-error process for a specific analysis. It is the intent of this paper to develop a set of general criteria that can be used to invoke an automatic Leap Frog algorithm. The algorithm will not only reduce run time but also preserve accuracy. The criteria will also serve as the basis of an automatic time step sub-cycle scheme for cases where a sudden reactivity change is introduced while the thermal-hydraulic code is marching with a relatively large time step. (authors)
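The sub-cycle idea above can be sketched as a driver loop in which the thermal-hydraulic code keeps its large step while the neutronics step is subdivided whenever the reactivity change across the step exceeds a threshold. The threshold `d_rho_max` and cap `n_sub_max` below are assumed values for illustration, not the paper's actual criteria.

```python
import math

def run_coupled(reactivity, t_end, dt_th, d_rho_max=1e-4, n_sub_max=16):
    """Toy driver: thermal-hydraulics marches with a fixed step dt_th,
    while the neutronics step is sub-cycled whenever the reactivity
    change across the step exceeds d_rho_max. Threshold and cap are
    hypothetical, not the paper's criteria."""
    t, schedule = 0.0, []
    while t < t_end - 1e-12:
        d_rho = abs(reactivity(t + dt_th) - reactivity(t))
        n_sub = min(n_sub_max, max(1, math.ceil(d_rho / d_rho_max)))
        schedule.append((t, n_sub))
        # here: advance neutronics n_sub times with dt_th / n_sub,
        # then advance thermal-hydraulics once with dt_th (Leap Frog)
        t += dt_th
    return schedule

# A step reactivity insertion at t = 1.0 triggers sub-cycling only on
# the thermal-hydraulic step that straddles the change:
sched = run_coupled(lambda t: 0.0 if t < 1.0 else 4.5e-4, 2.0, 0.25)
```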

  19. Statistical learning of recurring sound patterns encodes auditory objects in songbird forebrain.

    PubMed

    Lu, Kai; Vicario, David S

    2014-10-07

    Auditory neurophysiology has demonstrated how basic acoustic features are mapped in the brain, but it is still not clear how multiple sound components are integrated over time and recognized as an object. We investigated the role of statistical learning in encoding the sequential features of complex sounds by recording neuronal responses bilaterally in the auditory forebrain of awake songbirds that were passively exposed to long sound streams. These streams contained sequential regularities and were similar to the streams used to demonstrate statistical learning for speech sounds in human infants. For stimulus patterns with contiguous transitions and with nonadjacent elements, single- and multiunit responses reflected neuronal discrimination of the familiar patterns from novel patterns. In addition, discrimination of nonadjacent patterns was stronger in the right hemisphere than in the left, and may reflect a lateralized effect of top-down modulation. Responses to recurring patterns showed stimulus-specific adaptation, a sparsening of neural activity that may contribute to encoding invariants in the sound stream and that appears to increase coding efficiency for the familiar stimuli across the population of neurons recorded. As auditory information about the world must be received serially over time, recognition of complex auditory objects may depend on this type of mnemonic process to create and differentiate representations of recently heard sounds.

  20. Time-frequency joint coding method for boosting information transfer rate in an SSVEP based BCI system.

    PubMed

    Ke Lin; Yijun Wang; Xiaorong Gao

    2016-08-01

    Steady-State Visual Evoked Potential (SSVEP) based Brain-Computer Interface (BCI) systems are an important BCI modality. They have advantages such as ease of use, little training, and a high Information Transfer Rate (ITR). Traditional SSVEP-based BCI systems are based on the Frequency Division Multiple Access (FDMA) approach from telecommunications. Recently, Time Division Multiple Access (TDMA) was also introduced to SSVEP-based BCI to enhance system performance. This study designed a new time-frequency joint coding method to utilize information coding in both the time and frequency domains. TDMA using Different Frequency (DF) mode and Same Frequency (SF) mode were compared with the traditional FDMA mode in an offline experiment. The result showed that the DF mode had better performance than the other two modes. The mean and standard deviation of accuracy and ITR in the online experiment were 83.3%±5.5% and 130.3±14.9 bits/min (trial time: 1.25 s), and 92.0%±7.5% and 136.6±19.8 bits/min (trial time: 1.5 s). The average typing speed for the word-copy spelling task was 14.9 characters per minute (cpm) (trial time: 1.25 s) and 14.8 cpm (trial time: 1.5 s). The overall results demonstrate the feasibility and advantage of the proposed time-frequency joint coding method.
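ITR figures like those above are conventionally computed with the Wolpaw formula, ITR = (60/T)·[log2 N + P·log2 P + (1−P)·log2((1−P)/(N−1))] bits/min for N targets, accuracy P, and trial time T in seconds. The abstract does not state the number of targets, so `n_targets` in this sketch is an assumption supplied by the caller.

```python
import math

def wolpaw_itr(n_targets, accuracy, trial_s):
    """Wolpaw information transfer rate in bits/min. Accuracies at or
    below chance (1/N) are clamped to 0 bits, a common convention."""
    n, p = n_targets, accuracy
    if p >= 1.0:
        bits = math.log2(n)
    elif p <= 1.0 / n:
        bits = 0.0
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1.0 - p) * math.log2((1.0 - p) / (n - 1)))
    return bits * 60.0 / trial_s
```

For example, perfect accuracy on a 2-target system with 60 s trials gives exactly 1 bit/min, and for fixed N and T the rate grows monotonically with accuracy above chance.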

  1. Navajo Code Talker Joe Morris, Sr. shared insights from his time as a secret World War Two messenger

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Navajo Code Talker Joe Morris, Sr. shared insights from his time as a secret World War Two messenger with his audience at NASA's Dryden Flight Research Center on Nov. 26, 2002. NASA Dryden is located on Edwards Air Force Base in California's Mojave Desert.

  2. Summary statistics in auditory perception.

    PubMed

    McDermott, Josh H; Schemitsch, Michael; Simoncelli, Eero P

    2013-04-01

    Sensory signals are transduced at high resolution, but their structure must be stored in a more compact format. Here we provide evidence that the auditory system summarizes the temporal details of sounds using time-averaged statistics. We measured discrimination of 'sound textures' that were characterized by particular statistical properties, as normally result from the superposition of many acoustic features in auditory scenes. When listeners discriminated examples of different textures, performance improved with excerpt duration. In contrast, when listeners discriminated different examples of the same texture, performance declined with duration, a paradoxical result given that the information available for discrimination grows with duration. These results indicate that once these sounds are of moderate length, the brain's representation is limited to time-averaged statistics, which, for different examples of the same texture, converge to the same values with increasing duration. Such statistical representations produce good categorical discrimination, but limit the ability to discern temporal detail.
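The paradox above, in which discrimination of different examples of the same texture worsens with duration, falls out of any time-averaged-statistics representation: the longer the excerpt, the closer the statistics of two independent excerpts converge. A toy illustration, with i.i.d. Gaussian noise standing in for a texture and only mean and standard deviation standing in for the richer statistic set the study used:

```python
import random
import statistics

def stat_distance(n, seed_a, seed_b):
    """Distance between the time-averaged statistics (mean and std) of
    two independent excerpts of the same 'texture' (here: i.i.d. Gaussian
    noise as a stand-in for a real sound texture)."""
    ra, rb = random.Random(seed_a), random.Random(seed_b)
    a = [ra.gauss(0.0, 1.0) for _ in range(n)]
    b = [rb.gauss(0.0, 1.0) for _ in range(n)]
    return (abs(statistics.fmean(a) - statistics.fmean(b))
            + abs(statistics.pstdev(a) - statistics.pstdev(b)))

# Longer excerpts -> the statistics of two different examples converge,
# so a statistics-only representation makes them harder to tell apart.
short = sum(stat_distance(100, 2 * k, 2 * k + 1) for k in range(20))
long_ = sum(stat_distance(10000, 2 * k, 2 * k + 1) for k in range(20))
```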

  3. Coding ill-defined and unknown cause of death is 13 times more frequent in Denmark than in Finland.

    PubMed

    Ylijoki-Sørensen, Seija; Sajantila, Antti; Lalu, Kaisa; Bøggild, Henrik; Boldsen, Jesper Lier; Boel, Lene Warner Thorup

    2014-11-01

    Exact cause and manner of death determination improves legislative safety for the individual and for society and guides aspects of national public health. In the International Classification of Diseases, codes R00-R99 are used for "symptoms, signs and abnormal clinical and laboratory findings, not elsewhere classified", designated as "ill-defined" or "with unknown etiology". The World Health Organisation recommends avoiding ill-defined and unknown causes of death in death certificates, as this terminology gives no information about the conditions that led to the death. Thus, the aim of the study was, firstly, to analyse the frequencies of R00-R99-coded deaths in mortality statistics in Finland and in Denmark and, secondly, to compare these and the methods used to investigate the cause of death. To do so, we extracted a random 90% sample of the Finnish death certificates and 100% of the Danish certificates from the national mortality registries for 2000, 2005 and 2010. Subsequently, we analysed the frequencies of forensic and medical autopsies and external clinical examinations of the bodies in R00-R99-coded deaths. The use of R00-R99 codes was significantly higher in Denmark than in Finland; OR 18.6 (95% CI 15.3-22.4; p<0.001) for 2000, OR 9.5 (95% CI 8.0-11.3; p<0.001) for 2005 and OR 13.2 (95% CI 11.1-15.7; p<0.001) for 2010. In more than 80% of Danish deaths with R00-R99 codes, the deceased was over 70 years of age. Forensic autopsy was performed in 88.3% of Finnish R00-R99-coded deaths, whereas only 3.5% of Danish R00-R99-coded deaths were investigated with a forensic or medical autopsy. The codes most used in both countries were R96-R99, meaning "unknown cause of death". In Finland, all of these deaths were investigated with a forensic autopsy.
Our study suggests that if all deaths in all age groups with unclear cause of death were systematically investigated with a forensic autopsy, only 2-3/1000 deaths per year
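Odds ratios with Wald-style confidence intervals, as quoted above, can be reproduced from a 2×2 table of R00-R99-coded versus otherwise-coded deaths in the two countries: OR = ad/bc, with a 95% CI of exp(ln OR ± 1.96·√(1/a + 1/b + 1/c + 1/d)). The counts below are hypothetical, chosen only to show the mechanics; the study's real counts are not given in the abstract.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a, b = group 1 with / without the outcome (e.g. Danish deaths coded
    R00-R99 vs otherwise); c, d = the same for group 2 (Finland)."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts, for illustration only:
or_, lo, hi = odds_ratio_ci(1500, 50000, 120, 52000)
```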

  4. Selective adaptation to "oddball" sounds by the human auditory system.

    PubMed

    Simpson, Andrew J R; Harper, Nicol S; Reiss, Joshua D; McAlpine, David

    2014-01-29

    Adaptation to both common and rare sounds has been independently reported in neurophysiological studies using probabilistic stimulus paradigms in small mammals. However, the apparent sensitivity of the mammalian auditory system to the statistics of incoming sound has not yet been generalized to task-related human auditory perception. Here, we show that human listeners selectively adapt to novel sounds within scenes unfolding over minutes. Listeners' performance in an auditory discrimination task remains steady for the most common elements within the scene but, after the first minute, performance improves for distinct and rare (oddball) sound elements, at the expense of rare sounds that are relatively less distinct. Our data provide the first evidence of enhanced coding of oddball sounds in a human auditory discrimination task and suggest the existence of an adaptive mechanism that tracks the long-term statistics of sounds and deploys coding resources accordingly.

  5. Translating Neurocognitive Models of Auditory-Verbal Hallucinations into Therapy: Using Real-time fMRI-Neurofeedback to Treat Voices

    PubMed Central

    Fovet, Thomas; Orlov, Natasza; Dyck, Miriam; Allen, Paul; Mathiak, Klaus; Jardri, Renaud

    2016-01-01

    Auditory-verbal hallucinations (AVHs) are frequent and disabling symptoms, which can be refractory to conventional psychopharmacological treatment in more than 25% of the cases. Recent advances in brain imaging allow for a better understanding of the neural underpinnings of AVHs. These findings strengthened transdiagnostic neurocognitive models that characterize these frequent and disabling experiences. At the same time, technical improvements in real-time functional magnetic resonance imaging (fMRI) enabled the development of innovative and non-invasive methods with the potential to relieve psychiatric symptoms, such as fMRI-based neurofeedback (fMRI-NF). During fMRI-NF, brain activity is measured and fed back in real time to the participant in order to help subjects to progressively achieve voluntary control over their own neural activity. Precisely defining the target brain area/network(s) appears critical in fMRI-NF protocols. After reviewing the available neurocognitive models for AVHs, we elaborate on how recent findings in the field may help to develop strong a priori strategies for fMRI-NF target localization. The first approach relies on imaging-based “trait markers” (i.e., persistent traits or vulnerability markers that can also be detected in the presymptomatic and remitted phases of AVHs). The goal of such strategies is to target areas that show aberrant activations during AVHs or are known to be involved in compensatory activation (or resilience processes). Brain regions, from which the NF signal is derived, can be based on structural MRI and neurocognitive knowledge, or functional MRI information collected during specific cognitive tasks. Because hallucinations are acute and intrusive symptoms, a second strategy focuses more on “state markers.” In this case, the signal of interest relies on fMRI capture of the neural networks exhibiting increased activity during AVHs occurrences, by means of multivariate pattern recognition methods. The fine

  6. LUMPED: a Visual Basic code of lumped-parameter models for mean residence time analyses of groundwater systems

    NASA Astrophysics Data System (ADS)

    Ozyurt, N. N.; Bayari, C. S.

    2003-02-01

    A Microsoft® Visual Basic 6.0 (Microsoft Corporation, 1987-1998) code of 15 lumped-parameter models is presented for the analysis of mean residence time in aquifers. Groundwater flow systems obeying plug and exponential flow models, and their combinations in parallel or serial connection, can be simulated by these steady-state models, which may include complications such as bypass flow and dead volume. Each model accepts tritium, krypton-85, chlorofluorocarbons (CFC-11, CFC-12 and CFC-113) and sulfur hexafluoride (SF6) as environmental tracers. Retardation of gas tracers in the unsaturated zone and their degradation in the flow system may also be accounted for. The executable code has been tested to run under Windows 95 or higher operating systems. The results of comparisons with other comparable codes are discussed and the limitations are indicated.
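The core of such lumped-parameter models is a convolution of the tracer input history with a transit-time distribution, attenuated by radioactive decay for tracers such as tritium (λ = ln 2 / 12.32 per year). A discrete-time sketch of the exponential model is below; it is an illustration of the model class, not the LUMPED code itself, and the weights are normalized over the available record, which slightly biases the earliest years.

```python
import math

TRITIUM_LAMBDA = math.log(2) / 12.32  # tritium decay constant, 1/yr

def exponential_model_output(c_in, mrt, lam=0.0):
    """Discrete exponential-model response: the output concentration is
    the input history c_in (annual series) weighted by the transit-time
    distribution g(tau) ~ exp(-tau/mrt), normalized over the record, and
    decayed by exp(-lam * tau). mrt is the mean residence time in years."""
    w = [math.exp(-tau / mrt) for tau in range(len(c_in))]
    out = []
    for t in range(len(c_in)):
        norm = sum(w[:t + 1])
        out.append(sum(c_in[t - tau] * (w[tau] / norm) * math.exp(-lam * tau)
                       for tau in range(t + 1)))
    return out
```

With a stable (non-decaying) tracer and a constant input, the output stays at the input concentration; adding tritium decay attenuates the output as residence time accumulates.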

  7. ynogkm: A new public code for calculating time-like geodesics in the Kerr-Newman spacetime

    NASA Astrophysics Data System (ADS)

    Yang, Xiao-Lin; Wang, Jian-Cheng

    2014-01-01

    In this paper, we present a new public code, named ynogkm (Yun-Nan observatories geodesic in a Kerr-Newman spacetime for massive particles), for the fast calculation of time-like geodesics in the Kerr-Newman (K-N) spacetime, which is a direct extension of ynogk (Yun-Nan observatories geodesic Kerr) calculating null geodesics in a Kerr spacetime. Following the strategies used in ynogk, we also solve the equations of motion analytically and semi-analytically by using Weierstrass' and Jacobi's elliptic functions and integrals, in which the Boyer-Lindquist (B-L) coordinates r, θ, φ, t and the proper time σ are expressed as functions of an independent variable p (Mino time). All of the elliptic integrals are computed by Carlson's elliptic integral method, which guarantees the fast speed of the code. Finally, the code is applied to a couple of toy problems. The current version of the code is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/561/A127
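Carlson's method, which the abstract credits for the code's speed, evaluates symmetric elliptic integrals by a duplication iteration followed by a short Taylor series. A sketch of the basic building block R_F(x, y, z) is below, using the standard fifth-order series constants; it illustrates the technique, not ynogkm's actual implementation.

```python
import math

def carlson_rf(x, y, z, rtol=1e-10):
    """Carlson's symmetric elliptic integral R_F(x, y, z) (x, y, z >= 0,
    at most one zero) via the duplication algorithm: each iteration
    shrinks the spread of the arguments by ~4x, then a truncated series
    finishes the job."""
    while True:
        sx, sy, sz = math.sqrt(x), math.sqrt(y), math.sqrt(z)
        lam = sx * sy + sy * sz + sz * sx
        x, y, z = (x + lam) / 4, (y + lam) / 4, (z + lam) / 4
        mu = (x + y + z) / 3
        dx, dy, dz = (mu - x) / mu, (mu - y) / mu, (mu - z) / mu
        if max(abs(dx), abs(dy), abs(dz)) < rtol:
            break
    e2 = dx * dy - dz * dz
    e3 = dx * dy * dz
    return (1 + (e2 / 24 - 0.1 - 3 * e3 / 44) * e2 + e3 / 14) / math.sqrt(mu)
```

Two classic checks: R_F(1, 1, 1) = 1 and R_F(0, y, y) = π / (2√y).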

  8. Dynamic bandwidth allocation algorithm for next-generation time division multiplexing passive optical networks with network coding

    NASA Astrophysics Data System (ADS)

    Wei, Pei; Gu, Rentao; Ji, Yuefeng

    2013-08-01

    An efficient dynamic bandwidth allocation (DBA) algorithm for multiclass services called MSDBA is proposed for next-generation time division multiplexing (TDM) passive optical networks with network coding (NC-PON). In MSDBA, a DBA cycle is divided into two subcycles with different coding strategies for differentiated classes of services, and the transmission time of the first subcycle overlaps with the bandwidth allocation calculation time at the optical line terminal. Moreover, according to the quality-of-service (QoS) requirements of services, different scheduling and bandwidth allocation schemes are applied to coded or uncoded services in the corresponding subcycle. Numerical analyses and simulations for performance evaluation are performed in 10 Gbps ethernet passive optical networks (10G EPON), which is a standardized solution for next-generation EPON. Evaluation results show that compared with the existing two DBA algorithms deployed in TDM NC-PON, MSDBA not only demonstrates better performance in delay and QoS support for all classes of services but also achieves the maximum end-to-end delay fairness between coded and uncoded lower-class services and guarantees the end-to-end delay bound and fixed polling order of high-class services by sacrificing their end-to-end delay fairness for compromise.

  9. Intensity modulation and direct detection Alamouti polarization-time coding for optical fiber transmission systems with polarization mode dispersion

    NASA Astrophysics Data System (ADS)

    Reza, Ahmed Galib; Rhee, June-Koo Kevin

    2016-07-01

    Alamouti space-time coding is modified in the form of polarization-time coding to combat against polarization mode dispersion (PMD) impairments in exploiting a polarization diversity multiplex (PDM) gain with simple intensity modulation and direct detection (IM/DD) in optical transmission systems. A theoretical model for the proposed IM/DD Alamouti polarization-time coding (APTC-IM/DD) using nonreturn-to-zero on-off keying signal can surprisingly eliminate the requirement of channel estimation for decoding in the low PMD regime, when a two-transmitter and two-receiver channel is adopted. Even in the high PMD regime, the proposed APTC-IM/DD still reveals coding gain demonstrating the robustness of APTC-IM/DD. In addition, this scheme can eliminate the requirements for a polarization state controller, a coherent receiver, and a high-speed analog-to-digital converter at a receiver. Simulation results reveal that the proposed APTC scheme is able to reduce the optical signal-to-noise ratio requirement by ˜3 dB and significantly enhance the PMD tolerance of a PDM-based IM/DD system.
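For intuition, the underlying Alamouti code transmits (s1, s2) in one symbol period and (−s2*, s1*) in the next, so that linear combining with the channel gains recovers both symbols with full diversity. The sketch below is the textbook coherent 2×1 version over an assumed flat, noise-free channel; note the paper's IM/DD polarization-time variant specifically avoids the channel estimation this sketch assumes.

```python
import numpy as np

def alamouti_encode(s1, s2):
    """Two symbol periods over two branches (here: two polarizations):
    period 1 sends (s1, s2); period 2 sends (-conj(s2), conj(s1))."""
    return np.array([[s1, s2], [-np.conj(s2), np.conj(s1)]])

def alamouti_decode(r1, r2, h1, h2):
    """Linear combining with known channel gains h1, h2 (the textbook
    coherent receiver, unlike the paper's estimation-free scheme)."""
    z1 = np.conj(h1) * r1 + h2 * np.conj(r2)
    z2 = np.conj(h2) * r1 - h1 * np.conj(r2)
    g = abs(h1) ** 2 + abs(h2) ** 2
    return z1 / g, z2 / g

h1, h2 = 0.8 + 0.3j, 0.5 - 0.6j     # arbitrary flat channel gains
s1, s2 = 1 + 0j, -1 + 0j            # e.g. BPSK/OOK-like symbols
X = alamouti_encode(s1, s2)
r1 = h1 * X[0, 0] + h2 * X[0, 1]    # received in period 1 (noise-free)
r2 = h1 * X[1, 0] + h2 * X[1, 1]    # received in period 2
est1, est2 = alamouti_decode(r1, r2, h1, h2)
```

In the noise-free case the cross terms cancel exactly and both symbols are recovered.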

  10. Surveying Multidisciplinary Aspects in Real-Time Distributed Coding for Wireless Sensor Networks

    PubMed Central

    Braccini, Carlo; Davoli, Franco; Marchese, Mario; Mongelli, Maurizio

    2015-01-01

    Wireless Sensor Networks (WSNs), where a multiplicity of sensors observe a physical phenomenon and transmit their measurements to one or more sinks, pertain to the class of multi-terminal source and channel coding problems of Information Theory. In this category, “real-time” coding is often encountered for WSNs, referring to the problem of finding the minimum distortion (according to a given measure), under transmission power constraints, attainable by encoding and decoding functions, with stringent limits on delay and complexity. On the other hand, the Decision Theory approach seeks to determine the optimal coding/decoding strategies or some of their structural properties. Since encoder(s) and decoder(s) possess different information, though sharing a common goal, the setting here is that of Team Decision Theory. A more pragmatic vision rooted in Signal Processing consists of fixing the form of the coding strategies (e.g., to linear functions) and, consequently, finding the corresponding optimal decoding strategies and the achievable distortion, generally by applying parametric optimization techniques. All approaches have a long history of past investigations and recent results. The goal of the present paper is to provide the taxonomy of the various formulations, a survey of the vast related literature, examples from the authors' own research, and some highlights on the interplay of the different theories. PMID:25633597

  11. Ultrasonic imaging of human tooth using chirp-coded nonlinear time reversal acoustics

    NASA Astrophysics Data System (ADS)

    Santos, Serge Dos; Domenjoud, Mathieu; Prevorovsky, Zdenek

    2010-01-01

    We report in this paper the first use of TR-NEWS, including chirp-coded excitation, applied to ultrasonic imaging of the human tooth. Feasibility of focusing ultrasound at the surface of the human tooth is demonstrated, and the potential of a new echodentography of the dentine-enamel interface using TR-NEWS is discussed.

  12. The Code of Conduct at 42: Time for a Middle-Age Check-Up

    DTIC Science & Technology

    1998-04-01


  13. Auditory spatial localization: Developmental delay in children with visual impairments.

    PubMed

    Cappagli, Giulia; Gori, Monica

    2016-01-01

    For individuals with visual impairments, auditory spatial localization is one of the most important features to navigate in the environment. Many works suggest that blind adults show similar or even enhanced performance for localization of auditory cues compared to sighted adults (Collignon, Voss, Lassonde, & Lepore, 2009). To date, the investigation of auditory spatial localization in children with visual impairments has provided contrasting results. Here we report, for the first time, that contrary to visually impaired adults, children with low vision or total blindness show a significant impairment in the localization of static sounds. These results suggest that simple auditory spatial tasks are compromised in children, and that this capacity recovers over time.

  14. Effects of an Auditory Lateralization Training in Children Suspected to Central Auditory Processing Disorder

    PubMed Central

    Lotfi, Yones; Moosavi, Abdollah; Bakhshi, Enayatollah; Sadjedi, Hamed

    2016-01-01

    Background and Objectives: Central auditory processing disorder [(C)APD] refers to a deficit in the processing of auditory stimuli in the nervous system that is not due to higher-order language or cognitive factors. One of the problems in children with (C)APD is spatial difficulty, which has been overlooked despite its significance. Localization is the auditory ability to detect sound sources in space and can help to differentiate the desired speech from other simultaneous sound sources. The aim of this research was to investigate the effects of an auditory lateralization training on speech perception in the presence of noise/competing signals in children suspected of (C)APD. Subjects and Methods: In this analytical interventional study, 60 children suspected of (C)APD were selected based on multiple auditory processing assessment subtests. They were randomly divided into two groups: a control group (mean age 9.07) and a training group (mean age 9.00). The training program consisted of detecting and pointing to sound sources delivered with interaural time differences under headphones for 12 formal sessions (6 weeks). Spatial word recognition score (WRS) and the monaural selective auditory attention test (mSAAT) were used to follow the effects of the auditory lateralization training. Results: This study showed that in the training group, the mSAAT score and spatial WRS in noise improved significantly (p≤0.001) after the auditory lateralization training. Conclusions: We applied auditory lateralization training for 6 weeks and showed that it can significantly improve speech understanding in noise. Generalization of these results requires further research. PMID:27626084
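Headphone stimuli carrying a pure interaural time difference, as used in the training above, can be produced by delaying one channel relative to the other by the ITD. The sketch below is illustrative (tone frequency, duration, and ITD are assumed parameters, not the study's actual stimuli).

```python
import math

def dichotic_tone(freq_hz, dur_s, itd_s, fs=44100):
    """Left/right sample lists for headphone delivery. A positive ITD
    delays the left channel by itd_s, so the tone leads in the right ear
    and the image is lateralized to the right."""
    n = int(dur_s * fs)
    shift = int(round(itd_s * fs))
    left = [math.sin(2 * math.pi * freq_hz * (k - shift) / fs) for k in range(n)]
    right = [math.sin(2 * math.pi * freq_hz * k / fs) for k in range(n)]
    return left, right

# 500 Hz tone, 10 ms, 500 microsecond ITD (right-leading):
left, right = dichotic_tone(500.0, 0.01, 0.0005)
```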

  15. Auditory scene analysis by echolocation in bats.

    PubMed

    Moss, C F; Surlykke, A

    2001-10-01

    Echolocating bats transmit ultrasonic vocalizations and use information contained in the reflected sounds to analyze the auditory scene. Auditory scene analysis, a phenomenon that applies broadly to all hearing vertebrates, involves the grouping and segregation of sounds to perceptually organize information about auditory objects. The perceptual organization of sound is influenced by the spectral and temporal characteristics of acoustic signals. In the case of the echolocating bat, its active control over the timing, duration, intensity, and bandwidth of sonar transmissions directly impacts its perception of the auditory objects that comprise the scene. Here, data are presented from perceptual experiments, laboratory insect capture studies, and field recordings of sonar behavior of different bat species, to illustrate principles of importance to auditory scene analysis by echolocation in bats. In the perceptual experiments, FM bats (Eptesicus fuscus) learned to discriminate between systematic and random delay sequences in echo playback sets. The results of these experiments demonstrate that the FM bat can assemble information about echo delay changes over time, a requirement for the analysis of a dynamic auditory scene. Laboratory insect capture experiments examined the vocal production patterns of flying E. fuscus taking tethered insects in a large room. In each trial, the bats consistently produced echolocation signal groups with a relatively stable repetition rate (within 5%). Similar temporal patterning of sonar vocalizations was also observed in the field recordings from E. fuscus, thus suggesting the importance of temporal control of vocal production for perceptually guided behavior. It is hypothesized that a stable sonar signal production rate facilitates the perceptual organization of echoes arriving from objects at different directions and distances as the bat flies through a dynamic auditory scene. Field recordings of E. fuscus, Noctilio albiventris, N

  16. Auditory Processing Disorder (For Parents)

    MedlinePlus

    Auditory processing disorder (APD), also known as central auditory processing ...

  17. Neural Representation of Concurrent Vowels in Macaque Primary Auditory Cortex

    PubMed Central

    Micheyl, Christophe; Steinschneider, Mitchell

    2016-01-01

    Successful speech perception in real-world environments requires that the auditory system segregate competing voices that overlap in frequency and time into separate streams. Vowels are major constituents of speech and are composed of frequencies (harmonics) that are integer multiples of a common fundamental frequency (F0). The pitch and identity of a vowel are determined by its F0 and spectral envelope (formant structure), respectively. When two spectrally overlapping vowels differing in F0 are presented concurrently, they can be readily perceived as two separate “auditory objects” with pitches at their respective F0s. A difference in pitch between two simultaneous vowels provides a powerful cue for their segregation, which, in turn, facilitates their individual identification. The neural mechanisms underlying the segregation of concurrent vowels based on pitch differences are poorly understood. Here, we examine neural population responses in macaque primary auditory cortex (A1) to single and double concurrent vowels (/a/ and /i/) that differ in F0 such that they are heard as two separate auditory objects with distinct pitches. We find that neural population responses in A1 can resolve, via a rate-place code, lower harmonics of both single and double concurrent vowels. Furthermore, we show that the formant structures, and hence the identities, of single vowels can be reliably recovered from the neural representation of double concurrent vowels. We conclude that A1 contains sufficient spectral information to enable concurrent vowel segregation and identification by downstream cortical areas. PMID:27294198

  18. Distinct Spatiotemporal Response Properties of Excitatory Versus Inhibitory Neurons in the Mouse Auditory Cortex

    PubMed Central

    Maor, Ido; Shalev, Amos; Mizrahi, Adi

    2016-01-01

    In the auditory system, early neural stations such as brain stem are characterized by strict tonotopy, which is used to deconstruct sounds to their basic frequencies. But higher along the auditory hierarchy, as early as primary auditory cortex (A1), tonotopy starts breaking down at local circuits. Here, we studied the response properties of both excitatory and inhibitory neurons in the auditory cortex of anesthetized mice. We used in vivo two photon-targeted cell-attached recordings from identified parvalbumin-positive neurons (PVNs) and their excitatory pyramidal neighbors (PyrNs). We show that PyrNs are locally heterogeneous as characterized by diverse best frequencies, pairwise signal correlations, and response timing. In marked contrast, neighboring PVNs exhibited homogenous response properties in pairwise signal correlations and temporal responses. The distinct physiological microarchitecture of different cell types is maintained qualitatively in response to natural sounds. Excitatory heterogeneity and inhibitory homogeneity within the same circuit suggest different roles for each population in coding natural stimuli. PMID:27600839

  19. Temporal predictability enhances auditory detection

    PubMed Central

    Lawrance, Emma L. A.; Harper, Nicol S.; Cooke, James E.; Schnupp, Jan W. H.

    2015-01-01

    Periodic stimuli are common in natural environments and are ecologically relevant, for example, footsteps and vocalizations. This study reports a detectability enhancement for temporally cued, periodic sequences. Target noise bursts (embedded in background noise) arriving at the time points which followed on from an introductory, periodic “cue” sequence were more easily detected (by ~1.5 dB SNR) than identical noise bursts which randomly deviated from the cued temporal pattern. Temporal predictability and corresponding neuronal “entrainment” have been widely theorized to underlie important processes in auditory scene analysis and to confer perceptual advantage. This is the first study in the auditory domain to clearly demonstrate a perceptual enhancement of temporally predictable, near-threshold stimuli. PMID:24907846

  20. Temporal predictability enhances auditory detection.

    PubMed

    Lawrance, Emma L A; Harper, Nicol S; Cooke, James E; Schnupp, Jan W H

    2014-06-01

    Periodic stimuli are common in natural environments and are ecologically relevant, for example, footsteps and vocalizations. This study reports a detectability enhancement for temporally cued, periodic sequences. Target noise bursts (embedded in background noise) arriving at the time points which followed on from an introductory, periodic "cue" sequence were more easily detected (by ∼1.5 dB SNR) than identical noise bursts which randomly deviated from the cued temporal pattern. Temporal predictability and corresponding neuronal "entrainment" have been widely theorized to underlie important processes in auditory scene analysis and to confer perceptual advantage. This is the first study in the auditory domain to clearly demonstrate a perceptual enhancement of temporally predictable, near-threshold stimuli.

  1. Ultrasonic Imaging in Highly Attenuating Materials with Hadamard Codes and the Decomposition of the Time Reversal Operator.

    PubMed

    Lopez Villaverde, Eduardo; Robert, Sebastien; Prada, Claire

    2017-04-03

    In this work, defects in a high density polyethylene pipe are imaged with the total focusing method. The viscoelastic attenuation of this material greatly reduces the signal level and leads to a poor signal-to-noise ratio due to electronic noise. To improve the image quality, the decomposition of the time reversal operator method is combined with the spatial Hadamard coded transmissions before calculating images in the time domain. Because the Hadamard coding is not compatible with conventional imaging systems, the paper proposes two modified coding methods based on sparse Hadamard matrices with +1/0 coefficients. The signal-to-noise ratios expected with the different spatial codes are demonstrated, then validated on both simulated and experimental data. Experiments are performed with a transducer array in contact with the base material of a polyethylene pipe. In order to improve the noise filtering procedure, the singular values associated with electronic noise are expressed on the basis of the random matrix theory. This model of noise singular values allows a better identification of the defect response in noisy experimental data. Lastly, the imaging method is evaluated in a more industrial inspection configuration where an immersion array probe is used to image defects in a butt fusion weld with a complex geometry.
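    The SNR advantage of coded transmissions over one-at-a-time firing can be illustrated numerically. In the sketch below (a hypothetical 32-element array with unit-variance receiver noise, not the paper's inspection setup), firing all elements with ±1 Hadamard weights and decoding with the transpose reduces the per-element noise variance by roughly a factor of N:

```python
import numpy as np

rng = np.random.default_rng(0)

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix; n a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

N, sigma, trials = 32, 1.0, 2000
x = rng.normal(size=N)              # elementary (per-transmitter) responses
H = hadamard(N)

err_direct, err_coded = 0.0, 0.0
for _ in range(trials):
    # One-at-a-time firing: each response measured once, with noise.
    err_direct += np.mean((x + sigma * rng.normal(size=N) - x) ** 2)
    # Hadamard-coded firing: all transmitters fire with +/-1 weights.
    y = H @ x + sigma * rng.normal(size=N)
    x_hat = H.T @ y / N             # decode; H is orthogonal, H.T @ H = N * I
    err_coded += np.mean((x_hat - x) ** 2)

gain = err_direct / err_coded       # noise-variance gain, expected to approach N
print(gain)
```

The sparse +1/0 matrices the paper proposes trade part of this gain for compatibility with imaging systems that cannot invert transmit polarity.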

  2. Making time count: functional evidence for temporal coding of taste sensation.

    PubMed

    Di Lorenzo, Patricia M; Leshchinskiy, Sergey; Moroney, Dana N; Ozdoba, Jasen M

    2009-02-01

    Although the temporal characteristics of neural responses have been proposed as a mechanism for sensory neural coding, there has been little evidence thus far that this type of information is actually used by the nervous system. Here the authors show that patterned electrical pulse trains that mimic the response to the taste of quinine can produce a bitterlike sensation when delivered to the nucleus tractus solitarius of behaving rats. Following conditioned aversion training using either "quinine simulation" patterns of electrical stimulation or natural quinine (0.1 mM) as a conditioned stimulus, rats specifically generalized the aversion to 2 bitter tastants: quinine and urea. Randomization of the quinine simulation patterns resulted in generalization patterns that resembled those to a perithreshold concentration (0.01 mM) of quinine. These data provide strong evidence that the temporal pattern of brainstem activity may convey information about taste quality and underscore the functional significance of temporal coding.

  3. Wireless Network Cocast: Cooperative Communications with Space-Time Network Coding

    DTIC Science & Technology

    2011-04-21

    Excerpt from the report's list of figures: performance of the transform-based STNC for different numbers of user nodes (N = 2 and N = 3) with QPSK and 16-QAM modulation; performance comparison between the proposed STNC scheme and a scheme employing the distributed Alamouti code for N = 2 and M = 2, with QPSK and 16-QAM modulations; a multi-source wireless network.

  4. Glial Cell Contributions to Auditory Brainstem Development

    PubMed Central

    Cramer, Karina S.; Rubel, Edwin W

    2016-01-01

    Glial cells, previously thought to have generally supporting roles in the central nervous system, are emerging as essential contributors to multiple aspects of neuronal circuit function and development. This review focuses on the contributions of glial cells to the development of auditory pathways in the brainstem. These pathways display specialized synapses and an unusually high degree of precision in circuitry that enables sound source localization. The development of these pathways thus requires highly coordinated molecular and cellular mechanisms. Several classes of glial cells, including astrocytes, oligodendrocytes and microglia, have now been explored in these circuits in both avian and mammalian brainstems. Distinct populations of astrocytes are found over the course of auditory brainstem maturation. Early appearing astrocytes are associated with spatial compartments in the avian auditory brainstem. Factors from late appearing astrocytes promote synaptogenesis and dendritic maturation, and astrocytes remain integral parts of specialized auditory synapses. Oligodendrocytes play a unique role in both birds and mammals in highly regulated myelination essential for proper timing to decipher interaural cues. Microglia arise early in brainstem development and may contribute to maturation of auditory pathways. Together these studies demonstrate the importance of non-neuronal cells in the assembly of specialized auditory brainstem circuits. PMID:27818624

  5. Auditory temporal processing skills in musicians with dyslexia.

    PubMed

    Bishop-Liebler, Paula; Welch, Graham; Huss, Martina; Thomson, Jennifer M; Goswami, Usha

    2014-08-01

    The core cognitive difficulty in developmental dyslexia involves phonological processing, but adults and children with dyslexia also have sensory impairments. Impairments in basic auditory processing show particular links with phonological impairments, and recent studies with dyslexic children across languages reveal a relationship between auditory temporal processing and sensitivity to rhythmic timing and speech rhythm. As rhythm is explicit in music, musical training might have a beneficial effect on the auditory perception of acoustic cues to rhythm in dyslexia. Here we took advantage of the presence of musicians with and without dyslexia in musical conservatoires, comparing their auditory temporal processing abilities with those of dyslexic non-musicians matched for cognitive ability. Musicians with dyslexia showed equivalent auditory sensitivity to musicians without dyslexia and also showed equivalent rhythm perception. The data support the view that extensive rhythmic experience initiated during childhood (here in the form of music training) can affect basic auditory processing skills which are found to be deficient in individuals with dyslexia.

  6. Acute auditory agnosia as the presenting hearing disorder in MELAS.

    PubMed

    Miceli, Gabriele; Conti, Guido; Cianfoni, Alessandro; Di Giacopo, Raffaella; Zampetti, Patrizia; Servidei, Serenella

    2008-12-01

    MELAS is commonly associated with peripheral hearing loss. Auditory agnosia is a rare cortical auditory impairment, usually due to bilateral temporal damage. We document, for the first time, auditory agnosia as the presenting hearing disorder in MELAS. A young woman with MELAS (A3243G mtDNA mutation) suffered from acute cortical hearing damage following a single stroke-like episode, in the absence of previous hearing deficits. Audiometric testing showed marked central hearing impairment and very mild sensorineural hearing loss. MRI documented bilateral, acute lesions to superior temporal regions. Neuropsychological tests demonstrated auditory agnosia without aphasia. Our data and a review of published reports show that cortical auditory disorders are relatively frequent in MELAS, probably due to the strikingly high incidence of bilateral and symmetric damage following stroke-like episodes. Acute auditory agnosia can be the presenting hearing deficit in MELAS and, conversely, MELAS should be suspected in young adults with sudden hearing loss.

  7. Auditory processing deficits in reading disabled adults.

    PubMed

    Amitay, Sygal; Ahissar, Meray; Nelken, Israel

    2002-09-01

    The nature of the auditory processing deficit of disabled readers is still an unresolved issue. The quest for a fundamental, nonlinguistic, perceptual impairment has been dominated by the hypothesis that the difficulty lies in processing sequences of stimuli at presentation rates of tens of milliseconds. The present study examined this hypothesis using tasks that require processing of a wide range of stimulus time constants. About a third of the sampled population of disabled readers (classified as "poor auditory processors") had difficulties in most of the tasks tested: detection of frequency differences, detection of tones in narrowband noise, detection of amplitude modulation, detection of the direction of sound sources moving in virtual space, and perception of the lateralized position of tones based on their interaural phase differences. Nevertheless, across-channel integration was intact in these poor auditory processors since comodulation masking release was not reduced. Furthermore, phase locking was presumably intact since binaural masking level differences were normal. In a further examination of temporal processing, participants were asked to discriminate two tones at various intervals where the frequency difference was ten times each individual's frequency just noticeable difference (JND). Under these conditions, poor auditory processors showed no specific difficulty at brief intervals, contrary to predictions under a fast temporal processing deficit assumption. The complementary subgroup of disabled readers who were not poor auditory processors showed some difficulty in this condition when compared with their direct controls. However, they had no difficulty on auditory tasks such as amplitude modulation detection, which presumably taps processing of similar time scales. These two subgroups of disabled readers had similar reading performance but those with a generally poor auditory performance scored lower on some cognitive tests. 
Taken together, these

  8. Seamless Data-Rate Change Using Punctured Convolutional Codes for a Time-Varying Signal-to-Noise Ratio

    NASA Technical Reports Server (NTRS)

    Feria, Ying; Cheung, Kar-Ming

    1995-01-01

    In a time-varying signal-to-noise-ratio (SNR) environment, symbol rate is changed to maximize data return. However, the symbol-rate changes may cause the receiver symbol loop to lose lock, thus losing real-time data. We propose an alternate way of varying the data rate in a seamless fashion by puncturing the convolutionally encoded symbol stream and transmitting the punctured encoded symbols with a constant symbol rate. We systematically searched for good puncturing patterns for the Galileo (14,1/4) convolutional code and changed the data rates by using the punctured codes to match the Galileo SNR profile of November 9, 1997. We concluded that this scheme reduces the symbol-rate changes from 9 to 2 and provides a larger data return and a higher symbol SNR during most of the day.
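    The puncturing mechanics can be sketched with a toy code. The example below uses a hypothetical rate-1/2, constraint-length-3 encoder (generators 7 and 5 octal), not the Galileo (14,1/4) code: deleting encoded symbols according to a repeating pattern raises the effective code rate, so more data bits fit into the same fixed symbol rate:

```python
def conv_encode(bits, gens=(0b111, 0b101)):
    """Toy rate-1/2, K=3 convolutional encoder (generators 7, 5 octal)."""
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & 0b111
        out += [bin(state & g).count("1") % 2 for g in gens]
    return out

def puncture(symbols, pattern):
    """Keep symbol i iff pattern[i % len(pattern)] is 1."""
    return [s for i, s in enumerate(symbols) if pattern[i % len(pattern)]]

data = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1]   # 12 data bits
full = conv_encode(data)                      # 24 symbols -> rate 1/2
r23 = puncture(full, [1, 1, 1, 0])            # keep 3 of 4 -> rate 2/3
r34 = puncture(full, [1, 1, 0, 1, 1, 0])      # keep 4 of 6 -> rate 3/4

# At a fixed channel symbol rate, data rate scales with the code rate.
for name, s in [("1/2", full), ("2/3", r23), ("3/4", r34)]:
    print(f"rate {name}: {len(data)} data bits -> {len(s)} channel symbols")
```

The change is seamless because the receiver sees a constant symbol rate throughout; only the puncturing pattern, known to both sides, switches as the SNR profile changes.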

  9. Auditory grouping in the perception of speech and complex sounds

    NASA Astrophysics Data System (ADS)

    Darwin, Chris; Rivenez, Marie

    2004-05-01

    This talk will give an overview of experimental work on auditory grouping in speech perception including the use of grouping cues in the extraction of source-specific auditory information, and the tracking of sound sources across time. Work on the perception of unattended speech sounds will be briefly reviewed and some recent experiments described demonstrating the importance of pitch differences in allowing lexical processing of speech on the unattended ear. The relationship between auditory grouping and auditory continuity will also be discussed together with recent experiments on the role of grouping in the perceptual continuity of complex sounds.

  10. Hybrid Raman/Brillouin-optical-time-domain-analysis-distributed optical fiber sensors based on cyclic pulse coding.

    PubMed

    Taki, M; Signorini, A; Oton, C J; Nannipieri, T; Di Pasquale, F

    2013-10-15

    We experimentally demonstrate the use of cyclic pulse coding for distributed strain and temperature measurements in hybrid Raman/Brillouin optical time-domain analysis (BOTDA) optical fiber sensors. The highly integrated proposed solution effectively addresses the strain/temperature cross-sensitivity issue affecting standard BOTDA sensors, allowing for simultaneous meter-scale strain and temperature measurements over 10 km of standard single mode fiber using a single narrowband laser source only.

  11. A parallel code to calculate rate-state seismicity evolution induced by time dependent, heterogeneous Coulomb stress changes

    NASA Astrophysics Data System (ADS)

    Cattania, C.; Khalid, F.

    2016-09-01

    The estimation of space and time-dependent earthquake probabilities, including aftershock sequences, has received increased attention in recent years, and Operational Earthquake Forecasting systems are currently being implemented in various countries. Physics based earthquake forecasting models compute time dependent earthquake rates based on Coulomb stress changes, coupled with seismicity evolution laws derived from rate-state friction. While early implementations of such models typically performed poorly compared to statistical models, recent studies indicate that significant performance improvements can be achieved by considering the spatial heterogeneity of the stress field and secondary sources of stress. However, the major drawback of these methods is a rapid increase in computational costs. Here we present a code to calculate seismicity induced by time dependent stress changes. An important feature of the code is the possibility to include aleatoric uncertainties due to the existence of multiple receiver faults and to the finite grid size, as well as epistemic uncertainties due to the choice of input slip model. To compensate for the growth in computational requirements, we have parallelized the code for shared memory systems (using OpenMP) and distributed memory systems (using MPI). Performance tests indicate that these parallelization strategies lead to a significant speedup for problems with different degrees of complexity, ranging from those which can be solved on standard multicore desktop computers, to those requiring a small cluster, to a large simulation that can be run using up to 1500 cores.
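    The per-cell computation that such a code parallelizes can be sketched with Dieterich's (1994) closed-form rate-state response to a sudden Coulomb stress step. All parameter values below are illustrative, not taken from the study, and the thread pool stands in for the OpenMP shared-memory strategy:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Dieterich (1994) seismicity rate after a Coulomb stress step dtau:
#   R(t) = r / (1 + (exp(-dtau / (A*sigma)) - 1) * exp(-t / t_a)),
#   with aftershock decay time t_a = A*sigma / tau_dot.
A_SIGMA = 0.04               # A*sigma [MPa], illustrative
TAU_DOT = 1e-4               # background stressing rate [MPa/day], illustrative
T_A = A_SIGMA / TAU_DOT      # aftershock decay time [days]
R0 = 1.0                     # background rate per grid cell

def cell_rate(dtau, t):
    """Seismicity rate in one grid cell at times t after a stress step dtau."""
    return R0 / (1.0 + (np.exp(-dtau / A_SIGMA) - 1.0) * np.exp(-t / T_A))

rng = np.random.default_rng(1)
dtau_grid = rng.normal(0.1, 0.05, size=4000)   # heterogeneous stress field [MPa]
t = np.linspace(0.0, 5 * T_A, 200)             # evaluation times [days]

# Shared-memory parallel map over independent grid cells (OpenMP analog).
with ThreadPoolExecutor(max_workers=4) as pool:
    rates = list(pool.map(lambda d: cell_rate(d, t), dtau_grid))

# Regional rate: elevated just after the step, decaying toward background.
total = np.sum(rates, axis=0)
```

Because each cell is independent, the loop parallelizes trivially; the cost growth the abstract mentions comes from multiplying this kernel over receiver-fault ensembles and slip-model realizations.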

  12. Auditory Spatial Layout

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L.; Jenison, Rick

    1995-01-01

    All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.

  13. The development of auditory perception in children after auditory brainstem implantation.

    PubMed

    Colletti, Liliana; Shannon, Robert V; Colletti, Vittorio

    2014-01-01

    Auditory brainstem implants (ABIs) can provide useful auditory perception and language development in deaf children who are not able to use a cochlear implant (CI). We prospectively followed up a consecutive group of 64 deaf children up to 12 years following ABI surgery. The etiology of deafness in these children was: cochlear nerve aplasia in 49, auditory neuropathy in 1, cochlear malformations in 8, bilateral cochlear postmeningitic ossification in 3, neurofibromatosis type 2 in 2, and bilateral cochlear fractures due to a head injury in 1. Thirty-five children had other congenital nonauditory disabilities. Twenty-two children had previous CIs with no benefit. Fifty-eight children were fitted with the Cochlear 24 ABI device and 6 with the MedEl ABI device, and all children followed the same rehabilitation program. Auditory perceptual abilities were evaluated on the Categories of Auditory Performance (CAP) scale. No child was lost to follow-up, and there were no exclusions from the study. All children showed significant improvement in auditory perception with implant experience. Seven children (11%) were able to achieve the highest score on the CAP test; they were able to converse on the telephone within 3 years of implantation. Twenty children (31.3%) achieved open set speech recognition (CAP score of 5 or greater) and 30 (46.9%) achieved a CAP level of 4 or greater. Of the 29 children without nonauditory disabilities, 18 (62%) achieved a CAP score of 5 or greater with the ABI. All children showed continued improvements in auditory skills over time. The long-term results of ABI surgery reveal significant auditory benefit in most children, and open set auditory recognition in many.

  14. Physiological Measures of Auditory Function

    NASA Astrophysics Data System (ADS)

    Kollmeier, Birger; Riedel, Helmut; Mauermann, Manfred; Uppenkamp, Stefan

    When acoustic signals enter the ears, they pass several processing stages of various complexities before they will be perceived. The auditory pathway can be separated into structures dealing with sound transmission in air (i.e. the outer ear, ear canal, and the vibration of tympanic membrane), structures dealing with the transformation of sound pressure waves into mechanical vibrations of the inner ear fluids (i.e. the tympanic membrane, ossicular chain, and the oval window), structures carrying mechanical vibrations in the fluid-filled inner ear (i.e. the cochlea with basilar membrane, tectorial membrane, and hair cells), structures that transform mechanical oscillations into a neural code, and finally several stages of neural processing in the brain along the pathway from the brainstem to the cortex.

  15. Typical BWR/4 MSIV closure ATWS analysis using RAMONA-3B code with space-time neutron kinetics

    SciTech Connect

    Neymotin, L.; Saha, P.

    1984-01-01

    A best-estimate analysis of a typical BWR/4 MSIV closure ATWS has been performed using the RAMONA-3B code with three-dimensional neutron kinetics. All safety features, namely, the safety and relief valves, recirculation pump trip, high pressure safety injections and the standby liquid control system (boron injection), were assumed to work as designed. No other operator action was assumed. The results show a strong spatial dependence of reactor power during the transient. After the initial peak of pressure and reactor power, the reactor vessel pressure oscillated between the relief valve set points, and the reactor power oscillated between 20 and 50% of the steady state power until the hot shutdown condition was reached at approximately 1400 seconds. The suppression pool bulk water temperature at this time was predicted to be approximately 96 °C (205 °F). In view of code performance and reasonable computer running time, the RAMONA-3B code is recommended for further best-estimate analyses of ATWS-type events in BWRs.

  16. A Circuit for Motor Cortical Modulation of Auditory Cortical Activity

    PubMed Central

    Nelson, Anders; Schneider, David M.; Takatoh, Jun; Sakurai, Katsuyasu; Wang, Fan; Mooney, Richard

    2013-01-01

    Normal hearing depends on the ability to distinguish self-generated sounds from other sounds, and this ability is thought to involve neural circuits that convey copies of motor command signals to various levels of the auditory system. Although such interactions at the cortical level are believed to facilitate auditory comprehension during movements and drive auditory hallucinations in pathological states, the synaptic organization and function of circuitry linking the motor and auditory cortices remain unclear. Here we describe experiments in the mouse that characterize circuitry well suited to transmit motor-related signals to the auditory cortex. Using retrograde viral tracing, we established that neurons in superficial and deep layers of the medial agranular motor cortex (M2) project directly to the auditory cortex and that the axons of some of these deep-layer cells also target brainstem motor regions. Using in vitro whole-cell physiology, optogenetics, and pharmacology, we determined that M2 axons make excitatory synapses in the auditory cortex but exert a primarily suppressive effect on auditory cortical neuron activity mediated in part by feedforward inhibition involving parvalbumin-positive interneurons. Using in vivo intracellular physiology, optogenetics, and sound playback, we also found that directly activating M2 axon terminals in the auditory cortex suppresses spontaneous and stimulus-evoked synaptic activity in auditory cortical neurons and that this effect depends on the relative timing of motor cortical activity and auditory stimulation. These experiments delineate the structural and functional properties of a corticocortical circuit that could enable movement-related suppression of auditory cortical activity. PMID:24005287

  17. A circuit for motor cortical modulation of auditory cortical activity.

    PubMed

    Nelson, Anders; Schneider, David M; Takatoh, Jun; Sakurai, Katsuyasu; Wang, Fan; Mooney, Richard

    2013-09-04

    Normal hearing depends on the ability to distinguish self-generated sounds from other sounds, and this ability is thought to involve neural circuits that convey copies of motor command signals to various levels of the auditory system. Although such interactions at the cortical level are believed to facilitate auditory comprehension during movements and drive auditory hallucinations in pathological states, the synaptic organization and function of circuitry linking the motor and auditory cortices remain unclear. Here we describe experiments in the mouse that characterize circuitry well suited to transmit motor-related signals to the auditory cortex. Using retrograde viral tracing, we established that neurons in superficial and deep layers of the medial agranular motor cortex (M2) project directly to the auditory cortex and that the axons of some of these deep-layer cells also target brainstem motor regions. Using in vitro whole-cell physiology, optogenetics, and pharmacology, we determined that M2 axons make excitatory synapses in the auditory cortex but exert a primarily suppressive effect on auditory cortical neuron activity mediated in part by feedforward inhibition involving parvalbumin-positive interneurons. Using in vivo intracellular physiology, optogenetics, and sound playback, we also found that directly activating M2 axon terminals in the auditory cortex suppresses spontaneous and stimulus-evoked synaptic activity in auditory cortical neurons and that this effect depends on the relative timing of motor cortical activity and auditory stimulation. These experiments delineate the structural and functional properties of a corticocortical circuit that could enable movement-related suppression of auditory cortical activity.

  18. Auditory hallucinations induced by trazodone.

    PubMed

    Shiotsuki, Ippei; Terao, Takeshi; Ishii, Nobuyoshi; Hatano, Koji

    2014-04-03

    A 26-year-old female outpatient presenting with a depressive state suffered from auditory hallucinations at night. Her auditory hallucinations did not respond to blonanserin or paliperidone, but partially responded to risperidone. In view of the possibility that her auditory hallucinations began after starting trazodone, trazodone was discontinued, leading to a complete resolution of her auditory hallucinations. Furthermore, even after risperidone was decreased and discontinued, her auditory hallucinations did not recur. These findings suggest that trazodone may induce auditory hallucinations in some susceptible patients.

  19. Auditory hallucinations induced by trazodone

    PubMed Central

    Shiotsuki, Ippei; Terao, Takeshi; Ishii, Nobuyoshi; Hatano, Koji

    2014-01-01

    A 26-year-old female outpatient presenting with a depressive state suffered from auditory hallucinations at night. Her auditory hallucinations did not respond to blonanserin or paliperidone, but partially responded to risperidone. In view of the possibility that her auditory hallucinations began after starting trazodone, trazodone was discontinued, leading to a complete resolution of her auditory hallucinations. Furthermore, even after risperidone was decreased and discontinued, her auditory hallucinations did not recur. These findings suggest that trazodone may induce auditory hallucinations in some susceptible patients. PMID:24700048

  20. Auditory models for speech analysis

    NASA Astrophysics Data System (ADS)

    Maybury, Mark T.

    This paper reviews the psychophysical basis for auditory models and discusses their application to automatic speech recognition. First an overview of the human auditory system is presented, followed by a review of current knowledge gleaned from neurological and psychoacoustic experimentation. Next, a general framework describes established peripheral auditory models which are based on well-understood properties of the peripheral auditory system. This is followed by a discussion of current enhancements to those models to include nonlinearities and synchrony information as well as other higher auditory functions. Finally, the initial performance of auditory models in the task of speech recognition is examined and additional applications are mentioned.

  1. Seamless Data-Rate Change Using Punctured Convolutional Codes for Time-Varying Signal-to-Noise Ratio

    NASA Technical Reports Server (NTRS)

    Feria, Ying

    1995-01-01

    In a time-varying signal-to-noise ratio (SNR) environment, symbol rate is often changed to maximize data return. However, the symbol-rate change has some undesirable effects such as changing the transmission bandwidth and perhaps causing the receiver symbol loop to lose lock temporarily, thus losing some data. In this article, we are proposing an alternate way of varying the data rate without changing the symbol rate and therefore the transmission bandwidth. The data rate change is achieved in a seamless fashion by puncturing the convolutionally encoded symbol stream to adapt to the changing SNR environment. We have also derived an exact expression to enumerate the number of unique puncturing patterns. To demonstrate this seamless rate-change capability, we searched for good puncturing patterns for the Galileo (14, 1/4) convolutional code and changed the data rates by using the punctured codes to match the Galileo SNR profile of November 9, 1997.

  2. Comodulation Enhances Signal Detection via Priming of Auditory Cortical Circuits

    PubMed Central

    Sollini, Joseph

    2016-01-01

    Acoustic environments are composed of complex overlapping sounds that the auditory system is required to segregate into discrete perceptual objects. The functions of distinct auditory processing stations in this challenging task are poorly understood. Here we show a direct role for mouse auditory cortex in detection and segregation of acoustic information. We measured the sensitivity of auditory cortical neurons to brief tones embedded in masking noise. By altering spectrotemporal characteristics of the masker, we reveal that sensitivity to pure tone stimuli is strongly enhanced in coherently modulated broadband noise, corresponding to the psychoacoustic phenomenon comodulation masking release. Improvements in detection were largest following priming periods of noise alone, indicating that cortical segregation is enhanced over time. Transient opsin-mediated silencing of auditory cortex during the priming period almost completely abolished these improvements, suggesting that cortical processing may play a direct and significant role in detection of quiet sounds in noisy environments. SIGNIFICANCE STATEMENT Auditory systems are adept at detecting and segregating competing sound sources, but there is little direct evidence of how this process occurs in the mammalian auditory pathway. We demonstrate that coherent broadband noise enhances signal representation in auditory cortex, and that prolonged exposure to noise is necessary to produce this enhancement. Using optogenetic perturbation to selectively silence auditory cortex during early noise processing, we show that cortical processing plays a crucial role in the segregation of competing sounds. PMID:27927950

  3. Multimodal Bivariate Thematic Maps: Auditory and Haptic Display.

    ERIC Educational Resources Information Center

    Jeong, Wooseob; Gluck, Myke

    2002-01-01

    Explores the possibility of multimodal bivariate thematic maps by utilizing auditory and haptic (sense of touch) displays. Measured completion time of tasks and the recall (retention) rate in two experiments, and findings confirmed the possibility of using auditory and haptic displays in geographic information systems (GIS). (Author/LRW)

  4. Multimodal Geographic Information Systems: Adding Haptic and Auditory Display.

    ERIC Educational Resources Information Center

    Jeong, Wooseob; Gluck, Myke

    2003-01-01

    Investigated the feasibility of adding haptic and auditory displays to traditional visual geographic information systems (GISs). Explored differences in user performance, including task completion time and accuracy, and user satisfaction with a multimodal GIS which was implemented with a haptic display, auditory display, and combined display.…

  5. Basic Auditory Processing and Developmental Dyslexia in Chinese

    ERIC Educational Resources Information Center

    Wang, Hsiao-Lan Sharon; Huss, Martina; Hamalainen, Jarmo A.; Goswami, Usha

    2012-01-01

    The present study explores the relationship between basic auditory processing of sound rise time, frequency, duration and intensity, phonological skills (onset-rime and tone awareness, sound blending, RAN, and phonological memory) and reading disability in Chinese. A series of psychometric, literacy, phonological, auditory, and character…

  6. Temporal expectation weights visual signals over auditory signals.

    PubMed

    Menceloglu, Melisa; Grabowecky, Marcia; Suzuki, Satoru

    2017-04-01

    Temporal expectation is a process by which people use temporally structured sensory information to explicitly or implicitly predict the onset and/or the duration of future events. Because timing plays a critical role in crossmodal interactions, we investigated how temporal expectation influenced auditory-visual interaction, using an auditory-visual crossmodal congruity effect as a measure of crossmodal interaction. For auditory identification, an incongruent visual stimulus produced stronger interference when the crossmodal stimulus was presented with an expected rather than an unexpected timing. In contrast, for visual identification, an incongruent auditory stimulus produced weaker interference when the crossmodal stimulus was presented with an expected rather than an unexpected timing. The fact that temporal expectation made visual distractors more potent and visual targets less susceptible to auditory interference suggests that temporal expectation increases the perceptual weight of visual signals.

  7. The LAC Test: A New Look at Auditory Conceptualization and Literacy Development K-12.

    ERIC Educational Resources Information Center

    Lindamood, Charles; And Others

    The Lindamood Auditory Conceptualization (LAC) Test was constructed with the recognition that the process of decoding involves an integration of the auditory, visual, and motor senses. Requiring the manipulation of colored blocks to indicate conceptualization of test patterns spoken by the examiner, subtest 1 entails coding of identity, number,…

  8. Synchronous auditory nerve activity in the carboplatin-chinchilla model of auditory neuropathy

    PubMed Central

    Cowper-Smith, C. D.; Dingle, R. N.; Guo, Y.; Burkard, R.; Phillips, D. P.

    2010-01-01

Two hallmark features of auditory neuropathy (AN) are normal outer hair cell function and an absent/abnormal auditory brainstem response (ABR). Studies of human AN patients are unable to determine whether disruption of the ABR is the result of a reduction of neural input, a loss of auditory nerve fiber (ANF) synchrony, or both. Neurophysiological data from the carboplatin model of AN reveal intact neural synchrony in the auditory nerve and inferior colliculus, despite significant reductions in neural input. These data suggest that (1) intact neural synchrony is available to support an ABR following carboplatin treatment and (2) impaired spike timing intrinsic to neurons is required for the disruption of the ABR observed in human AN. PMID:20649190

  9. Auditory Brainstem Response Latency in Noise as a Marker of Cochlear Synaptopathy

    PubMed Central

    Hickox, Ann E.; Bharadwaj, Hari M.; Goldberg, Hannah; Verhulst, Sarah; Liberman, M. Charles; Shinn-Cunningham, Barbara G.

    2016-01-01

    Evidence from animal and human studies suggests that moderate acoustic exposure, causing only transient threshold elevation, can nonetheless cause “hidden hearing loss” that interferes with coding of suprathreshold sound. Such noise exposure destroys synaptic connections between cochlear hair cells and auditory nerve fibers; however, there is no clinical test of this synaptopathy in humans. In animals, synaptopathy reduces the amplitude of auditory brainstem response (ABR) wave-I. Unfortunately, ABR wave-I is difficult to measure in humans, limiting its clinical use. Here, using analogous measurements in humans and mice, we show that the effect of masking noise on the latency of the more robust ABR wave-V mirrors changes in ABR wave-I amplitude. Furthermore, in our human cohort, the effect of noise on wave-V latency predicts perceptual temporal sensitivity. Our results suggest that measures of the effects of noise on ABR wave-V latency can be used to diagnose cochlear synaptopathy in humans. SIGNIFICANCE STATEMENT Although there are suspicions that cochlear synaptopathy affects humans with normal hearing thresholds, no one has yet reported a clinical measure that is a reliable marker of such loss. By combining human and animal data, we demonstrate that the latency of auditory brainstem response wave-V in noise reflects auditory nerve loss. This is the first study of human listeners with normal hearing thresholds that links individual differences observed in behavior and auditory brainstem response timing to cochlear synaptopathy. These results can guide development of a clinical test to reveal this previously unknown form of noise-induced hearing loss in humans. PMID:27030760

  10. Mechanisms underlying the temporal precision of sound coding at the inner hair cell ribbon synapse

    PubMed Central

    Moser, Tobias; Neef, Andreas; Khimich, Darina

    2006-01-01

    Our auditory system is capable of perceiving the azimuthal location of a low frequency sound source with a precision of a few degrees. This requires the auditory system to detect time differences in sound arrival between the two ears down to tens of microseconds. The detection of these interaural time differences relies on network computation by auditory brainstem neurons sharpening the temporal precision of the afferent signals. Nevertheless, the system requires the hair cell synapse to encode sound with the highest possible temporal acuity. In mammals, each auditory nerve fibre receives input from only one inner hair cell (IHC) synapse. Hence, this single synapse determines the temporal precision of the fibre. As if this was not enough of a challenge, the auditory system is also capable of maintaining such high temporal fidelity with acoustic signals that vary greatly in their intensity. Recent research has started to uncover the cellular basis of sound coding. Functional and structural descriptions of synaptic vesicle pools and estimates for the number of Ca2+ channels at the ribbon synapse have been obtained, as have insights into how the receptor potential couples to the release of synaptic vesicles. Here, we review current concepts about the mechanisms that control the timing of transmitter release in inner hair cells of the cochlea. PMID:16901948

  11. Effects of Multimodal Presentation and Stimulus Familiarity on Auditory and Visual Processing

    ERIC Educational Resources Information Center

    Robinson, Christopher W.; Sloutsky, Vladimir M.

    2010-01-01

    Two experiments examined the effects of multimodal presentation and stimulus familiarity on auditory and visual processing. In Experiment 1, 10-month-olds were habituated to either an auditory stimulus, a visual stimulus, or an auditory-visual multimodal stimulus. Processing time was assessed during the habituation phase, and discrimination of…

  12. Rapid measurement of auditory filter shape in mice using the auditory brainstem response and notched noise.

    PubMed

    Lina, Ioan A; Lauer, Amanda M

    2013-04-01

The notched noise method is an effective procedure for measuring frequency resolution and auditory filter shapes in both human and animal models of hearing. Briefly, auditory filter shape and bandwidth estimates are derived from masked thresholds for tones presented in noise containing widening spectral notches. As the spectral notch widens, increasingly less of the noise falls within the auditory filter and the tone becomes more detectable until the notch width exceeds the filter bandwidth. Behavioral procedures have been used for the derivation of notched noise auditory filter shapes in mice; however, the time and effort needed to train and test animals on these tasks constrains the widespread application of this testing method. As an alternative procedure, we combined relatively non-invasive auditory brainstem response (ABR) measurements and the notched noise method to estimate auditory filters in normal-hearing mice at center frequencies of 8, 11.2, and 16 kHz. A complete set of simultaneous masked thresholds for a particular tone frequency was obtained in about an hour. ABR-derived filter bandwidths broadened with increasing frequency, consistent with previous studies. The ABR notched noise procedure provides a fast alternative for estimating frequency selectivity in mice that is well-suited to high-throughput or time-sensitive screening.
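The filter-shape derivation described above is conventionally modeled with a rounded-exponential (roex) filter, whose weighting function W(g) = (1 + pg)·exp(−pg) has a closed-form integral over the noise passbands. A minimal sketch of how masked threshold falls as the notch widens, assuming a symmetric roex(p) filter and a flat-spectrum masker; the slope value p below is illustrative, not a parameter fitted in this study:

```python
import math

def roex_weight(g, p):
    """Symmetric roex(p) filter weight at normalized frequency offset g = |f - fc| / fc."""
    return (1.0 + p * g) * math.exp(-p * g)

def predicted_threshold_db(notch_halfwidth, p, k_db=0.0):
    """Predicted masked threshold (dB re. masker spectrum level) for a tone at the
    filter center, with a spectral notch of half-width `notch_halfwidth` (in units
    of g) on each side of the tone.  Uses the closed-form integral of the filter:
    integral_D^inf (1 + p*g) * exp(-p*g) dg = (2 + p*D) * exp(-p*D) / p."""
    noise_through_filter = 2.0 * (2.0 + p * notch_halfwidth) * math.exp(-p * notch_halfwidth) / p
    return k_db + 10.0 * math.log10(noise_through_filter)

# thresholds fall as the notch widens, because less masker energy passes the filter
p = 25.0  # illustrative slope; larger p means sharper tuning (ERB = 4 * fc / p)
thresholds = [predicted_threshold_db(d, p) for d in (0.0, 0.1, 0.2, 0.3, 0.4)]
```

Fitting p (and an efficiency constant k) to measured thresholds at several notch widths is what yields the filter bandwidth estimates reported in the record.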

  13. Real Time PCR to detect hazelnut allergen coding sequences in processed foods.

    PubMed

    Iniesto, Elisa; Jiménez, Ana; Prieto, Nuria; Cabanillas, Beatriz; Burbano, Carmen; Pedrosa, Mercedes M; Rodríguez, Julia; Muzquiz, Mercedes; Crespo, Jesús F; Cuadrado, Carmen; Linacero, Rosario

    2013-06-01

A quantitative RT-PCR method, employing novel primer sets designed on Cor a 9, Cor a 11 and Cor a 13 allergen-coding sequences, has been set up and validated. Its specificity, sensitivity and applicability have been compared. The effect of processing on the detectability of these hazelnut targets in complex food matrices was also studied. The DNA extraction method based on CTAB-phenol-chloroform was the best for hazelnut. RT-PCR using primers for Cor a 9, 11 and 13 allowed a specific and accurate amplification of these sequences. The limit of detection was 1 ppm of raw hazelnut. The method's sensitivity and robustness were confirmed with spiked samples. Thermal treatments (roasting and autoclaving) reduced the yield and amplifiability of hazelnut DNA; however, high hydrostatic pressure did not affect them. Compared with an ELISA assay, this RT-PCR showed higher sensitivity in detecting hazelnut traces in commercial foodstuffs. The RT-PCR method described is the most sensitive of those reported for the detection of hazelnut traces in processed foods.

  14. Coding of azimuthal directions via time-compensated combination of celestial compass cues.

    PubMed

    Pfeiffer, Keram; Homberg, Uwe

    2007-06-05

    Many animals use the sun as a reference for spatial orientation [1-3]. In addition to sun position, the sky provides two other sources of directional information, a color gradient [4] and a polarization pattern [5]. Work on insects has predominantly focused on celestial polarization as an orientation cue [6, 7]. Relying on sky polarization alone, however, poses the following two problems: E vector orientations in the sky are not suited to distinguish between the solar and antisolar hemisphere of the sky, and the polarization pattern changes with changing solar elevation during the day [8, 9]. Here, we present neurons that overcome both problems in a locust's brain. The spiking activity of these neurons depends (1) on the E vector orientation of dorsally presented polarized light, (2) on the azimuthal, i.e., horizontal, direction, and (3) on the wavelength of an unpolarized light source. Their tuning to these stimuli matches the distribution of a UV/green chromatic contrast as well as the polarization of natural skylight and compensates for changes in solar elevation during the day. The neurons are, therefore, suited to code for solar azimuth by concurrent combination of signals from the spectral gradient, intensity gradient, and polarization pattern of the sky.

  15. Neural Representation of Concurrent Harmonic Sounds in Monkey Primary Auditory Cortex: Implications for Models of Auditory Scene Analysis

    PubMed Central

    Steinschneider, Mitchell; Micheyl, Christophe

    2014-01-01

    The ability to attend to a particular sound in a noisy environment is an essential aspect of hearing. To accomplish this feat, the auditory system must segregate sounds that overlap in frequency and time. Many natural sounds, such as human voices, consist of harmonics of a common fundamental frequency (F0). Such harmonic complex tones (HCTs) evoke a pitch corresponding to their F0. A difference in pitch between simultaneous HCTs provides a powerful cue for their segregation. The neural mechanisms underlying concurrent sound segregation based on pitch differences are poorly understood. Here, we examined neural responses in monkey primary auditory cortex (A1) to two concurrent HCTs that differed in F0 such that they are heard as two separate “auditory objects” with distinct pitches. We found that A1 can resolve, via a rate-place code, the lower harmonics of both HCTs, a prerequisite for deriving their pitches and for their perceptual segregation. Onset asynchrony between the HCTs enhanced the neural representation of their harmonics, paralleling their improved perceptual segregation in humans. Pitches of the concurrent HCTs could also be temporally represented by neuronal phase-locking at their respective F0s. Furthermore, a model of A1 responses using harmonic templates could qualitatively reproduce psychophysical data on concurrent sound segregation in humans. Finally, we identified a possible intracortical homolog of the “object-related negativity” recorded noninvasively in humans, which correlates with the perceptual segregation of concurrent sounds. Findings indicate that A1 contains sufficient spectral and temporal information for segregating concurrent sounds based on differences in pitch. PMID:25209282
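The harmonic-template idea in this abstract can be illustrated with a simple "harmonic sieve" applied to a rate-place profile: score each candidate F0 by how many of its harmonics line up with resolved spectral peaks, take the best, remove the peaks it explains, and repeat for the second sound. A toy sketch, using idealized spectral peaks rather than real A1 responses; the tolerance and scoring rule are invented for illustration:

```python
def sieve_score(f0, peaks, fmax=2400.0, tol=0.02):
    """Count of f0's harmonics (up to the 10th, below fmax) matching a spectral
    peak within a relative tolerance, minus a small penalty for unmatched ones."""
    hits = misses = 0
    k = 1
    while k * f0 <= fmax and k <= 10:
        h = k * f0
        if any(abs(p - h) <= tol * h for p in peaks):
            hits += 1
        else:
            misses += 1
        k += 1
    return hits - 0.5 * misses

def estimate_two_f0s(peaks, lo=150, hi=300, tol=0.02):
    """Estimate two concurrent F0s: take the best-scoring candidate, strip the
    peaks it explains, then rerun the sieve on the remaining peaks."""
    estimates = []
    remaining = list(peaks)
    for _ in range(2):
        best = max(range(lo, hi + 1), key=lambda c: sieve_score(c, remaining))
        estimates.append(best)
        remaining = [p for p in remaining
                     if not any(abs(p - k * best) <= tol * k * best for k in range(1, 11))]
    return sorted(estimates)

# two concurrent harmonic complexes, F0 = 200 Hz and 240 Hz (lower harmonics resolved)
peaks = sorted({200.0 * k for k in range(1, 11)} | {240.0 * k for k in range(1, 11)})
f0_a, f0_b = estimate_two_f0s(peaks)  # close to 200 and 240 Hz
```

With a coarse integer candidate grid and a 2% tolerance the estimates land near, not exactly on, the true F0s; a real template model would refine this, but the sieve captures the segregation-by-pitch logic the abstract describes.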

  16. Neural representation of concurrent harmonic sounds in monkey primary auditory cortex: implications for models of auditory scene analysis.

    PubMed

    Fishman, Yonatan I; Steinschneider, Mitchell; Micheyl, Christophe

    2014-09-10

    The ability to attend to a particular sound in a noisy environment is an essential aspect of hearing. To accomplish this feat, the auditory system must segregate sounds that overlap in frequency and time. Many natural sounds, such as human voices, consist of harmonics of a common fundamental frequency (F0). Such harmonic complex tones (HCTs) evoke a pitch corresponding to their F0. A difference in pitch between simultaneous HCTs provides a powerful cue for their segregation. The neural mechanisms underlying concurrent sound segregation based on pitch differences are poorly understood. Here, we examined neural responses in monkey primary auditory cortex (A1) to two concurrent HCTs that differed in F0 such that they are heard as two separate "auditory objects" with distinct pitches. We found that A1 can resolve, via a rate-place code, the lower harmonics of both HCTs, a prerequisite for deriving their pitches and for their perceptual segregation. Onset asynchrony between the HCTs enhanced the neural representation of their harmonics, paralleling their improved perceptual segregation in humans. Pitches of the concurrent HCTs could also be temporally represented by neuronal phase-locking at their respective F0s. Furthermore, a model of A1 responses using harmonic templates could qualitatively reproduce psychophysical data on concurrent sound segregation in humans. Finally, we identified a possible intracortical homolog of the "object-related negativity" recorded noninvasively in humans, which correlates with the perceptual segregation of concurrent sounds. Findings indicate that A1 contains sufficient spectral and temporal information for segregating concurrent sounds based on differences in pitch.

  17. Optogenetic Stimulation of the Auditory Nerve

    PubMed Central

    Hernandez, Victor H.; Gehrt, Anna; Jing, Zhizi; Hoch, Gerhard; Jeschke, Marcus; Strenzke, Nicola; Moser, Tobias

    2014-01-01

Direct electrical stimulation of spiral ganglion neurons (SGNs) by cochlear implants (CIs) enables open speech comprehension in the majority of implanted deaf subjects [1-6]. Nonetheless, sound coding with current CIs has poor frequency and intensity resolution due to broad current spread from each electrode contact activating a large number of SGNs along the tonotopic axis of the cochlea [7-9]. Optical stimulation is proposed as an alternative to electrical stimulation that promises spatially more confined activation of SGNs and, hence, higher frequency resolution of coding. In recent years, direct infrared illumination of the cochlea has been used to evoke responses in the auditory nerve [10]. Nevertheless it requires higher energies than electrical stimulation [10,11] and uncertainty remains as to the underlying mechanism [12]. Here we describe a method based on optogenetics to stimulate SGNs with low intensity blue light, using transgenic mice with neuronal expression of channelrhodopsin 2 (ChR2) [13] or virus-mediated expression of the ChR2-variant CatCh [14]. We used micro-light emitting diodes (µLEDs) and fiber-coupled lasers to stimulate ChR2-expressing SGNs through a small artificial opening (cochleostomy) or the round window. We assayed the responses by scalp recordings of light-evoked potentials (optogenetic auditory brainstem response: oABR) or by microelectrode recordings from the auditory pathway and compared them with acoustic and electrical stimulation. PMID:25350571

  18. TEDIT, a computer code for studying the time evolution of drift instabilities in a torus

    SciTech Connect

    van Rij, W.I.; Beasley, C.O. Jr.

    1983-07-01

TEDIT is an initial-value program that calculates the time evolution of drift instabilities in a toroidal plasma with shear and in a slab-geometry approximation. The electron and ion kinetic equations are advanced in time, with the electrostatic potential φ and the parallel vector potential A∥ calculated from quasi-neutrality and Ampère's law, respectively.

  19. Estimating the Receiver Delay for Ionosphere-Free Code (P3) GPS Time Transfer

    DTIC Science & Technology

    2007-01-01

1994, “Technical Directives for Standardization of GPS Time Receiver Software,” Metrologia, 31, 69-79. [3] P. Defraigne and G. Petit, 2003, “Time Transfer to TAI Using Geodetic Receivers,” Metrologia, 40, 184-188. [4] J. White, R. Beard, G. P. Landis, G. Petit, and E. Powers, 2002, “Dual
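For context, the "ionosphere-free code (P3)" of the record title is the standard dual-frequency combination of the L1 and L2 pseudoranges, which cancels the first-order ionospheric delay because that delay scales as 1/f². A sketch using the published GPS carrier frequencies; the simulated range and delay values are illustrative only:

```python
# GPS carrier frequencies (Hz)
F1 = 1575.42e6  # L1
F2 = 1227.60e6  # L2
GAMMA = (F1 / F2) ** 2  # ~1.6469

def p3(p1, p2):
    """Ionosphere-free combination of L1/L2 code pseudoranges (meters):
    P3 = (f1^2 * P1 - f2^2 * P2) / (f1^2 - f2^2)."""
    return (GAMMA * p1 - p2) / (GAMMA - 1.0)

# first-order ionospheric delay scales as 1/f^2, so the L2 delay is GAMMA times the L1 delay
rho, i1 = 20_000_000.0, 5.0          # geometric range and L1 ionospheric delay (m)
p1, p2 = rho + i1, rho + GAMMA * i1  # simulated pseudoranges
corrected = p3(p1, p2)               # the ionospheric term cancels, leaving rho
```

Note the trade-off that motivates calibrating the receiver delay carefully: the combination removes the ionosphere but amplifies code measurement noise by roughly a factor of three.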

  20. Binaural auditory processing in multiple sclerosis subjects.

    PubMed

    Levine, R A; Gardner, J C; Stufflebeam, S M; Fullerton, B C; Carlisle, E W; Furst, M; Rosen, B R; Kiang, N Y

    1993-06-01

In order to relate human auditory processing to physiological and anatomical experimental animal data, we have examined the interrelationships between behavioral, electrophysiological and anatomical data obtained from human subjects with focal brainstem lesions. Thirty-eight subjects with multiple sclerosis were studied with tests of interaural time and level discrimination (just noticeable differences or jnds), brainstem auditory evoked potentials and magnetic resonance (MR) imaging. Interaural testing used two types of stimuli, high-pass (> 4000 Hz) and low-pass (< 1000 Hz) noise bursts. Abnormal time jnds (Tjnd) were far more common than abnormal level jnds (70% vs 11%), especially for the high-pass (Hp) noise (70% abnormal vs 40% abnormal for low-pass (Lp) noise). The HpTjnd could be abnormal with no other abnormalities; however, whenever the BAEPs, LpTjnd and/or level jnds were abnormal HpTjnd was always abnormal. Abnormal wave III amplitude was associated with abnormalities in both time jnds, but abnormal wave III latency with only abnormal HpTjnds. Abnormal wave V amplitude, when unilateral, was associated with a major HpTjnd abnormality, and, when bilateral, with both HpTjnd and LpTjnd major abnormalities. Sixteen of the subjects had their MR scans obtained with a uniform protocol and could be analyzed with objective criteria. In all four subjects with lesions involving the pontine auditory pathway, the BAEPs and both time jnds were abnormal. Of the twelve subjects with no lesions involving the pontine auditory pathway, all had normal BAEPs and level jnds, ten had normal LpTjnds, but only five had normal HpTjnds. We conclude that interaural time discrimination is closely related to the BAEPs and is dependent upon the stimulus spectrum. Redundant encoding of low-frequency sounds in the discharge patterns of auditory neurons, may explain why the HpTjnd is a better indicator of neural desynchrony than the LpTjnd. Encroachment of MS lesions upon the pontine

  1. The Drosophila Auditory System

    PubMed Central

    Boekhoff-Falk, Grace; Eberl, Daniel F.

    2013-01-01

    Development of a functional auditory system in Drosophila requires specification and differentiation of the chordotonal sensilla of Johnston’s organ (JO) in the antenna, correct axonal targeting to the antennal mechanosensory and motor center (AMMC) in the brain, and synaptic connections to neurons in the downstream circuit. Chordotonal development in JO is functionally complicated by structural, molecular and functional diversity that is not yet fully understood, and construction of the auditory neural circuitry is only beginning to unfold. Here we describe our current understanding of developmental and molecular mechanisms that generate the exquisite functions of the Drosophila auditory system, emphasizing recent progress and highlighting important new questions arising from research on this remarkable sensory system. PMID:24719289

  2. Monte Carlo N-Particle Transport Code System To Simulate Time-Analysis Quantities.

    SciTech Connect

    PADOVANI, ENRICO

    2012-04-15

Version: 00 US DOE 10CFR810 Jurisdiction. The Monte Carlo simulation of correlation measurements that rely on the detection of fast neutrons and photons from fission requires that particle emissions and interactions following a fission event be described as close to reality as possible. The -PoliMi extension to MCNP and to MCNPX was developed to simulate correlated-particle emissions and the subsequent interactions as close as possible to the physical behavior. Initially, MCNP-PoliMi, a modification of MCNP4C, was developed. The first version was developed in 2001-2002 and released in early 2004 to the Radiation Safety Information Computational Center (RSICC). It was developed for research purposes, to simulate correlated counts in organic scintillation detectors, sensitive to fast neutrons and gamma rays. Originally, the field of application was nuclear safeguards; however subsequent improvements have enhanced the ability to model measurements in other research fields as well. During 2010-2011 the -PoliMi modification was ported into MCNPX-2.7.0, leading to the development of MCNPX-PoliMi. Now the -PoliMi v2.0 modifications are distributed as a patch to MCNPX-2.7.0 which currently is distributed in the RSICC PACKAGE BCC-004 MCNP6_BETA2/MCNP5/MCNPX. Also included in the package is MPPost, a versatile code that provides simulated detector response. By taking advantage of the modifications in MCNPX-PoliMi, MPPost can provide an accurate simulation of the detector response for a variety of detection scenarios.

  3. Wakefield Computations for the CLIC PETS using the Parallel Finite Element Time-Domain Code T3P

    SciTech Connect

    Candel, A; Kabel, A.; Lee, L.; Li, Z.; Ng, C.; Schussman, G.; Ko, K.; Syratchev, I.; /CERN

    2009-06-19

    In recent years, SLAC's Advanced Computations Department (ACD) has developed the high-performance parallel 3D electromagnetic time-domain code, T3P, for simulations of wakefields and transients in complex accelerator structures. T3P is based on advanced higher-order Finite Element methods on unstructured grids with quadratic surface approximation. Optimized for large-scale parallel processing on leadership supercomputing facilities, T3P allows simulations of realistic 3D structures with unprecedented accuracy, aiding the design of the next generation of accelerator facilities. Applications to the Compact Linear Collider (CLIC) Power Extraction and Transfer Structure (PETS) are presented.

  4. Real-time video streaming using H.264 scalable video coding (SVC) in multihomed mobile networks: a testbed approach

    NASA Astrophysics Data System (ADS)

    Nightingale, James; Wang, Qi; Grecos, Christos

    2011-03-01

Users of the next generation wireless paradigm known as multihomed mobile networks expect satisfactory quality of service (QoS) when accessing streamed multimedia content. The recent H.264 Scalable Video Coding (SVC) extension to the Advanced Video Coding standard (AVC), offers the facility to adapt real-time video streams in response to the dynamic conditions of multiple network paths encountered in multihomed wireless mobile networks. Nevertheless, preexisting streaming algorithms were mainly proposed for AVC delivery over multipath wired networks and were evaluated by software simulation. This paper introduces a practical, hardware-based testbed upon which we implement and evaluate real-time H.264 SVC streaming algorithms in a realistic multihomed wireless mobile network environment. We propose an optimised streaming algorithm with multi-fold technical contributions. Firstly, we extended the AVC packet prioritisation schemes to reflect the three-dimensional granularity of SVC. Secondly, we designed a mechanism for evaluating the effects of different streamer 'read ahead window' sizes on real-time performance. Thirdly, we took account of the previously unconsidered path switching and mobile networks tunnelling overheads encountered in real-world deployments. Finally, we implemented a path condition monitoring and reporting scheme to facilitate intelligent path switching. The proposed system has been experimentally shown to offer a significant improvement in PSNR of the received stream compared with representative existing algorithms.
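As a hedged illustration of the kind of decision such a streamer must make (the cost model, function names, and numbers below are invented for this sketch, and are not the paper's algorithm): choose the path with the best effective bandwidth after charging a switching/tunnelling overhead, then keep SVC layers in priority order, stopping at the first enhancement layer that no longer fits, since higher layers depend on lower ones.

```python
def plan_transmission(paths, layers, current_path, switch_cost_kbps=200.0):
    """paths: {name: measured available kbps}; layers: list of (layer_id, kbps)
    in priority order, base layer first.  Returns (chosen path, layer ids kept)."""
    def effective(name):
        # penalize leaving the current path to model switching/tunnelling overhead
        return paths[name] - (0.0 if name == current_path else switch_cost_kbps)

    best = max(paths, key=effective)
    budget, kept = effective(best), []
    for layer_id, rate in layers:
        if rate <= budget:
            kept.append(layer_id)
            budget -= rate
        else:
            break  # higher SVC layers depend on this one, so stop here
    return best, kept

path, kept = plan_transmission(
    paths={"wlan": 900.0, "lte": 1400.0},
    layers=[(0, 400.0), (1, 500.0), (2, 600.0)],
    current_path="wlan",
)
# switches to the faster path despite the overhead, and drops the top enhancement layer
```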

  5. Auditory perceptual simulation: Simulating speech rates or accents?

    PubMed

    Zhou, Peiyun; Christianson, Kiel

    2016-07-01

When readers engage in Auditory Perceptual Simulation (APS) during silent reading, they mentally simulate characteristics of voices attributed to a particular speaker or a character depicted in the text. Previous research found that auditory perceptual simulation of a faster native English speaker during silent reading led to shorter reading times than auditory perceptual simulation of a slower non-native English speaker. Yet, it was uncertain whether this difference was triggered by the different speech rates of the speakers, or by the difficulty of simulating an unfamiliar accent. The current study investigates this question by comparing faster Indian-English speech and slower American-English speech in the auditory perceptual simulation paradigm. Analyses of reading times of individual words and the full sentence reveal that the auditory perceptual simulation effect again modulated reading rate, and auditory perceptual simulation of the faster Indian-English speech led to faster reading rates compared to auditory perceptual simulation of the slower American-English speech. The comparison between this experiment and the data from Zhou and Christianson (2016) demonstrates further that the "speakers'" speech rates, rather than the difficulty of simulating a non-native accent, are the primary mechanism underlying auditory perceptual simulation effects.

  6. Spectrotemporal resolution tradeoff in auditory processing as revealed by human auditory brainstem responses and psychophysical indices.

    PubMed

    Bidelman, Gavin M; Syed Khaja, Ameenuddin

    2014-06-20

    Auditory filter theory dictates a physiological compromise between frequency and temporal resolution of cochlear signal processing. We examined neurophysiological correlates of these spectrotemporal tradeoffs in the human auditory system using auditory evoked brain potentials and psychophysical responses. Temporal resolution was assessed using scalp-recorded auditory brainstem responses (ABRs) elicited by paired clicks. The inter-click interval (ICI) between successive pulses was parameterized from 0.7 to 25 ms to map ABR amplitude recovery as a function of stimulus spacing. Behavioral frequency difference limens (FDLs) and auditory filter selectivity (Q10 of psychophysical tuning curves) were obtained to assess relations between behavioral spectral acuity and electrophysiological estimates of temporal resolvability. Neural responses increased monotonically in amplitude with increasing ICI, ranging from total suppression (0.7 ms) to full recovery (25 ms) with a temporal resolution of ∼3-4 ms. ABR temporal thresholds were correlated with behavioral Q10 (frequency selectivity) but not FDLs (frequency discrimination); no correspondence was observed between Q10 and FDLs. Results suggest that finer frequency selectivity, but not discrimination, is associated with poorer temporal resolution. The inverse relation between ABR recovery and perceptual frequency tuning demonstrates a time-frequency tradeoff between the temporal and spectral resolving power of the human auditory system.
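Q10, the frequency-selectivity measure correlated with ABR recovery above, is simply the characteristic frequency divided by the tuning-curve bandwidth measured 10 dB above the tip. A small sketch on a synthetic V-shaped tuning curve; the curve parameters are invented for illustration, not taken from this study:

```python
import math

def q10(freqs, thresholds):
    """Q10 = CF / bandwidth at 10 dB above the tuning-curve tip.
    freqs must be ascending (Hz); thresholds in dB."""
    i_tip = min(range(len(thresholds)), key=lambda i: thresholds[i])
    cf, target = freqs[i_tip], thresholds[i_tip] + 10.0

    def crossing(indices):
        prev = None
        for i in indices:
            if prev is not None and (thresholds[prev] - target) * (thresholds[i] - target) <= 0:
                # linear interpolation between the samples straddling the 10-dB point
                f0, f1 = freqs[prev], freqs[i]
                t0, t1 = thresholds[prev], thresholds[i]
                return f0 + (target - t0) * (f1 - f0) / (t1 - t0)
            prev = i
        raise ValueError("no 10-dB crossing on this side of the tip")

    f_lo = crossing(range(i_tip, -1, -1))      # walk toward lower frequencies
    f_hi = crossing(range(i_tip, len(freqs)))  # walk toward higher frequencies
    return cf / (f_hi - f_lo)

# synthetic V-shaped curve: tip at 16 kHz, 50 dB/octave flanks
octaves = [i / 100.0 - 1.0 for i in range(201)]
freqs = [16000.0 * 2.0 ** o for o in octaves]
thresholds = [20.0 + 50.0 * abs(o) for o in octaves]
q = q10(freqs, thresholds)  # ~3.6 for these slopes
```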

LUMPED Unsteady: a Visual Basic® code of unsteady-state lumped-parameter models for mean residence time analyses of groundwater systems

    NASA Astrophysics Data System (ADS)

    Ozyurt, N. Nur; Bayari, C. Serdar

    2005-04-01

A Microsoft® Visual Basic 6.0 (Microsoft Corporation, 1987-1998) code of 9 lumped-parameter models of unsteady flow is presented for the analysis of mean residence time in aquifers. Groundwater flow systems obeying plug and well-mixed flow models and their combinations in parallel or serial connection can be simulated by the code. Models can use tritium, tritiogenic He-3, oxygen-18, deuterium, krypton-85, chlorofluorocarbons (CFC-11, CFC-12 and CFC-113) and sulfur hexafluoride (SF 6) as the environmental tracers. The executable code runs under all 32-bit Windows operating systems. Details of the code are explained and its limitations are indicated.
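The well-mixed model named in this record convolves the tracer input history with an exponential transit-time distribution g(τ) = (1/T)·exp(−τ/T), with radioactive decay applied for tritium. A minimal sketch, with the useful analytic check that for a constant input c0 the output tends to c0 / (1 + λT); the input function and T below are illustrative:

```python
import math

TRITIUM_HALFLIFE_YR = 12.32
LAMBDA = math.log(2.0) / TRITIUM_HALFLIFE_YR  # decay constant (1/yr)

def exponential_model_output(c_in, mean_residence_time, t_max_yr=150.0, dt=0.01):
    """Output concentration of the well-mixed (exponential) lumped-parameter model:
    c_out = integral over lag tau of c_in(tau) * g(tau) * exp(-LAMBDA * tau),
    where c_in(tau) is the input concentration tau years before present and
    g(tau) = (1/T) * exp(-tau / T) is the transit-time distribution."""
    T = mean_residence_time
    total, tau = 0.0, 0.5 * dt  # midpoint rule
    while tau < t_max_yr:
        g = math.exp(-tau / T) / T
        total += c_in(tau) * g * math.exp(-LAMBDA * tau) * dt
        tau += dt
    return total

# constant tritium input: the steady output should approach c0 / (1 + LAMBDA * T)
c0, T = 10.0, 10.0
out = exponential_model_output(lambda tau: c0, T)
expected = c0 / (1.0 + LAMBDA * T)
```

In practice the code fits T by matching the convolved output to measured tracer concentrations; the piston-flow model replaces g(τ) with a delta function at τ = T.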

  8. Design of time-pulse coded optoelectronic neuronal elements for nonlinear transformation and integration

    NASA Astrophysics Data System (ADS)

    Krasilenko, Vladimir G.; Nikolsky, Alexander I.; Lazarev, Alexander A.; Lazareva, Maria V.

    2008-03-01

The paper demonstrates the relevance of neurophysiologically motivated neuron arrays with flexibly programmable functions and operations, with the possibility to select the required accuracy and type of nonlinear transformation and learning. We consider neuron designs and simulation results for multichannel spatio-temporal algebraic accumulation-integration of optical signals, and show their advantages for nonlinear transformation and summation-integration. The proposed circuits are simple and can have intelligent properties such as learning and adaptation. The integrator-neuron is based on CMOS current mirrors and comparators. Performance figures: power consumption 100...500 μW; signal period 0.1...1 ms; input optical signal power 0.2...20 μW; time delays less than 1 μs; number of optical signals 2...10; integration time 10...100 signal periods; integration accuracy (error) about 1%. Various modifications of the neuron-integrators with improved performance for different applications are considered in the paper.

  9. Phonetic categorization in auditory word perception.

    PubMed

    Ganong, W F

    1980-02-01

To investigate the interaction in speech perception of auditory information and lexical knowledge (in particular, knowledge of which phonetic sequences are words), acoustic continua varying in voice onset time were constructed so that for each acoustic continuum, one of the two possible phonetic categorizations made a word and the other did not. For example, one continuum ranged between the word dash and the nonword tash; another used the nonword dask and the word task. In two experiments, subjects showed a significant lexical effect--that is, a tendency to make phonetic categorizations that make words. This lexical effect was greater at the phoneme boundary (where auditory information is ambiguous) than at the ends of the continua. Hence the lexical effect must arise at a stage of processing sensitive to both lexical knowledge and auditory information.
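The boundary-concentrated lexical effect falls out of a simple logistic categorization model in which lexical knowledge adds a constant bias to the decision variable: a fixed bias in logit units shifts response probabilities most where the auditory evidence is ambiguous (p ≈ 0.5) and hardly at all at the continuum endpoints. A toy sketch, with all parameter values invented for illustration:

```python
import math

def p_voiced(vot_ms, boundary_ms=30.0, slope=0.5, lexical_bias=0.0):
    """Probability of a voiced ('d') response at a given voice onset time (ms).
    lexical_bias (logit units) favors the categorization that makes a word."""
    logit = -slope * (vot_ms - boundary_ms) + lexical_bias
    return 1.0 / (1.0 + math.exp(-logit))

def lexical_effect(vot_ms, bias=1.0):
    # shift in response probability produced by the lexical bias
    return abs(p_voiced(vot_ms, lexical_bias=bias) - p_voiced(vot_ms, lexical_bias=0.0))

# largest shift at the phoneme boundary (30 ms), smallest at the continuum endpoints
effects = {vot: lexical_effect(vot) for vot in (10.0, 30.0, 50.0)}
```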

  10. Auditory Spatial Receptive Fields Created by Multiplication

    NASA Astrophysics Data System (ADS)

    Peña, José Luis; Konishi, Masakazu

    2001-04-01

    Examples of multiplication by neurons or neural circuits are scarce, although many computational models use this basic operation. The owl's auditory system computes interaural time (ITD) and level (ILD) differences to create a two-dimensional map of auditory space. Space-specific neurons are selective for combinations of ITD and ILD, which define, respectively, the horizontal and vertical dimensions of their receptive fields. A multiplication of separate postsynaptic potentials tuned to ITD and ILD, rather than an addition, can account for the subthreshold responses of these neurons to ITD-ILD pairs. Other nonlinear processes improve the spatial tuning of the spike output and reduce the fit to the multiplicative model.
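
    The multiplicative model described above can be sketched numerically. The Gaussian tuning curves and all parameter values below are illustrative assumptions, not the paper's fits to measured postsynaptic potentials:

```python
import numpy as np

# Hypothetical 1-D tuning curves for ITD and ILD (Gaussian shapes and the
# centres/widths are assumptions for illustration only).
itd = np.linspace(-100, 100, 201)   # interaural time difference, microseconds
ild = np.linspace(-20, 20, 201)     # interaural level difference, dB
itd_tuning = np.exp(-((itd - 30.0) ** 2) / (2 * 20.0 ** 2))
ild_tuning = np.exp(-((ild + 5.0) ** 2) / (2 * 6.0 ** 2))

# Multiplicative model: the 2-D subthreshold receptive field is the outer
# product of the two 1-D tunings, so every ILD slice has the same ITD shape
# up to a scale factor.
rf_mult = np.outer(ild_tuning, itd_tuning)

# Additive model for comparison: slices shift baseline instead of scaling.
rf_add = ild_tuning[:, None] + itd_tuning[None, :]

# Signature of multiplication: all rows of rf_mult are proportional, so the
# matrix has rank 1; the additive field does not.
print(np.linalg.matrix_rank(rf_mult))       # 1
print(np.linalg.matrix_rank(rf_add) > 1)    # True
```

    Testing for a rank-1 (separable) response surface is one simple way to distinguish a multiplicative combination from an additive one in such data.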

  11. An Effect of Spatial-Temporal Association of Response Codes: Understanding the Cognitive Representations of Time

    ERIC Educational Resources Information Center

    Vallesi, Antonino; Binns, Malcolm A.; Shallice, Tim

    2008-01-01

    The present study addresses the question of how such an abstract concept as time is represented by our cognitive system. Specifically, the aim was to assess whether temporal information is cognitively represented through left-to-right spatial coordinates, as already shown for other ordered sequences (e.g., numbers). In Experiment 1, the…

  12. Short Time-Scale Sensory Coding in S1 during Discrimination of Whisker Vibrotactile Sequences

    PubMed Central

    Miyashita, Toshio; Lee, Daniel J.; Smith, Katherine A.; Feldman, Daniel E.

    2016-01-01

    Rodent whisker input consists of dense microvibration sequences that are often temporally integrated for perceptual discrimination. Whether primary somatosensory cortex (S1) participates in temporal integration is unknown. We trained rats to discriminate whisker impulse sequences that varied in single-impulse kinematics (5–20-ms time scale) and mean speed (150-ms time scale). Rats appeared to use the integrated feature, mean speed, to guide discrimination in this task, consistent with similar prior studies. Despite this, 52% of S1 units, including 73% of units in L4 and L2/3, encoded sequences at fast time scales (≤20 ms, mostly 5–10 ms), accurately reflecting single impulse kinematics. 17% of units, mostly in L5, showed weaker impulse responses and a slow firing rate increase during sequences. However, these units did not effectively integrate whisker impulses, but instead combined weak impulse responses with a distinct, slow signal correlated to behavioral choice. A neural decoder could identify sequences from fast unit spike trains and behavioral choice from slow units. Thus, S1 encoded fast time scale whisker input without substantial temporal integration across whisker impulses. PMID:27574970

  13. Rate Control for Network-Coded Multipath Relaying with Time-Varying Connectivity

    DTIC Science & Technology

    2010-12-10

    Retransmission Interval 2.0 sec, and SPF Calculation Delay and Hold Time to 0 sec. For flooding, each of the N relay nodes is able to receive all

  14. 5-HT1A and 5-HT1B receptors differentially modulate rate and timing of auditory responses in the mouse inferior colliculus

    PubMed Central

    Ramsey, Lissandra Castellan Baldan; Sinha, Shiva R.; Hurley, Laura M.

    2010-01-01

    Serotonin is a physiological signal that translates both internal and external information about behavioral context into changes in sensory processing through a diverse array of receptors. The details of this process, particularly how receptors interact to shape sensory encoding, are poorly understood. In the inferior colliculus, a midbrain auditory nucleus, serotonin (5-HT) 1A receptors have suppressive and 5-HT1B receptors have facilitatory effects on evoked responses of neurons. We explored how these two receptor classes interact by testing three hypotheses: that they 1) affect separate neuron populations, 2) affect different response properties, or 3) have different endogenous patterns of activation. The first two hypotheses were tested by iontophoretic application of 5-HT1A and 5-HT1B receptor agonists individually and together to neurons in vivo. 5-HT1A and 5-HT1B agonists affected overlapping populations of neurons. During co-application, 5-HT1A and 5-HT1B agonists influenced spike rate and frequency bandwidth additively, with each moderating the effect of the other. In contrast, although both agonists individually influenced latencies and interspike intervals, the 5-HT1A agonist dominated these measurements during co-application. The third hypothesis was tested by applying antagonists of the 5-HT1A and 5-HT1B receptors. Blocking 5-HT1B receptors was complementary to activation of the receptor, but blocking 5-HT1A receptors was not, suggesting the endogenous activation of additional receptor types. These results suggest that cooperative interactions between 5-HT1A and 5-HT1B receptors shape auditory encoding in the IC, and that the effects of neuromodulators within sensory systems may depend nonlinearly on the specific profile of receptors that are activated. PMID:20646059

  15. Generalizable knowledge outweighs incidental details in prefrontal ensemble code over time

    PubMed Central

    Morrissey, Mark D; Insel, Nathan; Takehara-Nishiuchi, Kaori

    2017-01-01

    Memories for recent experiences are rich in incidental detail, but with time the brain is thought to extract latent rules and structures common across past experiences. We show that over weeks following the acquisition of two distinct associative memories, neuron firing in the rat prelimbic prefrontal cortex (mPFC) became less selective for perceptual features unique to each association and, with an apparently different time-course, became more selective for common relational features. We further found that during exposure to a novel experimental context, memory expression and neuron selectivity for relational features immediately generalized to the new situation. These neural patterns offer a window into the network-level processes by which the mPFC develops a knowledge structure of the world that can be adaptively applied to new experiences. DOI: http://dx.doi.org/10.7554/eLife.22177.001 PMID:28195037

  16. Altered auditory function in rats exposed to hypergravic fields

    NASA Technical Reports Server (NTRS)

    Jones, T. A.; Hoffman, L.; Horowitz, J. M.

    1982-01-01

    The effect of an orthodynamic hypergravic field of 6 G on the brainstem auditory projections was studied in rats. The brain temperature and EEG activity were recorded in the rats during 6 G orthodynamic acceleration and auditory brainstem responses were used to monitor auditory function. Results show that all animals exhibited auditory brainstem responses which indicated impaired conduction and transmission of brainstem auditory signals during the exposure to the 6 G acceleration field. Significant increases in central conduction time were observed for peaks 3N, 4P, 4N, and 5P (N = negative, P = positive), while the absolute latency values for these same peaks were also significantly increased. It is concluded that these results, along with those for fields below 4 G (Jones and Horowitz, 1981), indicate that impaired function proceeds in a rostro-caudal progression as field strength is increased.

  17. Virtual Auditory Displays

    DTIC Science & Technology

    2000-01-01

    timbre, intensity, distance, room modeling, radio communication Virtual Environments Handbook Chapter 4 Virtual Auditory Displays Russell D... musical note “A” as a pure sinusoid, there will be 440 condensations and rarefactions per second. The distance between two adjacent condensations or...and complexity are pitch, loudness, and timbre respectively. This distinction between physical and perceptual measures of sound properties is an

  18. Modelling auditory attention.

    PubMed

    Kaya, Emine Merve; Elhilali, Mounya

    2017-02-19

    Sounds in everyday life seldom appear in isolation. Both humans and machines are constantly flooded with a cacophony of sounds that need to be sorted through and scoured for relevant information-a phenomenon referred to as the 'cocktail party problem'. A key component in parsing acoustic scenes is the role of attention, which mediates perception and behaviour by focusing both sensory and cognitive resources on pertinent information in the stimulus space. The current article provides a review of modelling studies of auditory attention. The review highlights how the term attention refers to a multitude of behavioural and cognitive processes that can shape sensory processing. Attention can be modulated by 'bottom-up' sensory-driven factors, as well as 'top-down' task-specific goals, expectations and learned schemas. Essentially, it acts as a selection process or processes that focus both sensory and cognitive resources on the most relevant events in the soundscape; with relevance being dictated by the stimulus itself (e.g. a loud explosion) or by a task at hand (e.g. listen to announcements in a busy airport). Recent computational models of auditory attention provide key insights into its role in facilitating perception in cluttered auditory scenes. This article is part of the themed issue 'Auditory and visual scene analysis'.

  19. Modelling auditory attention

    PubMed Central

    Kaya, Emine Merve

    2017-01-01

    Sounds in everyday life seldom appear in isolation. Both humans and machines are constantly flooded with a cacophony of sounds that need to be sorted through and scoured for relevant information—a phenomenon referred to as the ‘cocktail party problem’. A key component in parsing acoustic scenes is the role of attention, which mediates perception and behaviour by focusing both sensory and cognitive resources on pertinent information in the stimulus space. The current article provides a review of modelling studies of auditory attention. The review highlights how the term attention refers to a multitude of behavioural and cognitive processes that can shape sensory processing. Attention can be modulated by ‘bottom-up’ sensory-driven factors, as well as ‘top-down’ task-specific goals, expectations and learned schemas. Essentially, it acts as a selection process or processes that focus both sensory and cognitive resources on the most relevant events in the soundscape; with relevance being dictated by the stimulus itself (e.g. a loud explosion) or by a task at hand (e.g. listen to announcements in a busy airport). Recent computational models of auditory attention provide key insights into its role in facilitating perception in cluttered auditory scenes. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044012

  20. Auditory Fusion in Children.

    ERIC Educational Resources Information Center

    Davis, Sylvia M.; McCroskey, Robert L.

    1980-01-01

    Focuses on auditory fusion (defined in terms of a listener's ability to distinguish paired acoustic events from single acoustic events) in 3- to 12-year-old children. The subjects listened to 270 pairs of tones controlled for frequency, intensity, and duration. (CM)

  1. Developmental Changes in Auditory Temporal Perception.

    ERIC Educational Resources Information Center

    Morrongiello, Barbara A.; And Others

    1984-01-01

    Infants, preschoolers, and adults were tested to determine the shortest time interval at which they would respond to the precedence effect, an auditory phenomenon produced by presenting the same sound through two loudspeakers with the input to one loudspeaker delayed in relation to the other. Results revealed developmental differences in threshold…

  2. SER Performance of Enhanced Spatial Multiplexing Codes with ZF/MRC Receiver in Time-Varying Rayleigh Fading Channels

    PubMed Central

    Lee, In-Ho

    2014-01-01

    We propose enhanced spatial multiplexing codes (E-SMCs) to enable various encoding rates. The symbol error rate (SER) performance of the E-SMC is investigated when zero-forcing (ZF) and maximal-ratio combining (MRC) techniques are used at a receiver. The proposed E-SMC allows a transmitted symbol to be repeated over time to achieve further diversity gain at the cost of the encoding rate. With the spatial correlation between transmit antennas, SER equations for M-ary QAM and PSK constellations are derived by using a moment generating function (MGF) approximation of a signal-to-noise ratio (SNR), based on the assumption of independent zero-forced SNRs. Analytic and simulated results are compared for time-varying and spatially correlated Rayleigh fading channels that are modelled as first-order Markovian channels. Furthermore, we can find an optimal block length for the E-SMC that meets a required SER. PMID:25114969
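
    The diversity-for-rate trade described above can be sketched in miniature. This is purely illustrative of the repetition/MRC idea, not the paper's ZF/MRC receiver derivation, and the SNR values are invented:

```python
# With maximal-ratio combining (MRC) over independent branches, the output
# SNR is the sum of the branch SNRs; repeating a symbol over T time slots
# buys that diversity at the cost of dividing the encoding rate by T.
def mrc_snr(branch_snrs):
    """MRC output SNR as the sum of independent branch SNRs (linear scale)."""
    return sum(branch_snrs)

def encoding_rate(base_rate, repetitions):
    """Effective encoding rate after repeating each symbol `repetitions` times."""
    return base_rate / repetitions

# Illustrative: one transmission plus one repetition, three receive branches.
print(mrc_snr([4.0, 2.5, 1.5]))   # 8.0
print(encoding_rate(1.0, 2))      # 0.5
```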

  3. Strategy choice mediates the link between auditory processing and spelling.

    PubMed

    Kwong, Tru E; Brachman, Kyle J

    2014-01-01

    Relations among linguistic auditory processing, nonlinguistic auditory processing, spelling ability, and spelling strategy choice were examined. Sixty-three undergraduate students completed measures of auditory processing (one involving distinguishing similar tones, one involving distinguishing similar phonemes, and one involving selecting appropriate spellings for individual phonemes). Participants also completed a modified version of a standardized spelling test, and a secondary spelling test with retrospective strategy reports. Once testing was completed, participants were divided into phonological versus nonphonological spellers on the basis of the number of words they spelled using phonological strategies only. Results indicated a) moderate to strong positive correlations among the different auditory processing tasks in terms of reaction time, but not accuracy levels, and b) weak to moderate positive correlations between measures of linguistic auditory processing (phoneme distinction and phoneme spelling choice in the presence of foils) and spelling ability for phonological spellers, but not for nonphonological spellers. These results suggest a possible explanation for past contradictory research on auditory processing and spelling, which has been divided in terms of whether or not disabled spellers seemed to have poorer auditory processing than did typically developing spellers, and suggest implications for teaching spelling to children with good versus poor auditory processing abilities.

  4. Using light to tell the time of day: sensory coding in the mammalian circadian visual network

    PubMed Central

    2016-01-01

    Circadian clocks are a near-ubiquitous feature of biology, allowing organisms to optimise their physiology to make the most efficient use of resources and adjust behaviour to maximise survival over the solar day. To fulfil this role, circadian clocks require information about time in the external world. This is most reliably obtained by measuring the pronounced changes in illumination associated with the earth's rotation. In mammals, these changes are exclusively detected in the retina and are relayed by direct and indirect neural pathways to the master circadian clock in the hypothalamic suprachiasmatic nuclei. Recent work reveals a surprising level of complexity in this sensory control of the circadian system, including the participation of multiple photoreceptive pathways conveying distinct aspects of visual and/or time-of-day information. In this Review, I summarise these important recent advances, present hypotheses as to the functions and neural origins of these sensory signals, highlight key challenges for future research and discuss the implications of our current knowledge for animals and humans in the modern world. PMID:27307539

  5. A New Model for Real-Time Regional Vertical Total Electron Content and Differential Code Bias Estimation Using IGS Real-Time Service (IGS-RTS) Products

    NASA Astrophysics Data System (ADS)

    Abdelazeem, Mohamed; Çelik, Rahmi N.; El-Rabbany, Ahmed

    2016-04-01

    The international global navigation satellite system (GNSS) real-time service (IGS-RTS) products have been used extensively for real-time precise point positioning and ionosphere modeling applications. In this study, we develop a regional model for real-time vertical total electron content (RT-VTEC) and differential code bias (RT-DCB) estimation over Europe using the IGS-RTS satellite orbit and clock products. The developed model has a spatial resolution of 1°×1° and a temporal resolution of 15 minutes. GPS observations from a regional network of 60 IGS and EUREF reference stations are processed in zero-difference mode using the Bernese-5.2 software package in order to extract the geometry-free linear combination of the smoothed code observations. A spherical harmonic expansion is used to model the VTEC and the receiver and satellite DCBs. To validate the proposed model, the RT-VTEC values are computed and compared with the final IGS global ionospheric map (IGS-GIM) counterparts on three successive days of high solar activity, including one day of extreme geomagnetic activity. The real-time satellite DCBs are also estimated and compared with the IGS-GIM counterparts. Moreover, the real-time receiver DCBs for six IGS stations, located at different latitudes and equipped with different receiver types, are obtained and compared with the IGS-GIM counterparts. The findings reveal that the estimated RT-VTEC values agree with the IGS-GIM counterparts with root-mean-square errors (RMSEs) below 2 TEC units. In addition, the RMSEs of the satellite and receiver DCBs with respect to IGS-GIM are below 0.85 ns and 0.65 ns, respectively.
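
    The geometry-free combination mentioned above cancels geometry, clocks and troposphere, leaving the dispersive ionospheric term (plus DCBs, ignored here). A minimal sketch, using the standard GPS L1/L2 frequencies and the first-order ionosphere constant; the pseudorange numbers are invented for illustration:

```python
# Geometry-free combination of dual-frequency GPS code observations, as used
# to extract ionospheric delay for VTEC/DCB estimation. Illustrative sketch.
F1 = 1575.42e6  # GPS L1 carrier frequency, Hz
F2 = 1227.60e6  # GPS L2 carrier frequency, Hz

def slant_tec(p1, p2):
    """Slant TEC in TECU (1 TECU = 1e16 el/m^2) from code pseudoranges in
    metres; P2 - P1 isolates the dispersive ionospheric term (DCBs ignored)."""
    k = 40.3e16  # first-order ionospheric constant, m per TECU per (1/Hz^2)
    return (p2 - p1) / (k * (1.0 / F2**2 - 1.0 / F1**2))

# Invented pseudoranges differing only by a dispersive delay of 3.2 m:
stec = slant_tec(22_000_000.0, 22_000_003.2)
print(round(stec, 2))  # roughly 30 TECU for a 3.2 m P2-P1 difference
```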

  6. Seamless data-range change using punctured convolutional codes for time-varying signal-to-noise ratios

    NASA Technical Reports Server (NTRS)

    Feria, Y.; Cheung, K.-M.

    1995-01-01

    In a time-varying signal-to-noise ratio (SNR) environment, the symbol rate is often changed to maximize data return. However, a symbol-rate change has undesirable effects, such as changing the transmission bandwidth and perhaps causing the receiver symbol loop to lose lock temporarily, thus losing some data. In this article, we propose an alternate way of varying the data rate without changing the symbol rate and, therefore, the transmission bandwidth. The data-rate change is achieved in a seamless fashion by puncturing the convolutionally encoded symbol stream to adapt to the changing SNR environment. We have also derived an exact expression enumerating the number of distinct puncturing patterns. To demonstrate this seamless rate-change capability, we searched for good puncturing patterns for the Galileo (14,1/4) convolutional code and changed the data rates by using the punctured codes to match the Galileo SNR profile of November 9, 1997. We show that this scheme reduces the number of symbol-rate changes from nine to two and provides a comparable data return in a day and a higher symbol SNR during most of the day.
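
    The puncturing mechanism can be sketched as follows. The pattern below is purely illustrative, not one of the good patterns found for the Galileo (14,1/4) code:

```python
# Seamless rate change by puncturing: encoded symbols are deleted according
# to a repeating pattern, raising the effective code rate without touching
# the symbol rate or the transmission bandwidth.

def puncture(symbols, pattern):
    """Keep symbols[i] when pattern[i % len(pattern)] == 1."""
    return [s for i, s in enumerate(symbols) if pattern[i % len(pattern)] == 1]

def punctured_rate(base_rate, pattern):
    """Effective code rate after puncturing a rate-`base_rate` symbol stream."""
    return base_rate * len(pattern) / sum(pattern)

# Rate-1/4 mother code; dropping 1 of every 4 symbols yields rate 1/3.
pattern = [1, 1, 1, 0]
coded = list(range(12))  # stand-in for 12 encoded channel symbols
print(puncture(coded, pattern))       # [0, 1, 2, 4, 5, 6, 8, 9, 10]
print(punctured_rate(0.25, pattern))  # 1/3
```

    The decoder reinserts erasures at the punctured positions, which is what makes the rate change transparent to the symbol loop.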

  7. PORTHOS - A computer code for solving general three-dimensional, time-dependent two-fluid equations

    SciTech Connect

    Chan, R.K.C.; Masiello, P.J.; Srikantiah, G.S.

    1987-01-01

    PORTHOS is a computer code for calculating three-dimensional steady-state or time-dependent two-phase flow in porous or non-porous media. It was developed with the initial goal of simulating two-phase flows in steam generators of PWR nuclear power plants. However, the modular code design and the generality of approach allow application to a wide variety of problems in single-phase or two-phase flow. The present method employs a finite difference technique to solve the complete set of two-fluid equations, i.e., the ''six-equation'' model, which includes two mass conservation equations, two momentum equations, and two energy equations, as well as constitutive equations to effect closure of the system. The use of volume porosity and surface permeability allows the treatment of complex geometry. This paper describes the mathematical basis, the numerical solution procedure employed, and the results of comparisons with two sources of experimental data: the 8 MW FRIGG loop experiment and the Electricite de France (EdF) Bugey 4 steam generator test. Calculations of the FRIGG experiment by PORTHOS, in terms of void fraction distribution, are in good agreement with measurements. Verification against the EdF data is also quite satisfactory.

  8. Time-of-flights and traps: from the Histone Code to Mars*

    PubMed Central

    Swatkoski, Stephen; Becker, Luann; Evans-Nguyen, Theresa

    2011-01-01

    Two very different analytical instruments are featured in this perspective paper on mass spectrometer design and development. The first instrument, based upon the curved-field reflectron developed in the Johns Hopkins Middle Atlantic Mass Spectrometry Laboratory, is a tandem time-of-flight mass spectrometer whose performance and practicality are illustrated by applications to a series of research projects addressing the acetylation, deacetylation and ADP-ribosylation of histone proteins. The chemical derivatization of lysine-rich, hyperacetylated histones as their deuteroacetylated analogs enables one to obtain an accurate quantitative assessment of the extent of acetylation at each site. Chemical acetylation of histone mixtures is also used to determine the lysine targets of sirtuins, an important class of histone deacetylases (HDACs), by replacing the deacetylated residues with biotin. Histone deacetylation by sirtuins requires the co-factor NAD+, as does the attachment of ADP-ribose. The second instrument, a low voltage and low power ion trap mass spectrometer known as the Mars Organic Mass Analyzer (MOMA), is a prototype for an instrument expected to be launched in 2018. Like the tandem mass spectrometer, it is also expected to have applicability to environmental and biological analyses and, ultimately, to clinical care. PMID:20530839

  9. Animal models for auditory streaming.

    PubMed

    Itatani, Naoya; Klump, Georg M

    2017-02-19

    Sounds in the natural environment need to be assigned to acoustic sources to evaluate complex auditory scenes. Separating sources will affect the analysis of auditory features of sounds. As the benefits of assigning sounds to specific sources accrue to all species communicating acoustically, the ability for auditory scene analysis is widespread among different animals. Animal studies allow for a deeper insight into the neuronal mechanisms underlying auditory scene analysis. Here, we will review the paradigms applied in the study of auditory scene analysis and streaming of sequential sounds in animal models. We will compare the psychophysical results from the animal studies to the evidence obtained in human psychophysics of auditory streaming, i.e. in a task commonly used for measuring the capability for auditory scene analysis. Furthermore, the neuronal correlates of auditory streaming will be reviewed in different animal models and the observations of the neurons' response measures will be related to perception. The across-species comparison will reveal whether similar demands in the analysis of acoustic scenes have resulted in similar perceptual and neuronal processing mechanisms in the wide range of species being capable of auditory scene analysis. This article is part of the themed issue 'Auditory and visual scene analysis'.

  10. High resolution auditory perception system

    NASA Astrophysics Data System (ADS)

    Alam, Iftekhar; Ghatol, Ashok

    2005-04-01

    Blindness is a sensory disability that is difficult to treat but can to some extent be helped by artificial aids. The paper describes the design of a high-resolution auditory perception system based on the principle of air sonar with binaural perception. The system is a vision-substitution aid for blind persons. The blind person wears ultrasonic eyeglasses with an ultrasonic sensor array embedded in them. The system has been designed to operate in multiresolution modes. The ultrasonic sound from the transmitter array is reflected back by objects falling within the beam of the array and is received. The received signal is converted to a sound signal, which is presented stereophonically for auditory perception. A detailed study was carried out as the background work required for the system implementation: the appropriate range analysis procedure, analysis of space-time signals, a study of the acoustic sensors, amplification methods, and noise removal using filters. Finally, the system implementation, including both the hardware and the software, is described. Experimental results on actual blind subjects and inferences obtained during the study are also included.
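
    At its core, air-sonar ranging of this kind reduces to a time-of-flight calculation. A minimal sketch, with an assumed speed of sound; the paper's actual hardware parameters are not reproduced here:

```python
# Ultrasonic echo ranging: a pulse travels out to the object and back, so
# the one-way range is (speed of sound) * (round-trip delay) / 2.
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degC (assumed)

def echo_range(delay_s):
    """Object distance in metres from the round-trip echo delay in seconds."""
    return SPEED_OF_SOUND * delay_s / 2.0

print(echo_range(0.01))  # 1.715 m for a 10 ms round trip
```

    In a binaural system, the inter-ear differences in delay and level of the rendered echo additionally convey the object's direction.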

  11. Study of ITER plasma position reflectometer using a two-dimensional full-wave finite-difference time domain code

    SciTech Connect

    Silva, F. da

    2008-10-15

    The EU will supply the plasma position reflectometer for ITER. The system will have channels located at different poloidal positions, some of them obliquely viewing a plasma that has a poloidal density divergence and curvature, both adverse conditions for profile measurements. To understand the impact of such a topology on the reconstruction of density profiles, a full-wave two-dimensional finite-difference time-domain O-mode code with frequency-sweep capability was used. Simulations show that the reconstructed density profiles still meet the ITER radial accuracy specification for plasma position (1 cm), except at the highest densities. Other adverse effects, such as multireflections induced by the blanket, density fluctuations, and MHD activity, were considered and a first understanding of their impact was obtained.
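
    The class of solver named above can be illustrated with a minimal 1-D vacuum finite-difference time-domain (Yee) sketch in normalized units. The actual reflectometry code is 2-D, includes the plasma O-mode response and a frequency sweep, none of which is modelled here:

```python
import numpy as np

# Minimal 1-D FDTD leapfrog update in vacuum, normalized units.
nx, nt = 200, 150
ez = np.zeros(nx)   # electric field samples
hy = np.zeros(nx)   # magnetic field, staggered half a cell from ez
S = 1.0             # Courant number (1-D stability requires S <= 1)

for t in range(nt):
    hy[:-1] += S * (ez[1:] - ez[:-1])            # update H from curl of E
    ez[1:] += S * (hy[1:] - hy[:-1])             # update E from curl of H
    ez[50] += np.exp(-((t - 30) / 10.0) ** 2)    # soft Gaussian source

# The injected pulse propagates away from the source without blowing up.
print(np.isfinite(ez).all() and ez.max() > 0)    # True
```

    A reflectometer simulation adds the plasma current response (cut-off where the wave frequency equals the local plasma frequency) and records the round-trip phase versus swept frequency.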

  12. Investigation of auditory processing disorder and language impairment using the speech-evoked auditory brainstem response.

    PubMed

    Rocha-Muniz, Caroline N; Befi-Lopes, Debora M; Schochat, Eliane

    2012-12-01

    This study investigated whether there are differences in the Speech-Evoked Auditory Brainstem Response among children with Typical Development (TD), (Central) Auditory Processing Disorder (C)APD, and Language Impairment (LI). The speech-evoked Auditory Brainstem Response was tested in 57 children (ages 6-12). The children were placed into three groups: TD (n = 18), (C)APD (n = 18) and LI (n = 21). Speech-evoked ABRs were elicited using the five-formant syllable /da/. Three dimensions were defined for analysis: timing, harmonics, and pitch. A comparative analysis of the responses between the typically developing children and the children with (C)APD and LI revealed abnormal encoding of the speech acoustic features that are characteristic of speech perception in children with (C)APD and LI, although the two groups differed in their abnormalities. While the children with (C)APD may have had greater difficulty distinguishing stimuli based on timing cues, the children with LI had the additional difficulty of distinguishing speech harmonics, which are important to the identification of speech sounds. These data suggested that an inefficient representation of crucial components of speech sounds may contribute to the difficulties with language processing found in children with LI. Furthermore, these findings may indicate that the neural processes mediated by the auditory brainstem differ among children with auditory processing and speech-language disorders.

  13. Predicting “When” in Discourse Engages the Human Dorsal Auditory Stream: An fMRI Study Using Naturalistic Stories

    PubMed Central

    Nagels, Arne; Tune, Sarah; Kircher, Tilo; Wiese, Richard; Schlesewsky, Matthias; Bornkessel-Schlesewsky, Ina

    2016-01-01

    The hierarchical organization of human cortical circuits integrates information across different timescales via temporal receptive windows, which increase in length from lower to higher levels of the cortical hierarchy (Hasson et al., 2015). A recent neurobiological model of higher-order language processing (Bornkessel-Schlesewsky et al., 2015) posits that temporal receptive windows in the dorsal auditory stream provide the basis for a hierarchically organized predictive coding architecture (Friston and Kiebel, 2009). In this stream, a nested set of internal models generates time-based (“when”) predictions for upcoming input at different linguistic levels (sounds, words, sentences, discourse). Here, we used naturalistic stories to test the hypothesis that multi-sentence, discourse-level predictions are processed in the dorsal auditory stream, yielding attenuated BOLD responses for highly predicted versus less strongly predicted language input. The results were as hypothesized: discourse-related cues, such as passive voice, which effect a higher predictability of remention for a character at a later point within a story, led to attenuated BOLD responses for auditory input of high versus low predictability within the dorsal auditory stream, specifically in the inferior parietal lobule, middle frontal gyrus, and dorsal parts of the inferior frontal gyrus, among other areas. Additionally, we found effects of content-related (“what”) predictions in ventral regions. These findings provide novel evidence that hierarchical predictive coding extends to discourse-level processing in natural language. Importantly, they ground language processing on a hierarchically organized predictive network, as a common underlying neurobiological basis shared with other brain functions. SIGNIFICANCE STATEMENT Language is the most powerful communicative medium available to humans. Nevertheless, we lack an understanding of the neurobiological basis of language processing in natural

  14. Auditory Learning. Dimensions in Early Learning Series.

    ERIC Educational Resources Information Center

    Zigmond, Naomi K.; Cicci, Regina

    The monograph discusses the psycho-physiological operations for processing of auditory information, the structure and function of the ear, the development of auditory processes from fetal responses through discrimination, language comprehension, auditory memory, and auditory processes related to written language. Disorders of auditory learning…

  15. Auditory Processing Disorders. Revised. Technical Assistance Paper.

    ERIC Educational Resources Information Center

    Florida State Dept. of Education, Tallahassee. Bureau of Instructional Support and Community Services.

    Designed to assist audiologists in the educational setting in responding to frequently asked questions concerning audiological auditory processing disorder (APD) evaluations, this paper addresses: (1) auditory processes; (2) auditory processing skills; (3) characteristics of auditory processing disorders; (4) causes of auditory overload; (5) why…

  16. Attention Modulates the Auditory Cortical Processing of Spatial and Category Cues in Naturalistic Auditory Scenes.

    PubMed

    Renvall, Hanna; Staeren, Noël; Barz, Claudia S; Ley, Anke; Formisano, Elia

    2016-01-01

    This combined fMRI and MEG study investigated brain activations during listening and attending to natural auditory scenes. We first recorded, using in-ear microphones, vocal non-speech sounds, and environmental sounds that were mixed to construct auditory scenes containing two concurrent sound streams. During the brain measurements, subjects attended to one of the streams while spatial acoustic information of the scene was either preserved (stereophonic sounds) or removed (monophonic sounds). Compared to monophonic sounds, stereophonic sounds evoked larger blood-oxygenation-level-dependent (BOLD) fMRI responses in the bilateral posterior superior temporal areas, independent of which stimulus attribute the subject was attending to. This finding is consistent with the functional role of these regions in the (automatic) processing of auditory spatial cues. Additionally, significant differences in the cortical activation patterns depending on the target of attention were observed. Bilateral planum temporale and inferior frontal gyrus were preferentially activated when attending to stereophonic environmental sounds, whereas when subjects attended to stereophonic voice sounds, the BOLD responses were larger at the bilateral middle superior temporal gyrus and sulcus, previously reported to show voice sensitivity. In contrast, the time-resolved MEG responses were stronger for mono- than stereophonic sounds in the bilateral auditory cortices at ~360 ms after the stimulus onset when attending to the voice excerpts within the combined sounds. The observed effects suggest that during the segregation of auditory objects from the auditory background, spatial sound cues together with other relevant temporal and spectral cues are processed in an attention-dependent manner at the cortical locations generally involved in sound recognition. More synchronous neuronal activation during monophonic than stereophonic sound processing, as well as (local) neuronal inhibitory mechanisms in

  17. Attention Modulates the Auditory Cortical Processing of Spatial and Category Cues in Naturalistic Auditory Scenes

    PubMed Central

    Renvall, Hanna; Staeren, Noël; Barz, Claudia S.; Ley, Anke; Formisano, Elia

    2016-01-01

    This combined fMRI and MEG study investigated brain activations during listening and attending to natural auditory scenes. We first recorded, using in-ear microphones, vocal non-speech sounds, and environmental sounds that were mixed to construct auditory scenes containing two concurrent sound streams. During the brain measurements, subjects attended to one of the streams while spatial acoustic information of the scene was either preserved (stereophonic sounds) or removed (monophonic sounds). Compared to monophonic sounds, stereophonic sounds evoked larger blood-oxygenation-level-dependent (BOLD) fMRI responses in the bilateral posterior superior temporal areas, independent of which stimulus attribute the subject was attending to. This finding is consistent with the functional role of these regions in the (automatic) processing of auditory spatial cues. Additionally, significant differences in the cortical activation patterns depending on the target of attention were observed. Bilateral planum temporale and inferior frontal gyrus were preferentially activated when attending to stereophonic environmental sounds, whereas when subjects attended to stereophonic voice sounds, the BOLD responses were larger at the bilateral middle superior temporal gyrus and sulcus, previously reported to show voice sensitivity. In contrast, the time-resolved MEG responses were stronger for mono- than stereophonic sounds in the bilateral auditory cortices at ~360 ms after the stimulus onset when attending to the voice excerpts within the combined sounds. The observed effects suggest that during the segregation of auditory objects from the auditory background, spatial sound cues together with other relevant temporal and spectral cues are processed in an attention-dependent manner at the cortical locations generally involved in sound recognition. 
More synchronous neuronal activation during monophonic than stereophonic sound processing, as well as (local) neuronal inhibitory mechanisms in

  18. Using Facebook to Reach People Who Experience Auditory Hallucinations

    PubMed Central

    Brian, Rachel Marie; Ben-Zeev, Dror

    2016-01-01

    Background Auditory hallucinations (eg, hearing voices) are relatively common and underreported false sensory experiences that may produce distress and impairment. A large proportion of those who experience auditory hallucinations go unidentified and untreated. Traditional engagement methods oftentimes fall short in reaching the diverse population of people who experience auditory hallucinations. Objective The objective of this proof-of-concept study was to examine the viability of leveraging Web-based social media as a method of engaging people who experience auditory hallucinations and to evaluate their attitudes toward using social media platforms as a resource for Web-based support and technology-based treatment. Methods We used Facebook advertisements to recruit individuals who experience auditory hallucinations to complete an 18-item Web-based survey focused on issues related to auditory hallucinations and technology use in American adults. We systematically tested multiple elements of the advertisement and survey layout including image selection, survey pagination, question ordering, and advertising targeting strategy. Each element was evaluated sequentially and the most cost-effective strategy was implemented in the subsequent steps, eventually deriving an optimized approach. Three open-ended question responses were analyzed using conventional inductive content analysis. Coded responses were quantified into binary codes, and frequencies were then calculated. Results Recruitment netted N=264 total sample over a 6-week period. Ninety-seven participants fully completed all measures at a total cost of $8.14 per participant across testing phases. Systematic adjustments to advertisement design, survey layout, and targeting strategies improved data quality and cost efficiency. People were willing to provide information on what triggered their auditory hallucinations along with strategies they use to cope, as well as provide suggestions to others who experience

  19. Dissociated lateralization of transient and sustained blood oxygen level-dependent signal components in human primary auditory cortex.

    PubMed

    Lehmann, Christoph; Herdener, Marcus; Schneider, Peter; Federspiel, Andrea; Bach, Dominik R; Esposito, Fabrizio; di Salle, Francesco; Scheffler, Klaus; Kretz, Robert; Dierks, Thomas; Seifritz, Erich

    2007-02-15

    Among other auditory operations, the analysis of different sound levels received at both ears is fundamental for the localization of a sound source. These so-called interaural level differences, in animals, are coded by excitatory-inhibitory neurons yielding asymmetric hemispheric activity patterns with acoustic stimuli having maximal interaural level differences. In human auditory cortex, the temporal blood oxygen level-dependent (BOLD) response to auditory inputs, as measured by functional magnetic resonance imaging (fMRI), consists of at least two independent components: an initial transient and a subsequent sustained signal, which, on a different time scale, are consistent with electrophysiological human and animal response patterns. However, their specific functional role remains unclear. Animal studies suggest these temporal components being based on different neural networks and having specific roles in representing the external acoustic environment. Here we hypothesized that the transient and sustained response constituents are differentially involved in coding interaural level differences and therefore play different roles in spatial information processing. Healthy subjects underwent monaural and binaural acoustic stimulation and BOLD responses were measured using high signal-to-noise-ratio fMRI. In the anatomically segmented Heschl's gyrus the transient response was bilaterally balanced, independent of the side of stimulation, while in opposite the sustained response was contralateralized. This dissociation suggests a differential role at these two independent temporal response components, with an initial bilateral transient signal subserving rapid sound detection and a subsequent lateralized sustained signal subserving detailed sound characterization.

  20. Central auditory disorders: toward a neuropsychology of auditory objects

    PubMed Central

    Goll, Johanna C.; Crutch, Sebastian J.; Warren, Jason D.

    2012-01-01

    Purpose of review Analysis of the auditory environment, source identification and vocal communication all require efficient brain mechanisms for disambiguating, representing and understanding complex natural sounds as ‘auditory objects’. Failure of these mechanisms leads to a diverse spectrum of clinical deficits. Here we review current evidence concerning the phenomenology, mechanisms and brain substrates of auditory agnosias and related disorders of auditory object processing. Recent findings Analysis of lesions causing auditory object deficits has revealed certain broad anatomical correlations: deficient parsing of the auditory scene is associated with lesions involving the parieto-temporal junction, while selective disorders of sound recognition occur with more anterior temporal lobe or extra-temporal damage. Distributed neural networks have been increasingly implicated in the pathogenesis of such disorders as developmental dyslexia, congenital amusia and tinnitus. Auditory category deficits may arise from defective interaction of spectrotemporal encoding and executive and mnestic processes. Dedicated brain mechanisms are likely to process specialised sound objects such as voices and melodies. Summary Emerging empirical evidence suggests a clinically relevant, hierarchical and fractionated neuropsychological model of auditory object processing that provides a framework for understanding auditory agnosias and makes specific predictions to direct future work. PMID:20975559

  1. Auditory-neurophysiological responses to speech during early childhood: Effects of background noise.

    PubMed

    White-Schwoch, Travis; Davies, Evan C; Thompson, Elaine C; Woodruff Carr, Kali; Nicol, Trent; Bradlow, Ann R; Kraus, Nina

    2015-10-01

    Early childhood is a critical period of auditory learning, during which children are constantly mapping sounds to meaning. But this auditory learning rarely occurs in ideal listening conditions-children are forced to listen against a relentless din. This background noise degrades the neural coding of these critical sounds, in turn interfering with auditory learning. Despite the importance of robust and reliable auditory processing during early childhood, little is known about the neurophysiology underlying speech processing in children so young. To better understand the physiological constraints these adverse listening scenarios impose on speech sound coding during early childhood, auditory-neurophysiological responses were elicited to a consonant-vowel syllable in quiet and background noise in a cohort of typically-developing preschoolers (ages 3-5 yr). Overall, responses were degraded in noise: they were smaller, less stable across trials, slower, and there was poorer coding of spectral content and the temporal envelope. These effects were exacerbated in response to the consonant transition relative to the vowel, suggesting that the neural coding of spectrotemporally-dynamic speech features is more tenuous in noise than the coding of static features-even in children this young. Neural coding of speech temporal fine structure, however, was more resilient to the addition of background noise than coding of temporal envelope information. Taken together, these results demonstrate that noise places a neurophysiological constraint on speech processing during early childhood by causing a breakdown in neural processing of speech acoustics. These results may explain why some listeners have inordinate difficulties understanding speech in noise. Speech-elicited auditory-neurophysiological responses offer objective insight into listening skills during early childhood by reflecting the integrity of neural coding in quiet and noise; this paper documents typical response

  2. Early hominin auditory capacities

    PubMed Central

    Quam, Rolf; Martínez, Ignacio; Rosa, Manuel; Bonmatí, Alejandro; Lorenzo, Carlos; de Ruiter, Darryl J.; Moggi-Cecchi, Jacopo; Conde Valverde, Mercedes; Jarabo, Pilar; Menter, Colin G.; Thackeray, J. Francis; Arsuaga, Juan Luis

    2015-01-01

    Studies of sensory capacities in past life forms have offered new insights into their adaptations and lifeways. Audition is particularly amenable to study in fossils because it is strongly related to physical properties that can be approached through their skeletal structures. We have studied the anatomy of the outer and middle ear in the early hominin taxa Australopithecus africanus and Paranthropus robustus and estimated their auditory capacities. Compared with chimpanzees, the early hominin taxa are derived toward modern humans in their slightly shorter and wider external auditory canal, smaller tympanic membrane, and lower malleus/incus lever ratio, but they remain primitive in the small size of their stapes footplate. Compared with chimpanzees, both early hominin taxa show a heightened sensitivity to frequencies between 1.5 and 3.5 kHz and an occupied band of maximum sensitivity that is shifted toward slightly higher frequencies. The results have implications for sensory ecology and communication, and suggest that the early hominin auditory pattern may have facilitated an increased emphasis on short-range vocal communication in open habitats. PMID:26601261

  3. Early hominin auditory capacities.

    PubMed

    Quam, Rolf; Martínez, Ignacio; Rosa, Manuel; Bonmatí, Alejandro; Lorenzo, Carlos; de Ruiter, Darryl J; Moggi-Cecchi, Jacopo; Conde Valverde, Mercedes; Jarabo, Pilar; Menter, Colin G; Thackeray, J Francis; Arsuaga, Juan Luis

    2015-09-01

    Studies of sensory capacities in past life forms have offered new insights into their adaptations and lifeways. Audition is particularly amenable to study in fossils because it is strongly related to physical properties that can be approached through their skeletal structures. We have studied the anatomy of the outer and middle ear in the early hominin taxa Australopithecus africanus and Paranthropus robustus and estimated their auditory capacities. Compared with chimpanzees, the early hominin taxa are derived toward modern humans in their slightly shorter and wider external auditory canal, smaller tympanic membrane, and lower malleus/incus lever ratio, but they remain primitive in the small size of their stapes footplate. Compared with chimpanzees, both early hominin taxa show a heightened sensitivity to frequencies between 1.5 and 3.5 kHz and an occupied band of maximum sensitivity that is shifted toward slightly higher frequencies. The results have implications for sensory ecology and communication, and suggest that the early hominin auditory pattern may have facilitated an increased emphasis on short-range vocal communication in open habitats.

  4. Efficient population coding of naturalistic whisker motion in the ventro-posterior medial thalamus based on precise spike timing

    PubMed Central

    Bale, Michael R.; Ince, Robin A. A.; Santagata, Greta; Petersen, Rasmus S.

    2015-01-01

    The rodent whisker-associated thalamic nucleus (VPM) contains a somatotopic map where whisker representation is divided into distinct neuronal sub-populations, called “barreloids”. Each barreloid projects to its associated cortical barrel column and so forms a gateway for incoming sensory stimuli to the barrel cortex. We aimed to determine how the population of neurons within one barreloid encodes naturalistic whisker motion. In rats, we recorded the extracellular activity of up to nine single neurons within a single barreloid, by implanting silicon probes parallel to the longitudinal axis of the barreloids. We found that play-back of texture-induced whisker motion evoked sparse responses, timed with millisecond precision. At the population level, there was synchronous activity: however, different subsets of neurons were synchronously active at different times. Mutual information between population responses and whisker motion increased near linearly with population size. When normalized to factor out firing rate differences, we found that texture was encoded with greater informational-efficiency than white noise. These results indicate that, within each VPM barreloid, there is a rich and efficient population code for naturalistic whisker motion based on precisely timed, population spike patterns. PMID:26441549

  5. Representations of Pitch and Timbre Variation in Human Auditory Cortex.

    PubMed

    Allen, Emily J; Burton, Philip C; Olman, Cheryl A; Oxenham, Andrew J

    2017-02-01

    Pitch and timbre are two primary dimensions of auditory perception, but how they are represented in the human brain remains a matter of contention. Some animal studies of auditory cortical processing have suggested modular processing, with different brain regions preferentially coding for pitch or timbre, whereas other studies have suggested a distributed code for different attributes across the same population of neurons. This study tested whether variations in pitch and timbre elicit activity in distinct regions of the human temporal lobes. Listeners were presented with sequences of sounds that varied in either fundamental frequency (eliciting changes in pitch) or spectral centroid (eliciting changes in brightness, an important attribute of timbre), with the degree of pitch or timbre variation in each sequence parametrically manipulated. The BOLD responses from auditory cortex increased with increasing sequence variance along each perceptual dimension. The spatial extent, region, and laterality of the cortical regions most responsive to variations in pitch or timbre at the univariate level of analysis were largely overlapping. However, patterns of activation in response to pitch or timbre variations were discriminable in most subjects at an individual level using multivoxel pattern analysis, suggesting a distributed coding of the two dimensions bilaterally in human auditory cortex.

  6. Auditory interfaces: The human perceiver

    NASA Technical Reports Server (NTRS)

    Colburn, H. Steven

    1991-01-01

    A brief introduction to the basic auditory abilities of the human perceiver with particular attention toward issues that may be important for the design of auditory interfaces is presented. The importance of appropriate auditory inputs to observers with normal hearing is probably related to the role of hearing as an omnidirectional, early warning system and to its role as the primary vehicle for communication of strong personal feelings.

  7. A basic study on universal design of auditory signals in automobiles.

    PubMed

    Yamauchi, Katsuya; Choi, Jong-dae; Maiguma, Ryo; Takada, Masayuki; Iwamiya, Shin-ichiro

    2004-11-01

    In this paper, the impression of various kinds of auditory signals currently used in automobiles and a comprehensive evaluation were measured by a semantic differential method. The desirable acoustic characteristic was examined for each type of auditory signal. Sharp sounds with dominant high-frequency components were not suitable for auditory signals in automobiles. This trend is expedient for the aged whose auditory sensitivity in the high frequency region is lower. When intermittent sounds were used, a longer OFF time was suitable. Generally, "dull (not sharp)" and "calm" sounds were appropriate for auditory signals. Furthermore, the comparison between the frequency spectrum of interior noise in automobiles and that of suitable sounds for various auditory signals indicates that the suitable sounds are not easily masked. The suitable auditory signals for various purposes is a good solution from the viewpoint of universal design.

  8. [EFFECT OF HYPOXIA ON THE CHARACTERISTICS OF HUMAN AUDITORY PERCEPTION].

    PubMed

    Ogorodnikova, E A; Stolvaroya, E I; Pak, S P; Bogomolova, G M; Korolev, Yu N; Golubev, V N; Lesova, E M

    2015-12-01

    The effect of normobaric hypoxic hypoxia (single and interval training) on the characteristics of human hearing was investigated. The hearing thresholds (tonal audiograms), reaction time of subjects in psychophysical experiments (pause detection, perception of rhythm and target words), and short-term auditory memory were measured before and after hypoxia. The obtained data revealed improvement of the auditory sensitivity and characteristics of working memory, and increasing of response speed. It was demonstrated that interval hypoxic training had positive effect on the processes of auditory perception.

  9. Longitudinal auditory learning facilitates auditory cognition as revealed by microstate analysis.

    PubMed

    Giroud, Nathalie; Lemke, Ulrike; Reich, Philip; Matthes, Katarina L; Meyer, Martin

    2017-02-01

    The current study investigates cognitive processes as reflected in late auditory-evoked potentials as a function of longitudinal auditory learning. A normal hearing adult sample (n=15) performed an active oddball task at three consecutive time points (TPs) arranged at two week intervals, and during which EEG was recorded. The stimuli comprised of syllables consisting of a natural fricative (/sh/,/s/,/f/) embedded between two /a/ sounds, as well as morphed transitions of the two syllables that served as deviants. Perceptual and cognitive modulations as reflected in the onset and the mean global field power (GFP) of N2b- and P3b-related microstates across four weeks were investigated. We found that the onset of P3b-like microstates, but not N2b-like microstates decreased across TPs, more strongly for difficult deviants leading to similar onsets for difficult and easy stimuli after repeated exposure. The mean GFP of all N2b-like and P3b-like microstates increased more in spectrally strong deviants compared to weak deviants, leading to a distinctive activation for each stimulus after learning. Our results indicate that longitudinal training of auditory-related cognitive mechanisms such as stimulus categorization, attention and memory updating processes are an indispensable part of successful auditory learning. This suggests that future studies should focus on the potential benefits of cognitive processes in auditory training.

  10. Subcortical processing in auditory communication.

    PubMed

    Pannese, Alessia; Grandjean, Didier; Frühholz, Sascha

    2015-10-01

    The voice is a rich source of information, which the human brain has evolved to decode and interpret. Empirical observations have shown that the human auditory system is especially sensitive to the human voice, and that activity within the voice-sensitive regions of the primary and secondary auditory cortex is modulated by the emotional quality of the vocal signal, and may therefore subserve, with frontal regions, the cognitive ability to correctly identify the speaker's affective state. So far, the network involved in the processing of vocal affect has been mainly characterised at the cortical level. However, anatomical and functional evidence suggests that acoustic information relevant to the affective quality of the auditory signal might be processed prior to the auditory cortex. Here we review the animal and human literature on the main subcortical structures along the auditory pathway, and propose a model whereby the distinction between different types of vocal affect in auditory communication begins at very early stages of auditory processing, and relies on the analysis of individual acoustic features of the sound signal. We further suggest that this early feature-based decoding occurs at a subcortical level along the ascending auditory pathway, and provides a preliminary coarse (but fast) characterisation of the affective quality of the auditory signal before the more refined (but slower) cortical processing is completed.

  11. Automated auditory recognition training and testing

    PubMed Central

    Gess, Austen; Schneider, David M.; Vyas, Akshat; Woolley, Sarah M. N.

    2011-01-01

    Laboratory training and testing of auditory recognition skills in animals is important for understanding animal communication systems that depend on auditory cues. Songbirds are commonly studied because of their exceptional ability to learn complex vocalizations. In recent years, mounting interest in the perceptual abilities of songbirds has increased the demand for laboratory behavioural training and testing paradigms. Here, we describe and demonstrate the success of a method for auditory discrimination experiments, including all the necessary hardware, training procedures and freely-available, versatile software. The system can run several behavioural training and testing paradigms, including operant (go-nogo, stimulus preference, and two-alternative forced choice) and classical conditioning tasks. The software and some hardware components can be used with any laboratory animal that learns and responds to sensory cues. The peripheral hardware and training procedures are designed for use with songbirds and auditory stimuli. Using the go-nogo paradigm of the training system, we show that adult zebra finches learn to recognize and correctly classify individual female calls and male songs. We also show that learning the task generalizes to new stimulus classes; birds that learned the task with calls subsequently learned to recognize songs faster than did birds that learned the task and songs at the same time. PMID:21857717

  12. Idealized Computational Models for Auditory Receptive Fields

    PubMed Central

    Lindeberg, Tony; Friberg, Anders

    2015-01-01

    We present a theory by which idealized models of auditory receptive fields can be derived in a principled axiomatic manner, from a set of structural properties to (i) enable invariance of receptive field responses under natural sound transformations and (ii) ensure internal consistency between spectro-temporal receptive fields at different temporal and spectral scales. For defining a time-frequency transformation of a purely temporal sound signal, it is shown that the framework allows for a new way of deriving the Gabor and Gammatone filters as well as a novel family of generalized Gammatone filters, with additional degrees of freedom to obtain different trade-offs between the spectral selectivity and the temporal delay of time-causal temporal window functions. When applied to the definition of a second-layer of receptive fields from a spectrogram, it is shown that the framework leads to two canonical families of spectro-temporal receptive fields, in terms of spectro-temporal derivatives of either spectro-temporal Gaussian kernels for non-causal time or a cascade of time-causal first-order integrators over the temporal domain and a Gaussian filter over the logspectral domain. For each filter family, the spectro-temporal receptive fields can be either separable over the time-frequency domain or be adapted to local glissando transformations that represent variations in logarithmic frequencies over time. Within each domain of either non-causal or time-causal time, these receptive field families are derived by uniqueness from the assumptions. It is demonstrated how the presented framework allows for computation of basic auditory features for audio processing and that it leads to predictions about auditory receptive fields with good qualitative similarity to biological receptive fields measured in the inferior colliculus (ICC) and primary auditory cortex (A1) of mammals. PMID:25822973

  13. Abnormal Effective Connectivity in the Brain is Involved in Auditory Verbal Hallucinations in Schizophrenia.

    PubMed

    Li, Baojuan; Cui, Long-Biao; Xi, Yi-Bin; Friston, Karl J; Guo, Fan; Wang, Hua-Ning; Zhang, Lin-Chuan; Bai, Yuan-Han; Tan, Qing-Rong; Yin, Hong; Lu, Hongbing

    2017-02-21

    Information flow among auditory and language processing-related regions implicated in the pathophysiology of auditory verbal hallucinations (AVHs) in schizophrenia (SZ) remains unclear. In this study, we used stochastic dynamic causal modeling (sDCM) to quantify connections among the left dorsolateral prefrontal cortex (inner speech monitoring), auditory cortex (auditory processing), hippocampus (memory retrieval), thalamus (information filtering), and Broca's area (language production) in 17 first-episode drug-naïve SZ patients with AVHs, 15 without AVHs, and 19 healthy controls using resting-state functional magnetic resonance imaging. Finally, we performed receiver operating characteristic (ROC) analysis and correlation analysis between image measures and symptoms. sDCM revealed an increased sensitivity of auditory cortex to its thalamic afferents and a decrease in hippocampal sensitivity to auditory inputs in SZ patients with AVHs. The area under the ROC curve showed the diagnostic value of these two connections to distinguish SZ patients with AVHs from those without AVHs. Furthermore, we found a positive correlation between the strength of the connectivity from Broca's area to the auditory cortex and the severity of AVHs. These findings demonstrate, for the first time, augmented AVH-specific excitatory afferents from the thalamus to the auditory cortex in SZ patients, resulting in auditory perception without external auditory stimuli. Our results provide insights into the neural mechanisms underlying AVHs in SZ. This thalamic-auditory cortical-hippocampal dysconnectivity may also serve as a diagnostic biomarker of AVHs in SZ and a therapeutic target based on direct in vivo evidence.

  14. Assessment of ionization chamber correction factors in photon beams using a time saving strategy with PENELOPE code.

    PubMed

    Reis, C Q M; Nicolucci, P

    2016-02-01

    The purpose of this study was to investigate Monte Carlo-based perturbation and beam quality correction factors for ionization chambers in photon beams using a saving time strategy with PENELOPE code. Simulations for calculating absorbed doses to water using full spectra of photon beams impinging the whole water phantom and those using a phase-space file previously stored around the point of interest were performed and compared. The widely used NE2571 ionization chamber was modeled with PENELOPE using data from the literature in order to calculate absorbed doses to the air cavity of the chamber. Absorbed doses to water at reference depth were also calculated for providing the perturbation and beam quality correction factors for that chamber in high energy photon beams. Results obtained in this study show that simulations with phase-space files appropriately stored can be up to ten times shorter than using a full spectrum of photon beams in the input-file. Values of kQ and its components for the NE2571 ionization chamber showed good agreement with published values in the literature and are provided with typical statistical uncertainties of 0.2%. Comparisons to kQ values published in current dosimetry protocols such as the AAPM TG-51 and IAEA TRS-398 showed maximum percentage differences of 0.1% and 0.6% respectively. The proposed strategy presented a significant efficiency gain and can be applied for a variety of ionization chambers and clinical photon beams.

  15. Negative emotion provides cues for orienting auditory spatial attention

    PubMed Central

    Asutay, Erkin; Västfjäll, Daniel

    2015-01-01

    The auditory stimuli provide information about the objects and events around us. They can also carry biologically significant emotional information (such as unseen dangers and conspecific vocalizations), which provides cues for allocation of attention and mental resources. Here, we investigated whether task-irrelevant auditory emotional information can provide cues for orientation of auditory spatial attention. We employed a covert spatial orienting task: the dot-probe task. In each trial, two task-irrelevant auditory cues were simultaneously presented at two separate locations (left–right or front–back). Environmental sounds were selected to form emotional vs. neutral, emotional vs. emotional, and neutral vs. neutral cue pairs. The participants’ task was to detect the location of an acoustic target that was presented immediately after the task-irrelevant auditory cues. The target was presented at the same location as one of the auditory cues. The results indicated that participants were significantly faster to locate the target when it replaced the negative cue compared to when it replaced the neutral cue. The positive cues did not produce a clear attentional bias. Further, same valence pairs (emotional–emotional or neutral–neutral) did not modulate reaction times due to a lack of spatial attention capture by one cue in the pair. Taken together, the results indicate that negative affect can provide cues for the orientation of spatial attention in the auditory domain. PMID:26029149

  16. Estimating NIRR-1 burn-up and core life time expectancy using the codes WIMS and CITATION

    NASA Astrophysics Data System (ADS)

    Yahaya, B.; Ahmed, Y. A.; Balogun, G. I.; Agbo, S. A.

    The Nigeria Research Reactor-1 (NIRR-1) is a low power miniature neutron source reactor (MNSR) located at the Centre for Energy Research and Training, Ahmadu Bello University, Zaria Nigeria. The reactor went critical with initial core excess reactivity of 3.77 mk. The NIRR-1 cold excess reactivity measured at the time of commissioning was determined to be 4.97 mk, which is more than the licensed range of 3.5-4 mk. Hence some cadmium poison worth -1.2 mk was inserted into one of the inner irradiation sites which act as reactivity regulating device in order to reduce the core excess reactivity to 3.77 mk, which is within recommended licensed range of 3.5 mk and 4.0 mk. In this present study, the burn-up calculations of the NIRR-1 fuel and the estimation of the core life time expectancy after 10 years (the reactor core expected cycle) have been conducted using the codes WIMS and CITATION. The burn-up analyses carried out indicated that the excess reactivity of NIRR-1 follows a linear decreasing trend having 216 Effective Full Power Days (EFPD) operations. The reactivity worth of top beryllium shim data plates was calculated to be 19.072 mk. The result of depletion analysis for NIRR-1 core shows that (7.9947 ± 0.0008) g of U-235 was consumed for the period of 12 years of operating time. The production of the build-up of Pu-239 was found to be (0.0347 ± 0.0043) g. The core life time estimated in this research was found to be 30.33 years. This is in good agreement with the literature

  17. Different timescales for the neural coding of consonant and vowel sounds.

    PubMed

    Perez, Claudia A; Engineer, Crystal T; Jakkamsetti, Vikram; Carraway, Ryan S; Perry, Matthew S; Kilgard, Michael P

    2013-03-01

    Psychophysical, clinical, and imaging evidence suggests that consonant and vowel sounds have distinct neural representations. This study tests the hypothesis that consonant and vowel sounds are represented on different timescales within the same population of neurons by comparing behavioral discrimination with neural discrimination based on activity recorded in rat inferior colliculus and primary auditory cortex. Performance on 9 vowel discrimination tasks was highly correlated with neural discrimination based on spike count and was not correlated when spike timing was preserved. In contrast, performance on 11 consonant discrimination tasks was highly correlated with neural discrimination when spike timing was preserved and not when spike timing was eliminated. These results suggest that in the early stages of auditory processing, spike count encodes vowel sounds and spike timing encodes consonant sounds. These distinct coding strategies likely contribute to the robust nature of speech sound representations and may help explain some aspects of developmental and acquired speech processing disorders.
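
A toy sketch of the count-versus-timing distinction described above (hypothetical spike times, not the recorded data): two responses with identical spike counts are indistinguishable to a rate code, but become separable once spike timing is preserved at fine binning.

```python
def bin_spikes(spike_times_ms, window_ms=400, bin_ms=1.0):
    """Bin spike times into a fixed-resolution count vector."""
    n_bins = int(window_ms / bin_ms)
    v = [0] * n_bins
    for t in spike_times_ms:
        v[min(int(t / bin_ms), n_bins - 1)] += 1
    return v

def dist(u, v):
    """Euclidean distance between two response vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

# Toy responses: same total spike count, different temporal pattern
resp_da = [5, 7, 9, 100, 200, 300]     # early onset burst (e.g. /d/-like)
resp_ba = [30, 32, 34, 100, 200, 300]  # later onset burst (e.g. /b/-like)

# A rate (spike-count) code cannot separate them ...
print(len(resp_da) - len(resp_ba))  # 0: identical counts

# ... but a fine-timescale code can
d_fine = dist(bin_spikes(resp_da, bin_ms=1.0), bin_spikes(resp_ba, bin_ms=1.0))
d_coarse = dist(bin_spikes(resp_da, bin_ms=400.0), bin_spikes(resp_ba, bin_ms=400.0))
print(d_fine > 0 and d_coarse == 0)  # True
```

With a single 400-ms bin the two responses collapse onto the same count, mirroring how a spike-count code suffices for vowels but discards the onset timing that distinguishes consonants.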

  18. Explaining the high voice superiority effect in polyphonic music: evidence from cortical evoked potentials and peripheral auditory models.

    PubMed

    Trainor, Laurel J; Marie, Céline; Bruce, Ian C; Bidelman, Gavin M

    2014-02-01

    Natural auditory environments contain multiple simultaneously-sounding objects and the auditory system must parse the incoming complex sound wave they collectively create into parts that represent each of these individual objects. Music often similarly requires processing of more than one voice or stream at the same time, and behavioral studies demonstrate that human listeners show a systematic perceptual bias in processing the highest voice in multi-voiced music. Here, we review studies utilizing event-related brain potentials (ERPs), which support the notions that (1) separate memory traces are formed for two simultaneous voices (even without conscious awareness) in auditory cortex and (2) adults show more robust encoding (i.e., larger ERP responses) to deviant pitches in the higher than in the lower voice, indicating better encoding of the former. Furthermore, infants also show this high-voice superiority effect, suggesting that the perceptual dominance observed across studies might result from neurophysiological characteristics of the peripheral auditory system. Although musically untrained adults show smaller responses in general than musically trained adults, both groups similarly show a more robust cortical representation of the higher than of the lower voice. Finally, years of experience playing a bass-range instrument reduces but does not reverse the high voice superiority effect, indicating that although it can be modified, it is not highly neuroplastic. New modeling experiments examined the possibility that characteristics of middle-ear filtering and cochlear dynamics (e.g., suppression) reflected in auditory nerve firing patterns might account for the higher-voice superiority effect. Simulations show that both place and temporal AN coding schemes predict a high-voice superiority well across a wide range of interval spacings and registers. Collectively, we infer an innate, peripheral origin for the higher-voice superiority observed in human

  19. The Central Auditory Processing Kit[TM]. Book 1: Auditory Memory [and] Book 2: Auditory Discrimination, Auditory Closure, and Auditory Synthesis [and] Book 3: Auditory Figure-Ground, Auditory Cohesion, Auditory Binaural Integration, and Compensatory Strategies.

    ERIC Educational Resources Information Center

    Mokhemar, Mary Ann

    This kit for assessing central auditory processing disorders (CAPD), in children in grades 1 through 8 includes 3 books, 14 full-color cards with picture scenes, and a card depicting a phone key pad, all contained in a sturdy carrying case. The units in each of the three books correspond with auditory skill areas most commonly addressed in…

  20. Auditory Discrimination and Auditory Sensory Behaviours in Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Jones, Catherine R. G.; Happe, Francesca; Baird, Gillian; Simonoff, Emily; Marsden, Anita J. S.; Tregay, Jenifer; Phillips, Rebecca J.; Goswami, Usha; Thomson, Jennifer M.; Charman, Tony

    2009-01-01

    It has been hypothesised that auditory processing may be enhanced in autism spectrum disorders (ASD). We tested auditory discrimination ability in 72 adolescents with ASD (39 childhood autism; 33 other ASD) and 57 IQ and age-matched controls, assessing their capacity for successful discrimination of the frequency, intensity and duration…

  1. Ion channel noise can explain firing correlation in auditory nerves.

    PubMed

    Moezzi, Bahar; Iannella, Nicolangelo; McDonnell, Mark D

    2016-10-01

    Neural spike trains are commonly characterized as a Poisson point process. However, the Poisson assumption is a poor model for spiking in auditory nerve fibres because it is known that interspike intervals display positive correlation over long time scales and negative correlation over shorter time scales. We have therefore developed a biophysical model based on the well-known Meddis model of the peripheral auditory system, to produce simulated auditory nerve fibre spiking statistics that more closely match the firing correlations observed in empirical data. We achieve this by introducing biophysically realistic ion channel noise to an inner hair cell membrane potential model that includes fractal fast potassium channels and deterministic slow potassium channels. We succeed in producing simulated spike train statistics that match empirically observed firing correlations. Our model thus replicates macro-scale stochastic spiking statistics in the auditory nerve fibres due to modeling stochasticity at the micro-scale of potassium channels.
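
As a minimal illustration of why the Poisson assumption precludes the observed ISI correlations, the sketch below (illustrative rate and sample size, not the paper's biophysical model) generates a homogeneous Poisson spike train, whose interspike intervals are i.i.d. exponential, and verifies that the lag-1 serial correlation is near zero:

```python
import random

def poisson_isis(rate_hz, n, seed=0):
    """ISIs of a homogeneous Poisson process: i.i.d. exponential draws."""
    rng = random.Random(seed)
    return [rng.expovariate(rate_hz) for _ in range(n)]

def serial_correlation(isis, lag=1):
    """Pearson correlation between ISI_k and ISI_{k+lag}."""
    x, y = isis[:-lag], isis[lag:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    return cov / (vx * vy) ** 0.5

isis = poisson_isis(100.0, 50_000)
rho = serial_correlation(isis)
print(abs(rho) < 0.02)  # True: independent ISIs give ~zero serial correlation
```

Empirical auditory nerve data show nonzero serial correlations at short and long lags, which is precisely what a plain Poisson model cannot reproduce and what the ion-channel-noise model above is built to capture.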

  2. Transformation of temporal sequences in the zebra finch auditory system

    PubMed Central

    Lim, Yoonseob; Lagoy, Ryan; Shinn-Cunningham, Barbara G; Gardner, Timothy J

    2016-01-01

    This study examines how temporally patterned stimuli are transformed as they propagate from primary to secondary zones in the thalamorecipient auditory pallium in zebra finches. Using a new class of synthetic click stimuli, we find a robust mapping from temporal sequences in the primary zone to distinct population vectors in secondary auditory areas. We tested whether songbirds could discriminate synthetic click sequences in an operant setup and found that a robust behavioral discrimination is present for click sequences composed of intervals ranging from 11 ms to 40 ms, but breaks down for stimuli composed of longer inter-click intervals. This work suggests that the analog of the songbird auditory cortex transforms temporal patterns to sequence-selective population responses or 'spatial codes', and that these distinct population responses contribute to behavioral discrimination of temporally complex sounds. DOI: http://dx.doi.org/10.7554/eLife.18205.001 PMID:27897971

  3. Predicting "When" in Discourse Engages the Human Dorsal Auditory Stream: An fMRI Study Using Naturalistic Stories.

    PubMed

    Kandylaki, Katerina Danae; Nagels, Arne; Tune, Sarah; Kircher, Tilo; Wiese, Richard; Schlesewsky, Matthias; Bornkessel-Schlesewsky, Ina

    2016-11-30

    The hierarchical organization of human cortical circuits integrates information across different timescales via temporal receptive windows, which increase in length from lower to higher levels of the cortical hierarchy (Hasson et al., 2015). A recent neurobiological model of higher-order language processing (Bornkessel-Schlesewsky et al., 2015) posits that temporal receptive windows in the dorsal auditory stream provide the basis for a hierarchically organized predictive coding architecture (Friston and Kiebel, 2009). In this stream, a nested set of internal models generates time-based ("when") predictions for upcoming input at different linguistic levels (sounds, words, sentences, discourse). Here, we used naturalistic stories to test the hypothesis that multi-sentence, discourse-level predictions are processed in the dorsal auditory stream, yielding attenuated BOLD responses for highly predicted versus less strongly predicted language input. The results were as hypothesized: discourse-related cues, such as passive voice, which effect a higher predictability of remention for a character at a later point within a story, led to attenuated BOLD responses for auditory input of high versus low predictability within the dorsal auditory stream, specifically in the inferior parietal lobule, middle frontal gyrus, and dorsal parts of the inferior frontal gyrus, among other areas. Additionally, we found effects of content-related ("what") predictions in ventral regions. These findings provide novel evidence that hierarchical predictive coding extends to discourse-level processing in natural language. Importantly, they ground language processing on a hierarchically organized predictive network, as a common underlying neurobiological basis shared with other brain functions.

  4. High-accuracy and long-range Brillouin optical time-domain analysis sensor based on the combination of pulse prepump technique and complementary coding

    NASA Astrophysics Data System (ADS)

    Sun, Qiao; Tu, Xiaobo; Lu, Yang; Sun, Shilin; Meng, Zhou

    2016-06-01

    A Brillouin optical time-domain analysis (BOTDA) sensor that combines the conventional complementary coding with the pulse prepump technique for high-accuracy and long-range distributed sensing is implemented and analyzed. The employment of the complementary coding provides an enhanced signal-to-noise ratio (SNR) of the sensing system and an extended sensing distance, and the measurement time is also reduced compared with a BOTDA sensor using linear coding. The combination of pulse prepump technique enables the establishment of a preactivated acoustic field in each pump pulse of the complementary codeword, which ensures measurements of high spatial resolution and high frequency accuracy. The feasibility of the prepumped complementary coding is analyzed theoretically and experimentally. The experiments are carried out beyond 50-km single-mode fiber, and experimental results show the capabilities of the proposed scheme to achieve 1-m spatial resolution with temperature and strain resolutions equal to ~1.6°C and ~32 μɛ, and 2-m spatial resolution with temperature and strain resolutions equal to ~0.3°C and ~6 μɛ, respectively. A longer sensing distance with the same spatial resolution and measurement accuracy can be achieved through increasing the code length of the prepumped complementary code.
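
The SNR benefit of complementary coding rests on the Golay property: the autocorrelations of the two codewords of a pair sum to a delta function, so correlation sidelobes cancel exactly. A minimal sketch of the standard recursive Golay construction (generic, not the exact pulse format used in the experiments):

```python
def autocorr(seq):
    """Aperiodic autocorrelation of a sequence at all non-negative lags."""
    n = len(seq)
    return [sum(seq[i] * seq[i + k] for i in range(n - k)) for k in range(n)]

def golay_extend(a, b):
    """Standard recursive construction: (a|b, a|-b) doubles a Golay pair."""
    return a + b, a + [-x for x in b]

a, b = [1], [1]       # trivial length-1 complementary pair
for _ in range(3):    # grow to a length-8 pair
    a, b = golay_extend(a, b)

summed = [ra + rb for ra, rb in zip(autocorr(a), autocorr(b))]
print(summed)  # [16, 0, 0, 0, 0, 0, 0, 0]: all off-peak lags cancel
```

Because the sidelobes cancel only in the sum, a complementary-coded BOTDA launches both codewords and adds the decoded traces, trading two acquisitions for a coding gain that grows with code length, consistent with the abstract's note that longer codes extend the sensing distance.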

  5. Issues in Human Auditory Development

    ERIC Educational Resources Information Center

    Werner, Lynne A.

    2007-01-01

    The human auditory system is often portrayed as precocious in its development. In fact, many aspects of basic auditory processing appear to be adult-like by the middle of the first year of postnatal life. However, processes such as attention and sound source determination take much longer to develop. Immaturity of higher-level processes limits the…

  6. Word Recognition in Auditory Cortex

    ERIC Educational Resources Information Center

    DeWitt, Iain D. J.

    2013-01-01

    Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…

  7. Attention to natural auditory signals.

    PubMed

    Caporello Bluvas, Emily; Gentner, Timothy Q

    2013-11-01

    The challenge of understanding how the brain processes natural signals is compounded by the fact that such signals are often tied closely to specific natural behaviors and natural environments. This added complexity is especially true for auditory communication signals that can carry information at multiple hierarchical levels, and often occur in the context of other competing communication signals. Selective attention provides a mechanism to focus processing resources on specific components of auditory signals, and simultaneously suppress responses to unwanted signals or noise. Although selective auditory attention has been well-studied behaviorally, very little is known about how selective auditory attention shapes the processing of natural auditory signals, and how the mechanisms of auditory attention are implemented in single neurons or neural circuits. Here we review the role of selective attention in modulating auditory responses to complex natural stimuli in humans. We then suggest how the current understanding can be applied to the study of selective auditory attention in the context of natural signal processing at the level of single neurons and populations in animal models amenable to invasive neuroscience techniques. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives".

  8. Auditory neglect and related disorders.

    PubMed

    Gutschalk, Alexander; Dykstra, Andrew

    2015-01-01

    Neglect is a neurologic disorder, typically associated with lesions of the right hemisphere, in which patients are biased towards their ipsilesional - usually right - side of space while awareness for their contralesional - usually left - side is reduced or absent. Neglect is a multimodal disorder that often includes deficits in the auditory domain. Classically, auditory extinction, in which left-sided sounds that are correctly perceived in isolation are not detected in the presence of synchronous right-sided stimulation, has been considered the primary sign of auditory neglect. However, auditory extinction can also be observed after unilateral auditory cortex lesions and is thus not specific for neglect. Recent research has shown that patients with neglect are also impaired in maintaining sustained attention, on both sides, a fact that is reflected by an impairment of auditory target detection in continuous stimulation conditions. Perhaps the most impressive auditory symptom in full-blown neglect is alloacusis, in which patients mislocalize left-sided sound sources to their right, although even patients with less severe neglect still often show disturbance of auditory spatial perception, most commonly a lateralization bias towards the right. We discuss how these various disorders may be explained by a single model of neglect and review emerging interventions for patient rehabilitation.

  9. Unidirectional transparent signal injection in finite-difference time-domain electromagnetic codes -application to reflectometry simulations

    SciTech Connect

    Silva, F. da; Hacquin, S.

    2005-03-01

    We present a novel numerical signal injection technique allowing unidirectional injection of a wave in a wave-guiding structure, applicable to 2D finite-difference time-domain electromagnetic codes, both Maxwell and wave-equation. It is particularly suited to continuous wave radar-like simulations. The scheme gives a unidirectional injection of a signal while being transparent to waves propagating in the opposite direction (directional coupling). The reflected or backscattered waves (returned) are separated from the probing waves, allowing direct access to the information on amplitude and phase of the returned wave. It also facilitates the signal processing used to extract the phase derivative (or group delay) when simulating radar systems. Although general, the technique is particularly suited to swept frequency sources (frequency modulated) in the context of reflectometry, a fusion plasma diagnostic. The UTS applications presented here are restricted to fusion plasma reflectometry simulations for different physical situations. This method can, nevertheless, also be used in other dispersive media such as dielectrics, being useful, for example, in the simulation of plasma filled waveguides or directional couplers.

  10. Parallel Monte Carlo transport modeling in the context of a time-dependent, three-dimensional multi-physics code

    SciTech Connect

    Procassini, R.J.

    1997-12-31

    The fine-scale, multi-space resolution that is envisioned for accurate simulations of complex weapons systems in three spatial dimensions implies flop-rate and memory-storage requirements that will only be obtained in the near future through the use of parallel computational techniques. Since the Monte Carlo transport models in these simulations usually stress both of these computational resources, they are prime candidates for parallelization. The MONACO Monte Carlo transport package, which is currently under development at LLNL, will utilize two types of parallelism within the context of a multi-physics design code: decomposition of the spatial domain across processors (spatial parallelism) and distribution of particles in a given spatial subdomain across additional processors (particle parallelism). This implementation of the package will utilize explicit data communication between domains (message passing). Such a parallel implementation of a Monte Carlo transport model will result in non-deterministic communication patterns. The communication of particles between subdomains during a Monte Carlo time step may require a significant level of effort to achieve a high parallel efficiency.

  11. Auditory Model: Effects on Learning under Blocked and Random Practice Schedules

    ERIC Educational Resources Information Center

    Han, Dong-Wook; Shea, Charles H.

    2008-01-01

    An experiment was conducted to determine the impact of an auditory model on blocked, random, and mixed practice schedules of three five-segment timing sequences (relative time constant). We were interested in whether or not the auditory model differentially affected the learning of relative and absolute timing under blocked and random practice.…

  12. Polar Codes

    DTIC Science & Technology

    2014-12-01

    This report compares polar codes with other forward error correction (FEC) methods: a turbo code, a low density parity check (LDPC) code, a Reed–Solomon code, and three convolutional codes. Many civilian systems use LDPC FEC codes, and the Navy is planning to use LDPC for some future systems.

  13. Training in rapid auditory processing ameliorates auditory comprehension in aphasic patients: a randomized controlled pilot study.

    PubMed

    Szelag, Elzbieta; Lewandowska, Monika; Wolak, Tomasz; Seniow, Joanna; Poniatowska, Renata; Pöppel, Ernst; Szymaszek, Aneta

    2014-03-15

    Experimental studies have often reported close associations between rapid auditory processing and language competency. The present study was aimed at improving auditory comprehension in aphasic patients following specific training in the perception of temporal order (TO) of events. We tested 18 aphasic patients showing both comprehension and TO perception deficits. Auditory comprehension was assessed by the Token Test, phonemic awareness and Voice-Onset-Time Test. The TO perception was assessed using the auditory Temporal-Order-Threshold, defined as the shortest interval between two consecutive stimuli necessary to report correctly their before-after relation. Aphasic patients participated in eight 45-minute sessions of either specific temporal training (TT, n=11) aimed at improving sequencing abilities, or control non-temporal training (NT, n=7) focussed on volume discrimination. The TT yielded improved TO perception; moreover, a transfer of improvement was observed from the time domain to the language domain, which was untrained during the training. The NT improved neither the TO perception nor comprehension in any language test. These results agree with previous studies that reported improved language competency following TT in language-learning-impaired or dyslexic children; our results indicate such benefits for the first time in aphasic patients.

  14. Visual and Auditory Sensitivities and Discriminations

    DTIC Science & Technology

    2007-11-02

    scene. The visual scene was rendered and updated by an Octane workstation (Silicon Graphics Inc.). It was projected onto a wall 3.5 m in front of the...represent driving in the left lane and positive positions represent driving in the right lane. A negative time headway indicates that the front bumper of...accelerator or turning the steering wheel did not alter the visual display. A brief auditory tone signaled the end of the 5-min period after which observers

  15. Electroencephalographic measures of auditory perception in dynamic acoustic environments

    NASA Astrophysics Data System (ADS)

    McMullan, Amanda R.

    We are capable of effortlessly parsing a complex scene presented to us. In order to do this, we must segregate objects from each other and from the background. While this process has been extensively studied in vision science, it remains relatively less understood in auditory science. This thesis sought to characterize the neuroelectric correlates of auditory scene analysis using electroencephalography. Chapter 2 determined components evoked by first-order energy boundaries and second-order pitch boundaries. Chapter 3 determined components evoked by first-order and second-order discontinuous motion boundaries. Both of these chapters focused on analysis of event-related potential (ERP) waveforms and time-frequency analysis. In addition, these chapters investigated the contralateral nature of a negative ERP component. These results extend the current knowledge of auditory scene analysis by providing a starting point for discussing and characterizing first-order and second-order boundaries in an auditory scene.

  16. Modality specific neural correlates of auditory and somatic hallucinations

    PubMed Central

    Shergill, S; Cameron, L; Brammer, M; Williams, S; Murray, R; McGuire, P

    2001-01-01

    Somatic hallucinations occur in schizophrenia and other psychotic disorders, although auditory hallucinations are more common. Although the neural correlates of auditory hallucinations have been described in several neuroimaging studies, little is known of the pathophysiology of somatic hallucinations. Functional magnetic resonance imaging (fMRI) was used to compare the distribution of brain activity during somatic and auditory verbal hallucinations, occurring at different times in a 36 year old man with schizophrenia. Somatic hallucinations were associated with activation in the primary somatosensory and posterior parietal cortex, areas that normally mediate tactile perception. Auditory hallucinations were associated with activation in the middle and superior temporal cortex, areas involved in processing external speech. Hallucinations in a given modality seem to involve areas that normally process sensory information in that modality. PMID:11606687

  17. Adaptation to vocal expressions reveals multistep perception of auditory emotion.

    PubMed

    Bestelmeyer, Patricia E G; Maurage, Pierre; Rouger, Julien; Latinus, Marianne; Belin, Pascal

    2014-06-11

    The human voice carries speech as well as important nonlinguistic signals that influence our social interactions. Among these cues that impact our behavior and communication with other people is the perceived emotional state of the speaker. A theoretical framework for the neural processing stages of emotional prosody has suggested that auditory emotion is perceived in multiple steps (Schirmer and Kotz, 2006) involving low-level auditory analysis and integration of the acoustic information followed by higher-level cognition. Empirical evidence for this multistep processing chain, however, is still sparse. We examined this question using functional magnetic resonance imaging and a continuous carry-over design (Aguirre, 2007) to measure brain activity while volunteers listened to non-speech-affective vocalizations morphed on a continuum between anger and fear. Analyses dissociated neuronal adaptation effects induced by similarity in perceived emotional content between consecutive stimuli from those induced by their acoustic similarity. We found that bilateral voice-sensitive auditory regions as well as right amygdala coded the physical difference between consecutive stimuli. In contrast, activity in bilateral anterior insulae, medial superior frontal cortex, precuneus, and subcortical regions such as bilateral hippocampi depended predominantly on the perceptual difference between morphs. Our results suggest that the processing of vocal affect recognition is a multistep process involving largely distinct neural networks. Amygdala and auditory areas predominantly code emotion-related acoustic information while more anterior insular and prefrontal regions respond to the abstract, cognitive representation of vocal affect.

  18. Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding

    PubMed Central

    Wu, Yueying; Jia, Kebin; Gao, Guandong

    2016-01-01

    In high efficiency video coding (HEVC), coding tree contributes to excellent compression performance. However, coding tree brings extremely high computational complexity. Innovative works for improving coding tree to further reduce encoding time are stated in this paper. A novel low complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. Firstly, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content change (CC). Secondly, a CU coding tree probability model is proposed for modeling and predicting CU distribution. Eventually, a CU coding tree probability update is proposed, aiming to address probabilistic model distortion problems caused by CC. Experimental results show that the proposed low complexity CU coding tree mechanism significantly reduces encoding time by 27% for lossy coding and 42% for visually lossless coding and lossless coding. The proposed low complexity CU coding tree mechanism devotes to improving coding performance under various application conditions. PMID:26999741

  19. Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding.

    PubMed

    Gao, Yuan; Liu, Pengyu; Wu, Yueying; Jia, Kebin; Gao, Guandong

    2016-01-01

    In high efficiency video coding (HEVC), coding tree contributes to excellent compression performance. However, coding tree brings extremely high computational complexity. Innovative works for improving coding tree to further reduce encoding time are stated in this paper. A novel low complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. Firstly, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content change (CC). Secondly, a CU coding tree probability model is proposed for modeling and predicting CU distribution. Eventually, a CU coding tree probability update is proposed, aiming to address probabilistic model distortion problems caused by CC. Experimental results show that the proposed low complexity CU coding tree mechanism significantly reduces encoding time by 27% for lossy coding and 42% for visually lossless coding and lossless coding. The proposed low complexity CU coding tree mechanism devotes to improving coding performance under various application conditions.
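
The general idea of a CU split probability model with early termination can be sketched as follows; the class name, QP bucketing, prior, and threshold below are illustrative assumptions, not values taken from the paper:

```python
from collections import defaultdict

class SplitProbabilityModel:
    """Toy CU split predictor: tracks P(split | depth, QP bucket) from past
    decisions and skips the costly split search when that probability is low."""

    def __init__(self, skip_threshold=0.05, prior=(1, 1)):
        # [split count, non-split count], seeded with a weak uniform prior
        self.counts = defaultdict(lambda: list(prior))
        self.skip_threshold = skip_threshold

    def _key(self, depth, qp):
        return (depth, qp // 6)  # coarse QP bucketing (illustrative)

    def update(self, depth, qp, was_split):
        """Record the encoder's actual split decision (the probability update)."""
        self.counts[self._key(depth, qp)][0 if was_split else 1] += 1

    def p_split(self, depth, qp):
        s, ns = self.counts[self._key(depth, qp)]
        return s / (s + ns)

    def should_try_split(self, depth, qp):
        """Early termination: only run the split RD search if splitting is likely."""
        return self.p_split(depth, qp) >= self.skip_threshold

model = SplitProbabilityModel()
# Pretend the encoder observed 99 non-splits and 1 split at depth 2, QP 32:
for _ in range(99):
    model.update(2, 32, was_split=False)
model.update(2, 32, was_split=True)
print(model.should_try_split(2, 32))  # False: p ~ 0.02 < 0.05, skip split search
```

Re-estimating the counts online, as `update` does here, is one plausible way to address the model-distortion-under-content-change problem the abstract raises, since recent decisions dominate a freshly reset model.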

  20. Biological changes in auditory function following training in children with autism spectrum disorders

    PubMed Central

    2010-01-01

    Background Children with pervasive developmental disorders (PDD), such as autism spectrum disorders (ASD), often show auditory processing deficits related to their overarching language impairment. Auditory training programs such as Fast ForWord Language may potentially alleviate these deficits through training-induced improvements in auditory processing. Methods To assess the impact of auditory training on auditory function in children with ASD, brainstem and cortical responses to speech sounds presented in quiet and noise were collected from five children with ASD who completed Fast ForWord training. Results Relative to six control children with ASD who did not complete Fast ForWord, training-related changes were found in brainstem response timing (three children) and pitch-tracking (one child), and cortical response timing (all five children) after Fast ForWord use. Conclusions These results provide an objective indication of the benefit of training on auditory function for some children with ASD. PMID:20950487

  1. Visual change detection recruits auditory cortices in early deafness.

    PubMed

    Bottari, Davide; Heimler, Benedetta; Caclin, Anne; Dalmolin, Anna; Giard, Marie-Hélène; Pavani, Francesco

    2014-07-01

    Although cross-modal recruitment of early sensory areas in deafness and blindness is well established, the constraints and limits of these plastic changes remain to be understood. In the case of human deafness, for instance, it is known that visual, tactile or visuo-tactile stimuli can elicit a response within the auditory cortices. Nonetheless, both the timing of these evoked responses and the functional contribution of cross-modally recruited areas remain to be ascertained. In the present study, we examined to what extent auditory cortices of deaf humans participate in high-order visual processes, such as visual change detection. By measuring visual ERPs, in particular the visual MisMatch Negativity (vMMN), and performing source localization, we show that individuals with early deafness (N=12) recruit the auditory cortices when a change in motion direction during shape deformation occurs in a continuous visual motion stream. Remarkably this "auditory" response for visual events emerged with the same timing as the visual MMN in hearing controls (N=12), between 150 and 300 ms after the visual change. Furthermore, the recruitment of auditory cortices for visual change detection in early deaf was paired with a reduction of response within the visual system, indicating a shift of part of the computational process from visual to auditory cortices. The present study suggests that the deafened auditory cortices participate in extracting and storing visual information and in comparing upcoming visual events on-line, thus indicating that cross-modally recruited auditory cortices can reach this level of computation.

  2. Development of auditory localization accuracy and auditory spatial discrimination in children and adolescents.

    PubMed

    Kühnle, S; Ludwig, A A; Meuret, S; Küttner, C; Witte, C; Scholbach, J; Fuchs, M; Rübsamen, R

    2013-01-01

    The present study investigated the development of two parameters of spatial acoustic perception in children and adolescents with normal hearing, aged 6-18 years. Auditory localization accuracy was quantified by means of a sound source identification task and auditory spatial discrimination acuity by measuring minimum audible angles (MAA). Both low- and high-frequency noise bursts were employed in the tests, thereby separately addressing auditory processing based on interaural time and intensity differences. The setup consisted of 47 loudspeakers mounted in the frontal azimuthal hemifield, ranging from 90° left to 90° right (-90°, +90°). Target signals were presented from 8 loudspeaker positions in the left and right hemifields (±4°, ±30°, ±60° and ±90°). Localization accuracy and spatial discrimination acuity showed different developmental courses. Localization accuracy remained stable from the age of 6 onwards. In contrast, MAA thresholds and interindividual variability of spatial discrimination decreased significantly with increasing age. Across all age groups, localization was most accurate and MAA thresholds were lower for frontal than for lateral sound sources, and for low-frequency compared to high-frequency noise bursts. The study also shows better performance in spatial hearing based on interaural time differences rather than on intensity differences throughout development. These findings confirm that specific aspects of central auditory processing show continuous development during childhood up to adolescence.
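
For the low-frequency, interaural-time-difference condition, the classical Woodworth spherical-head formula gives a feel for the cue sizes at the tested azimuths (±4° to ±90°). The head radius and speed of sound below are typical adult values assumed for illustration, not measurements from this study:

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth's spherical-head approximation of the interaural time
    difference for a distant source: ITD = (r/c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

# ITD grows from 0 at the midline to roughly 0.66 ms at 90 degrees lateral
for az in (0, 4, 30, 60, 90):
    print(f"{az:2d} deg -> {woodworth_itd(az) * 1e6:6.1f} us")
```

The near-threshold ±4° targets correspond to ITDs of only a few tens of microseconds, which illustrates why frontal sources (where ITD changes fastest per degree) yield the lowest MAA thresholds.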

  3. Temporal resolution and temporal integration of short pulses at the auditory periphery of echolocating animals

    NASA Astrophysics Data System (ADS)

    Rimskaya-Korsakova, L. K.

    2004-05-01

    To explain the temporal integration and temporal resolution abilities revealed in echolocating animals by behavioral and electrophysiological experiments, the peripheral coding of sounds in the high-frequency auditory system of these animals is modeled. The stimuli are paired pulses similar to the echolocating signals of the animals. Their duration is comparable with or smaller than the time constants of the following processes: formation of the firing rate of the basilar membrane, formation of the receptor potentials of internal hair cells, and recovery of the excitability of spiral ganglion neurons. The models of auditory nerve fibers differ in spontaneous firing rate, response thresholds, and abilities to reproduce small variations of the stimulus level. The formation of the response to the second pulse of a pair of pulses in the multitude of synchronously excited high-frequency auditory nerve fibers may occur in only two ways. The first way, defined as the stochastic mechanism, implies the formation of the response to the second pulse as a result of the responses of the fibers that did not respond to the first pulse. This mechanism is based on the stochastic nature of the responses of auditory nerve fibers associated with the spontaneous firing rate. The second way, defined as the repetition mechanism, implies the appearance of repeated responses in fibers that already responded to the first pulse but suffered a decrease in their response threshold after the first spike generation. This mechanism is based on the deterministic nature of the responses of fibers associated with refractoriness. The temporal resolution of pairs of short pulses, which, according to the data of behavioral experiments, is about 0.1–0.2 ms, is explained by the formation of the response to the second pulse through the stochastic mechanism.
A complete recovery of the response to the second pulse, which, according to the data of electrophysiological studies of short-latency evoked brainstem
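    The stochastic mechanism described above lends itself to a simple Monte Carlo sketch. The code below is an illustration under stated assumptions, not the author's model: a population of fibers, each firing to a pulse with a fixed probability, where only fibers silent on the first pulse are available to answer the second. Population size and firing probability are hypothetical.

```python
import random

def paired_pulse_response(n_fibers=1000, p_fire=0.6, seed=1):
    """Monte Carlo sketch of the 'stochastic mechanism': the response to the
    second pulse of a pair is carried by fibers that, by chance, did not
    spike to the first pulse (fibers that fired are assumed refractory)."""
    rng = random.Random(seed)
    first = [rng.random() < p_fire for _ in range(n_fibers)]
    # Only fibers silent on pulse 1 are available to respond to pulse 2.
    second = [(not f) and rng.random() < p_fire for f in first]
    return sum(first), sum(second)

n1, n2 = paired_pulse_response()
# The second-pulse response is smaller by roughly the factor (1 - p_fire),
# yet nonzero: the population still signals the second pulse of the pair.
```

On this account, a detectable second-pulse response survives even at inter-pulse intervals far shorter than any single fiber's recovery time, which is the point of the mechanism.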

  4. Reduced object related negativity response indicates impaired auditory scene analysis in adults with autistic spectrum disorder.

    PubMed

    Lodhia, Veema; Brock, Jon; Johnson, Blake W; Hautus, Michael J

    2014-01-01

    Auditory Scene Analysis provides a useful framework for understanding atypical auditory perception in autism. Specifically, a failure to segregate the incoming acoustic energy into distinct auditory objects might explain the aversive reaction autistic individuals have to certain auditory stimuli or environments. Previous research with non-autistic participants has demonstrated the presence of an Object Related Negativity (ORN) in the auditory event related potential that indexes pre-attentive processes associated with auditory scene analysis. Also evident is a later P400 component that is attention dependent and thought to be related to decision-making about auditory objects. We sought to determine whether there are differences between individuals with and without autism in the levels of processing indexed by these components. Electroencephalography (EEG) was used to measure brain responses from a group of 16 autistic adults, and 16 age- and verbal-IQ-matched typically-developing adults. Auditory responses were elicited using lateralized dichotic pitch stimuli in which inter-aural timing differences create the illusory perception of a pitch that is spatially separated from a carrier noise stimulus. As in previous studies, control participants produced an ORN in response to the pitch stimuli. However, this component was significantly reduced in the participants with autism. In contrast, processing differences were not observed between the groups at the attention-dependent level (P400). These findings suggest that autistic individuals have difficulty segregating auditory stimuli into distinct auditory objects, and that this difficulty arises at an early pre-attentive level of processing.

  5. Involvement of the human midbrain and thalamus in auditory deviance detection.

    PubMed

    Cacciaglia, Raffaele; Escera, Carles; Slabu, Lavinia; Grimm, Sabine; Sanjuán, Ana; Ventura-Campos, Noelia; Ávila, César

    2015-02-01

    Prompt detection of unexpected changes in the sensory environment is critical for survival. In the auditory domain, the occurrence of a rare stimulus triggers a cascade of neurophysiological events spanning multiple time-scales. Besides the role of the mismatch negativity (MMN), whose cortical generators are located in supratemporal areas, cumulative evidence suggests that violations of auditory regularities can be detected earlier and lower in the auditory hierarchy. Recent human scalp recordings have shown signatures of auditory mismatch responses at shorter latencies than those of the MMN. Moreover, animal single-unit recordings have demonstrated that rare stimulus changes cause a release from stimulus-specific adaptation in neurons of the primary auditory cortex, the medial geniculate body (MGB), and the inferior colliculus (IC). Although these data suggest that change detection is a pervasive property of the auditory system which may reside upstream of cortical sites, direct evidence for the involvement of subcortical stages in the human auditory novelty system is lacking. Using event-related functional magnetic resonance imaging during a frequency oddball paradigm, we here report that auditory deviance detection occurs in the MGB and the IC of healthy human participants. By implementing a random condition controlling for neural refractoriness effects, we show that auditory change detection in these subcortical stations involves the encoding of statistical regularities from the acoustic input. These results provide the first direct evidence of the existence of multiple mismatch detectors nested at different levels along the human ascending auditory pathway.

  6. Cortical auditory disorders: clinical and psychoacoustic features.

    PubMed Central

    Mendez, M F; Geehan, G R

    1988-01-01

    The symptoms of two patients with bilateral cortical auditory lesions evolved from cortical deafness to other auditory syndromes: generalised auditory agnosia, amusia and/or pure word deafness, and a residual impairment of temporal sequencing. On investigation, both had dysacusis, absent middle latency evoked responses, acoustic errors in sound recognition and matching, inconsistent auditory behaviours, and similarly disturbed psychoacoustic discrimination tasks. These findings indicate that the different clinical syndromes caused by cortical auditory lesions form a spectrum of related auditory processing disorders. Differences between syndromes may depend on the degree of involvement of a primary cortical processing system, the more diffuse accessory system, and possibly the efferent auditory system. PMID:2450968

  7. Development of an efficient computer code to solve the time-dependent Navier-Stokes equations. [for predicting viscous flow fields about lifting bodies

    NASA Technical Reports Server (NTRS)

    Harp, J. L., Jr.; Oatway, T. P.

    1975-01-01

    A research effort was conducted with the goal of reducing the computer time of a Navier-Stokes computer code for prediction of viscous flow fields about lifting bodies. A two-dimensional, time-dependent, laminar, transonic computer code (STOKES) was modified to incorporate a non-uniform time-step procedure. The non-uniform time-step requires updating a zone only as often as required by its own stability criteria or those of its immediate neighbors. In the uniform time-step scheme, each zone is updated as often as required by the least stable zone of the finite-difference mesh. Because of the less frequent updating of program variables, it was expected that the non-uniform time-step would reduce execution time by a factor of five to ten. Available funding was exhausted prior to successful demonstration of the benefits to be derived from the non-uniform time-step method.
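    The expected saving can be illustrated by counting zone-updates. The sketch below is a toy accounting with hypothetical per-zone stable step sizes, not output of the STOKES code: the uniform scheme advances every zone with the globally smallest stable step, while the non-uniform scheme advances each zone with its own.

```python
def update_counts(zone_dt, t_end):
    """Compare total zone-updates for uniform vs non-uniform time-stepping.
    zone_dt: largest stable time-step of each zone (hypothetical values)."""
    uniform_dt = min(zone_dt)                    # least stable zone rules all
    uniform = len(zone_dt) * round(t_end / uniform_dt)
    nonuniform = sum(round(t_end / dt) for dt in zone_dt)
    return uniform, nonuniform

zone_dt = [1e-4, 5e-4, 1e-3, 2e-3]   # per-zone stable steps, illustrative
u, n = update_counts(zone_dt, t_end=0.1)
# u = 4000 zone-updates vs n = 1350: roughly a 3x saving on this toy mesh.
```

The actual saving depends on how strongly the stability limit varies across the mesh; the factor of five to ten projected in the report would correspond to a mesh where most zones tolerate much larger steps than the least stable one.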

  8. Application of power time-projection on the operator-splitting coupling scheme of the TRACE/S3K coupled code

    SciTech Connect

    Wicaksono, D.; Zerkak, O.; Nikitin, K.; Ferroukhi, H.; Chawla, R.

    2013-07-01

    This paper reports refinement studies on the temporal coupling scheme and time-stepping management of TRACE/S3K, a dynamically coupled code version of the thermal-hydraulics system code TRACE and the 3D core simulator Simulate-3K. The studies were carried out for two test cases, namely a PWR rod ejection accident and the Peach Bottom 2 Turbine Trip Test 2. The solution of the coupled calculation, especially the power peak, proves to be very sensitive to the time-step size with the currently employed conventional operator-splitting. Furthermore, a very small time-step size is necessary to achieve decent accuracy. This degrades the trade-off between accuracy and performance. A simple and computationally cheap implementation of time-projection of power has been shown to be able to improve the convergence of the coupled calculation. This scheme is able to achieve a prescribed accuracy with a larger time-step size. (authors)
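    The abstract does not give the projection formula; the simplest instance of time-projection of power is a first-order (linear) extrapolation from the two most recent coupling times, so the thermal-hydraulics step does not lag the neutronics by a full step under conventional operator splitting. The sketch below uses hypothetical values.

```python
def project_power(p_prev, p_curr, t_prev, t_curr, t_next):
    """Linear time-projection of core power to the next coupling time.
    A minimal sketch of the idea, not the TRACE/S3K implementation."""
    slope = (p_curr - p_prev) / (t_curr - t_prev)
    return p_curr + slope * (t_next - t_curr)

# Toy usage: power ramping linearly from 100 to 110 MW over 0.1 s steps.
p = project_power(100.0, 110.0, 0.0, 0.1, 0.2)   # extrapolates to 120 MW
```

Because the projection is explicit and reuses already-computed powers, it is computationally almost free, which matches the paper's description of a "simple and computationally cheap" scheme.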

  9. Coffee improves auditory neuropathy in diabetic mice.

    PubMed

    Hong, Bin Na; Yi, Tae Hoo; Park, Raekil; Kim, Sun Yeou; Kang, Tong Ho

    2008-08-29

    Coffee is a widely consumed beverage and has recently received considerable attention for its possible beneficial effects. Auditory neuropathy is a hearing disorder characterized by an abnormal auditory brainstem response. This study examined the auditory neuropathy induced by diabetes and investigated the action of coffee, trigonelline, and caffeine to determine whether they improved diabetic auditory neuropathy in mice. Auditory brainstem responses, auditory middle latency responses, and otoacoustic emissions were evaluated to assess auditory neuropathy. Coffee or trigonelline ameliorated the hearing threshold shift and delayed latency of the auditory evoked potential in diabetic neuropathy. These findings demonstrate that diabetes can produce a mouse model of auditory neuropathy and that coffee consumption potentially facilitates recovery from diabetes-induced auditory neuropathy. Furthermore, the active constituent in coffee may be trigonelline.

  10. Clinical coding. Code breakers.

    PubMed

    Mathieson, Steve

    2005-02-24

    The advent of payment by results has seen the role of the clinical coder pushed to the fore in England. Examinations for a clinical coding qualification began in 1999; in 2004, approximately 200 people took the qualification. Trusts are attracting people to the role by offering training from scratch or through modern apprenticeships.

  11. Auditory brainstem responses and auditory thresholds in woodpeckers.

    PubMed

    Lohr, Bernard; Brittan-Powell, Elizabeth F; Dooling, Robert J

    2013-01-01

    Auditory sensitivity in three species of woodpeckers was estimated using the auditory brainstem response (ABR), a measure of the summed electrical activity of auditory neurons. For all species, the ABR waveform showed at least two, and sometimes three, prominent peaks occurring within 10 ms of stimulus onset. ABR peak amplitude increased and latency decreased as a function of increasing sound pressure level. Results showed no significant differences in overall auditory abilities among the three species of woodpeckers. The average ABR audiogram showed that woodpeckers have their lowest thresholds between 1.5 and 5.7 kHz. The shape of the average woodpecker ABR audiogram was similar to that of the ABR-measured audiograms of other small birds at most frequencies, but at the highest frequency the data suggest that woodpecker thresholds may be lower than those of domesticated birds, while similar to those of wild birds.

  12. Time-Based Capabilities of Occupants to Escape Fires in Public Buildings: A Review of Code Provisions and Technical Literature. Final Report.

    ERIC Educational Resources Information Center

    Stahl, Fred I.; And Others

    Available technical literature pertaining to exit facility design and emergency escape provisions of the National Fire Protection Association's "Life Safety Code" (1976 Edition) are reviewed in order to determine the technical support for such provisions. The report focuses on the time-based capabilities of building occupants to effect…

  13. Auditory perspective taking.

    PubMed

    Martinson, Eric; Brock, Derek

    2013-06-01

    Effective communication with a mobile robot using speech is a difficult problem even when you can control the auditory scene. Robot self-noise or ego noise, echoes and reverberation, and human interference are all common sources of decreased intelligibility. Moreover, in real-world settings, these problems are routinely aggravated by a variety of sources of background noise. Military scenarios can be punctuated by high decibel noise from materiel and weaponry that would easily overwhelm a robot's normal speaking volume. Moreover, in nonmilitary settings, fans, computers, alarms, and transportation noise can cause enough interference to make a traditional speech interface unusable. This work presents and evaluates a prototype robotic interface that uses perspective taking to estimate the effectiveness of its own speech presentation and takes steps to improve intelligibility for human listeners.

  14. Response recovery in the locust auditory pathway

    PubMed Central

    Ronacher, Bernhard

    2015-01-01

    Temporal resolution and the time courses of recovery from acute adaptation of neurons in the auditory pathway of the grasshopper Locusta migratoria were investigated with a response recovery paradigm. We stimulated with a series of single-click and click-pair stimuli while performing intracellular recordings from neurons at three processing stages: receptors and first- and second-order interneurons. The response to the second click was expressed relative to the single-click response. This allowed the uncovering of the basic temporal resolution in these neurons. The effect of adaptation increased with processing layer. While neurons in the auditory periphery displayed a steady response recovery after a short initial adaptation, many interneurons showed nonlinear effects: most prominently, a long-lasting suppression of the response to the second click in a pair, as well as a gain in response if a click was preceded by another click a few milliseconds earlier. Our results reveal a distributed temporal filtering of input at an early auditory processing stage. This set of specified filters is very likely homologous across grasshopper species and thus forms the neurophysiological basis for extracting relevant information from a variety of different temporal signals. Interestingly, in terms of spike timing precision, neurons at all three processing layers recovered very fast, within 20 ms. Spike waveform analysis of several neuron types did not sufficiently explain the response recovery profiles implemented in these neurons, indicating that temporal resolution in neurons located at several processing layers of the auditory pathway is not necessarily limited by the spike duration and refractory period. PMID:26609115

  15. Visual-induced expectations modulate auditory cortical responses

    PubMed Central

    van Wassenhove, Virginie; Grzeczkowski, Lukasz

    2015-01-01

    Active sensing has important consequences on multisensory processing (Schroeder et al., 2010). Here, we asked whether, in the absence of saccades, the position of the eyes and the timing of transient color changes of visual stimuli could selectively affect the excitability of auditory cortex by predicting the "where" and the "when" of a sound, respectively. Human participants were recorded with magnetoencephalography (MEG) while maintaining the position of their eyes on the left, right, or center of the screen. Participants counted color changes of the fixation cross while neglecting sounds which could be presented to the left, right, or both ears. First, clear alpha power increases were observed in auditory cortices, consistent with participants' attention being directed to visual inputs. Second, color changes elicited robust modulations of auditory cortex responses ("when" prediction) seen as ramping activity, early alpha phase-locked responses, and enhanced high-gamma band responses in the side contralateral to sound presentation. Third, no modulations of auditory evoked or oscillatory activity were found to be specific to eye position. Altogether, our results suggest that visual transience can automatically elicit a prediction of "when" a sound will occur by changing the excitability of auditory cortices irrespective of the attended modality, eye position, or spatial congruency of auditory and visual events. In contrast, auditory cortical responses were not significantly affected by eye position, suggesting that "where" predictions may require active sensing or saccadic reset to modulate auditory cortex responses, notably in the absence of spatial orientation to sounds. PMID:25705174

  16. The plastic ear and perceptual relearning in auditory spatial perception

    PubMed Central

    Carlile, Simon

    2014-01-01

    The auditory system of adult listeners has been shown to accommodate to altered spectral cues to sound location, which presumably provides the basis for recalibration to changes in the shape of the ear over a lifetime. Here we review the role of auditory and non-auditory inputs to the perception of sound location and consider a range of recent experiments looking at the role of non-auditory inputs in the process of accommodation to these altered spectral cues. A number of studies have used small ear molds to modify the spectral cues, resulting in significant degradation in localization performance. Following chronic exposure (10–60 days), performance recovers to some extent, and recent work has demonstrated that this occurs for both audio-visual and audio-only regions of space. This raises the question of what the teacher signal is for this remarkable functional plasticity in the adult nervous system. Following a brief review of the influence of the motor state in auditory localization, we consider the potential role of auditory-motor learning in the perceptual recalibration of the spectral cues. Several recent studies have considered how multi-modal and sensory-motor feedback might influence accommodation to altered spectral cues produced by ear molds or through virtual auditory space stimulation using non-individualized spectral cues. The work with ear molds demonstrates that a relatively short period of training involving audio-motor feedback (5–10 days) significantly improved both the rate and extent of accommodation to altered spectral cues. This has significant implications not only for the mechanisms by which this complex sensory information is encoded to provide spatial cues but also for adaptive training to altered auditory inputs. The review concludes by considering the implications for rehabilitative training with hearing aids and cochlear prostheses. PMID:25147497

  17. Auditory short-term memory activation during score reading.

    PubMed

    Simoens, Veerle L; Tervaniemi, Mari

    2013-01-01

    Performing music on the basis of reading a score requires reading ahead of what is being played in order to anticipate the necessary actions to produce the notes. Score reading thus not only involves the decoding of a visual score and the comparison to the auditory feedback, but also short-term storage of the musical information due to the delay of the auditory feedback during reading ahead. This study investigates the mechanisms of encoding of musical information in short-term memory during such a complicated procedure. There were three parts in this study. First, professional musicians participated in an electroencephalographic (EEG) experiment to study the slow wave potentials during a time interval of short-term memory storage in a situation that requires cross-modal translation and short-term storage of visual material to be compared with delayed auditory material, as is the case in music score reading. This delayed visual-to-auditory matching task was compared with delayed visual-visual and auditory-auditory matching tasks in terms of EEG topography and voltage amplitudes. Second, an additional behavioural experiment was performed to determine which type of distractor would be the most interfering with the score reading-like task. Third, the self-reported strategies of the participants were also analyzed. All three parts of this study point towards the same conclusion: during music score reading, the musician most likely first translates the visual score into an auditory cue, probably starting around 700 or 1300 ms, ready for storage and delayed comparison with the auditory feedback.

  19. Auditory-olfactory synesthesia coexisting with auditory-visual synesthesia.

    PubMed

    Jackson, Thomas E; Sandramouli, Soupramanien

    2012-09-01

    Synesthesia is an unusual condition in which stimulation of one sensory modality causes an experience in another sensory modality or when a sensation in one sensory modality causes another sensation within the same modality. We describe a previously unreported association of auditory-olfactory synesthesia coexisting with auditory-visual synesthesia. Given that many types of synesthesias involve vision, it is important that the clinician provide these patients with the necessary information and support that is available.

  20. Integrated processing of spatial cues in human auditory cortex.

    PubMed

    Salminen, Nelli H; Takanen, Marko; Santala, Olli; Lamminsalo, Jarkko; Altoè, Alessandro; Pulkki, Ville

    2015-09-01

    Human sound source localization relies on acoustical cues, most importantly the interaural differences in time and level (ITD and ILD). To reach a unified representation of auditory space, the auditory nervous system needs to combine the information provided by these two cues. In search of such a unified representation, we conducted a magnetoencephalography (MEG) experiment that took advantage of the location-specific adaptation of the auditory cortical N1 response. In general, the attenuation caused by a preceding adaptor sound to the response elicited by a probe depends on their spatial arrangement: if the two sounds coincide, adaptation is stronger than when the locations differ. Here, we presented adaptor-probe pairs that contained different localization cues, for instance, adaptors with ITD and probes with ILD. We found that the adaptation of the N1 amplitude was location-specific across localization cues. This result can be explained by the existence of auditory cortical neurons that are sensitive to sound source location independently of which cue, ITD or ILD, provides the location information. Such neurons would form a cue-independent, unified representation of auditory space in human auditory cortex.
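    The ITD cue can be made concrete with the classic spherical-head (Woodworth) approximation, sketched below. This is standard textbook material, not a formula from the study; the head radius and speed of sound are nominal values.

```python
import math

def itd_woodworth(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Spherical-head (Woodworth) approximation of the interaural time
    difference (in seconds) for a far-field source; azimuth 0 = straight
    ahead. Head radius and sound speed are nominal assumed values."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

# ITD grows from 0 at the midline to roughly 0.66 ms at 90 deg azimuth,
# the familiar upper bound for an adult-sized human head.
itd_90 = itd_woodworth(90.0)
```

The ILD cue has no comparably simple closed form, since it depends on frequency-dependent head shadowing; in adaptor-probe experiments like the one above it is typically imposed directly over headphones.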

  1. [Analysis of auditory information in the brain of the cetacean].

    PubMed

    Popov, V V; Supin, A Ia

    2006-01-01

    The cetacean brain is characterized by exceptional development of the auditory neural centres. The location of the projection sensory areas, including the auditory area, in the cetacean cortex differs essentially from that in other mammals. Evoked potential (EP) characteristics indicate the presence of several functional divisions in the auditory cortex. Physiological studies of the cetacean auditory centres have mainly been performed using the EP technique. Of the several types of EPs, the short-latency auditory EP has been studied most thoroughly. In cetaceans, it is characterised by exceptionally high temporal resolution, with an integration time of about 0.3 ms, which corresponds to a cut-off frequency of 1700 Hz. This much exceeds the temporal resolution of hearing in terrestrial mammals. The frequency selectivity of hearing in cetaceans was measured using a number of variants of the masking technique. Its acuity exceeds that of most terrestrial mammals (except bats). This acute frequency selectivity provides for the differentiation of the finest spectral patterns of auditory signals.

  2. The effect of superior auditory skills on vocal accuracy

    NASA Astrophysics Data System (ADS)

    Amir, Ofer; Amir, Noam; Kishon-Rabin, Liat

    2003-02-01

    The relationship between auditory perception and vocal production has typically been investigated by evaluating the effect of either altered or degraded auditory feedback on speech production in either normal-hearing or hearing-impaired individuals. Our goal in the present study was to examine this relationship in individuals with superior auditory abilities. Thirteen professional musicians and thirteen nonmusicians, with no vocal or singing training, participated in this study. For vocal production accuracy, subjects were presented with three tones. They were asked to reproduce the pitch using the vowel /a/. This procedure was repeated three times. The fundamental frequency of each production was measured using an autocorrelation pitch detection algorithm designed for this study. The musicians' superior auditory abilities (compared to the nonmusicians) were established in a frequency discrimination task reported elsewhere. Results indicate that (a) musicians had better vocal production accuracy than nonmusicians (production errors of half a semitone compared to 1.3 semitones, respectively); (b) frequency discrimination thresholds explain 43% of the variance in the production data; and (c) all subjects with superior frequency discrimination thresholds showed accurate vocal production; the reverse relationship, however, does not hold. In this study we provide empirical evidence of the importance of auditory feedback for vocal production in listeners with superior auditory skills.
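    The measurement pipeline described can be sketched in a few lines: a minimal autocorrelation pitch detector plus the semitone-error metric used to compare groups. This is a generic illustration of the approach, not the authors' algorithm; the F0 search range and sample rate are assumptions.

```python
import math

def pitch_autocorr(signal, sr, fmin=80.0, fmax=400.0):
    """Minimal autocorrelation pitch detector: pick the lag with the
    highest autocorrelation within the plausible F0 range (a sketch of
    the described approach, not the study's implementation)."""
    lo, hi = int(sr / fmax), int(sr / fmin)
    best_lag = max(range(lo, hi + 1),
                   key=lambda lag: sum(signal[i] * signal[i + lag]
                                       for i in range(len(signal) - lag)))
    return sr / best_lag

def semitone_error(f, f_target):
    """Production error in semitones: 12 * log2 of the frequency ratio."""
    return abs(12 * math.log2(f / f_target))

# Toy check on a pure 220 Hz tone sampled at 8 kHz: the detected F0
# should land within a fraction of a semitone of the target.
sr = 8000
tone = [math.sin(2 * math.pi * 220 * n / sr) for n in range(2048)]
f0 = pitch_autocorr(tone, sr)
```

Real implementations refine the integer-lag estimate (e.g. by parabolic interpolation of the autocorrelation peak) because, as here, the lag grid quantizes the recoverable frequency.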

  3. Auditory function in children with Charcot-Marie-Tooth disease.

    PubMed

    Rance, Gary; Ryan, Monique M; Bayliss, Kristen; Gill, Kathryn; O'Sullivan, Caitlin; Whitechurch, Marny

    2012-05-01

    The peripheral manifestations of the inherited neuropathies are increasingly well characterized, but their effects upon cranial nerve function are not well understood. Hearing loss is recognized in a minority of children with this condition, but has not previously been systematically studied. A clear understanding of the prevalence and degree of auditory difficulties in this population is important, as hearing impairment can impact upon speech/language development, social interaction ability and educational progress. The aim of this study was to investigate auditory pathway function, speech perception ability and everyday listening and communication in a group of school-aged children with inherited neuropathies. Twenty-six children with Charcot-Marie-Tooth disease confirmed by genetic testing and physical examination participated. Eighteen had demyelinating neuropathies (Charcot-Marie-Tooth type 1) and eight had the axonal form (Charcot-Marie-Tooth type 2). While each subject had normal or near-normal sound detection, individuals in both disease groups showed electrophysiological evidence of auditory neuropathy with delayed or low-amplitude auditory brainstem responses. Auditory perception was also affected, with >60% of subjects with Charcot-Marie-Tooth type 1 and >85% of those with Charcot-Marie-Tooth type 2 suffering impaired processing of auditory temporal (timing) cues and/or abnormal speech understanding in everyday listening conditions.

  4. A comprehensive catalogue of the coding and non-coding transcripts of the human inner ear.

    PubMed

    Schrauwen, Isabelle; Hasin-Brumshtein, Yehudit; Corneveaux, Jason J; Ohmen, Jeffrey; White, Cory; Allen, April N; Lusis, Aldons J; Van Camp, Guy; Huentelman, Matthew J; Friedman, Rick A

    2016-03-01

    The mammalian inner ear consists of the cochlea and the vestibular labyrinth (utricle, saccule, and semicircular canals), which participate in both hearing and balance. Proper development and life-long function of these structures involves a highly complex coordinated system of spatial and temporal gene expression. The characterization of the inner ear transcriptome is likely important for the functional study of auditory and vestibular components, yet, primarily due to tissue unavailability, detailed expression catalogues of the human inner ear remain largely incomplete. We report here, for the first time, comprehensive transcriptome characterization of the adult human cochlea, ampulla, saccule and utricle of the vestibule obtained from patients without hearing abnormalities. Using RNA-Seq, we measured the expression of >50,000 predicted genes corresponding to approximately 200,000 transcripts, in the adult inner ear and compared it to 32 other human tissues. First, we identified genes preferentially expressed in the inner ear, and unique either to the vestibule or cochlea. Next, we examined expression levels of specific groups of potentially interesting RNAs, such as genes implicated in hearing loss, long non-coding RNAs, pseudogenes and transcripts subject to nonsense mediated decay (NMD). We uncover the spatial specificity of expression of these RNAs in the hearing/balance system, and reveal evidence of tissue specific NMD. Lastly, we investigated the non-syndromic deafness loci to which no gene has been mapped, and narrow the list of potential candidates for each locus. These data represent the first high-resolution transcriptome catalogue of the adult human inner ear. A comprehensive identification of coding and non-coding RNAs in the inner ear will enable pathways of auditory and vestibular function to be further defined in the study of hearing and balance. 
Expression data are freely accessible at https://www.tgen.org/home/research/research-divisions/neurogenomics/supplementary-data/inner-ear-transcriptome.aspx.

  5. Proceedings of the second international conference on auditory display, ICAD '94

    SciTech Connect

    Kramer, G.

    1995-12-31

    ICAD is a forum for presenting research on the use of sound to display data, monitor systems, and provide enhanced user interfaces for computers and virtual-reality systems. It is unique in its singular focus on auditory displays and the array of perception, technology, and application areas that this encompasses. Research areas covered by ICAD include: auditory exploration of data via sonification (data-controlled sound) and audification (audible playback of data samples); real-time monitoring of multivariate data; sound in immersive interfaces (virtual reality) and teleoperation; perceptual issues in auditory display; sound in generalized computer interfaces; technologies supporting auditory display creation; data handling for auditory display systems; and applications of auditory display. Each of these areas of inquiry raises many issues concerning application, theory, hardware/software, and human factors. Integrating speech-audio implementations with graphical display techniques, and the concomitant perception issues, also poses significant challenges in each area.
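
    The core idea behind the sonification research described above is mapping data values onto sound parameters. As a minimal illustration of the concept (not any system from the proceedings), the sketch below maps a numeric series to tone pitch and writes the result to a WAV file using only the standard library; the file name, pitch range, and note duration are hypothetical choices:

```python
import math
import struct
import wave

def sonify(data, path="sonified.wav", rate=44100, note_dur=0.15,
           f_lo=220.0, f_hi=880.0):
    """Render each data point as a short sine tone whose pitch scales
    linearly between f_lo and f_hi with the point's value."""
    lo, hi = min(data), max(data)
    span = (hi - lo) or 1.0                  # guard against constant input
    samples = []
    for v in data:
        freq = f_lo + (v - lo) / span * (f_hi - f_lo)
        n = int(rate * note_dur)
        for i in range(n):
            samples.append(int(32767 * 0.5 *
                               math.sin(2 * math.pi * freq * i / rate)))
    with wave.open(path, "wb") as w:
        w.setnchannels(1)                    # mono
        w.setsampwidth(2)                    # 16-bit PCM
        w.setframerate(rate)
        w.writeframes(struct.pack("<%dh" % len(samples), *samples))
    return len(samples)

# Five data points become five tones rising and falling in pitch.
n = sonify([0.1, 0.9, 0.5, 0.2, 0.8])
```

A production sonification would also shape note envelopes to avoid clicks at tone boundaries, one of the perceptual issues the proceedings discuss.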

  6. The influence of auditory-motor coupling on fractal dynamics in human gait.

    PubMed

    Hunt, Nathaniel; McGrath, Denise; Stergiou, Nicholas

    2014-08-01

    Humans exhibit an innate ability to synchronize their movements to music. The field of gait rehabilitation has sought to capitalize on this phenomenon by instructing patients to walk in time to rhythmic auditory cues, with a view to improving pathological gait. However, the temporal structure of the auditory cue, and hence of the target behavior, has not been sufficiently explored. This study reveals the plasticity of auditory-motor coupling in human walking in relation to 'complex' auditory cues. The authors demonstrate that auditory-motor coupling can be driven by differently coloured auditory noise signals (e.g. white, brown), shifting the fractal temporal structure of gait dynamics towards the statistical properties of the signals used. This adaptive capability, observed in whole-body movement, could potentially be harnessed for targeted neuromuscular rehabilitation in patient groups, depending on the specific treatment goal.
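
    The coloured-noise cues mentioned above differ in their power-law spectra: white noise has a flat spectrum (1/f^0), while brown noise, the running sum of white noise, falls off as roughly 1/f^2. A minimal sketch (not the authors' analysis code) that generates both signals and estimates the spectral exponent from a log-log fit to the periodogram:

```python
import numpy as np

def spectral_exponent(signal, fs=1.0):
    """Estimate the exponent beta of a 1/f^beta power spectrum by a
    linear fit to the log-log periodogram (a rough estimator)."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)[1:]        # drop the DC bin
    psd = np.abs(np.fft.rfft(signal))[1:] ** 2
    slope, _ = np.polyfit(np.log(freqs), np.log(psd), 1)
    return -slope          # beta: ~0 for white noise, near 2 for brown

rng = np.random.default_rng(0)
white = rng.standard_normal(2 ** 14)   # flat spectrum
brown = np.cumsum(white)               # integrated white noise, ~1/f^2

beta_white = spectral_exponent(white)
beta_brown = spectral_exponent(brown)
```

Gait studies like the one above typically use detrended fluctuation analysis rather than this raw periodogram fit, but the periodogram version conveys the same white-versus-brown distinction in a few lines.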

  7. The influence of auditory-motor coupling on fractal dynamics in human gait

    PubMed Central

    Hunt, Nathaniel; McGrath, Denise; Stergiou, Nicholas

    2014-01-01

    Humans exhibit an innate ability to synchronize their movements to music. The field of gait rehabilitation has sought to capitalize on this phenomenon by invoking patients to walk in time to rhythmic auditory cues with a view to improving pathological gait. However, the temporal structure of the audi