Science.gov

Sample records for auditory time coding

  1. Low Somatic Sodium Conductance Enhances Action Potential Precision in Time-Coding Auditory Neurons.

    PubMed

    Yang, Yang; Ramamurthy, Bina; Neef, Andreas; Xu-Friedman, Matthew A

    2016-11-23

    Auditory nerve fibers encode sounds in the precise timing of action potentials (APs), which is used for such computations as sound localization. Timing information is relayed through several cell types in the auditory brainstem that share an unusual property: their APs are not overshooting, suggesting that the cells have very low somatic sodium conductance (gNa). However, it is not clear how gNa influences temporal precision. We addressed this by comparing bushy cells (BCs) in the mouse cochlear nucleus with T-stellate cells (SCs), which do have normal overshooting APs. BCs play a central role in both relaying and refining precise timing information from the auditory nerve, whereas SCs discard precise timing information and encode the envelope of sound amplitude. Nucleated-patch recording at near-physiological temperature indicated that the Na current density was 62% lower in BCs, and the voltage dependence of gNa inactivation was 13 mV hyperpolarized compared with SCs. We endowed BCs with SC-like gNa using two-electrode dynamic clamp and found that synaptic activity at physiologically relevant rates elicited APs with significantly lower probability, through increased activation of delayed rectifier channels. In addition, for two near-simultaneous synaptic inputs, the window of coincidence detection widened significantly with increasing gNa, indicating that refinement of temporal information by BCs is degraded by gNa. Thus, reduced somatic gNa appears to be an adaptation for enhancing fidelity and precision in time-coding neurons.
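
    A minimal sketch of the coincidence-detection idea discussed above (hypothetical passive EPSP summation, not the authors' dynamic-clamp model): a detector neuron reaches threshold only when two excitatory inputs arrive within roughly a millisecond of each other, so widening this window degrades temporal precision.

```python
import math

def fires(dt_ms, epsp_amp=0.6, tau_ms=1.0, threshold=1.0):
    """Does a two-input coincidence detector fire for inputs dt_ms apart?

    The second EPSP rides on the exponentially decayed remnant of the
    first; the amplitudes and time constants here are illustrative.
    """
    peak = epsp_amp * math.exp(-abs(dt_ms) / tau_ms) + epsp_amp
    return peak >= threshold

# Near-coincident inputs reach threshold; widely spaced inputs do not.
print(fires(0.2), fires(3.0))
```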

  2. Are interaural time and level differences represented by independent or integrated codes in the human auditory cortex?

    PubMed

    Edmonds, Barrie A; Krumbholz, Katrin

    2014-02-01

    Sound localization is important for orienting and focusing attention and for segregating sounds from different sources in the environment. In humans, horizontal sound localization mainly relies on interaural differences in sound arrival time and sound level. Despite their perceptual importance, the neural processing of interaural time and level differences (ITDs and ILDs) remains poorly understood. Animal studies suggest that, in the brainstem, ITDs and ILDs are processed independently by different specialized circuits. The aim of the current study was to investigate whether, at higher processing levels, they remain independent or are integrated into a common code of sound laterality. For that, we measured late auditory cortical potentials in response to changes in sound lateralization elicited by perceptually matched changes in ITD and/or ILD. The responses to the ITD and ILD changes exhibited significant morphological differences. At the same time, however, they originated from overlapping areas of the cortex and showed clear evidence for functional coupling. These results suggest that the auditory cortex contains an integrated code of sound laterality, but also retains independent information about ITD and ILD cues. This cue-related information might be used to assess how consistent the cues are, and thus, how likely they would have arisen from the same source.

  3. Conserved mechanisms of vocalization coding in mammalian and songbird auditory midbrain

    PubMed Central

    Portfors, Christine V.

    2013-01-01

    The ubiquity of social vocalization among animals provides the opportunity to identify conserved mechanisms of auditory processing that subserve vocal communication. Identifying auditory coding properties that are shared across vocal communicators will provide insight into how human auditory processing leads to speech perception. Here, we compare auditory response properties and neural coding of social vocalizations in auditory midbrain neurons of mammalian and avian vocal communicators. The auditory midbrain is a nexus of auditory processing because it receives and integrates information from multiple parallel pathways and provides the ascending auditory input to the thalamus. The auditory midbrain is also the first region in the ascending auditory system where neurons show complex tuning properties that are correlated with the acoustics of social vocalizations. Single unit studies in mice, bats and zebra finches reveal shared principles of auditory coding including tonotopy, excitatory and inhibitory interactions that shape responses to vocal signals, nonlinear response properties that are important for auditory coding of social vocalizations and modulation tuning. Additionally, single neuron responses in the mouse and songbird midbrain are reliable, selective for specific syllables, and rely on spike timing for neural discrimination of distinct vocalizations. We propose that future research on auditory coding of vocalizations in mouse and songbird midbrain neurons adopt similar experimental and analytical approaches so that conserved principles of vocalization coding may be distinguished from those that are specialized for each species. PMID:23726970

  4. Multistability in auditory stream segregation: a predictive coding view

    PubMed Central

    Winkler, István; Denham, Susan; Mill, Robert; Bőhm, Tamás M.; Bendixen, Alexandra

    2012-01-01

    Auditory stream segregation involves linking temporally separate acoustic events into one or more coherent sequences. For any non-trivial sequence of sounds, many alternative descriptions can be formed, only one or very few of which emerge in awareness at any time. Evidence from studies showing bi-/multistability in auditory streaming suggests that some, perhaps many, of the alternative descriptions are represented in the brain in parallel and that they continuously vie for conscious perception. Here, based on a predictive coding view, we consider the nature of these sound representations and how they compete with each other. Predictive processing helps to maintain perceptual stability by signalling the continuation of previously established patterns as well as the emergence of new sound sources. It also provides a measure of how well each of the competing representations describes the current acoustic scene. This account of auditory stream segregation has been tested on perceptual data obtained in the auditory streaming paradigm. PMID:22371621

  5. Conserved mechanisms of vocalization coding in mammalian and songbird auditory midbrain.

    PubMed

    Woolley, Sarah M N; Portfors, Christine V

    2013-11-01

    The ubiquity of social vocalizations among animals provides the opportunity to identify conserved mechanisms of auditory processing that subserve communication. Identifying auditory coding properties that are shared across vocal communicators will provide insight into how human auditory processing leads to speech perception. Here, we compare auditory response properties and neural coding of social vocalizations in auditory midbrain neurons of mammalian and avian vocal communicators. The auditory midbrain is a nexus of auditory processing because it receives and integrates information from multiple parallel pathways and provides the ascending auditory input to the thalamus. The auditory midbrain is also the first region in the ascending auditory system where neurons show complex tuning properties that are correlated with the acoustics of social vocalizations. Single unit studies in mice, bats and zebra finches reveal shared principles of auditory coding including tonotopy, excitatory and inhibitory interactions that shape responses to vocal signals, nonlinear response properties that are important for auditory coding of social vocalizations and modulation tuning. Additionally, single neuron responses in the mouse and songbird midbrain are reliable, selective for specific syllables, and rely on spike timing for neural discrimination of distinct vocalizations. We propose that future research on auditory coding of vocalizations in mouse and songbird midbrain neurons adopt similar experimental and analytical approaches so that conserved principles of vocalization coding may be distinguished from those that are specialized for each species. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives". Copyright © 2013 Elsevier B.V. All rights reserved.

  6. Optimal neural population coding of an auditory spatial cue.

    PubMed

    Harper, Nicol S; McAlpine, David

    2004-08-05

    A sound, depending on the position of its source, can take more time to reach one ear than the other. This interaural (between the ears) time difference (ITD) provides a major cue for determining the source location. Many auditory neurons are sensitive to ITDs, but the means by which such neurons represent ITD is a contentious issue. Recent studies question whether the classical general model (the Jeffress model) applies across species. Here we show that ITD coding strategies of different species can be explained by a unifying principle: that the ITDs an animal naturally encounters should be coded with maximal accuracy. Using statistical techniques and a stochastic neural model, we demonstrate that the optimal coding strategy for ITD depends critically on head size and sound frequency. For small head sizes and/or low-frequency sounds, the optimal coding strategy tends towards two distinct sub-populations tuned to ITDs outside the range created by the head. This is consistent with recent observations in small mammals. For large head sizes and/or high frequencies, the optimal strategy is a homogeneous distribution of ITD tunings within the range created by the head. This is consistent with observations in the barn owl. For humans, the optimal strategy to code ITDs from an acoustically measured distribution depends on frequency; above 400 Hz a homogeneous distribution is optimal, and below 400 Hz distinct sub-populations are optimal.
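
    The head-size/frequency argument can be made concrete with Woodworth's spherical-head approximation (an assumption of this sketch; the paper itself uses a stochastic neural model and acoustically measured ITD distributions): for small heads or low frequencies, the physiological ITD range is short relative to the stimulus period.

```python
import math

def max_itd_s(head_radius_m, speed_of_sound_m_s=343.0):
    """Largest ITD, for a source at 90 deg azimuth (Woodworth model)."""
    return (head_radius_m / speed_of_sound_m_s) * (math.pi / 2 + 1)

def period_s(frequency_hz):
    return 1.0 / frequency_hz

# Human-like head radius of ~8.75 cm gives an ITD range of about +/-0.66 ms.
human_itd = max_itd_s(0.0875)

# At 200 Hz the half-period (2.5 ms) far exceeds that range, so optimal
# tuning can lie outside the physiological range; at 2 kHz the
# half-period (0.25 ms) fits comfortably inside it.
print(human_itd, period_s(200) / 2, period_s(2000) / 2)
```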

  7. How the owl resolves auditory coding ambiguity

    PubMed Central

    Mazer, James A.

    1998-01-01

    The barn owl (Tyto alba) uses interaural time difference (ITD) cues to localize sounds in the horizontal plane. Low-order binaural auditory neurons with sharp frequency tuning act as narrow-band coincidence detectors; such neurons respond equally well to sounds with a particular ITD and its phase equivalents and are said to be phase ambiguous. Higher-order neurons with broad frequency tuning are unambiguously selective for single ITDs in response to broad-band sounds and show little or no response to phase equivalents. Selectivity for single ITDs is thought to arise from the convergence of parallel, narrow-band frequency channels that originate in the cochlea. ITD tuning to variable bandwidth stimuli was measured in higher-order neurons of the owl’s inferior colliculus to examine the rules that govern the relationship between frequency channel convergence and the resolution of phase ambiguity. Ambiguity decreased as stimulus bandwidth increased, reaching a minimum at 2–3 kHz. Two independent mechanisms appear to contribute to the elimination of ambiguity: one suppressive and one facilitative. The integration of information carried by parallel, distributed processing channels is a common theme of sensory processing that spans both modality and species boundaries. The principles underlying the resolution of phase ambiguity and frequency channel convergence in the owl may have implications for other sensory systems, such as electrolocation in electric fish and the computation of binocular disparity in the avian and mammalian visual systems. PMID:9724807

  8. Changing Auditory Time with Prismatic Goggles

    ERIC Educational Resources Information Center

    Magnani, Barbara; Pavani, Francesco; Frassinetti, Francesca

    2012-01-01

    The aim of the present study was to explore the spatial organization of auditory time and the effects of the manipulation of spatial attention on such a representation. In two experiments, we asked 28 adults to classify the duration of auditory stimuli as "short" or "long". Stimuli were tones of high or low pitch, delivered left or right of the…

  9. Temporal Coding of Periodicity Pitch in the Auditory System: An Overview

    PubMed Central

    Cariani, Peter

    1999-01-01

    This paper outlines a taxonomy of neural pulse codes and reviews neurophysiological evidence for interspike interval-based representations for pitch and timbre in the auditory nerve and cochlear nucleus. Neural pulse codes can be divided into channel-based codes, temporal-pattern codes, and time-of-arrival codes. Timings of discharges in auditory nerve fibers reflect the time structure of acoustic waveforms, such that the interspike intervals that are produced precisely convey information concerning stimulus periodicities. Population-wide interspike interval distributions are constructed by summing together intervals from the observed responses of many single Type I auditory nerve fibers. Features in such distributions correspond closely with pitches that are heard by human listeners. The most common all-order interval present in the auditory nerve array almost invariably corresponds to the pitch frequency, whereas the relative fraction of pitch-related intervals amongst all others qualitatively corresponds to the strength of the pitch. Consequently, many diverse aspects of pitch perception are explained in terms of such temporal representations. Similar stimulus-driven temporal discharge patterns are observed in major neuronal populations of the cochlear nucleus. Population-interval distributions constitute an alternative time-domain strategy for representing sensory information that complements spatially organized sensory maps. Similar autocorrelation-like representations are possible in other sensory systems, in which neural discharges are time-locked to stimulus waveforms. PMID:10714267
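
    A toy version of the population-interval analysis described above, using invented spike trains (the data and constants are purely illustrative): pool all-order interspike intervals across fibers and take the most common interval as the pitch-period estimate.

```python
from collections import Counter

def all_order_intervals(spike_times, max_interval):
    """All positive differences between spike pairs up to max_interval."""
    out = []
    for i, t1 in enumerate(spike_times):
        for t2 in spike_times[i + 1:]:
            d = round(t2 - t1, 4)  # round to avoid float-key mismatches
            if d <= max_interval:
                out.append(d)
    return out

def pitch_period_estimate(trains, max_interval=0.02):
    """Most common all-order interval pooled over all fibers (seconds)."""
    pooled = []
    for train in trains:
        pooled.extend(all_order_intervals(train, max_interval))
    return Counter(pooled).most_common(1)[0][0]

# Three hypothetical fibers phase-locked to a 200 Hz tone (period 5 ms),
# each skipping some cycles, as real auditory nerve fibers do.
trains = [
    [0.000, 0.005, 0.015, 0.020],
    [0.002, 0.007, 0.012, 0.022],
    [0.001, 0.011, 0.016, 0.021],
]
print(pitch_period_estimate(trains))  # 0.005
```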

  10. Neural Coding of Periodicity in Marmoset Auditory Cortex

    PubMed Central

    Wang, Xiaoqin

    2010-01-01

    Pitch, our perception of how high or low a sound is on a musical scale, crucially depends on a sound's periodicity. If an acoustic signal is temporally jittered so that it becomes aperiodic, the pitch will no longer be perceivable even though other acoustical features that normally covary with pitch are unchanged. Previous electrophysiological studies investigating pitch have typically used only periodic acoustic stimuli, and as such these studies cannot distinguish between a neural representation of pitch and an acoustical feature that only correlates with pitch. In this report, we examine in the auditory cortex of awake marmoset monkeys (Callithrix jacchus) the neural coding of a periodicity's repetition rate, an acoustic feature that covaries with pitch. We first examine if individual neurons show similar repetition rate tuning for different periodic acoustic signals. We next measure how sensitive these neural representations are to the temporal regularity of the acoustic signal. We find that neurons throughout auditory cortex covary their firing rate with the repetition rate of an acoustic signal. However, similar repetition rate tuning across acoustic stimuli and sensitivity to temporal regularity were generally only observed in a small group of neurons found near the anterolateral border of primary auditory cortex, the location of a previously identified putative pitch processing center. These results suggest that although the encoding of repetition rate is a general component of auditory cortical processing, the neural correlate of periodicity is confined to a special class of pitch-selective neurons within the putative pitch processing center of auditory cortex. PMID:20147419

  11. Temporal asymmetries in auditory coding and perception reflect multi-layered nonlinearities

    PubMed Central

    Deneux, Thomas; Kempf, Alexandre; Daret, Aurélie; Ponsot, Emmanuel; Bathellier, Brice

    2016-01-01

    Sound recognition relies not only on spectral cues, but also on temporal cues, as demonstrated by the profound impact of time reversals on perception of common sounds. To address the coding principles underlying such auditory asymmetries, we recorded a large sample of auditory cortex neurons using two-photon calcium imaging in awake mice, while playing sounds ramping up or down in intensity. We observed clear asymmetries in cortical population responses, including stronger cortical activity for up-ramping sounds, which matches perceptual saliency assessments in mice and previous measures in humans. Analysis of cortical activity patterns revealed that auditory cortex implements a map of spatially clustered neuronal ensembles, detecting specific combinations of spectral and intensity modulation features. Comparing different models, we show that cortical responses result from multi-layered nonlinearities, which, contrary to standard receptive field models of auditory cortex function, build divergent representations of sounds with similar spectral content, but different temporal structure. PMID:27580932

  12. Coding of Amplitude Modulation in Primary Auditory Cortex

    PubMed Central

    Yin, Pingbo; Johnson, Jeffrey S.; O'Connor, Kevin N.

    2011-01-01

    Conflicting results have led to different views about how temporal modulation is encoded in primary auditory cortex (A1). Some studies find a substantial population of neurons that change firing rate without synchronizing to temporal modulation, whereas other studies fail to see these nonsynchronized neurons. As a result, the role and scope of synchronized temporal and nonsynchronized rate codes in amplitude modulation (AM) processing in A1 remain unresolved. We recorded A1 neurons' responses in awake macaques to sinusoidal AM noise. We find most (37–78%) neurons synchronize to at least one modulation frequency (MF) without exhibiting nonsynchronized responses. However, we find both exclusively nonsynchronized neurons (7–29%) and “mixed-mode” neurons (13–40%) that synchronize to at least one MF and fire nonsynchronously to at least one other. We introduce new measures for modulation encoding and temporal synchrony that can improve the analysis of how neurons encode temporal modulation. These include comparing AM responses to the responses to unmodulated sounds, and a vector strength measure that is suitable for single-trial analysis. Our data support a transformation from a temporally based population code of AM to a rate-based code as information ascends the auditory pathway. The number of mixed-mode neurons found in A1 indicates this transformation is not yet complete, and A1 neurons may carry multiplexed temporal and rate codes. PMID:21148093
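
    For reference, the classic (Goldberg-Brown) vector strength, the standard synchrony measure on which single-trial variants build, can be computed as below; the spike times are invented, and the measure the authors introduce differs in detail.

```python
import math

def vector_strength(spike_times, mod_freq_hz):
    """1.0 = perfect phase locking to the modulation; ~0 = none."""
    phases = [2 * math.pi * mod_freq_hz * t for t in spike_times]
    n = len(phases)
    x = sum(math.cos(p) for p in phases) / n
    y = sum(math.sin(p) for p in phases) / n
    return math.hypot(x, y)

# Spikes locked to every cycle of a 10 Hz modulation: VS near 1.
locked = [0.01 + k * 0.1 for k in range(20)]
# Spikes spread uniformly across the modulation cycle: VS near 0.
uniform = [k * 0.025 for k in range(40)]
print(vector_strength(locked, 10.0), vector_strength(uniform, 10.0))
```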

  13. Coding of melodic gestalt in human auditory cortex.

    PubMed

    Schindler, Andreas; Herdener, Marcus; Bartels, Andreas

    2013-12-01

    The perception of a melody is invariant to the absolute properties of its constituting notes, but depends on the relation between them: the melody's relative pitch profile. In fact, a melody's "Gestalt" is recognized regardless of the instrument or key used to play it. Pitch processing in general is assumed to occur at the level of the auditory cortex. However, it is unknown whether early auditory regions are able to encode pitch sequences integrated over time (i.e., melodies) and whether the resulting representations are invariant to specific keys. Here, we presented participants with different melodies composed of the same 4 harmonic pitches during functional magnetic resonance imaging recordings. Additionally, we played the same melodies transposed in different keys and on different instruments. We found that melodies were invariantly represented by their blood oxygen level-dependent activation patterns in primary and secondary auditory cortices across instruments, and also across keys. Our findings extend common hierarchical models of auditory processing by showing that melodies are encoded independent of absolute pitch and based on their relative pitch profile as early as the primary auditory cortex.

  14. Location coding by opponent neural populations in the auditory cortex.

    PubMed

    Stecker, G Christopher; Harrington, Ian A; Middlebrooks, John C

    2005-03-01

    Although the auditory cortex plays a necessary role in sound localization, physiological investigations in the cortex reveal inhomogeneous sampling of auditory space that is difficult to reconcile with localization behavior under the assumption of local spatial coding. Most neurons respond maximally to sounds located far to the left or right side, with few neurons tuned to the frontal midline. Paradoxically, psychophysical studies show optimal spatial acuity across the frontal midline. In this paper, we revisit the problem of inhomogeneous spatial sampling in three fields of cat auditory cortex. In each field, we confirm that neural responses tend to be greatest for lateral positions, but show the greatest modulation for near-midline source locations. Moreover, identification of source locations based on cortical responses shows sharp discrimination of left from right but relatively inaccurate discrimination of locations within each half of space. Motivated by these findings, we explore an opponent-process theory in which sound-source locations are represented by differences in the activity of two broadly tuned channels formed by contra- and ipsilaterally preferring neurons. Finally, we demonstrate a simple model, based on spike-count differences across cortical populations, that provides bias-free, level-invariant localization, and thus also a solution to the "binding problem" of associating spatial information with other nonspatial attributes of sounds.
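
    A minimal illustration of the opponent-process readout (hypothetical sigmoidal channels, not the authors' recorded populations): because overall sound level scales both channels equally, the normalized difference in their activity cancels the level term and yields a level-invariant laterality estimate.

```python
import math

def channel_rate(azimuth_deg, preferred_side, level_gain=1.0):
    """Broadly tuned sigmoidal rate, maximal for far-lateral sources."""
    s = 1.0 if preferred_side == "right" else -1.0
    return level_gain / (1.0 + math.exp(-s * azimuth_deg / 20.0))

def laterality(azimuth_deg, level_gain):
    """Normalized right-minus-left difference, in [-1, 1]."""
    r = channel_rate(azimuth_deg, "right", level_gain)
    l = channel_rate(azimuth_deg, "left", level_gain)
    return (r - l) / (r + l)  # level_gain cancels in the ratio

# The same azimuth read out at two sound levels gives the same estimate.
print(laterality(30.0, 1.0), laterality(30.0, 5.0))
```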

  15. Diverse cortical codes for scene segmentation in primate auditory cortex.

    PubMed

    Malone, Brian J; Scott, Brian H; Semple, Malcolm N

    2015-04-01

    The temporal coherence of amplitude fluctuations is a critical cue for segmentation of complex auditory scenes. The auditory system must accurately demarcate the onsets and offsets of acoustic signals. We explored how and how well the timing of onsets and offsets of gated tones is encoded by auditory cortical neurons in awake rhesus macaques. Temporal features of this representation were isolated by presenting otherwise identical pure tones of differing durations. Cortical response patterns were diverse, including selective encoding of onset and offset transients, tonic firing, and sustained suppression. Spike train classification methods revealed that many neurons robustly encoded tone duration despite substantial diversity in the encoding process. Excellent discrimination performance was achieved by neurons whose responses were primarily phasic at tone offset and by those that responded robustly while the tone persisted. Although diverse cortical response patterns converged on effective duration discrimination, this diversity significantly constrained the utility of decoding models referenced to a spiking pattern averaged across all responses or averaged within the same response category. Using maximum likelihood-based decoding models, we demonstrated that the spike train recorded in a single trial could support direct estimation of stimulus onset and offset. Comparisons between different decoding models established the substantial contribution of bursts of activity at sound onset and offset to demarcating the temporal boundaries of gated tones. Our results indicate that relatively few neurons suffice to provide temporally precise estimates of such auditory "edges," particularly for models that assume and exploit the heterogeneity of neural responses in awake cortex.

  16. Improving Hearing Performance Using Natural Auditory Coding Strategies

    NASA Astrophysics Data System (ADS)

    Rattay, Frank

    Sound transfer from the human ear to the brain is based on three quite different neural coding principles: the continuous temporal auditory signal is sent, in excellent quality, as a binary code over some 30,000 nerve fibers per ear. Cochlear implants are well-accepted neural prostheses for people with sensory hearing loss, but currently the devices are inspired only by the tonotopic principle. According to this principle, every sound frequency is mapped to a specific place along the cochlea. By electrical stimulation, the frequency content of the acoustic signal is distributed via a few contacts of the prosthesis to corresponding places and generates spikes there. In contrast to the natural situation, the artificially evoked information content in the auditory nerve is quite poor, especially because the richness of the temporal fine structure of the neural pattern is replaced by a firing pattern that is strongly synchronized with an artificial cycle duration. Improvement in hearing performance is expected from incorporating more of the ingenious strategies developed during evolution.

  17. Decreasing auditory Simon effects across reaction time distributions.

    PubMed

    Xiong, Aiping; Proctor, Robert W

    2016-01-01

    The Simon effect for left-right visual stimuli previously has been shown to decrease across the reaction time (RT) distribution. This decrease has been attributed to automatic activation of the corresponding response, which then dissipates over time. In contrast, for left-right tone stimuli, the Simon effect has not been found to decrease across the RT distribution but instead tends to increase. It has been proposed that automatic activation occurs through visuomotor information transmission, whereas the auditory Simon effect reflects cognitive coding interference and not automatic activation. In 4 experiments, we examined distributions of the auditory Simon effect for RT, percentage error (PE), and an inverse efficiency score [IES = RT/(1 - PE)] as a function of tone frequency and duration to determine whether the activation-dissipation account is also applicable to auditory stimuli. Consistent decreasing functions were found for the RT Simon effect distribution with short-duration tones of low frequency and for the PE and IES Simon effect distributions for all durations and frequency sets. Together, these findings provide robust evidence that left and right auditory stimuli also produce decreasing Simon effect distribution functions suggestive of automatic activation and dissipation of the corresponding response.
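
    The inverse efficiency score used in the study combines speed and accuracy in a single number; a worked example with invented reaction-time and error values:

```python
def inverse_efficiency(rt_ms, pe):
    """IES = RT / (1 - PE): penalizes fast but error-prone responding.

    rt_ms is mean reaction time in milliseconds; pe is the proportion
    of errors (0 <= pe < 1).
    """
    return rt_ms / (1.0 - pe)

# Same mean RT, but 10% errors inflates the score more than 2% errors.
print(inverse_efficiency(450.0, 0.02), inverse_efficiency(450.0, 0.10))
```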

  18. Time course of dynamic range adaptation in the auditory nerve

    PubMed Central

    Wang, Grace I.; Dean, Isabel; Delgutte, Bertrand

    2012-01-01

    Auditory adaptation to sound-level statistics occurs as early as in the auditory nerve (AN), the first stage of neural auditory processing. In addition to firing rate adaptation characterized by a rate decrement dependent on previous spike activity, AN fibers show dynamic range adaptation, which is characterized by a shift of the rate-level function or dynamic range toward the most frequently occurring levels in a dynamic stimulus, thereby improving the precision of coding of the most common sound levels (Wen B, Wang GI, Dean I, Delgutte B. J Neurosci 29: 13797–13808, 2009). We investigated the time course of dynamic range adaptation by recording from AN fibers with a stimulus in which the sound levels periodically switch from one nonuniform level distribution to another (Dean I, Robinson BL, Harper NS, McAlpine D. J Neurosci 28: 6430–6438, 2008). Dynamic range adaptation occurred rapidly, but its exact time course was difficult to determine directly from the data because of the concomitant firing rate adaptation. To characterize the time course of dynamic range adaptation without the confound of firing rate adaptation, we developed a phenomenological “dual adaptation” model that accounts for both forms of AN adaptation. When fitted to the data, the model predicts that dynamic range adaptation occurs as rapidly as firing rate adaptation, over 100–400 ms, and the time constants of the two forms of adaptation are correlated. These findings suggest that adaptive processing in the auditory periphery in response to changes in mean sound level occurs rapidly enough to have significant impact on the coding of natural sounds. PMID:22457465
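
    A deliberately simplified sketch of the adaptation dynamics described above (a single exponential relaxation with an invented time constant, not the fitted "dual adaptation" model): after a switch in mean sound level, the midpoint of the rate-level function relaxes toward the new level with a time constant in the reported 100-400 ms range.

```python
import math

def exp_adapt(t_s, start, target, tau_s):
    """Exponential relaxation from start toward target after a switch at t=0."""
    return target + (start - target) * math.exp(-t_s / tau_s)

# Dynamic-range midpoint shifting from 40 to 60 dB SPL with tau = 200 ms.
midpoints = [exp_adapt(t / 1000.0, 40.0, 60.0, 0.2)
             for t in (0, 100, 200, 400, 1000)]
print([round(m, 1) for m in midpoints])
```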

  19. Temporal Codes for Amplitude Contrast in Auditory Cortex

    PubMed Central

    Scott, Brian H; Semple, Malcolm N.; Malone, Brian J.

    2010-01-01

    The encoding of sound level is fundamental to auditory signal processing, and the temporal information present in amplitude modulation is crucial to the complex signals used for communication sounds, including human speech. The modulation transfer function, which measures the minimum detectable modulation depth across modulation frequency, has been shown to predict speech intelligibility performance in a range of adverse listening conditions and hearing impairments, and even for users of cochlear implants. We presented sinusoidally amplitude modulated (SAM) tones of varying modulation depths to awake macaque monkeys while measuring the responses of neurons in the auditory core. Using spike train classification methods, we found that thresholds for modulation depth detection and discrimination in the most sensitive units are comparable to psychophysical thresholds when precise temporal discharge patterns rather than average firing rates are considered. Moreover, spike timing information was also superior to average rate information when discriminating static pure tones varying in level but with similar envelopes. The limited utility of average firing rate information in many units also limited the utility of standard measures of sound level tuning, such as the rate level function (RLF), in predicting cortical responses to dynamic signals like SAM. Response modulation typically exceeded that predicted by the slope of the RLF by large factors. The decoupling of the cortical encoding of SAM and static tones indicates that enhancing the representation of acoustic contrast is a cardinal feature of the ascending auditory pathway. PMID:20071542

  20. Interdependence of spatial and temporal coding in the auditory midbrain.

    PubMed

    Koch, U; Grothe, B

    2000-04-01

    To date, most physiological studies that investigated binaural auditory processing have addressed the topic rather exclusively in the context of sound localization. However, there is strong psychophysical evidence that binaural processing serves more than sound localization alone. This raises the question of how binaural processing of spatial cues interacts with cues important for feature detection. The temporal structure of a sound is one such feature important for sound recognition. As a first approach, we investigated the influence of binaural cues on temporal processing in the mammalian auditory system. Here, we present evidence that binaural cues, namely interaural intensity differences (IIDs), have profound effects on filter properties for stimulus periodicity of auditory midbrain neurons in the echolocating big brown bat, Eptesicus fuscus. Our data indicate that these effects are partially due to changes in strength and timing of binaural inhibitory inputs. We measured filter characteristics for the periodicity (modulation frequency) of sinusoidally frequency modulated sounds (SFM) under different binaural conditions. As criteria, we used 50% filter cutoff frequencies of modulation transfer functions based on discharge rate as well as synchronicity of discharge to the sound envelope. The binaural conditions were contralateral stimulation only, equal stimulation at both ears (IID = 0 dB), and more intense at the ipsilateral ear (IID = -20, -30 dB). In 32% of neurons, the range of modulation frequencies the neurons responded to changed considerably when comparing monaural and binaural (IID = 0 dB) stimulation. Moreover, in approximately 50% of neurons the range of modulation frequencies was narrower when the ipsilateral ear was favored (IID = -20 dB) compared with equal stimulation at both ears (IID = 0 dB). In approximately 10% of the neurons, synchronization differed when comparing different binaural cues. Blockade of the GABAergic or glycinergic inputs to the cells recorded
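
    The 50% filter cutoff criterion used in this study can be sketched in code: given a rate-based modulation transfer function, find where the discharge rate falls to half its peak, interpolating between sampled modulation frequencies. The function name and the data below are illustrative, not from the paper.

```python
def mtf_cutoff(mod_freqs, rates, criterion=0.5):
    """Estimate the modulation frequency at which the discharge rate
    falls to `criterion` * peak rate, by linear interpolation.
    `mod_freqs` (Hz) must be sorted ascending."""
    peak = max(rates)
    target = criterion * peak
    # scan past the peak for the first crossing below the criterion
    start = rates.index(peak)
    for i in range(start, len(rates) - 1):
        r0, r1 = rates[i], rates[i + 1]
        if r0 >= target > r1:
            f0, f1 = mod_freqs[i], mod_freqs[i + 1]
            # linear interpolation between the two sampled points
            return f0 + (f1 - f0) * (r0 - target) / (r0 - r1)
    return None  # no crossing: the neuron is not low-pass in this range

# hypothetical low-pass rate MTF (spikes/s vs. SFM modulation frequency, Hz)
freqs = [10, 20, 50, 100, 200, 400]
rates = [80, 85, 90, 60, 30, 10]
print(mtf_cutoff(freqs, rates))  # -> 150.0
```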

  1. Auditory Speech Perception Tests in Relation to the Coding Strategy in Cochlear Implant

    PubMed Central

    Bazon, Aline Cristine; Mantello, Erika Barioni; Gonçales, Alina Sanches; Isaac, Myriam de Lima; Hyppolito, Miguel Angelo; Reis, Ana Cláudia Mirândola Barbosa

    2015-01-01

    Introduction  The objective of evaluating the auditory perception of cochlear implant users is to determine how the acoustic signal is processed, leading to the recognition and understanding of sound. Objective  To investigate differences in auditory speech perception in individuals with postlingual hearing loss wearing a cochlear implant, using two different speech coding strategies, and to analyze speech perception and handicap perception in relation to the strategy used. Methods  This is a prospective, cross-sectional, descriptive cohort study. We selected ten cochlear implant users, who were characterized by hearing threshold, by speech perception tests, and by the Hearing Handicap Inventory for Adults. Results  There was no significant difference in the variables subject age, age at acquisition of hearing loss, etiology, time of hearing deprivation, time of cochlear implant use, or mean hearing threshold with the cochlear implant when the speech coding strategy was changed. There was no relationship between lack of handicap perception and improvement in speech perception with either speech coding strategy. Conclusion  There was no significant difference between the strategies evaluated, and no relation was observed between them and the variables studied. PMID:27413409

  2. Auditory spatial attention using interaural time differences.

    PubMed

    Sach, A J; Hill, N I; Bailey, P J

    2000-04-01

    Previous probe-signal studies of auditory spatial attention have shown faster responses to sounds at an expected versus an unexpected location, making no distinction between the use of interaural time difference (ITD) cues and interaural-level difference cues. In 5 experiments, performance on a same-different spatial discrimination task was used in place of the reaction time metric, and sounds, presented over headphones, were lateralized only by an ITD. In all experiments, performance was better for signals lateralized on the expected side of the head, supporting the conclusion that ITDs can be used as a basis for covert orienting. The performance advantage generalized to all sounds within the spatial focus and was not dissipated by a trial-by-trial rove in frequency or by a rove in spectral profile. Successful use by the listeners of a cross-modal, centrally positioned visual cue provided evidence for top-down attentional control.
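
    Lateralizing a headphone stimulus purely by an ITD, as in these experiments, amounts to delaying one channel relative to the other; a coincidence-detector (cross-correlation) model can then recover the imposed delay. A minimal sketch with assumed parameters (tone frequency, sampling rate, and lag range are arbitrary choices):

```python
import math

def make_binaural(freq_hz, fs, dur_s, itd_s):
    """Tone lateralized purely by an interaural time difference:
    the right channel lags the left by itd_s seconds."""
    n = int(fs * dur_s)
    left = [math.sin(2 * math.pi * freq_hz * t / fs) for t in range(n)]
    right = [math.sin(2 * math.pi * freq_hz * (t / fs - itd_s)) for t in range(n)]
    return left, right

def best_itd(left, right, fs, max_lag=20):
    """Cross-correlate the two ears and return the lag (in seconds)
    that maximizes the match -- a Jeffress-style coincidence readout."""
    def corr(lag):
        return sum(left[i] * right[i + lag]
                   for i in range(max_lag, len(left) - max_lag))
    return max(range(-max_lag, max_lag + 1), key=corr) / fs

fs = 20000
left, right = make_binaural(500, fs, 0.05, itd_s=0.0005)  # 500 us ITD
print(best_itd(left, right, fs))  # recovered ITD, seconds
```

For a pure tone the recovered lag is ambiguous modulo the tone period, which is why the lag search window is kept narrower than one period here.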

  3. Intensity-Invariant Coding in the Auditory System

    PubMed Central

    Barbour, Dennis L.

    2011-01-01

    The auditory system faithfully represents sufficient details from sound sources such that downstream cognitive processes are capable of acting upon this information effectively even in the face of signal uncertainty, degradation or interference. This robust sound source representation leads to an invariance in perception vital for animals to interact effectively with their environment. Due to unique nonlinearities in the cochlea, sound representations early in the auditory system exhibit a large amount of variability as a function of stimulus intensity. In other words, changes in stimulus intensity, such as for sound sources at differing distances, create a unique challenge for the auditory system to encode sounds invariantly across the intensity dimension. This challenge and some strategies available to sensory systems to eliminate intensity as an encoding variable are discussed, with a special emphasis upon sound encoding. PMID:21540053

  4. IRIG Serial Time Code Formats

    DTIC Science & Technology

    2016-08-01

    Acronyms: µs = microsecond (10⁻⁶ s); BCD = binary coded decimal; BIH = Bureau International de l’Heure; CF = control function; d = day; dc = direct... codes contain control functions (CFs) that are reserved for encoding various controls, identification, and other special-purpose functions. Time... set of CF bits for the encoding of various control, identification, and other special-purpose functions. The control bits may be programmed in any
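
    IRIG serial codes carry time-of-day as binary coded decimal (BCD) digit fields alongside the control function (CF) bits mentioned above. A minimal sketch of BCD packing follows; it is illustrative only and does not reproduce the actual IRIG frame layout, position identifiers, or index markers:

```python
def to_bcd(value, digits):
    """Encode a decimal value as a list of 4-bit BCD digits,
    least-significant digit first (e.g. 59 -> [9, 5])."""
    assert 0 <= value < 10 ** digits
    fields = []
    for _ in range(digits):
        fields.append(value % 10)
        value //= 10
    return fields

def encode_hms(hours, minutes, seconds):
    """Pack h/m/s into BCD digit fields as a flat bit list,
    least-significant bit first within each digit."""
    bits = []
    for val, ndig in ((seconds, 2), (minutes, 2), (hours, 2)):
        for digit in to_bcd(val, ndig):
            bits.extend((digit >> b) & 1 for b in range(4))
    return bits

frame = encode_hms(12, 34, 56)
print(len(frame))  # -> 24 (six BCD digits, 4 bits each)
```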

  5. Norepinephrine Modulates Coding of Complex Vocalizations in the Songbird Auditory Cortex Independent of Local Neuroestrogen Synthesis.

    PubMed

    Ikeda, Maaya Z; Jeon, Sung David; Cowell, Rosemary A; Remage-Healey, Luke

    2015-06-24

    The catecholamine norepinephrine plays a significant role in auditory processing. Most studies to date have examined the effects of norepinephrine on the neuronal response to relatively simple stimuli, such as tones and calls. It is less clear how norepinephrine shapes the detection of complex syntactical sounds, as well as the coding properties of sensory neurons. Songbirds provide an opportunity to understand how auditory neurons encode complex, learned vocalizations, and the potential role of norepinephrine in modulating the neuronal computations for acoustic communication. Here, we infused norepinephrine into the zebra finch auditory cortex and performed extracellular recordings to study the modulation of song representations in single neurons. Consistent with its proposed role in enhancing signal detection, norepinephrine decreased spontaneous activity and firing during stimuli, yet it significantly enhanced the auditory signal-to-noise ratio. These effects were all mimicked by clonidine, an α-2 receptor agonist. Moreover, a pattern classifier analysis indicated that norepinephrine enhanced the ability of single neurons to accurately encode complex auditory stimuli. Because neuroestrogens are also known to enhance auditory processing in the songbird brain, we tested the hypothesis that norepinephrine actions depend on local estrogen synthesis. Neither norepinephrine nor adrenergic receptor antagonist infusion into the auditory cortex had detectable effects on local estradiol levels. Moreover, pretreatment with fadrozole, a specific aromatase inhibitor, did not block norepinephrine's neuromodulatory effects. Together, these findings indicate that norepinephrine enhances signal detection and information encoding for complex auditory stimuli by suppressing spontaneous "noise" activity and that these actions are independent of local neuroestrogen synthesis.

  6. Differential coding of conspecific vocalizations in the ventral auditory cortical stream.

    PubMed

    Fukushima, Makoto; Saunders, Richard C; Leopold, David A; Mishkin, Mortimer; Averbeck, Bruno B

    2014-03-26

    The mammalian auditory cortex integrates spectral and temporal acoustic features to support the perception of complex sounds, including conspecific vocalizations. Here we investigate coding of vocal stimuli in different subfields in macaque auditory cortex. Using chronically implanted high-density microelectrocorticographic arrays, we simultaneously measured auditory evoked potentials over a large swath of primary and higher-order auditory cortex along the supratemporal plane in three animals. To evaluate the capacity of neural activity to discriminate individual stimuli in these high-dimensional datasets, we applied a regularized multivariate classifier to the evoked potentials elicited by conspecific vocalizations. We found a gradual decrease in the level of overall classification performance along the caudal to rostral axis. Furthermore, the performance in the caudal sectors was similar across individual stimuli, whereas the performance in the rostral sectors significantly differed for different stimuli. Moreover, the information about vocalizations in the caudal sectors was similar to the information about synthetic stimuli that contained only the spectral or temporal features of the original vocalizations. In the rostral sectors, however, the classification for vocalizations was significantly better than that for the synthetic stimuli, suggesting that conjoined spectral and temporal features were necessary to explain differential coding of vocalizations in the rostral areas. We also found that this coding in the rostral sector was carried primarily in the theta frequency band of the response. These findings illustrate a progression in neural coding of conspecific vocalizations along the ventral auditory pathway.

  7. Predictive coding of multisensory timing

    PubMed Central

    Shi, Zhuanghua; Burr, David

    2016-01-01

    The sense of time is foundational for perception and action, yet it frequently departs significantly from physical time. In this paper we review recent progress on temporal contextual effects, multisensory temporal integration, temporal recalibration, and related computational models. We suggest that subjective time arises from minimizing prediction errors and adaptive recalibration, which can be unified within predictive coding, a framework rooted in Helmholtz’s ‘perception as inference’. PMID:27695705
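
    The claim that subjective time arises from minimizing prediction errors plus adaptive recalibration can be illustrated with a toy delta-rule model: repeated exposure to a constant audiovisual lag shifts an internal prediction toward that lag, so the perceived (residual) lag shrinks. The learning rate and lag values below are assumptions for illustration, not parameters from the review:

```python
def recalibrate(observed_lags, learning_rate=0.2, prediction=0.0):
    """Toy temporal recalibration: the perceived lag is the prediction
    error, and the prediction is nudged toward each observation
    (a delta rule), so repeated exposure compresses perceived lag."""
    perceived = []
    for lag in observed_lags:
        error = lag - prediction             # prediction error
        perceived.append(error)              # what the observer "feels"
        prediction += learning_rate * error  # adaptive recalibration
    return perceived

# constant 100 ms audiovisual lag over 10 exposures
felt = recalibrate([100.0] * 10)
print(round(felt[0]), round(felt[-1]))  # -> 100 13
```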

  8. GOES satellite time code dissemination

    NASA Technical Reports Server (NTRS)

    Beehler, R. E.

    1983-01-01

    The GOES time code system, the performance achieved to date, and some potential improvements in the future are discussed. The disseminated time code is originated from a triply redundant set of atomic standards, time code generators and related equipment maintained by NBS at NOAA's Wallops Island, VA satellite control facility. It is relayed by two GOES satellites located at 75 W and 135 W longitude on a continuous basis to users within North and South America (with overlapping coverage) and well out into the Atlantic and Pacific ocean areas. Downlink frequencies are near 468 MHz. The signals from both satellites are monitored and controlled from the NBS labs at Boulder, CO with additional monitoring input from geographically separated receivers in Washington, D.C. and Hawaii. Performance experience with the received time codes for periods ranging from several years to one day is discussed. Results are also presented for simultaneous, common-view reception by co-located receivers and by receivers separated by several thousand kilometers.
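
    The common-view comparisons mentioned at the end work because each station records (local clock − received time code); differencing two stations' simultaneous measurements cancels the satellite's own time code error and any shared path delay. A sketch of the arithmetic with invented numbers (all offsets hypothetical, in microseconds):

```python
def common_view_difference(meas_a, meas_b):
    """Each measurement is (local clock - received satellite time code).
    Differencing cancels the satellite's clock error, leaving the
    station clock difference A - B plus any unshared path delay."""
    return meas_a - meas_b

sat_error = 3.0               # satellite time code offset (cancels out)
clock_a, clock_b = 5.0, -2.0  # true station clock offsets vs. UTC
meas_a = clock_a - sat_error  # what station A records
meas_b = clock_b - sat_error  # what station B records
print(common_view_difference(meas_a, meas_b))  # -> 7.0, i.e. A - B
```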

  9. Sensorineural hearing loss amplifies neural coding of envelope information in the central auditory system of chinchillas

    PubMed Central

    Zhong, Ziwei; Henry, Kenneth S.; Heinz, Michael G.

    2014-01-01

    People with sensorineural hearing loss often have substantial difficulty understanding speech under challenging listening conditions. Behavioral studies suggest that reduced sensitivity to the temporal structure of sound may be responsible, but underlying neurophysiological pathologies are incompletely understood. Here, we investigate the effects of noise-induced hearing loss on coding of envelope (ENV) structure in the central auditory system of anesthetized chinchillas. ENV coding was evaluated noninvasively using auditory evoked potentials recorded from the scalp surface in response to sinusoidally amplitude modulated tones with carrier frequencies of 1, 2, 4, and 8 kHz and a modulation frequency of 140 Hz. Stimuli were presented in quiet and in three levels of white background noise. The latency of scalp-recorded ENV responses was consistent with generation in the auditory midbrain. Hearing loss amplified neural coding of ENV at carrier frequencies of 2 kHz and above. This result may reflect enhanced ENV coding from the periphery and/or an increase in the gain of central auditory neurons. In contrast to expectations, hearing loss was not associated with a stronger adverse effect of increasing masker intensity on ENV coding. The exaggerated neural representation of ENV information shown here at the level of the auditory midbrain helps to explain previous findings of enhanced sensitivity to amplitude modulation in people with hearing loss under some conditions. Furthermore, amplified ENV coding may potentially contribute to speech perception problems in people with cochlear hearing loss by acting as a distraction from more salient acoustic cues, particularly in fluctuating backgrounds. PMID:24315815

  10. Sensorineural hearing loss amplifies neural coding of envelope information in the central auditory system of chinchillas.

    PubMed

    Zhong, Ziwei; Henry, Kenneth S; Heinz, Michael G

    2014-03-01

    People with sensorineural hearing loss often have substantial difficulty understanding speech under challenging listening conditions. Behavioral studies suggest that reduced sensitivity to the temporal structure of sound may be responsible, but underlying neurophysiological pathologies are incompletely understood. Here, we investigate the effects of noise-induced hearing loss on coding of envelope (ENV) structure in the central auditory system of anesthetized chinchillas. ENV coding was evaluated noninvasively using auditory evoked potentials recorded from the scalp surface in response to sinusoidally amplitude modulated tones with carrier frequencies of 1, 2, 4, and 8 kHz and a modulation frequency of 140 Hz. Stimuli were presented in quiet and in three levels of white background noise. The latency of scalp-recorded ENV responses was consistent with generation in the auditory midbrain. Hearing loss amplified neural coding of ENV at carrier frequencies of 2 kHz and above. This result may reflect enhanced ENV coding from the periphery and/or an increase in the gain of central auditory neurons. In contrast to expectations, hearing loss was not associated with a stronger adverse effect of increasing masker intensity on ENV coding. The exaggerated neural representation of ENV information shown here at the level of the auditory midbrain helps to explain previous findings of enhanced sensitivity to amplitude modulation in people with hearing loss under some conditions. Furthermore, amplified ENV coding may potentially contribute to speech perception problems in people with cochlear hearing loss by acting as a distraction from more salient acoustic cues, particularly in fluctuating backgrounds.
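
    The sinusoidally amplitude-modulated (SAM) tones used in items 9 and 10 can be synthesized as a carrier multiplied by a raised-sine envelope, y(t) = (1 + m·sin 2πf_m·t)·sin 2πf_c·t. A minimal sketch (the sampling rate and duration are arbitrary choices, not stated in the abstract):

```python
import math

def sam_tone(fc, fm, depth, fs, dur_s):
    """Sinusoidally amplitude-modulated tone:
    y(t) = (1 + depth*sin(2*pi*fm*t)) * sin(2*pi*fc*t)."""
    n = int(fs * dur_s)
    return [(1 + depth * math.sin(2 * math.pi * fm * t / fs))
            * math.sin(2 * math.pi * fc * t / fs) for t in range(n)]

# 4 kHz carrier, 140 Hz modulation, as in the chinchilla study
y = sam_tone(fc=4000, fm=140, depth=1.0, fs=48000, dur_s=0.5)
print(max(abs(v) for v in y) <= 1 + 1.0)  # envelope peaks at 1 + depth
```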

  11. Burst Firing is a Neural Code in an Insect Auditory System

    PubMed Central

    Eyherabide, Hugo G.; Rokem, Ariel; Herz, Andreas V. M.; Samengo, Inés

    2008-01-01

    Various classes of neurons alternate between high-frequency discharges and silent intervals. This phenomenon is called burst firing. To analyze burst activity in an insect system, grasshopper auditory receptor neurons were recorded in vivo for several distinct stimulus types. The experimental data show that both burst probability and burst characteristics are strongly influenced by temporal modulations of the acoustic stimulus. The tendency to burst, hence, is not only determined by cell-intrinsic processes, but also by their interaction with the stimulus time course. We study this interaction quantitatively and observe that bursts containing a certain number of spikes occur shortly after stimulus deflections of specific intensity and duration. Our findings suggest a sparse neural code where information about the stimulus is represented by the number of spikes per burst, irrespective of the detailed interspike-interval structure within a burst. This compact representation cannot be interpreted as a firing-rate code. An information-theoretical analysis reveals that the number of spikes per burst reliably conveys information about the amplitude and duration of sound transients, whereas their time of occurrence is reflected by the burst onset time. The investigated neurons encode almost half of the total transmitted information in burst activity. PMID:18946533
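
    A spikes-per-burst code like the one proposed here presupposes a way to segment a spike train into bursts; a common approach (an assumption here, not necessarily the authors' exact method) is an interspike-interval threshold:

```python
def spikes_per_burst(spike_times, isi_threshold):
    """Group spikes into bursts: consecutive spikes whose interspike
    interval is below isi_threshold belong to the same burst.
    Returns the spike count of each burst (isolated spikes count as 1)."""
    if not spike_times:
        return []
    counts = [1]
    for prev, cur in zip(spike_times, spike_times[1:]):
        if cur - prev < isi_threshold:
            counts[-1] += 1   # continue the current burst
        else:
            counts.append(1)  # start a new burst
    return counts

# spike times in ms; 5 ms intra-burst ISI criterion (hypothetical)
times = [10, 12, 14, 50, 52, 100]
print(spikes_per_burst(times, isi_threshold=5))  # -> [3, 2, 1]
```

Under the coding scheme described in the abstract, the per-burst counts (here 3, 2, 1) would carry the stimulus information, while the burst onset times mark when transients occurred.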

  12. Efficient coding of spectrotemporal binaural sounds leads to emergence of the auditory space representation.

    PubMed

    Młynarski, Wiktor

    2014-01-01

    To date a number of studies have shown that receptive field shapes of early sensory neurons can be reproduced by optimizing coding efficiency of natural stimulus ensembles. A still unresolved question is whether the efficient coding hypothesis explains formation of neurons which explicitly represent environmental features of different functional importance. This paper proposes that the spatial selectivity of higher auditory neurons emerges as a direct consequence of learning efficient codes for natural binaural sounds. Firstly, it is demonstrated that a linear efficient coding transform, Independent Component Analysis (ICA), trained on spectrograms of naturalistic simulated binaural sounds extracts spatial information present in the signal. A simple hierarchical ICA extension allowing for decoding of sound position is proposed. Furthermore, it is shown that units revealing spatial selectivity can be learned from a binaural recording of a natural auditory scene. In both cases a relatively small subpopulation of learned spectrogram features suffices to perform accurate sound localization. Representation of the auditory space is therefore learned in a purely unsupervised way by maximizing the coding efficiency and without any task-specific constraints. These results imply that efficient coding is a useful strategy for learning structures which allow for making behaviorally vital inferences about the environment.

  13. Efficient coding of spectrotemporal binaural sounds leads to emergence of the auditory space representation

    PubMed Central

    Młynarski, Wiktor

    2014-01-01

    To date a number of studies have shown that receptive field shapes of early sensory neurons can be reproduced by optimizing coding efficiency of natural stimulus ensembles. A still unresolved question is whether the efficient coding hypothesis explains formation of neurons which explicitly represent environmental features of different functional importance. This paper proposes that the spatial selectivity of higher auditory neurons emerges as a direct consequence of learning efficient codes for natural binaural sounds. Firstly, it is demonstrated that a linear efficient coding transform—Independent Component Analysis (ICA) trained on spectrograms of naturalistic simulated binaural sounds extracts spatial information present in the signal. A simple hierarchical ICA extension allowing for decoding of sound position is proposed. Furthermore, it is shown that units revealing spatial selectivity can be learned from a binaural recording of a natural auditory scene. In both cases a relatively small subpopulation of learned spectrogram features suffices to perform accurate sound localization. Representation of the auditory space is therefore learned in a purely unsupervised way by maximizing the coding efficiency and without any task-specific constraints. These results imply that efficient coding is a useful strategy for learning structures which allow for making behaviorally vital inferences about the environment. PMID:24639644

  14. Coding of signals in noise by amphibian auditory nerve fibers.

    PubMed

    Narins, P M

    1987-01-01

    Rate-level (R-L) functions were obtained from auditory nerve fibers in the treefrog, Eleutherodactylus coqui, for pure tones and for pure tones in broadband noise. Normalized R-L functions for low-frequency, low-threshold fibers exhibit a horizontal rightward shift in the presence of broadband background noise. The magnitude of this shift is directly proportional to the noise spectrum level and inversely proportional to the fiber's threshold. R-L functions for mid- and high-frequency fibers also show a horizontal shift, but to a lesser degree, consistent with their elevated thresholds relative to the low-frequency fibers. The implications of these findings for the processing of biologically significant sounds amid the high levels of background noise in the animal's natural habitat are considered.

  15. Interactive coding of visual spatial frequency and auditory amplitude-modulation rate.

    PubMed

    Guzman-Martinez, Emmanuel; Ortega, Laura; Grabowecky, Marcia; Mossbridge, Julia; Suzuki, Satoru

    2012-03-06

    Spatial frequency is a fundamental visual feature coded in primary visual cortex, relevant for perceiving textures, objects, hierarchical structures, and scenes, as well as for directing attention and eye movements. Temporal amplitude-modulation (AM) rate is a fundamental auditory feature coded in primary auditory cortex, relevant for perceiving auditory objects, scenes, and speech. Spatial frequency and temporal AM rate are thus fundamental building blocks of visual and auditory perception. Recent results suggest that crossmodal interactions are commonplace across the primary sensory cortices and that some of the underlying neural associations develop through consistent multisensory experience such as audio-visually perceiving speech, gender, and objects. We demonstrate that people consistently and absolutely (rather than relatively) match specific auditory AM rates to specific visual spatial frequencies. We further demonstrate that this crossmodal mapping allows amplitude-modulated sounds to guide attention to and modulate awareness of specific visual spatial frequencies. Additional results show that the crossmodal association is approximately linear, based on physical spatial frequency, and generalizes to tactile pulses, suggesting that the association develops through multisensory experience during manual exploration of surfaces. Copyright © 2012 Elsevier Ltd. All rights reserved.

  16. Representations of Time-Varying Cochlear Implant Stimulation in Auditory Cortex of Awake Marmosets (Callithrix jacchus).

    PubMed

    Johnson, Luke A; Della Santina, Charles C; Wang, Xiaoqin

    2017-07-19

    Electrical stimulation of the peripheral auditory organ by a cochlear implant (CI) generates highly synchronized inputs to the auditory system. It has long been thought that such inputs would lead to highly synchronized neural firing along the ascending auditory pathway. However, neurophysiological studies with hearing animals have shown that the central auditory system progressively converts temporal representations of time-varying sounds to firing rate-based representations. It is not clear whether this coding principle also applies to highly synchronized CI inputs. Higher-frequency modulations in CI stimulation have been found to evoke largely transient responses with little sustained firing in previous studies of the primary auditory cortex (A1) in anesthetized animals. Here, we show that, in addition to neurons displaying synchronized firing to CI stimuli, a large population of A1 neurons in awake marmosets (Callithrix jacchus) responded to rapid time-varying CI stimulation with discharges that were not synchronized to CI stimuli, yet reflected changing repetition frequency by increased firing rate. Marmosets of both sexes were included in this study. By directly comparing each neuron's responses to time-varying acoustic and CI signals, we found that individual A1 neurons encode both modalities with similar firing patterns (stimulus-synchronized or nonsynchronized). These findings suggest that A1 neurons use the same basic coding schemes to represent time-varying acoustic or CI stimulation and provide new insights into mechanisms underlying how the brain processes natural sounds via a CI device. SIGNIFICANCE STATEMENT In modern cochlear implant (CI) processors, the temporal information in speech or environmental sounds is delivered through modulated electric pulse trains. How the auditory cortex represents temporally modulated CI stimulation across multiple time scales has remained largely unclear. In this study, we directly compared neuronal responses in primary
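
    The synchronized-versus-nonsynchronized distinction drawn in this abstract is conventionally quantified by vector strength, VS = |Σ_k e^{iθ_k}| / N, where θ_k is each spike's phase within the stimulus modulation cycle; values near 1 indicate stimulus-synchronized firing and values near 0 indicate none. A minimal sketch (this is the standard metric, not necessarily the study's exact analysis):

```python
import math

def vector_strength(spike_times_s, mod_freq_hz):
    """Vector strength of spike times relative to a modulation frequency:
    1.0 = perfectly phase-locked, near 0 = unsynchronized."""
    n = len(spike_times_s)
    x = sum(math.cos(2 * math.pi * mod_freq_hz * t) for t in spike_times_s)
    y = sum(math.sin(2 * math.pi * mod_freq_hz * t) for t in spike_times_s)
    return math.hypot(x, y) / n

fm = 100.0  # modulation (repetition) frequency, Hz
locked = [k / fm for k in range(50)]  # one spike per cycle, fixed phase
print(round(vector_strength(locked, fm), 3))  # -> 1.0
```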

  17. Norepinephrine Modulates Coding of Complex Vocalizations in the Songbird Auditory Cortex Independent of Local Neuroestrogen Synthesis

    PubMed Central

    Ikeda, Maaya Z.; Jeon, Sung David; Cowell, Rosemary A.

    2015-01-01

    The catecholamine norepinephrine plays a significant role in auditory processing. Most studies to date have examined the effects of norepinephrine on the neuronal response to relatively simple stimuli, such as tones and calls. It is less clear how norepinephrine shapes the detection of complex syntactical sounds, as well as the coding properties of sensory neurons. Songbirds provide an opportunity to understand how auditory neurons encode complex, learned vocalizations, and the potential role of norepinephrine in modulating the neuronal computations for acoustic communication. Here, we infused norepinephrine into the zebra finch auditory cortex and performed extracellular recordings to study the modulation of song representations in single neurons. Consistent with its proposed role in enhancing signal detection, norepinephrine decreased spontaneous activity and firing during stimuli, yet it significantly enhanced the auditory signal-to-noise ratio. These effects were all mimicked by clonidine, an α-2 receptor agonist. Moreover, a pattern classifier analysis indicated that norepinephrine enhanced the ability of single neurons to accurately encode complex auditory stimuli. Because neuroestrogens are also known to enhance auditory processing in the songbird brain, we tested the hypothesis that norepinephrine actions depend on local estrogen synthesis. Neither norepinephrine nor adrenergic receptor antagonist infusion into the auditory cortex had detectable effects on local estradiol levels. Moreover, pretreatment with fadrozole, a specific aromatase inhibitor, did not block norepinephrine's neuromodulatory effects. Together, these findings indicate that norepinephrine enhances signal detection and information encoding for complex auditory stimuli by suppressing spontaneous “noise” activity and that these actions are independent of local neuroestrogen synthesis. PMID:26109659

  18. A rate code for sound azimuth in monkey auditory cortex: implications for human neuroimaging studies.

    PubMed

    Werner-Reiss, Uri; Groh, Jennifer M

    2008-04-02

    Is sound location represented in the auditory cortex of humans and monkeys? Human neuroimaging experiments have had only mixed success at demonstrating sound location sensitivity in primary auditory cortex. This is in apparent conflict with studies in monkeys and other animals, in which single-unit recording studies have found stronger evidence for spatial sensitivity. Does this apparent discrepancy reflect a difference between humans and animals, or does it reflect differences in the sensitivity of the methods used for assessing the representation of sound location? The sensitivity of imaging methods such as functional magnetic resonance imaging depends on the following two key aspects of the underlying neuronal population: (1) what kind of spatial sensitivity individual neurons exhibit and (2) whether neurons with similar response preferences are clustered within the brain. To address this question, we conducted a single-unit recording study in monkeys. First, we investigated the nature of spatial sensitivity in individual auditory cortical neurons to determine whether they have receptive fields (place code) or monotonic (rate code) sensitivity to sound azimuth. Second, we tested how strongly the population of neurons favors contralateral locations. We report here that the majority of neurons show predominantly monotonic azimuthal sensitivity, forming a rate code for sound azimuth, but that at the population level the degree of contralaterality is modest. This suggests that the weakness of the evidence for spatial sensitivity in human neuroimaging studies of auditory cortex may be attributable to limited lateralization at the population level, despite what may be considerable spatial sensitivity in individual neurons.

  19. Auditory information coding by modeled cochlear nucleus neurons.

    PubMed

    Wang, Huan; Isik, Michael; Borst, Alexander; Hemmert, Werner

    2011-06-01

    In this paper we use information theory to quantify the information in the output spike trains of modeled cochlear nucleus globular bushy cells (GBCs). GBCs are part of the sound localization pathway. They are known for their precise temporal processing, and they code amplitude modulations with high fidelity. Here we investigated the information transmission for a natural sound, a recorded vowel. We conclude that the maximum information transmission rate for a single neuron was close to 1,050 bits/s, which corresponds to a value of approximately 5.8 bits per spike. For quasi-periodic signals like voiced speech, the transmitted information saturated as word duration increased. In general, approximately 80% of the available information from the spike trains was transmitted within about 20 ms. Transmitted information for speech signals concentrated around formant frequency regions. The efficiency of neural coding was above 60% up to the highest temporal resolution we investigated (20 μs). The increase in transmitted information to that precision indicates that these neurons are able to code information with extremely high fidelity, which is required for sound localization. On the other hand, only 20% of the information was captured when the temporal resolution was reduced to 4 ms. As the temporal resolution of most speech recognition systems is limited to less than 10 ms, this massive information loss might be one of the reasons for the lack of noise robustness of these systems.
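
    The bits-per-spike figure follows from dividing the information rate by the mean firing rate (1,050 bits/s at roughly 181 spikes/s gives about 5.8 bits per spike). Direct-method estimates of this kind start from the entropy of binary spike "words" at a given temporal resolution; the sketch below computes that word entropy rate on toy spike trains (the paper's actual estimator, which also subtracts noise entropy, is not reproduced):

```python
import math
import random
from collections import Counter

def entropy_rate(spike_train, word_len, dt_s):
    """Direct-method sketch: entropy of non-overlapping binary spike
    'words' of word_len bins, converted to bits per second
    (dt_s = bin width in seconds)."""
    words = [tuple(spike_train[i:i + word_len])
             for i in range(0, len(spike_train) - word_len + 1, word_len)]
    counts = Counter(words)
    total = sum(counts.values())
    h_bits = -sum((c / total) * math.log2(c / total)
                  for c in counts.values())
    return h_bits / (word_len * dt_s)  # bits per second

random.seed(0)
noisy = [1 if random.random() < 0.5 else 0 for _ in range(4000)]
regular = [1, 0] * 2000       # perfectly periodic train: zero entropy
dt = 20e-6                    # 20 us bins, the finest resolution cited
print(entropy_rate(regular, 8, dt) < entropy_rate(noisy, 8, dt))  # -> True
```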

  20. The effect of real-time auditory feedback on learning new characters.

    PubMed

    Danna, Jérémy; Fontaine, Maureen; Paz-Villagrán, Vietminh; Gondre, Charles; Thoret, Etienne; Aramaki, Mitsuko; Kronland-Martinet, Richard; Ystad, Sølvi; Velay, Jean-Luc

    2015-10-01

    The present study investigated the effect of handwriting sonification on graphomotor learning. Thirty-two adults, distributed in two groups, learned four new characters with their non-dominant hand. The experimental design included a pre-test, a training session, and two post-tests, one just after the training session and another 24 h later. Two characters were learned with and two without real-time auditory feedback (FB). The first group first learned the two non-sonified characters and then the two sonified characters, whereas the reverse order was adopted for the second group. Results revealed that auditory FB improved the speed and fluency of handwriting movements but reduced, in the short term only, the spatial accuracy of the trace. Transforming kinematic variables into sounds allows the writer to perceive his/her movement in addition to the written trace, and this might facilitate handwriting learning. However, for the subjects who first learned the characters with auditory FB, there were no differential effects of auditory FB, either short-term or long-term. We hypothesize that the positive effect on the handwriting kinematics was transferred to characters learned without FB. This transfer effect of the auditory FB is discussed in light of the Theory of Event Coding.

  1. Auditory-motor entrainment and phonological skills: precise auditory timing hypothesis (PATH)

    PubMed Central

    Tierney, Adam; Kraus, Nina

    2014-01-01

    Phonological skills are enhanced by music training, but the mechanisms enabling this cross-domain enhancement remain unknown. To explain this cross-domain transfer, we propose a precise auditory timing hypothesis (PATH) whereby entrainment practice is the core mechanism underlying enhanced phonological abilities in musicians. Both rhythmic synchronization and language skills such as consonant discrimination, detection of word and phrase boundaries, and conversational turn-taking rely on the perception of extremely fine-grained timing details in sound. Auditory-motor timing is an acoustic feature which meets all five of the pre-conditions necessary for cross-domain enhancement to occur (Patel, 2011, 2012, 2014). There is overlap between the neural networks that process timing in the context of both music and language. Entrainment to music demands more precise timing sensitivity than does language processing. Moreover, auditory-motor timing integration captures the emotion of the trainee, is repeatedly practiced, and demands focused attention. The PATH predicts that musical training emphasizing entrainment will be particularly effective in enhancing phonological skills. PMID:25505879

  2. Visual Timing of Structured Dance Movements Resembles Auditory Rhythm Perception.

    PubMed

    Su, Yi-Huang; Salazar-López, Elvira

    2016-01-01

    Temporal mechanisms for processing auditory musical rhythms are well established, in which a perceived beat is beneficial for timing purposes. It is yet unknown whether such beat-based timing would also underlie visual perception of temporally structured, ecological stimuli connected to music: dance. In this study, we investigated whether observers extracted a visual beat when watching dance movements to assist visual timing of these movements. Participants watched silent videos of dance sequences and reproduced the movement duration by mental recall. We found better visual timing for limb movements with regular patterns in the trajectories than without, similar to the beat advantage for auditory rhythms. When movements involved both the arms and the legs, the benefit of a visual beat relied only on the latter. The beat-based advantage persisted despite auditory interferences that were temporally incongruent with the visual beat, arguing for the visual nature of these mechanisms. Our results suggest that visual timing principles for dance parallel their auditory counterparts for music, which may be based on common sensorimotor coupling. These processes likely yield multimodal rhythm representations in the scenario of music and dance.

  4. Odors Bias Time Perception in Visual and Auditory Modalities

    PubMed Central

    Yue, Zhenzhu; Gao, Tianyu; Chen, Lihan; Wu, Jiashuang

    2016-01-01

    Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in visual and auditory modalities. Participants either saw a visual dot or heard a continuous tone for 1000 or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for both visual and auditory modalities. The analysis of the interaction effect showed that participants produced longer durations than the actual duration in the short interval condition, but they produced shorter durations in the long interval condition. The effect sizes were larger for the auditory modality than those for the visual modality. Moreover, by comparing performance across the initial and the final blocks of the experiment, we found that odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, and there was a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, and they were constrained by different sensory modalities, valence of the emotional events, and target durations. Biases in time perception could be accounted for by a framework of

  6. Auditory training improves neural timing in the human brainstem.

    PubMed

    Russo, Nicole M; Nicol, Trent G; Zecker, Steven G; Hayes, Erin A; Kraus, Nina

    2005-01-06

    The auditory brainstem response reflects neural encoding of the acoustic characteristics of a speech syllable with remarkable precision. Some children with learning impairments demonstrate abnormalities in this preconscious measure of neural encoding, especially in background noise. This study investigated whether auditory training targeted to remediate perceptually based learning problems would alter the neural brainstem encoding of the acoustic sound structure of speech in such children. Nine subjects, clinically diagnosed with a language-based learning problem (e.g., dyslexia), worked with auditory perceptual training software. Prior to beginning and within three months after completing the training program, brainstem responses to the syllable /da/ were recorded in quiet and background noise. Subjects underwent additional auditory neurophysiological, perceptual, and cognitive testing. Ten control subjects, who did not participate in any remediation program, underwent the same battery of tests at time intervals equivalent to the trained subjects. Transient and sustained (frequency-following response) components of the brainstem response were evaluated. The primary pathway afferent volley (neural events occurring earlier than 11 ms after stimulus onset) did not demonstrate plasticity. However, quiet-to-noise inter-response correlations of the sustained response (approximately 11-50 ms) increased significantly in the trained children, reflecting improved stimulus encoding precision, whereas control subjects did not exhibit this change. Thus, auditory training can alter the preconscious neural encoding of complex sounds by improving neural synchrony in the auditory brainstem. Additionally, several measures of brainstem response timing were related to changes in cortical physiology, as well as perceptual, academic, and cognitive measures from pre- to post-training.
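
The quiet-to-noise inter-response correlation used as the outcome measure above can be computed as a Pearson correlation between the two brainstem waveforms, restricted to the sustained-response window. A minimal sketch, assuming an illustrative sampling rate and a synthetic stand-in waveform:

```python
import numpy as np

def quiet_to_noise_correlation(resp_quiet, resp_noise, fs, t0=0.011, t1=0.050):
    """Pearson correlation between brainstem responses recorded in quiet
    and in background noise, restricted to the sustained-response window
    (approximately 11-50 ms after stimulus onset, as in the study)."""
    i0, i1 = int(t0 * fs), int(t1 * fs)
    return np.corrcoef(resp_quiet[i0:i1], resp_noise[i0:i1])[0, 1]

# Illustrative check: a noise-degraded copy of the same response still
# correlates highly with the quiet response.
fs = 10000
t = np.arange(0, 0.06, 1 / fs)
quiet = np.sin(2 * np.pi * 100 * t)  # stand-in frequency-following response
noisy = quiet + 0.3 * np.random.default_rng(0).standard_normal(t.size)
r = quiet_to_noise_correlation(quiet, noisy, fs)
```

Higher values of `r` indicate that background noise degrades the response waveform less, which is the sense in which training "improved stimulus encoding precision."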

  7. Context-dependent coding and gain control in the auditory system of crickets.

    PubMed

    Clemens, Jan; Rau, Florian; Hennig, R Matthias; Hildebrandt, K Jannis

    2015-10-01

    Sensory systems process stimuli that greatly vary in intensity and complexity. To maintain efficient information transmission, neural systems need to adjust their properties to these different sensory contexts, yielding adaptive or stimulus-dependent codes. Here, we demonstrated adaptive spectrotemporal tuning in a small neural network, i.e. the peripheral auditory system of the cricket. We found that tuning of cricket auditory neurons was sharper for complex multi-band than for simple single-band stimuli. Information theoretical considerations revealed that this sharpening improved information transmission by separating the neural representations of individual stimulus components. A network model inspired by the structure of the cricket auditory system suggested two putative mechanisms underlying this adaptive tuning: a saturating peripheral nonlinearity could change the spectral tuning, whereas broad feed-forward inhibition was able to reproduce the observed adaptive sharpening of temporal tuning. Our study revealed a surprisingly dynamic code usually found in more complex nervous systems and suggested that stimulus-dependent codes could be implemented using common neural computations.
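
One of the two candidate mechanisms named above, broad feed-forward inhibition sharpening temporal tuning, can be sketched as subtracting a low-pass copy of the excitatory drive from itself. The kernel shape, weight, and time constant below are illustrative assumptions, not parameters from the cricket model.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def feedforward_inhibition(drive, w=0.8, tau=20):
    """Subtract a low-pass (slow, broad) copy of the drive, mimicking
    feed-forward inhibition; the output becomes transient, i.e. temporal
    tuning is sharpened. `w` and `tau` (in samples) are illustrative."""
    kernel = np.exp(-np.arange(5 * tau) / tau)
    kernel /= kernel.sum()
    inhibition = np.convolve(drive, kernel)[: drive.size]
    return relu(drive - w * inhibition)

# A step of excitatory drive: the inhibited response peaks at onset and
# then decays toward a much smaller sustained level.
drive = np.zeros(300)
drive[50:] = 1.0
out = feedforward_inhibition(drive)
```

Because the inhibition builds up slowly, sustained input is suppressed while transients pass through, which is one way a fixed circuit can yield the stimulus-dependent sharpening the abstract describes.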

  8. Speech Compensation for Time-Scale-Modified Auditory Feedback

    ERIC Educational Resources Information Center

    Ogane, Rintaro; Honda, Masaaki

    2014-01-01

    Purpose: The purpose of this study was to examine speech compensation in response to time-scale-modified auditory feedback during the transition of the semivowel for a target utterance of /ija/. Method: Each utterance session consisted of 10 control trials in the normal feedback condition followed by 20 perturbed trials in the modified auditory…

  10. Unanesthetized auditory cortex exhibits multiple codes for gaps in cochlear implant pulse trains.

    PubMed

    Kirby, Alana E; Middlebrooks, John C

    2012-02-01

    Cochlear implant listeners receive auditory stimulation through amplitude-modulated electric pulse trains. Auditory nerve studies in animals demonstrate qualitatively different patterns of firing elicited by low versus high pulse rates, suggesting that stimulus pulse rate might influence the transmission of temporal information through the auditory pathway. We tested in awake guinea pigs the temporal acuity of auditory cortical neurons for gaps in cochlear implant pulse trains. Consistent with results using anesthetized conditions, temporal acuity improved with increasing pulse rates. Unlike the anesthetized condition, however, cortical neurons responded in the awake state to multiple distinct features of the gap-containing pulse trains, with the dominant features varying with stimulus pulse rate. Responses to the onset of the trailing pulse train (Trail-ON) provided the most sensitive gap detection at 1,017 and 4,069 pulse-per-second (pps) rates, particularly for short (25 ms) leading pulse trains. In contrast, under conditions of 254 pps rate and long (200 ms) leading pulse trains, a sizeable fraction of units demonstrated greater temporal acuity in the form of robust responses to the offsets of the leading pulse train (Lead-OFF). Finally, TONIC responses exhibited decrements in firing rate during gaps, but were rarely the most sensitive feature. Unlike results from anesthetized conditions, temporal acuity of the most sensitive units was nearly as sharp for brief as for long leading bursts. The differences in stimulus coding across pulse rates likely originate from pulse rate-dependent variations in adaptation in the auditory nerve. Two marked differences from responses to acoustic stimulation were: first, Trail-ON responses to 4,069 pps trains encoded substantially shorter gaps than have been observed with acoustic stimuli; and second, the Lead-OFF gap coding seen for <15 ms gaps in 254 pps stimuli is not seen in responses to sounds. The current results may help
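
The stimulus structure described above, a leading pulse train, a silent gap, and a trailing train, is straightforward to construct. A minimal sketch with an assumed sampling rate; the parameter values shown match figures quoted in the abstract but the implementation details are illustrative.

```python
import numpy as np

def gap_pulse_train(rate_pps, lead_ms, gap_ms, trail_ms, fs=100000):
    """Binary pulse train with a silent gap: leading train, gap,
    trailing train. Durations are in milliseconds."""
    def train(dur_ms):
        s = np.zeros(int(dur_ms / 1000 * fs))
        s[:: int(fs / rate_pps)] = 1.0  # one sample-wide pulse per period
        return s
    gap = np.zeros(int(gap_ms / 1000 * fs))
    return np.concatenate([train(lead_ms), gap, train(trail_ms)])

# e.g. a 254 pps train with a 200 ms lead, 10 ms gap, and 200 ms trail,
# one of the low-rate / long-lead conditions favoring Lead-OFF responses.
stim = gap_pulse_train(254, 200, 10, 200)
```

Varying `rate_pps`, `lead_ms`, and `gap_ms` reproduces the conditions contrasted in the study (short vs long leads, low vs high rates, graded gap durations).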

  11. The Time Course of Neural Changes Underlying Auditory Perceptual Learning

    PubMed Central

    Atienza, Mercedes; Cantero, Jose L.; Dominguez-Marin, Elena

    2002-01-01

    Improvement in perception takes place within the training session and from one session to the next. The present study aims at determining the time course of perceptual learning as revealed by changes in auditory event-related potentials (ERPs) reflecting preattentive processes. Subjects were trained to discriminate two complex auditory patterns in a single session. ERPs were recorded just before and after training, while subjects read a book and ignored stimulation. ERPs showed a negative wave called mismatch negativity (MMN)—which indexes automatic detection of a change in a homogeneous auditory sequence—just after subjects learned to consciously discriminate the two patterns. ERPs were recorded again 12, 24, 36, and 48 h later, just before testing performance on the discrimination task. Additional behavioral and neurophysiological changes were found several hours after the training session: an enhanced P2 at 24 h followed by shorter reaction times, and an enhanced MMN at 36 h. These results indicate that gains in performance on the discrimination of two complex auditory patterns are accompanied by different learning-dependent neurophysiological events evolving within different time frames, supporting the hypothesis that fast and slow neural changes underlie the acquisition of improved perception. PMID:12075002
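
The mismatch negativity indexed above is, computationally, a difference wave: the average deviant-evoked response minus the average standard-evoked response. A minimal sketch on toy epochs; the waveform shapes, latency, and noise level are illustrative assumptions, not the recorded data.

```python
import numpy as np

def mismatch_negativity(standard_erps, deviant_erps):
    """MMN difference wave: deviant-minus-standard average ERP.
    Inputs are (trials x samples) arrays of single-trial epochs."""
    return deviant_erps.mean(axis=0) - standard_erps.mean(axis=0)

# Toy epochs: deviants carry an extra negativity around 150 ms.
rng = np.random.default_rng(1)
fs = 500
t = np.arange(0, 0.4, 1 / fs)
base = np.sin(2 * np.pi * 5 * t)                 # shared slow ERP shape
std = base + 0.1 * rng.standard_normal((100, t.size))
dev = (base
       - 0.5 * np.exp(-((t - 0.15) ** 2) / (2 * 0.02 ** 2))  # deviance response
       + 0.1 * rng.standard_normal((100, t.size)))
mmn = mismatch_negativity(std, dev)
```

The emergence (or growth) of a negative deflection in `mmn` after training is the kind of preattentive change the study tracked across the 12-48 h retest sessions.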

  12. Development of Visuo-Auditory Integration in Space and Time

    PubMed Central

    Gori, Monica; Sandini, Giulio; Burr, David

    2012-01-01

    Adults integrate multisensory information optimally (e.g., Ernst and Banks, 2002), while children do not integrate multisensory visual-haptic cues until 8–10 years of age (e.g., Gori et al., 2008). Before that age strong unisensory dominance occurs for size and orientation visual-haptic judgments, possibly reflecting a process of cross-sensory calibration between modalities. It is widely recognized that audition dominates time perception, while vision dominates space perception. Within the framework of the cross-sensory calibration hypothesis, we investigate visual-auditory integration in both space and time with child-friendly spatial and temporal bisection tasks. Unimodal and bimodal (conflictual and not) audio-visual thresholds and PSEs were measured and compared with the Bayesian predictions. In the temporal domain, we found that both in children and adults, audition dominates the bimodal visuo-auditory task both in perceived time and precision thresholds. On the contrary, in the visual-auditory spatial task, children younger than 12 years of age show clear visual dominance (for PSEs), and bimodal thresholds higher than the Bayesian prediction. Only in the adult group did bimodal thresholds become optimal. In agreement with previous studies, our results suggest that adult-like visual-auditory behavior also develops late. We suggest that the visual dominance for space and the auditory dominance for time could reflect a cross-sensory comparison of vision in the spatial visuo-audio task and a cross-sensory comparison of audition in the temporal visuo-audio task. PMID:23060759

  13. Quantifying envelope and fine-structure coding in auditory nerve responses to chimaeric speech.

    PubMed

    Heinz, Michael G; Swaminathan, Jayaganesh

    2009-09-01

    Any sound can be separated mathematically into a slowly varying envelope and rapidly varying fine-structure component. This property has motivated numerous perceptual studies to understand the relative importance of each component for speech and music perception. Specialized acoustic stimuli, such as auditory chimaeras with the envelope of one sound and fine structure of another, have been used to separate the perceptual roles for envelope and fine structure. Cochlear narrowband filtering limits the ability to isolate fine structure from envelope; however, envelope recovery from fine structure has been difficult to evaluate physiologically. To evaluate envelope recovery at the output of the cochlea, neural cross-correlation coefficients were developed that quantify the similarity between two sets of spike-train responses. Shuffled auto- and cross-correlogram analyses were used to compute separate correlations for responses to envelope and fine structure based on both model and recorded spike trains from auditory nerve fibers. Previous correlogram analyses were extended to isolate envelope coding more effectively in auditory nerve fibers with low center frequencies, which are particularly important for speech coding. Recovered speech envelopes were present in both model and recorded responses to one- and 16-band speech fine-structure chimaeras and were significantly greater for the one-band case, consistent with perceptual studies. Model predictions suggest that cochlear recovered envelopes are reduced following sensorineural hearing loss due to broadened tuning associated with outer-hair cell dysfunction. In addition to the within-fiber cross-stimulus cases considered here, these neural cross-correlation coefficients can also be used to evaluate spatiotemporal coding by applying them to cross-fiber within-stimulus conditions. Thus, these neural metrics can be used to quantitatively evaluate a wide range of perceptually significant temporal coding issues relevant to
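
The idea of quantifying similarity between two sets of spike-train responses can be illustrated with a deliberately simplified stand-in: correlating trial-averaged PSTHs. The published metric is built from shuffled auto- and cross-correlograms and additionally corrects for within-train structure; the version below, and the toy Poisson responses, are illustrative assumptions only.

```python
import numpy as np

def neural_similarity(resp_a, resp_b):
    """Correlation between trial-averaged PSTHs (trials x bins arrays).
    A simplified stand-in for the shuffled-correlogram-based neural
    cross-correlation coefficient described in the abstract."""
    return np.corrcoef(resp_a.mean(axis=0), resp_b.mean(axis=0))[0, 1]

# Toy responses: Poisson spike counts driven by one of two envelopes.
rng = np.random.default_rng(2)
bins = np.arange(200)
env_a = 0.5 * (1 + np.sin(2 * np.pi * bins / 50))
env_b = 0.5 * (1 + np.cos(2 * np.pi * bins / 50))

def resp(env):
    return rng.poisson(0.5 * env, size=(200, env.size))

same = neural_similarity(resp(env_a), resp(env_a))        # same stimulus
different = neural_similarity(resp(env_a), resp(env_b))   # different stimulus
```

Responses sharing a stimulus envelope score high; responses to different envelopes score near zero, which is the comparison logic the coefficients formalize.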

  14. Speech compensation for time-scale-modified auditory feedback.

    PubMed

    Ogane, Rintaro; Honda, Masaaki

    2014-04-01

    Purpose: The purpose of this study was to examine speech compensation in response to time-scale-modified auditory feedback during the transition of the semivowel for a target utterance of /ija/. Method: Each utterance session consisted of 10 control trials in the normal feedback condition followed by 20 perturbed trials in the modified auditory feedback condition and 10 return trials in the normal feedback condition. The authors examined speech compensation and the aftereffect in terms of 3 acoustic features: the maximum velocities on the (a) F1 and (b) F2 trajectories (VF1 and VF2) and (c) the F1-F2 onset time difference (TD) during the transition. They also conducted a syllable perception test on the feedback speech. Results: Speech compensation was observed in VF1, VF2, and TD. The magnitudes of speech compensation in VF1 and TD monotonically increased as the amount of the time-scale perturbation increased. The amount of speech compensation increased as the phonemic perception change increased. Conclusions: Speech compensation for time-scale-modified auditory feedback is carried out primarily by changing VF1 and secondarily by adjusting VF2 and TD. Furthermore, it is activated primarily by detecting the speed change in altered feedback speech and secondarily by detecting the phonemic categorical change.

  15. Auditory objects of attention: the role of interaural time differences.

    PubMed

    Darwin, C J; Hukin, R W

    1999-06-01

    The role of interaural time difference (ITD) in perceptual grouping and selective attention was explored in 3 experiments. Experiment 1 showed that listeners can use small differences in ITD between 2 sentences to say which of 2 short, constant target words was part of the attended sentence, in the absence of talker or fundamental frequency differences. Experiments 2 and 3 showed that listeners do not explicitly track components that share a common ITD. Their inability to segregate a harmonic from a target vowel by a difference in ITD was not substantially changed by the vowel being placed in a sentence context, where the sentence shared the same ITD as the rest of the vowel. The results indicate that in following a particular auditory sound source over time, listeners attend to perceived auditory objects at particular azimuthal positions rather than attend explicitly to those frequency components that share a common ITD.

  16. Getting back on the beat: links between auditory-motor integration and precise auditory processing at fast time scales.

    PubMed

    Tierney, Adam; Kraus, Nina

    2016-03-01

    The auditory system is unique in its ability to precisely detect the timing of perceptual events and use this information to update motor plans, a skill that is crucial for language. However, the characteristics of the auditory system that enable this temporal precision are only beginning to be understood. Previous work has shown that participants who can tap consistently to a metronome have neural responses to sound with greater phase coherence from trial to trial. We hypothesized that this relationship is driven by a link between the updating of motor output by auditory feedback and neural precision. Moreover, we hypothesized that neural phase coherence at both fast time scales (reflecting subcortical processing) and slow time scales (reflecting cortical processing) would be linked to auditory-motor timing integration. To test these hypotheses, we asked participants to synchronize to a pacing stimulus, and then changed either the tempo or the timing of the stimulus to assess whether they could rapidly adapt. Participants who could rapidly and accurately resume synchronization had neural responses to sound with greater phase coherence. However, this precise timing was limited to the time scale of 10 ms (100 Hz) or faster; neural phase coherence at slower time scales was unrelated to performance on this task. Auditory-motor adaptation therefore specifically depends upon consistent auditory processing at fast, but not slow, time scales. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  17. Seasonal plasticity of precise spike timing in the avian auditory system.

    PubMed

    Caras, Melissa L; Sen, Kamal; Rubel, Edwin W; Brenowitz, Eliot A

    2015-02-25

    Vertebrate audition is a dynamic process, capable of exhibiting both short- and long-term adaptations to varying listening conditions. Precise spike timing has long been known to play an important role in auditory encoding, but its role in sensory plasticity remains largely unexplored. We addressed this issue in Gambel's white-crowned sparrow (Zonotrichia leucophrys gambelii), a songbird that shows pronounced seasonal fluctuations in circulating levels of sex-steroid hormones, which are known to be potent neuromodulators of auditory function. We recorded extracellular single-unit activity in the auditory forebrain of males and females under different breeding conditions and used a computational approach to explore two potential strategies for the neural discrimination of sound level: one based on spike counts and one based on spike timing reliability. We report that breeding condition has robust sex-specific effects on spike timing. Specifically, in females, breeding condition increases the proportion of cells that rely solely on spike timing information and increases the temporal resolution required for optimal intensity encoding. Furthermore, in a functionally distinct subset of cells that are particularly well suited for amplitude encoding, female breeding condition enhances spike timing-based discrimination accuracy. No effects of breeding condition were observed in males. Our results suggest that high-resolution temporal discharge patterns may provide a plastic neural substrate for sensory coding.
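
The two decoding strategies compared above, one based on spike counts and one on spike timing, can be contrasted on toy data. The simulated responses below are an illustrative construction (same mean count, opposite-phase temporal patterns), not the sparrow recordings, and the nearest-template decoders are a simplified stand-in for the study's computational analysis.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(level, n_trials=100, n_bins=100):
    """Toy responses: the two sound levels evoke the same mean spike
    count but opposite-phase temporal firing patterns."""
    phase = 0.0 if level == 0 else np.pi
    rate = 0.2 * (1 + np.sin(2 * np.pi * np.arange(n_bins) / 25 + phase))
    return rng.poisson(rate, size=(n_trials, n_bins))

def decode_by_count(trial, templates):
    """Pick the level whose mean total spike count is closest."""
    return int(np.argmin([abs(trial.sum() - t.mean(axis=0).sum())
                          for t in templates]))

def decode_by_timing(trial, templates):
    """Pick the level whose trial-averaged PSTH is closest bin by bin."""
    return int(np.argmin([np.sum((trial - t.mean(axis=0)) ** 2)
                          for t in templates]))

templates = [simulate(0), simulate(1)]
probe = simulate(0, n_trials=50)
acc_timing = np.mean([decode_by_timing(tr, templates) == 0 for tr in probe])
acc_count = np.mean([decode_by_count(tr, templates) == 0 for tr in probe])
```

When level information lives only in the temporal discharge pattern, the count decoder sits near chance while the timing decoder succeeds, mirroring the cells that "rely solely on spike timing information."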

  19. Auditory cortical field coding long-lasting tonal offsets in mice

    PubMed Central

    Baba, Hironori; Tsukano, Hiroaki; Hishida, Ryuichi; Takahashi, Kuniyuki; Horii, Arata; Takahashi, Sugata; Shibuki, Katsuei

    2016-01-01

    Although temporal information processing is important in auditory perception, the mechanisms for coding tonal offsets are unknown. We investigated cortical responses elicited at the offset of tonal stimuli using flavoprotein fluorescence imaging in mice. Off-responses were clearly observed at the offset of tonal stimuli lasting for 7 s, but not after stimuli lasting for 1 s. Off-responses to the short stimuli appeared in a similar cortical region, when conditioning tonal stimuli lasting for 5–20 s preceded the stimuli. MK-801, an inhibitor of NMDA receptors, suppressed the two types of off-responses, suggesting that disinhibition produced by NMDA receptor-dependent synaptic depression might be involved in the off-responses. The peak off-responses were localized in a small region adjacent to the primary auditory cortex, and no frequency-dependent shift of the response peaks was found. Frequency matching of preceding tonal stimuli with short test stimuli was not required for inducing off-responses to short stimuli. Two-photon calcium imaging demonstrated significantly larger neuronal off-responses to stimuli lasting for 7 s in this field, compared with off-responses to stimuli lasting for 1 s. The present results indicate the presence of an auditory cortical field responding to long-lasting tonal offsets, possibly for temporal information processing. PMID:27687766

  20. The Neural Code for Auditory Space Depends on Sound Frequency and Head Size in an Optimal Manner

    PubMed Central

    Harper, Nicol S.; Scott, Brian H.; Semple, Malcolm N.; McAlpine, David

    2014-01-01

    A major cue to the location of a sound source is the interaural time difference (ITD), the difference in sound arrival time at the two ears. The neural representation of this auditory cue is unresolved. The classic model of ITD coding, dominant for a half-century, posits that the distribution of best ITDs (the ITD evoking a neuron’s maximal response) is unimodal and largely within the range of ITDs permitted by head size. This is often interpreted as a place code for source location. An alternative model, based on neurophysiology in small mammals, posits a bimodal distribution of best ITDs with exquisite sensitivity to ITDs generated by means of relative firing rates between the distributions. Recently, an optimal-coding model was proposed, unifying the disparate features of these two models under the framework of efficient coding by neural populations. The optimal-coding model predicts that distributions of best ITDs depend on head size and sound frequency: for high frequencies and large heads it resembles the classic model, for low frequencies and small head sizes it resembles the bimodal model. The optimal-coding model makes key, yet unobserved, predictions: for many species, including humans, both forms of neural representation are employed, depending on sound frequency. Furthermore, novel representations are predicted for intermediate frequencies. Here, we examine these predictions in neurophysiological data from five mammalian species: macaque, guinea pig, cat, gerbil and kangaroo rat. We present the first evidence supporting these untested predictions, and demonstrate that different representations appear to be employed at different sound frequencies in the same species. PMID:25372405
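
The "range of ITDs permitted by head size" invoked above can be made concrete with Woodworth's classic spherical-head approximation. The head radii below are illustrative round numbers, not measurements from the paper.

```python
import numpy as np

def woodworth_itd(azimuth_rad, head_radius_m, c=343.0):
    """Woodworth spherical-head approximation of the ITD (in seconds)
    for a distant source at a given azimuth (0 = straight ahead)."""
    return (head_radius_m / c) * (azimuth_rad + np.sin(azimuth_rad))

# Largest ITD each head permits (source at 90 degrees azimuth):
human_max = woodworth_itd(np.pi / 2, 0.0875)   # ~9 cm radius: ~655 us
gerbil_max = woodworth_itd(np.pi / 2, 0.015)   # ~1.5 cm radius: ~112 us
```

Because a small head compresses the whole physiological ITD range into a fraction of a sound cycle at low frequencies, the optimal-coding model predicts different best-ITD distributions for small and large heads, which is the dependence the study tests.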

  2. Adaptation to shifted interaural time differences changes encoding of sound location in human auditory cortex.

    PubMed

    Trapeau, Régis; Schönwiesner, Marc

    2015-09-01

The auditory system infers the location of sound sources from the processing of different acoustic cues. These cues change during development and when assistive hearing devices are worn. Previous studies have found behavioral recalibration to modified localization cues in human adults, but very little is known about the neural correlates and mechanisms of this plasticity. We equipped participants with digital devices, worn in the ear canal, that allowed us to delay sound input to one ear, and thus modify interaural time differences, a major cue for horizontal sound localization. Participants wore the digital earplugs continuously for nine days while engaged in day-to-day activities. Daily psychoacoustical testing showed rapid recalibration to the manipulation and confirmed that adults can adapt to shifted interaural time differences in their daily multisensory environment. High-resolution functional MRI scans performed before and after recalibration showed that recalibration was accompanied by changes in hemispheric lateralization of auditory cortex activity. These changes corresponded to a shift in spatial coding of sound direction comparable to the observed behavioral recalibration. Fitting the imaging results with a model of auditory spatial processing also revealed small shifts in voxel-wise spatial tuning within each hemisphere.
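The digital-earplug manipulation reduces to delaying one ear's input, which shifts every interaural time difference by the same amount. A minimal sketch of such a delay line (the function name and parameters are ours, not the authors' device firmware):

```python
import numpy as np

def delay_one_ear(signal, fs, delay_s):
    """Delay a monaural signal by delay_s seconds (zero-padded at the onset).

    Applying this to one ear's input shifts all interaural time
    differences by delay_s.
    """
    d = int(round(delay_s * fs))
    return np.concatenate([np.zeros(d), signal])[: len(signal)]

fs = 48000
sig = np.sin(2 * np.pi * 440 * np.arange(0, 0.01, 1 / fs))
delayed = delay_one_ear(sig, fs, 0.0005)   # 0.5 ms delay = 24 samples
```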

  3. Repetition suppression and expectation suppression are dissociable in time in early auditory evoked fields.

    PubMed

    Todorovic, Ana; de Lange, Floris P

    2012-09-26

    Repetition of a stimulus, as well as valid expectation that a stimulus will occur, both attenuate the neural response to it. These effects, repetition suppression and expectation suppression, are typically confounded in paradigms in which the nonrepeated stimulus is also relatively rare (e.g., in oddball blocks of mismatch negativity paradigms, or in repetition suppression paradigms with multiple repetitions before an alternation). However, recent hierarchical models of sensory processing inspire the hypothesis that the two might be separable in time, with repetition suppression occurring earlier, as a consequence of local transition probabilities, and suppression by expectation occurring later, as a consequence of learnt statistical regularities. Here we test this hypothesis in an auditory experiment by orthogonally manipulating stimulus repetition and stimulus expectation and, using magnetoencephalography, measuring the neural response over time in human subjects. We found that stimulus repetition (but not stimulus expectation) attenuates the early auditory response (40-60 ms), while stimulus expectation (but not stimulus repetition) attenuates the subsequent, intermediate stage of auditory processing (100-200 ms). These findings are well in line with hierarchical predictive coding models, which posit sequential stages of prediction error resolution, contingent on the level at which the hypothesis is generated.

  4. Neural evidence for predictive coding in auditory cortex during speech production.

    PubMed

    Okada, Kayoko; Matchin, William; Hickok, Gregory

    2017-04-10

Recent models of speech production suggest that motor commands generate forward predictions of the auditory consequences of those commands, that these forward predictions can be used to monitor and correct speech output, and that this system is hierarchically organized (Hickok, Houde, & Rong, Neuron, 69(3), 407-422, 2011; Pickering & Garrod, Behavioral and Brain Sciences, 36(4), 329-347, 2013). Recent psycholinguistic research has shown that internally generated speech (i.e., imagined speech) produces different types of errors than does overt speech (Oppenheim & Dell, Cognition, 106(1), 528-537, 2008; Oppenheim & Dell, Memory & Cognition, 38(8), 1147-1160, 2010). These studies suggest that articulated speech might involve predictive coding at additional levels relative to imagined speech. The current fMRI experiment investigates neural evidence of predictive coding in speech production. Twenty-four participants from UC Irvine were recruited for the study. Participants were scanned while they were visually presented with a sequence of words that they reproduced in sync with a visual metronome. On each trial, they were cued to either silently articulate the sequence or to imagine the sequence without overt articulation. As expected, silent articulation and imagined speech both engaged a left hemisphere network previously implicated in speech production. A contrast of silent articulation with imagined speech revealed greater activation for articulated speech in inferior frontal cortex, premotor cortex and the insula in the left hemisphere, consistent with greater articulatory load. Although both conditions were silent, this contrast also produced significantly greater activation in auditory cortex in dorsal superior temporal gyrus in both hemispheres. We suggest that these activations reflect forward predictions arising from additional levels of the perceptual/motor hierarchy that are involved in monitoring the intended speech output.

  5. A temporal predictive code for voice motor control: Evidence from ERP and behavioral responses to pitch-shifted auditory feedback.

    PubMed

    Behroozmand, Roozbeh; Sangtian, Stacey; Korzyukov, Oleg; Larson, Charles R

    2016-04-01

    The predictive coding model suggests that voice motor control is regulated by a process in which the mismatch (error) between feedforward predictions and sensory feedback is detected and used to correct vocal motor behavior. In this study, we investigated how predictions about timing of pitch perturbations in voice auditory feedback would modulate ERP and behavioral responses during vocal production. We designed six counterbalanced blocks in which a +100 cents pitch-shift stimulus perturbed voice auditory feedback during vowel sound vocalizations. In three blocks, there was a fixed delay (500, 750 or 1000 ms) between voice and pitch-shift stimulus onset (predictable), whereas in the other three blocks, stimulus onset delay was randomized between 500, 750 and 1000 ms (unpredictable). We found that subjects produced compensatory (opposing) vocal responses that started at 80 ms after the onset of the unpredictable stimuli. However, for predictable stimuli, subjects initiated vocal responses at 20 ms before and followed the direction of pitch shifts in voice feedback. Analysis of ERPs showed that the amplitudes of the N1 and P2 components were significantly reduced in response to predictable compared with unpredictable stimuli. These findings indicate that predictions about temporal features of sensory feedback can modulate vocal motor behavior. In the context of the predictive coding model, temporally-predictable stimuli are learned and reinforced by the internal feedforward system, and as indexed by the ERP suppression, the sensory feedback contribution is reduced for their processing. These findings provide new insights into the neural mechanisms of vocal production and motor control.
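The +100 cents perturbation used in this paradigm corresponds to a fixed frequency ratio. A quick sketch of the conversion (the helper names are ours):

```python
def cents_to_ratio(cents):
    """Convert a pitch shift in cents to a frequency ratio (1200 cents/octave)."""
    return 2.0 ** (cents / 1200.0)

def shift_pitch(f0_hz, cents):
    """Frequency after shifting f0_hz by the given number of cents."""
    return f0_hz * cents_to_ratio(cents)

# A +100 cent shift is one equal-tempered semitone (ratio ~1.0595), so a
# 200 Hz voice fundamental would be fed back near 211.9 Hz.
ratio = cents_to_ratio(100)
shifted = shift_pitch(200.0, 100)
```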

  6. A Temporal Predictive Code for Voice Motor Control: Evidence from ERP and Behavioral Responses to Pitch-shifted Auditory Feedback

    PubMed Central

    Behroozmand, Roozbeh; Sangtian, Stacey; Korzyukov, Oleg; Larson, Charles R.

    2016-01-01

    The predictive coding model suggests that voice motor control is regulated by a process in which the mismatch (error) between feedforward predictions and sensory feedback is detected and used to correct vocal motor behavior. In this study, we investigated how predictions about timing of pitch perturbations in voice auditory feedback would modulate ERP and behavioral responses during vocal production. We designed six counterbalanced blocks in which a +100 cents pitch-shift stimulus perturbed voice auditory feedback during vowel sound vocalizations. In three blocks, there was a fixed delay (500, 750 or 1000 ms) between voice and pitch-shift stimulus onset (predictable), whereas in the other three blocks, stimulus onset delay was randomized between 500, 750 and 1000 ms (unpredictable). We found that subjects produced compensatory (opposing) vocal responses that started at 80 ms after the onset of the unpredictable stimuli. However, for predictable stimuli, subjects initiated vocal responses at 20 ms before and followed the direction of pitch shifts in voice feedback. Analysis of ERPs showed that the amplitudes of the N1 and P2 components were significantly reduced in response to predictable compared with unpredictable stimuli. These findings indicate that predictions about temporal features of sensory feedback can modulate vocal motor behavior. In the context of the predictive coding model, temporally-predictable stimuli are learned and reinforced by the internal feedforward system, and as indexed by the ERP suppression, the sensory feedback contribution is reduced for their processing. These findings provide new insights into the neural mechanisms of vocal production and motor control. PMID:26835556

  7. Absence of both auditory evoked potentials and auditory percepts dependent on timing cues.

    PubMed

    Starr, A; McPherson, D; Patterson, J; Don, M; Luxford, W; Shannon, R; Sininger, Y; Tonakawa, L; Waring, M

    1991-06-01

    An 11-yr-old girl had an absence of sensory components of auditory evoked potentials (brainstem, middle and long-latency) to click and tone burst stimuli that she could clearly hear. Psychoacoustic tests revealed a marked impairment of those auditory perceptions dependent on temporal cues, that is, lateralization of binaural clicks, change of binaural masked threshold with changes in signal phase, binaural beats, detection of paired monaural clicks, monaural detection of a silent gap in a sound, and monaural threshold elevation for short duration tones. In contrast, auditory functions reflecting intensity or frequency discriminations (difference limens) were only minimally impaired. Pure tone audiometry showed a moderate (50 dB) bilateral hearing loss with a disproportionate severe loss of word intelligibility. Those auditory evoked potentials that were preserved included (1) cochlear microphonics reflecting hair cell activity; (2) cortical sustained potentials reflecting processing of slowly changing signals; and (3) long-latency cognitive components (P300, processing negativity) reflecting endogenous auditory cognitive processes. Both the evoked potential and perceptual deficits are attributed to changes in temporal encoding of acoustic signals perhaps occurring at the synapse between hair cell and eighth nerve dendrites. The results from this patient are discussed in relation to previously published cases with absent auditory evoked potentials and preserved hearing.

  8. Distinct Correlation Structure Supporting a Rate-Code for Sound Localization in the Owl’s Auditory Forebrain

    PubMed Central

    2017-01-01

    Abstract While a topographic map of auditory space exists in the vertebrate midbrain, it is absent in the forebrain. Yet, both brain regions are implicated in sound localization. The heterogeneous spatial tuning of adjacent sites in the forebrain compared to the midbrain reflects different underlying circuitries, which is expected to affect the correlation structure, i.e., signal (similarity of tuning) and noise (trial-by-trial variability) correlations. Recent studies have drawn attention to the impact of response correlations on the information readout from a neural population. We thus analyzed the correlation structure in midbrain and forebrain regions of the barn owl’s auditory system. Tetrodes were used to record in the midbrain and two forebrain regions, Field L and the downstream auditory arcopallium (AAr), in anesthetized owls. Nearby neurons in the midbrain showed high signal and noise correlations (RNCs), consistent with shared inputs. As previously reported, Field L was arranged in random clusters of similarly tuned neurons. Interestingly, AAr neurons displayed homogeneous monotonic azimuth tuning, while response variability of nearby neurons was significantly less correlated than the midbrain. Using a decoding approach, we demonstrate that low RNC in AAr restricts the potentially detrimental effect it can have on information, assuming a rate code proposed for mammalian sound localization. This study harnesses the power of correlation structure analysis to investigate the coding of auditory space. Our findings demonstrate distinct correlation structures in the auditory midbrain and forebrain, which would be beneficial for a rate-code framework for sound localization in the nontopographic forebrain representation of auditory space. PMID:28674698
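The signal/noise correlation distinction used above can be sketched numerically: signal correlation compares trial-averaged tuning curves across stimuli, while noise correlation compares trial-by-trial fluctuations around those averages. This is a generic illustration with simulated spike counts, not the paper's analysis code.

```python
import numpy as np

def signal_and_noise_correlation(resp_a, resp_b):
    """Signal and noise correlation between two neurons.

    resp_a, resp_b: spike counts with shape (n_stimuli, n_trials).
    Signal correlation: Pearson r between trial-averaged tuning curves.
    Noise correlation: Pearson r between per-trial deviations from those means.
    """
    tuning_a, tuning_b = resp_a.mean(axis=1), resp_b.mean(axis=1)
    r_signal = np.corrcoef(tuning_a, tuning_b)[0, 1]
    noise_a = (resp_a - tuning_a[:, None]).ravel()
    noise_b = (resp_b - tuning_b[:, None]).ravel()
    r_noise = np.corrcoef(noise_a, noise_b)[0, 1]
    return r_signal, r_noise

# Two neurons with identical azimuth tuning but independent variability:
# high signal correlation, near-zero noise correlation (AAr-like).
rng = np.random.default_rng(0)
tuning = np.linspace(2.0, 20.0, 8)                     # 8 azimuths
resp_a = tuning[:, None] + rng.normal(0, 1, (8, 200))
resp_b = tuning[:, None] + rng.normal(0, 1, (8, 200))
r_sig, r_noise = signal_and_noise_correlation(resp_a, resp_b)
```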

9. Predicted effects of sensorineural hearing loss on across-fiber envelope coding in the auditory nerve

    PubMed Central

    Swaminathan, Jayaganesh; Heinz, Michael G.

    2011-01-01

    Cross-channel envelope correlations are hypothesized to influence speech intelligibility, particularly in adverse conditions. Acoustic analyses suggest speech envelope correlations differ for syllabic and phonemic ranges of modulation frequency. The influence of cochlear filtering was examined here by predicting cross-channel envelope correlations in different speech modulation ranges for normal and impaired auditory-nerve (AN) responses. Neural cross-correlation coefficients quantified across-fiber envelope coding in syllabic (0–5 Hz), phonemic (5–64 Hz), and periodicity (64–300 Hz) modulation ranges. Spike trains were generated from a physiologically based AN model. Correlations were also computed using the model with selective hair-cell damage. Neural predictions revealed that envelope cross-correlation decreased with increased characteristic-frequency separation for all modulation ranges (with greater syllabic-envelope correlation than phonemic or periodicity). Syllabic envelope was highly correlated across many spectral channels, whereas phonemic and periodicity envelopes were correlated mainly between adjacent channels. Outer-hair-cell impairment increased the degree of cross-channel correlation for phonemic and periodicity ranges for speech in quiet and in noise, thereby reducing the number of independent neural information channels for envelope coding. In contrast, outer-hair-cell impairment was predicted to decrease cross-channel correlation for syllabic envelopes in noise, which may partially account for the reduced ability of hearing-impaired listeners to segregate speech in complex backgrounds. PMID:21682421
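The three modulation ranges can be illustrated by splitting an envelope with a crude FFT brick-wall filter before correlating channels. This is our own simplified stand-in: the paper computes neural cross-correlation coefficients from model spike trains, not from envelopes directly.

```python
import numpy as np

# Modulation ranges from the abstract (Hz).
BANDS = {"syllabic": (0.0, 5.0), "phonemic": (5.0, 64.0), "periodicity": (64.0, 300.0)}

def band_limit(env, fs, lo, hi):
    """Keep only modulation frequencies with lo <= f < hi (brick-wall FFT filter)."""
    spec = np.fft.rfft(env)
    freqs = np.fft.rfftfreq(len(env), 1 / fs)
    spec[(freqs < lo) | (freqs >= hi)] = 0.0
    return np.fft.irfft(spec, n=len(env))

def envelope_band_correlations(env1, env2, fs):
    """Cross-channel envelope correlation within each modulation range."""
    return {name: np.corrcoef(band_limit(env1, fs, lo, hi),
                              band_limit(env2, fs, lo, hi))[0, 1]
            for name, (lo, hi) in BANDS.items()}

# Two channels sharing the slow (2 Hz) and fast (100 Hz) components but with
# an inverted phonemic-range (20 Hz) component:
fs = 1000
t = np.arange(0, 2, 1 / fs)
env1 = np.sin(2*np.pi*2*t) + np.sin(2*np.pi*20*t) + 0.5*np.sin(2*np.pi*100*t)
env2 = np.sin(2*np.pi*2*t) - np.sin(2*np.pi*20*t) + 0.5*np.sin(2*np.pi*100*t)
corrs = envelope_band_correlations(env1, env2, fs)
```

The per-band correlations correctly separate the shared syllabic and periodicity components from the inverted phonemic one.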

  10. Associative learning shapes the neural code for stimulus magnitude in primary auditory cortex.

    PubMed

    Polley, Daniel B; Heiser, Marc A; Blake, David T; Schreiner, Christoph E; Merzenich, Michael M

    2004-11-16

    Since the dawn of experimental psychology, researchers have sought an understanding of the fundamental relationship between the amplitude of sensory stimuli and the magnitudes of their perceptual representations. Contemporary theories support the view that magnitude is encoded by a linear increase in firing rate established in the primary afferent pathways. In the present study, we have investigated sound intensity coding in the rat primary auditory cortex (AI) and describe its plasticity by following paired stimulus reinforcement and instrumental conditioning paradigms. In trained animals, population-response strengths in AI became more strongly nonlinear with increasing stimulus intensity. Individual AI responses became selective to more restricted ranges of sound intensities and, as a population, represented a broader range of preferred sound levels. These experiments demonstrate that the representation of stimulus magnitude can be powerfully reshaped by associative learning processes and suggest that the code for sound intensity within AI can be derived from intensity-tuned neurons that change, rather than simply increase, their firing rates in proportion to increases in sound intensity.

  11. Understanding auditory spectro-temporal receptive fields and their changes with input statistics by efficient coding principles.

    PubMed

    Zhao, Lingyun; Zhaoping, Li

    2011-08-01

    Spectro-temporal receptive fields (STRFs) have been widely used as linear approximations to the signal transform from sound spectrograms to neural responses along the auditory pathway. Their dependence on statistical attributes of the stimuli, such as sound intensity, is usually explained by nonlinear mechanisms and models. Here, we apply an efficient coding principle which has been successfully used to understand receptive fields in early stages of visual processing, in order to provide a computational understanding of the STRFs. According to this principle, STRFs result from an optimal tradeoff between maximizing the sensory information the brain receives, and minimizing the cost of the neural activities required to represent and transmit this information. Both terms depend on the statistical properties of the sensory inputs and the noise that corrupts them. The STRFs should therefore depend on the input power spectrum and the signal-to-noise ratio, which is assumed to increase with input intensity. We analytically derive the optimal STRFs when signal and noise are approximated as Gaussians. Under the constraint that they should be spectro-temporally local, the STRFs are predicted to adapt from being band-pass to low-pass filters as the input intensity reduces, or the input correlation becomes longer range in sound frequency or time. These predictions qualitatively match physiological observations. Our prediction as to how the STRFs should be determined by the input power spectrum could readily be tested, since this spectrum depends on the stimulus ensemble. The potentials and limitations of the efficient coding principle are discussed.
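The predicted band-pass to low-pass transition can be illustrated with a toy gain function for a 1/f²-like input spectrum: the product of a Wiener noise-suppression factor and a whitening factor. This is a simplified stand-in for the paper's analytically derived optimal STRFs; the particular exponent and spectra are our choices.

```python
import numpy as np

def toy_optimal_gain(freqs, noise_power):
    """Illustrative optimal filter gain for a Gaussian 1/f^2 signal spectrum.

    g(f) = S(f) / (S(f) + N)^(3/2) is the Wiener factor S/(S+N) times a
    whitening factor 1/sqrt(S+N). The gain peaks where S(f) = 2N, so
    lowering the SNR pushes the peak toward low frequencies (low-pass),
    while high SNR yields a band-pass-like rising gain.
    """
    S = 1.0 / freqs ** 2
    return S / (S + noise_power) ** 1.5

freqs = np.linspace(0.1, 50.0, 500)
peak_high_snr = freqs[np.argmax(toy_optimal_gain(freqs, 1e-4))]  # band-pass regime
peak_low_snr = freqs[np.argmax(toy_optimal_gain(freqs, 1.0))]    # low-pass regime
```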

  12. Auditory reafferences: the influence of real-time feedback on movement control

    PubMed Central

    Kennel, Christian; Streese, Lukas; Pizzera, Alexandra; Justen, Christoph; Hohmann, Tanja; Raab, Markus

    2015-01-01

    Auditory reafferences are real-time auditory products created by a person’s own movements. Whereas the interdependency of action and perception is generally well studied, the auditory feedback channel and the influence of perceptual processes during movement execution remain largely unconsidered. We argue that movements have a rhythmic character that is closely connected to sound, making it possible to manipulate auditory reafferences online to understand their role in motor control. We examined if step sounds, occurring as a by-product of running, have an influence on the performance of a complex movement task. Twenty participants completed a hurdling task in three auditory feedback conditions: a control condition with normal auditory feedback, a white noise condition in which sound was masked, and a delayed auditory feedback condition. Overall time and kinematic data were collected. Results show that delayed auditory feedback led to a significantly slower overall time and changed kinematic parameters. Our findings complement previous investigations in a natural movement situation with non-artificial auditory cues. Our results support the existing theoretical understanding of action–perception coupling and hold potential for applied work, where naturally occurring movement sounds can be implemented in the motor learning processes. PMID:25688230

  13. Coding of Visual, Auditory, Rule, and Response Information in the Brain: 10 Years of Multivoxel Pattern Analysis.

    PubMed

    Woolgar, Alexandra; Jackson, Jade; Duncan, John

    2016-10-01

    How is the processing of task information organized in the brain? Many views of brain function emphasize modularity, with different regions specialized for processing different types of information. However, recent accounts also highlight flexibility, pointing especially to the highly consistent pattern of frontoparietal activation across many tasks. Although early insights from functional imaging were based on overall activation levels during different cognitive operations, in the last decade many researchers have used multivoxel pattern analyses to interrogate the representational content of activations, mapping out the brain regions that make particular stimulus, rule, or response distinctions. Here, we drew on 100 searchlight decoding analyses from 57 published papers to characterize the information coded in different brain networks. The outcome was highly structured. Visual, auditory, and motor networks predominantly (but not exclusively) coded visual, auditory, and motor information, respectively. By contrast, the frontoparietal multiple-demand network was characterized by domain generality, coding visual, auditory, motor, and rule information. The contribution of the default mode network and voxels elsewhere was minor. The data suggest a balanced picture of brain organization in which sensory and motor networks are relatively specialized for information in their own domain, whereas a specific frontoparietal network acts as a domain-general "core" with the capacity to code many different aspects of a task.
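The decoding logic behind such searchlight analyses can be sketched with a minimal leave-one-out nearest-centroid classifier over voxel patterns. This is a generic illustration; the surveyed studies typically use SVMs or LDA within each searchlight.

```python
import numpy as np

def loo_nearest_centroid(patterns, labels):
    """Leave-one-out decoding accuracy with a nearest-centroid classifier.

    patterns: (n_samples, n_voxels) activity patterns; labels: (n_samples,).
    Above-chance accuracy indicates the voxels code the labeled distinction.
    """
    patterns, labels = np.asarray(patterns, float), np.asarray(labels)
    classes = np.unique(labels)
    correct = 0
    for i in range(len(labels)):
        train = np.arange(len(labels)) != i        # hold out sample i
        centroids = np.array([patterns[train & (labels == c)].mean(axis=0)
                              for c in classes])
        dists = np.linalg.norm(centroids - patterns[i], axis=1)
        correct += int(classes[np.argmin(dists)] == labels[i])
    return correct / len(labels)

# Demo: two stimulus classes with well-separated mean patterns in 10 voxels.
rng = np.random.default_rng(1)
a = rng.normal(0.0, 0.5, (20, 10))
b = rng.normal(3.0, 0.5, (20, 10))
acc = loo_nearest_centroid(np.vstack([a, b]), np.array([0] * 20 + [1] * 20))
```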

  14. Reaction time facilitation for horizontally moving auditory-visual stimuli.

    PubMed

    Harrison, Neil R; Wuerger, Sophie M; Meyer, Georg F

    2010-12-16

For moving targets, bimodal facilitation of reaction time has been observed for motion in the depth plane (C. Cappe, G. Thut, V. Romei, & M. M. Murray, 2009), but it is unclear whether analogous RT facilitation is observed for auditory-visual motion stimuli in the horizontal plane, as perception of horizontal motion relies on very different cues. Here we found that bimodal motion cues resulted in significant RT facilitation at threshold level, which could not be explained using an independent decisions model (race model). Bimodal facilitation was observed at suprathreshold levels when the RTs for suprathreshold unimodal stimuli were roughly equated, and significant RT gains were observed for direction-discrimination tasks with abrupt-onset motion stimuli and with motion preceded by a stationary phase. We found no speeded responses for bimodal signals when a motion signal in one modality was paired with a spatially co-localized stationary signal in the other modality, but faster response times could be explained by statistical facilitation when the motion signals traveled in opposite directions. These results strongly suggest that integration of motion cues led to the speeded bimodal responses. Finally, our results highlight the importance of matching the unimodal reaction times to obtain response facilitation for bimodal motion signals in the linear plane.
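The race-model test referenced above compares the bimodal RT distribution against Miller's bound built from the unimodal distributions: under independent racing channels, G_AV(t) ≤ G_A(t) + G_V(t). A minimal sketch using empirical CDFs on a common time grid (names and numbers are ours):

```python
import numpy as np

def ecdf(sample, t_grid):
    """Empirical cumulative distribution function evaluated on t_grid."""
    return np.searchsorted(np.sort(sample), t_grid, side="right") / len(sample)

def race_model_violation(rt_av, rt_a, rt_v, t_grid):
    """Largest positive gap between the bimodal CDF and Miller's race bound.

    A positive return value means the bimodal responses are faster than any
    independent-channels race allows, i.e. evidence of integration.
    """
    bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0)
    return float(np.max(ecdf(rt_av, t_grid) - bound))

t_grid = np.linspace(200.0, 400.0, 201)
rt_a = np.array([300.0, 320.0, 340.0, 360.0])   # unimodal auditory RTs (ms)
rt_v = np.array([305.0, 325.0, 345.0, 365.0])   # unimodal visual RTs (ms)
rt_av = np.array([240.0, 250.0, 260.0, 270.0])  # bimodal RTs, faster than both
violation = race_model_violation(rt_av, rt_a, rt_v, t_grid)  # clearly positive
```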

  15. The Use of Auditory Output for Time-Critical Information

    DTIC Science & Technology

    1992-12-01

that uses the auditory sense for alerts in the Combat Information Center (CIC). The immediate goal was to compare operator performance using voice ...tracking task on a scenario simulation of the CIC. Alerts were presented by voice, auditory icons, or buzzers. Four different causes for alerts were...alert so that three related sounds corresponded to three alerts of each alert cause. RESULTS 1. Overall, the results showed that both voice and

  16. Multiple time scales of adaptation in the auditory system as revealed by human evoked potentials.

    PubMed

    Costa-Faidella, Jordi; Grimm, Sabine; Slabu, Lavinia; Díaz-Santaella, Francisco; Escera, Carles

    2011-06-01

    Single neurons in the primary auditory cortex of the cat show faster adaptation time constants to short- than long-term stimulus history. This ability to encode the complex past auditory stimulation in multiple time scales would enable the auditory system to generate expectations of the incoming stimuli. Here, we tested whether large neural populations exhibit this ability as well, by recording human auditory evoked potentials (AEP) to pure tones in a sequence embedding short- and long-term aspects of stimulus history. Our results yielded dynamic amplitude modulations of the P2 AEP to stimulus repetition spanning from milliseconds to tens of seconds concurrently, as well as amplitude modulations of the mismatch negativity AEP to regularity violations. A simple linear model of expectancy accounting for both short- and long-term stimulus history described our results, paralleling the behavior of neurons in the primary auditory cortex.
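The idea of concurrent short- and long-term stimulus history can be sketched as a toy linear expectancy model with two exponentially decaying memory traces. This is our own two-scale stand-in; the paper's model and parameters differ.

```python
import numpy as np

def adapted_responses(stim_times, tau_fast=1.0, tau_slow=30.0,
                      w_fast=0.6, w_slow=0.4):
    """Predicted response to each tone given two decaying memory traces.

    Every past tone leaves traces decaying with tau_fast and tau_slow (s);
    the response to tone i is attenuated by the summed surviving traces, so
    repetition suppresses responses on both time scales concurrently.
    """
    responses = []
    for i, t in enumerate(stim_times):
        dt = t - np.asarray(stim_times[:i])
        expectancy = (w_fast * np.exp(-dt / tau_fast).sum() +
                      w_slow * np.exp(-dt / tau_slow).sum())
        responses.append(1.0 / (1.0 + expectancy))
    return np.array(responses)

resp = adapted_responses(np.arange(0.0, 20.0, 2.0))  # regular 0.5 Hz tone train
```

For a regular train the accumulated traces grow with every repetition, so the predicted response declines monotonically, fast at first and slowly thereafter.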

  17. Effects of sensorineural hearing loss on temporal coding of narrowband and broadband signals in the auditory periphery

    PubMed Central

    Henry, Kenneth S.; Heinz, Michael G.

    2013-01-01

    People with sensorineural hearing loss have substantial difficulty understanding speech under degraded listening conditions. Behavioral studies suggest that this difficulty may be caused by changes in auditory processing of the rapidly-varying temporal fine structure (TFS) of acoustic signals. In this paper, we review the presently known effects of sensorineural hearing loss on processing of TFS and slower envelope modulations in the peripheral auditory system of mammals. Cochlear damage has relatively subtle effects on phase locking by auditory-nerve fibers to the temporal structure of narrowband signals under quiet conditions. In background noise, however, sensorineural loss does substantially reduce phase locking to the TFS of pure-tone stimuli. For auditory processing of broadband stimuli, sensorineural hearing loss has been shown to severely alter the neural representation of temporal information along the tonotopic axis of the cochlea. Notably, auditory-nerve fibers innervating the high-frequency part of the cochlea grow increasingly responsive to low-frequency TFS information and less responsive to temporal information near their characteristic frequency (CF). Cochlear damage also increases the correlation of the response to TFS across fibers of varying CF, decreases the traveling-wave delay between TFS responses of fibers with different CFs, and can increase the range of temporal modulation frequencies encoded in the periphery for broadband sounds. Weaker neural coding of temporal structure in background noise and degraded coding of broadband signals along the tonotopic axis of the cochlea are expected to contribute considerably to speech perception problems in people with sensorineural hearing loss. PMID:23376018

  18. Stability of Auditory Steady State Responses Over Time.

    PubMed

    Van Eeckhoutte, Maaike; Luke, Robert; Wouters, Jan; Francart, Tom

    2017-08-26

    Auditory steady state responses (ASSRs) are used in clinical practice for objective hearing assessments. The response is called steady state because it is assumed to be stable over time, and because it is evoked by a stimulus with a certain periodicity, which will lead to discrete frequency components that are stable in amplitude and phase over time. However, the stimuli commonly used to evoke ASSRs are also known to be able to induce loudness adaptation behaviorally. Researchers and clinicians using ASSRs assume that the response remains stable over time. This study investigates (1) the stability of ASSR amplitudes over time, within one recording, and (2) whether loudness adaptation can be reflected in ASSRs. ASSRs were measured from 14 normal-hearing participants. The ASSRs were evoked by the stimuli that caused the most loudness adaptation in a previous behavioral study, that is, mixed-modulated sinusoids with carrier frequencies of either 500 or 2000 Hz, a modulation frequency of 40 Hz, and a low sensation level of 30 dB SL. For each carrier frequency and participant, 40 repetitions of 92 sec recordings were made. Two types of analyses were used to investigate the ASSR amplitudes over time: with the more traditionally used Fast Fourier Transform and with a novel Kalman filtering approach. Robust correlations between the ASSR amplitudes and behavioral loudness adaptation ratings were also calculated. Overall, ASSR amplitudes were stable. Over all individual recordings, the median change of the amplitudes over time was -0.0001 μV/s. Based on group analysis, a significant but very weak decrease in amplitude over time was found, with the decrease in amplitude over time around -0.0002 μV/s. Correlation coefficients between ASSR amplitudes and behavioral loudness adaptation ratings were significant but low to moderate, with r = 0.27 and r = 0.39 for the 500 and 2000 Hz carrier frequency, respectively. The decrease in amplitude of ASSRs over time (92 sec) is small
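The Fast Fourier Transform analysis mentioned above amounts to reading out the spectral amplitude at the 40 Hz modulation frequency. A minimal sketch on a synthetic signal (real pipelines average epochs, reject artifacts, and assess noise floors):

```python
import numpy as np

def assr_amplitude(recording, fs, f_mod=40.0):
    """Single-sided spectral amplitude at the modulation frequency.

    When the response sits exactly on an FFT bin, 2*|X[k]|/N recovers the
    amplitude of the steady-state component at f_mod.
    """
    n = len(recording)
    spec = np.fft.rfft(recording) / n
    k = np.argmin(np.abs(np.fft.rfftfreq(n, 1 / fs) - f_mod))
    return 2.0 * np.abs(spec[k])

# Demo: 2 s of a 40 Hz steady-state response with amplitude 0.5 (arb. units).
fs = 1000
t = np.arange(0, 2, 1 / fs)
amp = assr_amplitude(0.5 * np.sin(2 * np.pi * 40 * t), fs)
```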

  19. Opposite Distortions in Interval Timing Perception for Visual and Auditory Stimuli with Temporal Modulations

    PubMed Central

    Yuasa, Kenichi; Yotsumoto, Yuko

    2015-01-01

When an object is presented visually and moves or flickers, the perception of its duration tends to be overestimated. Such an overestimation is called time dilation. Perceived time can also be distorted when a stimulus is presented aurally as an auditory flutter, but the mechanisms and their relationship to visual processing remain unclear. In the present study, we measured interval timing perception while modulating the temporal characteristics of visual and auditory stimuli, and investigated whether the interval times of visually and aurally presented objects shared a common mechanism. In these experiments, participants compared the durations of flickering or fluttering stimuli to standard stimuli, which were presented continuously. Perceived durations for auditory flutters were underestimated, while perceived durations of visual flickers were overestimated. When auditory flutters and visual flickers were presented simultaneously, these distortion effects were cancelled out. When auditory flutters were presented with a constantly presented visual stimulus, the interval timing perception of the visual stimulus was affected by the auditory flutters. These results indicate that interval timing perception is governed by independent mechanisms for visual and auditory processing, and that there are some interactions between the two processing systems. PMID:26292285

  20. Opposite Distortions in Interval Timing Perception for Visual and Auditory Stimuli with Temporal Modulations.

    PubMed

    Yuasa, Kenichi; Yotsumoto, Yuko

    2015-01-01

When an object is presented visually and moves or flickers, the perception of its duration tends to be overestimated. Such an overestimation is called time dilation. Perceived time can also be distorted when a stimulus is presented aurally as an auditory flutter, but the mechanisms and their relationship to visual processing remain unclear. In the present study, we measured interval timing perception while modulating the temporal characteristics of visual and auditory stimuli, and investigated whether the interval times of visually and aurally presented objects shared a common mechanism. In these experiments, participants compared the durations of flickering or fluttering stimuli to standard stimuli, which were presented continuously. Perceived durations for auditory flutters were underestimated, while perceived durations of visual flickers were overestimated. When auditory flutters and visual flickers were presented simultaneously, these distortion effects were cancelled out. When auditory flutters were presented with a constantly presented visual stimulus, the interval timing perception of the visual stimulus was affected by the auditory flutters. These results indicate that interval timing perception is governed by independent mechanisms for visual and auditory processing, and that there are some interactions between the two processing systems.

  1. Bat auditory cortex – model for general mammalian auditory computation or special design solution for active time perception?

    PubMed

    Kössl, Manfred; Hechavarria, Julio; Voss, Cornelia; Schaefer, Markus; Vater, Marianne

    2015-03-01

    Audition in bats serves passive orientation, alerting functions, and communication, as it does in other vertebrates. In addition, bats have evolved echolocation for orientation and for prey detection and capture. This has placed selective pressure on the auditory system with regard to echolocation-relevant temporal computation and frequency analysis. The present review attempts to evaluate in which respects the processing modules of bat auditory cortex (AC) are a model for typical mammalian AC function, and in which they are designed for echolocation-unique purposes. We conclude that, while cortical area arrangement and cortical frequency processing do not deviate greatly from those of other mammals, the echo-delay-sensitive dorsal cortex regions contain special designs for very powerful time perception. Different bat species have either a unique chronotopic cortex topography or a distributed salt-and-pepper representation of echo delay. The two designs seem to enable similar behavioural performance.

  2. Linear Coding of Voice Onset Time

    PubMed Central

    Frye, Richard E.; Fisher, Janet McGraw; Coty, Alexis; Zarella, Melissa; Liederman, Jacqueline; Halgren, Eric

    2008-01-01

    Voice-onset time (VOT) provides an important auditory cue for recognizing spoken consonant-vowel syllables. Although changes in the neuromagnetic response to consonant-vowel syllables with different VOT have been examined, such experiments have only manipulated VOT with respect to voicing. We utilized the characteristics of a previously developed asymmetric VOT continuum (Liederman et al., 2005) to determine if changes in the prominent M100 neuromagnetic response were linearly modulated by VOT. Eight right-handed, English-speaking, normally developing participants performed a VOT discrimination task during a whole-head neuromagnetic recording. The M100 was identified in the gradiometers overlying the right and left temporal cortices, and single dipoles were fit to each M100 waveform. A repeated measures analysis of variance with post-hoc contrast test for linear trend was used to determine whether characteristics of the M100 were linearly modulated by VOT. The morphology of the M100 gradiometer waveform and the peak latency of the dipole waveform were linearly modulated by VOT. This modulation was much greater in the left, as compared to the right, hemisphere. The M100 dipole moved in a linear fashion as VOT increased in both hemispheres, but along different axes in each hemisphere. This study suggests that VOT may linearly modulate characteristics of the M100, predominantly in the left hemisphere, and suggests that the VOT of consonant-vowel syllables, instead of, or in addition to, voicing, should be examined in future experiments. PMID:17714009

  3. [A comparison of time resolution among auditory, tactile and promontory electrical stimulation--superiority of cochlear implants as human communication aids].

    PubMed

    Matsushima, J; Kumagai, M; Harada, C; Takahashi, K; Inuyama, Y; Ifukube, T

    1992-09-01

    Our previous reports showed that, with our speech coding method, second formant information could be transmitted through an electrode on the promontory. However, second formant information can also be transmitted by tactile stimulation. Therefore, to determine whether electrical stimulation of the auditory nerve would be superior to tactile stimulation for our speech coding method, the time resolutions of the two modes of stimulation were compared. The results showed that the time resolution of electrical promontory stimulation was three times better than that of tactile stimulation of the finger. This indicates that electrical stimulation of the auditory nerve is much better suited to our speech coding method than tactile stimulation of the finger.

  4. Timing predictability enhances regularity encoding in the human subcortical auditory pathway.

    PubMed

    Gorina-Careta, Natàlia; Zarnowiec, Katarzyna; Costa-Faidella, Jordi; Escera, Carles

    2016-11-17

    The encoding of temporal regularities is a critical property of the auditory system, as short-term neural representations of environmental statistics serve auditory object formation and the detection of potentially relevant novel stimuli. A putative neural mechanism underlying regularity encoding is repetition suppression, the reduction of neural activity to repeated stimulation. Although repetitive stimulation per se has been shown to reduce auditory neural activity at cortical and subcortical levels in animals and in the human cerebral cortex, other factors such as timing may influence the encoding of statistical regularities. This study set out to investigate whether temporal predictability in the ongoing auditory input modulates repetition suppression in subcortical stages of the auditory processing hierarchy. Human auditory frequency-following responses (FFR) were recorded to a repeating consonant-vowel stimulus (/wa/) delivered in temporally predictable and unpredictable conditions. FFR amplitude was attenuated by repetition independently of temporal predictability, yet we observed accentuated suppression when the incoming stimulation was temporally predictable. These findings support the view that regularity encoding spans the auditory hierarchy and point to temporal predictability as a modulatory factor of regularity encoding in early stages of the auditory pathway.

  5. Timing predictability enhances regularity encoding in the human subcortical auditory pathway

    PubMed Central

    Gorina-Careta, Natàlia; Zarnowiec, Katarzyna; Costa-Faidella, Jordi; Escera, Carles

    2016-01-01

    The encoding of temporal regularities is a critical property of the auditory system, as short-term neural representations of environmental statistics serve auditory object formation and the detection of potentially relevant novel stimuli. A putative neural mechanism underlying regularity encoding is repetition suppression, the reduction of neural activity to repeated stimulation. Although repetitive stimulation per se has been shown to reduce auditory neural activity at cortical and subcortical levels in animals and in the human cerebral cortex, other factors such as timing may influence the encoding of statistical regularities. This study set out to investigate whether temporal predictability in the ongoing auditory input modulates repetition suppression in subcortical stages of the auditory processing hierarchy. Human auditory frequency-following responses (FFR) were recorded to a repeating consonant-vowel stimulus (/wa/) delivered in temporally predictable and unpredictable conditions. FFR amplitude was attenuated by repetition independently of temporal predictability, yet we observed accentuated suppression when the incoming stimulation was temporally predictable. These findings support the view that regularity encoding spans the auditory hierarchy and point to temporal predictability as a modulatory factor of regularity encoding in early stages of the auditory pathway. PMID:27853313
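
    The suppression effect described in this pair of records can be expressed as a simple index. The sketch below is illustrative only: the amplitude values are invented, and `suppression_index` is a hypothetical helper, not the authors' analysis pipeline.

```python
import numpy as np

def suppression_index(first_amps, later_amps):
    """Fractional FFR amplitude reduction of repeated presentations
    relative to the initial ones (0 = no suppression, 1 = full)."""
    return 1.0 - np.mean(later_amps) / np.mean(first_amps)

# Illustrative (made-up) FFR amplitudes in microvolts:
predictable = suppression_index([0.30, 0.31], [0.20, 0.21])
unpredictable = suppression_index([0.30, 0.29], [0.25, 0.26])
```

    On these toy numbers the predictable condition shows the stronger suppression, which is the direction of the effect the study reports.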

  6. Speech Enhancement for Listeners with Hearing Loss Based on a Model for Vowel Coding in the Auditory Midbrain

    PubMed Central

    Rao, Akshay; Carney, Laurel H.

    2015-01-01

    A novel signal-processing strategy is proposed to enhance speech for listeners with hearing loss. The strategy focuses on improving vowel perception based on a recent hypothesis for vowel coding in the auditory system. Traditionally, studies of neural vowel encoding have focused on the representation of formants (peaks in vowel spectra) in the discharge patterns of the population of auditory-nerve (AN) fibers. A recent hypothesis focuses instead on vowel encoding in the auditory midbrain, and suggests a robust representation of formants. AN fiber discharge rates are characterized by pitch-related fluctuations having frequency-dependent modulation depths. Fibers tuned to frequencies near formants exhibit weaker pitch-related fluctuations than those tuned to frequencies between formants. Many auditory midbrain neurons show tuning to amplitude modulation frequency in addition to audio frequency. According to the auditory midbrain vowel encoding hypothesis, the response-map of a population of midbrain neurons tuned to modulations near voice-pitch exhibits minima near formant frequencies, due to the lack of strong pitch-related fluctuations at their inputs. This representation is robust over the range of noise conditions in which speech intelligibility is also robust for normal-hearing listeners. Based on this hypothesis, a vowel-enhancement strategy has been proposed that aims to restore vowel-encoding at the level of the auditory midbrain. The signal-processing consists of pitch tracking, formant-tracking and formant enhancement. The novel formant-tracking method proposed here estimates the first two formant frequencies by modeling characteristics of the auditory periphery, such as saturated discharge-rates of AN fibers and modulation tuning properties of auditory midbrain neurons. The formant enhancement stage aims to restore the representation of formants at the level of the midbrain by increasing the dominance of a single harmonic near each formant and saturating

  7. Speech enhancement for listeners with hearing loss based on a model for vowel coding in the auditory midbrain.

    PubMed

    Rao, Akshay; Carney, Laurel H

    2014-07-01

    A novel signal-processing strategy is proposed to enhance speech for listeners with hearing loss. The strategy focuses on improving vowel perception based on a recent hypothesis for vowel coding in the auditory system. Traditionally, studies of neural vowel encoding have focused on the representation of formants (peaks in vowel spectra) in the discharge patterns of the population of auditory-nerve (AN) fibers. A recent hypothesis focuses instead on vowel encoding in the auditory midbrain, and suggests a robust representation of formants. AN fiber discharge rates are characterized by pitch-related fluctuations having frequency-dependent modulation depths. Fibers tuned to frequencies near formants exhibit weaker pitch-related fluctuations than those tuned to frequencies between formants. Many auditory midbrain neurons show tuning to amplitude modulation frequency in addition to audio frequency. According to the auditory midbrain vowel encoding hypothesis, the response map of a population of midbrain neurons tuned to modulations near voice pitch exhibits minima near formant frequencies, due to the lack of strong pitch-related fluctuations at their inputs. This representation is robust over the range of noise conditions in which speech intelligibility is also robust for normal-hearing listeners. Based on this hypothesis, a vowel-enhancement strategy has been proposed that aims to restore vowel encoding at the level of the auditory midbrain. The signal processing consists of pitch tracking, formant tracking, and formant enhancement. The novel formant-tracking method proposed here estimates the first two formant frequencies by modeling characteristics of the auditory periphery, such as saturated discharge rates of AN fibers and modulation tuning properties of auditory midbrain neurons. The formant enhancement stage aims to restore the representation of formants at the level of the midbrain by increasing the dominance of a single harmonic near each formant and saturating
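
    The three processing stages named in this abstract (pitch tracking, formant tracking, formant enhancement) can be sketched roughly as follows. This is a toy illustration, not the authors' algorithm: `estimate_pitch` is a bare autocorrelation tracker, and `boost_harmonic_near` raises the single harmonic closest to a given formant estimate, echoing the "dominance of a single harmonic" idea.

```python
import numpy as np

def estimate_pitch(frame, fs, fmin=80.0, fmax=300.0):
    """Toy autocorrelation pitch estimate (not the paper's tracker)."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return fs / lag

def boost_harmonic_near(spectrum, freqs, f0, formant_hz, gain=2.0):
    """Raise the single harmonic of f0 closest to a formant estimate."""
    harmonics = np.arange(1, int(freqs[-1] / f0) + 1) * f0
    h = harmonics[np.argmin(np.abs(harmonics - formant_hz))]
    k = int(np.argmin(np.abs(freqs - h)))
    out = spectrum.copy()
    out[k] *= gain
    return out
```

    For example, a 40-ms frame of a pure 100 Hz tone at 8 kHz sampling yields a pitch estimate near 100 Hz, and a formant estimate of 510 Hz boosts the 500 Hz harmonic.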

  8. Time-dependent neural processing of auditory feedback during voice pitch error detection.

    PubMed

    Behroozmand, Roozbeh; Liu, Hanjun; Larson, Charles R

    2011-05-01

    The neural responses to sensory consequences of a self-produced motor act are suppressed compared with those in response to a similar but externally generated stimulus. Previous studies in the somatosensory and auditory systems have shown that the motor-induced suppression of the sensory mechanisms is sensitive to delays between the motor act and the onset of the stimulus. The present study investigated time-dependent neural processing of auditory feedback in response to self-produced vocalizations. ERPs were recorded in response to normal and pitch-shifted voice auditory feedback during active vocalization and passive listening to the playback of the same vocalizations. The pitch-shifted stimulus was delivered to the subjects' auditory feedback after a randomly chosen time delay between the vocal onset and the stimulus presentation. Results showed that the neural responses to delayed feedback perturbations were significantly larger than those in response to the pitch-shifted stimulus occurring at vocal onset. Active vocalization was shown to enhance neural responsiveness to feedback alterations only for nonzero delays compared with passive listening to the playback. These findings indicated that the neural mechanisms of auditory feedback processing are sensitive to timing between the vocal motor commands and the incoming auditory feedback. Time-dependent neural processing of auditory feedback may be an important feature of the audio-vocal integration system that helps to improve the feedback-based monitoring and control of voice structure through vocal error detection and correction.

  9. Time-dependent Neural Processing of Auditory Feedback during Voice Pitch Error Detection

    PubMed Central

    Behroozmand, Roozbeh; Liu, Hanjun; Larson, Charles R.

    2012-01-01

    The neural responses to sensory consequences of a self-produced motor act are suppressed compared with those in response to a similar but externally generated stimulus. Previous studies in the somatosensory and auditory systems have shown that the motor-induced suppression of the sensory mechanisms is sensitive to delays between the motor act and the onset of the stimulus. The present study investigated time-dependent neural processing of auditory feedback in response to self-produced vocalizations. ERPs were recorded in response to normal and pitch-shifted voice auditory feedback during active vocalization and passive listening to the playback of the same vocalizations. The pitch-shifted stimulus was delivered to the subjects’ auditory feedback after a randomly chosen time delay between the vocal onset and the stimulus presentation. Results showed that the neural responses to delayed feedback perturbations were significantly larger than those in response to the pitch-shifted stimulus occurring at vocal onset. Active vocalization was shown to enhance neural responsiveness to feedback alterations only for nonzero delays compared with passive listening to the playback. These findings indicated that the neural mechanisms of auditory feedback processing are sensitive to timing between the vocal motor commands and the incoming auditory feedback. Time-dependent neural processing of auditory feedback may be an important feature of the audio-vocal integration system that helps to improve the feedback-based monitoring and control of voice structure through vocal error detection and correction. PMID:20146608

  10. Understanding Auditory Spectro-Temporal Receptive Fields and Their Changes with Input Statistics by Efficient Coding Principles

    PubMed Central

    Zhao, Lingyun; Zhaoping, Li

    2011-01-01

    Spectro-temporal receptive fields (STRFs) have been widely used as linear approximations to the signal transform from sound spectrograms to neural responses along the auditory pathway. Their dependence on statistical attributes of the stimuli, such as sound intensity, is usually explained by nonlinear mechanisms and models. Here, we apply an efficient coding principle which has been successfully used to understand receptive fields in early stages of visual processing, in order to provide a computational understanding of the STRFs. According to this principle, STRFs result from an optimal tradeoff between maximizing the sensory information the brain receives, and minimizing the cost of the neural activities required to represent and transmit this information. Both terms depend on the statistical properties of the sensory inputs and the noise that corrupts them. The STRFs should therefore depend on the input power spectrum and the signal-to-noise ratio, which is assumed to increase with input intensity. We analytically derive the optimal STRFs when signal and noise are approximated as Gaussians. Under the constraint that they should be spectro-temporally local, the STRFs are predicted to adapt from being band-pass to low-pass filters as the input intensity reduces, or the input correlation becomes longer range in sound frequency or time. These predictions qualitatively match physiological observations. Our prediction as to how the STRFs should be determined by the input power spectrum could readily be tested, since this spectrum depends on the stimulus ensemble. The potentials and limitations of the efficient coding principle are discussed. PMID:21887121
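
    The predicted band-pass-to-low-pass transition can be illustrated with a standard Wiener-times-whitening gain, a common efficient-coding approximation (not the paper's exact Gaussian derivation); the peak of the optimal gain moves toward zero frequency as noise grows relative to signal.

```python
import numpy as np

def optimal_gain(signal_power, noise_power):
    """Wiener smoothing times whitening: suppress unreliable (noisy)
    frequencies, then equalize the reliable ones."""
    S, N = signal_power, noise_power
    return (S / (S + N)) / np.sqrt(S)

freqs = np.linspace(1.0, 100.0, 400)
S = 1.0 / freqs                          # 1/f-like input power spectrum
gain_high_snr = optimal_gain(S, 0.001)   # intense input, low effective noise
gain_low_snr = optimal_gain(S, 1.0)      # weak input, high effective noise

peak_high = freqs[np.argmax(gain_high_snr)]  # band-pass behavior
peak_low = freqs[np.argmax(gain_low_snr)]    # low-pass behavior
```

    With these made-up spectra the high-SNR gain peaks near the top of the band while the low-SNR gain peaks at the lowest frequency, mirroring the adaptation the abstract describes.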

  11. Linear coding of voice onset time.

    PubMed

    Frye, Richard E; Fisher, Janet McGraw; Coty, Alexis; Zarella, Melissa; Liederman, Jacqueline; Halgren, Eric

    2007-09-01

    Voice onset time (VOT) provides an important auditory cue for recognizing spoken consonant-vowel syllables. Although changes in the neuromagnetic response to consonant-vowel syllables with different VOT have been examined, such experiments have only manipulated VOT with respect to voicing. We utilized the characteristics of a previously developed asymmetric VOT continuum [Liederman, J., Frye, R. E., McGraw Fisher, J., Greenwood, K., & Alexander, R. A temporally dynamic contextual effect that disrupts voice onset time discrimination of rapidly successive stimuli. Psychonomic Bulletin and Review, 12, 380-386, 2005] to determine if changes in the prominent M100 neuromagnetic response were linearly modulated by VOT. Eight right-handed, English-speaking, normally developing participants performed a VOT discrimination task during a whole-head neuromagnetic recording. The M100 was identified in the gradiometers overlying the right and left temporal cortices, and single dipoles were fit to each M100 waveform. A repeated measures analysis of variance with post hoc contrast test for linear trend was used to determine whether characteristics of the M100 were linearly modulated by VOT. The morphology of the M100 gradiometer waveform and the peak latency of the dipole waveform were linearly modulated by VOT. This modulation was much greater in the left, as compared to the right, hemisphere. The M100 dipole moved in a linear fashion as VOT increased in both hemispheres, but along different axes in each hemisphere. This study suggests that VOT may linearly modulate characteristics of the M100, predominantly in the left hemisphere, and suggests that the VOT of consonant-vowel syllables, instead of, or in addition to, voicing, should be examined in future experiments.

  12. Time-window-of-integration (TWIN) model for saccadic reaction time: effect of auditory masker level on visual-auditory spatial interaction in elevation.

    PubMed

    Colonius, Hans; Diederich, Adele; Steenken, Rike

    2009-05-01

    Saccadic reaction time (SRT) to a visual target tends to be shorter when auditory stimuli are presented in close temporal and spatial proximity, even when subjects are instructed to ignore the auditory non-target (focused attention paradigm). Previous studies using pairs of visual and auditory stimuli differing in both azimuth and vertical position suggest that the amount of SRT facilitation decreases not with the physical but with the perceivable distance between visual target and auditory non-target. Steenken et al. (Brain Res 1220:150-156, 2008) presented an additional white-noise masker background of three seconds duration. Increasing the masker level had a diametrical effect on SRTs in spatially coincident versus disparate stimulus configurations: saccadic responses to coincident visual-auditory stimuli are slowed down, whereas saccadic responses to disparate stimuli are speeded up. Here we show that the time-window-of-integration model accounts for this observation by variation of a perceivable-distance parameter in the second stage of the model whose value does not depend on stimulus onset asynchrony between target and non-target.
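
    The two-stage structure of the TWIN model lends itself to a small Monte Carlo sketch. All parameter values and the exponential peripheral-stage assumption below are illustrative, not fitted values from the study: integration occurs when the auditory non-target's first stage finishes first and within the time window, and a perceivably close pairing is modeled simply as a larger second-stage facilitation.

```python
import random

def twin_mean_srt(mu_v, mu_a, window, delta, base, trials=50_000, seed=7):
    """Monte Carlo sketch of the focused-attention TWIN model:
    exponential peripheral stages race; if the auditory non-target
    finishes first and within `window` ms of the visual target,
    the second stage is shortened by `delta` ms."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        v = rng.expovariate(1.0 / mu_v)  # visual peripheral time (ms)
        a = rng.expovariate(1.0 / mu_a)  # auditory peripheral time (ms)
        integrated = 0.0 <= v - a <= window
        total += v + base - (delta if integrated else 0.0)
    return total / trials

# Larger facilitation (perceivably coincident pair) vs. smaller one:
srt_coincident = twin_mean_srt(100.0, 80.0, 200.0, 50.0, 250.0)
srt_disparate = twin_mean_srt(100.0, 80.0, 200.0, 10.0, 250.0)
```

    Because only the second-stage facilitation differs, the simulated mean SRT is shorter for the coincident configuration, the qualitative pattern the model is invoked to explain.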

  13. Time coded distribution via broadcasting stations

    NASA Technical Reports Server (NTRS)

    Leschiutta, S.; Pettiti, V.; Detoma, E.

    1979-01-01

    The distribution of standard time signals via AM and FM broadcasting stations offers the distinct advantages of wide-area coverage and inexpensive receivers, but the signals are radiated only a limited number of times per day, are usually unavailable at night, and do not permit fully automatic synchronization of a remote clock. In an attempt to overcome some of these problems, a time-coded signal carrying complete date information is broadcast by the IEN via the national broadcasting networks in Italy. These signals are radiated by some 120 AM and about 3000 FM and TV transmitters around the country. In this way, a time-ordered system with an accuracy of a couple of milliseconds is easily achieved.
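
    A time code carrying complete date information is, at heart, a serial bit pattern; many broadcast codes pack the fields as binary-coded decimal. The frame layout below (five 8-bit BCD fields, no parity) is purely illustrative and does not reproduce the actual IEN signal format.

```python
def bcd(value, bits):
    """Binary-coded decimal: one 4-bit nibble per decimal digit,
    most significant digit first, padded/trimmed to `bits` bits."""
    digits = [int(d) for d in str(value).zfill((bits + 3) // 4)]
    out = []
    for d in digits:
        out += [(d >> b) & 1 for b in (3, 2, 1, 0)]
    return out[-bits:]

def encode_timestamp(hour, minute, day, month, year2):
    """Toy date-and-time frame (illustrative layout only)."""
    return (bcd(hour, 8) + bcd(minute, 8) + bcd(day, 8)
            + bcd(month, 8) + bcd(year2, 8))

# 40-bit frame for 23:59 on 31 December of year '99:
frame = encode_timestamp(23, 59, 31, 12, 99)
```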

  14. Anodal transcranial direct current stimulation over auditory cortex degrades frequency discrimination by affecting temporal, but not place, coding.

    PubMed

    Tang, Matthew F; Hammond, Geoffrey R

    2013-09-01

    We report three studies of the effects of anodal transcranial direct current stimulation (tDCS) over the auditory cortex on audition in humans. Experiment 1 examined whether tDCS enhances rapid frequency discrimination learning. Human subjects were trained on a frequency discrimination task for 2 days, with anodal tDCS applied during the first day and the second day used to assess effects of stimulation on retention. This revealed that tDCS did not affect learning but did degrade frequency discrimination on both days. Follow-up testing 2-3 months after stimulation showed no long-term effects. Following these unexpected results, two additional experiments examined the effects of tDCS on the underlying mechanisms of frequency discrimination: place and temporal coding. Place coding underlies frequency selectivity and was measured using psychophysical tuning curves, with broader curves indicating poorer frequency selectivity. Temporal coding was assessed by measuring the ability to discriminate sounds with different fine temporal structure. We found that tDCS does not broaden frequency selectivity but instead degrades the ability to discriminate tones with different fine temporal structure. The overall results suggest that anodal tDCS applied over the auditory cortex degrades frequency discrimination by affecting temporal, but not place, coding mechanisms.

  15. Onset timing of cross-sensory activations and multisensory interactions in auditory and visual sensory cortices.

    PubMed

    Raij, Tommi; Ahveninen, Jyrki; Lin, Fa-Hsuan; Witzel, Thomas; Jääskeläinen, Iiro P; Letham, Benjamin; Israeli, Emily; Sahyoun, Cherif; Vasios, Christos; Stufflebeam, Steven; Hämäläinen, Matti; Belliveau, John W

    2010-05-01

    Here we report early cross-sensory activations and audiovisual interactions at the visual and auditory cortices using magnetoencephalography (MEG) to obtain accurate timing information. Data from an identical fMRI experiment were employed to support MEG source localization results. Simple auditory and visual stimuli (300-ms noise bursts and checkerboards) were presented to seven healthy humans. MEG source analysis suggested generators in the auditory and visual sensory cortices for both within-modality and cross-sensory activations. fMRI cross-sensory activations were strong in the visual but almost absent in the auditory cortex; this discrepancy with MEG possibly reflects the influence of acoustical scanner noise in fMRI. In the primary auditory cortices (Heschl's gyrus) the onset of activity to auditory stimuli was observed at 23 ms in both hemispheres, and to visual stimuli at 82 ms in the left and at 75 ms in the right hemisphere. In the primary visual cortex (Calcarine fissure) the activations to visual stimuli started at 43 ms and to auditory stimuli at 53 ms. Cross-sensory activations thus started later than sensory-specific activations, by 55 ms in the auditory cortex and by 10 ms in the visual cortex, suggesting that the origins of the cross-sensory activations may be in the primary sensory cortices of the opposite modality, with conduction delays (from one sensory cortex to another) of 30-35 ms. Audiovisual interactions started at 85 ms in the left auditory, 80 ms in the right auditory and 74 ms in the visual cortex, i.e., 3-21 ms after inputs from the two modalities converged.

  16. Onset timing of cross-sensory activations and multisensory interactions in auditory and visual sensory cortices

    PubMed Central

    Raij, Tommi; Ahveninen, Jyrki; Lin, Fa-Hsuan; Witzel, Thomas; Jääskeläinen, Iiro P.; Letham, Benjamin; Israeli, Emily; Sahyoun, Cherif; Vasios, Christos; Stufflebeam, Steven; Hämäläinen, Matti; Belliveau, John W.

    2010-01-01

    Here we report early cross-sensory activations and audiovisual interactions at the visual and auditory cortices using magnetoencephalography (MEG) to obtain accurate timing information. Data from an identical fMRI experiment were employed to support MEG source localization results. Simple auditory and visual stimuli (300-ms noise bursts and checkerboards) were presented to seven healthy humans. MEG source analysis suggested generators in the auditory and visual sensory cortices for both within-modality and cross-sensory activations. fMRI cross-sensory activations were strong in the visual but almost absent in the auditory cortex; this discrepancy with MEG possibly reflects influence of acoustical scanner noise in fMRI. In the primary auditory cortices (Heschl’s gyrus) onset of activity to auditory stimuli was observed at 23 ms in both hemispheres, and to visual stimuli at 82 ms in the left and at 75 ms in the right hemisphere. In the primary visual cortex (Calcarine fissure) the activations to visual stimuli started at 43 ms and to auditory stimuli at 53 ms. Cross-sensory activations thus started later than sensory-specific activations, by 55 ms in the auditory cortex and by 10 ms in the visual cortex, suggesting that the origins of the cross-sensory activations may be in the primary sensory cortices of the opposite modality, with conduction delays (from one sensory cortex to another) of 30–35 ms. Audiovisual interactions started at 85 ms in the left auditory, 80 ms in the right auditory, and 74 ms in the visual cortex, i.e., 3–21 ms after inputs from both modalities converged. PMID:20584181

  17. Contributions of Sensory Coding and Attentional Control to Individual Differences in Performance in Spatial Auditory Selective Attention Tasks

    PubMed Central

    Dai, Lengshi; Shinn-Cunningham, Barbara G.

    2016-01-01

    Listeners with normal hearing thresholds (NHTs) differ in their ability to steer attention to whatever sound source is important. This ability depends on top-down executive control, which modulates the sensory representation of sound in the cortex. Yet, this sensory representation also depends on the coding fidelity of the peripheral auditory system. Both of these factors may thus contribute to the individual differences in performance. We designed a selective auditory attention paradigm in which we could simultaneously measure envelope following responses (EFRs, reflecting peripheral coding), onset event-related potentials (ERPs) from the scalp (reflecting cortical responses to sound) and behavioral scores. We performed two experiments that varied stimulus conditions to alter the degree to which performance might be limited due to fine stimulus details vs. due to control of attentional focus. Consistent with past work, in both experiments we find that attention strongly modulates cortical ERPs. Importantly, in Experiment I, where coding fidelity limits the task, individual behavioral performance correlates with subcortical coding strength (derived by computing how the EFR is degraded for fully masked tones compared to partially masked tones); however, in this experiment, the effects of attention on cortical ERPs were unrelated to individual subject performance. In contrast, in Experiment II, where sensory cues for segregation are robust (and thus less of a limiting factor on task performance), inter-subject behavioral differences correlate with subcortical coding strength. In addition, after factoring out the influence of subcortical coding strength, behavioral differences are also correlated with the strength of attentional modulation of ERPs. These results support the hypothesis that behavioral abilities amongst listeners with NHTs can arise due to both subcortical coding differences and differences in attentional control, depending on stimulus characteristics.

  18. Code for Calculating Regional Seismic Travel Time

    SciTech Connect

    BALLARD, SANFORD; HIPP, JAMES; & BARKER, GLENN

    2009-07-10

    The RSTT software computes predictions of the travel time of seismic energy traveling from a source to a receiver through 2.5D models of the seismic velocity distribution within the Earth. The two primary applications for the RSTT library are tomographic inversion studies and seismic event location calculations. In tomographic inversion studies, a seismologist begins with a number of source-receiver travel time observations and an initial starting model of the velocity distribution within the Earth. A forward travel time calculator, such as the RSTT library, is used to compute predictions of each observed travel time, and all of the residuals (observed minus predicted travel time) are calculated. The Earth model is then modified in some systematic way with the goal of minimizing the residuals. The Earth model obtained in this way is assumed to be a better model than the starting model if it has lower residuals. The other major application for the RSTT library is seismic event location. Given an Earth model, an initial estimate of the location of a seismic event, and some number of observations of seismic travel time thought to have originated from that event, location codes systematically modify the estimate of the location of the event with the goal of minimizing the difference between the observed and predicted travel times. The second application, seismic event location, is routinely implemented by the military as part of its effort to monitor the Earth for nuclear tests conducted by foreign countries.
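
    The residual-minimization loop described above can be sketched in miniature. The straight-ray forward calculator and grid search below are stand-ins for illustration; RSTT's actual predictions come from ray tracing through 2.5D velocity models, and real locators iterate far more cleverly than a grid.

```python
import math

def predicted_travel_time(src, rcv, velocity_kms=6.0):
    """Stand-in forward calculator: straight-ray travel time
    at a single constant velocity."""
    return math.dist(src, rcv) / velocity_kms

def locate(observations, candidates):
    """Grid-search event location: pick the candidate source that
    minimizes the sum of squared residuals (observed - predicted)."""
    def misfit(src):
        return sum((t_obs - predicted_travel_time(src, rcv)) ** 2
                   for rcv, t_obs in observations)
    return min(candidates, key=misfit)

# Synthetic example: observations generated from a source at (0, 0) km.
true_src = (0.0, 0.0)
receivers = [(60.0, 0.0), (0.0, 80.0), (50.0, 50.0)]
obs = [(r, predicted_travel_time(true_src, r)) for r in receivers]
grid = [(x, y) for x in range(-20, 30, 10) for y in range(-20, 30, 10)]
best = locate(obs, grid)
```

    Because the synthetic observations are noise-free, the grid point at the true source gives zero misfit and is recovered exactly.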

  20. Slow Cholinergic Modulation of Spike Probability in Ultra-Fast Time-Coding Sensory Neurons

    PubMed Central

    Goyer, David; Kurth, Stefanie; Rübsamen, Rudolf

    2016-01-01

    Sensory processing in the lower auditory pathway is generally considered to be rigid and thus less subject to modulation than central processing. However, in addition to the powerful bottom-up excitation by auditory nerve fibers, the ventral cochlear nucleus also receives efferent cholinergic innervation from both auditory and nonauditory top–down sources. We thus tested the influence of cholinergic modulation on highly precise time-coding neurons in the cochlear nucleus of the Mongolian gerbil. By combining electrophysiological recordings with pharmacological application in vitro and in vivo, we found 55–72% of spherical bushy cells (SBCs) to be depolarized by carbachol on two time scales, ranging from hundreds of milliseconds to minutes. These effects were mediated by nicotinic and muscarinic acetylcholine receptors, respectively. Pharmacological block of muscarinic receptors hyperpolarized the resting membrane potential, suggesting a novel mechanism of setting the resting membrane potential for SBCs. The cholinergic depolarization led to an increase of spike probability in SBCs without compromising the temporal precision of the SBC output in vitro. In vivo, iontophoretic application of carbachol resulted in an increase in spontaneous SBC activity. The inclusion of cholinergic modulation in an SBC model predicted an expansion of the dynamic range of sound responses and increased temporal acuity. Our results thus suggest a top–down modulatory system mediated by acetylcholine which influences temporally precise information processing in the lower auditory pathway. PMID:27699207

  1. Neural coding of sound intensity and loudness in the human auditory system.

    PubMed

    Röhl, Markus; Uppenkamp, Stefan

    2012-06-01

    Inter-individual differences in loudness sensation of 45 young normal-hearing participants were employed to investigate how and at what stage of the auditory pathway perceived loudness, the perceptual correlate of sound intensity, is transformed into neural activation. Loudness sensation was assessed by categorical loudness scaling, a psychoacoustical scaling procedure, whereas neural activation in the auditory cortex, inferior colliculi, and medial geniculate bodies was investigated with functional magnetic resonance imaging (fMRI). We observed an almost linear increase of perceived loudness and percent signal change from baseline (PSC) in all examined stages of the upper auditory pathway. Across individuals, the slope of the underlying growth function for perceived loudness was significantly correlated with the slope of the growth function for the PSC in the auditory cortex, but not in subcortical structures. In conclusion, the fMRI correlate of neural activity in the auditory cortex as measured by the blood oxygen level-dependent effect appears to be more a linear reflection of subjective loudness sensation rather than a display of physical sound pressure level, as measured using a sound-level meter.

  2. Some optimal partial-unit-memory codes. [time-invariant binary convolutional codes

    NASA Technical Reports Server (NTRS)

    Lauer, G. S.

    1979-01-01

    A class of time-invariant binary convolutional codes is defined, called partial-unit-memory codes. These codes are optimal in the sense of having maximum free distance for given values of R, k (the number of encoder inputs), and mu (the number of encoder memory cells). Optimal codes are given for rates R = 1/4, 1/3, 1/2, and 2/3, with mu not greater than 4 and k not greater than mu + 3, whenever such a code is better than previously known codes. An infinite class of optimal partial-unit-memory codes is also constructed based on equidistant block codes.
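    To make the encoder structure concrete, the following is an illustrative sketch of (partial-)unit-memory convolutional encoding over GF(2), where each output block depends on the current k-bit input block and the previous one. The rate-2/3 generator matrices shown are made up for illustration and are not one of Lauer's optimal codes; a code is "partial unit memory" when rank(G1) = mu < k:

```python
def gf2_matvec(matrix, vec):
    """Multiply a binary matrix by a binary vector over GF(2)."""
    return [sum(m & v for m, v in zip(row, vec)) % 2 for row in matrix]

def pum_encode(blocks, g0, g1):
    """(Partial-)unit-memory convolutional encoding:
    y_t = G0 * x_t  XOR  G1 * x_{t-1}  over GF(2)."""
    prev = [0] * len(g1[0])
    out = []
    for x in blocks:
        y0 = gf2_matvec(g0, x)
        y1 = gf2_matvec(g1, prev)
        out.append([a ^ b for a, b in zip(y0, y1)])
        prev = x
    return out

# Illustrative k=2, n=3 (rate 2/3) generators -- NOT one of Lauer's codes:
G0 = [[1, 0], [0, 1], [1, 1]]
G1 = [[1, 1], [0, 0], [0, 0]]   # rank 1, so mu = 1 < k = 2

codeword = pum_encode([[1, 0], [0, 1], [1, 1]], G0, G1)
```

    Optimality in the paper's sense (maximum free distance for given R, k, and mu) depends on choosing G0 and G1 carefully, which this sketch does not attempt.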

  3. Impaired timing adjustments in response to time-varying auditory perturbation during connected speech production in persons who stutter.

    PubMed

    Cai, Shanqing; Beal, Deryk S; Ghosh, Satrajit S; Guenther, Frank H; Perkell, Joseph S

    2014-02-01

    Auditory feedback (AF), the speech signal received by a speaker's own auditory system, contributes to the online control of speech movements. Recent studies based on AF perturbation provided evidence for abnormalities in the integration of auditory error with ongoing articulation and phonation in persons who stutter (PWS), but stopped short of examining connected speech. This is a crucial limitation considering the importance of sequencing and timing in stuttering. In the current study, we imposed time-varying perturbations on AF while PWS and fluent participants uttered a multisyllabic sentence. Two distinct types of perturbations were used to separately probe the control of the spatial and temporal parameters of articulation. While PWS exhibited only subtle anomalies in the AF-based spatial control, their AF-based fine-tuning of articulatory timing was substantially weaker than normal, especially in early parts of the responses, indicating slowness in the auditory-motor integration for temporal control. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. Auditory and visual differences in time perception? An investigation from a developmental perspective with neuropsychological tests.

    PubMed

    Zélanti, Pierre S; Droit-Volet, Sylvie

    2012-07-01

    Adults and children (5- and 8-year-olds) performed a temporal bisection task with either auditory or visual signals and either a short (0.5-1.0s) or long (4.0-8.0s) duration range. Their working memory and attentional capacities were assessed by a series of neuropsychological tests administered in both the auditory and visual modalities. Results showed an age-related improvement in the ability to discriminate time regardless of the sensory modality and duration. However, this improvement was seen to occur more quickly for auditory signals than for visual signals and for short durations rather than for long durations. The younger children exhibited the poorest ability to discriminate time for long durations presented in the visual modality. Statistical analyses of the neuropsychological scores revealed that an increase in working memory and attentional capacities in the visuospatial modality was the best predictor of age-related changes in temporal bisection performance for both visual and auditory stimuli. In addition, the poorer time sensitivity for visual stimuli than for auditory stimuli, especially in the younger children, was explained by the fact that the temporal processing of visual stimuli requires more executive attention than that of auditory stimuli. Copyright © 2012 Elsevier Inc. All rights reserved.

  5. Plasticity in the neural coding of auditory space in the mammalian brain

    PubMed Central

    King, Andrew J.; Parsons, Carl H.; Moore, David R.

    2000-01-01

    Sound localization relies on the neural processing of monaural and binaural spatial cues that arise from the way sounds interact with the head and external ears. Neurophysiological studies of animals raised with abnormal sensory inputs show that the map of auditory space in the superior colliculus is shaped during development by both auditory and visual experience. An example of this plasticity is provided by monaural occlusion during infancy, which leads to compensatory changes in auditory spatial tuning that tend to preserve the alignment between the neural representations of visual and auditory space. Adaptive changes also take place in sound localization behavior, as demonstrated by the fact that ferrets raised and tested with one ear plugged learn to localize as accurately as control animals. In both cases, these adjustments may involve greater use of monaural spectral cues provided by the other ear. Although plasticity in the auditory space map seems to be restricted to development, adult ferrets show some recovery of sound localization behavior after long-term monaural occlusion. The capacity for behavioral adaptation is, however, task dependent, because auditory spatial acuity and binaural unmasking (a measure of the spatial contribution to the “cocktail party effect”) are permanently impaired by chronically plugging one ear, both in infancy and, especially, in adulthood. Experience-induced plasticity allows the neural circuitry underlying sound localization to be customized to individual characteristics, such as the size and shape of the head and ears, and to compensate for natural conductive hearing losses, including those associated with middle ear disease in infancy. PMID:11050215

  6. Frequency tuning and intensity coding of sound in the auditory periphery of the lake sturgeon, Acipenser fulvescens.

    PubMed

    Meyer, Michaela; Fay, Richard R; Popper, Arthur N

    2010-05-01

    Acipenser fulvescens, the lake sturgeon, belongs to one of the few extant non-teleost ray-finned (bony) fishes. The sturgeons (family Acipenseridae) have a phylogenetic history that dates back about 250 million years. The study reported here is the first investigation of peripheral coding strategies for spectral analysis in the auditory system in a non-teleost bony fish. We used a shaker system to simulate the particle motion component of sound during electrophysiological recordings of isolated single units from the eighth nerve innervating the saccule and lagena. Background activity and response characteristics of saccular and lagenar afferents (such as thresholds, response-level functions and temporal firing) resembled the ones found in teleosts. The distribution of best frequencies also resembled data in teleosts (except for Carassius auratus, goldfish) tested with the same stimulation method. The saccule and lagena in A. fulvescens contain otoconia, in contrast to the solid otoliths found in teleosts; however, this difference in otolith structure did not appear to affect threshold, frequency tuning, intensity or temporal responses of auditory afferents. In general, the physiological characteristics common to A. fulvescens, teleosts and land vertebrates reflect important functions of the auditory system that may have been conserved throughout the evolution of vertebrates.

  7. Frequency tuning and intensity coding of sound in the auditory periphery of the lake sturgeon, Acipenser fulvescens

    PubMed Central

    Meyer, Michaela; Fay, Richard R.; Popper, Arthur N.

    2010-01-01

    Acipenser fulvescens, the lake sturgeon, belongs to one of the few extant non-teleost ray-finned (bony) fishes. The sturgeons (family Acipenseridae) have a phylogenetic history that dates back about 250 million years. The study reported here is the first investigation of peripheral coding strategies for spectral analysis in the auditory system in a non-teleost bony fish. We used a shaker system to simulate the particle motion component of sound during electrophysiological recordings of isolated single units from the eighth nerve innervating the saccule and lagena. Background activity and response characteristics of saccular and lagenar afferents (such as thresholds, response-level functions and temporal firing) resembled the ones found in teleosts. The distribution of best frequencies also resembled data in teleosts (except for Carassius auratus, goldfish) tested with the same stimulation method. The saccule and lagena in A. fulvescens contain otoconia, in contrast to the solid otoliths found in teleosts; however, this difference in otolith structure did not appear to affect threshold, frequency tuning, intensity or temporal responses of auditory afferents. In general, the physiological characteristics common to A. fulvescens, teleosts and land vertebrates reflect important functions of the auditory system that may have been conserved throughout the evolution of vertebrates. PMID:20400642

  8. Interactive rhythmic auditory stimulation reinstates natural 1/f timing in gait of Parkinson's patients.

    PubMed

    Hove, Michael J; Suzuki, Kazuki; Uchitomi, Hirotaka; Orimo, Satoshi; Miyake, Yoshihiro

    2012-01-01

    Parkinson's disease (PD) and basal ganglia dysfunction impair movement timing, which leads to gait instability and falls. Parkinsonian gait consists of random, disconnected stride times--rather than the 1/f structure observed in healthy gait--and this randomness of stride times (low fractal scaling) predicts falling. Walking with fixed-tempo Rhythmic Auditory Stimulation (RAS) can improve many aspects of gait timing; however, it lowers fractal scaling (away from healthy 1/f structure) and requires attention. Here we show that interactive rhythmic auditory stimulation reestablishes healthy gait dynamics in PD patients. In the experiment, PD patients and healthy participants walked with a) no auditory stimulation, b) fixed-tempo RAS, and c) interactive rhythmic auditory stimulation. The interactive system used foot sensors and nonlinear oscillators to track and mutually entrain with the human's step timing. Patients consistently synchronized with the interactive system, their fractal scaling returned to levels of healthy participants, and their gait felt more stable to them. Patients and healthy participants rarely synchronized with fixed-tempo RAS, and when they did synchronize their fractal scaling declined from healthy 1/f levels. Five minutes after removing the interactive rhythmic stimulation, the PD patients' gait retained high fractal scaling, suggesting that the interaction stabilized the internal rhythm generating system and reintegrated timing networks. The experiment demonstrates that complex interaction is important in the (re)emergence of 1/f structure in human behavior and that interactive rhythmic auditory stimulation is a promising therapeutic tool for improving gait of PD patients.
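    The fractal scaling of stride times discussed above is commonly estimated with detrended fluctuation analysis (DFA). The abstract does not specify the authors' exact method, so the following is a generic DFA sketch, under the usual interpretation that white noise (random, disconnected intervals) yields an exponent near 0.5 while healthy 1/f gait yields an exponent near 1.0:

```python
import math
import random

def dfa_alpha(series, window_sizes):
    """Detrended fluctuation analysis: estimate the scaling exponent
    alpha from F(n) ~ n^alpha over the given window sizes."""
    mean = sum(series) / len(series)
    # Integrate the mean-removed series into a cumulative "profile".
    profile, total = [], 0.0
    for x in series:
        total += x - mean
        profile.append(total)

    def fluctuation(n):
        """RMS residual after linear detrending in non-overlapping windows."""
        sq, count = 0.0, 0
        for start in range(0, len(profile) - n + 1, n):
            seg = profile[start:start + n]
            t = list(range(n))
            tm, sm = (n - 1) / 2, sum(seg) / n
            denom = sum((ti - tm) ** 2 for ti in t)
            slope = sum((ti - tm) * (si - sm) for ti, si in zip(t, seg)) / denom
            for ti, si in zip(t, seg):
                sq += (si - (sm + slope * (ti - tm))) ** 2
                count += 1
        return math.sqrt(sq / count)

    # Slope of log F(n) vs log n is the scaling exponent alpha.
    logs = [(math.log(n), math.log(fluctuation(n))) for n in window_sizes]
    lm = sum(l for l, _ in logs) / len(logs)
    fm = sum(f for _, f in logs) / len(logs)
    return (sum((l - lm) * (f - fm) for l, f in logs)
            / sum((l - lm) ** 2 for l, _ in logs))

random.seed(0)
white = [random.gauss(0, 1) for _ in range(2048)]
alpha = dfa_alpha(white, [8, 16, 32, 64, 128])   # expected near 0.5
```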

  9. Using Spatial Manipulation to Examine Interactions between Visual and Auditory Encoding of Pitch and Time

    PubMed Central

    McLachlan, Neil M.; Greco, Loretta J.; Toner, Emily C.; Wilson, Sarah J.

    2010-01-01

    Music notations use both symbolic and spatial representation systems. Novice musicians do not have the training to associate symbolic information with musical identities, such as chords or rhythmic and melodic patterns. They provide an opportunity to explore the mechanisms underpinning multimodal learning when spatial encoding strategies of feature dimensions might be expected to dominate. In this study, we applied a range of transformations (such as time reversal) to short melodies and rhythms and asked novice musicians to identify them with or without the aid of notation. Performance using a purely spatial (graphic) notation was contrasted with the more symbolic, traditional western notation over a series of weekly sessions. The results showed learning effects for both notation types, but performance improved more for graphic notation. This points to greater compatibility of auditory and visual neural codes for novice musicians when using spatial notation, suggesting that pitch and time may be spatially encoded in multimodal associative memory. The findings also point to new strategies for training novice musicians. PMID:21833287

  10. I can see what you are saying: Auditory labels reduce visual search times.

    PubMed

    Cho, Kit W

    2016-10-01

    The present study explored the self-directed-speech effect, the finding that relative to silent reading of a label (e.g., DOG), saying it aloud reduces visual search reaction times (RTs) for locating a target picture among distractors. Experiment 1 examined whether this effect is due to a confound in the differences in the number of cues in self-directed speech (two) vs. silent reading (one) and tested whether self-articulation is required for the effect. The results showed that self-articulation is not required and that merely hearing the auditory label reduces visual search RTs relative to silent reading. This finding also rules out the number of cues confound. Experiment 2 examined whether hearing an auditory label activates more prototypical features of the label's referent and whether the auditory-label benefit is moderated by the target's imagery concordance (the degree to which the target picture matches the mental picture that is activated by a written label for the target). When the target imagery concordance was high, RTs following the presentation of a high prototypicality picture or auditory cue were comparable and shorter than RTs following a visual label or low prototypicality picture cue. However, when the target imagery concordance was low, RTs following an auditory cue were shorter than the comparable RTs following the picture cues and visual-label cue. The results suggest that an auditory label activates both prototypical and atypical features of a concept and can facilitate visual search RTs even when compared to picture primes.

  11. System Measures Errors Between Time-Code Signals

    NASA Technical Reports Server (NTRS)

    Cree, David; Venkatesh, C. N.

    1993-01-01

    System measures timing errors between signals produced by three asynchronous time-code generators. Errors between 1-second clock pulses resolved to 2 microseconds. Basic principle of computation of timing errors as follows: central processing unit in microcontroller constantly monitors time data received from time-code generators for changes in 1-second time-code intervals. In response to any such change, microprocessor buffers count of 16-bit internal timer.
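    A minimal sketch of the error computation described (the actual firmware is not shown, and the generator names and counts are hypothetical): at each generator's 1-second pulse the free-running 16-bit timer count is latched, and pairwise differences, corrected for wraparound, give the timing errors in timer ticks:

```python
def pairwise_timing_errors(tick_counts, timer_bits=16):
    """Given the free-running timer count latched at each generator's
    1-second tick, compute signed pairwise errors in timer ticks,
    handling 16-bit counter wraparound."""
    modulus = 1 << timer_bits
    errors = {}
    names = sorted(tick_counts)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            diff = (tick_counts[b] - tick_counts[a]) % modulus
            # Interpret the modular difference as a signed offset.
            if diff >= modulus // 2:
                diff -= modulus
            errors[(a, b)] = diff
    return errors

# Hypothetical counts latched from a 16-bit timer (e.g. 2 us per tick)
# at each of three generators' 1-second pulses.
counts = {"gen_a": 100, "gen_b": 150, "gen_c": 65530}
errs = pairwise_timing_errors(counts)
```

    With a 2-microsecond tick, an error of 50 ticks would correspond to 100 microseconds between the two pulses.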

  13. Space-Time Network Codes Utilizing Transform-Based Coding

    DTIC Science & Technology

    2010-12-01

    1 − p_rn if β_rn = 1; p_rn if β_rn = 0, (17) where p_rn is the symbol error rate (SER) for detecting x_n at U_r. For M-QAM modulation, it can be shown ... time-division multiple access (TDMA) would be the most commonly used technique in many applications. However, TDMA is extremely inefficient in ... r ≠ n, where x_n is from an M-QAM constellation X. At the end of this phase, each client node U_r for r = 1, 2, ..., N possesses a set of N symbols

  14. Onset coding is degraded in auditory nerve fibers from mutant mice lacking synaptic ribbons.

    PubMed

    Buran, Bradley N; Strenzke, Nicola; Neef, Andreas; Gundelfinger, Eckart D; Moser, Tobias; Liberman, M Charles

    2010-06-02

    Synaptic ribbons, found at the presynaptic membrane of sensory cells in both ear and eye, have been implicated in the vesicle-pool dynamics of synaptic transmission. To elucidate ribbon function, we characterized the response properties of single auditory nerve fibers in mice lacking Bassoon, a scaffolding protein involved in anchoring ribbons to the membrane. In bassoon mutants, immunohistochemistry showed that fewer than 3% of the hair cells' afferent synapses retained anchored ribbons. Auditory nerve fibers from mutants had normal threshold, dynamic range, and postonset adaptation in response to tone bursts, and they were able to phase lock with normal precision to amplitude-modulated tones. However, spontaneous and sound-evoked discharge rates were reduced, and the reliability of spikes, particularly at stimulus onset, was significantly degraded as shown by an increased variance of first-spike latencies. Modeling based on in vitro studies of normal and mutant hair cells links these findings to reduced release rates at the synapse. The degradation of response reliability in these mutants suggests that the ribbon and/or Bassoon normally facilitate high rates of exocytosis and that its absence significantly compromises the temporal resolving power of the auditory system.

  15. Temporal envelope of time-compressed speech represented in the human auditory cortex

    PubMed Central

    Nourski, Kirill V.; Reale, Richard A.; Oya, Hiroyuki; Kawasaki, Hiroto; Kovach, Christopher K.; Chen, Haiming; Howard, Matthew A.; Brugge, John F.

    2010-01-01

    Speech comprehension relies on temporal cues contained in the speech envelope, and the auditory cortex has been implicated as playing a critical role in encoding this temporal information. We investigated auditory cortical responses to speech stimuli in subjects undergoing invasive electrophysiological monitoring for pharmacologically refractory epilepsy. Recordings were made from multi-contact electrodes implanted in Heschl’s gyrus (HG). Speech sentences, time-compressed from 0.75 to 0.20 of natural speaking rate, elicited average evoked potentials (AEPs) and increases in event-related band power (ERBP) of cortical high frequency (70–250 Hz) activity. Cortex of posteromedial HG, the presumed core of human auditory cortex, represented the envelope of speech stimuli in the AEP and ERBP. Envelope-following in ERBP, but not in AEP, was evident in both language dominant and non-dominant hemispheres for relatively high degrees of compression where speech was not comprehensible. Compared to posteromedial HG, responses from anterolateral HG — an auditory belt field — exhibited longer latencies, lower amplitudes and little or no time locking to the speech envelope. The ability of the core auditory cortex to follow the temporal speech envelope over a wide range of speaking rates leads us to conclude that such capacity in itself is not a limiting factor for speech comprehension. PMID:20007480

  16. [Development of auditory-visual spatial integration using saccadic response time as the index].

    PubMed

    Kato, Masaharu; Konishi, Kaoru; Kurosawa, Makiko; Konishi, Yukuo

    2006-05-01

    We measured saccadic response time (SRT) to investigate developmental changes related to spatially aligned or misaligned auditory and visual stimuli responses. We exposed 4-, 5-, and 11-month-old infants to ipsilateral or contralateral auditory-visual stimuli and monitored their eye movements using an electro-oculographic (EOG) system. The SRT analyses revealed four main results. First, saccades were triggered by visual stimuli but not always triggered by auditory stimuli. Second, SRTs became shorter as the children grew older. Third, SRTs for the ipsilateral and visual-only conditions were the same in all infants. Fourth, SRTs for the contralateral condition were longer than for the ipsilateral and visual-only conditions in 11-month-old infants but were the same for all three conditions in 4- and 5-month-old infants. These findings suggest that infants acquire the function of auditory-visual spatial integration underlying saccadic eye movement between the ages of 5 and 11 months. The dependency of SRTs on the spatial configuration of auditory and visual stimuli can be explained by cortical control of the superior colliculus. Our finding of no differences in SRTs between the ipsilateral and visual-only conditions suggests that there are multiple pathways for controlling the superior colliculus and that these pathways have different developmental time courses.

  17. A comparative study of simple auditory reaction time in blind (congenitally) and sighted subjects.

    PubMed

    Gandhi, Pritesh Hariprasad; Gokhale, Pradnya A; Mehta, H B; Shah, C J

    2013-07-01

    Reaction time is the time interval between the application of a stimulus and the appearance of an appropriate voluntary response by a subject. It involves stimulus processing, decision making, and response programming. Reaction time studies have been popular due to their implications in sports physiology. Reaction time has been widely studied as its practical implications may be of great consequence; e.g., a slower than normal reaction time while driving can have grave results. The aims were to study simple auditory reaction time in congenitally blind subjects and in age- and sex-matched sighted subjects, and to compare simple auditory reaction time between congenitally blind subjects and healthy control subjects. The study was carried out in two groups: the first comprised 50 congenitally blind subjects and the second 50 healthy controls. It was carried out on a Multiple Choice Reaction Time Apparatus, Inco Ambala Ltd. (accuracy ±0.001 s), in a sitting position at Government Medical College and Hospital, Bhavnagar, and at a Blind School, PNR campus, Bhavnagar, Gujarat, India. Simple auditory reaction time responses to four different types of sound (horn, bell, ring, and whistle) were recorded in both groups. According to our study, there is no significant difference in reaction time between congenitally blind and normal healthy persons. Blind individuals commonly rely on tactual and auditory cues for information and orientation, and their reliance on touch and audition, together with more practice in using these modalities to guide behavior, is often reflected in better performance of blind relative to sighted participants in tactile or auditory discrimination tasks; however, there is no difference in reaction time between congenitally blind and sighted people.

  18. A Comparative Study of Simple Auditory Reaction Time in Blind (Congenitally) and Sighted Subjects

    PubMed Central

    Gandhi, Pritesh Hariprasad; Gokhale, Pradnya A.; Mehta, H. B.; Shah, C. J.

    2013-01-01

    Background: Reaction time is the time interval between the application of a stimulus and the appearance of an appropriate voluntary response by a subject. It involves stimulus processing, decision making, and response programming. Reaction time studies have been popular due to their implications in sports physiology. Reaction time has been widely studied as its practical implications may be of great consequence; e.g., a slower than normal reaction time while driving can have grave results. Objective: To study simple auditory reaction time in congenitally blind subjects and in age- and sex-matched sighted subjects, and to compare simple auditory reaction time between congenitally blind subjects and healthy control subjects. Materials and Methods: The study was carried out in two groups: the first comprised 50 congenitally blind subjects and the second 50 healthy controls. It was carried out on a Multiple Choice Reaction Time Apparatus, Inco Ambala Ltd. (accuracy ±0.001 s), in a sitting position at Government Medical College and Hospital, Bhavnagar, and at a Blind School, PNR campus, Bhavnagar, Gujarat, India. Observations/Results: Simple auditory reaction time responses to four different types of sound (horn, bell, ring, and whistle) were recorded in both groups. According to our study, there is no significant difference in reaction time between congenitally blind and normal healthy persons. Conclusion: Blind individuals commonly rely on tactual and auditory cues for information and orientation, and their reliance on touch and audition, together with more practice in using these modalities to guide behavior, is often reflected in better performance of blind relative to sighted participants in tactile or auditory discrimination tasks; however, there is no difference in reaction time between congenitally blind and sighted people. PMID:24249930

  19. Coding of sound pressure level in the barn owl's auditory nerve.

    PubMed

    Köppl, C; Yates, G

    1999-11-01

    Rate-intensity functions, i.e., the relation between discharge rate and sound pressure level, were recorded from single auditory nerve fibers in the barn owl. Differences in sound pressure level between the owl's two ears are known to be an important cue in sound localization. One objective was therefore to quantify the discharge rates of auditory nerve fibers, as a basis for higher-order processing of sound pressure level. The second aim was to investigate the rate-intensity functions for cues to the underlying cochlear mechanisms, using a model developed in mammals. Rate-intensity functions at the most sensitive frequency mostly showed a well-defined breakpoint between an initial steep segment and a progressively flattening segment. This shape has, in mammals, been convincingly traced to a compressive nonlinearity in the cochlear mechanics, which in turn is a reflection of the cochlear amplifier enhancing low-level stimuli. The similarity of the rate-intensity functions of the barn owl is thus further evidence for a similar mechanism in birds. An interesting difference from mammalian data was that this compressive nonlinearity was not shared among fibers of similar characteristic frequency, suggesting a different mechanism with a more locally differentiated operation than in mammals. In all fibers, the steepest change in discharge rate with rising sound pressure level occurred within 10-20 dB of their respective thresholds. Because the range of neural thresholds at any one characteristic frequency is small in the owl, auditory nerve fibers were collectively most sensitive for changes in sound pressure level within approximately 30 dB of the best thresholds. Fibers most sensitive to high frequencies (>6-7 kHz) showed a smaller increase of rate above spontaneous discharge rate than did lower-frequency fibers.
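    The breakpoint between a rate-intensity function's steep initial segment and its flattening segment can be estimated with a simple two-segment ("broken-stick") fit. The following sketch is a generic illustration on synthetic data, not the authors' fitting procedure: it tries each interior point as the split and keeps the one minimizing the combined squared error of two least-squares lines:

```python
def fit_breakpoint(levels, rates):
    """Two-segment ('broken-stick') fit of a rate-intensity function.
    Returns the sound level at which the flatter segment begins."""
    def line_sse(xs, ys):
        """Sum of squared residuals of a least-squares line through (xs, ys)."""
        n = len(xs)
        xm, ym = sum(xs) / n, sum(ys) / n
        denom = sum((x - xm) ** 2 for x in xs) or 1.0
        slope = sum((x - xm) * (y - ym) for x, y in zip(xs, ys)) / denom
        return sum((y - (ym + slope * (x - xm))) ** 2
                   for x, y in zip(xs, ys))

    best = None
    for i in range(2, len(levels) - 2):
        sse = line_sse(levels[:i], rates[:i]) + line_sse(levels[i:], rates[i:])
        if best is None or sse < best[1]:
            best = (levels[i], sse)
    return best[0]

# Synthetic rate-intensity data: steep growth below 30 dB SPL (the
# cochlear-amplifier-dominated region), shallow growth above it.
levels = list(range(10, 71, 5))
rates = [10 * (l - 10) if l <= 30 else 200 + 2 * (l - 30) for l in levels]
bp = fit_breakpoint(levels, rates)
```

    In practice, saturation and spontaneous rate are usually included in the fitted model; this sketch keeps only the two linear segments for clarity.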

  20. Auditory-evoked spike firing in the lateral amygdala and Pavlovian fear conditioning: mnemonic code or fear bias?

    PubMed

    Goosens, Ki A; Hobin, Jennifer A; Maren, Stephen

    2003-12-04

    Amygdala neuroplasticity has emerged as a candidate substrate for Pavlovian fear memory. By this view, conditional stimulus (CS)-evoked activity represents a mnemonic code that initiates the expression of fear behaviors. However, a fear state may nonassociatively enhance sensory processing, biasing CS-evoked activity in amygdala neurons. Here we describe experiments that dissociate auditory CS-evoked spike firing in the lateral amygdala (LA) and both conditional fear behavior and LA excitability in rats. We found that the expression of conditional freezing and increased LA excitability was neither necessary nor sufficient for the expression of conditional increases in CS-evoked spike firing. Rather, conditioning-related changes in CS-evoked spike firing were solely determined by the associative history of the CS. Thus, our data support a model in which associative activity in the LA encodes fear memory and contributes to the expression of learned fear behaviors.

  1. Perceptual Grouping over Time within and across Auditory and Tactile Modalities

    PubMed Central

    Lin, I-Fan; Kashino, Makio

    2012-01-01

    In auditory scene analysis, population separation and temporal coherence have been proposed to explain how auditory features are grouped together and streamed over time. The present study investigated whether these two theories can be applied to tactile streaming and whether temporal coherence theory can be applied to crossmodal streaming. The results show that synchrony detection between two tones/taps at different frequencies/locations became difficult when one of the tones/taps was embedded in a perceptual stream. While the taps applied to the same location were streamed over time, the taps applied to different locations were not. This observation suggests that tactile stream formation can be explained by population-separation theory. On the other hand, temporally coherent auditory stimuli at different frequencies were streamed over time, but temporally coherent tactile stimuli applied to different locations were not. When there was within-modality streaming, temporally coherent auditory stimuli and tactile stimuli were not streamed over time, either. This observation suggests the limitation of temporal coherence theory when it is applied to perceptual grouping over time. PMID:22844509

  2. Auditory Attention to Frequency and Time: An Analogy to Visual Local-Global Stimuli

    ERIC Educational Resources Information Center

    Justus, Timothy; List, Alexandra

    2005-01-01

    Two priming experiments demonstrated exogenous attentional persistence to the fundamental auditory dimensions of frequency (Experiment 1) and time (Experiment 2). In a divided-attention task, participants responded to an independent dimension, the identification of three-tone sequence patterns, for both prime and probe stimuli. The stimuli were…

  3. Auditory Imagery Shapes Movement Timing and Kinematics: Evidence from a Musical Task

    ERIC Educational Resources Information Center

    Keller, Peter E.; Dalla Bella, Simone; Koch, Iring

    2010-01-01

    The role of anticipatory auditory imagery in music-like sequential action was investigated by examining timing accuracy and kinematics using a motion capture system. Musicians responded to metronomic pacing signals by producing three unpaced taps on three vertically aligned keys at the given tempo. Taps triggered tones in two out of three blocked…

  4. Probing the time course of head-motion cues integration during auditory scene analysis.

    PubMed

    Kondo, Hirohito M; Toshima, Iwaki; Pressnitzer, Daniel; Kashino, Makio

    2014-01-01

    The perceptual organization of auditory scenes is a hard but important problem to solve for human listeners. It is thus likely that cues from several modalities are pooled for auditory scene analysis, including sensory-motor cues related to the active exploration of the scene. We previously reported a strong effect of head motion on auditory streaming. Streaming refers to an experimental paradigm where listeners hear sequences of pure tones, and rate their perception of one or more subjective sources called streams. To disentangle the effects of head motion (changes in acoustic cues at the ear, subjective location cues, and motor cues), we used a robotic telepresence system, Telehead. We found that head motion induced perceptual reorganization even when the acoustic scene had not changed. Here we reanalyzed the same data to probe the time course of sensory-motor integration. We show that motor cues had a different time course compared to acoustic or subjective location cues: motor cues impacted perceptual organization earlier and for a shorter time than other cues, with successive positive and negative contributions to streaming. An additional experiment controlled for the effects of volitional anticipatory components, and found that arm or leg movements did not have any impact on scene analysis. These data provide a first investigation of the time course of the complex integration of sensory-motor cues in an auditory scene analysis task, and they suggest a loose temporal coupling between the different mechanisms involved.

  5. Auditory imagery shapes movement timing and kinematics: evidence from a musical task.

    PubMed

    Keller, Peter E; Dalla Bella, Simone; Koch, Iring

    2010-04-01

    The role of anticipatory auditory imagery in music-like sequential action was investigated by examining timing accuracy and kinematics using a motion capture system. Musicians responded to metronomic pacing signals by producing three unpaced taps on three vertically aligned keys at the given tempo. Taps triggered tones in two out of three blocked feedback conditions, where key-to-tone mappings were compatible or incompatible in terms of spatial and pitch height. Results indicate that, while timing was most accurate without tones, movements were smaller in amplitude and less forceful (i.e., acceleration prior to impact was lowest) when tones were present. Moreover, timing was more accurate and movements were less forceful with compatible than with incompatible auditory feedback. Observing these effects at the first tap (before tone onset) suggests that anticipatory auditory imagery modulates the temporal kinematics of regularly timed auditory action sequences, like those found in music. Such cross-modal ideomotor processes may function to facilitate planning efficiency and biomechanical economy in voluntary action. Copyright 2010 APA, all rights reserved.

  6. Reaction Time and Accuracy in Individuals with Aphasia during Auditory Vigilance Tasks

    ERIC Educational Resources Information Center

    Laures, Jacqueline S.

    2005-01-01

    Research indicates that attentional deficits exist in aphasic individuals. However, relatively little is known about auditory vigilance performance in individuals with aphasia. The current study explores reaction time (RT) and accuracy in 10 aphasic participants and 10 nonbrain-damaged controls during linguistic and nonlinguistic auditory…

  8. Auditory and motor contributions to the timing of melodies under cognitive load.

    PubMed

    Maes, Pieter-Jan; Giacofci, Madison; Leman, Marc

    2015-10-01

    Current theoretical models and empirical research suggest that sensorimotor control and feedback processes may guide time perception and production. In the current study, we investigated the role of motor control and auditory feedback in an interval-production task performed under heightened cognitive load. We hypothesized that general associative learning mechanisms enable the calibration of time against patterns of dynamic change in motor control processes and auditory feedback information. In Experiment 1, we applied a dual-task interference paradigm consisting of a finger-tapping (continuation) task in combination with a working memory task. Participants (nonmusicians) had to either perform or avoid arm movements between successive key presses (continuous vs. discrete). Auditory feedback from a key press (a piano tone) filled either the complete duration of the target interval or only a small part (long vs. short). Results suggested that both continuous movement control and long piano feedback tones contributed to regular timing production. In Experiment 2, we gradually adjusted the duration of the long auditory feedback tones throughout the duration of a trial. The results showed that a gradual shortening of tones throughout time increased the rate at which participants performed tone onsets. Overall, our findings suggest that the human perceptual-motor system may be important in guiding temporal behavior under cognitive load. (c) 2015 APA, all rights reserved.

  9. Space-Time Code Designs for Broadband Wireless Communications

    DTIC Science & Technology

    2005-03-01

    Decoding Algorithms (i). Fast iterative decoding algorithms for lattice-based space-time coded MIMO systems and single-antenna vector OFDM systems: We... Information Theory, vol. 49, p. 313, Jan. 2003. 5. G. Fan and X.-G. Xia, "Wavelet-Based Texture Analysis and Synthesis Using Hidden Markov Models," IEEE... PSK, and CPM signals, lattice-based space-time codes, and unitary differential space-time codes for a large number of transmit antennas. We want to

  10. Coding in Ireland: Time for Recognition.

    PubMed

    Murphy, Deirdre

    2010-10-01

    Recognition of skilled coders' work within the Irish health system is long overdue. A project being undertaken in Ireland now by the central office for coding at the Economic and Social Research Institute (ESRI) is exploring ways to raise the coders' profile, promote a profession of clinical coders and ensure quality benchmarks for all stakeholders, including the introduction of accredited training. The Hospital Inpatient Enquiry (HIPE) at the ESRI uses ICD-10-AM and trains and supports coders in all aspects of their work. This paper also presents some preliminary findings of a HIPE workforce study undertaken in early 2010. The establishment of a recognised clinical coder profession through engagement with all stakeholders and the accreditation of Irish coder education would enhance the position and recognition of coding as a skilled profession within the Irish healthcare system, and also ensure those data meet the highest national and international data quality standards.

  11. Neural codes for perceptual discrimination of acoustic flutter in the primate auditory cortex

    PubMed Central

    Lemus, Luis; Hernández, Adrián; Romo, Ranulfo

    2009-01-01

    We recorded from single neurons of the primary auditory cortex (A1), while trained monkeys reported a decision based on the comparison of 2 acoustic flutter stimuli. Crucially, to form the decision, monkeys had to compare the second stimulus rate to the memory trace of the first stimulus rate. We found that the responses of A1 neurons encode stimulus rates both through their periodicity and through their firing rates during the stimulation periods, but not during the working memory and decision components of this task. Neurometric thresholds based on firing rate were very similar to the monkey's discrimination thresholds, whereas neurometric thresholds based on periodicity were lower than the experimental thresholds. Thus, an observer could solve this task with a precision similar to that of the monkey based only on the firing rates evoked by the stimuli. These results suggest that the A1 is exclusively associated with the sensory and not with the cognitive components of this task. PMID:19458263
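    The abstract above contrasts two candidate neural codes: firing rate and periodicity (phase locking to the stimulus rate). A standard way to quantify phase locking is the vector strength statistic. The following is a minimal illustrative sketch, not the authors' analysis code:

    ```python
    import numpy as np

    def vector_strength(spike_times, mod_freq):
        """Vector strength: 1.0 = perfect phase locking to mod_freq, ~0 = none.

        spike_times : spike times in seconds (1-D array-like)
        mod_freq    : modulation (flutter) frequency in Hz
        """
        # Map each spike to a phase of the modulation cycle and take the
        # magnitude of the mean resultant vector on the unit circle.
        phases = 2 * np.pi * mod_freq * np.asarray(spike_times)
        return np.abs(np.mean(np.exp(1j * phases)))

    # Spikes locked to a 20 Hz flutter: one spike per cycle at a fixed phase
    locked = np.arange(0, 1, 1 / 20) + 0.005
    vs_locked = vector_strength(locked, 20.0)     # close to 1.0

    # Random spikes carry little periodicity information
    rng = np.random.default_rng(0)
    random_spikes = np.sort(rng.uniform(0, 1, 20))
    vs_random = vector_strength(random_spikes, 20.0)
    ```

    A neurometric comparison like the one in the abstract would compute such a periodicity measure and a spike count per trial, then ask which discriminates stimulus rates better.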

  12. Time-Varying Vocal Folds Vibration Detection Using a 24 GHz Portable Auditory Radar

    PubMed Central

    Hong, Hong; Zhao, Heng; Peng, Zhengyu; Li, Hui; Gu, Chen; Li, Changzhi; Zhu, Xiaohua

    2016-01-01

    Time-varying vocal folds vibration information is of crucial importance in speech processing, and the traditional devices for acquiring speech signals are easily corrupted by high background noise and voice interference. In this paper, we present a non-acoustic way to capture human vocal folds vibration using a 24-GHz portable auditory radar. Since the vocal folds vibration amplitude reaches only several millimeters, a high operating frequency and a 4 × 4 antenna array are applied to achieve high sensitivity. A Variational Mode Decomposition (VMD) based algorithm is proposed to first decompose the radar-detected auditory signal into a sequence of intrinsic modes and then extract the time-varying vocal folds vibration frequency from the corresponding mode. Feasibility demonstration, evaluation, and comparison are conducted with tonal and non-tonal languages; the low relative errors show a high consistency between the radar-detected time-varying vocal folds vibration and the acoustic fundamental frequency, with the auditory radar significantly improving the frequency-resolving power. PMID:27483261

  13. Time-Varying Vocal Folds Vibration Detection Using a 24 GHz Portable Auditory Radar.

    PubMed

    Hong, Hong; Zhao, Heng; Peng, Zhengyu; Li, Hui; Gu, Chen; Li, Changzhi; Zhu, Xiaohua

    2016-07-28

    Time-varying vocal folds vibration information is of crucial importance in speech processing, and the traditional devices for acquiring speech signals are easily corrupted by high background noise and voice interference. In this paper, we present a non-acoustic way to capture human vocal folds vibration using a 24-GHz portable auditory radar. Since the vocal folds vibration amplitude reaches only several millimeters, a high operating frequency and a 4 × 4 antenna array are applied to achieve high sensitivity. A Variational Mode Decomposition (VMD) based algorithm is proposed to first decompose the radar-detected auditory signal into a sequence of intrinsic modes and then extract the time-varying vocal folds vibration frequency from the corresponding mode. Feasibility demonstration, evaluation, and comparison are conducted with tonal and non-tonal languages; the low relative errors show a high consistency between the radar-detected time-varying vocal folds vibration and the acoustic fundamental frequency, with the auditory radar significantly improving the frequency-resolving power.
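    The pipeline described above (decompose the radar signal into modes, then track the frequency of the relevant mode) can be illustrated for the frequency-tracking step alone. The sketch below uses a Hilbert-transform instantaneous-frequency estimate on a synthetic narrowband signal; the `instantaneous_frequency` helper is a hypothetical stand-in for the paper's VMD-based extraction, not its actual code:

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def instantaneous_frequency(mode, fs):
        """Track the time-varying frequency of one narrowband mode.

        mode : narrowband signal (e.g. one VMD intrinsic mode), 1-D array
        fs   : sampling rate in Hz
        Returns an array of instantaneous frequency estimates (Hz).
        """
        analytic = hilbert(mode)                  # analytic signal
        phase = np.unwrap(np.angle(analytic))     # instantaneous phase
        return np.diff(phase) * fs / (2 * np.pi)  # d(phase)/dt -> Hz

    # A chirp sweeping 100 -> 200 Hz stands in for a vocal-fold mode
    fs = 2000
    t = np.arange(0, 1, 1 / fs)
    f_true = 100 + 100 * t                        # linear frequency sweep
    sig = np.sin(2 * np.pi * np.cumsum(f_true) / fs)
    f_est = instantaneous_frequency(sig, fs)
    ```

    In practice the estimate is noisy at the signal edges (Hilbert end effects), so a real pipeline would trim or smooth the boundary samples before use.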

  14. Activity in the left auditory cortex is associated with individual impulsivity in time discounting.

    PubMed

    Han, Ruokang; Takahashi, Taiki; Miyazaki, Akane; Kadoya, Tomoka; Kato, Shinya; Yokosawa, Koichi

    2015-01-01

    Impulsivity dictates individual decision-making behavior. Therefore, it can reflect consumption behavior and risk of addiction and thus underlies social activities as well. Neuroscience has been applied to explain social activities; however, the brain function controlling impulsivity has remained unclear. It is known that impulsivity is related to individual time perception, i.e., a person who perceives a certain physical time as being longer is more impulsive. Here we show that activity of the left auditory cortex is related to individual impulsivity. Individual impulsivity was evaluated by a self-answered questionnaire in twelve healthy right-handed adults, and activities of the auditory cortices of both hemispheres while listening to continuous tones were recorded by magnetoencephalography. Sustained activity of the left auditory cortex was significantly correlated with impulsivity; that is, larger sustained activity indicated stronger impulsivity. The results suggest that the left auditory cortex represents time perception, probably because the area is involved in speech perception, and that it thereby represents impulsivity indirectly.

  15. SYMTRAN - A Time-dependent Symmetric Tandem Mirror Transport Code

    SciTech Connect

    Hua, D; Fowler, T

    2004-06-15

    A time-dependent version of the steady-state radial transport model in symmetric tandem mirrors in Ref. [1] has been coded up and first tests performed. Our code, named SYMTRAN, is an adaptation of the earlier SPHERE code for spheromaks, now modified for tandem mirror physics. Motivated by Post's new concept of kinetic stabilization of symmetric mirrors, it is an extension of the earlier TAMRAC rate-equation code omitting radial transport [2], which successfully accounted for experimental results in TMX. The SYMTRAN code differs from the earlier tandem mirror radial transport code TMT in that our code is focused on axisymmetric tandem mirrors and classical diffusion, whereas TMT emphasized non-ambipolar transport in TMX and MFTF-B due to yin-yang plugs and non-symmetric transitions between the plugs and axisymmetric center cell. Both codes exhibit interesting but different non-linear behavior.

  16. Coding of sound direction in the auditory periphery of the lake sturgeon, Acipenser fulvescens

    PubMed Central

    Popper, Arthur N.; Fay, Richard R.

    2012-01-01

    The lake sturgeon, Acipenser fulvescens, belongs to one of the few extant nonteleost ray-finned fishes and diverged from the main vertebrate lineage about 250 million years ago. The aim of this study was to use this species to explore the peripheral neural coding strategies for sound direction and compare these results to modern bony fishes (teleosts). Extracellular recordings were made from afferent neurons innervating the saccule and lagena of the inner ear while the fish was stimulated using a shaker system. Afferents were highly directional and strongly phase locked to the stimulus. Directional response profiles resembled cosine functions, and directional preferences occurred at a wide range of stimulus intensities (spanning at least 60 dB re 1 nm displacement). Seventy-six percent of afferents were directionally selective for stimuli in the vertical plane near 90° (up down) and did not respond to horizontal stimulation. Sixty-two percent of afferents responsive to horizontal stimulation had their best axis in azimuths near 0° (front back). These findings suggest that in the lake sturgeon, in contrast to teleosts, the saccule and lagena may convey more limited information about the direction of a sound source, raising the possibility that this species uses a different mechanism for localizing sound. For azimuth, a mechanism could involve the utricle or perhaps the computation of arrival time differences. For elevation, behavioral strategies such as directing the head to maximize input to the area of best sensitivity may be used. Alternatively, the lake sturgeon may have a more limited ability for sound source localization compared with teleosts. PMID:22031776

  17. Coding of sound direction in the auditory periphery of the lake sturgeon, Acipenser fulvescens.

    PubMed

    Meyer, Michaela; Popper, Arthur N; Fay, Richard R

    2012-01-01

    The lake sturgeon, Acipenser fulvescens, belongs to one of the few extant nonteleost ray-finned fishes and diverged from the main vertebrate lineage about 250 million years ago. The aim of this study was to use this species to explore the peripheral neural coding strategies for sound direction and compare these results to modern bony fishes (teleosts). Extracellular recordings were made from afferent neurons innervating the saccule and lagena of the inner ear while the fish was stimulated using a shaker system. Afferents were highly directional and strongly phase locked to the stimulus. Directional response profiles resembled cosine functions, and directional preferences occurred at a wide range of stimulus intensities (spanning at least 60 dB re 1 nm displacement). Seventy-six percent of afferents were directionally selective for stimuli in the vertical plane near 90° (up down) and did not respond to horizontal stimulation. Sixty-two percent of afferents responsive to horizontal stimulation had their best axis in azimuths near 0° (front back). These findings suggest that in the lake sturgeon, in contrast to teleosts, the saccule and lagena may convey more limited information about the direction of a sound source, raising the possibility that this species uses a different mechanism for localizing sound. For azimuth, a mechanism could involve the utricle or perhaps the computation of arrival time differences. For elevation, behavioral strategies such as directing the head to maximize input to the area of best sensitivity may be used. Alternatively, the lake sturgeon may have a more limited ability for sound source localization compared with teleosts.
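    The cosine-shaped directional response profiles described above are conventionally summarized by fitting r(θ) = b + g·cos(θ − θ_pref), which is linear in cos θ and sin θ and so can be fit by least squares. An illustrative sketch (not the authors' analysis code):

    ```python
    import numpy as np

    def fit_cosine_tuning(angles_deg, rates):
        """Fit r(theta) = b + a*cos(theta) + c*sin(theta).

        The preferred direction is atan2(c, a) and the modulation
        depth (tuning gain) is sqrt(a^2 + c^2).
        """
        th = np.deg2rad(angles_deg)
        X = np.column_stack([np.ones_like(th), np.cos(th), np.sin(th)])
        b, a, c = np.linalg.lstsq(X, rates, rcond=None)[0]
        pref = np.rad2deg(np.arctan2(c, a)) % 360
        return pref, np.hypot(a, c), b

    # Toy afferent tuned to 90 deg (up-down), as many sturgeon afferents were
    angles = np.arange(0, 360, 30)
    rng = np.random.default_rng(4)
    rates = 20 + 15 * np.cos(np.deg2rad(angles - 90)) \
            + rng.normal(0, 1, angles.size)
    pref, depth, baseline = fit_cosine_tuning(angles, rates)
    ```

    The fitted preferred direction and modulation depth are the two numbers typically reported per afferent when building the population picture described in the abstract.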

  18. Inattentional Deafness: Visual Load Leads to Time-Specific Suppression of Auditory Evoked Responses

    PubMed Central

    Molloy, Katharine; Griffiths, Timothy D.; Lavie, Nilli

    2015-01-01

    Due to capacity limits on perception, conditions of high perceptual load lead to reduced processing of unattended stimuli (Lavie et al., 2014). Accumulating work demonstrates the effects of visual perceptual load on visual cortex responses, but the effects on auditory processing remain poorly understood. Here we establish the neural mechanisms underlying “inattentional deafness”—the failure to perceive auditory stimuli under high visual perceptual load. Participants performed a visual search task of low (target dissimilar to nontarget items) or high (target similar to nontarget items) load. On a random subset (50%) of trials, irrelevant tones were presented concurrently with the visual stimuli. Brain activity was recorded with magnetoencephalography, and time-locked responses to the visual search array and to the incidental presence of unattended tones were assessed. High, compared to low, perceptual load led to increased early visual evoked responses (within 100 ms from onset). This was accompanied by reduced early (∼100 ms from tone onset) auditory evoked activity in superior temporal sulcus and posterior middle temporal gyrus. A later suppression of the P3 “awareness” response to the tones was also observed under high load. A behavioral experiment revealed reduced tone detection sensitivity under high visual load, indicating that the reduction in neural responses was indeed associated with reduced awareness of the sounds. These findings support a neural account of shared audiovisual resources, which, when depleted under load, leads to failures of sensory perception and awareness. SIGNIFICANCE STATEMENT The present work clarifies the neural underpinning of inattentional deafness under high visual load. The findings of near-simultaneous load effects on both visual and auditory evoked responses suggest shared audiovisual processing capacity. Temporary depletion of shared capacity in perceptually demanding visual tasks leads to a momentary reduction in

  19. Auditory forebrain neurons track temporal features of time-warped natural stimuli.

    PubMed

    Maddox, Ross K; Sen, Kamal; Billimoria, Cyrus P

    2014-02-01

    A fundamental challenge for sensory systems is to recognize natural stimuli despite stimulus variations. A compelling example occurs in speech, where the auditory system can recognize words spoken at a wide range of speeds. To date, there have been more computational models for time-warp invariance than experimental studies that investigate responses to time-warped stimuli at the neural level. Here, we address this problem in the model system of zebra finches anesthetized with urethane. In behavioral experiments, we found high discrimination accuracy well beyond the observed natural range of song variations. We artificially sped up or slowed down songs (preserving pitch) and recorded auditory responses from neurons in field L, the avian primary auditory cortex homolog. We found that field L neurons responded robustly to time-warped songs, tracking the temporal features of the stimuli over a broad range of warp factors. Time-warp invariance was not observed per se, but there was sufficient information in the neural responses to reliably classify which of two songs was presented. Furthermore, the average spike rate was close to constant over the range of time warps, contrary to recent modeling predictions. We discuss how this response pattern is surprising given current computational models of time-warp invariance and how such a response could be decoded downstream to achieve time-warp-invariant recognition of sounds.

  20. Effect of vestibular stimulation on auditory and visual reaction time in relation to stress

    PubMed Central

    Rajagopalan, Archana; Kumar, Sai Sailesh; Mukkadan, Joseph Kurien

    2017-01-01

    The present study was undertaken to provide scientific evidence for the beneficial effects of vestibular stimulation in managing stress-induced changes in auditory and visual reaction time (RT). A total of 240 healthy college students aged 18–24, of either gender, took part in this research after giving written consent. RT for right and left responses was measured for two auditory stimuli (low and high pitch) and two visual stimuli (red and green). A significant decrease in visual RT for both green and red light was observed, and stress-induced changes were effectively prevented following vestibular stimulation. Auditory RT for high-pitch right and left responses also decreased significantly, and stress-induced changes were likewise prevented. Vestibular stimulation is thus effective in improving auditory and visual RT and in preventing stress-induced changes in RT in males and females. We recommend incorporating vestibular stimulation, such as swinging, into daily life to improve cognitive function. PMID:28217553

  1. Effect of vestibular stimulation on auditory and visual reaction time in relation to stress.

    PubMed

    Rajagopalan, Archana; Kumar, Sai Sailesh; Mukkadan, Joseph Kurien

    2017-01-01

    The present study was undertaken to provide scientific evidence for the beneficial effects of vestibular stimulation in managing stress-induced changes in auditory and visual reaction time (RT). A total of 240 healthy college students aged 18-24, of either gender, took part in this research after giving written consent. RT for right and left responses was measured for two auditory stimuli (low and high pitch) and two visual stimuli (red and green). A significant decrease in visual RT for both green and red light was observed, and stress-induced changes were effectively prevented following vestibular stimulation. Auditory RT for high-pitch right and left responses also decreased significantly, and stress-induced changes were likewise prevented. Vestibular stimulation is thus effective in improving auditory and visual RT and in preventing stress-induced changes in RT in males and females. We recommend incorporating vestibular stimulation, such as swinging, into daily life to improve cognitive function.

  2. Visual and auditory reaction time for air traffic controllers using quantitative electroencephalograph (QEEG) data.

    PubMed

    Abbass, Hussein A; Tang, Jiangjun; Ellejmi, Mohamed; Kirby, Stephen

    2014-12-01

    The use of quantitative electroencephalograph in the analysis of air traffic controllers' performance can reveal with a high temporal resolution those mental responses associated with different task demands. To understand the relationship between visual and auditory correct responses, reaction time, and the corresponding brain areas and functions, air traffic controllers were given an integrated visual and auditory continuous reaction task. Strong correlations were found between correct responses to the visual target and the theta band in the frontal lobe, the total power in the medial of the parietal lobe and the theta-to-beta ratio in the left side of the occipital lobe. Incorrect visual responses triggered activations in additional bands including the alpha band in the medial of the frontal and parietal lobes, and the Sensorimotor Rhythm in the medial of the parietal lobe. Controllers' responses to visual cues were found to be more accurate but slower than their corresponding performance on auditory cues. These results suggest that controllers are more susceptible to overload when more visual cues are used in the air traffic control system, and more errors are pruned as more auditory cues are used. Therefore, workload studies should be carried out to assess the usefulness of additional cues and their interactions with the air traffic control environment.
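    Band powers and ratios such as theta-to-beta, as analyzed in the study above, are commonly computed from a Welch power spectral density estimate. A hedged sketch with synthetic data follows; the band edges and the `band_powers` helper are illustrative choices, not the study's exact pipeline:

    ```python
    import numpy as np
    from scipy.signal import welch

    # Classic EEG band edges in Hz (one common convention)
    BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

    def band_powers(eeg, fs):
        """Absolute power per EEG band, integrated from Welch's PSD."""
        freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
        df = freqs[1] - freqs[0]
        powers = {}
        for name, (lo, hi) in BANDS.items():
            mask = (freqs >= lo) & (freqs < hi)
            powers[name] = psd[mask].sum() * df   # rectangle-rule integral
        return powers

    # Synthetic channel: strong 6 Hz (theta) plus weak 20 Hz (beta) and noise
    fs = 256
    t = np.arange(0, 10, 1 / fs)
    rng = np.random.default_rng(1)
    eeg = np.sin(2 * np.pi * 6 * t) + 0.2 * np.sin(2 * np.pi * 20 * t) \
          + 0.1 * rng.standard_normal(t.size)
    p = band_powers(eeg, fs)
    theta_beta_ratio = p["theta"] / p["beta"]
    ```

    Per-electrode ratios like `theta_beta_ratio` are then correlated with behavioral measures (here, correct responses and reaction times).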

  3. Divided multimodal attention sensory trace and context coding strategies in spatially congruent auditory and visual presentation.

    PubMed

    Kristjánsson, Tómas; Thorvaldsson, Tómas Páll; Kristjánsson, Arni

    2014-01-01

    Previous research involving both unimodal and multimodal studies suggests that single-response change detection is a capacity-free process while a discriminatory up or down identification is capacity-limited. The trace/context model assumes that this reflects different memory strategies rather than inherent differences between identification and detection. To perform such tasks, one of two strategies is used, a sensory trace or a context coding strategy, and if one is blocked, people will automatically use the other. A drawback to most preceding studies is that stimuli are presented at separate locations, creating the possibility of a spatial confound, which invites alternative interpretations of the results. We describe a series of experiments, investigating divided multimodal attention, without the spatial confound. The results challenge the trace/context model. Our critical experiment involved a gap before a change in volume and brightness, which according to the trace/context model blocks the sensory trace strategy, simultaneously with a roaming pedestal, which should block the context coding strategy. The results clearly show that people can use strategies other than sensory trace and context coding in the tasks and conditions of these experiments, necessitating changes to the trace/context model.

  4. Event-related EEG time-frequency analysis and the Orienting Reflex to auditory stimuli.

    PubMed

    Barry, Robert J; Steiner, Genevieve Z; De Blasio, Frances M

    2012-06-01

    Sokolov's classic works discussed electroencephalogram (EEG) alpha desynchronization as a measure of the Orienting Reflex (OR). Early studies confirmed that this reduced with repeated auditory stimulation, but without reliable stimulus-significance effects. We presented an auditory habituation series with counterbalanced indifferent and significant (counting) instructions. Time-frequency analysis of electrooculogram (EOG)-corrected EEG was used to explore prestimulus levels and the timing and amplitude of event-related increases and decreases in 4 classic EEG bands. Decrement over trials and response recovery were substantial for the transient increase (in delta, theta, and alpha) and subsequent desynchronization (in theta, alpha, and beta). There was little evidence of dishabituation and few effects of counting. Expected effects in stimulus-induced alpha desynchronization were confirmed. Two EEG response patterns over trials and conditions, distinct from the full OR pattern, warrant further research.

  5. Ability of primary auditory cortical neurons to detect amplitude modulation with rate and temporal codes: neurometric analysis

    PubMed Central

    Johnson, Jeffrey S.; Yin, Pingbo; O'Connor, Kevin N.

    2012-01-01

    Amplitude modulation (AM) is a common feature of natural sounds, and its detection is biologically important. Even though most sounds are not fully modulated, the majority of physiological studies have focused on fully modulated (100% modulation depth) sounds. We presented AM noise at a range of modulation depths to awake macaque monkeys while recording from neurons in primary auditory cortex (A1). The ability of neurons to detect partial AM with rate and temporal codes was assessed with signal detection methods. On average, single-cell synchrony was as or more sensitive than spike count in modulation detection. Cells are less sensitive to modulation depth if tested away from their best modulation frequency, particularly for temporal measures. Mean neural modulation detection thresholds in A1 are not as sensitive as behavioral thresholds, but with phase locking the most sensitive neurons are more sensitive, suggesting that for temporal measures the lower-envelope principle cannot account for thresholds. Three methods of preanalysis pooling of spike trains (multiunit, similar to convergence from a cortical column; within cell, similar to convergence of cells with matched response properties; across cell, similar to indiscriminate convergence of cells) all result in an increase in neural sensitivity to modulation depth for both temporal and rate codes. For the across-cell method, pooling of a few dozen cells can result in detection thresholds that approximate those of the behaving animal. With synchrony measures, indiscriminate pooling results in sensitive detection of modulation frequencies between 20 and 60 Hz, suggesting that differences in AM response phase are minor in A1. PMID:22422997
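    Neurometric thresholds from signal detection methods, as used above, are often defined as the smallest modulation depth whose ROC area against unmodulated trials reaches a criterion. A toy sketch under that assumption (the 0.76 criterion and the Poisson neuron are illustrative, not the study's parameters):

    ```python
    import numpy as np

    def roc_area(signal_counts, noise_counts):
        """Area under the ROC: P(random signal count > random noise count),
        with ties counted as 0.5 (the Mann-Whitney / AUC equivalence)."""
        s = np.asarray(signal_counts)[:, None]
        n = np.asarray(noise_counts)[None, :]
        return np.mean(s > n) + 0.5 * np.mean(s == n)

    def neurometric_threshold(depths, counts_by_depth, noise_counts,
                              criterion=0.76):
        """Smallest modulation depth whose ROC area reaches the criterion."""
        for depth, counts in sorted(zip(depths, counts_by_depth)):
            if roc_area(counts, noise_counts) >= criterion:
                return depth
        return None  # never reaches criterion

    rng = np.random.default_rng(2)
    noise = rng.poisson(10, 50)                    # unmodulated trials
    depths = [0.1, 0.3, 0.6, 1.0]
    # Firing rate grows with modulation depth in this toy neuron
    counts = [rng.poisson(10 + 15 * d, 50) for d in depths]
    thr = neurometric_threshold(depths, counts, noise)
    ```

    The same machinery applies to a temporal measure: replace the spike counts with a per-trial synchrony statistic and recompute the ROC areas.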

  6. Functional asymmetry in primary auditory cortex for processing musical sounds: temporal pattern analysis of fMRI time series.

    PubMed

    Izumi, Shuji; Itoh, Kosuke; Matsuzawa, Hitoshi; Takahashi, Sugata; Kwee, Ingrid L; Nakada, Tsutomu

    2011-07-13

    Hemispheric differences in the temporal processing of musical sounds within the primary auditory cortex were investigated using functional magnetic resonance imaging (fMRI) time series analysis on a 3.0 T system in right-handed individuals who had no formal training in music. The two hemispheres exhibited a clear-cut asymmetry in the time pattern of fMRI signals. A large transient signal component was observed in the left primary auditory cortex immediately after the onset of musical sounds, while only sustained activation, without an initial transient component, was seen in the right primary auditory cortex. The observed difference was believed to reflect differential segmentation in primary auditory cortical sound processing. Although the left primary auditory cortex processed the entire 30-s musical sound stimulus as a single event, the right primary auditory cortex had low-level processing of sounds with multiple segmentations of shorter time scales. The study indicated that musical sounds are processed as 'sounds with contents', similar to how language is processed in the left primary auditory cortex.

  7. Method of optical image coding by time integration

    NASA Astrophysics Data System (ADS)

    Evtikhiev, Nikolay N.; Starikov, Sergey N.; Cheryomkhin, Pavel A.; Krasnov, Vitaly V.; Rodin, Vladislav G.

    2012-06-01

    A method of optical image coding by time integration is proposed. In this method, coding is accomplished by shifting the object image over the photosensor area of a digital camera during registration, which yields an optically computed convolution of the original image with the shift trajectory. As opposed to optical coding methods based on diffractive optical elements, the described method is feasible in totally incoherent light. The method was first tested using an LC monitor for image display and shifting: the object image was shifted by displaying a video consisting of frames with the image to be encoded at different locations on the monitor screen while registering it with the camera. Optical encoding and numerical decoding of test images were performed successfully. A more practical experimental implementation of the method, using an LCOS SLM (Holoeye PLUTO VIS), was also realized. Object images to be encoded were formed in monochromatic, spatially incoherent light, and shifting of the object image over the camera photosensor area was accomplished by displaying a video consisting of frames with blazed gratings on the LCOS SLM; each blazed grating deflects the light reflected from the SLM at a different angle. Results of optical image coding and numerical restoration of the encoded images are presented, and the experimental results are compared with numerical modeling. Optical image coding with time integration could be used for accessible quality estimation of optical image coding with diffractive optical elements, or as an independent optical coding method implementable in incoherent light.
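
The core operation, integrating the photosensor signal while the image shifts, can be sketched numerically; the image, trajectory, and sizes below are illustrative assumptions:

```python
import numpy as np

# Integrating the sensor while the image moves along a discrete trajectory
# is equivalent to convolving the image with the trajectory's point pattern.
image = np.zeros((32, 32))
image[12:20, 12:20] = 1.0                 # toy "object"
shifts = [(0, dx) for dx in range(7)]     # horizontal 7-step trajectory
encoded = sum(np.roll(image, s, axis=(0, 1)) for s in shifts)
print(encoded.max())  # brightest pixels accumulate all 7 shifted copies
```

Decoding would then be a deconvolution with the known trajectory kernel (e.g. Wiener filtering), which is why the trajectory acts as the coding key.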

  8. Auditory detection of ultrasonic coded transmitters by seals and sea lions.

    PubMed

    Cunningham, Kane A; Hayes, Sean A; Michelle Wargo Rub, A; Reichmuth, Colleen

    2014-04-01

    Ultrasonic coded transmitters (UCTs) are high-frequency acoustic tags that are often used to conduct survivorship studies of vulnerable fish species. Recent observations of differential mortality in tag control studies suggest that fish instrumented with UCTs may be selectively targeted by marine mammal predators, thereby skewing valuable survivorship data. In order to better understand the ability of pinnipeds to detect UCT outputs, behavioral high-frequency hearing thresholds were obtained from a trained harbor seal (Phoca vitulina) and a trained California sea lion (Zalophus californianus). Thresholds were measured for extended (500 ms) and brief (10 ms) 69 kHz narrowband stimuli, as well as for a stimulus recorded directly from a Vemco V16-3H UCT, which consisted of eight 10 ms, 69 kHz pure-tone pulses. Detection thresholds for the harbor seal were as expected based on existing audiometric data for this species, while the California sea lion was much more sensitive than predicted. Given measured detection thresholds of 113 dB re 1 μPa and 124 dB re 1 μPa, respectively, both species are likely able to detect acoustic outputs of the Vemco V16-3H under water from distances exceeding 200 m in typical natural conditions, suggesting that these species are capable of using UCTs to detect free-ranging fish.
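
Under a simple spherical-spreading model, and ignoring absorption and ambient noise, a detection range can be sketched from a tag's source level and a measured threshold. The source level used below is a hypothetical value for a 69 kHz tag, not one reported in the study:

```python
import math

def detection_range_m(source_level_db, threshold_db):
    """Solve RL = SL - 20*log10(r) for the range r (m) at which the
    received level falls to the animal's detection threshold.
    Spherical spreading only, so this is an upper-bound sketch."""
    return 10 ** ((source_level_db - threshold_db) / 20)

# Hypothetical source level of 160 dB re 1 uPa @ 1 m, harbor-seal threshold:
print(round(detection_range_m(160.0, 113.0)))
```

In practice the usable range also depends on ambient noise, absorption at 69 kHz, and the pulsed structure of the tag output.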

  9. Using Reaction Time and Equal Latency Contours to Derive Auditory Weighting Functions in Sea Lions and Dolphins.

    PubMed

    Finneran, James J; Mulsow, Jason; Schlundt, Carolyn E

    2016-01-01

    Subjective loudness measurements are used to create equal-loudness contours and auditory weighting functions for human noise-mitigation criteria; however, comparable direct measurements of subjective loudness with animal subjects are difficult to conduct. In this study, simple reaction time to pure tones was measured as a proxy for subjective loudness in a Tursiops truncatus and Zalophus californianus. Contours fit to equal reaction-time curves were then used to estimate the shapes of auditory weighting functions.

  10. Coding for Communication Channels with Dead-Time Constraints

    NASA Technical Reports Server (NTRS)

    Moision, Bruce; Hamkins, Jon

    2004-01-01

    Coding schemes have been designed and investigated specifically for optical and electronic data-communication channels in which information is conveyed via pulse-position modulation (PPM) subject to dead-time constraints. These schemes involve the use of error-correcting codes concatenated with codes denoted constrained codes, decoded by an iterative method. In pulse-position modulation, time is partitioned into frames of M slots of equal duration. Each frame contains one pulsed slot (all others are non-pulsed). For a given channel, the dead-time constraints are defined as a maximum and a minimum on the allowable time between pulses. For example, if a Q-switched laser is used to transmit the pulses, then the minimum allowable dead time is the time needed to recharge the laser for the next pulse. In the case of bits recorded on a magnetic medium, the minimum allowable time between pulses depends on the recording/playback speed and the minimum distance between pulses needed to prevent interference between adjacent bits during readout. The maximum allowable dead time for a given channel is the maximum time for which it is possible to satisfy the requirement to synchronize slots. In mathematical shorthand, the dead-time constraints for a given channel are represented by the pair of integers (d,k), where d is the minimum allowable number of zeroes between ones and k is the maximum allowable number of zeroes between ones. A system of the type to which the present schemes apply is represented by a binary-input, real-valued-output channel model illustrated in the figure. At the transmitting end, information bits are first encoded by use of an error-correcting code, then further encoded by use of a constrained code. Several constrained codes for channels subject to constraints of (d,infinity) have been investigated theoretically and computationally. The baseline codes chosen for purposes of comparison were simple PPM codes characterized by M-slot PPM
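
The (d,k) constraint has a simple operational form: every run of zeroes between consecutive ones must have length between d and k. A small sketch (the frame size and the particular sequences are illustrative, not the paper's codes):

```python
def satisfies_dk(bits, d, k):
    """True if every run of zeroes between consecutive ones
    has length >= d and <= k."""
    runs, run = [], None          # run stays None until the first one is seen
    for b in bits:
        if b == 1:
            if run is not None:
                runs.append(run)  # close the run that this pulse terminates
            run = 0
        elif run is not None:
            run += 1
    return all(d <= r <= k for r in runs)

# Two 8-slot PPM frames, one pulse each; 6 zeroes separate the pulses.
frames = [0, 0, 1, 0, 0, 0, 0, 0] + [0, 1, 0, 0, 0, 0, 0, 0]
print(satisfies_dk(frames, 2, 10))  # True: 2 <= 6 <= 10
```

A constrained encoder's job is to map arbitrary information sequences onto sequences for which this predicate always holds.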

  11. Timing of cochlear responses inferred from frequency-threshold tuning curves of auditory-nerve fibers

    PubMed Central

    Temchin, Andrei N.; Recio-Spinoso, Alberto; Ruggero, Mario A.

    2010-01-01

    Links between frequency tuning and timing were explored in the responses of auditory-nerve fibers to sound. Synthetic transfer functions were constructed by combining filter functions, derived via minimum-phase computations from average frequency-threshold tuning curves of chinchilla auditory-nerve fibers with high spontaneous activity (A. N. Temchin et al., J. Neurophysiol. 100: 2889–2898, 2008), and signal-front delays specified by the latencies of basilar-membrane and auditory-nerve fiber responses to intense clicks (A. N. Temchin et al., J. Neurophysiol. 93: 3635–3648, 2005). The transfer functions predict several features of the phase-frequency curves of cochlear responses to tones, including their shape transitions in the regions with characteristic frequencies of 1 kHz and 3–4 kHz (A. N. Temchin and M. A. Ruggero, JARO 11: 297–318, 2010). The transfer functions also predict the shapes of cochlear impulse responses, including the polarities of their frequency sweeps and their transition at characteristic frequencies around 1 kHz. Predictions are especially accurate for characteristic frequencies < 1 kHz. PMID:20951191
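
The minimum-phase computation referred to above can be sketched with the standard cepstral (Hilbert-transform) construction: a minimum-phase system's phase is fully determined by its log-magnitude. A compact numerical sketch using a toy filter, not the chinchilla tuning curves:

```python
import numpy as np

def min_phase_from_magnitude(mag):
    """Reconstruct a minimum-phase transfer function from a strictly
    positive, FFT-ordered magnitude response via the real cepstrum."""
    n = len(mag)
    cep = np.fft.ifft(np.log(mag)).real   # real, even cepstrum of log|H|
    w = np.zeros(n)
    w[0] = 1.0                            # keep zero quefrency
    w[1:(n + 1) // 2] = 2.0               # double positive quefrencies
    if n % 2 == 0:
        w[n // 2] = 1.0                   # keep the Nyquist quefrency
    return np.exp(np.fft.fft(w * cep))    # minimum-phase H

# Check on a known minimum-phase filter h = [1, 0.5] (zero inside unit circle):
h = np.zeros(64); h[0], h[1] = 1.0, 0.5
H = np.fft.fft(h)
H_rec = min_phase_from_magnitude(np.abs(H))
print(np.allclose(H_rec, H, atol=1e-6))  # True
```
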

  12. Spike Time-Dependent Plasticity Induced by Intra-Cortical Microstimulation in the Auditory Cortex

    NASA Astrophysics Data System (ADS)

    Takahashi, Hirokazu; Yokota, Ryo; Suzrikawa, Jun; Kanzaki, Ryohei

    Intrinsic plastic properties in the auditory cortex can cause dynamic remodeling of the functional organization with training. Neurorehabilitation could therefore benefit from electrical stimulation that can modify synaptic strength as desired. Here we show that the auditory cortex of rats can be modified by intracortical microstimulation (ICMS) associated with tone stimuli on the basis of spike time-dependent plasticity (STDP). Two kinds of ICMS were applied: a pairing ICMS following a tone-induced excitatory synaptic input, and an anti-pairing ICMS preceding a tone-induced input. The pairing and anti-pairing ICMS produced potentiation and depression, respectively, in responses to the paired tones at a particular test frequency, and thereby modified the tuning properties of the auditory cortical neurons. In addition, we demonstrated that our experimental setup has the potential to directly measure how anesthetic agents and pharmacological manipulations affect ICMS-induced plasticity, and thus can serve as a powerful platform for investigating the neural basis of this plasticity.

  13. Changes across time in the temporal responses of auditory nerve fibers stimulated by electric pulse trains.

    PubMed

    Miller, Charles A; Hu, Ning; Zhang, Fawen; Robinson, Barbara K; Abbas, Paul J

    2008-03-01

    Most auditory prostheses use modulated electric pulse trains to excite the auditory nerve. There are, however, scant data regarding the effects of pulse trains on auditory nerve fiber (ANF) responses across the duration of such stimuli. We examined how temporal ANF properties changed with level and pulse rate across 300-ms pulse trains. Four measures were examined: (1) first-spike latency, (2) interspike interval (ISI), (3) vector strength (VS), and (4) Fano factor (FF, an index of the temporal variability of responsiveness). Data were obtained using 250-, 1,000-, and 5,000-pulse/s stimuli. First-spike latency decreased with increasing spike rate, with relatively small decrements observed for 5,000-pulse/s trains, presumably reflecting integration. ISIs to low-rate (250 pulse/s) trains were strongly locked to the stimuli, whereas ISIs evoked with 5,000-pulse/s trains were dominated by refractory and adaptation effects. Across time, VS decreased for low-rate trains but not for 5,000-pulse/s stimuli. At relatively high spike rates (>200 spike/s), VS values for 5,000-pulse/s trains were lower than those obtained with 250-pulse/s stimuli (even after accounting for the smaller periods of the 5,000-pulse/s stimuli), indicating a desynchronizing effect of high-rate stimuli. FF measures also indicated a desynchronizing effect of high-rate trains. Across a wide range of response rates, FF underwent relatively fast increases (i.e., within 100 ms) for 5,000-pulse/s stimuli. With a few exceptions, ISI, VS, and FF measures approached asymptotic values within the 300-ms duration of the low- and high-rate trains. These findings may have implications for designs of cochlear implant stimulus protocols, understanding electrically evoked compound action potentials, and interpretation of neural measures obtained at central nuclei, which depend on understanding the output of the auditory nerve.
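
Of the four measures, the Fano factor is the simplest to state: the variance-to-mean ratio of spike counts across repeated presentations (1 for a Poisson process, 0 for perfectly regular responses). A minimal sketch with simulated counts (illustrative, not the paper's data):

```python
import numpy as np

def fano_factor(spike_counts):
    """Variance-to-mean ratio of spike counts across repeated trials."""
    counts = np.asarray(spike_counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

rng = np.random.default_rng(1)
poisson_counts = rng.poisson(50, 1000)   # irregular responding: FF near 1
regular_counts = np.full(1000, 50)       # perfectly regular: FF = 0
print(fano_factor(poisson_counts), fano_factor(regular_counts))
```
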

  14. Novel Spectro-Temporal Codes and Computations for Auditory Signal Representation and Separation

    DTIC Science & Technology

    2013-02-01

    Peristimulus time histograms (PSTHs) for a five-formant synthetic vowel (Department of Electrical, Computer and Biomedical Engineering, University of Rhode...) show different CF regions driven by different dominant, formant-region harmonics of the multi-formant vowel. Note that in Figure 1a, other non-dominant harmonics in the vowel formant regions are not explicitly represented. (Figure 1, panels (a) and (b): peristimulus time (ms) versus frequency in kHz, low to high frequency.)

  15. Neural mechanisms underlying auditory feedback control of speech.

    PubMed

    Tourville, Jason A; Reilly, Kevin J; Guenther, Frank H

    2008-02-01

    The neural substrates underlying auditory feedback control of speech were investigated using a combination of functional magnetic resonance imaging (fMRI) and computational modeling. Neural responses were measured while subjects spoke monosyllabic words under two conditions: (i) normal auditory feedback of their speech and (ii) auditory feedback in which the first formant frequency of their speech was unexpectedly shifted in real time. Acoustic measurements showed compensation to the shift within approximately 136 ms of onset. Neuroimaging revealed increased activity in bilateral superior temporal cortex during shifted feedback, indicative of neurons coding mismatches between expected and actual auditory signals, as well as right prefrontal and Rolandic cortical activity. Structural equation modeling revealed increased influence of bilateral auditory cortical areas on right frontal areas during shifted speech, indicating that projections from auditory error cells in posterior superior temporal cortex to motor correction cells in right frontal cortex mediate auditory feedback control of speech.

  16. Brainstem auditory evoked responses in man. 1: Effect of stimulus rise-fall time and duration

    NASA Technical Reports Server (NTRS)

    Hecox, K.; Squires, N.; Galambos, R.

    1975-01-01

    Short latency (under 10 msec) responses elicited by bursts of white noise were recorded from the scalps of human subjects. Response alterations produced by changes in the noise burst duration (on-time), inter-burst interval (off-time), and onset and offset shapes were analyzed. The latency of the most prominent response component, wave V, was markedly delayed with increases in stimulus rise time but was unaffected by changes in fall time. Increases in stimulus duration, and therefore in loudness, resulted in a systematic increase in latency. This was probably due to response recovery processes, since the effect was eliminated with increases in stimulus off-time. The amplitude of wave V was insensitive to changes in signal rise and fall times, while increasing signal on-time produced smaller amplitude responses only for sufficiently short off-times. It was concluded that wave V of the human auditory brainstem evoked response is solely an onset response.

  18. Neural spike-timing patterns vary with sound shape and periodicity in three auditory cortical fields

    PubMed Central

    Lee, Christopher M.; Osman, Ahmad F.; Volgushev, Maxim; Escabí, Monty A.

    2016-01-01

    Mammals perceive a wide range of temporal cues in natural sounds, and the auditory cortex is essential for their detection and discrimination. The rat primary (A1), ventral (VAF), and caudal suprarhinal (cSRAF) auditory cortical fields have separate thalamocortical pathways that may support unique temporal cue sensitivities. To explore this, we record responses of single neurons in the three fields to variations in envelope shape and modulation frequency of periodic noise sequences. Spike rate, relative synchrony, and first-spike latency metrics have previously been used to quantify neural sensitivities to temporal sound cues; however, such metrics do not measure absolute spike timing of sustained responses to sound shape. To address this, in this study we quantify two forms of spike-timing precision, jitter, and reliability. In all three fields, we find that jitter decreases logarithmically with increase in the basis spline (B-spline) cutoff frequency used to shape the sound envelope. In contrast, reliability decreases logarithmically with increase in sound envelope modulation frequency. In A1, jitter and reliability vary independently, whereas in ventral cortical fields, jitter and reliability covary. Jitter time scales increase (A1 < VAF < cSRAF) and modulation frequency upper cutoffs decrease (A1 > VAF > cSRAF) with ventral progression from A1. These results suggest a transition from independent encoding of shape and periodicity sound cues on short time scales in A1 to a joint encoding of these same cues on longer time scales in ventral nonprimary cortices. PMID:26843599
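
The two precision measures can be illustrated with simplified definitions: jitter as the across-trial standard deviation of a repeated spike event, and reliability as the mean pairwise correlation between binned spike trains. These are schematic stand-ins, not the authors' exact estimators:

```python
import numpy as np

def jitter_ms(event_times):
    """Across-trial SD of the time of one repeated spike event (ms)."""
    return float(np.std(event_times, ddof=1))

def reliability(binned_trials):
    """Mean pairwise correlation between binned spike trains."""
    c = np.corrcoef(np.asarray(binned_trials, dtype=float))
    n = c.shape[0]
    return float((c.sum() - n) / (n * (n - 1)))

print(jitter_ms([10.1, 9.9, 10.0]))        # ~0.1 ms of timing jitter
print(reliability([[0, 1, 0, 2, 0]] * 3))  # identical trials: ~1.0
```
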

  19. Neural spike-timing patterns vary with sound shape and periodicity in three auditory cortical fields.

    PubMed

    Lee, Christopher M; Osman, Ahmad F; Volgushev, Maxim; Escabí, Monty A; Read, Heather L

    2016-04-01

    Mammals perceive a wide range of temporal cues in natural sounds, and the auditory cortex is essential for their detection and discrimination. The rat primary (A1), ventral (VAF), and caudal suprarhinal (cSRAF) auditory cortical fields have separate thalamocortical pathways that may support unique temporal cue sensitivities. To explore this, we record responses of single neurons in the three fields to variations in envelope shape and modulation frequency of periodic noise sequences. Spike rate, relative synchrony, and first-spike latency metrics have previously been used to quantify neural sensitivities to temporal sound cues; however, such metrics do not measure absolute spike timing of sustained responses to sound shape. To address this, in this study we quantify two forms of spike-timing precision, jitter, and reliability. In all three fields, we find that jitter decreases logarithmically with increase in the basis spline (B-spline) cutoff frequency used to shape the sound envelope. In contrast, reliability decreases logarithmically with increase in sound envelope modulation frequency. In A1, jitter and reliability vary independently, whereas in ventral cortical fields, jitter and reliability covary. Jitter time scales increase (A1 < VAF < cSRAF) and modulation frequency upper cutoffs decrease (A1 > VAF > cSRAF) with ventral progression from A1. These results suggest a transition from independent encoding of shape and periodicity sound cues on short time scales in A1 to a joint encoding of these same cues on longer time scales in ventral nonprimary cortices.

  20. Working Memory and Auditory Imagery Predict Sensorimotor Synchronization with Expressively Timed Music.

    PubMed

    Colley, Ian D; Keller, Peter E; Halpern, Andrea R

    2017-08-11

    Sensorimotor synchronization (SMS) is prevalent and readily studied in musical settings, as most people are able to perceive and synchronize with a beat (e.g. by finger tapping). We took an individual differences approach to understanding SMS to real music characterized by expressive timing (i.e. fluctuating beat regularity). Given the dynamic nature of SMS, we hypothesized that individual differences in working memory and auditory imagery, both fluid cognitive processes, would predict SMS at two levels: 1) mean absolute asynchrony (a measure of synchronization error), and 2) anticipatory timing (i.e. predicting, rather than reacting to, beat intervals). In Experiment 1, participants completed two working memory tasks, four auditory imagery tasks, and an SMS-tapping task. Hierarchical regression models were used to predict SMS performance, with results showing dissociations among imagery types in relation to mean absolute asynchrony, and evidence of a role for working memory in anticipatory timing. In Experiment 2, a new sample of participants completed an expressive timing perception task to examine the role of imagery in perception without action. Results suggest that imagery vividness is important for perceiving, and imagery control for synchronizing with, irregular but ecologically valid musical time series. Working memory is implicated in synchronizing by anticipating events in the series.

  1. A real-time virtual auditory system for spatially dynamic perception research

    NASA Astrophysics Data System (ADS)

    Scarpaci, Jacob W.; Colburn, H. Steven

    2004-05-01

    A Real Time Virtual Auditory System (RT-VAS) is being developed to provide a high-performance, cost-effective, flexible system that can dynamically update filter coefficients in hard real time on a PC. An InterSense head tracker is incorporated to provide low-latency head tracking, allowing studies with head motion. Processing is done using a real-time Linux kernel (RTAI kernel patch), which allows precise processor scheduling and results in negligible time jitter of output samples. Output is calculated at a sample rate of 44.1 kHz and presented through a National Instruments DAQ. An object-oriented approach to system development allows for customizable input, position, and calculation routines as well as multiple independent auditory objects. Input and position may be calculated in real time or read from a file. Calculation of output may include filtering with spatially sampled HRTFs or analytic models, and head movements may be recorded to file. Limitations of the system are tied to processor speed; thus the complexity of experiments scales with the speed of the computer hardware. The current system handles multiple moving sources while tracking head position. Preliminary psychoacoustic results with head motion will be shown, as well as a demonstration of the system. [Work supported by NIH DC00100.]
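
Static spatialization with sampled HRTFs reduces to convolving the source with a left/right head-related impulse response pair (which a system like the RT-VAS updates dynamically as the head moves). A toy sketch with idealized HRIRs, a pure interaural delay and level difference rather than measured data:

```python
import numpy as np

fs = 44100
# Idealized HRIRs: the far ear gets a longer delay and a lower gain.
hrir_left = np.zeros(32);  hrir_left[4] = 1.0
hrir_right = np.zeros(32); hrir_right[12] = 0.8

source = np.random.default_rng(2).standard_normal(fs // 10)  # 100 ms of noise
left = np.convolve(source, hrir_left)    # delayed copy of the source
right = np.convolve(source, hrir_right)  # more delayed, attenuated copy
```
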

  2. Time-dependent activity of primary auditory neurons in the presence of neurotrophins and antibiotics.

    PubMed

    Cai, Helen Q; Gillespie, Lisa N; Wright, Tess; Brown, William G A; Minter, Ricki; Nayagam, Bryony A; O'Leary, Stephen J; Needham, Karina

    2017-07-01

    In vitro cultures provide a valuable tool in studies examining the survival, morphology and function of cells in the auditory system. Primary cultures of primary auditory neurons have most notably provided critical insights into the role of neurotrophins in cell survival and morphology. Functional studies have also utilized in vitro models to study neuronal physiology and the ion channels that dictate these patterns of activity. Here we examine what influence time-in-culture has on the activity of primary auditory neurons, and how this affects our interpretation of neurotrophin- and antibiotic-mediated effects in this population. Using dissociated cell culture we analyzed whole-cell patch-clamp recordings of spiral ganglion neurons (SGNs) grown in the presence or absence of neurotrophins and/or penicillin and streptomycin for 1-3 days in vitro. Firing threshold decreased, and both action potential number and latency increased, over time regardless of treatment, whilst input resistance was lowest where neurotrophins were present. Differences in firing properties were seen with neurotrophin concentration but were not consistently maintained over the 3 days in vitro. The exclusion of antibiotics from culture media influenced most firing properties at 1 day in vitro in both untreated and neurotrophin-treated conditions. The only difference still present at 3 days was an increase in input resistance in neurotrophin-treated neurons. These results highlight the potential of neurotrophins and antibiotics to influence neural firing patterns in vitro in a time-dependent manner, and argue for careful consideration of their impact on SGN function in future studies. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. The GOES Time Code Service, 1974–2004: A Retrospective

    PubMed Central

    Lombardi, Michael A.; Hanson, D. Wayne

    2005-01-01

    NIST ended its Geostationary Operational Environmental Satellites (GOES) time code service at 0 hours, 0 minutes Coordinated Universal Time (UTC) on January 1, 2005. To commemorate the end of this historically significant service, this article provides a retrospective look at the GOES service and the important role it played in the history of satellite timekeeping. PMID:27308105

  4. The Visual and Auditory Reaction Time of Adolescents with Respect to Their Academic Achievements

    ERIC Educational Resources Information Center

    Taskin, Cengiz

    2016-01-01

    The aim of this study was to examine the visual and auditory reaction times of adolescents with respect to their academic achievement level. Five hundred adolescent children from Turkey took part: two hundred fifty males (age=15.24±0.78 years; height=168.80±4.89 cm; weight=65.24±4.30 kg) and (age=15.28±0.74; height=160.40±5.77 cm; weight=55.32±4.13 kg)…

  5. Influence of preparation time and pitch separation in switching of auditory attention between streams.

    PubMed

    Larson, Eric; Lee, Adrian K C

    2013-08-01

    The ability to consciously switch attention between speakers of interest is necessary for communication in many environments, especially when multiple talkers speak simultaneously. Segregating sounds of interest from the background, which is necessary for selective attention, depends on stimulus acoustics such as differences in spectrotemporal properties of the target and masker. However, the relationship between top-down attention control and bottom-up stimulus segregation is not well understood. Here, two experiments were conducted to examine the time necessary for listeners to switch auditory attention, and how the ability to switch attention relates to the pitch separation cue available for bottom-up stream segregation.

  6. Auditory time-interval perception as causal inference on sound sources.

    PubMed

    Sawai, Ken-Ichi; Sato, Yoshiyuki; Aihara, Kazuyuki

    2012-01-01

    Perception of a temporal pattern in a sub-second time scale is fundamental to conversation, music perception, and other kinds of sound communication. However, its mechanism is not fully understood. A simple example is hearing three successive sounds with short time intervals. The following misperception of the latter interval is known: underestimation of the latter interval when the former is a little shorter or much longer than the latter, and overestimation of the latter when the former is a little longer or much shorter than the latter. Although this misperception of auditory time intervals for simple stimuli might be a cue to understanding the mechanism of time-interval perception, there exists no model that comprehensively explains it. Considering a previous experiment demonstrating that illusory perception does not occur for stimulus sounds with different frequencies, it is plausible that the underlying mechanism of time-interval perception involves a causal inference on sound sources: here, different frequencies provide cues for different causes. We construct a Bayesian observer model of this time-interval perception. We introduce a probabilistic variable representing the causality of sounds in the model. As prior knowledge, the observer assumes that a single sound source produces periodic and short time intervals, which is consistent with several previous works. We conducted numerical simulations and confirmed that our model can reproduce the misperception of auditory time intervals. A similar phenomenon has also been reported in the visual and tactile modalities, though the time ranges for these are wider. This suggests the existence of a common mechanism for temporal pattern perception across modalities, because the different time ranges can be interpreted as a difference in time resolution, given that the time resolutions for vision and touch are lower than that for audition.
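
A minimal Bayesian-observer sketch of this idea (all Gaussian forms, noise levels, and the prior over causal structures below are assumptions for illustration, not the authors' model): under a common cause the observer expects a periodic source, so the estimate of the second interval is pulled toward the first, weighted by the posterior probability of the common-cause structure.

```python
import numpy as np

def estimate_t2(m1, m2, sigma_m=30.0, sigma_p=20.0, p_common=0.5):
    """Estimate of the 2nd interval (ms) from noisy measurements m1, m2,
    averaging over 'same source' vs. 'different sources' hypotheses."""
    # Common cause: t2 is near t1, so both measurements inform t2.
    prec = 1 / sigma_m**2 + 1 / (sigma_m**2 + sigma_p**2)
    pooled = (m2 / sigma_m**2 + m1 / (sigma_m**2 + sigma_p**2)) / prec
    # Posterior probability of the common-cause structure (flat alternative).
    like_c = np.exp(-(m1 - m2) ** 2 / (2 * (2 * sigma_m**2 + sigma_p**2)))
    w = p_common * like_c / (p_common * like_c + (1 - p_common))
    return w * pooled + (1 - w) * m2

print(estimate_t2(100.0, 140.0))  # pulled below the measured 140 ms
```
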

  7. Neural code alterations and abnormal time patterns in Parkinson's disease.

    PubMed

    Andres, Daniela Sabrina; Cerquetti, Daniel; Merello, Marcelo

    2015-04-01

    The neural code used by the basal ganglia is a current question in neuroscience, relevant for the understanding of the pathophysiology of Parkinson's disease. While a rate code is known to participate in the communication between the basal ganglia and the motor thalamus/cortex, different lines of evidence have also favored the presence of complex time patterns in the discharge of the basal ganglia. To gain insight into the way the basal ganglia code information, we studied the activity of the globus pallidus pars interna (GPi), an output node of the circuit. We implemented the 6-hydroxydopamine model of Parkinsonism in Sprague-Dawley rats, and recorded the spontaneous discharge of single GPi neurons, in head-restrained conditions at full alertness. Analyzing the temporal structure function, we looked for characteristic scales in the neuronal discharge of the GPi. At a low-scale, we observed the presence of dynamic processes, which allow the transmission of time patterns. Conversely, at a middle-scale, stochastic processes force the use of a rate code. Regarding the time patterns transmitted, we measured the word length and found that it is increased in Parkinson's disease. Furthermore, it showed a positive correlation with the frequency of discharge, indicating that an exacerbation of this abnormal time pattern length can be expected, as the dopamine depletion progresses. We conclude that a rate code and a time pattern code can co-exist in the basal ganglia at different temporal scales. However, their normal balance is progressively altered and replaced by pathological time patterns in Parkinson's disease.

  8. Neural time and movement time in choice of whistle or pulse burst responses to different auditory stimuli by dolphins.

    PubMed

    Ridgway, Sam H

    2011-02-01

    Echolocating dolphins emit trains of clicks and receive echoes from ocean targets. They often emit each successive ranging click about 20 ms after arrival of the target echo. In echolocation, decisions must be made about the target--fish or fowl, predator or food. In the first test of dolphin auditory decision speed, three bottlenose dolphins (Tursiops truncatus) chose whistle or pulse burst responses to different auditory stimuli randomly presented without warning in rapid succession under computer control. The animals were trained to hold pressure catheters in the nasal cavity so that pressure increases required for sound production could be used to split response time (RT) into neural time and movement time. Mean RT in the youngest and fastest dolphin ranged from 175 to 213 ms when responding to tones and from 213 to 275 ms responding to pulse trains. The fastest neural times and movement times were around 60 ms. The results suggest that echolocating dolphins tune to a rhythm so that succeeding pulses in a train are produced about 20 ms over target round-trip travel time. The dolphin nervous system has evolved for rapid processing of acoustic stimuli to accommodate for the more rapid sound speed in water compared to air.

  9. Using LDPC Code Constraints to Aid Recovery of Symbol Timing

    NASA Technical Reports Server (NTRS)

    Jones, Christopher; Villasenor, John; Lee, Dong-U; Vales, Esteban

    2008-01-01

    A method of utilizing information available in the constraints imposed by a low-density parity-check (LDPC) code has been proposed as a means of aiding the recovery of symbol timing in the reception of a binary-phase-shift-keying (BPSK) signal representing such a code in the presence of noise, timing error, and/or Doppler shift between the transmitter and the receiver. This method and the receiver architecture in which it would be implemented belong to a class of timing-recovery methods and corresponding receiver architectures characterized as pilotless in that they do not require transmission and reception of pilot signals. Acquisition and tracking of a signal of the type described above have traditionally been performed upstream of, and independently of, decoding and have typically involved utilization of a phase-locked loop (PLL). However, the LDPC decoding process, which is iterative, provides information that can be fed back to the timing-recovery receiver circuits to improve performance significantly over that attainable in the absence of such feedback. Prior methods of coupling LDPC decoding with timing recovery had focused on the use of output code words produced as the iterations progress. In contrast, in the present method, one exploits the information available from the metrics computed for the constraint nodes of an LDPC code during the decoding process. In addition, the method involves the use of a waveform model that captures, better than do the waveform models of the prior methods, distortions introduced by receiver timing errors and transmitter/receiver motions. An LDPC code is commonly represented by use of a bipartite graph containing two sets of nodes. In the graph corresponding to an (n,k) code, the n variable nodes correspond to the code word symbols and the n-k constraint nodes represent the constraints that the code places on the variable nodes in order for them to form a valid code word. The decoding procedure involves iterative computation
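
    Concretely, the bipartite-graph constraints can be checked with a toy example. The sketch below uses the (7,4) Hamming code's parity-check matrix as a stand-in for a real LDPC code: each row of H is one constraint node, and a word is a valid codeword exactly when every constraint parity is 0.

```python
# Toy illustration of constraint nodes: H is the parity-check matrix of the
# (7,4) Hamming code (a textbook stand-in for a real LDPC code). Each row is
# one constraint node; a word is a valid codeword iff every parity is 0.
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def constraint_parities(word):
    """Parity of each constraint node (0 = constraint satisfied)."""
    return [sum(h * b for h, b in zip(row, word)) % 2 for row in H]

codeword = [0, 1, 1, 0, 0, 1, 1]   # encodes data bits 1011
corrupted = codeword[:]
corrupted[4] ^= 1                  # flip one symbol

clean = constraint_parities(codeword)    # all constraints satisfied
dirty = constraint_parities(corrupted)   # the checks containing bit 5 fail
```

    In iterative decoding, it is exactly these per-check parities (in soft-valued form) that evolve across iterations, which is the information the proposed method feeds back to the timing-recovery loop.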

  10. Weighted adaptively grouped multilevel space time trellis codes

    NASA Astrophysics Data System (ADS)

    Jain, Dharmvir; Sharma, Sanjay

    2015-05-01

    In existing grouped multilevel space-time trellis codes (GMLSTTCs), the groups of transmit antennas are predefined, and the transmit power is equally distributed across all transmit antennas. When the channel parameters are perfectly known at the transmitter, an adaptive antenna grouping and beamforming scheme can achieve better performance by optimally grouping the transmit antennas and properly weighting the transmitted signals based on the available channel information. In this paper, we present a new code designed by combining GMLSTTCs, adaptive antenna grouping, and beamforming using the channel state information at the transmitter (CSIT), henceforth referred to as weighted adaptively grouped multilevel space-time trellis codes (WAGMLSTTCs). The CSIT is used to adaptively group the transmitting antennas and to provide a beamforming scheme by allocating different powers to the transmit antennas. Simulation results show that WAGMLSTTCs improve error performance by 2.6 dB over GMLSTTCs.

  11. Coded cause of death and timing of COPD diagnosis.

    PubMed

    Pickard, A Simon; Jung, Eunmi; Bartle, Brian; Weiss, Kevin B; Lee, Todd A

    2009-02-01

    The aims of this study were to characterize causes of death among veterans with COPD using multiple cause of death coding, and to examine whether causes of death differed according to the timing of COPD diagnosis. Veterans with COPD who died during a five-year follow-up period were identified from national VA databases linked to National Death Index files. Primary, secondary, underlying, and all-coded causes of death were compared between recent and preexistent COPD cohorts using proportional mortality ratios (PMRs), which compare the proportion dying from specific causes rather than the absolute risk of death. Of 26,357 decedents, 7,729 were categorized as preexistent and 18,628 as recent COPD cases. Unspecified COPD was listed as the underlying cause of death in a significantly greater proportion of preexistent COPD cases than recent cases, 20% vs 10%, PMR = 2.0 (95% CI: 1.9-2.1). A relatively higher proportion of recently diagnosed cases died from lung/bronchus, prostate, and site-unspecified cancers. Respiratory failure (J969) was rarely coded as an underlying or primary cause (< 1%), but was a second-code cause of death in 9% of recent and 12% of preexistent cases. The differences in coded causes of death between patients with a recent diagnosis of COPD and those with a preexistent diagnosis suggest either bias in the coded causes of death or true differences in cause of death related to length of time with the diagnosis. Thus, the methods used to identify cohorts of COPD patients (incidence- versus prevalence-based approaches) and the coded cause of death can affect estimates of cause-specific mortality.
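
    The PMR comparison above is simple arithmetic; a minimal sketch using the proportions reported in the abstract:

```python
def proportional_mortality_ratio(p_cohort, p_reference):
    """PMR: proportion of deaths from a specific cause in one cohort
    divided by the proportion in a comparison cohort."""
    return p_cohort / p_reference

# Unspecified COPD as underlying cause: 20% of preexistent vs. 10% of
# recent cases, matching the reported PMR of 2.0 (95% CI: 1.9-2.1).
pmr = proportional_mortality_ratio(0.20, 0.10)
```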

  12. Perceptual consequences of disrupted auditory nerve activity.

    PubMed

    Zeng, Fan-Gang; Kong, Ying-Yee; Michalewski, Henry J; Starr, Arnold

    2005-06-01

    Perceptual consequences of disrupted auditory nerve activity were systematically studied in 21 subjects who had been clinically diagnosed with auditory neuropathy (AN), a recently defined disorder characterized by normal outer hair cell function but disrupted auditory nerve function. Neurological and electrophysiological evidence suggests that the disrupted auditory nerve activity is due to desynchronized neural activity, reduced neural activity, or both. Psychophysical measures showed that the disrupted neural activity has minimal effects on intensity-related perception, such as loudness discrimination, pitch discrimination at high frequencies, and sound localization using interaural level differences. In contrast, the disrupted neural activity significantly impairs timing-related perception, such as pitch discrimination at low frequencies, temporal integration, gap detection, temporal modulation detection, backward and forward masking, signal detection in noise, binaural beats, and sound localization using interaural time differences. These perceptual consequences are the opposite of what is typically observed in cochlear-impaired subjects, who have impaired intensity perception but relatively normal temporal processing once their impaired intensity perception is taken into account. These differences in perceptual consequences between auditory neuropathy and cochlear damage suggest the use of different neural codes in auditory perception: a suboptimal spike count code for intensity processing, a synchronized spike code for temporal processing, and a duplex code for frequency processing. We also propose two underlying physiological models, based on desynchronized and reduced discharge in the auditory nerve, that successfully account for the observed neurological and behavioral data. The present methods and measures cannot differentiate between these two AN models, but future studies using electric stimulation of the auditory nerve via a cochlear implant might. These results not only show the unique

  13. Event-related EEG time-frequency PCA and the orienting reflex to auditory stimuli.

    PubMed

    Barry, Robert J; De Blasio, Frances M; Bernat, Edward M; Steiner, Genevieve Z

    2015-04-01

    We recently reported an auditory habituation series with counterbalanced indifferent and significant (counting) instructions. Time-frequency (t-f) analysis of electrooculogram-corrected EEG was used to explore event-related synchronization (ERS)/desynchronization (ERD) in four EEG bands using arbitrarily selected time epochs and traditional frequency ranges. ERS in delta, theta, and alpha, and subsequent ERD in theta, alpha, and beta, showed substantial decrement over trials, yet effects of stimulus significance (count vs. no-task) were minimal. Here, we used principal components analysis (PCA) of the t-f data to investigate the natural frequency and time combinations involved in such stimulus processing. We identified four ERS and four ERD t-f components: six showed decrement over trials, four showed count > no-task effects, and six showed Significance × Trial interactions. This increased sensitivity argues for the wider use of our data-driven t-f PCA approach.

  14. Emotion and auditory virtual environments: affect-based judgments of music reproduced with virtual reverberation times.

    PubMed

    Västfjäll, Daniel; Larsson, Pontus; Kleiner, Mendel

    2002-02-01

    Emotions are experienced both in real and virtual environments (VEs). Most research to date has focused on the content that causes emotional reactions, but noncontent features of a VE (such as the realism and quality of object rendering) may also influence emotional reactions to the mediated object. The present research studied how noncontent features (different reverberation times) of an auditory VE influenced 76 participants' ratings of emotional reactions and of the expressed emotional qualities of the sounds. The results showed that the two emotion dimensions of pleasantness and arousal were systematically affected when the same musical piece was rendered with different reverberation times. Overall, a high reverberation time was perceived as most unpleasant. Taken together, the results suggest that noncontent features of a VE influence emotional reactions to mediated objects. Moreover, the study suggests that emotional reactions may be an important aspect of the VE experience that can help complement standard presence questionnaires and quality evaluations.

  15. Effect of red bull energy drink on auditory reaction time and maximal voluntary contraction.

    PubMed

    Goel, Vartika; Manjunatha, S; Pai, Kirtana M

    2014-01-01

    The use of "Energy Drinks" (EDs) is increasing in India. Students especially use these drinks to rejuvenate after strenuous exercise or as a stimulant during exam times. The most common ingredient in EDs is caffeine, and a popular ED in common use is Red Bull, containing 80 mg of caffeine in a 250 ml bottle. The primary aim of this study was to investigate the effects of Red Bull energy drink on auditory reaction time and maximal voluntary contraction. A homogeneous group of twenty medical students (10 males, 10 females) participated in a crossover study in which they were randomized to supplement with Red Bull (2 mg/kg body weight of caffeine) or an isoenergetic, isovolumetric, noncaffeinated control drink (a combination of Appy Fizz, cranberry juice, and soda), separated by 7 days. Maximal voluntary contraction (MVC) was recorded as the highest of 3 values of maximal isometric force generated by the dominant hand using a hand grip dynamometer (Biopac Systems). Auditory reaction time (ART) was the average of 10 values of the time interval between a click sound and the response of pressing a push button on a handheld switch (Biopac Systems). One hour after consumption, both the energy and control drinks significantly reduced auditory reaction time in males (ED 232 ± 59 vs 204 ± 34 ms and control 223 ± 57 vs 210 ± 51 ms; p < 0.05) as well as in females (ED 227 ± 56 vs 214 ± 48 ms and control 224 ± 45 vs 215 ± 36 ms; p < 0.05) but had no effect on MVC in either sex (males: ED 381 ± 37 vs 371 ± 36 and control 375 ± 61 vs 363 ± 36 N; females: ED 227 ± 23 vs 227 ± 32 and control 234 ± 46 vs 228 ± 37 N). When compared across the gender groups, there was no significant difference between males and females in the effects of either drink on ART, but MVC was significantly lower overall in females than in males. Both the energy drink and the control drink significantly improve reaction time but may not have any effect

  17. Time Shifted PN Codes for CW Lidar, Radar, and Sonar

    NASA Technical Reports Server (NTRS)

    Campbell, Joel F. (Inventor); Prasad, Narasimha S. (Inventor); Harrison, Fenton W. (Inventor); Flood, Michael A. (Inventor)

    2013-01-01

    A continuous wave Light Detection and Ranging (CW LiDAR) system utilizes two or more laser frequencies and time- or range-shifted pseudorandom noise (PN) codes to discriminate between the laser frequencies. The performance of these codes can be improved by subtracting out the bias before processing. The CW LiDAR system may be mounted to an artificial satellite orbiting the earth, and the relative strength of the return signal for each frequency can be utilized to determine the concentration of selected gases or other substances in the atmosphere.
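
    The core idea of separating time-shifted copies of a PN code by correlation can be sketched as follows; the 5-stage LFSR taps and the 7-chip shift are illustrative choices, not parameters of the patented system.

```python
# Sketch: discriminating a time-shifted PN code copy by circular correlation.
# The LFSR polynomial (x^5 + x^3 + 1) and the 7-chip delay are invented.
def pn_sequence(seed, length):
    """Generate a +/-1 chip sequence from a 5-stage Fibonacci LFSR."""
    state = list(seed)
    out = []
    for _ in range(length):
        out.append(1 if state[-1] else -1)  # map bits to +/-1 chips
        fb = state[4] ^ state[2]            # feedback taps: bits 5 and 3
        state = [fb] + state[:-1]
    return out

def circular_correlation(a, b):
    n = len(a)
    return [sum(a[i] * b[(i + lag) % n] for i in range(n)) for lag in range(n)]

code = pn_sequence([1, 0, 0, 1, 0], 31)   # m-sequence, period 31
echo = code[-7:] + code[:-7]              # return delayed by 7 chips
corr = circular_correlation(code, echo)
lag = max(range(31), key=lambda s: corr[s])   # peaks at the true delay
```

    An m-sequence's off-peak circular autocorrelation is -1, so the peak at the true delay stands out sharply, which is what makes shifted copies of the same code separable.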

  18. On the Application of Time-Reversed Space-Time Block Code to Aeronautical Telemetry

    DTIC Science & Technology

    2014-06-01

    Keying (SOQPSK), bit error rate (BER), Orthogonal Frequency Division Multiplexing (OFDM), generalized time-reversed space-time block codes (GTR-STBC) 16...Alamouti code [4]) is optimum [2]. Although OFDM is generally applied on a per-subcarrier basis in frequency selective fading, it is not a viable

  19. Time course and cost of misdirecting auditory spatial attention in younger and older adults.

    PubMed

    Singh, Gurjit; Pichora-Fuller, M Kathleen; Schneider, Bruce A

    2013-01-01

    The effects of directing, switching, and misdirecting auditory spatial attention in a complex listening situation were investigated in 8 younger and 8 older listeners with normal-hearing sensitivity below 4 kHz. In two companion experiments, a target sentence was presented from one spatial location and two competing sentences were presented simultaneously, one from each of two different locations. Pretrial, listeners were informed of the call-sign cue that identified which of the three sentences was the target and of the probability of the target sentence being presented from each of the three possible locations. Four different probability conditions varied in the likelihood of the target being presented at the left, center, and right locations. In Experiment 1, four timing conditions were tested: the original (unedited) sentences (which contained about 300 msec of filler speech between the call-sign cue and the onset of the target words), or modified (edited) sentences with silent pauses of 0, 150, or 300 msec replacing the filler speech. In Experiment 2, when the cued sentence was presented from an unlikely (side) listening location, for half of the trials the listener's task was to report target words from the cued sentence (cue condition); for the remaining trials, the listener's task was to report target words from the sentence presented from the opposite, unlikely (side) listening location (anticue condition). In Experiment 1, for targets presented from the likely (center) location, word identification was better for the unedited than for modified sentences. For targets presented from unlikely (side) locations, word identification was better when there was more time between the call-sign cue and target words. All listeners benefited similarly from the availability of more compared with less time and the presentation of continuous compared with interrupted speech. In Experiment 2, the key finding was that age-related performance deficits were observed in

  20. BAASTA: Battery for the Assessment of Auditory Sensorimotor and Timing Abilities.

    PubMed

    Dalla Bella, Simone; Farrugia, Nicolas; Benoit, Charles-Etienne; Begel, Valentin; Verga, Laura; Harding, Eleanor; Kotz, Sonja A

    2017-06-01

    The Battery for the Assessment of Auditory Sensorimotor and Timing Abilities (BAASTA) is a new tool for the systematic assessment of perceptual and sensorimotor timing skills. It spans a broad range of timing skills aimed at differentiating individual timing profiles. BAASTA consists of sensitive time perception and production tasks. Perceptual tasks include duration discrimination, anisochrony detection (with tones and music), and a version of the Beat Alignment Task. Perceptual thresholds for duration discrimination and anisochrony detection are estimated with a maximum likelihood procedure (MLP) algorithm. Production tasks use finger tapping and include unpaced and paced tapping (with tones and music), synchronization-continuation, and adaptive tapping to a sequence with a tempo change. BAASTA was tested in a proof-of-concept study with 20 non-musicians (Experiment 1). To validate the results of the MLP procedure, which is less widespread than standard staircase methods, three perceptual tasks of the battery (duration discrimination and anisochrony detection with tones and with music) were further tested in a second group of non-musicians using 2 down / 1 up and 3 down / 1 up staircase paradigms (n = 24) (Experiment 2). The results show that the timing profiles provided by BAASTA allow detection of cases of timing/rhythm disorders. In addition, perceptual thresholds yielded by the MLP algorithm, although generally comparable to the results provided by standard staircase methods, tend to be slightly lower. In sum, BAASTA provides a comprehensive battery for testing perceptual and sensorimotor timing skills and for detecting timing/rhythm deficits.
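
    The 2 down / 1 up staircase used for validation can be sketched in a few lines; the threshold, step size, and idealized observer below are invented for illustration (a real 2-down/1-up track targets the 70.7%-correct point of a noisy psychometric function).

```python
def two_down_one_up(true_threshold, start, step, n_reversals=12):
    """Minimal 2-down/1-up staircase sketch (all parameters invented).
    The simulated observer is idealized: correct whenever the stimulus
    level exceeds the threshold, so the track oscillates around it."""
    level, run, direction, reversals = start, 0, -1, []
    while len(reversals) < n_reversals:
        if level > true_threshold:            # observer answers correctly
            run += 1
            if run == 2:                      # two correct -> one step down
                run = 0
                if direction == +1:
                    reversals.append(level)   # top of a peak
                direction = -1
                level -= step
        else:                                 # one wrong -> one step up
            run = 0
            if direction == -1:
                reversals.append(level)       # bottom of a valley
            direction = +1
            level += step
    return sum(reversals[-8:]) / 8            # mean of the last reversals

estimate = two_down_one_up(true_threshold=20.0, start=40.0, step=2.0)
```

    With this deterministic observer the track settles into a 20/22 oscillation, so the reversal average lands half a step above the true threshold; with a noisy observer it converges near the 70.7% point instead.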

  1. A Neural Mechanism for Time-Window Separation Resolves Ambiguity of Adaptive Coding

    PubMed Central

    Hildebrandt, K. Jannis; Ronacher, Bernhard; Hennig, R. Matthias; Benda, Jan

    2015-01-01

    The senses of animals are confronted with changing environments and different contexts. Neural adaptation is one important tool to adjust sensitivity to varying intensity ranges. For instance, in a quiet night outdoors, our hearing is more sensitive than when we are confronted with the plurality of sounds in a large city during the day. However, adaptation also removes available information on absolute sound levels and may thus cause ambiguity. Experimental data on the trade-off between benefits and loss through adaptation is scarce and very few mechanisms have been proposed to resolve it. We present an example where adaptation is beneficial for one task—namely, the reliable encoding of the pattern of an acoustic signal—but detrimental for another—the localization of the same acoustic stimulus. With a combination of neurophysiological data, modeling, and behavioral tests, we show that adaptation in the periphery of the auditory pathway of grasshoppers enables intensity-invariant coding of amplitude modulations, but at the same time, degrades information available for sound localization. We demonstrate how focusing the response of localization neurons to the onset of relevant signals separates processing of localization and pattern information temporally. In this way, the ambiguity of adaptive coding can be circumvented and both absolute and relative levels can be processed using the same set of peripheral neurons. PMID:25761097
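
    The trade-off described here can be caricatured with a toy divisive-normalization model; the envelope values are invented, and real spike-frequency adaptation is of course far richer than a mean division.

```python
# Caricature of adaptive coding's ambiguity: normalizing out the mean level
# makes the amplitude-modulation pattern intensity-invariant, but discards
# the absolute level that localization cues would need. Envelope is invented.
pattern = [0.2, 1.0, 0.5, 1.0, 0.2, 0.8]     # toy AM envelope

def adapted_response(stimulus):
    mean = sum(stimulus) / len(stimulus)
    return [s / mean for s in stimulus]      # divide out absolute level

quiet = [0.1 * p for p in pattern]           # same pattern, low level
loud = [10.0 * p for p in pattern]           # same pattern, high level
quiet_resp = adapted_response(quiet)
loud_resp = adapted_response(loud)
# quiet_resp ~= loud_resp: pattern preserved, absolute level lost
```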

  2. Discrimination of time intervals presented in sequences: spatial effects with multiple auditory sources.

    PubMed

    Grondin, Simon; Plourde, Marilyn

    2007-10-01

    This article discusses two experiments on the discrimination of time intervals presented in sequences marked by brief auditory signals. Participants had to indicate whether the last interval in a series of three intervals marked by four auditory signals was shorter or longer than the previous intervals. Three base durations were under investigation: 75, 150, and 225 ms. In Experiment 1, sounds were presented through headphones, from a single speaker in front of the participants, or from four equally spaced speakers. In all three presentation modes, the highest difference threshold was obtained in the lowest base duration condition (75 ms), indicating an impairment of temporal processing when sounds are presented too rapidly. The results also indicate the presence, in each presentation mode, of a 'time-shrinking effect' (i.e., the last interval being perceived as briefer than the preceding ones) at 75 ms, but not at 225 ms. Lastly, using different sound sources to mark time did not significantly impair discrimination. In Experiment 2, three signals were presented from the same source, and the last signal was presented at one of two locations, either close or far. The perceived duration was not influenced by the location of the fourth signal when the participant knew before each trial where the sounds would be delivered. However, when the participant was uncertain as to its location, more space between markers resulted in longer perceived duration, a finding that applied only at 150 and 225 ms. Moreover, the perceived duration was affected by the direction of the sequences (left-right vs. right-left).

  3. Static Enforcement of Timing Policies Using Code Certification

    DTIC Science & Technology

    2006-08-07

    2.2 TBF File Layout ... each its due in space and time. —Guy L. Steele Jr. [65] Computers are useful precisely because they can be programmed. The success of programming...pattern for defining the patterns that programmers can use for their real work and their main goal. —Guy Steele [65] The code certification machinery

  4. Code-Time Diversity for Direct Sequence Spread Spectrum Systems

    PubMed Central

    Hassan, A. Y.

    2014-01-01

    Time diversity is achieved in direct sequence spread spectrum by receiving different faded, delayed copies of the transmitted symbols from different uncorrelated channel paths when the transmission signal bandwidth is greater than the coherence bandwidth of the channel. In this paper, a new time diversity scheme, called code-time diversity, is proposed for spread spectrum systems. In this new scheme, N spreading codes are used to transmit one data symbol over N successive symbol intervals. The diversity order in the proposed scheme equals the number of spreading codes N multiplied by the number of uncorrelated channel paths L. The paper presents the transmitted signal model. Two demodulator structures are proposed based on the received signal models for Rayleigh flat and frequency selective fading channels. The probability of error in the proposed diversity scheme is also calculated for the same two fading channels. Finally, simulation results are presented and compared with those of maximal ratio combining (MRC) and multiple-input multiple-output (MIMO) systems. PMID:24982925
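
    The spreading and despreading that the proposed scheme builds on can be sketched as follows; the two 8-chip orthogonal codes are invented examples (the actual scheme spreads one symbol with N codes over N symbol intervals).

```python
# Minimal direct-sequence spreading/despreading sketch. The two 8-chip
# codes below are invented, mutually orthogonal examples.
code_a = [1, -1, 1, 1, -1, -1, 1, -1]
code_b = [1, 1, -1, 1, -1, 1, -1, -1]

def spread(symbol, code):
    """Multiply one +/-1 data symbol by every chip of the code."""
    return [symbol * c for c in code]

def despread(chips, code):
    """Normalized correlation: the matched code recovers the +/-1 symbol,
    an orthogonal code correlates to 0."""
    return sum(x * c for x, c in zip(chips, code)) / len(code)

tx = spread(-1, code_a)
matched = despread(tx, code_a)     # recovers the symbol
mismatched = despread(tx, code_b)  # rejects the other code
```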

  5. Auditory Time-Frequency Masking for Spectrally and Temporally Maximally-Compact Stimuli

    PubMed Central

    Laback, Bernhard; Savel, Sophie; Ystad, Sølvi; Balazs, Peter; Meunier, Sabine; Kronland-Martinet, Richard

    2016-01-01

    Many audio applications perform perception-based time-frequency (TF) analysis by decomposing sounds into a set of functions with good TF localization (i.e., with a small essential support in the TF domain) using TF transforms and applying psychoacoustic models of auditory masking to the transform coefficients. To accurately predict masking interactions between coefficients, the TF properties of the model should match those of the transform. This requires masking data for stimuli with good TF localization. However, little is known about TF masking for mathematically well-localized signals: most existing masking studies used stimuli that are broad in time and/or frequency, and few studies involved TF conditions. Consequently, the present study had two goals. The first was to collect TF masking data for well-localized stimuli in humans. Masker and target were 10-ms Gaussian-shaped sinusoids with a bandwidth of approximately one critical band. The overall pattern of results is qualitatively similar to existing data for long maskers. To facilitate implementation in audio processing algorithms, the measured TF masking function is provided as a dataset. The second goal was to assess the potential effect of auditory efferents on TF masking using a modeling approach. The temporal window model of masking was used to predict the present and existing data in two configurations: (1) with standard model parameters (i.e., without efferents), and (2) with cochlear gain reduction to simulate the activation of efferents. The ability of the model to predict the present data was quite good with the standard configuration but highly degraded with gain reduction. Conversely, the ability of the model to predict existing data for long maskers was better with than without gain reduction. Overall, the model predictions suggest that TF masking can be affected by efferent (or other) effects that reduce cochlear gain. Such effects were avoided in the experiment of this study by using maximally
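
    A 10-ms Gaussian-shaped sinusoid of the kind used as masker and target can be generated as follows; the carrier frequency, sample rate, and window width are assumptions for illustration, not the study's exact parameters.

```python
import math

def gaussian_tone(freq_hz=4000.0, dur_s=0.010, fs=44100):
    """10-ms Gaussian-windowed sinusoid: a sine carrier multiplied by a
    Gaussian envelope centered in the buffer. Parameters are illustrative."""
    n = int(fs * dur_s)
    center = (n - 1) / 2
    sigma = n / 6                  # envelope essentially reaches 0 at edges
    return [
        math.exp(-0.5 * ((i - center) / sigma) ** 2)
        * math.sin(2 * math.pi * freq_hz * i / fs)
        for i in range(n)
    ]

stim = gaussian_tone()
```

    The Gaussian window is the natural choice here because it minimizes the time-bandwidth product, giving exactly the "maximally compact" TF localization the title refers to.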

  6. Reducing EnergyPlus Run Time For Code Compliance Tools

    SciTech Connect

    Athalye, Rahul A.; Gowri, Krishnan; Schultz, Robert W.; Glazer, Jason

    2014-09-12

    Integration of the EnergyPlus™ simulation engine into performance-based code compliance software raises a concern about simulation run time, which impacts timely feedback of compliance results to the user. EnergyPlus annual simulations for proposed and code-baseline building models, together with mechanical equipment sizing, result in simulation run times beyond acceptable limits. This paper presents a study that compares the results of a shortened simulation period using 4 weeks of hourly weather data (one week per quarter) with an annual simulation using the full 52 weeks of hourly weather data. Three representative building types based on DOE Prototype Building Models and three climate zones were used to determine the validity of using a shortened simulation period. Further sensitivity analysis and run time comparisons were made to evaluate the robustness and run time savings of this approach. The results of this analysis show that the shortened simulation period yields compliance index calculations within 1% of those predicted using annual simulation results, and typically saves about 75% of simulation run time.
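
    The shortened-run-period idea reduces to simulating a few representative slices of the annual weather file; which week represents each quarter is a modeling choice, and the week numbers below are arbitrary.

```python
# Sketch: select one representative week of hourly data per quarter instead
# of simulating all 52 weeks. The chosen week numbers are arbitrary.
HOURS_PER_WEEK = 7 * 24   # 168

def quarterly_weeks(annual_hours, weeks=(2, 15, 28, 41)):
    """annual_hours: 8760 hourly records; returns four one-week slices."""
    slices = []
    for w in weeks:
        start = w * HOURS_PER_WEEK
        slices.append(annual_hours[start:start + HOURS_PER_WEEK])
    return slices

year = list(range(8760))        # stand-in for hourly weather records
runs = quarterly_weeks(year)    # 4 x 168 hours ~= 7.7% of the annual run
```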

  7. Auditory and Visual Differences in Time Perception? An Investigation from a Developmental Perspective with Neuropsychological Tests

    ERIC Educational Resources Information Center

    Zelanti, Pierre S.; Droit-Volet, Sylvie

    2012-01-01

    Adults and children (5- and 8-year-olds) performed a temporal bisection task with either auditory or visual signals and either a short (0.5-1.0s) or long (4.0-8.0s) duration range. Their working memory and attentional capacities were assessed by a series of neuropsychological tests administered in both the auditory and visual modalities. Results…

  9. Effects of location and timing of co-activated neurons in the auditory midbrain on cortical activity: implications for a new central auditory prosthesis

    NASA Astrophysics Data System (ADS)

    Straka, Małgorzata M.; McMahon, Melissa; Markovitz, Craig D.; Lim, Hubert H.

    2014-08-01

    Objective. An increasing number of deaf individuals are being implanted with central auditory prostheses, but their performance has generally been poorer than for cochlear implant users. The goal of this study is to investigate stimulation strategies for improving hearing performance with a new auditory midbrain implant (AMI). Previous studies have shown that repeated electrical stimulation of a single site in each isofrequency lamina of the central nucleus of the inferior colliculus (ICC) causes strong suppressive effects in elicited responses within the primary auditory cortex (A1). Here we investigate if improved cortical activity can be achieved by co-activating neurons with different timing and locations across an ICC lamina and if this cortical activity varies across A1. Approach. We electrically stimulated two sites at different locations across an isofrequency ICC lamina using varying delays in ketamine-anesthetized guinea pigs. We recorded and analyzed spike activity and local field potentials across different layers and locations of A1. Results. Co-activating two sites within an isofrequency lamina with short inter-pulse intervals (<5 ms) could elicit cortical activity that is enhanced beyond a linear summation of activity elicited by the individual sites. A significantly greater extent of normalized cortical activity was observed for stimulation of the rostral-lateral region of an ICC lamina compared to the caudal-medial region. We did not identify any location trends across A1, but the most cortical enhancement was observed in supragranular layers, suggesting further integration of the stimuli through the cortical layers. Significance. The topographic organization identified by this study provides further evidence for the presence of functional zones across an ICC lamina with locations consistent with those identified by previous studies. 
Clinically, these results suggest that co-activating different neural populations in the rostral-lateral ICC rather

  10. A scalable population code for time in the striatum.

    PubMed

    Mello, Gustavo B M; Soares, Sofia; Paton, Joseph J

    2015-05-04

    To guide behavior and learn from its consequences, the brain must represent time over many scales. Yet the neural signals used to encode time in the seconds-to-minutes range are not known. The striatum is a major input area of the basal ganglia associated with learning and motor function. Previous studies have also shown that the striatum is necessary for normal timing behavior. To address how striatal signals might be involved in timing, we recorded from striatal neurons in rats performing an interval timing task. We found that neurons fired at delays spanning tens of seconds and that this pattern of responding reflected the interaction between time and the animals' ongoing sensorimotor state. Surprisingly, cells rescaled their responses in time when intervals changed, indicating that striatal populations encoded relative time. Moreover, time estimates decoded from activity predicted timing behavior as animals adjusted to new intervals, and disrupting striatal function led to a decrease in timing performance. These results suggest that striatal activity forms a scalable population code for time, providing timing signals that animals use to guide their actions.
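
    The notion of a scalable, relative-time code can be illustrated with a toy population whose response peaks sit at fixed fractions of the timed interval; the fractions below are invented for illustration.

```python
# Toy "relative time" code: if each cell's response peak occurs at a fixed
# fraction of the interval, dividing peak times by the interval duration
# collapses responses across intervals. Preferred fractions are invented.
preferred_fractions = [0.1, 0.3, 0.5, 0.7, 0.9]

def population_peaks(interval_s):
    """Absolute peak times (s) of the population for one timed interval."""
    return [f * interval_s for f in preferred_fractions]

short_norm = [t / 12.0 for t in population_peaks(12.0)]  # 12-s interval
long_norm = [t / 60.0 for t in population_peaks(60.0)]   # 60-s interval
# after normalization the two populations overlay: the code rescales
```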

  11. Independent or integrated processing of interaural time and level differences in human auditory cortex?

    PubMed

    Altmann, Christian F; Terada, Satoshi; Kashino, Makio; Goto, Kazuhiro; Mima, Tatsuya; Fukuyama, Hidenao; Furukawa, Shigeto

    2014-06-01

    Sound localization in the horizontal plane is mainly determined by interaural time differences (ITD) and interaural level differences (ILD). Both cues result in an estimate of sound source location, and in many real-life situations these two cues are roughly congruent. When stimulating listeners with headphones it is possible to counterbalance the two cues, so-called ITD/ILD trading. This phenomenon speaks for integrated ITD/ILD processing at the behavioral level. However, it is unclear at what stages of the auditory processing stream ITD and ILD cues are integrated to provide a unified percept of sound lateralization. Therefore, we set out to test with human electroencephalography for integrated versus independent ITD/ILD processing at the level of preattentive cortical processing by measuring the mismatch negativity (MMN) to changes in sound lateralization. We presented a series of diotic standards (perceived at a midline position) that were interrupted by deviants that entailed a change in either a) ITD only, b) ILD only, c) congruent ITD and ILD, or d) counterbalanced ITD/ILD (ITD/ILD trading). The sound stimuli were either i) pure tones with a frequency of 500 Hz, or ii) amplitude-modulated tones with a carrier frequency of 4000 Hz and a modulation frequency of 125 Hz. We observed significant MMN for the ITD/ILD-traded deviants for the 500 Hz pure tones and for the 4000 Hz amplitude-modulated tones. This speaks for independent processing of ITD and ILD at the level of the MMN within auditory cortex. However, the combined ITD/ILD cues elicited smaller MMN than the sum of the MMN induced in response to ITD and ILD cues presented in isolation for 500 Hz, but not for 4000 Hz, suggesting independent processing for the higher frequency only. Thus, the two markers for independent processing (additivity and cue-conflict) yielded contradictory conclusions, with a dissociation between the lower (500 Hz) and higher (4000 Hz) frequency bands.
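
    The two cues themselves are easy to illustrate: the sketch below synthesizes a toy binaural pair and recovers the ITD (cross-correlation lag) and ILD (RMS level difference). All parameters are illustrative, not the study's stimuli.

```python
import math

FS = 16000   # sample rate (Hz); all values here are illustrative

def make_binaural(delay_samples, ild_db, n=256):
    """Toy binaural pair: the right channel is a delayed, attenuated copy
    of the left (a 500 Hz tone)."""
    gain = 10 ** (-ild_db / 20)
    left = [math.sin(2 * math.pi * 500 * i / FS) for i in range(n)]
    right = [0.0] * delay_samples + [gain * x for x in left[:n - delay_samples]]
    return left, right

def estimate_itd(left, right, max_lag=8):
    """ITD as the lag (in samples) of the cross-correlation peak."""
    n = len(left) - max_lag
    return max(range(max_lag + 1),
               key=lambda lag: sum(left[i] * right[i + lag] for i in range(n)))

def estimate_ild(left, right):
    """ILD as the RMS level difference in dB (left re: right)."""
    rms = lambda x: math.sqrt(sum(v * v for v in x) / len(x))
    return 20 * math.log10(rms(left) / rms(right))

L, R = make_binaural(delay_samples=3, ild_db=6.0)
```

    The ITD/ILD-traded deviants in the study set these two estimates in opposition so that their lateralization effects cancel at the behavioral level.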

  12. Effects of sensorineural hearing loss on temporal coding of harmonic and inharmonic tone complexes in the auditory nerve.

    PubMed

    Kale, Sushrut; Micheyl, Christophe; Heinz, Michael G

    2013-01-01

Listeners with sensorineural hearing loss (SNHL) often show poorer thresholds for fundamental-frequency (F0) discrimination and poorer discrimination between harmonic and frequency-shifted (inharmonic) complex tones than normal-hearing (NH) listeners, especially when these tones contain resolved or partially resolved components. It has been suggested that these perceptual deficits reflect reduced access to temporal-fine-structure (TFS) information and could be due to degraded phase locking in the auditory nerve (AN) with SNHL. In the present study, TFS and temporal-envelope (ENV) cues in single AN-fiber responses to band-pass-filtered harmonic and inharmonic complex tones were measured in chinchillas with either normal hearing or noise-induced SNHL. The stimuli were comparable to those used in recent psychophysical studies of F0 and harmonic/inharmonic discrimination. As in those studies, the rank of the center component was manipulated to produce different resolvability conditions, different phase relationships (cosine and random phase) were tested, and background noise was present. Neural TFS and ENV cues were quantified using cross-correlation coefficients computed using shuffled cross correlograms between neural responses to REF (harmonic) and TEST (F0- or frequency-shifted) stimuli. In animals with SNHL, AN-fiber tuning curves showed elevated thresholds, broadened tuning, best-frequency shifts, and downward shifts in the dominant TFS response component; however, no significant degradation in the ability of AN fibers to encode TFS or ENV cues was found. Consistent with optimal-observer analyses, the results indicate that TFS and ENV cues depended only on the relevant frequency shift in Hz and thus were not degraded because phase locking remained intact. These results suggest that perceptual "TFS-processing" deficits do not simply reflect degraded phase locking at the level of the AN.
To the extent that performance in F0- and harmonic/inharmonic discrimination

  13. Auditory Distance Coding in Rabbit Midbrain Neurons and Human Perception: Monaural Amplitude Modulation Depth as a Cue

    PubMed Central

    Zahorik, Pavel; Carney, Laurel H.; Bishop, Brian B.; Kuwada, Shigeyuki

    2015-01-01

    Mechanisms underlying sound source distance localization are not well understood. Here we tested the hypothesis that a novel mechanism can create monaural distance sensitivity: a combination of auditory midbrain neurons' sensitivity to amplitude modulation (AM) depth and distance-dependent loss of AM in reverberation. We used virtual auditory space (VAS) methods for sounds at various distances in anechoic and reverberant environments. Stimulus level was constant across distance. With increasing modulation depth, some rabbit inferior colliculus neurons increased firing rates whereas others decreased. These neurons exhibited monotonic relationships between firing rates and distance for monaurally presented noise when two conditions were met: (1) the sound had AM, and (2) the environment was reverberant. The firing rates as a function of distance remained approximately constant without AM in either environment and, in an anechoic condition, even with AM. We corroborated this finding by reproducing the distance sensitivity using a neural model. We also conducted a human psychophysical study using similar methods. Normal-hearing listeners reported perceived distance in response to monaural 1 octave 4 kHz noise source sounds presented at distances of 35–200 cm. We found parallels between the rabbit neural and human responses. In both, sound distance could be discriminated only if the monaural sound in reverberation had AM. These observations support the hypothesis. When other cues are available (e.g., in binaural hearing), how much the auditory system actually uses the AM as a distance cue remains to be determined. PMID:25834060
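The monaural distance cue in this record hinges on reverberation reducing the amplitude-modulation depth of a sound's envelope. A minimal sketch (not the study's virtual-auditory-space processing; the leaky integrator is an invented stand-in for the reverberant tail) illustrates the effect on the modulation index m = (Emax - Emin)/(Emax + Emin).

```python
import math

# Illustrative sketch: reverberation fills in envelope minima, so the
# modulation depth of an AM sound shrinks with distance. A first-order leaky
# integrator is used here as a crude, assumed stand-in for reverberant smearing.

def am_envelope(mod_depth, mod_freq, dur=0.5, fs=2000):
    """Envelope of a sinusoidally amplitude-modulated sound."""
    n = int(dur * fs)
    return [1.0 + mod_depth * math.sin(2 * math.pi * mod_freq * t / fs)
            for t in range(n)]

def smear(env, alpha):
    """Crude stand-in for reverberant smoothing: first-order leaky integrator."""
    out, y = [], env[0]
    for e in env:
        y = alpha * y + (1 - alpha) * e
        out.append(y)
    return out

def modulation_depth(env):
    lo, hi = min(env), max(env)
    return (hi - lo) / (hi + lo)

dry = am_envelope(mod_depth=0.8, mod_freq=10)
wet = smear(dry, alpha=0.98)   # stronger smoothing ~ more distant source

# Modulation depth drops in the "reverberant" version: the monaural distance
# cue described in the abstract.
print(modulation_depth(dry), modulation_depth(wet))
```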

  14. Secular Slowing of Auditory Simple Reaction Time in Sweden (1959–1985)

    PubMed Central

    Madison, Guy; Woodley of Menie, Michael A.; Sänger, Justus

    2016-01-01

    There are indications that simple reaction time might have slowed in Western populations, based on both cohort- and multi-study comparisons. A possible limitation of the latter method in particular is measurement error stemming from methods variance, which results from the fact that instruments and experimental conditions change over time and between studies. We therefore set out to measure the simple auditory reaction time (SRT) of 7,081 individuals (2,997 males and 4,084 females) born in Sweden 1959–1985 (subjects were aged between 27 and 54 years at time of measurement). Depending on age cut-offs and adjustment for aging related slowing of SRT, the data indicate that SRT has increased by between 3 and 16 ms in the 27 birth years covered in the present sample. This slowing is unlikely to be explained by attrition, which was evaluated by comparing the general intelligence × birth-year interactions and standard deviations for both male participants and dropouts, utilizing military conscript cognitive ability data. The present result is consistent with previous studies employing alternative methods, and may indicate the operation of several synergistic factors, such as recent micro-evolutionary trends favoring lower g in Sweden and the effects of industrially produced neurotoxic substances on peripheral nerve conduction velocity. PMID:27588000

  15. Inhibitory and Excitatory Spike-Timing-Dependent Plasticity in the Auditory Cortex

    PubMed Central

    D'amour, James A.; Froemke, Robert C.

    2015-01-01

Synapses are plastic and can be modified by changes of spike timing. While most studies of long-term synaptic plasticity focus on excitation, inhibitory plasticity may be critical for controlling information processing, memory storage, and overall excitability in neural circuits. Here we examine spike-timing-dependent plasticity (STDP) of inhibitory synapses onto layer 5 neurons in slices of mouse auditory cortex, together with concomitant STDP of excitatory synapses. Pairing pre- and postsynaptic spikes potentiated inhibitory inputs irrespective of precise temporal order within ~10 msec. This was in contrast to excitatory inputs, which displayed an asymmetrical STDP time window. These combined synaptic modifications both required NMDA receptor activation and adjusted the excitatory-inhibitory ratio of events paired together with postsynaptic spiking. Finally, subthreshold events became suprathreshold, and the time window between excitation and inhibition became more precise. These findings demonstrate that cortical inhibitory plasticity requires interactions with co-activated excitatory synapses to properly regulate excitatory-inhibitory balance. PMID:25843405

  16. Secular Slowing of Auditory Simple Reaction Time in Sweden (1959-1985).

    PubMed

    Madison, Guy; Woodley Of Menie, Michael A; Sänger, Justus

    2016-01-01

    There are indications that simple reaction time might have slowed in Western populations, based on both cohort- and multi-study comparisons. A possible limitation of the latter method in particular is measurement error stemming from methods variance, which results from the fact that instruments and experimental conditions change over time and between studies. We therefore set out to measure the simple auditory reaction time (SRT) of 7,081 individuals (2,997 males and 4,084 females) born in Sweden 1959-1985 (subjects were aged between 27 and 54 years at time of measurement). Depending on age cut-offs and adjustment for aging related slowing of SRT, the data indicate that SRT has increased by between 3 and 16 ms in the 27 birth years covered in the present sample. This slowing is unlikely to be explained by attrition, which was evaluated by comparing the general intelligence × birth-year interactions and standard deviations for both male participants and dropouts, utilizing military conscript cognitive ability data. The present result is consistent with previous studies employing alternative methods, and may indicate the operation of several synergistic factors, such as recent micro-evolutionary trends favoring lower g in Sweden and the effects of industrially produced neurotoxic substances on peripheral nerve conduction velocity.

  17. Study of Auditory, Visual Reaction Time and Glycemic Control (HBA1C) in Chronic Type II Diabetes Mellitus.

    PubMed

    M, Muhil; Sembian, Umapathy; Babitha; N, Ethiya; K, Muthuselvi

    2014-09-01

Diabetes mellitus is a disease of insulin deficiency that leads to micro- and macrovascular disorders. Neuropathy is one of the major complications of chronic uncontrolled diabetes, affecting reaction time. The aim was to study the correlation between glycosylated hemoglobin (HbA1c) and auditory and visual reaction times in chronic type II diabetics (40-60 y) on oral hypoglycemic drugs for >10 y, in two groups (n = 100 each, both males and females), compared with each other and with an age-matched control group (n = 100). Glycosylated HbA1c was measured by the particle-enhanced immunoturbidimetric method. Auditory and visual reaction times (ART, VRT) were measured with a PC 1000 reaction timer for the control and study groups: Group I, chronic type II DM for >10 y with HbA1c < 7.0; Group II, chronic type II DM for >10 y with HbA1c > 7.0, i.e., impaired glycemic control. Exclusion criteria: subjects with auditory or visual disturbances, alcoholism, or smoking. Statistical analysis: one-way ANOVA using SPSS 21 software. Both groups had prolonged ART and VRT relative to controls. Within the study group, Group II (DM with HbA1c > 7) had longer auditory and visual reaction times than Group I, a statistically significant difference (p < 0.05). Impairment of sensorimotor function of the peripheral nervous system is greater in chronic diabetics with poorer glycemic control (HbA1c > 7), who showed longer auditory and visual reaction times than chronic diabetics with HbA1c < 7. The severity of peripheral neuropathy in type II diabetics could be due to elevated HbA1c.

  18. Neural Basis of the Time Window for Subjective Motor-Auditory Integration

    PubMed Central

    Toida, Koichi; Ueno, Kanako; Shimada, Sotaro

    2016-01-01

    Temporal contiguity between an action and corresponding auditory feedback is crucial to the perception of self-generated sound. However, the neural mechanisms underlying motor–auditory temporal integration are unclear. Here, we conducted four experiments with an oddball paradigm to examine the specific event-related potentials (ERPs) elicited by delayed auditory feedback for a self-generated action. The first experiment confirmed that a pitch-deviant auditory stimulus elicits mismatch negativity (MMN) and P300, both when it is generated passively and by the participant’s action. In our second and third experiments, we investigated the ERP components elicited by delayed auditory feedback for a self-generated action. We found that delayed auditory feedback elicited an enhancement of P2 (enhanced-P2) and a N300 component, which were apparently different from the MMN and P300 components observed in the first experiment. We further investigated the sensitivity of the enhanced-P2 and N300 to delay length in our fourth experiment. Strikingly, the amplitude of the N300 increased as a function of the delay length. Additionally, the N300 amplitude was significantly correlated with the conscious detection of the delay (the 50% detection point was around 200 ms), and hence reduction in the feeling of authorship of the sound (the sense of agency). In contrast, the enhanced-P2 was most prominent in short-delay (≤200 ms) conditions and diminished in long-delay conditions. Our results suggest that different neural mechanisms are employed for the processing of temporally deviant and pitch-deviant auditory feedback. Additionally, the temporal window for subjective motor–auditory integration is likely about 200 ms, as indicated by these auditory ERP components. PMID:26779000

  19. Recursive time-varying filter banks for subband image coding

    NASA Technical Reports Server (NTRS)

    Smith, Mark J. T.; Chung, Wilson C.

    1992-01-01

    Filter banks and wavelet decompositions that employ recursive filters have been considered previously and are recognized for their efficiency in partitioning the frequency spectrum. This paper presents an analysis of a new infinite impulse response (IIR) filter bank in which these computationally efficient filters may be changed adaptively in response to the input. The filter bank is presented and discussed in the context of finite-support signals with the intended application in subband image coding. In the absence of quantization errors, exact reconstruction can be achieved and by the proper choice of an adaptation scheme, it is shown that IIR time-varying filter banks can yield improvement over conventional ones.
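The abstract's key claim is exact reconstruction in the absence of quantization. A minimal sketch with a fixed two-band Haar filter bank (FIR, not the paper's adaptive IIR bank) shows the analysis/synthesis round trip on a 1-D signal; the principle extends to the subband image-coding setting described above.

```python
# Two-band Haar filter bank: decimated lowpass/highpass analysis followed by
# an exact synthesis step. A simplified FIR illustration, not the paper's
# time-varying IIR design.

def analyze(x):
    """Split x (even length) into lowpass and highpass subbands, decimated by 2."""
    low  = [(x[2*i] + x[2*i + 1]) / 2 for i in range(len(x) // 2)]
    high = [(x[2*i] - x[2*i + 1]) / 2 for i in range(len(x) // 2)]
    return low, high

def synthesize(low, high):
    """Invert the analysis step exactly (no quantization errors)."""
    x = []
    for l, h in zip(low, high):
        x.append(l + h)
        x.append(l - h)
    return x

signal = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
low, high = analyze(signal)
assert synthesize(low, high) == signal   # exact reconstruction
print(low)   # the smooth (coarse) subband used in coding pyramids
```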

  20. Auditory and linguistic factors in the perception of voice offset time as a cue for preaspiration.

    PubMed

    Pind, J

    1998-04-01

    Previous research [J. Pind, Acta Psychol. 89, 53-81 (1995)] has shown that preaspiration in Icelandic, an [h]-like sound inserted between a vowel and the following closure, can be cued by Voice Offset Time (VOffT), a speech cue which is the mirror image of Voice Onset Time (VOT). Research has also revealed that VOffT is much more sensitive to the duration of the neighboring vowel than is VOT [J. Pind, Q. J. Exp. Psychol. 49A, 745-764 (1996)]. This paper explores the hypothesis that it is primarily the perceived quantity of the vowel that is responsible for the effect of the vowel on the perception of preaspiration. This hypothesis is based on the linguistic fact that preaspiration can only follow a phonemically short vowel. This linguistic hypothesis is contrasted with an auditory hypothesis in terms of forward masking. Perceptual experiments show that the perceptual boundaries for preaspiration can be affected either by changing the preceding vowel's duration or its spectrum. If the spectrum of the vowel changes towards that of a long vowel, longer VOffT's are needed for listeners to perceive preaspiration, thus lending support to the linguistic hypothesis.

  1. Early, Low-Level Auditory-Somatosensory Multisensory Interactions Impact Reaction Time Speed

    PubMed Central

    Sperdin, Holger F.; Cappe, Céline; Foxe, John J.; Murray, Micah M.

    2009-01-01

    Several lines of research have documented early-latency non-linear response interactions between audition and touch in humans and non-human primates. That these effects have been obtained under anesthesia, passive stimulation, as well as speeded reaction time tasks would suggest that some multisensory effects are not directly influencing behavioral outcome. We investigated whether the initial non-linear neural response interactions have a direct bearing on the speed of reaction times. Electrical neuroimaging analyses were applied to event-related potentials in response to auditory, somatosensory, or simultaneous auditory–somatosensory multisensory stimulation that were in turn averaged according to trials leading to fast and slow reaction times (using a median split of individual subject data for each experimental condition). Responses to multisensory stimulus pairs were contrasted with each unisensory response as well as summed responses from the constituent unisensory conditions. Behavioral analyses indicated that neural response interactions were only implicated in the case of trials producing fast reaction times, as evidenced by facilitation in excess of probability summation. In agreement, supra-additive non-linear neural response interactions between multisensory and the sum of the constituent unisensory stimuli were evident over the 40–84 ms post-stimulus period only when reaction times were fast, whereas subsequent effects (86–128 ms) were observed independently of reaction time speed. Distributed source estimations further revealed that these earlier effects followed from supra-additive modulation of activity within posterior superior temporal cortices. These results indicate the behavioral relevance of early multisensory phenomena. PMID:19404410
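The "facilitation in excess of probability summation" test mentioned above is commonly formalized as Miller's race-model inequality: P(RT_multi <= t) must not exceed P(RT_aud <= t) + P(RT_som <= t) if two independent unisensory processes merely race. A hedged sketch on invented toy reaction times:

```python
# Race-model (probability-summation) bound check; RT samples are invented.

def ecdf(samples, t):
    """Empirical P(RT <= t)."""
    return sum(rt <= t for rt in samples) / len(samples)

aud   = [260, 280, 300, 320, 340]   # unisensory auditory RTs (ms)
som   = [270, 290, 310, 330, 350]   # unisensory somatosensory RTs (ms)
multi = [210, 230, 250, 290, 310]   # multisensory RTs (ms)

t = 250
bound    = ecdf(aud, t) + ecdf(som, t)   # probability-summation bound
observed = ecdf(multi, t)

# observed > bound at this t -> violation of the race model, i.e. genuine
# multisensory facilitation rather than statistical summation.
print(observed, bound)
```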

  2. Dependency Structures in Differentially Coded Cardiovascular Time Series

    PubMed Central

    Tasic, Tatjana; Jovanovic, Sladjana; Mohamoud, Omer; Skoric, Tamara; Japundzic-Zigon, Nina

    2017-01-01

Objectives. This paper analyses temporal dependency in time series recorded from aging rats, both healthy ones and those with early-developed hypertension. The aim is to explore the effects of age and hypertension on mutual sample relationships along the time axis. Methods. A copula method is applied to raw and to differentially coded signals. The latter were additionally binary encoded for a joint conditional entropy application. The signals were recorded from freely moving male Wistar rats and from spontaneously hypertensive rats, aged 3 months and 12 months. Results. The highest level of comonotonic behavior of pulse interval with respect to systolic blood pressure is observed at time lags τ = 0, 3, and 4, while a strong counter-monotonic behavior occurs at time lags τ = 1 and 2. Conclusion. The dynamic range of aging rats is considerably reduced in the hypertensive groups. Conditional entropy of the systolic blood pressure signal, compared to unconditional, shows an increased level of discrepancy, except for time lag 1, where equality is preserved in spite of the memory of the differential coder. The antiparallel streams play an important role at the single-beat time lag. PMID:28127384

  3. Dependency Structures in Differentially Coded Cardiovascular Time Series.

    PubMed

    Tasic, Tatjana; Jovanovic, Sladjana; Mohamoud, Omer; Skoric, Tamara; Japundzic-Zigon, Nina; Bajic, Dragana

    2017-01-01

Objectives. This paper analyses temporal dependency in time series recorded from aging rats, both healthy ones and those with early-developed hypertension. The aim is to explore the effects of age and hypertension on mutual sample relationships along the time axis. Methods. A copula method is applied to raw and to differentially coded signals. The latter were additionally binary encoded for a joint conditional entropy application. The signals were recorded from freely moving male Wistar rats and from spontaneously hypertensive rats, aged 3 months and 12 months. Results. The highest level of comonotonic behavior of pulse interval with respect to systolic blood pressure is observed at time lags τ = 0, 3, and 4, while a strong counter-monotonic behavior occurs at time lags τ = 1 and 2. Conclusion. The dynamic range of aging rats is considerably reduced in the hypertensive groups. Conditional entropy of the systolic blood pressure signal, compared to unconditional, shows an increased level of discrepancy, except for time lag 1, where equality is preserved in spite of the memory of the differential coder. The antiparallel streams play an important role at the single-beat time lag.
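The coding steps named in this record can be sketched concretely: differentially code the series (first differences), binarize by sign, and estimate a lagged conditional entropy from symbol counts. The blood-pressure values below are invented toy data, not the study's recordings.

```python
import math

# Differential coding + binary encoding + conditional entropy, on toy data.

def differential(series):
    """First differences (the differential coder)."""
    return [b - a for a, b in zip(series, series[1:])]

def binarize(diffs):
    """Binary encoding by sign of each difference."""
    return [1 if d > 0 else 0 for d in diffs]

def entropy(counts):
    """Shannon entropy in bits from a dict of symbol counts."""
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values() if c)

def conditional_entropy(symbols, lag):
    """H(X_t | X_{t-lag}) = H(X_t, X_{t-lag}) - H(X_{t-lag})."""
    joint, marg = {}, {}
    for a, b in zip(symbols, symbols[lag:]):
        joint[(a, b)] = joint.get((a, b), 0) + 1
        marg[a] = marg.get(a, 0) + 1
    return entropy(joint) - entropy(marg)

sbp = [120, 122, 119, 123, 121, 124, 120, 125, 122, 126]  # toy systolic values
sym = binarize(differential(sbp))
print(sym, conditional_entropy(sym, lag=1))
```

In this toy alternating series the lag-1 conditional entropy is zero: each symbol is fully predictable from its predecessor, an extreme case of the lag-1 "antiparallel stream" behavior the abstract describes.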

  4. Predicting spike timing in highly synchronous auditory neurons at different sound levels

    PubMed Central

    Fontaine, Bertrand; Benichoux, Victor; Joris, Philip X.

    2013-01-01

    A challenge for sensory systems is to encode natural signals that vary in amplitude by orders of magnitude. The spike trains of neurons in the auditory system must represent the fine temporal structure of sounds despite a tremendous variation in sound level in natural environments. It has been shown in vitro that the transformation from dynamic signals into precise spike trains can be accurately captured by simple integrate-and-fire models. In this work, we show that the in vivo responses of cochlear nucleus bushy cells to sounds across a wide range of levels can be precisely predicted by deterministic integrate-and-fire models with adaptive spike threshold. Our model can predict both the spike timings and the firing rate in response to novel sounds, across a large input level range. A noisy version of the model accounts for the statistical structure of spike trains, including the reliability and temporal precision of responses. Spike threshold adaptation was critical to ensure that predictions remain accurate at different levels. These results confirm that simple integrate-and-fire models provide an accurate phenomenological account of spike train statistics and emphasize the functional relevance of spike threshold adaptation. PMID:23864375
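The model class this record describes, a deterministic integrate-and-fire neuron with an adaptive spike threshold, is easy to sketch. Parameters below are illustrative, not the fitted bushy-cell values from the study.

```python
# Minimal leaky integrate-and-fire neuron with an adaptive spike threshold.
# All constants are assumed, illustrative values.

def lif_adaptive(current, dt=0.1, tau_m=5.0, tau_th=20.0,
                 v_rest=0.0, th0=1.0, th_jump=0.5):
    """Return spike times (ms) for an input current trace sampled every dt ms."""
    v, th = v_rest, th0
    spikes = []
    for i, drive in enumerate(current):
        v  += dt * (-(v - v_rest) + drive) / tau_m   # membrane integration
        th += dt * (th0 - th) / tau_th               # threshold relaxes to th0
        if v >= th:
            spikes.append(i * dt)
            v = v_rest                               # reset after spike
            th += th_jump                            # threshold adapts upward
    return spikes

# A strong step input: threshold adaptation spreads out successive spikes.
stim = [3.0] * 1000                                  # 100 ms of constant drive
spike_times = lif_adaptive(stim)
intervals = [b - a for a, b in zip(spike_times, spike_times[1:])]
print(len(spike_times), intervals[:3])
# later interspike intervals are longer than early ones: the signature of
# spike-threshold adaptation highlighted in the abstract
```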

  5. Auditory discrimination of voice-onset time and its relationship with reading ability.

    PubMed

    Arciuli, Joanne; Rankine, Tracey; Monaghan, Padraic

    2010-05-01

    The perception of voice-onset time (VOT) during dichotic listening provides unique insight regarding auditory discrimination processes and, as such, an opportunity to learn more about individual differences in reading ability. We analysed the responses elicited by four VOT conditions: short-long pairs (SL), where a syllable with a short VOT was presented to the left ear and a syllable with a long VOT was presented to the right ear, as well as long-short (LS), short-short (SS), and long-long (LL) pairs. Stimuli were presented in three attention conditions, where participants were instructed to attend to either the left or right ear, or received no instruction. By around 9.5 years of age children perform similarly to adults in terms of the size and relative magnitude of the right ear advantage (REA) elicited by each of the four VOT conditions. Overall, SL pairs elicited the largest REA and LS pairs elicited a left ear advantage (LEA), reflecting stimulus-driven bottom-up processes. However, children were less able to modulate their responses according to attention condition, reflecting a lack of top-down control. Effective direction of attention to one ear or the other was related to measures of reading accuracy and comprehension, indicating that reading skill is associated with top-down control of bottom-up perceptual processes.

  6. Asynchronous inputs alter excitability, spike timing, and topography in primary auditory cortex

    PubMed Central

    Pandya, Pritesh K.; Moucha, Raluca; Engineer, Navzer D.; Rathbun, Daniel L.; Vazquez, Jessica; Kilgard, Michael P.

    2010-01-01

    Correlation-based synaptic plasticity provides a potential cellular mechanism for learning and memory. Studies in the visual and somatosensory systems have shown that behavioral and surgical manipulation of sensory inputs leads to changes in cortical organization that are consistent with the operation of these learning rules. In this study, we examine how the organization of primary auditory cortex (A1) is altered by tones designed to decrease the average input correlation across the frequency map. After one month of separately pairing nucleus basalis stimulation with 2 and 14 kHz tones, a greater proportion of A1 neurons responded to frequencies below 2 kHz and above 14 kHz. Despite the expanded representation of these tones, cortical excitability was specifically reduced in the high and low frequency regions of A1, as evidenced by increased neural thresholds and decreased response strength. In contrast, in the frequency region between the two paired tones, driven rates were unaffected and spontaneous firing rate was increased. Neural response latencies were increased across the frequency map when nucleus basalis stimulation was associated with asynchronous activation of the high and low frequency regions of A1. This set of changes did not occur when pulsed noise bursts were paired with nucleus basalis stimulation. These results are consistent with earlier observations that sensory input statistics can shape cortical map organization and spike timing. PMID:15855025

  7. Comment Codes: Improving Turnaround Time for Students Reports.

    ERIC Educational Resources Information Center

    O'Keefe, Robert D.

    1996-01-01

    A coding system expediting grading of student reports in a marketing class is described. The system uses twelve codes corresponding to constructive criticisms of content and form, allowing the teacher to comment while reading and to read more efficiently. A brief summary can also be included. Most frequent codes are recorded in the gradebook to…

  8. Time-Dependent, Parallel Neutral Particle Transport Code System.

    SciTech Connect

    BAKER, RANDAL S.

    2009-09-10

Version 00 PARTISN (PARallel, TIme-Dependent SN) is the evolutionary successor to CCC-547/DANTSYS. The PARTISN code package is a modular computer program package designed to solve the time-independent or time-dependent multigroup discrete ordinates form of the Boltzmann transport equation in several different geometries. The modular construction of the package separates the input processing, the transport equation solving, and the post processing (or edit) functions into distinct code modules: the Input Module, the Solver Module, and the Edit Module, respectively. The Input and Edit Modules in PARTISN are very similar to those in DANTSYS. However, unlike DANTSYS, the Solver Module in PARTISN contains one, two, and three-dimensional solvers in a single module. In addition to the diamond-differencing method, the Solver Module also has Adaptive Weighted Diamond-Differencing (AWDD), Linear Discontinuous (LD), and Exponential Discontinuous (ED) spatial differencing methods. The spatial mesh may consist of either a standard orthogonal mesh or a block adaptive orthogonal mesh. The Solver Module may be run in parallel for two and three dimensional problems. One can now run 1-D problems in parallel using Energy Domain Decomposition (triggered by Block 5 input keyword npeg>0). EDD can also be used in 2-D/3-D with or without our standard Spatial Domain Decomposition. Both the static (fixed source or eigenvalue) and time-dependent forms of the transport equation are solved in forward or adjoint mode. In addition, PARTISN now has a probabilistic mode for Probability of Initiation (static) and Probability of Survival (dynamic) calculations. Vacuum, reflective, periodic, white, or inhomogeneous boundary conditions are solved. General anisotropic scattering and inhomogeneous sources are permitted.
PARTISN solves the transport equation on orthogonal (single level or block-structured AMR) grids in 1-D (slab, two

  9. Time and Category Information in Pattern-Based Codes

    PubMed Central

    Eyherabide, Hugo Gabriel; Samengo, Inés

    2010-01-01

    Sensory stimuli are usually composed of different features (the what) appearing at irregular times (the when). Neural responses often use spike patterns to represent sensory information. The what is hypothesized to be encoded in the identity of the elicited patterns (the pattern categories), and the when, in the time positions of patterns (the pattern timing). However, this standard view is oversimplified. In the real world, the what and the when might not be separable concepts, for instance, if they are correlated in the stimulus. In addition, neuronal dynamics can condition the pattern timing to be correlated with the pattern categories. Hence, timing and categories of patterns may not constitute independent channels of information. In this paper, we assess the role of spike patterns in the neural code, irrespective of the nature of the patterns. We first define information-theoretical quantities that allow us to quantify the information encoded by different aspects of the neural response. We also introduce the notion of synergy/redundancy between time positions and categories of patterns. We subsequently establish the relation between the what and the when in the stimulus with the timing and the categories of patterns. To that aim, we quantify the mutual information between different aspects of the stimulus and different aspects of the response. This formal framework allows us to determine the precise conditions under which the standard view holds, as well as the departures from this simple case. Finally, we study the capability of different response aspects to represent the what and the when in the neural response. PMID:21151371
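The information-theoretic bookkeeping described above reduces to estimating mutual information between a stimulus aspect (the "what") and a response aspect (e.g. pattern category). A toy sketch from joint counts, with invented data:

```python
import math
from collections import Counter

# Mutual information I(S;R) in bits, estimated from (stimulus, response)
# samples. The sample lists below are invented toy data.

def mutual_information(pairs):
    """I(S;R) from a list of (stimulus, response) samples."""
    n = len(pairs)
    ps = Counter(s for s, _ in pairs)
    pr = Counter(r for _, r in pairs)
    pj = Counter(pairs)
    return sum(c / n * math.log2((c / n) / ((ps[s] / n) * (pr[r] / n)))
               for (s, r), c in pj.items())

# Stimulus feature perfectly determines pattern category -> I = H(S) = 1 bit.
perfect = [('A', 1), ('B', 2)] * 50
# Category independent of stimulus -> I = 0 bits.
independent = [('A', 1), ('A', 2), ('B', 1), ('B', 2)] * 25

print(mutual_information(perfect), mutual_information(independent))
```

The same estimator applied to (what, pattern-category) versus (when, pattern-timing) pairs is the kind of comparison the paper uses to test whether timing and categories form independent channels.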

  10. Learning impaired children exhibit timing deficits and training-related improvements in auditory cortical responses to speech in noise.

    PubMed

    Warrier, Catherine M; Johnson, Krista L; Hayes, Erin A; Nicol, Trent; Kraus, Nina

    2004-08-01

    The physiological mechanisms that contribute to abnormal encoding of speech in children with learning problems are yet to be well understood. Furthermore, speech perception problems appear to be particularly exacerbated by background noise in this population. This study compared speech-evoked cortical responses recorded in a noisy background to those recorded in quiet in normal children (NL) and children with learning problems (LP). Timing differences between responses recorded in quiet and in background noise were assessed by cross-correlating the responses with each other. Overall response magnitude was measured with root-mean-square (RMS) amplitude. Cross-correlation scores indicated that 23% of LP children exhibited cortical neural timing abnormalities such that their neurophysiological representation of speech sounds became distorted in the presence of background noise. The latency of the N2 response in noise was isolated as being the root of this distortion. RMS amplitudes in these children did not differ from NL children, indicating that this result was not due to a difference in response magnitude. LP children who participated in a commercial auditory training program and exhibited improved cortical timing also showed improvements in phonological perception. Consequently, auditory pathway timing deficits can be objectively observed in LP children, and auditory training can diminish these deficits.

  11. Bimodal stimulus timing-dependent plasticity in primary auditory cortex is altered after noise exposure with and without tinnitus

    PubMed Central

    Koehler, Seth D.; Shore, Susan E.

    2015-01-01

    Central auditory circuits are influenced by the somatosensory system, a relationship that may underlie tinnitus generation. In the guinea pig dorsal cochlear nucleus (DCN), pairing spinal trigeminal nucleus (Sp5) stimulation with tones at specific intervals and orders facilitated or suppressed subsequent tone-evoked neural responses, reflecting spike timing-dependent plasticity (STDP). Furthermore, after noise-induced tinnitus, bimodal responses in DCN were shifted from Hebbian to anti-Hebbian timing rules with less discrete temporal windows, suggesting a role for bimodal plasticity in tinnitus. Here, we aimed to determine if multisensory STDP principles like those in DCN also exist in primary auditory cortex (A1), and whether they change following noise-induced tinnitus. Tone-evoked and spontaneous neural responses were recorded before and 15 min after bimodal stimulation in which the intervals and orders of auditory-somatosensory stimuli were randomized. Tone-evoked and spontaneous firing rates were influenced by the interval and order of the bimodal stimuli, and in sham-controls Hebbian-like timing rules predominated as was seen in DCN. In noise-exposed animals with and without tinnitus, timing rules shifted away from those found in sham-controls to more anti-Hebbian rules. Only those animals with evidence of tinnitus showed increased spontaneous firing rates, a purported neurophysiological correlate of tinnitus in A1. Together, these findings suggest that bimodal plasticity is also evident in A1 following noise damage and may have implications for tinnitus generation and therapeutic intervention across the central auditory circuit. PMID:26289461
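The Hebbian versus anti-Hebbian timing rules contrasted above can be sketched with the classical exponential STDP window; the anti-Hebbian rule is its sign flip. Parameters are invented, illustrative values, not fits to the recordings.

```python
import math

# Classical exponential STDP window (illustrative parameters). dt_ms > 0 means
# the leading (e.g. somatosensory/presynaptic) pulse precedes the tone/spike.

def hebbian_stdp(dt_ms, a_plus=1.0, a_minus=1.0, tau=10.0):
    """Weight change: potentiation for pre-before-post, depression otherwise."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau)    # potentiation
    return -a_minus * math.exp(dt_ms / tau)       # depression

def anti_hebbian_stdp(dt_ms, **kw):
    """Sign-flipped window: the rule the noise-exposed animals shift toward."""
    return -hebbian_stdp(dt_ms, **kw)

for dt in (5.0, -5.0):
    print(dt, hebbian_stdp(dt), anti_hebbian_stdp(dt))
```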

  12. Effect of Eight Weekly Aerobic Training Program on Auditory Reaction Time and MaxVO₂ in Visual Impairments

    ERIC Educational Resources Information Center

    Taskin, Cengiz

    2016-01-01

    The aim of the study was to examine the effect of eight weekly aerobic exercises on auditory reaction time and MaxVO₂ in children with visual impairments. Forty children with visual impairment from Turkey who had a "blind 3" classification; experimental group (age = 15.60 ± 1.10 years; height = 164.15 ± 4.88 cm; weight = 66.60 ± 4.77 kg) for twenty…

  13. Bimodal stimulus timing-dependent plasticity in primary auditory cortex is altered after noise exposure with and without tinnitus.

    PubMed

    Basura, Gregory J; Koehler, Seth D; Shore, Susan E

    2015-12-01

    Central auditory circuits are influenced by the somatosensory system, a relationship that may underlie tinnitus generation. In the guinea pig dorsal cochlear nucleus (DCN), pairing spinal trigeminal nucleus (Sp5) stimulation with tones at specific intervals and orders facilitated or suppressed subsequent tone-evoked neural responses, reflecting spike timing-dependent plasticity (STDP). Furthermore, after noise-induced tinnitus, bimodal responses in DCN were shifted from Hebbian to anti-Hebbian timing rules with less discrete temporal windows, suggesting a role for bimodal plasticity in tinnitus. Here, we aimed to determine if multisensory STDP principles like those in DCN also exist in primary auditory cortex (A1), and whether they change following noise-induced tinnitus. Tone-evoked and spontaneous neural responses were recorded before and 15 min after bimodal stimulation in which the intervals and orders of auditory-somatosensory stimuli were randomized. Tone-evoked and spontaneous firing rates were influenced by the interval and order of the bimodal stimuli, and in sham-controls Hebbian-like timing rules predominated as was seen in DCN. In noise-exposed animals with and without tinnitus, timing rules shifted away from those found in sham-controls to more anti-Hebbian rules. Only those animals with evidence of tinnitus showed increased spontaneous firing rates, a purported neurophysiological correlate of tinnitus in A1. Together, these findings suggest that bimodal plasticity is also evident in A1 following noise damage and may have implications for tinnitus generation and therapeutic intervention across the central auditory circuit.

  14. Perceptual distortions in pitch and time reveal active prediction and support for an auditory pitch-motion hypothesis.

    PubMed

    Henry, Molly J; McAuley, J Devin

    2013-01-01

    A number of accounts of human auditory perception assume that listeners use prior stimulus context to generate predictions about future stimulation. Here, we tested an auditory pitch-motion hypothesis that was developed from this perspective. Listeners judged either the time change (i.e., duration) or pitch change of a comparison frequency glide relative to a standard (referent) glide. Under a constant-velocity assumption, listeners were hypothesized to use the pitch velocity (Δf/Δt) of the standard glide to generate predictions about the pitch velocity of the comparison glide, leading to perceptual distortions along the to-be-judged dimension when the velocities of the two glides differed. These predictions were borne out in the pattern of relative points of subjective equality by a significant three-way interaction between the velocities of the two glides and task. In general, listeners' judgments along the task-relevant dimension (pitch or time) were affected by expectations generated by the constant-velocity standard, but in an opposite manner for the two stimulus dimensions. When the comparison glide velocity was faster than the standard, listeners overestimated time change, but underestimated pitch change, whereas when the comparison glide velocity was slower than the standard, listeners underestimated time change, but overestimated pitch change. Perceptual distortions were least evident when the velocities of the standard and comparison glides were matched. Fits of an imputed velocity model further revealed increasingly larger distortions at faster velocities. The present findings provide support for the auditory pitch-motion hypothesis and add to a larger body of work revealing a role for active prediction in human auditory perception.

  15. Multicolor combinatorial probe coding for real-time PCR.

    PubMed

    Huang, Qiuying; Zheng, Linlin; Zhu, Yumei; Zhang, Jiafeng; Wen, Huixin; Huang, Jianwei; Niu, Jianjun; Zhao, Xilin; Li, Qingge

    2011-01-14

    The target volume of multiplex real-time PCR assays is limited by the number of fluorescent dyes available and the number of fluorescence acquisition channels present in the PCR instrument. We hereby explored a probe labeling strategy that significantly increased the target volume of real-time PCR detection in one reaction. The labeling paradigm, termed "Multicolor Combinatorial Probe Coding" (MCPC), uses a limited number (n) of differently colored fluorophores in various combinations to label each probe, enabling one of 2^n − 1 genetic targets to be detected in one reaction. The proof-of-principle of MCPC was validated by identification of any one of 15 possible human papillomavirus types, which is the maximum target number theoretically detectable by MCPC with a 4-color channel instrument, in one reaction. MCPC was then improved from a one-primer-pair setting to a multiple-primer-pair format through the Homo-Tag Assisted Non-Dimer (HAND) system to allow multiple primer pairs to be included in one reaction. This improvement was demonstrated via identification of one of 10 possible foodborne pathogen candidates with 10 pairs of primers included in one reaction, and had a limit of detection equivalent to that of uniplex PCR. MCPC was further explored in detecting combined genotypes of five β-globin gene mutations where multiple targets were co-amplified. The MCPC strategy could expand the scope of real-time PCR assays to applications that are unachievable with current labeling strategies.
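The combinatorial arithmetic behind MCPC is easy to verify: with n distinguishable fluorophores, each probe can be labeled with any nonempty subset of colors, giving 2^n − 1 distinct codes. A minimal sketch (the dye names are illustrative, not from the paper):

```python
from itertools import combinations

def mcpc_codes(colors):
    """All nonempty color subsets usable as MCPC probe labels."""
    return [c for k in range(1, len(colors) + 1) for c in combinations(colors, k)]

four_channels = ["FAM", "HEX", "ROX", "Cy5"]   # illustrative qPCR dye names
codes = mcpc_codes(four_channels)
print(len(codes))  # → 15, the maximum detectable on a 4-channel instrument
```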

  16. Performance of a space-time block coded code division multiple access system over Nakagami-m fading channels

    NASA Astrophysics Data System (ADS)

    Yu, Xiangbin; Dong, Tao; Xu, Dazhuan; Bi, Guangguo

    2010-09-01

    By introducing an orthogonal space-time coding scheme, multiuser code division multiple access (CDMA) systems with different space time codes are given, and corresponding system performance is investigated over a Nakagami-m fading channel. A low-complexity multiuser receiver scheme is developed for space-time block coded CDMA (STBC-CDMA) systems. The scheme can make full use of the complex orthogonality of space-time block coding to simplify the high decoding complexity of the existing scheme. Compared to the existing scheme with exponential decoding complexity, it has linear decoding complexity. Based on the performance analysis and mathematical calculation, the average bit error rate (BER) of the system is derived in detail for integer m and non-integer m, respectively. As a result, a tight closed-form BER expression is obtained for STBC-CDMA with an orthogonal spreading code, and an approximate closed-form BER expression is attained for STBC-CDMA with a quasi-orthogonal spreading code. Simulation results show that the proposed scheme can achieve almost the same performance as the existing scheme with low complexity. Moreover, the simulation results for average BER are consistent with the theoretical analysis.
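The linear-complexity decoding enabled by complex orthogonality can be illustrated with the two-antenna Alamouti code, the simplest orthogonal STBC. The Nakagami-m fading statistics and CDMA spreading of the paper are omitted here, and the channel coefficients and symbols below are invented for illustration:

```python
import numpy as np

def alamouti_encode(s1, s2):
    # Two time slots x two antennas: [[s1, s2], [-s2*, s1*]]
    return np.array([[s1, s2], [-np.conj(s2), np.conj(s1)]])

def alamouti_decode(r, h1, h2):
    # r[0], r[1]: received samples in slots 1 and 2 (single receive antenna).
    # Orthogonality of the code makes the effective channel a scalar gain,
    # so each symbol is recovered by a simple linear combination.
    g = abs(h1) ** 2 + abs(h2) ** 2
    s1_hat = (np.conj(h1) * r[0] + h2 * np.conj(r[1])) / g
    s2_hat = (np.conj(h2) * r[0] - h1 * np.conj(r[1])) / g
    return s1_hat, s2_hat

h1, h2 = 0.8 + 0.3j, -0.5 + 0.9j        # illustrative flat-fading coefficients
s1, s2 = 1 + 1j, -1 + 1j                # QPSK-like symbols
X = alamouti_encode(s1, s2)
r = X @ np.array([h1, h2])              # noiseless reception
print(alamouti_decode(r, h1, h2))       # recovers (s1, s2)
```

The decoder involves only a handful of multiplications per symbol, which is the sense in which orthogonal designs give linear rather than exponential decoding complexity.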

  17. Provisional Coding Practices: Are They Really a Waste of Time?

    PubMed

    Krypuy, Matthew; McCormack, Lena

    2006-11-01

    In order to facilitate effective clinical coding and hence the precise financial reimbursement of acute services, in 2005 Western District Health Service (WDHS) (located in regional Victoria, Australia) undertook a provisional coding trial for inpatient medical episodes to determine the magnitude and accuracy of clinical documentation. Utilising clinical coding software installed on a laptop computer, provisional coding was undertaken for all current overnight inpatient episodes under each physician one day prior to attending their daily ward round. The provisionally coded episodes were re-coded upon the completion of the discharge summary, and the final Diagnosis Related Group (DRG) allocation and weight were compared to the provisional DRG assignment. A total of 54 out of 220 inpatient medical episodes were provisionally coded. This represented approximately a 25% cross section of the population selected for observation. Approximately 67.6% of the provisionally allocated DRGs were accurate, in contrast to 32.4%, which were subject to change once the discharge summary was completed. The DRG changes were primarily due to: disease progression of a patient during their care episode which could not be identified by clinical coding staff due to discharge prior to the following scheduled ward round; the discharge destination of particular patients; and the accuracy of clinical documentation on the discharge summary. The information gathered from the provisional coding trial supported the hypothesis that clinical documentation standards were sufficient and adequate to support precise clinical coding and DRG assignment at WDHS. The trial further highlighted the importance of a complete and accurate discharge summary available during the coding process of acute inpatient episodes.

  18. Adaptation of spike timing precision controls the sensitivity to interaural time difference in the avian auditory brainstem

    PubMed Central

    Higgs, Matthew H.; Kuznetsova, Marina S.; Spain, William J.

    2012-01-01

    While adaptation is widely thought to facilitate neural coding, the form of adaptation should depend on how the signals are encoded. Monaural neurons early in the interaural time difference (ITD) pathway encode the phase of sound input using spike timing rather than firing rate. Such neurons in chicken nucleus magnocellularis (NM) adapt to ongoing stimuli by increasing firing rate and decreasing spike timing precision. We measured NM neuron responses while adapting them to simulated physiological input, and used these responses to construct inputs to binaural coincidence detector neurons in nucleus laminaris (NL). Adaptation of spike timing in NM reduced ITD sensitivity in NL, demonstrating the dominant role of timing in the short-term plasticity as well as the immediate response of this sound localization circuit. PMID:23115186
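The coincidence-detection logic underlying NL's ITD sensitivity can be caricatured in a few lines. This toy model (all parameters are illustrative, not fitted to the recordings) counts near-simultaneous arrivals from the two ears and shows how added spike-timing jitter, mimicking adaptation in NM, flattens the ITD tuning:

```python
import numpy as np

rng = np.random.default_rng(0)

def coincidences(left, right, itd, window=0.1):
    """Count left-ear spikes with a right-ear spike within `window` ms at a given ITD (ms)."""
    shifted = right + itd
    return sum(np.any(np.abs(shifted - t) < window) for t in left)

period = 2.0                                 # ms; phase-locked to a 500 Hz tone
base = np.arange(0.0, 200.0, period)         # ideal spike times

def tuning(jitter_ms, itds):
    left = base + rng.normal(0.0, jitter_ms, base.size)
    right = base + rng.normal(0.0, jitter_ms, base.size)
    return [coincidences(left, right, itd) for itd in itds]

itds = [0.0, 0.5, 1.0]                       # tested ITDs in ms
sharp = tuning(0.02, itds)                   # precise (unadapted) NM-like inputs
blurred = tuning(0.40, itds)                 # jittered (adapted) NM-like inputs
# Precise inputs give a deep best/worst ITD contrast; jitter shrinks it.
print(sharp, blurred)
```

The shrinking contrast between best and worst ITD mirrors the reduced ITD sensitivity the study observed when NM spike timing adapts.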

  19. Effect of Auditory Motion Velocity on Reaction Time and Cortical Processes

    ERIC Educational Resources Information Center

    Getzmann, Stephan

    2009-01-01

    The study investigated the processing of sound motion, employing a psychophysical motion discrimination task in combination with electroencephalography. Following stationary auditory stimulation from a central space position, the onset of left- and rightward motion elicited a specific cortical response that was lateralized to the hemisphere…

  20. Rapid Increase in Neural Conduction Time in the Adult Human Auditory Brainstem Following Sudden Unilateral Deafness.

    PubMed

    Maslin, M R D; Lloyd, S K; Rutherford, S; Freeman, S; King, A; Moore, D R; Munro, K J

    2015-10-01

    Individuals with sudden unilateral deafness offer a unique opportunity to study plasticity of the binaural auditory system in adult humans. Stimulation of the intact ear results in increased activity in the auditory cortex. However, there are no reports of changes at sub-cortical levels in humans. Therefore, the aim of the present study was to investigate changes in sub-cortical activity immediately before and after the onset of surgically induced unilateral deafness in adult humans. Click-evoked auditory brainstem responses (ABRs) to stimulation of the healthy ear were recorded from ten adults during the course of translabyrinthine surgery for the removal of a unilateral acoustic neuroma. This surgical technique always results in abrupt deafferentation of the affected ear. The results revealed a rapid (within minutes) reduction in latency of wave V (mean pre = 6.55 ms; mean post = 6.15 ms; p < 0.001). A latency reduction was also observed for wave III (mean pre = 4.40 ms; mean post = 4.13 ms; p < 0.001). These reductions in response latency are consistent with functional changes including disinhibition and/or more rapid intra-cellular signalling affecting binaurally sensitive neurons in the central auditory system. The results are highly relevant for improved understanding of putative physiological mechanisms underlying perceptual disorders such as tinnitus and hyperacusis.

  2. Effects of Location, Frequency Region, and Time Course of Selective Attention on Auditory Scene Analysis

    ERIC Educational Resources Information Center

    Cusack, Rhodri; Deeks, John; Aikman, Genevieve; Carlyon, Robert P.

    2004-01-01

    Often, the sound arriving at the ears is a mixture from many different sources, but only 1 is of interest. To assist with selection, the auditory system structures the incoming input into streams, each of which ideally corresponds to a single source. Some authors have argued that this process of streaming is automatic and invariant, but recent…

  3. A Latent Consolidation Phase in Auditory Identification Learning: Time in the Awake State Is Sufficient

    ERIC Educational Resources Information Center

    Roth, Daphne Ari-Even; Kishon-Rabin, Liat; Hildesheimer, Minka; Karni, Avi

    2005-01-01

    Large gains in performance, evolving hours after practice has terminated, were reported in a number of visual and some motor learning tasks, as well as recently in an auditory nonverbal discrimination task. It was proposed that these gains reflect a latent phase of experience-triggered memory consolidation in human skill learning. It is not clear,…

  6. Auditory Learning Using a Portable Real-Time Vocoder: Preliminary Findings

    ERIC Educational Resources Information Center

    Casserly, Elizabeth D.; Pisoni, David B.

    2015-01-01

    Purpose: Although traditional study of auditory training has been in controlled laboratory settings, interest has been increasing in more interactive options. The authors examine whether such interactive training can result in short-term perceptual learning, and the range of perceptual skills it impacts. Method: Experiments 1 (N = 37) and 2 (N =…

  7. Transformation from a pure time delay to a mixed time and phase delay representation in the auditory forebrain pathway.

    PubMed

    Vonderschen, Katrin; Wagner, Hermann

    2012-04-25

    Birds and mammals exploit interaural time differences (ITDs) for sound localization. Subsequent to ITD detection by brainstem neurons, ITD processing continues in parallel midbrain and forebrain pathways. In the barn owl, both ITD detection and processing in the midbrain are specialized to extract ITDs independent of frequency, which amounts to a pure time delay representation. Recent results have elucidated different mechanisms of ITD detection in mammals, which lead to a representation of small ITDs in high-frequency channels and large ITDs in low-frequency channels, resembling a phase delay representation. However, the detection mechanism does not prevent a change in ITD representation at higher processing stages. Here we analyze ITD tuning across frequency channels with pure tone and noise stimuli in neurons of the barn owl's auditory arcopallium, a nucleus at the endpoint of the forebrain pathway. To extend the analysis of ITD representation across frequency bands to a large neural population, we employed Fourier analysis for the spectral decomposition of ITD curves recorded with noise stimuli. This method was validated using physiological as well as model data. We found that low frequencies convey sensitivity to large ITDs, whereas high frequencies convey sensitivity to small ITDs. Moreover, different linear phase frequency regimes in the high-frequency and low-frequency ranges suggested an independent convergence of inputs from these frequency channels. Our results are consistent with ITD being remodeled toward a phase delay representation along the forebrain pathway. This indicates that sensory representations may undergo substantial reorganization, presumably in relation to specific behavioral output.
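The Fourier-based decomposition of ITD curves can be sketched as follows. In this toy reconstruction (the frequencies and best ITDs are invented for illustration), a noise-delay curve is built as a sum of cosine components, and the phase of each FFT component recovers that band's best ITD modulo one period:

```python
import numpy as np

dt = 0.01                                   # resolution of the ITD axis (ms)
itd = np.arange(-5.0, 5.0, dt)              # ITD axis, 1000 points
best = {2.0: 0.12, 4.0: 0.05}               # kHz -> that band's best ITD (ms); invented values
curve = sum(np.cos(2 * np.pi * f * (itd - d)) for f, d in best.items())

spec = np.fft.rfft(curve)
recovered = {}
for f in best:
    k = int(round(f * itd.size * dt))       # FFT bin corresponding to f kHz
    phase = np.angle(spec[k])               # equals 2*pi*f*(itd[0] - best_itd), mod 2*pi
    recovered[f] = (itd[0] - phase / (2 * np.pi * f)) % (1.0 / f)
print(recovered)                            # best ITD per band, modulo one period
```

Plotting phase against frequency for many such components is what reveals the distinct linear phase regimes in the low- and high-frequency ranges described above.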

  8. [Effect of stimulus rise time and high-pass masking on early auditory evoked potentials].

    PubMed

    Bunke, D; von Specht, H; Mühler, R; Pethe, J; Kevanishvili, Z

    1998-04-01

    Problems of frequency-specific objective assessment of hearing threshold by means of the auditory brainstem response (ABR) have been discussed recently. While a number of workers have recommended methods of selective masking to improve the frequency specificity, others believe that frequency-specific potentials can also be obtained without masking. In this context, the effects of rise-decay time and high-pass masking on ABRs were investigated. ABRs were recorded in normal-hearing subjects and patients with high- and low-frequency hearing loss by means of surface electrodes between the vertex and the ipsilateral mastoid. The frequency of the stimulus was 1 kHz, and the rise-decay time 1 ms (1-0-1) or 2 ms (2-0-2). High-pass filtered noise (cutoff frequency 1.5 kHz; filter slope 250 dB/octave) was employed for masking. Particular attention was paid to the problem of efficient masking. In normal-hearing subjects, high-pass masking yielded longer mean latencies and smaller mean amplitudes of wave V compared with non-masked ABRs, with the differences being less pronounced in the near-threshold domain. Similar results were observed in patients with high-frequency hearing loss. In patients with low-frequency hearing loss, the influence of high-pass masking was especially marked near threshold. Furthermore, latency and amplitude differences of wave V between the 1-0-1 and the 2-0-2 stimuli were determined from the ABRs obtained with and without high-pass masking. The differences between the latency differences of the two stimuli were statistically significant only in the suprathreshold range (70 dB nHL). The results are suggestive of an inadequate frequency specificity of unmasked stimuli in the suprathreshold range. Evaluation of the latencies revealed a similar frequency specificity for both rise-decay times near threshold and a higher frequency specificity of the longer stimulus in the suprathreshold range.

  9. Coded acoustic wave sensors and system using time diversity

    NASA Technical Reports Server (NTRS)

    Solie, Leland P. (Inventor); Hines, Jacqueline H. (Inventor)

    2012-01-01

    An apparatus and method for distinguishing between sensors that are to be wirelessly detected is provided. An interrogator device uses different, distinct time delays in the sensing signals when interrogating the sensors. The sensors are provided with different distinct pedestal delays. Sensors that have the same pedestal delay as the delay selected by the interrogator are detected by the interrogator whereas other sensors with different pedestal delays are not sensed. Multiple sensors with a given pedestal delay are provided with different codes so as to be distinguished from one another by the interrogator. The interrogator uses a signal that is transmitted to the sensor and returned by the sensor for combination and integration with the reference signal that has been processed by a function. The sensor may be a surface acoustic wave device having a differential impulse response with a power spectral density consisting of lobes. The power spectral density of the differential response is used to determine the value of the sensed parameter or parameters.
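The time-diversity selection scheme can be illustrated with a toy simulation (codes, delays, and window lengths are all invented for illustration): correlating the received signal against a code at the interrogator's selected pedestal delay yields a strong response only for the sensor whose pedestal delay matches.

```python
import numpy as np

rng = np.random.default_rng(1)
code_a = rng.choice([-1.0, 1.0], 64)     # sensor A's code, pedestal delay 10
code_b = rng.choice([-1.0, 1.0], 64)     # sensor B's code, pedestal delay 90

def sensor_return(code, pedestal, total=160):
    """Place a sensor's coded reply at its pedestal delay in the return window."""
    out = np.zeros(total)
    out[pedestal:pedestal + code.size] = code
    return out

received = sensor_return(code_a, 10) + sensor_return(code_b, 90)

def detect(received, code, pedestal):
    """Correlate the return with `code` at the interrogator's selected delay."""
    segment = received[pedestal:pedestal + code.size]
    return float(np.dot(segment, code)) / code.size

print(detect(received, code_a, 10))   # → 1.0 (matching code and pedestal delay)
print(detect(received, code_a, 90))   # near 0: a different sensor occupies that delay
```

Sensors sharing a pedestal delay would then be separated by their codes, as the abstract describes.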

  10. Auditory event files: integrating auditory perception and action planning.

    PubMed

    Zmigrod, Sharon; Hommel, Bernhard

    2009-02-01

    The features of perceived objects are processed in distinct neural pathways, which call for mechanisms that integrate the distributed information into coherent representations (the binding problem). Recent studies of sequential effects have demonstrated feature binding not only in perception, but also across (visual) perception and action planning. We investigated whether comparable effects can be obtained in and across auditory perception and action. The results from two experiments revealed effects indicative of spontaneous integration of auditory features (pitch and loudness, pitch and location), as well as evidence for audio-manual stimulus-response integration. Even though integration takes place spontaneously, features related to task-relevant stimulus or response dimensions are more likely to be integrated. Moreover, integration seems to follow a temporal overlap principle, with features coded close in time being more likely to be bound together. Taken altogether, the findings are consistent with the idea of episodic event files integrating perception and action plans.

  11. Coding Instead of Splitting - Algebraic Combinations in Time and Space

    DTIC Science & Technology

    2016-06-09

    Report AFRL-AFOSR-VA-TR-2016-0218, "Coding Instead of Splitting - Algebraic Combinations in Time and Space." Principal Investigator: Muriel Médard, Massachusetts Institute of Technology; Primary Contact Email: medard… Contract FA9550-13-1-0023.

  12. Auditory brain stem response to complex sounds: a tutorial.

    PubMed

    Skoe, Erika; Kraus, Nina

    2010-06-01

    This tutorial provides a comprehensive overview of the methodological approach to collecting and analyzing auditory brain stem responses to complex sounds (cABRs). cABRs provide a window into how behaviorally relevant sounds such as speech and music are processed in the brain. Because temporal and spectral characteristics of sounds are preserved in this subcortical response, cABRs can be used to assess specific impairments and enhancements in auditory processing. Notably, subcortical auditory function is neither passive nor hardwired but dynamically interacts with higher-level cognitive processes to refine how sounds are transcribed into neural code. This experience-dependent plasticity, which can occur on a number of time scales (e.g., life-long experience with speech or music, short-term auditory training, on-line auditory processing), helps shape sensory perception. Thus, by being an objective and noninvasive means for examining cognitive function and experience-dependent processes in sensory activity, cABRs have considerable utility in the study of populations where auditory function is of interest (e.g., auditory experts such as musicians, and persons with hearing loss, auditory processing, and language disorders). This tutorial is intended for clinicians and researchers seeking to integrate cABRs into their clinical or research programs.

  13. Low Complexity Receiver Based Space-Time Codes for Broadband Wireless Communications

    DTIC Science & Technology

    2011-01-31

    STBC family is a combination/overlay between orthogonal STBC and Toeplitz codes, which could be viewed as a generalization of overlapped Alamouti codes (OAC) and Toeplitz codes recently proposed in the literature. It is shown that the newly proposed STBC may outperform the existing codes when… mixed asynchronous signals in the first time-slot by a Toeplitz matrix, and then broadcasts them back to the terminals in the second time-slot. A

  14. DSP-based hardware for real-time video coding

    NASA Astrophysics Data System (ADS)

    de Sa, Luis A. S. V.; Silva, Vitor M.; de la Cruz, Luis J.; Faria, Sergio; Amado, Pedro J.; Navarro, Antonio; Lopes, Fernando; Silvestre, Joao C.

    1992-04-01

    An important application of digital image processing is the compression of video sequences by one or two orders of magnitude with minor picture-quality degradation. To achieve this data compression, elaborate algorithms are used. They eliminate both spatial and temporal redundancy by using transform, differential, and variable-length coding techniques. Two of these algorithms are the CCITT H.261 algorithm for videotelephony and the ISO MPEG algorithm for CD-ROM motion video. The hardware implementation of these algorithms is a formidable task in view of the number of operations (more than 1 GFLOPS) that may be necessary. This paper discusses the compression and decompression of real-time video using a multiprocessor system based on digital signal processors. The system is based on the partition of each picture into horizontal strips, each of which is handled by a local processor unit combining a TMS320C30 signal processor and an A121 discrete cosine transform processor. In the encoder, each strip processor inputs raw data from a video acquisition module through a common parallel video bus and outputs compressed data to a supervisor module through a common serial supervisor bus. In the decoder, the data flow through an inverse path, i.e., the processors receive data from a supervisor module and transmit data to a display module. All operations within the horizontal strips are independent of each other except when motion estimation is used. In this case, the processing elements have to access regions of the picture that are allocated to neighboring processors. The number of processors is related to the frame rate and the resolution of the image.
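The core transform-coding operation performed by each strip processor (in hardware, by the discrete cosine transform chip) is an 8×8 DCT followed by quantization. A minimal software sketch of that round trip, with an invented quantization step, shows the principle; the actual H.261/MPEG coefficient handling is omitted:

```python
import numpy as np

N = 8
k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
C[0, :] = np.sqrt(1.0 / N)                  # orthonormal DCT-II basis matrix

def dct2(block):
    return C @ block @ C.T                  # separable 2-D DCT

def idct2(coef):
    return C.T @ coef @ C                   # inverse: C is orthogonal

block = np.outer(np.arange(N), np.ones(N)) * 16.0   # smooth vertical ramp
coef = dct2(block)
quantized = np.round(coef / 10.0) * 10.0            # coarse uniform quantizer
recon = idct2(quantized)
print(np.abs(recon - block).max())                  # small reconstruction error
```

Smooth blocks concentrate their energy in a few DCT coefficients, which is why coarse quantization of the rest costs little picture quality.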

  15. Subcortical Modulation in Auditory Processing and Auditory Hallucinations

    PubMed Central

    Ikuta, Toshikazu; DeRosse, Pamela; Argyelan, Miklos; Karlsgodt, Katherine H.; Kingsley, Peter B.; Szeszko, Philip R.; Malhotra, Anil K.

    2015-01-01

    Hearing perception in individuals with auditory hallucinations has not been well studied. Auditory hallucinations have previously been shown to involve primary auditory cortex activation. This activation suggests that auditory hallucinations activate the terminal of the auditory pathway as if auditory signals are submitted from the cochlea, and that a hallucinatory event is therefore perceived as hearing. The primary auditory cortex is stimulated by some unknown source that is outside of the auditory pathway. The current study aimed to assess the outcomes of stimulating the primary auditory cortex through the auditory pathway in individuals who have experienced auditory hallucinations. Sixteen patients with schizophrenia underwent functional magnetic resonance imaging (fMRI) sessions, as well as hallucination assessments. During the fMRI session, auditory stimuli were presented in one-second intervals at times when scanner noise was absent. Participants listened to auditory stimuli of sine waves (SW) (4 kHz-5.5 kHz), English words (EW), and acoustically reversed English words (arEW) in a block design fashion. The arEW were employed to deliver the sound of a human voice with minimal linguistic components. Patients’ auditory hallucination severity was assessed by the auditory hallucination item of the Brief Psychiatric Rating Scale (BPRS). During perception of arEW when compared with perception of SW, bilateral activation of the globus pallidus correlated with severity of auditory hallucinations. EW when compared with arEW did not correlate with auditory hallucination severity. Our findings suggest that the sensitivity of the globus pallidus to the human voice is associated with the severity of auditory hallucination. PMID:26275927

  16. Interval Timing in Children: Effects of Auditory and Visual Pacing Stimuli and Relationships with Reading and Attention Variables

    PubMed Central

    Birkett, Emma E.; Talcott, Joel B.

    2012-01-01

    Motor timing tasks have been employed in studies of neurodevelopmental disorders such as developmental dyslexia and ADHD, where they provide an index of temporal processing ability. Investigations of these disorders have used different stimulus parameters within the motor timing tasks that are likely to affect performance measures. Here we assessed the effect of auditory and visual pacing stimuli on synchronised motor timing performance and its relationship with cognitive and behavioural predictors that are commonly used in the diagnosis of these highly prevalent developmental disorders. Twenty-one children (mean age 9.6 years) completed a finger tapping task in two stimulus conditions, together with additional psychometric measures. As anticipated, synchronisation to the beat (ISI 329 ms) was less accurate in the visually paced condition. Decomposition of timing variance indicated that this effect resulted from differences in the way that visual and auditory paced tasks are processed by central timekeeping and associated peripheral implementation systems. The ability to utilise an efficient processing strategy on the visual task correlated with both reading and sustained attention skills. Dissociations between these patterns of relationship across task modality suggest that not all timing tasks are equivalent. PMID:22900054

  17. Interval timing in children: effects of auditory and visual pacing stimuli and relationships with reading and attention variables.

    PubMed

    Birkett, Emma E; Talcott, Joel B

    2012-01-01

    Motor timing tasks have been employed in studies of neurodevelopmental disorders such as developmental dyslexia and ADHD, where they provide an index of temporal processing ability. Investigations of these disorders have used different stimulus parameters within the motor timing tasks that are likely to affect performance measures. Here we assessed the effect of auditory and visual pacing stimuli on synchronised motor timing performance and its relationship with cognitive and behavioural predictors that are commonly used in the diagnosis of these highly prevalent developmental disorders. Twenty-one children (mean age 9.6 years) completed a finger tapping task in two stimulus conditions, together with additional psychometric measures. As anticipated, synchronisation to the beat (ISI 329 ms) was less accurate in the visually paced condition. Decomposition of timing variance indicated that this effect resulted from differences in the way that visual and auditory paced tasks are processed by central timekeeping and associated peripheral implementation systems. The ability to utilise an efficient processing strategy on the visual task correlated with both reading and sustained attention skills. Dissociations between these patterns of relationship across task modality suggest that not all timing tasks are equivalent.
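The decomposition of timing variance into central-timekeeper and peripheral-implementation components is commonly done with the Wing-Kristofferson two-process model. Assuming that is the method meant here (the abstract does not name it), inter-tap intervals I_n = C_n + M_{n+1} - M_n imply that motor variance equals minus the lag-1 autocovariance, with the clock variance as the remainder:

```python
import numpy as np

def wing_kristofferson(intervals):
    """Split inter-tap interval variance into clock and motor components."""
    x = np.asarray(intervals, dtype=float)
    x = x - x.mean()
    total_var = np.mean(x * x)
    lag1_cov = np.mean(x[:-1] * x[1:])
    motor_var = max(-lag1_cov, 0.0)         # model predicts negative lag-1 covariance
    clock_var = total_var - 2.0 * motor_var
    return clock_var, motor_var

# Synthetic tapping at the 329 ms target interval used in the study:
rng = np.random.default_rng(0)
n = 20000
clock = rng.normal(329.0, 8.0, n)           # central timekeeper intervals (sd 8 ms)
motor = rng.normal(0.0, 4.0, n + 1)         # peripheral implementation delays (sd 4 ms)
intervals = clock + np.diff(motor)          # observed inter-tap intervals
clock_var, motor_var = wing_kristofferson(intervals)
print(clock_var, motor_var)                 # close to 64 (= 8**2) and 16 (= 4**2)
```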

  18. EEG alpha spindles and prolonged brake reaction times during auditory distraction in an on-road driving study.

    PubMed

    Sonnleitner, Andreas; Treder, Matthias Sebastian; Simon, Michael; Willmann, Sven; Ewald, Arne; Buchner, Axel; Schrauf, Michael

    2014-01-01

    Driver distraction is responsible for a substantial number of traffic accidents. This paper describes the impact of an auditory secondary task on drivers' mental states during a primary driving task. N=20 participants performed the test procedure in a car-following task with repeated forced braking on a non-public test track. Performance measures (provoked reaction time to brake lights) and brain activity (EEG alpha spindles) were analyzed to describe distracted drivers. Further, a classification approach was used to investigate whether alpha spindles can predict drivers' mental states. Results show that reaction times and alpha spindle rate increased with time-on-task. Moreover, brake reaction times and alpha spindle rate were significantly higher while driving with the auditory secondary task as opposed to driving only. In single-trial classification, a combination of spindle parameters yielded a median classification error of about 8% in discriminating distracted from alert driving. Reduced driving performance (i.e., prolonged brake reaction times) during increased cognitive load is assumed to be indicated by EEG alpha spindles, enabling the quantification of driver distraction in experiments on public roads without verbally assessing the drivers' mental states.

  19. Change in Speech Perception and Auditory Evoked Potentials over Time after Unilateral Cochlear Implantation in Postlingually Deaf Adults.

    PubMed

    Purdy, Suzanne C; Kelly, Andrea S

    2016-02-01

    Speech perception varies widely across cochlear implant (CI) users and typically improves over time after implantation. There is also some evidence for improved auditory evoked potentials (shorter latencies, larger amplitudes) after implantation, but few longitudinal studies have examined the relationship between behavioral and evoked potential measures after implantation in postlingually deaf adults. The relationship between speech perception and auditory evoked potentials was investigated in newly implanted cochlear implant users from the day of implant activation to 9 months postimplantation, on five occasions, in 10 adults aged 27 to 57 years who had been bilaterally profoundly deaf for 1 to 30 years prior to receiving a unilateral CI24 cochlear implant. Changes over time in middle latency response (MLR), mismatch negativity, and obligatory cortical auditory evoked potentials and word and sentence speech perception scores were examined. Speech perception improved significantly over the 9-month period. MLRs varied and showed no consistent change over time. Three participants aged in their 50s had absent MLRs. The pattern of change in N1 amplitudes over the five visits varied across participants. P2 area increased significantly for 1,000- and 4,000-Hz tones but not for 250 Hz. The greatest change in P2 area occurred after 6 months of implant experience. Although there was a trend for mismatch negativity peak latency to reduce and width to increase after 3 months of implant experience, there was considerable variability and these changes were not significant. Only 60% of participants had a detectable mismatch initially; this increased to 100% at 9 months. The continued change in P2 area over the period evaluated, with a trend for greater change for right hemisphere recordings, is consistent with the pattern of incremental change in speech perception scores over time. MLR, N1, and mismatch negativity changes were inconsistent and hence P2 may be a more robust measure

  20. Auditory spatial processing in the human cortex.

    PubMed

    Salminen, Nelli H; Tiitinen, Hannu; May, Patrick J C

    2012-12-01

    The auditory system codes spatial locations in a way that deviates from the spatial representations found in other modalities. This difference is especially striking in the cortex, where neurons form topographical maps of visual and tactile space but where auditory space is represented through a population rate code. In this hemifield code, sound source location is represented in the activity of two widely tuned opponent populations, one tuned to the right and the other to the left side of auditory space. Scientists are only beginning to uncover how this coding strategy adapts to various spatial processing demands. This review presents the current understanding of auditory spatial processing in the cortex. To this end, the authors consider how various implementations of the hemifield code may exist within the auditory cortex and how these may be modulated by the stimulation and task context. As a result, a coherent set of neural strategies for auditory spatial processing emerges.
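The hemifield code described above can be illustrated with a toy opponent-population model: two broadly tuned channels, one preferring each side, whose rate difference carries source azimuth. This is a conceptual sketch, not a model fitted to cortical data; the sigmoid tuning and its slope are assumptions:

```python
import numpy as np

def hemifield_rates(azimuth_deg, slope=0.05):
    """Firing rates (0..1) of two opponent populations with broad sigmoidal
    tuning: one preferring the left hemifield, one the right."""
    right = 1.0 / (1.0 + np.exp(-slope * azimuth_deg))
    left = 1.0 / (1.0 + np.exp(slope * azimuth_deg))
    return left, right

def decode_azimuth(left, right, slope=0.05):
    """Read out location from the two population rates: with sigmoid tuning,
    the log ratio of the channels recovers azimuth exactly."""
    return np.log(right / left) / slope

for az in (-60.0, 0.0, 45.0):
    l, r = hemifield_rates(az)
    print(az, decode_azimuth(l, r))  # decoded azimuth matches the input
```

The point of the sketch is that no topographic map is needed: a two-channel rate comparison suffices to represent continuous spatial location.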

  1. Comparisons of time explicit hybrid kinetic-fluid code Architect for Plasma Wakefield Acceleration with a full PIC code

    SciTech Connect

    Massimo, F.; Atzeni, S.; Marocchino, A.

    2016-12-15

    Architect, a time-explicit hybrid code designed to perform quick simulations for electron-driven plasma wakefield acceleration, is described. In order to obtain beam quality acceptable for applications, control of the beam-plasma dynamics is necessary. Particle in Cell (PIC) codes represent the state-of-the-art technique to investigate the underlying physics and possible experimental scenarios; however, PIC codes demand heavy computational resources. The Architect code substantially reduces the need for computational resources by using a hybrid approach: relativistic electron bunches are treated kinetically as in a PIC code and the background plasma as a fluid. Cylindrical symmetry is assumed for the solution of the electromagnetic fields and fluid equations. In this paper, both the underlying algorithms and a comparison with a fully three-dimensional PIC code are reported. The comparison highlights the good agreement between the two models up to the weakly non-linear regimes. In highly non-linear regimes the two models disagree only in a localized region, where the plasma electrons expelled by the bunch close up at the end of the first plasma oscillation.

  2. Comparisons of time explicit hybrid kinetic-fluid code Architect for Plasma Wakefield Acceleration with a full PIC code

    NASA Astrophysics Data System (ADS)

    Massimo, F.; Atzeni, S.; Marocchino, A.

    2016-12-01

    Architect, a time-explicit hybrid code designed to perform quick simulations for electron-driven plasma wakefield acceleration, is described. In order to obtain beam quality acceptable for applications, control of the beam-plasma dynamics is necessary. Particle in Cell (PIC) codes represent the state-of-the-art technique to investigate the underlying physics and possible experimental scenarios; however, PIC codes demand heavy computational resources. The Architect code substantially reduces the need for computational resources by using a hybrid approach: relativistic electron bunches are treated kinetically as in a PIC code and the background plasma as a fluid. Cylindrical symmetry is assumed for the solution of the electromagnetic fields and fluid equations. In this paper, both the underlying algorithms and a comparison with a fully three-dimensional PIC code are reported. The comparison highlights the good agreement between the two models up to the weakly non-linear regimes. In highly non-linear regimes the two models disagree only in a localized region, where the plasma electrons expelled by the bunch close up at the end of the first plasma oscillation.

  3. Time-dependent recycling modeling with edge plasma transport codes

    NASA Astrophysics Data System (ADS)

    Pigarov, A.; Krasheninnikov, S.; Rognlien, T.; Taverniers, S.; Hollmann, E.

    2013-10-01

    First, we discuss extensions to the Macroblob approach which allow more accurate simulation of the dynamics of ELMs, the pedestal, and edge transport with the UEDGE code. Second, we present UEDGE modeling results for an H-mode discharge with infrequent ELMs and large pedestal losses on DIII-D. In the modeled sequence of ELMs this discharge attains a dynamic equilibrium. Temporal evolution of pedestal plasma profiles, spectral line emission, and surface temperature matching experimental data over the ELM cycle is discussed. Analysis of dynamic gas balance highlights the important role of material surfaces. We quantified the wall outgassing between ELMs as 3X the NBI fueling and the recycling coefficient as 0.8 for wall pumping via macroblob-wall interactions. Third, we present results from a multiphysics version of UEDGE with built-in, reduced, 1-D wall models and analyze the role of various PMI processes. Progress on the framework-coupled UEDGE/WALLPSI code is discussed. Finally, implicit coupling schemes are an important feature of multiphysics codes; we report the results of a parametric analysis of convergence and performance for Picard and Newton iterations in a system of coupled deterministic-stochastic ODEs and propose modifications enhancing convergence.

  4. The topography of frequency and time representation in primate auditory cortices

    PubMed Central

    Baumann, Simon; Joly, Olivier; Rees, Adrian; Petkov, Christopher I; Sun, Li; Thiele, Alexander; Griffiths, Timothy D

    2015-01-01

    Natural sounds can be characterised by their spectral content and temporal modulation, but how the brain is organized to analyse these two critical sound dimensions remains uncertain. Using functional magnetic resonance imaging, we demonstrate a topographical representation of amplitude modulation rate in the auditory cortex of awake macaques. The representation of this temporal dimension is organized in approximately concentric bands of equal rates across the superior temporal plane in both hemispheres, progressing from high rates in the posterior core to low rates in the anterior core and lateral belt cortex. In A1 the resulting gradient of modulation rate runs approximately perpendicular to the axis of the tonotopic gradient, suggesting an orthogonal organisation of spectral and temporal sound dimensions. In auditory belt areas this relationship is more complex. The data suggest a continuous representation of modulation rate across several physiological areas, in contradistinction to a separate representation of frequency within each area. DOI: http://dx.doi.org/10.7554/eLife.03256.001 PMID:25590651

  5. Incorporating Auditory Models in Speech/Audio Applications

    NASA Astrophysics Data System (ADS)

    Krishnamoorthi, Harish

    2011-12-01

    Following the success in incorporating perceptual models in audio coding algorithms, their application in other speech/audio processing systems is expanding. In general, all perceptual speech/audio processing algorithms involve minimization of an objective function that directly/indirectly incorporates properties of human perception. This dissertation primarily investigates the problems associated with directly embedding an auditory model in the objective function formulation and proposes possible solutions to overcome high complexity issues for use in real-time speech/audio algorithms. Specific problems addressed in this dissertation include: 1) the development of approximate but computationally efficient auditory model implementations that are consistent with the principles of psychoacoustics, 2) the development of a mapping scheme that allows synthesizing a time/frequency domain representation from its equivalent auditory model output. The first problem is aimed at addressing the high computational complexity involved in solving perceptual objective functions that require repeated application of the auditory model for evaluation of different candidate solutions. In this dissertation, frequency-pruning and detector-pruning algorithms are developed that efficiently implement the various auditory model stages. The performance of the pruned model is compared to that of the original auditory model for different types of test signals in the SQAM database. Experimental results indicate only a 4-7% relative error in loudness while attaining up to 80-90% reduction in computational complexity. Similarly, a hybrid algorithm is developed specifically for use with sinusoidal signals that employs the proposed auditory pattern combining technique together with a look-up table to store representative auditory patterns. The second problem obtains an estimate of the auditory representation that minimizes a perceptual objective function and transforms the auditory pattern back to

  6. Long-range correlation properties in timing of skilled piano performance: the influence of auditory feedback and deep brain stimulation.

    PubMed

    Herrojo Ruiz, María; Hong, Sang Bin; Hennig, Holger; Altenmüller, Eckart; Kühn, Andrea A

    2014-01-01

    Unintentional timing deviations during musical performance can be conceived of as timing errors. However, recent research on humanizing computer-generated music has demonstrated that timing fluctuations that exhibit long-range temporal correlations (LRTC) are preferred by human listeners. This preference can be accounted for by the ubiquitous presence of LRTC in human tapping and rhythmic performances. Interestingly, the manifestation of LRTC in tapping behavior seems to be driven in a subject-specific manner by the LRTC properties of resting-state background cortical oscillatory activity. In this framework, the current study aimed to investigate whether propagation of timing deviations during the skilled, memorized piano performance (without metronome) of 17 professional pianists exhibits LRTC and whether the structure of the correlations is influenced by the presence or absence of auditory feedback. As an additional goal, we set out to investigate the influence of altering the dynamics along the cortico-basal-ganglia-thalamo-cortical network via deep brain stimulation (DBS) on the LRTC properties of musical performance. Specifically, we investigated temporal deviations during the skilled piano performance of a non-professional pianist who was treated with subthalamic-deep brain stimulation (STN-DBS) due to severe Parkinson's disease, with predominant tremor affecting his right upper extremity. In the tremor-affected right hand, the timing fluctuations of the performance exhibited random correlations with DBS OFF. By contrast, DBS restored long-range dependency in the temporal fluctuations, corresponding with the general motor improvement on DBS. Overall, the present investigations demonstrate the presence of LRTC in skilled piano performances, indicating that unintentional temporal deviations are correlated over a wide range of time scales. This phenomenon is stable after removal of the auditory feedback, but is altered by STN-DBS, which suggests that cortico
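Long-range temporal correlations of the kind reported above are typically quantified with detrended fluctuation analysis (DFA), whose scaling exponent alpha is about 0.5 for uncorrelated fluctuations and above 0.5 for long-range dependency. A minimal first-order DFA sketch; the window sizes and test series are illustrative, not the study's data:

```python
import numpy as np

def dfa_exponent(x, scales=(16, 32, 64, 128, 256)):
    """First-order DFA: slope of log F(s) vs. log s.
    alpha ~ 0.5 for white noise; alpha > 0.5 indicates LRTC."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())              # integrated profile
    F = []
    for s in scales:
        n_win = len(y) // s
        resid = []
        for i in range(n_win):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)     # linear detrend per window
            resid.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(resid)))    # RMS fluctuation at scale s
    slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return slope

rng = np.random.default_rng(1)
white = rng.normal(size=4096)                # uncorrelated timing noise
walk = np.cumsum(rng.normal(size=4096))      # strongly correlated series
print(dfa_exponent(white), dfa_exponent(walk))
```

In the study's terms, "random correlations" correspond to alpha near 0.5 (the white-noise case), while restored long-range dependency shows up as a clearly larger exponent.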

  7. Long-range correlation properties in timing of skilled piano performance: the influence of auditory feedback and deep brain stimulation

    PubMed Central

    Herrojo Ruiz, María; Hong, Sang Bin; Hennig, Holger; Altenmüller, Eckart; Kühn, Andrea A.

    2014-01-01

    Unintentional timing deviations during musical performance can be conceived of as timing errors. However, recent research on humanizing computer-generated music has demonstrated that timing fluctuations that exhibit long-range temporal correlations (LRTC) are preferred by human listeners. This preference can be accounted for by the ubiquitous presence of LRTC in human tapping and rhythmic performances. Interestingly, the manifestation of LRTC in tapping behavior seems to be driven in a subject-specific manner by the LRTC properties of resting-state background cortical oscillatory activity. In this framework, the current study aimed to investigate whether propagation of timing deviations during the skilled, memorized piano performance (without metronome) of 17 professional pianists exhibits LRTC and whether the structure of the correlations is influenced by the presence or absence of auditory feedback. As an additional goal, we set out to investigate the influence of altering the dynamics along the cortico-basal-ganglia-thalamo-cortical network via deep brain stimulation (DBS) on the LRTC properties of musical performance. Specifically, we investigated temporal deviations during the skilled piano performance of a non-professional pianist who was treated with subthalamic-deep brain stimulation (STN-DBS) due to severe Parkinson's disease, with predominant tremor affecting his right upper extremity. In the tremor-affected right hand, the timing fluctuations of the performance exhibited random correlations with DBS OFF. By contrast, DBS restored long-range dependency in the temporal fluctuations, corresponding with the general motor improvement on DBS. Overall, the present investigations demonstrate the presence of LRTC in skilled piano performances, indicating that unintentional temporal deviations are correlated over a wide range of time scales. This phenomenon is stable after removal of the auditory feedback, but is altered by STN-DBS, which suggests that cortico

  8. Potassium conductance dynamics confer robust spike-time precision in a neuromorphic model of the auditory brain stem

    PubMed Central

    Boahen, Kwabena

    2013-01-01

    A fundamental question in neuroscience is how neurons perform precise operations despite inherent variability. This question also applies to neuromorphic engineering, where low-power microchips emulate the brain using large populations of diverse silicon neurons. Biological neurons in the auditory pathway display precise spike timing, critical for sound localization and interpretation of complex waveforms such as speech, even though they are a heterogeneous population. Silicon neurons are also heterogeneous, due to a key design constraint in neuromorphic engineering: smaller transistors offer lower power consumption and more neurons per unit area of silicon, but also more variability between transistors and thus between silicon neurons. Utilizing this variability in a neuromorphic model of the auditory brain stem with 1,080 silicon neurons, we found that a low-voltage-activated potassium conductance (gKL) enables precise spike timing via two mechanisms: statically reducing the resting membrane time constant and dynamically suppressing late synaptic inputs. The relative contribution of these two mechanisms is unknown because blocking gKL in vitro eliminates dynamic adaptation but also lengthens the membrane time constant. We replaced gKL with a static leak in silico to recover the short membrane time constant and found that silicon neurons could mimic the spike-time precision of their biological counterparts, but only over a narrow range of stimulus intensities and biophysical parameters. The dynamics of gKL were required for precise spike timing robust to stimulus variation across a heterogeneous population of silicon neurons, thus explaining how neural and neuromorphic systems may perform precise operations despite inherent variability. PMID:23554436
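The static mechanism described above follows directly from the membrane time constant tau = C_m / g_total: adding the gKL conductance at rest shortens tau and thus narrows the window over which synaptic inputs summate. A sketch with hypothetical round-number values, not the paper's measured parameters:

```python
# Illustrative membrane parameters (assumptions, not measured values).
C_m = 20e-12          # 20 pF membrane capacitance
g_leak = 2e-9         # 2 nS resting leak conductance
g_KL = 8e-9           # 8 nS low-voltage-activated K+ conductance (static part)

tau_without = C_m / g_leak           # time constant with leak only
tau_with = C_m / (g_leak + g_KL)     # gKL adds conductance, shortening tau

print(tau_without * 1e3, tau_with * 1e3)  # 10.0 ms vs 2.0 ms
```

Replacing gKL with an equivalent static leak (as done in silico in the study) preserves this short time constant but removes the dynamic suppression of late inputs, which is why precision then holds only over a narrow stimulus range.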

  9. [Auditory fatigue].

    PubMed

    Sanjuán Juaristi, Julio; Sanjuán Martínez-Conde, Mar

    2015-01-01

    Given the relevance of possible hearing losses due to sound overloads and the short list of references of objective procedures for their study, we provide a technique that gives precise data about the audiometric profile and recruitment factor. Our objectives were to determine peripheral fatigue, through the cochlear microphonic response to sound pressure overload stimuli, as well as to measure recovery time, establishing parameters for differentiation with regard to current psychoacoustic and clinical studies. We used specific instruments for the study of cochlear microphonic response, plus a function generator that provided us with stimuli of different intensities and harmonic components. In Wistar rats, we first measured the normal microphonic response and then the effect of auditory fatigue on it. Using a 60 dB pure tone acoustic stimulation, we obtained a microphonic response at 20 dB. We then caused fatigue with 100 dB of the same frequency, reaching a loss of approximately 11 dB after 15 minutes; after that, the deterioration slowed and did not exceed 15 dB. By means of complex random tone maskers or white noise, no fatigue was caused to the sensory receptors, not even at levels of 100 dB and over an hour of overstimulation. No fatigue was observed in terms of sensory receptors. Deterioration of peripheral perception through intense overstimulation may be due to biochemical changes of desensitisation due to exhaustion. Auditory fatigue in subjective clinical trials presumably affects supracochlear sections. The auditory fatigue tests found are not in line with those obtained subjectively in clinical and psychoacoustic trials. Copyright © 2013 Elsevier España, S.L.U. y Sociedad Española de Otorrinolaringología y Patología Cérvico-Facial. All rights reserved.

  10. Development of the auditory system.

    PubMed

    Litovsky, Ruth

    2015-01-01

    Auditory development involves changes in the peripheral and central nervous system along the auditory pathways, and these occur naturally, and in response to stimulation. Human development occurs along a trajectory that can last decades, and is studied using behavioral psychophysics, as well as physiologic measurements with neural imaging. The auditory system constructs a perceptual space that takes information from objects and groups, segregates sounds, and provides meaning and access to communication tools such as language. Auditory signals are processed in a series of analysis stages, from peripheral to central. Coding of information has been studied for features of sound, including frequency, intensity, loudness, and location, in quiet and in the presence of maskers. In the latter case, the ability of the auditory system to perform an analysis of the scene becomes highly relevant. While some basic abilities are well developed at birth, there is a clear prolonged maturation of auditory development well into the teenage years. Maturation involves auditory pathways. However, non-auditory changes (attention, memory, cognition) play an important role in auditory development. The ability of the auditory system to adapt in response to novel stimuli is a key feature of development throughout the nervous system, known as neural plasticity.

  11. Development of the auditory system

    PubMed Central

    Litovsky, Ruth

    2015-01-01

    Auditory development involves changes in the peripheral and central nervous system along the auditory pathways, and these occur naturally, and in response to stimulation. Human development occurs along a trajectory that can last decades, and is studied using behavioral psychophysics, as well as physiologic measurements with neural imaging. The auditory system constructs a perceptual space that takes information from objects and groups, segregates sounds, and provides meaning and access to communication tools such as language. Auditory signals are processed in a series of analysis stages, from peripheral to central. Coding of information has been studied for features of sound, including frequency, intensity, loudness, and location, in quiet and in the presence of maskers. In the latter case, the ability of the auditory system to perform an analysis of the scene becomes highly relevant. While some basic abilities are well developed at birth, there is a clear prolonged maturation of auditory development well into the teenage years. Maturation involves auditory pathways. However, non-auditory changes (attention, memory, cognition) play an important role in auditory development. The ability of the auditory system to adapt in response to novel stimuli is a key feature of development throughout the nervous system, known as neural plasticity. PMID:25726262

  12. Auditory reaction time and the derivation of equal loudness contours for the monkey

    PubMed Central

    Stebbins, William C.

    1966-01-01

    Monkeys were trained to release a telegraph key at the onset of a pure tone. Latency of the response was measured over a 70-db range of sound pressure (re 0.0002 dyn/cm2) at six frequencies (250 to 15,000 cps). Latency was found to be an inverse exponential function of intensity at all frequencies. Equal loudness was inferred from the equal latency contours which were constructed from the latency-intensity functions at each frequency. These data indicate peak auditory sensitivity for the monkey near 1000 cps. At the frequencies above and below 1000 cps consistently more sound energy was required for equal latency. PMID:4955967
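An inverse-exponential latency-intensity relation can be written L(I) = L0 + k*exp(-b*I); inverting it at a fixed target latency yields the intensity needed to produce that latency, which is the basis of the equal-latency (inferred equal-loudness) contours. A sketch with hypothetical parameter values, not fits from the paper:

```python
import numpy as np

# Hypothetical parameters for one frequency's latency-intensity function
# L(I) = L0 + k * exp(-b * I); none of these values come from the paper.
L0, k, b = 250.0, 300.0, 0.05  # asymptotic latency (ms), range (ms), decay per dB

def latency(I_db):
    """Predicted reaction latency (ms) at intensity I_db (dB)."""
    return L0 + k * np.exp(-b * I_db)

def intensity_for_latency(L_ms):
    """Invert the model: intensity needed to reach a target latency.
    Evaluating this at several frequencies (each with its own fitted
    parameters) traces out an equal-latency contour."""
    return -np.log((L_ms - L0) / k) / b

I = intensity_for_latency(latency(40.0))
print(I)  # round-trips to 40.0 dB (within float precision)
```

Frequencies needing more intensity to reach the same latency are, on this logic, less sensitively heard, which is how the monkey's peak sensitivity near 1000 cps was inferred.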

  13. Sources of variability in auditory brain stem evoked potential measures over time.

    PubMed

    Edwards, R M; Buchwald, J S; Tanguay, P E; Schwafel, J A

    1982-02-01

    Auditory brain stem EPs elicited in 10 normal adults by monaural clicks delivered at 72 dB HL, 20/sec showed no significant change in wave latencies or in the ratio of wave I to wave V amplitude across 250-trial subsets, across 1500-trial blocks within a test session, or across two test sessions separated by several months. Sources of maximum variability were determined by using mean squared differences with all but one condition constant. 'Subjects' was shown to contribute the most variability, followed by 'ears', 'sessions' and 'runs'; collapsing across conditions, wave III latencies were found to be the least variable, while wave II showed the most variability. Some EP morphologies showed extra peaks between waves II and IV, a missing wave IV, or wave IV fused with wave V. Such variations in waveform morphology were independent of EMG amplitude and were characteristic of certain individuals.
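The "mean squared differences with all but one condition constant" approach can be sketched as follows: pair up measurements that differ in exactly one factor and average the squared differences per factor. The two-subject, two-session latency table below is invented purely for illustration:

```python
import numpy as np
from itertools import combinations

# Toy wave-V latency table (ms), indexed by subject then session.
# These values are hypothetical, not from the study.
latencies = {
    "s1": {"sess1": 5.6, "sess2": 5.7},
    "s2": {"sess1": 6.1, "sess2": 6.0},
}

def msd(pairs):
    """Mean squared difference over pairs of measurements."""
    return float(np.mean([(a - b) ** 2 for a, b in pairs]))

# Vary SUBJECT while holding session constant:
subj_pairs = [(latencies[a][s], latencies[b][s])
              for s in ("sess1", "sess2")
              for a, b in combinations(latencies, 2)]
# Vary SESSION while holding subject constant:
sess_pairs = [(d["sess1"], d["sess2"]) for d in latencies.values()]

print(msd(subj_pairs), msd(sess_pairs))  # subjects vary more than sessions
```

Comparing the per-factor MSDs ranks the variability sources, which is how 'subjects' was identified as the dominant source in the study.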

  14. Auditory presentation and synchronization in Adobe Flash and HTML5/JavaScript Web experiments.

    PubMed

    Reimers, Stian; Stewart, Neil

    2016-09-01

    Substantial recent research has examined the accuracy of presentation durations and response time measurements for visually presented stimuli in Web-based experiments, with a general conclusion that accuracy is acceptable for most kinds of experiments. However, many areas of behavioral research use auditory stimuli instead of, or in addition to, visual stimuli. Much less is known about auditory accuracy using standard Web-based testing procedures. We used a millisecond-accurate Black Box Toolkit to measure the actual durations of auditory stimuli and the synchronization of auditory and visual presentation onsets. We examined the distribution of timings for 100 presentations of auditory and visual stimuli across two computers with different specs, three commonly used browsers, and code written in either Adobe Flash or JavaScript. We also examined different coding options for attempting to synchronize the auditory and visual onsets. Overall, we found that auditory durations were very consistent, but that the lags between visual and auditory onsets varied substantially across browsers and computer systems.

  15. Transformation of binaural response properties in the ascending auditory pathway: influence of time-varying interaural phase disparity.

    PubMed

    Spitzer, M W; Semple, M N

    1998-12-01

    Transformation of binaural response properties in the ascending auditory pathway: influence of time-varying interaural phase disparity. J. Neurophysiol. 80: 3062-3076, 1998. Previous studies demonstrated that tuning of inferior colliculus (IC) neurons to interaural phase disparity (IPD) is often profoundly influenced by temporal variation of IPD, which simulates the binaural cue produced by a moving sound source. To determine whether sensitivity to simulated motion arises in IC or at an earlier stage of binaural processing we compared responses in IC with those of two major IPD-sensitive neuronal classes in the superior olivary complex (SOC): neurons whose discharges were phase locked (PL) to tonal stimuli and those that were nonphase locked (NPL). Time-varying IPD stimuli consisted of binaural beats, generated by presenting tones of slightly different frequencies to the two ears, and interaural phase modulation (IPM), generated by presenting a pure tone to one ear and a phase-modulated tone to the other. IC neurons and NPL-SOC neurons were more sharply tuned to time-varying than to static IPD, whereas PL-SOC neurons were essentially uninfluenced by the mode of stimulus presentation. Preferred IPD was generally similar in responses to static and time-varying IPD for all unit populations. A few IC neurons were highly influenced by the direction and rate of simulated motion, but the major effect for most IC neurons and all SOC neurons was a linear shift of preferred IPD at high rates, attributable to response latency. Most IC and NPL-SOC neurons were strongly influenced by IPM stimuli simulating motion through restricted ranges of azimuth; simulated motion through partially overlapping azimuthal ranges elicited discharge profiles that were highly discontiguous, indicating that the response associated with a particular IPD is dependent on preceding portions of the stimulus. In contrast, PL-SOC responses tracked instantaneous IPD throughout the trajectory of simulated
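The two stimulus types described above are simple to construct: a binaural beat presents tones of slightly different frequency to the two ears, so the IPD drifts continuously through 360 degrees at the beat rate, while IPM phase-modulates one ear's tone to sweep the IPD through a restricted range. A sketch with illustrative parameters (the carrier, beat rate, and modulation depth are assumptions, not the study's values):

```python
import numpy as np

fs = 44100                       # sample rate (Hz)
t = np.arange(int(fs * 1.0)) / fs  # 1 s of time samples

# Binaural beat: slightly different frequencies at the two ears make the
# interaural phase disparity (IPD) drift continuously at the beat rate.
f, beat = 500.0, 2.0             # 500 Hz carrier, 2 Hz beat (illustrative)
left = np.sin(2 * np.pi * f * t)
right = np.sin(2 * np.pi * (f + beat) * t)

# Interaural phase modulation (IPM): one ear fixed, the other phase-modulated,
# sweeping the IPD back and forth through a restricted range (here +/- 90 deg).
depth = np.pi / 2
right_ipm = np.sin(2 * np.pi * f * t + depth * np.sin(2 * np.pi * beat * t))

# Instantaneous IPD of the binaural beat grows linearly with time.
ipd = 2 * np.pi * beat * t
print(ipd[-1] / (2 * np.pi))     # ~2 full IPD cycles over the 1 s stimulus
```

The key difference for the physiology is that the beat sweeps the full IPD cycle in one direction, whereas IPM confines the trajectory to a chosen azimuthal range and reverses direction, which is what exposed the history dependence of IC and NPL-SOC responses.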

  16. Auditory brainstem response to complex sounds: a tutorial

    PubMed Central

    Skoe, Erika; Kraus, Nina

    2010-01-01

    This tutorial provides a comprehensive overview of the methodological approach to collecting and analyzing auditory brainstem responses to complex sounds (cABRs). cABRs provide a window into how behaviorally relevant sounds such as speech and music are processed in the brain. Because temporal and spectral characteristics of sounds are preserved in this subcortical response, cABRs can be used to assess specific impairments and enhancements in auditory processing. Notably, subcortical function is neither passive nor hardwired but dynamically interacts with higher-level cognitive processes to refine how sounds are transcribed into neural code. This experience-dependent plasticity, which can occur on a number of time scales (e.g., life-long experience with speech or music, short-term auditory training, online auditory processing), helps shape sensory perception. Thus, by being an objective and non-invasive means for examining cognitive function and experience-dependent processes in sensory activity, cABRs have considerable utility in the study of populations where auditory function is of interest (e.g., auditory experts such as musicians, persons with hearing loss, auditory processing and language disorders). This tutorial is intended for clinicians and researchers seeking to integrate cABRs into their clinical and/or research programs. PMID:20084007

  17. Auditory agnosia.

    PubMed

    Slevc, L Robert; Shell, Alison R

    2015-01-01

    Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition.

  18. Robust Timing Synchronization for Aviation Communications, and Efficient Modulation and Coding Study for Quantum Communication

    NASA Technical Reports Server (NTRS)

    Xiong, Fugin

    2003-01-01

    One half of Professor Xiong's effort will investigate robust timing synchronization schemes for dynamically varying characteristics of aviation communication channels. The other half of his time will focus on efficient modulation and coding study for the emerging quantum communications.

  20. Persistent perceptual delay for head movement onset relative to auditory stimuli of different durations and rise times.

    PubMed

    Barnett-Cowan, Michael; Raeder, Sophie M; Bülthoff, Heinrich H

    2012-07-01

    The perception of simultaneity between auditory and vestibular information is crucially important for maintaining a coherent representation of the acoustic environment whenever the head moves. It has been recently reported, however, that despite having similar transduction latencies, vestibular stimuli are perceived significantly later than auditory stimuli when the two are generated simultaneously. This suggests that the perceptual latency of a head movement is longer than that of a co-occurring sound. However, these studies paired a vestibular stimulus of long duration (~1 s), with a continuously changing temporal envelope, against a brief (10-50 ms) sound pulse. In the present study, the stimuli were matched for temporal envelope duration and shape. Participants judged the temporal order of two stimuli: the onset of an active head movement and the onset of a brief (50 ms) or long (1,400 ms) sound with a square- or raised-cosine-shaped envelope. Consistent with previous reports, head movement onset had to precede the onset of a brief sound by about 73 ms in order for the stimuli to be perceived as simultaneous. Lead times for head movements paired with long square-envelope sounds (~100 ms) did not differ significantly from those for brief sounds. Surprisingly, head movements paired with long raised-cosine sounds (~115 ms) had to be presented even earlier than with brief stimuli. This additional lead time could not be accounted for by differences in the comparison stimulus characteristics (temporal envelope duration and shape). Rather, differences between sound conditions were found to be attributable to variability in the time for the head movement to reach peak velocity: the head moved faster when paired with a brief sound. The persistent lead time required for vestibular stimulation provides further evidence that the perceptual latency of vestibular stimulation is greater than that of the other senses.
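    The ~73 ms lead reported above is the point of subjective simultaneity (PSS) of a temporal order judgment. As a minimal sketch of how such a value is read off a response curve, the following snippet interpolates the 50% crossing of the proportion of "sound first" responses; the SOAs and proportions are made up for illustration, not the study's data.

```python
# Hypothetical sketch: estimating the point of subjective simultaneity (PSS)
# from temporal-order-judgment data by linear interpolation of the 50% point.
# SOAs in ms: negative = head movement onset leads the sound.
soas = [-150, -100, -50, 0, 50, 100]
# Proportion of "sound came first" responses at each SOA (made-up values).
p_sound_first = [0.05, 0.20, 0.40, 0.65, 0.85, 0.95]

def pss_by_interpolation(x, p, criterion=0.5):
    """Return the x value where the response proportion crosses `criterion`,
    using linear interpolation between the two bracketing points."""
    for i in range(len(p) - 1):
        if p[i] <= criterion <= p[i + 1]:
            frac = (criterion - p[i]) / (p[i + 1] - p[i])
            return x[i] + frac * (x[i + 1] - x[i])
    raise ValueError("criterion not bracketed by the data")

pss = pss_by_interpolation(soas, p_sound_first)
print(f"PSS: head movement must lead the sound by {-pss:.0f} ms")
```

    With these invented numbers the head movement must lead the sound by 30 ms for perceived simultaneity; a full analysis would fit a psychometric function rather than interpolate.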

  1. Auditory Learning Using a Portable Real-Time Vocoder: Preliminary Findings

    PubMed Central

    Pisoni, David B.

    2015-01-01

    Purpose Although auditory training has traditionally been studied in controlled laboratory settings, interest has been increasing in more interactive options. The authors examine whether such interactive training can result in short-term perceptual learning, and the range of perceptual skills it impacts. Method Experiments 1 (N = 37) and 2 (N = 21) used pre- and posttest measures of speech and nonspeech recognition to find evidence of learning (within subject) and to compare the effects of 3 kinds of training (between subject) on the perceptual abilities of adults with normal hearing listening to simulations of cochlear implant processing. Subjects were given interactive, standard lab-based, or control training experience for 1 hr between the pre- and posttest tasks (unique sets across Experiments 1 & 2). Results Subjects receiving interactive training showed significant learning on a sentence-recognition-in-quiet task (Experiment 1), outperforming controls but not lab-trained subjects following training. Training groups did not differ significantly on any other task, even those directly involved in the interactive training experience. Conclusions Interactive training has the potential to produce learning in 1 domain (sentence recognition in quiet), but the particulars of the present training method (short duration, high complexity) may have limited its benefits to this single criterion task. PMID:25674884

  2. The time course of auditory and language-specific mechanisms in compensation for sibilant assimilation.

    PubMed

    Clayards, Meghan; Niebuhr, Oliver; Gaskell, M Gareth

    2015-01-01

    Models of spoken-word recognition differ on whether compensation for assimilation is language-specific or depends on general auditory processing. English and French participants were taught words that began or ended with the sibilants /s/ and /∫/. Both languages exhibit some assimilation in sibilant sequences (e.g., /s/ becomes like [∫] in dress shop and classe chargée), but they differ in the strength and predominance of anticipatory versus carryover assimilation. After training, participants were presented with novel words embedded in sentences, some of which contained an assimilatory context either preceding or following. A continuum of target sounds ranging from [s] to [∫] was spliced into the novel words, representing a range of possible assimilation strengths. Listeners' perceptions were examined using a visual-world eyetracking paradigm in which the listener clicked on pictures matching the novel words. We found two distinct language-general context effects: a contrastive effect when the assimilating context preceded the target, and flattening of the sibilant categorization function (increased ambiguity) when the assimilating context followed. Furthermore, we found that English but not French listeners were able to resolve the ambiguity created by the following assimilatory context, consistent with their greater experience with assimilation in this context. The combination of these mechanisms allows listeners to deal flexibly with variability in speech forms.

  3. juwvid: Julia code for time-frequency analysis

    NASA Astrophysics Data System (ADS)

    Kawahara, Hajime

    2017-02-01

    Juwvid performs time-frequency analysis. Written in Julia, it uses modified versions of the Wigner distribution, the pseudo Wigner distribution, and the short-time Fourier transform from the MATLAB GPL package tftb-0.2. The modifications include a zero-padding FFT, the non-uniform FFT, the adaptive algorithm of Stankovic, Dakovic, and Thayaparan (2013), the S-method, the L-Wigner distribution, and the polynomial Wigner-Ville distribution.
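    Juwvid itself is Julia code; purely as an illustration of the core idea, here is a naive Python sketch of one time slice of a pseudo Wigner-Ville distribution (not juwvid's implementation; the window length, normalization, and naive O(N^2) DFT are arbitrary choices for clarity).

```python
import cmath

def pseudo_wvd(x, n, win_half=8):
    """One time slice of a discrete pseudo Wigner-Ville distribution:
    the DFT of the windowed instantaneous autocorrelation
    r[m] = x[n+m] * conj(x[n-m]) for m in [-win_half, win_half)."""
    N = 2 * win_half
    r = []
    for m in range(-win_half, win_half):
        a = x[n + m] if 0 <= n + m < len(x) else 0.0
        b = x[n - m] if 0 <= n - m < len(x) else 0.0
        r.append(a * complex(b).conjugate())
    # Naive O(N^2) DFT; a real implementation would use an FFT.
    return [abs(sum(r[i] * cmath.exp(-2j * cmath.pi * k * (i - win_half) / N)
                    for i in range(N))) for k in range(N)]

# A pure tone at 0.125 cycles/sample. The WVD's frequency axis is doubled,
# so bin k corresponds to k/(2N) cycles/sample and the peak lands at bin 4.
tone = [cmath.exp(2j * cmath.pi * 0.125 * t) for t in range(64)]
spectrum = pseudo_wvd(tone, n=32)
peak_bin = max(range(len(spectrum)), key=lambda k: spectrum[k])
print(peak_bin)  # -> 4
```

    The pseudo variant windows the autocorrelation in lag, which smooths the distribution along frequency; the L-Wigner and polynomial variants mentioned above generalize the quadratic kernel.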

  4. Focal manipulations of formant trajectories reveal a role of auditory feedback in the online control of both within-syllable and between-syllable speech timing.

    PubMed

    Cai, Shanqing; Ghosh, Satrajit S; Guenther, Frank H; Perkell, Joseph S

    2011-11-09

    Within the human motor repertoire, speech production has a uniquely high level of spatiotemporal complexity. The production of running speech comprises the traversing of spatial positions with precisely coordinated articulator movements to produce 10-15 sounds/s. How does the brain use auditory feedback, namely the self-perception of produced speech sounds, in the online control of spatial and temporal parameters of multisyllabic articulation? This question has important bearings on the organizational principles of sequential actions, yet its answer remains controversial due to the long latency of the auditory feedback pathway and technical challenges involved in manipulating auditory feedback in precisely controlled ways during running speech. In this study, we developed a novel technique for introducing time-varying, focal perturbations in the auditory feedback during multisyllabic, connected speech. Manipulations of spatial and temporal parameters of the formant trajectory were tested separately on two groups of subjects as they uttered "I owe you a yo-yo." Under these perturbations, significant and specific changes were observed in both the spatial and temporal parameters of the produced formant trajectories. Compensations to spatial perturbations were bidirectional and opposed the perturbations. Furthermore, under perturbations that manipulated the timing of the auditory feedback trajectory (slow-down or speed-up), significant adjustments in syllable timing were observed in the subjects' productions. These results highlight the systematic roles of auditory feedback in the online control of a highly over-learned action such as connected speech articulation and provide a first look at the properties of this type of sensorimotor interaction in sequential movements.

  5. Quantitative Characterization of Super-Resolution Infrared Imaging Based on Time-Varying Focal Plane Coding

    NASA Astrophysics Data System (ADS)

    Wang, X.; Yuan, Y.; Zhang, J.; Chen, Y.; Cheng, Y.

    2014-10-01

    High-resolution imagery has long been the goal of infrared imaging systems. In this paper, a super-resolution infrared imaging method using a time-varying coded mask is proposed based on focal plane coding and compressed sensing theory. The basic idea of this method is to set a coded mask on the focal plane of the optical system so that the same scene can be sampled many times under a time-varying control coding strategy; the super-resolution image is then reconstructed by a sparse optimization algorithm. The simulation results are quantitatively evaluated with the Peak Signal-to-Noise Ratio (PSNR) and the Modulation Transfer Function (MTF), which illustrate the effect of the compressed measurement coefficient r and the coded mask resolution m on the reconstructed image quality. The results show that the proposed method improves infrared imaging quality effectively, which will be helpful for the practical design of new types of high-resolution infrared imaging systems.
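    The PSNR metric used for the quantitative evaluation has a standard definition that can be sketched directly; the two 4-pixel "images" below are made up for illustration.

```python
import math

def psnr(reference, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-size images,
    represented here as flat lists of pixel intensities."""
    mse = sum((r - x) ** 2 for r, x in zip(reference, reconstructed)) / len(reference)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * math.log10(peak ** 2 / mse)

ref = [10, 20, 30, 40]
rec = [11, 19, 31, 39]   # every pixel off by 1 -> MSE = 1
print(f"PSNR: {psnr(ref, rec):.2f} dB")  # 10*log10(255^2) ~ 48.13 dB
```

    Higher PSNR means a reconstruction closer to the reference; the MTF evaluation, by contrast, measures how well spatial frequencies survive the coded-mask optics and is not reproduced here.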

  6. Subjective and Real Time: Coding Under Different Drug States

    PubMed Central

    Sanchez-Castillo, Hugo; Taylor, Kathleen M.; Ward, Ryan D.; Paz-Trejo, Diana B.; Arroyo-Araujo, Maria; Castillo, Oscar Galicia; Balsam, Peter D.

    2016-01-01

    Organisms are constantly extracting information from the temporal structure of the environment, which allows them to select appropriate actions and predict impending changes. Several lines of research have suggested that interval timing is modulated by the dopaminergic system. It has been proposed that higher levels of dopamine cause an internal clock to speed up, whereas less dopamine causes a deceleration of the clock. In most experiments the subjects are first trained to perform a timing task while drug free. Consequently, most of what is known about the influence of dopaminergic modulation of timing is on well-established timing performance. In the current study the impact of altered DA on the acquisition of temporal control was the focal question. Thirty male Sprague-Dawley rats were distributed randomly into three different groups (haloperidol, d-amphetamine or vehicle). Each animal received an injection 15 min prior to the start of every session from the beginning of interval training. The subjects were trained in a Fixed Interval (FI) 16s schedule followed by training on a peak procedure in which 64s non-reinforced peak trials were intermixed with FI trials. In a final test session all subjects were given vehicle injections and 10 consecutive non-reinforced peak trials to see if training under drug conditions altered the encoding of time. The current study suggests that administration of drugs that modulate dopamine does not alter the encoding of temporal durations but does acutely affect the initiation of responding. PMID:27087743

  7. Differential Space-Time Coding Scheme Using Star Quadrature Amplitude Modulation Method

    NASA Astrophysics Data System (ADS)

    Yu, Xiangbin; Xu, DaZhuan; Bi, Guangguo

    2006-12-01

    Differential space-time coding (DSTC) has received much interest as it obviates the requirement of channel state information at the receiver while maintaining the desired properties of space-time coding techniques. In this paper, by introducing the star quadrature amplitude modulation (star QAM) method, two kinds of multiple-amplitude DSTC schemes are proposed. One is based on the differential unitary space-time coding (DUSTC) scheme, and the other is based on the differential orthogonal space-time coding (DOSTC) scheme. Corresponding bit-error-rate (BER) performance and coding-gain analyses are given, respectively. Via multiple-amplitude modulation, the proposed schemes avoid the performance loss that conventional DSTC schemes based on phase-shift keying (PSK) modulation suffer at high spectral efficiency. Compared with conventional PSK-based DSTC schemes, the developed schemes have higher spectral efficiency by carrying information not only on phases but also on amplitudes, and have higher coding gain. Moreover, the first scheme can implement low-complexity differential modulation at different code rates and be applied to any number of transmit antennas, while the second scheme has a simple decoder and a high code rate in the case of 3 and 4 antennas. The simulation results show that our schemes have lower BER when compared with conventional DUSTC and DOSTC schemes.
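    The star-QAM constellations these schemes build on are concentric PSK rings, with information carried on both the ring (amplitude) and the phase. A minimal sketch, with illustrative parameters (2 rings of 8 phases and a ring ratio of 2, not necessarily the paper's):

```python
import cmath

def star_qam(n_rings=2, n_phases=8, ring_ratio=2.0):
    """Hypothetical sketch of a star-QAM constellation: `n_rings` concentric
    amplitude rings, each carrying `n_phases` equally spaced PSK points."""
    points = []
    for r in range(n_rings):
        amp = ring_ratio ** r          # ring amplitudes 1, ratio, ratio^2, ...
        for p in range(n_phases):
            points.append(amp * cmath.exp(2j * cmath.pi * p / n_phases))
    return points

const = star_qam()                      # 16 points: 2 rings x 8 phases
print(len(const), round(abs(const[0]), 3), round(abs(const[8]), 3))
```

    In a DSTC scheme the phase and ring indices would then be encoded differentially across successive space-time blocks, so the receiver can decode from ratios of received blocks without channel estimates.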

  8. Design Report for the Synchronized Position, Velocity, and Time Code Generator

    DTIC Science & Technology

    2015-08-01

    [Keyword-in-context snippets from the report:] "…acknowledged for his contribution of the printed circuit board (PCB) layout of the global positioning system (GPS) Time Code Generator. Brian Mary of ARL is…"; "…identification; NMEA, National Marine Electronics Association; PCB, printed circuit board; PPS, pulse per second; PVT, position, velocity, and time; SOP…"; "ARL-MR-0901 ● AUG 2015 ● US Army Research Laboratory ● Design Report for the Synchronized Position, Velocity, and Time Code…"

  9. The cortical modulation of stimulus-specific adaptation in the auditory midbrain and thalamus: a potential neuronal correlate for predictive coding

    PubMed Central

    Malmierca, Manuel S.; Anderson, Lucy A.; Antunes, Flora M.

    2015-01-01

    To follow an ever-changing auditory scene, the auditory brain is continuously creating a representation of the past to form expectations about the future. Unexpected events will produce an error in the predictions that should “trigger” the network’s response. Indeed, neurons in the auditory midbrain, thalamus, and cortex respond to rarely occurring sounds while adapting to frequently repeated ones, i.e., they exhibit stimulus-specific adaptation (SSA). SSA cannot be explained solely by intrinsic membrane properties, but likely involves the participation of the network. Thus, SSA is envisaged as a high-order form of adaptation that requires the influence of cortical areas. However, present research supports the hypothesis that SSA, at least in its simplest form (i.e., to frequency deviants), can be transmitted in a bottom-up manner through the auditory pathway. Here, we briefly review the underlying neuroanatomy of the corticofugal projections before discussing state-of-the-art studies that demonstrate that the SSA present in the medial geniculate body (MGB) and inferior colliculus (IC) is not inherited from the cortex but can be modulated by the cortex via the corticofugal pathways. By modulating the gain of neurons in the thalamus and midbrain, the auditory cortex (AC) would refine SSA subcortically, preventing irrelevant information from reaching the cortex. PMID:25805974

  10. Adaptive Switching Technique for Space-Time/Frequency Coded OFDM Systems

    NASA Astrophysics Data System (ADS)

    Change, Namseok; Gil, Gye-Tae; Kang, Joonhyuk; Yu, Gangyoul

    In this letter, a switched transmission technique is presented that can provide orthogonal frequency division multiplexing (OFDM) systems with additional diversity gain. Space-time block coding (STBC) and space-frequency block coding (SFBC) are considered for the transmission of the OFDM signals. A proper coding scheme is selected based on the estimated normalized delay spread and normalized Doppler spread; the selection criterion is derived empirically. It is shown through computer simulations that the proposed switching technique can improve the bit error rate (BER) performance of an OFDM system when the corresponding wireless channel exhibits time selectivity as well as frequency selectivity.
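    The switching rule can be caricatured as comparing how badly each code's channel assumption is violated: STBC needs the channel constant across two OFDM symbols (hurt by Doppler), while SFBC needs it constant across adjacent subcarriers (hurt by delay spread). The thresholds below are placeholders, since the paper's criterion is derived empirically.

```python
def select_coding(norm_doppler, norm_delay_spread,
                  doppler_threshold=0.01, delay_threshold=0.1):
    """Hypothetical sketch of an STBC/SFBC switching rule: pick the code
    whose coherence assumption is less violated by the current channel.
    Threshold values are illustrative, not the letter's empirical ones."""
    if norm_doppler / doppler_threshold <= norm_delay_spread / delay_threshold:
        return "STBC"   # slow time variation -> code across time
    return "SFBC"       # slow frequency variation -> code across subcarriers

print(select_coding(norm_doppler=0.001, norm_delay_spread=0.2))   # STBC
print(select_coding(norm_doppler=0.05,  norm_delay_spread=0.02))  # SFBC
```

    In a real system the two spread estimates would come from pilot-based channel estimation, and the thresholds from the kind of BER simulations the letter reports.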

  11. Asymmetry of temporal auditory T-complex: right ear-left hemisphere advantage in Tb timing in children.

    PubMed

    Bruneau, Nicole; Bidet-Caulet, Aurélie; Roux, Sylvie; Bonnet-Brilhault, Frédérique; Gomot, Marie

    2015-02-01

    To investigate brain asymmetry of the temporal auditory evoked potentials (T-complex) in response to monaural stimulation in children compared to adults. Ten children (7 to 9 years) and ten young adults participated in the study. All were right-handed. The auditory stimuli used were tones (1100 Hz, 70 dB SPL, 50 ms duration) delivered monaurally (right, left ear) at four different levels of stimulus onset asynchrony (700-1100-1500-3000 ms). Latency and amplitude of responses were measured at left and right temporal sites according to the ear stimulated. Peaks of the three successive deflections (Na-Ta-Tb) of the T-complex were greater in amplitude and better defined in children than in adults. Amplitude measurements in children indicated that Na culminates on the left hemisphere whatever the ear stimulated whereas Ta and Tb culminate on the right hemisphere but for left ear stimuli only. Peak latency displayed different patterns of asymmetry. Na and Ta displayed shorter latencies for contralateral stimulation. The original finding was that Tb peak latency was the shortest at the left temporal site for right ear stimulation in children. Amplitude increased and/or peak latency decreased with increasing SOA, however no interaction effect was found with recording site or with ear stimulated. Our main original result indicates a right ear-left hemisphere timing advantage for Tb peak in children. The Tb peak would therefore be a good candidate as an electrophysiological marker of ear advantage effects during dichotic stimulation and of functional inter-hemisphere interactions and connectivity in children. Copyright © 2014. Published by Elsevier B.V.

  12. Distributed Space-Time Coding for Cooperative Networks

    DTIC Science & Technology

    2006-12-05

    [Keyword-in-context snippets from the report:] "…log2(M). (10) We assume that the channels are Rayleigh fading, so that |h|^2 is an exponential random variable with expected value sigma_h^2 = 1/r^alpha, where r is…"; "…analysis to Nakagami-m fading channels, and we showed that the advantage decreases as the index m increases, i.e., as the channel tends to be less and less…"; "…symbols over two successive time periods, so that T_SR2D = 2*T_s and T_S2R = 2*log2(M)*T_s/log2(Q). The sequence transmitted by the source-relay pair is…"

  13. Multiplexed fluorescence readout using time responses of color coded signals for biomolecular detection

    PubMed Central

    Nishimura, Takahiro; Ogura, Yusuke; Tanida, Jun

    2016-01-01

    Fluorescence readout is an important technique for detecting biomolecules. In this paper, we present a multiplexed fluorescence readout method using time varied fluorescence signals. To generate the fluorescence signals, coded strands and a set of universal molecular beacons are introduced. Each coded strand represents the existence of an assigned target molecule. The coded strands have coded sequences to generate temporary fluorescence signals through binding to the molecular beacons. The signal generating processes are modeled based on the reaction kinetics between the coded strands and molecular beacons. The model is used to decode the detected fluorescence signals using maximum likelihood estimation. Multiplexed fluorescence readout was experimentally demonstrated with three molecular beacons. Numerical analysis showed that the readout accuracy was enhanced by the use of time-varied fluorescence signals. PMID:28018742
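    Under Gaussian noise, the maximum-likelihood decoding described above reduces to least-squares matching of the observed trace against each candidate code's predicted time course. A hypothetical sketch with a first-order binding model (the paper's actual reaction-kinetics model and parameters are not reproduced here):

```python
import math

def signal_model(k, times, amplitude=1.0):
    """First-order binding kinetics: fluorescence rises as A*(1 - e^(-k*t)).
    A hypothetical stand-in for the paper's reaction-kinetics model."""
    return [amplitude * (1.0 - math.exp(-k * t)) for t in times]

def decode(observed, times, candidate_rates):
    """ML decoding under Gaussian noise = pick the candidate rate constant
    whose predicted time course has the smallest squared error."""
    def sse(k):
        pred = signal_model(k, times)
        return sum((o - p) ** 2 for o, p in zip(observed, pred))
    return min(candidate_rates, key=sse)

times = [0.5 * i for i in range(10)]
truth = signal_model(0.8, times)   # noiseless trace generated with k = 0.8
print(decode(truth, times, candidate_rates=[0.2, 0.8, 2.0]))  # -> 0.8
```

    Each coded strand would map to a distinct rate constant (set by its sequence's hybridization kinetics with the molecular beacons), so identifying the best-fitting rate identifies the target molecule.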

  15. Auditory Imagination.

    ERIC Educational Resources Information Center

    Croft, Martyn

    Auditory imagination is used in this paper to describe a number of issues and activities related to sound and having to do with listening, thinking, recalling, imagining, reshaping, creating, and uttering sounds and words. Examples of auditory imagination in religious and literary works are cited that indicate a belief in an imagined, expected, or…

  16. Differential Cooperative Communications with Space-Time Network Coding

    DTIC Science & Technology

    2010-01-01

    [Keyword-in-context snippets from the report:] "The received signal at U_n in the m-th time slot of Phase I is y_mn^k = sqrt(P_t)*g_mn^k*v_m^k + w_mn^k, (1) where P_t is the power constraint of the user nodes, w…"; "…rate (SER) at U_n for the symbols from U_m is p_mn; the beta_mn are independent Bernoulli random variables with distribution P(beta_mn = 1) = 1 - p_mn and P(beta_mn = 0) = p_mn. (17) The SER for M-QAM modulation can be expressed as [12] p_mn = F_2(1 + b_q*gamma_mn/sin^2(theta)), (18) where b_q = b_QAM/2 = 3/(2(M+1)) and gamma_mn…"

  17. Does a 'code stroke' rapid access protocol decrease door-to-needle time for thrombolysis?

    PubMed

    Tai, Y J; Weir, L; Hand, P; Davis, S; Yan, B

    2012-12-01

    Timely administration of intravenous tissue plasminogen activator (IV-tPA) for acute ischaemic stroke is associated with better clinical outcomes. Therefore, a coordinated hospital system of acute clinical assessment and neuroimaging will likely avoid delays in IV-tPA administration. In July 2007, we implemented a 'code stroke' rapid access protocol at the Royal Melbourne Hospital with the aim of achieving rapid stroke assessment and treatment. This study investigates the quality of our 'code stroke' protocol and its impact on door-to-needle time and IV-tPA usage. We included patients thrombolysed with IV-tPA from January 2003 to June 2007 (pre-code stroke era) and patients thrombolysed from July 2007 to December 2010 (code stroke era). Data collected were demographics, time points (stroke symptom onset, presentation to emergency department, neuroimaging and thrombolysis) and clinical outcomes (modified Rankin Scale score at discharge, symptomatic intracerebral haemorrhage and death during admission). We compared the door-to-needle time and usage of IV-tPA between the two eras. Data on 98 'pre-code stroke' thrombolysed patients and 189 'code stroke' thrombolysed patients were collected. The median age was 71 (60-79), 56% were males, and the median baseline National Institutes of Health Stroke Scale score was 13 ± 6.3. There was an 18-min reduction in the median door-to-needle time (90 min in the 'pre-code stroke era' vs 72 min in the 'code stroke era', P < 0.001). The rate of IV-tPA usage increased from 3.9% in 2004 to 17.3% in 2010. Our study showed that the 'code stroke' rapid access protocol decreased door-to-needle time and possibly contributed to the increased IV-tPA usage. © 2011 The Authors; Internal Medicine Journal © 2011 Royal Australasian College of Physicians.
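    The headline statistic, median door-to-needle time per era, is computed from arrival and needle timestamps. A minimal sketch; the four records below are invented for illustration, not the study's audit data.

```python
from datetime import datetime
from statistics import median

def door_to_needle_minutes(arrival, needle, fmt="%Y-%m-%d %H:%M"):
    """Door-to-needle interval in minutes from two timestamp strings
    (illustrative format; real audit data would carry richer fields)."""
    delta = datetime.strptime(needle, fmt) - datetime.strptime(arrival, fmt)
    return delta.total_seconds() / 60.0

# Made-up example records for the two eras of an audit like the one described.
pre_code = [door_to_needle_minutes("2006-03-01 10:00", "2006-03-01 11:35"),
            door_to_needle_minutes("2006-07-12 14:10", "2006-07-12 15:39")]
code_era = [door_to_needle_minutes("2009-05-03 09:00", "2009-05-03 10:10"),
            door_to_needle_minutes("2010-02-20 18:05", "2010-02-20 19:19")]
print(f"median pre-code: {median(pre_code):.0f} min, "
      f"median code-stroke: {median(code_era):.0f} min")
```

    The study's actual comparison also requires a significance test across the two samples (its reported P < 0.001), not just the medians.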

  18. Quantum circuit for optimal eavesdropping in quantum key distribution using phase-time coding

    SciTech Connect

    Kronberg, D. A.; Molotkov, S. N.

    2010-07-15

    A quantum circuit is constructed for optimal eavesdropping on quantum key distribution protocols using phase-time coding, and its physical implementation based on linear and nonlinear fiber-optic components is proposed.

  19. Comparison of electrostatic and time dependent simulation codes for modeling a pulsed power gun

    SciTech Connect

    Srinivasan-Rao, T.; Smedley, J.; Batchelor, K.; Farrell, J.P.; Dudnikova, G.

    1998-06-01

    This paper reports a series of simulations used to determine the optimal parameters for a pulsed-power electron gun. As electrostatic codes such as PBGUNS tend to be cheaper, easier to use, and have less stringent computational requirements than time-dependent codes such as MAFIA, it was desirable to determine the regimes in which the electrostatic codes agree with time-dependent models. It was also necessary to identify those problems that required time dependence, such as longitudinal variation in an electron bunch. PBGUNS was then used to perform the bulk of the optimization, with only those issues that required time dependence being resolved with MAFIA. Good agreement in transverse phase-space values was found between the electrostatic code (PBGUNS) and the time-dependent code (MAFIA) for a variety of pulse durations, even for pulse durations short compared to the electron transit time of the accelerating region. To obtain values for the longitudinal energy spread and the variation of the transverse phase space across the bunch, it was necessary to use MAFIA. The electrostatic codes have an advantage in terms of required computational resources and run time, and are therefore a good choice for modeling jobs in which the longitudinal energy spread is unimportant.
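    The comparison hinges on pulse duration relative to the electron transit time of the accelerating region. A nonrelativistic back-of-envelope estimate of that transit time for a uniform-field gap; the 1 cm gap and 10 kV voltage are illustrative numbers, not the gun's actual parameters.

```python
import math

E_CHARGE = 1.602e-19   # electron charge, C
E_MASS   = 9.109e-31   # electron mass, kg

def transit_time(gap_m, volts):
    """Transit time of an electron accelerated from rest across a uniform-field
    gap (nonrelativistic): d = (1/2)*a*t^2 with a = e*V/(m*d) gives
    t = d * sqrt(2*m / (e*V))."""
    return gap_m * math.sqrt(2.0 * E_MASS / (E_CHARGE * volts))

# Illustrative: 1 cm gap at 10 kV -> transit time of a few hundred ps.
t = transit_time(0.01, 10e3)
pulse = 50e-12   # an example pulse much shorter than the transit time
print(f"transit ~ {t * 1e12:.0f} ps; pulse/transit = {pulse / t:.2f}")
```

    When the pulse is this short relative to the transit time, one might expect the static-field assumption to fail, which is what makes the reported agreement in transverse phase space notable.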

  20. Real-time C Code Generation in Ptolemy II for the Giotto Model of Computation

    DTIC Science & Technology

    2009-05-20

    [Report documentation fragments:] Real-time C Code Generation in Ptolemy II for the Giotto Model of Computation. Shanna-Shaye Forbes, Electrical Engineering and Computer Sciences… Report date: May 2009; dates covered: 00-00-2009 to 00-00-2009. "…periodic and there are multiple modes of operation. Ptolemy II is a university based open source modeling and simulation framework that supports model…"

  1. Time sequence of auditory nerve and spiral ganglion cell degeneration following chronic kanamycin-induced deafness in the guinea pig.

    PubMed

    Kong, W J; Yin, Z D; Fan, G R; Li, D; Huang, X

    2010-05-17

    We investigated the time sequence of morphological changes of the spiral ganglion cells (SGCs) and auditory nerve (AN) following chronic kanamycin-induced deafness. Guinea pigs were treated with kanamycin by subcutaneous injection at 500 mg/kg per day for 7 days. Histological changes in hair cells, SGCs, Schwann cells and the cross-sectional area of the AN with the vestibular ganglion (VG) in the internal acoustic meatus were quantified at 1, 7, 14, 28, 56 and 70 days after kanamycin treatment. Outer hair cells decreased at 7 and 14 days. Loss of inner hair cells occurred at 14 and 28 days. The cross-sectional area of the AN with VG increased at 1 day and decreased shortly thereafter, following the loss of SGCs and Schwann cells at 7, 14 and 28 days after deafening. There was a similar time course of morphological changes in the overall cochlea and the basal turn. Thus, the effects of kanamycin on hair cells, spiral ganglion and Schwann cells are progressive. Early degeneration of SGCs and Schwann cells mainly results from the direct toxic effect of kanamycin. However, multiple factors, such as hair cell loss, Schwann cell degeneration and the progressive damage of kanamycin, may participate in the late degeneration process of SGCs. The molecular mechanisms of SGC and Schwann cell degeneration should be investigated in the future. Moreover, the time sequence of cell degeneration differs between acute and chronic kanamycin-induced deafness.

  2. Quantitative electromyographic analysis of reaction time to external auditory stimuli in drug-naïve Parkinson's disease.

    PubMed

    Kwon, Do-Young; Park, Byung Kyu; Kim, Ji Won; Eom, Gwang-Moon; Hong, Junghwa; Koh, Seong-Beom; Park, Kun-Woo

    2014-01-01

    Evaluation of motor symptoms in Parkinson's disease (PD) is still based on clinical rating scales by clinicians. Reaction time (RT) is the time interval between a specific stimulus and the start of muscle response. The aim of this study was to identify the characteristics of RT responses in PD patients using electromyography (EMG) and to elucidate the relationship between RT and clinical features of PD. The EMG activity of 31 PD patients was recorded during isometric muscle contraction. RT was defined as the time latency between an auditory beep and responsive EMG activity. PD patients demonstrated significant delays in both initiation and termination of muscle contraction compared with controls. Cardinal motor symptoms of PD were closely correlated with RT. RT was longer in the more-affected side and in more-advanced PD stages. Frontal cognitive function, which is indicative of motor programming and movement regulation and perseveration, was also closely related to RT. In conclusion, prolonged RT is a characteristic motor feature of PD, and it could be used as a sensitive tool for motor function assessment in PD patients. Further investigations are required to clarify the clinical impact of RT on the activities of daily living of patients with PD.
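    Extracting an RT from EMG is typically a threshold crossing on the rectified trace. A hypothetical sketch (the 100 ms baseline window and the mean + 3 SD criterion are common conventions, not necessarily this study's):

```python
def emg_onset_ms(emg, fs_hz, baseline_ms=100, k=3.0):
    """Hypothetical sketch of EMG onset detection: the first sample after the
    baseline window whose rectified amplitude exceeds
    baseline mean + k * baseline SD. Returns latency in ms from trace start."""
    n_base = int(fs_hz * baseline_ms / 1000)
    base = emg[:n_base]
    mean = sum(base) / n_base
    sd = (sum((v - mean) ** 2 for v in base) / n_base) ** 0.5
    threshold = mean + k * sd
    for i in range(n_base, len(emg)):
        if emg[i] > threshold:
            return 1000.0 * i / fs_hz
    return None   # no onset found

# Synthetic rectified trace: 1 kHz sampling, low noisy baseline for 150 ms,
# then a strong burst; onset should be detected at 150 ms.
fs = 1000
trace = [0.01 * ((i * 7919) % 13) / 13 for i in range(150)] + [1.0] * 50
print(emg_onset_ms(trace, fs))  # -> 150.0
```

    In the study's setting the RT would be this onset latency measured from the auditory beep rather than from the start of the recording.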

  4. Perception and coding of interaural time differences with bilateral cochlear implants.

    PubMed

    Laback, Bernhard; Egger, Katharina; Majdak, Piotr

    2015-04-01

    Bilateral cochlear implantation is increasingly becoming the standard in the clinical treatment of bilateral deafness. The main motivation is to provide users of bilateral cochlear implants (CIs) access to binaural cues essential for localizing sound sources and understanding speech in environments of interfering sounds. One of those cues, interaural level differences, can be perceived well by CI users to allow some basic left versus right localization. However, interaural time differences (ITDs), which are important for localization of low-frequency sounds and spatial release from masking, are not adequately represented by clinical envelope-based CI systems. Here, we first review the basic ITD sensitivity of CI users, particularly their dependence on stimulation parameters like stimulation rate and place, modulation rate, and envelope shape in single-electrode stimulation, as well as stimulation level, electrode spacing, and monaural across-electrode timing in multiple-electrode stimulation. Then, we discuss factors involved in ITD perception in electric hearing including the match between highly phase-locked electric auditory nerve response properties and binaural cell properties, the restricted stimulation of apical tonotopic pathways, channel interactions in multiple-electrode stimulation, and the onset age of binaural auditory input. Finally, we present clinically available CI stimulation strategies and experimental strategies aiming at improving listeners' access to ITD cues. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Accuracy and time requirements of a bar-code inventory system for medical supplies.

    PubMed

    Hanson, L B; Weinswig, M H; De Muth, J E

    1988-02-01

    The effects of implementing a bar-code system for issuing medical supplies to nursing units at a university teaching hospital were evaluated. Data on the time required to issue medical supplies to three nursing units at a 480-bed, tertiary-care teaching hospital were collected (1) before the bar-code system was implemented (i.e., when the manual system was in use), (2) one month after implementation, and (3) four months after implementation. At the same times, the accuracy of the central supply perpetual inventory was monitored using 15 selected items. One-way analysis of variance tests were done to determine any significant differences between the bar-code and manual systems. Using the bar-code system took longer than using the manual system because of a significant difference in the time required for order entry into the computer. Multiple-use requirements of the central supply computer system made entering bar-code data a much slower process. There was, however, a significant improvement in the accuracy of the perpetual inventory. Using the bar-code system for issuing medical supplies to the nursing units takes longer than using the manual system. However, the accuracy of the perpetual inventory was significantly improved with the implementation of the bar-code system.

  6. Central projections of auditory nerve fibers in the barn owl.

    PubMed

    Carr, C E; Boudreau, R E

    1991-12-08

    The central projections of the auditory nerve were examined in the barn owl. Each auditory nerve fiber enters the brain and divides to terminate in both the cochlear nucleus angularis and the cochlear nucleus magnocellularis. This division parallels a functional division into intensity and time coding in the auditory system. The lateral branch of the auditory nerve innervates the nucleus angularis and gives rise to a major and a minor terminal field. The terminals range in size and shape from small boutons to large irregular boutons with thorn-like appendages. The medial branch of the auditory nerve conveys phase information to the cells of the nucleus magnocellularis via large axosomatic endings or end bulbs of Held. Each medial branch divides to form 3-6 end bulbs along the rostrocaudal orientation of a single tonotopic band, and each magnocellular neuron receives 1-4 end bulbs. The end bulb envelops the postsynaptic cell body and forms large numbers of synapses. The auditory nerve profiles contain round clear vesicles and form punctate asymmetric synapses on both somatic spines and the cell body.

  7. Alamouti-Type Space-Time Coding for Free-Space Optical Communication with Direct Detection

    NASA Astrophysics Data System (ADS)

    Simon, M. K.; Vilnrotter, V.

    2003-11-01

    In optical communication systems employing direct detection at the receiver, intensity modulations such as on-off keying (OOK) or pulse-position modulation (PPM) are commonly used to convey the information. Consider the possibility of applying space-time coding in such a scenario, using, for example, an Alamouti-type coding scheme [1]. Implicit in the Alamouti code is the fact that the modulation that defines the signal set is such that it is meaningful to transmit and detect both the signal and its negative. While modulations such as phase-shift keying (PSK) and quadrature amplitude modulation (QAM) naturally fall into this class, OOK and PPM do not since the signal polarity (phase) would not be detected at the receiver. We investigate a modification of the Alamouti code to be used with such modulations that has the same desirable properties as the conventional Alamouti code but does not rely on the necessity of transmitting the negative of a signal.
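
The conventional Alamouti scheme that the abstract modifies can be sketched for two transmit antennas and one receive antenna; the channel gains and symbols below are illustrative. The negated conjugate symbol in the second slot is exactly what direct detection of OOK or PPM cannot convey, and what the paper's modification replaces.

```python
# Sketch of conventional Alamouti 2x1 space-time coding with linear
# combining at the receiver. Channel gains and symbols are illustrative;
# the paper's OOK/PPM variant avoids transmitting negated symbols.

def alamouti_encode(s1, s2):
    """Return the two transmit slots as (antenna1, antenna2) pairs."""
    slot1 = (s1, s2)
    slot2 = (-s2.conjugate(), s1.conjugate())
    return slot1, slot2

def alamouti_combine(r1, r2, h1, h2):
    """Linear combining at a single receive antenna (flat fading)."""
    s1_hat = h1.conjugate() * r1 + h2 * r2.conjugate()
    s2_hat = h2.conjugate() * r1 - h1 * r2.conjugate()
    return s1_hat, s2_hat

if __name__ == "__main__":
    h1, h2 = 0.8 + 0.3j, 0.5 - 0.6j      # assumed channel gains
    s1, s2 = 1 + 0j, -1 + 0j             # BPSK symbols
    (a, b), (c, d) = alamouti_encode(s1, s2)
    r1 = h1 * a + h2 * b                 # received, slot 1
    r2 = h1 * c + h2 * d                 # received, slot 2
    s1_hat, s2_hat = alamouti_combine(r1, r2, h1, h2)
    gain = abs(h1) ** 2 + abs(h2) ** 2
    print(s1_hat / gain, s2_hat / gain)  # ≈ s1, s2
```

Combining yields (|h1|² + |h2|²)·s for each symbol, which is the diversity gain that makes the code attractive.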

  8. Modeling neural adaptation in the frog auditory system

    NASA Astrophysics Data System (ADS)

    Wotton, Janine; McArthur, Kimberly; Bohara, Amit; Ferragamo, Michael; Megela Simmons, Andrea

    2005-09-01

    Extracellular recordings from the auditory midbrain, Torus semicircularis, of the leopard frog reveal a wide diversity of tuning patterns. Some cells seem to be well suited for time-based coding of signal envelope, and others for rate-based coding of signal frequency. Adaptation to ongoing stimuli plays a significant role in shaping the frequency-dependent response rate at different levels of the frog auditory system. Anuran auditory-nerve fibers are unusual in that they reveal frequency-dependent adaptation [A. L. Megela, J. Acoust. Soc. Am. 75, 1155-1162 (1984)], and therefore provide rate-based input. In order to examine the influence of these peripheral inputs on central responses, three layers of auditory neurons were modeled to examine short-term neural adaptation to pure tones and complex signals. The response of each neuron was simulated with a leaky integrate-and-fire model, and adaptation was implemented by means of an increasing threshold. Auditory-nerve fibers, dorsal medullary nucleus neurons, and toral cells were simulated and connected in three ascending layers. Modifying the adaptation properties of the peripheral fibers dramatically alters the response at the midbrain. [Work supported by NOHR to M.J.F.; Gustavus Presidential Scholarship to K.McA.; NIH DC05257 to A.M.S.]
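
A leaky integrate-and-fire neuron with an increasing (adapting) threshold, the mechanism the model uses for short-term adaptation, can be sketched as follows. All constants are illustrative, not the authors' fitted values.

```python
# Sketch of a leaky integrate-and-fire neuron whose threshold jumps at each
# spike and decays back, producing spike-rate adaptation to ongoing input.
# All parameters are illustrative assumptions.

def lif_adapting(current, dt=0.001, tau_m=0.01, tau_th=0.1,
                 v_rest=0.0, th0=1.0, th_jump=0.5):
    """Return spike times (s) for an input current sampled at dt."""
    v, th = v_rest, th0
    spikes = []
    for i, i_in in enumerate(current):
        v += dt / tau_m * (-(v - v_rest) + i_in)   # leaky integration
        th += dt / tau_th * (th0 - th)             # threshold relaxes to th0
        if v >= th:
            spikes.append(i * dt)
            v = v_rest                             # reset after spike
            th += th_jump                          # adaptation: raise threshold
    return spikes

if __name__ == "__main__":
    times = lif_adapting([2.0] * 1000)             # 1 s of constant drive
    isis = [b - a for a, b in zip(times, times[1:])]
    print(isis[:3], isis[-3:])                     # intervals lengthen
```

With constant drive, the inter-spike intervals grow over time, which is the adaptation behavior the abstract describes.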

  9. Auditory pathways: anatomy and physiology.

    PubMed

    Pickles, James O

    2015-01-01

    This chapter outlines the anatomy and physiology of the auditory pathways. After a brief analysis of the external and middle ears and the cochlea, the responses of auditory nerve fibers are described. The central nervous system is analyzed in more detail. A scheme is provided to help understand the complex and multiple auditory pathways running through the brainstem. The multiple pathways are based on the need to preserve accurate timing while extracting complex spectral patterns in the auditory input. The auditory nerve fibers branch to give two pathways, a ventral sound-localizing stream, and a dorsal mainly pattern recognition stream, which innervate the different divisions of the cochlear nucleus. The outputs of the two streams, with their two types of analysis, are progressively combined in the inferior colliculus and onwards, to produce the representation of what can be called the "auditory objects" in the external world. The progressive extraction of critical features in the auditory stimulus in the different levels of the central auditory system, from cochlear nucleus to auditory cortex, is described. In addition, the auditory centrifugal system, running from cortex in multiple stages to the organ of Corti of the cochlea, is described. © 2015 Elsevier B.V. All rights reserved.

  10. Real-time transmission of digital video using variable-length coding

    NASA Technical Reports Server (NTRS)

    Bizon, Thomas P.; Shalkhauser, Mary Jo; Whyte, Wayne A., Jr.

    1993-01-01

    Huffman coding is a variable-length lossless compression technique where data with a high probability of occurrence is represented with short codewords, while 'not-so-likely' data is assigned longer codewords. Compression is achieved when the high-probability levels occur so frequently that their benefit outweighs any penalty paid when a less likely input occurs. One instance where Huffman coding is extremely effective occurs when data is highly predictable and differential coding can be applied (as with a digital video signal). For that reason, it is desirable to apply this compression technique to digital video transmission; however, special care must be taken in order to implement a communication protocol utilizing Huffman coding. This paper addresses several of the issues relating to the real-time transmission of Huffman-coded digital video over a constant-rate serial channel. Topics discussed include data rate conversion (from variable to a fixed rate), efficient data buffering, channel coding, recovery from communication errors, decoder synchronization, and decoder architectures. A description of the hardware developed to execute Huffman coding and serial transmission is also included. Although this paper focuses on matters relating to Huffman-coded digital video, the techniques discussed can easily be generalized for a variety of applications which require transmission of variable-length data.
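
The combination described above, differential coding followed by Huffman coding, can be sketched briefly. The sample values and tree-building details are illustrative, not the paper's hardware implementation.

```python
# Sketch: Huffman coding of differentially coded samples. A smooth signal
# (like video) yields differences concentrated near zero, so the frequent
# small values receive short codewords. Sample data are illustrative.
import heapq
from collections import Counter

def huffman_code(freqs):
    """Build a prefix code: frequent symbols get short codewords."""
    heap = [[w, i, {s: ""}] for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tick = len(heap)                      # tie-breaker so dicts never compare
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, [w1 + w2, tick, merged])
        tick += 1
    return heap[0][2]

if __name__ == "__main__":
    samples = [10, 11, 11, 12, 12, 12, 13, 12, 12, 11]
    diffs = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]
    code = huffman_code(Counter(diffs))
    bits = "".join(code[d] for d in diffs)
    print(code, len(bits), "bits")
```

The frequent difference value 0 gets the shortest codeword, while the rare initial value 10 gets the longest, which is where the compression comes from.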

  11. Interference between postural control and spatial vs. non-spatial auditory reaction time tasks in older adults.

    PubMed

    Fuhrman, Susan I; Redfern, Mark S; Jennings, J Richard; Furman, Joseph M

    2015-01-01

    This study investigated whether spatial aspects of an information processing task influence dual-task interference. Two groups (Older/Young) of healthy adults participated in dual-task experiments. Two auditory information processing tasks included a frequency discrimination choice reaction time task (non-spatial task) and a lateralization choice reaction time task (spatial task). Postural tasks included combinations of standing with eyes open or eyes closed on either a fixed floor or a sway-referenced floor. Reaction times and postural sway via center of pressure were recorded. Baseline measures of reaction time and sway were subtracted from the corresponding dual-task results to calculate reaction time task costs and postural task costs. Reaction time task cost increased with eye closure (p = 0.01), sway-referenced flooring (p < 0.0001), and the spatial task (p = 0.04). Additionally, a significant (p = 0.05) task × vision × age interaction indicated that older subjects had a significant vision × task interaction whereas young subjects did not. However, when analyzed by age group, the young group showed minimal differences in interference for the spatial and non-spatial tasks with eyes open, but showed increased interference on the spatial relative to non-spatial task with eyes closed. In contrast, older subjects demonstrated increased interference on the spatial relative to the non-spatial task with eyes open, but not with eyes closed. These findings suggest that visual-spatial interference may occur in older subjects when vision is used to maintain posture.

  12. From ear to body: the auditory-motor loop in spatial cognition

    PubMed Central

    Viaud-Delmon, Isabelle; Warusfel, Olivier

    2014-01-01

    Spatial memory is mainly studied through the visual sensory modality: navigation tasks in humans rarely integrate dynamic and spatial auditory information. In order to study how a spatial scene can be memorized on the basis of auditory and idiothetic cues only, we constructed an auditory equivalent of the Morris water maze, a task widely used to assess spatial learning and memory in rodents. Participants were equipped with wireless headphones, which delivered a soundscape updated in real time according to their movements in 3D space. A wireless tracking system (video infrared with passive markers) was used to send the coordinates of the subject's head to the sound rendering system. The rendering system used advanced HRTF-based synthesis of directional cues and room acoustic simulation for the auralization of a realistic acoustic environment. Participants were guided blindfolded in an experimental room. Their task was to explore a delimited area in order to find a hidden auditory target, i.e., a sound that was only triggered when walking on a precise location of the area. The position of this target could be coded in relationship to auditory landmarks constantly rendered during the exploration of the area. The task was composed of a practice trial, 6 acquisition trials during which they had to memorize the localization of the target, and 4 test trials in which some aspects of the auditory scene were modified. The task ended with a probe trial in which the auditory target was removed. The configuration of the search paths revealed how auditory information was coded to memorize the position of the target, and suggested that space can be efficiently coded without visual information in normally sighted subjects. In conclusion, space representation can be based on sensorimotor and auditory cues only, providing another argument in favor of the hypothesis that the brain has access to a modality-invariant representation of external space. PMID:25249933

  13. EBR-II time constant calculation using the EROS kinetics code

    SciTech Connect

    Grimm, K.N.; Meneghetti, D.

    1986-01-01

    System time constants are important parameters in determining the dynamic behavior of reactors. One method of determining basic time constants is to apply a step change in power level and determine the resulting temperature change. This methodology can be done using any computer code that calculates temperature versus time given either a power input or a reactivity input. In the current analysis this is done using the reactor kinetics code EROS. As an example of this methodology, the time constant is calculated for an Experimental Breeder Reactor II (EBR-II) fuel pin.
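
The step-change method can be illustrated with a first-order system: apply a power step, record the temperature response, and read off the time at which the response reaches 63.2% (1 - 1/e) of its total change. The numbers below are illustrative, not EBR-II data.

```python
# Sketch of extracting a first-order time constant from a step-response
# record: the time to reach 63.2% of the total temperature change.
# The simulated response below uses an assumed tau of 4 s.
import math

def step_response_time_constant(times, temps):
    """Estimate tau as the time where temps crosses 63.2% of its change."""
    t0, t_final = temps[0], temps[-1]
    target = t0 + (1 - math.exp(-1)) * (t_final - t0)
    for t, temp in zip(times, temps):
        if temp >= target:
            return t
    return None

if __name__ == "__main__":
    tau = 4.0                                        # illustrative, seconds
    times = [i * 0.01 for i in range(4000)]
    temps = [300 + 50 * (1 - math.exp(-t / tau)) for t in times]
    print(step_response_time_constant(times, temps))  # ≈ 4.0
```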

  14. Rapid programmable/code-length-variable, time-domain bit-by-bit code shifting for high-speed secure optical communication.

    PubMed

    Gao, Zhensen; Dai, Bo; Wang, Xu; Kataoka, Nobuyuki; Wada, Naoya

    2011-05-01

    We propose and experimentally demonstrate a time-domain bit-by-bit code-shifting scheme that can rapidly program ultralong, code-length-variable optical codes by using only a dispersive element and a high-speed phase modulator for improving information security. The proposed scheme operates in the bit overlap regime and could eliminate the vulnerability of extracting the code by analyzing the fine structure of the time-domain spectral phase encoded signal. It is also intrinsically immune to eavesdropping via conventional power detection and differential-phase-shift-keying (DPSK) demodulation attacks. With this scheme, 10 Gbits/s of return-to-zero-DPSK data secured by bit-by-bit code shifting using up to 1024 chip optical code patterns have been transmitted error free over 49 km. The proposed scheme shows potential for high-data-rate secure optical communication and could even realize a one-time pad.

  15. Real-time minimal bit error probability decoding of convolutional codes

    NASA Technical Reports Server (NTRS)

    Lee, L. N.

    1973-01-01

    A recursive procedure is derived for decoding of rate R=1/n binary convolutional codes which minimizes the probability of error of the individual decoding decisions for each information bit subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e. fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications such as in the inner coding system for concatenated coding.

  16. Real-time minimal-bit-error probability decoding of convolutional codes

    NASA Technical Reports Server (NTRS)

    Lee, L.-N.

    1974-01-01

    A recursive procedure is derived for decoding of rate R = 1/n binary convolutional codes which minimizes the probability of error of the individual decoding decisions for each information bit, subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e., fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications, such as in the inner coding system for concatenated coding.
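
The Viterbi baseline against which the new algorithm is compared can be sketched for the standard rate-1/2, constraint-length-3 code with generators (7, 5) octal. This is full-sequence, hard-decision Viterbi decoding, not the paper's fixed-delay or minimal-bit-error-probability variant; the message and error position are illustrative.

```python
# Sketch: hard-decision Viterbi decoding of the rate-1/2, constraint-length-3
# convolutional code with generator polynomials (7, 5) octal.

G = (0b111, 0b101)  # generator polynomials

def encode(bits):
    """Convolutionally encode: two output bits per input bit."""
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state                      # newest bit plus 2-bit state
        out += [bin(reg & g).count("1") % 2 for g in G]
        state = reg >> 1
    return out

def viterbi_decode(received, n_bits):
    """Maximum-likelihood path search over the 4-state trellis."""
    metrics = {0: 0}                                # Hamming path metrics
    paths = {0: []}                                 # survivor input sequences
    for i in range(n_bits):
        r = received[2 * i:2 * i + 2]
        new_metrics, new_paths = {}, {}
        for state, m in metrics.items():
            for b in (0, 1):
                reg = (b << 2) | state
                expect = [bin(reg & g).count("1") % 2 for g in G]
                cost = m + sum(x != y for x, y in zip(expect, r))
                nxt = reg >> 1
                if nxt not in new_metrics or cost < new_metrics[nxt]:
                    new_metrics[nxt] = cost
                    new_paths[nxt] = paths[state] + [b]
        metrics, paths = new_metrics, new_paths
    best = min(metrics, key=metrics.get)
    return paths[best]

if __name__ == "__main__":
    msg = [1, 0, 1, 1, 0, 0, 1, 0]
    coded = encode(msg)
    coded[3] ^= 1                                   # single channel bit error
    print(viterbi_decode(coded, len(msg)))          # recovers msg
```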

  17. Power optimization of wireless media systems with space-time block codes.

    PubMed

    Yousefi'zadeh, Homayoun; Jafarkhani, Hamid; Moshfeghi, Mehran

    2004-07-01

    We present analytical and numerical solutions to the problem of power control in wireless media systems with multiple antennas. We formulate a set of optimization problems aimed at minimizing total power consumption of wireless media systems subject to a given level of QoS and an available bit rate. Our formulation takes into consideration the power consumption related to source coding, channel coding, and transmission of multiple-transmit antennas. In our study, we consider Gauss-Markov and video source models, Rayleigh fading channels along with the Bernoulli/Gilbert-Elliott loss models, and space-time block codes.

  18. Is auditory discrimination mature by middle childhood? A study using time-frequency analysis of mismatch responses from 7 years to adulthood

    PubMed Central

    Bishop, Dorothy VM; Hardiman, Mervyn J; Barry, Johanna G

    2011-01-01

    Behavioural and electrophysiological studies give differing impressions of when auditory discrimination is mature. Ability to discriminate frequency and speech contrasts reaches adult levels only around 12 years of age, yet an electrophysiological index of auditory discrimination, the mismatch negativity (MMN), is reported to be as large in children as in adults. Auditory ERPs were measured in 30 children (7 to 12 years), 23 teenagers (13 to 16 years) and 32 adults (35 to 56 years) in an oddball paradigm with tone or syllable stimuli. For each stimulus type, a standard stimulus (1000 Hz tone or syllable [ba]) occurred on 70% of trials, and one of two deviants (1030 or 1200 Hz tone, or syllables [da] or [bi]) equiprobably on the remaining trials. For the traditional MMN interval of 100–250 ms post-onset, size of mismatch responses increased with age, whereas the opposite trend was seen for an interval from 300 to 550 ms post-onset, corresponding to the late discriminative negativity (LDN). Time-frequency analysis of single trials revealed that the MMN resulted from phase-synchronization of oscillations in the theta (4–7 Hz) range, with greater synchronization in adults than children. Furthermore, the amount of synchronization was significantly correlated with frequency discrimination threshold. These results show that neurophysiological processes underlying auditory discrimination continue to develop through childhood and adolescence. Previous reports of adult-like MMN amplitudes in children may be artefactual results of using peak measurements when comparing groups that differ in variance. PMID:22213909
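
The phase-synchronization measure underlying such single-trial analyses is commonly quantified as inter-trial phase coherence, the magnitude of the mean unit phasor across trials. A minimal sketch, not the authors' exact pipeline:

```python
# Sketch: inter-trial phase coherence (ITPC). Each trial contributes a unit
# phasor at its instantaneous phase; perfectly aligned phases give 1,
# uniformly scattered phases give values near 0.
import cmath
import math

def itpc(phases):
    """Magnitude of the mean unit phasor across trials (0..1)."""
    n = len(phases)
    s = sum(cmath.exp(1j * p) for p in phases)
    return abs(s) / n

if __name__ == "__main__":
    aligned = [0.5] * 10                            # identical theta phases
    scattered = [2 * math.pi * k / 8 for k in range(8)]
    print(itpc(aligned), itpc(scattered))           # ≈ 1.0 and ≈ 0.0
```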

  19. Contextual modulation of primary visual cortex by auditory signals

    PubMed Central

    Paton, A. T.

    2017-01-01

    Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195–201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256–1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044015

  20. Contextual modulation of primary visual cortex by auditory signals.

    PubMed

    Petro, L S; Paton, A T; Muckli, L

    2017-02-19

    Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195-201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256-1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame. This article is part of the themed issue 'Auditory and visual scene analysis'.

  1. Spiking Neurons Learning Phase Delays: How Mammals May Develop Auditory Time-Difference Sensitivity

    NASA Astrophysics Data System (ADS)

    Leibold, Christian; van Hemmen, J. Leo

    2005-04-01

    Time differences between the two ears are an important cue for animals to azimuthally locate a sound source. The first binaural brainstem nucleus, in mammals the medial superior olive, is generally believed to perform the necessary computations. Its cells are sensitive to variations of interaural time differences of about 10 μs. The classical explanation of such a neuronal time-difference tuning is based on the physical concept of delay lines. Recent data, however, are inconsistent with a temporal delay and rather favor a phase delay. By means of a biophysical model we show how spike-timing-dependent synaptic learning explains precise interplay of excitation and inhibition and, hence, accounts for a physical realization of a phase delay.
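
The spike-timing-dependent learning rule invoked here is typically written as an asymmetric exponential window: pre-before-post spike pairs strengthen a synapse, post-before-pre pairs weaken it. A minimal sketch with illustrative amplitudes and time constant, not the paper's full biophysical model:

```python
# Sketch of an additive STDP window: potentiation when the presynaptic
# spike precedes the postsynaptic one, depression otherwise.
# Amplitudes and time constant are illustrative assumptions.
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=0.02):
    """Weight change for dt = t_post - t_pre (s); dt > 0 means pre came first."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)         # potentiation
    return -a_minus * math.exp(dt / tau)            # depression

if __name__ == "__main__":
    for dt in (-0.02, -0.005, 0.005, 0.02):
        print(dt, round(stdp_dw(dt), 5))
```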

  2. Predictions of Speech Chimaera Intelligibility Using Auditory Nerve Mean-Rate and Spike-Timing Neural Cues.

    PubMed

    Wirtzfeld, Michael R; Ibrahim, Rasha A; Bruce, Ian C

    2017-07-26

    Perceptual studies of speech intelligibility have shown that slow variations of acoustic envelope (ENV) in a small set of frequency bands provides adequate information for good perceptual performance in quiet, whereas acoustic temporal fine-structure (TFS) cues play a supporting role in background noise. However, the implications for neural coding are prone to misinterpretation because the mean-rate neural representation can contain recovered ENV cues from cochlear filtering of TFS. We investigated ENV recovery and spike-time TFS coding using objective measures of simulated mean-rate and spike-timing neural representations of chimaeric speech, in which either the ENV or the TFS is replaced by another signal. We (a) evaluated the levels of mean-rate and spike-timing neural information for two categories of chimaeric speech, one retaining ENV cues and the other TFS; (b) examined the level of recovered ENV from cochlear filtering of TFS speech; (c) examined and quantified the contribution to recovered ENV from spike-timing cues using a lateral inhibition network (LIN); and (d) constructed linear regression models with objective measures of mean-rate and spike-timing neural cues and subjective phoneme perception scores from normal-hearing listeners. The mean-rate neural cues from the original ENV and recovered ENV partially accounted for perceptual score variability, with additional variability explained by the recovered ENV from the LIN-processed TFS speech. The best model predictions of chimaeric speech intelligibility were found when both the mean-rate and spike-timing neural cues were included, providing further evidence that spike-time coding of TFS cues is important for intelligibility when the speech envelope is degraded.

  3. Auditory neglect.

    PubMed Central

    De Renzi, E; Gentilini, M; Barbieri, C

    1989-01-01

    Auditory neglect was investigated in normal controls and in patients with a recent unilateral hemispheric lesion, by requiring them to detect the interruptions that occurred in one ear of a sound delivered through earphones either monaurally or binaurally. Control patients accurately detected interruptions. One left brain-damaged (LBD) patient missed only once in the ipsilateral ear, while seven of the 30 right brain-damaged (RBD) patients missed more than one signal in the monaural test and nine patients did the same in the binaural test. Omissions were always more marked in the left ear and in the binaural test, with a significant ear by test interaction. The lesion of these patients was in the parietal lobe (five patients) and the thalamus (four patients). The relation of auditory neglect to auditory extinction was investigated and found to be equivocal, in that there were seven RBD patients who showed extinction but not neglect and, more importantly, two patients who exhibited the opposite pattern, thus challenging the view that extinction is a minor form of neglect. Visual and auditory neglect were also not consistently correlated, the former being present in nine RBD patients without auditory neglect and the latter in two RBD patients without visual neglect. The finding that in some RBD patients with auditory neglect omissions also occurred, though with less frequency, in the right ear, points to a right hemisphere participation in the deployment of attention not only to the contralateral, but also to the ipsilateral space. PMID:2732732

  4. The Effect of Dopaminergic Medication on Beat-Based Auditory Timing in Parkinson's Disease.

    PubMed

    Cameron, Daniel J; Pickett, Kristen A; Earhart, Gammon M; Grahn, Jessica A

    2016-01-01

    Parkinson's disease (PD) adversely affects timing abilities. Beat-based timing is a mechanism that times events relative to a regular interval, such as the "beat" in musical rhythm, and is impaired in PD. It is unknown if dopaminergic medication influences beat-based timing in PD. Here, we tested beat-based timing over two sessions in participants with PD (OFF then ON dopaminergic medication) and in unmedicated control participants. People with PD and control participants completed two tasks. The first was a discrimination task in which participants compared two rhythms and determined whether they were the same or different. Rhythms either had a beat structure (metric simple rhythms) or did not (metric complex rhythms), as in previous studies. Discrimination accuracy was analyzed to test for the effects of beat structure, as well as differences between participants with PD and controls, and effects of medication (PD group only). The second task was the Beat Alignment Test (BAT), in which participants listened to music with regular tones superimposed, and responded as to whether the tones were "ON" or "OFF" the beat of the music. Accuracy was analyzed to test for differences between participants with PD and controls, and for an effect of medication in patients. Both patients and controls discriminated metric simple rhythms better than metric complex rhythms. Controls also improved at the discrimination task in the second vs. first session, whereas people with PD did not. For participants with PD, the difference in performance between metric simple and metric complex rhythms was greater (sensitivity to changes in simple rhythms increased and sensitivity to changes in complex rhythms decreased) when ON vs. OFF medication. Performance also worsened with disease severity. For the BAT, no group differences or effects of medication were found. Overall, these findings suggest that timing is impaired in PD, and that dopaminergic medication influences beat-based and non-beat-based timing.

  5. MEG dual scanning: a procedure to study real-time auditory interaction between two persons

    PubMed Central

    Baess, Pamela; Zhdanov, Andrey; Mandel, Anne; Parkkonen, Lauri; Hirvenkari, Lotta; Mäkelä, Jyrki P.; Jousmäki, Veikko; Hari, Riitta

    2012-01-01

    Social interactions fill our everyday life and put strong demands on our brain function. However, the possibilities for studying the brain basis of social interaction are still technically limited, and even modern brain imaging studies of social cognition typically monitor just one participant at a time. We present here a method to connect and synchronize two faraway neuromagnetometers. With this method, two participants at two separate sites can interact with each other through a stable real-time audio connection with minimal delay and jitter. The magnetoencephalographic (MEG) and audio recordings of both laboratories are accurately synchronized for joint offline analysis. The concept can be extended to connecting multiple MEG devices around the world. As a proof of concept of the MEG-to-MEG link, we report the results of time-sensitive recordings of cortical evoked responses to sounds delivered at laboratories separated by 5 km. PMID:22514530

  6. Effects of spatial response coding on distractor processing: evidence from auditory spatial negative priming tasks with keypress, joystick, and head movement responses.

    PubMed

    Möller, Malte; Mayr, Susanne; Buchner, Axel

    2015-01-01

    Prior studies of spatial negative priming indicate that distractor-assigned keypress responses are inhibited as part of visual, but not auditory, processing. However, recent evidence suggests that static keypress responses are not directly activated by spatially presented sounds and, therefore, might not call for an inhibitory process. In order to investigate the role of response inhibition in auditory processing, we used spatially directed responses that have been shown to result in direct response activation to irrelevant sounds. Participants localized a target sound by performing manual joystick responses (Experiment 1) or head movements (Experiment 2B) while ignoring a concurrent distractor sound. Relations between prime distractor and probe target were systematically manipulated (repeated vs. changed) with respect to identity and location. Experiment 2A investigated the influence of distractor sounds on spatial parameters of head movements toward target locations and showed that distractor-assigned responses are immediately inhibited to prevent false responding in the ongoing trial. Interestingly, performance in Experiments 1 and 2B was not generally impaired when the probe target appeared at the location of the former prime distractor and required a previously withheld and presumably inhibited response. Instead, performance was impaired only when prime distractor and probe target mismatched in terms of location or identity, which fully conforms to the feature-mismatching hypothesis. Together, the results suggest that response inhibition operates in auditory processing when response activation is provided but is presumably too short-lived to affect responding on the subsequent trial.

  7. Change in the coding of interaural time difference along the tonotopic axis of the chicken nucleus laminaris

    PubMed Central

    Palanca-Castan, Nicolas; Köppl, Christine

    2015-01-01

    Interaural time differences (ITDs) are an important cue for the localization of sounds in azimuthal space. Both birds and mammals have specialized, tonotopically organized nuclei in the brain stem for the processing of ITD: medial superior olive in mammals and nucleus laminaris (NL) in birds. The specific way in which ITDs are derived was long assumed to conform to a delay-line model in which arrays of systematically arranged cells create a representation of auditory space with different cells responding maximally to specific ITDs. This model was supported by data from barn owl NL taken from regions above 3 kHz and from chicken above 1 kHz. However, data from mammals often do not show defining features of the Jeffress model such as a systematic topographic representation of best ITDs or the presence of axonal delay lines, and an alternative has been proposed in which neurons are not topographically arranged with respect to ITD and coding occurs through the assessment of the overall response of two large neuron populations, one in each hemisphere. Modeling studies have suggested that the presence of different coding systems could be related to the animal’s head size and frequency range rather than their phylogenetic group. Testing this hypothesis requires data from across the tonotopic range of both birds and mammals. The aim of this study was to obtain in vivo recordings from neurons in the low-frequency range (<1000 Hz) of chicken NL. Our data argues for the presence of a modified Jeffress system that uses the slopes of ITD-selective response functions instead of their peaks to topographically represent ITD at mid- to high frequencies. At low frequencies, below several hundred Hz, the data did not support any current model of ITD coding. This is different to what was previously shown in the barn owl and suggests that constraints in optimal ITD processing may be associated with the particular demands on sound localization determined by the animal’s ecological niche.
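    The delay-line (Jeffress) scheme discussed in this abstract is easy to caricature in code: each coincidence detector receives one input through a different internal delay and responds most strongly when that delay cancels the ITD. The sketch below is a deliberately simplified toy (white-noise inputs, integer-sample delays, invented parameters), not a model of NL physiology.

```python
import numpy as np

# Two "ears" hear the same noise source; the right ear lags by the ITD.
rng = np.random.default_rng(0)
fs = 50_000                       # samples per second -> 20 us resolution
n = 5_000
itd = 12                          # true ITD: 12 samples = 240 us
src = rng.standard_normal(n + itd)
left = src[itd:]                  # left ear leads
right = src[:n]                   # right ear is a delayed copy

# Array of coincidence detectors: detector s delays the LEFT input by s
# samples, so its output peaks when its internal delay equals the ITD.
responses = [float(np.dot(left[: n - s], right[s:])) for s in range(25)]
best = int(np.argmax(responses))
best_itd_us = best / fs * 1e6     # ≈ 240 microseconds
```

    The "slope code" favored by the study would read out the steep flanks of such response functions rather than their peaks.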

  9. Spike Timing Matters in Novel Neuronal Code Involved in Vibrotactile Frequency Perception.

    PubMed

    Birznieks, Ingvars; Vickery, Richard M

    2017-05-22

    Skin vibrations sensed by tactile receptors contribute significantly to the perception of object properties during tactile exploration [1-4] and to sensorimotor control during object manipulation [5]. Sustained low-frequency skin vibration (<60 Hz) evokes a distinct tactile sensation referred to as flutter whose frequency can be clearly perceived [6]. How afferent spiking activity translates into the perception of frequency is still unknown. Measures based on mean spike rates of neurons in the primary somatosensory cortex are sufficient to explain performance in some frequency discrimination tasks [7-11]; however, there is emerging evidence that stimuli can be distinguished based also on temporal features of neural activity [12, 13]. Our study's advance is to demonstrate that temporal features are fundamental for vibrotactile frequency perception. Pulsatile mechanical stimuli were used to elicit specified temporal spike train patterns in tactile afferents, and subsequently psychophysical methods were employed to characterize human frequency perception. Remarkably, the most salient temporal feature determining vibrotactile frequency was not the underlying periodicity but, rather, the duration of the silent gap between successive bursts of neural activity. This burst gap code for frequency represents a previously unknown form of neural coding in the tactile sensory system, which parallels auditory pitch perception mechanisms based on purely temporal information where longer inter-pulse intervals receive higher perceptual weights than short intervals [14]. Our study also demonstrates that human perception of stimuli can be determined exclusively by temporal features of spike trains independent of the mean spike rate and without contribution from population response factors. Copyright © 2017 Elsevier Ltd. All rights reserved.
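    The "burst gap" cue described here, the silent interval between the last spike of one burst and the first spike of the next, can be recovered from a spike train by thresholding inter-spike intervals. The sketch below is illustrative only: the 4 ms intra-burst threshold and the spike times are invented, not taken from the study.

```python
def burst_gaps(spike_times, intra_burst_max=0.004):
    """Silent gaps (in seconds) between bursts: inter-spike intervals longer
    than the intra-burst threshold separate one burst from the next."""
    isis = [b - a for a, b in zip(spike_times, spike_times[1:])]
    return [isi for isi in isis if isi > intra_burst_max]

# Three 3-spike bursts; under the burst-gap account the perceived frequency
# tracks 1/gap rather than the overall pulse rate or the burst periodicity.
spikes = [0.000, 0.002, 0.004,
          0.034, 0.036, 0.038,
          0.068, 0.070, 0.072]
gaps = burst_gaps(spikes)          # two gaps of ~30 ms each
```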

  10. The Effect of Dopaminergic Medication on Beat-Based Auditory Timing in Parkinson’s Disease

    PubMed Central

    Cameron, Daniel J.; Pickett, Kristen A.; Earhart, Gammon M.; Grahn, Jessica A.

    2016-01-01

    Parkinson’s disease (PD) adversely affects timing abilities. Beat-based timing is a mechanism that times events relative to a regular interval, such as the “beat” in musical rhythm, and is impaired in PD. It is unknown if dopaminergic medication influences beat-based timing in PD. Here, we tested beat-based timing over two sessions in participants with PD (OFF then ON dopaminergic medication) and in unmedicated control participants. People with PD and control participants completed two tasks. The first was a discrimination task in which participants compared two rhythms and determined whether they were the same or different. Rhythms either had a beat structure (metric simple rhythms) or did not (metric complex rhythms), as in previous studies. Discrimination accuracy was analyzed to test for the effects of beat structure, as well as differences between participants with PD and controls, and effects of medication (PD group only). The second task was the Beat Alignment Test (BAT), in which participants listened to music with regular tones superimposed, and responded as to whether the tones were “ON” or “OFF” the beat of the music. Accuracy was analyzed to test for differences between participants with PD and controls, and for an effect of medication in patients. Both patients and controls discriminated metric simple rhythms better than metric complex rhythms. Controls also improved at the discrimination task in the second vs. first session, whereas people with PD did not. For participants with PD, the difference in performance between metric simple and metric complex rhythms was greater (sensitivity to changes in simple rhythms increased and sensitivity to changes in complex rhythms decreased) when ON vs. OFF medication. Performance also worsened with disease severity. For the BAT, no group differences or effects of medication were found. Overall, these findings suggest that timing is impaired in PD, and that dopaminergic medication influences beat

  11. Coccinelle 1D: A one-dimensional neutron kinetic code using time-step size control

    SciTech Connect

    Engrand, P.R.; Effantin, M.E.; Gherchanoc, J.; Larive, B.

    1995-12-31

    COCCINELLE 1D is a one-dimensional neutron kinetic code that has been adapted from Electricite de France (EDF)'s core design code, COCCINELLE. The aim of this work is to integrate a code, derived from COCCINELLE and therefore taking advantage of most of its developments, into EDF's Pressurized Water Reactor (PWR) simulation tools. The neutronic model of COCCINELLE 1D has been optimized so that the code executes as rapidly as possible. In particular, a fast and stable kinetic method has been implemented: the Generalized Runge-Kutta (GRK) method together with its associated time-step size control. Moreover, efforts have been made to structure the code such that it could be easily integrated into any PWR simulation tool. Results show that the code executes faster than real time on several test cases, and that, once integrated in a PWR simulation tool, the system is in good agreement with an experimental transient, namely a 3-hour load-follow transient.
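    The time-step size control mentioned above pairs two Runge-Kutta estimates of different order and adapts the step to keep their difference near a tolerance. The sketch below shows the generic idea with an embedded Euler/Heun pair on a scalar ODE; it is not EDF's GRK implementation, and the tolerances and safety factors are illustrative.

```python
import math

def adaptive_heun(f, t, y, t_end, h=1e-3, tol=1e-6):
    """Integrate y' = f(t, y) with an embedded order-1/order-2 pair and
    proportional step-size control (the higher-order solution is accepted)."""
    while t_end - t > 1e-12:
        h = min(h, t_end - t)
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_euler = y + h * k1                 # order 1
        y_heun = y + 0.5 * h * (k1 + k2)     # order 2
        err = abs(y_heun - y_euler)          # local error estimate
        if err <= tol:
            t, y = t + h, y_heun             # accept the step
        # grow or shrink h toward the tolerance (0.9 = safety factor)
        h *= min(2.0, max(0.2, 0.9 * math.sqrt(tol / max(err, 1e-300))))
    return y

# Simple decay test problem: y' = -y, y(0) = 1, so y(1) = exp(-1).
y1 = adaptive_heun(lambda t, y: -y, 0.0, 1.0, 1.0)
```

    Rejected steps cost an extra function evaluation but keep the local error bounded, which is what lets such codes run stably at large steps during slow transients.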

  12. A comparative study of time-marching and space-marching numerical methods. [for flowfield codes

    NASA Technical Reports Server (NTRS)

    Gupta, R. N.; Moss, J. N.; Simmonds, A. L.

    1983-01-01

    Menees (1981) has conducted an evaluation of three different flowfield codes for the Jupiter entry conditions. However, a comparison of the codes has been made difficult by the fact that the three codes use different solution procedures, different computational mesh sizes, and a different convergence criterion. There are also other differences. For an objective evaluation of the different numerical solution methods employed by the codes, it would be desirable to select a simple no-blowing perfect-gas flowfield case for which the turbulent models are well established. The present investigation is concerned with the results of such a study. It is found that the choice of numerical method is rather problem dependent. The time-marching and space-marching methods both provide comparable results if care is taken in selecting the appropriate mesh size near the body surface.

  13. Time and frequency metrics related to auditory masking of a 10 kHz tone in bottlenose dolphins (Tursiops truncatus).

    PubMed

    Branstetter, Brian K; Trickey, Jennifer S; Aihara, Hitomi; Finneran, James J; Liberman, Tori R

    2013-12-01

    Metrics related to the frequency spectrum of noise (e.g., critical ratios) are often used to describe and predict auditory masking. In this study, detection thresholds for a 10 kHz tone were measured in the presence of anthropogenic, natural, and synthesized noise. Time-domain and frequency-domain metrics were calculated for the different noise types, and regression models were used to determine the relationship between noise metrics and masked tonal thresholds. Statistical models suggested that detection thresholds, masked by a variety of noise types at a variety of noise levels, can be explained with metrics related to the spectral density of noise and the degree to which amplitude modulation is correlated across frequency regions of the noise. The results demonstrate the need to include time-domain metrics when describing and predicting auditory masking.

  14. Neural code alterations and abnormal time patterns in Parkinson’s disease

    NASA Astrophysics Data System (ADS)

    Andres, Daniela Sabrina; Cerquetti, Daniel; Merello, Marcelo

    2015-04-01

    Objective. The neural code used by the basal ganglia is a current question in neuroscience, relevant for the understanding of the pathophysiology of Parkinson’s disease. While a rate code is known to participate in the communication between the basal ganglia and the motor thalamus/cortex, different lines of evidence have also favored the presence of complex time patterns in the discharge of the basal ganglia. To gain insight into the way the basal ganglia code information, we studied the activity of the globus pallidus pars interna (GPi), an output node of the circuit. Approach. We implemented the 6-hydroxydopamine model of Parkinsonism in Sprague-Dawley rats, and recorded the spontaneous discharge of single GPi neurons, in head-restrained conditions at full alertness. Analyzing the temporal structure function, we looked for characteristic scales in the neuronal discharge of the GPi. Main results. At a low-scale, we observed the presence of dynamic processes, which allow the transmission of time patterns. Conversely, at a middle-scale, stochastic processes force the use of a rate code. Regarding the time patterns transmitted, we measured the word length and found that it is increased in Parkinson’s disease. Furthermore, it showed a positive correlation with the frequency of discharge, indicating that an exacerbation of this abnormal time pattern length can be expected, as the dopamine depletion progresses. Significance. We conclude that a rate code and a time pattern code can co-exist in the basal ganglia at different temporal scales. However, their normal balance is progressively altered and replaced by pathological time patterns in Parkinson’s disease.
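    The temporal structure function analysis used here can be sketched as follows: for a binned firing-rate signal x(t), the first-order structure function S(τ) = ⟨(x(t+τ) − x(t))²⟩ distinguishes dynamic scales, where S grows with the lag, from stochastic scales, where S saturates. The implementation below is a generic sketch, not the authors' code.

```python
import numpy as np

def structure_function(x, max_lag):
    """Mean squared increment of the 1-D signal x at lags 1..max_lag."""
    x = np.asarray(x, dtype=float)
    return [float(np.mean((x[lag:] - x[:-lag]) ** 2))
            for lag in range(1, max_lag + 1)]

# Sanity check on a linear ramp: the increment at lag tau is always tau,
# so S(tau) = tau**2 exactly.
ramp = np.arange(10.0)
s = structure_function(ramp, 3)    # → [1.0, 4.0, 9.0]
```

    Applied to GPi spike trains binned into rates, the scale at which S(τ) changes character separates the time-pattern regime from the rate-code regime described in the abstract.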

  15. Behind the scenes of auditory perception.

    PubMed

    Shamma, Shihab A; Micheyl, Christophe

    2010-06-01

    'Auditory scenes' often contain contributions from multiple acoustic sources. These are usually heard as separate auditory 'streams', which can be selectively followed over time. How and where these auditory streams are formed in the auditory system is one of the most fascinating questions facing auditory scientists today. Findings published within the past two years indicate that both cortical and subcortical processes contribute to the formation of auditory streams, and they raise important questions concerning the roles of primary and secondary areas of auditory cortex in this phenomenon. In addition, these findings underline the importance of taking into account the relative timing of neural responses, and the influence of selective attention, in the search for neural correlates of the perception of auditory streams.

  16. Timing group delay and differential code bias corrections for BeiDou positioning

    NASA Astrophysics Data System (ADS)

    Guo, Fei; Zhang, Xiaohong; Wang, Jinling

    2015-05-01

    This article first clarifies the relationship between the timing group delay (TGD) and differential code bias (DCB) parameters for BDS, and demonstrates the equivalence of the TGD and DCB correction models both in theory and in practice. The TGD/DCB correction models have been extended to various occasions for BDS positioning, and such models have been evaluated by real triple-frequency datasets. To test the effectiveness of broadcast TGDs in the navigation message and DCBs provided by the Multi-GNSS Experiment (MGEX), both standard point positioning (SPP) and precise point positioning (PPP) tests are carried out for BDS signals with different schemes. Furthermore, the influence of differential code biases on BDS positioning estimates such as coordinates, receiver clock biases, tropospheric delays and carrier phase ambiguities is investigated comprehensively. Comparative analyses show that the unmodeled differential code biases degrade the performance of BDS SPP by a factor of two or more, whereas the estimates of PPP are subject to varying degrees of influence. For SPP, the accuracy of dual-frequency combinations is slightly worse than that of single-frequency, and they are much more sensitive to the differential code biases, particularly for the B2B3 combination. For PPP, the uncorrected differential code biases are mostly absorbed into the receiver clock bias and carrier phase ambiguities, resulting in a much longer convergence time. Even though the influence of the differential code biases could be mitigated over time and comparable positioning accuracy could be achieved after convergence, it is suggested that the differential code biases be handled properly, since they are vital for PPP convergence and integer ambiguity resolution.
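    One reason such code biases matter is that dual-frequency users form the ionosphere-free combination, whose large coefficients amplify any inter-frequency bias that is not modeled consistently. The sketch below shows the standard combination for (assumed) BDS B1I/B3I frequencies; the satellite-specific TGD/DCB terms themselves are omitted, and the numeric values are illustrative.

```python
# Nominal BDS carrier frequencies (Hz); B1I and B3I are assumed here.
F_B1 = 1561.098e6
F_B3 = 1268.520e6

def iono_free(p1, p3, f1=F_B1, f3=F_B3):
    """Ionosphere-free pseudorange combination: cancels the first-order
    ionospheric delay, which scales as 1/f**2."""
    a = f1 ** 2 / (f1 ** 2 - f3 ** 2)
    b = -f3 ** 2 / (f1 ** 2 - f3 ** 2)
    return a * p1 + b * p3

# Synthetic check: same geometric range, frequency-scaled ionospheric delay.
rho, iono_b1 = 22_000_000.0, 5.0
p1 = rho + iono_b1
p3 = rho + iono_b1 * (F_B1 / F_B3) ** 2
recovered = iono_free(p1, p3)      # ≈ rho: the ionospheric term cancels
```

    Because the combination coefficients sum in magnitude to well above 1, a metre-level unmodeled DCB maps into a multi-metre range error, consistent with the abstract's observation that dual-frequency combinations are more sensitive to the biases.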

  17. Auditory adaptation in voice perception.

    PubMed

    Schweinberger, Stefan R; Casper, Christoph; Hauthal, Nadine; Kaufmann, Jürgen M; Kawahara, Hideki; Kloth, Nadine; Robertson, David M C; Simpson, Adrian P; Zäske, Romi

    2008-05-06

    Perceptual aftereffects following adaptation to simple stimulus attributes (e.g., motion, color) have been studied for hundreds of years. A striking recent discovery was that adaptation also elicits contrastive aftereffects in visual perception of complex stimuli and faces [1-6]. Here, we show for the first time that adaptation to nonlinguistic information in voices elicits systematic auditory aftereffects. Prior adaptation to male voices causes a voice to be perceived as more female (and vice versa), and these auditory aftereffects were measurable even minutes after adaptation. By contrast, crossmodal adaptation effects were absent, both when male or female first names and when silently articulating male or female faces were used as adaptors. When sinusoidal tones (with frequencies matched to male and female voice fundamental frequencies) were used as adaptors, no aftereffects on voice perception were observed. This excludes explanations for the voice aftereffect in terms of both pitch adaptation and postperceptual adaptation to gender concepts and suggests that contrastive voice-coding mechanisms may routinely influence voice perception. The role of adaptation in calibrating properties of high-level voice representations indicates that adaptation is not confined to vision but is a ubiquitous mechanism in the perception of nonlinguistic social information from both faces and voices.

  18. Central auditory imperception.

    PubMed

    Snow, J B; Rintelmann, W F; Miller, J M; Konkle, D F

    1977-09-01

    The development of clinically applicable techniques for the evaluation of hearing impairment caused by lesions of the central auditory pathways has increased clinical interest in the anatomy and physiology of these pathways. A conceptualization of present understanding of the anatomy and physiology of the central auditory pathways is presented. Clinical tests based on reduction of redundancy of the speech message, degradation of speech and binaural interactions are presented. Specifically, performance-intensity functions, filtered speech tests, competing message tests and time-compressed speech tests are presented, with the emphasis on our experience with time-compressed speech tests. With proper use of these tests, not only can central auditory impairments be detected, but brain stem lesions can be distinguished from cortical lesions.

  19. Interactions across Multiple Stimulus Dimensions in Primary Auditory Cortex.

    PubMed

    Sloas, David C; Zhuo, Ran; Xue, Hongbo; Chambers, Anna R; Kolaczyk, Eric; Polley, Daniel B; Sen, Kamal

    2016-01-01

    Although sensory cortex is thought to be important for the perception of complex objects, its specific role in representing complex stimuli remains unknown. Complex objects are rich in information along multiple stimulus dimensions. The position of cortex in the sensory hierarchy suggests that cortical neurons may integrate across these dimensions to form a more gestalt representation of auditory objects. Yet, studies of cortical neurons typically explore single or few dimensions due to the difficulty of determining optimal stimuli in a high dimensional stimulus space. Evolutionary algorithms (EAs) provide a potentially powerful approach for exploring multidimensional stimulus spaces based on real-time spike feedback, but two important issues arise in their application. First, it is unclear whether it is necessary to characterize cortical responses to multidimensional stimuli or whether it suffices to characterize cortical responses to a single dimension at a time. Second, quantitative methods for analyzing complex multidimensional data from an EA are lacking. Here, we apply a statistical method for nonlinear regression, the generalized additive model (GAM), to address these issues. The GAM quantitatively describes the dependence between neural response and all stimulus dimensions. We find that auditory cortical neurons in mice are sensitive to interactions across dimensions. These interactions are diverse across the population, indicating significant integration across stimulus dimensions in auditory cortex. This result strongly motivates using multidimensional stimuli in auditory cortex. Together, the EA and the GAM provide a novel quantitative paradigm for investigating neural coding of complex multidimensional stimuli in auditory and other sensory cortices.
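    The GAM's core assumption, absent interaction terms, is that the response decomposes into a sum of smooth one-dimensional functions of each stimulus dimension; interactions show up as structure that such an additive fit cannot capture. Below is a minimal numpy-only caricature of that idea, using polynomial bases and ordinary least squares rather than the penalized splines a real GAM would use.

```python
import numpy as np

rng = np.random.default_rng(1)
x1 = rng.uniform(-1, 1, 400)       # stimulus dimension 1
x2 = rng.uniform(-1, 1, 400)       # stimulus dimension 2
y = np.sin(2 * x1) + x2 ** 2       # purely additive response, no interaction

def poly_basis(x, degree=5):
    """Column matrix [x, x**2, ..., x**degree] for one smooth term."""
    return np.vstack([x ** d for d in range(1, degree + 1)]).T

# Design matrix: intercept + one smooth term per dimension, fit by least
# squares; the fit explains nearly all variance of an additive response.
X = np.hstack([np.ones((len(y), 1)), poly_basis(x1), poly_basis(x2)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef
r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
```

    If the true response contained an interaction such as x1 * x2, the additive fit would leave systematic residuals, which is essentially the signature of cross-dimension integration the study reports in auditory cortex.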

  1. Zipf's law in short-time timbral codings of speech, music, and environmental sound signals.

    PubMed

    Haro, Martín; Serrà, Joan; Herrera, Perfecto; Corral, Alvaro

    2012-01-01

    Timbre is a key perceptual feature that allows discrimination between different sounds. Timbral sensations are highly dependent on the temporal evolution of the power spectrum of an audio signal. In order to quantitatively characterize such sensations, the shape of the power spectrum has to be encoded in a way that preserves certain physical and perceptual properties. Therefore, it is common practice to encode short-time power spectra using psychoacoustical frequency scales. In this paper, we study and characterize the statistical properties of such encodings, here called timbral code-words. In particular, we report on rank-frequency distributions of timbral code-words extracted from 740 hours of audio coming from disparate sources such as speech, music, and environmental sounds. Analogously to text corpora, we find a heavy-tailed Zipfian distribution with exponent close to one. Importantly, this distribution is found independently of different encoding decisions and regardless of the audio source. Further analysis on the intrinsic characteristics of most and least frequent code-words reveals that the most frequent code-words tend to have a more homogeneous structure. We also find that speech and music databases have specific, distinctive code-words while, in the case of the environmental sounds, these database-specific code-words are not present. Finally, we find that a Yule-Simon process with memory provides a reasonable quantitative approximation for our data, suggesting the existence of a common simple generative mechanism for all considered sound sources.
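    The rank-frequency analysis described above can be sketched in a few lines: count code-word occurrences, sort by rank, and fit the Zipf exponent on log-log axes. This is a generic illustration with synthetic tokens and a plain least-squares fit, not the authors' estimation procedure, which for heavy-tailed data would typically use a maximum-likelihood fitter.

```python
import numpy as np
from collections import Counter

def zipf_exponent(tokens):
    """Least-squares slope of log(frequency) vs log(rank); ≈1 for Zipf's law."""
    counts = sorted(Counter(tokens).values(), reverse=True)
    ranks = np.arange(1, len(counts) + 1)
    slope, _intercept = np.polyfit(np.log(ranks), np.log(counts), 1)
    return -slope

# Synthetic corpus whose word frequencies follow f(r) ~ 1000 / r.
tokens = []
for rank in range(1, 21):
    tokens += [f"w{rank}"] * (1000 // rank)
alpha = zipf_exponent(tokens)      # close to 1
```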

  3. Feature Assignment in Perception of Auditory Figure

    ERIC Educational Resources Information Center

    Gregg, Melissa K.; Samuel, Arthur G.

    2012-01-01

    Because the environment often includes multiple sounds that overlap in time, listeners must segregate a sound of interest (the auditory figure) from other co-occurring sounds (the unattended auditory ground). We conducted a series of experiments to clarify the principles governing the extraction of auditory figures. We distinguish between auditory…

  5. GOTHIC: Gravitational oct-tree code accelerated by hierarchical time step controlling

    NASA Astrophysics Data System (ADS)

    Miki, Yohei; Umemura, Masayuki

    2017-04-01

    The tree method is a widely implemented algorithm for collisionless N-body simulations in astrophysics well suited for GPU(s). Adopting hierarchical time stepping can accelerate N-body simulations; however, it is infrequently implemented and its potential remains untested in GPU implementations. We have developed a Gravitational Oct-Tree code accelerated by HIerarchical time step Controlling named GOTHIC, which adopts both the tree method and the hierarchical time step. The code adopts some adaptive optimizations by monitoring the execution time of each function on-the-fly and minimizes the time-to-solution by balancing the measured time of multiple functions. Results of performance measurements with realistic particle distribution performed on NVIDIA Tesla M2090, K20X, and GeForce GTX TITAN X, which are representative GPUs of the Fermi, Kepler, and Maxwell generation of GPUs, show that the hierarchical time step achieves a speedup of around 3-5 times compared to the shared time step. The measured elapsed time per step of GOTHIC is 0.30 s or 0.44 s on GTX TITAN X when the particle distribution represents the Andromeda galaxy or the NFW sphere, respectively, with 2^24 = 16,777,216 particles. The averaged performance of the code corresponds to 10-30% of the theoretical single precision peak performance of the GPU.
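    The hierarchical (block) time-step idea is simple: give each particle the largest power-of-two subdivision of the system step that does not exceed its locally desired step, so that groups of particles stay synchronized and only a subset needs to be advanced at any substep. A minimal sketch (the conventions and parameter names are invented here, not GOTHIC's):

```python
import math

def block_step(dt_desired, dt_max=1.0):
    """Largest dt_max / 2**k that does not exceed the desired time step."""
    if dt_desired >= dt_max:
        return dt_max
    k = math.ceil(math.log2(dt_max / dt_desired))
    return dt_max / 2 ** k

# Particles with very different desired steps land on a shared hierarchy,
# so their update times always align with coarser levels.
steps = [block_step(dt) for dt in (2.0, 0.3, 0.1, 0.01)]
# → [1.0, 0.25, 0.0625, 0.0078125]
```

    Compared with a shared (global) time step set by the most demanding particle, this lets the bulk of particles advance with far fewer force evaluations, which is the source of the 3-5x speedup the abstract reports.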

  6. One hundred ways to process time, frequency, rate and scale in the central auditory system: a pattern-recognition meta-analysis.

    PubMed

    Hemery, Edgar; Aucouturier, Jean-Julien

    2015-01-01

    The mammalian auditory system extracts features from the acoustic environment based on the responses of spatially distributed sets of neurons in the subcortical and cortical auditory structures. The characteristic responses of these neurons (linearly approximated by their spectro-temporal receptive fields, or STRFs) suggest that auditory representations are formed, as early as in the inferior colliculi, on the basis of a time, frequency, rate (temporal modulations) and scale (spectral modulations) analysis of sound. However, how these four dimensions are integrated and processed in subsequent neural networks remains unclear. In this work, we present a new methodology to generate computational insights into the functional organization of such processes. We first propose a systematic framework to explore more than a hundred different computational strategies proposed in the literature to process the output of a generic STRF model. We then evaluate these strategies on their ability to compute perceptual distances between pairs of environmental sounds. Finally, we conduct a meta-analysis of the dataset of all these algorithms' accuracies to examine whether certain combinations of dimensions and certain ways to treat such dimensions are, on the whole, more computationally effective than others. We present an application of this methodology to a dataset of ten environmental sound categories, in which the analysis reveals that (1) models are most effective when they organize STRF data into frequency groupings (which is consistent with the known tonotopic organization of receptive fields in auditory structures), and that (2) models that treat STRF data as time series are no more effective than models that rely only on summary statistics along time (which corroborates recent experimental evidence on texture discrimination by summary statistics).

  7. Just in time? Using QR codes for multi-professional learning in clinical practice.

    PubMed

    Jamu, Joseph Tawanda; Lowi-Jones, Hannah; Mitchell, Colin

    2016-07-01

Clinical guidelines and policies are widely available on the hospital intranet or from the internet, but can be difficult to access at the required time and place. Clinical staff with smartphones could use Quick Response (QR) codes for contemporaneous access to relevant information to support the Just in Time Learning (JIT-L) paradigm. Several studies advocate the use of smartphones to enhance learning amongst medical students and junior doctors in the UK. However, these participants are already technologically orientated. Few studies have explored the use of smartphones in nursing practice. QR codes were generated for each topic and positioned at relevant locations on a medical ward. Support and training were provided for staff. Website analytics and semi-structured interviews were used to evaluate the efficacy, acceptability and feasibility of using QR codes to facilitate Just in Time learning. Use was intermittently high but not sustained. Thematic analysis of interviews revealed a positive assessment of the Just in Time learning paradigm and context-sensitive clinical information. However, there were notable barriers to acceptance, including usability of QR codes and appropriateness of smartphone use in a clinical environment. The use of Just in Time learning for education and reference may be beneficial to healthcare professionals. However, alternative methods of access for less technologically literate users and a change in the culture of mobile device use in clinical areas may be needed.
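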

  8. Trading Speed and Accuracy by Coding Time: A Coupled-circuit Cortical Model

    PubMed Central

    Standage, Dominic; You, Hongzhi; Wang, Da-Hui; Dorris, Michael C.

    2013-01-01

    Our actions take place in space and time, but despite the role of time in decision theory and the growing acknowledgement that the encoding of time is crucial to behaviour, few studies have considered the interactions between neural codes for objects in space and for elapsed time during perceptual decisions. The speed-accuracy trade-off (SAT) provides a window into spatiotemporal interactions. Our hypothesis is that temporal coding determines the rate at which spatial evidence is integrated, controlling the SAT by gain modulation. Here, we propose that local cortical circuits are inherently suited to the relevant spatial and temporal coding. In simulations of an interval estimation task, we use a generic local-circuit model to encode time by ‘climbing’ activity, seen in cortex during tasks with a timing requirement. The model is a network of simulated pyramidal cells and inhibitory interneurons, connected by conductance synapses. A simple learning rule enables the network to quickly produce new interval estimates, which show signature characteristics of estimates by experimental subjects. Analysis of network dynamics formally characterizes this generic, local-circuit timing mechanism. In simulations of a perceptual decision task, we couple two such networks. Network function is determined only by spatial selectivity and NMDA receptor conductance strength; all other parameters are identical. To trade speed and accuracy, the timing network simply learns longer or shorter intervals, driving the rate of downstream decision processing by spatially non-selective input, an established form of gain modulation. Like the timing network's interval estimates, decision times show signature characteristics of those by experimental subjects. Overall, we propose, demonstrate and analyse a generic mechanism for timing, a generic mechanism for modulation of decision processing by temporal codes, and we make predictions for experimental verification. PMID:23592967
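The core mechanism proposed above, a temporal code setting the gain on downstream evidence integration to trade speed for accuracy, can be illustrated with a minimal bounded-accumulation sketch. This is not the paper's coupled-circuit model; the drift, noise, and threshold values are invented for the example:

```python
import numpy as np

def decide(gain, drift=0.1, noise=1.0, threshold=10.0, dt=1.0, rng=None):
    """Integrate noisy evidence to a bound; `gain` scales the integration rate,
    standing in for spatially non-selective input from the timing network.
    Returns (decision_time, correct)."""
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0
    while abs(x) < threshold:
        x += gain * (drift * dt + noise * np.sqrt(dt) * rng.normal())
        t += 1
    return t, x > 0

rng = np.random.default_rng(1)
for gain in (0.5, 2.0):
    trials = [decide(gain, rng=rng) for _ in range(500)]
    rt = np.mean([t for t, _ in trials])
    acc = np.mean([c for _, c in trials])
    print(f"gain={gain}: mean decision time={rt:.1f} steps, accuracy={acc:.2f}")
```

Raising the gain speeds decisions but lowers accuracy, the signature speed-accuracy trade-off: because gain multiplies signal and noise alike while the bound stays fixed, less evidence is effectively accumulated before threshold.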

  9. Computer code for space-time diagnostics of nuclear safety parameters

    SciTech Connect

    Solovyev, D. A.; Semenov, A. A.; Gruzdov, F. V.; Druzhaev, A. A.; Shchukin, N. V.; Dolgenko, S. G.; Solovyeva, I. V.; Ovchinnikova, E. A.

    2012-07-01

The computer code ECRAN 3D (Experimental and Calculation Reactor Analysis) is designed for continuous monitoring and diagnostics of reactor cores and databases for RBMK-1000 on the basis of analytical methods for the interrelation of nuclear safety parameters. The code algorithms are based on the analysis of deviations between the physically measured values and the results of neutron-physical and thermal-hydraulic calculations. Discrepancies between the measured and calculated signals are equivalent to an inadequacy between the performance of the physical device and that of its simulator. The diagnostics system can solve the following problems: identification of the occurrence and time of inconsistent results, localization of failures, and identification and quantification of the causes of inconsistencies. These problems can be effectively solved only when the computer code works in real-time mode. This leads to increased requirements for code performance. As false operations can lead to significant economic losses, the diagnostics system must be based on certified software tools. POLARIS, version 4.2.1 is used for the neutron-physical calculation in the computer code ECRAN 3D. (authors)
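The residual-comparison idea at the heart of such diagnostics, flagging time steps where measured and calculated signals diverge, reduces to a simple threshold check. This is only a schematic of that principle, not the ECRAN 3D algorithm; the tolerance and data are invented:

```python
import numpy as np

def find_inconsistencies(measured, calculated, tol):
    """Return the time-step indices where the measured signal deviates from the
    simulator prediction by more than `tol` (a bare residual-threshold check)."""
    residual = np.abs(np.asarray(measured) - np.asarray(calculated))
    return np.flatnonzero(residual > tol)

calculated = np.ones(10)          # simulator prediction over 10 time steps
measured = np.ones(10)
measured[6:] += 0.5               # a drift appears at step 6
bad = find_inconsistencies(measured, calculated, tol=0.2)
print(bad)  # [6 7 8 9]
```

The earliest flagged index identifies the onset time of the inconsistency; localization and cause attribution would require per-channel residuals on top of this.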

  10. Cumulative Time Series Representation for Code Blue prediction in the Intensive Care Unit.

    PubMed

    Salas-Boni, Rebeca; Bai, Yong; Hu, Xiao

    2015-01-01

Patient monitors in hospitals generate a high number of false alarms that compromise patient care and burden clinicians. In our previous work, we attempted to alleviate this problem by finding combinations of monitor alarms and laboratory tests that were predictive of code blue events, called SuperAlarms. Our current work develops a novel time series representation that accounts for both cumulative effects and temporality, and applies it to code blue prediction in the intensive care unit (ICU). The health status of patients is represented both by a term frequency approach, TF, often used in natural language processing, and by our novel cumulative approach. We call this representation "weighted accumulated occurrence representation", or WAOR. These two representations are fed into an L1 regularized logistic regression classifier and are used to predict code blue events. Performance was assessed online in an independent set. We report the sensitivity of our algorithm at different time windows prior to the code blue event, as well as the work-up to detect ratio and the proportion of false code blue detections divided by the number of false monitor alarms. We obtained a better performance with our cumulative representation, retaining a sensitivity close to our previous work while improving the other metrics.
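The difference between a plain term-frequency vector and a representation that also keeps temporality can be sketched on a toy event stream. The exponential recency weighting below is a guess at the flavor of a "weighted accumulated occurrence" representation, not the paper's exact formula, and the event codes are invented:

```python
import numpy as np

def term_frequency(events, vocab):
    """Bag-of-events: how often each alarm/lab code occurs, ignoring time."""
    v = np.zeros(len(vocab))
    for t, code in events:
        v[vocab.index(code)] += 1
    return v

def weighted_occurrence(events, vocab, now, decay=0.1):
    """Each occurrence contributes with exponential recency weighting, so both
    accumulation and temporality are retained (illustrative form only)."""
    v = np.zeros(len(vocab))
    for t, code in events:
        v[vocab.index(code)] += np.exp(-decay * (now - t))
    return v

vocab = ["HR_alarm", "SpO2_alarm", "lactate_high"]
events = [(0, "HR_alarm"), (50, "HR_alarm"), (58, "lactate_high")]
print(term_frequency(events, vocab))          # raw counts only
print(weighted_occurrence(events, vocab, 60)) # recent events weigh more
```

Either vector could then be fed to an L1-regularized logistic regression; the weighted version lets the classifier distinguish a burst of recent alarms from the same count spread over a long stay.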

  11. On the effect of timing errors in run length codes. [redundancy removal algorithms for digital channels

    NASA Technical Reports Server (NTRS)

    Wilkins, L. C.; Wintz, P. A.

    1975-01-01

Many redundancy removal algorithms employ some sort of run length code. Blocks of timing words are coded with synchronization words inserted between blocks. The probability of incorrectly reconstructing a sample because of a channel error in the timing data is a monotonically nondecreasing function of time since the last synchronization word. In this paper we compute the 'probability that the accumulated magnitude of timing errors equals zero' as a function of time since the last synchronization word for a zero-order predictor (ZOP). The result is valid for any data source that can be modeled by a first-order Markov chain and any digital channel that can be modeled by a channel transition matrix. An example is presented.
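The quantity described, the probability that accumulated timing errors cancel to zero as a function of blocks since the last sync word, can be computed by repeatedly convolving a per-block error-increment distribution. The increment probabilities below are invented for illustration, not taken from the paper:

```python
# Per-block timing-error increment distribution (illustrative numbers):
# with probability p_err/2 the decoded run length is off by -1 or +1.
p_err = 0.02
increment = {-1: p_err / 2, 0: 1 - p_err, +1: p_err / 2}

def p_zero_error(n_blocks):
    """P(accumulated timing error == 0) after each of n blocks since the last
    synchronization word, via repeated convolution (a lazy random walk)."""
    dist = {0: 1.0}
    probs = []
    for _ in range(n_blocks):
        new = {}
        for e, p in dist.items():
            for de, q in increment.items():
                new[e + de] = new.get(e + de, 0.0) + p * q
        dist = new
        probs.append(dist.get(0, 0.0))
    return probs

probs = p_zero_error(50)
# Alignment probability can only decay between sync words; equivalently,
# the error probability is monotonically nondecreasing, as the paper states.
assert all(b <= a + 1e-12 for a, b in zip(probs, probs[1:]))
print(probs[0], probs[-1])
```

A full treatment would derive the increment distribution from the source's Markov chain and the channel transition matrix; the convolution step is the same.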

  12. Speakers' acceptance of real-time speech exchange indicates that we use auditory feedback to specify the meaning of what we say.

    PubMed

    Lind, Andreas; Hall, Lars; Breidegard, Björn; Balkenius, Christian; Johansson, Petter

    2014-06-01

    Speech is usually assumed to start with a clearly defined preverbal message, which provides a benchmark for self-monitoring and a robust sense of agency for one's utterances. However, an alternative hypothesis states that speakers often have no detailed preview of what they are about to say, and that they instead use auditory feedback to infer the meaning of their words. In the experiment reported here, participants performed a Stroop color-naming task while we covertly manipulated their auditory feedback in real time so that they said one thing but heard themselves saying something else. Under ideal timing conditions, two thirds of these semantic exchanges went undetected by the participants, and in 85% of all nondetected exchanges, the inserted words were experienced as self-produced. These findings indicate that the sense of agency for speech has a strong inferential component, and that auditory feedback of one's own voice acts as a pathway for semantic monitoring, potentially overriding other feedback loops. © The Author(s) 2014.

  13. 14 CFR 234.10 - Voluntary disclosure of on-time performance codes.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Voluntary disclosure of on-time performance codes. 234.10 Section 234.10 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION PROCEEDINGS) ECONOMIC REGULATIONS AIRLINE SERVICE QUALITY PERFORMANCE REPORTS § 234.10 Voluntary...

  14. 14 CFR 234.8 - Calculation of on-time performance codes.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Calculation of on-time performance codes. 234.8 Section 234.8 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION PROCEEDINGS) ECONOMIC REGULATIONS AIRLINE SERVICE QUALITY PERFORMANCE REPORTS § 234.8...

  15. 14 CFR 234.9 - Reporting of on-time performance codes.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Reporting of on-time performance codes. 234.9 Section 234.9 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION PROCEEDINGS) ECONOMIC REGULATIONS AIRLINE SERVICE QUALITY PERFORMANCE REPORTS § 234.9 Reporting...

  16. A novel repetition space-time coding scheme for mobile FSO systems

    NASA Astrophysics Data System (ADS)

    Li, Ming; Cao, Yang; Li, Shu-ming; Yang, Shao-wen

    2015-03-01

Considering the influence of stronger atmospheric turbulence, more severe pointing errors and highly dynamic links on the transmission performance of mobile multiple-input multiple-output (MIMO) free space optics (FSO) communication systems, this paper establishes a channel model for the mobile platform. Based on the combination of Alamouti space-time code and time hopping ultra-wide band (TH-UWB) communications, a novel repetition space-time coding (RSTC) method for mobile 2×2 free-space optical communications with pulse position modulation (PPM) is developed. In particular, two decoding methods of equal gain combining (EGC) maximum likelihood detection (MLD) and correlation matrix detection (CMD) are derived. When a quasi-static fading and weak turbulence channel model is considered, simulation results show that whether the channel state information (CSI) is known or not, the coded system demonstrates significantly better symbol error rate (SER) performance than the uncoded one. In other words, transmit diversity can be achieved while conveying the information only through the time delays of the modulated signals transmitted from different antennas. CMD achieves almost the same signal-combining effect as maximal ratio combining (MRC). However, when the channel correlation increases, the SER performance of the coded 2×2 system degrades significantly.
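The Alamouti scheme that the RSTC method builds on can be shown in its classical 2x1 form: two symbols are sent over two slots and recovered by linear combining. This is the textbook scheme, not the paper's PPM/TH-UWB variant, and noise is omitted for clarity:

```python
import numpy as np

rng = np.random.default_rng(0)
s1, s2 = (1 + 1j), (1 - 2j)                       # two information symbols
h1 = rng.normal() + 1j * rng.normal()             # quasi-static channel gains
h2 = rng.normal() + 1j * rng.normal()

# Slot 1: antennas transmit (s1, s2); slot 2: antennas transmit (-s2*, s1*).
r1 = h1 * s1 + h2 * s2                            # received in slot 1
r2 = -h1 * np.conj(s2) + h2 * np.conj(s1)         # received in slot 2

# Linear combining recovers each symbol scaled by the total channel energy:
g = abs(h1) ** 2 + abs(h2) ** 2
s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g
s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g
print(np.allclose(s1_hat, s1), np.allclose(s2_hat, s2))  # True True
```

The cross-terms cancel exactly, which is why full transmit diversity is obtained with only linear processing at the receiver; the repetition/PPM construction in the paper pursues the same diversity through time delays instead of amplitude-phase symbols.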

  17. The Role of Coding Time in Estimating and Interpreting Growth Curve Models.

    ERIC Educational Resources Information Center

    Biesanz, Jeremy C.; Deeb-Sossa, Natalia; Papadakis, Alison A.; Bollen, Kenneth A.; Curran, Patrick J.

    2004-01-01

    The coding of time in growth curve models has important implications for the interpretation of the resulting model that are sometimes not transparent. The authors develop a general framework that includes predictors of growth curve components to illustrate how parameter estimates and their standard errors are exactly determined as a function of…
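The abstract's central point, that how time is coded changes what the growth-curve parameters mean, can be seen in a one-subject toy fit: centering time at the first versus the last wave leaves the slope untouched but moves the intercept from initial to final status. The data values are invented for illustration:

```python
import numpy as np

y = np.array([10.0, 12.1, 13.9, 16.2])   # four waves of (made-up) data
t_first = np.array([0, 1, 2, 3])         # time coded 0 at the first wave
t_last = t_first - 3                     # time coded 0 at the last wave

slope_a, intercept_a = np.polyfit(t_first, y, 1)
slope_b, intercept_b = np.polyfit(t_last, y, 1)

print(round(slope_a, 6) == round(slope_b, 6))  # same growth rate
print(intercept_a, intercept_b)                # initial vs final status
```

In a multilevel growth model the same recoding also changes the variance of, and covariances with, the intercept factor, which is where the non-transparent interpretive consequences discussed above arise.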

  19. One hundred ways to process time, frequency, rate and scale in the central auditory system: a pattern-recognition meta-analysis

    PubMed Central

    Hemery, Edgar; Aucouturier, Jean-Julien

    2015-01-01

The mammalian auditory system extracts features from the acoustic environment based on the responses of spatially distributed sets of neurons in the subcortical and cortical auditory structures. The characteristic responses of these neurons (linearly approximated by their spectro-temporal receptive fields, or STRFs) suggest that auditory representations are formed, as early as in the inferior colliculi, on the basis of a time, frequency, rate (temporal modulations) and scale (spectral modulations) analysis of sound. However, how these four dimensions are integrated and processed in subsequent neural networks remains unclear. In this work, we present a new methodology to generate computational insights into the functional organization of such processes. We first propose a systematic framework to explore more than a hundred different computational strategies proposed in the literature to process the output of a generic STRF model. We then evaluate these strategies on their ability to compute perceptual distances between pairs of environmental sounds. Finally, we conduct a meta-analysis of the dataset of all these algorithms' accuracies to examine whether certain combinations of dimensions and certain ways to treat such dimensions are, on the whole, more computationally effective than others. We present an application of this methodology to a dataset of ten environmental sound categories, in which the analysis reveals that (1) models are most effective when they organize STRF data into frequency groupings (which is consistent with the known tonotopic organization of receptive fields in auditory structures), and that (2) models that treat STRF data as time series are no more effective than models that rely only on summary statistics along time (which corroborates recent experimental evidence on texture discrimination by summary statistics). PMID:26190996

  20. Auditory Reserve and the Legacy of Auditory Experience

    PubMed Central

    Skoe, Erika; Kraus, Nina

    2014-01-01

    Musical training during childhood has been linked to more robust encoding of sound later in life. We take this as evidence for an auditory reserve: a mechanism by which individuals capitalize on earlier life experiences to promote auditory processing. We assert that early auditory experiences guide how the reserve develops and is maintained over the lifetime. Experiences that occur after childhood, or which are limited in nature, are theorized to affect the reserve, although their influence on sensory processing may be less long-lasting and may potentially fade over time if not repeated. This auditory reserve may help to explain individual differences in how individuals cope with auditory impoverishment or loss of sensorineural function. PMID:25405381

  1. Solution of 3-dimensional time-dependent viscous flows. Part 2: Development of the computer code

    NASA Technical Reports Server (NTRS)

    Weinberg, B. C.; Mcdonald, H.

    1980-01-01

There is considerable interest in developing a numerical scheme for solving the time dependent viscous compressible three dimensional flow equations to aid in the design of helicopter rotors. The development of a computer code to solve a three dimensional unsteady approximate form of the Navier-Stokes equations employing a linearized block implicit technique in conjunction with a QR operator scheme is described. Results of calculations of several Cartesian test cases are presented. The computer code can be applied to more complex flow fields such as those encountered on rotating airfoils.

  2. Motivation for Using Generalized Geometry in the Time Dependent Transport Code TDKENO

    SciTech Connect

    Dustin Popp; Zander Mausolff; Sedat Goluoglu

    2016-04-01

We are proposing to use the code TDKENO to model TREAT. TDKENO solves the time dependent, three dimensional Boltzmann transport equation with explicit representation of delayed neutrons. Instead of directly integrating this equation, the neutron flux is factored into two components, a rapidly varying amplitude equation and a slowly varying shape equation, and each is solved separately on a different time scale. The shape equation is solved using the 3D Monte Carlo transport code KENO, from Oak Ridge National Laboratory's SCALE code package. Using the Monte Carlo method to solve the shape equation is still computationally intensive, but the operation is only performed when needed. The amplitude equation is solved deterministically and frequently, so an accurate time-dependent solution is obtained without having to repeatedly perform the expensive Monte Carlo calculation. We have modified TDKENO to incorporate KENO-VI so that we may accurately represent the geometries within TREAT. This paper explains the motivation behind using generalized geometry and provides the results of our modifications. TDKENO uses the Improved Quasi-Static method to accomplish this. In this method, the neutron flux is factored into two components. One component is a purely time-dependent and rapidly varying amplitude function, which is solved deterministically and very frequently (small time steps). The other is a slowly varying flux shape function that weakly depends on time and is only solved when needed (significantly larger time steps).
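The "amplitude equation solved deterministically with small time steps" can be illustrated with one-group point kinetics, the simplest form such an amplitude function takes. The parameters and the reactivity step are invented for the example; TDKENO's actual amplitude solver is more general:

```python
# One-group point-kinetics sketch of a rapidly varying amplitude function
# integrated with explicit Euler at small time steps (illustrative only).
beta, lam, Lam = 0.0065, 0.08, 1.0e-4   # delayed fraction, decay const, generation time

def amplitude(rho_of_t, dt=1.0e-4, t_end=1.0):
    n = 1.0                              # neutron amplitude, initially critical
    C = beta * n / (Lam * lam)           # delayed-neutron precursors at equilibrium
    t = 0.0
    while t < t_end:
        rho = rho_of_t(t)
        dn = ((rho - beta) / Lam) * n + lam * C
        dC = (beta / Lam) * n - lam * C
        n += dt * dn
        C += dt * dC
        t += dt
    return n

# Zero reactivity: the amplitude stays at its critical value.
print(amplitude(lambda t: 0.0))
# A +0.1% reactivity step at t = 0.5 s drives a delayed-supercritical rise.
print(amplitude(lambda t: 0.001 if t >= 0.5 else 0.0))
```

The tiny generation time makes the amplitude respond on millisecond scales, which is exactly why the quasi-static factorization solves it frequently while recomputing the expensive flux shape only occasionally.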

  3. Combining sparse coding and time-domain features for heart sound classification.

    PubMed

    Whitaker, Bradley M; Suresha, Pradyumna B; Liu, Chengyu; Clifford, Gari D; Anderson, David V

    2017-07-31

    This paper builds upon work submitted as part of the 2016 PhysioNet/CinC Challenge, which used sparse coding as a feature extraction tool on audio PCG data for heart sound classification. In sparse coding, preprocessed data is decomposed into a dictionary matrix and a sparse coefficient matrix. The dictionary matrix represents statistically important features of the audio segments. The sparse coefficient matrix is a mapping that represents which features are used by each segment. Working in the sparse domain, we train support vector machines (SVMs) for each audio segment (S1, systole, S2, diastole) and the full cardiac cycle. We train a sixth SVM to combine the results from the preliminary SVMs into a single binary label for the entire PCG recording. In addition to classifying heart sounds using sparse coding, this paper presents two novel modifications. The first uses a matrix norm in the dictionary update step of sparse coding to encourage the dictionary to learn discriminating features from the abnormal heart recordings. The second combines the sparse coding features with time-domain features in the final SVM stage. The original algorithm submitted to the challenge achieved a cross-validated mean accuracy (MAcc) score of 0.8652 (Se  =  0.8669 and Sp  =  0.8634). After incorporating the modifications new to this paper, we report an improved cross-validated MAcc of 0.8926 (Se  =  0.9007 and Sp  =  0.8845). Our results show that sparse coding is an effective way to define spectral features of the cardiac cycle and its sub-cycles for the purpose of classification. In addition, we demonstrate that sparse coding can be combined with additional feature extraction methods to improve classification accuracy.
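The sparse-coding step described above, decomposing preprocessed data into a dictionary and a sparse coefficient matrix, can be sketched with a plain iterative soft-thresholding (ISTA) solver against a fixed dictionary. This is a generic lasso-style sparse coder on synthetic data, not the paper's learned-dictionary pipeline or its discriminative dictionary update:

```python
import numpy as np

def ista(x, D, lam=0.05, n_iter=200):
    """Sparse-code one signal x against dictionary D (columns = atoms) by
    minimizing ||x - D a||^2 + lam * ||a||_1 via iterative soft-thresholding."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = a - D.T @ (D @ a - x) / L      # gradient step on the data term
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 32))
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
truth = np.zeros(32)
truth[[3, 17]] = (1.5, -2.0)
x = D @ truth                              # a signal built from two atoms
a = ista(x, D)
print(np.flatnonzero(np.abs(a) > 0.5))     # indices of the dominant atoms
```

The recovered coefficient vector is sparse, so it can serve directly as a compact feature vector for a downstream classifier such as an SVM, which is the role it plays in the pipeline above.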

  4. Brillouin optical time-domain analyzer for extended sensing range using probe dithering and cyclic coding

    NASA Astrophysics Data System (ADS)

    Iribas, Haritz; Loayssa, Alayn; Sauser, Florian; Llera, Miguel; Le Floch, Sébastien

    2017-04-01

We present an enhanced performance Brillouin optical time-domain analysis sensor that uses dual probe waves with optical frequency modulation and cyclic coding. The frequency modulation serves to increase the probe power that can be injected into the fiber before the onset of non-local effects and of noise generated by spontaneous Brillouin scattering. This leads to a higher detected signal-to-noise ratio (SNR), which is further increased by the coding gain. The enhanced SNR translates into an extended range for the sensor, with experiments demonstrating 1-m spatial resolution over a 164 km fiber loop with a 3-MHz Brillouin frequency shift measurement precision at the worst-contrast position. In addition, we introduce a study of the power limits that can be injected into the fiber with cyclic coding before the appearance of distortions in the decoded signal.

  5. Organizing principles of real-time memory encoding: neural clique assemblies and universal neural codes.

    PubMed

    Lin, Longnian; Osan, Remus; Tsien, Joe Z

    2006-01-01

Recent identification of network-level coding units, termed neural cliques, in the hippocampus has enabled real-time patterns of memory traces to be mathematically described, directly visualized, and dynamically deciphered. These memory coding units are functionally organized in a categorical and hierarchical manner, suggesting that the internal representation of external events in the brain is achieved not by recording exact details of those events, but rather by recreating its own selective pictures based on cognitive importance. This neural-clique-based hierarchical-extraction and parallel-binding process enables the brain to acquire not only large storage capacity but also abstraction and generalization capability. In addition, activation patterns of the neural clique assemblies can be converted to strings of binary codes that would permit universal categorizations of internal brain representations across individuals and species.
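The conversion of clique-assembly activation patterns into binary code strings mentioned at the end reduces, in its simplest reading, to thresholding each assembly's activity. The firing rates and cutoff below are invented; the actual decoding in this line of work is statistical, not a fixed threshold:

```python
import numpy as np

rates = np.array([12.0, 0.5, 8.3, 0.2, 15.1])   # activity of 5 clique assemblies (Hz)
threshold = 5.0                                  # illustrative activation cutoff

# Active assemblies map to 1, silent ones to 0, yielding a binary code string:
code = "".join("1" if r > threshold else "0" for r in rates)
print(code)  # "10101"
```

Because the string abstracts away absolute rates, the same code can in principle be compared across recording sessions, individuals, or species, which is the universality claim above.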

  6. Dynamic Divisive Normalization Predicts Time-Varying Value Coding in Decision-Related Circuits

    PubMed Central

    LoFaro, Thomas; Webb, Ryan; Glimcher, Paul W.

    2014-01-01

    Normalization is a widespread neural computation, mediating divisive gain control in sensory processing and implementing a context-dependent value code in decision-related frontal and parietal cortices. Although decision-making is a dynamic process with complex temporal characteristics, most models of normalization are time-independent and little is known about the dynamic interaction of normalization and choice. Here, we show that a simple differential equation model of normalization explains the characteristic phasic-sustained pattern of cortical decision activity and predicts specific normalization dynamics: value coding during initial transients, time-varying value modulation, and delayed onset of contextual information. Empirically, we observe these predicted dynamics in saccade-related neurons in monkey lateral intraparietal cortex. Furthermore, such models naturally incorporate a time-weighted average of past activity, implementing an intrinsic reference-dependence in value coding. These results suggest that a single network mechanism can explain both transient and sustained decision activity, emphasizing the importance of a dynamic view of normalization in neural coding. PMID:25429145
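The differential-equation form of normalization described above can be sketched with a small Euler integration. The equation below, dR_i/dt = (-R_i + V_i / (sigma + sum_j R_j)) / tau, and all parameter values are an illustrative guess at the model class, not the paper's fitted model:

```python
import numpy as np

def simulate(values, sigma=0.1, tau=0.02, dt=0.001, t_end=0.5):
    """Euler-integrate a dynamic divisive-normalization circuit: each unit's
    drive is its value divided by the (sigma-offset) summed population activity."""
    values = np.asarray(values, float)
    R = np.zeros(len(values))
    trace = []
    for _ in range(int(t_end / dt)):
        R = R + dt * (-R + values / (sigma + R.sum())) / tau
        trace.append(R.copy())
    return np.array(trace)

trace = simulate([1.0, 0.5])
print(trace[-1])  # steady state preserves the value ordering
```

Two properties of value normalization fall out directly: steady-state activity keeps the ratio of the input values, and adding a third alternative divisively suppresses the responses to the original two, the context dependence emphasized above.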

  7. Two-Way Time Transfer With Dual Pseudo-Random Noise Codes

    DTIC Science & Technology

    2008-12-01

...binary offset carrier (BOC) adopted by GALILEO or the modernized GPS ranging signal [2] brings up an idea to overcome this problem. Though the BOC signal does not... addition of square-wave modulation using the normal PRN code. The generated BOC signal is spread in the frequency domain depending on the frequency of... a versatile analog-to-digital sampler, a waveform generator, as well as a... (Presented at the 40th Annual Precise Time and Time Interval (PTTI) Meeting.)

  8. A Design of Low Frequency Time-Code Receiver Based on DSP and FPGA

    NASA Astrophysics Data System (ADS)

    Li, Guo-Dong; Xu, Lin-Sheng

    2006-06-01

The hardware of a low frequency time-code receiver which was designed with FPGA (field programmable gate array) and DSP (digital signal processor) is introduced. The method of realizing time synchronization for the receiver system is described. The software developed for DSP and FPGA is expounded, and the results of test and simulation are presented. The design is characterized by high accuracy, good reliability, fair extensibility, etc.

  9. Results From Time Transfer Experiments Based on GLONASS P-Code Measurements from RINEX Files

    DTIC Science & Technology

    2001-01-01

approach, using the CGGTTS data computed by an internal software of the time receivers. We choose here another approach and analyze the raw data... These delays can be computed either from the CGGTTS files or directly from the raw P-code data. We show that the first approach is better than the... shown by different studies ([2] [5] [4] [6] [7]). In all the mentioned studies, the time transfer results were obtained using CGGTTS (GPS

  10. Reliable Wireless Broadcast with Linear Network Coding for Multipoint-to-Multipoint Real-Time Communications

    NASA Astrophysics Data System (ADS)

    Kondo, Yoshihisa; Yomo, Hiroyuki; Yamaguchi, Shinji; Davis, Peter; Miura, Ryu; Obana, Sadao; Sampei, Seiichi

This paper proposes multipoint-to-multipoint (MPtoMP) real-time broadcast transmission using network coding for ad-hoc networks like video game networks. We aim to achieve highly reliable MPtoMP broadcasting using IEEE 802.11 media access control (MAC) that does not include a retransmission mechanism. When each node detects packets from the other nodes in a sequence, the correctly detected packets are network-encoded, and the encoded packet is broadcast in the next sequence as a piggy-back for its native packet. To prevent an increase of overhead in each packet due to piggy-back packet transmission, the network coding vector for each node is exchanged between all nodes in the negotiation phase. Each user keeps using the same coding vector generated in the negotiation phase, and only coding information that represents which user signal is included in the network coding process is transmitted along with the piggy-back packet. Our simulation results show that the proposed method can provide higher reliability than other schemes using multipoint relay (MPR) or redundant transmissions such as forward error correction (FEC). We also implement the proposed method in a wireless testbed, and show that the proposed method achieves high reliability in a real-world environment with a practical degree of complexity when installed on current wireless devices.
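The payoff of network-coded broadcast, one coded transmission repairing different losses at different receivers, is easiest to see in the minimal XOR case. This is the generic textbook construction, not the paper's coding-vector negotiation scheme:

```python
# Minimal XOR network-coding sketch: a node that heard packets A and B
# broadcasts A XOR B; any neighbor that already holds one of the two
# recovers the other from the single coded transmission.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

pkt_a = b"node-A-frame"
pkt_b = b"node-B-frame"
coded = xor_bytes(pkt_a, pkt_b)       # one broadcast instead of two

# A receiver that missed pkt_b but holds pkt_a:
recovered = xor_bytes(coded, pkt_a)
print(recovered == pkt_b)  # True
```

A neighbor missing pkt_a instead would XOR the same coded packet with pkt_b, so a single piggy-backed transmission serves both, which is where the reliability gain over plain repetition comes from.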

  11. Auditory system

    NASA Technical Reports Server (NTRS)

    Ades, H. W.

    1973-01-01

    The physical correlations of hearing, i.e. the acoustic stimuli, are reported. The auditory system, consisting of external ear, middle ear, inner ear, organ of Corti, basilar membrane, hair cells, inner hair cells, outer hair cells, innervation of hair cells, and transducer mechanisms, is discussed. Both conductive and sensorineural hearing losses are also examined.

  13. The Perception of Auditory Motion

    PubMed Central

    Leung, Johahn

    2016-01-01

The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control as well as in entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotations and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception. PMID:27094029

  14. Identification of Dynamic Patterns of Speech-Evoked Auditory Brainstem Response Based on Ensemble Empirical Mode Decomposition and Nonlinear Time Series Analysis Methods

    NASA Astrophysics Data System (ADS)

    Mozaffarilegha, Marjan; Esteki, Ali; Ahadi, Mohsen; Nazeri, Ahmadreza

The speech-evoked auditory brainstem response (sABR) shows how complex sounds such as speech and music are processed in the auditory system. Speech-ABR could be used to evaluate particular impairments and improvements in the auditory processing system. Many researchers have used linear approaches for characterizing different components of the sABR signal, whereas nonlinear techniques are not so commonly applied. The primary aim of the present study is to examine the underlying dynamics of normal sABR signals. The secondary goal is to evaluate whether some chaotic features exist in this signal. We have presented a methodology for determining various components of sABR signals by performing Ensemble Empirical Mode Decomposition (EEMD) to get the intrinsic mode functions (IMFs). Then, composite multiscale entropy (CMSE), the largest Lyapunov exponent (LLE) and deterministic nonlinear prediction are computed for each extracted IMF. EEMD decomposes the sABR signal into five modes and a residue. The CMSE results of sABR signals obtained from 40 healthy people showed that the 1st and 2nd IMFs were similar to white noise, the 3rd IMF to a synthetic chaotic time series, and the 4th and 5th IMFs to a sine waveform. LLE analysis showed positive values for the 3rd IMF. Moreover, the 1st and 2nd IMFs showed overlap with surrogate data, and the 3rd, 4th and 5th IMFs showed no overlap with the corresponding surrogate data. The results showed the presence of noisy, chaotic and deterministic components in the signal, which respectively corresponded to the 1st and 2nd IMFs, the 3rd IMF, and the 4th and 5th IMFs. While these findings provide supportive evidence of the chaos conjecture for the 3rd IMF, they do not confirm any such claims. However, they provide a first step towards an understanding of the nonlinear behavior of auditory system dynamics at the brainstem level.
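The sample entropy statistic that composite multiscale entropy is built from can be sketched directly: it is the negative log of the conditional probability that template sequences matching for m points also match for m+1. This is a plain single-scale implementation on toy signals, not the paper's full CMSE pipeline, and the comparison signals are synthetic:

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(m, r): irregular signals (noise-like IMFs) score high,
    regular ones (sine-like IMFs) score low."""
    x = np.asarray(x, float)
    r = r_factor * x.std()
    def match_count(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
        return (d <= r).sum() - len(templates)   # exclude self-matches
    return -np.log(match_count(m + 1) / match_count(m))

rng = np.random.default_rng(0)
noise = rng.normal(size=300)                      # white-noise-like component
sine = np.sin(np.linspace(0, 12 * np.pi, 300))    # sine-like component
print(sample_entropy(noise), sample_entropy(sine))
```

This single number separates the noise-like from the deterministic extremes; the multiscale variant repeats the computation on coarse-grained copies of the signal, which is what distinguishes chaotic intermediates like the 3rd IMF.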

  15. Adapted wavelet transform improves time-frequency representations: a study of auditory elicited P300-like event-related potentials in rats

    NASA Astrophysics Data System (ADS)

    Richard, Nelly; Laursen, Bettina; Grupe, Morten; Drewes, Asbjørn M.; Graversen, Carina; Sørensen, Helge B. D.; Bastlund, Jesper F.

    2017-04-01

    Objective. Active auditory oddball paradigms are simple tone-discrimination tasks used to study the P300 deflection of event-related potentials (ERPs). These ERPs may be quantified by time-frequency analysis. As auditory stimuli cause early high-frequency and late low-frequency ERP oscillations, the continuous wavelet transform (CWT) is often chosen for decomposition due to its multi-resolution properties. However, because the conventional CWT applies a single mother wavelet across the entire spectrum, its time-frequency resolution is not optimal at all scales. To account for this, we developed and validated a novel method specifically refined to analyse P300-like ERPs in rats. Approach. An adapted CWT (aCWT) was implemented to preserve high time-frequency resolution across all scales by employing multiple wavelets operating at different scales. First, decomposition of simulated ERPs was illustrated using the classical CWT and the aCWT. Next, the two methods were applied to EEG recordings obtained from prefrontal cortex in rats performing a two-tone auditory discrimination task. Main results. While only early ERP frequency changes between responses to target and non-target tones were detected by the CWT, both early and late changes were captured accurately by the aCWT in rat ERPs. Increased frontal power and phase synchrony were observed, particularly within the theta and gamma frequency bands, during deviant tones. Significance. The study suggests superior performance of the aCWT over the CWT for detailed quantification of the time-frequency properties of ERPs. Our methodological investigation indicates that accurate and complete assessment of the time-frequency components of short-duration neural signals is feasible with the novel analysis approach, which may be advantageous for characterising several types of evoked potentials, in rodents in particular.

  16. An auditory display tool for DNA sequence analysis.

    PubMed

    Temple, Mark D

    2017-04-24

    DNA Sonification refers to the use of an auditory display to convey the information content of DNA sequence data. Six sonification algorithms are presented that each produce an auditory display. These algorithms are logically designed from the simple through to the more complex. Three of these parse individual nucleotides, nucleotide pairs or codons into musical notes to give rise to 4, 16 or 64 notes, respectively. Codons may also be parsed degenerately into 20 notes with respect to the genetic code. Lastly, nucleotide pairs can be parsed as two separate frames, or codons can be parsed as three reading frames, giving rise to multiple streams of audio. The most informative sonification algorithm reads the DNA sequence as codons in three reading frames to produce three concurrent streams of audio in an auditory display. This approach is advantageous since a start or stop codon in any frame directly starts or stops the audio in that frame, leaving the other frames unaffected. Using these methods, DNA sequences such as open reading frames or repetitive DNA sequences can be distinguished from one another. These sonification tools are available through a webpage interface in which an input DNA sequence can be processed in real time to produce an auditory display playable directly within the browser. The potential of this approach as an analytical tool is discussed with reference to auditory displays derived from test sequences including simple nucleotide sequences, repetitive DNA sequences and coding or non-coding genes. This study presents a proof-of-concept that some properties of a DNA sequence can be identified through sonification alone and argues for the inclusion of such tools within DNA sequence browsers as an adjunct to existing visual and analytical tools.
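
    The three-reading-frame parsing can be illustrated with a toy codon-to-note mapping. The MIDI numbering and the specific codon ordering below are hypothetical, not the published tool's mapping; only the frame logic (a start codon opening a stream, a stop codon silencing it) follows the description above.

```python
BASES = "ACGT"
BASE_NOTE = 60   # MIDI middle C; this numbering is an invented convention

def codon_to_midi(codon):
    """Map the 64 codons onto 64 consecutive MIDI note numbers (toy)."""
    index = sum(BASES.index(b) * 4 ** k for k, b in enumerate(codon))
    return BASE_NOTE + index

def sonify_frames(seq):
    """Read the sequence as codons in all three reading frames; audio in
    a frame sounds only between a start codon (ATG) and a stop codon."""
    stops = {"TAA", "TAG", "TGA"}
    frames = []
    for offset in range(3):
        notes, playing = [], False
        for i in range(offset, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if codon == "ATG":
                playing = True
            if playing:
                notes.append(codon_to_midi(codon))
            if codon in stops:
                playing = False
        frames.append(notes)
    return frames

# A short open reading frame sounds only in frame 0; the shifted frames
# never see a start codon and stay silent.
frames = sonify_frames("ATGGCTTAA")
```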

  17. Automation from pictures: Producing real time code from a state transition diagram

    SciTech Connect

    Kozubal, A.J.

    1991-01-01

    The state transition diagram (STD) model has been helpful in the design of real time software, especially with the emergence of graphical computer aided software engineering (CASE) tools. Nevertheless, the translation of the STD to real time code has in the past been primarily a manual task. At Los Alamos we have automated this process. The designer constructs the STD using a CASE tool (Cadre Teamwork) using a special notation for events and actions. A translator converts the STD into an intermediate state notation language (SNL), and this SNL is compiled directly into C code (a state program). Execution of the state program is driven by external events, allowing multiple state programs to effectively share the resources of the host processor. Since the design and the code are tightly integrated through the CASE tool, the design and code never diverge, and we avoid design obsolescence. Furthermore, the CASE tool automates the production of formal technical documents from the graphic description encapsulated by the CASE tool. 10 refs., 3 figs.
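
    The STD-to-state-program idea can be miniaturized as a table-driven, event-driven interpreter. This Python toy stands in for the SNL-to-C translation described above; the states, events and actions are invented for illustration, and a real state program would of course be generated C driven by external events.

```python
# A toy, table-driven analogue of the STD -> state-program translation:
# states and transitions live in a table; external events drive execution.
class StateProgram:
    def __init__(self, transitions, initial):
        # transitions: {(state, event): (next_state, action_or_None)}
        self.transitions = transitions
        self.state = initial
        self.log = []

    def dispatch(self, event):
        key = (self.state, event)
        if key not in self.transitions:
            return  # events with no matching transition are ignored
        next_state, action = self.transitions[key]
        if action:
            self.log.append(action)   # stand-in for executing the action
        self.state = next_state

table = {
    ("idle", "start"): ("running", "open_valve"),
    ("running", "fault"): ("error", "close_valve"),
    ("error", "reset"): ("idle", None),
}
prog = StateProgram(table, "idle")
for ev in ["start", "fault", "reset", "fault"]:
    prog.dispatch(ev)
# after the sequence, the machine is back in "idle" and the stray
# "fault" event in that state was ignored
```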

  18. Role of precise spike timing in coding of dynamic vibrissa stimuli in somatosensory thalamus.

    PubMed

    Montemurro, Marcelo A; Panzeri, Stefano; Maravall, Miguel; Alenda, Andrea; Bale, Michael R; Brambilla, Marco; Petersen, Rasmus S

    2007-10-01

    Rats discriminate texture by whisking their vibrissae across the surfaces of objects. This process induces corresponding vibrissa vibrations, which must be accurately represented by neurons in the somatosensory pathway. In this study, we investigated the neural code for vibrissa motion in the ventroposterior medial (VPm) nucleus of the thalamus by single-unit recording. We found that neurons conveyed a great deal of information (up to 77.9 bits/s) about vibrissa dynamics. The key was precise spike timing, which typically varied by less than a millisecond from trial to trial. The neural code was sparse, the average spike being remarkably informative (5.8 bits/spike). This implies that as few as four VPm spikes, coding independent information, might reliably differentiate between 10^6 textures. To probe the mechanism of information transmission, we compared the role of time-varying firing rate to that of temporally correlated spike patterns in two ways: 93.9% of the information encoded by a neuron could be accounted for by a hypothetical neuron with the same time-dependent firing rate but no correlations between spikes; moreover, at least 93.4% of the information in the spike trains could be decoded even if temporal correlations were ignored. Taken together, these results suggest that the essence of the VPm code for vibrissa motion is firing rate modulation on a submillisecond timescale. The significance of such a code may be that it enables a small number of neurons, firing only few spikes, to convey distinctions between very many different textures to the barrel cortex.
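
    The entropy bookkeeping behind such bits-per-spike figures can be sketched with binary spike words, in the spirit of direct-method estimators. This is not the authors' estimator: a real information estimate subtracts the noise entropy measured across repeated stimulus trials, whereas the sketch below computes only the total word entropy of a synthetic Poisson train, which upper-bounds the information rate. Bin size, rate and word length are illustrative.

```python
import numpy as np

def word_entropy(spikes, word_len):
    """Entropy in bits of the distribution of binary spike 'words'."""
    n = (len(spikes) // word_len) * word_len
    words = spikes[:n].reshape(-1, word_len)
    codes = words.dot(1 << np.arange(word_len))   # word -> integer code
    _, counts = np.unique(codes, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(1)
dt = 0.001                        # 1 ms bins: submillisecond-scale timing
rate = 20.0                       # mean firing rate, spikes/s
spikes = (rng.random(200_000) < rate * dt).astype(int)

word_len = 10
H = word_entropy(spikes, word_len)        # bits per 10 ms word
entropy_rate = H / (word_len * dt)        # bits/s, bound on info rate
bits_per_spike = entropy_rate / rate      # high because spikes are sparse
```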

  19. Neural Coding of Interaural Time Differences with Bilateral Cochlear Implants in Unanesthetized Rabbits

    PubMed Central

    Hancock, Kenneth E.; Delgutte, Bertrand

    2016-01-01

    time difference (ITD)] to identify where the sound is coming from. This problem is especially acute at the high stimulation rates used in clinical CI processors. This study provides a better understanding of ITD processing with bilateral CIs and shows a parallel between human performance in ITD discrimination and neural responses in the auditory midbrain. The present study is the first report on binaural properties of auditory neurons with CIs in unanesthetized animals. PMID:27194332

  20. Process timing and its relation to the coding of tonal harmony.

    PubMed

    Aksentijevic, Aleksandar; Barber, Paul J; Elliott, Mark A

    2011-10-01

    Advances in auditory research suggest that gamma-band synchronization of frequency-specific cortical loci could be responsible for the integration of pure tones (harmonics) into harmonic complex tones. Thus far, evidence for such a mechanism has been revealed in neurophysiological studies, with little corroborative psychophysical evidence. In six experiments, we observed a rate- and time-specific response-time advantage for a sequence of target pips when the defining frequency of the target was a fractional multiple of a priming frequency. The effect was only observed when the prime and target sequences were presented at 33 pips per second and when the interstimulus interval was approximately 100 or 250 ms. This evidence implicates oscillatory gamma-band activity in the representation of harmonic complex tones and suggests that synchronization with precise temporal characteristics is important for disambiguating related harmonic templates. An outline of a model is presented which accounts for these findings in terms of fast resynchronization of relevant neuronal assemblies.

  1. Adaptive sphere decoding for space-time codes of wireless MIMO communications

    NASA Astrophysics Data System (ADS)

    Chen, Xinjia; Walker, Ernest

    2010-04-01

    In this paper, we develop an adaptive sphere decoding technique for space-time coding in wireless MIMO communications. The technique uses the statistics of previous decoding results to reduce the complexity of subsequent decoding. Specifically, we propose a method for determining the initial sphere radius for the decoding of a future time-frame based on a queue of records of the minimum sphere radii obtained from the decoding of previous time-frames. Concrete methods are derived for choosing appropriate queue sizes. Numerical experiments demonstrate the efficiency of the adaptive technique.
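
    The adaptive-radius idea can be sketched as follows: a depth-first sphere decoder prunes on accumulated distance, and the initial radius for each new frame is seeded from a queue of the minimum radii found in earlier frames. The decoder below works on a real-valued toy lattice; the queue length, seed radius and channel model are illustrative, not the paper's derived choices.

```python
import numpy as np
from collections import deque

def sphere_decode(H, y, alphabet, radius):
    """Depth-first sphere decoder: QR-triangularize the channel, then
    prune any partial symbol vector whose accumulated squared distance
    already exceeds the (shrinking) squared radius."""
    Q, R = np.linalg.qr(H)
    z = Q.T @ y
    n = H.shape[1]
    best = None
    best_d2 = radius ** 2

    def search(level, s, d2):
        nonlocal best, best_d2
        if level < 0:
            best, best_d2 = s.copy(), d2
            return
        for a in alphabet:
            s[level] = a
            e = z[level] - R[level, level:] @ s[level:]
            nd2 = d2 + e ** 2
            if nd2 <= best_d2:                 # prune outside the sphere
                search(level - 1, s, nd2)

    search(n - 1, np.zeros(n), 0.0)
    if best is None:
        return None, None
    return best, float(np.sqrt(best_d2))

# Adaptive initial radius: seed each frame's search with the largest of
# the last few frames' final (minimum) radii -- the core idea above.
radius_queue = deque([2.5], maxlen=8)          # queue size is illustrative
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 4))                # toy 4x4 real channel
s_true = rng.choice([-1.0, 1.0], size=4)       # BPSK-like symbols
y = H @ s_true + 0.01 * rng.standard_normal(4)
s_hat, r_min = sphere_decode(H, y, (-1.0, 1.0), max(radius_queue))
radius_queue.append(r_min)                     # feeds the next frame
```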

  2. Imaginary time propagation code for large-scale two-dimensional eigenvalue problems in magnetic fields

    NASA Astrophysics Data System (ADS)

    Luukko, P. J. J.; Räsänen, E.

    2013-03-01

    We present a code for solving the single-particle, time-independent Schrödinger equation in two dimensions. Our program utilizes the imaginary time propagation (ITP) algorithm, and it includes the most recent developments in the ITP method: the arbitrary order operator factorization and the exact inclusion of a (possibly very strong) magnetic field. Our program is able to solve thousands of eigenstates of a two-dimensional quantum system in reasonable time with commonly available hardware. The main motivation behind our work is to allow the study of highly excited states and energy spectra of two-dimensional quantum dots and billiard systems with a single versatile code, e.g., in quantum chaos research. In our implementation we emphasize a modern and easily extensible design, simple and user-friendly interfaces, and an open-source development philosophy.
    Catalogue identifier: AENR_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENR_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU General Public License version 3
    No. of lines in distributed program, including test data, etc.: 11310
    No. of bytes in distributed program, including test data, etc.: 97720
    Distribution format: tar.gz
    Programming language: C++ and Python.
    Computer: Tested on x86 and x86-64 architectures.
    Operating system: Tested under Linux with the g++ compiler. Any POSIX-compliant OS with a C++ compiler and the required external routines should suffice.
    Has the code been vectorised or parallelized?: Yes, with OpenMP.
    RAM: 1 MB or more, depending on system size.
    Classification: 7.3.
    External routines: FFTW3 (http://www.fftw.org), CBLAS (http://netlib.org/blas), LAPACK (http://www.netlib.org/lapack), HDF5 (http://www.hdfgroup.org/HDF5), OpenMP (http://openmp.org), TCLAP (http://tclap.sourceforge.net), Python (http://python.org), Google Test (http://code.google.com/p/googletest/)
    Nature of problem: Numerical calculation
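
    The core ITP iteration can be sketched with a second-order (Strang) split-step scheme: evolve in imaginary time and renormalize, so the state relaxes onto the ground state. The toy below treats the 2-D harmonic oscillator in units hbar = m = omega = 1 (ground-state energy E0 = 1); it is a minimal sketch, not the published code's arbitrary-order factorization or magnetic-field treatment, and all grid parameters are illustrative.

```python
import numpy as np

N, L = 64, 12.0
x = (np.arange(N) - N // 2) * (L / N)
X, Y = np.meshgrid(x, x, indexing="ij")
V = 0.5 * (X ** 2 + Y ** 2)                      # harmonic trap
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY = np.meshgrid(k, k, indexing="ij")
T = 0.5 * (KX ** 2 + KY ** 2)                    # kinetic term in k-space

dt, dx = 0.01, L / N
psi = np.exp(-((X - 1.0) ** 2 + Y ** 2))         # arbitrary start state
for _ in range(3000):
    psi = psi * np.exp(-0.5 * dt * V)            # half potential step
    psi = np.fft.ifft2(np.exp(-dt * T) * np.fft.fft2(psi))  # kinetic step
    psi = psi * np.exp(-0.5 * dt * V)            # half potential step
    psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx ** 2)      # renormalize

# energy expectation <psi|H|psi>, kinetic part evaluated spectrally
psi_k = np.fft.fft2(psi)
E_kin = np.sum(T * np.abs(psi_k) ** 2) / np.sum(np.abs(psi_k) ** 2)
E_pot = np.sum(V * np.abs(psi) ** 2) * dx ** 2
E0 = float(E_kin + E_pot)                        # converges toward 1.0
```

    Excited states are obtained in the same framework by propagating several states and re-orthogonalizing them against each other at each step.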

  3. Coded throughput performance simulations for the time-varying satellite channel. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Han, Li

    1995-01-01

    The design of a reliable satellite communication link involving the data transfer from a small, low-orbit satellite to a ground station, but through a geostationary satellite, was examined. In such a scenario, the received signal power to noise density ratio increases as the transmitting low-orbit satellite comes into view, and then decreases as it then departs, resulting in a short-duration, time-varying communication link. The optimal values of the small satellite antenna beamwidth, signaling rate, modulation scheme and the theoretical link throughput (in bits per day) have been determined. The goal of this thesis is to choose a practical coding scheme which maximizes the daily link throughput while satisfying a prescribed probability of error requirement. We examine the throughput of both fixed rate and variable rate concatenated forward error correction (FEC) coding schemes for the additive white Gaussian noise (AWGN) channel, and then examine the effect of radio frequency interference (RFI) on the best coding scheme among them. Interleaving is used to mitigate degradation due to RFI. It was found that the variable rate concatenated coding scheme could achieve 74 percent of the theoretical throughput, equivalent to 1.11 Gbits/day based on the cutoff rate R_0. For comparison, 87 percent is achievable for the AWGN-only case.

  4. Network Coded Cooperative Communication in a Real-Time Wireless Hospital Sensor Network.

    PubMed

    Prakash, R; Balaji Ganesh, A; Sivabalan, Somu

    2017-05-01

    The paper presents a network coded cooperative communication (NC-CC) enabled wireless hospital sensor network architecture for monitoring the health and postural activities of a patient. A wearable device, referred to as a smartband, is interfaced with pulse rate and body temperature sensors and an accelerometer, along with wireless protocol services such as Bluetooth, a Radio-Frequency transceiver and Wi-Fi. The energy efficiency of the wearable device is improved by embedding a linear acceleration based transmission duty cycling algorithm (LA-TDC). A real-time demonstration is carried out in a hospital environment to evaluate performance characteristics such as power spectral density, energy consumption, signal to noise ratio, packet delivery ratio and transmission offset. The resource sharing and energy efficiency features of the network coding technique are improved by an algorithm referred to as network coding based dynamic retransmit/rebroadcast decision control (NC-DRDC). From the experimental results, it is observed that the proposed NC-DRDC algorithm reduces network traffic and end-to-end delay by averages of 27.8% and 21.6%, respectively, compared to traditional network coded wireless transmission. The wireless architecture is deployed in a hospital environment and the results are then successfully validated.
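
    The network-coding gain at the heart of NC-CC schemes can be illustrated with the classic two-way XOR example: the relay broadcasts one coded packet instead of forwarding two, and each destination recovers the packet it is missing. The packet contents below are invented for illustration and have nothing to do with the paper's protocol details.

```python
def xor_packets(a: bytes, b: bytes) -> bytes:
    """Bitwise-XOR two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"pulse=72"   # packet that destination B already overheard
p2 = b"temp36.8"   # packet that destination A already overheard
coded = xor_packets(p1, p2)    # the relay broadcasts this single packet

recovered_at_A = xor_packets(coded, p1)   # A XORs with p1 -> obtains p2
recovered_at_B = xor_packets(coded, p2)   # B XORs with p2 -> obtains p1
```

    One broadcast thus replaces two unicast transmissions, which is the source of the traffic reduction reported above.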

  5. Surveying multidisciplinary aspects in real-time distributed coding for Wireless Sensor Networks.

    PubMed

    Braccini, Carlo; Davoli, Franco; Marchese, Mario; Mongelli, Maurizio

    2015-01-27

    Wireless Sensor Networks (WSNs), where a multiplicity of sensors observe a physical phenomenon and transmit their measurements to one or more sinks, pertain to the class of multi-terminal source and channel coding problems of Information Theory. In this category, "real-time" coding is often encountered for WSNs, referring to the problem of finding the minimum distortion (according to a given measure), under transmission power constraints, attainable by encoding and decoding functions, with stringent limits on delay and complexity. On the other hand, the Decision Theory approach seeks to determine the optimal coding/decoding strategies or some of their structural properties. Since encoder(s) and decoder(s) possess different information, though sharing a common goal, the setting here is that of Team Decision Theory. A more pragmatic vision rooted in Signal Processing consists of fixing the form of the coding strategies (e.g., to linear functions) and, consequently, finding the corresponding optimal decoding strategies and the achievable distortion, generally by applying parametric optimization techniques. All approaches have a long history of past investigations and recent results. The goal of the present paper is to provide a taxonomy of the various formulations, a survey of the vast related literature, examples from the authors' own research, and some highlights on the interplay of the different theories.

  6. An algorithm for space-time block code classification using higher-order statistics (HOS).

    PubMed

    Yan, Wenjun; Zhang, Limin; Ling, Qing

    2016-01-01

    This paper proposes a novel algorithm for space-time block code classification when a single antenna is employed at the receiver. The algorithm exploits the discriminating features provided by the higher-order cumulants of the received signal, and it requires neither channel estimation nor knowledge of the noise. Computer simulations are conducted to evaluate the proposed algorithm, and the results show that it performs well.
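
    The kind of discriminating feature involved can be shown with the normalized fourth-order cumulant C40, whose theoretical value differs across constellations (for example, -2 for BPSK versus -1 for QPSK at unit power). The sketch below verifies this on noise-free synthetic symbols; the paper's actual feature set, classifier and space-time block structure are not reproduced here.

```python
import numpy as np

def c40(x):
    """Normalized fourth-order cumulant C40 = cum4(x) / C21^2 for a
    zero-mean complex signal (fourth moment without conjugation)."""
    c21 = np.mean(np.abs(x) ** 2)
    c20 = np.mean(x ** 2)
    return (np.mean(x ** 4) - 3 * c20 ** 2) / c21 ** 2

rng = np.random.default_rng(0)
bpsk = rng.choice(np.array([-1.0, 1.0]), size=10_000).astype(complex)
qpsk = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]),
                  size=10_000) / np.sqrt(2)
# c40(bpsk) sits at the theoretical -2, c40(qpsk) near -1
```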

  7. Application of a VLSI vector quantization processor to real-time speech coding

    NASA Technical Reports Server (NTRS)

    Davidson, G.; Gersho, A.

    1986-01-01

    Attention is given to a working vector quantization processor for speech coding that is based on a first-generation VLSI chip which efficiently performs the pattern-matching operation needed for the codebook search process (CPS). Using this chip, the CPS architecture has been successfully incorporated into a compact, single-board Vector PCM implementation operating at 7-18 kbits/sec. A real-time Adaptive Vector Predictive Coder system using the CPS has also been implemented.

  8. Flexible Radiation Codes for Numerical Weather Prediction Across Space and Time Scales

    DTIC Science & Technology

    2013-09-30

    time and space scales, especially from regional models to global models. OBJECTIVES We are adapting radiation codes developed for climate ...PSrad is now complete, thoroughly tested and debugged, and is functioning as the radiation scheme in the climate model ECHAM 6.2 developed at the Max Planck...statistically significant change at most stations, indicating that errors in most places are not primarily driven by radiation errors. We are working

  9. A Parallel Code for Solving the Molecular Time Dependent Schroedinger Equation in Cartesian Coordinates

    SciTech Connect

    Suarez, J.; Stamatiadis, S.; Farantos, S. C.; Lathouwers, L.

    2009-08-13

    Molecular dynamics lies at the root of the basic principles of chemical change and of the physical properties of matter. New insight into molecular encounters can be gained by solving the Schroedinger equation in cartesian coordinates, provided one can overcome the massive calculations this implies. We have developed a parallel code for solving the molecular Time Dependent Schroedinger Equation (TDSE) in cartesian coordinates. Variable-order Finite Difference methods result in sparse Hamiltonian matrices which can make solving the large-scale problem feasible.

  10. 2×Nr MIMO ARQ Scheme Using Multi-Strata Space-Time Codes

    NASA Astrophysics Data System (ADS)

    Ko, Dongju; Lee, Jeong Woo

    We propose a 2×Nr MIMO ARQ scheme that uses multi-strata space-time codes composed of two layers. The phase and transmit power of each layer are assigned adaptively at each transmission round to mitigate the inter-layer interference and improve the block error rate by retransmission. Simulation results show that the proposed scheme achieves better performance than the conventional schemes in terms of the throughput and the block error rate.

  11. Real-time speech encoding based on Code-Excited Linear Prediction (CELP)

    NASA Technical Reports Server (NTRS)

    Leblanc, Wilfrid P.; Mahmoud, S. A.

    1988-01-01

    This paper reports on ongoing work toward the development of a real-time voice codec for the terrestrial and satellite mobile radio environments. The codec is based on a complexity-reduced version of code-excited linear prediction (CELP). The codebook search complexity was reduced to only 0.5 million floating point operations per second (MFLOPS) while maintaining excellent speech quality. Novel methods to quantize the residual and the long- and short-term model filters are presented.
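
    The codebook search at the core of CELP can be sketched as exhaustive analysis-by-synthesis: every codevector is passed through the short-term synthesis filter, given its optimal gain, and scored against the target frame. The filter order, codebook size and frame length below are toy values, and the perceptual weighting and long-term (pitch) predictor of a real CELP coder are omitted.

```python
import numpy as np

def synth(excitation, lpc):
    """All-pole short-term synthesis filter 1/A(z) applied to a frame."""
    out = np.zeros(len(excitation))
    for n in range(len(excitation)):
        acc = excitation[n]
        for k, a in enumerate(lpc):
            if n - 1 - k >= 0:
                acc -= a * out[n - 1 - k]
        out[n] = acc
    return out

def codebook_search(target, codebook, lpc):
    """Exhaustive analysis-by-synthesis: synthesize every codevector,
    fit its optimal gain, and keep the lowest-error entry."""
    best_i, best_g, best_err = 0, 0.0, np.inf
    for i, cv in enumerate(codebook):
        y = synth(cv, lpc)
        g = float(y @ target) / float(y @ y)     # optimal gain per entry
        err = float(np.sum((target - g * y) ** 2))
        if err < best_err:
            best_i, best_g, best_err = i, g, err
    return best_i, best_g

rng = np.random.default_rng(2)
codebook = rng.standard_normal((64, 40))   # 64 stochastic codevectors
lpc = np.array([-0.9, 0.2])                # toy stable 2nd-order predictor
target = 1.5 * synth(codebook[17], lpc)    # frame built from entry 17
idx, gain = codebook_search(target, codebook, lpc)   # recovers 17, 1.5
```

    Complexity-reduction work like that described above replaces this brute-force loop with structured codebooks and fast correlation computations.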

  12. Application of a VLSI vector quantization processor to real-time speech coding

    NASA Technical Reports Server (NTRS)

    Davidson, G.; Gersho, A.

    1986-01-01

    Attention is given to a working vector quantization processor for speech coding that is based on a first-generation VLSI chip which efficiently performs the pattern-matching operation needed for the codebook search process (CPS). Using this chip, the CPS architecture has been successfully incorporated into a compact, single-board Vector PCM implementation operating at 7-18 kbits/sec. A real-time Adaptive Vector Predictive Coder system using the CPS has also been implemented.

  13. TART 2000: A Coupled Neutron-Photon, 3-D, Combinatorial Geometry, Time Dependent, Monte Carlo Transport Code

    SciTech Connect

    Cullen, D.E

    2000-11-22

    TART2000 is a coupled neutron-photon, 3 Dimensional, combinatorial geometry, time dependent Monte Carlo radiation transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART2000 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART2000 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART2000 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART2000 and its data files.
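
    The flavor of Monte Carlo particle transport that such codes perform can be conveyed by the simplest possible example: uncollided transmission of photons through a 1-D slab, where sampled exponential free paths reproduce the Beer-Lambert attenuation law. This toy is illustrative only and bears no relation to TART2000's actual physics, cross-section data or combinatorial geometry engine; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
mu, d, n = 0.5, 2.0, 200_000     # attenuation coeff (1/cm), slab (cm), histories
# sample each photon's free path from the exponential distribution
free_paths = rng.exponential(scale=1.0 / mu, size=n)
transmitted = float(np.mean(free_paths > d))     # uncollided fraction
expected = float(np.exp(-mu * d))                # Beer-Lambert prediction
```

    A production transport code adds scattering, absorption, energy dependence, secondary particles and 3-D geometry on top of exactly this sampling idea.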

  14. TART98 a coupled neutron-photon 3-D, combinatorial geometry time dependent Monte Carlo Transport code

    SciTech Connect

    Cullen, D E

    1998-11-22

    TART98 is a coupled neutron-photon, 3 Dimensional, combinatorial geometry, time dependent Monte Carlo radiation transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART98 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART98 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART98 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART98 and its data files.

  15. Power Allocation Strategies for Distributed Space-Time Codes in Amplify-and-Forward Mode

    NASA Astrophysics Data System (ADS)

    Maham, Behrouz; Hjørungnes, Are

    2009-12-01

    We consider a wireless relay network with Rayleigh fading channels and apply distributed space-time coding (DSTC) in amplify-and-forward (AF) mode. It is assumed that the relays have statistical channel state information (CSI) of the local source-relay channels, while the destination has full instantaneous CSI of the channels. It turns out that, combined with the minimum SNR based power allocation in the relays, AF DSTC results in a new opportunistic relaying scheme, in which the best relay is selected to retransmit the source's signal. Furthermore, we have derived the optimum power allocation between two cooperative transmission phases by maximizing the average received SNR at the destination. Next, assuming M-PSK and M-QAM modulations, we analyze the performance of cooperative diversity wireless networks using AF opportunistic relaying. We also derive an approximate formula for the symbol error rate (SER) of AF DSTC. Assuming the use of full-diversity space-time codes, we derive two power allocation strategies minimizing the approximate SER expressions, for constrained transmit power. Our analytical results have been confirmed by simulation results, using full-rate, full-diversity distributed space-time codes.

  16. Neural Code-Neural Self-information Theory on How Cell-Assembly Code Rises from Spike Time and Neuronal Variability.

    PubMed

    Li, Meng; Tsien, Joe Z

    2017-01-01

    A major stumbling block to cracking the real-time neural code is neuronal variability: neurons discharge spikes with enormous variability not only across trials within the same experiments but also in resting states. Such variability is widely regarded as noise that is often deliberately averaged out during data analyses. In contrast to this dogma, we put forth the Neural Self-Information Theory, under which neural coding operates on the self-information principle: variability in the time durations of inter-spike intervals (ISIs), or neuronal silence durations, is self-tagged with discrete information. As a self-information processor, each ISI carries a certain amount of information based on its variability-probability distribution; higher-probability ISIs, which reflect the balanced excitation-inhibition ground state, convey minimal information, whereas lower-probability ISIs, which signify rare-occurrence surprisals in the form of extremely transient or prolonged silence, carry the most information. These variable silence durations are naturally coupled with intracellular biochemical cascades, energy equilibrium and dynamic regulation of protein and gene expression levels. As such, this silence-variability-based self-information code is completely intrinsic to the neurons themselves, with no need for outside observers to set any reference point as typically used in the rate-code, population-code and temporal-code models. Moreover, temporally coordinated ISI surprisals across a cell population can inherently give rise to robust real-time cell-assembly codes which can be readily sensed by downstream neural clique assemblies. One immediate utility of this self-information code is a general decoding strategy to uncover a variety of cell-assembly patterns underlying external and internal categorical or continuous variables in an unbiased manner.
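
    The self-information tagging of ISIs can be sketched numerically. The exponential ground-state model, the 1 ms discretization and the 95th-percentile threshold below are illustrative assumptions (with an exponential model only prolonged silences land in the surprisal tail; a unimodal ISI model would also tag extremely short intervals, per the description above). Only the surprisal definition -log2 p follows the theory directly.

```python
import numpy as np

rng = np.random.default_rng(4)
isi = rng.exponential(scale=0.05, size=5000)   # ~20 Hz "ground state" ISIs

tau = isi.mean()                               # ML fit of the exp. scale
dt = 0.001                                     # 1 ms bins: density -> prob.
p = (np.exp(-isi / tau) / tau) * dt            # P(ISI falls in its bin)
surprisal = -np.log2(p)                        # self-information per ISI

threshold = np.quantile(surprisal, 0.95)       # tag the rarest 5 percent
surprises = isi[surprisal > threshold]         # rare, information-rich ISIs
```

    Downstream decoding in this framework would then look for temporally coordinated surprisal events across simultaneously recorded neurons.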

  17. Auditory Cortex Characteristics in Schizophrenia: Associations With Auditory Hallucinations.

    PubMed

    Mørch-Johnsen, Lynn; Nesvåg, Ragnar; Jørgensen, Kjetil N; Lange, Elisabeth H; Hartberg, Cecilie B; Haukvik, Unn K; Kompus, Kristiina; Westerhausen, René; Osnes, Kåre; Andreassen, Ole A; Melle, Ingrid; Hugdahl, Kenneth; Agartz, Ingrid

    2017-01-01

    Neuroimaging studies have demonstrated associations between smaller auditory cortex volume and auditory hallucinations (AH) in schizophrenia. Reduced cortical volume can result from a reduction of either cortical thickness or cortical surface area, which may reflect different neuropathology. We investigate for the first time how thickness and surface area of the auditory cortex relate to AH in a large sample of schizophrenia spectrum patients. Schizophrenia spectrum (n = 194) patients underwent magnetic resonance imaging. Mean cortical thickness and surface area in auditory cortex regions (Heschl's gyrus [HG], planum temporale [PT], and superior temporal gyrus [STG]) were compared between patients with (AH+, n = 145) and without (AH-, n = 49) a lifetime history of AH and 279 healthy controls. AH+ patients showed significantly thinner cortex in the left HG compared to AH- patients (d = 0.43, P = .0096). There were no significant differences between AH+ and AH- patients in cortical thickness in the PT or STG, or in auditory cortex surface area in any of the regions investigated. Group differences in cortical thickness in the left HG were not affected by duration of illness or current antipsychotic medication. AH in schizophrenia patients were related to thinner cortex, but not smaller surface area, of the left HG, a region which includes the primary auditory cortex. The results support the view that structural abnormalities of the auditory cortex underlie AH in schizophrenia.

  18. Driving-simulator-based test on the effectiveness of auditory red-light running vehicle warning system based on time-to-collision sensor.

    PubMed

    Yan, Xuedong; Xue, Qingwan; Ma, Lu; Xu, Yongcun

    2014-02-21

    The collision avoidance warning system is an emerging technology designed to assist drivers in avoiding red-light running (RLR) collisions at intersections. The aim of this paper is to evaluate the effect of auditory warning information on collision avoidance behaviors in RLR pre-crash scenarios and further to examine the causal relationships among the relevant factors. A driving-simulator-based experiment was designed and conducted with 50 participants. The data from the experiments were analyzed by ANOVA and structural equation modeling (SEM). The collision-avoidance-related variables were measured in terms of brake reaction time (BRT), maximum deceleration and lane deviation in this study. It was found that the collision avoidance warning system can result in smaller collision rates compared to the without-warning condition and lead to shorter reaction times, larger maximum deceleration and less lane deviation. Furthermore, the SEM analysis illustrates that the audio warning information in fact has both direct and indirect effects on the occurrence of collisions, and the indirect effect plays a more important role in collision avoidance than the direct effect. Essentially, the auditory warning information can assist drivers in detecting RLR vehicles in a timely manner, thus providing drivers adequate time and space to decelerate to avoid collisions with the conflicting vehicles.

  19. Driving-Simulator-Based Test on the Effectiveness of Auditory Red-Light Running Vehicle Warning System Based on Time-To-Collision Sensor

    PubMed Central

    Yan, Xuedong; Xue, Qingwan; Ma, Lu; Xu, Yongcun

    2014-01-01

    The collision avoidance warning system is an emerging technology designed to assist drivers in avoiding red-light running (RLR) collisions at intersections. The aim of this paper is to evaluate the effect of auditory warning information on collision avoidance behaviors in RLR pre-crash scenarios and further to examine the causal relationships among the relevant factors. A driving-simulator-based experiment was designed and conducted with 50 participants. The data from the experiments were analyzed by ANOVA and structural equation modeling (SEM). The collision-avoidance-related variables were measured in terms of brake reaction time (BRT), maximum deceleration and lane deviation in this study. It was found that the collision avoidance warning system can result in smaller collision rates compared to the without-warning condition and lead to shorter reaction times, larger maximum deceleration and less lane deviation. Furthermore, the SEM analysis illustrates that the audio warning information in fact has both direct and indirect effects on the occurrence of collisions, and the indirect effect plays a more important role in collision avoidance than the direct effect. Essentially, the auditory warning information can assist drivers in detecting RLR vehicles in a timely manner, thus providing drivers adequate time and space to decelerate to avoid collisions with the conflicting vehicles. PMID:24566631

  20. Auditory Spatial Attention Representations in the Human Cerebral Cortex

    PubMed Central

    Kong, Lingqiang; Michalka, Samantha W.; Rosen, Maya L.; Sheremata, Summer L.; Swisher, Jascha D.; Shinn-Cunningham, Barbara G.; Somers, David C.

    2014-01-01

    Auditory spatial attention serves important functions in auditory source separation and selection. Although auditory spatial attention mechanisms have been generally investigated, the neural substrates encoding spatial information acted on by attention have not been identified in the human neocortex. We performed functional magnetic resonance imaging experiments to identify cortical regions that support auditory spatial attention and to test 2 hypotheses regarding the coding of auditory spatial attention: 1) auditory spatial attention might recruit the visuospatial maps of the intraparietal sulcus (IPS) to create multimodal spatial attention maps; 2) auditory spatial information might be encoded without explicit cortical maps. We mapped visuotopic IPS regions in individual subjects and measured auditory spatial attention effects within these regions of interest. Contrary to the multimodal map hypothesis, we observed that auditory spatial attentional modulations spared the visuotopic maps of IPS; the parietal regions activated by auditory attention lacked map structure. However, multivoxel pattern analysis revealed that the superior temporal gyrus and the supramarginal gyrus contained significant information about the direction of spatial attention. These findings support the hypothesis that auditory spatial information is coded without a cortical map representation. Our findings suggest that audiospatial and visuospatial attention utilize distinctly different spatial coding schemes. PMID:23180753

  1. Auditory spatial attention representations in the human cerebral cortex.

    PubMed

    Kong, Lingqiang; Michalka, Samantha W; Rosen, Maya L; Sheremata, Summer L; Swisher, Jascha D; Shinn-Cunningham, Barbara G; Somers, David C

    2014-03-01

    Auditory spatial attention serves important functions in auditory source separation and selection. Although auditory spatial attention mechanisms have been generally investigated, the neural substrates encoding spatial information acted on by attention have not been identified in the human neocortex. We performed functional magnetic resonance imaging experiments to identify cortical regions that support auditory spatial attention and to test 2 hypotheses regarding the coding of auditory spatial attention: 1) auditory spatial attention might recruit the visuospatial maps of the intraparietal sulcus (IPS) to create multimodal spatial attention maps; 2) auditory spatial information might be encoded without explicit cortical maps. We mapped visuotopic IPS regions in individual subjects and measured auditory spatial attention effects within these regions of interest. Contrary to the multimodal map hypothesis, we observed that auditory spatial attentional modulations spared the visuotopic maps of IPS; the parietal regions activated by auditory attention lacked map structure. However, multivoxel pattern analysis revealed that the superior temporal gyrus and the supramarginal gyrus contained significant information about the direction of spatial attention. These findings support the hypothesis that auditory spatial information is coded without a cortical map representation. Our findings suggest that audiospatial and visuospatial attention utilize distinctly different spatial coding schemes.

  2. Bearing performance degradation assessment based on time-frequency code features and SOM network

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Tang, Baoping; Han, Yan; Deng, Lei

    2017-04-01

    Bearing performance degradation assessment and prognostics are extremely important in supporting maintenance decisions and guaranteeing the system's reliability. To achieve this goal, this paper proposes a novel feature extraction method for the degradation assessment and prognostics of bearings. Features of time-frequency codes (TFCs) are extracted from the time-frequency distribution using a hybrid procedure based on short-time Fourier transform (STFT) and non-negative matrix factorization (NMF) theory. An alternative way to design the health indicator is investigated by quantifying the similarity between feature vectors using a self-organizing map (SOM) network. On the basis of this idea, a new health indicator called time-frequency code quantification error (TFCQE) is proposed to assess the performance degradation of the bearing. This indicator is constructed from the bearing's real-time behavior and a SOM model previously trained with only the TFC vectors obtained under the normal condition. Vibration signals collected from bearing run-to-failure tests are used to validate the developed method. The comparison results demonstrate the superiority of the proposed TFCQE indicator over many other traditional features in terms of feature quality metrics, incipient degradation identification and prediction accuracy.

    Highlights:
    • Time-frequency codes are extracted to reflect the signals' characteristics.
    • The SOM network serves as a tool to quantify the similarity between feature vectors.
    • A new health indicator demonstrates the whole course of degradation development.
    • The method is useful for extracting degradation features and detecting incipient degradation.
    • The superiority of the proposed method is verified using experimental data.
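
    The core of such a quantization-error indicator — the distance of a new feature vector to a codebook trained only on normal-condition data — can be sketched as follows (a minimal Python sketch; the codebook values are hypothetical stand-ins for trained SOM weight vectors, not data from the paper):

```python
import math

def quantization_error(x, codebook):
    """Distance from x to its best-matching codebook unit; the value
    grows as the bearing departs from the normal condition."""
    return min(math.dist(x, w) for w in codebook)

# Hypothetical stand-ins for SOM weights trained on normal-condition
# time-frequency code (TFC) feature vectors.
normal_codebook = [[0.0, 1.0], [0.1, 0.9], [0.2, 0.8]]

healthy  = quantization_error([0.1, 0.9], normal_codebook)
degraded = quantization_error([0.9, 0.2], normal_codebook)
print(healthy < degraded)   # True: the indicator tracks degradation
```

    Tracking this scalar over the bearing's lifetime yields a degradation curve: near zero while the features resemble the training codebook, rising once incipient faults push the features away from it.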

  3. A unified mathematical framework for coding time, space, and sequences in the hippocampal region.

    PubMed

    Howard, Marc W; MacDonald, Christopher J; Tiganj, Zoran; Shankar, Karthik H; Du, Qian; Hasselmo, Michael E; Eichenbaum, Howard

    2014-03-26

    The medial temporal lobe (MTL) is believed to support episodic memory, vivid recollection of a specific event situated in a particular place at a particular time. There is ample neurophysiological evidence that the MTL computes location in allocentric space and more recent evidence that the MTL also codes for time. Space and time represent a similar computational challenge; both are variables that cannot be simply calculated from the immediately available sensory information. We introduce a simple mathematical framework that computes functions of both spatial location and time as special cases of a more general computation. In this framework, experience unfolding in time is encoded via a set of leaky integrators. These leaky integrators encode the Laplace transform of their input. The information contained in the transform can be recovered using an approximation to the inverse Laplace transform. In the temporal domain, the resulting representation reconstructs the temporal history. By integrating movements, the equations give rise to a representation of the path taken to arrive at the present location. By modulating the transform with information about allocentric velocity, the equations code for position of a landmark. Simulated cells show a close correspondence to neurons observed in various regions for all three cases. In the temporal domain, novel secondary analyses of hippocampal time cells verified several qualitative predictions of the model. An integrated representation of spatiotemporal context can be computed by taking conjunctions of these elemental inputs, leading to a correspondence with conjunctive neural representations observed in dorsal CA1.

  4. A Unified Mathematical Framework for Coding Time, Space, and Sequences in the Hippocampal Region

    PubMed Central

    MacDonald, Christopher J.; Tiganj, Zoran; Shankar, Karthik H.; Du, Qian; Hasselmo, Michael E.; Eichenbaum, Howard

    2014-01-01

    The medial temporal lobe (MTL) is believed to support episodic memory, vivid recollection of a specific event situated in a particular place at a particular time. There is ample neurophysiological evidence that the MTL computes location in allocentric space and more recent evidence that the MTL also codes for time. Space and time represent a similar computational challenge; both are variables that cannot be simply calculated from the immediately available sensory information. We introduce a simple mathematical framework that computes functions of both spatial location and time as special cases of a more general computation. In this framework, experience unfolding in time is encoded via a set of leaky integrators. These leaky integrators encode the Laplace transform of their input. The information contained in the transform can be recovered using an approximation to the inverse Laplace transform. In the temporal domain, the resulting representation reconstructs the temporal history. By integrating movements, the equations give rise to a representation of the path taken to arrive at the present location. By modulating the transform with information about allocentric velocity, the equations code for position of a landmark. Simulated cells show a close correspondence to neurons observed in various regions for all three cases. In the temporal domain, novel secondary analyses of hippocampal time cells verified several qualitative predictions of the model. An integrated representation of spatiotemporal context can be computed by taking conjunctions of these elemental inputs, leading to a correspondence with conjunctive neural representations observed in dorsal CA1. PMID:24672015
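
    The leaky-integrator encoding and its inversion can be illustrated numerically (a minimal sketch assuming a simple Euler discretization; the decay rates and time step are arbitrary illustrative choices, not values from the paper):

```python
import math

def decay(F, dt, steps):
    """Euler evolution of leaky integrators dF/dt = -s*F (no input)."""
    return {s: v * (1.0 - s * dt) ** steps for s, v in F.items()}

rates = (1.0, 2.0)              # decay rates s of two units (arbitrary)
F = {s: 1.0 for s in rates}     # an impulse at t = 0 loads every unit
dt, T = 1e-4, 0.5
F = decay(F, dt, int(T / dt))   # each unit now holds ~exp(-s*T), the
                                # Laplace transform of a delta T s ago

# Crude inversion: the elapsed time is recoverable from the ratio of
# any two units, t = ln(F(s1)/F(s2)) / (s2 - s1).
s1, s2 = rates
t_hat = math.log(F[s1] / F[s2]) / (s2 - s1)
print(round(t_hat, 3))          # ≈ 0.5
```

    A full bank of rates s tiles past time with graded "time cells"; the paper's inverse-Laplace approximation generalizes this two-unit readout to reconstruct the whole temporal history.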

  5. Incidental Auditory Category Learning

    PubMed Central

    Gabay, Yafit; Dick, Frederic K.; Zevin, Jason D.; Holt, Lori L.

    2015-01-01

    Very little is known about how auditory categories are learned incidentally, without instructions to search for category-diagnostic dimensions, overt category decisions, or experimenter-provided feedback. This is an important gap because learning in the natural environment does not arise from explicit feedback and there is evidence that the learning systems engaged by traditional tasks are distinct from those recruited by incidental category learning. We examined incidental auditory category learning with a novel paradigm, the Systematic Multimodal Associations Reaction Time (SMART) task, in which participants rapidly detect and report the appearance of a visual target in one of four possible screen locations. Although the overt task is rapid visual detection, a brief sequence of sounds precedes each visual target. These sounds are drawn from one of four distinct sound categories that predict the location of the upcoming visual target. These many-to-one auditory-to-visuomotor correspondences support incidental auditory category learning. Participants incidentally learn categories of complex acoustic exemplars and generalize this learning to novel exemplars and tasks. Further, learning is facilitated when category exemplar variability is more tightly coupled to the visuomotor associations than when the same stimulus variability is experienced across trials. We relate these findings to phonetic category learning. PMID:26010588

  6. Computed neutron tomography and coded aperture holography from real time neutron images

    NASA Astrophysics Data System (ADS)

    Sulcoski, Mark F.

    1986-10-01

    The uses of neutron tomography and holography for nondestructive evaluation applications are developed and investigated. A real-time neutron imaging system, coupled with an image processing system, is used to obtain neutron tomographs. Experiments utilized a Thomson-CSF neutron camera coupled to a computer-based image processing system, and included the configuration of a reactor neutron beam port for neutron imaging and the development and implementation of a convolution-method tomographic algorithm suitable for neutron imaging. Results to date have demonstrated the proof of principle of this neutron tomography system. Coded aperture neutron holography is under investigation using a cadmium Fresnel zone plate as the coded aperture and the real-time imaging system for detection and holographic reconstruction. Coded aperture imaging utilizes the zone plate to encode scattered radiation; the pattern recorded at the detector is used as input data to a convolution algorithm which reconstructs the scattering source. This technique has not yet been successfully implemented and is still under development.

  7. MINVAR: a local optimization criterion for rate-distortion tradeoff in real time video coding

    NASA Astrophysics Data System (ADS)

    Chen, Zhenzhong; Ngan, King Ngi

    2005-10-01

    In this paper, we propose a minimum variation (MINVAR) distortion criterion based approach for the rate distortion tradeoff in video coding. The MINVAR based rate distortion tradeoff framework provides a local optimization strategy as a rate control mechanism in real time video coding applications by minimizing the distortion variation while the corresponding bit rate fluctuation is limited by utilizing the encoder buffer. We use the H.264 video codec to evaluate the performance of the proposed method. As shown in the simulation results, the decoded picture quality of the proposed approach is smoother than that of the traditional H.264 joint model (JM) rate control algorithm. The global video quality, the average PSNR, is maintained while a better subjective visual quality is guaranteed.
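
    The local optimization rule described above can be caricatured in a few lines (an illustrative sketch only; the rate-distortion points, the buffer model and all names are hypothetical, not the paper's H.264 JM implementation):

```python
def minvar_choice(prev_distortion, options, buf_level, buf_capacity, drain):
    """options: candidate (rate, distortion) pairs for one frame.
    Keep only the rates the encoder buffer can absorb, then minimize
    the frame-to-frame distortion variation, not distortion itself."""
    feasible = [(r, d) for r, d in options
                if 0 <= buf_level + r - drain <= buf_capacity]
    return min(feasible, key=lambda rd: abs(rd[1] - prev_distortion))

opts = [(800, 2.0), (500, 3.1), (300, 4.5)]   # hypothetical R-D points
print(minvar_choice(prev_distortion=3.0, options=opts,
                    buf_level=400, buf_capacity=1000, drain=400))
# -> (500, 3.1): the distortion closest to the previous frame's
```

    The buffer constraint bounds the bit-rate fluctuation, while the minimization keeps decoded quality smooth from frame to frame, which is the stated goal of the MINVAR criterion.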

  8. Programmed optoelectronic time-pulse coded relational processor as base element for sorting neural networks

    NASA Astrophysics Data System (ADS)

    Krasilenko, Vladimir G.; Bardachenko, Vitaliy F.; Nikolsky, Alexander I.; Lazarev, Alexander A.

    2007-04-01

    In this paper we show that the biologically motivated concept of time-pulse encoding offers a number of advantages (a single methodological basis, universality, simplicity of tuning, training and programming, among others) in the creation and design of sensor systems with parallel input-output and processing, and of 2D structures for hybrid and neuro-fuzzy neurocomputers of the next generations. We show principles for constructing programmable relational optoelectronic time-pulse coded processors based on continuous logic, order logic and temporal wave processes. We consider a structure that extracts an analog signal of a given grade (order) and sorts analog and time-pulse coded variables. We offer an optoelectronic realization of such basic relational elements of order logic, consisting of time-pulse coded phototransformers (pulse-width and pulse-phase modulators) with direct and complementary outputs, a sorting network built on logical elements, and programmable commutation blocks. By simulation and experimental research we estimate the basic technical parameters of such devices and of processors based on them: optical input signal power of 0.200-20 μW, processing time of microseconds, supply voltage of 1.5-10 V, power consumption of hundreds of microwatts per element, extended functional possibilities, and training possibilities. We discuss possible rules and principles for training and for programmable tuning to a required function or relational operation, and the realization of hardware blocks for modifications of such processors. We show that, on the basis of such quasi-universal hardware, simple block design and flexible programmable tuning, it is possible to create sorting machines, neural networks and hybrid data-processing systems with untraditional numerical systems and picture operands.

  9. Selective processing of auditory evoked responses with iterative-randomized stimulation and averaging: A strategy for evaluating the time-invariant assumption.

    PubMed

    Valderrama, Joaquin T; de la Torre, Angel; Medina, Carlos; Segura, Jose C; Thornton, A Roger D

    2016-03-01

    The recording of auditory evoked potentials (AEPs) at fast rates allows the study of neural adaptation, improves accuracy in estimating hearing threshold and may help in diagnosing certain pathologies. Stimulation sequences used to record AEPs at fast rates must be designed with a certain jitter, i.e., they are not periodic. Some authors believe that stimuli from wide-jittered sequences may evoke auditory responses of different morphology, and therefore that the time-invariant assumption would not hold. This paper describes a methodology that can be used to analyze the time-invariant assumption in jittered stimulation sequences. The proposed method [Split-IRSA] is based on an extended version of the iterative randomized stimulation and averaging (IRSA) technique, including selective processing of sweeps according to a predefined criterion. The fundamentals, the mathematical basis and relevant implementation guidelines for this technique are presented in this paper. The results of this study show that Split-IRSA performs adequately and that both fast and slow mechanisms of adaptation influence evoked-response morphology; both mechanisms should therefore be considered when time-invariance is assumed. The significance of these findings is discussed.
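
    The selective-sweep idea can be sketched as follows (an illustrative toy in Python: sweeps are kept or discarded by a criterion on the preceding inter-stimulus interval; real IRSA additionally deconvolves overlapping responses, which this sketch omits):

```python
def selective_average(recording, onsets, sweep_len, keep):
    """Average only the sweeps whose preceding inter-stimulus
    interval satisfies keep(isi): Split-IRSA-style selection.
    (Real IRSA also removes overlapping responses, omitted here.)"""
    sweeps = [recording[t:t + sweep_len]
              for prev, t in zip(onsets, onsets[1:])
              if keep(t - prev) and t + sweep_len <= len(recording)]
    return [sum(col) / len(sweeps) for col in zip(*sweeps)]

# Toy recording: identical responses at jittered onsets (no overlap).
resp, onsets = [0.0, 1.0, 0.5], [0, 4, 9, 13]
rec = [0.0] * 16
for t in onsets:
    for i, v in enumerate(resp):
        rec[t + i] += v

short_isi = selective_average(rec, onsets, 3, keep=lambda isi: isi <= 4)
print(short_isi)   # [0.0, 1.0, 0.5]
```

    Comparing the averages obtained under different `keep` criteria (e.g. short versus long preceding intervals) is what lets the method test whether response morphology is actually time-invariant across the jittered sequence.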

  10. A 2.9 ps equivalent resolution interpolating time counter based on multiple independent coding lines

    NASA Astrophysics Data System (ADS)

    Szplet, R.; Jachna, Z.; Kwiatkowski, P.; Rozyc, K.

    2013-03-01

    We present the design, operation and test results of a time counter that has an equivalent resolution of 2.9 ps, a measurement uncertainty at the level of 6 ps, and a measurement range of 10 s. The time counter has been implemented in a general-purpose reprogrammable device Spartan-6 (Xilinx). To obtain both high precision and wide measurement range the counting of periods of a reference clock is combined with a two-stage interpolation within a single period of the clock signal. The interpolation involves a four-phase clock in the first interpolation stage (FIS) and an equivalent coding line (ECL) in the second interpolation stage (SIS). The ECL is created as a compound of independent discrete time coding lines (TCL). The number of TCLs used to create the virtual ECL has an effect on its resolution. We tested ECLs made from up to 16 TCLs, but the idea may be extended to a larger number of lines. In the presented time counter the coarse resolution of the counting method equal to 2 ns (period of the 500 MHz reference clock) is firstly improved fourfold in the FIS and next even more than 400 times in the SIS. The proposed solution allows us to overcome the technological limitation in achievable resolution and improve the precision of conversion of integrated interpolators based on tapped delay lines.
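
    The arithmetic combining the coarse count with the interpolator readings reduces to one line (a sketch with hypothetical picosecond readings; in the real design these offsets come from the four-phase clock stage and the equivalent coding line):

```python
T_CLK_PS = 2000   # 500 MHz reference clock -> 2 ns coarse step

def interval_ps(coarse_periods, start_interp_ps, stop_interp_ps):
    """Measured interval = coarse clock count plus the start-event
    interpolator reading minus the stop-event reading (each reading
    is the event-to-next-clock-edge offset, here in picoseconds)."""
    return coarse_periods * T_CLK_PS + start_interp_ps - stop_interp_ps

print(interval_ps(5, 500, 200))   # 10300 ps
```

    The counting term gives the wide range (seconds), while the two interpolation terms supply the picosecond-scale resolution; the subtraction means only the interpolators' differential nonlinearity, not the clock period, limits precision.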

  11. Timing Precision in Population Coding of Natural Scenes in the Early Visual System

    PubMed Central

    Desbordes, Gaëlle; Jin, Jianzhong; Weng, Chong; Lesica, Nicholas A; Stanley, Garrett B; Alonso, Jose-Manuel

    2008-01-01

    The timing of spiking activity across neurons is a fundamental aspect of the neural population code. Individual neurons in the retina, thalamus, and cortex can have very precise and repeatable responses but exhibit degraded temporal precision in response to suboptimal stimuli. To investigate the functional implications for neural populations in natural conditions, we recorded in vivo the simultaneous responses, to movies of natural scenes, of multiple thalamic neurons likely converging to a common neuronal target in primary visual cortex. We show that the response of individual neurons is less precise at lower contrast, but that spike timing precision across neurons is relatively insensitive to global changes in visual contrast. Overall, spike timing precision within and across cells is on the order of 10 ms. Since closely timed spikes are more efficient in inducing a spike in downstream cortical neurons, and since fine temporal precision is necessary to represent the more slowly varying natural environment, we argue that preserving relative spike timing at a ∼10-ms resolution is a crucial property of the neural code entering cortex. PMID:19090624

  12. Auditory and audio-visual processing in patients with cochlear, auditory brainstem, and auditory midbrain implants: An EEG study.

    PubMed

    Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale

    2017-04-01

    There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times to auditory and audio-visual stimuli compared with normal-hearing (NH) listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may provide important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.

  13. A two-level space-time color-coding method for 3D measurements using structured light

    NASA Astrophysics Data System (ADS)

    Xue, Qi; Wang, Zhao; Huang, Junhui; Gao, Jianmin; Qi, Zhaoshuai

    2015-11-01

    Color-coding methods have significantly improved the measurement efficiency of structured light systems. However, problems such as color crosstalk and chromatic aberration decrease the measurement accuracy of these systems. A two-level space-time color-coding method is thus proposed in this paper. The method, which includes a space-code level and a time-code level, is shown to be reliable and efficient. The influence of chromatic aberration is completely mitigated when using this method. Additionally, a self-adaptive windowed Fourier transform is used to eliminate all color crosstalk components. Theoretical analyses and experiments have shown that the proposed coding method solves the problems of color crosstalk and chromatic aberration effectively. Moreover, the method guarantees high measurement accuracy, very close to that obtained using monochromatic coded patterns.

  14. Cryptographic robustness of a quantum cryptography system using phase-time coding

    SciTech Connect

    Molotkov, S. N.

    2008-01-15

    A cryptographic analysis is presented of a new quantum key distribution protocol using phase-time coding. An upper bound is obtained for the error rate that guarantees secure key distribution. It is shown that the maximum tolerable error rate for this protocol depends on the counting rate in the control time slot. When no counts are detected in the control time slot, the protocol guarantees secure key distribution if the bit error rate in the sifted key does not exceed 50%. This protocol partially discriminates between errors due to system defects (e.g., imbalance of a fiber-optic interferometer) and eavesdropping. In the absence of eavesdropping, the counts detected in the control time slot are not caused by interferometer imbalance, which reduces the requirements for interferometer stability.

  15. Two Novel Space-Time Coding Techniques Designed for UWB MISO Systems Based on Wavelet Transform.

    PubMed

    Zaki, Amira Ibrahim; Badran, Ehab F; El-Khamy, Said E

    2016-01-01

    In this paper, two novel space-time coding multi-input single-output (STC MISO) schemes, designed especially for Ultra-Wideband (UWB) systems, are introduced. The proposed schemes are referred to as wavelet space-time coding (WSTC) schemes. The WSTC schemes are based on two types of multiplexing: spatial and wavelet-domain multiplexing. In WSTC schemes, four symbols are transmitted on the same UWB transmission pulse with the same bandwidth, symbol duration, and number of transmitting antennas as the conventional STC MISO scheme. The mother wavelet (MW) is selected to be highly correlated with the transmitted pulse shape, such that the multiplexed signal has almost the same spectral characteristics as the original UWB pulse. The two WSTC techniques increase the data rate to four times that of the conventional STC. The first WSTC scheme increases the data rate with a simple combination process. The second scheme achieves the increase in data rate with a less complex receiver and better performance than the first, owing to the spatial diversity introduced by the structure of its transmitter and receiver. Both schemes use Rake receivers to collect the energy in the dense multipath channel components. The simulation results show that the proposed WSTC schemes outperform the conventional scheme, in addition to increasing the data rate to four times that of the conventional STC scheme.

  16. Two Novel Space-Time Coding Techniques Designed for UWB MISO Systems Based on Wavelet Transform

    PubMed Central

    Zaki, Amira Ibrahim; El-Khamy, Said E.

    2016-01-01

    In this paper, two novel space-time coding multi-input single-output (STC MISO) schemes, designed especially for Ultra-Wideband (UWB) systems, are introduced. The proposed schemes are referred to as wavelet space-time coding (WSTC) schemes. The WSTC schemes are based on two types of multiplexing: spatial and wavelet-domain multiplexing. In WSTC schemes, four symbols are transmitted on the same UWB transmission pulse with the same bandwidth, symbol duration, and number of transmitting antennas as the conventional STC MISO scheme. The mother wavelet (MW) is selected to be highly correlated with the transmitted pulse shape, such that the multiplexed signal has almost the same spectral characteristics as the original UWB pulse. The two WSTC techniques increase the data rate to four times that of the conventional STC. The first WSTC scheme increases the data rate with a simple combination process. The second scheme achieves the increase in data rate with a less complex receiver and better performance than the first, owing to the spatial diversity introduced by the structure of its transmitter and receiver. Both schemes use Rake receivers to collect the energy in the dense multipath channel components. The simulation results show that the proposed WSTC schemes outperform the conventional scheme, in addition to increasing the data rate to four times that of the conventional STC scheme. PMID:27959939

  17. Comparison of WDM/Pulse-Position-Modulation (WDM/PPM) with Code/Pulse-Position-Swapping (C/PPS) Based on Wavelength/Time Codes

    SciTech Connect

    Mendez, A J; Hernandez, V J; Gagliardi, R M; Bennett, C V

    2009-06-19

    Pulse position modulation (PPM) signaling is favored in intensity modulated/direct detection (IM/DD) systems that have average power limitations. Combining PPM with WDM over a fiber link (WDM/PPM) enables multiple accessing and increases the link's throughput. Electronic bandwidth and synchronization advantages are further gained by mapping the time slots of PPM onto a code space, or code/pulse-position-swapping (C/PPS). The property of multiple bits per symbol typical of PPM can be combined with multiple accessing by using wavelength/time [W/T] codes in C/PPS. This paper compares the performance of WDM/PPM and C/PPS for equal wavelengths and bandwidth.
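
    The mapping from data bits to a pulse position, and the swap of that position onto a code space, can be sketched as follows (illustrative only; the binary codebook below is a hypothetical stand-in for the wavelength/time [W/T] codes discussed in the paper):

```python
def ppm_encode(bits, M=8):
    """Classic PPM: log2(M) data bits select one of M pulse slots."""
    assert len(bits) == M.bit_length() - 1   # log2(M) bits per symbol
    slot = int("".join(map(str, bits)), 2)
    frame = [0] * M
    frame[slot] = 1
    return frame

# Hypothetical codebook standing in for W/T codes: C/PPS transmits
# the codeword selected by the symbol index instead of placing a
# single pulse in a time slot.
codebook = {i: [(i >> k) & 1 for k in range(3)] for i in range(8)}

def cpps_encode(bits):
    slot = int("".join(map(str, bits)), 2)
    return codebook[slot]

print(ppm_encode([1, 0, 1]))   # [0, 0, 0, 0, 0, 1, 0, 0]
print(cpps_encode([1, 0, 1]))  # the codeword replacing slot 5
```

    Because the symbol identity now lives in which codeword is sent rather than in a fine-grained slot time, the receiver's electronic bandwidth and synchronization requirements relax, which is the advantage the abstract attributes to C/PPS.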

  18. General relativistic radiative transfer code in rotating black hole space-time: ARTIST

    NASA Astrophysics Data System (ADS)

    Takahashi, Rohta; Umemura, Masayuki

    2017-02-01

    We present a general relativistic radiative transfer code, ARTIST (Authentic Radiative Transfer In Space-Time), a perfectly causal scheme that pursues the propagation of radiation with absorption and scattering around a Kerr black hole. The code explicitly solves the invariant radiation intensity along null geodesics in the Kerr-Schild coordinates, and therefore properly includes light bending, Doppler boosting, frame dragging, and gravitational redshifts. The notable aspect of ARTIST is that it conserves the radiative energy with high accuracy and is not subject to numerical diffusion, since the transfer is solved on long characteristics along null geodesics. We first solve the wavefront propagation around a Kerr black hole that was originally explored by Hanni. This demonstrates repeated wavefront collisions, light bending, and causal propagation of radiation at the speed of light. We show that the decay rate of the total energy of wavefronts near a black hole is determined solely by the black hole spin in late phases, in agreement with analytic expectations. As a result, ARTIST turns out to correctly solve the general relativistic radiation fields up to late phases of t ∼ 90M. We also explore the effects of absorption and scattering, and apply the code to a photon wall problem and an orbiting hotspot problem. All the simulations in this study are performed in the equatorial plane around a Kerr black hole. ARTIST is a first step toward general relativistic radiation hydrodynamics.

  19. Functional Groups in the Avian Auditory System

    PubMed Central

    Woolley, Sarah M. N.; Gill, Patrick R.; Fremouw, Thane; Theunissen, Frédéric E.

    2009-01-01

    Auditory perception depends on the coding and organization of the information-bearing acoustic features of sounds by auditory neurons. We report here that auditory neurons can be classified into functional groups each of which plays a specific role in extracting distinct complex sound features. We recorded the electrophysiological responses of single auditory neurons in the songbird midbrain and forebrain to conspecific song, measured their tuning by calculating spectrotemporal receptive fields (STRFs) and classified them using multiple cluster analysis methods. Based on STRF shape, cells clustered into functional groups that divided the space of acoustical features into regions that represent cues for the fundamental acoustic percepts of pitch, timbre and rhythm. Four major groups were found in the midbrain and five major groups were found in the forebrain. Comparing STRFs in midbrain and forebrain neurons suggested that both inheritance and emergence of tuning properties occur as information ascends the auditory processing stream. PMID:19261874

  20. Numerical relativity for D dimensional axially symmetric space-times: Formalism and code tests

    NASA Astrophysics Data System (ADS)

    Zilhão, Miguel; Witek, Helvi; Sperhake, Ulrich; Cardoso, Vitor; Gualtieri, Leonardo; Herdeiro, Carlos; Nerozzi, Andrea

    2010-04-01

    The numerical evolution of Einstein’s field equations in a generic background has the potential to answer a variety of important questions in physics: from applications to the gauge-gravity duality, to modeling black hole production in TeV gravity scenarios, to analysis of the stability of exact solutions, and to tests of cosmic censorship. In order to investigate these questions, we extend numerical relativity to more general space-times than those investigated hitherto, by developing a framework to study the numerical evolution of D dimensional vacuum space-times with an SO(D-2) isometry group for D≥5, or SO(D-3) for D≥6. Performing a dimensional reduction on a (D-4) sphere, the D dimensional vacuum Einstein equations are rewritten as a 3+1 dimensional system with source terms, and presented in the Baumgarte, Shapiro, Shibata, and Nakamura formulation. This allows the use of existing 3+1 dimensional numerical codes with small adaptations. Brill-Lindquist initial data are constructed in D dimensions and a procedure to match them to our 3+1 dimensional evolution equations is given. We have implemented our framework by adapting the Lean code and perform a variety of simulations of nonspinning black hole space-times. Specifically, we present a modified moving puncture gauge, which facilitates long-term stable simulations in D=5. We further demonstrate the internal consistency of the code by studying convergence and comparing numerical versus analytic results in the case of geodesic slicing for D=5, 6.

  1. Spectro-temporal shaping of supercontinuum for subnanosecond time-coded M-CARS spectroscopy.

    PubMed

    Shalaby, Badr M; Louot, Christophe; Capitaine, Erwan; Krupa, Katarzyna; Labruyère, Alexis; Tonello, Alessandro; Pagnoux, Dominique; Leproux, Philippe; Couderc, Vincent

    2016-11-01

A supercontinuum laser source was designed for multiplex coherent anti-Stokes Raman scattering spectroscopy. This source was based on the use of a germanium-doped standard optical fiber with a zero dispersion wavelength at 1600 nm and pumped at 1064 nm. We analyzed the nonlinear spectro-temporal interrelations of a subnanosecond pulse propagating in a normal dispersion regime in the presence of a multiple Raman cascading process and strong conversion. The multiple Raman orders permitted the generation of a high-power flat spectrum with specific nonlinear dynamics that can open the way to subnanosecond time-coded multiplex CARS systems.

  2. A real-time chirp-coded imaging system with tissue attenuation compensation.

    PubMed

    Ramalli, A; Guidi, F; Boni, E; Tortoli, P

    2015-07-01

    In ultrasound imaging, pulse compression methods based on the transmission (TX) of long coded pulses and matched receive filtering can be used to improve the penetration depth while preserving the axial resolution (coded-imaging). The performance of most of these methods is affected by the frequency dependent attenuation of tissue, which causes mismatch of the receiver filter. This, together with the involved additional computational load, has probably so far limited the implementation of pulse compression methods in real-time imaging systems. In this paper, a real-time low-computational-cost coded-imaging system operating on the beamformed and demodulated data received by a linear array probe is presented. The system has been implemented by extending the firmware and the software of the ULA-OP research platform. In particular, pulse compression is performed by exploiting the computational resources of a single digital signal processor. Each image line is produced in less than 20 μs, so that, e.g., 192-line frames can be generated at up to 200 fps. Although the system may work with a large class of codes, this paper has been focused on the test of linear frequency modulated chirps. The new system has been used to experimentally investigate the effects of tissue attenuation so that the design of the receive compression filter can be accordingly guided. Tests made with different chirp signals confirm that, although the attainable compression gain in attenuating media is lower than the theoretical value expected for a given TX Time-Bandwidth product (BT), good SNR gains can be obtained. For example, by using a chirp signal having BT=19, a 13 dB compression gain has been measured. By adapting the frequency band of the receiver to the band of the received echo, the signal-to-noise ratio and the penetration depth have been further increased, as shown by real-time tests conducted on phantoms and in vivo. In particular, a 2.7 dB SNR increase has been measured through a
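    The pulse-compression idea the system relies on (transmit a long frequency-modulated chirp, then correlate the received echo against a time-reversed copy) can be sketched in a few lines of NumPy. The sample rate, sweep band, delay and noise level below are arbitrary illustrative values, not ULA-OP settings:

```python
import numpy as np

fs = 40e6            # sample rate (assumed)
T = 5e-6             # chirp duration
f0, f1 = 2e6, 8e6    # linear frequency sweep
t = np.arange(0, T, 1 / fs)
k = (f1 - f0) / T
chirp = np.sin(2 * np.pi * (f0 * t + 0.5 * k * t ** 2))

# Received signal: a weak, delayed echo of the chirp buried in noise
rng = np.random.default_rng(0)
rx = np.zeros(4096)
delay = 1000
rx[delay:delay + len(chirp)] = 0.05 * chirp
rx += 0.01 * rng.standard_normal(rx.size)

# Matched filter = time-reversed copy of the transmitted chirp
mf = chirp[::-1]
compressed = np.convolve(rx, mf, mode='full')
peak = np.argmax(np.abs(compressed))   # echo location after compression
```

    For this chirp BT = (f1 - f0)*T = 30, so the ideal compression gain is about 10*log10(30) = 15 dB, consistent in scale with the roughly 13 dB the authors measure for BT = 19 (10*log10(19) is about 12.8 dB).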

  3. Just-in-time coding of the problem list in a clinical environment.

    PubMed Central

    Warren, J. J.; Collins, J.; Sorrentino, C.; Campbell, J. R.

    1998-01-01

    Clinically useful problem lists are essential to the CPR. Providing a terminology that is standardized and understood by all clinicians is a major challenge. UNMC has developed a lexicon to support their problem list. Using a just-in-time coding strategy, the lexicon is maintained and extended prospectively in a dynamic clinical environment. The terms in the lexicon are mapped to ICD-9-CM, NANDA, and SNOMED International classification schemes. Currently, the lexicon contains 12,000 terms. This process of development and maintenance of the lexicon is described. PMID:9929226

  4. Adaptation in the auditory system: an overview.

    PubMed

    Pérez-González, David; Malmierca, Manuel S

    2014-01-01

The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already experience adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of the stimuli. However, it is at higher levels in the auditory hierarchy where more sophisticated types of neuronal processing take place. One example is stimulus-specific adaptation, in which neurons adapt to frequent, repetitive stimuli but maintain their responsiveness to stimuli with different physical characteristics, a distinct kind of processing that may play a role in change and deviance detection. In the auditory cortex, adaptation takes more elaborate forms and contributes to the processing of complex sequences, auditory scene analysis and attention. Here we review the multiple types of adaptation that occur in the auditory system, which are part of the pool of resources that the neurons employ to process the auditory scene, and are critical to a proper understanding of the neuronal mechanisms that govern auditory perception.

  5. Imaging of human tooth using ultrasound based chirp-coded nonlinear time reversal acoustics.

    PubMed

    Dos Santos, Serge; Prevorovsky, Zdenek

    2011-08-01

Human tooth imaging sonography is investigated experimentally with an acousto-optic noncoupling set-up based on the chirp-coded nonlinear time reversal acoustic concept. The complexity of the tooth internal structure (enamel-dentine interface, cracks between internal tubules) is analyzed by adapting nonlinear elastic wave spectroscopy (NEWS) with the objective of the tomography of damage. Optimization of excitations using intrinsic symmetries, such as time reversal (TR) invariance, reciprocity and correlation properties, is then proposed and implemented experimentally. The proposed medical application of this TR-NEWS approach is implemented on a third molar human tooth and constitutes an alternative to noncoupling echodentography techniques. A 10 MHz bandwidth ultrasonic instrumentation has been developed, including a laser vibrometer and a 20 MHz contact piezoelectric transducer. The calibrated chirp-coded TR-NEWS imaging of the tooth is obtained using symmetrized excitations, pre- and post-signal processing, and the highly sensitive 14 bit resolution TR-NEWS instrumentation previously calibrated. A nonlinear signature arising from the symmetry properties is observed experimentally in the tooth using this bi-modal TR-NEWS imaging before and after the focusing induced by the time-compression process. The TR-NEWS polar B-scan of the tooth is described and suggested as a potential application for modern echodentography. It constitutes the basis of self-consistent harmonic imaging sonography for monitoring crack propagation in the dentine, which is responsible for human tooth structural health. Copyright © 2011 Elsevier B.V. All rights reserved.

  6. Explicit time-reversible orbit integration in Particle In Cell codes with static homogeneous magnetic field

    NASA Astrophysics Data System (ADS)

    Patacchini, L.; Hutchinson, I. H.

    2009-04-01

A new explicit time-reversible orbit integrator for the equations of motion in a static homogeneous magnetic field - called Cyclotronic integrator - is presented. Like Spreiter and Walter's Taylor expansion algorithm, for sufficiently weak electric field gradients this second order method does not require a fine resolution of the Larmor motion; it has however the essential advantage of being symplectic, hence time-reversible. The Cyclotronic integrator is only subject to a linear stability constraint (ΩΔt < π, where Ω is the Larmor angular frequency), and is therefore particularly suitable for electrostatic Particle In Cell codes with uniform magnetic field where Ω is larger than any other characteristic frequency, yet a resolution of the particles' gyromotion is required. Application examples and a detailed comparison with the well-known (time-reversible) Boris algorithm are presented; it is in particular shown that implementation of the Cyclotronic integrator in the kinetic codes SCEPTIC and Democritus can reduce the cost of orbit integration by up to a factor of ten.
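    To see the structure such an integrator can take, here is a minimal kick-rotate-kick step for a uniform field B = Bz ẑ: the electric field contributes two half-impulses while the magnetic gyration is advanced analytically, so the Larmor motion itself need not be resolved. This function is my own illustration of the idea, not the paper's implementation:

```python
import math

def cyclotronic_step(x, v, dt, qm, Bz, E=(0.0, 0.0, 0.0)):
    """One kick-rotate-kick step for uniform B = (0, 0, Bz), Bz != 0.

    qm is the charge-to-mass ratio q/m. The gyration about z is solved
    exactly, so only the linear stability bound Omega*dt < pi matters.
    """
    # half electric kick
    vx, vy, vz = (vi + 0.5 * dt * qm * Ei for vi, Ei in zip(v, E))
    om = qm * Bz                    # Larmor angular frequency Omega
    th = om * dt
    s, c = math.sin(th), math.cos(th)
    # exact integral of the gyration over dt (position update)
    xn = [x[0] + (vx * s + vy * (1.0 - c)) / om,
          x[1] + (vy * s - vx * (1.0 - c)) / om,
          x[2] + vz * dt]
    # exact rotation of the perpendicular velocity
    vn = [vx * c + vy * s, vy * c - vx * s, vz]
    # half electric kick
    return xn, [vi + 0.5 * dt * qm * Ei for vi, Ei in zip(vn, E)]
```

    With E = 0, eight steps of dt = 2π/8 (each rotating the velocity by π/4, well inside the stated stability bound) close the gyro-orbit exactly up to round-off.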

  7. Audio Signal Processing Using Time-Frequency Approaches: Coding, Classification, Fingerprinting, and Watermarking

    NASA Astrophysics Data System (ADS)

    Umapathy, K.; Ghoraani, B.; Krishnan, S.

    2010-12-01

Audio signals are information-rich nonstationary signals that play an important role in our day-to-day communication, perception of environment, and entertainment. Due to their non-stationary nature, time- or frequency-only approaches are inadequate in analyzing these signals. A joint time-frequency (TF) approach would be a better choice to efficiently process these signals. In this digital era, compression, intelligent indexing for content-based retrieval, classification, and protection of digital audio content are a few of the areas that encapsulate a majority of the audio signal processing applications. In this paper, we present a comprehensive array of TF methodologies that successfully address applications in all of the above mentioned areas. A TF-based audio coding scheme with a novel psychoacoustic model, music classification, audio classification of environmental sounds, audio fingerprinting, and audio watermarking will be presented to demonstrate the advantages of using time-frequency approaches in analyzing and extracting information from audio signals.

  8. Architecture for time or transform domain decoding of reed-solomon codes

    NASA Technical Reports Server (NTRS)

    Shao, Howard M. (Inventor); Truong, Trieu-Kie (Inventor); Hsu, In-Shek (Inventor); Deutsch, Leslie J. (Inventor)

    1989-01-01

Two pipeline (255,223) RS decoders, one a time domain decoder and the other a transform domain decoder, use the same first part to develop an errata locator polynomial τ(x) and an errata evaluator polynomial A(x). Both the time domain decoder and transform domain decoder have a modified GCD that uses an input multiplexer and an output demultiplexer to reduce the number of GCD cells required. The time domain decoder uses a Chien search and polynomial evaluator on the GCD outputs τ(x) and A(x) for the final decoding steps, while the transform domain decoder uses a transform error pattern algorithm operating on τ(x) and the initial syndrome computation S(x), followed by an inverse transform algorithm in sequence for the final decoding steps prior to adding the received RS coded message to produce a decoded output message.
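    The Chien search named above is conceptually simple: evaluate the errata locator polynomial at every field element and read the error positions off its roots. A toy version over GF(2^4) (far smaller than the GF(2^8) arithmetic practical RS decoders use; all helper names here are mine) might look like:

```python
# GF(2^4) arithmetic with primitive polynomial x^4 + x + 1, then a
# Chien-style search for the roots of a locator polynomial tau(x).
PRIM = 0b10011   # x^4 + x + 1
N = 15           # number of nonzero field elements

exp_t = [0] * (2 * N)    # alpha^i table (doubled to avoid mod in mul)
log_t = [0] * (N + 1)
x = 1
for i in range(N):
    exp_t[i] = x
    log_t[x] = i
    x <<= 1
    if x & 0x10:
        x ^= PRIM
for i in range(N, 2 * N):
    exp_t[i] = exp_t[i - N]

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return exp_t[log_t[a] + log_t[b]]

def poly_eval(coeffs, x):
    """Evaluate a polynomial (lowest-degree coefficient first) at x."""
    y, xp = 0, 1
    for c in coeffs:
        y ^= gf_mul(c, xp)       # addition in GF(2^m) is XOR
        xp = gf_mul(xp, x)
    return y

def chien_search(tau):
    """Error positions i such that tau(alpha^-i) = 0."""
    return [i for i in range(N) if poly_eval(tau, exp_t[(N - i) % N]) == 0]

def poly_mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] ^= gf_mul(a, b)
    return r

# Build tau(x) = (1 + alpha^3 x)(1 + alpha^7 x) for errors at positions 3 and 7;
# the search should recover exactly those positions.
tau = poly_mul([1, exp_t[3]], [1, exp_t[7]])
```

    Calling `chien_search(tau)` then returns the error positions 3 and 7, mirroring the decoder's final step of locating errata before correction.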

  9. Process Timing and Its Relation to the Coding of Tonal Harmony

    ERIC Educational Resources Information Center

    Aksentijevic, Aleksandar; Barber, Paul J.; Elliott, Mark A.

    2011-01-01

    Advances in auditory research suggest that gamma-band synchronization of frequency-specific cortical loci could be responsible for the integration of pure tones (harmonics) into harmonic complex tones. Thus far, evidence for such a mechanism has been revealed in neurophysiological studies, with little corroborative psychophysical evidence. In six…

  11. Analysis and modeling of quasi-stationary multivariate time series and their application to middle latency auditory evoked potentials

    NASA Astrophysics Data System (ADS)

    Hutt, A.; Riedel, H.

    2003-03-01

A methodological framework for analyzing and modeling of multivariate data is introduced. In a first step, a cluster method extracts data segments of quasi-stationary states. A novel cluster criterion for segment borders is introduced, which is independent of the number of clusters. Its assessment reveals additional robustness towards initial conditions. A subsequent dynamical systems based modeling (DSBM) approach focuses on data segments and fits low-dimensional dynamical systems for each segment. Applications to middle latency auditory evoked potentials yield data segments which are equivalent to well-known waves from electroencephalography studies. Focusing on wave Pa, two-dimensional dynamical systems with common topological properties are extracted. These findings reveal the common underlying dynamics of Pa and indicate self-organized brain activity.

  12. TTVFast: An Efficient and Accurate Code for Transit Timing Inversion Problems

    NASA Astrophysics Data System (ADS)

    Deck, Katherine M.; Agol, Eric; Holman, Matthew J.; Nesvorný, David

    2014-06-01

    Transit timing variations (TTVs) have proven to be a powerful technique for confirming Kepler planet candidates, for detecting non-transiting planets, and for constraining the masses and orbital elements of multi-planet systems. These TTV applications often require the numerical integration of orbits for computation of transit times (as well as impact parameters and durations); frequently tens of millions to billions of simulations are required when running statistical analyses of the planetary system properties. We have created a fast code for transit timing computation, TTVFast, which uses a symplectic integrator with a Keplerian interpolator for the calculation of transit times. The speed comes at the expense of accuracy in the calculated times, but the accuracy lost is largely unnecessary, as transit times do not need to be calculated to accuracies significantly smaller than the measurement uncertainties on the times. The time step can be tuned to give sufficient precision for any particular system. We find a speed-up of at least an order of magnitude relative to dynamical integrations with high precision using a Bulirsch-Stoer integrator.
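    The trade the code makes (a symplectic integrator plus interpolation of the transit condition between steps, rather than root-finding to machine precision) can be illustrated with a toy one-planet system in units where GM = 1. This is a sketch of the general approach, not TTVFast itself:

```python
import math

def transit_times(dt=0.01, n_steps=1000):
    """Kick-drift-kick leapfrog orbit of a test planet about a unit-mass
    star, recording transits across the +y line of sight by linearly
    interpolating the x = 0 crossing between steps."""
    x, y, vx, vy = 1.0, 0.0, 0.0, 1.0   # circular orbit, period 2*pi
    def acc(x, y):
        r3 = (x * x + y * y) ** 1.5
        return -x / r3, -y / r3
    ax, ay = acc(x, y)
    t, transits = 0.0, []
    for _ in range(n_steps):
        vx += 0.5 * dt * ax             # half kick
        vy += 0.5 * dt * ay
        x_old = x
        x += dt * vx                    # drift
        y += dt * vy
        ax, ay = acc(x, y)
        vx += 0.5 * dt * ax             # half kick
        vy += 0.5 * dt * ay
        t += dt
        if x_old > 0.0 >= x and y > 0.0:
            # interpolate the crossing time between the two steps
            transits.append(t - dt * x / (x - x_old))
    return transits
```

    The interpolated transit interval comes out within the step-size error of the true period 2π, illustrating why coarse (hence fast) symplectic steps suffice when timing uncertainties dominate.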

  13. TTVFast: An efficient and accurate code for transit timing inversion problems

    SciTech Connect

    Deck, Katherine M.; Agol, Eric; Holman, Matthew J.; Nesvorný, David

    2014-06-01

    Transit timing variations (TTVs) have proven to be a powerful technique for confirming Kepler planet candidates, for detecting non-transiting planets, and for constraining the masses and orbital elements of multi-planet systems. These TTV applications often require the numerical integration of orbits for computation of transit times (as well as impact parameters and durations); frequently tens of millions to billions of simulations are required when running statistical analyses of the planetary system properties. We have created a fast code for transit timing computation, TTVFast, which uses a symplectic integrator with a Keplerian interpolator for the calculation of transit times. The speed comes at the expense of accuracy in the calculated times, but the accuracy lost is largely unnecessary, as transit times do not need to be calculated to accuracies significantly smaller than the measurement uncertainties on the times. The time step can be tuned to give sufficient precision for any particular system. We find a speed-up of at least an order of magnitude relative to dynamical integrations with high precision using a Bulirsch-Stoer integrator.

  14. The influence of cochlear spectral processing on the timing and amplitude of the speech-evoked auditory brain stem response.

    PubMed

    Nuttall, Helen E; Moore, David R; Barry, Johanna G; Krumbholz, Katrin; de Boer, Jessica

    2015-06-01

    The speech-evoked auditory brain stem response (speech ABR) is widely considered to provide an index of the quality of neural temporal encoding in the central auditory pathway. The aim of the present study was to evaluate the extent to which the speech ABR is shaped by spectral processing in the cochlea. High-pass noise masking was used to record speech ABRs from delimited octave-wide frequency bands between 0.5 and 8 kHz in normal-hearing young adults. The latency of the frequency-delimited responses decreased from the lowest to the highest frequency band by up to 3.6 ms. The observed frequency-latency function was compatible with model predictions based on wave V of the click ABR. The frequency-delimited speech ABR amplitude was largest in the 2- to 4-kHz frequency band and decreased toward both higher and lower frequency bands despite the predominance of low-frequency energy in the speech stimulus. We argue that the frequency dependence of speech ABR latency and amplitude results from the decrease in cochlear filter width with decreasing frequency. The results suggest that the amplitude and latency of the speech ABR may reflect interindividual differences in cochlear, as well as central, processing. The high-pass noise-masking technique provides a useful tool for differentiating between peripheral and central effects on the speech ABR. It can be used for further elucidating the neural basis of the perceptual speech deficits that have been associated with individual differences in speech ABR characteristics. Copyright © 2015 the American Physiological Society.

  15. The influence of cochlear spectral processing on the timing and amplitude of the speech-evoked auditory brain stem response

    PubMed Central

    Nuttall, Helen E.; Moore, David R.; Barry, Johanna G.; Krumbholz, Katrin

    2015-01-01

    The speech-evoked auditory brain stem response (speech ABR) is widely considered to provide an index of the quality of neural temporal encoding in the central auditory pathway. The aim of the present study was to evaluate the extent to which the speech ABR is shaped by spectral processing in the cochlea. High-pass noise masking was used to record speech ABRs from delimited octave-wide frequency bands between 0.5 and 8 kHz in normal-hearing young adults. The latency of the frequency-delimited responses decreased from the lowest to the highest frequency band by up to 3.6 ms. The observed frequency-latency function was compatible with model predictions based on wave V of the click ABR. The frequency-delimited speech ABR amplitude was largest in the 2- to 4-kHz frequency band and decreased toward both higher and lower frequency bands despite the predominance of low-frequency energy in the speech stimulus. We argue that the frequency dependence of speech ABR latency and amplitude results from the decrease in cochlear filter width with decreasing frequency. The results suggest that the amplitude and latency of the speech ABR may reflect interindividual differences in cochlear, as well as central, processing. The high-pass noise-masking technique provides a useful tool for differentiating between peripheral and central effects on the speech ABR. It can be used for further elucidating the neural basis of the perceptual speech deficits that have been associated with individual differences in speech ABR characteristics. PMID:25787954

  16. Auditory sequence analysis and phonological skill.

    PubMed

    Grube, Manon; Kumar, Sukhbinder; Cooper, Freya E; Turton, Stuart; Griffiths, Timothy D

    2012-11-07

    This work tests the relationship between auditory and phonological skill in a non-selected cohort of 238 school students (age 11) with the specific hypothesis that sound-sequence analysis would be more relevant to phonological skill than the analysis of basic, single sounds. Auditory processing was assessed across the domains of pitch, time and timbre; a combination of six standard tests of literacy and language ability was used to assess phonological skill. A significant correlation between general auditory and phonological skill was demonstrated, plus a significant, specific correlation between measures of phonological skill and the auditory analysis of short sequences in pitch and time. The data support a limited but significant link between auditory and phonological ability with a specific role for sound-sequence analysis, and provide a possible new focus for auditory training strategies to aid language development in early adolescence.

  17. Auditory sequence analysis and phonological skill

    PubMed Central

    Grube, Manon; Kumar, Sukhbinder; Cooper, Freya E.; Turton, Stuart; Griffiths, Timothy D.

    2012-01-01

    This work tests the relationship between auditory and phonological skill in a non-selected cohort of 238 school students (age 11) with the specific hypothesis that sound-sequence analysis would be more relevant to phonological skill than the analysis of basic, single sounds. Auditory processing was assessed across the domains of pitch, time and timbre; a combination of six standard tests of literacy and language ability was used to assess phonological skill. A significant correlation between general auditory and phonological skill was demonstrated, plus a significant, specific correlation between measures of phonological skill and the auditory analysis of short sequences in pitch and time. The data support a limited but significant link between auditory and phonological ability with a specific role for sound-sequence analysis, and provide a possible new focus for auditory training strategies to aid language development in early adolescence. PMID:22951739

  18. A visual parallel-BCI speller based on the time-frequency coding strategy

    NASA Astrophysics Data System (ADS)

    Xu, Minpeng; Chen, Long; Zhang, Lixin; Qi, Hongzhi; Ma, Lan; Tang, Jiabei; Wan, Baikun; Ming, Dong

    2014-04-01

Objective. Spelling is one of the most important issues in brain-computer interface (BCI) research. This paper develops a visual parallel-BCI speller system based on a time-frequency coding strategy in which the sub-speller switching among four simultaneously presented sub-spellers and the character selection are identified in a parallel mode. Approach. The parallel-BCI speller was constituted by four independent P300+SSVEP-B (P300 plus SSVEP blocking) spellers with different flicker frequencies, so that every character had a specific time-frequency code. To verify its effectiveness, 11 subjects were involved in the offline and online spellings. A classification strategy was designed to recognize the target character by jointly using canonical correlation analysis and stepwise linear discriminant analysis. Main results. Online spellings showed that the proposed parallel-BCI speller had a high performance, reaching a highest information transfer rate of 67.4 bit min⁻¹, with averages of 54.0 bit min⁻¹ and 43.0 bit min⁻¹ in the three-round and five-round conditions, respectively. Significance. The results indicated that the proposed parallel-BCI could be effectively controlled by users, with attention shifting smoothly among the sub-spellers, and greatly improved the BCI spelling performance.

  19. Auditory short-term memory in the primate auditory cortex.

    PubMed

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory.

  20. A population coding account for systematic variation in saccadic dead time.

    PubMed

    Ludwig, Casimir J H; Mildinhall, John W; Gilchrist, Iain D

    2007-01-01

    During movement programming, there is a point in time at which the movement system is committed to executing an action with certain parameters even though new information may render this action obsolete. For saccades programmed to a visual target this period is termed the dead time. Using a double-step paradigm, we examined potential variability in the dead time with variations in overall saccade latency and spatiotemporal configuration of two sequential targets. In experiment 1, we varied overall saccade latency by manipulating the presence or absence of a central fixation point. Despite a large and robust gap effect, decreasing the saccade latency in this way did not alter the dead time. In experiment 2, we varied the separation between the two targets. The dead time increased with separation up to a point and then leveled off. A stochastic accumulator model of the oculomotor decision mechanism accounts comprehensively for our findings. The model predicts a gap effect through changes in baseline activity without producing variations in the dead time. Variations in dead time with separation between the two target locations are a natural consequence of the population coding assumption in the model.
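    The model's explanation of the gap effect (a raised baseline of accumulator activity, rather than any change in the dead time) is easy to reproduce with a toy accumulator. The drift, noise and threshold values below are arbitrary illustrative choices, not the paper's fitted parameters:

```python
import random

def latency(baseline, drift=0.01, threshold=1.0, noise=0.02, seed=None):
    """Time steps for one noisy accumulator to rise from `baseline` to
    `threshold` (a toy drift-plus-Gaussian-noise process)."""
    rng = random.Random(seed)
    a, t = baseline, 0
    while a < threshold:
        a += drift + rng.gauss(0.0, noise)
        a = max(a, 0.0)          # activity cannot go negative
        t += 1
    return t

# Gap condition modeled purely as elevated baseline activity
gap = [latency(0.3, seed=s) for s in range(200)]
no_gap = [latency(0.0, seed=s) for s in range(200)]
mean_gap = sum(gap) / len(gap)
mean_no_gap = sum(no_gap) / len(no_gap)
```

    Raising the starting activity shortens mean latency without touching the accumulation dynamics themselves, which is how the model produces a gap effect while leaving the dead time unchanged.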

  1. Capabilities needed for the next generation of thermo-hydraulic codes for use in real time applications

    SciTech Connect

    Arndt, S.A.

    1997-07-01

    The real-time reactor simulation field is currently at a crossroads in terms of the capability to perform real-time analysis using the most sophisticated computer codes. Current generation safety analysis codes are being modified to replace simplified codes that were specifically designed to meet the competing requirement for real-time applications. The next generation of thermo-hydraulic codes will need to have included in their specifications the specific requirement for use in a real-time environment. Use of the codes in real-time applications imposes much stricter requirements on robustness, reliability and repeatability than do design and analysis applications. In addition, the need for code use by a variety of users is a critical issue for real-time users, trainers and emergency planners who currently use real-time simulation, and PRA practitioners who will increasingly use real-time simulation for evaluating PRA success criteria in near real-time to validate PRA results for specific configurations and plant system unavailabilities.

  2. Auditory adaptation improves tactile frequency perception.

    PubMed

    Crommett, Lexi E; Pérez-Bellido, Alexis; Yau, Jeffrey M

    2017-01-11

    Our ability to process temporal frequency information by touch underlies our capacity to perceive and discriminate surface textures. Auditory signals, which also provide extensive temporal frequency information, can systematically alter the perception of vibrations on the hand. How auditory signals shape tactile processing is unclear: perceptual interactions between contemporaneous sounds and vibrations are consistent with multiple neural mechanisms. Here we used a crossmodal adaptation paradigm, which separated auditory and tactile stimulation in time, to test the hypothesis that tactile frequency perception depends on neural circuits that also process auditory frequency. We reasoned that auditory adaptation effects would transfer to touch only if signals from both senses converge on common representations. We found that auditory adaptation can improve tactile frequency discrimination thresholds. This occurred only when adaptor and test frequencies overlapped. In contrast, auditory adaptation did not influence tactile intensity judgments. Thus, auditory adaptation enhances touch in a frequency- and feature-specific manner. A simple network model in which tactile frequency information is decoded from sensory neurons that are susceptible to auditory adaptation recapitulates these behavioral results. Our results imply that the neural circuits supporting tactile frequency perception also process auditory signals. This finding is consistent with the notion of supramodal operators performing canonical operations, like temporal frequency processing, regardless of input modality.

  3. A multi-layer VLC imaging system based on space-time trace-orthogonal coding

    NASA Astrophysics Data System (ADS)

    Li, Peng-Xu; Yang, Yu-Hong; Zhu, Yi-Jun; Zhang, Yan-Yu

    2017-02-01

In visible light communication (VLC) imaging systems, different types of data often must be transmitted with different priorities in terms of reliability and/or validity. With this in mind, a novel transmission scheme called space-time trace-orthogonal coding (STTOC) for VLC is proposed in this paper by taking full advantage of the characteristics of time-domain transmission and space-domain orthogonality. Then, several constellation designs for different priority strategies subject to a total power constraint are presented. One significant advantage of this novel scheme is that the inter-layer interference (ILI) can be eliminated completely and the computational complexity of maximum likelihood (ML) detection is linear. Computer simulations verify the correctness of our theoretical analysis and demonstrate that the proposed scheme greatly outperforms conventional multi-layer transmission systems in both transmission rate and error performance.

  4. A comparison of VLSI architectures for time and transform domain decoding of Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Hsu, I. S.; Truong, T. K.; Deutsch, L. J.; Satorius, E. H.; Reed, I. S.

    1988-01-01

It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial needed to decode a Reed-Solomon (RS) code. It is shown that this algorithm can be used for both time and transform domain decoding by replacing its initial conditions with the Forney syndromes and the erasure locator polynomial. By this means both the errata locator polynomial and the errata evaluator polynomial can be obtained with the Euclidean algorithm. With these ideas, both time and transform domain Reed-Solomon decoders for correcting errors and erasures are simplified and compared. As a consequence, the architectures of Reed-Solomon decoders for correcting both errors and erasures can be made more modular, regular, simple, and naturally suitable for VLSI implementation.
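    The Euclidean algorithm over a polynomial ring is the shared engine of both decoder variants. As a minimal illustration (over GF(2)[x] with polynomials stored as bitmasks, rather than the GF(2^8) arithmetic a real RS decoder needs; all helper names are mine):

```python
def deg(p):
    """Degree of a GF(2)[x] polynomial stored as a bitmask (-1 for 0)."""
    return p.bit_length() - 1

def pdivmod(a, b):
    """Quotient and remainder of a / b in GF(2)[x] via shift-and-XOR."""
    q = 0
    while a and deg(a) >= deg(b):
        shift = deg(a) - deg(b)
        q ^= 1 << shift
        a ^= b << shift          # subtract (= add) the shifted divisor
    return q, a

def pgcd(a, b):
    """Euclidean algorithm: repeated polynomial division."""
    while b:
        _, r = pdivmod(a, b)
        a, b = b, r
    return a

# Example: x^2 + 1 = (x + 1)^2 over GF(2), so gcd(x^2 + 1, x + 1) = x + 1.
g = pgcd(0b101, 0b11)
```

    A decoder runs the same division loop but over GF(2^8) and stops early, once the remainder's degree drops below the error-correction bound, at which point the intermediate quantities give the locator and evaluator polynomials.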

  5. A high order special relativistic hydrodynamic and magnetohydrodynamic code with space-time adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Zanotti, Olindo; Dumbser, Michael

    2015-03-01

    We present a high order one-step ADER-WENO finite volume scheme with space-time adaptive mesh refinement (AMR) for the solution of the special relativistic hydrodynamic and magnetohydrodynamic equations. By adopting a local discontinuous Galerkin predictor method, a high order one-step time discretization is obtained, with no need for Runge-Kutta sub-steps. This turns out to be particularly advantageous in combination with space-time adaptive mesh refinement, which has been implemented following a "cell-by-cell" approach. As in existing second order AMR methods, the present higher order AMR algorithm also features time-accurate local time stepping (LTS), where grids on different spatial refinement levels are allowed to use different time steps. We also compare two different Riemann solvers for the computation of the numerical fluxes at the cell interfaces. The new scheme has been validated on a sample of numerical test problems in one, two and three spatial dimensions, exploring its ability to resolve the propagation of relativistic hydrodynamical and magnetohydrodynamical waves in different physical regimes. The astrophysical relevance of the new code for the study of the Richtmyer-Meshkov instability is briefly discussed in view of future applications.

  6. Biomedical time series clustering based on non-negative sparse coding and probabilistic topic model.

    PubMed

    Wang, Jin; Liu, Ping; F H She, Mary; Nahavandi, Saeid; Kouzani, Abbas

    2013-09-01

    Biomedical time series clustering that groups a set of unlabelled temporal signals according to their underlying similarity is very useful for biomedical records management and analysis such as biosignals archiving and diagnosis. In this paper, a new framework for clustering of long-term biomedical time series such as electrocardiography (ECG) and electroencephalography (EEG) signals is proposed. Specifically, local segments extracted from the time series are projected as a combination of a small number of basis elements in a trained dictionary by non-negative sparse coding. A Bag-of-Words (BoW) representation is then constructed by summing up all the sparse coefficients of local segments in a time series. Based on the BoW representation, a probabilistic topic model that was originally developed for text document analysis is extended to discover the underlying similarity of a collection of time series. The underlying similarity of biomedical time series is well captured owing to the statistical nature of the probabilistic topic model. Experiments on three datasets constructed from publicly available EEG and ECG signals demonstrate that the proposed approach achieves better accuracy than existing state-of-the-art methods, and is insensitive to model parameters such as length of local segments and dictionary size.
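The sparse-coding-to-BoW pipeline in this abstract can be sketched in a few lines. The following is a hedged toy version, not the authors' implementation: each local segment is coded as a non-negative combination of dictionary atoms (here via simple projected coordinate descent rather than a trained sparse coder), and the coefficients of all segments are summed into one histogram for the whole series. The two-atom dictionary and all names are illustrative.

```python
# Toy sketch (not the paper's method): non-negative coding of local segments
# against a fixed dictionary, pooled into a Bag-of-Words histogram.
def nn_code(segment, atoms, iters=200):
    """Non-negative coefficients minimizing ||segment - sum_k c_k * atom_k||^2."""
    k, n = len(atoms), len(segment)
    c = [0.0] * k
    for _ in range(iters):
        for j in range(k):
            # Residual with atom j's current contribution excluded.
            r = [segment[t] - sum(c[m] * atoms[m][t] for m in range(k) if m != j)
                 for t in range(n)]
            num = sum(r[t] * atoms[j][t] for t in range(n))
            den = sum(a * a for a in atoms[j])
            c[j] = max(0.0, num / den)   # project onto c_j >= 0
    return c

def bow_histogram(series, atoms, seg_len):
    """Sum the sparse coefficients of consecutive local segments."""
    hist = [0.0] * len(atoms)
    for start in range(0, len(series) - seg_len + 1, seg_len):
        c = nn_code(series[start:start + seg_len], atoms)
        hist = [h + ci for h, ci in zip(hist, c)]
    return hist

atoms = [[1.0, 1.0, 1.0, 1.0],      # flat atom
         [-1.0, -0.33, 0.33, 1.0]]  # rising atom
series = [2.0, 2.0, 2.0, 2.0, -1.0, -0.33, 0.33, 1.0]
print(bow_histogram(series, atoms, 4))  # first segment is flat, second rising
```

In the paper the histogram then feeds a probabilistic topic model; here it simply serves as the fixed-length feature vector that makes such downstream modeling possible.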

  7. Estimating the relative weights of visual and auditory tau versus heuristic-based cues for time-to-contact judgments in realistic, familiar scenes by older and younger adults.

    PubMed

    Keshavarz, Behrang; Campos, Jennifer L; DeLucia, Patricia R; Oberfeld, Daniel

    2017-04-01

    Estimating time to contact (TTC) involves multiple sensory systems, including vision and audition. Previous findings suggested that the ratio of an object's instantaneous optical size/sound intensity to its instantaneous rate of change in optical size/sound intensity (τ) drives TTC judgments. Other evidence has shown that heuristic-based cues are used, including final optical size or final sound pressure level. Most previous studies have used decontextualized and unfamiliar stimuli (e.g., geometric shapes on a blank background). Here we evaluated TTC estimates by using a traffic scene with an approaching vehicle to evaluate the weights of visual and auditory TTC cues under more realistic conditions. Younger (18-39 years) and older (65+ years) participants made TTC estimates in three sensory conditions: visual-only, auditory-only, and audio-visual. Stimuli were presented within an immersive virtual-reality environment, and cue weights were calculated for both visual cues (e.g., visual τ, final optical size) and auditory cues (e.g., auditory τ, final sound pressure level). The results demonstrated the use of visual τ as well as heuristic cues in the visual-only condition. TTC estimates in the auditory-only condition, however, were primarily based on an auditory heuristic cue (final sound pressure level), rather than on auditory τ. In the audio-visual condition, the visual cues dominated overall, with the highest weight being assigned to visual τ by younger adults, and a more equal weighting of visual τ and heuristic cues in older adults. Overall, better characterizing the effects of combined sensory inputs, stimulus characteristics, and age on the cues used to estimate TTC will provide important insights into how these factors may affect everyday behavior.
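The tau variable described above (the ratio of instantaneous optical size to its rate of change) can be computed directly, and recovers TTC without knowledge of distance or speed. Below is a minimal hedged sketch; the scenario numbers and function names are illustrative, not the study's stimuli.

```python
# Minimal sketch of the visual tau computation: theta / (d theta / dt)
# approximates time to contact for a constant-velocity approach.
import math

def optical_angle(size, distance):
    # Visual angle subtended by an object of physical `size` at `distance`.
    return 2.0 * math.atan(size / (2.0 * distance))

def tau_estimate(size, distance, speed, dt=1e-4):
    # Finite-difference approximation of theta / (dtheta/dt).
    theta_now = optical_angle(size, distance)
    theta_next = optical_angle(size, distance - speed * dt)
    dtheta_dt = (theta_next - theta_now) / dt
    return theta_now / dtheta_dt

# A car 1.8 m wide, 30 m away, approaching at 10 m/s: true TTC is 3 s.
print(tau_estimate(1.8, 30.0, 10.0))  # ~3.0 s, with no distance/speed input
```

Note that tau needs only the retinal (or acoustic) variable and its derivative, which is why heuristic cues such as final optical size or final sound pressure level are genuinely different strategies rather than approximations of tau.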

  8. Accuracy of rate coding: When shorter time window and higher spontaneous activity help

    NASA Astrophysics Data System (ADS)

    Levakova, Marie; Tamborrino, Massimiliano; Kostal, Lubomir; Lansky, Petr

    2017-02-01

    It is widely accepted that neuronal firing rates contain a significant amount of information about the stimulus intensity. Nevertheless, theoretical studies on the coding accuracy inferred from the exact spike counting distributions are rare. We present an analysis based on the number of observed spikes assuming the stochastic perfect integrate-and-fire model with a change point, representing the stimulus onset, for which we calculate the corresponding Fisher information to investigate the accuracy of rate coding. We analyze the effect of changing the duration of the time window and the influence of several parameters of the model, in particular the level of the presynaptic spontaneous activity and the level of random fluctuation of the membrane potential, which can be interpreted as noise of the system. The results show that the Fisher information is nonmonotonic with respect to the length of the observation period. This counterintuitive result is caused by the discrete nature of the count of spikes. We also observe that the signal can be enhanced by noise, since the Fisher information is nonmonotonic with respect to the level of spontaneous activity and, in some cases, also with respect to the level of fluctuation of the membrane potential.
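As a concrete illustration of computing coding accuracy from an exact spike-count distribution, here is a minimal Python sketch. It is not the paper's change-point model: it computes the Fisher information about the firing rate carried by a plain homogeneous Poisson count, which has the closed form I(rate) = T / rate, so the numeric sum can be checked directly.

```python
# Minimal sketch (NOT the paper's change-point model): Fisher information
# about the rate carried by a Poisson spike count n ~ Poisson(rate * T),
# computed from the exact count distribution.
import math

def fisher_info_poisson_count(rate, T, n_max=200):
    mu = rate * T
    fi = 0.0
    p = math.exp(-mu)             # P(n = 0), then updated recursively
    for n in range(n_max):
        score = n / rate - T      # d/d(rate) of log P(n)
        fi += p * score ** 2
        p *= mu / (n + 1)         # P(n+1) = P(n) * mu / (n+1)
    return fi

# Closed form for the Poisson count is I(rate) = T / rate.
print(fisher_info_poisson_count(5.0, 2.0))  # ≈ 2.0 / 5.0 = 0.4
```

For the pure Poisson count this quantity grows monotonically with the window length T; the nonmonotonicity the paper reports arises from the change point and the discreteness effects of its integrate-and-fire model, not from this baseline.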

  9. Novel space-time trellis codes for free-space optical communications using transmit laser selection.

    PubMed

    García-Zambrana, Antonio; Boluda-Ruiz, Rubén; Castillo-Vázquez, Carmen; Castillo-Vázquez, Beatriz

    2015-09-21

    In this paper, the deployment of novel space-time trellis codes (STTCs) with transmit laser selection (TLS) for free-space optical (FSO) communication systems using intensity modulation and direct detection (IM/DD) over atmospheric turbulence and misalignment fading channels is presented. Combining TLS and STTC with rate 1 bit/(s · Hz), a new code design criterion based on the use of the largest order statistics is here proposed for multiple-input/single-output (MISO) FSO systems in order to improve the diversity order gain by properly choosing the transmit lasers out of the available L lasers. Based on a pairwise error probability (PEP) analysis, closed-form asymptotic bit error-rate (BER) expressions in the range from low to high signal-to-noise ratio (SNR) are derived when the irradiance of the transmitted optical beam is susceptible to moderate-to-strong turbulence conditions, following a gamma-gamma (GG) distribution, and pointing error effects, following a misalignment fading model where the effect of beam width, detector size and jitter variance is considered. Obtained results show diversity orders of 2L and 3L when simple two-state and four-state STTCs are considered, respectively. Simulation results are further demonstrated to confirm the analytical results.

  10. Oscillations, phase-of-firing coding, and spike timing-dependent plasticity: an efficient learning scheme.

    PubMed

    Masquelier, Timothée; Hugues, Etienne; Deco, Gustavo; Thorpe, Simon J

    2009-10-28

    Recent experiments have established that information can be encoded in the spike times of neurons relative to the phase of a background oscillation in the local field potential-a phenomenon referred to as "phase-of-firing coding" (PoFC). These firing phase preferences could result from combining an oscillation in the input current with a stimulus-dependent static component that would produce the variations in preferred phase, but it remains unclear whether these phases are an epiphenomenon or really affect neuronal interactions-only then could they have a functional role. Here we show that PoFC has a major impact on downstream learning and decoding with the now well established spike timing-dependent plasticity (STDP). To be precise, we demonstrate with simulations how a single neuron equipped with STDP robustly detects a pattern of input currents automatically encoded in the phases of a subset of its afferents, and repeating at random intervals. Remarkably, learning is possible even when only a small fraction of the afferents (approximately 10%) exhibits PoFC. The ability of STDP to detect repeating patterns had been noted before in continuous activity, but it turns out that oscillations greatly facilitate learning. A benchmark with more conventional rate-based codes demonstrates the superiority of oscillations and PoFC for both STDP-based learning and the speed of decoding: the oscillation partially formats the input spike times, so that they mainly depend on the current input currents, and can be efficiently learned by STDP and then recognized in just one oscillation cycle. This suggests a major functional role for oscillatory brain activity that has been widely reported experimentally.
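The STDP rule this abstract builds on can be stated in a few lines. Below is a hedged sketch of the standard exponential pair-based form; the amplitudes and time constants are illustrative textbook values, not those used in the paper's simulations.

```python
# Hedged sketch of exponential pair-based STDP (illustrative parameters):
# pre-before-post spike pairs potentiate, post-before-pre pairs depress,
# with exponentially decaying influence of the spike-time difference.
import math

A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # decay time constants (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for a single pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt >= 0:  # pre leads post: causal pairing -> potentiation
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    return -A_MINUS * math.exp(dt / TAU_MINUS)  # anti-causal -> depression

print(stdp_dw(0.0, 10.0))   # positive: causal pairing strengthens the synapse
print(stdp_dw(10.0, 0.0))   # negative: anti-causal pairing weakens it
```

Because the update depends only on relative spike times within a few tens of milliseconds, an oscillation that compresses stimulus-driven spikes into a consistent phase window gives STDP exactly the repeated, tightly timed pairings it needs, which is the mechanism the abstract describes.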

  11. Creating a sense of auditory space.

    PubMed

    McAlpine, David

    2005-07-01

    Determining the location of a sound source requires the use of binaural hearing--information about a sound at the two ears converges onto neurones in the auditory brainstem to create a binaural representation. The main binaural cue used by many mammals to locate a sound source is the interaural time difference, or ITD. For over 50 years a single model has dominated thinking on how ITDs are processed. The Jeffress model consists of an array of coincidence detectors--binaural neurones that respond maximally to simultaneous input from each ear--innervated by a series of delay lines--axons of varying length from the two ears. The purpose of this arrangement is to create a topographic map of ITD, and hence spatial position in the horizontal plane, from the relative timing of a sound at the two ears. This model appears to be realized in the brain of the barn owl, an auditory specialist, and has been assumed to hold for mammals also. Recent investigations, however, indicate that both the way in which neural tuning for preferred ITD arises, and the coding strategy used by mammals to determine the location of a sound source, may be very different from those of barn owls and from the model proposed by Jeffress.

  12. Creating a sense of auditory space

    PubMed Central

    McAlpine, David

    2005-01-01

    Determining the location of a sound source requires the use of binaural hearing – information about a sound at the two ears converges onto neurones in the auditory brainstem to create a binaural representation. The main binaural cue used by many mammals to locate a sound source is the interaural time difference, or ITD. For over 50 years a single model has dominated thinking on how ITDs are processed. The Jeffress model consists of an array of coincidence detectors – binaural neurones that respond maximally to simultaneous input from each ear – innervated by a series of delay lines – axons of varying length from the two ears. The purpose of this arrangement is to create a topographic map of ITD, and hence spatial position in the horizontal plane, from the relative timing of a sound at the two ears. This model appears to be realized in the brain of the barn owl, an auditory specialist, and has been assumed to hold for mammals also. Recent investigations, however, indicate that both the way in which neural tuning for preferred ITD arises, and the coding strategy used by mammals to determine the location of a sound source, may be very different from those of barn owls and from the model proposed by Jeffress. PMID:15760940
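The Jeffress arrangement described in the records above can be caricatured in code: an array of coincidence detectors, each receiving one input through a different internal delay, where the detector whose delay cancels the acoustic ITD fires most. The following is a hedged toy sketch on binary spike trains; the trains and the sign convention for the delay are illustrative.

```python
# Toy Jeffress-style coincidence-detector array: score each candidate
# internal delay by counting left/right spike coincidences; the winning
# delay compensates (and thus reads out) the interaural time difference.
def best_delay(left, right, max_delay):
    """left/right: binary spike trains (lists of 0/1) of equal length."""
    scores = {}
    for d in range(-max_delay, max_delay + 1):
        c = 0
        for t in range(len(left)):
            if 0 <= t + d < len(left) and left[t + d] and right[t]:
                c += 1
        scores[d] = c
    return max(scores, key=scores.get)

# Right-ear train is the left-ear train delayed by 3 samples:
left  = [0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0]
right = [0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1]
print(best_delay(left, right, 5))  # -> -3: the compensating internal delay
```

The recent findings the abstract alludes to concern mammals, where ITD tuning may arise without such anatomical delay lines and location may be read from the relative activity of two broadly tuned channels rather than a labeled-line map; the toy above captures only the classical barn-owl-style picture.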

  13. Rejection positivity predicts trial-to-trial reaction times in an auditory selective attention task: a computational analysis of inhibitory control

    PubMed Central

    Chen, Sufen; Melara, Robert D.

    2014-01-01

    A series of computer simulations using variants of a formal model of attention (Melara and Algom, 2003) probed the role of rejection positivity (RP), a slow-wave electroencephalographic (EEG) component, in the inhibitory control of distraction. Behavioral and EEG data were recorded as participants performed auditory selective attention tasks. Simulations that modulated processes of distractor inhibition accounted well for reaction-time (RT) performance, whereas those that modulated target excitation did not. A model that incorporated RP from actual EEG recordings in estimating distractor inhibition was superior in predicting changes in RT as a function of distractor salience across conditions. A model that additionally incorporated momentary fluctuations in EEG as the source of trial-to-trial variation in performance precisely predicted individual RTs within each condition. The results lend support to the linking proposition that RP controls the speed of responding to targets through the inhibitory control of distractors. PMID:25191244

  14. Rigid body motion analysis system for off-line processing of time-coded video sequences

    NASA Astrophysics Data System (ADS)

    Snow, Walter L.; Shortis, Mark R.

    1995-09-01

    Photogrammetry affords the only noncontact means of providing unambiguous six-degree-of-freedom estimates for rigid body motion analysis. Video technology enables convenient off-the-shelf capability for obtaining and storing image data at frame (30 Hz) or field (60 Hz) rates. Videometry combines these technologies with frame capture capability accessible to PCs to enable otherwise unavailable measurements critical to the study of rigid body dynamics. To effectively utilize this capability, however, some means of editing, post processing, and sorting substantial amounts of time coded video data is required. This paper discusses a prototype motion analysis system built around PC and video disk technology, which is proving useful in exploring applications of these concepts to rigid body tracking and deformation analysis. Calibration issues and user interactive software development associated with this project will be discussed, as will examples of measurement projects and data reduction.

  15. Real-time postprocessing technique for compression artifact reduction in low-bit-rate video coding

    NASA Astrophysics Data System (ADS)

    Shen, Mei-Yin; Kuo, C.-C. Jay

    1998-10-01

    A computationally efficient postprocessing technique to reduce compression artifacts in low-bit-rate video coding is proposed in this research. We first formulate the artifact reduction problem as a robust estimation problem. Under this framework, the artifact-free image is obtained by minimizing a cost function that accounts for smoothness constraints as well as image fidelity. Instead of using the traditional approach that applies the gradient descent search for optimization, a set of nonlinear filters is proposed to determine an approximate global minimum with reduced computational complexity so that real-time postprocessing is possible. We performed experiments with the H.263 codec and observed that the proposed method is effective in reducing severe blocking and ringing artifacts, while maintaining a low complexity and a low memory bandwidth.
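The robust-estimation idea above can be illustrated with a hedged one-dimensional sketch, which is not the authors' filter set: a Huber-type influence function lets a single smoothing pass pull down small discontinuities (blocking steps) while leaving large, genuine edges nearly untouched, precisely because the influence of a large difference is bounded. The threshold and step size below are illustrative.

```python
# Toy robust smoothing pass (NOT the paper's filters): small neighbor
# differences get full smoothing weight, large ones are down-weighted so
# true edges survive while blocking steps are flattened.
def huber_weight(diff, threshold):
    return 1.0 if abs(diff) <= threshold else threshold / abs(diff)

def robust_smooth_row(row, threshold=8.0, step=0.5):
    out = list(row)
    for i in range(1, len(row) - 1):
        correction = 0.0
        for j in (i - 1, i + 1):
            d = row[j] - row[i]
            correction += huber_weight(d, threshold) * d
        out[i] = row[i] + step * correction / 2.0
    return out

# An 8-level blocking step (artifact) next to a 92-level true edge:
blocky = [100, 100, 100, 108, 108, 108, 200, 200]
print(robust_smooth_row(blocky))  # artifact gap shrinks, true edge nearly intact
```

Iterating such a pass approximates descending the robust cost function; using a fixed small set of nonlinear filters instead of an explicit gradient search is the complexity-saving move the abstract describes.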

  16. Real-time video coding under power constraint based on H.264 codec

    NASA Astrophysics Data System (ADS)

    Su, Li; Lu, Yan; Wu, Feng; Li, Shipeng; Gao, Wen

    2007-01-01

    In this paper, we propose a joint power-distortion optimization scheme for real-time H.264 video encoding under a power constraint. Firstly, the power constraint is translated to a complexity constraint based on DVS technology. Secondly, a computation allocation model (CAM) with virtual buffers is proposed to facilitate the optimal allocation of the constrained computational resource for each frame. Thirdly, a complexity-adjustable encoder based on optimal motion estimation and mode decision is proposed to meet the allocated resource. The proposed scheme takes advantage of some new features of H.264/AVC video coding tools such as the early termination strategy in fast ME. Moreover, it can avoid suffering from the high overhead of parametric power control algorithms and achieve fine complexity scalability in a wide range with stable rate-distortion performance. The proposed scheme also shows the potential of a further reduction of computation and power consumption in the decoding without any change to existing decoders.

  17. A systemized stroke code significantly reduced time intervals for using intravenous tissue plasminogen activator under magnetic resonance imaging screening.

    PubMed

    Sohn, Sang-Wook; Park, Hyun-Seok; Cha, Jae-Kwan; Nah, Hyun-Wook; Kim, Dae-Hyun; Kang, Myong-Jin; Choi, Jae-Hyung; Huh, Jae-Taeck

    2015-02-01

    A stroke code can shorten time intervals until intravenous tissue plasminogen activator (IV t-PA) treatment in acute ischemic stroke (AIS). Recently, several reports demonstrated that magnetic resonance imaging (MRI)-based thrombolysis had reduced complications and improved outcomes in AIS despite longer processing compared with computed tomography (CT)-based thrombolysis. In January 2009, we implemented CODE RED, a computerized stroke code, at our hospital with the aim of achieving rapid stroke assessment and treatment. We included patients with thrombolysis from January 2007 to December 2008 (prestroke code period) and from January 2009 to May 2013 (poststroke code period). The IV t-PA time intervals and 90-day modified Rankin Scale (mRS) scores were collected. During the observation period, 252 patients used IV t-PA under the CODE RED (MRI based: 208; CT based: 44). The remaining 71 patients (MRI based: 53; CT based: 18) received it before the implementation of our stroke code. After implementation of CODE RED, door-to-image time, door-to-needle time, and the onset-to-needle time were significantly reduced by 11, 18, and 22 minutes in MRI-based thrombolysis. Particularly, the proportion of favorable outcome (mRS score 0-2) was significantly increased (from 41.5% to 60.1%, P = .02) in poststroke than in prestroke code period in MRI-based thrombolysis. However, in ordinal regression, the presence of stroke code showed just a trend for favorable outcome (odds ratio, .99-2.87; P = .059) at 90 days of using IV t-PA after correction of age, sex, and National Institutes of Health Stroke Scale. In this study, we demonstrated that a systemized stroke code shortened time intervals for using IV t-PA under MRI screening. Also, our results showed a possibility that a systemized stroke code might enhance the efficacy of MRI-based thrombolysis. In the future, we need to carry out a more detailed prospective study about this notion. 

  18. Is Auditory Discrimination Mature by Middle Childhood? A Study Using Time-Frequency Analysis of Mismatch Responses from 7 Years to Adulthood

    ERIC Educational Resources Information Center

    Bishop, Dorothy V. M.; Hardiman, Mervyn J.; Barry, Johanna G.

    2011-01-01

    Behavioural and electrophysiological studies give differing impressions of when auditory discrimination is mature. Ability to discriminate frequency and speech contrasts reaches adult levels only around 12 years of age, yet an electrophysiological index of auditory discrimination, the mismatch negativity (MMN), is reported to be as large in…

  20. Real-time distributed video coding for 1K-pixel visual sensor networks

    NASA Astrophysics Data System (ADS)

    Hanca, Jan; Deligiannis, Nikos; Munteanu, Adrian

    2016-07-01

    Many applications in visual sensor networks (VSNs) demand the low-cost wireless transmission of video data. In this context, distributed video coding (DVC) has proven its potential to achieve state-of-the-art compression performance while maintaining low computational complexity of the encoder. Despite their proven capabilities, current DVC solutions overlook hardware constraints, and this renders them unsuitable for practical implementations. This paper introduces a DVC architecture that offers highly efficient wireless communication in real-world VSNs. The design takes into account the severe computational and memory constraints imposed by practical implementations on low-resolution visual sensors. We study performance-complexity trade-offs for feedback-channel removal, propose learning-based techniques for rate allocation, and investigate various simplifications of side information generation yielding real-time decoding. The proposed system is evaluated against H.264/AVC intra, Motion-JPEG, and our previously designed DVC prototype for low-resolution visual sensors. Extensive experimental results on various data show significant improvements in multiple configurations. The proposed encoder achieves real-time performance on a 1k-pixel visual sensor mote. Real-time decoding is performed on a Raspberry Pi single-board computer or a low-end notebook PC. To the best of our knowledge, the proposed codec is the first practical DVC deployment on low-resolution VSNs.

  1. Effective real-time vehicle tracking using discriminative sparse coding on local patches

    NASA Astrophysics Data System (ADS)

    Chen, XiangJun; Ye, Feiyue; Ruan, Yaduan; Chen, Qimei

    2016-01-01

    A visual tracking framework comprising an object detector and tracker is proposed, focused on effective and efficient visual tracking in surveillance for real-world intelligent transport system applications. The framework casts the tracking task as problems of object detection, feature representation, and classification, which differs from appearance model-matching approaches. Through a feature representation of discriminative sparse coding on local patches called DSCLP, which trains a dictionary on local clustered patches sampled from both positive and negative datasets, the discriminative power and robustness are improved remarkably, making our method more robust in complex realistic settings with all kinds of degraded image quality. Moreover, by catching objects through one-time background subtraction, along with offline dictionary training, computation time is dramatically reduced, which enables our framework to achieve real-time tracking performance even in a high-definition sequence with heavy traffic. Experimental results show that our work outperforms some state-of-the-art methods in terms of speed, accuracy, and robustness, and exhibits increased robustness in a complex real-world scenario with degraded image quality caused by vehicle occlusion, image blur from rain or fog, and changes in viewpoint or scale.

  2. Real-time detection of natural objects using AM-coded spectral matching imager

    NASA Astrophysics Data System (ADS)

    Kimachi, Akira

    2004-12-01

    This paper describes the application of the amplitude-modulation (AM)-coded spectral matching imager (SMI) to real-time detection of natural objects such as human beings, animals, vegetables, or geological objects or phenomena, which are much more liable to change with time than artificial products while often exhibiting characteristic spectral functions associated with some specific activity states. The AM-SMI produces correlation between spectral functions of the object and a reference at each pixel of the correlation image sensor (CIS) in every frame, based on orthogonal amplitude modulation (AM) of each spectral channel and simultaneous demodulation of all channels on the CIS. This principle makes the SMI suitable for monitoring the dynamic behavior of natural objects in real time by looking at a particular spectral reflectance or transmittance function. A twelve-channel multispectral light source was developed with improved spatial uniformity of spectral irradiance compared to a previous one. Experimental results of spectral matching imaging of human skin and vegetable leaves are demonstrated, as well as a preliminary feasibility test of imaging a reflective object using a test color chart.

  3. Real-time detection of natural objects using AM-coded spectral matching imager

    NASA Astrophysics Data System (ADS)

    Kimachi, Akira

    2005-01-01

    This paper describes the application of the amplitude-modulation (AM)-coded spectral matching imager (SMI) to real-time detection of natural objects such as human beings, animals, vegetables, or geological objects or phenomena, which are much more liable to change with time than artificial products while often exhibiting characteristic spectral functions associated with some specific activity states. The AM-SMI produces correlation between spectral functions of the object and a reference at each pixel of the correlation image sensor (CIS) in every frame, based on orthogonal amplitude modulation (AM) of each spectral channel and simultaneous demodulation of all channels on the CIS. This principle makes the SMI suitable for monitoring the dynamic behavior of natural objects in real time by looking at a particular spectral reflectance or transmittance function. A twelve-channel multispectral light source was developed with improved spatial uniformity of spectral irradiance compared to a previous one. Experimental results of spectral matching imaging of human skin and vegetable leaves are demonstrated, as well as a preliminary feasibility test of imaging a reflective object using a test color chart.

  4. Optimum neural tuning curves for information efficiency with rate coding and finite-time window

    PubMed Central

    Han, Fang; Wang, Zhijie; Fan, Hong; Sun, Xiaojuan

    2015-01-01

    An important question for neural encoding is what kind of neural systems can convey more information with less energy within a finite-time coding window. This paper first proposes a finite-time neural encoding system, in which the neurons respond to a stimulus with a sequence of spikes assumed to be a Poisson process, and the external stimuli obey a normal distribution. A method for calculating the mutual information of the finite-time neural encoding system is proposed and the definition of information efficiency is introduced. The values of the mutual information and the information efficiency obtained using the Logistic function are compared with those obtained using other functions, and the Logistic function is found to be the best. It is further found that the parameter representing the steepness of the Logistic function is closely related to the full entropy, and that the parameter representing the translation of the function is tightly associated with the energy consumption and noise entropy. The optimum parameter combinations for the Logistic function to maximize the information efficiency are calculated when the stimuli and the properties of the encoding system are varied, respectively. Some explanations for the results are given. The model and the method we propose could be useful for studying neural encoding systems, and the optimum neural tuning curves obtained in this paper might exhibit some characteristics of a real neural system. PMID:26089793

  5. Optimum neural tuning curves for information efficiency with rate coding and finite-time window.

    PubMed

    Han, Fang; Wang, Zhijie; Fan, Hong; Sun, Xiaojuan

    2015-01-01

    An important question for neural encoding is what kind of neural systems can convey more information with less energy within a finite-time coding window. This paper first proposes a finite-time neural encoding system, in which the neurons respond to a stimulus with a sequence of spikes assumed to be a Poisson process, and the external stimuli obey a normal distribution. A method for calculating the mutual information of the finite-time neural encoding system is proposed and the definition of information efficiency is introduced. The values of the mutual information and the information efficiency obtained using the Logistic function are compared with those obtained using other functions, and the Logistic function is found to be the best. It is further found that the parameter representing the steepness of the Logistic function is closely related to the full entropy, and that the parameter representing the translation of the function is tightly associated with the energy consumption and noise entropy. The optimum parameter combinations for the Logistic function to maximize the information efficiency are calculated when the stimuli and the properties of the encoding system are varied, respectively. Some explanations for the results are given. The model and the method we propose could be useful for studying neural encoding systems, and the optimum neural tuning curves obtained in this paper might exhibit some characteristics of a real neural system.
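The mutual-information calculation this abstract describes can be sketched numerically. The following is a hedged toy version, not the paper's method: a Gaussian stimulus is discretized on a grid, the spike count in a window T is Poisson with a logistic-tuning-curve rate, and I(stimulus; count) is summed directly from the joint distribution. All parameter values (f_max, steepness, window length, grid) are illustrative.

```python
# Toy numeric sketch: mutual information between a Gaussian stimulus s and a
# spike count n ~ Poisson(T * f(s)) with a logistic tuning curve f.
import math

def logistic(s, f_max=50.0, steep=2.0, shift=0.0):
    return f_max / (1.0 + math.exp(-steep * (s - shift)))

def mutual_information(T=0.1, n_max=40, grid=201):
    # Discretize the standard-normal stimulus on [-4, 4].
    ss = [-4.0 + 8.0 * i / (grid - 1) for i in range(grid)]
    ps = [math.exp(-s * s / 2.0) for s in ss]
    z = sum(ps)
    ps = [p / z for p in ps]
    # Conditional count distributions P(n | s), built recursively.
    cond = []
    for s in ss:
        mu = T * logistic(s)
        row, p = [], math.exp(-mu)
        for n in range(n_max):
            row.append(p)
            p *= mu / (n + 1)
        cond.append(row)
    marg = [sum(ps[i] * cond[i][n] for i in range(grid)) for n in range(n_max)]
    mi = 0.0
    for i in range(grid):
        for n in range(n_max):
            if cond[i][n] > 0 and marg[n] > 0:
                mi += ps[i] * cond[i][n] * math.log2(cond[i][n] / marg[n])
    return mi

print(mutual_information())  # bits conveyed per coding window
```

Dividing this quantity by the mean energy cost (e.g. the expected spike count) gives an information-efficiency measure in the spirit of the paper, and sweeping the steepness and shift parameters of the logistic curve reproduces the kind of optimization the abstract reports.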

  6. Auditory short-term memory in the primate auditory cortex

    PubMed Central

    Scott, Brian H.; Mishkin, Mortimer

    2015-01-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active ‘working memory’ bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a ‘match’ stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. PMID:26541581

  7. TIME-DEPENDENT MULTI-GROUP MULTI-DIMENSIONAL RELATIVISTIC RADIATIVE TRANSFER CODE BASED ON SPHERICAL HARMONIC DISCRETE ORDINATE METHOD

    SciTech Connect

    Tominaga, Nozomu; Shibata, Sanshiro; Blinnikov, Sergei I. E-mail: sshibata@post.kek.jp

    2015-08-15

    We develop a time-dependent, multi-group, multi-dimensional relativistic radiative transfer code, which is required to numerically investigate radiation from relativistic fluids that are involved in, e.g., gamma-ray bursts and active galactic nuclei. The code is based on the spherical harmonic discrete ordinate method (SHDOM) which evaluates a source function including anisotropic scattering in spherical harmonics and implicitly solves the static radiative transfer equation with ray tracing in discrete ordinates. We implement treatments of time dependence, multi-frequency bins, Lorentz transformation, and elastic Thomson and inelastic Compton scattering to the publicly available SHDOM code. Our code adopts a mixed-frame approach; the source function is evaluated in the comoving frame, whereas the radiative transfer equation is solved in the laboratory frame. This implementation is validated using various test problems and comparisons with the results from a relativistic Monte Carlo code. These validations confirm that the code correctly calculates the intensity and its evolution in the computational domain. The code enables us to obtain an Eddington tensor that relates the first and third moments of intensity (energy density and radiation pressure) and is frequently used as a closure relation in radiation hydrodynamics calculations.
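
    The mixed-frame bookkeeping rests on the Lorentz invariance of I_nu / nu^3. A sketch of the frame transformation (our illustration of the standard relations, not the code's implementation):

```python
import math

def doppler_factor(beta, mu):
    """Relativistic Doppler factor D = 1 / (Gamma * (1 - beta*mu)),
    where mu is the cosine of the angle between the fluid velocity
    and the ray direction in the lab frame."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return 1.0 / (gamma * (1.0 - beta * mu))

def lab_intensity(I_com, nu_com, beta, mu):
    """Transform a comoving-frame specific intensity to the lab frame
    using the invariance of I_nu / nu^3."""
    D = doppler_factor(beta, mu)
    nu_lab = D * nu_com   # frequency is Doppler shifted
    I_lab = D**3 * I_com  # specific intensity scales as nu^3
    return nu_lab, I_lab
```

    For a ray along the motion (mu = 1), D reduces to sqrt((1+beta)/(1-beta)), the familiar longitudinal Doppler factor.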

  8. A really complicated problem: Auditory scene analysis

    NASA Astrophysics Data System (ADS)

    Yost, William A.

    2004-05-01

    It has been more than a decade since Al Bregman and other authors brought the challenge of auditory scene analysis back to the attention of auditory science. While a great deal of research has been done on and around this topic, an accepted theory of auditory scene analysis has not emerged. Auditory science has little, if any, information about how the nervous system solves this problem, and there have not been any major successes in developing computational methods that solve the problem for most real-world auditory scenes. I will argue that the major reason more has not been accomplished is that auditory scene analysis is a really hard problem. If one starts with a single sound source and tries to understand how the auditory system determines this single source, the problem is already very complicated, even before adding other sources that occur at the same time, as in the typical depiction of the auditory scene. In this paper I will illustrate some of the challenges for determining the auditory scene that have not received much attention, as well as some of the more widely discussed aspects of the problem. [Work supported by NIDCD.]

  9. Auditory and audiovisual inhibition of return.

    PubMed

    Spence, C; Driver, J

    1998-01-01

    Two experiments examined any inhibition-of-return (IOR) effects from auditory cues and from preceding auditory targets upon reaction times (RTs) for detecting subsequent auditory targets. Auditory RT was delayed if the preceding auditory cue was on the same side as the target, but was unaffected by the location of the auditory target from the preceding trial, suggesting that response inhibition for the cue may have produced its effects. By contrast, visual detection RT was inhibited by the ipsilateral presentation of a visual target on the preceding trial. In a third experiment, targets could be unpredictably auditory or visual, and no peripheral cues intervened. Both auditory and visual detection RTs were now delayed following an ipsilateral versus contralateral target in either modality on the preceding trial, even when eye position was monitored to ensure central fixation throughout. These data suggest that auditory target-target IOR arises only when target modality is unpredictable. They also provide the first unequivocal evidence for cross-modal IOR, since, unlike other recent studies (e.g., Reuter-Lorenz, Jha, & Rosenquist, 1996; Tassinari & Berlucchi, 1995; Tassinari & Campara, 1996), the present cross-modal effects cannot be explained in terms of response inhibition for the cue. The results are discussed in relation to neurophysiological studies and audiovisual links in saccade programming.

  10. Functional dissociation of transient and sustained fMRI BOLD components in human auditory cortex revealed with a streaming paradigm based on interaural time differences.

    PubMed

    Schadwinkel, Stefan; Gutschalk, Alexander

    2010-12-01

    A number of physiological studies suggest that feature-selective adaptation is relevant to the pre-processing for auditory streaming, the perceptual separation of overlapping sound sources. Most of these studies are focused on spectral differences between streams, which are considered most important for streaming. However, spatial cues also support streaming, alone or in combination with spectral cues, but physiological studies of spatial cues for streaming remain scarce. Here, we investigate whether the tuning of selective adaptation for interaural time differences (ITD) coincides with the range where streaming perception is observed. FMRI activation that has been shown to adapt depending on the repetition rate was studied with a streaming paradigm where two tones were differently lateralized by ITD. Listeners were presented with five different ΔITD conditions (62.5, 125, 187.5, 343.75, or 687.5 μs) out of an active baseline with no ΔITD during fMRI. The results showed reduced adaptation for conditions with ΔITD ≥ 125 μs, reflected by enhanced sustained BOLD activity. The percentage of streaming perception for these stimuli increased from approximately 20% for ΔITD = 62.5 μs to > 60% for ΔITD = 125 μs. No further sustained BOLD enhancement was observed when the ΔITD was increased beyond ΔITD = 125 μs, whereas the streaming probability continued to increase up to 90% for ΔITD = 687.5 μs. Conversely, the transient BOLD response, at the transition from baseline to ΔITD blocks, increased most prominently as ΔITD was increased from 187.5 to 343.75 μs. These results demonstrate a clear dissociation of transient and sustained components of the BOLD activity in auditory cortex.
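
    The ΔITD manipulation itself amounts to delaying one channel of a stereo stimulus by the desired interaural time difference. A minimal sketch (our illustration; an integer-sample delay is used here, whereas real stimuli would typically use fractional-delay filtering):

```python
import numpy as np

def lateralize_by_itd(tone, fs, itd_s):
    """Return a stereo pair in which the right channel lags the left
    by an integer-sample approximation of the requested ITD."""
    delay = int(round(itd_s * fs))
    right = np.concatenate([np.zeros(delay), tone])[: len(tone)]
    return np.stack([tone, right])

fs = 48000
t = np.arange(0, 0.1, 1 / fs)
tone = np.sin(2 * np.pi * 500 * t)
stereo = lateralize_by_itd(tone, fs, 125e-6)  # 125 us, one condition in the study
```

    At fs = 48 kHz, 125 us corresponds to a 6-sample lag; the smallest condition (62.5 us) rounds to 3 samples, which is why fractional-delay methods matter for fine ITD control.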

  11. WESSEL: Code for Numerical Simulation of Two-Dimensional Time-Dependent Width-Averaged Flows with Arbitrary Boundaries.

    DTIC Science & Technology

    1985-08-01

    This report should be cited as follows: Thompson, J. F., and Bernard, R. S. 1985. "WESSEL: Code for Numerical Simulation of Two-Dimensional Time... Bodies," Ph. D. Dissertation, Mississippi State University, Mississippi State, Miss. Thompson, J. F. 1983. "A Boundary-Fitted Coordinate Code for General... Vicksburg, Miss. Thompson, J. F., and Bernard, R. S. 1985. "Numerical Modeling of Two-Dimensional Width-Averaged Flows Using Boundary-Fitted Coordinate

  12. Cerebellar Nuclear Neurons Use Time and Rate Coding to Transmit Purkinje Neuron Pauses.

    PubMed

    Sudhakar, Shyam Kumar; Torben-Nielsen, Benjamin; De Schutter, Erik

    2015-12-01

    Neurons of the cerebellar nuclei convey the final output of the cerebellum to their targets in various parts of the brain. Within the cerebellum their direct upstream connections originate from inhibitory Purkinje neurons. Purkinje neurons have a complex firing pattern of regular spikes interrupted by intermittent pauses of variable length. How can the cerebellar nucleus process this complex input pattern? In this modeling study, we investigate different forms of Purkinje neuron simple spike pause synchrony and its influence on candidate coding strategies in the cerebellar nuclei. That is, we investigate how different alignments of synchronous pauses in synthetic Purkinje neuron spike trains affect either time-locking or rate-changes in the downstream nuclei. We find that Purkinje neuron synchrony is mainly represented by changes in the firing rate of cerebellar nuclei neurons. Pause beginning synchronization produced a unique effect on nuclei neuron firing, while the effect of pause ending and pause overlapping synchronization could not be distinguished from each other. Pause beginning synchronization produced better time-locking of nuclear neurons for short length pauses. We also characterize the effect of pause length and spike jitter on the nuclear neuron firing. Additionally, we find that the rate of rebound responses in nuclear neurons after a synchronous pause is controlled by the firing rate of Purkinje neurons preceding it.

  13. [A simple procedure for quantitative and time coded detection of snoring sounds in apnea and snoring patients].

    PubMed

    Schäfer, J

    1988-09-01

    A simple technique for time-encoded recording of snoring sounds is presented. An electronic circuit connected to a stereo cassette recorder via its remote-control cord starts the recorder when the sound pressure of the snoring surpasses a preset level. The snoring sound is recorded on one track of the cassette; the other track is used for the time code. Every 120 seconds the tape recorder is switched on by the electronic circuit and a short high-level signal is recorded, so the tape interval between two time-coding signals equals the snoring time. Analysis of the time-coding signals is performed by an A/D converter installed in a personal computer; the converter and accompanying software are used to calculate the duration of the intervals between the coding signals. The results are displayed graphically with sleeping time plotted against snoring time (Fig. 2): the length of each bar equals the snoring time in seconds for each 120-second interval. Snoring periods can be recognised at a glance and the snoring pattern can be evaluated. Three case studies demonstrate the performance of the technique: a 39-year-old female with nonpathological snoring (Fig. 2), a 45-year-old male with heavy and regular snoring (Fig. 3), and a 36-year-old male with a full-blown Pickwickian syndrome (Fig. 4); this patient's response to nasal CPAP is shown in Fig. 5.
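
    Because the tape runs only while snoring is loud enough, the tape elapsed between consecutive 120 s time-code pulses directly encodes the seconds snored in that window. A sketch of that decoding step (our reading of the scheme; names are ours):

```python
def snore_durations(pulse_times_on_tape):
    """Given the tape positions (in seconds of tape) of successive
    120 s time-code pulses, return the snoring time within each
    120 s real-time window: the recorder only runs while snoring,
    so tape elapsed between pulses equals seconds snored."""
    return [t1 - t0 for t0, t1 in zip(pulse_times_on_tape, pulse_times_on_tape[1:])]
```

    For example, pulses found at tape positions [0.0, 12.5, 12.5, 47.0] decode to 12.5 s of snoring in the first window, none in the second, and 34.5 s in the third.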

  14. Experience and information loss in auditory and visual memory.

    PubMed

    Gloede, Michele E; Paulauskas, Emily E; Gregg, Melissa K

    2017-07-01

    Recent studies show that recognition memory for sounds is inferior to memory for pictures. Four experiments were conducted to examine the nature of auditory and visual memory. Experiments 1-3 were conducted to evaluate the role of experience in auditory and visual memory. Participants received a study phase with pictures/sounds, followed by a recognition memory test. Participants then completed auditory training with each of the sounds, followed by a second memory test. Despite auditory training in Experiments 1 and 2, visual memory was superior to auditory memory. In Experiment 3, we found that it is possible to improve auditory memory, but only after 3 days of specific auditory training and 3 days of visual memory decay. We examined the time course of information loss in auditory and visual memory in Experiment 4 and found a trade-off between visual and auditory recognition memory: Visual memory appears to have a larger capacity, while auditory memory is more enduring. Our results indicate that visual and auditory memory are inherently different memory systems and that differences in visual and auditory recognition memory performance may be due to the different amounts of experience with visual and auditory information, as well as structurally different neural circuitry specialized for information retention.

  15. Noise-induced hearing loss alters the temporal dynamics of auditory-nerve responses.

    PubMed

    Scheidt, Ryan E; Kale, Sushrut; Heinz, Michael G

    2010-10-01

    Auditory-nerve fibers demonstrate dynamic response properties in that they adapt to rapid changes in sound level, both at the onset and offset of a sound. These dynamic response properties affect temporal coding of stimulus modulations that are perceptually relevant for many sounds such as speech and music. Temporal dynamics have been well characterized in auditory-nerve fibers from normal-hearing animals, but little is known about the effects of sensorineural hearing loss on these dynamics. This study examined the effects of noise-induced hearing loss on the temporal dynamics in auditory-nerve fiber responses from anesthetized chinchillas. Post-stimulus-time histograms were computed from responses to 50-ms tones presented at characteristic frequency and 30 dB above fiber threshold. Several response metrics related to temporal dynamics were computed from post-stimulus-time histograms and were compared between normal-hearing and noise-exposed animals. Results indicate that noise-exposed auditory-nerve fibers show significantly reduced response latency, increased onset response and percent adaptation, faster adaptation after onset, and slower recovery after offset. The decrease in response latency only occurred in noise-exposed fibers with significantly reduced frequency selectivity. These changes in temporal dynamics have important implications for temporal envelope coding in hearing-impaired ears, as well as for the design of dynamic compression algorithms for hearing aids.
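
    One common way to quantify adaptation from a PSTH is 100 * (onset - steady-state) / onset. A minimal sketch (our construction; the analysis windows below are hypothetical choices for a 50 ms tone, not the paper's exact metrics):

```python
import numpy as np

def percent_adaptation(psth, bin_s, onset_win=0.005, steady_win=(0.040, 0.050)):
    """Percent adaptation from a PSTH given in spikes/s per bin:
    peak rate in an onset window vs. mean rate in a steady-state
    window near tone offset."""
    t = np.arange(len(psth)) * bin_s
    onset = psth[t < onset_win].max()
    steady = psth[(t >= steady_win[0]) & (t < steady_win[1])].mean()
    return 100.0 * (onset - steady) / onset
```

    A fiber whose rate drops from a 200 spikes/s onset peak to an 80 spikes/s plateau would score 60% adaptation under this definition.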

  16. Coding and decoding with adapting neurons: a population approach to the peri-stimulus time histogram.

    PubMed

    Naud, Richard; Gerstner, Wulfram

    2012-01-01

    The response of a neuron to a time-dependent stimulus, as measured in a Peri-Stimulus-Time-Histogram (PSTH), exhibits an intricate temporal structure that reflects potential temporal coding principles. Here we analyze the encoding and decoding of PSTHs for spiking neurons with arbitrary refractoriness and adaptation. As a modeling framework, we use the spike response model, also known as the generalized linear neuron model. Because of refractoriness, the effect of the most recent spike on the spiking probability a few milliseconds later is very strong. The influence of the last spike needs therefore to be described with high precision, while the rest of the neuronal spiking history merely introduces an average self-inhibition or adaptation that depends on the expected number of past spikes but not on the exact spike timings. Based on these insights, we derive a 'quasi-renewal equation' which is shown to yield an excellent description of the firing rate of adapting neurons. We explore the domain of validity of the quasi-renewal equation and compare it with other rate equations for populations of spiking neurons. The problem of decoding the stimulus from the population response (or PSTH) is addressed analogously. We find that for small levels of activity and weak adaptation, a simple accumulator of the past activity is sufficient to decode the original input, but when refractory effects become large decoding becomes a non-linear function of the past activity. The results presented here can be applied to the mean-field analysis of coupled neuron networks, but also to arbitrary point processes with negative self-interaction.
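
    The PSTH itself is obtained by binning spike times across trials and converting counts to rates. A minimal sketch (generic procedure, not the paper's code):

```python
import numpy as np

def psth(spike_trains, t_max, bin_s):
    """PSTH: average spike count per bin across trials, converted
    to a firing rate in spikes/s."""
    n_bins = int(round(t_max / bin_s))
    edges = np.linspace(0.0, t_max, n_bins + 1)
    counts = np.zeros(n_bins)
    for train in spike_trains:
        counts += np.histogram(train, bins=edges)[0]
    return counts / (len(spike_trains) * bin_s)
```

    The quasi-renewal theory in the paper predicts exactly this quantity for a population of adapting neurons, so a binned estimate like the above is what the rate equations are compared against.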

  17. Multi-fluid transport code modeling of time-dependent recycling in ELMy H-mode

    SciTech Connect

    Pigarov, A. Yu.; Krasheninnikov, S. I.; Rognlien, T. D.; Hollmann, E. M.; Lasnier, C. J.; Unterberg, Ezekial A

    2014-01-01

    Simulations of a high-confinement-mode (H-mode) tokamak discharge with infrequent giant type-I ELMs are performed by the multi-fluid, multi-species, two-dimensional transport code UEDGE-MB, which incorporates the Macro-Blob approach for intermittent non-diffusive transport due to filamentary coherent structures observed during Edge Localized Modes (ELMs) and simple time-dependent multi-parametric models for cross-field plasma transport coefficients and working gas inventory in material surfaces. Temporal evolutions of pedestal plasma profiles, divertor recycling, and wall inventory in a sequence of ELMs are studied and compared to the experimental time-dependent data. Short- and long-time-scale variations of the pedestal and divertor plasmas, where the ELM is described as a sequence of macro-blobs, are discussed. It is shown that the ELM recovery includes a phase of relatively dense and cold post-ELM divertor plasma evolving on a several-ms scale, which is set by the transport properties of the H-mode barrier. The global gas balance in the discharge is also analyzed. The calculated rates of working gas deposition during each ELM and wall outgassing between ELMs are compared to the ELM particle losses from the pedestal and the neutral-beam-injection fueling rate, respectively. A sensitivity study of the pedestal and divertor plasmas to model assumptions for gas deposition and release on material surfaces is presented. The simulations show that the dynamics of the pedestal particle inventory are dominated by transient, intense gas deposition into the wall during each ELM followed by continuous gas release between ELMs at a roughly constant rate.

  19. The Perception of Auditory Motion.

    PubMed

    Carlile, Simon; Leung, Johahn

    2016-04-19

    The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control as well as in entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotational and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception. © The Author(s) 2016.

  20. Auditory Imagery: Empirical Findings

    ERIC Educational Resources Information Center

    Hubbard, Timothy L.

    2010-01-01

    The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d)…

  2. Auditory Training for Central Auditory Processing Disorder

    PubMed Central

    Weihing, Jeffrey; Chermak, Gail D.; Musiek, Frank E.

    2015-01-01

    Auditory training (AT) is an important component of rehabilitation for patients with central auditory processing disorder (CAPD). The present article identifies and describes aspects of AT as they relate to applications in this population. A description of the types of auditory processes along with information on relevant AT protocols that can be used to address these specific deficits is included. Characteristics and principles of effective AT procedures also are detailed in light of research that reflects on their value. Finally, research investigating AT in populations who show CAPD or present with auditory complaints is reported. Although efficacy data in this area are still emerging, current findings support the use of AT for treatment of auditory difficulties. PMID:27587909

  3. Evaluation of a thin-slot formalism for finite-difference time-domain electromagnetics codes

    SciTech Connect

    Turner, C.D.; Bacon, L.D.

    1987-03-01

    A thin-slot formalism for use with finite-difference time-domain (FDTD) electromagnetics codes has been evaluated in both two and three dimensions. This formalism allows narrow slots to be modeled in the wall of a scatterer without reducing the space grid size to the gap width. In two dimensions, the evaluation involves the calculation of the total fields near two infinitesimally thin coplanar strips separated by a gap. A method-of-moments (MoM) solution of the same problem is used as a benchmark for comparison. Results in two dimensions show that up to 10% error can be expected in total electric and magnetic fields both near (lambda/40) and far (1 lambda) from the slot. In three dimensions, the evaluation is similar. The finite-length slot is placed in a finite plate and an MoM surface patch solution is used for the benchmark. These results, although less extensive than those in two dimensions, show that slightly larger errors can be expected. Considering the approximations made near the slot in incorporating the formalism, the results are very promising. Possibilities also exist for applying this formalism to walls of arbitrary thickness and to other types of slots, such as overlapping joints. 11 refs., 25 figs., 6 tabs.
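
    For readers unfamiliar with the underlying method, a minimal 1-D FDTD update loop in free space (normalized units) is sketched below; the report's thin-slot formalism modifies such updates near the slot, which this bare illustration does not attempt:

```python
import numpy as np

def fdtd_1d(n_cells=200, n_steps=300, src=50):
    """Minimal 1-D FDTD (Yee) leapfrog update with a soft Gaussian
    source and reflecting (PEC) boundaries; Courant number 0.5."""
    ez = np.zeros(n_cells)       # electric field at integer grid points
    hy = np.zeros(n_cells - 1)   # magnetic field at half grid points
    for t in range(n_steps):
        hy += 0.5 * (ez[1:] - ez[:-1])
        ez[1:-1] += 0.5 * (hy[1:] - hy[:-1])
        ez[src] += np.exp(-((t - 30) / 10.0) ** 2)  # soft Gaussian source
    return ez
```

    The staggered E/H grid and leapfrog time stepping are the same ingredients the evaluated FDTD codes use in two and three dimensions; the thin-slot formalism replaces the ordinary update coefficients in cells adjacent to the slot.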

  4. Detection by real time PCR of walnut allergen coding sequences in processed foods.

    PubMed

    Linacero, Rosario; Ballesteros, Isabel; Sanchiz, Africa; Prieto, Nuria; Iniesto, Elisa; Martinez, Yolanda; Pedrosa, Mercedes M; Muzquiz, Mercedes; Cabanillas, Beatriz; Rovira, Mercè; Burbano, Carmen; Cuadrado, Carmen

    2016-07-01

    A quantitative real-time PCR (RT-PCR) method, employing novel primer sets designed on the Jug r 1, Jug r 3, and Jug r 4 allergen-coding sequences, was set up and validated; its specificity, sensitivity, and applicability were evaluated. A DNA extraction method based on CTAB-phenol-chloroform was best for walnut. RT-PCR allowed specific and accurate amplification of the allergen sequences, with a limit of detection of 2.5 pg of walnut DNA. The method's sensitivity and robustness were confirmed with spiked samples, and the Jug r 3 primers detected as little as 100 mg/kg of raw walnut (LOD 0.01%, LOQ 0.05%). Thermal treatment combined with pressure (autoclaving) reduced the yield and amplification (integrity and quality) of walnut DNA, whereas high hydrostatic pressure (HHP) had no effect on walnut DNA amplification. This RT-PCR method showed greater sensitivity and reliability in detecting walnut traces in commercial foodstuffs than ELISA assays.
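
    Quantification in such assays conventionally runs through a standard curve, Ct = slope * log10(quantity) + intercept. A sketch of that generic arithmetic (our illustration; the slope and intercept values in the test are illustrative, not from this study):

```python
import math

def quantity_from_ct(ct, slope, intercept):
    """Estimate template quantity from a Ct value via the standard-curve
    fit Ct = slope * log10(quantity) + intercept."""
    return 10 ** ((ct - intercept) / slope)

def efficiency_from_slope(slope):
    """Amplification efficiency implied by the standard-curve slope:
    E = 10^(-1/slope) - 1, where E = 1.0 means perfect doubling
    per cycle (slope ~= -3.32)."""
    return 10 ** (-1.0 / slope) - 1.0
```

    A slope of -1/log10(2) (about -3.32 cycles per decade) corresponds to 100% efficiency; shallower slopes flag inhibition, which is how treatments like autoclaving show up in the Ct data.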

  5. Contextualizing older women's body images: Time dimensions, multiple reference groups, and age codings of appearance.

    PubMed

    Krekula, Clary

    2016-01-01

    The article sheds light on older women's body images and problematizes the assumption that women's aging is more painful and shameful than men's because men are not expected to live up to youthful beauty norms, the so-called double-standard-of-aging hypothesis. Based on 12 qualitative interviews with women aged 75 and over from the Swedish capital area, I argue that older women have access to a double perspective on beauty, meaning that they can relate to both youthful and age-related beauty norms. The results also illustrate that a woman's body image is created in a context where her previous body images are central, and that this time perspective can contribute toward a positive body image. Further, the results show how age codings of appearance-related qualities create a narrow framework for older women's body images, and they point to the benefits of shifting the analytical focus toward a material-semiotic body in which corporeality and discourse are seen as interwoven.

  6. Global Time Dependent Solutions of Stochastically Driven Standard Accretion Disks: Development of Hydrodynamical Code

    NASA Astrophysics Data System (ADS)

    Wani, Naveel; Maqbool, Bari; Iqbal, Naseer; Misra, Ranjeev

    2016-07-01

    X-ray binaries and AGNs are powered by accretion discs around compact objects, where the X-rays are emitted from the inner regions and UV emission arises from the relatively cooler outer parts. There is increasing evidence that the variability of the X-rays on different timescales is caused by stochastic fluctuations in the accretion disc at different radii. These fluctuations arise in the outer parts of the disc but propagate inwards, giving rise to X-ray variability and hence providing a natural connection between the X-ray and UV variability. Analytical expressions exist to qualitatively understand the effect of these stochastic variabilities, but quantitative predictions are only possible through a detailed hydrodynamical study of the global time-dependent solution of a standard accretion disc. We have developed a numerically efficient code incorporating all these effects, which considers gas-pressure-dominated solutions and stochastic fluctuations, including the boundary effect of the last stable orbit.
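
    The propagating-fluctuations idea can be caricatured in a few lines (our toy illustration, not the authors' hydrodynamical code): independent multiplicative modulations generated at each radius combine as they drift inward, so the innermost accretion rate carries the product of all outer modulations.

```python
import numpy as np

def propagate_fluctuations(n_radii, n_steps, sigma=0.1, seed=0):
    """Toy propagating-fluctuations model: each annulus multiplies
    the accretion-rate time series handed in from outside by its own
    local fluctuation, with a crude one-step inward drift delay."""
    rng = np.random.default_rng(seed)
    mdot = np.ones(n_steps)
    for _ in range(n_radii):
        mdot = mdot * (1.0 + sigma * rng.standard_normal(n_steps))
        mdot = np.roll(mdot, 1)  # crude drift delay (wraps; toy only)
    return mdot
```

    Multiplicative combination is what links slow outer (UV-emitting) fluctuations to fast inner (X-ray) variability; the paper's code replaces this caricature with an actual time-dependent disc solution.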

  7. Population coding of interaural time differences in gerbils and barn owls.

    PubMed

    Lesica, Nicholas A; Lingner, Andrea; Grothe, Benedikt

    2010-09-01

    Interaural time differences (ITDs) are the primary cue for the localization of low-frequency sound sources in the azimuthal plane. For decades, it was assumed that the coding of ITDs in the mammalian brain was similar to that in the avian brain, where information is sparsely distributed across individual neurons, but recent studies have suggested otherwise. In this study, we characterized the representation of ITDs in adult male and female gerbils. First, we performed behavioral experiments to determine the acuity with which gerbils can use ITDs to localize sounds. Next, we used different decoders to infer ITDs from the activity of a population of neurons in central nucleus of the inferior colliculus. These results show that ITDs are not represented in a distributed manner, but rather in the summed activity of the entire population. To contrast these results with those from a population where the representation of ITDs is known to be sparsely distributed, we performed the same analysis on activity from the external nucleus of the inferior colliculus of adult male and female barn owls. Together, our results support the idea that, unlike the avian brain, the mammalian brain represents ITDs in the overall activity of a homogenous population of neurons within each hemisphere.
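
    The summed-activity (hemispheric-channel) read-out can be illustrated with a toy model (our construction; the sigmoidal tuning and all parameters are hypothetical): if each hemisphere sums a homogeneous population with mirror-image sigmoidal ITD tuning, the normalized rate difference invertibly encodes the ITD.

```python
import numpy as np

def summed_rates(itd_us, n_neurons=50, slope_us=200.0, seed=0):
    """Summed activity of two mirror-image hemispheric populations,
    each neuron carrying the same sigmoidal ITD tuning plus a small
    multiplicative gain variability."""
    rng = np.random.default_rng(seed)
    gains = 1.0 + 0.05 * rng.standard_normal(n_neurons)
    right = np.sum(gains / (1.0 + np.exp(-itd_us / slope_us)))
    left = np.sum(gains / (1.0 + np.exp(itd_us / slope_us)))
    return left, right

def decode_itd(left, right, slope_us=200.0):
    """Invert the hemispheric rate difference: with mirror sigmoids,
    (R - L) / (R + L) = tanh(itd / (2 * slope)), so
    itd = 2 * slope * artanh of the normalized difference."""
    d = (right - left) / (right + left)
    return 2.0 * slope_us * np.arctanh(d)
```

    Because the gain variability multiplies both hemispheric sums identically, it cancels in the normalized difference, which is one reason a summed-population code can be robust without any labeled-line (sparsely distributed) representation.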

  8. Deep time perspective on turtle neck evolution: chasing the Hox code by vertebral morphology.

    PubMed

    Böhmer, Christine; Werneburg, Ingmar

    2017-08-21

    The unparalleled ability of turtle neck retraction is possible in three different modes, which characterize stem turtles, living side-necked (Pleurodira), and hidden-necked (Cryptodira) turtles, respectively. Despite the conservatism in vertebral count among turtles, there is significant functional and morphological regionalization in the cervical vertebral column. Since Hox genes play a fundamental role in determining the differentiation in vertebra morphology and based on our reconstruction of evolutionary genetics in deep time, we hypothesize genetic differences among the turtle groups and between turtles and other land vertebrates. We correlated anterior Hox gene expression and the quantifiable shape of the vertebrae to investigate the morphological modularity in the neck across living and extinct turtles. This permitted the reconstruction of the hypothetical ancestral Hox code pattern of the whole turtle clade. The scenario of the evolution of axial patterning in turtles indicates shifts in the spatial expression of HoxA-5 in relation to the reduction of cervical ribs in modern turtles and of HoxB-5 linked with a lower morphological differentiation between the anterior cervical vertebrae observed in cryptodirans. By comparison with the mammalian pattern, we illustrate how the fixed count of eight cervical vertebrae in turtles resulted from the emergence of the unique turtle shell.

  9. Implementation and evaluation of a simulation curriculum for paediatric residency programs including just-in-time in situ mock codes

    PubMed Central

    Sam, Jonathan; Pierse, Michael; Al-Qahtani, Abdullah; Cheng, Adam

    2012-01-01

    OBJECTIVE: To develop, implement and evaluate a simulation-based acute care curriculum in a paediatric residency program using an integrated and longitudinal approach. DESIGN: Curriculum framework consisting of three modular, year-specific courses and longitudinal just-in-time, in situ mock codes. SETTING: Paediatric residency program at BC Children’s Hospital, Vancouver, British Columbia. INTERVENTIONS: The three year-specific courses focused on the critical first 5 min, complex medical management and crisis resource management, respectively. The just-in-time in situ mock codes simulated the acute deterioration of an existing ward patient, prepared the actual multidisciplinary code team, and primed the surrounding crisis support systems. Each curriculum component was evaluated with surveys using a five-point Likert scale. RESULTS: A total of 40 resident surveys were completed after each of the modular courses, and an additional 28 surveys were completed for the overall simulation curriculum. The highest Likert scores were for hands-on skill stations, immersive simulation environment and crisis resource management teaching. Survey results also suggested that just-in-time mock codes were realistic, reinforced learning, and prepared ward teams for patient deterioration. CONCLUSIONS: A simulation-based acute care curriculum was successfully integrated into a paediatric residency program. It provides a model for integrating simulation-based learning into other training programs, as well as a model for any hospital that wishes to improve paediatric resuscitation outcomes using just-in-time in situ mock codes. PMID:23372405

  10. Hearing the light: neural and perceptual encoding of optogenetic stimulation in the central auditory pathway

    PubMed Central

    Guo, Wei; Hight, Ariel E.; Chen, Jenny X.; Klapoetke, Nathan C.; Hancock, Kenneth E.; Shinn-Cunningham, Barbara G.; Boyden, Edward S.; Lee, Daniel J.; Polley, Daniel B.

    2015-01-01

    Optogenetics provides a means to dissect the organization and function of neural circuits. Optogenetics also offers the translational promise of restoring sensation, enabling movement or supplanting abnormal activity patterns in pathological brain circuits. However, the inherent sluggishness of evoked photocurrents in conventional channelrhodopsins has hampered the development of optoprostheses that adequately mimic the rate and timing of natural spike patterning. Here, we explore the feasibility and limitations of a central auditory optoprosthesis by photoactivating mouse auditory midbrain neurons that either express channelrhodopsin-2 (ChR2) or Chronos, a channelrhodopsin with ultra-fast channel kinetics. Chronos-mediated spike fidelity surpassed ChR2 and natural acoustic stimulation to support a superior code for the detection and discrimination of rapid pulse trains. Interestingly, this midbrain coding advantage did not translate to a perceptual advantage, as behavioral detection of midbrain activation was equivalent with both opsins. Auditory cortex recordings revealed that the precisely synchronized midbrain responses had been converted to a simplified rate code that was indistinguishable between opsins and less robust overall than acoustic stimulation. These findings demonstrate the temporal coding benefits that can be realized with next-generation channelrhodopsins, but also highlight the challenge of inducing variegated patterns of forebrain spiking activity that support adaptive perception and behavior. PMID:26000557
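The distinction the authors draw between precisely synchronized spiking and a simplified rate code is often quantified with vector strength, a standard measure of phase locking to a periodic pulse train (our illustrative choice; the metric is not named in this record). A minimal sketch:

```python
import numpy as np

def vector_strength(spike_times, period):
    """Phase locking to a periodic stimulus: 1 = perfect, ~0 = none."""
    phases = 2 * np.pi * (np.asarray(spike_times) % period) / period
    return np.hypot(np.cos(phases).sum(), np.sin(phases).sum()) / len(phases)

period = 0.01                                   # 100 Hz pulse train, in seconds
locked = np.arange(50) * period + 0.0002        # every spike at the same phase
rng = np.random.default_rng(2)
jittered = locked + rng.normal(0.0, 0.003, 50)  # heavy temporal jitter

vs_locked = vector_strength(locked, period)
vs_jittered = vector_strength(jittered, period)
print(vs_locked, vs_jittered)                   # ~1.0 vs a value near 0
```

Perfectly locked spikes give a value of 1; jitter on the order of the pulse period collapses it toward 0, leaving only spike count (rate) as an informative code.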

  11. Manipulation of BK channel expression is sufficient to alter auditory hair cell thresholds in larval zebrafish

    PubMed Central

    Rohmann, Kevin N.; Tripp, Joel A.; Genova, Rachel M.; Bass, Andrew H.

    2014-01-01

    Non-mammalian vertebrates rely on electrical resonance for frequency tuning in auditory hair cells. A key component of the resonance exhibited by these cells is an outward calcium-activated potassium current that flows through large-conductance calcium-activated potassium (BK) channels. Previous work in midshipman fish (Porichthys notatus) has shown that BK expression correlates with seasonal changes in hearing sensitivity and that pharmacologically blocking these channels replicates the natural decreases in sensitivity during the winter non-reproductive season. To test the hypothesis that reducing BK channel function is sufficient to change auditory thresholds in fish, morpholino oligonucleotides (MOs) were used in larval zebrafish (Danio rerio) to alter expression of slo1a and slo1b, duplicate genes coding for the pore-forming α-subunits of BK channels. Following MO injection, microphonic potentials were recorded from the inner ear of larvae. Quantitative real-time PCR was then used to determine the MO effect on slo1a and slo1b expression in these same fish. Knockdown of either slo1a or slo1b resulted in disrupted gene expression and increased auditory thresholds across the same range of frequencies of natural auditory plasticity observed in midshipman. We conclude that interference with the normal expression of individual slo1 genes is sufficient to increase auditory thresholds in zebrafish larvae and that changes in BK channel expression are a direct mechanism for regulation of peripheral hearing sensitivity among fishes. PMID:24803460

  12. Theoretical and experimental studies of turbo product code with time diversity in free space optical communication.

    PubMed

    Han, Yaoqiang; Dang, Anhong; Ren, Yongxiong; Tang, Junxiong; Guo, Hong

    2010-12-20

    In free space optical communication (FSOC) systems, channel fading caused by atmospheric turbulence severely degrades system performance. However, channel coding combined with diversity techniques can be exploited to mitigate channel fading. In this paper, based on an experimental study of channel fading effects, we propose turbo product code (TPC) as the channel coding scheme, which features good resistance to burst errors and no error floor. Because channel coding alone cannot cope with the burst errors caused by channel fading, interleaving is also used. We investigate the efficiency of interleaving for different interleaving depths and determine the optimum interleaving depth for TPC. Finally, an experimental study of TPC with interleaving is demonstrated, and we show that TPC with interleaving can significantly mitigate channel fading in FSOC systems.
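The role of interleaving depth can be illustrated with a simple block interleaver (a generic sketch; the paper's actual interleaver and TPC parameters are not given in this record). Bits are written row-wise into a depth × width matrix and read out column-wise, so a channel burst no longer than the depth is dispersed into isolated single-bit errors that a code such as TPC corrects easily:

```python
import numpy as np

def interleave(bits, depth):
    """Write bits row-wise into a depth x width matrix, read column-wise."""
    width = -(-len(bits) // depth)                 # ceiling division
    padded = np.zeros(depth * width, dtype=int)
    padded[:len(bits)] = bits
    return padded.reshape(depth, width).T.ravel()

def deinterleave(bits, depth, orig_len):
    width = len(bits) // depth
    return bits.reshape(width, depth).T.ravel()[:orig_len]

rng = np.random.default_rng(0)
data = rng.integers(0, 2, 96)
tx = interleave(data, depth=8)
tx[20:28] ^= 1                                     # 8-bit burst error on the channel
rx = deinterleave(tx, depth=8, orig_len=96)
flipped = np.flatnonzero(rx != data)
print(flipped)                                     # errors now spread far apart
```

A deeper interleaver tolerates longer bursts at the cost of extra latency, which is the trade-off behind choosing an optimum interleaving depth.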

  13. A generalized time-frequency subtraction method for robust speech enhancement based on wavelet filter banks modeling of human auditory system.

    PubMed

    Shao, Yu; Chang, Chip-Hong

    2007-08-01

    We present a new speech enhancement scheme for a single-microphone system to meet the demand for quality noise reduction algorithms capable of operating at a very low signal-to-noise ratio. A psychoacoustic model is incorporated into the generalized perceptual wavelet denoising method to reduce the residual noise and improve the intelligibility of speech. The proposed method is a generalized time-frequency subtraction algorithm, which advantageously exploits the wavelet multirate signal representation to preserve the critical transient information. Simultaneous masking and temporal masking of the human auditory system are modeled by the perceptual wavelet packet transform via the frequency and temporal localization of speech components. The wavelet coefficients are used to calculate the Bark spreading energy and temporal spreading energy, from which a time-frequency masking threshold is deduced to adaptively adjust the subtraction parameters of the proposed method. An unvoiced speech enhancement algorithm is also integrated into the system to improve the intelligibility of speech. Through rigorous objective and subjective evaluations, it is shown that the proposed speech enhancement system is capable of reducing noise with little speech degradation in adverse noise environments and the overall performance is superior to several competitive methods.
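The core idea of masking-threshold-controlled subtraction can be sketched in a few lines: where the estimated masking threshold is high, residual noise is inaudible and the subtraction factor is relaxed to limit speech distortion; where it is low, subtraction is more aggressive. This is an illustrative power-subtraction sketch, not the paper's wavelet-packet implementation (function and parameter names are our own):

```python
import numpy as np

def masked_subtraction(noisy_mag, noise_mag, mask_thresh,
                       alpha_max=4.0, alpha_min=1.0, floor=0.02):
    """Power subtraction whose over-subtraction factor alpha is adapted
    per time-frequency bin by a (normalized) masking threshold."""
    t = mask_thresh / (mask_thresh.max() + 1e-12)      # 0 = unmasked, 1 = masked
    alpha = alpha_max - (alpha_max - alpha_min) * t    # aggressive where t is low
    power = noisy_mag**2 - alpha * noise_mag**2
    return np.sqrt(np.maximum(power, (floor * noisy_mag)**2))

# Two identical noisy bins, one unmasked and one fully masked:
noisy = np.array([1.0, 1.0])
noise = np.array([0.4, 0.4])
mask = np.array([0.0, 1.0])
out = masked_subtraction(noisy, noise, mask)
print(out)   # more is subtracted in the unmasked bin
```

The spectral floor prevents the subtraction from producing negative power, which would otherwise reappear as musical residual noise.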

  14. A Real-Time SAR Processor using One-Bit Raw Signal Coding for SRTM

    DTIC Science & Technology

    2000-10-01

    The Signum Code (SC) algorithm, a one-bit raw signal coding scheme, together with state-of-the-art Field Programmable Gate Array (FPGA) technology, enables real-time SAR processing of the data acquired during SRTM by X-SAR/IFSAR, permitting analyses such as precipitation over the ocean and large-scale ocean dynamics.
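One-bit (Signum Code) coding keeps only the sign of each raw SAR sample, cutting the data rate roughly eightfold versus 8-bit samples while preserving the phase information that matched filtering relies on. A toy illustration (the chirp, delay, and noise level are invented for demonstration, not SRTM values):

```python
import numpy as np

def signum_encode(raw):
    """One-bit quantization: keep only the sign of each sample."""
    return np.where(raw >= 0, 1.0, -1.0)

n = np.arange(128)
chirp = np.cos(0.01 * n**2)                        # toy linear-FM reference
delay = 31
echo = np.zeros(256)
echo[delay:delay + 128] = chirp
echo += 0.1 * np.random.default_rng(1).standard_normal(256)

one_bit = signum_encode(echo)                      # 1 bit/sample survives the link
corr = np.correlate(one_bit, chirp, mode='valid')  # matched filter
lag = int(np.argmax(corr))
print(lag)                                         # peak near the true 31-sample delay
```

The correlation peak survives the drastic quantization because the chirp's sign pattern, not its amplitude, carries most of the ranging information.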

  15. Delayed Auditory Feedback and Movement

    ERIC Educational Resources Information Center

    Pfordresher, Peter Q.; Dalla Bella, Simone

    2011-01-01

    It is well known that timing of rhythm production is disrupted by delayed auditory feedback (DAF), and that disruption varies with delay length. We tested the hypothesis that disruption depends on the state of the movement trajectory at the onset of DAF. Participants tapped isochronous rhythms at a rate specified by a metronome while hearing DAF…

  16. Auditory Temporal Conditioning in Neonates.

    ERIC Educational Resources Information Center

    Franz, W. K.; And Others

    Twenty normal newborns, approximately 36 hours old, were tested using an auditory temporal conditioning paradigm which consisted of a slow rise, 75 db tone played for five seconds every 25 seconds, ten times. Responses to the tones were measured by instantaneous, beat-to-beat heartrate; and the test trial was designated as the 2 1/2-second period…

  17. Passive auditory stimulation improves vision in hemianopia.

    PubMed

    Lewald, Jörg; Tegenthoff, Martin; Peters, Sören; Hausmann, Markus

    2012-01-01

    Techniques employed in rehabilitation of visual field disorders such as hemianopia are usually based on either visual or audio-visual stimulation and patients have to perform a training task. Here we present results from a completely different, novel approach that was based on passive unimodal auditory stimulation. Ten patients with either left or right-sided pure hemianopia (without neglect) received one hour of unilateral passive auditory stimulation on either their anopic or their intact side by application of repetitive trains of sound pulses emitted simultaneously via two loudspeakers. Immediately before and after passive auditory stimulation as well as after a period of recovery, pat