Science.gov

Sample records for auditory time coding

  1. Coding space-time stimulus dynamics in auditory brain maps

    PubMed Central

    Wang, Yunyan; Gutfreund, Yoram; Peña, José L.

    2014-01-01

    Sensory maps are often distorted representations of the environment, where ethologically-important ranges are magnified. The implication of a biased representation extends beyond increased acuity for having more neurons dedicated to a certain range. Because neurons are functionally interconnected, non-uniform representations influence the processing of high-order features that rely on comparison across areas of the map. Among these features are time-dependent changes of the auditory scene generated by moving objects. How sensory representation affects high order processing can be approached in the map of auditory space of the owl's midbrain, where locations in the front are over-represented. In this map, neurons are selective not only to location but also to location over time. The tuning to space over time leads to direction selectivity, which is also topographically organized. Across the population, neurons tuned to peripheral space are more selective to sounds moving into the front. The distribution of direction selectivity can be explained by spatial and temporal integration on the non-uniform map of space. Thus, the representation of space can induce biased computation of a second-order stimulus feature. This phenomenon is likely observed in other sensory maps and may be relevant for behavior. PMID:24782781

  2. Inhibition does not affect the timing code for vocalizations in the mouse auditory midbrain

    PubMed Central

    Dimitrov, Alexander G.; Cummins, Graham I.; Mayko, Zachary M.; Portfors, Christine V.

    2014-01-01

    Many animals use a diverse repertoire of complex acoustic signals to convey different types of information to other animals. The information in each vocalization therefore must be coded by neurons in the auditory system. One way in which the auditory system may discriminate among different vocalizations is by having highly selective neurons, where only one or two different vocalizations evoke a strong response from a single neuron. Another strategy is to have specific spike timing patterns for particular vocalizations such that each neural response can be matched to a specific vocalization. Both of these strategies seem to occur in the auditory midbrain of mice. The neural mechanisms underlying rate and time coding are unclear; however, it is likely that inhibition plays a role. Here, we examined whether inhibition is involved in shaping neural selectivity to vocalizations via rate and/or time coding in the mouse inferior colliculus (IC). We examined extracellular single unit responses to vocalizations before and after iontophoretically blocking GABAA and glycine receptors in the IC of awake mice. We then applied a number of neurometrics to examine the rate and timing information of individual neurons. We initially evaluated the neuronal responses using inspection of the raster plots, spike-counting measures of response rate and stimulus preference, and a measure of maximum available stimulus-response mutual information. Subsequently, we used two different event sequence distance measures, one based on vector space embedding, and one derived from the Victor/Purpura Dq metric, to direct hierarchical clustering of responses. In general, we found that the most salient feature of pharmacologically blocking inhibitory receptors in the IC was the lack of major effects on the functional properties of IC neurons. Blocking inhibition did increase response rate to vocalizations, as expected. However, it did not significantly affect spike timing, or stimulus selectivity of
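
    The Victor/Purpura Dq metric mentioned in this record is a simple dynamic program: transforming one spike train into another costs 1 per inserted or deleted spike and q per second of shifting a spike in time. A minimal sketch (not the authors' implementation; the function name and NumPy dependency are illustrative):

```python
import numpy as np

def victor_purpura(t1, t2, q):
    """Victor-Purpura spike train distance.

    Cost 1 to insert or delete a spike, q*|dt| to shift one in time.
    t1, t2: sorted spike times (seconds); q: cost per second of shift.
    """
    n, m = len(t1), len(t2)
    G = np.zeros((n + 1, m + 1))
    G[:, 0] = np.arange(n + 1)   # delete every spike of t1
    G[0, :] = np.arange(m + 1)   # insert every spike of t2
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            G[i, j] = min(
                G[i - 1, j] + 1.0,                                  # delete
                G[i, j - 1] + 1.0,                                  # insert
                G[i - 1, j - 1] + q * abs(t1[i - 1] - t2[j - 1]),   # shift
            )
    return G[n, m]
```

    At q = 0 the metric only counts the difference in spike number (a pure rate code); as q grows it becomes increasingly sensitive to precise spike timing, which is what makes it useful for separating rate from timing information.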

  3. Conserved mechanisms of vocalization coding in mammalian and songbird auditory midbrain

    PubMed Central

    Portfors, Christine V.

    2013-01-01

    The ubiquity of social vocalization among animals provides the opportunity to identify conserved mechanisms of auditory processing that subserve vocal communication. Identifying auditory coding properties that are shared across vocal communicators will provide insight into how human auditory processing leads to speech perception. Here, we compare auditory response properties and neural coding of social vocalizations in auditory midbrain neurons of mammalian and avian vocal communicators. The auditory midbrain is a nexus of auditory processing because it receives and integrates information from multiple parallel pathways and provides the ascending auditory input to the thalamus. The auditory midbrain is also the first region in the ascending auditory system where neurons show complex tuning properties that are correlated with the acoustics of social vocalizations. Single unit studies in mice, bats and zebra finches reveal shared principles of auditory coding including tonotopy, excitatory and inhibitory interactions that shape responses to vocal signals, nonlinear response properties that are important for auditory coding of social vocalizations and modulation tuning. Additionally, single neuron responses in the mouse and songbird midbrain are reliable, selective for specific syllables, and rely on spike timing for neural discrimination of distinct vocalizations. We propose that future research on auditory coding of vocalizations in mouse and songbird midbrain neurons adopt similar experimental and analytical approaches so that conserved principles of vocalization coding may be distinguished from those that are specialized for each species. PMID:23726970

  4. Conserved mechanisms of vocalization coding in mammalian and songbird auditory midbrain.

    PubMed

    Woolley, Sarah M N; Portfors, Christine V

    2013-11-01

    The ubiquity of social vocalizations among animals provides the opportunity to identify conserved mechanisms of auditory processing that subserve communication. Identifying auditory coding properties that are shared across vocal communicators will provide insight into how human auditory processing leads to speech perception. Here, we compare auditory response properties and neural coding of social vocalizations in auditory midbrain neurons of mammalian and avian vocal communicators. The auditory midbrain is a nexus of auditory processing because it receives and integrates information from multiple parallel pathways and provides the ascending auditory input to the thalamus. The auditory midbrain is also the first region in the ascending auditory system where neurons show complex tuning properties that are correlated with the acoustics of social vocalizations. Single unit studies in mice, bats and zebra finches reveal shared principles of auditory coding including tonotopy, excitatory and inhibitory interactions that shape responses to vocal signals, nonlinear response properties that are important for auditory coding of social vocalizations and modulation tuning. Additionally, single neuron responses in the mouse and songbird midbrain are reliable, selective for specific syllables, and rely on spike timing for neural discrimination of distinct vocalizations. We propose that future research on auditory coding of vocalizations in mouse and songbird midbrain neurons adopt similar experimental and analytical approaches so that conserved principles of vocalization coding may be distinguished from those that are specialized for each species. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives".

  5. How the owl resolves auditory coding ambiguity.

    PubMed

    Mazer, J A

    1998-09-01

    The barn owl (Tyto alba) uses interaural time difference (ITD) cues to localize sounds in the horizontal plane. Low-order binaural auditory neurons with sharp frequency tuning act as narrow-band coincidence detectors; such neurons respond equally well to sounds with a particular ITD and its phase equivalents and are said to be phase ambiguous. Higher-order neurons with broad frequency tuning are unambiguously selective for single ITDs in response to broad-band sounds and show little or no response to phase equivalents. Selectivity for single ITDs is thought to arise from the convergence of parallel, narrow-band frequency channels that originate in the cochlea. ITD tuning to variable bandwidth stimuli was measured in higher-order neurons of the owl's inferior colliculus to examine the rules that govern the relationship between frequency channel convergence and the resolution of phase ambiguity. Ambiguity decreased as stimulus bandwidth increased, reaching a minimum at 2-3 kHz. Two independent mechanisms appear to contribute to the elimination of ambiguity: one suppressive and one facilitative. The integration of information carried by parallel, distributed processing channels is a common theme of sensory processing that spans both modality and species boundaries. The principles underlying the resolution of phase ambiguity and frequency channel convergence in the owl may have implications for other sensory systems, such as electrolocation in electric fish and the computation of binocular disparity in the avian and mammalian visual systems. PMID:9724807
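
    The phase ambiguity described in this record falls out of a standard coincidence-detector (cross-correlation) model of ITD coding: for a narrow-band signal the correlogram is periodic in lag, so several ITDs fit equally well, whereas a broadband signal yields a single dominant peak. A hedged sketch of the broadband case (all stimulus parameters are arbitrary choices):

```python
import numpy as np

fs = 40_000                        # sample rate (Hz)
itd = 200e-6                       # true interaural time difference (s)
shift = int(round(itd * fs))       # ITD in samples (here 8)

rng = np.random.default_rng(0)
left = rng.standard_normal(2000)   # broadband noise at the left ear
right = np.roll(left, shift)       # right ear receives a delayed copy

# Coincidence detection modeled as cross-correlation over candidate lags
lags = np.arange(-40, 41)
xcorr = np.array([np.dot(left, np.roll(right, -k)) for k in lags])
estimated_itd = lags[np.argmax(xcorr)] / fs   # recovers ~200 microseconds
```

    Replacing the noise with a pure tone makes xcorr peak at every lag that differs by a full period, which is exactly the ambiguity that frequency-channel convergence in the owl's inferior colliculus resolves.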

  6. Changing Auditory Time with Prismatic Goggles

    ERIC Educational Resources Information Center

    Magnani, Barbara; Pavani, Francesco; Frassinetti, Francesca

    2012-01-01

    The aim of the present study was to explore the spatial organization of auditory time and the effects of the manipulation of spatial attention on such a representation. In two experiments, we asked 28 adults to classify the duration of auditory stimuli as "short" or "long". Stimuli were tones of high or low pitch, delivered left or right of the…

  7. Subcortical neural coding mechanisms for auditory temporal processing.

    PubMed

    Frisina, R D

    2001-08-01

    Biologically relevant sounds such as speech, animal vocalizations and music have distinguishing temporal features that are utilized for effective auditory perception. Common temporal features include sound envelope fluctuations, often modeled in the laboratory by amplitude modulation (AM), and starts and stops in ongoing sounds, which are frequently approximated by hearing researchers as gaps between two sounds or are investigated in forward masking experiments. The auditory system has evolved many neural processing mechanisms for encoding important temporal features of sound. Due to rapid progress made in the field of auditory neuroscience in the past three decades, it is not possible to review all progress in this field in a single article. The goal of the present report is to focus on single-unit mechanisms in the mammalian brainstem auditory system for encoding AM and gaps as illustrative examples of how the system encodes key temporal features of sound. This report, following a systems analysis approach, starts with findings in the auditory nerve and proceeds centrally through the cochlear nucleus, superior olivary complex and inferior colliculus. Some general principles can be seen when reviewing this entire field. For example, as one ascends the central auditory system, a neural encoding shift occurs. An emphasis on synchronous responses for temporal coding exists in the auditory periphery, and more reliance on rate coding occurs as one moves centrally. In addition, for AM, modulation transfer functions become more bandpass as the sound level of the signal is raised, but become more lowpass in shape as background noise is added. In many cases, AM coding can actually increase in the presence of background noise. For gap processing or forward masking, coding for gaps changes from a decrease in spike firing rate for neurons of the peripheral auditory system that have sustained response patterns, to an increase in firing rate for more central neurons with
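
    The "synchronous responses" to AM emphasized in this record are conventionally quantified with vector strength: each spike is mapped to its phase within the modulation cycle, and the resultant length of the unit phasors is computed (1 = perfect phase locking, ~0 = none). A minimal sketch with illustrative parameters:

```python
import numpy as np

def vector_strength(spike_times, fm):
    """Phase locking of spikes to a modulation frequency fm (Hz)."""
    phases = 2 * np.pi * fm * np.asarray(spike_times)
    return abs(np.mean(np.exp(1j * phases)))

fm = 10.0                                       # AM rate (Hz)
locked = np.arange(50) / fm + 0.02              # one spike per cycle, fixed phase
rng = np.random.default_rng(0)
random_spikes = rng.uniform(0.0, 5.0, 500)      # spikes unrelated to the envelope

vs_locked = vector_strength(locked, fm)         # close to 1.0
vs_random = vector_strength(random_spikes, fm)  # close to 0
```

    Plotting vector strength (a synchrony code) or spike count (a rate code) against fm gives the two flavors of modulation transfer function the record contrasts between peripheral and central stations.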

  8. Temporal coding by populations of auditory receptor neurons.

    PubMed

    Sabourin, Patrick; Pollack, Gerald S

    2010-03-01

    Auditory receptor neurons of crickets are most sensitive to either low or high sound frequencies. Earlier work showed that the temporal coding properties of first-order auditory interneurons are matched to the temporal characteristics of natural low- and high-frequency stimuli (cricket songs and bat echolocation calls, respectively). We studied the temporal coding properties of receptor neurons and used modeling to investigate how activity within populations of low- and high-frequency receptors might contribute to the coding properties of interneurons. We confirm earlier findings that individual low-frequency-tuned receptors code stimulus temporal pattern poorly, but show that coding performance of a receptor population increases markedly with population size, due in part to low redundancy among the spike trains of different receptors. By contrast, individual high-frequency-tuned receptors code a stimulus temporal pattern fairly well and, because their spike trains are redundant, there is only a slight increase in coding performance with population size. The coding properties of low- and high-frequency receptor populations resemble those of interneurons in response to low- and high-frequency stimuli, suggesting that coding at the interneuron level is partly determined by the nature and organization of afferent input. Consistent with this, the sound-frequency-specific coding properties of an interneuron, previously demonstrated by analyzing its spike train, are also apparent in the subthreshold fluctuations in membrane potential that are generated by synaptic input from receptor neurons.

  9. Temporal asymmetries in auditory coding and perception reflect multi-layered nonlinearities

    PubMed Central

    Deneux, Thomas; Kempf, Alexandre; Daret, Aurélie; Ponsot, Emmanuel; Bathellier, Brice

    2016-01-01

    Sound recognition relies not only on spectral cues, but also on temporal cues, as demonstrated by the profound impact of time reversals on perception of common sounds. To address the coding principles underlying such auditory asymmetries, we recorded a large sample of auditory cortex neurons using two-photon calcium imaging in awake mice, while playing sounds ramping up or down in intensity. We observed clear asymmetries in cortical population responses, including stronger cortical activity for up-ramping sounds, which matches perceptual saliency assessments in mice and previous measures in humans. Analysis of cortical activity patterns revealed that auditory cortex implements a map of spatially clustered neuronal ensembles, detecting specific combinations of spectral and intensity modulation features. Comparing different models, we show that cortical responses result from multi-layered nonlinearities, which, contrary to standard receptive field models of auditory cortex function, build divergent representations of sounds with similar spectral content, but different temporal structure. PMID:27580932

  10. Temporal asymmetries in auditory coding and perception reflect multi-layered nonlinearities.

    PubMed

    Deneux, Thomas; Kempf, Alexandre; Daret, Aurélie; Ponsot, Emmanuel; Bathellier, Brice

    2016-01-01

    Sound recognition relies not only on spectral cues, but also on temporal cues, as demonstrated by the profound impact of time reversals on perception of common sounds. To address the coding principles underlying such auditory asymmetries, we recorded a large sample of auditory cortex neurons using two-photon calcium imaging in awake mice, while playing sounds ramping up or down in intensity. We observed clear asymmetries in cortical population responses, including stronger cortical activity for up-ramping sounds, which matches perceptual saliency assessments in mice and previous measures in humans. Analysis of cortical activity patterns revealed that auditory cortex implements a map of spatially clustered neuronal ensembles, detecting specific combinations of spectral and intensity modulation features. Comparing different models, we show that cortical responses result from multi-layered nonlinearities, which, contrary to standard receptive field models of auditory cortex function, build divergent representations of sounds with similar spectral content, but different temporal structure. PMID:27580932

  11. Codes for sound-source location in nontonotopic auditory cortex.

    PubMed

    Middlebrooks, J C; Xu, L; Eddins, A C; Green, D M

    1998-08-01

    We evaluated two hypothetical codes for sound-source location in the auditory cortex. The topographical code assumed that single neurons are selective for particular locations and that sound-source locations are coded by the cortical location of small populations of maximally activated neurons. The distributed code assumed that the responses of individual neurons can carry information about locations throughout 360 degrees of azimuth and that accurate sound localization derives from information that is distributed across large populations of such panoramic neurons. We recorded from single units in the anterior ectosylvian sulcus area (area AES) and in area A2 of alpha-chloralose-anesthetized cats. Results obtained in the two areas were essentially equivalent. Noise bursts were presented from loudspeakers spaced in 20 degrees intervals of azimuth throughout 360 degrees of the horizontal plane. Spike counts of the majority of units were modulated >50% by changes in sound-source azimuth. Nevertheless, sound-source locations that produced greater than half-maximal spike counts often spanned >180 degrees of azimuth. The spatial selectivity of units tended to broaden and, often, to shift in azimuth as sound pressure levels (SPLs) were increased to a moderate level. We sometimes saw systematic changes in spatial tuning along segments of electrode tracks as long as 1.5 mm but such progressions were not evident at higher sound levels. Moderate-level sounds presented anywhere in the contralateral hemifield produced greater than half-maximal activation of nearly all units. These results are not consistent with the hypothesis of a topographic code. We used an artificial-neural-network algorithm to recognize spike patterns and, thereby, infer the locations of sound sources. Network input consisted of spike density functions formed by averages of responses to eight stimulus repetitions. Information carried in the responses of single units permitted reasonable estimates of sound
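
    A pattern-recognition decoder of the kind this record describes (theirs was an artificial neural network; here a nearest-template stand-in on synthetic spike density functions) can be sketched as follows; every response shape and parameter below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
azimuths = np.arange(-180, 180, 20)      # loudspeaker azimuths (deg)
t = np.linspace(0.0, 1.0, 40)            # 40 time bins of the response

def mean_sdf(az):
    """Hypothetical 'panoramic' unit: a spike density function whose
    shape varies smoothly over the full 360 degrees of azimuth."""
    early = np.exp(-0.5 * ((t - 0.3) / 0.1) ** 2)
    late = np.exp(-0.5 * ((t - 0.7) / 0.1) ** 2)
    return (1 + np.cos(np.deg2rad(az))) * early + (1 + np.sin(np.deg2rad(az))) * late

# Templates: average of eight noisy repetitions per location, as in the study
templates = {az: np.mean([mean_sdf(az) + 0.05 * rng.standard_normal(t.size)
                          for _ in range(8)], axis=0)
             for az in azimuths}

def decode_azimuth(pattern):
    """Assign a response pattern to the best-matching template."""
    return min(templates, key=lambda az: np.sum((templates[az] - pattern) ** 2))
```

    The point of the sketch is only the qualitative one the record makes: a neuron with very broad spatial tuning can still support accurate location estimates if its full temporal discharge pattern, rather than its spike count, is read out.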

  12. Evolutionarily conserved coding properties of auditory neurons across grasshopper species

    PubMed Central

    Neuhofer, Daniela; Wohlgemuth, Sandra; Stumpner, Andreas; Ronacher, Bernhard

    2008-01-01

    We investigated encoding properties of identified auditory interneurons in two not closely related grasshopper species (Acrididae). The neurons can be homologized on the basis of their similar morphologies and physiologies. As test stimuli, we used the species-specific stridulation signals of Chorthippus biguttulus, which evidently are not relevant for the other species, Locusta migratoria. We recorded spike trains produced in response to these signals from several neuron types at the first levels of the auditory pathway in both species. Using a spike train metric to quantify differences between neuronal responses, we found a high similarity in the responses of homologous neurons: interspecific differences between the responses of homologous neurons in the two species were not significantly larger than intraspecific differences (between several specimens of a neuron in one species). These results suggest that the elements of the thoracic auditory pathway have been strongly conserved during the evolutionary divergence of these species. According to the ‘efficient coding’ hypothesis, an adaptation of the thoracic auditory pathway to the specific needs of acoustic communication could be expected. We conclude that there must have been stabilizing selective forces at work that conserved coding characteristics and prevented such an adaptation. PMID:18505715

  13. Coding of melodic gestalt in human auditory cortex.

    PubMed

    Schindler, Andreas; Herdener, Marcus; Bartels, Andreas

    2013-12-01

    The perception of a melody is invariant to the absolute properties of its constituting notes, but depends on the relation between them-the melody's relative pitch profile. In fact, a melody's "Gestalt" is recognized regardless of the instrument or key used to play it. Pitch processing in general is assumed to occur at the level of the auditory cortex. However, it is unknown whether early auditory regions are able to encode pitch sequences integrated over time (i.e., melodies) and whether the resulting representations are invariant to specific keys. Here, we presented participants different melodies composed of the same 4 harmonic pitches during functional magnetic resonance imaging recordings. Additionally, we played the same melodies transposed in different keys and on different instruments. We found that melodies were invariantly represented by their blood oxygen level-dependent activation patterns in primary and secondary auditory cortices across instruments, and also across keys. Our findings extend common hierarchical models of auditory processing by showing that melodies are encoded independent of absolute pitch and based on their relative pitch profile as early as the primary auditory cortex.

  14. Temporal pattern recognition based on instantaneous spike rate coding in a simple auditory system.

    PubMed

    Nabatiyan, A; Poulet, J F A; de Polavieja, G G; Hedwig, B

    2003-10-01

    Auditory pattern recognition by the CNS is a fundamental process in acoustic communication. Because crickets communicate with stereotyped patterns of constant frequency syllables, they are established models to investigate the neuronal mechanisms of auditory pattern recognition. Here we provide evidence that for the neural processing of amplitude-modulated sounds, the instantaneous spike rate rather than the time-averaged neural activity is the appropriate coding principle by comparing both coding parameters in a thoracic interneuron (Omega neuron ON1) of the cricket (Gryllus bimaculatus) auditory system. When stimulated with different temporal sound patterns, the analysis of the instantaneous spike rate demonstrates that the neuron acts as a low-pass filter for syllable patterns. The instantaneous spike rate is low at high syllable rates, but prominent peaks in the instantaneous spike rate are generated as the syllable rate resembles that of the species-specific pattern. The occurrence and repetition rate of these peaks in the neuronal discharge are sufficient to explain temporal filtering in the cricket auditory pathway as they closely match the tuning of phonotactic behavior to different sound patterns. Thus temporal filtering or "pattern recognition" occurs at an early stage in the auditory pathway.
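
    The "instantaneous spike rate" this record contrasts with time-averaged activity is commonly obtained by convolving the spike train with a smoothing kernel of unit area. A minimal Gaussian-kernel version (the kernel width is an arbitrary analysis choice, not taken from the study):

```python
import numpy as np

def instantaneous_rate(spike_times, t, sigma):
    """Instantaneous firing rate (spikes/s): a unit-area Gaussian kernel
    of width sigma (s) centred on each spike, summed over spikes."""
    t = np.asarray(t)
    spikes = np.asarray(spike_times)[:, None]
    k = np.exp(-0.5 * ((t[None, :] - spikes) / sigma) ** 2)
    return k.sum(axis=0) / (sigma * np.sqrt(2.0 * np.pi))
```

    A burst of spikes at the species-specific syllable rate then shows up as prominent peaks in the rate trace, while the same total spike count spread evenly over time does not; this is what lets the measure act as a temporal filter where a plain spike count cannot.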

  15. Improving Hearing Performance Using Natural Auditory Coding Strategies

    NASA Astrophysics Data System (ADS)

    Rattay, Frank

    Sound transfer from the human ear to the brain is based on three quite different neural coding principles, by which the continuous temporal auditory source signal is sent in excellent quality, as a binary spike code, via 30,000 nerve fibers per ear. Cochlear implants are well-accepted neural prostheses for people with sensory hearing loss, but currently the devices are inspired only by the tonotopic principle. According to this principle, every sound frequency is mapped to a specific place along the cochlea. By electrical stimulation, the frequency content of the acoustic signal is distributed via a few contacts of the prosthesis to corresponding places and generates spikes there. In contrast to the natural situation, the artificially evoked information content in the auditory nerve is quite poor, especially because the richness of the temporal fine structure of the neural pattern is replaced by a firing pattern that is strongly synchronized with an artificial cycle duration. Improvement in hearing performance is expected by involving more of the ingenious strategies developed during evolution.

  16. Coding of communication calls in the subcortical and cortical structures of the auditory system.

    PubMed

    Suta, D; Popelár, J; Syka, J

    2008-01-01

    The processing of species-specific communication signals in the auditory system represents an important aspect of animal behavior and is crucial for its social interactions, reproduction, and survival. In this article the neuronal mechanisms underlying the processing of communication signals in the higher centers of the auditory system--inferior colliculus (IC), medial geniculate body (MGB) and auditory cortex (AC)--are reviewed, with particular attention to the guinea pig. The selectivity of neuronal responses for individual calls in these auditory centers in the guinea pig is usually low--most neurons respond to calls as well as to artificial sounds; the coding of complex sounds in the central auditory nuclei is apparently based on the representation of temporal and spectral features of acoustical stimuli in neural networks. Neuronal response patterns in the IC reliably match the sound envelope for calls characterized by one or more short impulses, but do not exactly fit the envelope for long calls. Also, the main spectral peaks are represented by neuronal firing rates in the IC. In comparison to the IC, response patterns in the MGB and AC demonstrate a less precise representation of the sound envelope, especially in the case of longer calls. The spectral representation is worse in the case of low-frequency calls, but not in the case of broad-band calls. The emotional content of the call may influence neuronal responses in the auditory pathway, which can be demonstrated by stimulation with time-reversed calls or by measurements performed under different levels of anesthesia. The investigation of the principles of the neural coding of species-specific vocalizations offers some keys for understanding the neural mechanisms underlying human speech perception.

  17. Diverse cortical codes for scene segmentation in primate auditory cortex

    PubMed Central

    Semple, Malcolm N.

    2015-01-01

    The temporal coherence of amplitude fluctuations is a critical cue for segmentation of complex auditory scenes. The auditory system must accurately demarcate the onsets and offsets of acoustic signals. We explored how and how well the timing of onsets and offsets of gated tones is encoded by auditory cortical neurons in awake rhesus macaques. Temporal features of this representation were isolated by presenting otherwise identical pure tones of differing durations. Cortical response patterns were diverse, including selective encoding of onset and offset transients, tonic firing, and sustained suppression. Spike train classification methods revealed that many neurons robustly encoded tone duration despite substantial diversity in the encoding process. Excellent discrimination performance was achieved by neurons whose responses were primarily phasic at tone offset and by those that responded robustly while the tone persisted. Although diverse cortical response patterns converged on effective duration discrimination, this diversity significantly constrained the utility of decoding models referenced to a spiking pattern averaged across all responses or averaged within the same response category. Using maximum likelihood-based decoding models, we demonstrated that the spike train recorded in a single trial could support direct estimation of stimulus onset and offset. Comparisons between different decoding models established the substantial contribution of bursts of activity at sound onset and offset to demarcating the temporal boundaries of gated tones. Our results indicate that relatively few neurons suffice to provide temporally precise estimates of such auditory “edges,” particularly for models that assume and exploit the heterogeneity of neural responses in awake cortex. PMID:25695655

  18. Brain-Generated Estradiol Drives Long-Term Optimization of Auditory Coding to Enhance the Discrimination of Communication Signals

    PubMed Central

    Tremere, Liisa A.; Pinaud, Raphael

    2011-01-01

    Auditory processing and hearing-related pathologies are heavily influenced by steroid hormones in a variety of vertebrate species including humans. The hormone estradiol has been recently shown to directly modulate the gain of central auditory neurons, in real-time, by controlling the strength of inhibitory transmission via a non-genomic mechanism. The functional relevance of this modulation, however, remains unknown. Here we show that estradiol generated in the songbird homologue of the mammalian auditory association cortex, rapidly enhances the effectiveness of the neural coding of complex, learned acoustic signals in awake zebra finches. Specifically, estradiol increases mutual information rates, coding efficiency and the neural discrimination of songs. These effects are mediated by estradiol’s modulation of both rate and temporal coding of auditory signals. Interference with the local action or production of estradiol in the auditory forebrain of freely-behaving animals disrupts behavioral responses to songs, but not to other behaviorally-relevant communication signals. Our findings directly show that estradiol is a key regulator of auditory function in the adult vertebrate brain. PMID:21368039

  19. Subthreshold resonance properties contribute to the efficient coding of auditory spatial cues

    PubMed Central

    Remme, Michiel W. H.; Donato, Roberta; Mikiel-Hunter, Jason; Ballestero, Jimena A.; Foster, Simon; Rinzel, John; McAlpine, David

    2014-01-01

    Neurons in the medial superior olive (MSO) and lateral superior olive (LSO) of the auditory brainstem code for sound-source location in the horizontal plane, extracting interaural time differences (ITDs) from the stimulus fine structure and interaural level differences (ILDs) from the stimulus envelope. Here, we demonstrate a postsynaptic gradient in temporal processing properties across the presumed tonotopic axis; neurons in the MSO and the low-frequency limb of the LSO exhibit fast intrinsic electrical resonances and low input impedances, consistent with their processing of ITDs in the temporal fine structure. Neurons in the high-frequency limb of the LSO show low-pass electrical properties, indicating they are better suited to extracting information from the slower, modulated envelopes of sounds. Using a modeling approach, we assess ITD and ILD sensitivity of the neural filters to natural sounds, demonstrating that the transformation in temporal processing along the tonotopic axis contributes to efficient extraction of auditory spatial cues. PMID:24843153
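
    The fast resonant versus low-pass membranes contrasted in this record can be caricatured with a linear impedance model: a passive RC membrane is low-pass, and adding a phenomenological inductive branch (a stand-in for slow voltage-gated currents) produces a subthreshold resonance at a nonzero frequency. All component values below are illustrative, not fits to MSO/LSO data:

```python
import numpy as np

def impedance(freqs, R=100e6, C=100e-12, RL=None, L=None):
    """|Z(f)| of a membrane modelled as R (ohm) in parallel with C (farad),
    optionally also in parallel with an inductive branch RL + jwL."""
    w = 2.0 * np.pi * np.asarray(freqs)
    Y = 1.0 / R + 1j * w * C                 # admittance of the RC membrane
    if L is not None:
        Y = Y + 1.0 / (RL + 1j * w * L)      # resonant (inductive) branch
    return np.abs(1.0 / Y)

freqs = np.linspace(0.1, 200.0, 2000)
lowpass = impedance(freqs)                    # |Z| peaks at the lowest frequency
resonant = impedance(freqs, RL=50e6, L=1e6)   # |Z| peaks at a nonzero frequency
```

    In this caricature the resonant, low-impedance cell passes fast fluctuations (fine-structure ITDs) preferentially, while the plain RC cell integrates the slow envelope, mirroring the gradient across the tonotopic axis reported here.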

  20. Auditory Speech Perception Tests in Relation to the Coding Strategy in Cochlear Implant

    PubMed Central

    Bazon, Aline Cristine; Mantello, Erika Barioni; Gonçales, Alina Sanches; Isaac, Myriam de Lima; Hyppolito, Miguel Angelo; Reis, Ana Cláudia Mirândola Barbosa

    2015-01-01

    Introduction  The objective of the evaluation of auditory perception of cochlear implant users is to determine how the acoustic signal is processed, leading to the recognition and understanding of sound. Objective  To investigate the differences in the process of auditory speech perception in individuals with postlingual hearing loss wearing a cochlear implant, using two different speech coding strategies, and to analyze speech perception and handicap perception in relation to the strategy used. Methods  This is a prospective, descriptive, cross-sectional cohort study. We selected ten cochlear implant users, who were characterized by hearing threshold, by speech perception tests, and by the Hearing Handicap Inventory for Adults. Results  There was no significant difference when comparing the variables subject age, age at acquisition of hearing loss, etiology, time of hearing deprivation, time of cochlear implant use and mean hearing threshold with the cochlear implant with the shift in speech coding strategy. There was no relationship between lack of handicap perception and improvement in speech perception with either speech coding strategy. Conclusion  There was no significant difference between the strategies evaluated and no relation was observed between them and the variables studied. PMID:27413409

  1. Neural coding of sound frequency by cricket auditory receptors.

    PubMed

    Imaizumi, K; Pollack, G S

    1999-02-15

    Crickets provide a useful model to study neural processing of sound frequency. Sound frequency is one parameter that crickets use to discriminate between conspecific signals and sounds made by predators, yet little is known about how frequency is represented at the level of auditory receptors. In this paper, we study the physiological properties of auditory receptor fibers (ARFs) by making single-unit recordings in the cricket Teleogryllus oceanicus. Characteristic frequencies (CFs) of ARFs are distributed discontinuously throughout the range of frequencies that we investigated (2-40 kHz) and appear to be clustered around three frequency ranges, the highest of which lies at or above 18 kHz. A striking characteristic of cricket ARFs is the occurrence of additional sensitivity peaks at frequencies other than CFs. These additional sensitivity peaks allow crickets to detect sound over a wide frequency range, although the CFs of ARFs cover only the frequency bands mentioned above. To the best of our knowledge, this is the first example of the extension of an animal's hearing range through multiple sensitivity peaks of auditory receptors.

  2. Encoding of temporal information by timing, rate, and place in cat auditory cortex.

    PubMed

    Imaizumi, Kazuo; Priebe, Nicholas J; Sharpee, Tatyana O; Cheung, Steven W; Schreiner, Christoph E

    2010-07-19

    A central goal in auditory neuroscience is to understand the neural coding of species-specific communication and human speech sounds. Low-rate repetitive sounds are elemental features of communication sounds, and core auditory cortical regions have been implicated in processing these information-bearing elements. Repetitive sounds could be encoded by at least three neural response properties: 1) the event-locked spike-timing precision, 2) the mean firing rate, and 3) the interspike interval (ISI). To determine how well these response aspects capture information about the repetition rate stimulus, we measured local group responses of cortical neurons in cat anterior auditory field (AAF) to click trains and calculated their mutual information based on these different codes. ISIs of the multiunit responses carried substantially higher information about low repetition rates than either spike-timing precision or firing rate. Combining firing rate and ISI codes was synergistic and captured modestly more repetition information. Spatial distribution analyses showed distinct local clustering properties for each encoding scheme for repetition information indicative of a place code. Diversity in local processing emphasis and distribution of different repetition rate codes across AAF may give rise to concurrent feed-forward processing streams that contribute differently to higher-order sound analysis.
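    The comparison of codes in this abstract rests on mutual information between stimulus and response. A minimal plug-in estimator over discrete symbols can be sketched as follows (the repetition rates and spike counts are invented toy values, not data from the study):

```python
import numpy as np

def mutual_information(x, y):
    """Plug-in mutual information (bits) between paired discrete samples."""
    xs, xi = np.unique(x, return_inverse=True)
    ys, yi = np.unique(y, return_inverse=True)
    joint = np.zeros((xs.size, ys.size))
    np.add.at(joint, (xi, yi), 1)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

# Toy data: three repetition rates; one response code tracks the stimulus
# perfectly, the other is unrelated to it.
stim = np.repeat([5, 10, 20], 100)        # stimulus repetition rate (Hz)
count_code = np.repeat([2, 4, 8], 100)    # informative spike-count response
rand_code = np.tile([2, 4, 8], 100)       # uninformative response
mi_count = mutual_information(stim, count_code)
mi_rand = mutual_information(stim, rand_code)
```

    A perfectly informative three-way code yields log2(3), about 1.58 bits, while the unrelated code carries essentially none.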

  3. Sprint starts and the minimum auditory reaction time.

    PubMed

    Pain, Matthew T G; Hibbs, Angela

    2007-01-01

    The simple auditory reaction time is one of the fastest reaction times and is thought to be rarely less than 100 ms. The current false start criterion in a sprint used by the International Association of Athletics Federations is based on this assumed auditory reaction time of 100 ms. However, there is evidence, both anecdotal and from reflex research, that simple auditory reaction times of less than 100 ms can be achieved. Reaction time in nine athletes performing sprint starts in four conditions was measured using starting blocks instrumented with piezoelectric force transducers in each footplate that were synchronized with the starting signal. Only three conditions were used to calculate reaction times. The pre-motor and pseudo-motor time for two athletes were also measured across 13 muscles using surface electromyography (EMG) synchronized with the rest of the system. Five of the athletes had mean reaction times of less than 100 ms in at least one condition and 20% of all starts in the first two conditions had a reaction time of less than 100 ms. The results demonstrate that the neuromuscular-physiological component of simple auditory reaction times can be under 85 ms and that EMG latencies can be under 60 ms. PMID:17127583

  4. Auditory-motor entrainment and phonological skills: precise auditory timing hypothesis (PATH)

    PubMed Central

    Tierney, Adam; Kraus, Nina

    2014-01-01

    Phonological skills are enhanced by music training, but the mechanisms enabling this cross-domain enhancement remain unknown. To explain this cross-domain transfer, we propose a precise auditory timing hypothesis (PATH) whereby entrainment practice is the core mechanism underlying enhanced phonological abilities in musicians. Both rhythmic synchronization and language skills such as consonant discrimination, detection of word and phrase boundaries, and conversational turn-taking rely on the perception of extremely fine-grained timing details in sound. Auditory-motor timing is an acoustic feature which meets all five of the pre-conditions necessary for cross-domain enhancement to occur (Patel, 2011, 2012, 2014). There is overlap between the neural networks that process timing in the context of both music and language. Entrainment to music demands more precise timing sensitivity than does language processing. Moreover, auditory-motor timing integration captures the emotion of the trainee, is repeatedly practiced, and demands focused attention. The PATH predicts that musical training emphasizing entrainment will be particularly effective in enhancing phonological skills. PMID:25505879

  5. Burst Firing is a Neural Code in an Insect Auditory System

    PubMed Central

    Eyherabide, Hugo G.; Rokem, Ariel; Herz, Andreas V. M.; Samengo, Inés

    2008-01-01

    Various classes of neurons alternate between high-frequency discharges and silent intervals. This phenomenon is called burst firing. To analyze burst activity in an insect system, grasshopper auditory receptor neurons were recorded in vivo for several distinct stimulus types. The experimental data show that both burst probability and burst characteristics are strongly influenced by temporal modulations of the acoustic stimulus. The tendency to burst, hence, is not only determined by cell-intrinsic processes, but also by their interaction with the stimulus time course. We study this interaction quantitatively and observe that bursts containing a certain number of spikes occur shortly after stimulus deflections of specific intensity and duration. Our findings suggest a sparse neural code where information about the stimulus is represented by the number of spikes per burst, irrespective of the detailed interspike-interval structure within a burst. This compact representation cannot be interpreted as a firing-rate code. An information-theoretical analysis reveals that the number of spikes per burst reliably conveys information about the amplitude and duration of sound transients, whereas their time of occurrence is reflected by the burst onset time. The investigated neurons encode almost half of the total transmitted information in burst activity. PMID:18946533
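    The spikes-per-burst code described here can be made concrete with a simple interspike-interval criterion (the 6 ms threshold and the spike times below are illustrative assumptions, not parameters from the study):

```python
def spikes_per_burst(spike_times, max_isi=0.006):
    """Group spikes into bursts: consecutive spikes whose interspike interval
    is at most max_isi belong to the same burst. Returns a list of
    (burst_onset_time, spikes_in_burst) pairs."""
    bursts = []
    onset, n = spike_times[0], 1
    for prev, cur in zip(spike_times, spike_times[1:]):
        if cur - prev <= max_isi:
            n += 1
        else:
            bursts.append((onset, n))
            onset, n = cur, 1
    bursts.append((onset, n))
    return bursts

# A hypothetical spike train (seconds): a 3-spike burst, an isolated spike,
# then a 4-spike burst.
train = [0.010, 0.012, 0.013, 0.050, 0.120, 0.1235, 0.125, 0.127]
bursts = spikes_per_burst(train)
```

    Under the sparse code proposed in the abstract, only the burst onset times and the per-burst counts in `bursts` would carry the message, not the fine interval structure within each burst.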

  6. Efficient coding of spectrotemporal binaural sounds leads to emergence of the auditory space representation

    PubMed Central

    Młynarski, Wiktor

    2014-01-01

    To date, a number of studies have shown that receptive field shapes of early sensory neurons can be reproduced by optimizing coding efficiency of natural stimulus ensembles. A still unresolved question is whether the efficient coding hypothesis explains formation of neurons which explicitly represent environmental features of different functional importance. This paper proposes that the spatial selectivity of higher auditory neurons emerges as a direct consequence of learning efficient codes for natural binaural sounds. Firstly, it is demonstrated that a linear efficient coding transform, Independent Component Analysis (ICA), trained on spectrograms of naturalistic simulated binaural sounds extracts spatial information present in the signal. A simple hierarchical ICA extension allowing for decoding of sound position is proposed. Furthermore, it is shown that units revealing spatial selectivity can be learned from a binaural recording of a natural auditory scene. In both cases a relatively small subpopulation of learned spectrogram features suffices to perform accurate sound localization. Representation of the auditory space is therefore learned in a purely unsupervised way by maximizing the coding efficiency and without any task-specific constraints. These results imply that efficient coding is a useful strategy for learning structures which allow for making behaviorally vital inferences about the environment. PMID:24639644
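    A toy version of the first step, ICA unmixing two observed channels, can be sketched in plain NumPy (the Laplace sources and mixing gains below are invented stand-ins for the paper's spectrogram features). For two channels, ICA after whitening reduces to a one-parameter rotation search for maximal non-Gaussianity:

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.laplace(size=(5000, 2))            # independent heavy-tailed sources
A = np.array([[1.0, 0.3],
              [0.2, 1.0]])                 # per-channel mixing gains (invented)
x = s @ A.T                                # observed two-channel signal

# Whiten the observations.
x = x - x.mean(axis=0)
d, E = np.linalg.eigh(np.cov(x.T))
z = x @ E @ np.diag(d ** -0.5) @ E.T

# With two channels, ICA after whitening reduces to finding the rotation
# that maximizes non-Gaussianity (here, total excess kurtosis).
def kurt_sum(theta):
    c, s_ = np.cos(theta), np.sin(theta)
    u = z @ np.array([[c, -s_], [s_, c]])
    return np.sum(np.mean(u ** 4, axis=0) - 3.0)

best = max(np.linspace(0.0, np.pi / 2, 1000), key=kurt_sum)
c, s_ = np.cos(best), np.sin(best)
u = z @ np.array([[c, -s_], [s_, c]])      # recovered components (order/sign arbitrary)
```

    Each recovered component should correlate strongly (up to sign and ordering) with one of the original sources, which is the sense in which the learned code "extracts" latent structure.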

  7. Norepinephrine Modulates Coding of Complex Vocalizations in the Songbird Auditory Cortex Independent of Local Neuroestrogen Synthesis

    PubMed Central

    Ikeda, Maaya Z.; Jeon, Sung David; Cowell, Rosemary A.

    2015-01-01

    The catecholamine norepinephrine plays a significant role in auditory processing. Most studies to date have examined the effects of norepinephrine on the neuronal response to relatively simple stimuli, such as tones and calls. It is less clear how norepinephrine shapes the detection of complex syntactical sounds, as well as the coding properties of sensory neurons. Songbirds provide an opportunity to understand how auditory neurons encode complex, learned vocalizations, and the potential role of norepinephrine in modulating the neuronal computations for acoustic communication. Here, we infused norepinephrine into the zebra finch auditory cortex and performed extracellular recordings to study the modulation of song representations in single neurons. Consistent with its proposed role in enhancing signal detection, norepinephrine decreased spontaneous activity and firing during stimuli, yet it significantly enhanced the auditory signal-to-noise ratio. These effects were all mimicked by clonidine, an α-2 receptor agonist. Moreover, a pattern classifier analysis indicated that norepinephrine enhanced the ability of single neurons to accurately encode complex auditory stimuli. Because neuroestrogens are also known to enhance auditory processing in the songbird brain, we tested the hypothesis that norepinephrine actions depend on local estrogen synthesis. Neither norepinephrine nor adrenergic receptor antagonist infusion into the auditory cortex had detectable effects on local estradiol levels. Moreover, pretreatment with fadrozole, a specific aromatase inhibitor, did not block norepinephrine's neuromodulatory effects. Together, these findings indicate that norepinephrine enhances signal detection and information encoding for complex auditory stimuli by suppressing spontaneous “noise” activity and that these actions are independent of local neuroestrogen synthesis. PMID:26109659

  8. Visual Timing of Structured Dance Movements Resembles Auditory Rhythm Perception

    PubMed Central

    Su, Yi-Huang; Salazar-López, Elvira

    2016-01-01

    Temporal mechanisms for processing auditory musical rhythms are well established, in which a perceived beat is beneficial for timing purposes. It is yet unknown whether such beat-based timing would also underlie visual perception of temporally structured, ecological stimuli connected to music: dance. In this study, we investigated whether observers extracted a visual beat when watching dance movements to assist visual timing of these movements. Participants watched silent videos of dance sequences and reproduced the movement duration by mental recall. We found better visual timing for limb movements with regular patterns in the trajectories than without, similar to the beat advantage for auditory rhythms. When movements involved both the arms and the legs, the benefit of a visual beat relied only on the latter. The beat-based advantage persisted despite auditory interferences that were temporally incongruent with the visual beat, arguing for the visual nature of these mechanisms. Our results suggest that visual timing principles for dance parallel their auditory counterparts for music, which may be based on common sensorimotor coupling. These processes likely yield multimodal rhythm representations in the scenario of music and dance. PMID:27313900

  9. Odors Bias Time Perception in Visual and Auditory Modalities

    PubMed Central

    Yue, Zhenzhu; Gao, Tianyu; Chen, Lihan; Wu, Jiashuang

    2016-01-01

    Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in visual and auditory modalities. Participants were shown either a visual dot or heard a continuous tone for 1000 or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for both visual and auditory modalities. The analysis of the interaction effect showed that participants produced longer durations than the actual duration in the short interval condition, but they produced shorter durations in the long interval condition. The effect sizes were larger for the auditory modality than those for the visual modality. Moreover, by comparing performance across the initial and the final blocks of the experiment, we found odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, and there was a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, and they were constrained by different sensory modalities, valence of the emotional events, and target durations. Biases in time perception could be accounted for by a framework of

  11. Auditory training improves neural timing in the human brainstem.

    PubMed

    Russo, Nicole M; Nicol, Trent G; Zecker, Steven G; Hayes, Erin A; Kraus, Nina

    2005-01-01

    The auditory brainstem response reflects neural encoding of the acoustic characteristics of a speech syllable with remarkable precision. Some children with learning impairments demonstrate abnormalities in this preconscious measure of neural encoding, especially in background noise. This study investigated whether auditory training targeted to remediate perceptually-based learning problems would alter the neural brainstem encoding of the acoustic sound structure of speech in such children. Nine subjects, clinically diagnosed with a language-based learning problem (e.g., dyslexia), worked with auditory perceptual training software. Prior to beginning and within three months after completing the training program, brainstem responses to the syllable /da/ were recorded in quiet and background noise. Subjects underwent additional auditory neurophysiological, perceptual, and cognitive testing. Ten control subjects, who did not participate in any remediation program, underwent the same battery of tests at time intervals equivalent to those of the trained subjects. Transient and sustained (frequency-following response) components of the brainstem response were evaluated. The primary pathway afferent volley (neural events occurring earlier than 11 ms after stimulus onset) did not demonstrate plasticity. However, quiet-to-noise inter-response correlations of the sustained response (approximately 11-50 ms) increased significantly in the trained children, reflecting improved stimulus encoding precision, whereas control subjects did not exhibit this change. Thus, auditory training can alter the preconscious neural encoding of complex sounds by improving neural synchrony in the auditory brainstem. Additionally, several measures of brainstem response timing were related to changes in cortical physiology, as well as perceptual, academic, and cognitive measures from pre- to post-training.
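    The quiet-to-noise inter-response correlation used here is, at its core, a Pearson correlation between the two recorded responses over the sustained-response window. A sketch on simulated responses (the sampling rate, the 100 Hz following response, and the noise levels are invented for illustration):

```python
import numpy as np

def quiet_noise_correlation(resp_quiet, resp_noise, fs, t0=0.011, t1=0.050):
    """Pearson correlation of two brainstem responses over the sustained
    (frequency-following) window, here 11-50 ms after stimulus onset."""
    i0, i1 = int(t0 * fs), int(t1 * fs)
    return float(np.corrcoef(resp_quiet[i0:i1], resp_noise[i0:i1])[0, 1])

# Toy responses: a shared 100 Hz following response plus independent noise,
# with the noise condition degraded more strongly.
fs = 20000
t = np.arange(0, 0.06, 1 / fs)
rng = np.random.default_rng(1)
ffr = np.sin(2 * np.pi * 100 * t)
quiet = ffr + 0.2 * rng.standard_normal(t.size)
noisy = ffr + 0.8 * rng.standard_normal(t.size)
r = quiet_noise_correlation(quiet, noisy, fs)
```

    Improved encoding precision (less independent noise in the two conditions) raises this correlation, which is the quantity reported to increase with training.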

  12. Time-sharing visual and auditory tracking tasks

    NASA Technical Reports Server (NTRS)

    Tsang, Pamela S.; Vidulich, Michael A.

    1987-01-01

    An experiment is described which examined the benefits of distributing the input demands of two tracking tasks as a function of task integrality. Visual and auditory compensatory tracking tasks were utilized. Results indicate that presenting the two tracking signals in two input modalities did not improve time-sharing efficiency. This was attributed to the difficulty insensitivity phenomenon.

  13. Speech Compensation for Time-Scale-Modified Auditory Feedback

    ERIC Educational Resources Information Center

    Ogane, Rintaro; Honda, Masaaki

    2014-01-01

    Purpose: The purpose of this study was to examine speech compensation in response to time-scale-modified auditory feedback during the transition of the semivowel for a target utterance of /ija/. Method: Each utterance session consisted of 10 control trials in the normal feedback condition followed by 20 perturbed trials in the modified auditory…

  15. Predictive coding of multisensory timing

    PubMed Central

    Shi, Zhuanghua; Burr, David

    2016-01-01

    The sense of time is foundational for perception and action, yet it frequently departs significantly from physical time. In this paper, we review recent progress on temporal contextual effects, multisensory temporal integration, temporal recalibration, and related computational models. We suggest that subjective time arises from minimizing prediction errors and adaptive recalibration, which can be unified in the framework of predictive coding, a framework rooted in Helmholtz’s ‘perception as inference’. PMID:27695705

  16. The Time Course of Neural Changes Underlying Auditory Perceptual Learning

    PubMed Central

    Atienza, Mercedes; Cantero, Jose L.; Dominguez-Marin, Elena

    2002-01-01

    Improvement in perception takes place within the training session and from one session to the next. The present study aims at determining the time course of perceptual learning as revealed by changes in auditory event-related potentials (ERPs) reflecting preattentive processes. Subjects were trained to discriminate two complex auditory patterns in a single session. ERPs were recorded just before and after training, while subjects read a book and ignored stimulation. ERPs showed a negative wave called mismatch negativity (MMN)—which indexes automatic detection of a change in a homogeneous auditory sequence—just after subjects learned to consciously discriminate the two patterns. ERPs were recorded again 12, 24, 36, and 48 h later, just before testing performance on the discrimination task. Additional behavioral and neurophysiological changes were found several hours after the training session: an enhanced P2 at 24 h followed by shorter reaction times, and an enhanced MMN at 36 h. These results indicate that gains in performance on the discrimination of two complex auditory patterns are accompanied by different learning-dependent neurophysiological events evolving within different time frames, supporting the hypothesis that fast and slow neural changes underlie the acquisition of improved perception. PMID:12075002

  17. GOES satellite time code dissemination

    NASA Technical Reports Server (NTRS)

    Beehler, R. E.

    1983-01-01

    The GOES time code system, the performance achieved to date, and some potential improvements in the future are discussed. The disseminated time code is originated from a triply redundant set of atomic standards, time code generators and related equipment maintained by NBS at NOAA's Wallops Island, VA satellite control facility. It is relayed by two GOES satellites located at 75 W and 135 W longitude on a continuous basis to users within North and South America (with overlapping coverage) and well out into the Atlantic and Pacific ocean areas. Downlink frequencies are near 468 MHz. The signals from both satellites are monitored and controlled from the NBS labs at Boulder, CO with additional monitoring input from geographically separated receivers in Washington, D.C. and Hawaii. Performance experience with the received time codes for periods ranging from several years to one day is discussed. Results are also presented for simultaneous, common-view reception by co-located receivers and by receivers separated by several thousand kilometers.

  18. Development of Visuo-Auditory Integration in Space and Time

    PubMed Central

    Gori, Monica; Sandini, Giulio; Burr, David

    2012-01-01

    Adults integrate multisensory information optimally (e.g., Ernst and Banks, 2002) while children do not integrate multisensory visual-haptic cues until 8–10 years of age (e.g., Gori et al., 2008). Before that age strong unisensory dominance occurs for size and orientation visual-haptic judgments, possibly reflecting a process of cross-sensory calibration between modalities. It is widely recognized that audition dominates time perception, while vision dominates space perception. Within the framework of the cross-sensory calibration hypothesis, we investigate visual-auditory integration in both space and time with child-friendly spatial and temporal bisection tasks. Unimodal and bimodal (conflictual and not) audio-visual thresholds and PSEs were measured and compared with the Bayesian predictions. In the temporal domain, we found that both in children and adults, audition dominates the bimodal visuo-auditory task both in perceived time and precision thresholds. On the contrary, in the visual-auditory spatial task, children younger than 12 years of age show clear visual dominance (for PSEs), and bimodal thresholds higher than the Bayesian prediction. Only in the adult group did bimodal thresholds become optimal. In agreement with previous studies, our results suggest that also visual-auditory adult-like behavior develops late. We suggest that the visual dominance for space and the auditory dominance for time could reflect a cross-sensory comparison of vision in the spatial visuo-audio task and a cross-sensory comparison of audition in the temporal visuo-audio task. PMID:23060759
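    The Bayesian predictions against which the bimodal thresholds were compared follow a standard closed form (Ernst and Banks, 2002): each cue is weighted by its inverse variance, and the predicted bimodal threshold lies below both unimodal ones. A sketch with invented bisection values (the PSEs and sigmas below are illustrative, not data from the study):

```python
import numpy as np

def bayesian_prediction(pse_a, sigma_a, pse_v, sigma_v):
    """Maximum-likelihood cue combination: predicted bimodal PSE and
    threshold from the two unimodal estimates."""
    w_a = sigma_v ** 2 / (sigma_a ** 2 + sigma_v ** 2)   # auditory weight
    pse = w_a * pse_a + (1 - w_a) * pse_v
    sigma = np.sqrt((sigma_a ** 2 * sigma_v ** 2) / (sigma_a ** 2 + sigma_v ** 2))
    return pse, sigma

# Illustrative temporal-bisection values (ms): audition is the more reliable
# cue here, so it should dominate the predicted bimodal percept.
pse, sigma = bayesian_prediction(pse_a=500, sigma_a=40, pse_v=530, sigma_v=80)
```

    With these numbers the auditory weight is 0.8, the predicted PSE is pulled toward the auditory estimate (506 ms), and the predicted bimodal threshold (about 35.8 ms) is better than either cue alone; bimodal thresholds above this prediction are what the study reports in children.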

  19. Context-dependent coding and gain control in the auditory system of crickets.

    PubMed

    Clemens, Jan; Rau, Florian; Hennig, R Matthias; Hildebrandt, K Jannis

    2015-10-01

    Sensory systems process stimuli that greatly vary in intensity and complexity. To maintain efficient information transmission, neural systems need to adjust their properties to these different sensory contexts, yielding adaptive or stimulus-dependent codes. Here, we demonstrated adaptive spectrotemporal tuning in a small neural network, i.e. the peripheral auditory system of the cricket. We found that tuning of cricket auditory neurons was sharper for complex multi-band than for simple single-band stimuli. Information theoretical considerations revealed that this sharpening improved information transmission by separating the neural representations of individual stimulus components. A network model inspired by the structure of the cricket auditory system suggested two putative mechanisms underlying this adaptive tuning: a saturating peripheral nonlinearity could change the spectral tuning, whereas broad feed-forward inhibition was able to reproduce the observed adaptive sharpening of temporal tuning. Our study revealed a surprisingly dynamic code usually found in more complex nervous systems and suggested that stimulus-dependent codes could be implemented using common neural computations.
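    The second proposed mechanism, broad feed-forward inhibition sharpening temporal tuning, can be caricatured in a few lines of a rate model (the time constant, delay, and inhibition strength below are invented; this is a sketch of the idea, not the paper's fitted network model):

```python
import numpy as np

def response(stim, inh_strength=0.8, inh_delay=5, tau=20.0):
    """Toy rate model: output = excitation (the stimulus envelope) minus a
    broad, delayed feed-forward inhibition (a low-pass-filtered copy)."""
    kernel = np.exp(-np.arange(100) / tau)
    inh = np.convolve(stim, kernel)[:len(stim)] / kernel.sum()
    inh = np.roll(inh, inh_delay)
    inh[:inh_delay] = 0.0
    return np.maximum(stim - inh_strength * inh, 0.0)   # rates are non-negative

# A sustained step stimulus: inhibition turns the sustained response into a
# transient one, emphasizing the onset (sharper temporal tuning).
stim = np.zeros(300)
stim[100:250] = 1.0
with_inh = response(stim)
without = response(stim, inh_strength=0.0)
```

    Without inhibition the model simply follows the step; with it, the response peaks at stimulus onset and then settles to a much lower sustained rate.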

  20. Efficient coding of time-relative structure using spikes.

    PubMed

    Smith, Evan; Lewicki, Michael S

    2005-01-01

    Nonstationary acoustic features provide essential cues for many auditory tasks, including sound localization, auditory stream analysis, and speech recognition. These features can best be characterized relative to a precise point in time, such as the onset of a sound or the beginning of a harmonic periodicity. Extracting these types of features is a difficult problem. Part of the difficulty is that with standard block-based signal analysis methods, the representation is sensitive to the arbitrary alignment of the blocks with respect to the signal. Convolutional techniques such as shift-invariant transformations can reduce this sensitivity, but these do not yield a code that is efficient, that is, one that forms a nonredundant representation of the underlying structure. Here, we develop a non-block-based method for signal representation that is both time relative and efficient. Signals are represented using a linear superposition of time-shiftable kernel functions, each with an associated magnitude and temporal position. Signal decomposition in this method is a non-linear process that consists of optimizing the kernel function scaling coefficients and temporal positions to form an efficient, shift-invariant representation. We demonstrate the properties of this representation for the purpose of characterizing structure in various types of nonstationary acoustic signals. The computational problem investigated here has direct relevance to the neural coding at the auditory nerve and the more general issue of how to encode complex, time-varying signals with a population of spiking neurons.
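    A minimal version of such a time-relative spike code is greedy matching pursuit with a time-shiftable kernel: each step records a (kernel, time, amplitude) triple, analogous to a spike. The gammatone-like kernel and event times below are invented for illustration:

```python
import numpy as np

def matching_pursuit(signal, kernels, n_events):
    """Greedy decomposition of a signal into time-shifted kernel events.
    Each event is a (kernel_index, time, amplitude) triple."""
    residual = signal.astype(float)
    events = []
    for _ in range(n_events):
        best = None
        for k, ker in enumerate(kernels):
            # Projection coefficient of the residual onto the kernel at each shift.
            corr = np.correlate(residual, ker, mode="valid") / np.dot(ker, ker)
            t = int(np.argmax(np.abs(corr)))
            if best is None or abs(corr[t]) > abs(best[2]):
                best = (k, t, corr[t])
        k, t, a = best
        residual[t:t + len(kernels[k])] -= a * kernels[k]
        events.append((k, t, a))
    return events, residual

# Invented example: one damped-sinusoid kernel, two events at known times.
n = np.arange(64)
ker = np.sin(2 * np.pi * n / 16) * np.exp(-n / 20)
sig = np.zeros(400)
sig[50:114] += 1.5 * ker
sig[200:264] += 0.7 * ker
events, residual = matching_pursuit(sig, [ker], n_events=2)
```

    Because the kernels are free to shift, the recovered event times do not depend on any block alignment, which is the property the abstract contrasts with block-based analysis.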

  1. Differences in auditory timing between human and nonhuman primates.

    PubMed

    Honing, Henkjan; Merchant, Hugo

    2014-12-01

    The gradual audiomotor evolution hypothesis is proposed as an alternative interpretation to the auditory timing mechanisms discussed in Ackermann et al.'s article. This hypothesis accommodates the fact that the performance of nonhuman primates is comparable to humans in single-interval tasks (such as interval reproduction, categorization, and interception), but shows differences in multiple-interval tasks (such as entrainment, synchronization, and continuation).

  2. Seasonal Plasticity of Precise Spike Timing in the Avian Auditory System

    PubMed Central

    Sen, Kamal; Rubel, Edwin W; Brenowitz, Eliot A.

    2015-01-01

    Vertebrate audition is a dynamic process, capable of exhibiting both short- and long-term adaptations to varying listening conditions. Precise spike timing has long been known to play an important role in auditory encoding, but its role in sensory plasticity remains largely unexplored. We addressed this issue in Gambel's white-crowned sparrow (Zonotrichia leucophrys gambelii), a songbird that shows pronounced seasonal fluctuations in circulating levels of sex-steroid hormones, which are known to be potent neuromodulators of auditory function. We recorded extracellular single-unit activity in the auditory forebrain of males and females under different breeding conditions and used a computational approach to explore two potential strategies for the neural discrimination of sound level: one based on spike counts and one based on spike timing reliability. We report that breeding condition has robust sex-specific effects on spike timing. Specifically, in females, breeding condition increases the proportion of cells that rely solely on spike timing information and increases the temporal resolution required for optimal intensity encoding. Furthermore, in a functionally distinct subset of cells that are particularly well suited for amplitude encoding, female breeding condition enhances spike timing-based discrimination accuracy. No effects of breeding condition were observed in males. Our results suggest that high-resolution temporal discharge patterns may provide a plastic neural substrate for sensory coding. PMID:25716843
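    The two discrimination strategies compared here, spike counts versus spike timing, can be contrasted in a small simulation: two "sound levels" that evoke identical spike counts but shifted latencies are indistinguishable from counts alone, yet easily separated once responses are binned finely (all latencies, jitter values, and trial counts below are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

def trial(level, jitter=0.002, n=8):
    """Simulated spike train: both 'levels' evoke n spikes, but level 1
    fires at latencies shifted by 10 ms (a pure spike-timing difference)."""
    base = np.linspace(0.01, 0.15, n) + (0.01 if level else 0.0)
    return np.sort(base + rng.normal(0.0, jitter, n))

def binned(spikes, width, t_max=0.2):
    return np.histogram(spikes, bins=np.arange(0.0, t_max + width, width))[0]

def accuracy(width, n_trials=50):
    """Leave-one-out nearest-neighbour discrimination of the two levels
    from responses binned at the given temporal resolution."""
    X = np.array([binned(trial(lv), width)
                  for lv in (0, 1) for _ in range(n_trials)], float)
    y = np.repeat([0, 1], n_trials)
    hits = 0
    for i in range(len(X)):
        d = np.sum((X - X[i]) ** 2, axis=1)
        d[i] = np.inf
        hits += y[np.argmin(d)] == y[i]
    return hits / len(X)

coarse = accuracy(width=0.2)      # a single bin: spike count only
fine = accuracy(width=0.005)      # 5 ms bins: timing is available
```

    The coarse (count-based) decoder sits at chance because the counts are identical, while the fine-resolution decoder discriminates well, mirroring the cells in the study that rely solely on spike timing information.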

  3. A real-time auditory feedback system for retraining gait.

    PubMed

    Maulucci, Ruth A; Eckhouse, Richard H

    2011-01-01

    Stroke is the third leading cause of death in the United States and the principal cause of major long-term disability, incurring substantial distress as well as medical cost. Abnormal and inefficient gait patterns are widespread in survivors of stroke, yet gait is a major determinant of independent living. It is not surprising, therefore, that improvement of walking function is the most commonly stated priority of the survivors. Although many such individuals achieve the goal of walking, the caliber of their walking performance often limits endurance and quality of life. The ultimate goal of the research presented here is to use real-time auditory feedback to retrain gait in patients with chronic stroke. The strategy is to convert the motion of the foot into an auditory signal, and then use this auditory signal as feedback to inform the subject of the existence as well as the magnitude of error during walking. The initial stage of the project is described in this paper. The design and implementation of the new feedback method for lower limb training is explained. The question of whether the patient is physically capable of handling such training is explored. PMID:22255509

  4. Adaptation to shifted interaural time differences changes encoding of sound location in human auditory cortex.


    PubMed

    Trapeau, Régis; Schönwiesner, Marc

    2015-09-01

The auditory system infers the location of sound sources from the processing of different acoustic cues. These cues change during development and when assistive hearing devices are worn. Previous studies have found behavioral recalibration to modified localization cues in human adults, but very little is known about the neural correlates and mechanisms of this plasticity. We equipped participants with digital devices worn in the ear canal, which allowed us to delay the sound input to one ear and thus modify interaural time differences, a major cue for horizontal sound localization. Participants wore the digital earplugs continuously for nine days while engaged in day-to-day activities. Daily psychoacoustical testing showed rapid recalibration to the manipulation and confirmed that adults can adapt to shifted interaural time differences in their daily multisensory environment. High-resolution functional MRI scans performed before and after recalibration showed that recalibration was accompanied by changes in hemispheric lateralization of auditory cortex activity. These changes corresponded to a shift in spatial coding of sound direction comparable to the observed behavioral recalibration. Fitting the imaging results with a model of auditory spatial processing also revealed small shifts in voxel-wise spatial tuning within each hemisphere. PMID:26054873

  5. Auditory Stimuli Coding by Postsynaptic Potential and Local Field Potential Features

    PubMed Central

    de Assis, Juliana M.; Santos, Mikaelle O.; de Assis, Francisco M.

    2016-01-01

The relation between physical stimuli and neurophysiological responses, such as action potentials (spikes) and Local Field Potentials (LFP), has recently been investigated in order to explain how neurons encode auditory information. However, none of these experiments presented analyses with postsynaptic potentials (PSPs). In the present study, we have estimated information values between auditory stimuli and amplitudes/latencies of PSPs and LFPs in anesthetized rats in vivo. To obtain these values, a new method of information estimation was used. This method produced more accurate estimates than those obtained by using the traditional binning method; a fact that was corroborated by simulated data. The traditional binning method could not attain such accuracy even when adjusted by quadratic extrapolation. We found that the information obtained from LFP amplitude variation was significantly greater than the information obtained from PSP amplitude variation. This confirms the fact that LFP reflects the action of many PSPs. Results have shown that the auditory cortex encodes more information about stimulus frequency with slow oscillations in groups of neurons than with slow oscillations in individual neurons. PMID:27513950
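The "binning" estimator that the new method is compared against can be sketched as a plug-in estimate of the mutual information I(S;R) from a joint histogram; its small-sample bias is the reason corrections such as quadratic extrapolation are applied. The data and bin count below are toy values, not the study's recordings.

```python
# Sketch of the classical "binning" (plug-in) mutual-information estimator:
# discretize the responses, build a joint histogram with the stimuli, and
# sum p*log2(p / (p_s * p_r)). Illustrative only; the paper's new estimator
# is a different (more accurate) method.
from math import log2
from collections import Counter

def binned_mi(stimuli, responses, n_bins):
    """Plug-in mutual information (bits) after binning the responses."""
    lo, hi = min(responses), max(responses)
    width = (hi - lo) / n_bins or 1.0          # avoid zero width for flat data
    binned = [min(int((r - lo) / width), n_bins - 1) for r in responses]
    n = len(stimuli)
    ps, pr = Counter(stimuli), Counter(binned)
    pj = Counter(zip(stimuli, binned))
    mi = 0.0
    for (s, r), c in pj.items():
        p_sr = c / n
        mi += p_sr * log2(p_sr / ((ps[s] / n) * (pr[r] / n)))
    return mi

# Two stimuli with perfectly separated response amplitudes: about 1 bit.
stims = [0, 0, 0, 0, 1, 1, 1, 1]
resps = [0.1, 0.2, 0.15, 0.12, 0.9, 0.8, 0.85, 0.95]
print(binned_mi(stims, resps, n_bins=2))
```

With few trials and many bins this estimator is biased upward, which is what the quadratic-extrapolation correction mentioned in the abstract tries to remove.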

  6. Auditory cortical field coding long-lasting tonal offsets in mice

    PubMed Central

    Baba, Hironori; Tsukano, Hiroaki; Hishida, Ryuichi; Takahashi, Kuniyuki; Horii, Arata; Takahashi, Sugata; Shibuki, Katsuei

    2016-01-01

    Although temporal information processing is important in auditory perception, the mechanisms for coding tonal offsets are unknown. We investigated cortical responses elicited at the offset of tonal stimuli using flavoprotein fluorescence imaging in mice. Off-responses were clearly observed at the offset of tonal stimuli lasting for 7 s, but not after stimuli lasting for 1 s. Off-responses to the short stimuli appeared in a similar cortical region, when conditioning tonal stimuli lasting for 5–20 s preceded the stimuli. MK-801, an inhibitor of NMDA receptors, suppressed the two types of off-responses, suggesting that disinhibition produced by NMDA receptor-dependent synaptic depression might be involved in the off-responses. The peak off-responses were localized in a small region adjacent to the primary auditory cortex, and no frequency-dependent shift of the response peaks was found. Frequency matching of preceding tonal stimuli with short test stimuli was not required for inducing off-responses to short stimuli. Two-photon calcium imaging demonstrated significantly larger neuronal off-responses to stimuli lasting for 7 s in this field, compared with off-responses to stimuli lasting for 1 s. The present results indicate the presence of an auditory cortical field responding to long-lasting tonal offsets, possibly for temporal information processing. PMID:27687766

  7. Late visual and auditory ERP components and choice reaction time.

    PubMed

    Falkenstein, M; Hohnsbein, J; Hoormann, J

    1993-07-01

    Some relations between different late positive ERP components and choice reaction time (RT) were studied. In order to identify the different components we used visual and auditory stimuli, as well as simple and choice reaction tasks, since one of the components is thought to be modality dependent and the other one task dependent. In the paradigm the stimulus modalities were mixed, which was expected to lead to a maximum dissociation of the components after auditory stimuli (Hohnsbein et al. (1991). Electroencephalography and Clinical Neurophysiology, 78, 438-446). The results demonstrated the overlap of two positive waves in choice reaction tasks: a central one (P-SR), and a parietal one (P-CR). The latency of the P-SR varied greatly across modalities, but did not vary with RT, whereas the latency of the P-CR varied strongly with RT. The different overlap of these components on fast and slow trials caused amplitude and latency variations of the "P300" and the positive slow wave. Our results suggest a relation of the P-SR with stimulus evaluation (identification), and of the P-CR with response selection (stimulus-response mapping).

  8. The neural code for auditory space depends on sound frequency and head size in an optimal manner.

    PubMed

    Harper, Nicol S; Scott, Brian H; Semple, Malcolm N; McAlpine, David

    2014-01-01

    A major cue to the location of a sound source is the interaural time difference (ITD)-the difference in sound arrival time at the two ears. The neural representation of this auditory cue is unresolved. The classic model of ITD coding, dominant for a half-century, posits that the distribution of best ITDs (the ITD evoking a neuron's maximal response) is unimodal and largely within the range of ITDs permitted by head-size. This is often interpreted as a place code for source location. An alternative model, based on neurophysiology in small mammals, posits a bimodal distribution of best ITDs with exquisite sensitivity to ITDs generated by means of relative firing rates between the distributions. Recently, an optimal-coding model was proposed, unifying the disparate features of these two models under the framework of efficient coding by neural populations. The optimal-coding model predicts that distributions of best ITDs depend on head size and sound frequency: for high frequencies and large heads it resembles the classic model, for low frequencies and small head sizes it resembles the bimodal model. The optimal-coding model makes key, yet unobserved, predictions: for many species, including humans, both forms of neural representation are employed, depending on sound frequency. Furthermore, novel representations are predicted for intermediate frequencies. Here, we examine these predictions in neurophysiological data from five mammalian species: macaque, guinea pig, cat, gerbil and kangaroo rat. We present the first evidence supporting these untested predictions, and demonstrate that different representations appear to be employed at different sound frequencies in the same species. PMID:25372405
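The place-code idea discussed above, a bank of coincidence detectors with different internal delays whose best ITD marks the source direction, can be sketched by scanning a cross-correlation over candidate lags. The signals and lag range are toy values for illustration.

```python
# Minimal sketch of ITD extraction by cross-correlation: each candidate lag
# plays the role of one coincidence detector's internal delay, and the lag
# with the largest inner product is the "best ITD". Illustrative, not the
# paper's model code.

def best_itd(left, right, max_lag):
    """Lag (in samples) of `right` relative to `left` maximizing coincidence."""
    def corr(lag):
        return sum(left[i] * right[i + lag]
                   for i in range(len(left))
                   if 0 <= i + lag < len(right))
    return max(range(-max_lag, max_lag + 1), key=corr)

# A click arriving 3 samples earlier at the left ear:
left  = [0, 0, 1, 2, 1, 0, 0, 0, 0, 0]
right = [0, 0, 0, 0, 0, 1, 2, 1, 0, 0]
print(best_itd(left, right, max_lag=5))
```

The optimal-coding model's point is about how the best delays should be *distributed* across a neural population (unimodal vs. bimodal, depending on frequency and head size); this sketch only shows the single-detector computation that the population is built from.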

  11. Left perirhinal cortex codes for similarity in meaning between written words: Comparison with auditory word input.

    PubMed

    Liuzzi, Antonietta Gabriella; Bruffaerts, Rose; Dupont, Patrick; Adamczuk, Katarzyna; Peeters, Ronald; De Deyne, Simon; Storms, Gerrit; Vandenberghe, Rik

    2015-09-01

Left perirhinal cortex has been previously implicated in associative coding. According to a recent experiment, the similarity of perirhinal fMRI response patterns to written concrete words is higher for words which are more similar in their meaning. If left perirhinal cortex functions as an amodal semantic hub, one would predict that this semantic similarity effect would extend to the spoken modality. We conducted an event-related fMRI experiment and evaluated whether the same semantic similarity effect could be obtained for spoken as for written words. Twenty healthy subjects performed a property verification task in either the written or the spoken modality. Words corresponded to concrete animate entities for which extensive feature generation was available from more than 1000 subjects. From these feature generation data, a concept-feature matrix was derived which formed the basis of a cosine similarity matrix between the entities reflecting their similarity in meaning (called the "semantic cossimilarity matrix"). Independently, we calculated a cosine similarity matrix between the left perirhinal fMRI activity patterns evoked by the words (called the "fMRI cossimilarity matrix"). Next, the similarity was determined between the semantic cossimilarity matrix and the fMRI cossimilarity matrix. This was done for written and spoken words pooled, for written words only, for spoken words only, as well as for crossmodal pairs. Only for written words did the fMRI cossimilarity matrix correlate with the semantic cossimilarity matrix. Contrary to our prediction, we did not find any such effect for auditory word input nor did we find cross-modal effects in perirhinal cortex between written and auditory words. Our findings situate the contribution of left perirhinal cortex to word processing at the top of the visual processing pathway, rather than at an amodal stage where visual and auditory word processing pathways have already converged.
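The analysis logic described above (a representational-similarity comparison) can be sketched end to end: build a cosine similarity matrix from semantic feature vectors, build another from activity patterns, and correlate their off-diagonal entries. All data below are toy values, not the study's concept-feature norms or fMRI patterns.

```python
# Sketch of the cossimilarity-matrix comparison: if neural pattern similarity
# tracks semantic similarity, the two matrices' upper triangles correlate.
from math import sqrt

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    return num / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def sim_matrix(vectors):
    n = len(vectors)
    return [[cosine(vectors[i], vectors[j]) for j in range(n)] for i in range(n)]

def upper_triangle(m):
    return [m[i][j] for i in range(len(m)) for j in range(i + 1, len(m))]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Three "concepts": feature vectors, and voxel patterns that mirror them.
semantic = [[1.0, 1.0, 0.0], [1.0, 0.8, 0.0], [0.0, 0.0, 1.0]]
fmri     = [[0.9, 1.1, 0.1], [1.0, 0.9, 0.0], [0.1, 0.0, 1.0]]
r = pearson(upper_triangle(sim_matrix(semantic)),
            upper_triangle(sim_matrix(fmri)))
print(round(r, 2))  # high correlation: neural similarity tracks meaning
```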

  12. Auditory reafferences: the influence of real-time feedback on movement control

    PubMed Central

    Kennel, Christian; Streese, Lukas; Pizzera, Alexandra; Justen, Christoph; Hohmann, Tanja; Raab, Markus

    2015-01-01

    Auditory reafferences are real-time auditory products created by a person’s own movements. Whereas the interdependency of action and perception is generally well studied, the auditory feedback channel and the influence of perceptual processes during movement execution remain largely unconsidered. We argue that movements have a rhythmic character that is closely connected to sound, making it possible to manipulate auditory reafferences online to understand their role in motor control. We examined if step sounds, occurring as a by-product of running, have an influence on the performance of a complex movement task. Twenty participants completed a hurdling task in three auditory feedback conditions: a control condition with normal auditory feedback, a white noise condition in which sound was masked, and a delayed auditory feedback condition. Overall time and kinematic data were collected. Results show that delayed auditory feedback led to a significantly slower overall time and changed kinematic parameters. Our findings complement previous investigations in a natural movement situation with non-artificial auditory cues. Our results support the existing theoretical understanding of action–perception coupling and hold potential for applied work, where naturally occurring movement sounds can be implemented in the motor learning processes. PMID:25688230

  13. Impairment of auditory-motor timing and compensatory reorganization after ventral premotor cortex stimulation.

    PubMed

    Kornysheva, Katja; Schubotz, Ricarda I

    2011-01-01

Integrating auditory and motor information often requires precise timing as in speech and music. In humans, the position of the ventral premotor cortex (PMv) in the dorsal auditory stream renders this area a node for auditory-motor integration. Yet, it remains unknown whether the PMv is critical for auditory-motor timing and which activity increases help to preserve task performance following its disruption. Sixteen healthy volunteers participated in two sessions with fMRI measured at baseline and following repetitive transcranial magnetic stimulation (rTMS) of either the left PMv or a control region. Subjects synchronized left or right finger tapping to sub-second beat rates of auditory rhythms in the experimental task, and produced self-paced tapping during spectrally matched auditory stimuli in the control task. Left PMv rTMS impaired auditory-motor synchronization accuracy in the first sub-block following stimulation (p<0.01, Bonferroni corrected), but spared motor timing and attention to task. Task-related activity increased in the homologue right PMv, but did not predict the behavioral effect of rTMS. In contrast, anterior midline cerebellum revealed the most pronounced activity increase in less impaired subjects. The present findings suggest a critical role of the left PMv in feed-forward computations enabling accurate auditory-motor timing, which can be compensated by activity modulations in the cerebellum, but not in the homologue region contralateral to stimulation. PMID:21738657

  14. Predicted effects of sensorineural hearing loss on across-fiber envelope coding in the auditory nerve

    PubMed Central

    Swaminathan, Jayaganesh; Heinz, Michael G.

    2011-01-01

    Cross-channel envelope correlations are hypothesized to influence speech intelligibility, particularly in adverse conditions. Acoustic analyses suggest speech envelope correlations differ for syllabic and phonemic ranges of modulation frequency. The influence of cochlear filtering was examined here by predicting cross-channel envelope correlations in different speech modulation ranges for normal and impaired auditory-nerve (AN) responses. Neural cross-correlation coefficients quantified across-fiber envelope coding in syllabic (0–5 Hz), phonemic (5–64 Hz), and periodicity (64–300 Hz) modulation ranges. Spike trains were generated from a physiologically based AN model. Correlations were also computed using the model with selective hair-cell damage. Neural predictions revealed that envelope cross-correlation decreased with increased characteristic-frequency separation for all modulation ranges (with greater syllabic-envelope correlation than phonemic or periodicity). Syllabic envelope was highly correlated across many spectral channels, whereas phonemic and periodicity envelopes were correlated mainly between adjacent channels. Outer-hair-cell impairment increased the degree of cross-channel correlation for phonemic and periodicity ranges for speech in quiet and in noise, thereby reducing the number of independent neural information channels for envelope coding. In contrast, outer-hair-cell impairment was predicted to decrease cross-channel correlation for syllabic envelopes in noise, which may partially account for the reduced ability of hearing-impaired listeners to segregate speech in complex backgrounds. PMID:21682421
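The cross-channel metric described above, a correlation between the envelopes of two cochlear channels computed separately per modulation range, can be sketched with a crude filter split. Here a moving average stands in for the modulation bandpass, and random toy signals stand in for the model auditory-nerve responses; both are assumptions for illustration.

```python
# Sketch of cross-channel envelope correlation per modulation range: a shared
# slow (syllabic-like) envelope is correlated across channels, while the
# channels' independent fast detail is not.
import random
from math import sqrt

def smooth(x, w):
    """Moving average of width w: keeps only slow modulations."""
    return [sum(x[max(0, i - w + 1):i + 1]) / (i + 1 - max(0, i - w + 1))
            for i in range(len(x))]

def corrcoef(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Two channel envelopes sharing a slow ramp, with independent fast detail:
random.seed(0)
ramp = [i / 100 for i in range(100)]
ch1 = [r + 0.3 * random.random() for r in ramp]
ch2 = [r + 0.3 * random.random() for r in ramp]

slow = corrcoef(smooth(ch1, 20), smooth(ch2, 20))   # "syllabic" range
fast = corrcoef([a - b for a, b in zip(ch1, smooth(ch1, 20))],
                [a - b for a, b in zip(ch2, smooth(ch2, 20))])
print(slow > fast)
```

The abstract's prediction about outer-hair-cell impairment corresponds, in this picture, to broadened cochlear filters making the channels' "independent" parts more shared, raising the cross-channel correlation in the phonemic and periodicity ranges.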

  15. Bat auditory cortex – model for general mammalian auditory computation or special design solution for active time perception?

    PubMed

    Kössl, Manfred; Hechavarria, Julio; Voss, Cornelia; Schaefer, Markus; Vater, Marianne

    2015-03-01

Audition in bats serves passive orientation, alerting functions and communication as it does in other vertebrates. In addition, bats have evolved echolocation for orientation and prey detection and capture. This placed selective pressure on the auditory system with regard to echolocation-relevant temporal computation and frequency analysis. The present review attempts to evaluate in which respect the processing modules of bat auditory cortex (AC) are a model for typical mammalian AC function or are designed for echolocation-unique purposes. We conclude that, while cortical area arrangement and cortical frequency processing do not deviate greatly from those of other mammals, the echo delay time-sensitive dorsal cortex regions contain special designs for very powerful time perception. Different bat species have either a unique chronotopic cortex topography or a distributed salt-and-pepper representation of echo delay. The two designs seem to enable similar behavioural performance. PMID:25728173

  16. The Times of Ira Hirsh: Multiple Ranges of Auditory Temporal Perception.

    PubMed

    Divenyi, Pierre L

    2004-08-01

    Ira Hirsh was among the first to recognize that the auditory system does not deal with temporal information in a unitary way across the continuum of time intervals involved in speech processing. He identified the short range (extending from 1 to 20 milliseconds) as that of phase perception, the range between 20 and 100 milliseconds as that in which auditory patterns emerge, and the long range from 100 milliseconds and longer as that of separate auditory events. Furthermore, he also was among the first to recognize that auditory time perception heavily depended on spectral context. A study of the perception of sequences representing different temporal orders of three tones, by Hirsh and the author (e.g., Divenyi and Hirsh, 1978) demonstrated the dependence of auditory sequence perception on both time range and spectral context, and provided a bridge between Hirsh's view of auditory time and Bregman's view of stream segregation. A subsequent search by the author for psychophysical underpinnings of the cocktail-party phenomenon (e.g., Divenyi and Haupt, 1997) suggests that segregation of simultaneous streams of speech might rely on the ability to follow spectral changes in the demisyllabic-to-syllabic (100 to 200 milliseconds) range (i.e., Hirsh's long range). Learning Outcomes: As a result of this activity, the participant will be able to (1) describe the importance of temporal processing in hearing; and (2) identify time ranges where the auditory system will spontaneously adopt different analysis techniques.

  17. Coding of Visual, Auditory, Rule, and Response Information in the Brain: 10 Years of Multivoxel Pattern Analysis.

    PubMed

    Woolgar, Alexandra; Jackson, Jade; Duncan, John

    2016-10-01

    How is the processing of task information organized in the brain? Many views of brain function emphasize modularity, with different regions specialized for processing different types of information. However, recent accounts also highlight flexibility, pointing especially to the highly consistent pattern of frontoparietal activation across many tasks. Although early insights from functional imaging were based on overall activation levels during different cognitive operations, in the last decade many researchers have used multivoxel pattern analyses to interrogate the representational content of activations, mapping out the brain regions that make particular stimulus, rule, or response distinctions. Here, we drew on 100 searchlight decoding analyses from 57 published papers to characterize the information coded in different brain networks. The outcome was highly structured. Visual, auditory, and motor networks predominantly (but not exclusively) coded visual, auditory, and motor information, respectively. By contrast, the frontoparietal multiple-demand network was characterized by domain generality, coding visual, auditory, motor, and rule information. The contribution of the default mode network and voxels elsewhere was minor. The data suggest a balanced picture of brain organization in which sensory and motor networks are relatively specialized for information in their own domain, whereas a specific frontoparietal network acts as a domain-general "core" with the capacity to code many different aspects of a task. PMID:27315269

  19. Postural prioritization is differentially altered in healthy older compared to younger adults during visual and auditory coded spatial multitasking.

    PubMed

    Liston, Matthew B; Bergmann, Jeroen H; Keating, Niamh; Green, David A; Pavlou, Marousa

    2014-01-01

    Many daily activities require appropriate allocation of attention between postural and cognitive tasks (i.e. dual-tasking) to be carried out effectively. Processing multiple streams of spatial information is important for everyday tasks such as road crossing. Fifteen community-dwelling healthy older adults (mean age=78.3, male=1) and twenty younger adults (mean age=25.3, male=6) completed a novel bimodal spatial multi-task test providing contextually similar spatial information via separate sensory modalities to investigate effects on postural prioritization. Two tasks, a temporally random visually coded spatial step navigation task (VS) and a regular auditory-coded spatial congruency task (AS) were performed independently (single task) and in combination (multi-task). Response time, accuracy and dual-task costs (% change in multi-task condition) were determined. Results showed a significant 3-way interaction between task type (VS vs. AS), complexity (single vs. multi) and age group for both response time (p ≤ 0.01) and response accuracy (p ≤ 0.05) with older adults performing significantly worse than younger adults. Dual-task costs were significantly greater for older compared to younger adults in the VS step task for both response time (p ≤ 0.01) and accuracy (p ≤ 0.05), indicating prioritization of the AS over the VS stepping task in older adults. Younger adults displayed greater AS task response time dual-task costs compared to older adults (p ≤ 0.05), indicating VS task prioritization in agreement with the posture-first strategy. Findings suggest that novel dual-modality spatial testing may lead to adoption of postural strategies that deviate from posture-first, particularly in older people. Adoption of previously unreported postural prioritization strategies may influence balance control in older people.
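The dual-task cost above is described only as "% change in multi-task condition"; the usual formulation is the percent change from single- to multi-task performance. The sketch below assumes that convention, with illustrative response times rather than the study's data.

```python
# Conventional dual-task-cost computation (an assumption about the study's
# exact formula): percent change from single-task to multi-task performance.
# For response times, a positive cost means slower (worse) in the multi-task
# condition; for accuracy, the sign interpretation flips.

def dual_task_cost(single, multi):
    """Percent change relative to single-task performance."""
    return 100.0 * (multi - single) / single

# Illustrative response times (ms):
print(dual_task_cost(single=650.0, multi=800.0))
```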

  20. Interactions of auditory and visual stimuli in space and time.

    PubMed

    Recanzone, Gregg H

    2009-12-01

    The nervous system has evolved to transduce different types of environmental energy independently, for example light energy is transduced by the retina whereas sound energy is transduced by the cochlea. However, the neural processing of this energy is necessarily combined, resulting in a unified percept of a real-world object or event. These percepts can be modified in the laboratory, resulting in illusions that can be used to probe how multisensory integration occurs. This paper reviews studies that have utilized such illusory percepts in order to better understand the integration of auditory and visual signals in primates. Results from human psychophysical experiments where visual stimuli alter the perception of acoustic space (the ventriloquism effect) are discussed, as are experiments probing the underlying cortical mechanisms of this integration. Similar psychophysical experiments where auditory stimuli alter the perception of visual temporal processing are also described. PMID:19393306

  1. Time course of regional brain activation associated with onset of auditory/verbal hallucinations

    PubMed Central

    Hoffman, Ralph E.; Anderson, Adam W.; Varanko, Maxine; Gore, John C.; Hampson, Michelle

    2008-01-01

    The time course of brain activation prior to onset of auditory/verbal hallucinations was characterised using functional magnetic resonance imaging in six dextral patients with schizophrenia. Composite maps of pre-hallucination periods revealed activation in the left anterior insula and in the right middle temporal gyrus, partially replicating two previous case reports, as well as deactivation in the anterior cingulate and parahippocampal gyri. These findings may reflect brain events that trigger or increase vulnerability to auditory/verbal hallucinations. PMID:18978327

  2. Speech enhancement for listeners with hearing loss based on a model for vowel coding in the auditory midbrain.

    PubMed

    Rao, Akshay; Carney, Laurel H

    2014-07-01

    A novel signal-processing strategy is proposed to enhance speech for listeners with hearing loss. The strategy focuses on improving vowel perception based on a recent hypothesis for vowel coding in the auditory system. Traditionally, studies of neural vowel encoding have focused on the representation of formants (peaks in vowel spectra) in the discharge patterns of the population of auditory-nerve (AN) fibers. A recent hypothesis focuses instead on vowel encoding in the auditory midbrain, and suggests a robust representation of formants. AN fiber discharge rates are characterized by pitch-related fluctuations having frequency-dependent modulation depths. Fibers tuned to frequencies near formants exhibit weaker pitch-related fluctuations than those tuned to frequencies between formants. Many auditory midbrain neurons show tuning to amplitude modulation frequency in addition to audio frequency. According to the auditory midbrain vowel encoding hypothesis, the response map of a population of midbrain neurons tuned to modulations near voice pitch exhibits minima near formant frequencies, due to the lack of strong pitch-related fluctuations at their inputs. This representation is robust over the range of noise conditions in which speech intelligibility is also robust for normal-hearing listeners. Based on this hypothesis, a vowel-enhancement strategy has been proposed that aims to restore vowel encoding at the level of the auditory midbrain. The signal processing consists of pitch tracking, formant tracking, and formant enhancement. The novel formant-tracking method proposed here estimates the first two formant frequencies by modeling characteristics of the auditory periphery, such as saturated discharge rates of AN fibers and modulation tuning properties of auditory midbrain neurons. The formant enhancement stage aims to restore the representation of formants at the level of the midbrain by increasing the dominance of a single harmonic near each formant and saturating
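The enhancement step described above, making a single harmonic dominate near each formant, can be sketched as a per-harmonic gain profile built from the pitch and formant estimates. The pitch, formant values, harmonic count, and gain are illustrative assumptions, not the paper's parameters.

```python
# Sketch of the formant-enhancement idea: given an estimated pitch f0 and
# formant frequencies, boost the single harmonic of f0 closest to each
# formant, leaving all other harmonics untouched.

def harmonics_to_boost(f0, formants):
    """For each formant, return the nearest harmonic frequency of f0."""
    return [round(f / f0) * f0 for f in formants]

def gain_profile(f0, formants, n_harmonics, boost=4.0):
    """Per-harmonic linear gains: `boost` at chosen harmonics, 1.0 elsewhere."""
    chosen = set(harmonics_to_boost(f0, formants))
    return {k * f0: (boost if k * f0 in chosen else 1.0)
            for k in range(1, n_harmonics + 1)}

# A vowel with pitch 100 Hz and the first two formants at 520 and 1480 Hz:
gains = gain_profile(f0=100.0, formants=[520.0, 1480.0], n_harmonics=20)
print(gains[500.0], gains[1500.0], gains[700.0])
```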

  3. Speech enhancement for listeners with hearing loss based on a model for vowel coding in the auditory midbrain.

    PubMed

    Rao, Akshay; Carney, Laurel H

    2014-07-01

    A novel signal-processing strategy is proposed to enhance speech for listeners with hearing loss. The strategy focuses on improving vowel perception based on a recent hypothesis for vowel coding in the auditory system. Traditionally, studies of neural vowel encoding have focused on the representation of formants (peaks in vowel spectra) in the discharge patterns of the population of auditory-nerve (AN) fibers. A recent hypothesis focuses instead on vowel encoding in the auditory midbrain, and suggests a robust representation of formants. AN fiber discharge rates are characterized by pitch-related fluctuations having frequency-dependent modulation depths. Fibers tuned to frequencies near formants exhibit weaker pitch-related fluctuations than those tuned to frequencies between formants. Many auditory midbrain neurons show tuning to amplitude modulation frequency in addition to audio frequency. According to the auditory midbrain vowel encoding hypothesis, the response map of a population of midbrain neurons tuned to modulations near voice pitch exhibits minima near formant frequencies, due to the lack of strong pitch-related fluctuations at their inputs. This representation is robust over the range of noise conditions in which speech intelligibility is also robust for normal-hearing listeners. Based on this hypothesis, a vowel-enhancement strategy has been proposed that aims to restore vowel encoding at the level of the auditory midbrain. The signal processing consists of pitch tracking, formant tracking, and formant enhancement. The novel formant-tracking method proposed here estimates the first two formant frequencies by modeling characteristics of the auditory periphery, such as saturated discharge rates of AN fibers and modulation tuning properties of auditory midbrain neurons. 
The formant enhancement stage aims to restore the representation of formants at the level of the midbrain by increasing the dominance of a single harmonic near each formant and saturating
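To make the enhancement step concrete, here is a minimal, hypothetical sketch of the idea described in the abstract above: boosting the single harmonic of the fundamental nearest each estimated formant. The 6 dB gain, the flat input spectrum, and the function name are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def enhance_formants(harmonic_amps, f0, formants, gain_db=6.0):
    # Illustrative sketch (not the authors' algorithm): amplify the single
    # harmonic of f0 that lies closest to each estimated formant frequency.
    amps = np.asarray(harmonic_amps, dtype=float).copy()
    harmonic_freqs = f0 * np.arange(1, len(amps) + 1)  # frequencies of harmonics 1..N
    gain = 10.0 ** (gain_db / 20.0)
    for f in formants:
        k = int(np.argmin(np.abs(harmonic_freqs - f)))  # index of nearest harmonic
        amps[k] *= gain
    return amps

# Flat 10-harmonic spectrum at F0 = 100 Hz; assumed formants at 500 and 900 Hz.
out = enhance_formants(np.ones(10), 100.0, [500.0, 900.0])
```

With this toy input, only the harmonics at 500 Hz and 900 Hz are amplified; all others keep unit amplitude.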

  4. Jozef Zwislocki: Impact on models of coding in the auditory nerve

    NASA Astrophysics Data System (ADS)

    Sachs, Murray B.

    2003-04-01

    The auditory nerve has long been considered a window on the biophysical mechanisms of cochlear transduction and the most carefully characterized aspect of the responses of single auditory-nerve fibers has been the tuning curve. Perhaps the most intensively studied question in auditory theory is: What is the relationship between the shapes of these tuning curves and basilar membrane displacements? The basilar membrane measurements of Georg von Bekesy stimulated a generation of basilar-membrane modelers, none more notable than Joe Zwislocki, who was awarded the first von Bekesy Medal by the Acoustical Society in 1985. The impact of Zwislocki's basilar membrane models on our understanding of auditory nerve tuning will be reviewed. The properties of auditory-nerve discharge patterns are also shaped by the filtering properties of the hair cell/synapse complex. The major contributions of Joe and his students to our understanding of this filtering through their elegant experimental and modeling studies of adaptation in the auditory nerve will be presented. Throughout his career, Joe Zwislocki has maintained an active interest in loudness summation and his work in relating the input/output characteristics of auditory-nerve fibers to loudness will be highlighted.

  5. Contributions of Sensory Coding and Attentional Control to Individual Differences in Performance in Spatial Auditory Selective Attention Tasks

    PubMed Central

    Dai, Lengshi; Shinn-Cunningham, Barbara G.

    2016-01-01

    Listeners with normal hearing thresholds (NHTs) differ in their ability to steer attention to whatever sound source is important. This ability depends on top-down executive control, which modulates the sensory representation of sound in the cortex. Yet, this sensory representation also depends on the coding fidelity of the peripheral auditory system. Both of these factors may thus contribute to the individual differences in performance. We designed a selective auditory attention paradigm in which we could simultaneously measure envelope following responses (EFRs, reflecting peripheral coding), onset event-related potentials (ERPs) from the scalp (reflecting cortical responses to sound) and behavioral scores. We performed two experiments that varied stimulus conditions to alter the degree to which performance might be limited due to fine stimulus details vs. due to control of attentional focus. Consistent with past work, in both experiments we find that attention strongly modulates cortical ERPs. Importantly, in Experiment I, where coding fidelity limits the task, individual behavioral performance correlates with subcortical coding strength (derived by computing how the EFR is degraded for fully masked tones compared to partially masked tones); however, in this experiment, the effects of attention on cortical ERPs were unrelated to individual subject performance. In contrast, in Experiment II, where sensory cues for segregation are robust (and thus less of a limiting factor on task performance), inter-subject behavioral differences correlate with subcortical coding strength. In addition, after factoring out the influence of subcortical coding strength, behavioral differences are also correlated with the strength of attentional modulation of ERPs. 
These results support the hypothesis that behavioral abilities amongst listeners with NHTs can arise due to both subcortical coding differences and differences in attentional control, depending on stimulus characteristics.

  6. I can see what you are saying: Auditory labels reduce visual search times.

    PubMed

    Cho, Kit W

    2016-10-01

    The present study explored the self-directed-speech effect, the finding that relative to silent reading of a label (e.g., DOG), saying it aloud reduces visual search reaction times (RTs) for locating a target picture among distractors. Experiment 1 examined whether this effect is due to a confound in the differences in the number of cues in self-directed speech (two) vs. silent reading (one) and tested whether self-articulation is required for the effect. The results showed that self-articulation is not required and that merely hearing the auditory label reduces visual search RTs relative to silent reading. This finding also rules out the number of cues confound. Experiment 2 examined whether hearing an auditory label activates more prototypical features of the label's referent and whether the auditory-label benefit is moderated by the target's imagery concordance (the degree to which the target picture matches the mental picture that is activated by a written label for the target). When the target imagery concordance was high, RTs following the presentation of a high prototypicality picture or auditory cue were comparable and shorter than RTs following a visual label or low prototypicality picture cue. However, when the target imagery concordance was low, RTs following an auditory cue were shorter than the comparable RTs following the picture cues and visual-label cue. The results suggest that an auditory label activates both prototypical and atypical features of a concept and can facilitate visual search RTs even when compared to picture primes.

  7. Multiplicative auditory spatial receptive fields created by a hierarchy of population codes.

    PubMed

    Fischer, Brian J; Anderson, Charles H; Peña, José Luis

    2009-01-01

    A multiplicative combination of tuning to interaural time difference (ITD) and interaural level difference (ILD) contributes to the generation of spatially selective auditory neurons in the owl's midbrain. Previous analyses of multiplicative responses in the owl have not taken into consideration the frequency-dependence of ITD and ILD cues that occur under natural listening conditions. Here, we present a model for the responses of ITD- and ILD-sensitive neurons in the barn owl's inferior colliculus which satisfies constraints raised by experimental data on frequency convergence, multiplicative interaction of ITD and ILD, and response properties of afferent neurons. We propose that multiplication between ITD- and ILD-dependent signals occurs only within frequency channels and that frequency integration occurs using a linear-threshold mechanism. The model reproduces the experimentally observed nonlinear responses to ITD and ILD in the inferior colliculus, with greater accuracy than previous models. We show that linear-threshold frequency integration allows the system to represent multiple sound sources with natural sound localization cues, whereas multiplicative frequency integration does not. Nonlinear responses in the owl's inferior colliculus can thus be generated using a combination of cellular and network mechanisms, showing that multiple elements of previous theories can be combined in a single system. PMID:19956693
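The model structure described above can be illustrated numerically: Gaussian ITD and ILD tunings multiply within each frequency channel, and channels are then combined by linear summation followed by a threshold. All tuning widths, weights, and thresholds below are invented for illustration; this is a sketch of the architecture, not the fitted model.

```python
import numpy as np

def space_response(itd, ild, centers_itd, centers_ild, weights, threshold):
    # Within each frequency channel, Gaussian ITD and ILD tunings combine
    # multiplicatively; channels are summed linearly and half-wave rectified.
    itd_tuning = np.exp(-0.5 * ((itd - centers_itd) / 30.0) ** 2)
    ild_tuning = np.exp(-0.5 * ((ild - centers_ild) / 5.0) ** 2)
    channel = itd_tuning * ild_tuning                  # multiplication within channels
    drive = float(np.dot(weights, channel)) - threshold
    return max(drive, 0.0)                             # linear-threshold integration

# Five frequency channels, all tuned to ITD = 50 us and ILD = 3 dB.
centers_itd = np.full(5, 50.0)
centers_ild = np.full(5, 3.0)
weights = np.ones(5)
r_matched = space_response(50.0, 3.0, centers_itd, centers_ild, weights, 1.0)
r_mismatched = space_response(200.0, -10.0, centers_itd, centers_ild, weights, 1.0)
```

A stimulus matching the preferred cue combination drives the unit strongly; a mismatched combination falls below threshold and is silenced, capturing the nonlinear spatial selectivity the abstract describes.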

  8. Using Spatial Manipulation to Examine Interactions between Visual and Auditory Encoding of Pitch and Time.

    PubMed

    McLachlan, Neil M; Greco, Loretta J; Toner, Emily C; Wilson, Sarah J

    2010-01-01

    Music notations use both symbolic and spatial representation systems. Novice musicians do not have the training to associate symbolic information with musical identities, such as chords or rhythmic and melodic patterns. They provide an opportunity to explore the mechanisms underpinning multimodal learning when spatial encoding strategies of feature dimensions might be expected to dominate. In this study, we applied a range of transformations (such as time reversal) to short melodies and rhythms and asked novice musicians to identify them with or without the aid of notation. Performance using a purely spatial (graphic) notation was contrasted with the more symbolic, traditional western notation over a series of weekly sessions. The results showed learning effects for both notation types, but performance improved more for graphic notation. This points to greater compatibility of auditory and visual neural codes for novice musicians when using spatial notation, suggesting that pitch and time may be spatially encoded in multimodal associative memory. The findings also point to new strategies for training novice musicians. PMID:21833287

  9. Plasticity in the neural coding of auditory space in the mammalian brain

    PubMed Central

    King, Andrew J.; Parsons, Carl H.; Moore, David R.

    2000-01-01

    Sound localization relies on the neural processing of monaural and binaural spatial cues that arise from the way sounds interact with the head and external ears. Neurophysiological studies of animals raised with abnormal sensory inputs show that the map of auditory space in the superior colliculus is shaped during development by both auditory and visual experience. An example of this plasticity is provided by monaural occlusion during infancy, which leads to compensatory changes in auditory spatial tuning that tend to preserve the alignment between the neural representations of visual and auditory space. Adaptive changes also take place in sound localization behavior, as demonstrated by the fact that ferrets raised and tested with one ear plugged learn to localize as accurately as control animals. In both cases, these adjustments may involve greater use of monaural spectral cues provided by the other ear. Although plasticity in the auditory space map seems to be restricted to development, adult ferrets show some recovery of sound localization behavior after long-term monaural occlusion. The capacity for behavioral adaptation is, however, task dependent, because auditory spatial acuity and binaural unmasking (a measure of the spatial contribution to the “cocktail party effect”) are permanently impaired by chronically plugging one ear, both in infancy but especially in adulthood. Experience-induced plasticity allows the neural circuitry underlying sound localization to be customized to individual characteristics, such as the size and shape of the head and ears, and to compensate for natural conductive hearing losses, including those associated with middle ear disease in infancy. PMID:11050215

  10. Frequency tuning and intensity coding of sound in the auditory periphery of the lake sturgeon, Acipenser fulvescens

    PubMed Central

    Meyer, Michaela; Fay, Richard R.; Popper, Arthur N.

    2010-01-01

    Acipenser fulvescens, the lake sturgeon, belongs to one of the few extant non-teleost ray-finned (bony) fishes. The sturgeons (family Acipenseridae) have a phylogenetic history that dates back about 250 million years. The study reported here is the first investigation of peripheral coding strategies for spectral analysis in the auditory system in a non-teleost bony fish. We used a shaker system to simulate the particle motion component of sound during electrophysiological recordings of isolated single units from the eighth nerve innervating the saccule and lagena. Background activity and response characteristics of saccular and lagenar afferents (such as thresholds, response–level functions and temporal firing) resembled the ones found in teleosts. The distribution of best frequencies also resembled data in teleosts (except for Carassius auratus, goldfish) tested with the same stimulation method. The saccule and lagena in A. fulvescens contain otoconia, in contrast to the solid otoliths found in teleosts, however, this difference in otolith structure did not appear to affect threshold, frequency tuning, intensity- or temporal responses of auditory afferents. In general, the physiological characteristics common to A. fulvescens, teleosts and land vertebrates reflect important functions of the auditory system that may have been conserved throughout the evolution of vertebrates. PMID:20400642

  11. Slow Cholinergic Modulation of Spike Probability in Ultra-Fast Time-Coding Sensory Neurons

    PubMed Central

    Goyer, David; Kurth, Stefanie; Rübsamen, Rudolf

    2016-01-01

Sensory processing in the lower auditory pathway is generally considered to be rigid and thus less subject to modulation than central processing. However, in addition to the powerful bottom-up excitation by auditory nerve fibers, the ventral cochlear nucleus also receives efferent cholinergic innervation from both auditory and nonauditory top-down sources. We thus tested the influence of cholinergic modulation on highly precise time-coding neurons in the cochlear nucleus of the Mongolian gerbil. By combining electrophysiological recordings with pharmacological application in vitro and in vivo, we found 55–72% of spherical bushy cells (SBCs) to be depolarized by carbachol on two time scales, ranging from hundreds of milliseconds to minutes. These effects were mediated by nicotinic and muscarinic acetylcholine receptors, respectively. Pharmacological block of muscarinic receptors hyperpolarized the resting membrane potential, suggesting a novel mechanism for setting the resting membrane potential of SBCs. The cholinergic depolarization led to an increase of spike probability in SBCs without compromising the temporal precision of the SBC output in vitro. In vivo, iontophoretic application of carbachol resulted in an increase in spontaneous SBC activity. The inclusion of cholinergic modulation in an SBC model predicted an expansion of the dynamic range of sound responses and increased temporal acuity. Our results thus suggest a top-down modulatory system, mediated by acetylcholine, that influences temporally precise information processing in the lower auditory pathway. PMID:27699207

  13. Developing and Selecting Auditory Warnings for a Real-Time Behavioral Intervention

    PubMed Central

    Bellettiere, John; Hughes, Suzanne C.; Liles, Sandy; Boman-Davis, Marie; Klepeis, Neil; Blumberg, Elaine; Mills, Jeff; Berardi, Vincent; Obayashi, Saori; Allen, T. Tracy; Hovell, Melbourne F.

    2015-01-01

    Real-time sensing and computing technologies are increasingly used in the delivery of real-time health behavior interventions. Auditory signals play a critical role in many of these interventions, impacting not only behavioral response but also treatment adherence and participant retention. Yet, few behavioral interventions that employ auditory feedback report the characteristics of sounds used and even fewer design signals specifically for their intervention. This paper describes a four-step process used in developing and selecting auditory warnings for a behavioral trial designed to reduce indoor secondhand smoke exposure. In step one, relevant information was gathered from ergonomic and behavioral science literature to assist a panel of research assistants in developing criteria for intervention-specific auditory feedback. In step two, multiple sounds were identified through internet searches and modified in accordance with the developed criteria, and two sounds were selected that best met those criteria. In step three, a survey was conducted among 64 persons from the primary sampling frame of the larger behavioral trial to compare the relative aversiveness of sounds, determine respondents' reported behavioral reactions to those signals, and assess participant's preference between sounds. In the final step, survey results were used to select the appropriate sound for auditory warnings. Ultimately, a single-tone pulse, 500 milliseconds (ms) in length that repeats every 270 ms for 3 cycles was chosen for the behavioral trial. The methods described herein represent one example of steps that can be followed to develop and select auditory feedback tailored for a given behavioral intervention. PMID:25745633

  14. Auditory Attention to Frequency and Time: An Analogy to Visual Local-Global Stimuli

    ERIC Educational Resources Information Center

    Justus, Timothy; List, Alexandra

    2005-01-01

    Two priming experiments demonstrated exogenous attentional persistence to the fundamental auditory dimensions of frequency (Experiment 1) and time (Experiment 2). In a divided-attention task, participants responded to an independent dimension, the identification of three-tone sequence patterns, for both prime and probe stimuli. The stimuli were…

  15. Probing the time course of head-motion cues integration during auditory scene analysis.

    PubMed

    Kondo, Hirohito M; Toshima, Iwaki; Pressnitzer, Daniel; Kashino, Makio

    2014-01-01

    The perceptual organization of auditory scenes is a hard but important problem to solve for human listeners. It is thus likely that cues from several modalities are pooled for auditory scene analysis, including sensory-motor cues related to the active exploration of the scene. We previously reported a strong effect of head motion on auditory streaming. Streaming refers to an experimental paradigm where listeners hear sequences of pure tones, and rate their perception of one or more subjective sources called streams. To disentangle the effects of head motion (changes in acoustic cues at the ear, subjective location cues, and motor cues), we used a robotic telepresence system, Telehead. We found that head motion induced perceptual reorganization even when the acoustic scene had not changed. Here we reanalyzed the same data to probe the time course of sensory-motor integration. We show that motor cues had a different time course compared to acoustic or subjective location cues: motor cues impacted perceptual organization earlier and for a shorter time than other cues, with successive positive and negative contributions to streaming. An additional experiment controlled for the effects of volitional anticipatory components, and found that arm or leg movements did not have any impact on scene analysis. These data provide a first investigation of the time course of the complex integration of sensory-motor cues in an auditory scene analysis task, and they suggest a loose temporal coupling between the different mechanisms involved. PMID:25009456

  16. Auditory Imagery Shapes Movement Timing and Kinematics: Evidence from a Musical Task

    ERIC Educational Resources Information Center

    Keller, Peter E.; Dalla Bella, Simone; Koch, Iring

    2010-01-01

    The role of anticipatory auditory imagery in music-like sequential action was investigated by examining timing accuracy and kinematics using a motion capture system. Musicians responded to metronomic pacing signals by producing three unpaced taps on three vertically aligned keys at the given tempo. Taps triggered tones in two out of three blocked…

  18. Code for Calculating Regional Seismic Travel Time

    SciTech Connect

    BALLARD, SANFORD; HIPP, JAMES; & BARKER, GLENN

    2009-07-10

The RSTT software computes predictions of the travel time of seismic energy traveling from a source to a receiver through 2.5D models of the seismic velocity distribution within the Earth. The two primary applications for the RSTT library are tomographic inversion studies and seismic event location calculations. In tomographic inversion studies, a seismologist begins with a number of source-receiver travel time observations and an initial starting model of the velocity distribution within the Earth. A forward travel time calculator, such as the RSTT library, is used to compute predictions of each observed travel time, and all of the residuals (observed minus predicted travel time) are calculated. The Earth model is then modified in some systematic way with the goal of minimizing the residuals. The Earth model obtained in this way is assumed to be a better model than the starting model if it has lower residuals. The other major application for the RSTT library is seismic event location. Given an Earth model, an initial estimate of the location of a seismic event, and some number of observations of seismic travel time thought to have originated from that event, location codes systematically modify the estimate of the location of the event with the goal of minimizing the difference between the observed and predicted travel times. The second application, seismic event location, is routinely implemented by the military as part of its effort to monitor the Earth for nuclear tests conducted by foreign countries.
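The event-location loop described above can be illustrated with a toy example. This is emphatically not the RSTT forward model: it assumes a flat geometry and a single uniform velocity, and simply grid-searches for the epicentre that minimizes the sum of squared residuals between observed and predicted travel times.

```python
import numpy as np

def locate_event(stations, observed_t, velocity=6.0):
    # Toy residual-minimization locator: for each candidate epicentre on a
    # grid, predict travel times with t = distance / velocity, form the
    # residuals (observed minus predicted), and keep the lowest-cost point.
    xs = np.linspace(0.0, 100.0, 201)
    ys = np.linspace(0.0, 100.0, 201)
    best, best_cost = None, np.inf
    for x in xs:
        for y in ys:
            d = np.hypot(stations[:, 0] - x, stations[:, 1] - y)
            resid = observed_t - d / velocity      # observed minus predicted
            cost = float(np.sum(resid ** 2))
            if cost < best_cost:
                best, best_cost = (x, y), cost
    return best

# Four stations at the corners of a 100 km square; synthetic observations
# generated from a known source so the answer can be checked.
stations = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true_src = np.array([40.0, 70.0])
obs = np.hypot(stations[:, 0] - true_src[0], stations[:, 1] - true_src[1]) / 6.0
est = locate_event(stations, obs)
```

Real locators replace the grid search with iterative (e.g. Geiger-style least-squares) updates and replace the uniform-velocity model with a calculator such as RSTT, but the observed-minus-predicted residual structure is the same.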

  19. Onset coding is degraded in auditory nerve fibers from mutant mice lacking synaptic ribbons

    PubMed Central

    Buran, B.N.; Strenzke, N.; Neef, A.; Gundelfinger, E.D.; Moser, T.; Liberman, M.C.

    2010-01-01

    Synaptic ribbons, found at the pre-synaptic membrane of sensory cells in both ear and eye, have been implicated in the vesicle-pool dynamics of synaptic transmission. To elucidate ribbon function, we characterized the response properties of single auditory nerve fibers in mice lacking Bassoon, a scaffolding protein involved in anchoring ribbons to the membrane. In Bassoon mutants, immunohistochemistry showed fewer than 3% of the hair cells’ afferent synapses retained anchored ribbons. Auditory nerve fibers from mutants had normal threshold, dynamic range and post-onset adaptation in response to tone bursts, and they were able to phase-lock with normal precision to amplitude-modulated tones. However, spontaneous and sound-evoked discharge rates were reduced, and the reliability of spikes, particularly at stimulus onset, was significantly degraded as shown by an increased variance of first-spike latencies. Modeling based on in vitro studies of normal and mutant hair cells links these findings to reduced release rates at the synapse. The degradation of response reliability in these mutants suggests that the ribbon and/or bassoon normally facilitate high rates of exocytosis and that its absence significantly compromises the temporal resolving power of the auditory system. PMID:20519533
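The reliability measure reported above, the variance of first-spike latencies across repeated trials, can be computed as in this sketch. The spike-time data are fabricated for illustration; trials without spikes are simply skipped.

```python
import numpy as np

def first_spike_latency_stats(trials):
    # For each trial's list of spike times (ms after stimulus onset), take
    # the first spike; return mean and variance of those latencies.
    latencies = np.asarray([t[0] for t in trials if len(t) > 0])
    return float(latencies.mean()), float(latencies.var())

# Made-up spike times for a reliable unit and a degraded (mutant-like) unit.
normal = [[5.1, 9.0], [5.0], [5.2, 7.7], [4.9]]
degraded = [[5.0], [12.0, 15.0], [7.5], []]
m1, v1 = first_spike_latency_stats(normal)
m2, v2 = first_spike_latency_stats(degraded)
```

A higher first-spike latency variance (as in the degraded unit here, which also has a failure trial) is the signature of reduced onset reliability that the study reports in the Bassoon mutants.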

  1. Time-Varying Vocal Folds Vibration Detection Using a 24 GHz Portable Auditory Radar.

    PubMed

    Hong, Hong; Zhao, Heng; Peng, Zhengyu; Li, Hui; Gu, Chen; Li, Changzhi; Zhu, Xiaohua

    2016-07-28

Time-varying vocal-fold vibration information is of crucial importance in speech processing, and traditional devices for acquiring speech signals are easily corrupted by high background noise and voice interference. In this paper, we present a non-acoustic way to capture human vocal-fold vibration using a 24-GHz portable auditory radar. Because the vocal-fold vibration amplitude reaches only several millimeters, a high operating frequency and 4 × 4 array antennas are applied to achieve high sensitivity. A Variational Mode Decomposition (VMD) based algorithm is proposed to first decompose the radar-detected auditory signal into a sequence of intrinsic modes and then extract the time-varying vocal-fold vibration frequency from the corresponding mode. Feasibility demonstration, evaluation, and comparison are conducted with tonal and non-tonal languages; the low relative errors show a high consistency between the radar-detected time-varying vocal-fold vibration and the acoustic fundamental frequency, with the auditory radar significantly improving the frequency-resolving power.
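A full VMD implementation is beyond a short sketch, so the following simplified stand-in illustrates only the final step the abstract describes, extracting a time-varying frequency, using a short-time FFT peak in place of a variational mode. The frame sizes and the synthetic 100-to-140 Hz test chirp are invented for illustration.

```python
import numpy as np

def track_frequency(sig, fs, frame=1024, hop=512):
    # Estimate the time-varying vibration frequency as the short-time FFT
    # peak of each windowed frame (the paper's method operates on a
    # VMD-extracted mode instead of the raw signal).
    freqs = []
    for start in range(0, len(sig) - frame, hop):
        win = sig[start:start + frame] * np.hanning(frame)
        spec = np.abs(np.fft.rfft(win))
        spec[0] = 0.0                        # ignore the DC bin
        k = int(np.argmax(spec))
        freqs.append(k * fs / frame)         # FFT bin -> Hz
    return np.array(freqs)

# Synthetic vibration: fundamental gliding from 100 Hz to 140 Hz over 1 s.
fs = 8000
t = np.arange(fs) / fs
f_inst = 100.0 + 40.0 * t
phase = 2 * np.pi * np.cumsum(f_inst) / fs
est = track_frequency(np.sin(phase), fs)
```

The per-frame estimates rise from roughly 100 Hz to roughly 135 Hz across the second, tracking the glide to within the FFT bin resolution (fs / frame ≈ 7.8 Hz here); the radar paper's VMD front end serves to isolate the vibration component before such tracking.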

  4. Inattentional Deafness: Visual Load Leads to Time-Specific Suppression of Auditory Evoked Responses

    PubMed Central

    Molloy, Katharine; Griffiths, Timothy D.; Lavie, Nilli

    2015-01-01

    Due to capacity limits on perception, conditions of high perceptual load lead to reduced processing of unattended stimuli (Lavie et al., 2014). Accumulating work demonstrates the effects of visual perceptual load on visual cortex responses, but the effects on auditory processing remain poorly understood. Here we establish the neural mechanisms underlying “inattentional deafness”—the failure to perceive auditory stimuli under high visual perceptual load. Participants performed a visual search task of low (target dissimilar to nontarget items) or high (target similar to nontarget items) load. On a random subset (50%) of trials, irrelevant tones were presented concurrently with the visual stimuli. Brain activity was recorded with magnetoencephalography, and time-locked responses to the visual search array and to the incidental presence of unattended tones were assessed. High, compared to low, perceptual load led to increased early visual evoked responses (within 100 ms from onset). This was accompanied by reduced early (∼100 ms from tone onset) auditory evoked activity in superior temporal sulcus and posterior middle temporal gyrus. A later suppression of the P3 “awareness” response to the tones was also observed under high load. A behavioral experiment revealed reduced tone detection sensitivity under high visual load, indicating that the reduction in neural responses was indeed associated with reduced awareness of the sounds. These findings support a neural account of shared audiovisual resources, which, when depleted under load, leads to failures of sensory perception and awareness. SIGNIFICANCE STATEMENT The present work clarifies the neural underpinning of inattentional deafness under high visual load. The findings of near-simultaneous load effects on both visual and auditory evoked responses suggest shared audiovisual processing capacity. Temporary depletion of shared capacity in perceptually demanding visual tasks leads to a momentary reduction in

  5. Time course of regional brain activity accompanying auditory verbal hallucinations in schizophrenia

    PubMed Central

    Hoffman, Ralph E.; Pittman, Brian; Constable, R. Todd; Bhagwagar, Zubin; Hampson, Michelle

    2011-01-01

    Background The pathophysiology of auditory verbal hallucinations remains poorly understood. Aims To characterise the time course of regional brain activity leading to auditory verbal hallucinations. Method During functional magnetic resonance imaging, 11 patients with schizophrenia or schizoaffective disorder signalled auditory verbal hallucination events by pressing a button. To control for effects of motor behaviour, regional activity associated with hallucination events was scaled against corresponding activity arising from random button-presses produced by 10 patients who did not experience hallucinations. Results Immediately prior to the hallucinations, motor-adjusted activity in the left inferior frontal gyrus was significantly greater than corresponding activity in the right inferior frontal gyrus. In contrast, motor-adjusted activity in a right posterior temporal region overshadowed corresponding activity in the left homologous temporal region. Robustly elevated motor-adjusted activity in the left temporal region associated with auditory verbal hallucinations was also detected, but only subsequent to hallucination events. At the earliest time shift studied, the correlation between left inferior frontal gyrus and right temporal activity was significantly higher for the hallucination group compared with non-hallucinating patients. Conclusions Findings suggest that heightened functional coupling between the left inferior frontal gyrus and right temporal regions leads to coactivation in these speech processing regions that is hallucinogenic. Delayed left temporal activation may reflect impaired corollary discharge contributing to source misattribution of resulting verbal images. PMID:21972276

  6. A grouped binary time code for telemetry and space applications

    NASA Technical Reports Server (NTRS)

    Chi, A. R.

    1979-01-01

    A computer-oriented time code designed for users with various time-resolution requirements is presented. It is intended as a time code for spacecraft and ground applications where direct code compatibility with automatic data-processing equipment is of primary consideration. The principal features of this time code are a byte-oriented format, selectable resolution options (from seconds to nanoseconds), and a long ambiguity period. The time code is compatible with new data handling and management concepts such as the NASA End-to-End Data System and the Telemetry Data Packetization format.
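
The byte-oriented layout described above can be sketched as follows. The field widths (a 6-byte seconds count plus an optional sub-second field) are illustrative assumptions for this sketch, not the actual code specification from the report.

```python
def encode_time_code(seconds: int, subsec_ns: int = 0, resolution: str = "s") -> bytes:
    """Pack a byte-oriented time code with a selectable resolution option.

    A 6-byte big-endian seconds count gives a long ambiguity period
    (2**48 seconds, roughly 8.9 million years); sub-second resolution
    appends 2-4 extra bytes. Field widths are illustrative only.
    """
    fields = {"s": None, "ms": (10**6, 2), "us": (10**3, 3), "ns": (1, 4)}
    code = seconds.to_bytes(6, "big")
    if fields[resolution] is not None:
        divisor, width = fields[resolution]
        code += (subsec_ns // divisor).to_bytes(width, "big")
    return code
```

Because the code is byte-oriented, processing equipment needing only coarse time can simply ignore the trailing sub-second bytes.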

  7. System Measures Errors Between Time-Code Signals

    NASA Technical Reports Server (NTRS)

    Cree, David; Venkatesh, C. N.

    1993-01-01

    System measures timing errors between signals produced by three asynchronous time-code generators. Errors between 1-second clock pulses resolved to 2 microseconds. Basic principle of computation of timing errors as follows: central processing unit in microcontroller constantly monitors time data received from time-code generators for changes in 1-second time-code intervals. In response to any such change, microprocessor buffers count of 16-bit internal timer.
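
The measurement principle, latching a fast timer count on each generator's 1-second rollover and differencing the latched counts, can be sketched in software. The helper below is an illustrative reconstruction, not the microcontroller firmware; it assumes pulse timestamps already expressed in microseconds.

```python
def tick_offsets_us(ref_ticks_us, test_ticks_us, resolution_us=2):
    """Offset of each test-generator 1-second pulse from the reference
    generator's pulse, truncated to the timer resolution (2 us here)."""
    return [
        int((t - r) / resolution_us) * resolution_us
        for r, t in zip(ref_ticks_us, test_ticks_us)
    ]
```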

  8. Binaural noise stimulation of auditory callosal fibers of the cat: responses to interaural time delays.

    PubMed

    Poirier, P; Lepore, F; Provençal, C; Ptito, M; Guillemot, J P

    1995-01-01

    The corpus callosum, the principal neocortical commissure, allows for the interhemispheric transfer of lateralized information between the hemispheres. The aim of the present experiment was to study callosal transfer of auditory information in the cat, with particular reference to its contribution to sound localization. The corpus callosum was approached under direct visual control, and axonal responses were recorded under light anesthesia using glass micropipettes. Results showed that auditory information is transmitted in the posterior portion of the callosum. Dichotic presentations, in which interaural time delay was manipulated, indicated that, for a large number of fibers, the largest excitatory or inhibitory interactions were obtained at null interaural time delay, a condition which supports the notion of a callosal contribution to auditory midline fusion. However, a substantial number of callosal fibers was also found to be excited maximally at specific, non-zero interaural time delays, suggesting that they preferred sounds situated at spatial locations other than the midline. The results are discussed in relation to those obtained electrophysiologically for the visual and somesthetic modalities and in terms of results obtained in human and animal behavioral experiments.

  9. Coding of sound direction in the auditory periphery of the lake sturgeon, Acipenser fulvescens

    PubMed Central

    Popper, Arthur N.; Fay, Richard R.

    2012-01-01

    The lake sturgeon, Acipenser fulvescens, belongs to one of the few extant nonteleost ray-finned fishes and diverged from the main vertebrate lineage about 250 million years ago. The aim of this study was to use this species to explore the peripheral neural coding strategies for sound direction and compare these results to modern bony fishes (teleosts). Extracellular recordings were made from afferent neurons innervating the saccule and lagena of the inner ear while the fish was stimulated using a shaker system. Afferents were highly directional and strongly phase locked to the stimulus. Directional response profiles resembled cosine functions, and directional preferences occurred at a wide range of stimulus intensities (spanning at least 60 dB re 1 nm displacement). Seventy-six percent of afferents were directionally selective for stimuli in the vertical plane near 90° (up-down) and did not respond to horizontal stimulation. Sixty-two percent of afferents responsive to horizontal stimulation had their best axis in azimuths near 0° (front-back). These findings suggest that in the lake sturgeon, in contrast to teleosts, the saccule and lagena may convey more limited information about the direction of a sound source, raising the possibility that this species uses a different mechanism for localizing sound. For azimuth, a mechanism could involve the utricle or perhaps the computation of arrival time differences. For elevation, behavioral strategies such as directing the head to maximize input to the area of best sensitivity may be used. Alternatively, the lake sturgeon may have a more limited ability for sound source localization compared with teleosts. PMID:22031776

  10. Using Reaction Time and Equal Latency Contours to Derive Auditory Weighting Functions in Sea Lions and Dolphins.

    PubMed

    Finneran, James J; Mulsow, Jason; Schlundt, Carolyn E

    2016-01-01

    Subjective loudness measurements are used to create equal-loudness contours and auditory weighting functions for human noise-mitigation criteria; however, comparable direct measurements of subjective loudness with animal subjects are difficult to conduct. In this study, simple reaction time to pure tones was measured as a proxy for subjective loudness in a Tursiops truncatus and Zalophus californianus. Contours fit to equal reaction-time curves were then used to estimate the shapes of auditory weighting functions. PMID:26610970

  12. Brainstem auditory evoked responses in man. 1: Effect of stimulus rise-fall time and duration

    NASA Technical Reports Server (NTRS)

    Hecox, K.; Squires, N.; Galambos, R.

    1975-01-01

    Short latency (under 10 msec) responses elicited by bursts of white noise were recorded from the scalps of human subjects. Response alterations produced by changes in the noise burst duration (on-time), inter-burst interval (off-time), and onset and offset shapes were analyzed. The latency of the most prominent response component, wave V, was markedly delayed with increases in stimulus rise time but was unaffected by changes in fall time. Increases in stimulus duration, and therefore in loudness, resulted in a systematic increase in latency. This was probably due to response recovery processes, since the effect was eliminated with increases in stimulus off-time. The amplitude of wave V was insensitive to changes in signal rise and fall times, while increasing signal on-time produced smaller amplitude responses only for sufficiently short off-times. It was concluded that wave V of the human auditory brainstem evoked response is solely an onset response.

  13. Norm-Based Coding of Voice Identity in Human Auditory Cortex

    PubMed Central

    Latinus, Marianne; McAleer, Phil; Bestelmeyer, Patricia E.G.; Belin, Pascal

    2013-01-01

    Summary Listeners exploit small interindividual variations around a generic acoustical structure to discriminate and identify individuals from their voice—a key requirement for social interactions. The human brain contains temporal voice areas (TVA) [1] involved in an acoustic-based representation of voice identity [2–6], but the underlying coding mechanisms remain unknown. Indirect evidence suggests that identity representation in these areas could rely on a norm-based coding mechanism [4, 7–11]. Here, we show by using fMRI that voice identity is coded in the TVA as a function of acoustical distance to two internal voice prototypes (one male, one female)—approximated here by averaging a large number of same-gender voices by using morphing [12]. Voices more distant from their prototype are perceived as more distinctive and elicit greater neuronal activity in voice-sensitive cortex than closer voices—a phenomenon not merely explained by neuronal adaptation [13, 14]. Moreover, explicit manipulations of distance-to-mean by morphing voices toward (or away from) their prototype elicit reduced (or enhanced) neuronal activity. These results indicate that voice-sensitive cortex integrates relevant acoustical features into a complex representation referenced to idealized male and female voice prototypes. More generally, they shed light on remarkable similarities in cerebral representations of facial and vocal identity. PMID:23707425
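
The norm-based coding mechanism described in this abstract can be sketched numerically: the predicted response scales with a voice's acoustic distance from its gender prototype, approximated as the average of many same-gender voices. The feature vectors and the linear response model below are illustrative assumptions, not the authors' fitted model.

```python
def prototype(voices):
    """Average a list of acoustic feature vectors into a prototype 'norm'."""
    n = len(voices)
    return [sum(v[i] for v in voices) / n for i in range(len(voices[0]))]

def distance_to_prototype(voice, proto):
    """Euclidean acoustic distance from one voice to the prototype."""
    return sum((a - b) ** 2 for a, b in zip(voice, proto)) ** 0.5

def predicted_response(voice, proto, gain=1.0):
    """Norm-based coding: predicted activity grows with distance from the norm."""
    return gain * distance_to_prototype(voice, proto)
```

Under this scheme, distinctive voices (far from the gender mean) elicit larger predicted responses than average-sounding voices, matching the fMRI result reported above.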

  14. Divided multimodal attention sensory trace and context coding strategies in spatially congruent auditory and visual presentation.

    PubMed

    Kristjánsson, Tómas; Thorvaldsson, Tómas Páll; Kristjánsson, Arni

    2014-01-01

    Previous research involving both unimodal and multimodal studies suggests that single-response change detection is a capacity-free process while a discriminatory up or down identification is capacity-limited. The trace/context model assumes that this reflects different memory strategies rather than inherent differences between identification and detection. To perform such tasks, one of two strategies is used, a sensory trace or a context coding strategy, and if one is blocked, people will automatically use the other. A drawback to most preceding studies is that stimuli are presented at separate locations, creating the possibility of a spatial confound, which invites alternative interpretations of the results. We describe a series of experiments, investigating divided multimodal attention, without the spatial confound. The results challenge the trace/context model. Our critical experiment involved a gap before a change in volume and brightness, which according to the trace/context model blocks the sensory trace strategy, simultaneously with a roaming pedestal, which should block the context coding strategy. The results clearly show that people can use strategies other than sensory trace and context coding in the tasks and conditions of these experiments, necessitating changes to the trace/context model.

  15. Neural mechanisms underlying auditory feedback control of speech.

    PubMed

    Tourville, Jason A; Reilly, Kevin J; Guenther, Frank H

    2008-02-01

    The neural substrates underlying auditory feedback control of speech were investigated using a combination of functional magnetic resonance imaging (fMRI) and computational modeling. Neural responses were measured while subjects spoke monosyllabic words under two conditions: (i) normal auditory feedback of their speech and (ii) auditory feedback in which the first formant frequency of their speech was unexpectedly shifted in real time. Acoustic measurements showed compensation to the shift within approximately 136 ms of onset. Neuroimaging revealed increased activity in bilateral superior temporal cortex during shifted feedback, indicative of neurons coding mismatches between expected and actual auditory signals, as well as right prefrontal and Rolandic cortical activity. Structural equation modeling revealed increased influence of bilateral auditory cortical areas on right frontal areas during shifted speech, indicating that projections from auditory error cells in posterior superior temporal cortex to motor correction cells in right frontal cortex mediate auditory feedback control of speech.

  17. SYMTRAN - A Time-dependent Symmetric Tandem Mirror Transport Code

    SciTech Connect

    Hua, D; Fowler, T

    2004-06-15

    A time-dependent version of the steady-state radial transport model in symmetric tandem mirrors in Ref. [1] has been coded up and first tests performed. Our code, named SYMTRAN, is an adaptation of the earlier SPHERE code for spheromaks, now modified for tandem mirror physics. Motivated by Post's new concept of kinetic stabilization of symmetric mirrors, it is an extension of the earlier TAMRAC rate-equation code omitting radial transport [2], which successfully accounted for experimental results in TMX. The SYMTRAN code differs from the earlier tandem mirror radial transport code TMT in that our code is focused on axisymmetric tandem mirrors and classical diffusion, whereas TMT emphasized non-ambipolar transport in TMX and MFTF-B due to yin-yang plugs and non-symmetric transitions between the plugs and axisymmetric center cell. Both codes exhibit interesting but different non-linear behavior.

  18. Code extraction from encoded signal in time-spreading optical code division multiple access.

    PubMed

    Si, Zhijian; Yin, Feifei; Xin, Ming; Chen, Hongwei; Chen, Minghua; Xie, Shizhong

    2010-01-15

    A vulnerability that allows eavesdroppers to extract the code from the waveform of the noiselike encoded signal of an isolated user in a standard time-spreading optical code division multiple access communication system using bipolar phase code is experimentally demonstrated. The principle is based on fine structure in the encoded signal. Each dip in the waveform corresponds to a transition of the bipolar code. Eavesdroppers can get the code by analyzing the chip numbers between any two transitions; then a decoder identical to the legal user's can be fabricated, and they can get the properly decoded signal.
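
The eavesdropping principle described here, where each waveform dip marks a transition of the bipolar phase code so that counting chips between dips recovers the code, can be sketched as follows. The ±1 chip representation and helper names are illustrative, and the recovered code carries an overall sign ambiguity (a negated bipolar code decodes equivalently).

```python
def transition_positions(chips):
    """Indices where a bipolar (+1/-1) chip sequence flips sign; these
    correspond to the dips an eavesdropper observes in the waveform."""
    return [i for i in range(1, len(chips)) if chips[i] != chips[i - 1]]

def reconstruct_code(n_chips, transitions, first_chip=1):
    """Rebuild a bipolar code from the observed transition positions
    (up to an overall sign ambiguity)."""
    flips = set(transitions)
    code, current = [], first_chip
    for i in range(n_chips):
        if i in flips:
            current = -current
        code.append(current)
    return code
```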

  19. The Visual and Auditory Reaction Time of Adolescents with Respect to Their Academic Achievements

    ERIC Educational Resources Information Center

    Taskin, Cengiz

    2016-01-01

    The aim of this study was to examine the visual and auditory reaction times of adolescents with respect to their academic achievement level. Five hundred adolescent children from Turkey (age=15.24±0.78 years; height=168.80±4.89 cm; weight=65.24±4.30 kg) for two hundred fifty males and (age=15.28±0.74; height=160.40±5.77 cm; weight=55.32±4.13 kg)…

  20. Auditory detection of ultrasonic coded transmitters by seals and sea lions.

    PubMed

    Cunningham, Kane A; Hayes, Sean A; Michelle Wargo Rub, A; Reichmuth, Colleen

    2014-04-01

    Ultrasonic coded transmitters (UCTs) are high-frequency acoustic tags that are often used to conduct survivorship studies of vulnerable fish species. Recent observations of differential mortality in tag control studies suggest that fish instrumented with UCTs may be selectively targeted by marine mammal predators, thereby skewing valuable survivorship data. In order to better understand the ability of pinnipeds to detect UCT outputs, behavioral high-frequency hearing thresholds were obtained from a trained harbor seal (Phoca vitulina) and a trained California sea lion (Zalophus californianus). Thresholds were measured for extended (500 ms) and brief (10 ms) 69 kHz narrowband stimuli, as well as for a stimulus recorded directly from a Vemco V16-3H UCT, which consisted of eight 10 ms, 69 kHz pure-tone pulses. Detection thresholds for the harbor seal were as expected based on existing audiometric data for this species, while the California sea lion was much more sensitive than predicted. Given measured detection thresholds of 113 dB re 1 μPa and 124 dB re 1 μPa, respectively, both species are likely able to detect acoustic outputs of the Vemco V16-3H under water from distances exceeding 200 m in typical natural conditions, suggesting that these species are capable of using UCTs to detect free-ranging fish. PMID:25234996

  2. The adaptation of visual and auditory integration in the barn owl superior colliculus with Spike Timing Dependent Plasticity.

    PubMed

    Huo, Juan; Murray, Alan

    2009-09-01

    To localize a seen object, the superior colliculus of the barn owl integrates visual and auditory localization cues received from the brain's sensory systems. These cues are organized as visual and auditory maps. Alignment between the visual and auditory maps is essential for accurate localization in prey-capture behavior. Blindness or prism wearing may disturb this alignment. The juvenile barn owl can adapt its auditory map to such a mismatch after several weeks of training. Here we investigate this process by building a computational model of auditory and visual integration in the deep superior colliculus (SC). Adaptation of the map alignment is based on activity-dependent axon development in the inferior colliculus (IC). This axon growth process is instructed by an inhibitory network in the SC, the strength of the inhibition being adjusted by Spike Timing Dependent Plasticity (STDP). The simulation results of this model are in line with the biological experiments and support the idea that STDP is involved in the alignment of sensory maps. This model also provides a new spiking-neuron-based mechanism capable of eliminating the disparity in visual and auditory map integration. PMID:19084371
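
The STDP rule that adjusts the inhibitory strength in this model can be sketched in its standard exponential form; the amplitudes and time constant below are generic textbook values, not the parameters used in the paper.

```python
import math

def stdp_weight_change(dt_ms, a_plus=0.10, a_minus=0.12, tau_ms=20.0):
    """Standard exponential STDP window, with dt_ms = t_post - t_pre.

    Pre-before-post (dt > 0) potentiates; post-before-pre (dt < 0)
    depresses; both effects decay exponentially with the interval.
    """
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)
    if dt_ms < 0:
        return -a_minus * math.exp(dt_ms / tau_ms)
    return 0.0
```

Applied repeatedly to the SC inhibitory synapses, a rule of this shape strengthens connections whose presynaptic spikes reliably precede postsynaptic ones, which is how the model steers axon growth toward map realignment.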

  3. Emotion and auditory virtual environments: affect-based judgments of music reproduced with virtual reverberation times.

    PubMed

    Västfjäll, Daniel; Larsson, Pontus; Kleiner, Mendel

    2002-02-01

    Emotions are experienced both in real and virtual environments (VEs). Most research to date has focused on the content that causes emotional reactions, but noncontent features of a VE (such as the realism and quality of object rendering) may also influence emotional reactions to the mediated object. The present research studied how noncontent features (different reverberation times) of an auditory VE influenced 76 participants' ratings of emotional reactions and expressed emotional qualities of the sounds. The results showed that the two emotion dimensions of pleasantness and arousal were systematically affected when the same musical piece was rendered with different reverberation times. Overall, it was found that a high reverberation time was perceived as most unpleasant. Taken together, the results suggested that noncontent features of a VE influence emotional reactions to mediated objects. Moreover, the study suggests that emotional reactions may be an important aspect of the VE experience that can help complement standard presence questionnaires and quality evaluations.

  4. Effect of red bull energy drink on auditory reaction time and maximal voluntary contraction.

    PubMed

    Goel, Vartika; Manjunatha, S; Pai, Kirtana M

    2014-01-01

    The use of "Energy Drinks" (EDs) is increasing in India. Students especially use these drinks to rejuvenate after strenuous exercise or as a stimulant during exam times. The most common ingredient in EDs is caffeine; a popular and commonly used ED is Red Bull, containing 80 mg of caffeine in a 250 ml bottle. The primary aim of this study was to investigate the effects of Red Bull energy drink on auditory reaction time and maximal voluntary contraction. A homogeneous group of twenty medical students (10 males, 10 females) participated in a crossover study in which they were randomized to supplement with Red Bull (2 mg/kg body weight of caffeine) or an isoenergetic, isovolumetric, noncaffeinated control drink (a combination of Appy Fizz, cranberry juice, and soda), separated by 7 days. Maximal voluntary contraction (MVC) was recorded as the highest of 3 values of maximal isometric force generated from the dominant hand using a hand-grip dynamometer (Biopac Systems). Auditory reaction time (ART) was the average of 10 values of the time interval between a click sound and the response of pressing a push button on a handheld switch (Biopac Systems). One hour after consumption, both the energy and control drinks significantly reduced auditory reaction time in males (ED 232 ± 59 vs 204 ± 34 ms and control 223 ± 57 vs 210 ± 51 ms; p < 0.05) as well as in females (ED 227 ± 56 vs 214 ± 48 ms and control 224 ± 45 vs 215 ± 36 ms; p < 0.05) but had no effect on MVC in either sex (males: ED 381 ± 37 vs 371 ± 36 and control 375 ± 61 vs 363 ± 36 N; females: ED 227 ± 23 vs 227 ± 32 and control 234 ± 46 vs 228 ± 37 N). When compared across the gender groups, there was no significant difference between males and females in the effects of any of the drinks on ART, but there was an overall significantly lower MVC in females compared to males. Both the energy drink and the control drink significantly improved reaction time but may not have any effect

  6. Precise inhibition is essential for microsecond interaural time difference coding

    NASA Astrophysics Data System (ADS)

    Brand, Antje; Behrend, Oliver; Marquardt, Torsten; McAlpine, David; Grothe, Benedikt

    2002-05-01

    Microsecond differences in the arrival time of a sound at the two ears (interaural time differences, ITDs) are the main cue for localizing low-frequency sounds in space. Traditionally, ITDs are thought to be encoded by an array of coincidence-detector neurons, receiving excitatory inputs from the two ears via axons of variable length ('delay lines'), to create a topographic map of azimuthal auditory space. Compelling evidence for the existence of such a map in the mammalian ITD detector, the medial superior olive (MSO), however, is lacking. Equally puzzling is the role of a temporally very precise, glycine-mediated inhibitory input to MSO neurons. Using in vivo recordings from the MSO of the Mongolian gerbil, we found the responses of ITD-sensitive neurons to be inconsistent with the idea of a topographic map of auditory space. Moreover, local application of glycine and its antagonist strychnine by iontophoresis (through glass pipette electrodes, by means of an electric current) revealed that precisely timed, glycine-controlled inhibition is a critical part of the mechanism by which the physiologically relevant range of ITDs is encoded in the MSO. A computer model, simulating the response of a coincidence-detector neuron with bilateral excitatory inputs and a temporally precise contralateral inhibitory input, supports this conclusion.
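
A toy version of the computer model mentioned in this abstract, a coincidence detector receiving an EPSP from each ear plus a contralateral IPSP that precedes the contralateral EPSP, might look like the sketch below. The Gaussian PSP shape, timings, and conductance values are arbitrary illustrative choices, not those of the published model.

```python
import math

def psp(t_us, t0_us, sigma_us=100.0):
    """Gaussian bump standing in for a postsynaptic potential."""
    return math.exp(-((t_us - t0_us) ** 2) / (2 * sigma_us**2))

def response(itd_us, inhib_lead_us=0.0, g_inh=0.0):
    """Peak coincidence drive: ipsilateral EPSP at t=0, contralateral
    EPSP at t=itd_us, and a contralateral IPSP leading that EPSP by
    inhib_lead_us with relative strength g_inh."""
    times = [10.0 * k for k in range(-100, 101)]  # -1000..1000 microseconds
    drive = [
        psp(t, 0.0) * (psp(t, itd_us) - g_inh * psp(t, itd_us - inhib_lead_us))
        for t in times
    ]
    return max(drive)
```

Sweeping itd_us maps out the detector's ITD tuning; in this toy model the purely excitatory detector peaks at zero ITD, and adding a leading inhibitory input reduces and reshapes the response around the coincidence peak.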

  7. The Dynamics of Disruption from Altered Auditory Feedback: Further Evidence for a Dissociation of Sequencing and Timing

    ERIC Educational Resources Information Center

    Pfordresher, Peter Q.; Kulpa, J. D.

    2011-01-01

    Three experiments were designed to test whether perception and action are coordinated in a way that distinguishes sequencing from timing (Pfordresher, 2003). Each experiment incorporated a trial design in which altered auditory feedback (AAF) was presented for varying lengths of time and then withdrawn. Experiments 1 and 2 included AAF that…

  8. Effects of inhibitory timing on contrast enhancement in auditory circuits in crickets (Teleogryllus oceanicus).

    PubMed

    Faulkes, Z; Pollack, G S

    2000-09-01

    In crickets (Teleogryllus oceanicus), the paired auditory interneuron Omega Neuron 1 (ON1) responds to sounds with frequencies in the range from 3 to 40 kHz. The neuron is tuned to frequencies similar to that of conspecific songs (4.5 kHz), but its latency is longest for these same frequencies by a margin of 5-10 ms. Each ON1 is strongly excited by input from the ipsilateral ear and inhibits contralateral auditory neurons that are excited by the contralateral ear, including the interneurons ascending neurons 1 and 2 (AN1 and AN2). We investigated the functional consequences of ON1's long latency to cricket-like sound and the resulting delay in inhibition of AN1 and AN2. Using dichotic stimuli, we controlled the timing of contralateral inhibition of the ANs relative to their excitation by ipsilateral stimuli. Advancing the stimulus to the ear driving ON1 relative to that driving the ANs "subtracted" ON1's additional latency to 4.5 kHz. This had little effect on the spike counts of AN1 and AN2. The response latencies of these neurons, however, increased markedly. This is because in the absence of a delay in ON1's response, inhibition arrived at AN1 and AN2 early enough to abolish the first spikes in their responses. This also increased the variability of AN1 latency. This suggests that one possible function of the delay in ON1's response may be to protect the precise timing of the onset of response in the contralateral AN1, thus preserving interaural difference in response latency as a reliable potential cue for sound localization. Hyperpolarizing ON1 removed all detectable contralateral inhibition of AN1 and AN2, suggesting that ON1 is the main, if not the only, source of contralateral inhibition.

  9. Auditory and Visual Differences in Time Perception? An Investigation from a Developmental Perspective with Neuropsychological Tests

    ERIC Educational Resources Information Center

    Zelanti, Pierre S.; Droit-Volet, Sylvie

    2012-01-01

    Adults and children (5- and 8-year-olds) performed a temporal bisection task with either auditory or visual signals and either a short (0.5-1.0s) or long (4.0-8.0s) duration range. Their working memory and attentional capacities were assessed by a series of neuropsychological tests administered in both the auditory and visual modalities. Results…

  10. Coding for Communication Channels with Dead-Time Constraints

    NASA Technical Reports Server (NTRS)

    Moision, Bruce; Hamkins, Jon

    2004-01-01

    Coding schemes have been designed and investigated specifically for optical and electronic data-communication channels in which information is conveyed via pulse-position modulation (PPM) subject to dead-time constraints. These schemes involve the use of error-correcting codes concatenated with codes denoted constrained codes. These codes are decoded using an iterative method. In pulse-position modulation, time is partitioned into frames of M slots of equal duration. Each frame contains one pulsed slot (all others are non-pulsed). For a given channel, the dead-time constraints are defined as a maximum and a minimum on the allowable time between pulses. For example, if a Q-switched laser is used to transmit the pulses, then the minimum allowable dead time is the time needed to recharge the laser for the next pulse. In the case of bits recorded on a magnetic medium, the minimum allowable time between pulses depends on the recording/playback speed and the minimum distance between pulses needed to prevent interference between adjacent bits during readout. The maximum allowable dead time for a given channel is the maximum time for which it is possible to satisfy the requirement to synchronize slots. In mathematical shorthand, the dead-time constraints for a given channel are represented by the pair of integers (d,k), where d is the minimum allowable number of zeroes between ones and k is the maximum allowable number of zeroes between ones. A system of the type to which the present schemes apply is represented by a binary-input, real-valued-output channel model illustrated in the figure. At the transmitting end, information bits are first encoded by use of an error-correcting code, then further encoded by use of a constrained code. Several constrained codes for channels subject to constraints of (d,infinity) have been investigated theoretically and computationally. The baseline codes chosen for purposes of comparison were simple PPM codes characterized by M-slot PPM
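The (d,k) constraint described in this abstract is easy to express in code. A minimal sketch, with an illustrative helper name and toy PPM frames that are not from the article:

```python
def satisfies_dk(bits, d, k):
    """Check a binary pulse sequence against (d, k) dead-time constraints:
    every run of zeros between consecutive ones must have length >= d
    and <= k (k = float('inf') models the (d, infinity) case)."""
    ones = [i for i, b in enumerate(bits) if b == 1]
    for i, j in zip(ones, ones[1:]):
        gap = j - i - 1  # number of zero slots between successive pulses
        if gap < d or gap > k:
            return False
    return True

# Three PPM frames of M = 4 slots, one pulsed slot per frame.
frames = [[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0]]
stream = [b for frame in frames for b in frame]

print(satisfies_dk(stream, d=1, k=float('inf')))  # True (gaps of 5 and 2)
print(satisfies_dk(stream, d=3, k=float('inf')))  # False (a gap of 2 < d)
```

A constrained encoder would map information bits only onto pulse positions that keep this predicate true across frame boundaries.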

  11. Effects of location and timing of co-activated neurons in the auditory midbrain on cortical activity: implications for a new central auditory prosthesis

    NASA Astrophysics Data System (ADS)

    Straka, Małgorzata M.; McMahon, Melissa; Markovitz, Craig D.; Lim, Hubert H.

    2014-08-01

    Objective. An increasing number of deaf individuals are being implanted with central auditory prostheses, but their performance has generally been poorer than for cochlear implant users. The goal of this study is to investigate stimulation strategies for improving hearing performance with a new auditory midbrain implant (AMI). Previous studies have shown that repeated electrical stimulation of a single site in each isofrequency lamina of the central nucleus of the inferior colliculus (ICC) causes strong suppressive effects in elicited responses within the primary auditory cortex (A1). Here we investigate if improved cortical activity can be achieved by co-activating neurons with different timing and locations across an ICC lamina and if this cortical activity varies across A1. Approach. We electrically stimulated two sites at different locations across an isofrequency ICC lamina using varying delays in ketamine-anesthetized guinea pigs. We recorded and analyzed spike activity and local field potentials across different layers and locations of A1. Results. Co-activating two sites within an isofrequency lamina with short inter-pulse intervals (<5 ms) could elicit cortical activity that is enhanced beyond a linear summation of activity elicited by the individual sites. A significantly greater extent of normalized cortical activity was observed for stimulation of the rostral-lateral region of an ICC lamina compared to the caudal-medial region. We did not identify any location trends across A1, but the most cortical enhancement was observed in supragranular layers, suggesting further integration of the stimuli through the cortical layers. Significance. The topographic organization identified by this study provides further evidence for the presence of functional zones across an ICC lamina with locations consistent with those identified by previous studies. Clinically, these results suggest that co-activating different neural populations in the rostral-lateral ICC rather

  12. Adaptation to visual or auditory time intervals modulates the perception of visual apparent motion

    PubMed Central

    Zhang, Huihui; Chen, Lihan; Zhou, Xiaolin

    2012-01-01

    It is debated whether sub-second timing is subserved by a centralized mechanism or by the intrinsic properties of task-related neural activity in specific modalities (Ivry and Schlerf, 2008). By using a temporal adaptation task, we investigated whether adapting to different time intervals conveyed through stimuli in different modalities (i.e., frames of a visual Ternus display, visual blinking discs, or auditory beeps) would affect the subsequent implicit perception of visual timing, i.e., the inter-stimulus interval (ISI) between two frames in a Ternus display. The Ternus display can induce two percepts of apparent motion (AM), depending on the ISI between the two frames: "element motion" for short ISIs, in which the endmost disc is seen as moving back and forth while the middle disc at the overlapping or central position remains stationary; "group motion" for longer ISIs, in which both discs appear to move laterally as a whole. In Experiment 1, participants adapted to either the typical "element motion" (ISI = 50 ms) or the typical "group motion" (ISI = 200 ms). In Experiments 2 and 3, participants adapted to a time interval of 50 or 200 ms by observing a series of two paired blinking discs at the center of the screen (Experiment 2) or hearing a sequence of two paired beeps with a pitch of 1000 Hz (Experiment 3). In Experiment 4, participants adapted to sequences of paired beeps with either low pitches (500 Hz) or high pitches (5000 Hz). After adaptation in each trial, participants were presented with a Ternus probe in which the ISI between the two frames was equal to the transitional threshold of the two types of motion, as determined by a pretest. Results showed that adapting to the short time interval in all situations led to more reports of "group motion" in the subsequent Ternus probes; adapting to the long time interval, however, caused no aftereffect for visual adaptation but significantly more reports of group motion for

  13. Secular Slowing of Auditory Simple Reaction Time in Sweden (1959-1985).

    PubMed

    Madison, Guy; Woodley Of Menie, Michael A; Sänger, Justus

    2016-01-01

    There are indications that simple reaction time might have slowed in Western populations, based on both cohort- and multi-study comparisons. A possible limitation of the latter method in particular is measurement error stemming from methods variance, which results from the fact that instruments and experimental conditions change over time and between studies. We therefore set out to measure the simple auditory reaction time (SRT) of 7,081 individuals (2,997 males and 4,084 females) born in Sweden 1959-1985 (subjects were aged between 27 and 54 years at time of measurement). Depending on age cut-offs and adjustment for aging related slowing of SRT, the data indicate that SRT has increased by between 3 and 16 ms in the 27 birth years covered in the present sample. This slowing is unlikely to be explained by attrition, which was evaluated by comparing the general intelligence × birth-year interactions and standard deviations for both male participants and dropouts, utilizing military conscript cognitive ability data. The present result is consistent with previous studies employing alternative methods, and may indicate the operation of several synergistic factors, such as recent micro-evolutionary trends favoring lower g in Sweden and the effects of industrially produced neurotoxic substances on peripheral nerve conduction velocity.

  14. Secular Slowing of Auditory Simple Reaction Time in Sweden (1959–1985)

    PubMed Central

    Madison, Guy; Woodley of Menie, Michael A.; Sänger, Justus

    2016-01-01

    There are indications that simple reaction time might have slowed in Western populations, based on both cohort- and multi-study comparisons. A possible limitation of the latter method in particular is measurement error stemming from methods variance, which results from the fact that instruments and experimental conditions change over time and between studies. We therefore set out to measure the simple auditory reaction time (SRT) of 7,081 individuals (2,997 males and 4,084 females) born in Sweden 1959–1985 (subjects were aged between 27 and 54 years at time of measurement). Depending on age cut-offs and adjustment for aging related slowing of SRT, the data indicate that SRT has increased by between 3 and 16 ms in the 27 birth years covered in the present sample. This slowing is unlikely to be explained by attrition, which was evaluated by comparing the general intelligence × birth-year interactions and standard deviations for both male participants and dropouts, utilizing military conscript cognitive ability data. The present result is consistent with previous studies employing alternative methods, and may indicate the operation of several synergistic factors, such as recent micro-evolutionary trends favoring lower g in Sweden and the effects of industrially produced neurotoxic substances on peripheral nerve conduction velocity. PMID:27588000

  15. Somatosensory inputs modify auditory spike timing in dorsal cochlear nucleus principal cells

    PubMed Central

    Koehler, Seth D; Pradhan, Shashwati; Manis, Paul B; Shore, Susan E

    2010-01-01

    In addition to auditory inputs, dorsal cochlear nucleus (DCN) pyramidal cells in the guinea pig receive and respond to somatosensory inputs and perform multisensory integration. DCN pyramidal cells respond to sounds with characteristic spike-timing patterns that are partially controlled by rapidly inactivating potassium conductances. Deactivating these conductances can modify both spike rate and spike timing of responses to sound. Somatosensory pathways are known to modify response rates to subsequent acoustic stimuli, but their effect on spike timing is unknown. Here, we demonstrate that preceding tonal stimulation with spinal trigeminal nucleus (Sp5) stimulation significantly alters the first spike latency, the first interspike interval, and the average discharge regularity of firing evoked by the tone. These effects occur whether the neuron is excited or inhibited by Sp5 stimulation alone. Our results demonstrate that multisensory integration in DCN alters spike-timing representations of acoustic stimuli in pyramidal cells. These changes likely occur through synaptic modulation of intrinsic excitability or synaptic inhibition. PMID:21198989

  17. Neural Basis of the Time Window for Subjective Motor-Auditory Integration

    PubMed Central

    Toida, Koichi; Ueno, Kanako; Shimada, Sotaro

    2016-01-01

    Temporal contiguity between an action and corresponding auditory feedback is crucial to the perception of self-generated sound. However, the neural mechanisms underlying motor–auditory temporal integration are unclear. Here, we conducted four experiments with an oddball paradigm to examine the specific event-related potentials (ERPs) elicited by delayed auditory feedback for a self-generated action. The first experiment confirmed that a pitch-deviant auditory stimulus elicits mismatch negativity (MMN) and P300, both when it is generated passively and by the participant’s action. In our second and third experiments, we investigated the ERP components elicited by delayed auditory feedback for a self-generated action. We found that delayed auditory feedback elicited an enhancement of P2 (enhanced-P2) and a N300 component, which were apparently different from the MMN and P300 components observed in the first experiment. We further investigated the sensitivity of the enhanced-P2 and N300 to delay length in our fourth experiment. Strikingly, the amplitude of the N300 increased as a function of the delay length. Additionally, the N300 amplitude was significantly correlated with the conscious detection of the delay (the 50% detection point was around 200 ms), and hence reduction in the feeling of authorship of the sound (the sense of agency). In contrast, the enhanced-P2 was most prominent in short-delay (≤200 ms) conditions and diminished in long-delay conditions. Our results suggest that different neural mechanisms are employed for the processing of temporally deviant and pitch-deviant auditory feedback. Additionally, the temporal window for subjective motor–auditory integration is likely about 200 ms, as indicated by these auditory ERP components. PMID:26779000

  18. The GOES Time Code Service, 1974–2004: A Retrospective

    PubMed Central

    Lombardi, Michael A.; Hanson, D. Wayne

    2005-01-01

    NIST ended its Geostationary Operational Environmental Satellites (GOES) time code service at 0 hours, 0 minutes Coordinated Universal Time (UTC) on January 1, 2005. To commemorate the end of this historically significant service, this article provides a retrospective look at the GOES service and the important role it played in the history of satellite timekeeping. PMID:27308105

  19. Coded optical time domain reflectometry: principle and applications

    NASA Astrophysics Data System (ADS)

    Park, Namkyoo; Lee, Jeonghwan; Park, Jonghan; Shim, Jae Gwang; Yoon, Hosung; Kim, Jin Hee; Kim, Kyoungmin; Byun, Jae-Oh; Bolognini, Gabriele; Lee, Duckey; Di Pasquale, Fabrizio

    2007-11-01

    In this paper, we will briefly outline our contributions to the physical realization of coded OTDR, along with its principles, and also highlight recent key results related to its applications. For the communication network application, we report a multi-port / multi-wavelength, high-speed supervisory system for the in-service monitoring of a bidirectional WDM-PON transmission line with a capacity of up to 16 ports × 32 nodes (512 users). Monitoring of individual branch traces up to 60 km was achieved with the application of a 127-bit simplex code, corresponding to a 7.5 dB SNR coding gain, effectively reducing the measurement time by a factor of about 30 when compared to conventional average-mode OTDR. Transmission experiments showed negligible penalty from the monitoring system to the transmission signal quality at a 2.5 Gbps / 125 Mbps (down/upstream) data rate. As an application to sensor networks, a Raman scattering based coded-OTDR distributed temperature sensor system will be presented. Utilizing a 255-bit simplex-coded OTDR together with an optimized sensing link (composed of cascaded fibers with different Raman coefficients), significant enhancement in the interrogation distance (19.5 km from coding gain, and 9.6 km from link-combination optimization) was achieved, resulting in a total sensing range of 37 km (at 17 m / 3 K spatial/temperature resolution) employing a conventional off-the-shelf low-power (80 mW) laser diode.
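The 7.5 dB figure quoted for the 127-bit simplex code is consistent with the standard coded-OTDR gain formula G = (L + 1)/(2·sqrt(L)); that formula comes from the coded-OTDR literature rather than this abstract, so treat the sketch below as a plausibility check:

```python
import math

def simplex_gain_db(L):
    """SNR (coding) gain of an L-bit simplex-coded OTDR over single-pulse
    averaging: G = (L + 1) / (2 * sqrt(L)), expressed in dB."""
    return 10 * math.log10((L + 1) / (2 * math.sqrt(L)))

gain_db = simplex_gain_db(127)
# Averaging time scales as 1/SNR^2, so the equivalent speed-up is G^2.
speedup = (10 ** (gain_db / 10)) ** 2

print(round(gain_db, 1))  # 7.5  -- matches the coding gain quoted above
print(round(speedup))     # 32   -- the "about 30 times" shorter measurement
```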

  20. Using LDPC Code Constraints to Aid Recovery of Symbol Timing

    NASA Technical Reports Server (NTRS)

    Jones, Christopher; Villasenor, John; Lee, Dong-U; Valles, Esteban

    2008-01-01

    A method of utilizing information available in the constraints imposed by a low-density parity-check (LDPC) code has been proposed as a means of aiding the recovery of symbol timing in the reception of a binary-phase-shift-keying (BPSK) signal representing such a code in the presence of noise, timing error, and/or Doppler shift between the transmitter and the receiver. This method and the receiver architecture in which it would be implemented belong to a class of timing-recovery methods and corresponding receiver architectures characterized as pilotless in that they do not require transmission and reception of pilot signals. Acquisition and tracking of a signal of the type described above have traditionally been performed upstream of, and independently of, decoding and have typically involved utilization of a phase-locked loop (PLL). However, the LDPC decoding process, which is iterative, provides information that can be fed back to the timing-recovery receiver circuits to improve performance significantly over that attainable in the absence of such feedback. Prior methods of coupling LDPC decoding with timing recovery had focused on the use of output code words produced as the iterations progress. In contrast, in the present method, one exploits the information available from the metrics computed for the constraint nodes of an LDPC code during the decoding process. In addition, the method involves the use of a waveform model that captures, better than do the waveform models of the prior methods, distortions introduced by receiver timing errors and transmitter/receiver motions. An LDPC code is commonly represented by use of a bipartite graph containing two sets of nodes. In the graph corresponding to an (n,k) code, the n variable nodes correspond to the code word symbols and the n-k constraint nodes represent the constraints that the code places on the variable nodes in order for them to form a valid code word. The decoding procedure involves iterative computation
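The constraint-node information the method exploits can be illustrated on a toy parity-check matrix (a (7,4) Hamming code standing in for a real, much larger LDPC code; the helper name is hypothetical):

```python
# Toy parity-check matrix: rows are the constraint nodes of the bipartite
# graph, columns are the variable nodes (code word symbols).
H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def satisfied_checks(bits):
    """Count parity constraints satisfied by the current hard decisions.
    A correctly timed, correctly decoded word satisfies every row of H;
    symbol-timing error tends to lower this count, which is what makes
    constraint-node metrics usable as feedback for timing recovery."""
    return sum(sum(h * b for h, b in zip(row, bits)) % 2 == 0 for row in H)

codeword = [1, 0, 1, 1, 0, 1, 0]   # a valid word: all 3 checks pass
corrupted = [0] + codeword[1:]     # one flipped bit breaks two checks

print(satisfied_checks(codeword), satisfied_checks(corrupted))  # 3 1
```

In a soft-decision decoder the same idea is applied to the constraint-node metrics rather than to hard bits, but the feedback signal plays the same role.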

  1. Channel coding and time-diversity for optical wireless links.

    PubMed

    Xu, Fang; Khalighi, Ali; Caussé, Patrice; Bourennane, Salah

    2009-01-19

    Atmospheric turbulence can cause a significant performance degradation in free space optical communication systems. An efficient solution could be to exploit temporal diversity to improve the performance of the transmission link. Depending on the tolerable latency, we can benefit from some degree of time diversity, which we can exploit by employing channel coding and interleaving. In this paper, we investigate the efficiency of several channel coding techniques for different time diversity orders and turbulence conditions. We show that a simple convolutional code is a suitable choice in most cases as it makes a good compromise between decoding complexity and performance. We also study the receiver performance when the channel is estimated based on some training symbols.

  2. Coding of Electric Pulse Trains Presented through Cochlear Implants in the Auditory Midbrain of Awake Rabbit: Comparison with Anesthetized Preparations

    PubMed Central

    Hancock, Kenneth E.; Nam, Sung-Il; Delgutte, Bertrand

    2014-01-01

    Cochlear implant (CI) listeners show limits at high frequencies in tasks involving temporal processing such as rate pitch and interaural time difference discrimination. Similar limits have been observed in neural responses to electric stimulation in animals with CI; however, the upper limit of temporal coding of electric pulse train stimuli in the inferior colliculus (IC) of anesthetized animals is lower than the perceptual limit. We hypothesize that the upper limit of temporal neural coding has been underestimated in previous studies due to the confound of anesthesia. To test this hypothesis, we developed a chronic, awake rabbit preparation for single-unit studies of IC neurons with electric stimulation through CI. Stimuli were periodic trains of biphasic pulses with rates varying from 20 to 1280 pulses per second. We found that IC neurons in awake rabbits showed higher spontaneous activity and greater sustained responses, both excitatory and suppressive, at high pulse rates. Maximum pulse rates that elicited synchronized responses were approximately two times higher in awake rabbits than in earlier studies with anesthetized animals. Here, we demonstrate directly that anesthesia is a major factor underlying these differences by monitoring the responses of single units in one rabbit before and after injection of an ultra-short-acting barbiturate. In general, the physiological rate limits of IC neurons in the awake rabbit are more consistent with the psychophysical limits in human CI subjects compared with limits from anesthetized animals. PMID:24381283
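The "maximum pulse rates that elicited synchronized responses" are conventionally quantified with vector strength; a minimal sketch of that standard metric (the abstract does not spell out its exact analysis, so this is illustrative):

```python
import math

def vector_strength(spike_times_s, pulse_rate_hz):
    """Goldberg-Brown vector strength: 1.0 = spikes perfectly phase-locked
    to the pulse train, near 0 = unsynchronized. A standard way to define
    the upper limit of temporal coding in studies like this one."""
    if not spike_times_s:
        return 0.0
    phases = [2 * math.pi * pulse_rate_hz * t for t in spike_times_s]
    n = len(phases)
    x = sum(math.cos(p) for p in phases) / n
    y = sum(math.sin(p) for p in phases) / n
    return math.hypot(x, y)

locked = [k * 0.01 for k in range(50)]    # one spike per 100 pps pulse
uniform = [k * 0.0025 for k in range(4)]  # spikes spread evenly in phase

print(round(vector_strength(locked, 100.0), 3))   # 1.0
print(round(vector_strength(uniform, 100.0), 3))  # 0.0
```

The "rate limit" of a neuron is then the highest pulse rate at which vector strength remains significantly above chance.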

  3. Asynchronous inputs alter excitability, spike timing, and topography in primary auditory cortex.

    PubMed

    Pandya, Pritesh K; Moucha, Raluca; Engineer, Navzer D; Rathbun, Daniel L; Vazquez, Jessica; Kilgard, Michael P

    2005-05-01

    Correlation-based synaptic plasticity provides a potential cellular mechanism for learning and memory. Studies in the visual and somatosensory systems have shown that behavioral and surgical manipulation of sensory inputs leads to changes in cortical organization that are consistent with the operation of these learning rules. In this study, we examine how the organization of primary auditory cortex (A1) is altered by tones designed to decrease the average input correlation across the frequency map. After one month of separately pairing nucleus basalis stimulation with 2 and 14 kHz tones, a greater proportion of A1 neurons responded to frequencies below 2 kHz and above 14 kHz. Despite the expanded representation of these tones, cortical excitability was specifically reduced in the high and low frequency regions of A1, as evidenced by increased neural thresholds and decreased response strength. In contrast, in the frequency region between the two paired tones, driven rates were unaffected and spontaneous firing rate was increased. Neural response latencies were increased across the frequency map when nucleus basalis stimulation was associated with asynchronous activation of the high and low frequency regions of A1. This set of changes did not occur when pulsed noise bursts were paired with nucleus basalis stimulation. These results are consistent with earlier observations that sensory input statistics can shape cortical map organization and spike timing.

  4. Asynchronous inputs alter excitability, spike timing, and topography in primary auditory cortex

    PubMed Central

    Pandya, Pritesh K.; Moucha, Raluca; Engineer, Navzer D.; Rathbun, Daniel L.; Vazquez, Jessica; Kilgard, Michael P.

    2010-01-01

    Correlation-based synaptic plasticity provides a potential cellular mechanism for learning and memory. Studies in the visual and somatosensory systems have shown that behavioral and surgical manipulation of sensory inputs leads to changes in cortical organization that are consistent with the operation of these learning rules. In this study, we examine how the organization of primary auditory cortex (A1) is altered by tones designed to decrease the average input correlation across the frequency map. After one month of separately pairing nucleus basalis stimulation with 2 and 14 kHz tones, a greater proportion of A1 neurons responded to frequencies below 2 kHz and above 14 kHz. Despite the expanded representation of these tones, cortical excitability was specifically reduced in the high and low frequency regions of A1, as evidenced by increased neural thresholds and decreased response strength. In contrast, in the frequency region between the two paired tones, driven rates were unaffected and spontaneous firing rate was increased. Neural response latencies were increased across the frequency map when nucleus basalis stimulation was associated with asynchronous activation of the high and low frequency regions of A1. This set of changes did not occur when pulsed noise bursts were paired with nucleus basalis stimulation. These results are consistent with earlier observations that sensory input statistics can shape cortical map organization and spike timing. PMID:15855025

  5. Spike timing precision changes with spike rate adaptation in the owl's auditory space map

    PubMed Central

    Takahashi, Terry T.

    2015-01-01

    Spike rate adaptation (SRA) is a continuing change of responsiveness to ongoing stimuli, which is ubiquitous across species and levels of sensory systems. Under SRA, auditory responses to constant stimuli change over time, relaxing toward a long-term rate often over multiple timescales. With more variable stimuli, SRA causes the dependence of spike rate on sound pressure level to shift toward the mean level of recent stimulus history. A model based on subtractive adaptation (Benda J, Hennig RM. J Comput Neurosci 24: 113–136, 2008) shows that changes in spike rate and level dependence are mechanistically linked. Space-specific neurons in the barn owl's midbrain, when recorded under ketamine-diazepam anesthesia, showed these classical characteristics of SRA, while at the same time exhibiting changes in spike timing precision. Abrupt level increases of sinusoidally amplitude-modulated (SAM) noise initially led to spiking at higher rates with lower temporal precision. Spike rate and precision relaxed toward their long-term values with a time course similar to SRA, results that were also replicated by the subtractive model. Stimuli whose amplitude modulations (AMs) were not synchronous across carrier frequency evoked spikes in response to stimulus envelopes of a particular shape, characterized by the spectrotemporal receptive field (STRF). Again, abrupt stimulus level changes initially disrupted the temporal precision of spiking, which then relaxed along with SRA. We suggest that shifts in latency associated with stimulus level changes may differ between carrier frequency bands and underlie decreased spike precision. Thus SRA is manifest not simply as a change in spike rate but also as a change in the temporal precision of spiking. PMID:26269555
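The subtractive-adaptation model cited above (Benda & Hennig, 2008) can be caricatured in a few lines of Euler integration; the constants below are illustrative, not fitted to the owl data:

```python
# Minimal sketch of subtractive spike-rate adaptation: output rate is the
# stimulus drive minus an adaptation variable that slowly tracks the rate.

def simulate(drive, dt=0.001, tau=0.1):
    rates, a = [], 0.0
    for d in drive:
        r = max(0.0, d - a)        # subtractive adaptation
        a += dt * (r - a) / tau    # adaptation relaxes toward current rate
        rates.append(r)
    return rates

# Step increase in stimulus level at t = 0.5 s:
drive = [10.0] * 500 + [30.0] * 500
r = simulate(drive)

# The rate jumps at the level step, then relaxes as adaptation catches up:
print(round(r[499], 1), round(r[500], 1), round(r[-1], 1))  # 5.0 25.0 15.0
```

This reproduces the abstract's qualitative observation that responses to an abrupt level increase start high and relax toward a long-term rate; the model says nothing by itself about the accompanying changes in spike-timing precision.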

  6. Auditory distance coding in rabbit midbrain neurons and human perception: monaural amplitude modulation depth as a cue.

    PubMed

    Kim, Duck O; Zahorik, Pavel; Carney, Laurel H; Bishop, Brian B; Kuwada, Shigeyuki

    2015-04-01

    Mechanisms underlying sound source distance localization are not well understood. Here we tested the hypothesis that a novel mechanism can create monaural distance sensitivity: a combination of auditory midbrain neurons' sensitivity to amplitude modulation (AM) depth and distance-dependent loss of AM in reverberation. We used virtual auditory space (VAS) methods for sounds at various distances in anechoic and reverberant environments. Stimulus level was constant across distance. With increasing modulation depth, some rabbit inferior colliculus neurons increased firing rates whereas others decreased. These neurons exhibited monotonic relationships between firing rates and distance for monaurally presented noise when two conditions were met: (1) the sound had AM, and (2) the environment was reverberant. The firing rates as a function of distance remained approximately constant without AM in either environment and, in an anechoic condition, even with AM. We corroborated this finding by reproducing the distance sensitivity using a neural model. We also conducted a human psychophysical study using similar methods. Normal-hearing listeners reported perceived distance in response to monaural 1 octave 4 kHz noise source sounds presented at distances of 35-200 cm. We found parallels between the rabbit neural and human responses. In both, sound distance could be discriminated only if the monaural sound in reverberation had AM. These observations support the hypothesis. When other cues are available (e.g., in binaural hearing), how much the auditory system actually uses the AM as a distance cue remains to be determined.
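The proposed cue, loss of AM depth in reverberation, can be illustrated with a crude model in which reverberation acts as an exponential smoothing of the envelope; this is an illustrative stand-in, not the article's virtual-auditory-space simulation:

```python
import math

def am_depth(env):
    """(max - min) / (max + min) of an amplitude envelope:
    1.0 = fully modulated, 0.0 = flat."""
    return (max(env) - min(env)) / (max(env) + min(env))

dt, fm = 0.001, 10.0  # 1 kHz envelope sampling, 10 Hz amplitude modulation
env = [1 + math.sin(2 * math.pi * fm * k * dt) for k in range(2000)]

# Crude stand-in for reverberation: smear the envelope with a normalized
# exponentially decaying tail (tau plays the role of the room's decay time).
tau = 0.05
kernel = [math.exp(-j * dt / tau) for j in range(500)]
s = sum(kernel)
reverb = [sum(kernel[j] * env[k - j] for j in range(min(k + 1, 500))) / s
          for k in range(2000)]

# Reverberation reduces AM depth, the monaural distance cue in the abstract:
print(round(am_depth(env), 2), round(am_depth(reverb[600:]), 2))
```

A longer tail (larger tau), standing in for greater source distance, smears the envelope more and drives the measured depth further down.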

  7. Auditory Distance Coding in Rabbit Midbrain Neurons and Human Perception: Monaural Amplitude Modulation Depth as a Cue

    PubMed Central

    Zahorik, Pavel; Carney, Laurel H.; Bishop, Brian B.; Kuwada, Shigeyuki

    2015-01-01

    Mechanisms underlying sound source distance localization are not well understood. Here we tested the hypothesis that a novel mechanism can create monaural distance sensitivity: a combination of auditory midbrain neurons' sensitivity to amplitude modulation (AM) depth and distance-dependent loss of AM in reverberation. We used virtual auditory space (VAS) methods for sounds at various distances in anechoic and reverberant environments. Stimulus level was constant across distance. With increasing modulation depth, some rabbit inferior colliculus neurons increased firing rates whereas others decreased. These neurons exhibited monotonic relationships between firing rates and distance for monaurally presented noise when two conditions were met: (1) the sound had AM, and (2) the environment was reverberant. The firing rates as a function of distance remained approximately constant without AM in either environment and, in an anechoic condition, even with AM. We corroborated this finding by reproducing the distance sensitivity using a neural model. We also conducted a human psychophysical study using similar methods. Normal-hearing listeners reported perceived distance in response to monaural 1 octave 4 kHz noise source sounds presented at distances of 35–200 cm. We found parallels between the rabbit neural and human responses. In both, sound distance could be discriminated only if the monaural sound in reverberation had AM. These observations support the hypothesis. When other cues are available (e.g., in binaural hearing), how much the auditory system actually uses the AM as a distance cue remains to be determined. PMID:25834060

  8. Time Shifted PN Codes for CW Lidar, Radar, and Sonar

    NASA Technical Reports Server (NTRS)

    Campbell, Joel F. (Inventor); Prasad, Narasimha S. (Inventor); Harrison, Fenton W. (Inventor); Flood, Michael A. (Inventor)

    2013-01-01

    A continuous wave Light Detection and Ranging (CW LiDAR) system utilizes two or more laser frequencies and time- or range-shifted pseudorandom noise (PN) codes to discriminate between the laser frequencies. The performance of these codes can be improved by subtracting out the bias before processing. The CW LiDAR system may be mounted to an artificial satellite orbiting the Earth, and the relative strength of the return signal for each frequency can be utilized to determine the concentration of selected gases or other substances in the atmosphere.
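The discrimination principle, the same PN code at different time shifts, can be sketched at toy scale: circular correlation against the reference code separates the channels by lag (the 7-chip code and 3-chip shift below are illustrative, far smaller than a practical system):

```python
def mseq7():
    """Period-7 maximal-length PN sequence (polynomial x^3 + x + 1,
    recurrence a[n] = a[n-2] XOR a[n-3]), returned as bipolar +-1 chips."""
    a = [1, 0, 0]
    while len(a) < 7:
        a.append(a[-2] ^ a[-3])
    return [1 if bit else -1 for bit in a]

def circ_corr(x, ref):
    """Circular cross-correlation of x against ref; a received copy of
    ref delayed by d chips produces a correlation peak at lag d."""
    n = len(x)
    return [sum(x[i] * ref[(i - k) % n] for i in range(n)) for k in range(n)]

code = mseq7()
delayed = code[-3:] + code[:-3]                    # second channel: 3-chip shift
received = [a + b for a, b in zip(code, delayed)]  # both channels summed

print(circ_corr(received, code))  # [6, -2, -2, 6, -2, -2, -2]: peaks at lags 0 and 3
```

The m-sequence's two-valued autocorrelation (N at zero lag, -1 elsewhere) is what keeps the two channels' peaks cleanly separated; subtracting the code's bias, as the patent notes, further flattens the off-peak floor.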

  9. Effect of Eight Weekly Aerobic Training Program on Auditory Reaction Time and MaxVO[subscript 2] in Visual Impairments

    ERIC Educational Resources Information Center

    Taskin, Cengiz

    2016-01-01

    The aim of study was to examine the effect of eight weekly aerobic exercises on auditory reaction time and MaxVO[subscript 2] in visual impairments. Forty visual impairment children that have blind 3 classification from the Turkey, experimental group; (age = 15.60 ± 1.10 years; height = 164.15 ± 4.88 cm; weight = 66.60 ± 4.77 kg) for twenty…

  10. Bimodal stimulus timing-dependent plasticity in primary auditory cortex is altered after noise exposure with and without tinnitus.

    PubMed

    Basura, Gregory J; Koehler, Seth D; Shore, Susan E

    2015-12-01

    Central auditory circuits are influenced by the somatosensory system, a relationship that may underlie tinnitus generation. In the guinea pig dorsal cochlear nucleus (DCN), pairing spinal trigeminal nucleus (Sp5) stimulation with tones at specific intervals and orders facilitated or suppressed subsequent tone-evoked neural responses, reflecting spike timing-dependent plasticity (STDP). Furthermore, after noise-induced tinnitus, bimodal responses in DCN were shifted from Hebbian to anti-Hebbian timing rules with less discrete temporal windows, suggesting a role for bimodal plasticity in tinnitus. Here, we aimed to determine if multisensory STDP principles like those in DCN also exist in primary auditory cortex (A1), and whether they change following noise-induced tinnitus. Tone-evoked and spontaneous neural responses were recorded before and 15 min after bimodal stimulation in which the intervals and orders of auditory-somatosensory stimuli were randomized. Tone-evoked and spontaneous firing rates were influenced by the interval and order of the bimodal stimuli, and in sham-controls Hebbian-like timing rules predominated as was seen in DCN. In noise-exposed animals with and without tinnitus, timing rules shifted away from those found in sham-controls to more anti-Hebbian rules. Only those animals with evidence of tinnitus showed increased spontaneous firing rates, a purported neurophysiological correlate of tinnitus in A1. Together, these findings suggest that bimodal plasticity is also evident in A1 following noise damage and may have implications for tinnitus generation and therapeutic intervention across the central auditory circuit.

  11. Temporal properties of diagnosis code time series in aggregate.

    PubMed

    Perotte, Adler; Hripcsak, George

    2013-03-01

    Time series are essential to health data research and data mining. We aim to study the properties of one of the more commonly available but historically unreliable types of data: administrative diagnoses in the form of the International Classification of Diseases, Ninth Revision (ICD9) codes. We use differential entropy of ICD9 code time series as a surrogate measure for disease time course and also explore Gaussian kernel smoothing to characterize the time course of diseases in a more fine-grained way. Compared to a gold standard created by a panel of clinicians, the first model classified diseases into acute and chronic groups with a receiver operating characteristic area under curve of 0.83. In the second model, several characteristic temporal profiles were observed including permanent, chronic, and acute. In addition, condition dynamics such as the refractory period for giving birth following childbirth were observed. These models demonstrate that ICD9 codes, despite well-documented concerns, contain valid and potentially valuable temporal information.

  12. The role of GABAergic inhibition in processing of interaural time difference in the owl's auditory system.

    PubMed

    Fujita, I; Konishi, M

    1991-03-01

    The barn owl uses interaural time differences (ITDs) to localize the azimuthal position of sound. ITDs are processed by an anatomically distinct pathway in the brainstem. Neuronal selectivity for ITD is generated in the nucleus laminaris (NL) and conveyed to both the anterior portion of the ventral nucleus of the lateral lemniscus (VLVa) and the central (ICc) and external (ICx) nuclei of the inferior colliculus. With tonal stimuli, neurons in all regions are found to respond maximally not only to the real ITD, but also to ITDs that differ by integer multiples of the tonal period. This phenomenon, phase ambiguity, does not occur when ICx neurons are stimulated with noise. The main aim of this study was to determine the role of GABAergic inhibition in the processing of ITDs. Selectivity for ITD is similar in the NL and VLVa and improves in the ICc and ICx. Iontophoresis of bicuculline methiodide (BMI), a selective GABAA antagonist, decreased the ITD selectivity of ICc and ICx neurons, but did not affect that of VLVa neurons. Responses of VLVa and ICc neurons to unfavorable ITDs were below the monaural response levels. BMI raised both binaural responses to unfavorable ITDs and monaural responses, though the former remained smaller than the latter. During BMI application, ICx neurons showed phase ambiguity to noise stimuli and no longer responded to a unique ITD. BMI increased the response magnitude and changed the temporal discharge patterns in the VLVa, ICc, and ICx. Iontophoretically applied GABA exerted effects opposite to those of BMI, and the effects could be antagonized with simultaneous application of BMI. These results suggest that GABAergic inhibition (1) sharpens ITD selectivity in the ICc and ICx, (2) contributes to the elimination of phase ambiguity in the ICx, and (3) controls response magnitude and temporal characteristics in the VLVa, ICc, and ICx. Through these actions, GABAergic inhibition shapes the horizontal dimension of the auditory receptive

  13. Time code dissemination experiment via the SIRIO-1 VHF transponder

    NASA Technical Reports Server (NTRS)

    Detoma, E.; Gobbo, G.; Leschiutta, S.; Pettiti, V.

    1982-01-01

    An experiment to evaluate the possibility of disseminating a time code via the SIRIO-1 satellite, by using the onboard VHF repeater is described. The precision in the synchronization of remote clocks was expected to be of the order of 0.1 to 1 ms. The RF carrier was in the VHF band, so that low cost receivers could be used and then a broader class of users could be served. An already existing repeater, even if not designed specifically for communications could be utilized; the operation of this repeater was not intended to affect any other function of the spacecraft (both the SHF repeater and the VHF telemetry link were active during the time code dissemination via the VHF transponder).

  14. Auditory Learning Using a Portable Real-Time Vocoder: Preliminary Findings

    ERIC Educational Resources Information Center

    Casserly, Elizabeth D.; Pisoni, David B.

    2015-01-01

    Purpose: Although traditional study of auditory training has been in controlled laboratory settings, interest has been increasing in more interactive options. The authors examine whether such interactive training can result in short-term perceptual learning, and the range of perceptual skills it impacts. Method: Experiments 1 (N = 37) and 2 (N =…

  15. The Time-Course of Auditory and Visual Distraction Effects in a New Crossmodal Paradigm

    ERIC Educational Resources Information Center

    Bendixen, Alexandra; Grimm, Sabine; Deouell, Leon Y.; Wetzel, Nicole; Madebach, Andreas; Schroger, Erich

    2010-01-01

    Vision often dominates audition when attentive processes are involved (e.g., the ventriloquist effect), yet little is known about the relative potential of the two modalities to initiate a "break through of the unattended". The present study was designed to systematically compare the capacity of task-irrelevant auditory and visual events to…

  16. A Latent Consolidation Phase in Auditory Identification Learning: Time in the Awake State Is Sufficient

    ERIC Educational Resources Information Center

    Roth, Daphne Ari-Even; Kishon-Rabin, Liat; Hildesheimer, Minka; Karni, Avi

    2005-01-01

    Large gains in performance, evolving hours after practice has terminated, were reported in a number of visual and some motor learning tasks, as well as recently in an auditory nonverbal discrimination task. It was proposed that these gains reflect a latent phase of experience-triggered memory consolidation in human skill learning. It is not clear,…

  17. Effect of Auditory Motion Velocity on Reaction Time and Cortical Processes

    ERIC Educational Resources Information Center

    Getzmann, Stephan

    2009-01-01

    The study investigated the processing of sound motion, employing a psychophysical motion discrimination task in combination with electroencephalography. Following stationary auditory stimulation from a central space position, the onset of left- and rightward motion elicited a specific cortical response that was lateralized to the hemisphere…

  18. Effects of Location, Frequency Region, and Time Course of Selective Attention on Auditory Scene Analysis

    ERIC Educational Resources Information Center

    Cusack, Rhodri; Decks, John; Aikman, Genevieve; Carlyon, Robert P.

    2004-01-01

    Often, the sound arriving at the ears is a mixture from many different sources, but only 1 is of interest. To assist with selection, the auditory system structures the incoming input into streams, each of which ideally corresponds to a single source. Some authors have argued that this process of streaming is automatic and invariant, but recent…

  19. Code-Time Diversity for Direct Sequence Spread Spectrum Systems

    PubMed Central

    Hassan, A. Y.

    2014-01-01

    Time diversity is achieved in direct sequence spread spectrum by receiving different faded delayed copies of the transmitted symbols from different uncorrelated channel paths when the transmission signal bandwidth is greater than the coherence bandwidth of the channel. In this paper, a new time diversity scheme is proposed for spread spectrum systems. It is called code-time diversity. In this new scheme, N spreading codes are used to transmit one data symbol over N successive symbols interval. The diversity order in the proposed scheme equals to the number of the used spreading codes N multiplied by the number of the uncorrelated paths of the channel L. The paper represents the transmitted signal model. Two demodulators structures will be proposed based on the received signal models from Rayleigh flat and frequency selective fading channels. Probability of error in the proposed diversity scheme is also calculated for the same two fading channels. Finally, simulation results are represented and compared with that of maximal ration combiner (MRC) and multiple-input and multiple-output (MIMO) systems. PMID:24982925

  20. Reducing EnergyPlus Run Time For Code Compliance Tools

    SciTech Connect

    Athalye, Rahul A.; Gowri, Krishnan; Schultz, Robert W.; Glazer, Jason

    2014-09-12

    Integration of the EnergyPlus ™ simulation engine into performance-based code compliance software raises a concern about simulation run time, which impacts timely feedback of compliance results to the user. EnergyPlus annual simulations for proposed and code baseline building models, and mechanical equipment sizing result in simulation run times beyond acceptable limits. This paper presents a study that compares the results of a shortened simulation time period using 4 weeks of hourly weather data (one per quarter), to an annual simulation using full 52 weeks of hourly weather data. Three representative building types based on DOE Prototype Building Models and three climate zones were used for determining the validity of using a shortened simulation run period. Further sensitivity analysis and run time comparisons were made to evaluate the robustness and run time savings of using this approach. The results of this analysis show that the shortened simulation run period provides compliance index calculations within 1% of those predicted using annual simulation results, and typically saves about 75% of simulation run time.

  1. Transformation from a pure time delay to a mixed time and phase delay representation in the auditory forebrain pathway.

    PubMed

    Vonderschen, Katrin; Wagner, Hermann

    2012-04-25

    Birds and mammals exploit interaural time differences (ITDs) for sound localization. Subsequent to ITD detection by brainstem neurons, ITD processing continues in parallel midbrain and forebrain pathways. In the barn owl, both ITD detection and processing in the midbrain are specialized to extract ITDs independent of frequency, which amounts to a pure time delay representation. Recent results have elucidated different mechanisms of ITD detection in mammals, which lead to a representation of small ITDs in high-frequency channels and large ITDs in low-frequency channels, resembling a phase delay representation. However, the detection mechanism does not prevent a change in ITD representation at higher processing stages. Here we analyze ITD tuning across frequency channels with pure tone and noise stimuli in neurons of the barn owl's auditory arcopallium, a nucleus at the endpoint of the forebrain pathway. To extend the analysis of ITD representation across frequency bands to a large neural population, we employed Fourier analysis for the spectral decomposition of ITD curves recorded with noise stimuli. This method was validated using physiological as well as model data. We found that low frequencies convey sensitivity to large ITDs, whereas high frequencies convey sensitivity to small ITDs. Moreover, different linear phase frequency regimes in the high-frequency and low-frequency ranges suggested an independent convergence of inputs from these frequency channels. Our results are consistent with ITD being remodeled toward a phase delay representation along the forebrain pathway. This indicates that sensory representations may undergo substantial reorganization, presumably in relation to specific behavioral output. PMID:22539852

  2. A scalable population code for time in the striatum.

    PubMed

    Mello, Gustavo B M; Soares, Sofia; Paton, Joseph J

    2015-05-01

    To guide behavior and learn from its consequences, the brain must represent time over many scales. Yet, the neural signals used to encode time in the seconds-to-minute range are not known. The striatum is a major input area of the basal ganglia associated with learning and motor function. Previous studies have also shown that the striatum is necessary for normal timing behavior. To address how striatal signals might be involved in timing, we recorded from striatal neurons in rats performing an interval timing task. We found that neurons fired at delays spanning tens of seconds and that this pattern of responding reflected the interaction between time and the animals' ongoing sensorimotor state. Surprisingly, cells rescaled responses in time when intervals changed, indicating that striatal populations encoded relative time. Moreover, time estimates decoded from activity predicted timing behavior as animals adjusted to new intervals, and disrupting striatal function led to a decrease in timing performance. These results suggest that striatal activity forms a scalable population code for time, providing timing signals that animals use to guide their actions.

  3. EEG alpha spindles and prolonged brake reaction times during auditory distraction in an on-road driving study.

    PubMed

    Sonnleitner, Andreas; Treder, Matthias Sebastian; Simon, Michael; Willmann, Sven; Ewald, Arne; Buchner, Axel; Schrauf, Michael

    2014-01-01

    Driver distraction is responsible for a substantial number of traffic accidents. This paper describes the impact of an auditory secondary task on drivers' mental states during a primary driving task. N=20 participants performed the test procedure in a car following task with repeated forced braking on a non-public test track. Performance measures (provoked reaction time to brake lights) and brain activity (EEG alpha spindles) were analyzed to describe distracted drivers. Further, a classification approach was used to investigate whether alpha spindles can predict drivers' mental states. Results show that reaction times and alpha spindle rate increased with time-on-task. Moreover, brake reaction times and alpha spindle rate were significantly higher while driving with auditory secondary task opposed to driving only. In single-trial classification, a combination of spindle parameters yielded a median classification error of about 8% in discriminating the distracted from the alert driving. Reduced driving performance (i.e., prolonged brake reaction times) during increased cognitive load is assumed to be indicated by EEG alpha spindles, enabling the quantification of driver distraction in experiments on public roads without verbally assessing the drivers' mental states. PMID:24144496

  4. Change in Speech Perception and Auditory Evoked Potentials over Time after Unilateral Cochlear Implantation in Postlingually Deaf Adults.

    PubMed

    Purdy, Suzanne C; Kelly, Andrea S

    2016-02-01

    Speech perception varies widely across cochlear implant (CI) users and typically improves over time after implantation. There is also some evidence for improved auditory evoked potentials (shorter latencies, larger amplitudes) after implantation but few longitudinal studies have examined the relationship between behavioral and evoked potential measures after implantation in postlingually deaf adults. The relationship between speech perception and auditory evoked potentials was investigated in newly implanted cochlear implant users from the day of implant activation to 9 months postimplantation, on five occasions, in 10 adults age 27 to 57 years who had been bilaterally profoundly deaf for 1 to 30 years prior to receiving a unilateral CI24 cochlear implant. Changes over time in middle latency response (MLR), mismatch negativity, and obligatory cortical auditory evoked potentials and word and sentence speech perception scores were examined. Speech perception improved significantly over the 9-month period. MLRs varied and showed no consistent change over time. Three participants aged in their 50s had absent MLRs. The pattern of change in N1 amplitudes over the five visits varied across participants. P2 area increased significantly for 1,000- and 4,000-Hz tones but not for 250 Hz. The greatest change in P2 area occurred after 6 months of implant experience. Although there was a trend for mismatch negativity peak latency to reduce and width to increase after 3 months of implant experience, there was considerable variability and these changes were not significant. Only 60% of participants had a detectable mismatch initially; this increased to 100% at 9 months. The continued change in P2 area over the period evaluated, with a trend for greater change for right hemisphere recordings, is consistent with the pattern of incremental change in speech perception scores over time. 
MLR, N1, and mismatch negativity changes were inconsistent and hence P2 may be a more robust measure

  5. The topography of frequency and time representation in primate auditory cortices

    PubMed Central

    Baumann, Simon; Joly, Olivier; Rees, Adrian; Petkov, Christopher I; Sun, Li; Thiele, Alexander; Griffiths, Timothy D

    2015-01-01

    Natural sounds can be characterised by their spectral content and temporal modulation, but how the brain is organized to analyse these two critical sound dimensions remains uncertain. Using functional magnetic resonance imaging, we demonstrate a topographical representation of amplitude modulation rate in the auditory cortex of awake macaques. The representation of this temporal dimension is organized in approximately concentric bands of equal rates across the superior temporal plane in both hemispheres, progressing from high rates in the posterior core to low rates in the anterior core and lateral belt cortex. In A1 the resulting gradient of modulation rate runs approximately perpendicular to the axis of the tonotopic gradient, suggesting an orthogonal organisation of spectral and temporal sound dimensions. In auditory belt areas this relationship is more complex. The data suggest a continuous representation of modulation rate across several physiological areas, in contradistinction to a separate representation of frequency within each area. DOI: http://dx.doi.org/10.7554/eLife.03256.001 PMID:25590651

  6. Time-Dependent, Parallel Neutral Particle Transport Code System.

    2009-09-10

    Version 00 PARTISN (PARallel, TIme-Dependent SN) is the evolutionary successor to CCC-547/DANTSYS. The PARTISN code package is a modular computer program package designed to solve the time-independent or dependent multigroup discrete ordinates form of the Boltzmann transport equation in several different geometries. The modular construction of the package separates the input processing, the transport equation solving, and the post processing (or edit) functions into distinct code modules: the Input Module, the Solver Module, and themore » Edit Module, respectively. PARTISN is the evolutionary successor to the DANTSYSTM code system package. The Input and Edit Modules in PARTISN are very similar to those in DANTSYS. However, unlike DANTSYS, the Solver Module in PARTISN contains one, two, and three-dimensional solvers in a single module. In addition to the diamond-differencing method, the Solver Module also has Adaptive Weighted Diamond-Differencing (AWDD), Linear Discontinuous (LD), and Exponential Discontinuous (ED) spatial differencing methods. The spatial mesh may consist of either a standard orthogonal mesh or a block adaptive orthogonal mesh. The Solver Module may be run in parallel for two and three dimensional problems. One can now run 1-D problems in parallel using Energy Domain Decomposition (triggered by Block 5 input keyword npeg>0). EDD can also be used in 2-D/3-D with or without our standard Spatial Domain Decomposition. Both the static (fixed source or eigenvalue) and time-dependent forms of the transport equation are solved in forward or adjoint mode. In addition, PARTISN now has a probabilistic mode for Probability of Initiation (static) and Probability of Survival (dynamic) calculations. Vacuum, reflective, periodic, white, or inhomogeneous boundary conditions are solved. General anisotropic scattering and inhomogeneous sources are permitted. PARTISN solves the transport equation on orthogonal (single level or block-structured AMR) grids in 1-D

  7. Incorporating Auditory Models in Speech/Audio Applications

    NASA Astrophysics Data System (ADS)

    Krishnamoorthi, Harish

    2011-12-01

    Following the success in incorporating perceptual models in audio coding algorithms, their application in other speech/audio processing systems is expanding. In general, all perceptual speech/audio processing algorithms involve minimization of an objective function that directly/indirectly incorporates properties of human perception. This dissertation primarily investigates the problems associated with directly embedding an auditory model in the objective function formulation and proposes possible solutions to overcome high complexity issues for use in real-time speech/audio algorithms. Specific problems addressed in this dissertation include: 1) the development of approximate but computationally efficient auditory model implementations that are consistent with the principles of psychoacoustics, 2) the development of a mapping scheme that allows synthesizing a time/frequency domain representation from its equivalent auditory model output. The first problem is aimed at addressing the high computational complexity involved in solving perceptual objective functions that require repeated application of auditory model for evaluation of different candidate solutions. In this dissertation, a frequency pruning and a detector pruning algorithm is developed that efficiently implements the various auditory model stages. The performance of the pruned model is compared to that of the original auditory model for different types of test signals in the SQAM database. Experimental results indicate only a 4-7% relative error in loudness while attaining up to 80-90 % reduction in computational complexity. Similarly, a hybrid algorithm is developed specifically for use with sinusoidal signals and employs the proposed auditory pattern combining technique together with a look-up table to store representative auditory patterns. 
The second problem obtains an estimate of the auditory representation that minimizes a perceptual objective function and transforms the auditory pattern back to

  8. Time and Category Information in Pattern-Based Codes

    PubMed Central

    Eyherabide, Hugo Gabriel; Samengo, Inés

    2010-01-01

    Sensory stimuli are usually composed of different features (the what) appearing at irregular times (the when). Neural responses often use spike patterns to represent sensory information. The what is hypothesized to be encoded in the identity of the elicited patterns (the pattern categories), and the when, in the time positions of patterns (the pattern timing). However, this standard view is oversimplified. In the real world, the what and the when might not be separable concepts, for instance, if they are correlated in the stimulus. In addition, neuronal dynamics can condition the pattern timing to be correlated with the pattern categories. Hence, timing and categories of patterns may not constitute independent channels of information. In this paper, we assess the role of spike patterns in the neural code, irrespective of the nature of the patterns. We first define information-theoretical quantities that allow us to quantify the information encoded by different aspects of the neural response. We also introduce the notion of synergy/redundancy between time positions and categories of patterns. We subsequently establish the relation between the what and the when in the stimulus with the timing and the categories of patterns. To that aim, we quantify the mutual information between different aspects of the stimulus and different aspects of the response. This formal framework allows us to determine the precise conditions under which the standard view holds, as well as the departures from this simple case. Finally, we study the capability of different response aspects to represent the what and the when in the neural response. PMID:21151371

  9. Long-range correlation properties in timing of skilled piano performance: the influence of auditory feedback and deep brain stimulation

    PubMed Central

    Herrojo Ruiz, María; Hong, Sang Bin; Hennig, Holger; Altenmüller, Eckart; Kühn, Andrea A.

    2014-01-01

    Unintentional timing deviations during musical performance can be conceived of as timing errors. However, recent research on humanizing computer-generated music has demonstrated that timing fluctuations that exhibit long-range temporal correlations (LRTC) are preferred by human listeners. This preference can be accounted for by the ubiquitous presence of LRTC in human tapping and rhythmic performances. Interestingly, the manifestation of LRTC in tapping behavior seems to be driven in a subject-specific manner by the LRTC properties of resting-state background cortical oscillatory activity. In this framework, the current study aimed to investigate whether propagation of timing deviations during the skilled, memorized piano performance (without metronome) of 17 professional pianists exhibits LRTC and whether the structure of the correlations is influenced by the presence or absence of auditory feedback. As an additional goal, we set out to investigate the influence of altering the dynamics along the cortico-basal-ganglia-thalamo-cortical network via deep brain stimulation (DBS) on the LRTC properties of musical performance. Specifically, we investigated temporal deviations during the skilled piano performance of a non-professional pianist who was treated with subthalamic-deep brain stimulation (STN-DBS) due to severe Parkinson's disease, with predominant tremor affecting his right upper extremity. In the tremor-affected right hand, the timing fluctuations of the performance exhibited random correlations with DBS OFF. By contrast, DBS restored long-range dependency in the temporal fluctuations, corresponding with the general motor improvement on DBS. Overall, the present investigations demonstrate the presence of LRTC in skilled piano performances, indicating that unintentional temporal deviations are correlated over a wide range of time scales. This phenomenon is stable after removal of the auditory feedback, but is altered by STN-DBS, which suggests that cortico

  10. Long-range correlation properties in timing of skilled piano performance: the influence of auditory feedback and deep brain stimulation.

    PubMed

    Herrojo Ruiz, María; Hong, Sang Bin; Hennig, Holger; Altenmüller, Eckart; Kühn, Andrea A

    2014-01-01

    Unintentional timing deviations during musical performance can be conceived of as timing errors. However, recent research on humanizing computer-generated music has demonstrated that timing fluctuations that exhibit long-range temporal correlations (LRTC) are preferred by human listeners. This preference can be accounted for by the ubiquitous presence of LRTC in human tapping and rhythmic performances. Interestingly, the manifestation of LRTC in tapping behavior seems to be driven in a subject-specific manner by the LRTC properties of resting-state background cortical oscillatory activity. In this framework, the current study aimed to investigate whether propagation of timing deviations during the skilled, memorized piano performance (without metronome) of 17 professional pianists exhibits LRTC and whether the structure of the correlations is influenced by the presence or absence of auditory feedback. As an additional goal, we set out to investigate the influence of altering the dynamics along the cortico-basal-ganglia-thalamo-cortical network via deep brain stimulation (DBS) on the LRTC properties of musical performance. Specifically, we investigated temporal deviations during the skilled piano performance of a non-professional pianist who was treated with subthalamic-deep brain stimulation (STN-DBS) due to severe Parkinson's disease, with predominant tremor affecting his right upper extremity. In the tremor-affected right hand, the timing fluctuations of the performance exhibited random correlations with DBS OFF. By contrast, DBS restored long-range dependency in the temporal fluctuations, corresponding with the general motor improvement on DBS. Overall, the present investigations demonstrate the presence of LRTC in skilled piano performances, indicating that unintentional temporal deviations are correlated over a wide range of time scales. This phenomenon is stable after removal of the auditory feedback, but is altered by STN-DBS, which suggests that cortico

  11. Potassium conductance dynamics confer robust spike-time precision in a neuromorphic model of the auditory brain stem

    PubMed Central

    Boahen, Kwabena

    2013-01-01

    A fundamental question in neuroscience is how neurons perform precise operations despite inherent variability. This question also applies to neuromorphic engineering, where low-power microchips emulate the brain using large populations of diverse silicon neurons. Biological neurons in the auditory pathway display precise spike timing, critical for sound localization and interpretation of complex waveforms such as speech, even though they are a heterogeneous population. Silicon neurons are also heterogeneous, due to a key design constraint in neuromorphic engineering: smaller transistors offer lower power consumption and more neurons per unit area of silicon, but also more variability between transistors and thus between silicon neurons. Utilizing this variability in a neuromorphic model of the auditory brain stem with 1,080 silicon neurons, we found that a low-voltage-activated potassium conductance (gKL) enables precise spike timing via two mechanisms: statically reducing the resting membrane time constant and dynamically suppressing late synaptic inputs. The relative contribution of these two mechanisms is unknown because blocking gKL in vitro eliminates dynamic adaptation but also lengthens the membrane time constant. We replaced gKL with a static leak in silico to recover the short membrane time constant and found that silicon neurons could mimic the spike-time precision of their biological counterparts, but only over a narrow range of stimulus intensities and biophysical parameters. The dynamics of gKL were required for precise spike timing robust to stimulus variation across a heterogeneous population of silicon neurons, thus explaining how neural and neuromorphic systems may perform precise operations despite inherent variability. PMID:23554436

  12. Long-range correlation properties in timing of skilled piano performance: the influence of auditory feedback and deep brain stimulation.

    PubMed

    Herrojo Ruiz, María; Hong, Sang Bin; Hennig, Holger; Altenmüller, Eckart; Kühn, Andrea A

    2014-01-01

    Unintentional timing deviations during musical performance can be conceived of as timing errors. However, recent research on humanizing computer-generated music has demonstrated that timing fluctuations that exhibit long-range temporal correlations (LRTC) are preferred by human listeners. This preference can be accounted for by the ubiquitous presence of LRTC in human tapping and rhythmic performances. Interestingly, the manifestation of LRTC in tapping behavior seems to be driven in a subject-specific manner by the LRTC properties of resting-state background cortical oscillatory activity. In this framework, the current study aimed to investigate whether propagation of timing deviations during the skilled, memorized piano performance (without metronome) of 17 professional pianists exhibits LRTC and whether the structure of the correlations is influenced by the presence or absence of auditory feedback. As an additional goal, we set out to investigate the influence of altering the dynamics along the cortico-basal-ganglia-thalamo-cortical network via deep brain stimulation (DBS) on the LRTC properties of musical performance. Specifically, we investigated temporal deviations during the skilled piano performance of a non-professional pianist who was treated with subthalamic-deep brain stimulation (STN-DBS) due to severe Parkinson's disease, with predominant tremor affecting his right upper extremity. In the tremor-affected right hand, the timing fluctuations of the performance exhibited random correlations with DBS OFF. By contrast, DBS restored long-range dependency in the temporal fluctuations, corresponding with the general motor improvement on DBS. Overall, the present investigations demonstrate the presence of LRTC in skilled piano performances, indicating that unintentional temporal deviations are correlated over a wide range of time scales. This phenomenon is stable after removal of the auditory feedback, but is altered by STN-DBS, which suggests that cortico
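    LRTC in timing series such as inter-keystroke deviations are conventionally quantified with detrended fluctuation analysis (DFA), where the scaling exponent is near 0.5 for uncorrelated fluctuations and above 0.5 for long-range correlations. The abstract does not name its estimator, so the following is a generic DFA sketch, not the authors' analysis pipeline; window scales are arbitrary choices.

```python
import numpy as np

def dfa_exponent(x, scales=(16, 32, 64, 128, 256)):
    """Detrended fluctuation analysis: slope of log F(n) versus log n."""
    y = np.cumsum(x - np.mean(x))              # integrated profile
    fluct = []
    for n in scales:
        f = []
        for i in range(len(y) // n):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            coef = np.polyfit(t, seg, 1)       # linear detrend per window
            f.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        fluct.append(np.sqrt(np.mean(f)))
    slope, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return slope

rng = np.random.default_rng(0)
white = rng.standard_normal(8192)   # stand-in for uncorrelated timing deviations
```

    For `white`, the exponent comes out near 0.5 (the "random correlations" reported with DBS OFF); a cumulatively summed series yields an exponent near 1.5, and LRTC series fall in between.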

  13. Development of the auditory system

    PubMed Central

    Litovsky, Ruth

    2015-01-01

    Auditory development involves changes in the peripheral and central nervous system along the auditory pathways, and these occur naturally, and in response to stimulation. Human development occurs along a trajectory that can last decades, and is studied using behavioral psychophysics, as well as physiologic measurements with neural imaging. The auditory system constructs a perceptual space that takes information from objects and groups, segregates sounds, and provides meaning and access to communication tools such as language. Auditory signals are processed in a series of analysis stages, from peripheral to central. Coding of information has been studied for features of sound, including frequency, intensity, loudness, and location, in quiet and in the presence of maskers. In the latter case, the ability of the auditory system to perform an analysis of the scene becomes highly relevant. While some basic abilities are well developed at birth, there is a clear prolonged maturation of auditory development well into the teenage years. Maturation involves auditory pathways. However, non-auditory changes (attention, memory, cognition) play an important role in auditory development. The ability of the auditory system to adapt in response to novel stimuli is a key feature of development throughout the nervous system, known as neural plasticity. PMID:25726262

  14. Application of satellite time transfer in autonomous spacecraft clocks. [binary time code

    NASA Technical Reports Server (NTRS)

    Chi, A. R.

    1979-01-01

    The conceptual design of a spacecraft clock that will provide a standard time scale for experimenters in future spacecraft, and that can be synchronized to a time scale without the need for additional calibration and validation, is described. The time distribution to the users is handled through onboard computers, without human intervention for extended periods. A group parallel binary code, under consideration for onboard use, is discussed. Each group in the code can easily be truncated. The autonomously operated clock not only achieves simpler procedures and shorter lead times for data processing, but also contributes to spacecraft autonomy for onboard navigation and data packetization. The clock can be used to control the sensors in a spacecraft, to compare with another time signal such as that from the Global Positioning System, and, if cost is not a consideration, can be used on the ground at remote sites for timekeeping and control.
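    The key property of a group code is that each unit occupies its own fixed-width binary field, so trailing (fine-resolution) groups can be dropped without re-decoding the rest. The toy sketch below illustrates the idea only; the field layout and widths are invented for illustration and are not the actual format under consideration in the report.

```python
def encode_time(days, hours, minutes, seconds):
    """Pack each time unit into its own fixed-width binary group.
    Widths are illustrative, not the actual spacecraft-clock format."""
    groups = [(days, 16), (hours, 5), (minutes, 6), (seconds, 6)]
    return [format(value, f"0{width}b") for value, width in groups]

def decode_time(groups):
    """Decode however many groups survive truncation."""
    return [int(g, 2) for g in groups]

code = encode_time(days=123, hours=7, minutes=45, seconds=30)
coarse = decode_time(code[:2])   # truncated to day + hour resolution
```

    Because coarse groups lead, truncating the code degrades resolution gracefully rather than corrupting the time word.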

  15. Auditory agnosia.

    PubMed

    Slevc, L Robert; Shell, Alison R

    2015-01-01

    Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition.

  16. Auditory presentation and synchronization in Adobe Flash and HTML5/JavaScript Web experiments.

    PubMed

    Reimers, Stian; Stewart, Neil

    2016-09-01

    Substantial recent research has examined the accuracy of presentation durations and response time measurements for visually presented stimuli in Web-based experiments, with a general conclusion that accuracy is acceptable for most kinds of experiments. However, many areas of behavioral research use auditory stimuli instead of, or in addition to, visual stimuli. Much less is known about auditory accuracy using standard Web-based testing procedures. We used a millisecond-accurate Black Box Toolkit to measure the actual durations of auditory stimuli and the synchronization of auditory and visual presentation onsets. We examined the distribution of timings for 100 presentations of auditory and visual stimuli across two computers with different specs, three commonly used browsers, and code written in either Adobe Flash or JavaScript. We also examined different coding options for attempting to synchronize the auditory and visual onsets. Overall, we found that auditory durations were very consistent, but that the lags between visual and auditory onsets varied substantially across browsers and computer systems. PMID:27421976

  18. Auditory brainstem response to complex sounds: a tutorial

    PubMed Central

    Skoe, Erika; Kraus, Nina

    2010-01-01

    This tutorial provides a comprehensive overview of the methodological approach to collecting and analyzing auditory brainstem responses to complex sounds (cABRs). cABRs provide a window into how behaviorally relevant sounds such as speech and music are processed in the brain. Because temporal and spectral characteristics of sounds are preserved in this subcortical response, cABRs can be used to assess specific impairments and enhancements in auditory processing. Notably, subcortical function is neither passive nor hardwired but dynamically interacts with higher-level cognitive processes to refine how sounds are transcribed into neural code. This experience-dependent plasticity, which can occur on a number of time scales (e.g., life-long experience with speech or music, short-term auditory training, online auditory processing), helps shape sensory perception. Thus, by being an objective and non-invasive means for examining cognitive function and experience-dependent processes in sensory activity, cABRs have considerable utility in the study of populations where auditory function is of interest (e.g., auditory experts such as musicians, persons with hearing loss, auditory processing and language disorders). This tutorial is intended for clinicians and researchers seeking to integrate cABRs into their clinical and/or research programs. PMID:20084007

  19. Coded acoustic wave sensors and system using time diversity

    NASA Technical Reports Server (NTRS)

    Solie, Leland P. (Inventor); Hines, Jacqueline H. (Inventor)

    2012-01-01

    An apparatus and method for distinguishing between sensors that are to be wirelessly detected is provided. An interrogator device uses different, distinct time delays in the sensing signals when interrogating the sensors. The sensors are provided with different distinct pedestal delays. Sensors that have the same pedestal delay as the delay selected by the interrogator are detected by the interrogator whereas other sensors with different pedestal delays are not sensed. Multiple sensors with a given pedestal delay are provided with different codes so as to be distinguished from one another by the interrogator. The interrogator uses a signal that is transmitted to the sensor and returned by the sensor for combination and integration with the reference signal that has been processed by a function. The sensor may be a surface acoustic wave device having a differential impulse response with a power spectral density consisting of lobes. The power spectral density of the differential response is used to determine the value of the sensed parameter or parameters.
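    The selection mechanism in the patent can be pictured as matched filtering: the interrogator only accumulates energy from echoes whose pedestal delay and chip code both match its reference. The sketch below is a heavily simplified toy model under stated assumptions: circular shifts stand in for true acoustic delays, Walsh-style ±1 chip codes stand in for the sensors' coding, and every name and numeric value is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
probe = np.sign(rng.standard_normal(512))        # wideband +/-1 interrogation burst
code_a = np.array([1, -1, 1, -1, 1, -1, 1, -1])  # orthogonal chip codes
code_b = np.array([1, 1, -1, -1, 1, 1, -1, -1])

def echo(probe, delay, code):
    """Toy sensor: delay the probe (circular shift) and modulate with chips."""
    chips = np.repeat(code, len(probe) // len(code))
    return np.roll(probe, delay) * chips

def detect(received, probe, delay, code):
    """Interrogator: correlate against a reference at the selected delay/code."""
    chips = np.repeat(code, len(probe) // len(code))
    reference = np.roll(probe, delay) * chips
    return float(np.dot(received, reference)) / len(probe)

rx = echo(probe, delay=32, code=code_a)
match = detect(rx, probe, 32, code_a)        # right delay, right code
wrong_delay = detect(rx, probe, 64, code_a)  # interrogator selected other delay
wrong_code = detect(rx, probe, 32, code_b)   # co-delayed sensor, other code
```

    Only the matching delay-and-code pair yields a large correlation, mirroring how sensors with a different pedestal delay are simply not sensed, while co-delayed sensors are separated by their codes.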

  20. The time course of auditory and language-specific mechanisms in compensation for sibilant assimilation.

    PubMed

    Clayards, Meghan; Niebuhr, Oliver; Gaskell, M Gareth

    2015-01-01

    Models of spoken-word recognition differ on whether compensation for assimilation is language-specific or depends on general auditory processing. English and French participants were taught words that began or ended with the sibilants /s/ and /∫/. Both languages exhibit some assimilation in sibilant sequences (e.g., /s/ becomes like [∫] in dress shop and classe chargée), but they differ in the strength and predominance of anticipatory versus carryover assimilation. After training, participants were presented with novel words embedded in sentences, some of which contained an assimilatory context either preceding or following. A continuum of target sounds ranging from [s] to [∫] was spliced into the novel words, representing a range of possible assimilation strengths. Listeners' perceptions were examined using a visual-world eyetracking paradigm in which the listener clicked on pictures matching the novel words. We found two distinct language-general context effects: a contrastive effect when the assimilating context preceded the target, and flattening of the sibilant categorization function (increased ambiguity) when the assimilating context followed. Furthermore, we found that English but not French listeners were able to resolve the ambiguity created by the following assimilatory context, consistent with their greater experience with assimilation in this context. The combination of these mechanisms allows listeners to deal flexibly with variability in speech forms.

  1. Auditory Learning Using a Portable Real-Time Vocoder: Preliminary Findings

    PubMed Central

    Pisoni, David B.

    2015-01-01

    Purpose Although auditory training has traditionally been studied in controlled laboratory settings, interest has been increasing in more interactive options. The authors examine whether such interactive training can result in short-term perceptual learning, and the range of perceptual skills it impacts. Method Experiments 1 (N = 37) and 2 (N = 21) used pre- and posttest measures of speech and nonspeech recognition to find evidence of learning (within subject) and to compare the effects of 3 kinds of training (between subject) on the perceptual abilities of adults with normal hearing listening to simulations of cochlear implant processing. Subjects were given interactive, standard lab-based, or control training experience for 1 hr between the pre- and posttest tasks (unique sets across Experiments 1 & 2). Results Subjects receiving interactive training showed significant learning on a sentence-recognition-in-quiet task (Experiment 1), outperforming controls but not lab-trained subjects following training. Training groups did not differ significantly on any other task, even those directly involved in the interactive training experience. Conclusions Interactive training has the potential to produce learning in 1 domain (sentence recognition in quiet), but the particulars of the present training method (short duration, high complexity) may have limited benefits to this single criterion task. PMID:25674884

  2. Focal manipulations of formant trajectories reveal a role of auditory feedback in the online control of both within-syllable and between-syllable speech timing

    PubMed Central

    Cai, Shanqing; Ghosh, Satrajit S.; Guenther, Frank H.; Perkell, Joseph S.

    2011-01-01

    Within the human motor repertoire, speech production has a uniquely high level of spatiotemporal complexity. The production of running speech comprises the traversing of spatial positions with precisely coordinated articulator movements to produce 10–15 sounds/second. How does the brain use auditory feedback, namely the self-perception of produced speech sounds, in the online control of spatial and temporal parameters of multisyllabic articulation? This question has important bearings on the organizational principles of sequential actions, yet its answer remains controversial due to the long latency of the auditory feedback pathway and technical challenges involved in manipulating auditory feedback in precisely controlled ways during running speech. In this study, we developed a novel technique for introducing time-varying, focal perturbations in the auditory feedback during multisyllabic, connected speech. Manipulations of spatial and temporal parameters of the formant trajectory were tested separately on two groups of subjects as they uttered “I owe you a yo-yo”. Under these perturbations, significant and specific changes were observed in both the spatial and temporal parameters of the produced formant trajectories. Compensations to spatial perturbations were bidirectional and opposed the perturbations. Furthermore, under perturbations that manipulated the timing of the auditory feedback trajectory (slow-down or speed-up), significant adjustments in syllable timing were observed in the subjects’ productions. These results highlight the systematic roles of auditory feedback in the online control of a highly overlearned action such as connected speech articulation and provide a first look at the properties of this type of sensorimotor interaction in sequential movements. PMID:22072698

  3. Auditory pathways: anatomy and physiology.

    PubMed

    Pickles, James O

    2015-01-01

    This chapter outlines the anatomy and physiology of the auditory pathways. After a brief analysis of the external and middle ears and the cochlea, the responses of auditory nerve fibers are described. The central nervous system is analyzed in more detail. A scheme is provided to help understand the complex and multiple auditory pathways running through the brainstem. The multiple pathways are based on the need to preserve accurate timing while extracting complex spectral patterns in the auditory input. The auditory nerve fibers branch to give two pathways, a ventral sound-localizing stream, and a dorsal mainly pattern recognition stream, which innervate the different divisions of the cochlear nucleus. The outputs of the two streams, with their two types of analysis, are progressively combined in the inferior colliculus and onwards, to produce the representation of what can be called the "auditory objects" in the external world. The progressive extraction of critical features in the auditory stimulus in the different levels of the central auditory system, from cochlear nucleus to auditory cortex, is described. In addition, the auditory centrifugal system, running from cortex in multiple stages to the organ of Corti of the cochlea, is described.

  4. Latency of tone-burst-evoked auditory brain stem responses and otoacoustic emissions: Level, frequency, and rise-time effects

    PubMed Central

    Rasetshwane, Daniel M.; Argenyi, Michael; Neely, Stephen T.; Kopun, Judy G.; Gorga, Michael P.

    2013-01-01

    Simultaneous measurement of auditory brain stem response (ABR) and otoacoustic emission (OAE) delays may provide insights into effects of level, frequency, and stimulus rise-time on cochlear delay. Tone-burst-evoked ABRs and OAEs (TBOAEs) were measured simultaneously in normal-hearing human subjects. Stimuli included a wide range of frequencies (0.5–8 kHz), levels (20–90 dB SPL), and tone-burst rise times. ABR latencies have orderly dependence on these three parameters, similar to previously reported data by Gorga et al. [J. Speech Hear. Res. 31, 87–97 (1988)]. Level dependence of ABR and TBOAE latencies was similar across a wide range of stimulus conditions. At mid-frequencies, frequency dependence of ABR and TBOAE latencies were similar. The dependence of ABR latency on both rise time and level was significant; however, the interaction was not significant, suggesting independent effects. Comparison between ABR and TBOAE latencies reveals that the ratio of TBOAE latency to ABR forward latency (the level-dependent component of ABR total latency) is close to one below 1.5 kHz, but greater than two above 1.5 kHz. Despite the fact that the current experiment was designed to test compatibility with models of reverse-wave propagation, existing models do not completely explain the current data. PMID:23654387

  5. Modeling the time-varying and level-dependent effects of the medial olivocochlear reflex in auditory nerve responses.

    PubMed

    Smalt, Christopher J; Heinz, Michael G; Strickland, Elizabeth A

    2014-04-01

    The medial olivocochlear reflex (MOCR) has been hypothesized to provide benefit for listening in noisy environments. This advantage can be attributed to a feedback mechanism that suppresses auditory nerve (AN) firing in continuous background noise, resulting in increased sensitivity to a tone or speech. MOC neurons synapse on outer hair cells (OHCs), and their activity effectively reduces cochlear gain. The computational model developed in this study implements the time-varying, characteristic frequency (CF) and level-dependent effects of the MOCR within the framework of a well-established model for normal and hearing-impaired AN responses. A second-order linear system was used to model the time-course of the MOCR using physiological data in humans. The stimulus-level-dependent parameters of the efferent pathway were estimated by fitting AN sensitivity derived from responses in decerebrate cats using a tone-in-noise paradigm. The resulting model uses a binaural, time-varying, CF-dependent, level-dependent OHC gain reduction for both ipsilateral and contralateral stimuli that improves detection of a tone in noise, similarly to recorded AN responses. The MOCR may be important for speech recognition in continuous background noise as well as for protection from acoustic trauma. Further study of this model and its efferent feedback loop may improve our understanding of the effects of sensorineural hearing loss in noisy situations, a condition in which hearing aids currently struggle to restore normal speech perception.
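    The time course described above (a second-order linear system driving a level-dependent reduction of OHC gain) can be sketched as two cascaded first-order stages. The constants below (time constant, onset latency, maximum gain reduction) are illustrative assumptions, not the fitted values from the study.

```python
import numpy as np

def mocr_gain(noise_on, t_stop=0.5, dt=1e-3, tau=0.07, max_reduction=0.6):
    """Second-order (two cascaded first-order stages) sketch of the MOCR
    time course; returns the OHC gain over time (1.0 = full gain)."""
    x1 = x2 = 0.0
    gains = []
    for step in range(int(t_stop / dt)):
        t = step * dt
        drive = 1.0 if (noise_on and t > 0.05) else 0.0  # elicitor onset at 50 ms
        x1 += dt * (drive - x1) / tau    # first stage
        x2 += dt * (x1 - x2) / tau       # second stage: sluggish, S-shaped onset
        gains.append(1.0 - max_reduction * x2)
    return np.array(gains)

g = mocr_gain(noise_on=True)
```

    The cascaded stages give the gradual, delayed gain reduction after noise onset that makes a subsequent tone more detectable against an adapted background.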

  6. Modeling neural adaptation in the frog auditory system

    NASA Astrophysics Data System (ADS)

    Wotton, Janine; McArthur, Kimberly; Bohara, Amit; Ferragamo, Michael; Megela Simmons, Andrea

    2005-09-01

    Extracellular recordings from the auditory midbrain, Torus semicircularis, of the leopard frog reveal a wide diversity of tuning patterns. Some cells seem to be well suited for time-based coding of signal envelope, and others for rate-based coding of signal frequency. Adaptation for ongoing stimuli plays a significant role in shaping the frequency-dependent response rate at different levels of the frog auditory system. Anuran auditory-nerve fibers are unusual in that they reveal frequency-dependent adaptation [A. L. Megela, J. Acoust. Soc. Am. 75, 1155-1162 (1984)], and therefore provide rate-based input. In order to examine the influence of these peripheral inputs on central responses, three layers of auditory neurons were modeled to examine short-term neural adaptation to pure tones and complex signals. The response of each neuron was simulated with a leaky integrate and fire model, and adaptation was implemented by means of an increasing threshold. Auditory-nerve fibers, dorsal medullary nucleus neurons, and toral cells were simulated and connected in three ascending layers. Modifying the adaptation properties of the peripheral fibers dramatically alters the response at the midbrain. [Work supported by NOHR to M.J.F.; Gustavus Presidential Scholarship to K.McA.; NIH DC05257 to A.M.S.]
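    The adaptation mechanism described above (a leaky integrate-and-fire unit whose threshold rises with each spike) can be sketched in a few lines. All parameter values here are invented for illustration and are not fitted to the frog data.

```python
def adapting_lif(drive=2.0, t_stop=150.0, dt=0.05):
    """Leaky integrate-and-fire unit with an increasing threshold:
    each spike increments the threshold, which then decays back to rest.
    Parameters are illustrative (arbitrary units), not fitted values."""
    tau_m, tau_th, theta_rest, d_theta = 10.0, 50.0, 1.0, 0.5
    v, theta = 0.0, theta_rest
    spikes = []
    for step in range(int(t_stop / dt)):
        t = step * dt
        v += dt * (-v + drive) / tau_m               # leaky integration
        theta += dt * (theta_rest - theta) / tau_th  # threshold relaxes to rest
        if v >= theta:
            spikes.append(t)
            v = 0.0            # reset after spike
            theta += d_theta   # threshold jumps: adaptation
    return spikes

spikes = adapting_lif()
isis = [b - a for a, b in zip(spikes, spikes[1:])]
```

    Under constant drive the inter-spike intervals lengthen over time, i.e., the firing rate adapts, which is the behavior the modeled peripheral fibers pass on to the medullary and toral layers.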

  7. Interference between postural control and spatial vs. non-spatial auditory reaction time tasks in older adults.

    PubMed

    Fuhrman, Susan I; Redfern, Mark S; Jennings, J Richard; Furman, Joseph M

    2015-01-01

    This study investigated whether spatial aspects of an information processing task influence dual-task interference. Two groups (Older/Young) of healthy adults participated in dual-task experiments. Two auditory information processing tasks included a frequency discrimination choice reaction time task (non-spatial task) and a lateralization choice reaction time task (spatial task). Postural tasks included combinations of standing with eyes open or eyes closed on either a fixed floor or a sway-referenced floor. Reaction times and postural sway via center of pressure were recorded. Baseline measures of reaction time and sway were subtracted from the corresponding dual-task results to calculate reaction time task costs and postural task costs. Reaction time task cost increased with eye closure (p = 0.01), sway-referenced flooring (p < 0.0001), and the spatial task (p = 0.04). Additionally, a significant (p = 0.05) task × vision × age interaction indicated that older subjects had a significant vision × task interaction whereas young subjects did not. However, when analyzed by age group, the young group showed minimal differences in interference for the spatial and non-spatial tasks with eyes open, but showed increased interference on the spatial relative to non-spatial task with eyes closed. In contrast, older subjects demonstrated increased interference on the spatial relative to the non-spatial task with eyes open, but not with eyes closed. These findings suggest that visual-spatial interference may occur in older subjects when vision is used to maintain posture.

  8. The cortical modulation of stimulus-specific adaptation in the auditory midbrain and thalamus: a potential neuronal correlate for predictive coding

    PubMed Central

    Malmierca, Manuel S.; Anderson, Lucy A.; Antunes, Flora M.

    2015-01-01

    To follow an ever-changing auditory scene, the auditory brain is continuously creating a representation of the past to form expectations about the future. Unexpected events will produce an error in the predictions that should “trigger” the network’s response. Indeed, neurons in the auditory midbrain, thalamus and cortex, respond to rarely occurring sounds while adapting to frequently repeated ones, i.e., they exhibit stimulus specific adaptation (SSA). SSA cannot be explained solely by intrinsic membrane properties, but likely involves the participation of the network. Thus, SSA is envisaged as a high order form of adaptation that requires the influence of cortical areas. However, present research supports the hypothesis that SSA, at least in its simplest form (i.e., to frequency deviants), can be transmitted in a bottom-up manner through the auditory pathway. Here, we briefly review the underlying neuroanatomy of the corticofugal projections before discussing state of the art studies which demonstrate that SSA present in the medial geniculate body (MGB) and inferior colliculus (IC) is not inherited from the cortex but can be modulated by the cortex via the corticofugal pathways. By modulating the gain of neurons in the thalamus and midbrain, the auditory cortex (AC) would refine SSA subcortically, preventing irrelevant information from reaching the cortex. PMID:25805974

  9. Auditory hallucinations.

    PubMed

    Blom, Jan Dirk

    2015-01-01

    Auditory hallucinations constitute a phenomenologically rich group of endogenously mediated percepts which are associated with psychiatric, neurologic, otologic, and other medical conditions, but which are also experienced by 10-15% of all healthy individuals in the general population. The group of phenomena is probably best known for its verbal auditory subtype, but it also includes musical hallucinations, echo of reading, exploding-head syndrome, and many other types. The subgroup of verbal auditory hallucinations has been studied extensively with the aid of neuroimaging techniques, and from those studies emerges an outline of a functional as well as a structural network of widely distributed brain areas involved in their mediation. The present chapter provides an overview of the various types of auditory hallucination described in the literature, summarizes our current knowledge of the auditory networks involved in their mediation, and draws on ideas from the philosophy of science and network science to reconceptualize the auditory hallucinatory experience, and point out directions for future research into its neurobiologic substrates. In addition, it provides an overview of known associations with various clinical conditions and of the existing evidence for pharmacologic and non-pharmacologic treatments.

  10. From ear to body: the auditory-motor loop in spatial cognition

    PubMed Central

    Viaud-Delmon, Isabelle; Warusfel, Olivier

    2014-01-01

    Spatial memory is mainly studied through the visual sensory modality: navigation tasks in humans rarely integrate dynamic and spatial auditory information. In order to study how a spatial scene can be memorized on the basis of auditory and idiothetic cues only, we constructed an auditory equivalent of the Morris water maze, a task widely used to assess spatial learning and memory in rodents. Participants were equipped with wireless headphones, which delivered a soundscape updated in real time according to their movements in 3D space. A wireless tracking system (video infrared with passive markers) was used to send the coordinates of the subject's head to the sound rendering system. The rendering system used advanced HRTF-based synthesis of directional cues and room acoustic simulation for the auralization of a realistic acoustic environment. Participants were guided blindfolded in an experimental room. Their task was to explore a delimitated area in order to find a hidden auditory target, i.e., a sound that was only triggered when walking on a precise location of the area. The position of this target could be coded in relationship to auditory landmarks constantly rendered during the exploration of the area. The task was composed of a practice trial, 6 acquisition trials during which they had to memorize the localization of the target, and 4 test trials in which some aspects of the auditory scene were modified. The task ended with a probe trial in which the auditory target was removed. The configuration of the search paths revealed how auditory information was coded to memorize the position of the target, and suggested that space can be efficiently coded without visual information in normally sighted subjects. In conclusion, space representation can be based on sensorimotor and auditory cues only, providing another argument in favor of the hypothesis that the brain has access to a modality-invariant representation of external space. PMID:25249933

  11. Spiking Neurons Learning Phase Delays: How Mammals May Develop Auditory Time-Difference Sensitivity

    NASA Astrophysics Data System (ADS)

    Leibold, Christian; van Hemmen, J. Leo

    2005-04-01

    Time differences between the two ears are an important cue for animals to azimuthally locate a sound source. The first binaural brainstem nucleus, in mammals the medial superior olive, is generally believed to perform the necessary computations. Its cells are sensitive to variations of interaural time differences of about 10 μs. The classical explanation of such a neuronal time-difference tuning is based on the physical concept of delay lines. Recent data, however, are inconsistent with a temporal delay and rather favor a phase delay. By means of a biophysical model we show how spike-timing-dependent synaptic learning explains precise interplay of excitation and inhibition and, hence, accounts for a physical realization of a phase delay.
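    The paper's model is biophysical, but the learning rule it invokes can be summarized by the generic pair-based STDP window: inputs whose spikes arrive just before the postsynaptic spike are strengthened, so synapses with the "right" phase delay are selected over repeated pairings. The function below is that generic window only, with illustrative parameter values, not the authors' full model.

```python
import math

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight change for a pre-post spike-time difference
    delta_t (ms, pre before post is positive). Parameters illustrative."""
    if delta_t > 0:
        return a_plus * math.exp(-delta_t / tau)    # potentiation
    return -a_minus * math.exp(delta_t / tau)       # depression
```

    Over many binaural stimulus presentations, such a window preferentially stabilizes excitatory (and, with sign flipped, inhibitory) inputs whose effective phase delay places their spikes in the potentiating part of the window, yielding the precise interplay of excitation and inhibition described above.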

  12. Imposed delay of response: effects on aphasics' auditory comprehension of visually and non-visually cued material.

    PubMed

    Yorkston, K M; Marshall, R C; Butler, M R

    1977-04-01

    Two groups of aphasics were administered an auditory comprehension task under conditions of 0-, 5-, and 10-sec. imposed delay of response. The auditory-visual group received auditory and visual cues; the auditory group received only auditory cues. Comprehension for the auditory-visual group was significantly better than for the auditory group. Increase in delay time significantly improved comprehension for the auditory-visual group but not for the auditory group.

  13. The Effect of Dopaminergic Medication on Beat-Based Auditory Timing in Parkinson's Disease.

    PubMed

    Cameron, Daniel J; Pickett, Kristen A; Earhart, Gammon M; Grahn, Jessica A

    2016-01-01

    Parkinson's disease (PD) adversely affects timing abilities. Beat-based timing is a mechanism that times events relative to a regular interval, such as the "beat" in musical rhythm, and is impaired in PD. It is unknown if dopaminergic medication influences beat-based timing in PD. Here, we tested beat-based timing over two sessions in participants with PD (OFF then ON dopaminergic medication) and in unmedicated control participants. People with PD and control participants completed two tasks. The first was a discrimination task in which participants compared two rhythms and determined whether they were the same or different. Rhythms either had a beat structure (metric simple rhythms) or did not (metric complex rhythms), as in previous studies. Discrimination accuracy was analyzed to test for the effects of beat structure, as well as differences between participants with PD and controls, and effects of medication (PD group only). The second task was the Beat Alignment Test (BAT), in which participants listened to music with regular tones superimposed, and responded as to whether the tones were "ON" or "OFF" the beat of the music. Accuracy was analyzed to test for differences between participants with PD and controls, and for an effect of medication in patients. Both patients and controls discriminated metric simple rhythms better than metric complex rhythms. Controls also improved at the discrimination task in the second vs. first session, whereas people with PD did not. For participants with PD, the difference in performance between metric simple and metric complex rhythms was greater (sensitivity to changes in simple rhythms increased and sensitivity to changes in complex rhythms decreased) when ON vs. OFF medication. Performance also worsened with disease severity. For the BAT, no group differences or effects of medication were found. Overall, these findings suggest that timing is impaired in PD, and that dopaminergic medication influences beat-based and non-beat-based timing.

  14. The Effect of Dopaminergic Medication on Beat-Based Auditory Timing in Parkinson’s Disease

    PubMed Central

    Cameron, Daniel J.; Pickett, Kristen A.; Earhart, Gammon M.; Grahn, Jessica A.

    2016-01-01

    Parkinson’s disease (PD) adversely affects timing abilities. Beat-based timing is a mechanism that times events relative to a regular interval, such as the “beat” in musical rhythm, and is impaired in PD. It is unknown if dopaminergic medication influences beat-based timing in PD. Here, we tested beat-based timing over two sessions in participants with PD (OFF then ON dopaminergic medication) and in unmedicated control participants. People with PD and control participants completed two tasks. The first was a discrimination task in which participants compared two rhythms and determined whether they were the same or different. Rhythms either had a beat structure (metric simple rhythms) or did not (metric complex rhythms), as in previous studies. Discrimination accuracy was analyzed to test for the effects of beat structure, as well as differences between participants with PD and controls, and effects of medication (PD group only). The second task was the Beat Alignment Test (BAT), in which participants listened to music with regular tones superimposed, and responded as to whether the tones were “ON” or “OFF” the beat of the music. Accuracy was analyzed to test for differences between participants with PD and controls, and for an effect of medication in patients. Both patients and controls discriminated metric simple rhythms better than metric complex rhythms. Controls also improved at the discrimination task in the second vs. first session, whereas people with PD did not. For participants with PD, the difference in performance between metric simple and metric complex rhythms was greater (sensitivity to changes in simple rhythms increased and sensitivity to changes in complex rhythms decreased) when ON vs. OFF medication. Performance also worsened with disease severity. For the BAT, no group differences or effects of medication were found. Overall, these findings suggest that timing is impaired in PD, and that dopaminergic medication influences beat-based and non-beat-based timing.

  15. Robust Timing Synchronization for Aviation Communications, and Efficient Modulation and Coding Study for Quantum Communication

    NASA Technical Reports Server (NTRS)

    Xiong, Fugin

    2003-01-01

    One half of Professor Xiong's effort will investigate robust timing synchronization schemes for dynamically varying characteristics of aviation communication channels. The other half of his time will focus on efficient modulation and coding study for the emerging quantum communications.

  16. Feature Assignment in Perception of Auditory Figure

    ERIC Educational Resources Information Center

    Gregg, Melissa K.; Samuel, Arthur G.

    2012-01-01

    Because the environment often includes multiple sounds that overlap in time, listeners must segregate a sound of interest (the auditory figure) from other co-occurring sounds (the unattended auditory ground). We conducted a series of experiments to clarify the principles governing the extraction of auditory figures. We distinguish between auditory…

  17. Selectivity for space and time in early areas of the auditory dorsal stream in the rhesus monkey

    PubMed Central

    Rauschecker, Josef P.

    2014-01-01

    The respective roles of ventral and dorsal cortical processing streams are still under discussion in both vision and audition. We characterized neural responses in the caudal auditory belt cortex, an early dorsal stream region of the macaque. We found fast neural responses with elevated temporal precision as well as neurons selective to sound location. These populations were partly segregated: Neurons in a caudomedial area more precisely followed temporal stimulus structure but were less selective to spatial location. Response latencies in this area were even shorter than in primary auditory cortex. Neurons in a caudolateral area showed higher selectivity for sound source azimuth and elevation, but responses were slower and matching to temporal sound structure was poorer. In contrast to the primary area and other regions studied previously, latencies in the caudal belt neurons were not negatively correlated with best frequency. Our results suggest that two functional substreams may exist within the auditory dorsal stream. PMID:24501260

  18. CoCoNuT: General relativistic hydrodynamics code with dynamical space-time evolution

    NASA Astrophysics Data System (ADS)

    Dimmelmeier, Harald; Novak, Jérôme; Cerdá-Durán, Pablo

    2012-02-01

    CoCoNuT is a general relativistic hydrodynamics code with dynamical space-time evolution. The main aim of this numerical code is the study of several astrophysical scenarios in which general relativity can play an important role, namely the collapse of rapidly rotating stellar cores and the evolution of isolated neutron stars. The code has two flavors: CoCoA, the axisymmetric (2D) magnetized version, and CoCoNuT, the 3D non-magnetized version.

  19. Interactions across Multiple Stimulus Dimensions in Primary Auditory Cortex

    PubMed Central

    Zhuo, Ran; Xue, Hongbo; Chambers, Anna R.; Kolaczyk, Eric; Polley, Daniel B.

    2016-01-01

    Although sensory cortex is thought to be important for the perception of complex objects, its specific role in representing complex stimuli remains unknown. Complex objects are rich in information along multiple stimulus dimensions. The position of cortex in the sensory hierarchy suggests that cortical neurons may integrate across these dimensions to form a more gestalt representation of auditory objects. Yet, studies of cortical neurons typically explore single or few dimensions due to the difficulty of determining optimal stimuli in a high dimensional stimulus space. Evolutionary algorithms (EAs) provide a potentially powerful approach for exploring multidimensional stimulus spaces based on real-time spike feedback, but two important issues arise in their application. First, it is unclear whether it is necessary to characterize cortical responses to multidimensional stimuli or whether it suffices to characterize cortical responses to a single dimension at a time. Second, quantitative methods for analyzing complex multidimensional data from an EA are lacking. Here, we apply a statistical method for nonlinear regression, the generalized additive model (GAM), to address these issues. The GAM quantitatively describes the dependence between neural response and all stimulus dimensions. We find that auditory cortical neurons in mice are sensitive to interactions across dimensions. These interactions are diverse across the population, indicating significant integration across stimulus dimensions in auditory cortex. This result strongly motivates using multidimensional stimuli in auditory cortex. Together, the EA and the GAM provide a novel quantitative paradigm for investigating neural coding of complex multidimensional stimuli in auditory and other sensory cortices.
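    The GAM approach can be sketched, in a much-reduced form, as an additive model fit by backfitting (pure NumPy, with binned-mean smoothers standing in for splines; this sketch omits the interaction terms the study highlights, and all names and data are invented):

    ```python
    import numpy as np

    def fit_additive(X, y, n_bins=10, n_iter=20):
        """Backfitting for an additive model y ~ mean + f1(x1) + ... + fd(xd),
        where each f_j is a binned-mean smoother (a crude spline substitute)."""
        n, d = X.shape
        edges = [np.quantile(X[:, j], np.linspace(0, 1, n_bins + 1)) for j in range(d)]
        bins = [np.searchsorted(edges[j][1:-1], X[:, j]) for j in range(d)]
        f = np.zeros((d, n_bins))
        mean_y = y.mean()
        for _ in range(n_iter):
            for j in range(d):
                # Partial residual: everything except dimension j's contribution
                partial = y - mean_y - sum(f[k][bins[k]] for k in range(d) if k != j)
                for b in range(n_bins):
                    in_bin = bins[j] == b
                    if in_bin.any():
                        f[j, b] = partial[in_bin].mean()
                f[j] -= f[j].mean()  # zero-mean constraint for identifiability
        fitted = mean_y + sum(f[j][bins[j]] for j in range(d))
        return f, fitted

    # Synthetic "neural response" depending additively on two stimulus dimensions
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(2000, 2))
    y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.standard_normal(2000)
    f, fitted = fit_additive(X, y)
    r2 = 1 - np.var(y - fitted) / np.var(y)  # explains most of the variance
    ```

    A full GAM of the kind the study uses would replace the binned means with penalized splines and add tensor-product terms to capture the cross-dimension interactions.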

  20. Interactions across Multiple Stimulus Dimensions in Primary Auditory Cortex.

    PubMed

    Sloas, David C; Zhuo, Ran; Xue, Hongbo; Chambers, Anna R; Kolaczyk, Eric; Polley, Daniel B; Sen, Kamal

    2016-01-01

    Although sensory cortex is thought to be important for the perception of complex objects, its specific role in representing complex stimuli remains unknown. Complex objects are rich in information along multiple stimulus dimensions. The position of cortex in the sensory hierarchy suggests that cortical neurons may integrate across these dimensions to form a more gestalt representation of auditory objects. Yet, studies of cortical neurons typically explore single or few dimensions due to the difficulty of determining optimal stimuli in a high dimensional stimulus space. Evolutionary algorithms (EAs) provide a potentially powerful approach for exploring multidimensional stimulus spaces based on real-time spike feedback, but two important issues arise in their application. First, it is unclear whether it is necessary to characterize cortical responses to multidimensional stimuli or whether it suffices to characterize cortical responses to a single dimension at a time. Second, quantitative methods for analyzing complex multidimensional data from an EA are lacking. Here, we apply a statistical method for nonlinear regression, the generalized additive model (GAM), to address these issues. The GAM quantitatively describes the dependence between neural response and all stimulus dimensions. We find that auditory cortical neurons in mice are sensitive to interactions across dimensions. These interactions are diverse across the population, indicating significant integration across stimulus dimensions in auditory cortex. This result strongly motivates using multidimensional stimuli in auditory cortex. Together, the EA and the GAM provide a novel quantitative paradigm for investigating neural coding of complex multidimensional stimuli in auditory and other sensory cortices.

  1. Interactions across Multiple Stimulus Dimensions in Primary Auditory Cortex

    PubMed Central

    Zhuo, Ran; Xue, Hongbo; Chambers, Anna R.; Kolaczyk, Eric; Polley, Daniel B.

    2016-01-01

    Although sensory cortex is thought to be important for the perception of complex objects, its specific role in representing complex stimuli remains unknown. Complex objects are rich in information along multiple stimulus dimensions. The position of cortex in the sensory hierarchy suggests that cortical neurons may integrate across these dimensions to form a more gestalt representation of auditory objects. Yet, studies of cortical neurons typically explore single or few dimensions due to the difficulty of determining optimal stimuli in a high dimensional stimulus space. Evolutionary algorithms (EAs) provide a potentially powerful approach for exploring multidimensional stimulus spaces based on real-time spike feedback, but two important issues arise in their application. First, it is unclear whether it is necessary to characterize cortical responses to multidimensional stimuli or whether it suffices to characterize cortical responses to a single dimension at a time. Second, quantitative methods for analyzing complex multidimensional data from an EA are lacking. Here, we apply a statistical method for nonlinear regression, the generalized additive model (GAM), to address these issues. The GAM quantitatively describes the dependence between neural response and all stimulus dimensions. We find that auditory cortical neurons in mice are sensitive to interactions across dimensions. These interactions are diverse across the population, indicating significant integration across stimulus dimensions in auditory cortex. This result strongly motivates using multidimensional stimuli in auditory cortex. Together, the EA and the GAM provide a novel quantitative paradigm for investigating neural coding of complex multidimensional stimuli in auditory and other sensory cortices. PMID:27622211

  2. Interactions across Multiple Stimulus Dimensions in Primary Auditory Cortex.

    PubMed

    Sloas, David C; Zhuo, Ran; Xue, Hongbo; Chambers, Anna R; Kolaczyk, Eric; Polley, Daniel B; Sen, Kamal

    2016-01-01

    Although sensory cortex is thought to be important for the perception of complex objects, its specific role in representing complex stimuli remains unknown. Complex objects are rich in information along multiple stimulus dimensions. The position of cortex in the sensory hierarchy suggests that cortical neurons may integrate across these dimensions to form a more gestalt representation of auditory objects. Yet, studies of cortical neurons typically explore single or few dimensions due to the difficulty of determining optimal stimuli in a high dimensional stimulus space. Evolutionary algorithms (EAs) provide a potentially powerful approach for exploring multidimensional stimulus spaces based on real-time spike feedback, but two important issues arise in their application. First, it is unclear whether it is necessary to characterize cortical responses to multidimensional stimuli or whether it suffices to characterize cortical responses to a single dimension at a time. Second, quantitative methods for analyzing complex multidimensional data from an EA are lacking. Here, we apply a statistical method for nonlinear regression, the generalized additive model (GAM), to address these issues. The GAM quantitatively describes the dependence between neural response and all stimulus dimensions. We find that auditory cortical neurons in mice are sensitive to interactions across dimensions. These interactions are diverse across the population, indicating significant integration across stimulus dimensions in auditory cortex. This result strongly motivates using multidimensional stimuli in auditory cortex. Together, the EA and the GAM provide a novel quantitative paradigm for investigating neural coding of complex multidimensional stimuli in auditory and other sensory cortices. PMID:27622211

  3. The visual-auditory color-word stroop asymmetry and its time course.

    PubMed

    Roelofs, Ardi

    2005-12-01

    Four experiments examined crossmodal versions of the Stroop task in order (1) to look for Stroop asymmetries in color naming, spoken-word naming, and written-word naming and to evaluate the time course of these asymmetries, and (2) to compare these findings to current models of the Stroop effect. Participants named color patches while ignoring spoken color words presented with an onset varying from 300 msec before to 300 msec after the onset of the color (Experiment 1), or they named the spoken words and ignored the colors (Experiment 2). A secondary visual detection task assured that the participants looked at the colors in both tasks. Spoken color words yielded Stroop effects in color naming, but colors did not yield an effect in spoken-word naming at any stimulus onset asynchrony. This asymmetry in effects was obtained with equivalent color- and spoken-word-naming latencies. Written color words yielded a Stroop effect in naming spoken words (Experiment 3), and spoken color words yielded an effect in naming written words (Experiment 4). These results were interpreted as most consistent with an architectural account of the color-word Stroop asymmetry, in contrast with discriminability and pathway strength accounts.

  4. One hundred ways to process time, frequency, rate and scale in the central auditory system: a pattern-recognition meta-analysis.

    PubMed

    Hemery, Edgar; Aucouturier, Jean-Julien

    2015-01-01

    The mammalian auditory system extracts features from the acoustic environment based on the responses of spatially distributed sets of neurons in the subcortical and cortical auditory structures. The characteristic responses of these neurons (linearly approximated by their spectro-temporal receptive fields, or STRFs) suggest that auditory representations are formed, as early as in the inferior colliculi, on the basis of a time, frequency, rate (temporal modulations) and scale (spectral modulations) analysis of sound. However, how these four dimensions are integrated and processed in subsequent neural networks remains unclear. In this work, we present a new methodology to generate computational insights into the functional organization of such processes. We first propose a systematic framework to explore more than a hundred different computational strategies proposed in the literature to process the output of a generic STRF model. We then evaluate these strategies on their ability to compute perceptual distances between pairs of environmental sounds. Finally, we conduct a meta-analysis of the dataset of all these algorithms' accuracies to examine whether certain combinations of dimensions and certain ways to treat such dimensions are, on the whole, more computationally effective than others. We present an application of this methodology to a dataset of ten environmental sound categories, in which the analysis reveals that (1) models are most effective when they organize STRF data into frequency groupings (which is consistent with the known tonotopic organization of receptive fields in auditory structures), and that (2) models that treat STRF data as time series are no more effective than models that rely only on summary statistics along time (which corroborates recent experimental evidence on texture discrimination by summary statistics).
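    The contrast between time-series and summary-statistics strategies (finding 2) can be illustrated directly: per-channel moments discard temporal order, so a temporally shuffled input yields the same descriptor. This is an invented toy, not the paper's pipeline:

    ```python
    import numpy as np

    def summary_stats(strf_out):
        """Collapse a (frequency x time) STRF response to per-channel summary
        statistics (mean, std, skewness), discarding temporal order entirely."""
        mean = strf_out.mean(axis=1)
        std = strf_out.std(axis=1)
        z = (strf_out - mean[:, None]) / (std[:, None] + 1e-12)
        skew = (z ** 3).mean(axis=1)
        return np.concatenate([mean, std, skew])

    def distance(a, b):
        """Euclidean distance between summary-statistic descriptors."""
        return np.linalg.norm(summary_stats(a) - summary_stats(b))

    rng = np.random.default_rng(1)
    texture1 = rng.gamma(2.0, 1.0, size=(32, 200))  # stand-ins for STRF outputs
    texture2 = rng.gamma(2.0, 1.0, size=(32, 200))
    shuffled = texture1[:, rng.permutation(200)]    # same statistics, scrambled time
    d_same = distance(texture1, shuffled)   # ~0: summary stats ignore time order
    d_diff = distance(texture1, texture2)   # > 0: different random draws
    ```

    A time-series strategy, by contrast, would compare the responses sample by sample and so distinguish the shuffled texture from the original.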

  5. One hundred ways to process time, frequency, rate and scale in the central auditory system: a pattern-recognition meta-analysis.

    PubMed

    Hemery, Edgar; Aucouturier, Jean-Julien

    2015-01-01

    The mammalian auditory system extracts features from the acoustic environment based on the responses of spatially distributed sets of neurons in the subcortical and cortical auditory structures. The characteristic responses of these neurons (linearly approximated by their spectro-temporal receptive fields, or STRFs) suggest that auditory representations are formed, as early as in the inferior colliculi, on the basis of a time, frequency, rate (temporal modulations) and scale (spectral modulations) analysis of sound. However, how these four dimensions are integrated and processed in subsequent neural networks remains unclear. In this work, we present a new methodology to generate computational insights into the functional organization of such processes. We first propose a systematic framework to explore more than a hundred different computational strategies proposed in the literature to process the output of a generic STRF model. We then evaluate these strategies on their ability to compute perceptual distances between pairs of environmental sounds. Finally, we conduct a meta-analysis of the dataset of all these algorithms' accuracies to examine whether certain combinations of dimensions and certain ways to treat such dimensions are, on the whole, more computationally effective than others. We present an application of this methodology to a dataset of ten environmental sound categories, in which the analysis reveals that (1) models are most effective when they organize STRF data into frequency groupings (which is consistent with the known tonotopic organization of receptive fields in auditory structures), and that (2) models that treat STRF data as time series are no more effective than models that rely only on summary statistics along time (which corroborates recent experimental evidence on texture discrimination by summary statistics). PMID:26190996

  6. Subjective and Real Time: Coding Under Different Drug States

    PubMed Central

    Sanchez-Castillo, Hugo; Taylor, Kathleen M.; Ward, Ryan D.; Paz-Trejo, Diana B.; Arroyo-Araujo, Maria; Castillo, Oscar Galicia; Balsam, Peter D.

    2016-01-01

    Organisms are constantly extracting information from the temporal structure of the environment, which allows them to select appropriate actions and predict impending changes. Several lines of research have suggested that interval timing is modulated by the dopaminergic system. It has been proposed that higher levels of dopamine cause an internal clock to speed up, whereas less dopamine causes a deceleration of the clock. In most experiments the subjects are first trained to perform a timing task while drug free. Consequently, most of what is known about the influence of dopaminergic modulation of timing is on well-established timing performance. In the current study, the impact of altered dopamine on the acquisition of temporal control was the focal question. Thirty male Sprague-Dawley rats were distributed randomly into three different groups (haloperidol, d-amphetamine or vehicle). Each animal received an injection 15 min prior to the start of every session from the beginning of interval training. The subjects were trained on a fixed interval (FI) 16-s schedule followed by training on a peak procedure in which 64-s non-reinforced peak trials were intermixed with FI trials. In a final test session all subjects were given vehicle injections and 10 consecutive non-reinforced peak trials to see if training under drug conditions altered the encoding of time. The current study suggests that administration of drugs that modulate dopamine does not alter the encoding of temporal durations but does acutely affect the initiation of responding. PMID:27087743
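    The peak-procedure measure of timing can be illustrated with a toy estimate of peak time from binned response rates (simulated data and a deliberately simple estimator, not the study's analysis):

    ```python
    import numpy as np

    def peak_time(bin_centers, rates):
        """Estimate peak time as the rate-weighted mean of bin centers,
        after subtracting the baseline (minimum) rate."""
        w = rates - rates.min()
        return float(np.sum(bin_centers * w) / np.sum(w))

    # Simulated peak trial: responding ramps up toward the trained 16-s fixed
    # interval and back down over a 64-s non-reinforced trial
    t = np.arange(0.5, 64.0, 1.0)                   # 1-s bin centers
    rates = np.exp(-0.5 * ((t - 16.0) / 4.0) ** 2)  # Gaussian-like gradient
    pt = peak_time(t, rates)                        # close to the trained 16 s
    ```

    A shift of the estimated peak under drug, relative to vehicle, is the signature usually interpreted as a change in clock speed.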

  7. Speakers' acceptance of real-time speech exchange indicates that we use auditory feedback to specify the meaning of what we say.

    PubMed

    Lind, Andreas; Hall, Lars; Breidegard, Björn; Balkenius, Christian; Johansson, Petter

    2014-06-01

    Speech is usually assumed to start with a clearly defined preverbal message, which provides a benchmark for self-monitoring and a robust sense of agency for one's utterances. However, an alternative hypothesis states that speakers often have no detailed preview of what they are about to say, and that they instead use auditory feedback to infer the meaning of their words. In the experiment reported here, participants performed a Stroop color-naming task while we covertly manipulated their auditory feedback in real time so that they said one thing but heard themselves saying something else. Under ideal timing conditions, two thirds of these semantic exchanges went undetected by the participants, and in 85% of all nondetected exchanges, the inserted words were experienced as self-produced. These findings indicate that the sense of agency for speech has a strong inferential component, and that auditory feedback of one's own voice acts as a pathway for semantic monitoring, potentially overriding other feedback loops. PMID:24777489

  8. Further Development and Implementation of Implicit Time Marching in the CAA Code

    NASA Technical Reports Server (NTRS)

    Golubev, Vladimir V.

    2003-01-01

    The fellowship research project continued last-year work on implementing implicit time marching concepts in the Broadband Aeroacoustic System Simulator (BASS) code. This code is being developed at NASA Glenn for analysis of unsteady flow and sources of noise in propulsion systems, including jet noise and fan noise.

  9. The Role of Animacy in the Real Time Comprehension of Mandarin Chinese: Evidence from Auditory Event-Related Brain Potentials

    ERIC Educational Resources Information Center

    Philipp, Markus; Bornkessel-Schlesewsky, Ina; Bisang, Walter; Schlesewsky, Matthias

    2008-01-01

    Two auditory ERP studies examined the role of animacy in sentence comprehension in Mandarin Chinese by comparing active and passive sentences in simple verb-final (Experiment 1) and relative clause constructions (Experiment 2). In addition to the voice manipulation (which modulated the assignment of actor and undergoer roles to the arguments),…

  10. One hundred ways to process time, frequency, rate and scale in the central auditory system: a pattern-recognition meta-analysis

    PubMed Central

    Hemery, Edgar; Aucouturier, Jean-Julien

    2015-01-01

    The mammalian auditory system extracts features from the acoustic environment based on the responses of spatially distributed sets of neurons in the subcortical and cortical auditory structures. The characteristic responses of these neurons (linearly approximated by their spectro-temporal receptive fields, or STRFs) suggest that auditory representations are formed, as early as in the inferior colliculi, on the basis of a time, frequency, rate (temporal modulations) and scale (spectral modulations) analysis of sound. However, how these four dimensions are integrated and processed in subsequent neural networks remains unclear. In this work, we present a new methodology to generate computational insights into the functional organization of such processes. We first propose a systematic framework to explore more than a hundred different computational strategies proposed in the literature to process the output of a generic STRF model. We then evaluate these strategies on their ability to compute perceptual distances between pairs of environmental sounds. Finally, we conduct a meta-analysis of the dataset of all these algorithms' accuracies to examine whether certain combinations of dimensions and certain ways to treat such dimensions are, on the whole, more computationally effective than others. We present an application of this methodology to a dataset of ten environmental sound categories, in which the analysis reveals that (1) models are most effective when they organize STRF data into frequency groupings (which is consistent with the known tonotopic organization of receptive fields in auditory structures), and that (2) models that treat STRF data as time series are no more effective than models that rely only on summary statistics along time (which corroborates recent experimental evidence on texture discrimination by summary statistics). PMID:26190996

  11. Auditory synesthesias.

    PubMed

    Afra, Pegah

    2015-01-01

    Synesthesia is experienced when sensory stimulation of one sensory modality (the inducer) elicits an involuntary or automatic sensation in another sensory modality or different aspect of the same sensory modality (the concurrent). Auditory synesthesias (AS) occur when auditory stimuli trigger a variety of concurrents, or when non-auditory sensory stimulations trigger auditory synesthetic perception. The AS are divided into three types: developmental, acquired, and induced. Developmental AS are not a neurologic disorder but a different way of experiencing one's environment. They are involuntary and highly consistent experiences throughout one's life. Acquired AS have been reported in association with neurologic diseases that cause deafferentation of anterior optic pathways, with pathologic lesions affecting the central nervous system (CNS) outside of the optic pathways, as well as non-lesional cases associated with migraine, and epilepsy. It also has been reported with mood disorders as well as a single idiopathic case. Induced AS has been reported in experimental and postsurgical blindfolding, as well as intake of hallucinogenics or psychedelics. In this chapter the three different types of synesthesia, their characteristics, and phenomenologic differences, as well as their possible neural mechanisms are discussed. PMID:25726281

  12. Auditory system

    NASA Technical Reports Server (NTRS)

    Ades, H. W.

    1973-01-01

    The physical correlations of hearing, i.e. the acoustic stimuli, are reported. The auditory system, consisting of external ear, middle ear, inner ear, organ of Corti, basilar membrane, hair cells, inner hair cells, outer hair cells, innervation of hair cells, and transducer mechanisms, is discussed. Both conductive and sensorineural hearing losses are also examined.

  13. The Perception of Auditory Motion

    PubMed Central

    Leung, Johahn

    2016-01-01

    The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control as well as in entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotations and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception. PMID:27094029

  14. Quantum circuit for optimal eavesdropping in quantum key distribution using phase-time coding

    SciTech Connect

    Kronberg, D. A.; Molotkov, S. N.

    2010-07-15

    A quantum circuit is constructed for optimal eavesdropping on quantum key distribution protocols using phase-time coding, and its physical implementation based on linear and nonlinear fiber-optic components is proposed.

  15. Accuracy and time requirements of a bar-code inventory system for medical supplies.

    PubMed

    Hanson, L B; Weinswig, M H; De Muth, J E

    1988-02-01

    The effects of implementing a bar-code system for issuing medical supplies to nursing units at a university teaching hospital were evaluated. Data on the time required to issue medical supplies to three nursing units at a 480-bed, tertiary-care teaching hospital were collected (1) before the bar-code system was implemented (i.e., when the manual system was in use), (2) one month after implementation, and (3) four months after implementation. At the same times, the accuracy of the central supply perpetual inventory was monitored using 15 selected items. One-way analysis of variance tests were done to determine any significant differences between the bar-code and manual systems. Using the bar-code system took longer than using the manual system because of a significant difference in the time required for order entry into the computer. Multiple-use requirements of the central supply computer system made entering bar-code data a much slower process. There was, however, a significant improvement in the accuracy of the perpetual inventory. Using the bar-code system for issuing medical supplies to the nursing units takes longer than using the manual system. However, the accuracy of the perpetual inventory was significantly improved with the implementation of the bar-code system.
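    The one-way analysis of variance used here can be sketched with SciPy on made-up issue-time data (group means, spreads, and sizes are invented and do not reflect the study's numbers):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    # Hypothetical order-entry times (minutes) at the three measurement points
    manual = rng.normal(30.0, 3.0, 20)   # before implementation
    month1 = rng.normal(38.0, 3.0, 20)   # one month after
    month4 = rng.normal(36.0, 3.0, 20)   # four months after
    f_stat, p_value = stats.f_oneway(manual, month1, month4)
    # A small p-value indicates at least one group mean differs
    ```

    A significant F statistic, as in the study's order-entry comparison, would then be followed up to identify which measurement points differ.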

  16. Real-time transmission of digital video using variable-length coding

    NASA Astrophysics Data System (ADS)

    Bizon, Thomas P.; Shalkhauser, Mary Jo; Whyte, Wayne A., Jr.

    1993-03-01

    Huffman coding is a variable-length lossless compression technique where data with a high probability of occurrence is represented with short codewords, while 'not-so-likely' data is assigned longer codewords. Compression is achieved when the high-probability levels occur so frequently that their benefit outweighs any penalty paid when a less likely input occurs. One instance where Huffman coding is extremely effective occurs when data is highly predictable and differential coding can be applied (as with a digital video signal). For that reason, it is desirable to apply this compression technique to digital video transmission; however, special care must be taken in order to implement a communication protocol utilizing Huffman coding. This paper addresses several of the issues relating to the real-time transmission of Huffman-coded digital video over a constant-rate serial channel. Topics discussed include data rate conversion (from variable to a fixed rate), efficient data buffering, channel coding, recovery from communication errors, decoder synchronization, and decoder architectures. A description of the hardware developed to execute Huffman coding and serial transmission is also included. Although this paper focuses on matters relating to Huffman-coded digital video, the techniques discussed can easily be generalized for a variety of applications which require transmission of variable-length data.
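
    The differential-plus-Huffman pipeline described above can be sketched in a few lines. This is an illustrative sketch, not the paper's implementation; the sample signal and the heap tie-breaking order are arbitrary assumptions.

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code (symbol -> bitstring) from an iterable of symbols."""
    freq = Counter(symbols)
    # Heap entries are (weight, tiebreak, tree); tree is a symbol or a pair.
    heap = [(w, i, s) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate case: only one distinct symbol
        return {heap[0][2]: "0"}
    count = len(heap)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)
        w2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, count, (t1, t2)))
        count += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix
    walk(heap[0][2], "")
    return codes

# Differential coding of a predictable signal: most differences are 0 or +/-1,
# so the short codewords dominate and the stream compresses well.
samples = [10, 10, 11, 11, 11, 12, 12, 12, 12, 11]
diffs = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]
codes = huffman_code(diffs)
bits = "".join(codes[d] for d in diffs)
```

    On this toy signal the variable-length stream takes 16 bits where a fixed 2-bit-per-symbol code would take 20, illustrating why the gain depends on high-probability levels occurring frequently.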

  17. Real-time transmission of digital video using variable-length coding

    NASA Technical Reports Server (NTRS)

    Bizon, Thomas P.; Shalkhauser, Mary Jo; Whyte, Wayne A., Jr.

    1993-01-01

    Huffman coding is a variable-length lossless compression technique where data with a high probability of occurrence is represented with short codewords, while 'not-so-likely' data is assigned longer codewords. Compression is achieved when the high-probability levels occur so frequently that their benefit outweighs any penalty paid when a less likely input occurs. One instance where Huffman coding is extremely effective occurs when data is highly predictable and differential coding can be applied (as with a digital video signal). For that reason, it is desirable to apply this compression technique to digital video transmission; however, special care must be taken in order to implement a communication protocol utilizing Huffman coding. This paper addresses several of the issues relating to the real-time transmission of Huffman-coded digital video over a constant-rate serial channel. Topics discussed include data rate conversion (from variable to a fixed rate), efficient data buffering, channel coding, recovery from communication errors, decoder synchronization, and decoder architectures. A description of the hardware developed to execute Huffman coding and serial transmission is also included. Although this paper focuses on matters relating to Huffman-coded digital video, the techniques discussed can easily be generalized for a variety of applications which require transmission of variable-length data.

  18. Incidental Auditory Category Learning

    PubMed Central

    Gabay, Yafit; Dick, Frederic K.; Zevin, Jason D.; Holt, Lori L.

    2015-01-01

    Very little is known about how auditory categories are learned incidentally, without instructions to search for category-diagnostic dimensions, overt category decisions, or experimenter-provided feedback. This is an important gap because learning in the natural environment does not arise from explicit feedback and there is evidence that the learning systems engaged by traditional tasks are distinct from those recruited by incidental category learning. We examined incidental auditory category learning with a novel paradigm, the Systematic Multimodal Associations Reaction Time (SMART) task, in which participants rapidly detect and report the appearance of a visual target in one of four possible screen locations. Although the overt task is rapid visual detection, a brief sequence of sounds precedes each visual target. These sounds are drawn from one of four distinct sound categories that predict the location of the upcoming visual target. These many-to-one auditory-to-visuomotor correspondences support incidental auditory category learning. Participants incidentally learn categories of complex acoustic exemplars and generalize this learning to novel exemplars and tasks. Further, learning is facilitated when category exemplar variability is more tightly coupled to the visuomotor associations than when the same stimulus variability is experienced across trials. We relate these findings to phonetic category learning. PMID:26010588

  19. Change in the coding of interaural time difference along the tonotopic axis of the chicken nucleus laminaris.

    PubMed

    Palanca-Castan, Nicolas; Köppl, Christine

    2015-01-01

    Interaural time differences (ITDs) are an important cue for the localization of sounds in azimuthal space. Both birds and mammals have specialized, tonotopically organized nuclei in the brain stem for the processing of ITD: medial superior olive in mammals and nucleus laminaris (NL) in birds. The specific way in which ITDs are derived was long assumed to conform to a delay-line model in which arrays of systematically arranged cells create a representation of auditory space with different cells responding maximally to specific ITDs. This model was supported by data from barn owl NL taken from regions above 3 kHz and from chicken above 1 kHz. However, data from mammals often do not show defining features of the Jeffress model such as a systematic topographic representation of best ITDs or the presence of axonal delay lines, and an alternative has been proposed in which neurons are not topographically arranged with respect to ITD and coding occurs through the assessment of the overall response of two large neuron populations, one in each hemisphere. Modeling studies have suggested that the presence of different coding systems could be related to the animal's head size and frequency range rather than their phylogenetic group. Testing this hypothesis requires data from across the tonotopic range of both birds and mammals. The aim of this study was to obtain in vivo recordings from neurons in the low-frequency range (<1000 Hz) of chicken NL. Our data argues for the presence of a modified Jeffress system that uses the slopes of ITD-selective response functions instead of their peaks to topographically represent ITD at mid- to high frequencies. At low frequencies, below several 100 Hz, the data did not support any current model of ITD coding. 
This differs from what was previously shown in the barn owl and suggests that constraints on optimal ITD processing may be associated with the particular demands on sound localization determined by the animal's ecological niche in
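
    The delay-line (Jeffress) readout that the abstract contrasts with slope-based coding can be illustrated with a toy coincidence detector bank. The sampling rate, tone frequency, and ITD below are hypothetical values chosen for the sketch.

```python
import math

def best_itd(left, right, max_lag):
    """Jeffress-style peak readout: the internal delay whose coincidence
    (cross-correlation) value is largest is reported as the stimulus ITD."""
    def corr(lag):
        return sum(left[i] * right[i + lag]
                   for i in range(len(left))
                   if 0 <= i + lag < len(right))
    return max(range(-max_lag, max_lag + 1), key=corr)

# A 500 Hz tone sampled at 50 kHz; the right-ear signal is delayed by
# 10 samples (200 us), as if the source were nearer the left ear.
fs, f = 50_000, 500.0
true_itd = 10
left = [math.sin(2 * math.pi * f * t / fs) for t in range(1000)]
right = [math.sin(2 * math.pi * f * (t - true_itd) / fs) for t in range(1000)]
```

    The abstract's point is that at low frequencies this peak readout breaks down, and a readout based on the slopes of ITD-selective response functions fits the chicken data better at mid-to-high frequencies.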

  20. Change in the coding of interaural time difference along the tonotopic axis of the chicken nucleus laminaris

    PubMed Central

    Palanca-Castan, Nicolas; Köppl, Christine

    2015-01-01

    Interaural time differences (ITDs) are an important cue for the localization of sounds in azimuthal space. Both birds and mammals have specialized, tonotopically organized nuclei in the brain stem for the processing of ITD: medial superior olive in mammals and nucleus laminaris (NL) in birds. The specific way in which ITDs are derived was long assumed to conform to a delay-line model in which arrays of systematically arranged cells create a representation of auditory space with different cells responding maximally to specific ITDs. This model was supported by data from barn owl NL taken from regions above 3 kHz and from chicken above 1 kHz. However, data from mammals often do not show defining features of the Jeffress model such as a systematic topographic representation of best ITDs or the presence of axonal delay lines, and an alternative has been proposed in which neurons are not topographically arranged with respect to ITD and coding occurs through the assessment of the overall response of two large neuron populations, one in each hemisphere. Modeling studies have suggested that the presence of different coding systems could be related to the animal’s head size and frequency range rather than their phylogenetic group. Testing this hypothesis requires data from across the tonotopic range of both birds and mammals. The aim of this study was to obtain in vivo recordings from neurons in the low-frequency range (<1000 Hz) of chicken NL. Our data argues for the presence of a modified Jeffress system that uses the slopes of ITD-selective response functions instead of their peaks to topographically represent ITD at mid- to high frequencies. At low frequencies, below several 100 Hz, the data did not support any current model of ITD coding. This is different to what was previously shown in the barn owl and suggests that constraints in optimal ITD processing may be associated with the particular demands on sound localization determined by the animal’s ecological niche

  1. Rapid programmable/code-length-variable, time-domain bit-by-bit code shifting for high-speed secure optical communication.

    PubMed

    Gao, Zhensen; Dai, Bo; Wang, Xu; Kataoka, Nobuyuki; Wada, Naoya

    2011-05-01

    We propose and experimentally demonstrate a time-domain bit-by-bit code-shifting scheme that can rapidly program ultralong, code-length-variable optical codes by using only a dispersive element and a high-speed phase modulator for improving information security. The proposed scheme operates in the bit-overlap regime and could eliminate the vulnerability of extracting the code by analyzing the fine structure of the time-domain spectral phase encoded signal. It is also intrinsically immune to eavesdropping via conventional power detection and differential-phase-shift-keying (DPSK) demodulation attacks. With this scheme, 10 Gbits/s of return-to-zero DPSK data secured by bit-by-bit code shifting using up to 1024-chip optical code patterns have been transmitted over 49 km error free. The proposed scheme exhibits the potential for high-data-rate secure optical communication and for realizing even a one-time pad.

  2. Driving-Simulator-Based Test on the Effectiveness of Auditory Red-Light Running Vehicle Warning System Based on Time-To-Collision Sensor

    PubMed Central

    Yan, Xuedong; Xue, Qingwan; Ma, Lu; Xu, Yongcun

    2014-01-01

    The collision avoidance warning system is an emerging technology designed to assist drivers in avoiding red-light running (RLR) collisions at intersections. The aim of this paper is to evaluate the effect of auditory warning information on collision avoidance behaviors in the RLR pre-crash scenarios and further to examine the causal relationships among the relevant factors. A driving-simulator-based experiment was designed and conducted with 50 participants. The data from the experiments were analyzed using ANOVA and structural equation modeling (SEM). The collision-avoidance-related variables were measured in terms of brake reaction time (BRT), maximum deceleration and lane deviation in this study. It was found that the collision avoidance warning system results in lower collision rates than the without-warning condition and leads to shorter reaction times, larger maximum deceleration and less lane deviation. Furthermore, the SEM analysis illustrates that the audio warning information in fact has both direct and indirect effects on the occurrence of collisions, and the indirect effect plays a more important role in collision avoidance than the direct effect. Essentially, the auditory warning information can assist drivers in detecting the RLR vehicles in a timely manner, thus giving drivers more adequate time and space to decelerate to avoid collisions with the conflicting vehicles. PMID:24566631
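
    The direct/indirect distinction in the SEM analysis is simple path arithmetic: the indirect effect is the product of the coefficients along the mediated path. The coefficients below are hypothetical placeholders, not estimates from the study.

```python
# Path decomposition in a mediation model: the warning influences collisions
# directly and indirectly through a mediator (brake reaction time, BRT).
# All coefficients are hypothetical illustrations, not the study's estimates.
a = -0.50         # warning -> BRT (warning shortens reaction time)
b = 0.60          # BRT -> collision occurrence (slower braking, more collisions)
c_direct = -0.10  # direct path: warning -> collision occurrence

indirect = a * b             # effect routed through the mediator
total = c_direct + indirect  # total effect of the warning on collisions
```

    With these placeholder numbers the indirect path (-0.30) dominates the direct one (-0.10), mirroring the abstract's qualitative finding that the indirect effect matters more.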

  3. Performance of asynchronous fiber-optic code division multiple access system based on three-dimensional wavelength/time/space codes and its link analysis.

    PubMed

    Singh, Jaswinder

    2010-03-10

    A novel family of three-dimensional (3-D) wavelength/time/space codes for asynchronous optical code-division-multiple-access (CDMA) systems with "zero" off-peak autocorrelation and "unity" cross correlation is reported. Antipodal signaling and differential detection are employed in the system. A maximum of [(W x T+1) x W] codes are generated for unity cross correlation, where W and T are the number of wavelengths and time chips used in the code and are prime. The conditions for violation of the cross-correlation constraint are discussed. The expressions for the number of generated codes are determined for various code dimensions. It is found that the maximum number of codes is generated for S <= min(W,T), where W and T are prime and S is the number of space channels. The performance of these codes is compared to the earlier reported two-dimensional (2-D)/3-D codes for asynchronous systems. The codes have a code-set-size to code-size ratio greater than W/S. For instance, with a code size of 2065 (59 x 7 x 5), a total of 12,213 users can be supported, and 130 simultaneous users at a bit-error rate (BER) of 10(-9). An arrayed-waveguide-grating-based reconfigurable encoder/decoder design for 2-D implementation for the 3-D codes is presented so that the need for multiple star couplers and fiber ribbons is eliminated. The hardware requirements of the coders used for various modulation/detection schemes are given. The effect of insertion loss in the coders is shown to be significantly reduced with loss compensation by using an amplifier after encoding. An optical CDMA system for four users is simulated and the results presented show the improvement in performance with the use of loss compensation.
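
    The closed-form bound quoted above is easy to check numerically; the sketch assumes only the numbers stated in the abstract (W = 59, T = 7, S = 5).

```python
# Check the quoted bound for the 3-D wavelength/time/space codes: at most
# (W*T + 1) * W codes with unity cross correlation, for a code of size W*T*S.
W, T, S = 59, 7, 5             # wavelengths, time chips (both prime), space channels

code_size = W * T * S          # 59 x 7 x 5 = 2065 chips per codeword
code_set_size = (W * T + 1) * W

# The abstract claims the code-set-size to code-size ratio exceeds W/S.
ratio = code_set_size / code_size
```

    The bound evaluates to 24,426 codes for the 2065-chip example, consistent with the claimed ratio being above W/S = 11.8; the abstract's figure of 12,213 supported users is a separate, smaller quantity.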

  4. A comparative study of visual and auditory reaction times on the basis of gender and physical activity levels of medical first year students

    PubMed Central

    Jain, Aditya; Bansal, Ramta; Kumar, Avnish; Singh, KD

    2015-01-01

    Background: Reaction time (RT) is a measure of the response to a stimulus. RT plays a very important role in our lives as its practical implications may be of great consequences. Factors that can affect the average human RT include age, sex, left or right hand, central versus peripheral vision, practice, fatigue, fasting, breathing cycle, personality types, exercise, and intelligence of the subject. Aim: The aim was to compare visual RTs (VRTs) and auditory RTs (ARTs) on the basis of gender and physical activity levels of medical 1st year students. Materials and Methods: The present cross-sectional study was conducted on 120 healthy medical students in age group of 18–20 years. RT for the target stimulus, that is, the beep tone for ART and the red circle for VRT, was determined using Inquisit 4.0 software on a laptop. The task was to press the spacebar as soon as the stimulus was presented. Five readings were taken for each stimulus, and the fastest RT for each was recorded. Statistical analysis was done. Results: In both sexes, RT to the auditory stimulus was significantly shorter (P < 0.001) than to the visual stimulus. A significant difference was found between RT of male and female medical students (P < 0.001) as well as between sedentary and regularly exercising healthy medical 1st year students. Conclusion: The ART is faster than the VRT in medical students. Furthermore, male medical students have faster RTs as compared to female medical students for both auditory as well as visual stimuli. Regularly exercising medical students have faster RTs when compared with medical students with sedentary lifestyles. PMID:26097821

  5. Seeing the Song: Left Auditory Structures May Track Auditory-Visual Dynamic Alignment

    PubMed Central

    Mossbridge, Julia A.; Grabowecky, Marcia; Suzuki, Satoru

    2013-01-01

    Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment. PMID:24194873

  6. Neural code alterations and abnormal time patterns in Parkinson’s disease

    NASA Astrophysics Data System (ADS)

    Andres, Daniela Sabrina; Cerquetti, Daniel; Merello, Marcelo

    2015-04-01

    Objective. The neural code used by the basal ganglia is a current question in neuroscience, relevant for the understanding of the pathophysiology of Parkinson’s disease. While a rate code is known to participate in the communication between the basal ganglia and the motor thalamus/cortex, different lines of evidence have also favored the presence of complex time patterns in the discharge of the basal ganglia. To gain insight into the way the basal ganglia code information, we studied the activity of the globus pallidus pars interna (GPi), an output node of the circuit. Approach. We implemented the 6-hydroxydopamine model of Parkinsonism in Sprague-Dawley rats, and recorded the spontaneous discharge of single GPi neurons, in head-restrained conditions at full alertness. Analyzing the temporal structure function, we looked for characteristic scales in the neuronal discharge of the GPi. Main results. At a low-scale, we observed the presence of dynamic processes, which allow the transmission of time patterns. Conversely, at a middle-scale, stochastic processes force the use of a rate code. Regarding the time patterns transmitted, we measured the word length and found that it is increased in Parkinson’s disease. Furthermore, it showed a positive correlation with the frequency of discharge, indicating that an exacerbation of this abnormal time pattern length can be expected, as the dopamine depletion progresses. Significance. We conclude that a rate code and a time pattern code can co-exist in the basal ganglia at different temporal scales. However, their normal balance is progressively altered and replaced by pathological time patterns in Parkinson’s disease.
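
    The scale-dependent analysis described above rests on the temporal structure function. The paper's exact estimator is not given in the abstract, so this sketch uses a generic second-order version on a synthetic spike-count series; the series itself is a hypothetical illustration.

```python
import random

def structure_function(x, tau, q=2):
    """q-th order temporal structure function <|x(t + tau) - x(t)|^q>."""
    diffs = [abs(x[i + tau] - x[i]) ** q for i in range(len(x) - tau)]
    return sum(diffs) / len(diffs)

# Synthetic spike-count series: a deterministic low-scale pattern (a burst
# every 4 bins) riding on stochastic counts, loosely mimicking the two
# regimes described in the abstract.
random.seed(0)
counts = [(3 if i % 4 == 0 else 1) + random.randint(0, 1) for i in range(2000)]

sf = {tau: structure_function(counts, tau) for tau in (1, 2, 4, 8, 16)}
```

    At lags matching the pattern's period (4, 8, 16 bins) the increments are noise-dominated and the structure function dips; such scale-dependent signatures are the kind of evidence the study reads as a low-scale time-pattern regime coexisting with a middle-scale rate code.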

  7. Adaptation in the auditory system: an overview

    PubMed Central

    Pérez-González, David; Malmierca, Manuel S.

    2014-01-01

    The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already experience adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of the stimuli. However, it is at higher levels in the auditory hierarchy where more sophisticated types of neuronal processing take place. One example is stimulus-specific adaptation, in which neurons adapt to frequent, repetitive stimuli but maintain their responsiveness to stimuli with different physical characteristics, a distinct kind of processing that may play a role in change and deviance detection. In the auditory cortex, adaptation takes more elaborate forms, and contributes to the processing of complex sequences, auditory scene analysis and attention. Here we review the multiple types of adaptation that occur in the auditory system, which are part of the pool of resources that the neurons employ to process the auditory scene, and are critical to a proper understanding of the neuronal mechanisms that govern auditory perception. PMID:24600361

  8. Timing group delay and differential code bias corrections for BeiDou positioning

    NASA Astrophysics Data System (ADS)

    Guo, Fei; Zhang, Xiaohong; Wang, Jinling

    2015-05-01

    This article first clarifies the relationship between the parameters of timing group delay (TGD) and differential code bias (DCB) for BDS, and demonstrates the equivalence of TGD and DCB correction models, combining theory with practice. The TGD/DCB correction models have been extended to various scenarios for BDS positioning, and such models have been evaluated with real triple-frequency datasets. To test the effectiveness of broadcast TGDs in the navigation message and DCBs provided by the Multi-GNSS Experiment (MGEX), both standard point positioning (SPP) and precise point positioning (PPP) tests are carried out for BDS signals with different schemes. Furthermore, the influence of differential code biases on BDS positioning estimates such as coordinates, receiver clock biases, tropospheric delays and carrier phase ambiguities is investigated comprehensively. Comparative analysis shows that the unmodeled differential code biases degrade the performance of BDS SPP by a factor of two or more, whereas the estimates of PPP are subject to varying degrees of influences. For SPP, the accuracy of dual-frequency combinations is slightly worse than that of single-frequency, and they are much more sensitive to the differential code biases, particularly for the B2B3 combination. For PPP, the uncorrected differential code biases are mostly absorbed into the receiver clock bias and carrier phase ambiguities, thus resulting in a much longer convergence time. Even though the influence of the differential code biases could be mitigated over time and comparable positioning accuracy could be achieved after convergence, it is suggested that the differential code biases be handled properly, since this is vital for PPP convergence and integer ambiguity resolution.
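
    For the single-frequency case, the TGD correction discussed above amounts to one extra term in the pseudorange equation. The sketch assumes the usual convention that the broadcast BDS satellite clock is referenced to the B3 signal, so a B1I user subtracts c·TGD1 from the clock term; all numeric values are hypothetical.

```python
# Single-frequency pseudorange correction with a broadcast timing group delay.
# Convention assumed: corrected satellite clock = broadcast clock - TGD, so the
# geometric range estimate is P + c * (clock_bias - TGD). Values are hypothetical.
C = 299_792_458.0  # speed of light, m/s

def corrected_pseudorange(p_obs, clock_bias_s, tgd_s):
    """Return the clock- and TGD-corrected pseudorange in metres."""
    return p_obs + C * (clock_bias_s - tgd_s)

# A few nanoseconds of uncorrected TGD already maps to metre-level range error,
# which is why SPP degrades when the bias is ignored.
range_error_m = C * 3.0e-9  # ~0.9 m for a 3 ns bias
p = corrected_pseudorange(22_000_000.0, 1.2e-5, 3.0e-9)
```

    In PPP, by contrast, an ignored bias of this size is largely absorbed into the estimated receiver clock and ambiguities, which is the convergence effect the abstract describes.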

  9. Zipf's Law in Short-Time Timbral Codings of Speech, Music, and Environmental Sound Signals

    PubMed Central

    Haro, Martín; Serrà, Joan; Herrera, Perfecto; Corral, Álvaro

    2012-01-01

    Timbre is a key perceptual feature that allows discrimination between different sounds. Timbral sensations are highly dependent on the temporal evolution of the power spectrum of an audio signal. In order to quantitatively characterize such sensations, the shape of the power spectrum has to be encoded in a way that preserves certain physical and perceptual properties. Therefore, it is common practice to encode short-time power spectra using psychoacoustical frequency scales. In this paper, we study and characterize the statistical properties of such encodings, here called timbral code-words. In particular, we report on rank-frequency distributions of timbral code-words extracted from 740 hours of audio coming from disparate sources such as speech, music, and environmental sounds. Analogously to text corpora, we find a heavy-tailed Zipfian distribution with exponent close to one. Importantly, this distribution is found independently of different encoding decisions and regardless of the audio source. Further analysis on the intrinsic characteristics of most and least frequent code-words reveals that the most frequent code-words tend to have a more homogeneous structure. We also find that speech and music databases have specific, distinctive code-words while, in the case of the environmental sounds, these database-specific code-words are not present. Finally, we find that a Yule-Simon process with memory provides a reasonable quantitative approximation for our data, suggesting the existence of a common simple generative mechanism for all considered sound sources. PMID:22479497
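
    The rank-frequency analysis can be reproduced in miniature: count code-word occurrences, sort by rank, and fit the log-log slope, which Zipf's law predicts to be near -1. The synthetic corpus below is an assumption for illustration, not the paper's data.

```python
import math
from collections import Counter

def zipf_exponent(tokens):
    """Least-squares slope of log(frequency) vs log(rank); Zipf predicts ~ -1."""
    freqs = sorted(Counter(tokens).values(), reverse=True)
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Synthetic 'code-words' drawn from an ideal Zipf distribution over 50 types:
# type of rank r occurs roughly 1000/r times.
population = [w for rank in range(1, 51)
              for w in [f"cw{rank}"] * (1000 // rank)]
s = zipf_exponent(population)
```

    On this ideal corpus the fitted slope comes out close to -1, the exponent the paper reports for timbral code-words across speech, music, and environmental sound.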

  10. Auditory sequence analysis and phonological skill.

    PubMed

    Grube, Manon; Kumar, Sukhbinder; Cooper, Freya E; Turton, Stuart; Griffiths, Timothy D

    2012-11-01

    This work tests the relationship between auditory and phonological skill in a non-selected cohort of 238 school students (age 11) with the specific hypothesis that sound-sequence analysis would be more relevant to phonological skill than the analysis of basic, single sounds. Auditory processing was assessed across the domains of pitch, time and timbre; a combination of six standard tests of literacy and language ability was used to assess phonological skill. A significant correlation between general auditory and phonological skill was demonstrated, plus a significant, specific correlation between measures of phonological skill and the auditory analysis of short sequences in pitch and time. The data support a limited but significant link between auditory and phonological ability with a specific role for sound-sequence analysis, and provide a possible new focus for auditory training strategies to aid language development in early adolescence. PMID:22951739

  11. Hemodynamic imaging of the auditory cortex.

    PubMed

    Deborah, Ann Hall; Karima, Susi

    2015-01-01

    Over the past 20 years or so, functional magnetic resonance imaging (fMRI) has proven to be an influential tool for measuring perceptual and cognitive processing non-invasively in the human brain. This article provides a brief yet comprehensive overview of this dominant method for human auditory neuroscience, providing the reader with knowledge about the practicalities of using this technique to assess central auditory coding. Key learning objectives include developing an understanding of the basic MR physics underpinning the technique, the advantage of auditory fMRI over other current neuroimaging alternatives, and highlighting some of the practical considerations involved in setting up, running, and analyzing an auditory fMRI experiment. The future utility of fMRI and anticipated technical developments is also briefly evaluated. Throughout the review, key concepts are illustrated using specific author examples, with particular emphasis on fMRI findings that address questions pertaining to basic sound coding (such as frequency and pitch).

  12. Neural coding properties based on spike timing and pattern correlation of retinal ganglion cells

    PubMed Central

    Gong, Han-Yan; Zhang, Ying-Ying; Liang, Pei-Ji

    2010-01-01

    Correlation between spike trains or neurons sometimes indicates certain neural coding rules in the visual system. In this paper, the relationship between spike timing correlation and pattern correlation is discussed, and their ability to represent stimulus features is compared to examine their coding strategies not only in individual neurons but also in population. Two kinds of stimuli, natural movies and checkerboard, are used to arouse firing activities in chicken retinal ganglion cells. The spike timing correlation and pattern correlation are calculated using the cross-correlation function and the Lempel–Ziv distance, respectively. According to the correlation values, it is demonstrated that spike trains with similar spike patterns are not necessarily concerted in firing time. Moreover, spike pattern correlation values between individual neurons’ responses reflect the difference between natural movies and checkerboard; neurons cooperate with each other with higher pattern correlation values, which represent spatiotemporal correlations, during responses to natural movies. Spike timing does not reflect stimulus features as obviously as spike patterns, owing to their particular coding properties or physiological foundation. As a result, separating pattern correlation from the traditional timing-correlation concept uncovers additional insight into neural coding. PMID:22132042
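
    The two measures can be contrasted on toy binary spike trains. The abstract does not give the exact Lempel–Ziv distance definition used in the paper, so this sketch uses a common complexity-based normalization; the trains themselves are hypothetical.

```python
import math

def lz_complexity(s):
    """Count phrases in a simple Lempel-Ziv (1976-style) parsing of string s."""
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # grow the phrase while it has already occurred earlier in the string
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

def lz_distance(x, y):
    """One common LZ-based pattern distance: near 0 when sequences share structure."""
    cx, cy = lz_complexity(x), lz_complexity(y)
    return (lz_complexity(x + y) - min(cx, cy)) / max(cx, cy)

def timing_corr(a, b):
    """Zero-lag (bin-by-bin) correlation coefficient of two binary spike trains."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

# Two trains with the same repeating firing pattern, offset by one time bin:
# the pattern measure treats them as alike while the timing measure does not.
a = "10010010010010010010"
b = "01001001001001001001"
tc = timing_corr([int(ch) for ch in a], [int(ch) for ch in b])
d = lz_distance(a, b)
```

    Here the pattern distance is (near) zero while the zero-lag timing correlation is negative, the dissociation the paper highlights: similar spike patterns need not be concerted in firing time.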

  13. Just in time? Using QR codes for multi-professional learning in clinical practice.

    PubMed

    Jamu, Joseph Tawanda; Lowi-Jones, Hannah; Mitchell, Colin

    2016-07-01

    Clinical guidelines and policies are widely available on the hospital intranet or from the internet, but can be difficult to access at the required time and place. Clinical staff with smartphones could use Quick Response (QR) codes for contemporaneous access to relevant information to support the Just in Time Learning (JIT-L) paradigm. There are several studies that advocate the use of smartphones to enhance learning amongst medical students and junior doctors in UK. However, these participants are already technologically orientated. There are limited studies that explore the use of smartphones in nursing practice. QR Codes were generated for each topic and positioned at relevant locations on a medical ward. Support and training were provided for staff. Website analytics and semi-structured interviews were performed to evaluate the efficacy, acceptability and feasibility of using QR codes to facilitate Just in Time learning. Use was intermittently high but not sustained. Thematic analysis of interviews revealed a positive assessment of the Just in Time learning paradigm and context-sensitive clinical information. However, there were notable barriers to acceptance, including usability of QR codes and appropriateness of smartphone use in a clinical environment. The use of Just in Time learning for education and reference may be beneficial to healthcare professionals. However, alternative methods of access for less technologically literate users and a change in culture of mobile device use in clinical areas may be needed. PMID:27428702

  14. Auditory short-term memory in the primate auditory cortex.

    PubMed

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory.

  15. Trading Speed and Accuracy by Coding Time: A Coupled-circuit Cortical Model

    PubMed Central

    Standage, Dominic; You, Hongzhi; Wang, Da-Hui; Dorris, Michael C.

    2013-01-01

    Our actions take place in space and time, but despite the role of time in decision theory and the growing acknowledgement that the encoding of time is crucial to behaviour, few studies have considered the interactions between neural codes for objects in space and for elapsed time during perceptual decisions. The speed-accuracy trade-off (SAT) provides a window into spatiotemporal interactions. Our hypothesis is that temporal coding determines the rate at which spatial evidence is integrated, controlling the SAT by gain modulation. Here, we propose that local cortical circuits are inherently suited to the relevant spatial and temporal coding. In simulations of an interval estimation task, we use a generic local-circuit model to encode time by ‘climbing’ activity, seen in cortex during tasks with a timing requirement. The model is a network of simulated pyramidal cells and inhibitory interneurons, connected by conductance synapses. A simple learning rule enables the network to quickly produce new interval estimates, which show signature characteristics of estimates by experimental subjects. Analysis of network dynamics formally characterizes this generic, local-circuit timing mechanism. In simulations of a perceptual decision task, we couple two such networks. Network function is determined only by spatial selectivity and NMDA receptor conductance strength; all other parameters are identical. To trade speed and accuracy, the timing network simply learns longer or shorter intervals, driving the rate of downstream decision processing by spatially non-selective input, an established form of gain modulation. Like the timing network's interval estimates, decision times show signature characteristics of those by experimental subjects. Overall, we propose, demonstrate and analyse a generic mechanism for timing, a generic mechanism for modulation of decision processing by temporal codes, and we make predictions for experimental verification. PMID:23592967

  16. Time-frequency analysis of transient evoked-otoacoustic emissions in individuals with auditory neuropathy spectrum disorder.

    PubMed

    Narne, Vijaya Kumar; Prabhu, P Prashanth; Chatni, Suma

    2014-07-01

    The aim of the study was to describe and quantify the cochlear active mechanisms in individuals with Auditory Neuropathy Spectrum Disorder (ANSD). Transient Evoked Otoacoustic Emissions (TEOAEs) were recorded in 15 individuals with ANSD and 22 individuals with normal hearing. TEOAEs were analyzed by the wavelet transform method to describe and quantify their characteristics in narrow-band frequency regions. It was noted that the amplitude of TEOAEs was higher and the latency slightly shorter in individuals with ANSD compared to normal-hearing individuals at low and mid frequencies. The increased amplitude and reduced latencies of TEOAEs in the ANSD group could be attributed to the efferent system damage at low and mid frequencies seen in individuals with ANSD. Thus, wavelet analysis of TEOAEs proves to be another important tool to understand the pathophysiology in individuals with ANSD. PMID:24768764

  17. On the effect of timing errors in run length codes. [redundancy removal algorithms for digital channels

    NASA Technical Reports Server (NTRS)

    Wilkins, L. C.; Wintz, P. A.

    1975-01-01

    Many redundancy removal algorithms employ some sort of run-length code. Blocks of timing words are coded, with synchronization words inserted between blocks. The probability of incorrectly reconstructing a sample because of a channel error in the timing data is a monotonically nondecreasing function of time since the last synchronization word. In this paper we compute the 'probability that the accumulated magnitude of timing errors equals zero' as a function of time since the last synchronization word for a zero-order predictor (ZOP). The result is valid for any data source that can be modeled by a first-order Markov chain and any digital channel that can be modeled by a channel transition matrix. An example is presented.
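    The failure mode analyzed in this abstract can be illustrated with a toy zero-order-predictor run-length code (hypothetical code, not from the paper): a single channel error in one timing word misaligns every reconstructed sample until the next synchronization word.

```python
# Hypothetical illustration (not the paper's code): a single timing-word error
# in a run-length code corrupts reconstruction until the next sync word.

def rle_encode(samples):
    """Zero-order-predictor run-length code: (value, run_length) pairs."""
    runs = []
    for s in samples:
        if runs and runs[-1][0] == s:
            runs[-1][1] += 1
        else:
            runs.append([s, 1])
    return runs

def rle_decode(runs):
    out = []
    for value, length in runs:
        out.extend([value] * length)
    return out

original = [5, 5, 5, 2, 2, 7, 7, 7, 7]
runs = rle_encode(original)
assert rle_decode(runs) == original  # error-free round trip

# A channel error corrupting one timing word (run length 3 -> 2) misaligns
# every later sample: the accumulated timing error stays nonzero from that
# run onward until a sync word restores alignment.
corrupted = [list(r) for r in runs]
corrupted[0][1] -= 1
decoded = rle_decode(corrupted)
print(decoded)  # [5, 5, 2, 2, 7, 7, 7, 7] -- everything after run 1 shifted
```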

  18. Cumulative Time Series Representation for Code Blue prediction in the Intensive Care Unit.

    PubMed

    Salas-Boni, Rebeca; Bai, Yong; Hu, Xiao

    2015-01-01

    Patient monitors in hospitals generate a high number of false alarms that compromise patient care and burden clinicians. In our previous work, we attempted to alleviate this problem by finding combinations of monitor alarms and laboratory tests, called SuperAlarms, that were predictive of code blue events. Our current work develops a novel time series representation that accounts for both cumulative effects and temporality, and applies it to code blue prediction in the intensive care unit (ICU). The health status of patients is represented both by a term frequency approach, TF, often used in natural language processing, and by our novel cumulative approach, which we call the "weighted accumulated occurrence representation", or WAOR. These two representations are fed into an L1-regularized logistic regression classifier and used to predict code blue events. Performance was assessed online in an independent set. We report the sensitivity of our algorithm at different time windows prior to the code blue event, as well as the work-up to detect ratio and the proportion of false code blue detections divided by the number of false monitor alarms. We obtained better performance with our cumulative representation, retaining a sensitivity close to our previous work while improving the other metrics. PMID:26306261
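    As a sketch of the contrast the abstract draws, the snippet below compares a plain term-frequency vector with one possible cumulative, recency-weighted representation; the alarm vocabulary and the exponential weighting are invented for illustration, not taken from the paper's WAOR definition.

```python
# Illustrative only: contrast a term-frequency (TF) vector with a cumulative,
# recency-weighted representation of timestamped alarm events.
import math
from collections import Counter

VOCAB = ["spo2_low", "hr_high", "lactate_high"]  # hypothetical alarm vocabulary

def tf_vector(events):
    """Term frequency: raw count of each alarm type in the window."""
    counts = Counter(code for _, code in events)
    return [counts[v] for v in VOCAB]

def cumulative_vector(events, t_now, decay=0.1):
    """Cumulative, recency-weighted occurrences: recent events count more,
    but earlier events still accumulate rather than being forgotten."""
    vec = [0.0] * len(VOCAB)
    for t, code in events:
        vec[VOCAB.index(code)] += math.exp(-decay * (t_now - t))
    return vec

events = [(0, "hr_high"), (5, "hr_high"), (9, "spo2_low")]
print(tf_vector(events))              # [1, 2, 0]
print(cumulative_vector(events, 10))  # recency-weighted analogue of the counts
```

    Either vector could then be fed to a standard L1-regularized logistic regression (e.g. scikit-learn's `LogisticRegression(penalty="l1")`) as the abstract describes.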

  19. Computer code for space-time diagnostics of nuclear safety parameters

    SciTech Connect

    Solovyev, D. A.; Semenov, A. A.; Gruzdov, F. V.; Druzhaev, A. A.; Shchukin, N. V.; Dolgenko, S. G.; Solovyeva, I. V.; Ovchinnikova, E. A.

    2012-07-01

    The computer code ECRAN 3D (Experimental and Calculation Reactor Analysis) is designed for continuous monitoring and diagnostics of reactor cores and databases for RBMK-1000, on the basis of analytical methods for the interrelation of nuclear safety parameters. The code algorithms are based on the analysis of deviations between the physically measured figures and the results of neutron-physical and thermal-hydraulic calculations. Discrepancies between the measured and calculated signals indicate a mismatch between the behavior of the physical device and its simulator. The diagnostics system can solve the following problems: identification of the occurrence and time of inconsistent results, localization of failures, and identification and quantification of the causes of inconsistencies. These problems can be effectively solved only when the computer code works in real time, which raises the requirements on code performance. As false operations can lead to significant economic losses, the diagnostics system must be based on certified software tools. POLARIS, version 4.2.1, is used for the neutron-physical calculation in the computer code ECRAN 3D. (authors)

  20. The influence of cochlear spectral processing on the timing and amplitude of the speech-evoked auditory brain stem response

    PubMed Central

    Nuttall, Helen E.; Moore, David R.; Barry, Johanna G.; Krumbholz, Katrin

    2015-01-01

    The speech-evoked auditory brain stem response (speech ABR) is widely considered to provide an index of the quality of neural temporal encoding in the central auditory pathway. The aim of the present study was to evaluate the extent to which the speech ABR is shaped by spectral processing in the cochlea. High-pass noise masking was used to record speech ABRs from delimited octave-wide frequency bands between 0.5 and 8 kHz in normal-hearing young adults. The latency of the frequency-delimited responses decreased from the lowest to the highest frequency band by up to 3.6 ms. The observed frequency-latency function was compatible with model predictions based on wave V of the click ABR. The frequency-delimited speech ABR amplitude was largest in the 2- to 4-kHz frequency band and decreased toward both higher and lower frequency bands despite the predominance of low-frequency energy in the speech stimulus. We argue that the frequency dependence of speech ABR latency and amplitude results from the decrease in cochlear filter width with decreasing frequency. The results suggest that the amplitude and latency of the speech ABR may reflect interindividual differences in cochlear, as well as central, processing. The high-pass noise-masking technique provides a useful tool for differentiating between peripheral and central effects on the speech ABR. It can be used for further elucidating the neural basis of the perceptual speech deficits that have been associated with individual differences in speech ABR characteristics. PMID:25787954

  1. The influence of cochlear spectral processing on the timing and amplitude of the speech-evoked auditory brain stem response.

    PubMed

    Nuttall, Helen E; Moore, David R; Barry, Johanna G; Krumbholz, Katrin; de Boer, Jessica

    2015-06-01

    The speech-evoked auditory brain stem response (speech ABR) is widely considered to provide an index of the quality of neural temporal encoding in the central auditory pathway. The aim of the present study was to evaluate the extent to which the speech ABR is shaped by spectral processing in the cochlea. High-pass noise masking was used to record speech ABRs from delimited octave-wide frequency bands between 0.5 and 8 kHz in normal-hearing young adults. The latency of the frequency-delimited responses decreased from the lowest to the highest frequency band by up to 3.6 ms. The observed frequency-latency function was compatible with model predictions based on wave V of the click ABR. The frequency-delimited speech ABR amplitude was largest in the 2- to 4-kHz frequency band and decreased toward both higher and lower frequency bands despite the predominance of low-frequency energy in the speech stimulus. We argue that the frequency dependence of speech ABR latency and amplitude results from the decrease in cochlear filter width with decreasing frequency. The results suggest that the amplitude and latency of the speech ABR may reflect interindividual differences in cochlear, as well as central, processing. The high-pass noise-masking technique provides a useful tool for differentiating between peripheral and central effects on the speech ABR. It can be used for further elucidating the neural basis of the perceptual speech deficits that have been associated with individual differences in speech ABR characteristics.

  2. A novel repetition space-time coding scheme for mobile FSO systems

    NASA Astrophysics Data System (ADS)

    Li, Ming; Cao, Yang; Li, Shu-ming; Yang, Shao-wen

    2015-03-01

    Considering the influence of stronger atmospheric turbulence, more severe pointing errors and highly dynamic links on the transmission performance of mobile multiple-input multiple-output (MIMO) free space optics (FSO) communication systems, this paper establishes a channel model for the mobile platform. Based on the combination of the Alamouti space-time code and time hopping ultra-wideband (TH-UWB) communications, a novel repetition space-time coding (RSTC) method for mobile 2×2 free-space optical communications with pulse position modulation (PPM) is developed. In particular, two decoding methods, equal gain combining (EGC) maximum likelihood detection (MLD) and correlation matrix detection (CMD), are derived. When a quasi-static fading and weak-turbulence channel model is considered, simulation results show that whether the channel state information (CSI) is known or not, the coded system achieves significantly better symbol error rate (SER) performance than the uncoded one. In other words, transmit diversity can be achieved while conveying the information only through the time delays of the modulated signals transmitted from different antennas. CMD achieves almost the same signal-combining effect as maximal ratio combining (MRC). However, when the channel correlation increases, the SER performance of the coded 2×2 system degrades significantly.
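    The Alamouti building block behind the proposed RSTC scheme can be sketched for a real-valued channel (the PPM/FSO specifics of the paper are omitted; the channel gains below are assumed values, and conjugates drop out because the symbols are real):

```python
# Sketch of the classic 2-antenna Alamouti code: two symbols sent over two
# slots, then recovered by linear combining with diversity gain |h1|^2+|h2|^2.
import numpy as np

h = np.array([0.8, 1.3])        # assumed channel gains from the two antennas
s = np.array([1.0, -1.0])       # two symbols to transmit

# Slot 1: antennas send (s1, s2); slot 2: antennas send (-s2*, s1*).
r1 = h[0] * s[0] + h[1] * s[1]
r2 = -h[0] * s[1] + h[1] * s[0]  # conjugates omitted for real symbols

# Linear combining recovers each symbol scaled by the diversity gain.
s1_hat = h[0] * r1 + h[1] * r2
s2_hat = h[1] * r1 - h[0] * r2
gain = h[0] ** 2 + h[1] ** 2
print(np.allclose([s1_hat, s2_hat], gain * s))  # True: symbols separated
```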

  3. The Role of Coding Time in Estimating and Interpreting Growth Curve Models.

    ERIC Educational Resources Information Center

    Biesanz, Jeremy C.; Deeb-Sossa, Natalia; Papadakis, Alison A.; Bollen, Kenneth A.; Curran, Patrick J.

    2004-01-01

    The coding of time in growth curve models has important implications for the interpretation of the resulting model that are sometimes not transparent. The authors develop a general framework that includes predictors of growth curve components to illustrate how parameter estimates and their standard errors are exactly determined as a function of…
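    The interpretive point can be shown numerically: recoding time (moving its zero point) changes the intercept of a linear growth model while leaving the slope untouched. A minimal sketch using ordinary least squares on a deterministic trajectory (values invented for illustration):

```python
# How the coding of time changes growth-curve parameter interpretation:
# the slope is invariant, but the intercept refers to whichever occasion
# is coded as time zero.
import numpy as np

waves = np.array([0.0, 1.0, 2.0, 3.0])   # time coded from the first occasion
y = 10.0 + 2.5 * waves                   # deterministic linear growth

def ols_line(t, y):
    """Ordinary least squares fit of y = b0 + b1 * t."""
    X = np.column_stack([np.ones_like(t), t])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # (intercept, slope)

b_first = ols_line(waves, y)        # time 0 = first wave
b_last = ols_line(waves - 3.0, y)   # time 0 = last wave
print(b_first)  # intercept 10.0 (status at first wave), slope 2.5
print(b_last)   # intercept 17.5 (status at last wave), same slope 2.5
```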

  4. 14 CFR 234.10 - Voluntary disclosure of on-time performance codes.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 4 2012-01-01 2012-01-01 false Voluntary disclosure of on-time performance codes. 234.10 Section 234.10 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION PROCEEDINGS) ECONOMIC REGULATIONS AIRLINE SERVICE QUALITY PERFORMANCE REPORTS § 234.10...

  5. 14 CFR 234.10 - Voluntary disclosure of on-time performance codes.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 4 2011-01-01 2011-01-01 false Voluntary disclosure of on-time performance codes. 234.10 Section 234.10 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION PROCEEDINGS) ECONOMIC REGULATIONS AIRLINE SERVICE QUALITY PERFORMANCE REPORTS § 234.10...

  6. 14 CFR 234.10 - Voluntary disclosure of on-time performance codes.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 4 2013-01-01 2013-01-01 false Voluntary disclosure of on-time performance codes. 234.10 Section 234.10 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION PROCEEDINGS) ECONOMIC REGULATIONS AIRLINE SERVICE QUALITY PERFORMANCE REPORTS § 234.10...

  7. 14 CFR 234.10 - Voluntary disclosure of on-time performance codes.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Voluntary disclosure of on-time performance codes. 234.10 Section 234.10 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION PROCEEDINGS) ECONOMIC REGULATIONS AIRLINE SERVICE QUALITY PERFORMANCE REPORTS § 234.10...

  8. 14 CFR 234.10 - Voluntary disclosure of on-time performance codes.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 4 2014-01-01 2014-01-01 false Voluntary disclosure of on-time performance codes. 234.10 Section 234.10 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION PROCEEDINGS) ECONOMIC REGULATIONS AIRLINE SERVICE QUALITY PERFORMANCE REPORTS § 234.10...

  9. Neural Representation of Concurrent Vowels in Macaque Primary Auditory Cortex.

    PubMed

    Fishman, Yonatan I; Micheyl, Christophe; Steinschneider, Mitchell

    2016-01-01

    Successful speech perception in real-world environments requires that the auditory system segregate competing voices that overlap in frequency and time into separate streams. Vowels are major constituents of speech and are comprised of frequencies (harmonics) that are integer multiples of a common fundamental frequency (F0). The pitch and identity of a vowel are determined by its F0 and spectral envelope (formant structure), respectively. When two spectrally overlapping vowels differing in F0 are presented concurrently, they can be readily perceived as two separate "auditory objects" with pitches at their respective F0s. A difference in pitch between two simultaneous vowels provides a powerful cue for their segregation, which in turn, facilitates their individual identification. The neural mechanisms underlying the segregation of concurrent vowels based on pitch differences are poorly understood. Here, we examine neural population responses in macaque primary auditory cortex (A1) to single and double concurrent vowels (/a/ and /i/) that differ in F0 such that they are heard as two separate auditory objects with distinct pitches. We find that neural population responses in A1 can resolve, via a rate-place code, lower harmonics of both single and double concurrent vowels. Furthermore, we show that the formant structures, and hence the identities, of single vowels can be reliably recovered from the neural representation of double concurrent vowels. We conclude that A1 contains sufficient spectral information to enable concurrent vowel segregation and identification by downstream cortical areas. PMID:27294198

  10. Solution of 3-dimensional time-dependent viscous flows. Part 2: Development of the computer code

    NASA Technical Reports Server (NTRS)

    Weinberg, B. C.; Mcdonald, H.

    1980-01-01

    There is considerable interest in developing a numerical scheme for solving the time-dependent viscous compressible three-dimensional flow equations to aid in the design of helicopter rotors. The development of a computer code to solve a three-dimensional unsteady approximate form of the Navier-Stokes equations employing a linearized block implicit technique in conjunction with a QR operator scheme is described. Results of calculations of several Cartesian test cases are presented. The computer code can be applied to more complex flow fields such as those encountered on rotating airfoils.

  11. Reliable Wireless Broadcast with Linear Network Coding for Multipoint-to-Multipoint Real-Time Communications

    NASA Astrophysics Data System (ADS)

    Kondo, Yoshihisa; Yomo, Hiroyuki; Yamaguchi, Shinji; Davis, Peter; Miura, Ryu; Obana, Sadao; Sampei, Seiichi

    This paper proposes multipoint-to-multipoint (MPtoMP) real-time broadcast transmission using network coding for ad-hoc networks such as video game networks. We aim to achieve highly reliable MPtoMP broadcasting using IEEE 802.11 media access control (MAC), which does not include a retransmission mechanism. When each node detects packets from the other nodes in a sequence, the correctly detected packets are network-encoded, and the encoded packet is broadcast in the next sequence as a piggy-back for its native packet. To prevent an increase of overhead in each packet due to piggy-back packet transmission, the network coding vector for each node is exchanged between all nodes in the negotiation phase. Each user keeps using the same coding vector generated in the negotiation phase, and only coding information that represents which user signals are included in the network coding process is transmitted along with the piggy-back packet. Our simulation results show that the proposed method can provide higher reliability than other schemes using multipoint relay (MPR) or redundant transmissions such as forward error correction (FEC). We also implement the proposed method in a wireless testbed, and show that it achieves high reliability in a real-world environment with a practical degree of complexity when installed on current wireless devices.
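    The piggy-backed coding idea rests on a simple XOR property; the sketch below (invented packet contents, plain binary XOR rather than the paper's coding vectors) shows how a peer recovers one lost native packet from a coded packet it did receive.

```python
# Toy linear network coding over GF(2): a node broadcasts the XOR of packets
# it heard; a peer missing one of them recovers it by XORing again.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Node C heard packets from A and B in the previous sequence and piggy-backs
# their XOR on its own native packet.
pkt_a = b"ALPHA"
pkt_b = b"BRAVO"
coded = xor_bytes(pkt_a, pkt_b)

# Node D missed pkt_b but received pkt_a and the coded packet:
recovered_b = xor_bytes(coded, pkt_a)
print(recovered_b)  # b'BRAVO'
```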

  12. A Real-Time Earthquake Moment Tensor Scanning Code for the Antelope System (BRTT, Inc.)

    NASA Astrophysics Data System (ADS)

    Macpherson, K. A.; Ruppert, N. A.; Freymueller, J. T.; Lindquist, K.; Harvey, D.; Dreger, D. S.; Lombard, P. N.; Guilhem, A.

    2015-12-01

    While all seismic observatories routinely determine hypocentral location and local magnitude within a few minutes of an earthquake's occurrence, the ability to estimate seismic moment and sense of slip in a similar time frame is less widespread. This is unfortunate, because moment and mechanism are critical parameters for rapid hazard assessment; for larger events, moment magnitude is more reliable due to the tendency of local magnitude to saturate, and certain mechanisms such as off-shore thrust events might indicate earthquakes with tsunamigenic potential. In order to increase access to this capability, we have developed a continuous moment tensor scanning code for Antelope, the ubiquitous open-architecture seismic acquisition and processing software in use around the world. The scanning code, which uses an algorithm that has previously been employed for real-time monitoring at the University of California, Berkeley, is able to produce full moment tensor solutions for moderate events from regional seismic data. The algorithm monitors a grid of potential sources by continuously cross-correlating pre-computed synthetic seismograms with long-period recordings from a sparse network of broad-band stations. The code package consists of 3 modules. One module is used to create a monitoring grid by constructing source-receiver geometry, calling a frequency-wavenumber code to produce synthetics, and computing the generalized linear inverse of the array of synthetics. There is a real-time scanning module that correlates streaming data with pre-inverted synthetics, monitors the variance reduction, and writes the moment tensor solution to a database if an earthquake detection occurs. Finally, there is an 'off-line' module that is very similar to the real-time scanner, with the exception that it utilizes pre-recorded data stored in Antelope databases and is useful for testing purposes or for quickly producing moment tensor catalogs for long time series. The code is open source.
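    The scanning loop reduces to linear algebra: invert the synthetics once, then recover a source vector at each scan step with a single matrix multiply and check the variance reduction against a detection threshold. A schematic numpy sketch under assumed dimensions (not the Antelope module itself):

```python
# Schematic moment-tensor scanning step: pre-invert synthetics G once, then
# m = pinv(G) @ d per data window, triggering on high variance reduction.
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((64, 6))   # 64 data samples x 6 moment-tensor terms
G_pinv = np.linalg.pinv(G)         # generalized inverse, computed once

m_true = np.array([1.0, -0.5, 0.2, 0.0, 0.3, -0.1])
d = G @ m_true                     # noise-free "observed" window

m_est = G_pinv @ d                 # one matrix multiply per scan step
resid = d - G @ m_est
vr = 1.0 - resid.var() / d.var()   # variance reduction of the fit
print(vr > 0.99)  # True: a near-perfect fit would trigger a detection
```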

  13. A time dependent 2D divertor code with TVD scheme for complex divertor configurations

    NASA Astrophysics Data System (ADS)

    Shimizu, K.; Takizuka, T.; Hirayama, T.

    1999-11-01

    In order to study the transport of heat and particles in the SOL and divertor plasmas, a two-dimensional divertor code, SOLDOR, has been developed. The model used in this code is identical to the B2 code. The fluid equations are discretized in space on a non-orthogonal mesh to treat accurately the W-shaped divertor configuration of JT-60U. The total variation diminishing (TVD) scheme, one of the most widely used schemes in computational fluid dynamics, is applied to the convective terms. The equations obtained by a finite volume method (FVM) are discretized in time with a fully implicit scheme and are solved time-dependently using the Newton-Raphson method. The discretized equations are solved efficiently using the approximate factorization (AF) method. Test calculations in slab geometry successfully reproduced the B2 results (B.J. Braams, NET report, 1987). We are going to apply this code to the JT-60U divertor plasma and investigate the flow reversal and impurity transport.
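    For readers unfamiliar with TVD schemes, here is a minimal flux-limited (minmod) update for scalar linear advection; the grid size and CFL number are arbitrary choices for illustration, not SOLDOR parameters.

```python
# Minimal TVD (flux-limited) step for u_t + a u_x = 0 on a periodic grid:
# minmod-limited slopes keep the scheme free of new extrema, and the
# flux-difference form keeps it conservative.
import numpy as np

def minmod(a, b):
    """Zero where slopes disagree in sign, else the smaller-magnitude slope."""
    return np.where(a * b <= 0, 0.0, np.where(np.abs(a) < np.abs(b), a, b))

def tvd_step(u, c):
    """One explicit step with CFL number c (0 < c <= 1)."""
    du_m = u - np.roll(u, 1)            # backward differences
    du_p = np.roll(u, -1) - u           # forward differences
    slope = minmod(du_m, du_p)
    flux = u + 0.5 * (1.0 - c) * slope  # limited second-order upwind flux
    return u - c * (flux - np.roll(flux, 1))

u = np.zeros(100)
u[40:60] = 1.0                          # square pulse
total0 = u.sum()
for _ in range(50):
    u = tvd_step(u, 0.5)
print(np.isclose(u.sum(), total0))                    # True: conservative
print(u.min() >= -1e-12 and u.max() <= 1.0 + 1e-12)   # True: no new extrema
```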

  14. Automation from pictures: Producing real time code from a state transition diagram

    SciTech Connect

    Kozubal, A.J.

    1991-01-01

    The state transition diagram (STD) model has been helpful in the design of real time software, especially with the emergence of graphical computer aided software engineering (CASE) tools. Nevertheless, the translation of the STD to real time code has in the past been primarily a manual task. At Los Alamos we have automated this process. The designer constructs the STD using a CASE tool (Cadre Teamwork) using a special notation for events and actions. A translator converts the STD into an intermediate state notation language (SNL), and this SNL is compiled directly into C code (a state program). Execution of the state program is driven by external events, allowing multiple state programs to effectively share the resources of the host processor. Since the design and the code are tightly integrated through the CASE tool, the design and code never diverge, and we avoid design obsolescence. Furthermore, the CASE tool automates the production of formal technical documents from the graphic description encapsulated by the CASE tool. 10 refs., 3 figs.
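    A state transition diagram compiles naturally to a table-driven, event-dispatched state program of the following shape; the states, events and actions here are invented for illustration, not taken from the SNL translator.

```python
# Table-driven state machine: external events drive transitions, and each
# transition may fire an action, mirroring code generated from an STD.

TRANSITIONS = {
    # (state, event): (next_state, action)
    ("idle",    "start"): ("running", "open_valve"),
    ("running", "fault"): ("halted",  "close_valve"),
    ("running", "stop"):  ("idle",    "close_valve"),
    ("halted",  "reset"): ("idle",    None),
}

def dispatch(state, event, actions_log):
    """Look up the transition; unknown events leave the state unchanged."""
    next_state, action = TRANSITIONS.get((state, event), (state, None))
    if action:
        actions_log.append(action)
    return next_state

log = []
state = "idle"
for ev in ["start", "fault", "reset"]:
    state = dispatch(state, ev, log)
print(state)  # idle
print(log)    # ['open_valve', 'close_valve']
```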

  15. Neural processing of auditory signals in the time domain: delay-tuned coincidence detectors in the mustached bat.

    PubMed

    Suga, Nobuo

    2015-06-01

    The central auditory system produces combination-sensitive neurons tuned to a specific combination of multiple signal elements. Some of these neurons act as coincidence detectors with delay lines for the extraction of spectro-temporal information from sounds. "Delay-tuned" neurons of mustached bats are tuned to a combination of up to four signal elements with a specific delay between them and form a delay map. They are produced in the inferior colliculus by the coincidence of the rebound response following glycinergic inhibition to the first harmonic of a biosonar pulse with the short-latency response to the 2nd-4th harmonics of its echo. Compared with collicular delay-tuned neurons, thalamic and cortical ones respond more to pulse-echo pairs than to individual sounds. Cortical delay-tuned neurons are clustered in three separate areas. They interact with each other through a circuit mediating positive feedback and lateral inhibition for adjustment and improvement of the delay tuning of cortical and subcortical neurons. The current article reviews the mechanisms for delay tuning and the response properties of collicular, thalamic and cortical delay-tuned neurons in relation to hierarchical signal processing. PMID:25752443

  16. A gradual neural-network algorithm for jointly time-slot/code assignment problems in packet radio networks.

    PubMed

    Funabiki, N; Kitamichi, J

    1998-01-01

    This paper presents a gradual neural network (GNN) algorithm for the jointly time-slot/code assignment problem (JTCAP) in packet radio networks. The goal of this newly defined problem is to find a simultaneous assignment of a time-slot and a code to each communication link, whereas time-slots and codes have been assigned independently in existing algorithms. A time/code division multiple access protocol is adopted for conflict-free communications, where packets are transmitted in repetitions of fixed-length time-slots with specific codes. GNN seeks the time-slot/code assignment with the minimum number of time-slots subject to two constraints: 1) the number of codes must not exceed its upper limit, and 2) no pair of links within conflict distance may be assigned the same time-slot/code pair. The restricted problem for only one code is known to be NP-complete. The performance of GNN is verified by solving 3000 instances with 100-500 nodes and 100-1000 links. Comparison with the lower bound and a greedy algorithm shows the superiority of GNN in terms of solution quality with comparable computation time.
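    A greedy baseline of the kind GNN is compared against can be sketched directly from the two constraints; the conflict graph and code limit below are made up for illustration.

```python
# Greedy joint time-slot/code assignment: give each link the first
# (slot, code) pair not already used by any conflicting link, subject to
# an upper limit on the number of codes.
MAX_CODES = 2  # assumed code limit for this toy instance

def greedy_assign(links, conflicts):
    """links: list of link ids; conflicts: set of frozenset pairs that must
    not share a (slot, code). Returns {link: (slot, code)}."""
    assign = {}
    for link in links:
        slot = 0
        while link not in assign:
            for code in range(MAX_CODES):
                taken = any(
                    assign.get(other) == (slot, code)
                    for other in links
                    if frozenset((link, other)) in conflicts
                )
                if not taken:
                    assign[link] = (slot, code)
                    break
            else:
                slot += 1  # all codes in this slot conflict; try next slot
    return assign

links = ["a", "b", "c"]
conflicts = {frozenset(("a", "b")), frozenset(("b", "c")), frozenset(("a", "c"))}
result = greedy_assign(links, conflicts)
print(result)  # three mutually conflicting links fit in 2 slots x 2 codes
```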

  17. Why two "Distractors" are better than one: modeling the effect of non-target auditory and tactile stimuli on visual saccadic reaction time.

    PubMed

    Diederich, Adele; Colonius, Hans

    2007-05-01

    Saccadic reaction time (SRT) was measured in a focused attention task with a visual target stimulus (LED) and auditory (white noise burst) and tactile (vibration applied to palm) stimuli presented as non-targets at five different onset times (SOAs) with respect to the target. Mean SRT was reduced (i) when the number of non-targets was increased and (ii) when target and non-targets were all presented in the same hemifield; (iii) this facilitation first increases and then decreases as the time point of presenting the non-targets is shifted from early to late relative to the target presentation. These results are consistent with the time-window-of-integration (TWIN) model (Colonius and Diederich in J Cogn Neurosci 16:1000-1009, 2004) which distinguishes a peripheral stage of independent sensory channels racing against each other from a second stage of neural integration of the input and preparation of an oculomotor response. Cross-modal interaction manifests itself in an increase or decrease of second stage processing time. For the first time, without making specific distributional assumptions on the processing times, TWIN is shown to yield numerical estimates for the facilitative effects of the number of non-targets and of the spatial configuration of target and non-targets. More generally, the TWIN model framework suggests that multisensory integration is a function of unimodal stimulus properties, like intensity, in the first stage and of cross-modal stimulus properties, like spatial disparity, in the second stage. PMID:17216154

  18. Bat's auditory system: Corticofugal feedback and plasticity

    NASA Astrophysics Data System (ADS)

    Suga, Nobuo

    2001-05-01

    The auditory system of the mustached bat consists of physiologically distinct subdivisions for processing different types of biosonar information. It was found that the corticofugal (descending) auditory system plays an important role in improving and adjusting auditory signal processing. Repetitive acoustic stimulation, cortical electrical stimulation or auditory fear conditioning evokes plastic changes of the central auditory system. The changes are based upon egocentric selection evoked by focused positive feedback associated with lateral inhibition. Focal electric stimulation of the auditory cortex evokes short-term changes in the auditory cortex and subcortical auditory nuclei. An increase in the cortical acetylcholine level during the electric stimulation changes the cortical changes from short-term to long-term. There are two types of plastic changes (reorganizations): centripetal best frequency shifts for expanded reorganization of a neural frequency map and centrifugal best frequency shifts for compressed reorganization of the map. Which changes occur depends on the balance between inhibition and facilitation. Expanded reorganization has been found in different sensory systems and different species of mammals, whereas compressed reorganization has been thus far found only in the auditory subsystems highly specialized for echolocation. The two types of reorganizations occur in both the frequency and time domains. [Work supported by NIDCD DC00175.]

  19. Is Auditory Discrimination Mature by Middle Childhood? A Study Using Time-Frequency Analysis of Mismatch Responses from 7 Years to Adulthood

    ERIC Educational Resources Information Center

    Bishop, Dorothy V. M.; Hardiman, Mervyn J.; Barry, Johanna G.

    2011-01-01

    Behavioural and electrophysiological studies give differing impressions of when auditory discrimination is mature. Ability to discriminate frequency and speech contrasts reaches adult levels only around 12 years of age, yet an electrophysiological index of auditory discrimination, the mismatch negativity (MMN), is reported to be as large in…

  20. Imaginary time propagation code for large-scale two-dimensional eigenvalue problems in magnetic fields

    NASA Astrophysics Data System (ADS)

    Luukko, P. J. J.; Räsänen, E.

    2013-03-01

    We present a code for solving the single-particle, time-independent Schrödinger equation in two dimensions. Our program utilizes the imaginary time propagation (ITP) algorithm, and it includes the most recent developments in the ITP method: the arbitrary order operator factorization and the exact inclusion of a (possibly very strong) magnetic field. Our program is able to solve thousands of eigenstates of a two-dimensional quantum system in reasonable time with commonly available hardware. The main motivation behind our work is to allow the study of highly excited states and energy spectra of two-dimensional quantum dots and billiard systems with a single versatile code, e.g., in quantum chaos research. In our implementation we emphasize a modern and easily extensible design, simple and user-friendly interfaces, and an open-source development philosophy. Catalogue identifier: AENR_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENR_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License version 3 No. of lines in distributed program, including test data, etc.: 11310 No. of bytes in distributed program, including test data, etc.: 97720 Distribution format: tar.gz Programming language: C++ and Python. Computer: Tested on x86 and x86-64 architectures. Operating system: Tested under Linux with the g++ compiler. Any POSIX-compliant OS with a C++ compiler and the required external routines should suffice. Has the code been vectorised or parallelized?: Yes, with OpenMP. RAM: 1 MB or more, depending on system size. Classification: 7.3. External routines: FFTW3 (http://www.fftw.org), CBLAS (http://netlib.org/blas), LAPACK (http://www.netlib.org/lapack), HDF5 (http://www.hdfgroup.org/HDF5), OpenMP (http://openmp.org), TCLAP (http://tclap.sourceforge.net), Python (http://python.org), Google Test (http://code.google.com/p/googletest/) Nature of problem: Numerical calculation
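    The ITP idea is easy to demonstrate in one dimension (the paper's code is 2D with magnetic fields): propagate in imaginary time with a split-operator step and renormalize, and any trial state relaxes onto the ground state. A harmonic-oscillator sketch with assumed grid parameters, not values from the program itself:

```python
# 1D imaginary time propagation: exp(-H*tau) damps excited states faster than
# the ground state, so repeated propagation + renormalization converges to the
# ground state. Split-operator form, spectral kinetic step via FFT.
import numpy as np

n, L = 256, 20.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
V = 0.5 * x**2                  # harmonic potential; hbar = m = omega = 1
dt = 0.01

psi = np.exp(-((x - 1.0) ** 2))  # arbitrary trial state
for _ in range(2000):
    psi = psi * np.exp(-0.5 * V * dt)                               # half V
    psi = np.fft.ifft(np.exp(-0.5 * k**2 * dt) * np.fft.fft(psi))   # full T
    psi = psi * np.exp(-0.5 * V * dt)                               # half V
    psi = psi / np.sqrt(np.sum(np.abs(psi) ** 2) * (L / n))         # renorm

# Energy expectation; the exact ground-state energy is 0.5.
T = np.sum(np.conj(psi) * np.fft.ifft(0.5 * k**2 * np.fft.fft(psi))).real * (L / n)
U = np.sum(V * np.abs(psi) ** 2) * (L / n)
print(round(T + U, 3))  # ≈ 0.5
```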

  1. Real-time speech encoding based on Code-Excited Linear Prediction (CELP)

    NASA Technical Reports Server (NTRS)

    Leblanc, Wilfrid P.; Mahmoud, S. A.

    1988-01-01

This paper reports on ongoing work toward a real-time voice codec for terrestrial and satellite mobile radio environments. The codec is based on a reduced-complexity version of code-excited linear prediction (CELP). The codebook search complexity was reduced to only 0.5 million floating point operations per second (MFLOPS) while maintaining excellent speech quality. Novel methods to quantize the residual and the long- and short-term model filters are presented.
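
    The codebook search that dominates CELP complexity is an analysis-by-synthesis loop: each candidate excitation is passed through the short-term LPC synthesis filter and the codeword minimizing the waveform error is kept. A toy sketch (the 1-tap LPC filter and 4-entry codebook are illustrative assumptions, not from the paper):

    ```python
    import numpy as np

    # Analysis-by-synthesis codebook search sketch for CELP.
    # Toy assumptions: a 1-tap LPC synthesis filter H(z) = 1/(1 - a z^-1)
    # and a 4-entry excitation codebook.
    a = 0.9

    def synth(exc):
        # All-pole synthesis filter applied to an excitation vector.
        out, prev = [], 0.0
        for e in exc:
            prev = e + a * prev
            out.append(prev)
        return np.array(out)

    codebook = [np.array(c, float) for c in
                ([1, 0, 0, 0], [0, 1, 0, 0], [-1, 0, 0, 0], [1, -1, 0, 0])]
    target = synth(np.array([1.0, -1.0, 0.0, 0.0]))  # segment to match

    # Pick the codeword whose synthesized output is closest to the target
    errs = [np.sum((target - synth(c))**2) for c in codebook]
    print(int(np.argmin(errs)))  # 3: the [1,-1,0,0] codeword matches exactly
    ```

    Real CELP searches add gain terms and a long-term (pitch) predictor; the MFLOPS reduction in the paper comes from structuring exactly this search.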

  2. Application of a VLSI vector quantization processor to real-time speech coding

    NASA Technical Reports Server (NTRS)

    Davidson, G.; Gersho, A.

    1986-01-01

Attention is given to a working vector quantization processor for speech coding that is based on a first-generation VLSI chip which efficiently performs the pattern-matching operation needed for the codebook search process (CPS). Using this chip, the CPS architecture has been successfully incorporated into a compact, single-board Vector PCM implementation operating at 7-18 kbits/sec. A real-time Adaptive Vector Predictive Coder system using the CPS has also been implemented.
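
    The pattern-matching operation the chip accelerates is the core of vector quantization: match each input vector to the nearest codeword in squared-error distance and transmit only its index. A minimal sketch (the 4-codeword codebook is a toy stand-in for a trained speech codebook):

    ```python
    import numpy as np

    # Nearest-codeword search at the heart of vector quantization.
    # The codebook here is an illustrative toy, not a trained one.
    codebook = np.array([[0.0, 0.0],
                         [1.0, 0.0],
                         [0.0, 1.0],
                         [1.0, 1.0]])

    def quantize(vectors, codebook):
        # Pairwise squared distances, shape (n_vectors, n_codewords)
        d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        idx = d.argmin(axis=1)          # indices transmitted to the decoder
        return idx, codebook[idx]       # decoder reconstruction

    idx, recon = quantize(np.array([[0.9, 0.1], [0.2, 0.8]]), codebook)
    print(idx.tolist())   # [1, 2]
    ```

    The VLSI chip performs this distance computation and minimum search in hardware, which is what makes the single-board real-time implementation feasible.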

  3. Coded throughput performance simulations for the time-varying satellite channel. M.S. Thesis

    NASA Technical Reports Server (NTRS)

Han, Li

    1995-01-01

The design of a reliable satellite communication link involving the data transfer from a small, low-orbit satellite to a ground station via a geostationary satellite was examined. In such a scenario, the received signal power to noise density ratio increases as the transmitting low-orbit satellite comes into view, and then decreases as it departs, resulting in a short-duration, time-varying communication link. The optimal values of the small satellite antenna beamwidth, signaling rate, modulation scheme and the theoretical link throughput (in bits per day) have been determined. The goal of this thesis is to choose a practical coding scheme which maximizes the daily link throughput while satisfying a prescribed probability of error requirement. We examine the throughput of both fixed-rate and variable-rate concatenated forward error correction (FEC) coding schemes for the additive white Gaussian noise (AWGN) channel, and then examine the effect of radio frequency interference (RFI) on the best coding scheme among them. Interleaving is used to mitigate degradation due to RFI. It was found that the variable-rate concatenated coding scheme could achieve 74 percent of the theoretical throughput, equivalent to 1.11 Gbits/day based on the cutoff rate R₀. For comparison, 87 percent is achievable for the AWGN-only case.

  4. Development of the N1–P2 auditory evoked response to amplitude rise time and rate of formant transition of speech sounds

    PubMed Central

    Carpenter, Allen L.; Shahin, Antoine J.

    2013-01-01

We investigated the development of weighting strategies for acoustic cues by examining the morphology of the N1–P2 auditory evoked potential (AEP) to changes in amplitude rise time (ART) and rate of formant transition (RFT) of consonant–vowel (CV) pairs in 4–6-year-olds and adults. In the AEP session, individuals listened passively to the CVs /ba/, /wa/, and a /ba/ with a superimposed slower-rising /wa/ envelope (/ba/wa). In the behavioral session, individuals listened to the same stimuli and judged whether they heard a /ba/ or /wa/. We hypothesized that a developmental shift in weighting strategies should be reflected in a change in the morphology of the N1–P2 AEP. In 6-year-olds and adults, the N1–P2 amplitude at the vertex reflected a change in RFT but not in ART. In contrast, in the 4–5-year-olds, the vertex N1–P2 did not show specificity to changes in ART or RFT. In all groups, the N1–P2 amplitude at channel C4 (right hemisphere) reflected a change in ART but not in RFT. Behaviorally, 6-year-olds and adults predominantly utilized RFT cues (classified /ba/wa as /ba/) during phonetic judgments, whereas 4–5-year-olds utilized both cues equally. Our findings suggest that both ART and RFT are encoded in the auditory cortex, but an N1–P2 shift toward the vertex after age 4–5 indicates a shift toward an adult-like weighting strategy in which RFT is weighted more heavily. PMID:23570734

  5. A unified mathematical framework for coding time, space, and sequences in the hippocampal region.

    PubMed

    Howard, Marc W; MacDonald, Christopher J; Tiganj, Zoran; Shankar, Karthik H; Du, Qian; Hasselmo, Michael E; Eichenbaum, Howard

    2014-03-26

    The medial temporal lobe (MTL) is believed to support episodic memory, vivid recollection of a specific event situated in a particular place at a particular time. There is ample neurophysiological evidence that the MTL computes location in allocentric space and more recent evidence that the MTL also codes for time. Space and time represent a similar computational challenge; both are variables that cannot be simply calculated from the immediately available sensory information. We introduce a simple mathematical framework that computes functions of both spatial location and time as special cases of a more general computation. In this framework, experience unfolding in time is encoded via a set of leaky integrators. These leaky integrators encode the Laplace transform of their input. The information contained in the transform can be recovered using an approximation to the inverse Laplace transform. In the temporal domain, the resulting representation reconstructs the temporal history. By integrating movements, the equations give rise to a representation of the path taken to arrive at the present location. By modulating the transform with information about allocentric velocity, the equations code for position of a landmark. Simulated cells show a close correspondence to neurons observed in various regions for all three cases. In the temporal domain, novel secondary analyses of hippocampal time cells verified several qualitative predictions of the model. An integrated representation of spatiotemporal context can be computed by taking conjunctions of these elemental inputs, leading to a correspondence with conjunctive neural representations observed in dorsal CA1.
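
    The key mechanism in this framework is easy to demonstrate: a bank of leaky integrators with decay rates s obeys dF/dt = -sF + f(t), so its steady state is the Laplace transform of the stimulus history. A minimal sketch (decay rates and step size are illustrative, not from the paper):

    ```python
    import numpy as np

    # After a unit impulse at t = 0 and an elapsed time T with no further
    # input, each leaky integrator holds F(s) = exp(-s*T), which is exactly
    # the Laplace transform of the history delta(t - T). The bank of rates
    # and the Euler step are illustrative assumptions.
    s = np.array([0.5, 1.0, 2.0, 4.0])   # decay rates of the bank
    dt = 0.001
    F = np.zeros_like(s)

    F += 1.0                  # unit impulse input at t = 0
    T = 1.5
    for _ in range(int(T / dt)):
        F += dt * (-s * F)    # forward-Euler leak, no further input

    print(np.allclose(F, np.exp(-s * T), atol=1e-3))  # True
    ```

    The paper's further step, recovering a fuzzy timeline via an approximate inverse Laplace transform, operates on exactly this population state F(s).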

  6. Auditory Imagery: Empirical Findings

    ERIC Educational Resources Information Center

    Hubbard, Timothy L.

    2010-01-01

    The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d)…

  7. Auditory Training for Central Auditory Processing Disorder.

    PubMed

    Weihing, Jeffrey; Chermak, Gail D; Musiek, Frank E

    2015-11-01

    Auditory training (AT) is an important component of rehabilitation for patients with central auditory processing disorder (CAPD). The present article identifies and describes aspects of AT as they relate to applications in this population. A description of the types of auditory processes along with information on relevant AT protocols that can be used to address these specific deficits is included. Characteristics and principles of effective AT procedures also are detailed in light of research that reflects on their value. Finally, research investigating AT in populations who show CAPD or present with auditory complaints is reported. Although efficacy data in this area are still emerging, current findings support the use of AT for treatment of auditory difficulties. PMID:27587909

  8. Auditory Training for Central Auditory Processing Disorder

    PubMed Central

    Weihing, Jeffrey; Chermak, Gail D.; Musiek, Frank E.

    2015-01-01

    Auditory training (AT) is an important component of rehabilitation for patients with central auditory processing disorder (CAPD). The present article identifies and describes aspects of AT as they relate to applications in this population. A description of the types of auditory processes along with information on relevant AT protocols that can be used to address these specific deficits is included. Characteristics and principles of effective AT procedures also are detailed in light of research that reflects on their value. Finally, research investigating AT in populations who show CAPD or present with auditory complaints is reported. Although efficacy data in this area are still emerging, current findings support the use of AT for treatment of auditory difficulties. PMID:27587909

  9. Blind Adaptive Decorrelating RAKE (DRAKE) Downlink Receiver for Space-Time Block Coded Multipath CDMA

    NASA Astrophysics Data System (ADS)

    Jayaweera, Sudharman K.; Poor, H. Vincent

    2003-12-01

    A downlink receiver is proposed for space-time block coded CDMA systems operating in multipath channels. By combining the powerful RAKE receiver concept for a frequency selective channel with space-time decoding, it is shown that the performance of mobile receivers operating in the presence of channel fading can be improved significantly. The proposed receiver consists of a bank of decorrelating filters designed to suppress the multiple access interference embedded in the received signal before the space-time decoding. The new receiver performs the space-time decoding along each resolvable multipath component and then the outputs are diversity combined to obtain the final decision statistic. The proposed receiver relies on a key constraint imposed on the output of each filter in the bank of decorrelating filters in order to maintain the space-time block code structure embedded in the signal. The proposed receiver can easily be adapted blindly, requiring only the desired user's signature sequence, which is also attractive in the context of wireless mobile communications. Simulation results are provided to confirm the effectiveness of the proposed receiver in multipath CDMA systems.
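
    The space-time decoding step performed along each resolvable path can be illustrated with the classic two-antenna Alamouti block code: linear combining with the channel gains recovers each symbol with full diversity. A noiseless, flat-channel sketch (the gains and QPSK symbols are illustrative, and the multipath/decorrelating-filter machinery of the paper is omitted):

    ```python
    import numpy as np

    # Minimal Alamouti space-time block decoding sketch: the combining a
    # receiver applies per path. Channel gains h1, h2 and the two QPSK
    # symbols are illustrative assumptions.
    rng = np.random.default_rng(0)
    h1, h2 = rng.normal(size=2) + 1j * rng.normal(size=2)   # channel gains
    s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)   # QPSK symbols

    # Slot 1: antennas send (s1, s2); slot 2: (-conj(s2), conj(s1))
    r1 = h1 * s1 + h2 * s2
    r2 = -h1 * np.conj(s2) + h2 * np.conj(s1)

    # Linear combining recovers each symbol with gain |h1|^2 + |h2|^2
    g = abs(h1)**2 + abs(h2)**2
    s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g
    s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g
    print(np.allclose([s1_hat, s2_hat], [s1, s2]))  # True
    ```

    In the proposed receiver this decoding runs on each RAKE finger after the decorrelating filters suppress multiple access interference, and the finger outputs are then diversity combined.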

  10. Timing Precision in Population Coding of Natural Scenes in the Early Visual System

    PubMed Central

    Desbordes, Gaëlle; Jin, Jianzhong; Weng, Chong; Lesica, Nicholas A; Stanley, Garrett B; Alonso, Jose-Manuel

    2008-01-01

    The timing of spiking activity across neurons is a fundamental aspect of the neural population code. Individual neurons in the retina, thalamus, and cortex can have very precise and repeatable responses but exhibit degraded temporal precision in response to suboptimal stimuli. To investigate the functional implications for neural populations in natural conditions, we recorded in vivo the simultaneous responses, to movies of natural scenes, of multiple thalamic neurons likely converging to a common neuronal target in primary visual cortex. We show that the response of individual neurons is less precise at lower contrast, but that spike timing precision across neurons is relatively insensitive to global changes in visual contrast. Overall, spike timing precision within and across cells is on the order of 10 ms. Since closely timed spikes are more efficient in inducing a spike in downstream cortical neurons, and since fine temporal precision is necessary to represent the more slowly varying natural environment, we argue that preserving relative spike timing at a ∼10-ms resolution is a crucial property of the neural code entering cortex. PMID:19090624

  11. Auditory learning: a developmental method.

    PubMed

    Zhang, Yilu; Weng, Juyang; Hwang, Wey-Shiuan

    2005-05-01

Motivated by the human autonomous development process from infancy to adulthood, we have built a robot that develops its cognitive and behavioral skills through real-time interactions with the environment. We call such a robot a developmental robot. In this paper, we present the theory and the architecture to implement a developmental robot and discuss the related techniques that address an array of challenging technical issues. As an application, experimental results on a real robot, the self-organizing, autonomous, incremental learner (SAIL), are presented with emphasis on its audition perception and audition-related action generation. In particular, the SAIL robot conducts auditory learning from unsegmented and unlabeled speech streams without any prior knowledge about the auditory signals, such as the designated language or the phoneme models. Nor are the actions that the robot is expected to perform available before learning starts. SAIL learns the auditory commands and the desired actions from physical contacts with the environment, including the trainers.

  12. Cryptographic robustness of a quantum cryptography system using phase-time coding

    SciTech Connect

    Molotkov, S. N.

    2008-01-15

    A cryptographic analysis is presented of a new quantum key distribution protocol using phase-time coding. An upper bound is obtained for the error rate that guarantees secure key distribution. It is shown that the maximum tolerable error rate for this protocol depends on the counting rate in the control time slot. When no counts are detected in the control time slot, the protocol guarantees secure key distribution if the bit error rate in the sifted key does not exceed 50%. This protocol partially discriminates between errors due to system defects (e.g., imbalance of a fiber-optic interferometer) and eavesdropping. In the absence of eavesdropping, the counts detected in the control time slot are not caused by interferometer imbalance, which reduces the requirements for interferometer stability.

  13. Coupled Neutron-Photon, 3-D, Combinatorial Geometry, Time Dependent, Monte Carlo Transport Code System.

    2013-06-24

Version 07 TART2012 is a coupled neutron-photon Monte Carlo transport code designed to use three-dimensional (3-D) combinatorial geometry. Neutron and/or photon sources as well as neutron induced photon production can be tracked. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART2012 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART2012 extends the general utility of the code to even more areas of application than available in previous releases by concentrating on improving the physics, particularly with regard to improved treatment of neutron fission, resonance self-shielding, molecular binding, and extending input options used by the code. Several utilities are included for creating input files and displaying TART results and data. TART2012 uses the latest ENDF/B-VI, Release 8, data. New for TART2012 is the use of continuous energy neutron cross sections, in addition to its traditional multigroup cross sections. For neutron interaction, the data are derived using ENDF-ENDL2005 and include both continuous energy cross sections and 700 group neutron data derived using a combination of ENDF/B-VI, Release 8, and ENDL data. The 700 group structure extends from 10⁻⁵ eV up to 1 GeV. Presently nuclear data are only available up to 20 MeV, so that only 616 of the groups are currently used. For photon interaction, 701 point photon data were derived using the Livermore EPDL97 file. The new 701 point structure extends from 100 eV up to 1 GeV, and is currently used over this entire energy range. TART2012 completely supersedes all older versions of TART, and it is strongly recommended that one use only the most recent version of TART2012 and its data files. Check author’s homepage for related information: http

  14. Space-Time Coded MC-CDMA: Blind Channel Estimation, Identifiability, and Receiver Design

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Li, Hongbin

    2003-12-01

    Integrating the strengths of multicarrier (MC) modulation and code division multiple access (CDMA), MC-CDMA systems are of great interest for future broadband transmissions. This paper considers the problem of channel identification and signal combining/detection schemes for MC-CDMA systems equipped with multiple transmit antennas and space-time (ST) coding. In particular, a subspace based blind channel identification algorithm is presented. Identifiability conditions are examined and specified which guarantee unique and perfect (up to a scalar) channel estimation when knowledge of the noise subspace is available. Several popular single-user based signal combining schemes, namely the maximum ratio combining (MRC) and the equal gain combining (EGC), which are often utilized in conventional single-transmit-antenna based MC-CDMA systems, are extended to the current ST-coded MC-CDMA (STC-MC-CDMA) system to perform joint combining and decoding. In addition, a linear multiuser minimum mean-squared error (MMSE) detection scheme is also presented, which is shown to outperform the MRC and EGC at some increased computational complexity. Numerical examples are presented to evaluate and compare the proposed channel identification and signal detection/combining techniques.
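
    The linear MMSE detection scheme the paper favors over MRC/EGC has a compact closed form: with an effective signature matrix S (channel- and code-dependent) and noise variance σ², the filter bank is W = (SSᴴ + σ²I)⁻¹S. A toy real-valued sketch (the signatures, user count, and noise level are illustrative, not the paper's STC-MC-CDMA model):

    ```python
    import numpy as np

    # Linear MMSE multiuser detection sketch. The random signature matrix
    # S (2 users, 8 chips), BPSK bits, and noise variance are illustrative.
    rng = np.random.default_rng(1)
    S = rng.normal(size=(8, 2))              # effective user signatures
    b = np.array([1.0, -1.0])                # transmitted bits
    sigma2 = 0.01
    r = S @ b + np.sqrt(sigma2) * rng.normal(size=8)

    # MMSE filter bank: one column per user
    W = np.linalg.solve(S @ S.T + sigma2 * np.eye(8), S)
    b_hat = np.sign(W.T @ r)
    print(b_hat.tolist())   # [1.0, -1.0]
    ```

    Unlike MRC/EGC, which treat each user separately, the matrix inverse jointly suppresses multiple access interference, which is the source of the performance gain (and the extra computational cost) noted in the abstract.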

  15. Auditory Temporal Conditioning in Neonates.

    ERIC Educational Resources Information Center

    Franz, W. K.; And Others

    Twenty normal newborns, approximately 36 hours old, were tested using an auditory temporal conditioning paradigm which consisted of a slow rise, 75 db tone played for five seconds every 25 seconds, ten times. Responses to the tones were measured by instantaneous, beat-to-beat heartrate; and the test trial was designated as the 2 1/2-second period…

  16. Delayed Auditory Feedback and Movement

    ERIC Educational Resources Information Center

    Pfordresher, Peter Q.; Dalla Bella, Simone

    2011-01-01

    It is well known that timing of rhythm production is disrupted by delayed auditory feedback (DAF), and that disruption varies with delay length. We tested the hypothesis that disruption depends on the state of the movement trajectory at the onset of DAF. Participants tapped isochronous rhythms at a rate specified by a metronome while hearing DAF…

  17. Passive Auditory Stimulation Improves Vision in Hemianopia

    PubMed Central

    Lewald, Jörg; Tegenthoff, Martin; Peters, Sören; Hausmann, Markus

    2012-01-01

    Techniques employed in rehabilitation of visual field disorders such as hemianopia are usually based on either visual or audio-visual stimulation and patients have to perform a training task. Here we present results from a completely different, novel approach that was based on passive unimodal auditory stimulation. Ten patients with either left or right-sided pure hemianopia (without neglect) received one hour of unilateral passive auditory stimulation on either their anopic or their intact side by application of repetitive trains of sound pulses emitted simultaneously via two loudspeakers. Immediately before and after passive auditory stimulation as well as after a period of recovery, patients completed a simple visual task requiring detection of light flashes presented along the horizontal plane in total darkness. The results showed that one-time passive auditory stimulation on the side of the blind, but not of the intact, hemifield of patients with hemianopia induced an improvement in visual detections by almost 100% within 30 min after passive auditory stimulation. This enhancement in performance was reversible and was reduced to baseline 1.5 h later. A non-significant trend of a shift of the visual field border toward the blind hemifield was obtained after passive auditory stimulation. These results are compatible with the view that passive auditory stimulation elicited some activation of the residual visual pathways, which are known to be multisensory and may also be sensitive to unimodal auditory stimuli as were used here. Trial Registration DRKS00003577 PMID:22666311

  18. Numerical relativity for D dimensional axially symmetric space-times: Formalism and code tests

    SciTech Connect

    Zilhao, Miguel; Herdeiro, Carlos; Witek, Helvi; Nerozzi, Andrea; Sperhake, Ulrich; Cardoso, Vitor; Gualtieri, Leonardo

    2010-04-15

The numerical evolution of Einstein's field equations in a generic background has the potential to answer a variety of important questions in physics: from applications to the gauge-gravity duality, to modeling black hole production in TeV gravity scenarios, to analysis of the stability of exact solutions, and to tests of cosmic censorship. In order to investigate these questions, we extend numerical relativity to more general space-times than those investigated hitherto, by developing a framework to study the numerical evolution of D dimensional vacuum space-times with an SO(D-2) isometry group for D≥5, or SO(D-3) for D≥6. Performing a dimensional reduction on a (D-4) sphere, the D dimensional vacuum Einstein equations are rewritten as a 3+1 dimensional system with source terms, and presented in the Baumgarte, Shapiro, Shibata, and Nakamura formulation. This allows the use of existing 3+1 dimensional numerical codes with small adaptations. Brill-Lindquist initial data are constructed in D dimensions and a procedure to match them to our 3+1 dimensional evolution equations is given. We have implemented our framework by adapting the Lean code and perform a variety of simulations of nonspinning black hole space-times. Specifically, we present a modified moving puncture gauge, which facilitates long-term stable simulations in D=5. We further demonstrate the internal consistency of the code by studying convergence and comparing numerical versus analytic results in the case of geodesic slicing for D=5, 6.

  19. Numerical relativity for D dimensional axially symmetric space-times: Formalism and code tests

    NASA Astrophysics Data System (ADS)

    Zilhão, Miguel; Witek, Helvi; Sperhake, Ulrich; Cardoso, Vitor; Gualtieri, Leonardo; Herdeiro, Carlos; Nerozzi, Andrea

    2010-04-01

    The numerical evolution of Einstein’s field equations in a generic background has the potential to answer a variety of important questions in physics: from applications to the gauge-gravity duality, to modeling black hole production in TeV gravity scenarios, to analysis of the stability of exact solutions, and to tests of cosmic censorship. In order to investigate these questions, we extend numerical relativity to more general space-times than those investigated hitherto, by developing a framework to study the numerical evolution of D dimensional vacuum space-times with an SO(D-2) isometry group for D≥5, or SO(D-3) for D≥6. Performing a dimensional reduction on a (D-4) sphere, the D dimensional vacuum Einstein equations are rewritten as a 3+1 dimensional system with source terms, and presented in the Baumgarte, Shapiro, Shibata, and Nakamura formulation. This allows the use of existing 3+1 dimensional numerical codes with small adaptations. Brill-Lindquist initial data are constructed in D dimensions and a procedure to match them to our 3+1 dimensional evolution equations is given. We have implemented our framework by adapting the Lean code and perform a variety of simulations of nonspinning black hole space-times. Specifically, we present a modified moving puncture gauge, which facilitates long-term stable simulations in D=5. We further demonstrate the internal consistency of the code by studying convergence and comparing numerical versus analytic results in the case of geodesic slicing for D=5, 6.

  20. Comparison of WDM/Pulse-Position-Modulation (WDM/PPM) with Code/Pulse-Position-Swapping (C/PPS) Based on Wavelength/Time Codes

    SciTech Connect

    Mendez, A J; Hernandez, V J; Gagliardi, R M; Bennett, C V

    2009-06-19

    Pulse position modulation (PPM) signaling is favored in intensity modulated/direct detection (IM/DD) systems that have average power limitations. Combining PPM with WDM over a fiber link (WDM/PPM) enables multiple accessing and increases the link's throughput. Electronic bandwidth and synchronization advantages are further gained by mapping the time slots of PPM onto a code space, or code/pulse-position-swapping (C/PPS). The property of multiple bits per symbol typical of PPM can be combined with multiple accessing by using wavelength/time [W/T] codes in C/PPS. This paper compares the performance of WDM/PPM and C/PPS for equal wavelengths and bandwidth.
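
    The PPM mapping underlying both schemes is simple: log₂(M) data bits select which of M time slots in a symbol frame carries the pulse (C/PPS then swaps those slots onto a wavelength/time code space). A minimal encoder sketch (M and the frame representation are illustrative):

    ```python
    # M-ary pulse position modulation (PPM) encoder sketch: log2(M) bits
    # per symbol choose the slot of a one-hot frame. M must be a power
    # of two; the parameters are illustrative.
    def ppm_encode(bits, M=4):
        k = M.bit_length() - 1            # bits per symbol
        frames = []
        for i in range(0, len(bits), k):
            slot = int("".join(map(str, bits[i:i+k])), 2)
            frames.append([1 if j == slot else 0 for j in range(M)])
        return frames

    print(ppm_encode([1, 0, 0, 1]))  # [[0, 0, 1, 0], [0, 1, 0, 0]]
    ```

    The multiple-bits-per-symbol property visible here (2 bits per pulse for M=4) is what C/PPS preserves when the time slots are mapped onto wavelength/time codes.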

  1. Imaging of human tooth using ultrasound based chirp-coded nonlinear time reversal acoustics.

    PubMed

    Dos Santos, Serge; Prevorovsky, Zdenek

    2011-08-01

Human tooth imaging sonography is investigated experimentally with an acousto-optic noncoupling set-up based on the chirp-coded nonlinear time reversal acoustic concept. The complexity of the tooth internal structure (enamel-dentine interface, cracks between internal tubules) is analyzed by adapting the nonlinear elastic wave spectroscopy (NEWS) with the objective of the tomography of damage. Optimization of excitations using intrinsic symmetries, such as time reversal (TR) invariance, reciprocity, and correlation properties, is then proposed and implemented experimentally. The proposed medical application of this TR-NEWS approach is implemented on a third molar human tooth and constitutes an alternative to noncoupling echodentography techniques. A 10 MHz bandwidth ultrasonic instrumentation has been developed, including a laser vibrometer and a 20 MHz contact piezoelectric transducer. The calibrated chirp-coded TR-NEWS imaging of the tooth is obtained using symmetrized excitations, pre- and post-signal processing, and the highly sensitive 14 bit resolution TR-NEWS instrumentation previously calibrated. A nonlinear signature arising from the symmetry properties is observed experimentally in the tooth using this bi-modal TR-NEWS imaging before and after the focusing induced by the time-compression process. The TR-NEWS polar B-scan of the tooth is described and suggested as a potential application for modern echodentography. It constitutes the basis of self-consistent harmonic imaging sonography for monitoring the propagation of cracks in the dentine, which compromise the structural health of the human tooth.
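
    The time-compression (focusing) step of chirp-coded time reversal can be illustrated in one dimension: re-emitting the received chirp time-reversed is equivalent to correlating with the original excitation, which compresses the long coded pulse into a sharp focus. A sketch with illustrative chirp parameters (not the instrument's 10 MHz settings):

    ```python
    import numpy as np

    # Chirp-coded time-reversal focusing sketch: the autocorrelation of a
    # linear chirp models the compressed focus obtained after re-emission.
    # Sampling rate and sweep range are illustrative assumptions.
    fs = 1e6
    t = np.arange(0, 1e-3, 1/fs)
    f0, f1 = 50e3, 200e3
    chirp = np.sin(2*np.pi*(f0*t + 0.5*(f1 - f0)/t[-1]*t**2))

    # Time reversal through a reciprocal medium acts like correlation
    # with the original excitation:
    focus = np.correlate(chirp, chirp, mode="full")
    print(np.argmax(focus) == len(chirp) - 1)  # True: peak at zero lag
    ```

    The nonlinear (NEWS) part of the method then looks for signatures that break this symmetric, linear picture, e.g. at cracks in the dentine.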

  2. Manipulation of BK channel expression is sufficient to alter auditory hair cell thresholds in larval zebrafish

    PubMed Central

    Rohmann, Kevin N.; Tripp, Joel A.; Genova, Rachel M.; Bass, Andrew H.

    2014-01-01

    Non-mammalian vertebrates rely on electrical resonance for frequency tuning in auditory hair cells. A key component of the resonance exhibited by these cells is an outward calcium-activated potassium current that flows through large-conductance calcium-activated potassium (BK) channels. Previous work in midshipman fish (Porichthys notatus) has shown that BK expression correlates with seasonal changes in hearing sensitivity and that pharmacologically blocking these channels replicates the natural decreases in sensitivity during the winter non-reproductive season. To test the hypothesis that reducing BK channel function is sufficient to change auditory thresholds in fish, morpholino oligonucleotides (MOs) were used in larval zebrafish (Danio rerio) to alter expression of slo1a and slo1b, duplicate genes coding for the pore-forming α-subunits of BK channels. Following MO injection, microphonic potentials were recorded from the inner ear of larvae. Quantitative real-time PCR was then used to determine the MO effect on slo1a and slo1b expression in these same fish. Knockdown of either slo1a or slo1b resulted in disrupted gene expression and increased auditory thresholds across the same range of frequencies of natural auditory plasticity observed in midshipman. We conclude that interference with the normal expression of individual slo1 genes is sufficient to increase auditory thresholds in zebrafish larvae and that changes in BK channel expression are a direct mechanism for regulation of peripheral hearing sensitivity among fishes. PMID:24803460

  3. TTVFast: An efficient and accurate code for transit timing inversion problems

    SciTech Connect

    Deck, Katherine M.; Agol, Eric; Holman, Matthew J.; Nesvorný, David

    2014-06-01

    Transit timing variations (TTVs) have proven to be a powerful technique for confirming Kepler planet candidates, for detecting non-transiting planets, and for constraining the masses and orbital elements of multi-planet systems. These TTV applications often require the numerical integration of orbits for computation of transit times (as well as impact parameters and durations); frequently tens of millions to billions of simulations are required when running statistical analyses of the planetary system properties. We have created a fast code for transit timing computation, TTVFast, which uses a symplectic integrator with a Keplerian interpolator for the calculation of transit times. The speed comes at the expense of accuracy in the calculated times, but the accuracy lost is largely unnecessary, as transit times do not need to be calculated to accuracies significantly smaller than the measurement uncertainties on the times. The time step can be tuned to give sufficient precision for any particular system. We find a speed-up of at least an order of magnitude relative to dynamical integrations with high precision using a Bulirsch-Stoer integrator.
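
    The speed of TTVFast rests on symplectic integration, whose kick-drift-kick steps keep the energy error bounded over long runs instead of drifting. A minimal leapfrog sketch on a Kepler orbit (units with GM = 1, a single planet, and step counts chosen for illustration; this is not TTVFast's integrator):

    ```python
    import numpy as np

    # Leapfrog (kick-drift-kick) sketch of the symplectic stepping that
    # fast orbit integrators build on. Circular Kepler orbit, GM = 1.
    def leapfrog(r, v, dt, steps):
        for _ in range(steps):
            a = -r / np.linalg.norm(r)**3
            v = v + 0.5*dt*a                 # half kick
            r = r + dt*v                     # drift
            a = -r / np.linalg.norm(r)**3
            v = v + 0.5*dt*a                 # half kick
        return r, v

    r, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    E0 = 0.5*np.dot(v, v) - 1/np.linalg.norm(r)
    r, v = leapfrog(r, v, dt=0.01, steps=10000)
    E1 = 0.5*np.dot(v, v) - 1/np.linalg.norm(r)
    print(abs(E1 - E0) < 1e-3)  # True: energy error stays bounded
    ```

    TTVFast couples such steps with a Keplerian interpolator to extract transit times without shrinking the step size below what the timing accuracy requires.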

  4. Capabilities needed for the next generation of thermo-hydraulic codes for use in real time applications

    SciTech Connect

    Arndt, S.A.

    1997-07-01

    The real-time reactor simulation field is currently at a crossroads in terms of the capability to perform real-time analysis using the most sophisticated computer codes. Current generation safety analysis codes are being modified to replace simplified codes that were specifically designed to meet the competing requirement for real-time applications. The next generation of thermo-hydraulic codes will need to have included in their specifications the specific requirement for use in a real-time environment. Use of the codes in real-time applications imposes much stricter requirements on robustness, reliability and repeatability than do design and analysis applications. In addition, the need for code use by a variety of users is a critical issue for real-time users, trainers and emergency planners who currently use real-time simulation, and PRA practitioners who will increasingly use real-time simulation for evaluating PRA success criteria in near real-time to validate PRA results for specific configurations and plant system unavailabilities.

  5. Spatially resolved time-frequency analysis of odour coding in the insect antennal lobe.

    PubMed

    Paoli, Marco; Weisz, Nathan; Antolini, Renzo; Haase, Albrecht

    2016-09-01

Antennal lobes constitute the first neuropils in the insect brain involved in coding and processing of olfactory information. With their stereotyped functional and anatomical organization, they provide an accessible model with which to investigate information processing of an external stimulus in a neural network in vivo. Here, by combining functional calcium imaging with time-frequency analysis, we have been able to monitor the oscillatory components of neural activity upon olfactory stimulation. The aim of this study is to investigate the presence of stimulus-induced oscillatory patterns in the honeybee antennal lobe, and to analyse the distribution of those patterns across the antennal lobe glomeruli. Fast two-photon calcium imaging reveals the presence of low-frequency oscillations, the intensity of which is perturbed by an incoming stimulus. Moreover, analysis of the spatial arrangement of this activity indicates that it is not homogeneous throughout the antennal lobe. On the contrary, each glomerulus displays an odorant-specific time-frequency profile, and acts as a functional unit of the oscillatory activity. The presented approach allows simultaneous recording of complex activity patterns across several nodes of the antennal lobe, providing the means to better understand the network dynamics regulating olfactory coding and leading to perception. PMID:27452956

  7. Optogenetic stimulation of the auditory pathway

    PubMed Central

    Hernandez, Victor H.; Gehrt, Anna; Reuter, Kirsten; Jing, Zhizi; Jeschke, Marcus; Mendoza Schulz, Alejandro; Hoch, Gerhard; Bartels, Matthias; Vogt, Gerhard; Garnham, Carolyn W.; Yawo, Hiromu; Fukazawa, Yugo; Augustine, George J.; Bamberg, Ernst; Kügler, Sebastian; Salditt, Tim; de Hoz, Livia; Strenzke, Nicola; Moser, Tobias

    2014-01-01

    Auditory prostheses can partially restore speech comprehension when hearing fails. Sound coding with current prostheses is based on electrical stimulation of auditory neurons and has limited frequency resolution due to broad current spread within the cochlea. In contrast, optical stimulation can be spatially confined, which may improve frequency resolution. Here, we used animal models to characterize optogenetic stimulation, which is the optical stimulation of neurons genetically engineered to express the light-gated ion channel channelrhodopsin-2 (ChR2). Optogenetic stimulation of spiral ganglion neurons (SGNs) activated the auditory pathway, as demonstrated by recordings of single neuron and neuronal population responses. Furthermore, optogenetic stimulation of SGNs restored auditory activity in deaf mice. Approximation of the spatial spread of cochlear excitation by recording local field potentials (LFPs) in the inferior colliculus in response to suprathreshold optical, acoustic, and electrical stimuli indicated that optogenetic stimulation achieves better frequency resolution than monopolar electrical stimulation. Virus-mediated expression of a ChR2 variant with greater light sensitivity in SGNs reduced the amount of light required for responses and allowed neuronal spiking following stimulation up to 60 Hz. Our study demonstrates a strategy for optogenetic stimulation of the auditory pathway in rodents and lays the groundwork for future applications of cochlear optogenetics in auditory research and prosthetics. PMID:24509078

  8. Investigating bottom-up auditory attention

    PubMed Central

    Kaya, Emine Merve; Elhilali, Mounya

    2014-01-01

    Bottom-up attention is a sensory-driven selection mechanism that directs perception toward a subset of the stimulus that is considered salient, or attention-grabbing. Most studies of bottom-up auditory attention have adapted frameworks similar to visual attention models whereby local or global “contrast” is a central concept in defining salient elements in a scene. In the current study, we take a more fundamental approach to modeling auditory attention; providing the first examination of the space of auditory saliency spanning pitch, intensity and timbre; and shedding light on complex interactions among these features. Informed by psychoacoustic results, we develop a computational model of auditory saliency implementing a novel attentional framework, guided by processes hypothesized to take place in the auditory pathway. In particular, the model tests the hypothesis that perception tracks the evolution of sound events in a multidimensional feature space, and flags any deviation from background statistics as salient. Predictions from the model corroborate the relationship between bottom-up auditory attention and statistical inference, and argue for a potential role of predictive coding as a mechanism for saliency detection in acoustic scenes. PMID:24904367
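
    The abstract's key hypothesis, flagging any deviation from background statistics as salient, can be sketched with a running estimate of the background mean and variance; this toy function is an assumption-laden illustration, not the authors' model:

```python
def salient_points(samples, k=2.0):
    """Return indices of samples that deviate from the running mean of
    all preceding samples by more than k standard deviations."""
    flagged = []
    mean, m2, n = 0.0, 0.0, 0
    for i, x in enumerate(samples):
        if n >= 2:
            std = (m2 / (n - 1)) ** 0.5
            if std > 0 and abs(x - mean) > k * std:
                flagged.append(i)
        # Welford's online update of the background mean and variance
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
    return flagged
```

    A multidimensional version would track such statistics per feature (pitch, intensity, timbre) and combine the deviations.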

  9. Auditory imagery: empirical findings.

    PubMed

    Hubbard, Timothy L

    2010-03-01

    The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d) auditory imagery's relationship to perception and memory (detection, encoding, recall, mnemonic properties, phonological loop), and (e) individual differences in auditory imagery (in vividness, musical ability and experience, synesthesia, musical hallucinosis, schizophrenia, amusia) are considered. It is concluded that auditory imagery (a) preserves many structural and temporal properties of auditory stimuli, (b) can facilitate auditory discrimination but interfere with auditory detection, (c) involves many of the same brain areas as auditory perception, (d) is often but not necessarily influenced by subvocalization, (e) involves semantically interpreted information and expectancies, (f) involves depictive components and descriptive components, (g) can function as a mnemonic but is distinct from rehearsal, and (h) is related to musical ability and experience (although the mechanisms of that relationship are not clear). PMID:20192565

  11. A Brain System for Auditory Working Memory

    PubMed Central

    Joseph, Sabine; Gander, Phillip E.; Barascud, Nicolas; Halpern, Andrea R.; Griffiths, Timothy D.

    2016-01-01

    The brain basis for auditory working memory, the process of actively maintaining sounds in memory over short periods of time, is controversial. Using functional magnetic resonance imaging in human participants, we demonstrate that the maintenance of single tones in memory is associated with activation in auditory cortex. In addition, sustained activation was observed in hippocampus and inferior frontal gyrus. Multivoxel pattern analysis showed that patterns of activity in auditory cortex and left inferior frontal gyrus distinguished the tone that was maintained in memory. Functional connectivity during maintenance was demonstrated between auditory cortex and both the hippocampus and inferior frontal cortex. The data support a system for auditory working memory based on the maintenance of sound-specific representations in auditory cortex by projections from higher-order areas, including the hippocampus and frontal cortex. SIGNIFICANCE STATEMENT In this work, we demonstrate a system for maintaining sound in working memory based on activity in auditory cortex, hippocampus, and frontal cortex, and functional connectivity among them. Specifically, our work makes three advances from the previous work. First, we robustly demonstrate hippocampal involvement in all phases of auditory working memory (encoding, maintenance, and retrieval): the role of hippocampus in working memory is controversial. Second, using a pattern classification technique, we show that activity in the auditory cortex and inferior frontal gyrus is specific to the maintained tones in working memory. Third, we show long-range connectivity of auditory cortex to hippocampus and frontal cortex, which may be responsible for keeping such representations active during working memory maintenance. PMID:27098693

  12. Aging effects on the binaural interaction component of the auditory brainstem response in the Mongolian gerbil: Effects of interaural time and level differences.

    PubMed

    Laumen, Geneviève; Tollin, Daniel J; Beutelmann, Rainer; Klump, Georg M

    2016-07-01

    The effect of interaural time difference (ITD) and interaural level difference (ILD) on wave 4 of the binaural and summed monaural auditory brainstem responses (ABRs), as well as on the DN1 component of the binaural interaction component (BIC) of the ABR, in young and old Mongolian gerbils (Meriones unguiculatus) was investigated. Measurements were made at a fixed sound pressure level (SPL) and at a fixed level above visually detected ABR threshold to compensate for individual hearing threshold differences. In both stimulation modes (fixed SPL and fixed level above visually detected ABR threshold), an effect of ITD on the latency and the amplitude of wave 4 as well as of the BIC was observed. With increasing absolute ITD values, BIC latencies increased and amplitudes decreased. ILD had a much smaller effect on these measures. Old animals showed a reduced amplitude of the DN1 component. This difference was due to a smaller wave 4 in the summed monaural ABRs of old animals compared to young animals, whereas wave 4 in the binaural-evoked ABR showed no age-related difference. In old animals the small amplitude of the DN1 component was correlated with small binaural-evoked wave 1 and wave 3 amplitudes. This suggests that the reduced peripheral input affects central binaural processing, which is reflected in the BIC. PMID:27173973
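
    The binaural interaction component referred to here is conventionally computed as the binaural-evoked response minus the sum of the two monaural responses; a minimal sketch with hypothetical waveforms (not data from the study) follows:

```python
def binaural_interaction_component(binaural, left, right):
    """BIC(t) = binaural(t) - (left(t) + right(t)), sample by sample."""
    return [b - (l + r) for b, l, r in zip(binaural, left, right)]

def dn1_amplitude(bic):
    """DN1 is the first prominent negative deflection of the BIC;
    here approximated simply as the minimum of the trace."""
    return min(bic)
```

    In practice the DN1 peak would be sought within a latency window around wave 4, not over the whole trace.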

  13. Diffusion-Weighted Imaging with Color-Coded Images: Towards a Reduction in Reading Time While Keeping a Similar Accuracy.

    PubMed

    Campos Kitamura, Felipe; de Medeiros Alves, Srhael; Antônio Tobaru Tibana, Luis; Abdala, Nitamar

    2016-01-01

    The aim of this study was to develop a diagnostic tool capable of providing diffusion and apparent diffusion coefficient (ADC) map information in a single color-coded image, and to assess the performance of color-coded images compared with their corresponding diffusion images and ADC maps. The institutional review board approved this retrospective study, which sequentially enrolled 36 head MRI scans. Diffusion-weighted images (DWI) and ADC maps were compared to their corresponding color-coded images. Four raters had their interobserver agreement measured for both conventional (DWI) and color-coded images. Differences between conventional and color-coded images were also estimated for each of the 4 raters. Cohen's kappa and percent agreement were used. A paired-samples t-test was used to compare reading time for rater 1. Conventional and color-coded images had substantial or almost perfect agreement for all raters. Mean reading time for rater 1 was 47.4 seconds for DWI and 27.9 seconds for color-coded images (P = .00007). These findings are important because they support the role of color-coded images as being equivalent to that of conventional DWI in terms of diagnostic capability. A reduction in reading time (which makes reading easier) is also demonstrated for one rater in this study.
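
    The agreement statistics named in the abstract, percent agreement and Cohen's kappa, can be computed as follows; this is a generic textbook sketch, not the study's analysis code:

```python
from collections import Counter

def percent_agreement(a, b):
    """Fraction of items on which two raters gave the same label."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e the agreement expected by chance from the marginal rates."""
    n = len(a)
    p_o = percent_agreement(a, b)
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[k] * cb.get(k, 0) for k in ca) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

    Kappa of 1 is perfect agreement; 0 is chance-level; by convention, values above about 0.6 are "substantial" and above 0.8 "almost perfect".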

  14. Feature Assignment in Perception of Auditory Figure

    PubMed Central

    Gregg, Melissa K.; Samuel, Arthur G.

    2012-01-01

    Because the environment often includes multiple sounds that overlap in time, listeners must segregate a sound of interest (the auditory figure) from other co-occurring sounds (the unattended auditory ground). We conducted a series of experiments to clarify the principles governing the extraction of auditory figures. We distinguish between auditory “objects” (relatively punctate events, such as a dog's bark) and auditory “streams” (sounds involving a pattern over time, such as a galloping rhythm). In Experiments 1 and 2, on each trial two sounds, an object (a vowel) and a stream (a series of tones), were presented with one target feature that could be perceptually grouped with either source. In each block of these experiments, listeners were required to attend to one of the two sounds, and report its perceived category. Across several experimental manipulations, listeners were more likely to allocate the feature to an impoverished object if the result of the grouping was a good, identifiable object. Perception of objects was quite sensitive to feature variation (noise masking), whereas perception of streams was more robust to feature variation. In Experiment 3, the number of sound sources competing for the feature was increased to three. This produced a shift toward relying more on spatial cues than on the potential contribution of the feature to an object's perceptual quality. The results support a distinction between auditory objects and streams, and provide new information about the way that the auditory world is parsed. PMID:22288691

  15. Tracking the Time Course of Word-Frequency Effects in Auditory Word Recognition with Event-Related Potentials

    ERIC Educational Resources Information Center

    Dufour, Sophie; Brunelliere, Angele; Frauenfelder, Ulrich H.

    2013-01-01

    Although the word-frequency effect is one of the most established findings in spoken-word recognition, the precise processing locus of this effect is still a topic of debate. In this study, we used event-related potentials (ERPs) to track the time course of the word-frequency effect. In addition, the neighborhood density effect, which is known to…

  16. Learning Novel Phonological Representations in Developmental Dyslexia: Associations with Basic Auditory Processing of Rise Time and Phonological Awareness

    ERIC Educational Resources Information Center

    Thomson, Jennifer M.; Goswami, Usha

    2010-01-01

    Across languages, children with developmental dyslexia are known to have impaired lexical phonological representations. Here, we explore associations between learning new phonological representations, phonological awareness, and sensitivity to amplitude envelope onsets (rise time). We show that individual differences in learning novel phonological…

  17. Novel space-time trellis codes for free-space optical communications using transmit laser selection.

    PubMed

    García-Zambrana, Antonio; Boluda-Ruiz, Rubén; Castillo-Vázquez, Carmen; Castillo-Vázquez, Beatriz

    2015-09-21

    In this paper, the deployment of novel space-time trellis codes (STTCs) with transmit laser selection (TLS) for free-space optical (FSO) communication systems using intensity modulation and direct detection (IM/DD) over atmospheric turbulence and misalignment fading channels is presented. Combining TLS and STTC with rate 1 bit/(s · Hz), a new code design criterion based on the use of the largest order statistics is here proposed for multiple-input/single-output (MISO) FSO systems in order to improve the diversity order gain by properly choosing the transmit lasers out of the available L lasers. Based on a pairwise error probability (PEP) analysis, closed-form asymptotic bit error-rate (BER) expressions in the range from low to high signal-to-noise ratio (SNR) are derived when the irradiance of the transmitted optical beam is susceptible to moderate-to-strong turbulence conditions, following a gamma-gamma (GG) distribution, and pointing error effects, following a misalignment fading model where the effect of beam width, detector size and jitter variance is considered. Obtained results show diversity orders of 2L and 3L when simple two-state and four-state STTCs are considered, respectively. Simulation results are further demonstrated to confirm the analytical results.
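
    The transmit laser selection idea, activating the lasers whose instantaneous channel gains are the largest order statistics out of the L available, reduces in sketch form to a sort-and-slice; the function below is an illustrative assumption, not the paper's full scheme:

```python
def select_lasers(gains, n_active=1):
    """Return indices of the n_active lasers with the largest
    instantaneous channel gains (the largest order statistics)."""
    order = sorted(range(len(gains)), key=lambda i: gains[i],
                   reverse=True)
    return order[:n_active]
```

    In the paper's setting the gains would be drawn from a gamma-gamma turbulence model combined with misalignment fading, and the selected subset feeds the STTC encoder.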

  18. Dynamics of auditory working memory

    PubMed Central

    Kaiser, Jochen

    2015-01-01

    Working memory denotes the ability to retain stimuli in mind that are no longer physically present and to perform mental operations on them. Electro- and magnetoencephalography allow investigating the short-term maintenance of acoustic stimuli at a high temporal resolution. Studies investigating working memory for non-spatial and spatial auditory information have suggested differential roles of regions along the putative auditory ventral and dorsal streams, respectively, in the processing of the different sound properties. Analyses of event-related potentials have shown sustained, memory load-dependent deflections over the retention periods. The topography of these waves suggested an involvement of modality-specific sensory storage regions. Spectral analysis has yielded information about the temporal dynamics of auditory working memory processing of individual stimuli, showing activation peaks during the delay phase whose timing was related to task performance. Coherence at different frequencies was enhanced between frontal and sensory cortex. In summary, auditory working memory seems to rely on the dynamic interplay between frontal executive systems and sensory representation regions. PMID:26029146

  19. Real Time Optimizing Code for Stabilization and Control of Plasma Reactors

    1995-09-25

    LOOP4 is a flexible real-time control code that acquires signals (input variables) from an array of sensors, that computes therefrom the actual state of the reactor system, that compares the actual state to the desired state (a goal), and that commands changes to reactor controls (output, or manipulated variables) in order to minimize the difference between the actual state of the reactor and the desired state. The difference between actual and desired states is quantified in terms of a distance metric in the space defined by the sensor measurements. The desired state of the reactor is specified in terms of target values of sensor readings that were obtained previously, during development and optimization of the process by an engineer using conventional techniques.
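
    The summary describes a classic sensor-driven control loop; a minimal sketch, with a Euclidean distance metric in sensor space and a simple proportional correction (the gain value is assumed for illustration, and LOOP4's actual update rule is not specified here), might look like:

```python
def distance(state, target):
    """Euclidean distance metric in the space of sensor readings."""
    return sum((s - t) ** 2 for s, t in zip(state, target)) ** 0.5

def control_step(state, target, gain=0.5):
    """One proportional correction of the manipulated variables,
    nudging the state toward the target to shrink the distance."""
    return [s + gain * (t - s) for s, t in zip(state, target)]
```

    Iterating `control_step` drives the distance metric toward zero, which is the minimization the summary describes.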

  20. GATOR: A 3-D time-dependent simulation code for helix TWTs

    SciTech Connect

    Zaidman, E.G.; Freund, H.P.

    1996-12-31

    A 3D nonlinear analysis of helix TWTs is presented. The analysis and simulation code is based upon a spectral decomposition using the vacuum sheath helix modes. The field equations are integrated on a grid and advanced in time using a MacCormack predictor-corrector scheme, and the electron orbit equations are integrated using a fourth order Runge-Kutta algorithm. Charge is accumulated on the grid and the field is interpolated to the particle location by a linear map. The effect of dielectric liners on the vacuum sheath helix dispersion is included in the analysis. Several numerical cases are considered. Simulation of the injection of a DC beam and a signal at a single frequency is compared with a linear field theory of the helix TWT interaction, and good agreement is found.
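
    The electron orbit integration mentioned above uses the classical fourth-order Runge-Kutta algorithm; a generic single-step implementation for a scalar ODE dy/dt = f(t, y) is sketched below (this is textbook RK4, not the GATOR code itself):

```python
def rk4_step(f, t, y, h):
    """Advance y by one step of size h using classical RK4."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```

    The local truncation error is O(h^5), so ten steps of h = 0.1 integrate dy/dt = y from 1 to very nearly e.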

  1. The impact of time step definition on code convergence and robustness

    NASA Technical Reports Server (NTRS)

    Venkateswaran, S.; Weiss, J. M.; Merkle, C. L.

    1992-01-01

    We have implemented preconditioning for multi-species reacting flows in two independent codes, an implicit (ADI) code developed in-house and the RPLUS code (developed at LeRC). The RPLUS code was modified to work on a four-stage Runge-Kutta scheme. The performance of both the codes was tested, and it was shown that preconditioning can improve convergence by a factor of two to a hundred depending on the problem. Our efforts are currently focused on evaluating the effect of chemical sources and on assessing how preconditioning may be applied to improve convergence and robustness in the calculation of reacting flows.

  2. A Simple Method for Guaranteeing ECG Quality in Real-Time Wavelet Lossy Coding

    NASA Astrophysics Data System (ADS)

    Alesanco, Álvaro; García, José

    2007-12-01

    Guaranteeing ECG signal quality in wavelet lossy compression methods is essential for clinical acceptability of reconstructed signals. In this paper, we present a simple and efficient method for guaranteeing reconstruction quality measured using the new distortion index wavelet weighted PRD (WWPRD), which reflects in a more accurate way the real clinical distortion of the compressed signal. The method is based on the wavelet transform and its subsequent coding using the set partitioning in hierarchical trees (SPIHT) algorithm. By thresholding the WWPRD in the wavelet transform domain, a very precise reconstruction error can be achieved, thus enabling clinically useful reconstructed signals to be obtained. Because of its computational efficiency, the method is suitable for real-time operation, making it very useful for real-time telecardiology systems. The method is extensively tested using two different ECG databases. Results led to an excellent conclusion: the method controls the quality in a very accurate way, not only in mean value but also with a low standard deviation. The effects of ECG baseline wandering as well as noise in compression are also discussed. Baseline wandering provokes negative effects when using the WWPRD index to guarantee quality because this index is normalized by the signal energy; therefore, it is better to remove it before compression. On the other hand, noise causes an increase in signal energy, provoking an artificial increase in the coded signal bit rate. Clinical validation by cardiologists showed that a WWPRD value of 10% preserves the signal quality, and thus they recommend this value for use in the compression system.
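
    The quality-guarantee idea, adding coded detail until a distortion index falls below a threshold, can be sketched with plain PRD on raw samples; the real method thresholds the wavelet-weighted WWPRD inside SPIHT coding, so this is only a deliberately simplified illustration:

```python
def prd(original, reconstructed):
    """Percent RMS difference between a signal and its reconstruction."""
    num = sum((o - r) ** 2 for o, r in zip(original, reconstructed))
    den = sum(o ** 2 for o in original)
    return 100.0 * (num / den) ** 0.5

def code_to_quality(signal, prd_target):
    """Restore the largest-magnitude samples first, adding more until
    the reconstruction meets the PRD target; return it and the count
    of samples kept (a stand-in for the coded bit budget)."""
    order = sorted(range(len(signal)), key=lambda i: abs(signal[i]),
                   reverse=True)
    recon = [0.0] * len(signal)
    kept = 0
    for i in order:
        if prd(signal, recon) <= prd_target:
            break
        recon[i] = signal[i]
        kept += 1
    return recon, kept
```

    SPIHT makes exactly this greedy largest-first refinement efficient in the wavelet domain, which is why thresholding the distortion index there gives precise quality control.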

  3. Auditory memory function in expert chess players

    PubMed Central

    Fattahi, Fariba; Geshani, Ahmad; Jafari, Zahra; Jalaie, Shohreh; Salman Mahini, Mona

    2015-01-01

    Background: Chess is a game that involves many aspects of high-level cognition such as memory, attention, focus and problem solving. Long-term practice of chess can improve cognitive performance and behavioral skills. Like other behavioral skills, auditory memory can be influenced by the strengthening processes that follow long-term chess playing, because of common processing pathways in the brain. The purpose of this study was to evaluate the auditory memory function of expert chess players using the Persian version of the dichotic auditory-verbal memory test. Methods: The Persian version of the dichotic auditory-verbal memory test was performed for 30 expert chess players aged 20-35 years and 30 non-chess players who were matched by different conditions; the participants in both groups were randomly selected. The performance of the two groups was compared by independent-samples t-test using SPSS version 21. Results: The mean scores of the dichotic auditory-verbal memory test in the two groups, expert chess players and non-chess players, revealed a significant difference (p ≤ 0.001). The difference between the ear scores for expert chess players (p = 0.023) and non-chess players (p = 0.013) was significant. Gender had no effect on the test results. Conclusion: Auditory memory function in expert chess players was significantly better than in non-chess players. It seems that increased auditory memory function is related to the strengthening of cognitive performance due to playing chess for a long time. PMID:26793666

  4. Representation of reward feedback in primate auditory cortex.

    PubMed

    Brosch, Michael; Selezneva, Elena; Scheich, Henning

    2011-01-01

    It is well established that auditory cortex is plastic on different time scales and that this plasticity is driven by the reinforcement that is used to motivate subjects to learn or to perform an auditory task. Motivated by these findings, we study in detail the properties of neuronal firing in auditory cortex that are related to reward feedback. We recorded from the auditory cortex of two monkeys while they were performing an auditory categorization task. Monkeys listened to a sequence of tones and had to signal when the frequency of adjacent tones stepped in a downward direction, irrespective of the tone frequency and step size. Correct identifications were rewarded with either a large or a small amount of water. The size of the reward depended on the monkeys' performance in the previous trial: it was large after a correct trial and small after an incorrect trial. The rewards served to maintain task performance. During task performance we found three successive periods of neuronal firing in auditory cortex that reflected (1) the reward expectancy for each trial, (2) the reward size received, and (3) the mismatch between the expected and delivered reward. These results, together with control experiments, suggest that auditory cortex receives reward feedback that could be used to adapt auditory cortex to task requirements. Additionally, the results presented here extend previous observations of non-auditory roles of auditory cortex and show that auditory cortex is even more cognitively influenced than previously recognized.

  5. Incorporating Midbrain Adaptation to Mean Sound Level Improves Models of Auditory Cortical Processing

    PubMed Central

    Schoppe, Oliver; King, Andrew J.; Schnupp, Jan W.H.; Harper, Nicol S.

    2016-01-01

    Adaptation to stimulus statistics, such as the mean level and contrast of recently heard sounds, has been demonstrated at various levels of the auditory pathway. It allows the nervous system to operate over the wide range of intensities and contrasts found in the natural world. Yet current standard models of the response properties of auditory neurons do not incorporate such adaptation. Here we present a model of neural responses in the ferret auditory cortex (the IC Adaptation model), which takes into account adaptation to mean sound level at a lower level of processing: the inferior colliculus (IC). The model performs high-pass filtering with frequency-dependent time constants on the sound spectrogram, followed by half-wave rectification, and passes the output to a standard linear–nonlinear (LN) model. We find that the IC Adaptation model consistently predicts cortical responses better than the standard LN model for a range of synthetic and natural stimuli. The IC Adaptation model introduces no extra free parameters, so it improves predictions without sacrificing parsimony. Furthermore, the time constants of adaptation in the IC appear to be matched to the statistics of natural sounds, suggesting that neurons in the auditory midbrain predict the mean level of future sounds and adapt their responses appropriately. SIGNIFICANCE STATEMENT An ability to accurately predict how sensory neurons respond to novel stimuli is critical if we are to fully characterize their response properties. Attempts to model these responses have had a distinguished history, but it has proven difficult to improve their predictive power significantly beyond that of simple, mostly linear receptive field models. Here we show that auditory cortex receptive field models benefit from a nonlinear preprocessing stage that replicates known adaptation properties of the auditory midbrain. This improves their predictive power across a wide range of stimuli but keeps model complexity low as it
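
    The IC Adaptation preprocessing stage described above, high-pass filtering with frequency-dependent time constants followed by half-wave rectification, can be sketched for a single spectrogram channel; the discrete first-order filter form and the parameter values are assumptions for illustration, not the paper's exact implementation:

```python
def highpass_rectify(channel, tau, dt=1.0):
    """First-order high-pass filter (time constant tau, sample period
    dt) followed by half-wave rectification. Removing the slowly
    varying component implements adaptation to mean sound level."""
    alpha = tau / (tau + dt)
    out = []
    prev_x, prev_y = channel[0], 0.0
    for x in channel:
        y = alpha * (prev_y + x - prev_x)  # discrete high-pass step
        prev_x, prev_y = x, y
        out.append(max(y, 0.0))            # half-wave rectification
    return out
```

    Applied with a channel-specific tau to each row of the spectrogram, the output then feeds the standard linear-nonlinear stage: a sustained level is filtered away, while onsets pass through.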

  6. Effective real-time vehicle tracking using discriminative sparse coding on local patches

    NASA Astrophysics Data System (ADS)

    Chen, XiangJun; Ye, Feiyue; Ruan, Yaduan; Chen, Qimei

    2016-01-01

    A visual tracking framework that provides an object detector and tracker, focused on effective and efficient visual tracking in surveillance for real-world intelligent transport system applications, is proposed. The framework casts the tracking task as problems of object detection, feature representation, and classification, which differs from appearance model-matching approaches. Through a feature representation called DSCLP, discriminative sparse coding on local patches, which trains a dictionary on locally clustered patches sampled from both positive and negative datasets, the discriminative power and robustness have been improved markedly, which makes the method more robust in complex realistic settings with all kinds of degraded image quality. Moreover, by detecting objects through one-time background subtraction, along with offline dictionary training, computation time is dramatically reduced, which enables the framework to achieve real-time tracking performance even in a high-definition sequence with heavy traffic. Experimental results show that this work outperforms some state-of-the-art methods in terms of speed, accuracy, and robustness, and exhibits increased robustness in complex real-world scenarios with degraded image quality caused by vehicle occlusion, image blur from rain or fog, and changes in viewpoint or scale.

  7. One-time collision arbitration algorithm in radio-frequency identification based on the Manchester code

    NASA Astrophysics Data System (ADS)

    Liu, Chen-Chung; Chan, Yin-Tsung

    2011-02-01

    In radio-frequency identification (RFID) systems, when multiple tags transmit data to a reader simultaneously, these data may collide and create unsuccessful identifications; hence, anticollision algorithms are needed to reduce collisions (collision cycles) and improve the tag identification speed. We propose a one-time collision arbitration algorithm to reduce both the number of collisions and the time consumed by tag identification in RFID. The proposed algorithm uses Manchester coding to detect the locations of collided bits, uses the divide-and-conquer strategy to find the structure of colliding bits to generate 96-bit query strings as the 96-bit candidate query strings (96BCQSs), and uses query-tree anticollision schemes with 96BCQSs to identify tags. The performance analysis and experimental results show that the proposed algorithm has three advantages: (i) it reduces the number of collision cycles to only one, so that the time complexity of tag identification is O(1); (ii) it stores identified identification numbers (IDs) and the 96BCQSs in a register to save memory; and (iii) the number of bits transmitted by both the reader and the tags is markedly smaller than in other algorithms, whether identifying a single tag or all tags.
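
    Manchester coding lets the reader locate collided bits because superposed transmissions of unequal bit values yield no valid mid-bit transition; in sketch form, the positions a reader flags as collided are exactly those where the responding tags' IDs disagree (a simplified model of the reader, not the paper's full 96BCQS algorithm):

```python
def collided_bits(tag_ids):
    """Given the ID bit strings of simultaneously responding tags,
    return the bit positions a Manchester-decoding reader would flag
    as collided, i.e. positions where the tags disagree."""
    return [i for i in range(len(tag_ids[0]))
            if len({tid[i] for tid in tag_ids}) > 1]
```

    Knowing these positions is what allows a query-tree scheme to branch only on the genuinely ambiguous bits instead of probing blindly.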

  8. TIME-DEPENDENT MULTI-GROUP MULTI-DIMENSIONAL RELATIVISTIC RADIATIVE TRANSFER CODE BASED ON SPHERICAL HARMONIC DISCRETE ORDINATE METHOD

    SciTech Connect

    Tominaga, Nozomu; Shibata, Sanshiro; Blinnikov, Sergei I. E-mail: sshibata@post.kek.jp

    2015-08-15

    We develop a time-dependent, multi-group, multi-dimensional relativistic radiative transfer code, which is required to numerically investigate radiation from relativistic fluids that are involved in, e.g., gamma-ray bursts and active galactic nuclei. The code is based on the spherical harmonic discrete ordinate method (SHDOM) which evaluates a source function including anisotropic scattering in spherical harmonics and implicitly solves the static radiative transfer equation with ray tracing in discrete ordinates. We implement treatments of time dependence, multi-frequency bins, Lorentz transformation, and elastic Thomson and inelastic Compton scattering to the publicly available SHDOM code. Our code adopts a mixed-frame approach; the source function is evaluated in the comoving frame, whereas the radiative transfer equation is solved in the laboratory frame. This implementation is validated using various test problems and comparisons with the results from a relativistic Monte Carlo code. These validations confirm that the code correctly calculates the intensity and its evolution in the computational domain. The code enables us to obtain an Eddington tensor that relates the first and third moments of intensity (energy density and radiation pressure) and is frequently used as a closure relation in radiation hydrodynamics calculations.

  9. Cerebellar Nuclear Neurons Use Time and Rate Coding to Transmit Purkinje Neuron Pauses

    PubMed Central

    Sudhakar, Shyam Kumar; Torben-Nielsen, Benjamin; De Schutter, Erik

    2015-01-01

    Neurons of the cerebellar nuclei convey the final output of the cerebellum to their targets in various parts of the brain. Within the cerebellum their direct upstream connections originate from inhibitory Purkinje neurons. Purkinje neurons have a complex firing pattern of regular spikes interrupted by intermittent pauses of variable length. How can the cerebellar nucleus process this complex input pattern? In this modeling study, we investigate different forms of Purkinje neuron simple spike pause synchrony and its influence on candidate coding strategies in the cerebellar nuclei. That is, we investigate how different alignments of synchronous pauses in synthetic Purkinje neuron spike trains affect either time-locking or rate-changes in the downstream nuclei. We find that Purkinje neuron synchrony is mainly represented by changes in the firing rate of cerebellar nuclei neurons. Pause beginning synchronization produced a unique effect on nuclei neuron firing, while the effect of pause ending and pause overlapping synchronization could not be distinguished from each other. Pause beginning synchronization produced better time-locking of nuclear neurons for short length pauses. We also characterize the effect of pause length and spike jitter on the nuclear neuron firing. Additionally, we find that the rate of rebound responses in nuclear neurons after a synchronous pause is controlled by the firing rate of Purkinje neurons preceding it. PMID:26630202
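    The study's central manipulation is how synchronous pauses are aligned across synthetic Purkinje spike trains. The following is an illustrative sketch under my own assumptions (regular trains, uniformly distributed pause lengths; not the authors' model): pauses of variable length either share a common onset ("pause beginning" synchronization) or a common offset ("pause ending" synchronization).

```python
import random

def regular_train(rate_hz, duration_s):
    """Regular simple-spike train at a fixed firing rate."""
    isi = 1.0 / rate_hz
    return [i * isi for i in range(int(duration_s * rate_hz))]

def insert_pause(train, start, length):
    """Delete all spikes falling inside the pause window."""
    return [t for t in train if not (start <= t < start + length)]

def population(n, sync='onset', t_sync=0.5, rate=80, dur=1.0, seed=0):
    """n synthetic Purkinje trains with pauses aligned at onset or offset."""
    rng = random.Random(seed)
    trains = []
    for _ in range(n):
        length = rng.uniform(0.02, 0.08)      # variable pause length (s)
        if sync == 'onset':                   # shared pause beginning
            start = t_sync
        else:                                 # shared pause ending
            start = t_sync - length
        trains.append(insert_pause(regular_train(rate, dur), start, length))
    return trains

onset_pop = population(5, sync='onset')
offset_pop = population(5, sync='offset')
```

With onset alignment, every train is silent immediately after t_sync; with offset alignment, every train is silent immediately before it, which is the asymmetry the nuclear neurons in the study respond to differently.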

  10. Cerebellar Nuclear Neurons Use Time and Rate Coding to Transmit Purkinje Neuron Pauses.

    PubMed

    Sudhakar, Shyam Kumar; Torben-Nielsen, Benjamin; De Schutter, Erik

    2015-12-01

    Neurons of the cerebellar nuclei convey the final output of the cerebellum to their targets in various parts of the brain. Within the cerebellum their direct upstream connections originate from inhibitory Purkinje neurons. Purkinje neurons have a complex firing pattern of regular spikes interrupted by intermittent pauses of variable length. How can the cerebellar nucleus process this complex input pattern? In this modeling study, we investigate different forms of Purkinje neuron simple spike pause synchrony and its influence on candidate coding strategies in the cerebellar nuclei. That is, we investigate how different alignments of synchronous pauses in synthetic Purkinje neuron spike trains affect either time-locking or rate-changes in the downstream nuclei. We find that Purkinje neuron synchrony is mainly represented by changes in the firing rate of cerebellar nuclei neurons. Pause beginning synchronization produced a unique effect on nuclei neuron firing, while the effect of pause ending and pause overlapping synchronization could not be distinguished from each other. Pause beginning synchronization produced better time-locking of nuclear neurons for short length pauses. We also characterize the effect of pause length and spike jitter on the nuclear neuron firing. Additionally, we find that the rate of rebound responses in nuclear neurons after a synchronous pause is controlled by the firing rate of Purkinje neurons preceding it.

  11. Statistical learning of recurring sound patterns encodes auditory objects in songbird forebrain

    PubMed Central

    Lu, Kai; Vicario, David S.

    2014-01-01

Auditory neurophysiology has demonstrated how basic acoustic features are mapped in the brain, but it is still not clear how multiple sound components are integrated over time and recognized as an object. We investigated the role of statistical learning in encoding the sequential features of complex sounds by recording neuronal responses bilaterally in the auditory forebrain of awake songbirds that were passively exposed to long sound streams. These streams contained sequential regularities, and were similar to the streams used with human infants to demonstrate statistical learning for speech sounds. For stimulus patterns with contiguous transitions and with nonadjacent elements, single and multiunit responses reflected neuronal discrimination of the familiar patterns from novel patterns. In addition, discrimination of nonadjacent patterns was stronger in the right hemisphere than in the left, and may reflect an effect of top-down modulation that is lateralized. Responses to recurring patterns showed stimulus-specific adaptation, a sparsening of neural activity that may contribute to encoding invariants in the sound stream and that appears to increase coding efficiency for the familiar stimuli across the population of neurons recorded. As auditory information about the world must be received serially over time, recognition of complex auditory objects may depend on this type of mnemonic process to create and differentiate representations of recently heard sounds. PMID:25246563

  12. Auditory-motor learning influences auditory memory for music.

    PubMed

    Brown, Rachel M; Palmer, Caroline

    2012-05-01

    In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features. PMID:22271265

  13. Auditory-motor learning influences auditory memory for music.

    PubMed

    Brown, Rachel M; Palmer, Caroline

    2012-05-01

    In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.

  14. Sexual Orientation and the Auditory System

    PubMed Central

    McFadden, Dennis

    2011-01-01

    The auditory system exhibits differences by sex and by sexual orientation, and the implication is that relevant auditory structures are altered during prenatal development, possibly by exposure to androgens. The otoacoustic emissions (OAEs) of newborn male infants are weaker than those of newborn females, and these sex differences persist through the lifespan. The OAEs of nonheterosexual females also are weaker than those of heterosexual females, suggesting an atypically strong exposure to androgens some time early in development. Auditory evoked potentials (AEPs) also exhibit sex differences beginning early in life. Some AEPs are different for heterosexual and nonheterosexual females, and other AEPs are different for heterosexual and nonheterosexual males. Research on non-humans treated with androgenic or anti-androgenic agents also suggests that OAEs are masculinized by prenatal exposure to androgens late in gestation. Collectively, the evidence suggests that prenatal androgens, acting globally or locally, affect both nonheterosexuality and the auditory system. PMID:21310172

  15. Translating Neurocognitive Models of Auditory-Verbal Hallucinations into Therapy: Using Real-time fMRI-Neurofeedback to Treat Voices.

    PubMed

    Fovet, Thomas; Orlov, Natasza; Dyck, Miriam; Allen, Paul; Mathiak, Klaus; Jardri, Renaud

    2016-01-01

    Auditory-verbal hallucinations (AVHs) are frequent and disabling symptoms, which can be refractory to conventional psychopharmacological treatment in more than 25% of the cases. Recent advances in brain imaging allow for a better understanding of the neural underpinnings of AVHs. These findings strengthened transdiagnostic neurocognitive models that characterize these frequent and disabling experiences. At the same time, technical improvements in real-time functional magnetic resonance imaging (fMRI) enabled the development of innovative and non-invasive methods with the potential to relieve psychiatric symptoms, such as fMRI-based neurofeedback (fMRI-NF). During fMRI-NF, brain activity is measured and fed back in real time to the participant in order to help subjects to progressively achieve voluntary control over their own neural activity. Precisely defining the target brain area/network(s) appears critical in fMRI-NF protocols. After reviewing the available neurocognitive models for AVHs, we elaborate on how recent findings in the field may help to develop strong a priori strategies for fMRI-NF target localization. The first approach relies on imaging-based "trait markers" (i.e., persistent traits or vulnerability markers that can also be detected in the presymptomatic and remitted phases of AVHs). The goal of such strategies is to target areas that show aberrant activations during AVHs or are known to be involved in compensatory activation (or resilience processes). Brain regions, from which the NF signal is derived, can be based on structural MRI and neurocognitive knowledge, or functional MRI information collected during specific cognitive tasks. Because hallucinations are acute and intrusive symptoms, a second strategy focuses more on "state markers." In this case, the signal of interest relies on fMRI capture of the neural networks exhibiting increased activity during AVHs occurrences, by means of multivariate pattern recognition methods. The fine

  16. Translating Neurocognitive Models of Auditory-Verbal Hallucinations into Therapy: Using Real-time fMRI-Neurofeedback to Treat Voices

    PubMed Central

    Fovet, Thomas; Orlov, Natasza; Dyck, Miriam; Allen, Paul; Mathiak, Klaus; Jardri, Renaud

    2016-01-01

    Auditory-verbal hallucinations (AVHs) are frequent and disabling symptoms, which can be refractory to conventional psychopharmacological treatment in more than 25% of the cases. Recent advances in brain imaging allow for a better understanding of the neural underpinnings of AVHs. These findings strengthened transdiagnostic neurocognitive models that characterize these frequent and disabling experiences. At the same time, technical improvements in real-time functional magnetic resonance imaging (fMRI) enabled the development of innovative and non-invasive methods with the potential to relieve psychiatric symptoms, such as fMRI-based neurofeedback (fMRI-NF). During fMRI-NF, brain activity is measured and fed back in real time to the participant in order to help subjects to progressively achieve voluntary control over their own neural activity. Precisely defining the target brain area/network(s) appears critical in fMRI-NF protocols. After reviewing the available neurocognitive models for AVHs, we elaborate on how recent findings in the field may help to develop strong a priori strategies for fMRI-NF target localization. The first approach relies on imaging-based “trait markers” (i.e., persistent traits or vulnerability markers that can also be detected in the presymptomatic and remitted phases of AVHs). The goal of such strategies is to target areas that show aberrant activations during AVHs or are known to be involved in compensatory activation (or resilience processes). Brain regions, from which the NF signal is derived, can be based on structural MRI and neurocognitive knowledge, or functional MRI information collected during specific cognitive tasks. Because hallucinations are acute and intrusive symptoms, a second strategy focuses more on “state markers.” In this case, the signal of interest relies on fMRI capture of the neural networks exhibiting increased activity during AVHs occurrences, by means of multivariate pattern recognition methods. The fine

  17. Neural timing nets.

    PubMed

    Cariani, P A

    2001-01-01

    Formulations of artificial neural networks are directly related to assumptions about neural coding in the brain. Traditional connectionist networks assume channel-based rate coding, while time-delay networks convert temporally-coded inputs into rate-coded outputs. Neural timing nets that operate on time structured input spike trains to produce meaningful time-structured outputs are proposed. Basic computational properties of simple feedforward and recurrent timing nets are outlined and applied to auditory computations. Feed-forward timing nets consist of arrays of coincidence detectors connected via tapped delay lines. These temporal sieves extract common spike patterns in their inputs that can subserve extraction of common fundamental frequencies (periodicity pitch) and common spectrum (timbre). Feedforward timing nets can also be used to separate time-shifted patterns, fusing patterns with similar internal temporal structure and spatially segregating different ones. Simple recurrent timing nets consisting of arrays of delay loops amplify and separate recurring time patterns. Single- and multichannel recurrent timing nets are presented that demonstrate the separation of concurrent, double vowels. Timing nets constitute a new and general neural network strategy for performing temporal computations on neural spike trains: extraction of common periodicities, detection of recurring temporal patterns, and formation and separation of invariant spike patterns that subserve auditory objects.

  18. Multi-fluid transport code modeling of time-dependent recycling in ELMy H-mode

    SciTech Connect

    Pigarov, A. Yu.; Krasheninnikov, S. I.; Rognlien, T. D.; Hollmann, E. M.; Lasnier, C. J.; Unterberg, Ezekial A

    2014-01-01

    Simulations of a high-confinement-mode (H-mode) tokamak discharge with infrequent giant type-I ELMs are performed by the multi-fluid, multi-species, two-dimensional transport code UEDGE-MB, which incorporates the Macro-Blob approach for intermittent non-diffusive transport due to filamentary coherent structures observed during the Edge Localized Modes (ELMs) and simple time-dependent multi-parametric models for cross-field plasma transport coefficients and working gas inventory in material surfaces. Temporal evolutions of pedestal plasma profiles, divertor recycling, and wall inventory in a sequence of ELMs are studied and compared to the experimental time-dependent data. Short- and long-time-scale variations of the pedestal and divertor plasmas where the ELM is described as a sequence of macro-blobs are discussed. It is shown that the ELM recovery includes the phase of relatively dense and cold post-ELM divertor plasma evolving on a several ms scale, which is set by the transport properties of H-mode barrier. The global gas balance in the discharge is also analyzed. The calculated rates of working gas deposition during each ELM and wall outgassing between ELMs are compared to the ELM particle losses from the pedestal and neutral-beam-injection fueling rate, correspondingly. A sensitivity study of the pedestal and divertor plasmas to model assumptions for gas deposition and release on material surfaces is presented. The performed simulations show that the dynamics of pedestal particle inventory is dominated by the transient intense gas deposition into the wall during each ELM followed by continuous gas release between ELMs at roughly a constant rate.

  19. Multi-fluid transport code modeling of time-dependent recycling in ELMy H-mode

    SciTech Connect

    Pigarov, A. Yu.; Krasheninnikov, S. I.; Hollmann, E. M.; Rognlien, T. D.; Lasnier, C. J.; Unterberg, E.

    2014-06-15

    Simulations of a high-confinement-mode (H-mode) tokamak discharge with infrequent giant type-I ELMs are performed by the multi-fluid, multi-species, two-dimensional transport code UEDGE-MB, which incorporates the Macro-Blob approach for intermittent non-diffusive transport due to filamentary coherent structures observed during the Edge Localized Modes (ELMs) and simple time-dependent multi-parametric models for cross-field plasma transport coefficients and working gas inventory in material surfaces. Temporal evolutions of pedestal plasma profiles, divertor recycling, and wall inventory in a sequence of ELMs are studied and compared to the experimental time-dependent data. Short- and long-time-scale variations of the pedestal and divertor plasmas where the ELM is described as a sequence of macro-blobs are discussed. It is shown that the ELM recovery includes the phase of relatively dense and cold post-ELM divertor plasma evolving on a several ms scale, which is set by the transport properties of H-mode barrier. The global gas balance in the discharge is also analyzed. The calculated rates of working gas deposition during each ELM and wall outgassing between ELMs are compared to the ELM particle losses from the pedestal and neutral-beam-injection fueling rate, correspondingly. A sensitivity study of the pedestal and divertor plasmas to model assumptions for gas deposition and release on material surfaces is presented. The performed simulations show that the dynamics of pedestal particle inventory is dominated by the transient intense gas deposition into the wall during each ELM followed by continuous gas release between ELMs at roughly a constant rate.

  20. Effects of an Auditory Lateralization Training in Children Suspected to Central Auditory Processing Disorder

    PubMed Central

    Lotfi, Yones; Moosavi, Abdollah; Bakhshi, Enayatollah; Sadjedi, Hamed

    2016-01-01

Background and Objectives Central auditory processing disorder [(C)APD] refers to a deficit in the processing of auditory stimuli in the nervous system that is not due to higher-order language or cognitive factors. One problem in children with (C)APD is spatial difficulty, which has been overlooked despite its significance. Localization is the auditory ability to detect sound sources in space and can help differentiate desired speech from other simultaneous sound sources. The aim of this research was to investigate the effects of auditory lateralization training on speech perception in the presence of noise/competing signals in children suspected to have (C)APD. Subjects and Methods In this analytical interventional study, 60 children suspected to have (C)APD were selected based on multiple auditory processing assessment subtests. They were randomly divided into two groups: a control group (mean age 9.07) and a training group (mean age 9.00). The training program consisted of detecting and pointing to sound sources delivered with interaural time differences under headphones for 12 formal sessions (6 weeks). Spatial word recognition score (WRS) and the monaural selective auditory attention test (mSAAT) were used to follow the effects of the auditory lateralization training. Results This study showed that in the training group, the mSAAT score and spatial WRS in noise improved significantly after the auditory lateralization training (p value ≤ 0.001). Conclusions We used auditory lateralization training for 6 weeks and showed that it can significantly improve speech understanding in noise. Generalization of these results needs further research. PMID:27626084
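
    The training stimuli in this study are lateralized purely by interaural time differences under headphones. A minimal sketch of how such an ITD is imposed on a headphone stimulus (hypothetical helper, not the study's actual protocol or software): the channel to the lagging ear is delayed by a few hundred microseconds.

```python
def apply_itd(mono, itd_s, fs, lead='left'):
    """Return (left, right) channel sample lists with the lagging ear
    delayed by round(itd_s * fs) samples; both channels are zero-padded
    to equal length."""
    d = round(itd_s * fs)
    delayed = [0.0] * d + list(mono)   # lagging ear: prepend silence
    padded = list(mono) + [0.0] * d    # leading ear: pad the tail
    return (padded, delayed) if lead == 'left' else (delayed, padded)

fs = 44100                             # sample rate (Hz)
click = [1.0] + [0.0] * 9              # a single-sample click
left, right = apply_itd(click, itd_s=500e-6, fs=fs, lead='left')
```

A 500 µs ITD at 44.1 kHz corresponds to a 22-sample lag, well within the physiological range used to shift a sound image toward the leading ear.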

  1. Effects of an Auditory Lateralization Training in Children Suspected to Central Auditory Processing Disorder

    PubMed Central

    Lotfi, Yones; Moosavi, Abdollah; Bakhshi, Enayatollah; Sadjedi, Hamed

    2016-01-01

Background and Objectives Central auditory processing disorder [(C)APD] refers to a deficit in the processing of auditory stimuli in the nervous system that is not due to higher-order language or cognitive factors. One problem in children with (C)APD is spatial difficulty, which has been overlooked despite its significance. Localization is the auditory ability to detect sound sources in space and can help differentiate desired speech from other simultaneous sound sources. The aim of this research was to investigate the effects of auditory lateralization training on speech perception in the presence of noise/competing signals in children suspected to have (C)APD. Subjects and Methods In this analytical interventional study, 60 children suspected to have (C)APD were selected based on multiple auditory processing assessment subtests. They were randomly divided into two groups: a control group (mean age 9.07) and a training group (mean age 9.00). The training program consisted of detecting and pointing to sound sources delivered with interaural time differences under headphones for 12 formal sessions (6 weeks). Spatial word recognition score (WRS) and the monaural selective auditory attention test (mSAAT) were used to follow the effects of the auditory lateralization training. Results This study showed that in the training group, the mSAAT score and spatial WRS in noise improved significantly after the auditory lateralization training (p value ≤ 0.001). Conclusions We used auditory lateralization training for 6 weeks and showed that it can significantly improve speech understanding in noise. Generalization of these results needs further research.

  2. Auditory scene analysis by echolocation in bats.

    PubMed

    Moss, C F; Surlykke, A

    2001-10-01

    Echolocating bats transmit ultrasonic vocalizations and use information contained in the reflected sounds to analyze the auditory scene. Auditory scene analysis, a phenomenon that applies broadly to all hearing vertebrates, involves the grouping and segregation of sounds to perceptually organize information about auditory objects. The perceptual organization of sound is influenced by the spectral and temporal characteristics of acoustic signals. In the case of the echolocating bat, its active control over the timing, duration, intensity, and bandwidth of sonar transmissions directly impacts its perception of the auditory objects that comprise the scene. Here, data are presented from perceptual experiments, laboratory insect capture studies, and field recordings of sonar behavior of different bat species, to illustrate principles of importance to auditory scene analysis by echolocation in bats. In the perceptual experiments, FM bats (Eptesicus fuscus) learned to discriminate between systematic and random delay sequences in echo playback sets. The results of these experiments demonstrate that the FM bat can assemble information about echo delay changes over time, a requirement for the analysis of a dynamic auditory scene. Laboratory insect capture experiments examined the vocal production patterns of flying E. fuscus taking tethered insects in a large room. In each trial, the bats consistently produced echolocation signal groups with a relatively stable repetition rate (within 5%). Similar temporal patterning of sonar vocalizations was also observed in the field recordings from E. fuscus, thus suggesting the importance of temporal control of vocal production for perceptually guided behavior. It is hypothesized that a stable sonar signal production rate facilitates the perceptual organization of echoes arriving from objects at different directions and distances as the bat flies through a dynamic auditory scene. Field recordings of E. fuscus, Noctilio albiventris, N

  3. Selective adaptation to "oddball" sounds by the human auditory system.

    PubMed

    Simpson, Andrew J R; Harper, Nicol S; Reiss, Joshua D; McAlpine, David

    2014-01-29

    Adaptation to both common and rare sounds has been independently reported in neurophysiological studies using probabilistic stimulus paradigms in small mammals. However, the apparent sensitivity of the mammalian auditory system to the statistics of incoming sound has not yet been generalized to task-related human auditory perception. Here, we show that human listeners selectively adapt to novel sounds within scenes unfolding over minutes. Listeners' performance in an auditory discrimination task remains steady for the most common elements within the scene but, after the first minute, performance improves for distinct and rare (oddball) sound elements, at the expense of rare sounds that are relatively less distinct. Our data provide the first evidence of enhanced coding of oddball sounds in a human auditory discrimination task and suggest the existence of an adaptive mechanism that tracks the long-term statistics of sounds and deploys coding resources accordingly. PMID:24478375

  4. The Distributed Auditory Cortex

    PubMed Central

    Winer, Jeffery A.; Lee, Charles C.

    2009-01-01

    A synthesis of cat auditory cortex (AC) organization is presented in which the extrinsic and intrinsic connections interact to derive a unified profile of the auditory stream and use it to direct and modify cortical and subcortical information flow. Thus, the thalamocortical input provides essential sensory information about peripheral stimulus events, which AC redirects locally for feature extraction, and then conveys to parallel auditory, multisensory, premotor, limbic, and cognitive centers for further analysis. The corticofugal output influences areas as remote as the pons and the cochlear nucleus, structures whose effects upon AC are entirely indirect, and has diverse roles in the transmission of information through the medial geniculate body and inferior colliculus. The distributed AC is thus construed as a functional network in which the auditory percept is assembled for subsequent redistribution in sensory, premotor, and cognitive streams contingent on the derived interpretation of the acoustic events. The confluence of auditory and multisensory streams likely precedes cognitive processing of sound. The distributed AC constitutes the largest and arguably the most complete representation of the auditory world. Many facets of this scheme may apply in rodent and primate AC as well. We propose that the distributed auditory cortex contributes to local processing regimes in regions as disparate as the frontal pole and the cochlear nucleus to construct the acoustic percept. PMID:17329049

  5. Detection by real time PCR of walnut allergen coding sequences in processed foods.

    PubMed

    Linacero, Rosario; Ballesteros, Isabel; Sanchiz, Africa; Prieto, Nuria; Iniesto, Elisa; Martinez, Yolanda; Pedrosa, Mercedes M; Muzquiz, Mercedes; Cabanillas, Beatriz; Rovira, Mercè; Burbano, Carmen; Cuadrado, Carmen

    2016-07-01

A quantitative real-time PCR (RT-PCR) method, employing novel primer sets designed on the Jug r 1, Jug r 3, and Jug r 4 allergen-coding sequences, was set up and validated. Its specificity, sensitivity, and applicability were evaluated. The DNA extraction method based on CTAB-phenol-chloroform was best for walnut. RT-PCR allowed specific and accurate amplification of the allergen sequence, and the limit of detection was 2.5 pg of walnut DNA. The method's sensitivity and robustness were confirmed with spiked samples, and the Jug r 3 primers detected up to 100 mg/kg of raw walnut (LOD 0.01%, LOQ 0.05%). Thermal treatment combined with pressure (autoclaving) reduced the yield and amplification (integrity and quality) of walnut DNA. High hydrostatic pressure (HHP) did not produce any effect on walnut DNA amplification. This RT-PCR method showed greater sensitivity and reliability in detecting walnut traces in commercial foodstuffs compared with ELISA assays.

  6. Global Time Dependent Solutions of Stochastically Driven Standard Accretion Disks: Development of Hydrodynamical Code

    NASA Astrophysics Data System (ADS)

    Wani, Naveel; Maqbool, Bari; Iqbal, Naseer; Misra, Ranjeev

    2016-07-01

X-ray binaries and AGNs are powered by accretion discs around compact objects, where the X-rays are emitted from the inner regions and UV emission arises from the relatively cooler outer parts. There is increasing evidence that the variability of the X-rays on different timescales is caused by stochastic fluctuations in the accretion disc at different radii. These fluctuations arise in the outer parts of the disc but propagate inwards, giving rise to X-ray variability and hence providing a natural connection between the X-ray and UV variability. Analytical expressions exist to qualitatively understand the effect of these stochastic variations, but quantitative predictions are only possible through a detailed hydrodynamical study of the global time-dependent solution of the standard accretion disc. We have developed a numerically efficient code incorporating all these effects, which considers gas-pressure-dominated solutions and stochastic fluctuations, including the boundary effect of the last stable orbit.
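
    The inward propagation of fluctuations stirred at different radii is often pictured, in the standard "propagating fluctuations" framework, as local accretion-rate modulations multiplying together as matter drifts inward. A toy sketch under that assumption (this is the generic textbook picture, not the authors' hydrodynamic code, and it ignores the drift-time delays between radii):

```python
import random

def inner_mdot(n_radii, n_steps, amp=0.1, seed=1):
    """Accretion rate at the inner edge as the product of independent
    fractional modulations generated at each radius."""
    rng = random.Random(seed)
    series = []
    for _ in range(n_steps):
        m = 1.0
        for _ in range(n_radii):
            # each annulus multiplies in its own small fluctuation
            m *= 1.0 + amp * rng.uniform(-1.0, 1.0)
        series.append(m)
    return series

mdot = inner_mdot(n_radii=10, n_steps=1000)
```

Because the modulations combine multiplicatively, the inner accretion rate inherits variability from every radius, which is the qualitative coupling the full time-dependent disc code is built to compute quantitatively.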

  7. Detection by real time PCR of walnut allergen coding sequences in processed foods.

    PubMed

    Linacero, Rosario; Ballesteros, Isabel; Sanchiz, Africa; Prieto, Nuria; Iniesto, Elisa; Martinez, Yolanda; Pedrosa, Mercedes M; Muzquiz, Mercedes; Cabanillas, Beatriz; Rovira, Mercè; Burbano, Carmen; Cuadrado, Carmen

    2016-07-01

    A quantitative real-time PCR (RT-PCR) method, employing novel primer sets designed on Jug r 1, Jug r 3, and Jug r 4 allergen-coding sequences, was set up and validated. Its specificity, sensitivity, and applicability were evaluated. The DNA extraction method based on CTAB-phenol-chloroform was best for walnut. RT-PCR allowed a specific and accurate amplification of allergen sequence, and the limit of detection was 2.5 pg of walnut DNA. The method's sensitivity and robustness were confirmed with spiked samples, and Jug r 3 primers detected as little as 100 mg/kg of raw walnut (LOD 0.01%, LOQ 0.05%). Thermal treatment combined with pressure (autoclaving) reduced the yield and amplification (integrity and quality) of walnut DNA. High hydrostatic pressure (HHP) did not produce any effect on walnut DNA amplification. This RT-PCR method showed greater sensitivity and reliability in the detection of walnut traces in commercial foodstuffs compared with ELISA assays. PMID:26920302
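
As a cross-check on the reported detection limits, 100 mg/kg is 100 parts in 10^6, i.e. 0.01% by mass, matching the stated LOD; the 0.05% LOQ corresponds to 500 mg/kg. A one-line conversion (the helper name is hypothetical):

```python
def mg_per_kg_to_percent(mg_per_kg):
    """Convert a mass fraction given in mg/kg (i.e. parts per million)
    to a percentage: 1 kg = 1e6 mg, and percent = fraction * 100."""
    return mg_per_kg / 1_000_000 * 100

lod_percent = mg_per_kg_to_percent(100)  # LOD: 100 mg/kg -> 0.01 %
loq_percent = mg_per_kg_to_percent(500)  # LOQ: 0.05 % corresponds to 500 mg/kg
```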

  8. Distinct Spatiotemporal Response Properties of Excitatory Versus Inhibitory Neurons in the Mouse Auditory Cortex

    PubMed Central

    Maor, Ido; Shalev, Amos; Mizrahi, Adi

    2016-01-01

    In the auditory system, early neural stations such as brain stem are characterized by strict tonotopy, which is used to deconstruct sounds to their basic frequencies. But higher along the auditory hierarchy, as early as primary auditory cortex (A1), tonotopy starts breaking down at local circuits. Here, we studied the response properties of both excitatory and inhibitory neurons in the auditory cortex of anesthetized mice. We used in vivo two-photon targeted cell-attached recordings from identified parvalbumin-positive neurons (PVNs) and their excitatory pyramidal neighbors (PyrNs). We show that PyrNs are locally heterogeneous as characterized by diverse best frequencies, pairwise signal correlations, and response timing. In marked contrast, neighboring PVNs exhibited homogeneous response properties in pairwise signal correlations and temporal responses. The distinct physiological microarchitecture of different cell types is maintained qualitatively in response to natural sounds. Excitatory heterogeneity and inhibitory homogeneity within the same circuit suggest different roles for each population in coding natural stimuli. PMID:27600839

  9. Neural Representation of Concurrent Vowels in Macaque Primary Auditory Cortex

    PubMed Central

    Micheyl, Christophe; Steinschneider, Mitchell

    2016-01-01

    Abstract Successful speech perception in real-world environments requires that the auditory system segregate competing voices that overlap in frequency and time into separate streams. Vowels are major constituents of speech and are composed of frequencies (harmonics) that are integer multiples of a common fundamental frequency (F0). The pitch and identity of a vowel are determined by its F0 and spectral envelope (formant structure), respectively. When two spectrally overlapping vowels differing in F0 are presented concurrently, they can be readily perceived as two separate “auditory objects” with pitches at their respective F0s. A difference in pitch between two simultaneous vowels provides a powerful cue for their segregation, which, in turn, facilitates their individual identification. The neural mechanisms underlying the segregation of concurrent vowels based on pitch differences are poorly understood. Here, we examine neural population responses in macaque primary auditory cortex (A1) to single and double concurrent vowels (/a/ and /i/) that differ in F0 such that they are heard as two separate auditory objects with distinct pitches. We find that neural population responses in A1 can resolve, via a rate-place code, lower harmonics of both single and double concurrent vowels. Furthermore, we show that the formant structures, and hence the identities, of single vowels can be reliably recovered from the neural representation of double concurrent vowels. We conclude that A1 contains sufficient spectral information to enable concurrent vowel segregation and identification by downstream cortical areas. PMID:27294198
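
The pitch cue at work here follows directly from harmonic structure: two vowels with different F0s share only the harmonics at common multiples of both F0s, so resolved low harmonics can be assigned unambiguously to one source. A small sketch with illustrative F0 values (not those used in the study):

```python
def harmonics(f0, fmax=4000.0):
    """Frequencies of all harmonics (integer multiples of F0) up to fmax."""
    return [k * f0 for k in range(1, int(fmax // f0) + 1)]

# Illustrative F0 values (the study's stimuli may differ):
h_a = harmonics(100.0)  # hypothetical /a/ with F0 = 100 Hz
h_i = harmonics(125.0)  # hypothetical /i/ with F0 = 125 Hz

# The two harmonic series coincide only at common multiples (here 500 Hz),
# so resolved low harmonics can be assigned unambiguously to one vowel.
shared = sorted(set(h_a) & set(h_i))
```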

  10. Auditory hallucinations inhibit exogenous activation of auditory association cortex.

    PubMed

    David, A S; Woodruff, P W; Howard, R; Mellers, J D; Brammer, M; Bullmore, E; Wright, I; Andrew, C; Williams, S C

    1996-03-22

    Percepts unaccompanied by a veridical stimulus, such as hallucinations, provide an opportunity for mapping the neural correlates of conscious perception. Functional magnetic resonance imaging (fMRI) can reveal localized changes in blood oxygenation in response to actual as well as imagined sensory stimulation. The safe repeatability of fMRI enabled us to study a patient with schizophrenia while he was experiencing auditory hallucinations and when hallucination-free (with supporting data from a second case). Cortical activation was measured in response to periodic exogenous auditory and visual stimulations using time series regression analysis. Functional brain images were obtained in each hallucination condition both while the patient was on and off antipsychotic drugs. The response of the temporal cortex to exogenous auditory stimulation (speech) was markedly reduced when the patient was experiencing hallucinated voices addressing him, regardless of medication. Visual cortical activation (to flashing lights) remained normal over four scans. From the results of this study and previous work on visual hallucinations we conclude that hallucinations coincide with maximal activation of the sensory and association cortex, specific to the modality of the experience. PMID:8724677

  11. Theoretical and experimental studies of turbo product code with time diversity in free space optical communication.

    PubMed

    Han, Yaoqiang; Dang, Anhong; Ren, Yongxiong; Tang, Junxiong; Guo, Hong

    2010-12-20

    In free space optical communication (FSOC) systems, channel fading caused by atmospheric turbulence seriously degrades system performance. However, channel coding combined with diversity techniques can be exploited to mitigate channel fading. In this paper, based on an experimental study of channel fading effects, we propose to use turbo product code (TPC) as the channel coding scheme, which features good resistance to burst errors and no error floor. Because channel coding alone cannot cope with the burst errors caused by channel fading, interleaving is also used. We investigate the efficiency of interleaving for different interleaving depths and determine the optimum interleaving depth for TPC. Finally, an experimental study of TPC with interleaving is demonstrated, and we show that TPC with interleaving can significantly mitigate channel fading in FSOC systems.
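
A block interleaver of the kind described spreads a burst of fading-induced errors across many code words. A minimal sketch (illustrative only; the paper does not specify this exact implementation):

```python
def interleave(symbols, depth):
    """Block interleaver: view `symbols` as a matrix with `depth` columns
    filled row by row, then read it out column by column."""
    return [s for c in range(depth) for s in symbols[c::depth]]

def deinterleave(symbols, depth):
    """Inverse operation: interleaving again with the transposed
    dimension restores the original order."""
    return interleave(symbols, len(symbols) // depth)

data = list(range(12))
sent = interleave(data, 4)
```

After deinterleaving at the receiver, a run of consecutive channel errors is spread out so that the corrupted symbols land far apart, in different TPC code words, where the row/column decoders can correct them individually.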

  12. Glial Cell Contributions to Auditory Brainstem Development

    PubMed Central

    Cramer, Karina S.; Rubel, Edwin W

    2016-01-01

    Glial cells, previously thought to have generally supporting roles in the central nervous system, are emerging as essential contributors to multiple aspects of neuronal circuit function and development. This review focuses on the contributions of glial cells to the development of auditory pathways in the brainstem. These pathways display specialized synapses and an unusually high degree of precision in circuitry that enables sound source localization. The development of these pathways thus requires highly coordinated molecular and cellular mechanisms. Several classes of glial cells, including astrocytes, oligodendrocytes and microglia, have now been explored in these circuits in both avian and mammalian brainstems. Distinct populations of astrocytes are found over the course of auditory brainstem maturation. Early appearing astrocytes are associated with spatial compartments in the avian auditory brainstem. Factors from late appearing astrocytes promote synaptogenesis and dendritic maturation, and astrocytes remain integral parts of specialized auditory synapses. Oligodendrocytes play a unique role in both birds and mammals in highly regulated myelination essential for proper timing to decipher interaural cues. Microglia arise early in brainstem development and may contribute to maturation of auditory pathways. Together these studies demonstrate the importance of non-neuronal cells in the assembly of specialized auditory brainstem circuits.

  13. Auditory temporal processing skills in musicians with dyslexia.

    PubMed

    Bishop-Liebler, Paula; Welch, Graham; Huss, Martina; Thomson, Jennifer M; Goswami, Usha

    2014-08-01

    The core cognitive difficulty in developmental dyslexia involves phonological processing, but adults and children with dyslexia also have sensory impairments. Impairments in basic auditory processing show particular links with phonological impairments, and recent studies with dyslexic children across languages reveal a relationship between auditory temporal processing and sensitivity to rhythmic timing and speech rhythm. As rhythm is explicit in music, musical training might have a beneficial effect on the auditory perception of acoustic cues to rhythm in dyslexia. Here we took advantage of the presence of musicians with and without dyslexia in musical conservatoires, comparing their auditory temporal processing abilities with those of dyslexic non-musicians matched for cognitive ability. Musicians with dyslexia showed equivalent auditory sensitivity to musicians without dyslexia and also showed equivalent rhythm perception. The data support the view that extensive rhythmic experience initiated during childhood (here in the form of music training) can affect basic auditory processing skills which are found to be deficient in individuals with dyslexia.

  14. [Influence of hypoxia on the human auditory system].

    PubMed

    Lucertini, M; Urbani, L

    1997-02-01

    This paper presents a review of the literature on hypoxia and human auditory mechanisms. It examines and discusses, in particular, the results obtained in studies using pure-tone audiometry and auditory evoked potentials. At present, the two areas that appear most sensitive to hypoxia are the cochlea and, above all, the telencephalic auditory cortex (specifically those sectors dedicated to the cognitive processing of auditory stimulation). However, many other areas that are sensitive to hypoxia, though to a lesser extent, have also been identified in other sectors of the auditory pathway. Particularly worthy of note is the effectiveness of the metabolic compensatory mechanisms that come into play under hypoxic stress, including vasodilation and the presence of metabolic reserves. Nevertheless, a number of questions remain open regarding how the auditory pathway functions under hypoxia; the experimental study of hypoxic hypoxia thus remains an interesting and fruitful research field in audiology.

  15. Leap Frog and Time Step Sub-Cycle Scheme for Coupled Neutronics and Thermal-Hydraulic Codes

    SciTech Connect

    Lu, S.

    2002-07-01

    As a result of advancing TCP/IP-based inter-process communication technology, more and more legacy thermal-hydraulic codes have been coupled with neutronics codes to provide best-estimate capabilities for reactivity-related reactor transient analysis. Most of the coupling schemes are based on closely coupled serial or parallel approaches, so the execution of the coupled codes usually requires significant CPU time when a complicated system is analyzed. A Leap Frog scheme has been used to reduce the run time, with the extent of the decoupling usually determined by trial and error for a specific analysis. It is the intent of this paper to develop a set of general criteria that can be used to invoke the automatic Leap Frog algorithm. The algorithm will not only reduce the run time but also preserve accuracy. The criteria will also serve as the basis of an automatic time step sub-cycle scheme when a sudden reactivity change is introduced and the thermal-hydraulic code is marching with a relatively large time step. (authors)
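
The Leap Frog idea above reduces to a data-exchange schedule: the coupled codes exchange feedback only every few steps, but a sudden reactivity change forces an immediate extra exchange. A toy sketch (the function and its parameters are invented for illustration; it is not the paper's algorithm):

```python
def coupling_schedule(n_steps, leap, sudden_steps=()):
    """Return the time steps at which the neutronics and thermal-hydraulic
    codes exchange data: every `leap` steps under the Leap Frog scheme,
    plus an immediate extra exchange (sub-cycle) at any step where a
    sudden reactivity change is flagged."""
    regular = set(range(0, n_steps, leap))
    forced = {s for s in sudden_steps if 0 <= s < n_steps}
    return sorted(regular | forced)
```

For example, `coupling_schedule(10, 4)` schedules exchanges at steps 0, 4, and 8, while flagging a reactivity jump at step 5 inserts one additional exchange there.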

  16. Auditory processing deficits in reading disabled adults.

    PubMed

    Amitay, Sygal; Ahissar, Merav; Nelken, Israel

    2002-09-01

    The nature of the auditory processing deficit of disabled readers is still an unresolved issue. The quest for a fundamental, nonlinguistic, perceptual impairment has been dominated by the hypothesis that the difficulty lies in processing sequences of stimuli at presentation rates of tens of milliseconds. The present study examined this hypothesis using tasks that require processing of a wide range of stimulus time constants. About a third of the sampled population of disabled readers (classified as "poor auditory processors") had difficulties in most of the tasks tested: detection of frequency differences, detection of tones in narrowband noise, detection of amplitude modulation, detection of the direction of sound sources moving in virtual space, and perception of the lateralized position of tones based on their interaural phase differences. Nevertheless, across-channel integration was intact in these poor auditory processors since comodulation masking release was not reduced. Furthermore, phase locking was presumably intact since binaural masking level differences were normal. In a further examination of temporal processing, participants were asked to discriminate two tones at various intervals where the frequency difference was ten times each individual's frequency just noticeable difference (JND). Under these conditions, poor auditory processors showed no specific difficulty at brief intervals, contrary to predictions under a fast temporal processing deficit assumption. The complementary subgroup of disabled readers who were not poor auditory processors showed some difficulty in this condition when compared with their direct controls. However, they had no difficulty on auditory tasks such as amplitude modulation detection, which presumably taps processing of similar time scales. These two subgroups of disabled readers had similar reading performance but those with a generally poor auditory performance scored lower on some cognitive tests. Taken together, these

  17. Restoration of auditory nerve synapses in cats by cochlear implants.

    PubMed

    Ryugo, D K; Kretzmer, E A; Niparko, J K

    2005-12-01

    Congenital deafness results in abnormal synaptic structure in endings of the auditory nerve. If these abnormalities persist after restoration of auditory nerve activity by a cochlear implant, the processing of time-varying signals such as speech would likely be impaired. We stimulated congenitally deaf cats for 3 months with a six-channel cochlear implant. The device used human speech-processing programs, and cats responded to environmental sounds. Auditory nerve fibers exhibited a recovery of normal synaptic structure in these cats. This rescue of synapses is attributed to a return of spike activity in the auditory nerve and may help explain cochlear implant benefits in childhood deafness. PMID:16322457

  18. [Central auditory prosthesis].

    PubMed

    Lenarz, T; Lim, H; Joseph, G; Reuter, G; Lenarz, M

    2009-06-01

    Deaf patients with severe sensory hearing loss can benefit from a cochlear implant (CI), which stimulates the auditory nerve fibers. However, patients who do not have an intact auditory nerve cannot benefit from a CI. The majority of these patients are neurofibromatosis type 2 (NF2) patients who developed neural deafness due to growth or surgical removal of a bilateral acoustic neuroma. The only current solution is the auditory brainstem implant (ABI), which stimulates the surface of the cochlear nucleus in the brainstem. Although the ABI provides improvement in environmental awareness and lip-reading capabilities, only a few NF2 patients have achieved some limited open set speech perception. In the search for alternative procedures, our research group, in collaboration with Cochlear Ltd. (Australia), developed a human prototype auditory midbrain implant (AMI), which is designed to electrically stimulate the inferior colliculus (IC). The IC has potential as a new target for an auditory prosthesis, as it provides access to neural projections necessary for speech perception as well as a systematic map of spectral information. In this paper the present status of research and development in the field of central auditory prostheses is presented with respect to technology, surgical technique and hearing results, as well as the background concepts of the ABI and AMI. PMID:19517084

  19. Auditory Spatial Layout

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L.; Jenison, Rick

    1995-01-01

    All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.

  20. Navajo Code Talker Joe Morris, Sr. shared insights from his time as a secret World War Two messenger

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Navajo Code Talker Joe Morris, Sr. shared insights from his time as a secret World War Two messenger with his audience at NASA's Dryden Flight Research Center on Nov. 26, 2002. NASA Dryden is located on Edwards Air Force Base in California's Mojave Desert.

  1. Experimental Electrically Reconfigurable Time-Domain Spectral Amplitude Encoding/Decoding in an Optical Code Division Multiple Access System

    NASA Astrophysics Data System (ADS)

    Tainta, Santiago; Erro, María J.; Garde, María J.; Muriel, Miguel A.

    2013-11-01

    An electrically reconfigurable time-domain spectral amplitude encoding/decoding scheme is proposed herein. The setup is based on the concept of temporal pulse shaping, the time-domain dual of spatial arrangements. The transmitter is based on a short-pulse source and uses two conjugate dispersive fiber gratings with an electro-optic intensity modulator placed in between. Proof-of-concept results are shown for an optical pulse train operating at 1.25 Gbps using codes from the Hadamard family with a length of eight chips. The system is electrically reconfigurable, compatible with fiber systems, and permits scalability in the size of the codes by modifying only the modulator velocity.
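
The eight-chip codes mentioned come from a Hadamard matrix, which the Sylvester construction builds recursively by doubling; distinct rows are mutually orthogonal, which is what balanced spectral amplitude decoding exploits. A sketch (generic construction, not the paper's specific code assignment):

```python
def hadamard(n):
    """Sylvester construction of an n-by-n Hadamard matrix (n a power of
    two). Each row of -1/+1 chips is one length-n code; distinct rows
    agree in exactly n/2 chip positions, so their inner product is zero."""
    H = [[1]]
    while len(H) < n:
        H = ([row + row for row in H] +
             [row + [-x for x in row] for row in H])
    return H

codes = hadamard(8)  # an eight-chip Hadamard family, as used in the paper
```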

  2. Coding ill-defined and unknown cause of death is 13 times more frequent in Denmark than in Finland.

    PubMed

    Ylijoki-Sørensen, Seija; Sajantila, Antti; Lalu, Kaisa; Bøggild, Henrik; Boldsen, Jesper Lier; Boel, Lene Warner Thorup

    2014-11-01

    Exact cause and manner of death determination improves legislative safety for the individual and for society and guides aspects of national public health. In the International Classification of Diseases, codes R00-R99 are used for "symptoms, signs and abnormal clinical and laboratory findings, not elsewhere classified" designated as "ill-defined" or "with unknown etiology". The World Health Organisation recommends avoiding the use of ill-defined and unknown causes of death in the death certificate as this terminology does not give any information concerning the possible conditions that led to the death. Thus, the aim of the study was, firstly, to analyse the frequencies of R00-R99-coded deaths in mortality statistics in Finland and in Denmark and, secondly, to compare these and the methods used to investigate the cause of death. To do so, we extracted a random 90% sample of the Finnish death certificates and 100% of the Danish certificates from the national mortality registries for 2000, 2005 and 2010. Subsequently, we analysed the frequencies of forensic and medical autopsies and external clinical examinations of the bodies in R00-R99-coded deaths. The use of R00-R99 codes was significantly higher in Denmark than in Finland; OR 18.6 (95% CI 15.3-22.4; p<0.001) for 2000, OR 9.5 (95% CI 8.0-11.3; p<0.001) for 2005 and OR 13.2 (95% CI 11.1-15.7; p<0.001) for 2010. More than 80% of Danish deaths with R00-R99 codes were over 70 years of age at the time of death. Forensic autopsy was performed in 88.3% of Finnish R00-R99-coded deaths, whereas only 3.5% of Danish R00-R99-coded deaths were investigated with forensic or medical autopsy. The codes that were most used in both countries were R96-R99, meaning "unknown cause of death". In Finland, all of these deaths were investigated with a forensic autopsy. Our study suggests that if all deaths in all age groups with unclear cause of death were systematically investigated with a forensic autopsy, only 2-3/1000 deaths per year

  3. Auditory models for speech analysis

    NASA Astrophysics Data System (ADS)

    Maybury, Mark T.

    This paper reviews the psychophysical basis for auditory models and discusses their application to automatic speech recognition. First, an overview of the human auditory system is presented, followed by a review of current knowledge gleaned from neurological and psychoacoustic experimentation. Next, a general framework describes established peripheral auditory models, which are based on well-understood properties of the peripheral auditory system. This is followed by a discussion of current enhancements to these models to include nonlinearities and synchrony information, as well as other higher auditory functions. Finally, the initial performance of auditory models in the task of speech recognition is examined, and additional applications are mentioned.

  4. Intensity modulation and direct detection Alamouti polarization-time coding for optical fiber transmission systems with polarization mode dispersion

    NASA Astrophysics Data System (ADS)

    Reza, Ahmed Galib; Rhee, June-Koo Kevin

    2016-07-01

    Alamouti space-time coding is modified in the form of polarization-time coding to combat polarization mode dispersion (PMD) impairments while exploiting a polarization diversity multiplex (PDM) gain with simple intensity modulation and direct detection (IM/DD) in optical transmission systems. A theoretical model for the proposed IM/DD Alamouti polarization-time coding (APTC-IM/DD) using a nonreturn-to-zero on-off keying signal shows, surprisingly, that the requirement of channel estimation for decoding can be eliminated in the low-PMD regime when a two-transmitter, two-receiver channel is adopted. Even in the high-PMD regime, the proposed APTC-IM/DD still reveals a coding gain, demonstrating its robustness. In addition, this scheme eliminates the requirements for a polarization state controller, a coherent receiver, and a high-speed analog-to-digital converter at the receiver. Simulation results reveal that the proposed APTC scheme is able to reduce the optical signal-to-noise ratio requirement by ˜3 dB and significantly enhance the PMD tolerance of a PDM-based IM/DD system.
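
The underlying Alamouti block code can be sketched in complex baseband. Note that this is the textbook space-time version, with known flat channel gains at the combiner, not the authors' IM/DD polarization-time variant (which is notable precisely for avoiding channel estimation in the low-PMD regime):

```python
def alamouti_encode(s1, s2):
    """One Alamouti block: slot 1 transmits (s1, s2) on the two branches,
    slot 2 transmits (-conj(s2), conj(s1))."""
    return (s1, s2), (-s2.conjugate(), s1.conjugate())

def alamouti_combine(r1, r2, h1, h2):
    """Linear combining for one receiver over two slots with flat channel
    gains h1, h2 (assumed constant across the block); each estimate equals
    the sent symbol scaled by the real gain |h1|**2 + |h2|**2."""
    s1_hat = h1.conjugate() * r1 + h2 * r2.conjugate()
    s2_hat = h2.conjugate() * r1 - h1 * r2.conjugate()
    return s1_hat, s2_hat
```

The cross terms cancel in the combiner, which is why the two symbols decouple without any matrix inversion.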

  5. A circuit for motor cortical modulation of auditory cortical activity.

    PubMed

    Nelson, Anders; Schneider, David M; Takatoh, Jun; Sakurai, Katsuyasu; Wang, Fan; Mooney, Richard

    2013-09-01

    Normal hearing depends on the ability to distinguish self-generated sounds from other sounds, and this ability is thought to involve neural circuits that convey copies of motor command signals to various levels of the auditory system. Although such interactions at the cortical level are believed to facilitate auditory comprehension during movements and drive auditory hallucinations in pathological states, the synaptic organization and function of circuitry linking the motor and auditory cortices remain unclear. Here we describe experiments in the mouse that characterize circuitry well suited to transmit motor-related signals to the auditory cortex. Using retrograde viral tracing, we established that neurons in superficial and deep layers of the medial agranular motor cortex (M2) project directly to the auditory cortex and that the axons of some of these deep-layer cells also target brainstem motor regions. Using in vitro whole-cell physiology, optogenetics, and pharmacology, we determined that M2 axons make excitatory synapses in the auditory cortex but exert a primarily suppressive effect on auditory cortical neuron activity mediated in part by feedforward inhibition involving parvalbumin-positive interneurons. Using in vivo intracellular physiology, optogenetics, and sound playback, we also found that directly activating M2 axon terminals in the auditory cortex suppresses spontaneous and stimulus-evoked synaptic activity in auditory cortical neurons and that this effect depends on the relative timing of motor cortical activity and auditory stimulation. These experiments delineate the structural and functional properties of a corticocortical circuit that could enable movement-related suppression of auditory cortical activity. PMID:24005287

  6. Surveying Multidisciplinary Aspects in Real-Time Distributed Coding for Wireless Sensor Networks

    PubMed Central

    Braccini, Carlo; Davoli, Franco; Marchese, Mario; Mongelli, Maurizio

    2015-01-01

    Wireless Sensor Networks (WSNs), where a multiplicity of sensors observe a physical phenomenon and transmit their measurements to one or more sinks, pertain to the class of multi-terminal source and channel coding problems of Information Theory. In this category, “real-time” coding is often encountered for WSNs, referring to the problem of finding the minimum distortion (according to a given measure), under transmission power constraints, attainable by encoding and decoding functions, with stringent limits on delay and complexity. On the other hand, the Decision Theory approach seeks to determine the optimal coding/decoding strategies or some of their structural properties. Since encoder(s) and decoder(s) possess different information, though sharing a common goal, the setting here is that of Team Decision Theory. A more pragmatic vision rooted in Signal Processing consists of fixing the form of the coding strategies (e.g., to linear functions) and, consequently, finding the corresponding optimal decoding strategies and the achievable distortion, generally by applying parametric optimization techniques. All approaches have a long history of past investigations and recent results. The goal of the present paper is to provide the taxonomy of the various formulations, a survey of the vast related literature, examples from the authors' own research, and some highlights on the interplay of the different theories. PMID:25633597

  7. Ultrasonic imaging of human tooth using chirp-coded nonlinear time reversal acoustics

    NASA Astrophysics Data System (ADS)

    Santos, Serge Dos; Domenjoud, Mathieu; Prevorovsky, Zdenek

    2010-01-01

    We report in this paper the first use of TR-NEWS, including chirp-coded excitation, applied to ultrasonic imaging of the human tooth. The feasibility of focusing ultrasound at the surface of the human tooth is demonstrated, and the potential of a new echodentography of the dentine-enamel interface using TR-NEWS is discussed.
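
Chirp-coded excitation of the kind used here replaces a short pulse with a linear frequency sweep, trading peak power for duration while axial resolution is recovered by pulse compression at the receiver. A minimal generator (the parameters below are illustrative, not those of the experiment):

```python
import math

def linear_chirp(f0, f1, duration, fs):
    """Sampled linear chirp cos(2*pi*(f0*t + 0.5*k*t**2)); the
    instantaneous frequency f0 + k*t sweeps linearly from f0 to f1."""
    k = (f1 - f0) / duration
    n = int(duration * fs)
    return [math.cos(2.0 * math.pi * (f0 * t + 0.5 * k * t * t))
            for t in (i / fs for i in range(n))]
```

In a coded-excitation chain, the received echo would be correlated against this reference waveform to compress the sweep back into a sharp pulse.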

  8. Multimodal Geographic Information Systems: Adding Haptic and Auditory Display.

    ERIC Educational Resources Information Center

    Jeong, Wooseob; Gluck, Myke

    2003-01-01

    Investigated the feasibility of adding haptic and auditory displays to traditional visual geographic information systems (GISs). Explored differences in user performance, including task completion time and accuracy, and user satisfaction with a multimodal GIS which was implemented with a haptic display, auditory display, and combined display.…

  9. Basic Auditory Processing and Developmental Dyslexia in Chinese

    ERIC Educational Resources Information Center

    Wang, Hsiao-Lan Sharon; Huss, Martina; Hamalainen, Jarmo A.; Goswami, Usha

    2012-01-01

    The present study explores the relationship between basic auditory processing of sound rise time, frequency, duration and intensity, phonological skills (onset-rime and tone awareness, sound blending, RAN, and phonological memory) and reading disability in Chinese. A series of psychometric, literacy, phonological, auditory, and character…

  10. Multimodal Bivariate Thematic Maps: Auditory and Haptic Display.

    ERIC Educational Resources Information Center

    Jeong, Wooseob; Gluck, Myke

    2002-01-01

    Explores the possibility of multimodal bivariate thematic maps by utilizing auditory and haptic (sense of touch) displays. Measured completion time of tasks and the recall (retention) rate in two experiments, and findings confirmed the possibility of using auditory and haptic displays in geographic information systems (GIS). (Author/LRW)

  11. Context dependence of spectro-temporal receptive fields with implications for neural coding.

    PubMed

    Eggermont, Jos J

    2011-01-01

    The spectro-temporal receptive field (STRF) is frequently used to characterize the linear frequency-time filter properties of the auditory system up to the neuron recorded from. STRFs are extremely stimulus dependent, reflecting the strong non-linearities in the auditory system. Changes in the STRF with stimulus type (tonal, noise-like, vocalizations), sound level and spectro-temporal sound density are reviewed here. Effects on STRF shape of task and attention are also briefly reviewed. Models to account for these changes, potential improvements to STRF analysis, and implications for neural coding are discussed. PMID:20123121

  12. Auditory processing--speech, space and auditory objects.

    PubMed

    Scott, Sophie K

    2005-04-01

    There have been recent developments in our understanding of the auditory neuroscience of non-human primates that, to a certain extent, can be integrated with findings from human functional neuroimaging studies. This framework can be used to consider the cortical basis of complex sound processing in humans, including implications for speech perception, spatial auditory processing and auditory scene segregation. PMID:15831402

  13. A new class of auditory warning signals for complex systems: auditory icons.

    PubMed

    Belz, S M; Robinson, G S; Casali, J G

    1999-12-01

    This simulator-based study examined conventional auditory warnings (tonal, nonverbal sounds) and auditory icons (representational, nonverbal sounds), alone and in combination with a dash-mounted visual display, to present information about impending collision situations to commercial motor vehicle operators. Brake response times were measured for impending front-to-rear collision scenarios under 6 display configurations, 2 vehicle speeds, and 2 levels of headway. Accident occurrence was measured for impending side collision scenarios under 2 vehicle speeds, 2 levels of visual workload, 2 auditory displays, absence/presence of mirrors, and absence/presence of a dash-mounted iconic visual display. For both front-to-rear and side collision scenarios, auditory icons elicited significantly improved driver performance over conventional auditory warnings. Driver performance improved when collision warning information was presented through multiple modalities. Brake response times were significantly faster for impending front-to-rear collision scenarios using the longer headway condition. The presence of mirrors significantly reduced the number of accidents for impending side collision scenarios. Subjective preference data indicated that participants preferred multimodal displays over single-modality displays. Actual or potential applications for this research include auditory displays and warnings, information presentation, and the development of alternative user interfaces.

  14. Effects of Multimodal Presentation and Stimulus Familiarity on Auditory and Visual Processing

    ERIC Educational Resources Information Center

    Robinson, Christopher W.; Sloutsky, Vladimir M.

    2010-01-01

    Two experiments examined the effects of multimodal presentation and stimulus familiarity on auditory and visual processing. In Experiment 1, 10-month-olds were habituated to either an auditory stimulus, a visual stimulus, or an auditory-visual multimodal stimulus. Processing time was assessed during the habituation phase, and discrimination of…

  15. [Use of self-organizing neural networks (Kohonen maps) for classification of voice acoustic signals exemplified by the infant voice with and without time-delayed auditory feedback].

    PubMed

    Schönweiler, R; Kaese, S; Möller, S; Rinscheid, A; Ptok, M

    1996-04-01

    Subjective and auditory assessment of the voice is now more commonly being replaced by objective voice analysis. Because of the amount of data available from computer-aided voice analysis, subjective selection and interpretation of single data sets remain a matter of experience of the individual investigator. Since neuronal networks are widely used in telecommunication and speech recognition, we applied self-organizing Kohonen networks to classify voice patterns. During the "learning" phase, the Kohonen map adapts to the patterns of the primary signals obtained. When the trained map is later used, an input signal that falls within the region of a learned primary signal closely resembles that signal. In this study, we recorded newborn and young infant cries using a DAT recorder and a high-quality microphone. The cries were elicited by having the infants wear uncomfortable headphones ("cries of discomfort"). Spectrographic characteristics of the cries were classified by 20-step bark spectra and then applied to the neuronal networks. It was possible to recognize similarities of different cries of the same children and interindividual differences, as well as cries of children with profound hearing loss. In addition, delayed auditory feedback at 80 dB SL was presented to 27 children via headphone using a three-headed tape-recorder as a model for induced individual cry changes. However, it was not possible to classify short-term changes as in a delayed feedback procedure. Nevertheless, neuronal networks may be helpful as an additional tool in spectrographic voice analysis.

  16. GASPS: A time-dependent, one-dimensional, planar gas dynamics computer code

    SciTech Connect

    Pierce, R.E.; Sutton, S.B.; Comfort, W.J. III

    1986-12-05

    GASP is a transient, one-dimensional planar gas dynamic computer code that can be used to calculate the propagation of a shock wave. GASP, developed at LLNL, solves the one-dimensional planar equations governing momentum, mass and energy conservation. The equations are cast in an Eulerian formulation where the mesh is fixed in space, and material flows through it. Thus it is necessary to account for convection of material from one cell to its neighbor.
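
    The cell-to-cell convection bookkeeping described above can be illustrated with a first-order upwind step on a fixed mesh. This is a generic sketch of an Eulerian update, not GASP's actual numerical scheme, and all values are illustrative.

```python
# First-order upwind advection on a fixed (Eulerian) 1-D mesh: an
# illustrative sketch of how material convects from one cell to its
# neighbor. This is NOT the GASP scheme itself, just the general idea.

def advect_upwind(density, velocity, dx, dt):
    """One explicit time step of rho_t + u * rho_x = 0 (u > 0 assumed)."""
    n = len(density)
    new = density[:]
    for i in range(1, n):
        flux_in = velocity * density[i - 1]   # material entering from the left
        flux_out = velocity * density[i]      # material leaving to the right
        new[i] = density[i] + dt / dx * (flux_in - flux_out)
    return new

# A step-function density profile drifts to the right with the flow.
rho = [1.0] * 5 + [0.0] * 5
for _ in range(10):
    rho = advect_upwind(rho, velocity=1.0, dx=1.0, dt=0.5)  # CFL = 0.5
```

    Because the mesh is fixed in space, each cell's update is just a balance of fluxes with its upstream neighbor; the scheme is stable for CFL = u*dt/dx <= 1.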

  17. Auditory Brainstem Response Latency in Noise as a Marker of Cochlear Synaptopathy

    PubMed Central

    Hickox, Ann E.; Bharadwaj, Hari M.; Goldberg, Hannah; Verhulst, Sarah; Liberman, M. Charles; Shinn-Cunningham, Barbara G.

    2016-01-01

    Evidence from animal and human studies suggests that moderate acoustic exposure, causing only transient threshold elevation, can nonetheless cause “hidden hearing loss” that interferes with coding of suprathreshold sound. Such noise exposure destroys synaptic connections between cochlear hair cells and auditory nerve fibers; however, there is no clinical test of this synaptopathy in humans. In animals, synaptopathy reduces the amplitude of auditory brainstem response (ABR) wave-I. Unfortunately, ABR wave-I is difficult to measure in humans, limiting its clinical use. Here, using analogous measurements in humans and mice, we show that the effect of masking noise on the latency of the more robust ABR wave-V mirrors changes in ABR wave-I amplitude. Furthermore, in our human cohort, the effect of noise on wave-V latency predicts perceptual temporal sensitivity. Our results suggest that measures of the effects of noise on ABR wave-V latency can be used to diagnose cochlear synaptopathy in humans. SIGNIFICANCE STATEMENT Although there are suspicions that cochlear synaptopathy affects humans with normal hearing thresholds, no one has yet reported a clinical measure that is a reliable marker of such loss. By combining human and animal data, we demonstrate that the latency of auditory brainstem response wave-V in noise reflects auditory nerve loss. This is the first study of human listeners with normal hearing thresholds that links individual differences observed in behavior and auditory brainstem response timing to cochlear synaptopathy. These results can guide development of a clinical test to reveal this previously unknown form of noise-induced hearing loss in humans. PMID:27030760

  18. The Drosophila Auditory System

    PubMed Central

    Boekhoff-Falk, Grace; Eberl, Daniel F.

    2013-01-01

    Development of a functional auditory system in Drosophila requires specification and differentiation of the chordotonal sensilla of Johnston’s organ (JO) in the antenna, correct axonal targeting to the antennal mechanosensory and motor center (AMMC) in the brain, and synaptic connections to neurons in the downstream circuit. Chordotonal development in JO is functionally complicated by structural, molecular and functional diversity that is not yet fully understood, and construction of the auditory neural circuitry is only beginning to unfold. Here we describe our current understanding of developmental and molecular mechanisms that generate the exquisite functions of the Drosophila auditory system, emphasizing recent progress and highlighting important new questions arising from research on this remarkable sensory system. PMID:24719289

  19. Neural representation of concurrent harmonic sounds in monkey primary auditory cortex: implications for models of auditory scene analysis.

    PubMed

    Fishman, Yonatan I; Steinschneider, Mitchell; Micheyl, Christophe

    2014-09-10

    The ability to attend to a particular sound in a noisy environment is an essential aspect of hearing. To accomplish this feat, the auditory system must segregate sounds that overlap in frequency and time. Many natural sounds, such as human voices, consist of harmonics of a common fundamental frequency (F0). Such harmonic complex tones (HCTs) evoke a pitch corresponding to their F0. A difference in pitch between simultaneous HCTs provides a powerful cue for their segregation. The neural mechanisms underlying concurrent sound segregation based on pitch differences are poorly understood. Here, we examined neural responses in monkey primary auditory cortex (A1) to two concurrent HCTs that differed in F0 such that they are heard as two separate "auditory objects" with distinct pitches. We found that A1 can resolve, via a rate-place code, the lower harmonics of both HCTs, a prerequisite for deriving their pitches and for their perceptual segregation. Onset asynchrony between the HCTs enhanced the neural representation of their harmonics, paralleling their improved perceptual segregation in humans. Pitches of the concurrent HCTs could also be temporally represented by neuronal phase-locking at their respective F0s. Furthermore, a model of A1 responses using harmonic templates could qualitatively reproduce psychophysical data on concurrent sound segregation in humans. Finally, we identified a possible intracortical homolog of the "object-related negativity" recorded noninvasively in humans, which correlates with the perceptual segregation of concurrent sounds. Findings indicate that A1 contains sufficient spectral and temporal information for segregating concurrent sounds based on differences in pitch.
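
    As a back-of-envelope illustration of the rate-place idea (our sketch, not the study's analysis): the low-order harmonics of two complexes with F0s in a 4:5 ratio land at distinct frequencies, so a tonotopic rate code can represent both sets separately, while higher harmonics begin to collide. The F0 values are illustrative.

```python
# Low-order harmonics of two concurrent harmonic complex tones (HCTs).
# F0s of 200 and 250 Hz (a 4:5 ratio) are illustrative values.
def harmonics(f0_hz, count):
    """Frequencies of the first `count` harmonics of a complex tone."""
    return [f0_hz * k for k in range(1, count + 1)]

hct_a = harmonics(200.0, 4)   # 200, 400, 600, 800 Hz
hct_b = harmonics(250.0, 4)   # 250, 500, 750, 1000 Hz

# The resolved (low) harmonics are all distinct between the two tones...
overlap_low = set(hct_a) & set(hct_b)
# ...but extending either series produces collisions (e.g. at 1000 Hz).
overlap_high = set(harmonics(200.0, 5)) & set(harmonics(250.0, 4))
```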

  20. Auditory perceptual simulation: Simulating speech rates or accents?

    PubMed

    Zhou, Peiyun; Christianson, Kiel

    2016-07-01

    When readers engage in Auditory Perceptual Simulation (APS) during silent reading, they mentally simulate characteristics of voices attributed to a particular speaker or a character depicted in the text. Previous research found that auditory perceptual simulation of a faster native English speaker during silent reading led to shorter reading times than auditory perceptual simulation of a slower non-native English speaker. Yet, it was uncertain whether this difference was triggered by the different speech rates of the speakers, or by the difficulty of simulating an unfamiliar accent. The current study investigates this question by comparing faster Indian-English speech and slower American-English speech in the auditory perceptual simulation paradigm. Analyses of reading times of individual words and the full sentence reveal that the auditory perceptual simulation effect again modulated reading rate, and auditory perceptual simulation of the faster Indian-English speech led to faster reading rates compared to auditory perceptual simulation of the slower American-English speech. The comparison between this experiment and the data from Zhou and Christianson (2016) demonstrates further that the "speakers'" speech rates, rather than the difficulty of simulating a non-native accent, are the primary mechanism underlying auditory perceptual simulation effects. PMID:27177077

  2. A parallel code to calculate rate-state seismicity evolution induced by time dependent, heterogeneous Coulomb stress changes

    NASA Astrophysics Data System (ADS)

    Cattania, C.; Khalid, F.

    2016-09-01

    The estimation of space and time-dependent earthquake probabilities, including aftershock sequences, has received increased attention in recent years, and Operational Earthquake Forecasting systems are currently being implemented in various countries. Physics based earthquake forecasting models compute time dependent earthquake rates based on Coulomb stress changes, coupled with seismicity evolution laws derived from rate-state friction. While early implementations of such models typically performed poorly compared to statistical models, recent studies indicate that significant performance improvements can be achieved by considering the spatial heterogeneity of the stress field and secondary sources of stress. However, the major drawback of these methods is a rapid increase in computational costs. Here we present a code to calculate seismicity induced by time dependent stress changes. An important feature of the code is the possibility to include aleatoric uncertainties due to the existence of multiple receiver faults and to the finite grid size, as well as epistemic uncertainties due to the choice of input slip model. To compensate for the growth in computational requirements, we have parallelized the code for shared memory systems (using OpenMP) and distributed memory systems (using MPI). Performance tests indicate that these parallelization strategies lead to a significant speedup for problems with different degrees of complexity, ranging from those which can be solved on standard multicore desktop computers, to those requiring a small cluster, to a large simulation that can be run using up to 1500 cores.
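
    The per-cell parallelism such a code exploits can be sketched as follows: the instantaneous rate change in each grid cell depends only on that cell's Coulomb stress step, so cells partition cleanly across workers. The actual code is parallelized with OpenMP/MPI in a compiled language; Python's concurrent.futures stands in here, and A_SIGMA is an assumed rate-state parameter, not a value from the paper.

```python
# Embarrassingly parallel sketch of a stress-to-seismicity calculation.
from concurrent.futures import ThreadPoolExecutor
import math

A_SIGMA = 0.04  # MPa; illustrative a*sigma constitutive parameter

def rate_ratio(d_cfs_mpa):
    """Instantaneous seismicity-rate ratio R/r0 after a Coulomb stress
    step, following Dieterich-style rate-state scaling."""
    return math.exp(d_cfs_mpa / A_SIGMA)

def seismicity_map(stress_steps, workers=4):
    """Evaluate every grid cell independently across a worker pool."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(rate_ratio, stress_steps))

# Heterogeneous Coulomb stress steps (MPa) on four example grid cells.
rates = seismicity_map([0.0, 0.1, -0.1, 0.2])
```

    Because cells are independent, the same decomposition scales from a multicore desktop (shared memory) to a cluster (distributed memory), which is the speedup pattern the abstract reports.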

  3. Change Detection in Auditory Textures.

    PubMed

    Boubenec, Yves; Lawlor, Jennifer; Shamma, Shihab; Englitz, Bernhard

    2016-01-01

    Many natural sounds have spectrotemporal signatures only on a statistical level, e.g. wind, fire or rain. While their local structure is highly variable, the spectrotemporal statistics of these auditory textures can be used for recognition. This suggests the existence of a neural representation of these statistics. To explore their encoding, we investigated the detectability of changes in the spectral statistics in relation to the properties of the change. To achieve precise parameter control, we designed a minimal sound texture--a modified cloud of tones--which retains the central property of auditory textures: solely statistical predictability. Listeners had to rapidly detect a change in the frequency marginal probability of the tone cloud occurring at a random time. The size of change as well as the time available to sample the original statistics were found to correlate positively with performance and negatively with reaction time, suggesting the accumulation of noisy evidence. In summary, we quantified dynamic aspects of change detection in statistically defined contexts, and found evidence of integration of statistical information.
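
    The task logic lends itself to a small simulation (a sketch under assumed parameters, not the authors' stimulus code): draw tones i.i.d. from a frequency-bin marginal, change the marginal, and compare a post-change sample against the statistics sampled before the change.

```python
# Toy version of the change-detection setting: tones drawn i.i.d. from
# a frequency-bin marginal whose probabilities change mid-stream.
# Bin count, marginals and sample sizes are illustrative assumptions.
import random

random.seed(0)
BINS = 8

def draw_tones(marginal, n):
    return random.choices(range(BINS), weights=marginal, k=n)

def histogram(tones):
    counts = [0] * BINS
    for t in tones:
        counts[t] += 1
    return [c / len(tones) for c in counts]

def l1_distance(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

uniform = [1.0 / BINS] * BINS           # pre-change marginal
skewed = [0.3, 0.3] + [0.4 / 6] * 6     # probability mass moved to low bins

baseline = histogram(draw_tones(uniform, 400))   # sampled pre-change statistics
same = l1_distance(baseline, histogram(draw_tones(uniform, 100)))
changed = l1_distance(baseline, histogram(draw_tones(skewed, 100)))
```

    A larger change in the marginal, or a longer baseline sample, widens the gap between `changed` and `same`, mirroring the reported effects of change size and sampling time on performance.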

  4. Phonetic categorization in auditory word perception.

    PubMed

    Ganong, W F

    1980-02-01

    To investigate the interaction in speech perception of auditory information and lexical knowledge (in particular, knowledge of which phonetic sequences are words), acoustic continua varying in voice onset time were constructed so that for each acoustic continuum, one of the two possible phonetic categorizations made a word and the other did not. For example, one continuum ranged between the word dash and the nonword tash; another used the nonword dask and the word task. In two experiments, subjects showed a significant lexical effect--that is, a tendency to make phonetic categorizations that make words. This lexical effect was greater at the phoneme boundary (where auditory information is ambiguous) than at the ends of the continua. Hence the lexical effect must arise at a stage of processing sensitive to both lexical knowledge and auditory information.
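
    The boundary-versus-endpoint pattern falls out of a simple Bayesian toy model (our illustration, not Ganong's analysis): multiply an auditory likelihood by a lexical prior, and the prior moves the decision most where the likelihood is near 0.5. Boundary, slope and prior values are made up.

```python
# Toy Bayesian illustration of the lexical effect: an auditory
# likelihood for /d/ vs /t/ is combined with a prior favoring the
# categorization that forms a word (e.g. 'dash' over 'tash').
import math

def p_d_given_vot(vot_ms, boundary=30.0, slope=0.3):
    """Auditory likelihood that the sound is /d/ (short VOT)."""
    return 1.0 / (1.0 + math.exp(slope * (vot_ms - boundary)))

def posterior_d(vot_ms, lexical_prior_d=0.7):
    """Combine the auditory likelihood with the lexical prior."""
    like_d = p_d_given_vot(vot_ms)
    num = like_d * lexical_prior_d
    return num / (num + (1.0 - like_d) * (1.0 - lexical_prior_d))

# Lexical shift at the category boundary vs at a continuum endpoint:
shift_boundary = posterior_d(30.0) - p_d_given_vot(30.0)
shift_endpoint = posterior_d(10.0) - p_d_given_vot(10.0)
```

    At the boundary the likelihood is 0.5 and the prior pulls the response all the way to 0.7, while at the endpoint the near-certain auditory evidence leaves almost nothing for the prior to move.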

  5. The auditory hallucination: a phenomenological survey.

    PubMed

    Nayani, T H; David, A S

    1996-01-01

    A comprehensive semi-structured questionnaire was administered to 100 psychotic patients who had experienced auditory hallucinations. The aim was to extend the phenomenology of the hallucination into areas of both form and content and also to guide future theoretical development. All subjects heard 'voices' talking to or about them. The location of the voice, its characteristics and the nature of address were described. Precipitants and alleviating factors plus the effect of the hallucinations on the sufferer were identified. Other hallucinatory experiences, thought insertion and insight were examined for their inter-relationships. A pattern emerged of increasing complexity of the auditory-verbal hallucination over time by a process of accretion, with the addition of more voices and extended dialogues, and more intimacy between subject and voice. Such evolution seemed to relate to the lessening of distress and improved coping. These findings should inform both neurological and cognitive accounts of the pathogenesis of auditory hallucinations in psychotic disorders. PMID:8643757

  6. ACT-ARA: Code System for the Calculation of Changes in Radiological Source Terms with Time

    1988-02-01

    The program calculates the source term activity as a function of time for parent isotopes as well as daughters. Also, at each time, the "probable release" is produced. Finally, the program determines the time integrated probable release for each isotope over the time period of interest.
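
    The parent/daughter bookkeeping described above can be sketched with the two-member Bateman solution. Decay constants, the release fraction, and the trapezoidal integration below are illustrative assumptions, not ACT-ARA's actual method.

```python
# Two-member decay chain (parent -> daughter) and a time-integrated
# "probable release", in the spirit of the description above.
import math

def parent_activity(a0, lam_p, t):
    """Parent activity A(t) = A0 * exp(-lambda_p * t)."""
    return a0 * math.exp(-lam_p * t)

def daughter_activity(a0, lam_p, lam_d, t):
    """Bateman solution: daughter activity grown in from a pure parent."""
    return a0 * lam_d / (lam_d - lam_p) * (
        math.exp(-lam_p * t) - math.exp(-lam_d * t))

def integrated_release(activity_fn, release_fraction, times):
    """Trapezoidal time-integral of release_fraction * activity(t)."""
    rel = [release_fraction * activity_fn(t) for t in times]
    return sum((rel[i] + rel[i + 1]) / 2.0 * (times[i + 1] - times[i])
               for i in range(len(rel) - 1))

# Example: integrate the probable release of a parent isotope
# (A0 = 100, lambda = 0.1 per unit time, 1% release fraction).
times = [float(t) for t in range(0, 101)]
total = integrated_release(lambda t: parent_activity(100.0, 0.1, t), 0.01, times)
```

    Over a long window the integral approaches release_fraction * A0 / lambda (here about 10), which is the kind of time-integrated probable release the code reports per isotope.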

  7. Auditory Channel Problems.

    ERIC Educational Resources Information Center

    Mann, Philip H.; Suiter, Patricia A.

    This teacher's guide contains a list of general auditory problem areas where students have the following problems: (a) inability to find or identify source of sound; (b) difficulty in discriminating sounds of words and letters; (c) difficulty with reproducing pitch, rhythm, and melody; (d) difficulty in selecting important from unimportant sounds;…

  8. Estimation of the reaction times in tasks of varying difficulty from the phase coherence of the auditory steady-state response using the least absolute shrinkage and selection operator analysis.

    PubMed

    Yokota, Yusuke; Igarashi, Yasuhiko; Okada, Masato; Naruse, Yasushi

    2015-01-01

    Quantitative estimation of the workload in the brain is an important factor for helping to predict the behavior of humans. The reaction time when performing a difficult task is longer than that when performing an easy task. Thus, the reaction time reflects the workload in the brain. In this study, we employed an N-back task in order to regulate the degree of difficulty of the tasks, and then estimated the reaction times from the brain activity. The brain activity that we used to estimate the reaction time was the auditory steady-state response (ASSR) evoked by a 40-Hz click sound. Fifteen healthy participants took part in the present study and magnetoencephalogram (MEG) responses were recorded using a 148-channel magnetometer system. The least absolute shrinkage and selection operator (LASSO), which is a type of sparse modeling, was employed to estimate the reaction times from the ASSR recorded by MEG. The LASSO showed higher estimation accuracy than the least squares method. This result indicates that LASSO overcame the over-fitting to the learning data. Furthermore, the LASSO selected channels not only in the parietal region, but also in the frontal and occipital regions. Since the ASSR is evoked by auditory stimuli, it is usually large in the parietal region. However, since LASSO also selected channels in regions outside the parietal region, this suggests that workload-related neural activity occurs in many brain regions. In the real world, it is more practical to use a wearable electroencephalography device with a limited number of channels than to use MEG. Therefore, determining which brain areas should be measured is essential. The channels selected by the sparse modeling method are informative for determining which brain areas to measure. PMID:26737821
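
    LASSO's channel-selection behavior, which the abstract contrasts with least squares, can be seen in a minimal coordinate-descent sketch (our illustration with made-up data, not the MEG pipeline): the L1 penalty drives a weakly informative feature to exactly zero, where ordinary least squares would keep it.

```python
# Minimal LASSO via cyclic coordinate descent, minimizing
#   (1/(2n)) * ||y - X b||^2 + alpha * ||b||_1.
# Data and alpha are made up to show the sparsity property.

def soft_threshold(z, g):
    """Proximal operator of the L1 penalty."""
    return z - g if z > g else (z + g if z < -g else 0.0)

def lasso(X, y, alpha, n_sweeps=100):
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_sweeps):
        for j in range(p):
            # correlation of feature j with the partial residual
            rho = sum(X[i][j] * (y[i]
                                 - sum(X[i][k] * beta[k] for k in range(p))
                                 + X[i][j] * beta[j]) for i in range(n))
            zj = sum(X[i][j] ** 2 for i in range(n))
            beta[j] = soft_threshold(rho, alpha * n) / zj
    return beta

# Feature 0 drives y strongly; feature 1 only weakly (coefficient 0.3).
X = [[-3, 1], [-2, -1], [-1, 1], [0, -1], [1, 1], [2, -1], [3, 1]]
y = [2 * a + 0.3 * b for a, b in X]
beta = lasso(X, y, alpha=0.5)   # the penalty zeroes the weak feature
```

    With alpha = 0.5 the weak feature's weight is exactly zero, while least squares would estimate it at 0.3; shrinking alpha toward zero recovers the least-squares solution. This is the mechanism by which LASSO selects a subset of informative channels.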

  9. The neglected neglect: auditory neglect.

    PubMed

    Gokhale, Sankalp; Lahoti, Sourabh; Caplan, Louis R

    2013-08-01

    Whereas visual and somatosensory forms of neglect are commonly recognized by clinicians, auditory neglect is often not assessed and therefore neglected. The auditory cortical processing system can be functionally classified into 2 distinct pathways. These 2 distinct functional pathways deal with recognition of sound ("what" pathway) and the directional attributes of the sound ("where" pathway). Lesions of higher auditory pathways produce distinct clinical features. Clinical bedside evaluation of auditory neglect is often difficult because of coexisting neurological deficits and the binaural nature of auditory inputs. In addition, auditory neglect and auditory extinction may show varying degrees of overlap, which makes the assessment even harder. Shielding one ear from the other as well as separating the ear from space is therefore critical for accurate assessment of auditory neglect. This can be achieved by use of specialized auditory tests (dichotic tasks and sound localization tests) for accurate interpretation of deficits. Herein, we have reviewed auditory neglect with an emphasis on the functional anatomy, clinical evaluation, and basic principles of specialized auditory tests.

  10. Developmental Changes in Auditory Temporal Perception.

    ERIC Educational Resources Information Center

    Morrongiello, Barbara A.; And Others

    1984-01-01

    Infants, preschoolers, and adults were tested to determine the shortest time interval at which they would respond to the precedence effect, an auditory phenomenon produced by presenting the same sound through two loudspeakers with the input to one loudspeaker delayed in relation to the other. Results revealed developmental differences in threshold…

  12. Cerebral responses to local and global auditory novelty under general anesthesia.

    PubMed

    Uhrig, Lynn; Janssen, David; Dehaene, Stanislas; Jarraya, Béchir

    2016-11-01

    Primate brains can detect a variety of unexpected deviations in auditory sequences. The local-global paradigm dissociates two hierarchical levels of auditory predictive coding by examining the brain responses to first-order (local) and second-order (global) sequence violations. Using the macaque model, we previously demonstrated that, in the awake state, local violations cause focal auditory responses while global violations activate a brain circuit comprising prefrontal, parietal and cingulate cortices. Here we used the same local-global auditory paradigm to clarify the encoding of the hierarchical auditory regularities in anesthetized monkeys and compared their brain responses to those obtained in the awake state as measured with fMRI. Both propofol (a GABAA agonist) and ketamine (an NMDA antagonist) left intact or even enhanced the cortical response to auditory inputs. The local effect vanished during propofol anesthesia and shifted spatially during ketamine anesthesia compared with wakefulness. Under increasing levels of propofol, we observed a progressive disorganization of the global effect in prefrontal, parietal and cingulate cortices and its complete suppression under ketamine anesthesia. Anesthesia also suppressed thalamic activations to the global effect. These results suggest that anesthesia preserves initial auditory processing, but disturbs both short-term and long-term auditory predictive coding mechanisms. The disorganization of auditory novelty processing under anesthesia relates to a loss of thalamic responses to novelty and to a disruption of higher-order functional cortical networks in parietal, prefrontal and cingulate cortices. PMID:27502046

  13. Maturation of human auditory cortex: implications for speech perception.

    PubMed

    Moore, Jean K

    2002-05-01

    This project traced the maturation of the human auditory cortex from midgestation to young adulthood, using immunostaining of axonal neurofilaments to determine the time of onset of rapid conduction. The study identified 3 developmental periods, each characterized by maturation of a different axonal system. During the perinatal period (3rd trimester to 4th postnatal month), neurofilament expression occurs only in axons of the marginal layer. These axons drive the structural and functional development of cells in the deeper cortical layers, but do not relay external stimuli. In early childhood (6 months to 5 years), maturing thalamocortical afferents to the deeper cortical layers are the first source of input to the auditory cortex from lower levels of the auditory system. During later childhood (5 to 12 years), maturation of commissural and association axons in the superficial cortical layers allows communication between different subdivisions of the auditory cortex, thus forming a basis for more complex cortical processing of auditory stimuli. PMID:12018354

  14. Comparisons of memory for nonverbal auditory and visual sequential stimuli.

    PubMed

    McFarland, D J; Cacace, A T

    1995-01-01

    Properties of auditory and visual sensory memory were compared by examining subjects' recognition performance of randomly generated binary auditory sequential frequency patterns and binary visual sequential color patterns within a forced-choice paradigm. Experiment 1 demonstrated serial-position effects in auditory and visual modalities consisting of both primacy and recency effects. Experiment 2 found that retention of auditory and visual information was remarkably similar when assessed across a 10s interval. Experiments 3 and 4, taken together, showed that the recency effect in sensory memory is affected more by the type of response required (recognition vs. reproduction) than by the sensory modality employed. These studies suggest that auditory and visual sensory memory stores for nonverbal stimuli share similar properties with respect to serial-position effects and persistence over time.

  15. Altered auditory function in rats exposed to hypergravic fields

    NASA Technical Reports Server (NTRS)

    Jones, T. A.; Hoffman, L.; Horowitz, J. M.

    1982-01-01

    The effect of an orthodynamic hypergravic field of 6 G on the brainstem auditory projections was studied in rats. The brain temperature and EEG activity were recorded in the rats during 6 G orthodynamic acceleration and auditory brainstem responses were used to monitor auditory function. Results show that all animals exhibited auditory brainstem responses which indicated impaired conduction and transmission of brainstem auditory signals during the exposure to the 6 G acceleration field. Significant increases in central conduction time were observed for peaks 3N, 4P, 4N, and 5P (N = negative, P = positive), while the absolute latency values for these same peaks were also significantly increased. It is concluded that these results, along with those for fields below 4 G (Jones and Horowitz, 1981), indicate that impaired function proceeds in a rostro-caudal progression as field strength is increased.

  16. Detection of almond allergen coding sequences in processed foods by real time PCR.

    PubMed

    Prieto, Nuria; Iniesto, Elisa; Burbano, Carmen; Cabanillas, Beatriz; Pedrosa, Mercedes M; Rovira, Mercè; Rodríguez, Julia; Muzquiz, Mercedes; Crespo, Jesus F; Cuadrado, Carmen; Linacero, Rosario

    2014-06-18

    The aim of this work was to develop and analytically validate a quantitative RT-PCR method, using novel primer sets designed on Pru du 1, Pru du 3, Pru du 4, and Pru du 6 allergen-coding sequences, and contrast the sensitivity and specificity of these probes. The temperature and/or pressure processing influence on the ability to detect these almond allergen targets was also analyzed. All primers allowed a specific and accurate amplification of these sequences. The specificity was assessed by amplifying DNA from almond, different Prunus species and other common plant food ingredients. The detection limit was 1 ppm in unprocessed almond kernels. The method's robustness and sensitivity were confirmed using spiked samples. Thermal treatment under pressure (autoclave) reduced the yield and amplifiability of almond DNA; however, high-hydrostatic-pressure treatments did not produce such effects. Compared with ELISA assay outcomes, this RT-PCR showed higher sensitivity to detect almond traces in commercial foodstuffs. PMID:24857239

  17. Auditory Brainstem Gap Responses Start to Decline in Middle Age Mice: A Novel Physiological Biomarker for Age-Related Hearing Loss

    PubMed Central

    Williamson, Tanika T.; Zhu, Xiaoxia; Walton, Joseph P.; Frisina, Robert D.

    2014-01-01

    The CBA/CaJ mouse strain's auditory function is normal during the early phases of life and gradually declines over its lifespan, much like human age-related hearing loss (ARHL), but on a mouse life cycle “time frame”. This pattern of ARHL is relatively similar to that of most humans: difficult to clinically diagnose at its onset, and currently not treatable medically. To address the challenge of early diagnosis, CBA mice were used for the present study to analyze the beginning stages and functional onset biomarkers of ARHL. The results from Auditory Brainstem Response (ABR) audiogram and Gap-in-noise (GIN) ABR tests were compared for two groups of mice of different ages, young adult and middle age. ABR peak components from the middle age group displayed minor changes in audibility, but had a significantly higher prolonged peak latency and decreased peak amplitude in response to temporal gaps in comparison to the young adult group. The results for the younger subjects revealed gap thresholds and recovery rates that were comparable to previous studies of auditory neural gap coding. Our findings suggest that age-linked degeneration of the peripheral and brainstem auditory system is already beginning in middle age, allowing for the possibility of preventative biomedical or hearing protection measures to be implemented as a possibility for attenuating further damage to the auditory system due to ARHL. PMID:25307161

  18. Auditory brainstem gap responses start to decline in mice in middle age: a novel physiological biomarker for age-related hearing loss.

    PubMed

    Williamson, Tanika T; Zhu, Xiaoxia; Walton, Joseph P; Frisina, Robert D

    2015-07-01

    The auditory function of the CBA/CaJ mouse strain is normal during the early phases of life and gradually declines over its lifespan, much like human age-related hearing loss (ARHL) but within the "time frame" of a mouse life cycle. This pattern of ARHL is similar to that of most humans: difficult to diagnose clinically at its onset and currently not treatable medically. To address the challenge of early diagnosis, we use CBA mice to analyze the initial stages and functional onset biomarkers of ARHL. The results from Auditory Brainstem Response (ABR) audiogram and Gap-in-noise (GIN) ABR tests were compared for two groups of mice of different ages, namely young adult and middle age. ABR peak components from the middle age group displayed minor changes in audibility but had a significantly higher prolonged peak latency and decreased peak amplitude in response to temporal gaps in comparison with the young adult group. The results for the younger subjects revealed gap thresholds and recovery rates that were comparable with previous studies of auditory neural gap coding. Our findings suggest that age-linked degeneration of the peripheral and brainstem auditory system begins in middle age, allowing for the possibility of preventative biomedical or hearing protection measures to be implemented in order to attenuate further damage to the auditory system attributable to ARHL.

  19. Auditory Brainstem Response Improvements in Hyperbillirubinemic Infants

    PubMed Central

    Abdollahi, Farzaneh Zamiri; Manchaiah, Vinaya; Lotfi, Yones

    2016-01-01

    Background and Objectives Hyperbilirubinemia in infants has been associated with neuronal damage, including damage to the auditory system. Some researchers have suggested that bilirubin-induced auditory neuronal damage may be temporary and reversible. This study aimed to investigate auditory neuropathy and the reversibility of auditory abnormalities in hyperbilirubinemic infants. Subjects and Methods The study participants included 41 full-term hyperbilirubinemic infants (mean age 39.24 days) with normal birth weight (3,200-3,700 grams) who were admitted to hospital for hyperbilirubinemia, and 39 normal infants (mean age 35.54 days) without hyperbilirubinemia or other hearing loss risk factors, included to rule out maturational changes. All infants in the hyperbilirubinemic group had a serum bilirubin level above 20 milligrams per deciliter and had undergone one blood exchange transfusion. Hearing was evaluated twice for each infant: first after hyperbilirubinemia treatment and before hospital discharge, and again three months after the first evaluation. Hearing evaluations included transient evoked otoacoustic emission (TEOAE) screening and auditory brainstem response (ABR) threshold tracing. Results The TEOAE and ABR results of the control group and the TEOAE results of the hyperbilirubinemic group did not change significantly from the first to the second evaluation. However, the ABR results of the hyperbilirubinemic group improved significantly from the first to the second assessment (p=0.025). Conclusions The results suggest that bilirubin-induced auditory neuronal damage can be reversible over time, so we suggest that infants with hyperbilirubinemia who fail their first hearing tests should be reevaluated after 3 months of treatment. PMID:27144228

  20. Real-time implementation of a speech digitization algorithm combining time-domain harmonic scaling and adaptive residual coding, volume 2

    NASA Astrophysics Data System (ADS)

    Melsa, J. L.; Mills, J. D.; Arora, A. A.

    1983-06-01

    This report describes the results of a fifteen-month study of the real-time implementation of an algorithm combining time-domain harmonic scaling and Adaptive Residual Coding at a transmission bit rate of 16 kb/s. The modifications of this encoding algorithm as originally presented by Melsa and Pande to allow real-time implementation are described in detail. A non real-time FORTRAN simulation using a sixteen-bit word length was developed and tested to establish feasibility. The hardware implementation of a full-duplex, real-time system has demonstrated that this algorithm is capable of producing toll quality speech digitization. This report has been divided into two volumes. The second volume discusses details of the hardware implementation, schematics for the system and operating instructions.

  1. Real-time implementation of a speech digitization algorithm combining time-domain harmonic scaling and adaptive residual coding, volume 1

    NASA Astrophysics Data System (ADS)

    Melsa, J. L.; Mills, J. D.; Arora, A. A.

    1983-06-01

    This report describes the results of a fifteen-month study of the real-time implementation of an algorithm combining time-domain harmonic scaling and Adaptive Residual Coding at a transmission bit rate of 16 kb/s. The modifications of this encoding algorithm as originally presented by Melsa and Pande to allow real-time implementation are described in detail. A non real-time FORTRAN simulation using a sixteen-bit word length was developed and tested to establish feasibility. The hardware implementation of a full-duplex, real-time system has demonstrated that this algorithm is capable of producing toll quality speech digitization. This report has been divided into two volumes. The first volume discusses the algorithm modifications and FORTRAN simulation. The details of the hardware implementation, schematics for the system and operating instructions are included in Volume 2 of this final report.

  2. Fast wave propagation in auditory cortex of an awake cat using a chronic microelectrode array

    NASA Astrophysics Data System (ADS)

    Witte, Russell S.; Rousche, Patrick J.; Kipke, Daryl R.

    2007-06-01

    We investigated fast wave propagation in auditory cortex of an alert cat using a chronically implanted microelectrode array. A custom, real-time imaging template exhibited wave dynamics within the 33-microwire array (3 mm²) during ten recording sessions spanning 1 month post implant. Images were based on the spatial arrangement of peri-stimulus time histograms at each recording site in response to auditory stimuli consisting of tone pips between 1 and 10 kHz at 75 dB SPL. Functional images portray stimulus-locked spiking activity and exhibit waves of excitation and inhibition that evolve during the onset, sustained and offset period of the tones. In response to 5 kHz, for example, peak excitation occurred at 27 ms after onset and again at 15 ms following tone offset. Variability of the position of the centroid of excitation during ten recording sessions reached a minimum at 31 ms post onset (σ = 125 µm) and 18 ms post offset (σ = 145 µm), suggesting a fine place/time representation of the stimulus in the cortex. The dynamics of these fast waves also depended on stimulus frequency, likely reflecting the tonotopicity in auditory cortex projected from the cochlea. Peak wave velocities of 0.2 m s-1 were also consistent with those reported across horizontal layers of cat visual cortex. The fine resolution offered by microimaging may be critical for delivering optimal coding strategies used with an auditory prosthesis. Based on the initial results, future studies seek to determine the relevance of these waves to sensory perception and behavior. The work was performed at the Department of Bioengineering, Arizona State University, ECG 334 MS-9709, Tempe, AZ 85287-9709, USA.
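The imaging template described above arranges peri-stimulus time histograms (PSTHs) by electrode position and tracks a centroid of excitation across the array over time. A minimal sketch of that computation is below; the array shapes, site coordinates, and function names are hypothetical illustrations, not the authors' code:

```python
import numpy as np

def psth(spike_times, window=0.1, bin_width=0.001):
    """Peri-stimulus time histogram: spike counts per time bin (seconds)."""
    bins = np.arange(0.0, window + bin_width, bin_width)
    counts, _ = np.histogram(spike_times, bins=bins)
    return counts

def excitation_centroid(rates, positions):
    """Rate-weighted centroid of activity across electrode positions.

    rates     : (n_sites,) firing rate at one time bin
    positions : (n_sites, 2) x/y electrode coordinates in mm
    """
    w = rates / rates.sum()
    return w @ positions

# Toy example: three spikes binned into a PSTH at one site.
counts = psth(np.array([0.005, 0.012, 0.013]))
print(counts.sum())  # all 3 spikes fall inside the 100 ms window

# Toy 3-site example: activity concentrated at the middle electrode
# pulls the centroid to x = 1.0 mm.
positions = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
rates = np.array([1.0, 8.0, 1.0])
print(excitation_centroid(rates, positions))
```

Tracking the centroid bin-by-bin over the tone's onset, sustained, and offset periods would yield the wave trajectories the study images.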

  3. Learning Dictionaries of Sparse Codes of 3D Movements of Body Joints for Real-Time Human Activity Understanding

    PubMed Central

    Qi, Jin; Yang, Zhiyong

    2014-01-01

    Real-time human activity recognition is essential for human-robot interactions for assisted healthy independent living. Most previous work in this area is performed on traditional two-dimensional (2D) videos, and both global and local methods have been used. Since 2D videos are sensitive to changes in lighting condition, view angle, and scale, researchers have begun to explore applications of 3D information to human activity understanding in recent years. Unfortunately, features that work well on 2D videos usually do not perform well on 3D videos, and there is no consensus on what 3D features should be used. Here we propose a model of human activity recognition based on 3D movements of body joints. Our method has three steps: learning dictionaries of sparse codes of 3D movements of joints, sparse coding, and classification. In the first step, space-time volumes of 3D movements of body joints are obtained via dense sampling, and independent component analysis is then performed to construct a dictionary of sparse codes for each activity. In the second step, the space-time volumes are projected to the dictionaries and a set of sparse histograms of the projection coefficients are constructed as feature representations of the activities. Finally, the sparse histograms are used as inputs to a support vector machine to recognize human activities. We tested this model on three databases of human activities and found that it outperforms the state-of-the-art algorithms. Thus, this model can be used for real-time human activity recognition in many applications. PMID:25473850
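The pipeline this abstract describes, learning a per-activity dictionary, projecting space-time volumes onto it, and histogramming the projection coefficients as features, can be sketched in outline. The snippet below is a simplified stand-in: an SVD-based dictionary replaces the authors' independent component analysis, the final SVM classifier is omitted, and all data shapes and names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def learn_dictionary(volumes, n_atoms=8):
    """Stand-in for ICA dictionary learning: the top singular vectors of
    the centered space-time volumes serve as dictionary atoms."""
    X = volumes - volumes.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[:n_atoms]                        # (n_atoms, dim)

def encode(volumes, dictionary, n_bins=5):
    """Project volumes onto the dictionary and histogram the coefficients
    to form a normalized feature vector."""
    coeffs = volumes @ dictionary.T            # (n_samples, n_atoms)
    hist, _ = np.histogram(coeffs, bins=n_bins, range=(-8, 8))
    return hist / hist.sum()

# Toy data: two "activities" with different joint-movement statistics
# (30-dimensional flattened space-time volumes, 200 samples each).
walk = rng.normal(0.0, 1.0, size=(200, 30))
wave = rng.normal(0.0, 1.0, size=(200, 30)) + np.linspace(0, 2, 30)

D = learn_dictionary(np.vstack([walk, wave]))
f_walk, f_wave = encode(walk, D), encode(wave, D)
print(np.round(f_walk, 2), np.round(f_wave, 2))
```

In the paper's full method, one dictionary is learned per activity and the resulting histograms feed a support vector machine; the sketch only shows the encode stage producing distinct feature histograms for distinct movement statistics.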

  4. Coding of azimuthal directions via time-compensated combination of celestial compass cues.

    PubMed

    Pfeiffer, Keram; Homberg, Uwe

    2007-06-01

    Many animals use the sun as a reference for spatial orientation [1-3]. In addition to sun position, the sky provides two other sources of directional information, a color gradient [4] and a polarization pattern [5]. Work on insects has predominantly focused on celestial polarization as an orientation cue [6, 7]. Relying on sky polarization alone, however, poses the following two problems: E vector orientations in the sky are not suited to distinguish between the solar and antisolar hemisphere of the sky, and the polarization pattern changes with changing solar elevation during the day [8, 9]. Here, we present neurons that overcome both problems in a locust's brain. The spiking activity of these neurons depends (1) on the E vector orientation of dorsally presented polarized light, (2) on the azimuthal, i.e., horizontal, direction, and (3) on the wavelength of an unpolarized light source. Their tuning to these stimuli matches the distribution of a UV/green chromatic contrast as well as the polarization of natural skylight and compensates for changes in solar elevation during the day. The neurons are, therefore, suited to code for solar azimuth by concurrent combination of signals from the spectral gradient, intensity gradient, and polarization pattern of the sky.

  5. Anthropomorphic Coding of Speech and Audio: A Model Inversion Approach

    NASA Astrophysics Data System (ADS)

    Feldbauer, Christian; Kubin, Gernot; Kleijn, W. Bastiaan

    2005-12-01

    Auditory modeling is a well-established methodology that provides insight into human perception and that facilitates the extraction of signal features that are most relevant to the listener. The aim of this paper is to provide a tutorial on perceptual speech and audio coding using an invertible auditory model. In this approach, the audio signal is converted into an auditory representation using an invertible auditory model. The auditory representation is quantized and coded. Upon decoding, it is then transformed back into the acoustic domain. This transformation converts a complex distortion criterion into a simple one, thus facilitating quantization with low complexity. We briefly review past work on auditory models and describe in more detail the components of our invertible model and its inversion procedure, that is, the method to reconstruct the signal from the output of the auditory model. We summarize attempts to use the auditory representation for low-bit-rate coding. Our approach also allows the exploitation of the inherent redundancy of the human auditory system for the purpose of multiple description (joint source-channel) coding.
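The transform-quantize-invert chain described here can be illustrated with a toy stand-in: a plain FFT plays the role of the (far more elaborate) invertible auditory model, the transform coefficients are uniformly quantized, and the inverse transform reconstructs the signal. This is only a schematic of the coding chain, not the authors' model:

```python
import numpy as np

def encode_decode(x, step=0.05):
    """Transform-quantize-invert sketch: forward transform (stand-in for
    the invertible auditory model), uniform scalar quantization of the
    coefficients, and inverse transform back to the signal domain."""
    X = np.fft.rfft(x)                                  # "analysis" stage
    Xq = step * np.round(X.real / step) \
         + 1j * step * np.round(X.imag / step)          # quantize & code
    return np.fft.irfft(Xq, n=len(x))                   # inversion stage

# A 5 Hz sinusoid survives the quantized round trip with small error,
# because the distortion criterion is applied in the transform domain.
t = np.linspace(0, 1, 512, endpoint=False)
x = np.sin(2 * np.pi * 5 * t)
y = encode_decode(x)
print(np.max(np.abs(x - y)) < 0.05)
```

The point of the paper's approach is that applying a simple (e.g. uniform) quantizer in the auditory-model domain corresponds to a perceptually meaningful distortion criterion in the acoustic domain; the FFT here is merely a placeholder transform.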

  6. Vision contingent auditory pitch aftereffects.

    PubMed

    Teramoto, Wataru; Kobayashi, Maori; Hidaka, Souta; Sugita, Yoichi

    2013-08-01

    Visual motion aftereffects can occur contingent on arbitrary sounds. Two circles, placed side by side, were alternately presented, and the onsets were accompanied by tone bursts of high and low frequencies, respectively. After a few minutes of exposure to the visual apparent motion with the tones, a circle blinking at a fixed location was perceived as a lateral motion in the same direction as the previously exposed apparent motion (Teramoto et al. in PLoS One 5:e12255, 2010). In the present study, we attempted to reverse this contingency (pitch aftereffects contingent on visual information). Results showed that after prolonged exposure to the audio-visual stimuli, the apparent visual motion systematically affected the perceived pitch of the auditory stimuli. When the leftward apparent visual motion was paired with the high-low-frequency sequence during the adaptation phase, a test tone sequence was more frequently perceived as a high-low-pitch sequence when the leftward apparent visual motion was presented and vice versa. Furthermore, the effect was specific for the exposed visual field and did not transfer to the other side, thus ruling out an explanation in terms of simple response bias. These results suggest that new audiovisual associations can be established within a short time, and visual information processing and auditory processing can mutually influence each other. PMID:23727883

  7. 14 CFR 234.8 - Calculation of on-time performance codes.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...: (1) Based on reportable flight data provided to the Department, calculate the percentage of on-time... reportable flight, except those scheduled to operate three times or less during a month. In addition, each...-stop flights, or portion thereof, that the carrier holds out to the public through a CRS, the...

  8. 14 CFR 234.8 - Calculation of on-time performance codes.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...: (1) Based on reportable flight data provided to the Department, calculate the percentage of on-time... reportable flight, except those scheduled to operate three times or less during a month. In addition, each...-stop flights, or portion thereof, that the carrier holds out to the public through a CRS, the...

  9. 14 CFR 234.8 - Calculation of on-time performance codes.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...: (1) Based on reportable flight data provided to the Department, calculate the percentage of on-time... reportable flight, except those scheduled to operate three times or less during a month. In addition, each...-stop flights, or portion thereof, that the carrier holds out to the public through a CRS, the...

  10. 14 CFR 234.8 - Calculation of on-time performance codes.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...: (1) Based on reportable flight data provided to the Department, calculate the percentage of on-time... reportable flight, except those scheduled to operate three times or less during a month. In addition, each...-stop flights, or portion thereof, that the carrier holds out to the public through a CRS, the...

  11. 14 CFR 234.8 - Calculation of on-time performance codes.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...: (1) Based on reportable flight data provided to the Department, calculate the percentage of on-time... reportable flight, except those scheduled to operate three times or less during a month. In addition, each...-stop flights, or portion thereof, that the carrier holds out to the public through a CRS, the...

  12. Auditory object cognition in dementia

    PubMed Central

    Goll, Johanna C.; Kim, Lois G.; Hailstone, Julia C.; Lehmann, Manja; Buckley, Aisling; Crutch, Sebastian J.; Warren, Jason D.

    2011-01-01

    The cognition of nonverbal sounds in dementia has been relatively little explored. Here we undertook a systematic study of nonverbal sound processing in patient groups with canonical dementia syndromes comprising clinically diagnosed typical amnestic Alzheimer's disease (AD; n = 21), progressive nonfluent aphasia (PNFA; n = 5), logopenic progressive aphasia (LPA; n = 7) and aphasia in association with a progranulin gene mutation (GAA; n = 1), and in healthy age-matched controls (n = 20). Based on a cognitive framework treating complex sounds as ‘auditory objects’, we designed a novel neuropsychological battery to probe auditory object cognition at early perceptual (sub-object), object representational (apperceptive) and semantic levels. All patients had assessments of peripheral hearing and general neuropsychological functions in addition to the experimental auditory battery. While a number of aspects of auditory object analysis were impaired across patient groups and were influenced by general executive (working memory) capacity, certain auditory deficits had some specificity for particular dementia syndromes. Patients with AD had a disproportionate deficit of auditory apperception but preserved timbre processing. Patients with PNFA had salient deficits of timbre and auditory semantic processing, but intact auditory size and apperceptive processing. Patients with LPA had a generalised auditory deficit that was influenced by working memory function. In contrast, the patient with GAA showed substantial preservation of auditory function, but a mild deficit of pitch direction processing and a more severe deficit of auditory apperception. The findings provide evidence for separable stages of auditory object analysis and separable profiles of impaired auditory object cognition in different dementia syndromes. PMID:21689671

  13. Attention Modulates the Auditory Cortical Processing of Spatial and Category Cues in Naturalistic Auditory Scenes

    PubMed Central

    Renvall, Hanna; Staeren, Noël; Barz, Claudia S.; Ley, Anke; Formisano, Elia

    2016-01-01

    This combined fMRI and MEG study investigated brain activations during listening and attending to natural auditory scenes. We first recorded, using in-ear microphones, vocal non-speech sounds, and environmental sounds that were mixed to construct auditory scenes containing two concurrent sound streams. During the brain measurements, subjects attended to one of the streams while spatial acoustic information of the scene was either preserved (stereophonic sounds) or removed (monophonic sounds). Compared to monophonic sounds, stereophonic sounds evoked larger blood-oxygenation-level-dependent (BOLD) fMRI responses in the bilateral posterior superior temporal areas, independent of which stimulus attribute the subject was attending to. This finding is consistent with the functional role of these regions in the (automatic) processing of auditory spatial cues. Additionally, significant differences in the cortical activation patterns depending on the target of attention were observed. Bilateral planum temporale and inferior frontal gyrus were preferentially activated when attending to stereophonic environmental sounds, whereas when subjects attended to stereophonic voice sounds, the BOLD responses were larger at the bilateral middle superior temporal gyrus and sulcus, previously reported to show voice sensitivity. In contrast, the time-resolved MEG responses were stronger for mono- than stereophonic sounds in the bilateral auditory cortices at ~360 ms after the stimulus onset when attending to the voice excerpts within the combined sounds. The observed effects suggest that during the segregation of auditory objects from the auditory background, spatial sound cues together with other relevant temporal and spectral cues are processed in an attention-dependent manner at the cortical locations generally involved in sound recognition. 
More synchronous neuronal activation during monophonic than stereophonic sound processing, as well as (local) neuronal inhibitory mechanisms in

  14. Stochastic undersampling steepens auditory threshold/duration functions: implications for understanding auditory deafferentation and aging

    PubMed Central

    Marmel, Frédéric; Rodríguez-Mendoza, Medardo A.; Lopez-Poveda, Enrique A.

    2015-01-01

    It has long been known that some listeners experience hearing difficulties out of proportion with their audiometric losses. Notably, some older adults as well as auditory neuropathy patients have temporal-processing and speech-in-noise intelligibility deficits not accountable for by elevated audiometric thresholds. The study of these hearing deficits has been revitalized by recent studies that show that auditory deafferentation comes with aging and can occur even in the absence of an audiometric loss. The present study builds on the stochastic undersampling principle proposed by Lopez-Poveda and Barrios (2013) to account for the perceptual effects of auditory deafferentation. Auditory threshold/duration functions were measured for broadband noises that were stochastically undersampled to various degrees. Stimuli with and without undersampling were equated for overall energy in order to focus on the changes that undersampling elicited in the stimulus waveforms rather than on its effects on overall stimulus energy. Stochastic undersampling impaired the detection of short sounds (<20 ms). The detection of long sounds (>50 ms) did not change or improved, depending on the degree of undersampling. The results for short sounds show that stochastic undersampling, and hence presumably deafferentation, can account for the steeper threshold/duration functions observed in auditory neuropathy patients and older adults with (near) normal audiometry. This suggests that deafferentation might be diagnosed using pure-tone audiometry with short tones. It further suggests that the auditory system of audiometrically normal older listeners might not be “slower than normal”, as is commonly thought, but simply less well afferented. Finally, the results for both short and long sounds support the probabilistic theories of detectability that challenge the idea that auditory threshold occurs by integration of sound energy over time. PMID:26029098
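One plausible reading of the stochastic undersampling manipulation, randomly discarding waveform samples and then rescaling so that total energy matches the original stimulus, can be sketched as follows. The keep probability and the rescaling rule are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

def stochastic_undersample(x, keep_prob):
    """Randomly zero out waveform samples, then rescale so the
    undersampled stimulus has the same overall energy as the original."""
    mask = rng.random(x.size) < keep_prob
    y = x * mask
    y *= np.sqrt(np.sum(x ** 2) / np.sum(y ** 2))   # equate total energy
    return y

noise = rng.standard_normal(1000)                   # broadband-noise stand-in
half = stochastic_undersample(noise, keep_prob=0.5)
print(np.isclose(np.sum(noise ** 2), np.sum(half ** 2)))  # energies match
```

Equating energy this way isolates the waveform disruption itself, mirroring the study's design: any detection change must come from the altered temporal structure, not from a level cue.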

  15. Monte Carlo N-Particle Transport Code System To Simulate Time-Analysis Quantities.

    2012-04-15

    Version: 00 US DOE 10CFR810 Jurisdiction. The Monte Carlo simulation of correlation measurements that rely on the detection of fast neutrons and photons from fission requires that particle emissions and interactions following a fission event be described as close to reality as possible. The -PoliMi extension to MCNP and to MCNPX was developed to simulate correlated-particle and the subsequent interactions as close as possible to the physical behavior. Initially, MCNP-PoliMi, a modification of MCNP4C, was developed. The first version was developed in 2001-2002 and released in early 2004 to the Radiation Safety Information Computational Center (RSICC). It was developed for research purposes, to simulate correlated counts in organic scintillation detectors, sensitive to fast neutrons and gamma rays. Originally, the field of application was nuclear safeguards; however subsequent improvements have enhanced the ability to model measurements in other research fields as well. During 2010-2011 the -PoliMi modification was ported into MCNPX-2.7.0, leading to the development of MCNPX-PoliMi. Now the -PoliMi v2.0 modifications are distributed as a patch to MCNPX-2.7.0 which currently is distributed in the RSICC PACKAGE BCC-004 MCNP6_BETA2/MCNP5/MCNPX. Also included in the package is MPPost, a versatile code that provides simulated detector response. By taking advantage of the modifications in MCNPX-PoliMi, MPPost can provide an accurate simulation of the detector response for a variety of detection scenarios.

  16. Wakefield Computations for the CLIC PETS using the Parallel Finite Element Time-Domain Code T3P

    SciTech Connect

    Candel, A; Kabel, A.; Lee, L.; Li, Z.; Ng, C.; Schussman, G.; Ko, K.; Syratchev, I.; /CERN

    2009-06-19

    In recent years, SLAC's Advanced Computations Department (ACD) has developed the high-performance parallel 3D electromagnetic time-domain code, T3P, for simulations of wakefields and transients in complex accelerator structures. T3P is based on advanced higher-order Finite Element methods on unstructured grids with quadratic surface approximation. Optimized for large-scale parallel processing on leadership supercomputing facilities, T3P allows simulations of realistic 3D structures with unprecedented accuracy, aiding the design of the next generation of accelerator facilities. Applications to the Compact Linear Collider (CLIC) Power Extraction and Transfer Structure (PETS) are presented.

  17. 14 CFR 234.9 - Reporting of on-time performance codes.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... deliver, or arrange to have delivered, to each system vendor, as defined in 14 CFR part 255, the on-time... (AVIATION PROCEEDINGS) ECONOMIC REGULATIONS AIRLINE SERVICE QUALITY PERFORMANCE REPORTS § 234.9 Reporting...

  18. 14 CFR 234.9 - Reporting of on-time performance codes.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... deliver, or arrange to have delivered, to each system vendor, as defined in 14 CFR part 255, the on-time... (AVIATION PROCEEDINGS) ECONOMIC REGULATIONS AIRLINE SERVICE QUALITY PERFORMANCE REPORTS § 234.9 Reporting...

  19. 14 CFR 234.9 - Reporting of on-time performance codes.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... deliver, or arrange to have delivered, to each system vendor, as defined in 14 CFR part 255, the on-time... (AVIATION PROCEEDINGS) ECONOMIC REGULATIONS AIRLINE SERVICE QUALITY PERFORMANCE REPORTS § 234.9 Reporting...

  20. 14 CFR 234.9 - Reporting of on-time performance codes.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... deliver, or arrange to have delivered, to each system vendor, as defined in 14 CFR part 255, the on-time... (AVIATION PROCEEDINGS) ECONOMIC REGULATIONS AIRLINE SERVICE QUALITY PERFORMANCE REPORTS § 234.9 Reporting...

  1. 14 CFR 234.9 - Reporting of on-time performance codes.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... deliver, or arrange to have delivered, to each system vendor, as defined in 14 CFR part 255, the on-time... (AVIATION PROCEEDINGS) ECONOMIC REGULATIONS AIRLINE SERVICE QUALITY PERFORMANCE REPORTS § 234.9 Reporting...

  2. Real-time video streaming using H.264 scalable video coding (SVC) in multihomed mobile networks: a testbed approach

    NASA Astrophysics Data System (ADS)

    Nightingale, James; Wang, Qi; Grecos, Christos

    2011-03-01

    Users of the next generation wireless paradigm known as multihomed mobile networks expect satisfactory quality of service (QoS) when accessing streamed multimedia content. The recent H.264 Scalable Video Coding (SVC) extension to the Advanced Video Coding standard (AVC), offers the facility to adapt real-time video streams in response to the dynamic conditions of multiple network paths encountered in multihomed wireless mobile networks. Nevertheless, preexisting streaming algorithms were mainly proposed for AVC delivery over multipath wired networks and were evaluated by software simulation. This paper introduces a practical, hardware-based testbed upon which we implement and evaluate real-time H.264 SVC streaming algorithms in a realistic multihomed wireless mobile networks environment. We propose an optimised streaming algorithm with multi-fold technical contributions. Firstly, we extended the AVC packet prioritisation schemes to reflect the three-dimensional granularity of SVC. Secondly, we designed a mechanism for evaluating the effects of different streamer 'read ahead window' sizes on real-time performance. Thirdly, we took account of the previously unconsidered path switching and mobile networks tunnelling overheads encountered in real-world deployments. Finally, we implemented a path condition monitoring and reporting scheme to facilitate the intelligent path switching. The proposed system has been experimentally shown to offer a significant improvement in PSNR of the received stream compared with representative existing algorithms.

  3. Early hominin auditory capacities.

    PubMed

    Quam, Rolf; Martínez, Ignacio; Rosa, Manuel; Bonmatí, Alejandro; Lorenzo, Carlos; de Ruiter, Darryl J; Moggi-Cecchi, Jacopo; Conde Valverde, Mercedes; Jarabo, Pilar; Menter, Colin G; Thackeray, J Francis; Arsuaga, Juan Luis

    2015-09-01

    Studies of sensory capacities in past life forms have offered new insights into their adaptations and lifeways. Audition is particularly amenable to study in fossils because it is strongly related to physical properties that can be approached through their skeletal structures. We have studied the anatomy of the outer and middle ear in the early hominin taxa Australopithecus africanus and Paranthropus robustus and estimated their auditory capacities. Compared with chimpanzees, the early hominin taxa are derived toward modern humans in their slightly shorter and wider external auditory canal, smaller tympanic membrane, and lower malleus/incus lever ratio, but they remain primitive in the small size of their stapes footplate. Compared with chimpanzees, both early hominin taxa show a heightened sensitivity to frequencies between 1.5 and 3.5 kHz and an occupied band of maximum sensitivity that is shifted toward slightly higher frequencies. The results have implications for sensory ecology and communication, and suggest that the early hominin auditory pattern may have facilitated an increased emphasis on short-range vocal communication in open habitats. PMID:26601261

  4. Early hominin auditory capacities

    PubMed Central

    Quam, Rolf; Martínez, Ignacio; Rosa, Manuel; Bonmatí, Alejandro; Lorenzo, Carlos; de Ruiter, Darryl J.; Moggi-Cecchi, Jacopo; Conde Valverde, Mercedes; Jarabo, Pilar; Menter, Colin G.; Thackeray, J. Francis; Arsuaga, Juan Luis

    2015-01-01

    Studies of sensory capacities in past life forms have offered new insights into their adaptations and lifeways. Audition is particularly amenable to study in fossils because it is strongly related to physical properties that can be approached through their skeletal structures. We have studied the anatomy of the outer and middle ear in the early hominin taxa Australopithecus africanus and Paranthropus robustus and estimated their auditory capacities. Compared with chimpanzees, the early hominin taxa are derived toward modern humans in their slightly shorter and wider external auditory canal, smaller tympanic membrane, and lower malleus/incus lever ratio, but they remain primitive in the small size of their stapes footplate. Compared with chimpanzees, both early hominin taxa show a heightened sensitivity to frequencies between 1.5 and 3.5 kHz and an occupied band of maximum sensitivity that is shifted toward slightly higher frequencies. The results have implications for sensory ecology and communication, and suggest that the early hominin auditory pattern may have facilitated an increased emphasis on short-range vocal communication in open habitats. PMID:26601261

  6. Using Facebook to Reach People Who Experience Auditory Hallucinations

    PubMed Central

    Brian, Rachel Marie; Ben-Zeev, Dror

    2016-01-01

    Background Auditory hallucinations (eg, hearing voices) are relatively common and underreported false sensory experiences that may produce distress and impairment. A large proportion of those who experience auditory hallucinations go unidentified and untreated. Traditional engagement methods oftentimes fall short in reaching the diverse population of people who experience auditory hallucinations. Objective The objective of this proof-of-concept study was to examine the viability of leveraging Web-based social media as a method of engaging people who experience auditory hallucinations and to evaluate their attitudes toward using social media platforms as a resource for Web-based support and technology-based treatment. Methods We used Facebook advertisements to recruit individuals who experience auditory hallucinations to complete an 18-item Web-based survey focused on issues related to auditory hallucinations and technology use in American adults. We systematically tested multiple elements of the advertisement and survey layout including image selection, survey pagination, question ordering, and advertising targeting strategy. Each element was evaluated sequentially and the most cost-effective strategy was implemented in the subsequent steps, eventually deriving an optimized approach. Three open-ended question responses were analyzed using conventional inductive content analysis. Coded responses were quantified into binary codes, and frequencies were then calculated. Results Recruitment netted a total sample of N=264 over a 6-week period. Ninety-seven participants fully completed all measures at a total cost of $8.14 per participant across testing phases. Systematic adjustments to advertisement design, survey layout, and targeting strategies improved data quality and cost efficiency. People were willing to provide information on what triggered their auditory hallucinations along with strategies they use to cope, as well as provide suggestions to others who experience

  7. Clinical assessment of auditory dysfunction.

    PubMed Central

    Thomas, W G

    1982-01-01

    Many drugs, chemical substances and agents are potentially toxic to the human auditory system. The extent of toxicity depends on numerous factors. With few exceptions, toxicity in the auditory system affects various organs or cells within the cochlea or vestibular system, with brain stem and other central nervous system involvement reported with some chemicals and agents. This ototoxicity usually presents as a decrease in auditory sensitivity, tinnitus and/or vertigo or loss of balance. Classical and newer audiological techniques used in clinical assessment are beneficial in specifying the site of lesion in the cochlea, although auditory test results, themselves, give little information regarding possible pathology or etiology within the cochlea. Typically, ototoxicity results in high frequency hearing loss, progressive as a function of frequency, usually accompanied by tinnitus and occasionally by vertigo or loss of balance. Auditory testing protocols are necessary to document this loss in auditory function. PMID:7044778

  8. Design of time-pulse coded optoelectronic neuronal elements for nonlinear transformation and integration

    NASA Astrophysics Data System (ADS)

    Krasilenko, Vladimir G.; Nikolsky, Alexander I.; Lazarev, Alexander A.; Lazareva, Maria V.

    2008-03-01

    The paper demonstrates the relevance of neurophysiologically motivated neuron arrays with flexibly programmable functions and operations, with the possibility of selecting the required accuracy and type of nonlinear transformation and learning. We consider neuron designs and simulation results for multichannel spatio-temporal algebraic accumulation and integration of optical signals. Advantages for nonlinear transformation and summation-integration are shown. The offered circuits are simple and can have intelligent properties such as learning and adaptation. The integrator-neuron is based on CMOS current mirrors and comparators. Performance figures: consumable power 100...500 μW; signal period 0.1...1 ms; input optical signal power 0.2...20 μW; time delays less than 1 μs; number of optical signals 2...10; integration time 10...100 signal periods; integration accuracy (error) about 1%. Various modifications of the neuron-integrators with improved performance for different applications are considered in the paper.

  9. Auditory interfaces: The human perceiver

    NASA Technical Reports Server (NTRS)

    Colburn, H. Steven

    1991-01-01

    A brief introduction to the basic auditory abilities of the human perceiver with particular attention toward issues that may be important for the design of auditory interfaces is presented. The importance of appropriate auditory inputs to observers with normal hearing is probably related to the role of hearing as an omnidirectional, early warning system and to its role as the primary vehicle for communication of strong personal feelings.

  10. Bar Code Reader

    NASA Astrophysics Data System (ADS)

    Clair, Jean J.

    1980-05-01

    The bar code system will be used in every market and supermarket. The code, which is normalised in the US and Europe (EAN code), gives information on price, storage, and nature, and allows real-time management of the shop.

  11. Multigroup Time-Independent Neutron Transport Code System for Plane or Spherical Geometry.

    1986-12-01

    Version 00 PALLAS-PL/SP solves multigroup time-independent one-dimensional neutron transport problems in plane or spherical geometry. The problems solved are subject to a variety of boundary conditions or a distributed source. General anisotropic scattering problems are treated for solving deep-penetration problems in which angle-dependent neutron spectra are calculated in detail.

  12. Short Time-Scale Sensory Coding in S1 during Discrimination of Whisker Vibrotactile Sequences

    PubMed Central

    Miyashita, Toshio; Lee, Daniel J.; Smith, Katherine A.; Feldman, Daniel E.

    2016-01-01

    Rodent whisker input consists of dense microvibration sequences that are often temporally integrated for perceptual discrimination. Whether primary somatosensory cortex (S1) participates in temporal integration is unknown. We trained rats to discriminate whisker impulse sequences that varied in single-impulse kinematics (5–20-ms time scale) and mean speed (150-ms time scale). Rats appeared to use the integrated feature, mean speed, to guide discrimination in this task, consistent with similar prior studies. Despite this, 52% of S1 units, including 73% of units in L4 and L2/3, encoded sequences at fast time scales (≤20 ms, mostly 5–10 ms), accurately reflecting single impulse kinematics. 17% of units, mostly in L5, showed weaker impulse responses and a slow firing rate increase during sequences. However, these units did not effectively integrate whisker impulses, but instead combined weak impulse responses with a distinct, slow signal correlated to behavioral choice. A neural decoder could identify sequences from fast unit spike trains and behavioral choice from slow units. Thus, S1 encoded fast time scale whisker input without substantial temporal integration across whisker impulses. PMID:27574970

  13. An Effect of Spatial-Temporal Association of Response Codes: Understanding the Cognitive Representations of Time

    ERIC Educational Resources Information Center

    Vallesi, Antonino; Binns, Malcolm A.; Shallice, Tim

    2008-01-01

    The present study addresses the question of how such an abstract concept as time is represented by our cognitive system. Specifically, the aim was to assess whether temporal information is cognitively represented through left-to-right spatial coordinates, as already shown for other ordered sequences (e.g., numbers). In Experiment 1, the…

  14. Short Time-Scale Sensory Coding in S1 during Discrimination of Whisker Vibrotactile Sequences.

    PubMed

    McGuire, Leah M; Telian, Gregory; Laboy-Juárez, Keven J; Miyashita, Toshio; Lee, Daniel J; Smith, Katherine A; Feldman, Daniel E

    2016-08-01

    Rodent whisker input consists of dense microvibration sequences that are often temporally integrated for perceptual discrimination. Whether primary somatosensory cortex (S1) participates in temporal integration is unknown. We trained rats to discriminate whisker impulse sequences that varied in single-impulse kinematics (5-20-ms time scale) and mean speed (150-ms time scale). Rats appeared to use the integrated feature, mean speed, to guide discrimination in this task, consistent with similar prior studies. Despite this, 52% of S1 units, including 73% of units in L4 and L2/3, encoded sequences at fast time scales (≤20 ms, mostly 5-10 ms), accurately reflecting single impulse kinematics. 17% of units, mostly in L5, showed weaker impulse responses and a slow firing rate increase during sequences. However, these units did not effectively integrate whisker impulses, but instead combined weak impulse responses with a distinct, slow signal correlated to behavioral choice. A neural decoder could identify sequences from fast unit spike trains and behavioral choice from slow units. Thus, S1 encoded fast time scale whisker input without substantial temporal integration across whisker impulses. PMID:27574970

  15. Simulations for Full Unit-memory and Partial Unit-memory Convolutional Codes with Real-time Minimal-byte-error Probability Decoding Algorithm

    NASA Technical Reports Server (NTRS)

    Vo, Q. D.

    1984-01-01

    A program which was written to simulate Real Time Minimal-Byte-Error Probability (RTMBEP) decoding of full unit-memory (FUM) convolutional codes on a 3-bit quantized AWGN channel is described. This program was used to compute the symbol-error probability of FUM codes and to determine the signal-to-noise ratio (SNR) required to achieve a bit error rate (BER) of 10^-6 for corresponding concatenated systems. A (6,6/30) FUM code, 6-bit Reed-Solomon code combination was found to achieve the required BER at an SNR of 1.886 dB. The RTMBEP algorithm was then modified for decoding partial unit-memory (PUM) convolutional codes. A simulation program was also written to simulate the symbol-error probability of these codes.
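    As an illustrative aside (not the RTMBEP decoder itself), a Monte Carlo estimate of the bit error rate for uncoded BPSK over an AWGN channel shows how SNR-vs-BER operating points such as the 1.886 dB figure are typically obtained by simulation; the function name and parameters below are hypothetical:

```python
import math
import random

def ber_bpsk_awgn(snr_db, n_bits=200_000, seed=1):
    """Monte Carlo bit-error-rate estimate for uncoded BPSK on an AWGN channel."""
    rng = random.Random(seed)
    snr = 10 ** (snr_db / 10)          # Eb/N0 as a linear ratio
    sigma = math.sqrt(1 / (2 * snr))   # noise std dev for unit-energy antipodal symbols
    errors = 0
    for _ in range(n_bits):
        bit = rng.randrange(2)
        tx = 1.0 if bit else -1.0      # map bit -> +/-1
        rx = tx + rng.gauss(0.0, sigma)
        if (rx > 0) != bool(bit):      # hard decision at zero threshold
            errors += 1
    return errors / n_bits
```

    At 0 dB this lands near the theoretical Q(sqrt(2)) ≈ 0.079 for uncoded BPSK; the point of concatenated FUM/Reed-Solomon coding is to reach 10^-6 at a far lower SNR than an uncoded link would need.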

  16. Subcortical processing in auditory communication.

    PubMed

    Pannese, Alessia; Grandjean, Didier; Frühholz, Sascha

    2015-10-01

    The voice is a rich source of information, which the human brain has evolved to decode and interpret. Empirical observations have shown that the human auditory system is especially sensitive to the human voice, and that activity within the voice-sensitive regions of the primary and secondary auditory cortex is modulated by the emotional quality of the vocal signal, and may therefore subserve, with frontal regions, the cognitive ability to correctly identify the speaker's affective state. So far, the network involved in the processing of vocal affect has been mainly characterised at the cortical level. However, anatomical and functional evidence suggests that acoustic information relevant to the affective quality of the auditory signal might be processed prior to the auditory cortex. Here we review the animal and human literature on the main subcortical structures along the auditory pathway, and propose a model whereby the distinction between different types of vocal affect in auditory communication begins at very early stages of auditory processing, and relies on the analysis of individual acoustic features of the sound signal. We further suggest that this early feature-based decoding occurs at a subcortical level along the ascending auditory pathway, and provides a preliminary coarse (but fast) characterisation of the affective quality of the auditory signal before the more refined (but slower) cortical processing is completed.

  17. Subcortical processing in auditory communication.

    PubMed

    Pannese, Alessia; Grandjean, Didier; Frühholz, Sascha

    2015-10-01

    The voice is a rich source of information, which the human brain has evolved to decode and interpret. Empirical observations have shown that the human auditory system is especially sensitive to the human voice, and that activity within the voice-sensitive regions of the primary and secondary auditory cortex is modulated by the emotional quality of the vocal signal, and may therefore subserve, with frontal regions, the cognitive ability to correctly identify the speaker's affective state. So far, the network involved in the processing of vocal affect has been mainly characterised at the cortical level. However, anatomical and functional evidence suggests that acoustic information relevant to the affective quality of the auditory signal might be processed prior to the auditory cortex. Here we review the animal and human literature on the main subcortical structures along the auditory pathway, and propose a model whereby the distinction between different types of vocal affect in auditory communication begins at very early stages of auditory processing, and relies on the analysis of individual acoustic features of the sound signal. We further suggest that this early feature-based decoding occurs at a subcortical level along the ascending auditory pathway, and provides a preliminary coarse (but fast) characterisation of the affective quality of the auditory signal before the more refined (but slower) cortical processing is completed. PMID:26163900

  18. [EFFECT OF HYPOXIA ON THE CHARACTERISTICS OF HUMAN AUDITORY PERCEPTION].

    PubMed

    Ogorodnikova, E A; Stolvaroya, E I; Pak, S P; Bogomolova, G M; Korolev, Yu N; Golubev, V N; Lesova, E M

    2015-12-01

    The effect of normobaric hypoxic hypoxia (single and interval training) on the characteristics of human hearing was investigated. The hearing thresholds (tonal audiograms), reaction time of subjects in psychophysical experiments (pause detection, perception of rhythm and target words), and short-term auditory memory were measured before and after hypoxia. The data revealed improvements in auditory sensitivity and working-memory characteristics, and an increase in response speed. It was demonstrated that interval hypoxic training had a positive effect on the processes of auditory perception. PMID:26987233

  19. Comparison of the LLNL ALE3D and AKTS Thermal Safety Computer Codes for Calculating Times to Explosion in ODTX and STEX Thermal Cookoff Experiments

    SciTech Connect

    Wemhoff, A P; Burnham, A K

    2006-04-05

    Cross-comparison of the results of two computer codes for the same problem provides a mutual validation of their computational methods. This cross-validation exercise was performed for LLNL's ALE3D code and AKTS's Thermal Safety code, using the thermal ignition of HMX in two standard LLNL cookoff experiments: the One-Dimensional Time to Explosion (ODTX) test and the Scaled Thermal Explosion (STEX) test. The chemical kinetics model used in both codes was the extended Prout-Tompkins model, a relatively new addition to ALE3D. This model was applied using ALE3D's new pseudospecies feature. In addition, an advanced isoconversional kinetic approach was used in the AKTS code. The mathematical constants in the Prout-Tompkins code were calibrated using DSC data from hermetically sealed vessels and the LLNL optimization code Kinetics05. The isoconversional kinetic parameters were optimized using the AKTS Thermokinetics code. We found that the Prout-Tompkins model calculations agree fairly well between the two codes, and the isoconversional kinetic model gives very similar results to the Prout-Tompkins model. We also found that an autocatalytic approach in the beta-delta phase transition model does affect the times to explosion for some conditions, especially STEX-like simulations at ramp rates above 100 °C/hr, and further exploration of that effect is warranted.
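    The autocatalytic kinetics referred to above can be sketched with a simple isothermal Euler integration of an extended Prout-Tompkins rate law. The functional form with nucleation parameter q, and all parameter values below, are illustrative assumptions, not the calibrated HMX kinetics from the study:

```python
def prout_tompkins_alpha(k, n=1.0, m=1.0, q=0.99, dt=1e-3, t_end=50.0):
    """Euler integration of an extended Prout-Tompkins rate law (isothermal sketch):
    dalpha/dt = k * (1 - alpha)**n * (1 - q*(1 - alpha))**m."""
    alpha, t, curve = 0.0, 0.0, []
    while t < t_end:
        rate = k * (1.0 - alpha) ** n * (1.0 - q * (1.0 - alpha)) ** m
        alpha = min(1.0, alpha + rate * dt)  # reaction progress stays in [0, 1]
        t += dt
        curve.append((t, alpha))
    return curve  # sigmoidal: slow induction, then acceleration, then completion
```

    The sigmoidal induction-then-acceleration shape is what makes autocatalytic models sensitive to thermal history, consistent with the abstract's observation that ramp rate matters in STEX-like simulations.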

  20. 2-D Time-Dependent Fuel Element, Thermal Analysis Code System.

    2001-09-24

    Version 00 WREM-TOODEE2 is a two-dimensional, time-dependent fuel-element thermal analysis program. Its primary purpose is to evaluate fuel-element thermal response during post-LOCA refill and reflood in a pressurized water reactor (PWR). TOODEE2 calculations are carried out in a two-dimensional mesh region defined in slab or cylindrical geometry by orthogonal grid lines. Coordinates which form ordered pairs are labeled x-y in slab geometry, and those in cylindrical geometry are labeled r-z for the axisymmetric case and r-theta for the polar case. Conduction and radiation are the only heat transfer mechanisms assumed within the boundaries of the mesh region. Convective and boiling heat transfer mechanisms are assumed at the boundaries. The program numerically solves the two-dimensional, time-dependent heat conduction equation within the mesh region. KEYWORDS: FUEL MANAGEMENT; HEAT TRANSFER; LOCA; PWR

  1. Idealized computational models for auditory receptive fields.

    PubMed

    Lindeberg, Tony; Friberg, Anders

    2015-01-01

    We present a theory by which idealized models of auditory receptive fields can be derived in a principled axiomatic manner, from a set of structural properties to (i) enable invariance of receptive field responses under natural sound transformations and (ii) ensure internal consistency between spectro-temporal receptive fields at different temporal and spectral scales. For defining a time-frequency transformation of a purely temporal sound signal, it is shown that the framework allows for a new way of deriving the Gabor and Gammatone filters as well as a novel family of generalized Gammatone filters, with additional degrees of freedom to obtain different trade-offs between the spectral selectivity and the temporal delay of time-causal temporal window functions. When applied to the definition of a second-layer of receptive fields from a spectrogram, it is shown that the framework leads to two canonical families of spectro-temporal receptive fields, in terms of spectro-temporal derivatives of either spectro-temporal Gaussian kernels for non-causal time or a cascade of time-causal first-order integrators over the temporal domain and a Gaussian filter over the logspectral domain. For each filter family, the spectro-temporal receptive fields can be either separable over the time-frequency domain or be adapted to local glissando transformations that represent variations in logarithmic frequencies over time. Within each domain of either non-causal or time-causal time, these receptive field families are derived by uniqueness from the assumptions. It is demonstrated how the presented framework allows for computation of basic auditory features for audio processing and that it leads to predictions about auditory receptive fields with good qualitative similarity to biological receptive fields measured in the inferior colliculus (ICC) and primary auditory cortex (A1) of mammals. PMID:25822973
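    For context, the classical (non-generalized) Gammatone impulse response that the framework rederives, g(t) = t^(n-1) · e^(-2πbt) · cos(2πf_c·t), can be sampled directly. The 1.019 × ERB bandwidth rule used below is a common psychoacoustic convention, assumed here rather than taken from the paper:

```python
import math

def gammatone_ir(fc, n=4, b=None, fs=16000, dur=0.025):
    """Sampled impulse response of an order-n gammatone filter:
    g(t) = t**(n-1) * exp(-2*pi*b*t) * cos(2*pi*fc*t)."""
    if b is None:
        erb = 24.7 * (4.37 * fc / 1000.0 + 1.0)  # Glasberg-Moore ERB in Hz
        b = 1.019 * erb                          # common bandwidth convention
    return [
        (k / fs) ** (n - 1)
        * math.exp(-2.0 * math.pi * b * (k / fs))
        * math.cos(2.0 * math.pi * fc * (k / fs))
        for k in range(int(dur * fs))
    ]
```

    The t^(n-1) rise times the exponential decay gives the characteristic time-causal, asymmetric envelope; the paper's generalized family adds further degrees of freedom trading spectral selectivity against temporal delay.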

  2. Idealized Computational Models for Auditory Receptive Fields

    PubMed Central

    Lindeberg, Tony; Friberg, Anders

    2015-01-01

    We present a theory by which idealized models of auditory receptive fields can be derived in a principled axiomatic manner, from a set of structural properties to (i) enable invariance of receptive field responses under natural sound transformations and (ii) ensure internal consistency between spectro-temporal receptive fields at different temporal and spectral scales. For defining a time-frequency transformation of a purely temporal sound signal, it is shown that the framework allows for a new way of deriving the Gabor and Gammatone filters as well as a novel family of generalized Gammatone filters, with additional degrees of freedom to obtain different trade-offs between the spectral selectivity and the temporal delay of time-causal temporal window functions. When applied to the definition of a second-layer of receptive fields from a spectrogram, it is shown that the framework leads to two canonical families of spectro-temporal receptive fields, in terms of spectro-temporal derivatives of either spectro-temporal Gaussian kernels for non-causal time or a cascade of time-causal first-order integrators over the temporal domain and a Gaussian filter over the logspectral domain. For each filter family, the spectro-temporal receptive fields can be either separable over the time-frequency domain or be adapted to local glissando transformations that represent variations in logarithmic frequencies over time. Within each domain of either non-causal or time-causal time, these receptive field families are derived by uniqueness from the assumptions. It is demonstrated how the presented framework allows for computation of basic auditory features for audio processing and that it leads to predictions about auditory receptive fields with good qualitative similarity to biological receptive fields measured in the inferior colliculus (ICC) and primary auditory cortex (A1) of mammals. PMID:25822973

  3. Auditory-neurophysiological responses to speech during early childhood: Effects of background noise.

    PubMed

    White-Schwoch, Travis; Davies, Evan C; Thompson, Elaine C; Woodruff Carr, Kali; Nicol, Trent; Bradlow, Ann R; Kraus, Nina

    2015-10-01

    Early childhood is a critical period of auditory learning, during which children are constantly mapping sounds to meaning. But this auditory learning rarely occurs in ideal listening conditions-children are forced to listen against a relentless din. This background noise degrades the neural coding of these critical sounds, in turn interfering with auditory learning. Despite the importance of robust and reliable auditory processing during early childhood, little is known about the neurophysiology underlying speech processing in children so young. To better understand the physiological constraints these adverse listening scenarios impose on speech sound coding during early childhood, auditory-neurophysiological responses were elicited to a consonant-vowel syllable in quiet and background noise in a cohort of typically-developing preschoolers (ages 3-5 yr). Overall, responses were degraded in noise: they were smaller, less stable across trials, slower, and there was poorer coding of spectral content and the temporal envelope. These effects were exacerbated in response to the consonant transition relative to the vowel, suggesting that the neural coding of spectrotemporally-dynamic speech features is more tenuous in noise than the coding of static features-even in children this young. Neural coding of speech temporal fine structure, however, was more resilient to the addition of background noise than coding of temporal envelope information. Taken together, these results demonstrate that noise places a neurophysiological constraint on speech processing during early childhood by causing a breakdown in neural processing of speech acoustics. These results may explain why some listeners have inordinate difficulties understanding speech in noise. Speech-elicited auditory-neurophysiological responses offer objective insight into listening skills during early childhood by reflecting the integrity of neural coding in quiet and noise; this paper documents typical response

  4. SER Performance of Enhanced Spatial Multiplexing Codes with ZF/MRC Receiver in Time-Varying Rayleigh Fading Channels

    PubMed Central

    Lee, In-Ho

    2014-01-01

    We propose enhanced spatial multiplexing codes (E-SMCs) to enable various encoding rates. The symbol error rate (SER) performance of the E-SMC is investigated when zero-forcing (ZF) and maximal-ratio combining (MRC) techniques are used at a receiver. The proposed E-SMC allows a transmitted symbol to be repeated over time to achieve further diversity gain at the cost of the encoding rate. With the spatial correlation between transmit antennas, SER equations for M-ary QAM and PSK constellations are derived by using a moment generating function (MGF) approximation of a signal-to-noise ratio (SNR), based on the assumption of independent zero-forced SNRs. Analytic and simulated results are compared for time-varying and spatially correlated Rayleigh fading channels that are modelled as first-order Markovian channels. Furthermore, we can find an optimal block length for the E-SMC that meets a required SER. PMID:25114969
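    For reference, the standard closed-form SER of square M-QAM on a plain AWGN channel can be evaluated directly; fading analyses like the one above then average such expressions over the ZF/MRC output SNR via its MGF, a step this sketch omits along with spatial correlation:

```python
import math

def qfunc(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ser_mqam_awgn(snr_db, M=16):
    """Exact SER of square M-QAM on AWGN at a given per-symbol SNR."""
    snr = 10.0 ** (snr_db / 10.0)
    p = 2.0 * (1.0 - 1.0 / math.sqrt(M)) * qfunc(math.sqrt(3.0 * snr / (M - 1)))
    return 1.0 - (1.0 - p) ** 2  # two independent PAM decisions per QAM symbol
```

    The 3/(M-1) factor is the usual normalization for a unit-average-energy square constellation; larger M needs more SNR for the same SER, which is the trade-off the E-SMC's repetition-based diversity gain works against.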

  5. A New Model for Real-Time Regional Vertical Total Electron Content and Differential Code Bias Estimation Using IGS Real-Time Service (IGS-RTS) Products

    NASA Astrophysics Data System (ADS)

    Abdelazeem, Mohamed; Çelik, Rahmi N.; El-Rabbany, Ahmed

    2016-04-01

    The international global navigation satellite system (GNSS) real-time service (IGS-RTS) products have been used extensively for real-time precise point positioning and ionosphere modeling applications. In this study, we develop a regional model for real-time vertical total electron content (RT-VTEC) and differential code bias (RT-DCB) estimation over Europe using the IGS-RTS satellite orbit and clock products. The developed model has a spatial and temporal resolution of 1°×1° and 15 minutes, respectively. GPS observations from a regional network consisting of 60 IGS and EUREF reference stations are processed in the zero-difference mode using the Bernese-5.2 software package in order to extract the geometry-free linear combination of the smoothed code observations. The spherical harmonic expansion function is used to model the VTEC and the receiver and satellite DCBs. To validate the proposed model, the RT-VTEC values are computed and compared with the final IGS global ionospheric map (IGS-GIM) counterparts on three successive days under high solar activity, including one of extreme geomagnetic activity. The real-time satellite DCBs are also estimated and compared with the IGS-GIM counterparts. Moreover, the real-time receiver DCBs for six IGS stations are obtained and compared with the IGS-GIM counterparts. The examined stations are located at different latitudes with different receiver types. The findings reveal that the estimated RT-VTEC values agree with the IGS-GIM counterparts, with root-mean-square errors (RMSEs) of less than 2 TEC units. In addition, the RMSEs of the satellite and receiver DCBs are less than 0.85 ns and 0.65 ns, respectively, in comparison with the IGS-GIM.
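    The geometry-free combination underlying such TEC estimation can be sketched as follows. This is the textbook dual-frequency relation, deliberately ignoring the satellite and receiver DCBs that the study estimates jointly; the function itself is a hypothetical illustration:

```python
# GPS L1/L2 carrier frequencies (Hz)
F1, F2 = 1575.42e6, 1227.60e6

def slant_tec(p1, p2):
    """Slant TEC in TEC units (1 TECU = 1e16 el/m^2) from the geometry-free
    combination of dual-frequency code ranges p1, p2 (metres), ignoring DCBs/noise.
    Uses STEC = f1^2 * f2^2 / (40.3 * (f1^2 - f2^2)) * (p2 - p1)."""
    stec_el_m2 = (F1**2 * F2**2) / (40.3 * (F1**2 - F2**2)) * (p2 - p1)
    return stec_el_m2 / 1e16
```

    Because the code DCBs enter this combination additively, they bias the recovered TEC directly, which is why VTEC and DCBs must be estimated together, e.g. in a joint spherical harmonic fit as in the paper.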

  6. Auditory Discrimination and Auditory Sensory Behaviours in Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Jones, Catherine R. G.; Happe, Francesca; Baird, Gillian; Simonoff, Emily; Marsden, Anita J. S.; Tregay, Jenifer; Phillips, Rebecca J.; Goswami, Usha; Thomson, Jennifer M.; Charman, Tony

    2009-01-01

    It has been hypothesised that auditory processing may be enhanced in autism spectrum disorders (ASD). We tested auditory discrimination ability in 72 adolescents with ASD (39 childhood autism; 33 other ASD) and 57 IQ and age-matched controls, assessing their capacity for successful discrimination of the frequency, intensity and duration…

  7. The Central Auditory Processing Kit[TM]. Book 1: Auditory Memory [and] Book 2: Auditory Discrimination, Auditory Closure, and Auditory Synthesis [and] Book 3: Auditory Figure-Ground, Auditory Cohesion, Auditory Binaural Integration, and Compensatory Strategies.

    ERIC Educational Resources Information Center

    Mokhemar, Mary Ann

    This kit for assessing central auditory processing disorders (CAPD) in children in grades 1 through 8 includes 3 books, 14 full-color cards with picture scenes, and a card depicting a phone key pad, all contained in a sturdy carrying case. The units in each of the three books correspond with auditory skill areas most commonly addressed in…

  8. Using light to tell the time of day: sensory coding in the mammalian circadian visual network

    PubMed Central

    2016-01-01

    ABSTRACT Circadian clocks are a near-ubiquitous feature of biology, allowing organisms to optimise their physiology to make the most efficient use of resources and adjust behaviour to maximise survival over the solar day. To fulfil this role, circadian clocks require information about time in the external world. This is most reliably obtained by measuring the pronounced changes in illumination associated with the earth's rotation. In mammals, these changes are exclusively detected in the retina and are relayed by direct and indirect neural pathways to the master circadian clock in the hypothalamic suprachiasmatic nuclei. Recent work reveals a surprising level of complexity in this sensory control of the circadian system, including the participation of multiple photoreceptive pathways conveying distinct aspects of visual and/or time-of-day information. In this Review, I summarise these important recent advances, present hypotheses as to the functions and neural origins of these sensory signals, highlight key challenges for future research and discuss the implications of our current knowledge for animals and humans in the modern world. PMID:27307539

  9. Using light to tell the time of day: sensory coding in the mammalian circadian visual network.

    PubMed

    Brown, Timothy M

    2016-06-15

    Circadian clocks are a near-ubiquitous feature of biology, allowing organisms to optimise their physiology to make the most efficient use of resources and adjust behaviour to maximise survival over the solar day. To fulfil this role, circadian clocks require information about time in the external world. This is most reliably obtained by measuring the pronounced changes in illumination associated with the earth's rotation. In mammals, these changes are exclusively detected in the retina and are relayed by direct and indirect neural pathways to the master circadian clock in the hypothalamic suprachiasmatic nuclei. Recent work reveals a surprising level of complexity in this sensory control of the circadian system, including the participation of multiple photoreceptive pathways conveying distinct aspects of visual and/or time-of-day information. In this Review, I summarise these important recent advances, present hypotheses as to the functions and neural origins of these sensory signals, highlight key challenges for future research and discuss the implications of our current knowledge for animals and humans in the modern world. PMID:27307539

  10. Twins' reactions to delayed auditory feedback.

    PubMed

    Timmons, B A

    1985-10-01

    10 pairs of identical and 10 pairs of fraternal twins, matched by age, spoke under conditions of 0.0-, 100-, 200-, 300-, 400-, and 500-msec. delayed auditory feedback. Length of spoken passages was controlled. Product-moment and intraclass correlations were calculated for speaking times and disfluencies. Significant Pearson rs for times were noted at 0.0 and 300 msec. for both groups and at 100, 200, and 400 msec. for identical twins, while fraternal twins' times were significantly correlated at 500 msec. Difference scores were significantly correlated at 100, 200, 300, and 400 msec. for identical twins. Disfluencies were significantly correlated for identical twins at 400 msec. Data were combined with those of Timmons' (1969) study, increasing subjects to 21 pairs per group. Intraclass correlations supported the contention that responses of identical twin pairs to delayed auditory feedback were more highly correlated than those for fraternal twin pairs.

  11. Auditory spatial localization: Developmental delay in children with visual impairments.

    PubMed

    Cappagli, Giulia; Gori, Monica

    2016-01-01

    For individuals with visual impairments, auditory spatial localization is one of the most important features to navigate in the environment. Many works suggest that blind adults show similar or even enhanced performance for localization of auditory cues compared to sighted adults (Collignon, Voss, Lassonde, & Lepore, 2009). To date, the investigation of auditory spatial localization in children with visual impairments has provided contrasting results. Here we report, for the first time, that contrary to visually impaired adults, children with low vision or total blindness show a significant impairment in the localization of static sounds. These results suggest that simple auditory spatial tasks are compromised in children, and that this capacity recovers over time. PMID:27002960

  12. Ion channel noise can explain firing correlation in auditory nerves.

    PubMed

    Moezzi, Bahar; Iannella, Nicolangelo; McDonnell, Mark D

    2016-10-01

    Neural spike trains are commonly characterized as a Poisson point process. However, the Poisson assumption is a poor model for spiking in auditory nerve fibres because it is known that interspike intervals display positive correlation over long time scales and negative correlation over shorter time scales. We have therefore developed a biophysical model based on the well-known Meddis model of the peripheral auditory system, to produce simulated auditory nerve fibre spiking statistics that more closely match the firing correlations observed in empirical data. We achieve this by introducing biophysically realistic ion channel noise to an inner hair cell membrane potential model that includes fractal fast potassium channels and deterministic slow potassium channels. We succeed in producing simulated spike train statistics that match empirically observed firing correlations. Our model thus replicates macro-scale stochastic spiking statistics in the auditory nerve fibres due to modeling stochasticity at the micro-scale of potassium channels. PMID:27480847
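    The paper's starting observation, that a Poisson (renewal) process cannot reproduce serial interspike-interval correlations, can be checked numerically. This sketch uses a homogeneous Poisson train as the null model rather than the authors' Meddis-based biophysical model:

```python
import math
import random

def serial_isi_correlation(isis):
    """Lag-1 serial correlation coefficient of interspike intervals (ISIs)."""
    x, y = isis[:-1], isis[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x) / n)
    sy = math.sqrt(sum((b - my) ** 2 for b in y) / n)
    return cov / (sx * sy)

# A homogeneous 100 Hz Poisson train has exponential, independent ISIs,
# so its lag-1 serial correlation should be near zero.
rng = random.Random(0)
poisson_isis = [rng.expovariate(100.0) for _ in range(20000)]
```

    For this null model the coefficient comes out near zero, whereas auditory-nerve data show the positive long-range and negative short-range correlations that the fractal potassium-channel noise model is built to reproduce.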

  13. Developmental correlation of diffusion anisotropy with auditory-evoked response.

    PubMed

    Roberts, Timothy P L; Khan, Sarah Y; Blaskey, Lisa; Dell, John; Levy, Susan E; Zarnow, Deborah M; Edgar, J Christopher

    2009-12-01

    White matter diffusion anisotropy in the acoustic radiations of the auditory pathway was characterized as a function of development in children and adolescents. Auditory-evoked neuromagnetic fields were also recorded from the same individuals, and the latency of the left and right superior temporal gyrus auditory response of approximately 100 ms was also obtained. White matter diffusion anisotropy increased with age. There was a commensurate shortening of the auditory-evoked response latency with increased age as well as with increased white matter diffusion anisotropy. The significant negative correlation between structural integrity of white matter pathways and electrophysiological function (response timing) of distal cortex supports a biophysical model of developmental changes in white matter myelination, conduction velocity, and cortical response timing.

  14. Blind and semi-blind ML detection for space-time block-coded OFDM wireless systems

    NASA Astrophysics Data System (ADS)

    Zaib, Alam; Al-Naffouri, Tareq Y.

    2014-12-01

    This paper investigates the joint maximum likelihood (ML) data detection and channel estimation problem for Alamouti space-time block-coded (STBC) orthogonal frequency-division multiplexing (OFDM) wireless systems. Joint ML estimation and data detection is generally considered a hard combinatorial optimization problem. We propose an efficient low-complexity algorithm based on a branch-estimate-bound strategy that yields the exact joint ML solution. However, the computational complexity of the blind algorithm becomes critical at low signal-to-noise ratio (SNR) as the number of OFDM carriers and the constellation size increase, especially in multiple-antenna systems. To overcome this problem, a semi-blind algorithm based on a new complexity-reduction framework is proposed, which relies on subcarrier reordering and on decoding the carriers with different levels of confidence using a suitable reliability criterion. In addition, it is shown that by exploiting the inherent structure of Alamouti coding, either improved estimation performance or reduced complexity can be achieved. The proposed algorithms can reliably track the wireless Rayleigh fading channel without requiring any channel statistics. Simulation results, presented against perfect coherent detection, demonstrate the effectiveness of the blind and semi-blind algorithms over frequency-selective channels with different fading characteristics.
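    The "inherent structure" exploited here is the standard Alamouti scheme: over two symbol periods the two antennas transmit (s1, s2) and then (-s2*, s1*), which lets a simple linear combiner decouple the two symbols. The sketch below shows only this textbook encoding and coherent combining with a known flat channel; the paper's blind/semi-blind joint ML algorithms, which estimate the channel too, are not implemented.

```python
import numpy as np

def alamouti_encode(s1, s2):
    """Return the 2x2 Alamouti block: rows = time slots, cols = antennas."""
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

def alamouti_combine(r1, r2, h1, h2):
    """Linear combining that decouples s1 and s2 given known channel gains."""
    g = abs(h1) ** 2 + abs(h2) ** 2
    s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g
    s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g
    return s1_hat, s2_hat

# Noise-free sanity check with QPSK symbols and a random flat channel.
rng = np.random.default_rng(1)
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)
h1, h2 = rng.normal(size=2) + 1j * rng.normal(size=2)
X = alamouti_encode(s1, s2)
r1 = h1 * X[0, 0] + h2 * X[0, 1]   # received in time slot 1
r2 = h1 * X[1, 0] + h2 * X[1, 1]   # received in time slot 2
print(alamouti_combine(r1, r2, h1, h2))
```

In the noise-free case the combiner recovers the symbols exactly, with the diversity gain |h1|^2 + |h2|^2 appearing before normalization.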

  15. Time-of-flights and traps: from the Histone Code to Mars*

    PubMed Central

    Swatkoski, Stephen; Becker, Luann; Evans-Nguyen, Theresa

    2011-01-01

    Two very different analytical instruments are featured in this perspective paper on mass spectrometer design and development. The first instrument, based upon the curved-field reflectron developed in the Johns Hopkins Middle Atlantic Mass Spectrometry Laboratory, is a tandem time-of-flight mass spectrometer whose performance and practicality are illustrated by applications to a series of research projects addressing the acetylation, deacetylation and ADP-ribosylation of histone proteins. The chemical derivatization of lysine-rich, hyperacetylated histones as their deuteroacetylated analogs enables one to obtain an accurate quantitative assessment of the extent of acetylation at each site. Chemical acetylation of histone mixtures is also used to determine the lysine targets of sirtuins, an important class of histone deacetylases (HDACs), by replacing the deacetylated residues with biotin. Histone deacetylation by sirtuins requires the co-factor NAD+, as does the attachment of ADP-ribose. The second instrument, a low voltage and low power ion trap mass spectrometer known as the Mars Organic Mass Analyzer (MOMA), is a prototype for an instrument expected to be launched in 2018. Like the tandem mass spectrometer, it is also expected to have applicability to environmental and biological analyses and, ultimately, to clinical care. PMID:20530839

  16. Explaining the high voice superiority effect in polyphonic music: evidence from cortical evoked potentials and peripheral auditory models.

    PubMed

    Trainor, Laurel J; Marie, Céline; Bruce, Ian C; Bidelman, Gavin M

    2014-02-01

    Natural auditory environments contain multiple simultaneously-sounding objects and the auditory system must parse the incoming complex sound wave they collectively create into parts that represent each of these individual objects. Music often similarly requires processing of more than one voice or stream at the same time, and behavioral studies demonstrate that human listeners show a systematic perceptual bias in processing the highest voice in multi-voiced music. Here, we review studies utilizing event-related brain potentials (ERPs), which support the notions that (1) separate memory traces are formed for two simultaneous voices (even without conscious awareness) in auditory cortex and (2) adults show more robust encoding (i.e., larger ERP responses) to deviant pitches in the higher than in the lower voice, indicating better encoding of the former. Furthermore, infants also show this high-voice superiority effect, suggesting that the perceptual dominance observed across studies might result from neurophysiological characteristics of the peripheral auditory system. Although musically untrained adults show smaller responses in general than musically trained adults, both groups similarly show a more robust cortical representation of the higher than of the lower voice. Finally, years of experience playing a bass-range instrument reduces but does not reverse the high voice superiority effect, indicating that although it can be modified, it is not highly neuroplastic. Results of new modeling experiments examined the possibility that characteristics of middle-ear filtering and cochlear dynamics (e.g., suppression) reflected in auditory nerve firing patterns might account for the higher-voice superiority effect. Simulations show that both place and temporal AN coding schemes well-predict a high-voice superiority across a wide range of interval spacings and registers. Collectively, we infer an innate, peripheral origin for the higher-voice superiority observed in human

  17. Issues in Human Auditory Development

    ERIC Educational Resources Information Center

    Werner, Lynne A.

    2007-01-01

    The human auditory system is often portrayed as precocious in its development. In fact, many aspects of basic auditory processing appear to be adult-like by the middle of the first year of postnatal life. However, processes such as attention and sound source determination take much longer to develop. Immaturity of higher-level processes limits the…

  18. Word Recognition in Auditory Cortex

    ERIC Educational Resources Information Center

    DeWitt, Iain D. J.

    2013-01-01

    Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…

  19. Auditory neglect and related disorders.

    PubMed

    Gutschalk, Alexander; Dykstra, Andrew

    2015-01-01

    Neglect is a neurologic disorder, typically associated with lesions of the right hemisphere, in which patients are biased towards their ipsilesional - usually right - side of space while awareness for their contralesional - usually left - side is reduced or absent. Neglect is a multimodal disorder that often includes deficits in the auditory domain. Classically, auditory extinction, in which left-sided sounds that are correctly perceived in isolation are not detected in the presence of synchronous right-sided stimulation, has been considered the primary sign of auditory neglect. However, auditory extinction can also be observed after unilateral auditory cortex lesions and is thus not specific for neglect. Recent research has shown that patients with neglect are also impaired in maintaining sustained attention, on both sides, a fact that is reflected by an impairment of auditory target detection in continuous stimulation conditions. Perhaps the most impressive auditory symptom in full-blown neglect is alloacusis, in which patients mislocalize left-sided sound sources to their right, although even patients with less severe neglect still often show disturbance of auditory spatial perception, most commonly a lateralization bias towards the right. We discuss how these various disorders may be explained by a single model of neglect and review emerging interventions for patient rehabilitation.

  20. Mercury + VisIt: Integration of a Real-Time Graphical Analysis Capability into a Monte Carlo Transport Code

    SciTech Connect

    O'Brien, M J; Procassini, R J; Joy, K I

    2009-03-09

    Validation of the problem definition and analysis of the results (tallies) produced during a Monte Carlo particle transport calculation can be a complicated, time-intensive process. The time required for a person to create an accurate, validated combinatorial geometry (CG) or mesh-based representation of a complex problem, free of common errors such as gaps and overlapping cells, can range from days to weeks. The ability to interrogate the internal structure of a complex, three-dimensional (3-D) geometry prior to running the transport calculation can improve the user's confidence in the validity of the problem definition. With regard to the analysis of results, the process of extracting tally data from printed tables within a file is laborious and not an intuitive approach to understanding the results. The ability to display tally information overlaid on top of the problem geometry can decrease the time required for analysis and increase the user's understanding of the results. To this end, our team has integrated VisIt, a parallel, production-quality visualization and data analysis tool, into Mercury, a massively parallel Monte Carlo particle transport code. VisIt provides an API for real-time visualization of a simulation as it is running. The user may select which plots to display from the VisIt GUI or by sending VisIt a Python script from Mercury. The frequency at which plots are updated can be set, and the user can visualize the results as the simulation is running.

  1. Study of ITER plasma position reflectometer using a two-dimensional full-wave finite-difference time domain code

    SciTech Connect

    Silva, F. da

    2008-10-15

    The EU will supply the plasma position reflectometer for ITER. The system will have channels located at different poloidal positions, some of them obliquely viewing a plasma that has a poloidal density divergence and curvature, both adverse conditions for profile measurements. To understand the impact of such a topology on the reconstruction of density profiles, a full-wave two-dimensional finite-difference time-domain O-mode code with frequency-sweep capability was used. Simulations show that the reconstructed density profiles still meet the ITER radial accuracy specifications for plasma position (1 cm), except at the highest densities. Other adverse effects, such as multireflections induced by the blanket, density fluctuations, and MHD activity, were also considered and a first understanding of their impact was obtained.

  2. ADESSA: A Real-Time Decision Support Service for Delivery of Semantically Coded Adverse Drug Event Data.

    PubMed

    Duke, Jon D; Friedlin, Jeff

    2010-11-13

    Evaluating medications for potential adverse events is a time-consuming process, typically involving manual lookup of information by physicians. This process can be expedited by CDS systems that support dynamic retrieval and filtering of adverse drug events (ADEs), but such systems require a source of semantically coded ADE data. We created a two-component system that addresses this need. First, we created a natural language processing application that extracts adverse events from Structured Product Labels and generates a standardized ADE knowledge base. We then built a decision support service that consumes a Continuity of Care Document and returns a list of patient-specific ADEs. Our database currently contains 534,125 ADEs from 5602 product labels. An NLP evaluation of 9529 ADEs showed recall of 93% and precision of 95%. On a trial set of 30 CCDs, the system provided adverse event data for 88% of drugs and returned these results in an average of 620 ms.

  3. Space-time block code based MIMO encoding for large core step index plastic optical fiber transmission systems.

    PubMed

    Raptis, Nikos; Grivas, Evangelos; Pikasis, Evangelos; Syvridis, Dimitris

    2011-05-23

    The performance of Space-Time Block Codes combined with Discrete MultiTone modulation applied to a Large-Core Step-Index POF link is examined theoretically. A comparative study is performed considering several schemes that employ multiple transmitters/receivers and a fiber span of 100 m. The performance enhancement of the higher diversity order configurations is revealed by application of a Margin-Adaptive Bit-Loading technique that employs Chow's algorithm. Simulation results for the above schemes, in terms of Bit Error Rate as a function of the received Signal-to-Noise Ratio, are provided. An improvement of more than 6 dB in the required electrical SNR is observed for a 3 × 1 configuration, in order to achieve a 10(-3) BER value, as compared to a conventional Single-Input Single-Output scheme.

  4. Effect of auditory training on the middle latency response in children with (central) auditory processing disorder.

    PubMed

    Schochat, E; Musiek, F E; Alonso, R; Ogata, J

    2010-08-01

    The purpose of this study was to determine the middle latency response (MLR) characteristics (latency and amplitude) in children with (central) auditory processing disorder [(C)APD], categorized as such by their performance on the central auditory test battery, and the effects of these characteristics after auditory training. Thirty children with (C)APD, 8 to 14 years of age, were tested using the MLR-evoked potential. This group was then enrolled in an 8-week auditory training program and then retested at the completion of the program. A control group of 22 children without (C)APD, composed of relatives and acquaintances of those involved in the research, underwent the same testing at equal time intervals, but were not enrolled in the auditory training program. Before auditory training, MLR results for the (C)APD group exhibited lower C3-A1 and C3-A2 wave amplitudes in comparison to the control group [C3-A1, 0.84 microV (mean), 0.39 (SD--standard deviation) for the (C)APD group and 1.18 microV (mean), 0.65 (SD) for the control group; C3-A2, 0.69 microV (mean), 0.31 (SD) for the (C)APD group and 1.00 microV (mean), 0.46 (SD) for the control group]. After training, the MLR C3-A1 [1.59 microV (mean), 0.82 (SD)] and C3-A2 [1.24 microV (mean), 0.73 (SD)] wave amplitudes of the (C)APD group significantly increased, so that there was no longer a significant difference in MLR amplitude between (C)APD and control groups. These findings suggest progress in the use of electrophysiological measurements for the diagnosis and treatment of (C)APD.

  5. Training in rapid auditory processing ameliorates auditory comprehension in aphasic patients: a randomized controlled pilot study.

    PubMed

    Szelag, Elzbieta; Lewandowska, Monika; Wolak, Tomasz; Seniow, Joanna; Poniatowska, Renata; Pöppel, Ernst; Szymaszek, Aneta

    2014-03-15

    Experimental studies have often reported close associations between rapid auditory processing and language competency. The present study aimed at improving auditory comprehension in aphasic patients following specific training in the perception of the temporal order (TO) of events. We tested 18 aphasic patients showing both comprehension and TO perception deficits. Auditory comprehension was assessed by the Token Test, phonemic awareness and the Voice-Onset-Time Test. TO perception was assessed using the auditory Temporal-Order-Threshold, defined as the shortest interval between two consecutive stimuli necessary to report correctly their before-after relation. Aphasic patients participated in eight 45-minute sessions of either specific temporal training (TT, n=11) aimed at improving sequencing abilities, or control non-temporal training (NT, n=7) focussed on volume discrimination. The TT yielded improved TO perception; moreover, a transfer of improvement was observed from the time domain to the untrained language domain. The NT improved neither TO perception nor comprehension on any language test. These results agree with previous studies that demonstrated improved language competency following TT in language-learning-impaired or dyslexic children. Our results indicate, for the first time, similar benefits in aphasic patients. PMID:24388435

  6. Neuromechanistic Model of Auditory Bistability

    PubMed Central

    Rankin, James; Sussman, Elyse; Rinzel, John

    2015-01-01

    Sequences of higher frequency A and lower frequency B tones repeating in an ABA- triplet pattern are widely used to study auditory streaming. One may experience either an integrated percept, a single ABA-ABA- stream, or a segregated percept, separate but simultaneous streams A-A-A-A- and -B---B--. During minutes-long presentations, subjects may report irregular alternations between these interpretations. We combine neuromechanistic modeling and psychoacoustic experiments to study these persistent alternations and to characterize the effects of manipulating stimulus parameters. Unlike many phenomenological models with abstract, percept-specific competition and fixed inputs, our network model comprises neuronal units with sensory feature dependent inputs that mimic the pulsatile-like A1 responses to tones in the ABA- triplets. It embodies a neuronal computation for percept competition thought to occur beyond primary auditory cortex (A1). Mutual inhibition, adaptation and noise are implemented. We include slow NMDA recurrent excitation for local temporal memory that enables linkage across sound gaps from one triplet to the next. Percepts in our model are identified in the firing patterns of the neuronal units. We predict with the model that manipulations of the frequency difference between tones A and B should affect the dominance durations of the stronger percept, the one dominant a larger fraction of time, more than those of the weaker percept—a property that has been previously established and generalized across several visual bistable paradigms. We confirm the qualitative prediction with our psychoacoustic experiments and use the behavioral data to further constrain and improve the model, achieving quantitative agreement between experimental and modeling results. Our work and model provide a platform that can be extended to consider other stimulus conditions, including the effects of context and volition. PMID:26562507
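    The core mechanism described here, mutual inhibition between percept units plus slow adaptation, can be sketched in a few lines. The toy model below is not the paper's network: noise, the NMDA-mediated temporal memory, and the tone-driven inputs are omitted, and all parameter values are invented for illustration. It still reproduces the signature behavior: the dominant unit's adaptation builds up until the suppressed unit takes over, yielding repeated alternations.

```python
import numpy as np

def simulate_bistability(T=2000.0, dt=0.1, I=1.0, beta=1.5, phi=2.0,
                         tau_u=1.0, tau_a=100.0):
    """Two percept units coupled by mutual inhibition (beta), each with
    slow adaptation (strength phi, time constant tau_a). Returns the
    activity traces of both units under forward Euler integration."""
    f = lambda x: 1.0 / (1.0 + np.exp(-x / 0.1))   # steep sigmoidal gain
    n = int(T / dt)
    u1, u2, a1, a2 = 1.0, 0.0, 0.0, 0.0            # unit 1 starts dominant
    trace1, trace2 = np.empty(n), np.empty(n)
    for i in range(n):
        du1 = (-u1 + f(I - beta * u2 - phi * a1)) / tau_u
        du2 = (-u2 + f(I - beta * u1 - phi * a2)) / tau_u
        a1 += dt * (-a1 + u1) / tau_a              # adaptation tracks activity
        a2 += dt * (-a2 + u2) / tau_a
        u1 += dt * du1
        u2 += dt * du2
        trace1[i], trace2[i] = u1, u2
    return trace1, trace2

u1, u2 = simulate_bistability()
switches = np.count_nonzero(np.diff((u1 > u2).astype(int)))
print("dominance switches:", switches)
```

Dominance durations in such models depend on the input drive and coupling strengths, which is the handle the authors use to test how stimulus parameters shift the stronger percept's durations.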

  7. Auditory Model: Effects on Learning under Blocked and Random Practice Schedules

    ERIC Educational Resources Information Center

    Han, Dong-Wook; Shea, Charles H.

    2008-01-01

    An experiment was conducted to determine the impact of an auditory model on blocked, random, and mixed practice schedules of three five-segment timing sequences (relative time constant). We were interested in whether or not the auditory model differentially affected the learning of relative and absolute timing under blocked and random practice.…

  8. Electroencephalographic measures of auditory perception in dynamic acoustic environments

    NASA Astrophysics Data System (ADS)

    McMullan, Amanda R.

    We are capable of effortlessly parsing a complex scene presented to us. In order to do this, we must segregate objects from each other and from the background. While this process has been extensively studied in vision science, it remains relatively less understood in auditory science. This thesis sought to characterize the neuroelectric correlates of auditory scene analysis using electroencephalography. Chapter 2 determined components evoked by first-order energy boundaries and second-order pitch boundaries. Chapter 3 determined components evoked by first-order and second-order discontinuous motion boundaries. Both of these chapters focused on analysis of event-related potential (ERP) waveforms and time-frequency analysis. In addition, these chapters investigated the contralateral nature of a negative ERP component. These results extend the current knowledge of auditory scene analysis by providing a starting point for discussing and characterizing first-order and second-order boundaries in an auditory scene.

  9. Efficient population coding of naturalistic whisker motion in the ventro-posterior medial thalamus based on precise spike timing

    PubMed Central

    Bale, Michael R.; Ince, Robin A. A.; Santagata, Greta; Petersen, Rasmus S.

    2015-01-01

    The rodent whisker-associated thalamic nucleus (VPM) contains a somatotopic map where whisker representation is divided into distinct neuronal sub-populations, called “barreloids”. Each barreloid projects to its associated cortical barrel column and so forms a gateway for incoming sensory stimuli to the barrel cortex. We aimed to determine how the population of neurons within one barreloid encodes naturalistic whisker motion. In rats, we recorded the extracellular activity of up to nine single neurons within a single barreloid, by implanting silicon probes parallel to the longitudinal axis of the barreloids. We found that play-back of texture-induced whisker motion evoked sparse responses, timed with millisecond precision. At the population level, there was synchronous activity; however, different subsets of neurons were synchronously active at different times. Mutual information between population responses and whisker motion increased near-linearly with population size. When normalized to factor out firing rate differences, we found that texture was encoded with greater informational efficiency than white noise. These results indicate that, within each VPM barreloid, there is a rich and efficient population code for naturalistic whisker motion based on precisely timed population spike patterns. PMID:26441549
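    Mutual information between a discretized stimulus and discretized responses is typically computed from the joint and marginal distributions, I(S;R) = sum p(s,r) log2[p(s,r) / (p(s)p(r))]. The plug-in estimator below is a generic sketch (not the bias-corrected estimators population studies normally require); names and the toy data are invented.

```python
import numpy as np
from collections import Counter

def mutual_information(stim, resp):
    """Plug-in estimate of I(S;R) in bits from paired discrete samples."""
    n = len(stim)
    count_s = Counter(stim)
    count_r = Counter(resp)
    count_sr = Counter(zip(stim, resp))
    mi = 0.0
    for (s, r), c in count_sr.items():
        # c*n / (count_s * count_r) equals p(s,r) / (p(s) * p(r))
        mi += (c / n) * np.log2(c * n / (count_s[s] * count_r[r]))
    return mi

# Perfectly informative responses: 1 bit for two equiprobable stimuli.
stim = [0, 1] * 500
resp = stim[:]  # response identical to stimulus
print(mutual_information(stim, resp))  # -> 1.0
```

With finite data the plug-in estimate is upward biased, which is why studies like this one typically apply bias corrections before comparing information across population sizes.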

  10. Real-time photoacoustic and ultrasound dual-modality imaging system facilitated with graphics processing unit and code parallel optimization

    NASA Astrophysics Data System (ADS)

    Yuan, Jie; Xu, Guan; Yu, Yao; Zhou, Yu; Carson, Paul L.; Wang, Xueding; Liu, Xiaojun

    2013-08-01

    Photoacoustic tomography (PAT) offers structural and functional imaging of living biological tissue with highly sensitive optical absorption contrast and excellent spatial resolution comparable to medical ultrasound (US) imaging. We report the development of a fully integrated PAT and US dual-modality imaging system, which performs signal scanning, image reconstruction, and display for both photoacoustic (PA) and US imaging all in a truly real-time manner. The back-projection (BP) algorithm for PA image reconstruction is optimized to reduce the computational cost and facilitate parallel computation on a state of the art graphics processing unit (GPU) card. For the first time, PAT and US imaging of the same object can be conducted simultaneously and continuously, at a real-time frame rate, presently limited by the laser repetition rate of 10 Hz. Noninvasive PAT and US imaging of human peripheral joints in vivo were achieved, demonstrating the satisfactory image quality realized with this system. Another experiment, simultaneous PAT and US imaging of contrast agent flowing through an artificial vessel, was conducted to verify the performance of this system for imaging fast biological events. The GPU-based image reconstruction software code for this dual-modality system is open source and available for download from http://sourceforge.net/projects/patrealtime.
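    The back-projection reconstruction referred to here can be reduced to a delay-and-sum: each image pixel accumulates, from every transducer, the pressure sample recorded at the acoustic time of flight from that pixel. The sketch below is a naive CPU version with invented geometry and sampling parameters, not the authors' optimized GPU code.

```python
import numpy as np

def backproject(signals, sensor_pos, grid, c=1500.0, fs=40e6):
    """Naive delay-and-sum backprojection for photoacoustic imaging.

    signals:    (n_sensors, n_samples) recorded pressure traces
    sensor_pos: (n_sensors, 2) sensor coordinates in metres
    grid:       (n_pixels, 2) image-pixel coordinates in metres
    """
    n_samples = signals.shape[1]
    image = np.zeros(len(grid))
    for trace, pos in zip(signals, sensor_pos):
        dist = np.linalg.norm(grid - pos, axis=1)       # pixel-sensor distances
        idx = np.rint(dist / c * fs).astype(int)        # time-of-flight samples
        valid = idx < n_samples
        image[valid] += trace[idx[valid]]
    return image

# Point source at the origin, 8 sensors on a 20 mm ring (hypothetical setup):
# each trace is a delta at the source-to-sensor time of flight.
c, fs, n_samples = 1500.0, 40e6, 1024
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
sensor_pos = 0.02 * np.column_stack([np.cos(angles), np.sin(angles)])
signals = np.zeros((8, n_samples))
for k, pos in enumerate(sensor_pos):
    signals[k, int(round(np.linalg.norm(pos) / c * fs))] = 1.0
grid = np.array([[0.0, 0.0], [0.005, 0.0], [0.0, 0.005], [-0.005, 0.002]])
image = backproject(signals, sensor_pos, grid, c=c, fs=fs)
print(image)  # source pixel accumulates all 8 sensors
```

Each pixel's sum is independent, which is exactly what makes the algorithm amenable to the one-thread-per-pixel GPU parallelization the paper exploits.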

  11. A real-time photoacoustic and ultrasound dual-modality imaging system facilitated with GPU and code parallel optimization

    NASA Astrophysics Data System (ADS)

    Yuan, Jie; Xu, Guan; Yu, Yao; Zhou, Yu; Carson, Paul L.; Wang, Xueding; Liu, Xiaojun

    2014-03-01

    Photoacoustic tomography (PAT) offers structural and functional imaging of living biological tissue with highly sensitive optical absorption contrast and excellent spatial resolution comparable to medical ultrasound (US) imaging. We report the development of a fully integrated PAT and US dual-modality imaging system, which performs signal scanning, image reconstruction and display for both photoacoustic (PA) and US imaging all in a truly real-time manner. The backprojection (BP) algorithm for PA image reconstruction is optimized to reduce the computational cost and facilitate parallel computation on a state-of-the-art graphics processing unit (GPU) card. For the first time, PAT and US imaging of the same object can be conducted simultaneously and continuously, at a real-time frame rate, presently limited by the laser repetition rate of 10 Hz. Noninvasive PAT and US imaging of human peripheral joints in vivo were achieved, demonstrating the satisfactory image quality realized with this system. Another experiment, simultaneous PAT and US imaging of contrast agent flowing through an artificial vessel, was conducted to verify the performance of this system for imaging fast biological events. The GPU-based image reconstruction software code for this dual-modality system is open source and available for download from http://sourceforge.net/projects/patrealtime.

  12. Fast turnover of genome transcription across evolutionary time exposes entire non-coding DNA to de novo gene emergence

    PubMed Central

    Neme, Rafik; Tautz, Diethard

    2016-01-01

    Deep sequencing analyses have shown that a large fraction of genomes is transcribed, but the significance of this transcription is much debated. Here, we characterize the phylogenetic turnover of poly-adenylated transcripts in a comprehensive sampling of taxa of the mouse (genus Mus), spanning a phylogenetic distance of 10 Myr. Using deep RNA sequencing we find that at a given sequencing depth transcriptome coverage becomes saturated within a taxon, but keeps extending when compared between taxa, even at this very shallow phylogenetic level. Our data show a high turnover of transcriptional states between taxa and that no major transcript-free islands exist across evolutionary time. This suggests that the entire genome can be transcribed into poly-adenylated RNA when viewed at an evolutionary time scale. We conclude that any part of the non-coding genome can potentially become subject to evolutionary functionalization via de novo gene evolution within relatively short evolutionary time spans. DOI: http://dx.doi.org/10.7554/eLife.09977.001 PMID:26836309

  13. Object continuity enhances selective auditory attention.

    PubMed

    Best, Virginia; Ozmeral, Erol J; Kopco, Norbert; Shinn-Cunningham, Barbara G

    2008-09-01

    In complex scenes, the identity of an auditory object can build up across seconds. Given that attention operates on perceptual objects, this perceptual buildup may alter the efficacy of selective auditory attention over time. Here, we measured identification of a sequence of spoken target digits presented with distracter digits from other directions to investigate the dynamics of selective attention. Performance was better when the target location was fixed rather than changing between digits, even when listeners were cued as much as 1 s in advance about the position of each subsequent digit. Spatial continuity not only avoided well known costs associated with switching the focus of spatial attention, but also produced refinements in the spatial selectivity of attention across time. Continuity of target voice further enhanced this buildup of selective attention. Results suggest that when attention is sustained on one auditory object within a complex scene, attentional selectivity improves over time. Similar effects may come into play when attention is sustained on an object in a complex visual scene, especially in cases where visual object formation requires sustained attention.

  14. Visual change detection recruits auditory cortices in early deafness.

    PubMed

    Bottari, Davide; Heimler, Benedetta; Caclin, Anne; Dalmolin, Anna; Giard, Marie-Hélène; Pavani, Francesco

    2014-07-01

    Although cross-modal recruitment of early sensory areas in deafness and blindness is well established, the constraints and limits of these plastic changes remain to be understood. In the case of human deafness, for instance, it is known that visual, tactile or visuo-tactile stimuli can elicit a response within the auditory cortices. Nonetheless, both the timing of these evoked responses and the functional contribution of cross-modally recruited areas remain to be ascertained. In the present study, we examined to what extent auditory cortices of deaf humans participate in high-order visual processes, such as visual change detection. By measuring visual ERPs, in particular the visual MisMatch Negativity (vMMN), and performing source localization, we show that individuals with early deafness (N=12) recruit the auditory cortices when a change in motion direction during shape deformation occurs in a continuous visual motion stream. Remarkably, this "auditory" response to visual events emerged with the same timing as the visual MMN in hearing controls (N=12), between 150 and 300 ms after the visual change. Furthermore, the recruitment of auditory cortices for visual change detection in the early deaf was paired with a reduction of response within the visual system, indicating a shift of part of the computational process from visual to auditory cortices. The present study suggests that the deafened auditory cortices participate in extracting and storing visual information and in comparing upcoming visual events on-line, thus indicating that cross-modally recruited auditory cortices can reach this level of computation.

  15. Time-dependent Multi-group Multi-dimensional Relativistic Radiative Transfer Code Based on the Spherical Harmonic Discrete Ordinate Method

    NASA Astrophysics Data System (ADS)

    Tominaga, Nozomu; Shibata, Sanshiro; Blinnikov, Sergei I.

    2015-08-01

    We develop a time-dependent, multi-group, multi-dimensional relativistic radiative transfer code, which is required to numerically investigate radiation from relativistic fluids that are involved in, e.g., gamma-ray bursts and active galactic nuclei. The code is based on the spherical harmonic discrete ordinate method (SHDOM) which evaluates a source function including anisotropic scattering in spherical harmonics and implicitly solves the static radiative transfer equation with ray tracing in discrete ordinates. We implement treatments of time dependence, multi-frequency bins, Lorentz transformation, and elastic Thomson and inelastic Compton scattering to the publicly available SHDOM code. Our code adopts a mixed-frame approach; the source function is evaluated in the comoving frame, whereas the radiative transfer equation is solved in the laboratory frame. This implementation is validated using various test problems and comparisons with the results from a relativistic Monte Carlo code. These validations confirm that the code correctly calculates the intensity and its evolution in the computational domain. The code enables us to obtain an Eddington tensor that relates the first and third moments of intensity (energy density and radiation pressure) and is frequently used as a closure relation in radiation hydrodynamics calculations.
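    The Eddington tensor mentioned as the closure output is the ratio of the second angular moment of the intensity (radiation pressure) to the zeroth moment (energy density). As an illustration only (not the SHDOM implementation), it can be evaluated from discrete-ordinate intensities with an angular quadrature; the ordinate set and names below are invented.

```python
import numpy as np

def eddington_tensor(intensity, directions, weights):
    """Eddington tensor f_ij = P_ij / E from discrete-ordinate intensities.

    intensity:  (n,) specific intensity along each ordinate
    directions: (n, 3) unit vectors of the ordinates
    weights:    (n,) angular quadrature weights summing to 4*pi
    """
    E = np.sum(weights * intensity)   # zeroth moment ~ energy density (c = 1)
    P = np.einsum('n,n,ni,nj->ij', weights, intensity, directions, directions)
    return P / E

# Isotropic field on a 6-point axis-aligned ordinate set:
# the tensor reduces to diag(1/3), the Eddington approximation.
dirs = np.vstack([np.eye(3), -np.eye(3)])
w = np.full(6, 4 * np.pi / 6)
f_edd = eddington_tensor(np.ones(6), dirs, w)
print(np.round(f_edd, 6))
```

In a radiation hydrodynamics calculation this tensor, computed from the transfer solution, closes the moment equations in place of an ad hoc assumption such as the isotropic diag(1/3) value.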

  16. Auditory hallucinations treated by radio headphones.

    PubMed

    Feder, R

    1982-09-01

    A young man with chronic auditory hallucinations was treated according to the principle that increasing external auditory stimulation decreases the likelihood of auditory hallucinations. Listening to a radio through stereo headphones in conditions of low auditory stimulation eliminated the patient's hallucinations.

  17. Language Development Activities through the Auditory Channel.

    ERIC Educational Resources Information Center

    Fitzmaurice, Peggy, Comp.; And Others

    Presented primarily for use with educable mentally retarded and learning disabled children are approximately 100 activities for language development through the auditory channel. Activities are grouped under the following three areas: receptive skills (auditory decoding, auditory memory, and auditory discrimination); expressive skills (auditory…

  18. Cortical auditory disorders: clinical and psychoacoustic features.

    PubMed Central

    Mendez, M F; Geehan, G R

    1988-01-01

    The symptoms of two patients with bilateral cortical auditory lesions evolved from cortical deafness to other auditory syndromes: generalised auditory agnosia, amusia and/or pure word deafness, and a residual impairment of temporal sequencing. On investigation, both had dysacusis, absent middle latency evoked responses, acoustic errors in sound recognition and matching, inconsistent auditory behaviours, and similarly disturbed psychoacoustic discrimination tasks. These findings indicate that the different clinical syndromes caused by cortical auditory lesions form a spectrum of related auditory processing disorders. Differences between syndromes may depend on the degree of involvement of a primary cortical processing system, the more diffuse accessory system, and possibly the efferent auditory system. PMID:2450968

  19. Monaural Auditory Cue Affects the Process of Choosing the Initial Swing Leg in Gait Initiation.

    PubMed

    Hiraoka, Koichi; Ae, Minori; Ogura, Nana; Sano, Chisa; Shiomi, Keigo; Morita, Yuji; Yokoyama, Haruka; Iwata, Yasuyuki; Jono, Yasutomo; Nomura, Yoshifumi; Tani, Keisuke; Chujo, Yuta

    2015-01-01

    The authors investigated the effect of an auditory cue on the choice of the initial swing leg in gait initiation. Healthy humans initiated gait in response to a monaural or binaural auditory cue. When the auditory cue was given in the ear ipsilateral to the preferred leg side, the participants consistently initiated their gait with the preferred leg. In the session in which the side of the monaural auditory cue was randomized trial by trial, the probability of initiating the gait with the nonpreferred leg increased when the cue was given in the ear contralateral to the preferred leg side. That probability did not increase significantly, however, in the session in which the cue was constantly given in the contralateral ear. With a binaural auditory cue, the reaction time of the anticipatory postural adjustment was shortened, but the probability of choosing the nonpreferred leg was not significantly increased. An auditory cue in the ear contralateral to the preferred leg side thus weakens the preference for choosing the preferred leg as the initial swing leg in gait initiation when the side of the auditory cue is unpredictable.

  20. Auditory-neurophysiological responses to speech during early childhood: Effects of background noise

    PubMed Central

    White-Schwoch, Travis; Davies, Evan C.; Thompson, Elaine C.; Carr, Kali Woodruff; Nicol, Trent; Bradlow, Ann R.; Kraus, Nina

    2015-01-01

    Early childhood is a critical period of auditory learning, during which children are constantly mapping sounds to meaning. But learning rarely occurs under ideal listening conditions—children are forced to listen against a relentless din. This background noise degrades the neural coding of these critical sounds, in turn interfering with auditory learning. Despite the importance of robust and reliable auditory processing during early childhood, little is known about the neurophysiology underlying speech processing in children so young. To better understand the physiological constraints these adverse listening scenarios impose on speech sound coding during early childhood, auditory-neurophysiological responses were elicited to a consonant-vowel syllable in quiet and background noise in a cohort of typically-developing preschoolers (ages 3–5 yr). Overall, responses were degraded in noise: they were smaller, less stable across trials, slower, and there was poorer coding of spectral content and the temporal envelope. These effects were exacerbated in response to the consonant transition relative to the vowel, suggesting that the neural coding of spectrotemporally-dynamic speech features is more tenuous in noise than the coding of static features—even in children this young. Neural coding of speech temporal fine structure, however, was more resilient to the addition of background noise than coding of temporal envelope information. Taken together, these results demonstrate that noise places a neurophysiological constraint on speech processing during early childhood by causing a breakdown in neural processing of speech acoustics. These results may explain why some listeners have inordinate difficulties understanding speech in noise. Speech-elicited auditory-neurophysiological responses offer objective insight into listening skills during early childhood by reflecting the integrity of neural coding in quiet and noise; this paper documents typical response properties
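    The "less stable across trials" finding refers to trial-to-trial response consistency. A minimal sketch of one common stability metric, correlating the average of odd-numbered trials with the average of even-numbered trials; the simulated data below are illustrative, not recordings from the study.

```python
import numpy as np

# Degraded neural coding in noise shows up as a lower inter-trial correlation.
def trial_stability(trials):
    """trials: (n_trials, n_samples) array of single-trial responses."""
    odd = trials[1::2].mean(axis=0)
    even = trials[0::2].mean(axis=0)
    return float(np.corrcoef(odd, even)[0, 1])

rng = np.random.default_rng(1)
t = np.linspace(0.0, 0.05, 400)              # 50 ms response window
signal = np.sin(2 * np.pi * 100 * t)         # toy 100 Hz evoked response
quiet = signal + 0.2 * rng.normal(size=(100, 400))   # low background noise
noisy = signal + 3.0 * rng.normal(size=(100, 400))   # heavy background noise
```

With these toy parameters the quiet condition yields a markedly higher stability value than the noisy one, mirroring the direction of the reported effect.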

  1. Assessment of ionization chamber correction factors in photon beams using a time saving strategy with PENELOPE code.

    PubMed

    Reis, C Q M; Nicolucci, P

    2016-02-01

    The purpose of this study was to investigate Monte Carlo-based perturbation and beam quality correction factors for ionization chambers in photon beams using a time-saving strategy with the PENELOPE code. Simulations for calculating absorbed doses to water using full spectra of photon beams impinging on the whole water phantom and those using a phase-space file previously stored around the point of interest were performed and compared. The widely used NE2571 ionization chamber was modeled with PENELOPE using data from the literature in order to calculate absorbed doses to the air cavity of the chamber. Absorbed doses to water at reference depth were also calculated for providing the perturbation and beam quality correction factors for that chamber in high energy photon beams. Results obtained in this study show that simulations with appropriately stored phase-space files can be up to ten times shorter than those using a full spectrum of photon beams in the input file. Values of kQ and its components for the NE2571 ionization chamber showed good agreement with published values in the literature and are provided with typical statistical uncertainties of 0.2%. Comparisons to kQ values published in current dosimetry protocols such as the AAPM TG-51 and IAEA TRS-398 showed maximum percentage differences of 0.1% and 0.6% respectively. The proposed strategy presented a significant efficiency gain and can be applied for a variety of ionization chambers and clinical photon beams.
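    In the Monte Carlo formalism behind such studies, the beam quality correction factor kQ reduces to a ratio of water-to-cavity dose ratios at the user quality Q and the 60Co reference quality Q0. A minimal sketch with placeholder dose values, not results from the study:

```python
# Monte Carlo form of the TG-51 / TRS-398 beam quality correction factor:
# kQ = (Dw/Dcav)_Q / (Dw/Dcav)_Q0, from simulated absorbed doses.
def beam_quality_factor(dw_q, dcav_q, dw_q0, dcav_q0):
    """kQ from water and chamber-cavity doses at Q and at 60Co (Q0)."""
    return (dw_q / dcav_q) / (dw_q0 / dcav_q0)

# Illustrative doses in Gy per primary particle (placeholders only).
kq = beam_quality_factor(dw_q=7.60e-14, dcav_q=7.72e-14,
                         dw_q0=2.90e-14, dcav_q0=2.92e-14)
```

Because kQ is a ratio of ratios, systematic normalization errors common to all four simulations largely cancel, which is part of why the phase-space shortcut is safe to apply here.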

  2. Real-time high-resolution downsampling algorithm on many-core processor for spatially scalable video coding

    NASA Astrophysics Data System (ADS)

    Buhari, Adamu Muhammad; Ling, Huo-Chong; Baskaran, Vishnu Monn; Wong, KokSheik

    2015-01-01

    The progression toward spatially scalable video coding (SVC) solutions for ubiquitous endpoint systems introduces challenges to sustain real-time frame rates in downsampling high-resolution videos into multiple layers. In addressing these challenges, we put forward a hardware accelerated downsampling algorithm on a parallel computing platform. First, we investigate the principal architecture of a serial downsampling algorithm in the Joint-Scalable-Video-Model reference software to identify the performance limitations for spatially SVC. Then, a parallel multicore-based downsampling algorithm is studied as a benchmark. Experimental results for this algorithm using an 8-core processor exhibit performance speedup of 5.25× against the serial algorithm in downsampling a quantum extended graphics array at 1536p video resolution into three lower resolution layers (i.e., Full-HD at 1080p, HD at 720p, and Quarter-HD at 540p). However, the achieved speedup here does not translate into the minimum required frame rate of 15 frames per second (fps) for real-time video processing. To improve the speedup, a many-core based downsampling algorithm using the compute unified device architecture parallel computing platform is proposed. The proposed algorithm increases the performance speedup to 26.14× against the serial algorithm. Crucially, the proposed algorithm exceeds the target frame rate of 15 fps, which in turn is advantageous to the overall performance of the video encoding process.
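    The reason downsampling parallelizes so well is that every output pixel of every layer is independent. A box-filter sketch of that data-parallel structure (the JSVM reference software actually uses a polyphase filter; sizes and factors here are illustrative):

```python
import numpy as np

# Illustrative box-filter downsampling by integer factors. Each output pixel
# (and each spatial layer) depends only on its own input block, so rows or
# tiles can be mapped onto separate cores or CUDA thread blocks.
def downsample(frame, fy, fx):
    h, w = frame.shape
    trimmed = frame[:h - h % fy, :w - w % fx]          # drop ragged edges
    return trimmed.reshape(h // fy, fy, w // fx, fx).mean(axis=(1, 3))

frame = np.arange(64, dtype=float).reshape(8, 8)       # stand-in luma plane
layer = downsample(frame, 2, 2)                        # one lower-resolution layer
```

Each lower-resolution layer is produced from the source frame independently, which is exactly the structure the many-core implementation exploits.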

  3. Auditory-olfactory synesthesia coexisting with auditory-visual synesthesia.

    PubMed

    Jackson, Thomas E; Sandramouli, Soupramanien

    2012-09-01

    Synesthesia is an unusual condition in which stimulation of one sensory modality causes an experience in another sensory modality or when a sensation in one sensory modality causes another sensation within the same modality. We describe a previously unreported association of auditory-olfactory synesthesia coexisting with auditory-visual synesthesia. Given that many types of synesthesias involve vision, it is important that the clinician provide these patients with the necessary information and support that is available.

  4. Response recovery in the locust auditory pathway.

    PubMed

    Wirtssohn, Sarah; Ronacher, Bernhard

    2016-01-01

    Temporal resolution and the time courses of recovery from acute adaptation of neurons in the auditory pathway of the grasshopper Locusta migratoria were investigated with a response recovery paradigm. We stimulated with a series of single click and click pair stimuli while performing intracellular recordings from neurons at three processing stages: receptors and first and second order interneurons. The response to the second click was expressed relative to the single click response. This allowed us to uncover the basic temporal resolution of these neurons. The effect of adaptation increased with processing layer. While neurons in the auditory periphery displayed a steady response recovery after a short initial adaptation, many interneurons showed nonlinear effects: most prominently a long-lasting suppression of the response to the second click in a pair, as well as a gain in response if a click was preceded by another click a few milliseconds earlier. Our results reveal a distributed temporal filtering of input at an early auditory processing stage. This set of specified filters is very likely homologous across grasshopper species and thus forms the neurophysiological basis for extracting relevant information from a variety of different temporal signals. Interestingly, in terms of spike timing precision, neurons at all three processing layers recovered very quickly, within 20 ms. Spike waveform analysis of several neuron types did not sufficiently explain the response recovery profiles implemented in these neurons, indicating that temporal resolution in neurons located at several processing layers of the auditory pathway is not necessarily limited by the spike duration and refractory period.
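    The recovery paradigm above reduces to a few lines of analysis: express the spike count evoked by the second click of a pair relative to the single-click response, per inter-click interval. The spike counts below are hypothetical, purely for illustration:

```python
# Sketch of the response-recovery metric: second-click response normalized
# by the single-click response, as a function of inter-click interval (ms).
def recovery_curve(pair_counts, single_count):
    """pair_counts: {interval_ms: mean spike count to the 2nd click}."""
    return {dt: n / single_count for dt, n in sorted(pair_counts.items())}

curve = recovery_curve({2: 0.3, 5: 0.6, 10: 0.9, 20: 1.0, 40: 1.2},
                       single_count=1.0)
suppressed = [dt for dt, r in curve.items() if r < 1.0]   # still adapted
facilitated = [dt for dt, r in curve.items() if r > 1.0]  # response gain
```

A ratio below 1 at an interval marks residual suppression, a ratio above 1 the facilitation-like gain some interneurons showed.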

  5. The plastic ear and perceptual relearning in auditory spatial perception.

    PubMed

    Carlile, Simon

    2014-01-01

    The auditory system of adult listeners has been shown to accommodate to altered spectral cues to sound location, which presumably provides the basis for recalibration to changes in the shape of the ear over a lifetime. Here we review the role of auditory and non-auditory inputs to the perception of sound location and consider a range of recent experiments looking at the role of non-auditory inputs in the process of accommodation to these altered spectral cues. A number of studies have used small ear molds to modify the spectral cues, resulting in significant degradation of localization performance. Following chronic exposure (10-60 days) performance recovers to some extent, and recent work has demonstrated that this occurs for both audio-visual and audio-only regions of space. This begs the question of what the teacher signal is for this remarkable functional plasticity in the adult nervous system. Following a brief review of the influence of the motor state in auditory localization, we consider the potential role of auditory-motor learning in the perceptual recalibration of the spectral cues. Several recent studies have considered how multi-modal and sensory-motor feedback might influence accommodation to altered spectral cues produced by ear molds or through virtual auditory space stimulation using non-individualized spectral cues. The work with ear molds demonstrates that a relatively short period of training involving audio-motor feedback (5-10 days) significantly improved both the rate and extent of accommodation to altered spectral cues. This has significant implications not only for the mechanisms by which this complex sensory information is encoded to provide spatial cues but also for adaptive training to altered auditory inputs. The review concludes by considering the implications for rehabilitative training with hearing aids and cochlear prostheses.

  6. The plastic ear and perceptual relearning in auditory spatial perception

    PubMed Central

    Carlile, Simon

    2014-01-01

    The auditory system of adult listeners has been shown to accommodate to altered spectral cues to sound location, which presumably provides the basis for recalibration to changes in the shape of the ear over a lifetime. Here we review the role of auditory and non-auditory inputs to the perception of sound location and consider a range of recent experiments looking at the role of non-auditory inputs in the process of accommodation to these altered spectral cues. A number of studies have used small ear molds to modify the spectral cues, resulting in significant degradation of localization performance. Following chronic exposure (10–60 days) performance recovers to some extent, and recent work has demonstrated that this occurs for both audio-visual and audio-only regions of space. This begs the question of what the teacher signal is for this remarkable functional plasticity in the adult nervous system. Following a brief review of the influence of the motor state in auditory localization, we consider the potential role of auditory-motor learning in the perceptual recalibration of the spectral cues. Several recent studies have considered how multi-modal and sensory-motor feedback might influence accommodation to altered spectral cues produced by ear molds or through virtual auditory space stimulation using non-individualized spectral cues. The work with ear molds demonstrates that a relatively short period of training involving audio-motor feedback (5–10 days) significantly improved both the rate and extent of accommodation to altered spectral cues. This has significant implications not only for the mechanisms by which this complex sensory information is encoded to provide spatial cues but also for adaptive training to altered auditory inputs. The review concludes by considering the implications for rehabilitative training with hearing aids and cochlear prostheses. PMID:25147497

  7. Auditory short-term memory activation during score reading.

    PubMed

    Simoens, Veerle L; Tervaniemi, Mari

    2013-01-01

    Performing music on the basis of reading a score requires reading ahead of what is being played in order to anticipate the necessary actions to produce the notes. Score reading thus not only involves the decoding of a visual score and the comparison to the auditory feedback, but also short-term storage of the musical information due to the delay of the auditory feedback during reading ahead. This study investigates the mechanisms of encoding of musical information in short-term memory during such a complicated procedure. The study had three parts. First, professional musicians participated in an electroencephalographic (EEG) experiment to study the slow wave potentials during a time interval of short-term memory storage in a situation that requires cross-modal translation and short-term storage of visual material to be compared with delayed auditory material, as is the case in music score reading. This delayed visual-to-auditory matching task was compared with delayed visual-visual and auditory-auditory matching tasks in terms of EEG topography and voltage amplitudes. Second, an additional behavioural experiment was performed to determine which type of distractor interfered most with the score reading-like task. Third, the self-reported strategies of the participants were also analyzed. All three parts of this study point towards the same conclusion: during music score reading, the musician most likely first translates the visual score into an auditory cue, probably starting around 700 or 1300 ms, ready for storage and delayed comparison with the auditory feedback.

  8. Corticofugal regulation of auditory sensitivity in the bat inferior colliculus.

    PubMed

    Jen, P H; Chen, Q C; Sun, X D

    1998-12-01

    Under free-field stimulation conditions, corticofugal regulation of auditory sensitivity of neurons in the central nucleus of the inferior colliculus of the big brown bat, Eptesicus fuscus, was studied by blocking activities of auditory cortical neurons with Lidocaine or by electrical stimulation in auditory cortical neuron recording sites. The corticocollicular pathway regulated the number of impulses, the auditory spatial response areas and the frequency-tuning curves of inferior colliculus neurons through facilitation or inhibition. Corticofugal regulation was most effective at low sound intensity and was dependent upon the time interval between acoustic and electrical stimuli. At optimal inter-stimulus intervals, inferior colliculus neurons had the smallest number of impulses and the longest response latency during corticofugal inhibition. The opposite effects were observed during corticofugal facilitation. Corticofugal inhibitory latency was longer than corticofugal facilitatory latency. Iontophoretic application of gamma-aminobutyric acid and bicuculline to inferior colliculus recording sites produced effects similar to those observed during corticofugal inhibition and facilitation. We suggest that corticofugal regulation of central auditory sensitivity can provide an animal with a mechanism to regulate acoustic signal processing in the ascending auditory pathway.

  9. Auditory temporal processing skills in musicians with dyslexia.

    PubMed

    Bishop-Liebler, Paula; Welch, Graham; Huss, Martina; Thomson, Jennifer M; Goswami, Usha

    2014-08-01

    The core cognitive difficulty in developmental dyslexia involves phonological processing, but adults and children with dyslexia also have sensory impairments. Impairments in basic auditory processing show particular links with phonological impairments, and recent studies with dyslexic children across languages reveal a relationship between auditory temporal processing and sensitivity to rhythmic timing and speech rhythm. As rhythm is explicit in music, musical training might have a beneficial effect on the auditory perception of acoustic cues to rhythm in dyslexia. Here we took advantage of the presence of musicians with and without dyslexia in musical conservatoires, comparing their auditory temporal processing abilities with those of dyslexic non-musicians matched for cognitive ability. Musicians with dyslexia showed equivalent auditory sensitivity to musicians without dyslexia and also showed equivalent rhythm perception. The data support the view that extensive rhythmic experience initiated during childhood (here in the form of music training) can affect basic auditory processing skills which are found to be deficient in individuals with dyslexia. PMID:25044949

  10. Auditory function in individuals within Leber's hereditary optic neuropathy pedigrees.

    PubMed

    Rance, Gary; Kearns, Lisa S; Tan, Johanna; Gravina, Anthony; Rosenfeld, Lisa; Henley, Lauren; Carew, Peter; Graydon, Kelley; O'Hare, Fleur; Mackey, David A

    2012-03-01

    The aims of this study are to investigate whether auditory dysfunction is part of the spectrum of neurological abnormalities associated with Leber's hereditary optic neuropathy (LHON) and to determine the perceptual consequences of auditory neuropathy (AN) in affected listeners. Forty-eight subjects, confirmed by genetic testing as having one of four mitochondrial mutations associated with LHON (mtDNA11778, mtDNA14484, mtDNA14482 and mtDNA3460), participated. Thirty-two of these had lost vision, and 16 were asymptomatic at the point of data collection. While the majority of individuals showed normal sound detection, >25% (of both symptomatic and asymptomatic participants) showed electrophysiological evidence of AN with either absent or severely delayed auditory brainstem potentials. Abnormalities were observed for each of the mutations, but subjects with the mtDNA11778 type were the most affected. Auditory perception was also abnormal in both symptomatic and asymptomatic subjects, with >20% of cases showing impaired detection of auditory temporal (timing) cues and >30% showing abnormal speech perception both in quiet and in the presence of background noise. The findings of this study indicate that a relatively high proportion of individuals with the LHON genetic profile may suffer functional hearing difficulties due to neural abnormality in the central auditory pathways.

  11. Temporal coding in rhythm tasks revealed by modality effects.

    PubMed

    Glenberg, A M; Jona, M

    1991-09-01

    Temporal coding has been studied by examining the perception and reproduction of rhythms and by examining memory for the order of events in a list. We attempt to link these research programs both empirically and theoretically. Glenberg and Swanson (1986) proposed that the superior recall of auditory material, compared with visual material, reflects more accurate temporal coding for the auditory material. In this paper, we demonstrate that a similar modality effect can be produced in a rhythm task. Auditory rhythms composed of stimuli of two durations are reproduced more accurately than are visual rhythms. Furthermore, it appears that the auditory superiority reflects enhanced chunking of the auditory material rather than better identification of durations. PMID:1956312

  12. Auditory Processing Disorder in Children

    MedlinePlus


  13. Leiomyoma of External Auditory Canal.

    PubMed

    George, M V; Puthiyapurayil, Jamsheeda

    2016-09-01

    This article reports a case of piloleiomyoma of the external auditory canal, the 7th reported case of leiomyoma at this site and only the 2nd arising from the arrectores pilorum muscles; the other five cases were angioleiomyomas arising from blood vessels. A 52-year-old male presented with a mass in the right external auditory canal and decreased hearing of 6 months' duration. Tumor excision was performed by an endaural approach, and histopathological examination showed leiomyoma. Leiomyoma of the external auditory canal is extremely rare because of the scarcity of smooth muscle there, so it should be considered a very rare differential diagnosis for any tumor or polyp in the ear canal. PMID:27508144

  14. Classroom Demonstrations of Auditory Perception.

    ERIC Educational Resources Information Center

    Haws, LaDawn; Oppy, Brian J.

    2002-01-01

    Presents activities to help students gain understanding about auditory perception. Describes demonstrations that cover topics, such as sound localization, wave cancellation, frequency/pitch variation, and the influence of media on sound propagation. (CMK)

  15. Maps of the Auditory Cortex.

    PubMed

    Brewer, Alyssa A; Barton, Brian

    2016-07-01

    One of the fundamental properties of the mammalian brain is that sensory regions of cortex are formed of multiple, functionally specialized cortical field maps (CFMs). Each CFM comprises two orthogonal topographical representations, reflecting two essential aspects of sensory space. In auditory cortex, auditory field maps (AFMs) are defined by the combination of tonotopic gradients, representing the spectral aspects of sound (i.e., tones), with orthogonal periodotopic gradients, representing the temporal aspects of sound (i.e., period or temporal envelope). Converging evidence from cytoarchitectural and neuroimaging measurements underlies the definition of 11 AFMs across core and belt regions of human auditory cortex, with likely homology to those of macaque. On a macrostructural level, AFMs are grouped into cloverleaf clusters, an organizational structure also seen in visual cortex. Future research can now use these AFMs to investigate specific stages of auditory processing, key for understanding behaviors such as speech perception and multimodal sensory integration. PMID:27145914

  16. The influence of auditory-motor coupling on fractal dynamics in human gait.

    PubMed

    Hunt, Nathaniel; McGrath, Denise; Stergiou, Nicholas

    2014-01-01

    Humans exhibit an innate ability to synchronize their movements to music. The field of gait rehabilitation has sought to capitalize on this phenomenon by invoking patients to walk in time to rhythmic auditory cues with a view to improving pathological gait. However, the temporal structure of the auditory cue, and hence the temporal structure of the target behavior has not been sufficiently explored. This study reveals the plasticity of auditory-motor coupling in human walking in relation to 'complex' auditory cues. The authors demonstrate that auditory-motor coupling can be driven by different coloured auditory noise signals (e.g. white, brown), shifting the fractal temporal structure of gait dynamics towards the statistical properties of the signals used. This adaptive capability observed in whole-body movement, could potentially be harnessed for targeted neuromuscular rehabilitation in patient groups, depending on the specific treatment goal. PMID:25080936

  17. The influence of auditory-motor coupling on fractal dynamics in human gait

    PubMed Central

    Hunt, Nathaniel; McGrath, Denise; Stergiou, Nicholas

    2014-01-01

    Humans exhibit an innate ability to synchronize their movements to music. The field of gait rehabilitation has sought to capitalize on this phenomenon by invoking patients to walk in time to rhythmic auditory cues with a view to improving pathological gait. However, the temporal structure of the auditory cue, and hence the temporal structure of the target behavior has not been sufficiently explored. This study reveals the plasticity of auditory-motor coupling in human walking in relation to ‘complex' auditory cues. The authors demonstrate that auditory-motor coupling can be driven by different coloured auditory noise signals (e.g. white, brown), shifting the fractal temporal structure of gait dynamics towards the statistical properties of the signals used. This adaptive capability observed in whole-body movement, could potentially be harnessed for targeted neuromuscular rehabilitation in patient groups, depending on the specific treatment goal. PMID:25080936
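    A sketch of how such 'coloured' cueing can be generated: inter-beat intervals with brown-noise structure are simply an integrated white-noise series. The mean interval and step size below are illustrative values, not the study's parameters:

```python
import numpy as np

# Build metronome click times whose inter-beat intervals follow brown noise
# (a random walk, i.e. integrated white noise) around a mean interval.
rng = np.random.default_rng(42)

def brown_intervals(n, mean=1.0, step_sd=0.01):
    """n inter-beat intervals (seconds) following a random walk around `mean`."""
    return mean + np.cumsum(rng.normal(0.0, step_sd, n))

ibis = brown_intervals(500)
onsets = np.cumsum(ibis)        # click onset times (s) for the auditory cue
```

Swapping the random-walk intervals for independent draws gives white-noise cueing, so a single parameterization covers the family of cue structures the study contrasts.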

  18. Space-time trellis coding with transmit laser selection for FSO links over strong atmospheric turbulence channels.

    PubMed

    García-Zambrana, Antonio; Castillo-Vázquez, Carmen; Castillo-Vázquez, Beatriz

    2010-03-15

    Atmospheric turbulence produces fluctuations in the irradiance of the transmitted optical beam, which is known as atmospheric scintillation, severely degrading the link performance. In this paper, a scheme combining transmit laser selection (TLS) and space-time trellis code (STTC) for multiple-input-single-output (MISO) free-space optical (FSO) communication systems with intensity modulation and direct detection (IM/DD) over strong atmospheric turbulence channels is analyzed. Assuming channel state information at the transmitter and receiver, we propose the transmit diversity technique based on the selection of two out of the available L lasers corresponding to the optical paths with greater values of scintillation to transmit the baseline STTCs designed for two transmit antennas. Based on a pairwise error probability (PEP) analysis, results in terms of bit error rate are presented when the scintillation follows negative exponential and K distributions, which cover a wide range of strong atmospheric turbulence conditions. Obtained results show a diversity order of 2L-1 when L transmit lasers are available and a simple two-state STTC with rate 1 bit/(s·Hz) is used. Simulation results further confirm the analytical results.
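    The transmit-laser-selection step itself is simple: with channel state information at the transmitter, pick the two of L paths with the largest current irradiance gains and drive the two-antenna STTC over them. A sketch under negative-exponential fading; all values are illustrative:

```python
import numpy as np

# Transmit laser selection (TLS): choose the k strongest of L optical paths.
rng = np.random.default_rng(7)

def select_lasers(irradiances, k=2):
    """Indices of the k strongest optical paths, strongest first."""
    return np.argsort(irradiances)[::-1][:k]

L = 4
gains = rng.exponential(scale=1.0, size=L)   # negative-exponential scintillation draws
chosen = select_lasers(gains)                # feed these two lasers the STTC symbols
```

Re-drawing `gains` each channel coherence time and re-running the selection is what yields the diversity benefit the PEP analysis quantifies.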

  19. Unidirectional transparent signal injection in finite-difference time-domain electromagnetic codes -application to reflectometry simulations

    SciTech Connect

    Silva, F. da; Hacquin, S.

    2005-03-01

    We present a novel unidirectional transparent signal (UTS) injection technique that allows unidirectional injection of a wave in a wave-guiding structure, applicable to 2D finite-difference time-domain electromagnetic codes, both Maxwell and wave-equation. It is particularly suited to continuous wave radar-like simulations. The scheme gives a unidirectional injection of a signal while remaining transparent to waves propagating in the opposite direction (directional coupling). The reflected or backscattered (returned) waves are separated from the probing waves, allowing direct access to the amplitude and phase of the returned wave. It also facilitates the signal processing used to extract the phase derivative (or group delay) when simulating radar systems. Although general, the technique is particularly suited to swept frequency sources (frequency modulated) in the context of reflectometry, a fusion plasma diagnostic. The UTS applications presented here are restricted to fusion plasma reflectometry simulations for different physical situations. This method can, nevertheless, also be used in other dispersive media such as dielectrics, being useful, for example, in the simulation of plasma-filled waveguides or directional couplers.
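    A generic 1D relative of such unidirectional injection is the textbook total-field/scattered-field (TF/SF) source: the incident wave is launched only on one side of an injection index, which stays transparent to counter-propagating (returned) waves. This sketch is a standard FDTD scheme in normalized units with Courant number 1; it is not the paper's actual 2D implementation:

```python
import numpy as np

# 1D FDTD with a TF/SF source at index K0: the pulse travels only to the
# right of K0, while left-travelling waves pass the injection point freely.
N, STEPS, K0 = 200, 300, 50
ez = np.zeros(N)                  # E field at integer grid points
hy = np.zeros(N - 1)              # H field at half grid points

def g(q):                         # incident pulse: E at node K0 at time step q
    return np.exp(-((q - 30.0) / 8.0) ** 2)

for q in range(STEPS):
    hy += np.diff(ez)             # H update (normalized units)
    hy[K0 - 1] -= g(q)            # TF/SF correction: remove incident E
    ez[1:-1] += np.diff(hy)       # E update (end nodes fixed at 0: PEC walls)
    ez[K0] += g(q + 1)            # TF/SF correction: add incident H term

left_leak = np.abs(ez[:K0 - 5]).max()   # scattered-field side stays ~empty
```

After 300 steps the pulse has entered the right (total-field) region and reflected off the far wall, while the region left of the injection point remains at numerical-noise level, which is the transparency property the abstract describes.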

  20. ADESSA: A Real-Time Decision Support Service for Delivery of Semantically Coded Adverse Drug Event Data

    PubMed Central

    Duke, Jon D.; Friedlin, Jeff

    2010-01-01

    Evaluating medications for potential adverse events is a time-consuming process, typically involving manual lookup of information by physicians. This process can be expedited by CDS systems that support dynamic retrieval and filtering of adverse drug events (ADEs), but such systems require a source of semantically-coded ADE data. We created a two-component system that addresses this need. First we created a natural language processing application which extracts adverse events from Structured Product Labels and generates a standardized ADE knowledge base. We then built a decision support service that consumes a Continuity of Care Document and returns a list of patient-specific ADEs. Our database currently contains 534,125 ADEs from 5602 product labels. An NLP evaluation of 9529 ADEs showed recall of 93% and precision of 95%. On a trial set of 30 CCDs, the system provided adverse event data for 88% of drugs and returned these results in an average of 620 ms. PMID:21346964
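    The decision-support step can be pictured as a lookup of a patient's medication list against the semantically coded knowledge base. A toy sketch; the drug names, events, and structure are illustrative, not entries from the actual database:

```python
# Toy ADE knowledge base keyed by drug name (illustrative entries only).
ADE_KB = {
    "lisinopril": ["cough", "angioedema", "hyperkalemia"],
    "metformin": ["lactic acidosis", "nausea"],
    "warfarin": ["bleeding", "skin necrosis"],
}

def patient_ades(medications, kb=ADE_KB):
    """Return patient-specific adverse events keyed by matched drug."""
    meds = {m.lower() for m in medications}
    return {drug: events for drug, events in kb.items() if drug in meds}

hits = patient_ades(["Lisinopril", "Aspirin"])   # aspirin not in the toy KB
```

In the real pipeline the medication list would be parsed from the CCD and the knowledge base would hold the NLP-extracted, coded label data, but the join itself has this shape.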

  1. Parallel Monte Carlo transport modeling in the context of a time-dependent, three-dimensional multi-physics code

    SciTech Connect

    Procassini, R.J.

    1997-12-31

    The fine-scale, multi-space resolution that is envisioned for accurate simulations of complex weapons systems in three spatial dimensions implies flop-rate and memory-storage requirements that will only be obtained in the near future through the use of parallel computational techniques. Since the Monte Carlo transport models in these simulations usually stress both of these computational resources, they are prime candidates for parallelization. The MONACO Monte Carlo transport package, which is currently under development at LLNL, will utilize two types of parallelism within the context of a multi-physics design code: decomposition of the spatial domain across processors (spatial parallelism) and distribution of particles in a given spatial subdomain across additional processors (particle parallelism). This implementation of the package will utilize explicit data communication between domains (message passing). Such a parallel implementation of a Monte Carlo transport model will result in non-deterministic communication patterns. The communication of particles between subdomains during a Monte Carlo time step may require a significant level of effort to achieve a high parallel efficiency.
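    The spatial-parallelism idea can be mimicked without MPI: particles are tracked within their subdomain until they cross its boundary, at which point they are buffered for "communication" to the neighboring domain. The random-walk kernel and two-domain layout are invented stand-ins; a real code such as MONACO would exchange these buffers via message passing, with traffic volume that is non-deterministic, as the abstract notes.

```python
import random

random.seed(1)
DOMAINS = [(0.0, 0.5), (0.5, 1.0)]    # two spatial subdomains on [0, 1)

def step(particles, lo, hi):
    """Advance particles one step; return (kept, outgoing) lists."""
    kept, outgoing = [], []
    for x in particles:
        x += random.uniform(-0.05, 0.05)     # toy transport kernel
        x = min(max(x, 0.0), 1.0 - 1e-9)     # clamp inside the problem
        (kept if lo <= x < hi else outgoing).append(x)
    return kept, outgoing

pop = [[random.random() * 0.5 for _ in range(1000)], []]  # all start in domain 0
for _ in range(50):                                       # 50 "time steps"
    buffers = [[], []]
    for d, (lo, hi) in enumerate(DOMAINS):
        pop[d], out = step(pop[d], lo, hi)
        buffers[1 - d].extend(out)           # outgoing particles "sent" across
    for d in range(2):
        pop[d].extend(buffers[d])            # "received" migrated particles

print(len(pop[0]) + len(pop[1]))             # particle count is conserved: 1000
```

    Exchanging the buffers once per step mirrors the per-time-step particle communication whose cost the abstract identifies as the obstacle to high parallel efficiency.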

  2. Mathematical Modeling of the Auditory Periphery.

    NASA Astrophysics Data System (ADS)

    Koshigoe, Shozo

    The auditory periphery is conventionally divided into three parts, namely, the outer, middle, and inner ear (or cochlea). Mathematical modeling of the auditory periphery has been used for increasing our understanding of its mechanics via the simulation of experimental results, and for estimating unknown parameters. The various techniques used in this study for modeling the auditory periphery are: (1) Green function methods for investigation of the external ear directional filter functions; (2) finite difference methods in cochlear mechanical model calculations; (3) dispersion relation tests of the consistency of model calculations; (4) dispersion relation checks of experimental cochlear response data for approximate consistency with the implications of causality, linearity, time translation invariance, and minimum phase behavior; (5) dispersion relation tests of the stability of the linear cochlear models with active elements; (6) the introduction of viscosity effects in cochlear mechanics in order to account for data on the low frequency cochlear input impedance; and (7) the incorporation of a non-linear feedback outer-hair-cell model into a cochlear model in order to account for the physiological and psychological data (such as spontaneous and induced acoustic emissions from human ears and their active non-linear interactions with external stimuli).

  3. Auditory cortical detection and discrimination correlates with communicative significance.

    PubMed

    Liu, Robert C; Schreiner, Christoph E

    2007-07-01

    Plasticity studies suggest that behavioral relevance can change the cortical processing of trained or conditioned sensory stimuli. However, whether this occurs in the context of natural communication, where stimulus significance is acquired through social interaction, has not been well investigated, perhaps because neural responses to species-specific vocalizations can be difficult to interpret within a systematic framework. The ultrasonic communication system between isolated mouse pups and adult females that either do or do not recognize the calls' significance provides an opportunity to explore this issue. We applied an information-based analysis to multi- and single unit data collected from anesthetized mothers and pup-naïve females to quantify how the communicative significance of pup calls affects their encoding in the auditory cortex. The timing and magnitude of information that cortical responses convey (at a 2-ms resolution) for pup call detection and discrimination was significantly improved in mothers compared to naïve females, most likely because of changes in call frequency encoding. This was not the case for a non-natural sound ensemble outside the mouse vocalization repertoire. The results demonstrate that a sensory cortical change in the timing code for communication sounds is correlated with the vocalizations' behavioral relevance, potentially enhancing functional processing by improving its signal to noise ratio. PMID:17564499

  4. Disruption of hierarchical predictive coding during sleep

    PubMed Central

    Strauss, Melanie; Sitt, Jacobo D.; King, Jean-Remi; Elbaz, Maxime; Azizi, Leila; Buiatti, Marco; Naccache, Lionel; van Wassenhove, Virginie; Dehaene, Stanislas

    2015-01-01

    When presented with an auditory sequence, the brain acts as a predictive-coding device that extracts regularities in the transition probabilities between sounds and detects unexpected deviations from these regularities. Does such prediction require conscious vigilance, or does it continue to unfold automatically in the sleeping brain? The mismatch negativity and P300 components of the auditory event-related potential, reflecting two steps of auditory novelty detection, have been inconsistently observed in the various sleep stages. To clarify whether these steps remain during sleep, we recorded simultaneous electroencephalographic and magnetoencephalographic signals during wakefulness and during sleep in normal subjects listening to a hierarchical auditory paradigm including short-term (local) and long-term (global) regularities. The global response, reflected in the P300, vanished during sleep, in line with the hypothesis that it is a correlate of high-level conscious error detection. The local mismatch response remained across all sleep stages (N1, N2, and REM sleep), but with an incomplete structure; compared with wakefulness, a specific peak reflecting prediction error vanished during sleep. Those results indicate that sleep leaves initial auditory processing and passive sensory response adaptation intact, but specifically disrupts both short-term and long-term auditory predictive coding. PMID:25737555

  5. Left auditory cortex gamma synchronization and auditory hallucination symptoms in schizophrenia

    PubMed Central

    Spencer, Kevin M; Niznikiewicz, Margaret A; Nestor, Paul G; Shenton, Martha E; McCarley, Robert W

    2009-01-01

    Background: Oscillatory electroencephalogram (EEG) abnormalities may reflect neural circuit dysfunction in neuropsychiatric disorders. Previously we found positive correlations between the phase synchronization of beta and gamma oscillations and hallucination symptoms in schizophrenia patients. These findings suggest that the propensity for hallucinations is associated with an increased tendency for neural circuits in sensory cortex to enter states of oscillatory synchrony. Here we tested this hypothesis by examining whether the 40 Hz auditory steady-state response (ASSR) generated in the left primary auditory cortex is positively correlated with auditory hallucination symptoms in schizophrenia. We also examined whether the 40 Hz ASSR deficit in schizophrenia was associated with cross-frequency interactions. Sixteen healthy control subjects (HC) and 18 chronic schizophrenia patients (SZ) listened to 40 Hz binaural click trains. The EEG was recorded from 60 electrodes and average-referenced offline. A 5-dipole model was fit from the HC grand-average ASSR, with 2 pairs of superior temporal dipoles and a deep midline dipole. Time-frequency decomposition was performed on the scalp EEG and source data. Results: Phase locking factor (PLF) and evoked power were reduced in SZ at fronto-central electrodes, replicating prior findings. PLF was reduced in SZ for non-homologous right and left hemisphere sources. Left hemisphere source PLF in SZ was positively correlated with auditory hallucination symptoms, and was modulated by delta phase. Furthermore, the correlations between source evoked power and PLF found in HC were reduced in SZ for the LH sources. Conclusion: These findings suggest that differential neural circuit abnormalities may be present in the left and right auditory cortices in schizophrenia. In addition, they provide further support for the hypothesis that hallucinations are related to cortical hyperexcitability, which is manifested by an increased propensity

  6. External auditory osteoma.

    PubMed

    Carbone, Peter N; Nelson, Brenda L

    2012-06-01

    External auditory canal (EAC) osteomas are rare, benign bony neoplasms that occur in a wide range of patients. While chronic irritation and inflammation have been suggested as causal factors in several cases, significant data are lacking to support these suspicions. Symptoms are rare and can include hearing loss, vertigo, pain and tinnitus. Diagnosis is made based on a combination of clinical history and examination, radiographic imaging, and histopathology. Osteomas of the EAC are usually found incidentally and are unilateral and solitary. Computed tomography reveals a hyperdense, pedunculated mass arising from the tympanosquamous suture and lateral to the isthmus. Histopathologically, EAC osteomas are covered with periosteum and squamous epithelium, and consist of lamellated bone surrounding fibrovascular channels with minimal osteocytes. Osteomas have historically been compared and contrasted with exostoses of the EAC. While they share similarities, more often than not it is possible to distinguish the two bony neoplasms based on clinical history and radiographic studies. Debate remains in the medical literature as to whether basic histopathology can distinguish osteomas of the EAC from exostoses. Surgical excision is the standard treatment for EAC osteomas; however, close observation is considered acceptable in asymptomatic patients.

  7. Auditory Sketches: Very Sparse Representations of Sounds Are Still Recognizable.

    PubMed

    Isnard, Vincent; Taffou, Marine; Viaud-Delmon, Isabelle; Suied, Clara

    2016-01-01

    Sounds in our environment like voices, animal calls or musical instruments are easily recognized by human listeners. Understanding the key features underlying this robust sound recognition is an important question in auditory science. Here, we studied the recognition by human listeners of new classes of sounds: acoustic and auditory sketches, sounds that are severely impoverished but still recognizable. Starting from a time-frequency representation, a sketch is obtained by keeping only sparse elements of the original signal, here, by means of a simple peak-picking algorithm. Two time-frequency representations were compared: a biologically grounded one, the auditory spectrogram, which simulates peripheral auditory filtering, and a simple acoustic spectrogram, based on a Fourier transform. Three degrees of sparsity were also investigated. Listeners were asked to recognize the category to which a sketch sound belongs: singing voices, bird calls, musical instruments, and vehicle engine noises. Results showed that, with the exception of voice sounds, very sparse representations of sounds (10 features, or energy peaks, per second) could be recognized above chance. No clear differences could be observed between the acoustic and the auditory sketches. For the voice sounds, however, a completely different pattern of results emerged, with at-chance or even below-chance recognition performances, suggesting that the important features of the voice, whatever they are, were removed by the sketch process. Overall, these perceptual results were well correlated with a model of auditory distances, based on spectro-temporal excitation patterns (STEPs). This study confirms the potential of these new classes of sounds, acoustic and auditory sketches, to study sound recognition.

  8. Auditory Sketches: Very Sparse Representations of Sounds Are Still Recognizable

    PubMed Central

    Isnard, Vincent; Taffou, Marine; Viaud-Delmon, Isabelle; Suied, Clara

    2016-01-01

    Sounds in our environment like voices, animal calls or musical instruments are easily recognized by human listeners. Understanding the key features underlying this robust sound recognition is an important question in auditory science. Here, we studied the recognition by human listeners of new classes of sounds: acoustic and auditory sketches, sounds that are severely impoverished but still recognizable. Starting from a time-frequency representation, a sketch is obtained by keeping only sparse elements of the original signal, here, by means of a simple peak-picking algorithm. Two time-frequency representations were compared: a biologically grounded one, the auditory spectrogram, which simulates peripheral auditory filtering, and a simple acoustic spectrogram, based on a Fourier transform. Three degrees of sparsity were also investigated. Listeners were asked to recognize the category to which a sketch sound belongs: singing voices, bird calls, musical instruments, and vehicle engine noises. Results showed that, with the exception of voice sounds, very sparse representations of sounds (10 features, or energy peaks, per second) could be recognized above chance. No clear differences could be observed between the acoustic and the auditory sketches. For the voice sounds, however, a completely different pattern of results emerged, with at-chance or even below-chance recognition performances, suggesting that the important features of the voice, whatever they are, were removed by the sketch process. Overall, these perceptual results were well correlated with a model of auditory distances, based on spectro-temporal excitation patterns (STEPs). This study confirms the potential of these new classes of sounds, acoustic and auditory sketches, to study sound recognition. PMID:26950589
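    The peak-picking step can be sketched in a few lines: build a plain Fourier magnitude spectrogram and keep only the K strongest time-frequency points for the signal's duration, at the 10-features-per-second budget described above. Window length and hop size are arbitrary choices, and the auditory-spectrogram variant would replace the Fourier front end with a cochlear filter bank.

```python
import numpy as np

def sparse_sketch(signal, sr, n_fft=256, hop=128, feats_per_sec=10):
    """Keep only the strongest time-frequency peaks of a magnitude STFT."""
    window = np.hanning(n_fft)
    frames = np.array([signal[i:i + n_fft] * window
                       for i in range(0, len(signal) - n_fft, hop)])
    spec = np.abs(np.fft.rfft(frames, axis=1))        # acoustic spectrogram
    k = max(1, int(feats_per_sec * len(signal) / sr)) # sparsity budget
    thresh = np.sort(spec.ravel())[-k]                # k-th largest value
    return np.where(spec >= thresh, spec, 0.0)        # zero everything else

sr = 8000
t = np.arange(sr) / sr                                # 1 s of audio
sig = np.sin(2 * np.pi * 440 * t) \
    + 0.1 * np.random.default_rng(0).normal(size=sr)  # tone in noise
sketch = sparse_sketch(sig, sr)
print(np.count_nonzero(sketch))                       # 10 surviving energy peaks
```

    Resynthesizing audio from the surviving peaks (not shown) is what produces the impoverished but recognizable sketch sounds used in the listening tests.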

  9. Auditory brainstem response in dolphins.

    PubMed

    Ridgway, S H; Bullock, T H; Carder, D A; Seeley, R L; Woods, D; Galambos, R

    1981-03-01

    We recorded the auditory brainstem response (ABR) in four dolphins (Tursiops truncatus and Delphinus delphis). The ABR evoked by clicks consists of seven waves within 10 msec; two waves often contain dual peaks. The main waves can be identified with those of humans and laboratory mammals; in spite of a much longer path, the latencies of the peaks are almost identical to those of the rat. The dolphin ABR waves increase in latency as the intensity of a sound decreases by only 4 microseconds/decibel (dB) (for clicks with peak power at 66 kHz), compared to 40 microseconds/dB in humans (for clicks in the sonic range). Low-frequency clicks (6-kHz peak power) show a latency increase about 3 times as great (12 microseconds/dB). Although the dolphin brainstem tracks individual clicks to at least 600 per sec, the latency increases and amplitude decreases with increasing click rates. This effect varies among different waves of the ABR; it is around one-fifth the effect seen in man. The dolphin brain is specialized for handling brief, frequent clicks. A small latency difference is seen between clicks 180 degrees different in phase--i.e., with initial compression vs. initial rarefaction. The ABR can be used to test theories of dolphin sonar signal processing. Hearing thresholds can be evaluated rapidly. Cetaceans that have not been investigated can now be examined, including the great whales, a group for which data are now completely lacking.

  10. Axon Guidance in the Auditory System: Multiple Functions of Eph Receptors

    PubMed Central

    Cramer, Karina S.; Gabriele, Mark L.

    2014-01-01

    The neural pathways of the auditory system underlie our ability to detect sounds and to transform amplitude and frequency information into rich and meaningful perception. While it shares some organizational features with other sensory systems, the auditory system has some unique functions that impose special demands on precision in circuit assembly. In particular, the cochlear epithelium creates a frequency map rather than a space map, and specialized pathways extract information on interaural time and intensity differences to permit sound source localization. The assembly of auditory circuitry requires the coordinated function of multiple molecular cues. Eph receptors and their ephrin ligands constitute a large family of axon guidance molecules with developmentally regulated expression throughout the auditory system. Functional studies of Eph/ephrin signaling have revealed important roles at multiple levels of the auditory pathway, from the cochlea to the auditory cortex. These proteins provide graded cues used in establishing tonotopically ordered connections between auditory areas, as well as discrete cues that enable axons to form connections with appropriate postsynaptic partners within a target area. Throughout the auditory system, Eph proteins help to establish patterning in neural pathways during early development. This early targeting, which is further refined with neuronal activity, establishes the precision needed for auditory perception. PMID:25010398

  11. Development of an efficient computer code to solve the time-dependent Navier-Stokes equations. [for predicting viscous flow fields about lifting bodies

    NASA Technical Reports Server (NTRS)

    Harp, J. L., Jr.; Oatway, T. P.

    1975-01-01

    A research effort was conducted with the goal of reducing the computer time of a Navier-Stokes code for prediction of viscous flow fields about lifting bodies. A two-dimensional, time-dependent, laminar, transonic computer code (STOKES) was modified to incorporate a non-uniform time-step procedure. The non-uniform time-step requires updating of a zone only as often as required by its own stability criteria or those of its immediate neighbors. In the uniform time-step scheme, each zone is updated as often as required by the least stable zone of the finite-difference mesh. Because of the less frequent update of program variables, it was expected that the non-uniform time-step would reduce execution time by a factor of five to ten. Available funding was exhausted prior to successful demonstration of the benefits to be derived from the non-uniform time-step method.
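    The zone-local update idea can be mimicked with a toy scheduler: each zone advances on its own stable step, and only the laggard is updated, rather than marching every zone at the smallest step. Zone names and step sizes are invented, and a real scheme would also honor neighboring zones' stability limits, as the record notes.

```python
zones = [
    {"name": "coarse", "dt": 0.125,   "t": 0.0, "updates": 0},
    {"name": "fine",   "dt": 0.03125, "t": 0.0, "updates": 0},
]
t_end = 1.0

while any(z["t"] < t_end for z in zones):
    z = min(zones, key=lambda zone: zone["t"])  # zone furthest behind
    z["t"] += z["dt"]                           # advance at its own step
    z["updates"] += 1

for z in zones:
    print(z["name"], z["updates"])   # coarse: 8 updates, fine: 32
# A uniform scheme at dt = 0.03125 would have cost 32 updates per zone.
```

    The saving (8 instead of 32 updates for the coarse zone) is the mechanism behind the hoped-for five-to-tenfold reduction in execution time.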

  12. Application of power time-projection on the operator-splitting coupling scheme of the TRACE/S3K coupled code

    SciTech Connect

    Wicaksono, D.; Zerkak, O.; Nikitin, K.; Ferroukhi, H.; Chawla, R.

    2013-07-01

    This paper reports refinement studies on the temporal coupling scheme and time-stepping management of TRACE/S3K, a dynamically coupled code version of the thermal-hydraulics system code TRACE and the 3D core simulator Simulate-3K. The studies were carried out for two test cases, namely a PWR rod ejection accident and the Peach Bottom 2 Turbine Trip Test 2. The solution of the coupled calculation, especially the power peak, proves to be very sensitive to the time-step size with the currently employed conventional operator-splitting. Furthermore, a very small time-step size is necessary to achieve decent accuracy. This degrades the trade-off between accuracy and performance. A simple and computationally cheap implementation of time-projection of power has been shown to be able to improve the convergence of the coupled calculation. This scheme is able to achieve a prescribed accuracy with a larger time-step size. (authors)
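    The gist of power time-projection can be shown on a toy coupled system: two scalar ODEs stand in for neutronics (power P) and thermal-hydraulics (temperature T). Conventional operator splitting hands each sub-solve the other field frozen at its beginning-of-step value; the projected variant hands the T-solve a power linearly extrapolated to mid-step (and, symmetrically, a mid-step temperature to the P-solve), which tightens the coupling error. The equations, coefficients, and step sizes below are invented for illustration; they are not TRACE/S3K's.

```python
import math

def advance(dt, steps, project):
    """Operator-split integration of P' = 2(1-T)P, T' = 5(P-T)."""
    P, T = 1.0, 0.0
    P_prev = P
    for _ in range(steps):
        # power handed to the thermal-hydraulics solve
        P_used = P + 0.5 * (P - P_prev) if project else P
        # exact sub-solve of T' = 5*(P_used - T) with P_used frozen
        T_new = P_used + (T - P_used) * math.exp(-5.0 * dt)
        # temperature handed to the power solve
        T_used = 0.5 * (T + T_new) if project else T
        # exact sub-solve of P' = 2*(1 - T_used)*P with T_used frozen
        P_new = P * math.exp(2.0 * (1.0 - T_used) * dt)
        P_prev, P, T = P, P_new, T_new
    return P

ref = advance(1e-5, 100_000, project=False)      # fine-step reference, t = 1
lagged = advance(0.01, 100, project=False)       # conventional splitting
projected = advance(0.01, 100, project=True)     # with time-projection
print(abs(projected - ref) < abs(lagged - ref))  # projection is closer
```

    This is the trade the abstract describes: the projected variant reaches a prescribed accuracy at a larger time-step size than plain lagged coupling.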

  13. Auditory hedonic phenotypes in dementia: A behavioural and neuroanatomical analysis.

    PubMed

    Fletcher, Phillip D; Downey, Laura E; Golden, Hannah L; Clark, Camilla N; Slattery, Catherine F; Paterson, Ross W; Schott, Jonathan M; Rohrer, Jonathan D; Rossor, Martin N; Warren, Jason D

    2015-06-01

    Patients with dementia may exhibit abnormally altered liking for environmental sounds and music, but such altered auditory hedonic responses have not been studied systematically. Here we addressed this issue in a cohort of 73 patients representing major canonical dementia syndromes (behavioural variant frontotemporal dementia (bvFTD), semantic dementia (SD), progressive nonfluent aphasia (PNFA), and amnestic Alzheimer's disease (AD)) using a semi-structured caregiver behavioural questionnaire and voxel-based morphometry (VBM) of patients' brain MR images. Behavioural responses signalling abnormal aversion to environmental sounds, aversion to music or heightened pleasure in music ('musicophilia') occurred in around half of the cohort but showed clear syndromic and genetic segregation, occurring in most patients with bvFTD but infrequently in PNFA and more commonly in association with MAPT than C9orf72 mutations. Aversion to sounds was the exclusive auditory phenotype in AD, whereas more complex phenotypes including musicophilia were common in bvFTD and SD. Auditory hedonic alterations correlated with grey matter loss in a common, distributed, right-lateralised network including antero-mesial temporal lobe, insula, anterior cingulate and nucleus accumbens. Our findings suggest that abnormalities of auditory hedonic processing are a significant issue in common dementias. Sounds may constitute a novel probe of brain mechanisms for emotional salience coding that are targeted by neurodegenerative disease.

  14. Auditory and non-auditory effects of noise on health

    PubMed Central

    Basner, Mathias; Babisch, Wolfgang; Davis, Adrian; Brink, Mark; Clark, Charlotte; Janssen, Sabine; Stansfeld, Stephen

    2014-01-01

    Noise is pervasive in everyday life and can cause both auditory and non-auditory health effects. Noise-induced hearing loss remains highly prevalent in occupational settings, and is increasingly caused by social noise exposure (eg, through personal music players). Our understanding of molecular mechanisms involved in noise-induced hair-cell and nerve damage has substantially increased, and preventive and therapeutic drugs will probably become available within 10 years. Evidence of the non-auditory effects of environmental noise exposure on public health is growing. Observational and experimental studies have shown that noise exposure leads to annoyance, disturbs sleep and causes daytime sleepiness, affects patient outcomes and staff performance in hospitals, increases the occurrence of hypertension and cardiovascular disease, and impairs cognitive performance in schoolchildren. In this Review, we stress the importance of adequate noise prevention and mitigation strategies for public health. PMID:24183105

  15. Multimodal Lexical Processing in Auditory Cortex Is Literacy Skill Dependent

    PubMed Central

    McNorgan, Chris; Awati, Neha; Desroches, Amy S.; Booth, James R.

    2014-01-01

    Literacy is a uniquely human cross-modal cognitive process wherein visual orthographic representations become associated with auditory phonological representations through experience. Developmental studies provide insight into how experience-dependent changes in brain organization influence phonological processing as a function of literacy. Previous investigations show a synchrony-dependent influence of letter presentation on individual phoneme processing in superior temporal sulcus; others demonstrate recruitment of primary and associative auditory cortex during cross-modal processing. We sought to determine whether brain regions supporting phonological processing of larger lexical units (monosyllabic words) over larger time windows is sensitive to cross-modal information, and whether such effects are literacy dependent. Twenty-two children (age 8–14 years) made rhyming judgments for sequentially presented word and pseudoword pairs presented either unimodally (auditory- or visual-only) or cross-modally (audiovisual). Regression analyses examined the relationship between literacy and congruency effects (overlapping orthography and phonology vs. overlapping phonology-only). We extend previous findings by showing that higher literacy is correlated with greater congruency effects in auditory cortex (i.e., planum temporale) only for cross-modal processing. These skill effects were specific to known words and occurred over a large time window, suggesting that multimodal integration in posterior auditory cortex is critical for fluent reading. PMID:23588185

  16. Occipital γ response to auditory stimulation in patients with schizophrenia.

    PubMed

    Basar-Eroglu, Canan; Mathes, Birgit; Brand, Andreas; Schmiedt-Fehr, Christina

    2011-01-01

    This study investigated changes in gamma oscillations during auditory sensory processing (auditory-evoked gamma responses, AEGR) and target detection (auditory event-related gamma responses, AERGR) in healthy controls (n=10) and patients with schizophrenia (n=10) using both single-trial and averaged time-frequency data analysis. The results show that single-trial gamma responses in patients were altered in magnitude and topographic pattern for both the AEGR and the AERGR experimental conditions, whereas no differences were found for the averaged evoked gamma response. At the single-trial level, auditory stimuli elicited higher gamma responses at both anterior and occipital sites in patients with schizophrenia compared to controls. Furthermore, in patients with schizophrenia, target detection, compared with passive listening to stimuli, was related to increased single-trial gamma power at frontal sites. In controls, enhancement of the gamma response was only apparent for the averaged gamma response, with a distribution largely restricted to anterior sites. The differences in oscillatory activity between healthy controls and patients with schizophrenia were not reflected in the behavioral measure (i.e., counting targets). We conclude that gamma activity triggered by auditory stimuli in schizophrenic patients might have less selectivity in timing and alterations in topography, and may show changes in amplitude modulation with task demands. The present study may indicate that in patients with schizophrenia neuronal information is not adequately transferred, possibly due to an over-excitability of neuronal networks and excessive pruning of local connections in association cortex. PMID:21056599

  17. Multimodal lexical processing in auditory cortex is literacy skill dependent.

    PubMed

    McNorgan, Chris; Awati, Neha; Desroches, Amy S; Booth, James R

    2014-09-01

    Literacy is a uniquely human cross-modal cognitive process wherein visual orthographic representations become associated with auditory phonological representations through experience. Developmental studies provide insight into how experience-dependent changes in brain organization influence phonological processing as a function of literacy. Previous investigations show a synchrony-dependent influence of letter presentation on individual phoneme processing in superior temporal sulcus; others demonstrate recruitment of primary and associative auditory cortex during cross-modal processing. We sought to determine whether brain regions supporting phonological processing of larger lexical units (monosyllabic words) over larger time windows is sensitive to cross-modal information, and whether such effects are literacy dependent. Twenty-two children (age 8-14 years) made rhyming judgments for sequentially presented word and pseudoword pairs presented either unimodally (auditory- or visual-only) or cross-modally (audiovisual). Regression analyses examined the relationship between literacy and congruency effects (overlapping orthography and phonology vs. overlapping phonology-only). We extend previous findings by showing that higher literacy is correlated with greater congruency effects in auditory cortex (i.e., planum temporale) only for cross-modal processing. These skill effects were specific to known words and occurred over a large time window, suggesting that multimodal integration in posterior auditory cortex is critical for fluent reading. PMID:23588185

  18. Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding

    PubMed Central

    Wu, Yueying; Jia, Kebin; Gao, Guandong

    2016-01-01

    In high efficiency video coding (HEVC), the coding tree contributes to excellent compression performance. However, the coding tree also brings extremely high computational complexity. This paper presents innovative work on improving the coding tree to further reduce encoding time. A novel low-complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. First, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content change (CC). Second, a CU coding tree probability model is proposed for modeling and predicting CU distribution. Finally, a CU coding tree probability update is proposed, aiming to address the probabilistic-model distortion problems caused by CC. Experimental results show that the proposed low-complexity CU coding tree mechanism significantly reduces encoding time by 27% for lossy coding and 42% for visually lossless and lossless coding. The proposed mechanism is devoted to improving coding performance under various application conditions. PMID:26999741
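    A schematic of the fast-CU idea: a full coding-tree search evaluates "no split" and "quad split" at every depth, while a prediction model prunes splits it deems unlikely, trading a little rate-distortion optimality for encoding time. Here a simple variance threshold stands in for the paper's QP/content-based probability model, and the cost function is a toy.

```python
import numpy as np

rng = np.random.default_rng(0)
frame = 0.1 * rng.normal(size=(16, 16))     # mostly flat 16x16 "frame"
frame[:8, :8] += np.linspace(0, 4, 8)       # one detailed quadrant

LAMBDA = 2.0                                # toy rate penalty per CU
evals = {"n": 0}                            # counts cost evaluations

def cu_cost(block):
    evals["n"] += 1
    return float(block.var()) * block.size + LAMBDA

def best_cost(block, min_size=4, prune=False):
    c_whole = cu_cost(block)
    n = block.shape[0]
    if n <= min_size:
        return c_whole
    if prune and block.var() < 0.5:         # model: split very unlikely
        return c_whole
    h = n // 2
    quads = (block[:h, :h], block[:h, h:], block[h:, :h], block[h:, h:])
    c_split = sum(best_cost(q, min_size, prune) for q in quads)
    return min(c_whole, c_split)

evals["n"] = 0; best_cost(frame, prune=False); full = evals["n"]
evals["n"] = 0; best_cost(frame, prune=True); pruned = evals["n"]
print(full, pruned)   # 21 vs 9: the model skips most split evaluations
```

    Skipped evaluations map directly to saved encoding time, which is where reductions like the reported 27-42% come from.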

  19. Real-time joint source-channel coding of multiple correlated substream progressive sources for multiple-antenna Rayleigh channels

    NASA Astrophysics Data System (ADS)

    Farshchian, Masoud; Pearlman, William A.

    2008-01-01

    Recently, several methods have been proposed that divide the original bitstream of progressive wavelet-based image/video source coders into multiple correlated substreams. The principle behind transmitting multiple independent substreams is to generate multiple descriptions of the source such that graceful degradation is achieved when transmitting over severely fading channels and lossy packet networks, since some of the streams may still be recovered. Noting that multiple substreams can benefit from multiple independent channel paths, we naturally consider Multi-Input Multi-Output communication systems, where we obtain multiple independent fading channels. Depending on several factors, including the number of antennas employed, the transmission energy, the Doppler shift (due to the motion between the transmitter antenna and receiver antennas), the total transmission rate and the distortion-rate (D-R) function of the source, there exists an optimal number of balanced substreams and an optimal joint source-channel coding policy such that the expected distortion at the receiver is minimized. In this paper we derive an expected distortion function at the receiver based on all of these parameters and provide a fast real-time numerical technique to find the optimal or near-optimal number of balanced substreams to be transmitted. This expected distortion is based on our derivation of the probabilistic loss patterns of a balanced multiple-substream progressive source coder. The accuracy of the derived expected distortion estimator is confirmed by Monte-Carlo simulation employing Dent's modification of Jakes' model. By accurately estimating the optimal number of balanced substreams to be transmitted, a substantial gain in visual quality at low and intermediate signal-to-noise ratio (SNR) is obtained over severely fading channels. At high SNR, the single-stream source coder's source efficiency makes it slightly better than the multiple-substream source coder. Overall, using our analytic
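    The optimization described above can be caricatured in a few lines: with balanced substreams lost independently and a simple exponential distortion-rate model, expected distortion trades stream diversity against per-stream overhead. The D(R) model, loss probability, and overhead value below are invented stand-ins for the paper's channel-derived quantities.

```python
from math import comb

def expected_distortion(n_streams, total_rate, p_loss, overhead=0.3):
    """E[D] for n balanced substreams, each lost independently."""
    rate = total_rate - overhead * n_streams    # per-stream header cost
    d = 0.0
    for k in range(n_streams + 1):              # k substreams lost
        prob = comb(n_streams, k) * p_loss**k * (1 - p_loss)**(n_streams - k)
        received = rate * (n_streams - k) / n_streams
        d += prob * 2.0 ** (-2.0 * received)    # toy D(R) = 2^(-2R)
    return d

best = min(range(1, 9), key=lambda n: expected_distortion(n, 4.0, 0.2))
print(best)   # 3 substreams balance diversity against overhead here
```

    With no overhead the model would always favor more streams; the interior optimum appears once each extra stream costs rate, mirroring why a single stream wins at high SNR in the abstract.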

  20. Spatiotemporal dynamics of auditory attention synchronize with speech

    PubMed Central

    Wöstmann, Malte; Herrmann, Björn; Maess, Burkhard

    2016-01-01

    Attention plays a fundamental role in selectively processing stimuli in our environment despite distraction. Spatial attention induces increasing and decreasing power of neural alpha oscillations (8–12 Hz) in brain regions ipsilateral and contralateral to the locus of attention, respectively. This study tested whether the hemispheric lateralization of alpha power codes not just the spatial location but also the temporal structure of the stimulus. Participants attended to spoken digits presented to one ear and ignored tightly synchronized distracting digits presented to the other ear. In the magnetoencephalogram, spatial attention induced lateralization of alpha power in parietal, but notably also in auditory cortical regions. This alpha power lateralization was not maintained steadily but fluctuated in synchrony with the speech rate and lagged the time course of low-frequency (1–5 Hz) sensory synchronization. Higher amplitude of alpha power modulation at the speech rate was predictive of a listener’s enhanced performance of stream-specific speech comprehension. Our findings demonstrate that alpha power lateralization is modulated in tune with the sensory input and acts as a spatiotemporal filter controlling the read-out of sensory content. PMID:27001861

  1. Spatiotemporal dynamics of auditory attention synchronize with speech.

    PubMed

    Wöstmann, Malte; Herrmann, Björn; Maess, Burkhard; Obleser, Jonas

    2016-04-01

    Attention plays a fundamental role in selectively processing stimuli in our environment despite distraction. Spatial attention induces increasing and decreasing power of neural alpha oscillations (8-12 Hz) in brain regions ipsilateral and contralateral to the locus of attention, respectively. This study tested whether the hemispheric lateralization of alpha power codes not just the spatial location but also the temporal structure of the stimulus. Participants attended to spoken digits presented to one ear and ignored tightly synchronized distracting digits presented to the other ear. In the magnetoencephalogram, spatial attention induced lateralization of alpha power in parietal, but notably also in auditory cortical regions. This alpha power lateralization was not maintained steadily but fluctuated in synchrony with the speech rate and lagged the time course of low-frequency (1-5 Hz) sensory synchronization. Higher amplitude of alpha power modulation at the speech rate was predictive of a listener's enhanced performance of stream-specific speech comprehension. Our findings demonstrate that alpha power lateralization is modulated in tune with the sensory input and acts as a spatiotemporal filter controlling the read-out of sensory content. PMID:27001861
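
    The lateralization measure at the heart of this study can be illustrated with a toy computation: alpha-band power in each hemisphere and a normalized (ipsi - contra)/(ipsi + contra) index. The periodogram band-power estimate and function names below are a hypothetical sketch, not the authors' actual MEG pipeline.

```python
import numpy as np

def bandpower(signal, fs, lo=8.0, hi=12.0):
    """Alpha-band (8-12 Hz) power via a simple periodogram (illustrative)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].sum()

def alpha_lateralization_index(ipsi, contra, fs):
    """(ipsi - contra) / (ipsi + contra) alpha power: positive when alpha
    power is higher ipsilateral to the attended ear, as the study reports."""
    p_i = bandpower(ipsi, fs)
    p_c = bandpower(contra, fs)
    return (p_i - p_c) / (p_i + p_c)
```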

  2. Hyperacute directional hearing in a microscale auditory system.

    PubMed

    Mason, A C; Oshinsky, M L; Hoy, R R

    2001-04-01

    The physics of sound propagation imposes fundamental constraints on sound localization: for a given frequency, the smaller the receiver, the smaller the available cues. Thus, the creation of nanoscale acoustic microphones with directional sensitivity is very difficult. The fly Ormia ochracea possesses an unusual 'ear' that largely overcomes these physical constraints; attempts to exploit principles derived from O. ochracea for improved hearing aids are now in progress. Here we report that O. ochracea can behaviourally localize a salient sound source with a precision equal to that of humans. Despite its small size and minuscule interaural cues, the fly localizes sound sources to within 2 degrees azimuth. As the fly's eardrums are less than 0.5 mm apart, localization cues are around 50 ns. Directional information is represented in the auditory system by the relative timing of receptor responses in the two ears. Low-jitter, phasic receptor responses are pooled to achieve hyperacute time coding. These results demonstrate that nanoscale/microscale directional microphones patterned after O. ochracea have the potential for highly accurate directional sensitivity, independent of their size. Notably, in the fly itself this performance is dependent on a newly discovered set of specific coding strategies employed by the nervous system.
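
    The scale of the fly's interaural cues can be checked with the standard far-field model ITD = (d/c)·sin(θ). This is a textbook approximation, not the paper's analysis, but it reproduces the ~50 ns figure for a 2-degree azimuth change with 0.5 mm ear separation.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air

def itd_seconds(ear_separation_m, azimuth_deg):
    """Interaural time difference for a far-field source:
    ITD = (d / c) * sin(theta)."""
    return ear_separation_m / SPEED_OF_SOUND * math.sin(math.radians(azimuth_deg))

# Maximum ITD for the fly's ~0.5 mm ear separation vs a ~20 cm human head:
fly_max = itd_seconds(0.0005, 90)    # on the order of 1.5 microseconds
human_max = itd_seconds(0.20, 90)    # on the order of 600 microseconds
# A 2-degree change near the midline shifts the fly's ITD by only ~50 ns:
fly_2deg = itd_seconds(0.0005, 2) - itd_seconds(0.0005, 0)
```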

  3. Intertrial auditory neural stability supports beat synchronization in preschoolers.

    PubMed

    Carr, Kali Woodruff; Tierney, Adam; White-Schwoch, Travis; Kraus, Nina

    2016-02-01

    The ability to synchronize motor movements along with an auditory beat places stringent demands on the temporal processing and sensorimotor integration capabilities of the nervous system. Links between millisecond-level precision of auditory processing and the consistency of sensorimotor beat synchronization implicate fine auditory neural timing as a mechanism for forming stable internal representations of, and behavioral reactions to, sound. Here, for the first time, we demonstrate a systematic relationship between consistency of beat synchronization and trial-by-trial stability of subcortical speech processing in preschoolers (ages 3 and 4 years old). We conclude that beat synchronization might provide a useful window into millisecond-level neural precision for encoding sound in early childhood, when speech processing is especially important for language acquisition and development. PMID:26760457

  4. Intertrial auditory neural stability supports beat synchronization in preschoolers

    PubMed Central

    Carr, Kali Woodruff; Tierney, Adam; White-Schwoch, Travis; Kraus, Nina

    2016-01-01

    The ability to synchronize motor movements along with an auditory beat places stringent demands on the temporal processing and sensorimotor integration capabilities of the nervous system. Links between millisecond-level precision of auditory processing and the consistency of sensorimotor beat synchronization implicate fine auditory neural timing as a mechanism for forming stable internal representations of, and behavioral reactions to, sound. Here, for the first time, we demonstrate a systematic relationship between consistency of beat synchronization and trial-by-trial stability of subcortical speech processing in preschoolers (ages 3 and 4 years old). We conclude that beat synchronization might provide a useful window into millisecond-level neural precision for encoding sound in early childhood, when speech processing is especially important for language acquisition and development. PMID:26760457
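
    The trial-by-trial stability of a neural response can be approximated as the mean pairwise correlation between single-trial waveforms. This is a hypothetical stand-in for the study's actual subcortical-stability metric, shown only to make the idea concrete.

```python
import numpy as np

def intertrial_stability(trials):
    """Trial-to-trial stability of a response, estimated as the mean
    Pearson correlation over all pairs of single-trial waveforms
    (rows of `trials`). A simple proxy, not the study's exact measure."""
    n = trials.shape[0]
    r = np.corrcoef(trials)              # n x n correlation matrix
    upper = r[np.triu_indices(n, k=1)]   # all distinct trial pairs
    return upper.mean()
```

    A stable response (same waveform plus small noise on every trial) scores near 1; unrelated trial-to-trial activity scores near 0.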

  5. Speech disfluencies under normal and delayed auditory feedback conditions.

    PubMed

    Timmons, B A

    1983-04-01

    20 male and 20 female adults, matched by age, read under conditions of normal and 113-, 152-, 200-, 253-, 307-, and 347-msec. delayed auditory feedback. Disfluency counts were correlated with delayed auditory feedback reactions, which were changes in disfluencies under delay conditions. Pearson product-moment and Spearman's rhos were negative and significant for delay times of 113, 152, 200, and 253 msec. The Pearson product-moment correlation for 307 msec. was also negative and significant. Two groups of 11 adults were selected from the original sample on the basis of high and low initial disfluency counts. Their reactions to delayed auditory feedback were compared, using a 2-way analysis of variance with repeated measures (groups X delay times). Both main effects were significant but not their interaction.
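
    For reference, the two correlation statistics this study relies on can be computed from scratch as follows. The sketch handles the no-ties case only; real rank correlations on disfluency counts would need tie correction.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def spearman_rho(x, y):
    """Spearman's rho: the Pearson correlation of the ranks (no ties)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    return pearson_r(ranks(x), ranks(y))
```

    Spearman's rho captures any monotone relationship (rho = 1 for y = x^3), while Pearson's r measures only the linear component.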

  6. Code System for Real-Time Prediction of Radiation Dose to the Public Due to an Accidental Release from a Nuclear Power Plant.

    1987-01-20

    Version 00 The suite of computer codes, SPEEDI, predicts the dose to the public from a plume released from a nuclear power plant. The main codes comprising SPEEDI are: WIND04, PRWDA, and CIDE. WIND04 calculates three-dimensional mass-conservative windfields. PRWDA calculates concentration distributions, and CIDE estimates the external and internal doses. These models can take into account the spatial and temporal variation of wind, variable topography, deposition and variable source intensity for use in real-time assessment. We recommend that you also review the emergency response supporting system CCC-661/EXPRESS documentation.
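
    SPEEDI's three-dimensional windfield and dose codes are far more elaborate, but the basic idea of estimating downwind concentration from a continuous release can be illustrated with the textbook Gaussian plume formula (with a ground-reflection image term). This is a generic model, not SPEEDI's.

```python
import math

def gaussian_plume_concentration(q, u, y, z, h, sigma_y, sigma_z):
    """Time-averaged concentration (e.g. Bq/m^3) at crosswind offset y and
    height z, for a continuous release of rate q (Bq/s) at effective height
    h in wind speed u (m/s). sigma_y and sigma_z are the plume spreads,
    which in practice grow with downwind distance and stability class.
    The second vertical term is the standard ground-reflection image."""
    lateral = math.exp(-y ** 2 / (2 * sigma_y ** 2))
    vertical = (math.exp(-(z - h) ** 2 / (2 * sigma_z ** 2))
                + math.exp(-(z + h) ** 2 / (2 * sigma_z ** 2)))
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical
```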

  7. MCNP code

    SciTech Connect

    Cramer, S.N.

    1984-01-01

    The MCNP code is the major Monte Carlo coupled neutron-photon transport research tool at the Los Alamos National Laboratory, and it represents the most extensive Monte Carlo development program in the United States which is available in the public domain. The present code is the direct descendent of the original Monte Carlo work of Fermi, von Neumann, and Ulam at Los Alamos in the 1940s. Development has continued uninterrupted since that time, and the current version of MCNP (or its predecessors) has always included state-of-the-art methods in the Monte Carlo simulation of radiation transport, basic cross section data, geometry capability, variance reduction, and estimation procedures. The authors of the present code have oriented its development toward general user application. The documentation, though extensive, is presented in a clear and simple manner with many examples, illustrations, and sample problems. In addition to providing the desired results, the output listings give a wealth of detailed information (some optional) concerning each stage of the calculation. The code system is continually updated to take advantage of advances in computer hardware and software, including interactive modes of operation, diagnostic interrupts and restarts, and a variety of graphical and video aids.
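
    The core idea of Monte Carlo particle transport, which MCNP generalizes enormously (coupled neutron-photon physics, full geometry, variance reduction), can be shown in a toy example: sample each particle's free path from an exponential distribution and tally the outcome. A purely absorbing slab has the analytic answer exp(-thickness), which the estimate should approach.

```python
import math
import random

def slab_transmission(thickness_mfp, n_photons=20000, seed=1):
    """Toy Monte Carlo transport: photons enter a purely absorbing slab of
    given thickness (in mean free paths) along the normal. The estimated
    transmission should approach exp(-thickness)."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_photons):
        # sample the free path from the exponential distribution
        path = -math.log(1.0 - rng.random())
        if path > thickness_mfp:
            transmitted += 1
    return transmitted / n_photons

# Analytic check for a 2-mean-free-path slab: exp(-2) is about 0.135.
```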

  8. Context effects on auditory distraction

    PubMed Central

    Chen, Sufen; Sussman, Elyse S.

    2014-01-01

    The purpose of the study was to test the hypothesis that sound context modulates the magnitude of auditory distraction, indexed by behavioral and electrophysiological measures. Participants were asked to identify tone duration, while irrelevant changes occurred in tone frequency, tone intensity, and harmonic structure. Frequency deviants were randomly intermixed with standards (Uni-Condition), with intensity deviants (Bi-Condition), and with both intensity and complex deviants (Tri-Condition). Only in the Tri-Condition did the auditory distraction effect reflect the magnitude difference among the frequency and intensity deviants. The mixture of the different types of deviants in the Tri-Condition modulated the perceived level of distraction, demonstrating that the sound context can modulate the effect of deviance level on processing irrelevant acoustic changes in the environment. These findings thus indicate that perceptual contrast plays a role in change detection processes that leads to auditory distraction. PMID:23886958

  9. Recirculating photonic filter: a wavelength-selective time delay for phased-array antennas and wavelength code-division multiple access.

    PubMed

    Yegnanarayanan, S; Trinh, P D; Jalali, B

    1996-05-15

    A novel wavelength-selective photonic time-delay filter is proposed and demonstrated. The device consists of an optical phased-array waveguide grating in a recirculating feedback configuration. It can function as a true-time-delay generator for squint-free beam steering in optically controlled phased-array antennas and as an encoding-decoding filter for wavelength code-division multiple access.
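
    The true-time-delay principle behind squint-free steering is simple to state: steering a linear array to angle θ requires per-element delays of n·d·sin(θ)/c, which, unlike phase shifts, are frequency independent. A minimal sketch follows; the array geometry values in the test are illustrative.

```python
import math

def true_time_delays(n_elements, spacing_m, steer_deg, c=3e8):
    """Per-element true-time delays (seconds) steering a uniform linear
    array to `steer_deg`. Because the delays do not depend on frequency,
    the beam direction is the same across the band (no beam squint)."""
    tau = spacing_m * math.sin(math.radians(steer_deg)) / c
    return [n * tau for n in range(n_elements)]
```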

  10. Coding Strategies and Implementations of Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Tsai, Tsung-Han

    This dissertation studies the coding strategies of computational imaging to overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager. Increasing sensitivity in any one dimension can significantly compromise the others. This research implements various coding strategies for optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to exploit bandwidth and sensitivity beyond those of conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system. Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire extra dimensional information at the cost of degraded temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information on a two-dimensional (2D) detector. The corresponding spectral, temporal, and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level while maintaining or gaining higher temporal resolution. The experimental results demonstrate that appropriate coding strategies can increase sensing capacity by a factor of hundreds. The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or
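
    A standard compressive-sensing reconstruction, of the kind multiplexed measurements like these rely on, can be sketched with iterative shrinkage-thresholding (ISTA) on random Gaussian measurements. The sensing matrix, sparsity level, and parameters below are illustrative, not the dissertation's actual systems.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam=0.05, n_iter=500):
    """Iterative shrinkage-thresholding for
    min_x ||A x - y||^2 / 2 + lam * ||x||_1,
    a standard sparse-recovery routine in compressive sensing."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x

# Recover a 3-sparse signal of length 100 from only 40 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)   # sensing matrix
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -0.8, 0.6]
y = A @ x_true
x_hat = ista(A, y)
```

    The recovery works because the signal is sparse: 40 linear measurements would be hopelessly underdetermined for a general 100-dimensional signal.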

  11. Effect of encoder--decoder mismatch due to wavelength and time misalignments on the performance of two-dimensional wavelength--time optical code-division multiple access systems.

    PubMed

    Adams, Rhys; Chen, Lawrence R

    2005-07-10

    We examine the effects of encoder and decoder mismatch due to wavelength and time chip misalignments on the bit-error rate (BER) performance of two-dimensional (2D) wavelength--time optical code-division multiple access systems. We investigate several instances of misalignment in the desired user encoder and decoder as well as in the interfering user encoders. Our simulation methodology can be used to analyze any type of 2D wavelength--time code family as well as probability distribution for misalignment. For illustration purposes, we consider codes generated by use of the depth-first search algorithm and a Gaussian distribution for the misalignment. Our simulation results show that, in the case of a misalignment in either wavelength or time chip, the variance of the distribution for the misalignment must be below 0.01 for the corresponding degradation in the system's BER performance to be less than 1 order of magnitude compared with that when there is no mismatch between the encoders and decoders. The tolerances become even more strict when misalignments in both wavelength and time chips are considered. Furthermore, our results show that the effect of misalignment in wavelength (time chips) is the same regardless of the number of wavelengths (time chips) used in the codes.
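
    The simulation methodology can be illustrated in one dimension: model a fractional chip misalignment of the decoder by linear interpolation between chip positions, and average the correlator peak over Gaussian-distributed offsets. This is a simplified 1-D stand-in for the paper's 2-D wavelength--time analysis, with a placeholder chip sequence rather than a depth-first-search code.

```python
import numpy as np

def decoder_peak(code, offset):
    """Normalized correlator output for a unipolar chip sequence whose
    decoder is misaligned by `offset` chips; fractional shifts are modeled
    by linear interpolation between neighboring chip positions (circular)."""
    n = len(code)
    idx = np.arange(n) + offset
    lo = np.floor(idx).astype(int) % n
    hi = (lo + 1) % n
    frac = idx - np.floor(idx)
    shifted = (1 - frac) * code[lo] + frac * code[hi]
    return float(np.dot(shifted, code) / np.dot(code, code))

def mean_peak(code, sigma, n_trials=2000, seed=0):
    """Average correlator peak under zero-mean Gaussian chip misalignment."""
    rng = np.random.default_rng(seed)
    return float(np.mean([decoder_peak(code, o)
                          for o in rng.normal(0.0, sigma, n_trials)]))

# Placeholder unipolar code with chips at positions 0, 3, 7, 12.
code = np.array([1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0], float)
```

    As the abstract's results suggest, the autocorrelation peak (and hence the BER margin) degrades monotonically as the misalignment variance grows.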

  12. Temporal tuning in the bat auditory cortex is sharper when studied with natural echolocation sequences

    PubMed Central

    Beetz, M. Jerome; Hechavarría, Julio C.; Kössl, Manfred

    2016-01-01

    Precise temporal coding is necessary for proper acoustic analysis. However, at cortical level, forward suppression appears to limit the ability of neurons to extract temporal information from natural sound sequences. Here we studied how temporal processing can be maintained in the bats’ cortex in the presence of suppression evoked by natural echolocation streams that are relevant to the bats’ behavior. We show that cortical neurons tuned to target-distance actually profit from forward suppression induced by natural echolocation sequences. These neurons can more precisely extract target distance information when they are stimulated with natural echolocation sequences than during stimulation with isolated call-echo pairs. We conclude that forward suppression does for time domain tuning what lateral inhibition does for selectivity forms such as auditory frequency tuning and visual orientation tuning. When talking about cortical processing, suppression should be seen as a mechanistic tool rather than a limiting element. PMID:27357230

  13. Temporal tuning in the bat auditory cortex is sharper when studied with natural echolocation sequences.

    PubMed

    Beetz, M Jerome; Hechavarría, Julio C; Kössl, Manfred

    2016-01-01

    Precise temporal coding is necessary for proper acoustic analysis. However, at cortical level, forward suppression appears to limit the ability of neurons to extract temporal information from natural sound sequences. Here we studied how temporal processing can be maintained in the bats' cortex in the presence of suppression evoked by natural echolocation streams that are relevant to the bats' behavior. We show that cortical neurons tuned to target-distance actually profit from forward suppression induced by natural echolocation sequences. These neurons can more precisely extract target distance information when they are stimulated with natural echolocation sequences than during stimulation with isolated call-echo pairs. We conclude that forward suppression does for time domain tuning what lateral inhibition does for selectivity forms such as auditory frequency tuning and visual orientation tuning. When talking about cortical processing, suppression should be seen as a mechanistic tool rather than a limiting element. PMID:27357230

  14. Loudspeaker equalization for auditory research.

    PubMed

    MacDonald, Justin A; Tran, Phuong K

    2007-02-01

    The equalization of loudspeaker frequency response is necessary to conduct many types of well-controlled auditory experiments. This article introduces a program that includes functions to measure a loudspeaker's frequency response, design equalization filters, and apply the filters to a set of stimuli to be used in an auditory experiment. The filters can compensate for both magnitude and phase distortions introduced by the loudspeaker. A MATLAB script is included in the Appendix to illustrate the details of the equalization algorithm used in the program.
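
    A common way to build such equalization filters is regularized frequency-domain inversion of the measured impulse response. The sketch below (in Python rather than the article's MATLAB) compensates both magnitude and phase, with the regularization term bounding the gain at frequencies the speaker barely reproduces; it is a generic technique, not the article's specific algorithm.

```python
import numpy as np

def inverse_filter(impulse_response, n_taps=512, reg=1e-3):
    """Design an FIR equalization filter as the regularized frequency-domain
    inverse of a measured loudspeaker impulse response:
    G(w) = conj(H(w)) / (|H(w)|^2 + reg).
    Cascading the speaker with this filter flattens both magnitude and
    phase wherever |H| is well above the regularization floor."""
    H = np.fft.rfft(impulse_response, n_taps)
    H_inv = np.conj(H) / (np.abs(H) ** 2 + reg)
    return np.fft.irfft(H_inv, n_taps)
```

    Applying the filter to each stimulus (e.g. with `np.convolve`) before playback yields a near-flat combined response.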

  15. Sensitivity to Auditory Velocity Contrast.

    PubMed

    Locke, Shannon M; Leung, Johahn; Carlile, Simon

    2016-06-13

    A natural auditory scene often contains sound moving at varying velocities. Using a velocity contrast paradigm, we compared sensitivity to velocity changes between continuous and discontinuous trajectories. Subjects compared the velocities of two stimulus intervals that moved along a single trajectory, with and without a 1-second interstimulus interval (ISI). We found that thresholds were threefold larger for velocity increases in the instantaneous velocity-change condition than for instantaneous velocity decreases or for the delayed velocity-transition condition. This result cannot be explained by the current static "snapshot" model of auditory motion perception and suggests a continuous process in which the percept of velocity is influenced by the previous history of stimulation.

  16. Effects of Auditory Input in Individuation Tasks

    ERIC Educational Resources Information Center

    Robinson, Christopher W.; Sloutsky, Vladimir M.

    2008-01-01

    Under many conditions auditory input interferes with visual processing, especially early in development. These interference effects are often more pronounced when the auditory input is unfamiliar than when the auditory input is familiar (e.g. human speech, pre-familiarized sounds, etc.). The current study extends this research by examining how…

  17. Pre-Attentive Auditory Processing of Lexicality

    ERIC Educational Resources Information Center

    Jacobsen, Thomas; Horvath, Janos; Schroger, Erich; Lattner, Sonja; Widmann, Andreas; Winkler, Istvan

    2004-01-01

    The effects of lexicality on auditory change detection based on auditory sensory memory representations were investigated by presenting oddball sequences of repeatedly presented stimuli, while participants ignored the auditory stimuli. In a cross-linguistic study of Hungarian and German participants, stimulus sequences were composed of words that…

  18. Effects of the audiovisual conflict on auditory early processes.

    PubMed

    Scannella, Sébastien; Causse, Mickaël; Chauveau, Nicolas; Pastor, Josette; Dehais, Frédéric

    2013-07-01

    Auditory alarm misperception is one of the critical events that lead aircraft pilots to an erroneous flying decision. The rarity of these alarms associated with their possible unreliability may play a role in this misperception. In order to investigate this hypothesis, we manipulated both audiovisual conflict and sound rarity in a simplified landing task. Behavioral data and event-related potentials (ERPs) of thirteen healthy participants were analyzed. We found that the presentation of a rare auditory signal (i.e., an alarm), incongruent with visual information, led to a smaller amplitude of the auditory N100 (i.e., less negative) compared to the condition in which both signals were congruent. Moreover, the incongruity between the visual information and the rare sound did not significantly affect reaction times, suggesting that the rare sound was neglected. We propose that the lower N100 amplitude reflects an early visual-to-auditory gating that depends on the rarity of the sound. In complex aircraft environments, this early effect might be partly responsible for auditory alarm insensitivity. Our results provide a new basis for future aeronautic studies and the development of countermeasures.

  19. Auditory Detection of the Human Brainstem Auditory Evoked Response.

    ERIC Educational Resources Information Center

    Kidd, Gerald, Jr.; And Others

    1993-01-01

    This study evaluated whether listeners can distinguish human brainstem auditory evoked responses elicited by acoustic clicks from control waveforms obtained with no acoustic stimulus when the waveforms are presented auditorily. Detection performance for stimuli presented visually was slightly, but consistently, superior to that which occurred for…

  20. Predicting multiprocessing efficiency on the Cray multiprocessors in a (CTSS) time-sharing environment/application to a 3-D magnetohydrodynamics code

    SciTech Connect

    Mirin, A.A.

    1988-07-01

    A formula is derived for predicting multiprocessing efficiency on Cray supercomputers equipped with the Cray Time-Sharing System (CTSS). The model is applicable to an intensive time-sharing environment. The actual efficiency estimate depends on three factors: the code size, task length, and job mix. The implementation of multitasking in a three-dimensional plasma magnetohydrodynamics (MHD) code, TEMCO, is discussed. TEMCO solves the primitive one-fluid compressible MHD equations and includes resistive and Hall effects in Ohm's law. Virtually all segments of the main time-integration loop are multitasked. The multiprocessing efficiency model is applied to TEMCO. Excellent agreement is obtained between the actual multiprocessing efficiency and the theoretical prediction.
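
    The paper's efficiency formula depends on CTSS-specific factors (code size, task length, and job mix), but its Amdahl-style core can be sketched generically. The overhead term below is an illustrative stand-in for the time-sharing effects, not the paper's derived model.

```python
def multitask_efficiency(parallel_fraction, n_cpus, overhead_fraction=0.0):
    """Amdahl-style multiprocessing efficiency (speedup / n_cpus) for a code
    whose fraction `parallel_fraction` of the work is multitasked, with an
    optional fixed overhead fraction. A generic stand-in for the paper's
    formula, which additionally models the CTSS time-sharing job mix."""
    serial = 1.0 - parallel_fraction
    parallel_time = serial + parallel_fraction / n_cpus + overhead_fraction
    speedup = 1.0 / parallel_time
    return speedup / n_cpus
```

    Even a heavily multitasked code (e.g. 90% parallel work) sees efficiency fall as processors are added, which is why multitasking virtually all of the time-integration loop matters for a code like TEMCO.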