Sample records for auditory filter shape

  1. Rapid estimation of high-parameter auditory-filter shapes

    PubMed Central

    Shen, Yi; Sivakumar, Rajeswari; Richards, Virginia M.

    2014-01-01

    A Bayesian adaptive procedure, the quick-auditory-filter (qAF) procedure, was used to estimate auditory-filter shapes that were asymmetric about their peaks. In three experiments, listeners who were naive to psychoacoustic experiments detected a fixed-level, pure-tone target presented with a spectrally notched noise masker. The qAF procedure adaptively manipulated the masker spectrum level and the position of the masker notch, which was optimized for the efficient estimation of the five parameters of an auditory-filter model. Experiment I demonstrated that the qAF procedure provided a convergent estimate of the auditory-filter shape at 2 kHz within 150 to 200 trials (approximately 15 min to complete) and, for a majority of listeners, excellent test-retest reliability. In experiment II, asymmetric auditory filters were estimated for target frequencies of 1 and 4 kHz and target levels of 30 and 50 dB sound pressure level. The estimated filter shapes were generally consistent with published norms, especially at the low target level. It is known that the auditory-filter estimates are narrower for forward masking than simultaneous masking due to peripheral suppression, a result replicated in experiment III using fewer than 200 qAF trials. PMID:25324086
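
    As a rough illustration of the kind of filter model fitted by notched-noise procedures such as qAF, the sketch below implements an asymmetric rounded-exponential (roex) weighting function and computes its equivalent rectangular bandwidth. The exact five-parameter model and parameter values used by Shen et al. are not given here, so the functional form and the numbers below are assumptions for illustration only.

```python
import numpy as np

def roex_weight(g, p, r):
    """Rounded-exponential weighting for normalized frequency offset g >= 0."""
    return (1.0 - r) * (1.0 + p * g) * np.exp(-p * g) + r

def asymmetric_filter(f, fc, p_lower, p_upper, r):
    """Asymmetric auditory-filter power weighting around centre frequency fc."""
    g = np.abs(f - fc) / fc                      # normalized deviation from centre
    p = np.where(f < fc, p_lower, p_upper)       # different slopes below/above fc
    return roex_weight(g, p, r)

# Example: equivalent rectangular bandwidth (ERB) of an illustrative filter,
# obtained by numerically integrating the power weighting function.
fc = 2000.0                                      # 2-kHz target, as in experiment I
f = np.linspace(200.0, 6000.0, 20000)
W = asymmetric_filter(f, fc, p_lower=25.0, p_upper=35.0, r=1e-4)  # illustrative values
erb = np.trapz(W, f) / W.max()
print(f"ERB of illustrative filter at {fc:.0f} Hz: {erb:.1f} Hz")
```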

  2. Rapid measurement of auditory filter shape in mice using the auditory brainstem response and notched noise.

    PubMed

    Lina, Ioan A; Lauer, Amanda M

    2013-04-01

    The notched noise method is an effective procedure for measuring frequency resolution and auditory filter shapes in both human and animal models of hearing. Briefly, auditory filter shape and bandwidth estimates are derived from masked thresholds for tones presented in noise containing widening spectral notches. As the spectral notch widens, increasingly less of the noise falls within the auditory filter and the tone becomes more detectible until the notch width exceeds the filter bandwidth. Behavioral procedures have been used for the derivation of notched noise auditory filter shapes in mice; however, the time and effort needed to train and test animals on these tasks renders a constraint on the widespread application of this testing method. As an alternative procedure, we combined relatively non-invasive auditory brainstem response (ABR) measurements and the notched noise method to estimate auditory filters in normal-hearing mice at center frequencies of 8, 11.2, and 16 kHz. A complete set of simultaneous masked thresholds for a particular tone frequency were obtained in about an hour. ABR-derived filter bandwidths broadened with increasing frequency, consistent with previous studies. The ABR notched noise procedure provides a fast alternative to estimating frequency selectivity in mice that is well-suited to high through-put or time-sensitive screening. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. A human auditory tuning curves matched wavelet function.

    PubMed

    Abolhassani, Mohammad D; Salimpour, Yousef

    2008-01-01

    This paper proposes a new quantitative approach to the problem of matching a wavelet function to human auditory tuning curves. The auditory filter shapes were derived from psychophysical measurements in normal-hearing listeners using a variant of the notched-noise method for brief signals in forward and simultaneous masking. These filters were used as templates for designing a wavelet function that best matches a tuning curve. The scaling function was calculated from the matched wavelet function, and from these two functions low-pass and high-pass filters were derived for the implementation of a filter bank. In this way, new wavelet families were obtained.

  4. Relating the variability of tone-burst otoacoustic emission and auditory brainstem response latencies to the underlying cochlear mechanics

    NASA Astrophysics Data System (ADS)

    Verhulst, Sarah; Shera, Christopher A.

    2015-12-01

    Forward and reverse cochlear latency and its relation to the frequency tuning of the auditory filters can be assessed using tone bursts (TBs). Otoacoustic emissions (TBOAEs) estimate the cochlear roundtrip time, while auditory brainstem responses (ABRs) to the same stimuli aim at measuring the auditory filter buildup time. Latency ratios are generally close to two and controversy exists about the relationship of this ratio to cochlear mechanics. We explored why the two methods provide different estimates of filter buildup time, and ratios with large inter-subject variability, using a time-domain model for OAEs and ABRs. We compared latencies for twenty models, in which all parameters but the cochlear irregularities responsible for reflection-source OAEs were identical, and found that TBOAE latencies were much more variable than ABR latencies. Multiple reflection-sources generated within the evoking stimulus bandwidth were found to shape the TBOAE envelope and complicate the interpretation of TBOAE latency and TBOAE/ABR ratios in terms of auditory filter tuning.

  5. Temporal processing and adaptation in the songbird auditory forebrain.

    PubMed

    Nagel, Katherine I; Doupe, Allison J

    2006-09-21

    Songbird auditory neurons must encode the dynamics of natural sounds at many volumes. We investigated how neural coding depends on the distribution of stimulus intensities. Using reverse-correlation, we modeled responses to amplitude-modulated sounds as the output of a linear filter and a nonlinear gain function, then asked how filters and nonlinearities depend on the stimulus mean and variance. Filter shape depended strongly on mean amplitude (volume): at low mean, most neurons integrated sound over many milliseconds, while at high mean, neurons responded more to local changes in amplitude. Increasing the variance (contrast) of amplitude modulations had less effect on filter shape but decreased the gain of firing in most cells. Both filter and gain changes occurred rapidly after a change in statistics, suggesting that they represent nonlinearities in processing. These changes may permit neurons to signal effectively over a wider dynamic range and are reminiscent of findings in other sensory systems.
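
    A minimal sketch of the reverse-correlation analysis described above, assuming the stimulus is a zero-mean amplitude envelope and the response is binned spike counts: the linear filter is estimated as a spike-triggered average, and the static nonlinearity as the binned mean firing rate versus the filtered stimulus. Function and variable names are illustrative, not the authors' code.

```python
import numpy as np

def estimate_ln_model(stimulus, spikes, n_lags=50, n_bins=15):
    """Reverse-correlation sketch: spike-triggered average as the linear filter,
    followed by a binned estimate of the static output nonlinearity.

    stimulus : 1-D array, e.g. the sound-amplitude envelope (zero-mean).
    spikes   : 1-D array of spike counts per time bin, same length.
    """
    # Matrix of lagged stimulus snippets: lagged[t, k] = stimulus[t - k].
    lagged = np.stack([np.roll(stimulus, lag) for lag in range(n_lags)], axis=1)
    lagged[:n_lags, :] = 0.0                      # discard wrapped-around samples

    # Linear filter = spike-triggered average of the preceding stimulus.
    sta = spikes @ lagged / max(spikes.sum(), 1)

    # Static nonlinearity: mean spike count as a function of the filtered stimulus.
    drive = lagged @ sta
    edges = np.quantile(drive, np.linspace(0.0, 1.0, n_bins + 1))
    which = np.digitize(drive, edges[1:-1])
    gain_x, gain_y = [], []
    for b in range(n_bins):
        mask = which == b
        if mask.any():
            gain_x.append(drive[mask].mean())
            gain_y.append(spikes[mask].mean())
    return sta, np.array(gain_x), np.array(gain_y)
```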

  6. A Dynamic Compressive Gammachirp Auditory Filterbank

    PubMed Central

    Irino, Toshio; Patterson, Roy D.

    2008-01-01

    It is now common to use knowledge about human auditory processing in the development of audio signal processors. Until recently, however, such systems were limited by their linearity. The auditory filter system is known to be level-dependent as evidenced by psychophysical data on masking, compression, and two-tone suppression. However, there were no analysis/synthesis schemes with nonlinear filterbanks. This paper describes such a scheme based on the compressive gammachirp (cGC) auditory filter. It was developed to extend the gammatone filter concept to accommodate the changes in psychophysical filter shape that are observed to occur with changes in stimulus level in simultaneous, tone-in-noise masking. In models of simultaneous noise masking, the temporal dynamics of the filtering can be ignored. Analysis/synthesis systems, however, are intended for use with speech sounds where the glottal cycle can be long with respect to auditory time constants, and so they require specification of the temporal dynamics of the auditory filter. In this paper, we describe a fast-acting level control circuit for the cGC filter and show how psychophysical data involving two-tone suppression and compression can be used to estimate the parameter values for this dynamic version of the cGC filter (referred to as the “dcGC” filter). One important advantage of analysis/synthesis systems with a dcGC filterbank is that they can inherit previously refined signal processing algorithms developed with conventional short-time Fourier transforms (STFTs) and linear filterbanks. PMID:19330044
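
    The passive gammachirp impulse response underlying the cGC filter has a closed form; a minimal sketch is given below. The constants b and c are illustrative values of the order reported for the gammachirp family, not the fitted dcGC parameters, and the fast-acting level-control path is omitted.

```python
import numpy as np

def erb_hz(f):
    """Equivalent rectangular bandwidth (Glasberg & Moore, 1990)."""
    return 24.7 * (4.37 * f / 1000.0 + 1.0)

def gammachirp(fs, fr, dur=0.025, n=4, b=1.81, c=-2.96):
    """Passive gammachirp impulse response
    g(t) = t^(n-1) * exp(-2*pi*b*ERB(fr)*t) * cos(2*pi*fr*t + c*ln(t));
    setting c = 0 reduces it to a gammatone."""
    t = np.arange(1, int(dur * fs)) / fs           # start at 1/fs to avoid ln(0)
    env = t ** (n - 1) * np.exp(-2.0 * np.pi * b * erb_hz(fr) * t)
    carrier = np.cos(2.0 * np.pi * fr * t + c * np.log(t))
    ir = env * carrier
    return ir / np.max(np.abs(ir))

# Example: 25-ms impulse response of a 1-kHz channel at fs = 22.05 kHz.
ir = gammachirp(fs=22050, fr=1000.0)
```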

  7. The temporal representation of speech in a nonlinear model of the guinea pig cochlea

    NASA Astrophysics Data System (ADS)

    Holmes, Stephen D.; Sumner, Christian J.; O'Mard, Lowel P.; Meddis, Ray

    2004-12-01

    The temporal representation of speechlike stimuli in the auditory-nerve output of a guinea pig cochlea model is described. The model consists of a bank of dual resonance nonlinear filters that simulate the vibratory response of the basilar membrane followed by a model of the inner hair cell/auditory nerve complex. The model is evaluated by comparing its output with published physiological auditory nerve data in response to single and double vowels. The evaluation includes analyses of individual fibers, as well as ensemble responses over a wide range of best frequencies. In all cases the model response closely follows the patterns in the physiological data, particularly the tendency for the temporal firing pattern of each fiber to represent the frequency of a nearby formant of the speech sound. In the model this behavior is largely a consequence of filter shapes; nonlinear filtering has only a small contribution at low frequencies. The guinea pig cochlear model produces a useful simulation of the measured physiological response to simple speech sounds and is therefore suitable for use in more advanced applications, including attempts to generalize these principles to the response of the human auditory system, both normal and impaired.

  8. Brainstem origins for cortical 'what' and 'where' pathways in the auditory system.

    PubMed

    Kraus, Nina; Nicol, Trent

    2005-04-01

    We have developed a data-driven conceptual framework that links two areas of science: the source-filter model of acoustics and cortical sensory processing streams. The source-filter model describes the mechanics behind speech production: the identity of the speaker is carried largely in the vocal cord source and the message is shaped by the ever-changing filters of the vocal tract. Sensory processing streams, popularly called 'what' and 'where' pathways, are well established in the visual system as a neural scheme for separately carrying different facets of visual objects, namely their identity and their position/motion, to the cortex. A similar functional organization has been postulated in the auditory system. Both speaker identity and the spoken message, which are simultaneously conveyed in the acoustic structure of speech, can be disentangled into discrete brainstem response components. We argue that these two response classes are early manifestations of auditory 'what' and 'where' streams in the cortex. This brainstem link forges a new understanding of the relationship between the acoustics of speech and cortical processing streams, unites two hitherto separate areas in science, and provides a model for future investigations of auditory function.
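
    A minimal sketch of the source-filter idea this framework builds on: a periodic glottal source (carrying speaker-related information such as f0) is shaped by a slowly varying vocal-tract filter (carrying the message). The sampling rate, f0, and formant values below are arbitrary illustrations.

```python
import numpy as np
from scipy.signal import lfilter

fs = 16000
f0 = 110.0                                   # "source": glottal pulse rate (speaker identity cue)
dur = 0.5
source = np.zeros(int(fs * dur))
source[::int(fs / f0)] = 1.0                 # simple impulse-train approximation of glottal flow

# "Filter": an all-pole vocal-tract model with two illustrative formants.
formants = [(700.0, 80.0), (1200.0, 100.0)]  # (centre frequency, bandwidth) in Hz
a = np.array([1.0])
for fc, bw in formants:
    r = np.exp(-np.pi * bw / fs)             # pole radius from formant bandwidth
    theta = 2.0 * np.pi * fc / fs            # pole angle from formant frequency
    a = np.convolve(a, [1.0, -2.0 * r * np.cos(theta), r * r])

vowel_like = lfilter([1.0], a, source)       # message shaped by the vocal-tract filter
```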

  9. A Pole-Zero Filter Cascade Provides Good Fits to Human Masking Data and to Basilar Membrane and Neural Data

    NASA Astrophysics Data System (ADS)

    Lyon, Richard F.

    2011-11-01

    A cascade of two-pole-two-zero filters with level-dependent pole and zero dampings, with few parameters, can provide a good match to human psychophysical and physiological data. The model has been fitted to data on detection threshold for tones in notched-noise masking, including bandwidth and filter shape changes over a wide range of levels, and has been shown to provide better fits with fewer parameters compared to other auditory filter models such as gammachirps. Originally motivated as an efficient machine implementation of auditory filtering related to the WKB analysis method of cochlear wave propagation, such filter cascades also provide good fits to mechanical basilar membrane data, and to auditory nerve data, including linear low-frequency tail response, level-dependent peak gain, sharp tuning curves, nonlinear compression curves, level-independent zero-crossing times in the impulse response, realistic instantaneous frequency glides, and appropriate level-dependent group delay even with minimum-phase response. As part of exploring different level-dependent parameterizations of such filter cascades, we have identified a simple sufficient condition for stable zero-crossing times, based on the shifting property of the Laplace transform: simply move all the s-domain poles and zeros by equal amounts in the real-s direction. Such pole-zero filter cascades are efficient front ends for machine hearing applications, such as music information retrieval, content identification, speech recognition, and sound indexing.
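
    A minimal sketch of one two-pole-two-zero stage and a short cascade, assuming a simple mapping of s-plane pole/zero pairs (specified by natural frequency and damping ratio) to digital biquads. The actual model's parameterization, level-dependent damping control, and channel spacing are not reproduced here; the numbers are illustrative.

```python
import numpy as np
from scipy.signal import freqz

def pzfc_stage(fs, f_pole, zeta_pole, f_zero, zeta_zero):
    """One two-pole-two-zero stage as a digital biquad; level dependence would
    modulate the pole/zero damping ratios."""
    def pair(f, zeta):
        wd = 2.0 * np.pi * f * np.sqrt(max(1.0 - zeta ** 2, 1e-12))
        r = np.exp(-zeta * 2.0 * np.pi * f / fs)
        return np.array([1.0, -2.0 * r * np.cos(wd / fs), r * r])
    b = pair(f_zero, zeta_zero)
    a = pair(f_pole, zeta_pole)
    b = b * np.polyval(a, 1.0) / np.polyval(b, 1.0)        # unity gain at DC
    return b, a

# Cascade a few stages with pole frequencies decreasing along the cochlea and
# zeros placed above the poles, then inspect the composite response at one tap.
fs = 22050.0
H = np.ones(512, dtype=complex)
for fp in np.geomspace(4000.0, 1000.0, 12):
    b, a = pzfc_stage(fs, f_pole=fp, zeta_pole=0.12, f_zero=1.4 * fp, zeta_zero=0.25)
    w, h = freqz(b, a, worN=512, fs=fs)
    H *= h
print(f"Composite gain peaks near {w[np.argmax(np.abs(H))]:.0f} Hz")
```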

  10. Measuring the effects of spectral smearing and enhancement on speech recognition in noise for adults and children

    PubMed Central

    Nittrouer, Susan; Tarr, Eric; Wucinich, Taylor; Moberly, Aaron C.; Lowenstein, Joanna H.

    2015-01-01

    Broadened auditory filters associated with sensorineural hearing loss have clearly been shown to diminish speech recognition in noise for adults, but far less is known about potential effects for children. This study examined speech recognition in noise for adults and children using simulated auditory filters of different widths. Specifically, 5 groups (20 listeners each) of adults or children (5 and 7 yrs), were asked to recognize sentences in speech-shaped noise. Seven-year-olds listened at 0 dB signal-to-noise ratio (SNR) only; 5-yr-olds listened at +3 or 0 dB SNR; and adults listened at 0 or −3 dB SNR. Sentence materials were processed both to smear the speech spectrum (i.e., simulate broadened filters), and to enhance the spectrum (i.e., simulate narrowed filters). Results showed: (1) Spectral smearing diminished recognition for listeners of all ages; (2) spectral enhancement did not improve recognition, and in fact diminished it somewhat; and (3) interactions were observed between smearing and SNR, but only for adults. That interaction made age effects difficult to gauge. Nonetheless, it was concluded that efforts to diagnose the extent of broadening of auditory filters and to develop techniques to correct this condition could benefit patients with hearing loss, especially children. PMID:25920851

  11. Auditory Evoked Potentials for the Evaluation of Hearing Sensitivity in Navy Dolphins. Modification P00002: Assessment of Hearing Sensitivity in Adult Male Elephant Seals

    DTIC Science & Technology

    2006-12-30

    [Garbled snippet of the report's reference list; the recoverable fragments cite work on evoked-potential and behavioral hearing thresholds in bottlenose dolphins and beluga whales (Delphinapterus leucas), and on auditory filter shapes for the bottlenose dolphin (Tursiops truncatus) and the white whale (Delphinapterus leucas).]

  12. Predictive Ensemble Decoding of Acoustical Features Explains Context-Dependent Receptive Fields.

    PubMed

    Yildiz, Izzet B; Mesgarani, Nima; Deneve, Sophie

    2016-12-07

    A primary goal of auditory neuroscience is to identify the sound features extracted and represented by auditory neurons. Linear encoding models, which describe neural responses as a function of the stimulus, have been primarily used for this purpose. Here, we provide theoretical arguments and experimental evidence in support of an alternative approach, based on decoding the stimulus from the neural response. We used a Bayesian normative approach to predict the responses of neurons detecting relevant auditory features, despite ambiguities and noise. We compared the model predictions to recordings from the primary auditory cortex of ferrets and found that: (1) the decoding filters of auditory neurons resemble the filters learned from the statistics of speech sounds; (2) the decoding model captures the dynamics of responses better than a linear encoding model of similar complexity; and (3) the decoding model accounts for the accuracy with which the stimulus is represented in neural activity, whereas linear encoding model performs very poorly. Most importantly, our model predicts that neuronal responses are fundamentally shaped by "explaining away," a divisive competition between alternative interpretations of the auditory scene. Neural responses in the auditory cortex are dynamic, nonlinear, and hard to predict. Traditionally, encoding models have been used to describe neural responses as a function of the stimulus. However, in addition to external stimulation, neural activity is strongly modulated by the responses of other neurons in the network. We hypothesized that auditory neurons aim to collectively decode their stimulus. In particular, a stimulus feature that is decoded (or explained away) by one neuron is not explained by another. We demonstrated that this novel Bayesian decoding model is better at capturing the dynamic responses of cortical neurons in ferrets. Whereas the linear encoding model poorly reflects selectivity of neurons, the decoding model can account for the strong nonlinearities observed in neural data. Copyright © 2016 Yildiz et al.

  13. Proportional spike-timing precision and firing reliability underlie efficient temporal processing of periodicity and envelope shape cues

    PubMed Central

    Zheng, Y.

    2013-01-01

    Temporal sound cues are essential for sound recognition, pitch, rhythm, and timbre perception, yet how auditory neurons encode such cues is subject of ongoing debate. Rate coding theories propose that temporal sound features are represented by rate tuned modulation filters. However, overwhelming evidence also suggests that precise spike timing is an essential attribute of the neural code. Here we demonstrate that single neurons in the auditory midbrain employ a proportional code in which spike-timing precision and firing reliability covary with the sound envelope cues to provide an efficient representation of the stimulus. Spike-timing precision varied systematically with the timescale and shape of the sound envelope and yet was largely independent of the sound modulation frequency, a prominent cue for pitch. In contrast, spike-count reliability was strongly affected by the modulation frequency. Spike-timing precision extends from sub-millisecond for brief transient sounds up to tens of milliseconds for sounds with slow-varying envelope. Information theoretic analysis further confirms that spike-timing precision depends strongly on the sound envelope shape, while firing reliability was strongly affected by the sound modulation frequency. Both the information efficiency and total information were limited by the firing reliability and spike-timing precision in a manner that reflected the sound structure. This result supports a temporal coding strategy in the auditory midbrain where proportional changes in spike-timing precision and firing reliability can efficiently signal shape and periodicity temporal cues. PMID:23636724

  14. The Width of the Auditory Filter in Children.

    ERIC Educational Resources Information Center

    Irwin, R. J.; And Others

    1986-01-01

    Because young children have poorer auditory temporal resolution than older children, a study measured the auditory filters of two 6-year-olds, two 10-year-olds, and two adults by having them detect a 400-ms sinusoid centered in a spectral notch in a band of noise. (HOD)

  15. Comparison of bandwidths in the inferior colliculus and the auditory nerve. II: Measurement using a temporally manipulated stimulus.

    PubMed

    Mc Laughlin, Myles; Chabwine, Joelle Nsimire; van der Heijden, Marcel; Joris, Philip X

    2008-10-01

    To localize low-frequency sounds, humans rely on an interaural comparison of the temporally encoded sound waveform after peripheral filtering. This process can be compared with cross-correlation. For a broadband stimulus, after filtering, the correlation function has a damped oscillatory shape where the periodicity reflects the filter's center frequency and the damping reflects the bandwidth (BW). The physiological equivalent of the correlation function is the noise delay (ND) function, which is obtained from binaural cells by measuring response rate to broadband noise with varying interaural time delays (ITDs). For monaural neurons, delay functions are obtained by counting coincidences for varying delays across spike trains obtained to the same stimulus. Previously, we showed that BWs in monaural and binaural neurons were similar. However, earlier work showed that the damping of delay functions differs significantly between these two populations. Here, we address this paradox by looking at the role of sensitivity to changes in interaural correlation. We measured delay and correlation functions in the cat inferior colliculus (IC) and auditory nerve (AN). We find that, at a population level, AN and IC neurons with similar characteristic frequencies (CF) and BWs can have different responses to changes in correlation. Notably, binaural neurons often show compression, which is not found in the AN and which makes the shape of delay functions more invariant with CF at the level of the IC than at the AN. We conclude that binaural sensitivity is more dependent on correlation sensitivity than has hitherto been appreciated and that the mechanisms underlying correlation sensitivity should be addressed in future studies.

  16. Loudspeaker equalization for auditory research.

    PubMed

    MacDonald, Justin A; Tran, Phuong K

    2007-02-01

    The equalization of loudspeaker frequency response is necessary to conduct many types of well-controlled auditory experiments. This article introduces a program that includes functions to measure a loudspeaker's frequency response, design equalization filters, and apply the filters to a set of stimuli to be used in an auditory experiment. The filters can compensate for both magnitude and phase distortions introduced by the loudspeaker. A MATLAB script is included in the Appendix to illustrate the details of the equalization algorithm used in the program.
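
    The article's MATLAB implementation is not reproduced here; the Python sketch below shows one common way to build such an equalization filter, assuming the loudspeaker's impulse response has already been measured: a regularized frequency-domain inverse that compensates both magnitude and phase, made causal by a fixed modelling delay.

```python
import numpy as np

def design_inverse_fir(impulse_response, n_taps=1024, beta=1e-3):
    """Regularized frequency-domain inversion of a measured loudspeaker
    impulse response; compensates magnitude and phase up to a modelling delay."""
    H = np.fft.rfft(impulse_response, n_taps)
    H_inv = np.conj(H) / (np.abs(H) ** 2 + beta)          # Tikhonov-regularized inverse
    h_inv = np.fft.irfft(H_inv, n_taps)
    delay = n_taps // 2
    h_inv = np.roll(h_inv, delay)                         # make the filter causal
    h_inv *= np.hanning(n_taps)                           # taper to reduce ringing
    return h_inv, delay

# Applying the filter to every stimulus in an experiment (linear convolution):
# equalized = np.convolve(stimulus, h_inv)[delay:delay + len(stimulus)]
```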

  17. An Auditory-Masking-Threshold-Based Noise Suppression Algorithm GMMSE-AMT[ERB] for Listeners with Sensorineural Hearing Loss

    NASA Astrophysics Data System (ADS)

    Natarajan, Ajay; Hansen, John H. L.; Arehart, Kathryn Hoberg; Rossi-Katz, Jessica

    2005-12-01

    This study describes a new noise suppression scheme for hearing aid applications based on the auditory masking threshold (AMT) in conjunction with a modified generalized minimum mean square error estimator (GMMSE) for individual subjects with hearing loss. The representation of cochlear frequency resolution is achieved in terms of auditory filter equivalent rectangular bandwidths (ERBs). Estimation of AMT and spreading functions for masking are implemented in two ways: with normal auditory thresholds and normal auditory filter bandwidths (GMMSE-AMT[ERB]-NH) and with elevated thresholds and broader auditory filters characteristic of cochlear hearing loss (GMMSE-AMT[ERB]-HI). Evaluation is performed using speech corpora with objective quality measures (segmental SNR, Itakura-Saito), along with formal listener evaluations of speech quality rating and intelligibility. While no measurable changes in intelligibility occurred, evaluations showed quality improvement with both algorithm implementations. However, the customized formulation based on individual hearing losses was similar in performance to the formulation based on the normal auditory system.
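
    The cochlear frequency resolution referred to above is usually expressed with the Glasberg and Moore (1990) ERB formula; a small sketch is below (that the paper uses exactly this formula is an assumption).

```python
import math

def erb_hz(f_hz):
    """Equivalent rectangular bandwidth (Hz) of the normal auditory filter at
    f_hz, using the Glasberg & Moore (1990) formula ERB = 24.7*(4.37*f/1000 + 1)."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

def hz_to_erb_number(f_hz):
    """ERB-number (Cam) scale often used to index auditory-filter channels."""
    return 21.4 * math.log10(4.37 * f_hz / 1000.0 + 1.0)

# erb_hz(1000.0)           -> ~132.6 Hz
# hz_to_erb_number(1000.0) -> ~15.6 Cams
```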

  18. The Essential Complexity of Auditory Receptive Fields

    PubMed Central

    Thorson, Ivar L.; Liénard, Jean; David, Stephen V.

    2015-01-01

    Encoding properties of sensory neurons are commonly modeled using linear finite impulse response (FIR) filters. For the auditory system, the FIR filter is instantiated in the spectro-temporal receptive field (STRF), often in the framework of the generalized linear model. Despite widespread use of the FIR STRF, numerous formulations for linear filters are possible that require many fewer parameters, potentially permitting more efficient and accurate model estimates. To explore these alternative STRF architectures, we recorded single-unit neural activity from auditory cortex of awake ferrets during presentation of natural sound stimuli. We compared performance of > 1000 linear STRF architectures, evaluating their ability to predict neural responses to a novel natural stimulus. Many were able to outperform the FIR filter. Two basic constraints on the architecture lead to the improved performance: (1) factorization of the STRF matrix into a small number of spectral and temporal filters and (2) low-dimensional parameterization of the factorized filters. The best parameterized model was able to outperform the full FIR filter in both primary and secondary auditory cortex, despite requiring fewer than 30 parameters, about 10% of the number required by the FIR filter. After accounting for noise from finite data sampling, these STRFs were able to explain an average of 40% of A1 response variance. The simpler models permitted more straightforward interpretation of sensory tuning properties. They also showed greater benefit from incorporating nonlinear terms, such as short term plasticity, that provide theoretical advances over the linear model. Architectures that minimize parameter count while maintaining maximum predictive power provide insight into the essential degrees of freedom governing auditory cortical function. They also maximize statistical power available for characterizing additional nonlinear properties that limit current auditory models. PMID:26683490
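
    To make the factorization idea concrete, the sketch below predicts a response from a rank-1 (separable) STRF, i.e. one spectral weighting vector and one temporal filter instead of a full frequency-by-lag weight matrix. Array names and shapes are illustrative, not the authors' code.

```python
import numpy as np

def predict_rank1_strf(spectrogram, spectral_w, temporal_w):
    """Predicted response of a factorized (rank-1) STRF: project each time frame
    onto a spectral weighting, then convolve the result with a temporal filter.

    spectrogram : array (n_freq, n_time)
    spectral_w  : array (n_freq,)   spectral tuning vector
    temporal_w  : array (n_lags,)   temporal filter (lag 0 first)
    """
    drive = spectral_w @ spectrogram                        # (n_time,)
    return np.convolve(drive, temporal_w)[: spectrogram.shape[1]]

# A full FIR STRF needs n_freq * n_lags weights; the factorized form needs only
# n_freq + n_lags, which is roughly the ~10x parameter reduction reported above.
```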

  19. Modulation rate transfer functions from four species of stranded odontocete (Stenella longirostris, Feresa attenuata, Globicephala melas, and Mesoplodon densirostris).

    PubMed

    Smith, Adam B; Pacini, Aude F; Nachtigall, Paul E

    2018-04-01

    Odontocete marine mammals explore the environment by rapidly producing echolocation signals and receiving the corresponding echoes, which likewise return at very rapid rates. Thus, it is important that the auditory system has a high temporal resolution to effectively process and extract relevant information from click echoes. This study used auditory evoked potential methods to investigate auditory temporal resolution of individuals from four different odontocete species, including a spinner dolphin (Stenella longirostris), pygmy killer whale (Feresa attenuata), long-finned pilot whale (Globicephala melas), and Blainville's beaked whale (Mesoplodon densirostris). Each individual had previously stranded and was undergoing rehabilitation. Auditory Brainstem Responses (ABRs) were elicited via acoustic stimuli consisting of a train of broadband tone pulses presented at rates between 300 and 2000 Hz. Similar to other studied species, modulation rate transfer functions (MRTFs) of the studied individuals followed the shape of a low-pass filter, with the ability to process acoustic stimuli at presentation rates up to and exceeding 1250 Hz. Auditory integration times estimated from the bandwidths of the MRTFs ranged between 250 and 333 µs. The results support the hypothesis that high temporal resolution is conserved throughout the diverse range of odontocete species.

  20. Consequences of broad auditory filters for identification of multichannel-compressed vowels

    PubMed Central

    Souza, Pamela; Wright, Richard; Bor, Stephanie

    2012-01-01

    Purpose In view of previous findings (Bor, Souza & Wright, 2008) that some listeners are more susceptible to spectral changes from multichannel compression (MCC) than others, this study addressed the extent to which differences in effects of MCC were related to differences in auditory filter width. Method Listeners were recruited in three groups: listeners with flat sensorineural loss, listeners with sloping sensorineural loss, and a control group of listeners with normal hearing. Individual auditory filter measurements were obtained at 500 and 2000 Hz. The filter widths were related to identification of vowels processed with 16-channel MCC and with a control (linear) condition. Results Listeners with flat loss had broader filters at 500 Hz but not at 2000 Hz, compared to listeners with sloping loss. Vowel identification was poorer for MCC compared to linear amplification. Listeners with flat loss made more errors than listeners with sloping loss, and there was a significant relationship between filter width and the effects of MCC. Conclusions Broadened auditory filters can reduce the ability to process amplitude-compressed vowel spectra. This suggests that individual frequency selectivity is one factor which influences benefit of MCC, when a high number of compression channels are used. PMID:22207696

  1. Effect of High-Pass Filtering on the Neonatal Auditory Brainstem Response to Air- and Bone-Conducted Clicks.

    ERIC Educational Resources Information Center

    Stuart, Andrew; Yang, Edward Y.

    1994-01-01

    Simultaneous 3-channel recorded auditory brainstem responses (ABR) were obtained from 20 neonates with various high-pass filter settings and low intensity levels. Results support the advocacy of less restrictive high-pass filtering for neonatal and infant ABR screening to air-conducted and bone-conducted clicks. (Author/JDD)

  2. Establishing the Response of Low Frequency Auditory Filters

    NASA Technical Reports Server (NTRS)

    Rafaelof, Menachem; Christian, Andrew; Shepherd, Kevin; Rizzi, Stephen; Stephenson, James

    2017-01-01

    The response of auditory filters is central to the frequency selectivity of the human auditory system. This is especially true for realistic complex sounds that are encountered in many applications such as modeling the audibility of sound, voice recognition, noise cancelation, and the development of advanced hearing aid devices. The purpose of this study was to establish the response of low-frequency (below 100 Hz) auditory filters. Two experiments were designed and executed; the first measured subjects' hearing thresholds for pure tones (at 25, 31.5, 40, 50, 63 and 80 Hz), and the second measured psychophysical tuning curves (PTCs) at two signal frequencies (Fs = 40 and 63 Hz). Experiment 1 involved 36 subjects, while experiment 2 used 20 subjects selected from experiment 1. Both experiments were based on a 3-down 1-up 3AFC adaptive staircase procedure using either a variable-level narrow-band noise masker or a tone. A summary of the results includes masked threshold data in the form of PTCs, the response of auditory filters, their distribution, and a comparison with similar recently published data.
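
    A minimal sketch of the 3-down 1-up adaptive staircase used in both experiments, which converges near the 79.4%-correct point of the psychometric function. The step size, reversal count, and the simulated listener are illustrative assumptions, not the study's settings.

```python
import random

def three_down_one_up(run_trial, start_level, step_db=2.0, n_reversals=8):
    """3-down 1-up adaptive staircase: the level drops after 3 consecutive correct
    trials and rises after each error. `run_trial(level)` must return True for a
    correct 3AFC response. Returns the mean of the reversal levels as threshold."""
    level, correct_streak, direction = start_level, 0, 0
    reversal_levels = []
    while len(reversal_levels) < n_reversals:
        if run_trial(level):
            correct_streak += 1
            if correct_streak == 3:
                correct_streak = 0
                if direction == +1:
                    reversal_levels.append(level)     # turning point: up -> down
                direction = -1
                level -= step_db
        else:
            correct_streak = 0
            if direction == -1:
                reversal_levels.append(level)         # turning point: down -> up
            direction = +1
            level += step_db
    return sum(reversal_levels) / len(reversal_levels)

# Example with a simulated listener whose true threshold is 40 dB (chance = 1/3):
# est = three_down_one_up(lambda L: random.random() < (0.95 if L > 40 else 1/3), 60.0)
```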

  3. Does attention play a role in dynamic receptive field adaptation to changing acoustic salience in A1?

    PubMed

    Fritz, Jonathan B; Elhilali, Mounya; David, Stephen V; Shamma, Shihab A

    2007-07-01

    Acoustic filter properties of A1 neurons can dynamically adapt to stimulus statistics, classical conditioning, instrumental learning and the changing auditory attentional focus. We have recently developed an experimental paradigm that allows us to view cortical receptive field plasticity on-line as the animal meets different behavioral challenges by attending to salient acoustic cues and changing its cortical filters to enhance performance. We propose that attention is the key trigger that initiates a cascade of events leading to the dynamic receptive field changes that we observe. In our paradigm, ferrets were initially trained, using conditioned avoidance training techniques, to discriminate between background noise stimuli (temporally orthogonal ripple combinations) and foreground tonal target stimuli. They learned to generalize the task for a wide variety of distinct background and foreground target stimuli. We recorded cortical activity in the awake behaving animal and computed on-line spectrotemporal receptive fields (STRFs) of single neurons in A1. We observed clear, predictable task-related changes in STRF shape while the animal performed spectral tasks (including single tone and multi-tone detection, and two-tone discrimination) with different tonal targets. A different set of task-related changes occurred when the animal performed temporal tasks (including gap detection and click-rate discrimination). Distinctive cortical STRF changes may constitute a "task-specific signature". These spectral and temporal changes in cortical filters occur quite rapidly, within 2min of task onset, and fade just as quickly after task completion, or in some cases, persisted for hours. The same cell could multiplex by differentially changing its receptive field in different task conditions. On-line dynamic task-related changes, as well as persistent plastic changes, were observed at a single-unit, multi-unit and population level. Auditory attention is likely to be pivotal in mediating these task-related changes since the magnitude of STRF changes correlated with behavioral performance on tasks with novel targets. Overall, these results suggest the presence of an attention-triggered plasticity algorithm in A1 that can swiftly change STRF shape by transforming receptive fields to enhance figure/ground separation, by using a contrast matched filter to filter out the background, while simultaneously enhancing the salient acoustic target in the foreground. These results favor the view of a nimble, dynamic, attentive and adaptive brain that can quickly reshape its sensory filter properties and sensori-motor links on a moment-to-moment basis, depending upon the current challenges the animal faces. In this review, we summarize our results in the context of a broader survey of the field of auditory attention, and then consider neuronal networks that could give rise to this phenomenon of attention-driven receptive field plasticity in A1.

  4. Consequences of Broad Auditory Filters for Identification of Multichannel-Compressed Vowels

    ERIC Educational Resources Information Center

    Souza, Pamela; Wright, Richard; Bor, Stephanie

    2012-01-01

    Purpose: In view of previous findings (Bor, Souza, & Wright, 2008) that some listeners are more susceptible to spectral changes from multichannel compression (MCC) than others, this study addressed the extent to which differences in effects of MCC were related to differences in auditory filter width. Method: Listeners were recruited in 3 groups:…

  5. A quantitative analysis of spectral mechanisms involved in auditory detection of coloration by a single wall reflection.

    PubMed

    Buchholz, Jörg M

    2011-07-01

    Coloration detection thresholds (CDTs) were measured for a single reflection as a function of spectral content and reflection delay for diotic stimulus presentation. The direct sound was a 320-ms long burst of bandpass-filtered noise with varying lower and upper cut-off frequencies. The resulting threshold data revealed that: (1) sensitivity decreases with decreasing bandwidth and increasing reflection delay and (2) high-frequency components contribute less to detection than low-frequency components. The auditory processes that may be involved in coloration detection (CD) are discussed in terms of a spectrum-based auditory model, which is conceptually similar to the pattern-transformation model of pitch (Wightman, 1973). Hence, the model derives an auto-correlation function of the input stimulus by applying a frequency analysis to an auditory representation of the power spectrum. It was found that, to successfully describe the quantitative behavior of the CDT data, three important mechanisms need to be included: (1) auditory bandpass filters with a narrower bandwidth than classic Gammatone filters, the increase in spectral resolution was here linked to cochlear suppression, (2) a spectral contrast enhancement process that reflects neural inhibition mechanisms, and (3) integration of information across auditory frequency bands. Copyright © 2011 Elsevier B.V. All rights reserved.

  6. The effect of viewing speech on auditory speech processing is different in the left and right hemispheres.

    PubMed

    Davis, Chris; Kislyuk, Daniel; Kim, Jeesun; Sams, Mikko

    2008-11-25

    We used whole-head magnetoencephalography (MEG) to record changes in neuromagnetic N100m responses generated in the left and right auditory cortex as a function of the match between visual and auditory speech signals. Stimuli were auditory-only (AO) and auditory-visual (AV) presentations of /pi/, /ti/ and /vi/. Three types of intensity matched auditory stimuli were used: intact speech (Normal), frequency band filtered speech (Band) and speech-shaped white noise (Noise). The behavioural task was to detect the /vi/ syllables which comprised 12% of stimuli. N100m responses were measured to averaged /pi/ and /ti/ stimuli. Behavioural data showed that identification of the stimuli was faster and more accurate for Normal than for Band stimuli, and for Band than for Noise stimuli. Reaction times were faster for AV than AO stimuli. MEG data showed that in the left hemisphere, N100m to both AO and AV stimuli was largest for the Normal, smaller for Band and smallest for Noise stimuli. In the right hemisphere, Normal and Band AO stimuli elicited N100m responses of quite similar amplitudes, but N100m amplitude to Noise was about half of that. There was a reduction in N100m for the AV compared to the AO conditions. The size of this reduction for each stimulus type was the same in the left hemisphere but graded in the right (being largest to the Normal, smaller to the Band and smallest to the Noise stimuli). The N100m decrease for the Normal stimuli was significantly larger in the right than in the left hemisphere. We suggest that the effect of processing visual speech seen in the right hemisphere likely reflects suppression of the auditory response based on AV cues for place of articulation.

  7. The spectrotemporal filter mechanism of auditory selective attention

    PubMed Central

    Lakatos, Peter; Musacchia, Gabriella; O’Connell, Monica N.; Falchier, Arnaud Y.; Javitt, Daniel C.; Schroeder, Charles E.

    2013-01-01

    SUMMARY While we have convincing evidence that attention to auditory stimuli modulates neuronal responses at or before the level of primary auditory cortex (A1), the underlying physiological mechanisms are unknown. We found that attending to rhythmic auditory streams resulted in the entrainment of ongoing oscillatory activity reflecting rhythmic excitability fluctuations in A1. Strikingly, while the rhythm of the entrained oscillations in A1 neuronal ensembles reflected the temporal structure of the attended stream, the phase depended on the attended frequency content. Counter-phase entrainment across differently tuned A1 regions resulted in both the amplification and sharpening of responses at attended time points, in essence acting as a spectrotemporal filter mechanism. Our data suggest that selective attention generates a dynamically evolving model of attended auditory stimulus streams in the form of modulatory subthreshold oscillations across tonotopically organized neuronal ensembles in A1 that enhances the representation of attended stimuli. PMID:23439126

  8. Multiple sound source localization using gammatone auditory filtering and direct sound componence detection

    NASA Astrophysics Data System (ADS)

    Chen, Huaiyu; Cao, Li

    2017-06-01

    To study multiple sound source localization under room reverberation and background noise, we analyze the shortcomings of traditional broadband MUSIC and of an ordinary auditory-filtering-based broadband MUSIC method, and then propose a new broadband MUSIC algorithm with gammatone auditory filtering, frequency-component selection control, and detection of the ascending segment of the direct-sound component. In the multichannel bandpass-filtering stage, the proposed algorithm restricts processing to frequency components within the band of interest. Detection of the direct-sound component of each source is also proposed to suppress room-reverberation interference; its merits are fast computation and avoiding more complex de-reverberation algorithms. In addition, for every speech frame the pseudo-spectra of the different frequency channels are weighted by their maximum amplitudes. In both simulations and experiments in a real reverberant room, the proposed method performs well. Results for dynamic multiple-sound-source localization indicate that the proposed algorithm yields a smaller average absolute azimuth error and a histogram with higher angular resolution.

  9. JND measurements of the speech formants parameters and its implication in the LPC pole quantization

    NASA Astrophysics Data System (ADS)

    Orgad, Yaakov

    1988-08-01

    The inherent sensitivity of auditory perception is explicitly used with the objective of designing an efficient speech encoder. Speech can be modelled by a filter representing the vocal tract shape that is driven by an excitation signal representing glottal air flow. This work concentrates on the filter encoding problem, assuming that excitation signal encoding is optimal. Linear predictive coding (LPC) techniques were used to model a short speech segment by an all-pole filter; each pole was directly related to the speech formants. Measurements were made of the auditory just noticeable difference (JND) corresponding to the natural speech formants, with the LPC filter poles as the best candidates to represent the speech spectral envelope. The JND is the maximum precision required in speech quantization; it was defined on the basis of the shift of one pole parameter of a single frame of a speech segment, necessary to induce subjective perception of the distortion, with .75 probability. The average JND in LPC filter poles in natural speech was found to increase with increasing pole bandwidth and, to a lesser extent, frequency. The JND measurements showed a large spread of the residuals around the average values, indicating that inter-formant coupling and, perhaps, other, not yet fully understood, factors were not taken into account at this stage of the research. A future treatment should consider these factors. The average JNDs obtained in this work were used to design pole quantization tables for speech coding and provided a better bit-rate than the standard quantizer of reflection coefficient; a 30-bits-per-frame pole quantizer yielded a speech quality similar to that obtained with a standard 41-bits-per-frame reflection coefficient quantizer. Owing to the complexity of the numerical root extraction system, the practical implementation of the pole quantization approach remains to be proved.
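
    To illustrate the pole parameters being quantized, the sketch below computes LPC coefficients for one speech frame with the autocorrelation method (Levinson-Durbin), extracts the filter poles, and converts each pole to a formant-like frequency and bandwidth. The order and windowing choices are illustrative, not those of the original study.

```python
import numpy as np

def lpc_autocorrelation(frame, order=10):
    """LPC coefficients a = [1, a1, ..., ap] via the Levinson-Durbin recursion."""
    frame = frame * np.hamming(len(frame))
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err
        a_prev = a.copy()
        for j in range(1, i):
            a[j] = a_prev[j] + k * a_prev[i - j]
        a[i] = k
        err *= 1.0 - k * k
    return a

def pole_frequencies_bandwidths(a, fs):
    """Each complex pole pair maps to a formant frequency and a -3 dB bandwidth."""
    poles = np.roots(a)
    poles = poles[np.imag(poles) > 0.0]                  # keep one pole of each pair
    freqs = np.angle(poles) * fs / (2.0 * np.pi)
    bandwidths = -np.log(np.abs(poles)) * fs / np.pi
    order_idx = np.argsort(freqs)
    return freqs[order_idx], bandwidths[order_idx]

# Usage on a 30-ms voiced frame sampled at 8 kHz (frame: 1-D numpy array):
# a = lpc_autocorrelation(frame, order=10)
# formant_freqs, formant_bws = pole_frequencies_bandwidths(a, fs=8000.0)
```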

  10. P50 Suppression in Children with Selective Mutism: A Preliminary Report

    ERIC Educational Resources Information Center

    Henkin, Yael; Feinholz, Maya; Arie, Miri; Bar-Haim, Yair

    2010-01-01

    Evidence suggests that children with selective mutism (SM) display significant aberrations in auditory efferent activity at the brainstem level that may underlie inefficient auditory processing during vocalization, and lead to speech avoidance. The objective of the present study was to explore auditory filtering processes at the cortical level in…

  11. Relations Among Central Auditory Abilities, Socio-Economic Factors, Speech Delay, Phonic Abilities and Reading Achievement: A Longitudinal Study.

    ERIC Educational Resources Information Center

    Flowers, Arthur; Crandell, Edwin W.

    Three auditory perceptual processes (resistance to distortion, selective listening in the form of auditory dedifferentiation, and binaural synthesis) were evaluated by five assessment techniques: (1) low pass filtered speech, (2) accelerated speech, (3) competing messages, (4) accelerated plus competing messages, and (5) binaural synthesis.…

  12. Computational principles underlying recognition of acoustic signals in grasshoppers and crickets.

    PubMed

    Ronacher, Bernhard; Hennig, R Matthias; Clemens, Jan

    2015-01-01

    Grasshoppers and crickets independently evolved hearing organs and acoustic communication. They differ considerably in the organization of their auditory pathways, and the complexity of their songs, which are essential for mate attraction. Recent approaches aimed at describing the behavioral preference functions of females in both taxa by a simple modeling framework. The basic structure of the model consists of three processing steps: (1) feature extraction with a bank of 'LN models'-each containing a linear filter followed by a nonlinearity, (2) temporal integration, and (3) linear combination. The specific properties of the filters and nonlinearities were determined using a genetic learning algorithm trained on a large set of different song features and the corresponding behavioral response scores. The model showed an excellent prediction of the behavioral responses to the tested songs. Most remarkably, in both taxa the genetic algorithm found Gabor-like functions as the optimal filter shapes. By slight modifications of Gabor filters several types of preference functions could be modeled, which are observed in different cricket species. Furthermore, this model was able to explain several so far enigmatic results in grasshoppers. The computational approach offered a remarkably simple framework that can account for phenotypically rather different preference functions across several taxa.
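
    A small sketch of a Gabor-shaped feature filter of the kind found by the genetic algorithm, applied in the first 'LN' step (linear filtering of the song's amplitude envelope followed by a nonlinearity). The filter parameters and the threshold-linear nonlinearity below are illustrative assumptions.

```python
import numpy as np

def gabor_filter(t, t0, sigma, f_mod, phase=0.0):
    """Gabor-shaped temporal feature filter: a Gaussian envelope centred at t0
    multiplied by a cosine at modulation frequency f_mod (Hz)."""
    return np.exp(-0.5 * ((t - t0) / sigma) ** 2) * np.cos(
        2.0 * np.pi * f_mod * (t - t0) + phase)

fs = 1000.0                                   # envelope sampling rate (Hz)
t = np.arange(0.0, 0.2, 1.0 / fs)             # 200-ms filter support
h = gabor_filter(t, t0=0.1, sigma=0.02, f_mod=30.0)

def ln_feature_score(envelope, theta=0.0):
    """LN step: filter the song envelope, rectify, and integrate over the song."""
    drive = np.convolve(envelope, h, mode="same")
    return np.mean(np.maximum(drive - theta, 0.0))
```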

  13. Short-term plasticity in auditory cognition.

    PubMed

    Jääskeläinen, Iiro P; Ahveninen, Jyrki; Belliveau, John W; Raij, Tommi; Sams, Mikko

    2007-12-01

    Converging lines of evidence suggest that auditory system short-term plasticity can enable several perceptual and cognitive functions that have been previously considered as relatively distinct phenomena. Here we review recent findings suggesting that auditory stimulation, auditory selective attention and cross-modal effects of visual stimulation each cause transient excitatory and (surround) inhibitory modulations in the auditory cortex. These modulations might adaptively tune hierarchically organized sound feature maps of the auditory cortex (e.g. tonotopy), thus filtering relevant sounds during rapidly changing environmental and task demands. This could support auditory sensory memory, pre-attentive detection of sound novelty, enhanced perception during selective attention, influence of visual processing on auditory perception and longer-term plastic changes associated with perceptual learning.

  14. Effects of analog and digital filtering on auditory middle latency responses in adults and young children.

    PubMed

    Suzuki, T; Hirabayashi, M; Kobayashi, K

    1984-01-01

    Effects of analog high pass (HP) filtering were compared with those of zero phase-shift digital filtering on the auditory middle latency responses (MLR) from nine adults and 16 young children with normal hearing. Analog HP filtering exerted several prominent effects on the MLR waveforms in both adults and young children, such as suppression of Po (ABR), enhancement of Nb, enhancement or emergence of Pb, and latency decrements for Pa and the later components. Analog HP filtering at 20 Hz produced more pronounced waveform distortions in the responses from young children than from adults. Much greater latency decrements for Pa and Nb were observed for young children than for adults in the analog HP-filtered responses at 20 Hz. A large positive peak (Pb) emerged at about 65 ms after the stimulus onset. From these results, the use of digital HP filtering at 20 Hz is strongly recommended for obtaining unbiased and stable MLR in young children.
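
    The contrast between causal (analog-like) and zero phase-shift digital high-pass filtering can be reproduced offline as sketched below, assuming an already-averaged MLR epoch; the filter order and sampling rate are illustrative.

```python
import numpy as np
from scipy.signal import butter, lfilter, filtfilt

fs = 5000.0                                  # sampling rate of the MLR epoch (assumed)
b, a = butter(2, 20.0 / (fs / 2.0), btype="highpass")   # 20-Hz high-pass

def compare_filtering(x):
    """x: one averaged MLR epoch (1-D array). Causal filtering introduces phase
    shift and latency changes; forward-backward filtering does not."""
    causal = lfilter(b, a, x)                # one-pass: phase distortion, latency shift
    zero_phase = filtfilt(b, a, x)           # two-pass (forward + reversed): zero phase
    return causal, zero_phase
```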

  15. Selective Attention to Visual Stimuli Using Auditory Distractors Is Altered in Alpha-9 Nicotinic Receptor Subunit Knock-Out Mice.

    PubMed

    Terreros, Gonzalo; Jorratt, Pascal; Aedo, Cristian; Elgoyhen, Ana Belén; Delano, Paul H

    2016-07-06

    During selective attention, subjects voluntarily focus their cognitive resources on a specific stimulus while ignoring others. Top-down filtering of peripheral sensory responses by higher structures of the brain has been proposed as one of the mechanisms responsible for selective attention. A prerequisite to accomplish top-down modulation of the activity of peripheral structures is the presence of corticofugal pathways. The mammalian auditory efferent system is a unique neural network that originates in the auditory cortex and projects to the cochlear receptor through the olivocochlear bundle, and it has been proposed to function as a top-down filter of peripheral auditory responses during attention to cross-modal stimuli. However, to date, there is no conclusive evidence of the involvement of olivocochlear neurons in selective attention paradigms. Here, we trained wild-type and α-9 nicotinic receptor subunit knock-out (KO) mice, which lack cholinergic transmission between medial olivocochlear neurons and outer hair cells, in a two-choice visual discrimination task and studied the behavioral consequences of adding different types of auditory distractors. In addition, we evaluated the effects of contralateral noise on auditory nerve responses as a measure of the individual strength of the olivocochlear reflex. We demonstrate that KO mice have a reduced olivocochlear reflex strength and perform poorly in a visual selective attention paradigm. These results confirm that an intact medial olivocochlear transmission aids in ignoring auditory distraction during selective attention to visual stimuli. The auditory efferent system is a neural network that originates in the auditory cortex and projects to the cochlear receptor through the olivocochlear system. It has been proposed to function as a top-down filter of peripheral auditory responses during attention to cross-modal stimuli. However, to date, there is no conclusive evidence of the involvement of olivocochlear neurons in selective attention paradigms. Here, we studied the behavioral consequences of adding different types of auditory distractors in a visual selective attention task in wild-type and α-9 nicotinic receptor knock-out (KO) mice. We demonstrate that KO mice perform poorly in the selective attention paradigm and that an intact medial olivocochlear transmission aids in ignoring auditory distractors during attention. Copyright © 2016 the authors 0270-6474/16/367198-12$15.00/0.

  16. Call sign intelligibility improvement using a spatial auditory display

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.

    1994-01-01

    A spatial auditory display was designed for separating the multiple communication channels usually heard over one ear to different virtual auditory positions. The single 19-inch rack-mount device utilizes digital filtering algorithms to separate up to four communication channels. The filters use four different binaural transfer functions, synthesized from actual outer ear measurements, to impose localization cues on the incoming sound. Hardware design features include 'fail-safe' operation in the case of power loss, and microphone/headset interfaces to the mobile launch communication system in use at KSC. An experiment designed to verify the intelligibility advantage of the display used 130 different call signs taken from the communications protocol used at NASA KSC. A 6 to 7 dB intelligibility advantage was found when multiple channels were spatially displayed, compared to monaural listening. The findings suggest that the use of a spatial auditory display could enhance both occupational and operational safety and efficiency of NASA operations.
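
    Conceptually, the display's digital filtering amounts to convolving each monaural communications channel with a measured left/right head-related impulse-response pair and mixing the results. The sketch below assumes the HRIR arrays are available; hrir_left, hrir_right, and the hrirs lookup are placeholders, not NASA's measured filters.

```python
import numpy as np

def spatialize(mono, hrir_left, hrir_right):
    """Place one monaural communications channel at a virtual direction by
    convolving it with a left/right head-related impulse-response pair."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    out = np.zeros((max(len(left), len(right)), 2))
    out[:len(left), 0] = left
    out[:len(right), 1] = right
    return out

# Four equal-length channels rendered at four azimuths, then mixed:
# mix = sum(spatialize(ch, hrirs[az][0], hrirs[az][1])
#           for ch, az in zip(channels, azimuths))
```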

  17. Idealized Computational Models for Auditory Receptive Fields

    PubMed Central

    Lindeberg, Tony; Friberg, Anders

    2015-01-01

    We present a theory by which idealized models of auditory receptive fields can be derived in a principled axiomatic manner, from a set of structural properties to (i) enable invariance of receptive field responses under natural sound transformations and (ii) ensure internal consistency between spectro-temporal receptive fields at different temporal and spectral scales. For defining a time-frequency transformation of a purely temporal sound signal, it is shown that the framework allows for a new way of deriving the Gabor and Gammatone filters as well as a novel family of generalized Gammatone filters, with additional degrees of freedom to obtain different trade-offs between the spectral selectivity and the temporal delay of time-causal temporal window functions. When applied to the definition of a second-layer of receptive fields from a spectrogram, it is shown that the framework leads to two canonical families of spectro-temporal receptive fields, in terms of spectro-temporal derivatives of either spectro-temporal Gaussian kernels for non-causal time or a cascade of time-causal first-order integrators over the temporal domain and a Gaussian filter over the logspectral domain. For each filter family, the spectro-temporal receptive fields can be either separable over the time-frequency domain or be adapted to local glissando transformations that represent variations in logarithmic frequencies over time. Within each domain of either non-causal or time-causal time, these receptive field families are derived by uniqueness from the assumptions. It is demonstrated how the presented framework allows for computation of basic auditory features for audio processing and that it leads to predictions about auditory receptive fields with good qualitative similarity to biological receptive fields measured in the inferior colliculus (ICC) and primary auditory cortex (A1) of mammals. PMID:25822973

  18. Listening to Filtered Music as a Treatment Option for Tinnitus: A Review

    PubMed Central

    Wilson, E. Courtenay; Schlaug, Gottfried; Pantev, Christo

    2010-01-01

    Tinnitus is the perception of a sound in the absence of an external acoustic stimulus and it affects roughly 10-15% of the population. This review will discuss the different types of tinnitus and the current research on the underlying neural substrates of subjective tinnitus. Specific focus will be paid to the plasticity of the auditory cortex, the inputs from non-auditory centers in the central nervous system and how these are affected by tinnitus. We also will discuss several therapies that utilize music as a treatment for tinnitus and highlight a novel method that filters out the tinnitus frequency from the music, leveraging the plasticity in the auditory cortex as a means of reducing the impact of tinnitus. PMID:21170296
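
    A rough sketch of the frequency-filtering step such a therapy needs, assuming the listener's tinnitus frequency is known: a zero-phase band-stop filter removes roughly one octave of the music spectrum centred on that frequency. The notch width and filter order here are illustrative, not the settings of the reviewed studies.

```python
from scipy.signal import butter, sosfiltfilt

def notch_music(audio, fs, tinnitus_hz, octave_width=1.0):
    """Remove a band (default one octave) centred on the individual tinnitus
    frequency from a music signal, using a zero-phase band-stop filter."""
    lo = tinnitus_hz * 2.0 ** (-octave_width / 2.0)
    hi = tinnitus_hz * 2.0 ** (octave_width / 2.0)
    sos = butter(4, [lo, hi], btype="bandstop", fs=fs, output="sos")
    return sosfiltfilt(sos, audio)

# Example: filtered = notch_music(music_samples, fs=44100, tinnitus_hz=6000.0)
```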

  19. Tuning In to Sound: Frequency-Selective Attentional Filter in Human Primary Auditory Cortex

    PubMed Central

    Da Costa, Sandra; van der Zwaag, Wietske; Miller, Lee M.; Clarke, Stephanie

    2013-01-01

    Cocktail parties, busy streets, and other noisy environments pose a difficult challenge to the auditory system: how to focus attention on selected sounds while ignoring others? Neurons of primary auditory cortex, many of which are sharply tuned to sound frequency, could help solve this problem by filtering selected sound information based on frequency-content. To investigate whether this occurs, we used high-resolution fMRI at 7 tesla to map the fine-scale frequency-tuning (1.5 mm isotropic resolution) of primary auditory areas A1 and R in six human participants. Then, in a selective attention experiment, participants heard low (250 Hz)- and high (4000 Hz)-frequency streams of tones presented at the same time (dual-stream) and were instructed to focus attention onto one stream versus the other, switching back and forth every 30 s. Attention to low-frequency tones enhanced neural responses within low-frequency-tuned voxels relative to high, and when attention switched the pattern quickly reversed. Thus, like a radio, human primary auditory cortex is able to tune into attended frequency channels and can switch channels on demand. PMID:23365225

  20. Operator Performance Measures for Assessing Voice Communication Effectiveness

    DTIC Science & Technology

    1989-07-01

    [Garbled excerpt mixing report text with table-of-contents fragments; the recoverable content states that operator performance and workload assessment techniques have been based on models such as Broadbent's (1958) limited-capacity filter model of human information processing, and lists sections on auditory information processing (auditory attention, auditory memory) and models of information processing (capacity theories).]

  1. A masking level difference due to harmonicity.

    PubMed

    Treurniet, W C; Boucher, D R

    2001-01-01

    The role of harmonicity in masking was studied by comparing the effect of harmonic and inharmonic maskers on the masked thresholds of noise probes using a three-alternative, forced-choice method. Harmonic maskers were created by selecting sets of partials from a harmonic series with an 88-Hz fundamental and 45 consecutive partials. Inharmonic maskers differed in that the partial frequencies were perturbed to nearby values that were not integer multiples of the fundamental frequency. Average simultaneous-masked thresholds were as much as 10 dB lower with the harmonic masker than with the inharmonic masker, and this difference was unaffected by masker level. It was reduced or eliminated when the harmonic partials were separated by more than 176 Hz, suggesting that the effect is related to the extent to which the harmonics are resolved by auditory filters. The threshold difference was not observed in a forward-masking experiment. Finally, an across-channel mechanism was implicated when the threshold difference was found between a harmonic masker flanked by harmonic bands and a harmonic masker flanked by inharmonic bands. A model developed to explain the observed difference recognizes that an auditory filter output envelope is modulated when the filter passes two or more sinusoids, and that the modulation rate depends on the differences among the input frequencies. For a harmonic masker, the frequency differences of adjacent partials are identical, and all auditory filters have the same dominant modulation rate. For an inharmonic masker, however, the frequency differences are not constant and the envelope modulation rate varies across filters. The model proposes that a lower variability facilitates detection of a probe-induced change in the variability, thus accounting for the masked threshold difference. The model was supported by significantly improved predictions of observed thresholds when the predictor variables included envelope modulation rate variance measured using simulated auditory filters.
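
    The model's central observation, that an auditory filter passing two partials produces an output envelope beating at their frequency difference, can be checked with a toy calculation; the component frequencies below are arbitrary examples, not stimuli from the study.

    ```python
    # Toy check: the envelope of two summed sinusoids is modulated at their
    # frequency difference. The component frequencies are arbitrary examples.
    import numpy as np
    from scipy.signal import hilbert

    fs = 16000
    t = np.arange(0, 1.0, 1.0 / fs)

    def dominant_envelope_rate(f1, f2):
        x = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)
        env = np.abs(hilbert(x))                       # Hilbert envelope
        spec = np.abs(np.fft.rfft(env - env.mean()))   # spectrum of the envelope
        freqs = np.fft.rfftfreq(len(env), 1.0 / fs)
        return freqs[np.argmax(spec)]

    # Adjacent partials of an 88-Hz harmonic series beat at one common rate ...
    print(dominant_envelope_rate(880.0, 968.0))    # -> 88 Hz
    # ... whereas perturbed (inharmonic) partials beat at a different rate.
    print(dominant_envelope_rate(883.0, 995.0))    # -> 112 Hz
    ```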

  2. EEG-based auditory attention decoding using unprocessed binaural signals in reverberant and noisy conditions?

    PubMed

    Aroudi, Ali; Doclo, Simon

    2017-07-01

    To decode auditory attention from single-trial EEG recordings in an acoustic scenario with two competing speakers, a least-squares method has been recently proposed. This method however requires the clean speech signals of both the attended and the unattended speaker to be available as reference signals. Since in practice only the binaural signals consisting of a reverberant mixture of both speakers and background noise are available, in this paper we explore the potential of using these (unprocessed) signals as reference signals for decoding auditory attention in different acoustic conditions (anechoic, reverberant, noisy, and reverberant-noisy). In addition, we investigate whether it is possible to use these signals instead of the clean attended speech signal for filter training. The experimental results show that using the unprocessed binaural signals for filter training and for decoding auditory attention is feasible with a relatively large decoding performance, although for most acoustic conditions the decoding performance is significantly lower than when using the clean speech signals.
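
    The least-squares (stimulus-reconstruction) approach referred to here maps time-lagged EEG channels onto an estimate of a speech envelope and then decodes attention by correlating the reconstruction with each candidate reference signal. The sketch below is a generic, simplified version on synthetic data, not the authors' pipeline; the sampling rate, lag window, and signal names are placeholder choices.

    ```python
    # Minimal sketch of least-squares auditory attention decoding
    # (stimulus reconstruction). All data here are synthetic placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    fs, n_ch, n_s = 64, 16, 64 * 60           # 1 min of 64-Hz "EEG", 16 channels
    att = rng.standard_normal(n_s)            # attended-speech envelope (reference 1)
    unatt = rng.standard_normal(n_s)          # unattended-speech envelope (reference 2)
    eeg = np.outer(att, rng.standard_normal(n_ch)) + 0.5 * rng.standard_normal((n_s, n_ch))

    def lagged(x, lags):
        """Stack time-lagged copies of the EEG channels (0 .. lags-1 samples)."""
        return np.hstack([np.roll(x, lag, axis=0) for lag in range(lags)])

    X = lagged(eeg, lags=16)                    # roughly 250 ms of lags at 64 Hz
    w = np.linalg.lstsq(X, att, rcond=None)[0]  # least-squares decoder (training)
    recon = X @ w                               # reconstruction (a real test would use held-out data)

    corr = lambda a, b: np.corrcoef(a, b)[0, 1]
    winner = "attended" if corr(recon, att) > corr(recon, unatt) else "unattended"
    print("decoder picks the", winner, "stream")
    ```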

  3. Novel Spectro-Temporal Codes and Computations for Auditory Signal Representation and Separation

    DTIC Science & Technology

    2013-02-01

    Excerpt from a figure caption: the bottom right panel (c) shows the frequency responses of the tunable bandpass filter (BPF) triplets that adapt to the incoming signal. One BPF triplet is associated with each fixed filter, so that coarse filtering by the fixed gammatone filters is followed by additional, finer filtering, achieved using a second layer of narrower bandpass filters (BPFs, Q=8) that emulate the filtering functions of outer hair cells (OHCs).

  4. Low power adder based auditory filter architecture.

    PubMed

    Rahiman, P F Khaleelur; Jayanthi, V S

    2014-01-01

    Cochlear devices are battery powered and should have a long working life so that the device does not need replacing every few years; hence, devices with low power consumption are required. Such devices contain numerous filters, each responsible for a different frequency band, which helps in identifying speech signals across the audible range. In this paper, a multiplierless lookup table (LUT) based auditory filter is implemented. Power-aware adder architectures are used to add the output samples of the LUT, available at every clock cycle. The design is developed and modeled using Verilog HDL, simulated using the Mentor Graphics ModelSim simulator, and synthesized using the Synopsys Design Compiler tool. The design was mapped to the TSMC 65 nm technology node, and the standard ASIC design methodology was followed for the power analysis. The proposed FIR filter architecture reduces leakage power by 15% and improves performance by 2.76%.
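
    A multiplierless FIR filter replaces each coefficient multiplication with a small set of shifts and adds (or, equivalently, a lookup table of precomputed partial sums). The sketch below shows the shift-and-add idea only, with made-up coefficients; it is a software illustration, not the LUT/adder architecture of the paper.

    ```python
    # Software sketch of a multiplierless FIR filter: each coefficient is
    # approximated by a few signed powers of two, so every tap reduces to
    # shifts (here, divisions by 2**shift) and adds. Coefficients are made up.
    import math
    import numpy as np

    def to_signed_powers_of_two(c, n_terms=3, max_shift=8):
        """Greedy decomposition: c ~ sum(sign * 2**-shift)."""
        terms, residual = [], float(c)
        for _ in range(n_terms):
            if residual == 0.0:
                break
            shift = min(max_shift, max(0, round(-math.log2(abs(residual)))))
            sign = 1 if residual > 0 else -1
            terms.append((sign, shift))
            residual -= sign * 2.0 ** -shift
        return terms

    coeffs = [0.22, 0.47, 0.22]                     # toy low-pass FIR coefficients
    decomp = [to_signed_powers_of_two(c) for c in coeffs]

    def fir_shift_add(x, decomp):
        """y[n] = sum_k c_k * x[n-k], each c_k realised by shifts and adds."""
        y = np.zeros(len(x))
        for k, terms in enumerate(decomp):
            for sign, shift in terms:
                y[k:] += sign * (x[: len(x) - k] / (1 << shift))
        return y

    x = np.random.default_rng(0).standard_normal(256)
    y_exact = np.convolve(x, coeffs)[: len(x)]
    print("max approximation error:", float(np.max(np.abs(y_exact - fir_shift_add(x, decomp)))))
    ```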

  5. Evaluating the articulation index for auditory-visual input.

    PubMed

    Grant, K W; Braida, L D

    1991-06-01

    An investigation of the auditory-visual (AV) articulation index (AI) correction procedure outlined in the ANSI standard [ANSI S3.5-1969 (R1986)] was made by evaluating auditory (A), visual (V), and auditory-visual sentence identification for both wideband speech degraded by additive noise and a variety of bandpass-filtered speech conditions presented in quiet and in noise. When the data for each of the different listening conditions were averaged across talkers and subjects, the procedure outlined in the standard was fairly well supported, although deviations from the predicted AV score were noted for individual subjects as well as individual talkers. For filtered speech signals with AIA less than 0.25, there was a tendency for the standard to underpredict AV scores. Conversely, for signals with AIA greater than 0.25, the standard consistently overpredicted AV scores. Additionally, synergistic effects, where the AIA obtained from the combination of different bandpass-filtered conditions was greater than the sum of the individual AIA's, were observed for all nonadjacent filter-band combinations (e.g., the addition of a low-pass band with a 630-Hz cutoff and a high-pass band with a 3150-Hz cutoff). These latter deviations from the standard violate the basic assumption of additivity stated by Articulation Theory, but are consistent with earlier reports by Pollack [I. Pollack, J. Acoust. Soc. Am. 20, 259-266 (1948)], Licklider [J. C. R. Licklider, Psychology: A Study of a Science, Vol. 1, edited by S. Koch (McGraw-Hill, New York, 1959), pp. 41-144], and Kryter [K. D. Kryter, J. Acoust. Soc. Am. 32, 547-556 (1960)].

  6. Effects of Visual Speech on Early Auditory Evoked Fields - From the Viewpoint of Individual Variance.

    PubMed

    Yahata, Izumi; Kawase, Tetsuaki; Kanno, Akitake; Hidaka, Hiroshi; Sakamoto, Shuichi; Nakasato, Nobukazu; Kawashima, Ryuta; Katori, Yukio

    2017-01-01

    The effects of visual speech (the moving image of the speaker's face uttering speech sound) on early auditory evoked fields (AEFs) were examined using a helmet-shaped magnetoencephalography system in 12 healthy volunteers (9 males, mean age 35.5 years). AEFs (N100m) in response to the monosyllabic sound /be/ were recorded and analyzed under three different visual stimulus conditions, the moving image of the same speaker's face uttering /be/ (congruent visual stimuli) or uttering /ge/ (incongruent visual stimuli), and visual noise (still image processed from speaker's face using a strong Gaussian filter: control condition). On average, latency of N100m was significantly shortened in the bilateral hemispheres for both congruent and incongruent auditory/visual (A/V) stimuli, compared to the control A/V condition. However, the degree of N100m shortening was not significantly different between the congruent and incongruent A/V conditions, despite the significant differences in psychophysical responses between these two A/V conditions. Moreover, analysis of the magnitudes of these visual effects on AEFs in individuals showed that the lip-reading effects on AEFs tended to be well correlated between the two different audio-visual conditions (congruent vs. incongruent visual stimuli) in the bilateral hemispheres but were not significantly correlated between right and left hemisphere. On the other hand, no significant correlation was observed between the magnitudes of visual speech effects and psychophysical responses. These results may indicate that the auditory-visual interaction observed on the N100m is a fundamental process which does not depend on the congruency of the visual information.

  7. Two-Stage Processing of Sounds Explains Behavioral Performance Variations due to Changes in Stimulus Contrast and Selective Attention: An MEG Study

    PubMed Central

    Kauramäki, Jaakko; Jääskeläinen, Iiro P.; Hänninen, Jarno L.; Auranen, Toni; Nummenmaa, Aapo; Lampinen, Jouko; Sams, Mikko

    2012-01-01

    Selectively attending to task-relevant sounds whilst ignoring background noise is one of the most amazing feats performed by the human brain. Here, we studied the underlying neural mechanisms by recording magnetoencephalographic (MEG) responses of 14 healthy human subjects while they performed a near-threshold auditory discrimination task vs. a visual control task of similar difficulty. The auditory stimuli consisted of notch-filtered continuous noise masker sounds, and of 1020-Hz target tones occasionally replacing 1000-Hz standard tones of 300-ms duration that were embedded at the center of the notches, the widths of which were parametrically varied. As a control for masker effects, tone-evoked responses were additionally recorded without masker sound. Selective attention to tones significantly increased the amplitude of the onset M100 response at 100 ms to the standard tones during the presence of the masker sounds, especially with notches narrower than the critical band. Further, attention modulated the sustained response most clearly in the 300–400 ms time range from sound onset, with narrower notches than in the case of the M100, thus selectively reducing the masker-induced suppression of the tone-evoked response. Our results show evidence of a multiple-stage filtering mechanism of sensory input in the human auditory cortex: 1) one at early (100 ms) latencies bilaterally in posterior parts of the secondary auditory areas, and 2) adaptive filtering of attended sounds from task-irrelevant background masker at longer latency (300 ms) in more medial auditory cortical regions, predominantly in the left hemisphere, enhancing processing of near-threshold sounds. PMID:23071654
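
    For readers unfamiliar with the masker construction, a notched-noise stimulus of the kind used in this paradigm can be generated by zeroing a band of Fourier components around the target frequency. The sketch below is a generic illustration; the notch width, target frequency, and levels are arbitrary values, not those of the study.

    ```python
    # Generic notched-noise masker: broadband noise with a spectral notch
    # centred on the target frequency. All parameter values are illustrative.
    import numpy as np

    def notched_noise(fs=44100, dur=1.0, f_target=1000.0, notch_width=200.0, seed=0):
        rng = np.random.default_rng(seed)
        n = int(fs * dur)
        spectrum = np.fft.rfft(rng.standard_normal(n))
        freqs = np.fft.rfftfreq(n, 1.0 / fs)
        lo, hi = f_target - notch_width / 2, f_target + notch_width / 2
        spectrum[(freqs >= lo) & (freqs <= hi)] = 0.0    # carve out the notch
        return np.fft.irfft(spectrum, n)

    fs = 44100
    masker = notched_noise(fs=fs)
    tone = np.sin(2 * np.pi * 1000.0 * np.arange(fs) / fs)   # 1-s target tone
    stimulus = masker + 0.1 * tone       # tone sits at the centre of the notch
    print(stimulus.shape)
    ```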

  8. Sparse gammatone signal model optimized for English speech does not match the human auditory filters.

    PubMed

    Strahl, Stefan; Mertins, Alfred

    2008-07-18

    Evidence that neurosensory systems use sparse signal representations as well as improved performance of signal processing algorithms using sparse signal models raised interest in sparse signal coding in the last years. For natural audio signals like speech and environmental sounds, gammatone atoms have been derived as expansion functions that generate a nearly optimal sparse signal model (Smith, E., Lewicki, M., 2006. Efficient auditory coding. Nature 439, 978-982). Furthermore, gammatone functions are established models for the human auditory filters. Thus far, a practical application of a sparse gammatone signal model has been prevented by the fact that deriving the sparsest representation is, in general, computationally intractable. In this paper, we applied an accelerated version of the matching pursuit algorithm for gammatone dictionaries allowing real-time and large data set applications. We show that a sparse signal model in general has advantages in audio coding and that a sparse gammatone signal model encodes speech more efficiently in terms of sparseness than a sparse modified discrete cosine transform (MDCT) signal model. We also show that the optimal gammatone parameters derived for English speech do not match the human auditory filters, suggesting for signal processing applications to derive the parameters individually for each applied signal class instead of using psychometrically derived parameters. For brain research, it means that care should be taken with directly transferring findings of optimality for technical to biological systems.
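
    Matching pursuit over a gammatone dictionary greedily selects, at each iteration, the atom most correlated with the current residual and subtracts its contribution. The sketch below illustrates that basic loop on a toy, non-shift-invariant dictionary; it is not the accelerated algorithm described in the paper, and all parameter values are illustrative.

    ```python
    # Toy matching pursuit over a small gammatone dictionary. This is the
    # plain, unaccelerated greedy loop, shown only to illustrate the idea.
    import numpy as np

    fs, dur = 16000, 0.05
    t = np.arange(0, dur, 1.0 / fs)

    def gammatone_atom(fc, order=4):
        erb = 24.7 + 0.108 * fc
        g = t ** (order - 1) * np.exp(-2 * np.pi * 1.019 * erb * t) * np.cos(2 * np.pi * fc * t)
        return g / np.linalg.norm(g)

    centre_freqs = np.geomspace(100, 6000, 64)
    D = np.stack([gammatone_atom(fc) for fc in centre_freqs])   # atoms as rows

    target = 0.8 * D[10] + 0.5 * D[40]                          # synthetic sparse signal
    residual, code = target.copy(), np.zeros(len(D))
    for _ in range(5):                                          # 5 greedy iterations
        proj = D @ residual                                     # correlation with every atom
        k = int(np.argmax(np.abs(proj)))
        code[k] += proj[k]
        residual -= proj[k] * D[k]                              # subtract the chosen atom

    print("selected atoms:", np.nonzero(code)[0],
          "residual energy:", float(residual @ residual))
    ```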

  9. Differential relationships of impulsivity or antisocial symptoms on P50, N100, or P200 auditory sensory gating in controls and antisocial personality disorder

    PubMed Central

    Lijffijt, Marijn; Cox, Blake; Acas, Michelle D.; Lane, Scott D.; Moeller, F. Gerard; Swann, Alan C.

    2013-01-01

    Limited information is available on the relationship between antisocial personality disorder (ASPD) and early filtering, or gating, of information, even though this could contribute to the repeatedly reported impairment in ASPD of higher-order information processing. In order to investigate early filtering in ASPD, we compared electrophysiological measures of auditory sensory gating assessed by the paired-click paradigm in males with ASPD (n = 37) to healthy controls (n = 28). Stimulus encoding was measured by P50, N100, and P200 auditory evoked potentials; auditory sensory gating (ASG) was measured by a reduction in amplitude of evoked potentials following click repetition. Effects were studied of co-existing past alcohol or drug use disorders, ASPD symptom counts, and trait impulsivity. Controls and ASPD did not differ in P50, N100, or P200 amplitude or ASG. Past alcohol or drug use disorders had no effect. In controls, impulsivity related to improved P50 and P200 gating. In ASPD, P50 or N100 gating was impaired with more symptoms or increased impulsivity, respectively, suggesting impaired early filtering of irrelevant information. In controls the relationship between P50 and P200 gating and impulsivity was reversed, suggesting better gating with higher impulsivity scores. This could reflect different roles of ASG in behavioral regulation in controls versus ASPD. PMID:22464943

  10. Towards an understanding of the mechanisms of weak central coherence effects: experiments in visual configural learning and auditory perception.

    PubMed

    Plaisted, Kate; Saksida, Lisa; Alcántara, José; Weisblatt, Emma

    2003-02-28

    The weak central coherence hypothesis of Frith is one of the most prominent theories concerning the abnormal performance of individuals with autism on tasks that involve local and global processing. Individuals with autism often outperform matched nonautistic individuals on tasks in which success depends upon processing of local features, and underperform on tasks that require global processing. We review those studies that have been unable to identify the locus of the mechanisms that may be responsible for weak central coherence effects and those that show that local processing is enhanced in autism but not at the expense of global processing. In the light of these studies, we propose that the mechanisms which can give rise to 'weak central coherence' effects may be perceptual. More specifically, we propose that perception operates to enhance the representation of individual perceptual features but that this does not impact adversely on representations that involve integration of features. This proposal was supported in the two experiments we report on configural and feature discrimination learning in high-functioning children with autism. We also examined processes of perception directly, in an auditory filtering task which measured the width of auditory filters in individuals with autism and found that the width of auditory filters in autism were abnormally broad. We consider the implications of these findings for perceptual theories of the mechanisms underpinning weak central coherence effects.

  11. EyeMusic: Introducing a "visual" colorful experience for the blind using auditory sensory substitution.

    PubMed

    Abboud, Sami; Hanassy, Shlomi; Levy-Tzedek, Shelly; Maidenbaum, Shachar; Amedi, Amir

    2014-01-01

    Sensory-substitution devices (SSDs) provide auditory or tactile representations of visual information. These devices often generate unpleasant sensations and mostly lack color information. We present here a novel SSD aimed at addressing these issues. We developed the EyeMusic, a novel visual-to-auditory SSD for the blind, providing both shape and color information. Our design uses musical notes on a pentatonic scale generated by natural instruments to convey the visual information in a pleasant manner. A short behavioral protocol was utilized to train the blind to extract shape and color information, and test their acquired abilities. Finally, we conducted a survey and a comparison task to assess the pleasantness of the generated auditory stimuli. We show that basic shape and color information can be decoded from the generated auditory stimuli. High performance levels were achieved by all participants following as little as 2-3 hours of training. Furthermore, we show that users indeed found the stimuli pleasant and potentially tolerable for prolonged use. The novel EyeMusic algorithm provides an intuitive and relatively pleasant way for the blind to extract shape and color information. We suggest that this might help facilitate visual rehabilitation because of the added functionality and enhanced pleasantness.

  12. [Thalamus and Attention].

    PubMed

    Tokoro, Kazuhiko; Sato, Hironobu; Yamamoto, Mayumi; Nagai, Yoshiko

    2015-12-01

    Attention is the process by which information is selected; the thalamus plays an important role in the selective attention of visual and auditory information. Selective attention is a conscious effort; however, it occurs subconsciously, as well. The lateral geniculate body (LGB) filters visual information before it reaches the cortex (bottom-up attention). The thalamic reticular nucleus (TRN) provides a strong inhibitory input to both the LGB and pulvinar. This regulation involves focusing a spotlight on important information, as well as inhibiting unnecessary background information. Behavioral contexts more strongly modulate activity of the TRN and pulvinar, influencing feedforward and feedback information transmission between the frontal, temporal, parietal and occipital cortical areas (top-down attention). The medial geniculate body (MGB) filters auditory information, and the TRN inhibits the MGB. Attentional modulation occurring in the auditory pathway among the cochlea, cochlear nucleus, superior olivary complex, and inferior colliculus is more important than that of the MGB and TRN. We also discuss the attentional consequences of thalamic hemorrhage.

  13. Characteristics of spectro-temporal modulation frequency selectivity in humans.

    PubMed

    Oetjen, Arne; Verhey, Jesko L

    2017-03-01

    There is increasing evidence that the auditory system shows frequency selectivity for spectro-temporal modulations. A recent study by the authors showed spectro-temporal modulation masking patterns that were in agreement with the hypothesis of spectro-temporal modulation filters in the human auditory system [Oetjen and Verhey (2015). J. Acoust. Soc. Am. 137(2), 714-723]. In the present study, those experimental data and additional data were used to model this spectro-temporal frequency selectivity. The additional data were collected to investigate to what extent the spectro-temporal modulation-frequency selectivity results from a combination of a purely temporal amplitude-modulation filter and a purely spectral amplitude-modulation filter. In contrast to the previous study, thresholds were measured for masker and target modulations with opposite directions, i.e., an upward pointing target modulation and a downward pointing masker modulation. The comparison of this data set with previous corresponding data with the same direction of target and masker modulations indicates that a specific spectro-temporal modulation filter is required to simulate all aspects of spectro-temporal modulation frequency selectivity. A model using a modified Gabor filter with a purely temporal and a purely spectral filter predicts the spectro-temporal modulation masking data.
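
    A spectro-temporal Gabor filter of the general kind used in such models is a two-dimensional Gaussian envelope multiplied by a drifting sinusoidal carrier over the time and (log-)frequency axes. The construction below is generic, with illustrative modulation frequencies and bandwidths; it is not the modified Gabor filter fitted in the cited study.

    ```python
    # Generic spectro-temporal Gabor filter on a (frequency channel, time) grid.
    # Modulation frequencies and bandwidths are illustrative only.
    import numpy as np

    def st_gabor(temporal_mf=4.0, spectral_mf=1.0, direction=+1,
                 t_sigma=0.15, f_sigma=0.5, n_time=100, n_chan=64):
        """temporal_mf in Hz, spectral_mf in cycles/octave; direction flips the drift."""
        t = np.linspace(-0.5, 0.5, n_time)          # seconds around the kernel centre
        f = np.linspace(-2.0, 2.0, n_chan)          # octaves re: the centre channel
        T, F = np.meshgrid(t, f)
        envelope = np.exp(-T**2 / (2 * t_sigma**2) - F**2 / (2 * f_sigma**2))
        carrier = np.cos(2 * np.pi * (temporal_mf * T + direction * spectral_mf * F))
        return envelope * carrier                   # shape: (n_chan, n_time)

    one_way = st_gabor(direction=+1)      # tuned to one spectro-temporal drift direction
    other_way = st_gabor(direction=-1)    # tuned to the opposite direction
    print(one_way.shape, round(float(np.sum(one_way * other_way)), 3))
    ```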

  14. Auditory color constancy: calibration to reliable spectral properties across nonspeech context and targets.

    PubMed

    Stilp, Christian E; Alexander, Joshua M; Kiefte, Michael; Kluender, Keith R

    2010-02-01

    Brief experience with reliable spectral characteristics of a listening context can markedly alter perception of subsequent speech sounds, and parallels have been drawn between auditory compensation for listening context and visual color constancy. In order to better evaluate such an analogy, the generality of acoustic context effects for sounds with spectral-temporal compositions distinct from speech was investigated. Listeners identified nonspeech sounds-extensively edited samples produced by a French horn and a tenor saxophone-following either resynthesized speech or a short passage of music. Preceding contexts were "colored" by spectral envelope difference filters, which were created to emphasize differences between French horn and saxophone spectra. Listeners were more likely to report hearing a saxophone when the stimulus followed a context filtered to emphasize spectral characteristics of the French horn, and vice versa. Despite clear changes in apparent acoustic source, the auditory system calibrated to relatively predictable spectral characteristics of filtered context, differentially affecting perception of subsequent target nonspeech sounds. This calibration to listening context and relative indifference to acoustic sources operates much like visual color constancy, for which reliable properties of the spectrum of illumination are factored out of perception of color.

  15. Temporal binding of neural responses for focused attention in biosonar

    PubMed Central

    Simmons, James A.

    2014-01-01

    Big brown bats emit biosonar sounds and perceive their surroundings from the delays of echoes received by the ears. Broadcasts are frequency modulated (FM) and contain two prominent harmonics sweeping from 50 to 25 kHz (FM1) and from 100 to 50 kHz (FM2). Individual frequencies in each broadcast and each echo evoke single-spike auditory responses. Echo delay is encoded by the time elapsed between volleys of responses to broadcasts and volleys of responses to echoes. If echoes have the same spectrum as broadcasts, the volley of neural responses to FM1 and FM2 is internally synchronized for each sound, which leads to sharply focused delay images. Because of amplitude–latency trading, disruption of response synchrony within the volleys occurs if the echoes are lowpass filtered, leading to blurred, defocused delay images. This effect is consistent with the temporal binding hypothesis for perceptual image formation. Bats perform inexplicably well in cluttered surroundings where echoes from off-side objects ought to cause masking. Off-side echoes are lowpass filtered because of the shape of the broadcast beam, and they evoke desynchronized auditory responses. The resulting defocused images of clutter do not mask perception of focused images for targets. Neural response synchronization may select a target to be the focus of attention, while desynchronization may impose inattention on the surroundings by defocusing perception of clutter. The formation of focused biosonar images from synchronized neural responses, and the defocusing that occurs with disruption of synchrony, quantitatively demonstrates how temporal binding may control attention and bring a perceptual object into existence. PMID:25122915

  16. Linguistic Profiles of Children with CI as Compared with Children with Hearing or Specific Language Impairment

    ERIC Educational Resources Information Center

    Hoog, Brigitte E.; Langereis, Margreet C.; Weerdenburg, Marjolijn; Knoors, Harry E. T.; Verhoeven, Ludo

    2016-01-01

    Background: The spoken language difficulties of children with moderate or severe to profound hearing loss are mainly related to limited auditory speech perception. However, degraded or filtered auditory input as evidenced in children with cochlear implants (CIs) may result in less efficient or slower language processing as well. To provide insight…

  17. Towards an understanding of the mechanisms of weak central coherence effects: experiments in visual configural learning and auditory perception.

    PubMed Central

    Plaisted, Kate; Saksida, Lisa; Alcántara, José; Weisblatt, Emma

    2003-01-01

    The weak central coherence hypothesis of Frith is one of the most prominent theories concerning the abnormal performance of individuals with autism on tasks that involve local and global processing. Individuals with autism often outperform matched nonautistic individuals on tasks in which success depends upon processing of local features, and underperform on tasks that require global processing. We review those studies that have been unable to identify the locus of the mechanisms that may be responsible for weak central coherence effects and those that show that local processing is enhanced in autism but not at the expense of global processing. In the light of these studies, we propose that the mechanisms which can give rise to 'weak central coherence' effects may be perceptual. More specifically, we propose that perception operates to enhance the representation of individual perceptual features but that this does not impact adversely on representations that involve integration of features. This proposal was supported in the two experiments we report on configural and feature discrimination learning in high-functioning children with autism. We also examined processes of perception directly, in an auditory filtering task which measured the width of auditory filters in individuals with autism and found that the width of auditory filters in autism were abnormally broad. We consider the implications of these findings for perceptual theories of the mechanisms underpinning weak central coherence effects. PMID:12639334

  18. Cutaneous sensory nerve as a substitute for auditory nerve in solving deaf-mutes’ hearing problem: an innovation in multi-channel-array skin-hearing technology

    PubMed Central

    Li, Jianwen; Li, Yan; Zhang, Ming; Ma, Weifang; Ma, Xuezong

    2014-01-01

    The current use of hearing aids and artificial cochleas for deaf-mute individuals depends on their auditory nerve. Skin-hearing technology, a patented system developed by our group, uses a cutaneous sensory nerve to substitute for the auditory nerve to help deaf-mutes to hear sound. This paper introduces a new solution, multi-channel-array skin-hearing technology, to solve the problem of speech discrimination. Based on the filtering principle of hair cells, external voice signals at different frequencies are converted to current signals at corresponding frequencies using electronic multi-channel bandpass filtering technology. Different positions on the skin can be stimulated by the electrode array, allowing the perception and discrimination of external speech signals to be determined by the skin response to the current signals. Through voice frequency analysis, the frequency range of the band-pass filter can also be determined. These findings demonstrate that the sensory nerves in the skin can help to transfer the voice signal and to distinguish the speech signal, suggesting that the skin sensory nerves are good candidates for the replacement of the auditory nerve in addressing deaf-mutes’ hearing problems. Scientific hearing experiments can be more safely performed on the skin. Compared with the artificial cochlea, multi-channel-array skin-hearing aids have lower operation risk in use, are cheaper and are more easily popularized. PMID:25317171

  19. Effects of Visual Speech on Early Auditory Evoked Fields - From the Viewpoint of Individual Variance

    PubMed Central

    Yahata, Izumi; Kanno, Akitake; Hidaka, Hiroshi; Sakamoto, Shuichi; Nakasato, Nobukazu; Kawashima, Ryuta; Katori, Yukio

    2017-01-01

    The effects of visual speech (the moving image of the speaker’s face uttering speech sound) on early auditory evoked fields (AEFs) were examined using a helmet-shaped magnetoencephalography system in 12 healthy volunteers (9 males, mean age 35.5 years). AEFs (N100m) in response to the monosyllabic sound /be/ were recorded and analyzed under three different visual stimulus conditions, the moving image of the same speaker’s face uttering /be/ (congruent visual stimuli) or uttering /ge/ (incongruent visual stimuli), and visual noise (still image processed from speaker’s face using a strong Gaussian filter: control condition). On average, latency of N100m was significantly shortened in the bilateral hemispheres for both congruent and incongruent auditory/visual (A/V) stimuli, compared to the control A/V condition. However, the degree of N100m shortening was not significantly different between the congruent and incongruent A/V conditions, despite the significant differences in psychophysical responses between these two A/V conditions. Moreover, analysis of the magnitudes of these visual effects on AEFs in individuals showed that the lip-reading effects on AEFs tended to be well correlated between the two different audio-visual conditions (congruent vs. incongruent visual stimuli) in the bilateral hemispheres but were not significantly correlated between right and left hemisphere. On the other hand, no significant correlation was observed between the magnitudes of visual speech effects and psychophysical responses. These results may indicate that the auditory-visual interaction observed on the N100m is a fundamental process which does not depend on the congruency of the visual information. PMID:28141836

  20. Dolphin biosonar target detection in noise: wrap up of a past experiment.

    PubMed

    Au, Whitlow W L

    2014-07-01

    The target detection capability of bottlenose dolphins in the presence of artificial masking noise was first studied by Au and Penner [J. Acoust. Soc. Am. 70, 687-693 (1981)] in which the dolphins' target detection threshold was determined as a function of the ratio of the echo energy flux density and the estimated received noise spectral density. Such a metric was commonly used in human psychoacoustics despite the fact that the echo energy flux density is not compatible with noise spectral density which is averaged intensity per Hz. Since the earlier detection in noise studies, two important parameters, the dolphin integration time applicable to broadband clicks and the dolphin's auditory filter shape, were determined. The inclusion of these two parameters allows for the estimation of the received energy flux density of the masking noise so that the dolphin target detection can now be determined as a function of the ratio of the received energy of the echo over the received noise energy. Using an integration time of 264 μs and an auditory bandwidth of 16.7 kHz, the ratio of the echo energy to noise energy at the target detection threshold is approximately 1 dB.
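
    The conversion from noise spectral density to received noise energy uses the auditory filter bandwidth and the integration time. A minimal worked example of that dB bookkeeping is sketched below; the noise spectral density value is a placeholder, while the bandwidth, integration time, and approximately 1-dB threshold ratio are taken from the abstract.

    ```python
    # dB bookkeeping: noise spectral density -> received noise energy, using the
    # integration time and auditory filter bandwidth given in the abstract. The
    # noise spectral density value is a placeholder, not data from the study.
    import math

    N0_dB = 100.0              # noise spectral density, dB re 1 uPa^2/Hz (placeholder)
    bandwidth_Hz = 16.7e3      # dolphin auditory filter bandwidth (from the abstract)
    integration_s = 264e-6     # dolphin integration time (from the abstract)

    noise_level_dB = N0_dB + 10 * math.log10(bandwidth_Hz)            # noise level within the filter
    noise_energy_dB = noise_level_dB + 10 * math.log10(integration_s) # energy over the integration window

    # At threshold the echo energy exceeds the noise energy by about 1 dB (per the abstract):
    echo_energy_at_threshold_dB = noise_energy_dB + 1.0
    print(round(noise_energy_dB, 1), round(echo_energy_at_threshold_dB, 1))
    ```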

  1. Statistics of natural binaural sounds.

    PubMed

    Młynarski, Wiktor; Jost, Jürgen

    2014-01-01

    Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify a position of a sound source. In natural conditions, however, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping sources. Therefore, statistics of binaural cues depend on acoustic properties and the spatial configuration of the environment. Distributions of cues encountered naturally and their dependence on the physical properties of an auditory scene have not been studied before. In the present work we analyzed statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed empirical cue distributions from each scene. We have found that certain properties such as the spread of IPD distributions as well as the overall shape of ILD distributions do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much less across frequency channels, and IPDs often attain much higher values than can be predicted from head filtering properties. In order to understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of learned basis functions indicate that in natural conditions sound waves in each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction.
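
    Independent Component Analysis of a two-channel (binaural) recording can be run directly with standard tools. The sketch below applies scikit-learn's FastICA to a synthetic stereo mixture; the sources and mixing matrix are made up, and this is only an illustration of the method, not the authors' analysis.

    ```python
    # Illustrative ICA on a synthetic two-channel "binaural" mixture using
    # scikit-learn's FastICA. The sources and mixing matrix are made up.
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    n = 20000
    s1 = np.sign(np.sin(2 * np.pi * 3 * np.arange(n) / n))   # source 1: square-like wave
    s2 = rng.laplace(size=n)                                 # source 2: sparse noise
    S = np.c_[s1, s2]

    A = np.array([[0.9, 0.4],       # mixing: each "ear" receives both sources
                  [0.3, 0.8]])      # with different gains
    X = S @ A.T                     # the two-channel recording

    ica = FastICA(n_components=2, random_state=0)
    S_hat = ica.fit_transform(X)    # recovered sources (up to scale and order)
    print(S_hat.shape, ica.mixing_.shape)
    ```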

  2. Statistics of Natural Binaural Sounds

    PubMed Central

    Młynarski, Wiktor; Jost, Jürgen

    2014-01-01

    Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify a position of a sound source. In natural conditions, however, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping sources. Therefore, statistics of binaural cues depend on acoustic properties and the spatial configuration of the environment. Distributions of cues encountered naturally and their dependence on the physical properties of an auditory scene have not been studied before. In the present work we analyzed statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed empirical cue distributions from each scene. We have found that certain properties such as the spread of IPD distributions as well as the overall shape of ILD distributions do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much less across frequency channels, and IPDs often attain much higher values than can be predicted from head filtering properties. In order to understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of learned basis functions indicate that in natural conditions sound waves in each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction. PMID:25285658

  3. Do informal musical activities shape auditory skill development in preschool-age children?

    PubMed

    Putkinen, Vesa; Saarikivi, Katri; Tervaniemi, Mari

    2013-08-29

    The influence of formal musical training on auditory cognition has been well established. For the majority of children, however, musical experience does not primarily consist of adult-guided training on a musical instrument. Instead, young children mostly engage in everyday musical activities such as singing and musical play. Here, we review recent electrophysiological and behavioral studies carried out in our laboratory and elsewhere which have begun to map how developing auditory skills are shaped by such informal musical activities both at home and in playschool-type settings. Although more research is still needed, the evidence emerging from these studies suggests that, in addition to formal musical training, informal musical activities can also influence the maturation of auditory discrimination and attention in preschool-aged children.

  4. Do informal musical activities shape auditory skill development in preschool-age children?

    PubMed Central

    Putkinen, Vesa; Saarikivi, Katri; Tervaniemi, Mari

    2013-01-01

    The influence of formal musical training on auditory cognition has been well established. For the majority of children, however, musical experience does not primarily consist of adult-guided training on a musical instrument. Instead, young children mostly engage in everyday musical activities such as singing and musical play. Here, we review recent electrophysiological and behavioral studies carried out in our laboratory and elsewhere which have begun to map how developing auditory skills are shaped by such informal musical activities both at home and in playschool-type settings. Although more research is still needed, the evidence emerging from these studies suggests that, in addition to formal musical training, informal musical activities can also influence the maturation of auditory discrimination and attention in preschool-aged children. PMID:24009597

  5. [Auditory and corporal laterality, logoaudiometry, and monaural hearing aid gain].

    PubMed

    Benavides, Mariela; Peñaloza-López, Yolanda R; de la Sancha-Jiménez, Sabino; García Pedroza, Felipe; Gudiño, Paula K

    2007-12-01

    The aim was to identify the auditory or clinical test that correlates best with the ear to which a monaural hearing aid is fitted in symmetric bilateral hearing loss. A total of 37 adult patients with symmetric bilateral hearing loss were examined for the correlation between the best speech discrimination score, corporal laterality, auditory laterality measured with dichotic digits in Spanish, and the score for filtered words with a monaural hearing aid. The best correlation was obtained between auditory laterality and hearing aid gain (0.940). The dichotic test of auditory laterality is a good tool for identifying the better ear for fitting a monaural hearing aid. The results suggest that this test should be applied before a hearing aid is indicated.

  6. Optimal resource allocation for novelty detection in a human auditory memory.

    PubMed

    Sinkkonen, J; Kaski, S; Huotilainen, M; Ilmoniemi, R J; Näätänen, R; Kaila, K

    1996-11-04

    A theory of resource allocation for neuronal low-level filtering is presented, based on an analysis of optimal resource allocation in simple environments. A quantitative prediction of the theory was verified in measurements of the magnetic mismatch response (MMR), an auditory event-related magnetic response of the human brain. The amplitude of the MMR was found to be directly proportional to the information conveyed by the stimulus. To the extent that the amplitude of the MMR can be used to measure resource usage by the auditory cortex, this finding supports our theory that, at least for early auditory processing, energy resources are used in proportion to the information content of incoming stimulus flow.

  7. Cross-Modal Perception of Noise-in-Music: Audiences Generate Spiky Shapes in Response to Auditory Roughness in a Novel Electroacoustic Concert Setting

    PubMed Central

    Liew, Kongmeng; Lindborg, PerMagnus; Rodrigues, Ruth; Styles, Suzy J.

    2018-01-01

    Noise has become integral to electroacoustic music aesthetics. In this paper, we define noise as sound that is high in auditory roughness, and examine its effect on cross-modal mapping between sound and visual shape in participants. In order to preserve the ecological validity of contemporary music aesthetics, we developed Rama, a novel interface, for presenting experimentally controlled blocks of electronically generated sounds that varied systematically in roughness, and actively collected data from audience interaction. These sounds were then embedded as musical drones within the overall sound design of a multimedia performance with live musicians. Audience members listened to these sounds and collectively voted to create the shape of a visual graphic, presented as part of the audio–visual performance. The results of the concert setting were replicated in a controlled laboratory environment to corroborate the findings. Results show a consistent effect of auditory roughness on shape design, with rougher sounds corresponding to spikier shapes. We discuss the implications, as well as evaluate the audience interface. PMID:29515494

  8. Cross-Modal Perception of Noise-in-Music: Audiences Generate Spiky Shapes in Response to Auditory Roughness in a Novel Electroacoustic Concert Setting.

    PubMed

    Liew, Kongmeng; Lindborg, PerMagnus; Rodrigues, Ruth; Styles, Suzy J

    2018-01-01

    Noise has become integral to electroacoustic music aesthetics. In this paper, we define noise as sound that is high in auditory roughness, and examine its effect on cross-modal mapping between sound and visual shape in participants. In order to preserve the ecological validity of contemporary music aesthetics, we developed Rama, a novel interface, for presenting experimentally controlled blocks of electronically generated sounds that varied systematically in roughness, and actively collected data from audience interaction. These sounds were then embedded as musical drones within the overall sound design of a multimedia performance with live musicians. Audience members listened to these sounds and collectively voted to create the shape of a visual graphic, presented as part of the audio-visual performance. The results of the concert setting were replicated in a controlled laboratory environment to corroborate the findings. Results show a consistent effect of auditory roughness on shape design, with rougher sounds corresponding to spikier shapes. We discuss the implications, as well as evaluate the audience interface.

  9. The Encoding of Sound Source Elevation in the Human Auditory Cortex.

    PubMed

    Trapeau, Régis; Schönwiesner, Marc

    2018-03-28

    Spatial hearing is a crucial capacity of the auditory system. While the encoding of horizontal sound direction has been extensively studied, very little is known about the representation of vertical sound direction in the auditory cortex. Using high-resolution fMRI, we measured voxelwise sound elevation tuning curves in human auditory cortex and show that sound elevation is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. We changed the ear shape of participants (male and female) with silicone molds for several days. This manipulation reduced or abolished the ability to discriminate sound elevation and flattened cortical tuning curves. Tuning curves recovered their original shape as participants adapted to the modified ears and regained elevation perception over time. These findings suggest that the elevation tuning observed in low-level auditory cortex did not arise from the physical features of the stimuli but is contingent on experience with spectral cues and covaries with the change in perception. One explanation for this observation may be that the tuning in low-level auditory cortex underlies the subjective perception of sound elevation. SIGNIFICANCE STATEMENT This study addresses two fundamental questions about the brain representation of sensory stimuli: how the vertical spatial axis of auditory space is represented in the auditory cortex and whether low-level sensory cortex represents physical stimulus features or subjective perceptual attributes. Using high-resolution fMRI, we show that vertical sound direction is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. In addition, we demonstrate that the shape of these tuning functions is contingent on experience with spectral cues and covaries with the change in perception, which may indicate that the tuning functions in low-level auditory cortex underlie the perceived elevation of a sound source. Copyright © 2018 the authors 0270-6474/18/383252-13$15.00/0.

  10. [Comparison of tone burst evoked auditory brainstem responses with different filter settings for referral infants after hearing screening].

    PubMed

    Diao, Wen-wen; Ni, Dao-feng; Li, Feng-rong; Shang, Ying-ying

    2011-03-01

    Auditory brainstem responses (ABR) evoked by tone bursts are an important method of hearing assessment in infants referred after hearing screening. The present study compared tone burst ABR thresholds obtained with filter settings of 30-1500 Hz and 30-3000 Hz at each frequency, characterized the thresholds and waveform quality under the two settings, and aimed to identify the more suitable parameter for frequency-specific ABR testing. Click and tone burst ABR thresholds were recorded with both filter settings in children aged 2-33 months. A total of 18 patients (8 male, 10 female; 22 ears) were included. Thresholds obtained with the 30-3000 Hz setting were higher than those obtained with the 30-1500 Hz setting; the difference was significant at 0.5 kHz and 2.0 kHz (t = 2.238 and 2.217, P < 0.05) and not significant at the remaining frequencies. At the same stimulus intensity, the ABR waveform with the 30-1500 Hz setting was smoother than with the 30-3000 Hz setting, whose response curve showed jagged small interfering waves. The 30-1500 Hz filter setting may therefore be the more suitable parameter for frequency-specific ABR, improving the accuracy of infants' hearing assessment.

  11. Auditory brainstem responses in the Eastern Screech Owl: An estimate of auditory thresholds

    USGS Publications Warehouse

    Brittan-Powell, E.F.; Lohr, B.; Hahn, D.C.; Dooling, R.J.

    2005-01-01

    The auditory brainstem response (ABR), a measure of neural synchrony, was used to estimate auditory sensitivity in the eastern screech owl (Megascops asio). The typical screech owl ABR waveform showed two to three prominent peaks occurring within 5 ms of stimulus onset. As sound pressure levels increased, the ABR peak amplitude increased and latency decreased. With an increasing stimulus presentation rate, ABR peak amplitude decreased and latency increased. Generally, changes in the ABR waveform to stimulus intensity and repetition rate are consistent with the pattern found in several avian families. The ABR audiogram shows that screech owls hear best between 1.5 and 6.4 kHz with the most acute sensitivity between 4 and 5.7 kHz. The shape of the average screech owl ABR audiogram is similar to the shape of the behaviorally measured audiogram of the barn owl, except at the highest frequencies. Our data also show differences in overall auditory sensitivity between the color morphs of screech owls.

  12. Temporal binding of neural responses for focused attention in biosonar.

    PubMed

    Simmons, James A

    2014-08-15

    Big brown bats emit biosonar sounds and perceive their surroundings from the delays of echoes received by the ears. Broadcasts are frequency modulated (FM) and contain two prominent harmonics sweeping from 50 to 25 kHz (FM1) and from 100 to 50 kHz (FM2). Individual frequencies in each broadcast and each echo evoke single-spike auditory responses. Echo delay is encoded by the time elapsed between volleys of responses to broadcasts and volleys of responses to echoes. If echoes have the same spectrum as broadcasts, the volley of neural responses to FM1 and FM2 is internally synchronized for each sound, which leads to sharply focused delay images. Because of amplitude-latency trading, disruption of response synchrony within the volleys occurs if the echoes are lowpass filtered, leading to blurred, defocused delay images. This effect is consistent with the temporal binding hypothesis for perceptual image formation. Bats perform inexplicably well in cluttered surroundings where echoes from off-side objects ought to cause masking. Off-side echoes are lowpass filtered because of the shape of the broadcast beam, and they evoke desynchronized auditory responses. The resulting defocused images of clutter do not mask perception of focused images for targets. Neural response synchronization may select a target to be the focus of attention, while desynchronization may impose inattention on the surroundings by defocusing perception of clutter. The formation of focused biosonar images from synchronized neural responses, and the defocusing that occurs with disruption of synchrony, quantitatively demonstrates how temporal binding may control attention and bring a perceptual object into existence. © 2014. Published by The Company of Biologists Ltd.

  13. Visual recognition memory and auditory brainstem response in infant rhesus monkeys exposed perinatally to environmental tobacco smoke.

    PubMed

    Golub, Mari S; Slotkin, Theodore A; Tarantal, Alice F; Pinkerton, Kent E

    2007-06-02

    The impact of perinatal exposure to environmental tobacco smoke (ETS) on cognitive development is controversial. We exposed rhesus monkeys to ETS or filtered air (5 animals per group) beginning in utero on day 50 of pregnancy and continuing throughout postnatal testing. In infancy, we evaluated both groups for visual recognition memory and auditory function (auditory brainstem response). The ETS group showed significantly less novelty preference in the visual recognition task whereas no effects on auditory function were detected. These preliminary results support the view that perinatal ETS exposure has adverse effects on cognitive function and indicate further that rhesus monkeys may provide a valuable nonhuman primate model for investigating this link.

  14. ERP Correlates of Language-Specific Processing of Auditory Pitch Feedback during Self-Vocalization

    ERIC Educational Resources Information Center

    Chen, Zhaocong; Liu, Peng; Wang, Emily Q.; Larson, Charles R.; Huang, Dongfeng; Liu, Hanjun

    2012-01-01

    The present study investigated whether the neural correlates for auditory feedback control of vocal pitch can be shaped by tone language experience. Event-related potentials (P2/N1) were recorded from adult native speakers of Mandarin and Cantonese who heard their voice auditory feedback shifted in pitch by -50, -100, -200, or -500 cents when they…

  15. Emergence of band-pass filtering through adaptive spiking in the owl's cochlear nucleus

    PubMed Central

    MacLeod, Katrina M.; Lubejko, Susan T.; Steinberg, Louisa J.; Köppl, Christine; Peña, Jose L.

    2014-01-01

    In the visual, auditory, and electrosensory modalities, stimuli are defined by first- and second-order attributes. The fast time-pressure signal of a sound, a first-order attribute, is important, for instance, in sound localization and pitch perception, while its slow amplitude-modulated envelope, a second-order attribute, can be used for sound recognition. Ascending the auditory pathway from ear to midbrain, neurons increasingly show a preference for the envelope and are most sensitive to particular envelope modulation frequencies, a tuning considered important for encoding sound identity. The level at which this tuning property emerges along the pathway varies across species, and the mechanism of how this occurs is a matter of debate. In this paper, we target the transition between auditory nerve fibers and the cochlear nucleus angularis (NA). While the owl's auditory nerve fibers simultaneously encode the fast and slow attributes of a sound, one synapse further, NA neurons encode the envelope more efficiently than the auditory nerve. Using in vivo and in vitro electrophysiology and computational analysis, we show that a single-cell mechanism inducing spike threshold adaptation can explain the difference in neural filtering between the two areas. We show that spike threshold adaptation can explain the increased selectivity to modulation frequency, as input level increases in NA. These results demonstrate that a spike generation nonlinearity can modulate the tuning to second-order stimulus features, without invoking network or synaptic mechanisms. PMID:24790170

  16. Steady-state signatures of visual perceptual load, multimodal distractor filtering, and neural competition.

    PubMed

    Parks, Nathan A; Hilimire, Matthew R; Corballis, Paul M

    2011-05-01

    The perceptual load theory of attention posits that attentional selection occurs early in processing when a task is perceptually demanding but occurs late in processing otherwise. We used a frequency-tagged steady-state evoked potential paradigm to investigate the modality specificity of perceptual load-induced distractor filtering and the nature of neural-competitive interactions between task and distractor stimuli. EEG data were recorded while participants monitored a stream of stimuli occurring in rapid serial visual presentation (RSVP) for the appearance of previously assigned targets. Perceptual load was manipulated by assigning targets that were identifiable by color alone (low load) or by the conjunction of color and orientation (high load). The RSVP task was performed alone and in the presence of task-irrelevant visual and auditory distractors. The RSVP stimuli, visual distractors, and auditory distractors were "tagged" by modulating each at a unique frequency (2.5, 8.5, and 40.0 Hz, respectively), which allowed each to be analyzed separately in the frequency domain. We report three important findings regarding the neural mechanisms of perceptual load. First, we replicated previous findings of within-modality distractor filtering and demonstrated a reduction in visual distractor signals with high perceptual load. Second, auditory steady-state distractor signals were unaffected by manipulations of visual perceptual load, consistent with the idea that perceptual load-induced distractor filtering is modality specific. Third, analysis of task-related signals revealed that visual distractors competed with task stimuli for representation and that increased perceptual load appeared to resolve this competition in favor of the task stimulus.
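
    Frequency tagging lets each stimulus stream be read out as the spectral amplitude at its tag frequency. The sketch below shows that readout on a synthetic signal using the tag frequencies mentioned in the abstract (2.5, 8.5, and 40 Hz); the amplitudes and noise level are made up.

    ```python
    # Generic frequency-tagging readout: spectral amplitude at each tag frequency
    # (2.5, 8.5, and 40 Hz as in the study). The signal itself is synthetic.
    import numpy as np

    fs, dur = 500, 30.0                          # 30 s of synthetic "EEG" at 500 Hz
    t = np.arange(0, dur, 1.0 / fs)
    tags = {"RSVP stream": 2.5, "visual distractor": 8.5, "auditory distractor": 40.0}

    # Synthetic recording: one sinusoid per tagged stream plus noise (amplitudes made up).
    eeg = (1.0 * np.sin(2 * np.pi * 2.5 * t) + 0.6 * np.sin(2 * np.pi * 8.5 * t)
           + 0.3 * np.sin(2 * np.pi * 40.0 * t)
           + np.random.default_rng(0).standard_normal(len(t)))

    spec = np.abs(np.fft.rfft(eeg)) / len(t)
    freqs = np.fft.rfftfreq(len(t), 1.0 / fs)
    for name, f in tags.items():
        k = int(np.argmin(np.abs(freqs - f)))    # bin closest to the tag frequency
        print(f"{name:>20s} ({f:4.1f} Hz): amplitude {spec[k]:.3f}")
    ```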

  17. Re-examining the upper limit of temporal pitch

    PubMed Central

    Macherey, Olivier; Carlyon, Robert P.

    2015-01-01

    Five normally-hearing listeners pitch-ranked harmonic complexes of different fundamental frequencies (F0s) filtered in three different frequency regions. Harmonics were summed either in sine, alternating sine-cosine (ALT), or pulse-spreading (PSHC) phase. The envelopes of ALT and PSHC complexes repeated at rates of 2F0 and 4F0. Pitch corresponded to those rates at low F0s, but, as F0 increased, there was a range of F0s over which pitch remained constant or dropped. Gammatone-filterbank simulations showed that, as F0 increased and the number of harmonics interacting in a filter dropped, the output of that filter switched from repeating at 2F0 or 4F0 to repeating at F0. A model incorporating this phenomenon accounted well for the data, except for complexes filtered into the highest frequency region (7800-10800 Hz). To account for the data in that region it was necessary to assume either that auditory filters at very high frequencies are sharper than traditionally believed, and/or that the auditory system applies smaller weights to filters whose outputs repeat at high rates. The results also provide new evidence on the highest pitch that can be derived from purely temporal cues, and corroborate recent reports that a complex pitch can be derived from very-high-frequency resolved harmonics. PMID:25480066

  18. Psychoacoustics

    NASA Astrophysics Data System (ADS)

    Moore, Brian C. J.

    Psychoacoustics is concerned with the relationships between the physical characteristics of sounds and their perceptual attributes. This chapter describes: the absolute sensitivity of the auditory system for detecting weak sounds and how that sensitivity varies with frequency; the frequency selectivity of the auditory system (the ability to resolve or hear out the sinusoidal components in a complex sound) and its characterization in terms of an array of auditory filters; the processes that influence the masking of one sound by another; the range of sound levels that can be processed by the auditory system; the perception and modeling of loudness; level discrimination; the temporal resolution of the auditory system (the ability to detect changes over time); the perception and modeling of pitch for pure and complex tones; the perception of timbre for steady and time-varying sounds; the perception of space and sound localization; and the mechanisms underlying auditory scene analysis that allow the construction of percepts corresponding to individual sound sources when listening to complex mixtures of sounds.

  19. The influence of cochlear spectral processing on the timing and amplitude of the speech-evoked auditory brain stem response

    PubMed Central

    Nuttall, Helen E.; Moore, David R.; Barry, Johanna G.; Krumbholz, Katrin

    2015-01-01

    The speech-evoked auditory brain stem response (speech ABR) is widely considered to provide an index of the quality of neural temporal encoding in the central auditory pathway. The aim of the present study was to evaluate the extent to which the speech ABR is shaped by spectral processing in the cochlea. High-pass noise masking was used to record speech ABRs from delimited octave-wide frequency bands between 0.5 and 8 kHz in normal-hearing young adults. The latency of the frequency-delimited responses decreased from the lowest to the highest frequency band by up to 3.6 ms. The observed frequency-latency function was compatible with model predictions based on wave V of the click ABR. The frequency-delimited speech ABR amplitude was largest in the 2- to 4-kHz frequency band and decreased toward both higher and lower frequency bands despite the predominance of low-frequency energy in the speech stimulus. We argue that the frequency dependence of speech ABR latency and amplitude results from the decrease in cochlear filter width with decreasing frequency. The results suggest that the amplitude and latency of the speech ABR may reflect interindividual differences in cochlear, as well as central, processing. The high-pass noise-masking technique provides a useful tool for differentiating between peripheral and central effects on the speech ABR. It can be used for further elucidating the neural basis of the perceptual speech deficits that have been associated with individual differences in speech ABR characteristics. PMID:25787954

  20. Time-frequency model for echo-delay resolution in wideband biosonar.

    PubMed

    Neretti, Nicola; Sanderson, Mark I; Intrator, Nathan; Simmons, James A

    2003-04-01

    A time/frequency model of the bat's auditory system was developed to examine the basis for the fine (approximately 2 μs) echo-delay resolution of big brown bats (Eptesicus fuscus), and its performance at resolving closely spaced FM sonar echoes in the bat's 20-100-kHz band at different signal-to-noise ratios was computed. The model uses parallel bandpass filters spaced over this band to generate envelopes that individually can have much lower bandwidth than the bat's ultrasonic sonar sounds and still achieve fine delay resolution. Because fine delay separations are inside the integration time of the model's filters (approximately 250-300 μs), resolving them means using interference patterns along the frequency dimension (spectral peaks and notches). The low bandwidth content of the filter outputs is suitable for relay of information to higher auditory areas that have intrinsically poor temporal response properties. If implemented in fully parallel analog-digital hardware, the model is computationally extremely efficient and would improve resolution in military and industrial sonar receivers.
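
    The central idea, that echoes overlapping within a filter's integration time still leave an interference pattern along the frequency axis, can be sketched with a toy calculation (the sweep parameters and 50-μs separation are our own illustrative choices, not the model's):

```python
import numpy as np

fs = 500_000                        # 500 kHz sampling comfortably covers 20-100 kHz
dur = 0.003
t = np.arange(0, dur, 1 / fs)

# A hypothetical downward FM sweep (100 kHz -> 20 kHz) standing in for a sonar sound.
sweep = np.sin(2 * np.pi * (100e3 * t - 0.5 * (80e3 / dur) * t ** 2))

def overlapping_echo_spectrum(separation_s):
    """Magnitude spectrum (20-100 kHz) of two echoes separated by `separation_s`."""
    shift = int(round(separation_s * fs))
    echo = np.zeros_like(sweep)
    echo[shift:] = sweep[:sweep.size - shift]
    spec = np.abs(np.fft.rfft(sweep + echo))
    freqs = np.fft.rfftfreq(sweep.size, 1 / fs)
    band = (freqs >= 20e3) & (freqs <= 100e3)
    return freqs[band], spec[band]

# Two echoes 50 microseconds apart cannot be separated in time by filters with a
# ~250-300 microsecond integration window, but they interfere to produce notches
# spaced 1/(50 us) = 20 kHz apart along the frequency axis.
freqs, spec = overlapping_echo_spectrum(50e-6)
for f_khz in (30, 50, 70, 90):      # predicted notch positions (odd multiples of 10 kHz)
    idx = np.argmin(np.abs(freqs - f_khz * 1e3))
    print(f"{f_khz} kHz: magnitude relative to band maximum = {spec[idx] / spec.max():.3f}")
```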

  1. Spatial band-pass filtering aids decoding musical genres from auditory cortex 7T fMRI.

    PubMed

    Sengupta, Ayan; Pollmann, Stefan; Hanke, Michael

    2018-01-01

    Spatial filtering strategies, combined with multivariate decoding analysis of BOLD images, have been used to investigate the nature of the neural signal underlying the discriminability of brain activity patterns evoked by sensory stimulation -- primarily in the visual cortex. Reported evidence indicates that such signals are spatially broadband in nature, and are not primarily comprised of fine-grained activation patterns. However, it is unclear whether this is a general property of the BOLD signal, or whether it is specific to the details of employed analyses and stimuli. Here we performed an analysis of publicly available, high-resolution 7T fMRI data on the BOLD response to musical genres in primary auditory cortex that matches a previously conducted study on decoding visual orientation from V1. The results show that the pattern of decoding accuracies with respect to different types and levels of spatial filtering is comparable to that obtained from V1, despite considerable differences in the respective cortical circuitry.

  2. The effects of alterations in the osseous external auditory canal on perceived sound quality.

    PubMed

    van Spronsen, Erik; Brienesse, Patrick; Ebbens, Fenna A; Waterval, Jerome J; Dreschler, Wouter A

    2015-10-01

    To evaluate the perceptual effect of the altered shape of the osseous external auditory canal (OEAC) on sound quality. Prospective study. Twenty subjects with normal hearing were presented with six simulated sound conditions representing the acoustic properties of six different ear canals (three normal ears and three cavities). The six different real ear unaided responses of these ear canals were used to filter Dutch sentences, resulting in six simulated sound conditions. A seventh unfiltered reference condition was used for comparison. Sound quality was evaluated using paired comparison ratings and a visual analog scale (VAS). Significant differences in sound quality were found between the normal and cavity conditions (all P < .001) using both the seven-point paired comparison rating and the VAS. No significant differences were found between the reference and normal conditions. Sound quality deteriorates when the OEAC is altered into a cavity. This proof of concept study shows that the altered acoustic quality of the OEAC after radical cavity surgery may lead to a clearly perceived deterioration in sound quality. Nevertheless, some questions remain about the extent to which these changes are affected by habituation and by other changes in middle ear anatomy and functionality. 4 © 2015 The American Laryngological, Rhinological and Otological Society, Inc.

  3. A nonlinear filter-bank model of the guinea-pig cochlear nerve: Rate responses

    NASA Astrophysics Data System (ADS)

    Sumner, Christian J.; O'Mard, Lowel P.; Lopez-Poveda, Enrique A.; Meddis, Ray

    2003-06-01

    The aim of this study is to produce a functional model of the auditory nerve (AN) response of the guinea-pig that reproduces a wide range of important responses to auditory stimulation. The model is intended for use as an input to larger scale models of auditory processing in the brain-stem. A dual-resonance nonlinear filter architecture is used to reproduce the mechanical tuning of the cochlea. Transduction to the activity on the AN is accomplished with a recently proposed model of the inner-hair-cell. Together, these models have been shown to be able to reproduce the response of high-, medium-, and low-spontaneous rate fibers from the guinea-pig AN at high best frequencies (BFs). In this study we generate parameters that allow us to fit the AN model to data from a wide range of BFs. By varying the characteristics of the mechanical filtering as a function of the BF it was possible to reproduce the BF dependence of frequency-threshold tuning curves, AN rate-intensity functions at and away from BF, compression of the basilar membrane at BF as inferred from AN responses, and AN iso-intensity functions. The model is a convenient computational tool for the simulation of the range of nonlinear tuning and rate-responses found across the length of the guinea-pig cochlear nerve.

  4. Body movement selectively shapes the neural representation of musical rhythms.

    PubMed

    Chemin, Baptiste; Mouraux, André; Nozaradan, Sylvie

    2014-12-01

    It is increasingly recognized that motor routines dynamically shape the processing of sensory inflow (e.g., when hand movements are used to feel a texture or identify an object). In the present research, we captured the shaping of auditory perception by movement in humans by taking advantage of a specific context: music. Participants listened to a repeated rhythmical sequence before and after moving their bodies to this rhythm in a specific meter. We found that the brain responses to the rhythm (as recorded with electroencephalography) after body movement were significantly enhanced at frequencies related to the meter to which the participants had moved. These results provide evidence that body movement can selectively shape the subsequent internal representation of auditory rhythms. © The Author(s) 2014.

  5. Calculation of selective filters of a device for primary analysis of speech signals

    NASA Astrophysics Data System (ADS)

    Chudnovskii, L. S.; Ageev, V. M.

    2014-07-01

    The amplitude-frequency responses of filters for primary analysis of speech signals, which have a low quality factor and a high rolloff factor in the high-frequency range, are calculated using the linear theory of speech production and psychoacoustic measurement data. The frequency resolution of the filter system for a sinusoidal signal is 40-200 Hz. The modulation-frequency resolution of amplitude- and frequency-modulated signals is 3-6 Hz. The aforementioned features of the calculated filters are close to the amplitude-frequency responses of biological auditory systems at the level of the eighth nerve.

  6. Software-defined microwave photonic filter with high reconfigurable resolution

    PubMed Central

    Wei, Wei; Yi, Lilin; Jaouën, Yves; Hu, Weisheng

    2016-01-01

    Microwave photonic filters (MPFs) are of great interest in radio frequency systems since they provide prominent flexibility on microwave signal processing. Although filter reconfigurability and tunability have been demonstrated repeatedly, it is still difficult to control the filter shape with very high precision. Thus the MPF application is basically limited to signal selection. Here we present a polarization-insensitive single-passband arbitrary-shaped MPF with ~GHz bandwidth based on stimulated Brillouin scattering (SBS) in optical fibre. For the first time the filter shape, bandwidth and central frequency can all be precisely defined by software with ~MHz resolution. The unprecedented multi-dimensional filter flexibility offers new possibilities to process microwave signals directly in optical domain with high precision thus enhancing the MPF functionality. Nanosecond pulse shaping by implementing precisely defined filters is demonstrated to prove the filter superiority and practicability. PMID:27759062

  7. Software-defined microwave photonic filter with high reconfigurable resolution.

    PubMed

    Wei, Wei; Yi, Lilin; Jaouën, Yves; Hu, Weisheng

    2016-10-19

    Microwave photonic filters (MPFs) are of great interest in radio frequency systems since they provide prominent flexibility on microwave signal processing. Although filter reconfigurability and tunability have been demonstrated repeatedly, it is still difficult to control the filter shape with very high precision. Thus the MPF application is basically limited to signal selection. Here we present a polarization-insensitive single-passband arbitrary-shaped MPF with ~GHz bandwidth based on stimulated Brillouin scattering (SBS) in optical fibre. For the first time the filter shape, bandwidth and central frequency can all be precisely defined by software with ~MHz resolution. The unprecedented multi-dimensional filter flexibility offers new possibilities to process microwave signals directly in optical domain with high precision thus enhancing the MPF functionality. Nanosecond pulse shaping by implementing precisely defined filters is demonstrated to prove the filter superiority and practicability.

  8. Unmasking of spiral ganglion neuron firing dynamics by membrane potential and neurotrophin-3.

    PubMed

    Crozier, Robert A; Davis, Robin L

    2014-07-16

    Type I spiral ganglion neurons have a unique role relative to other sensory afferents because, as a single population, they must convey the richness, complexity, and precision of auditory information as they shape signals transmitted to the brain. To understand better the sophistication of spiral ganglion response properties, we compared somatic whole-cell current-clamp recordings from basal and apical neurons obtained during the first 2 postnatal weeks from CBA/CaJ mice. We found that during this developmental time period neuron response properties changed from uniformly excitable to differentially plastic. Low-frequency, apical and high-frequency basal neurons at postnatal day 1 (P1)-P3 were predominantly slowly accommodating (SA), firing at low thresholds with little alteration in accommodation response mode induced by changes in resting membrane potential (RMP) or added neurotrophin-3 (NT-3). In contrast, P10-P14 apical and basal neurons were predominately rapidly accommodating (RA), had higher firing thresholds, and responded to elevation of RMP and added NT-3 by transitioning to the SA category without affecting the instantaneous firing rate. Therefore, older neurons appeared to be uniformly less excitable under baseline conditions yet displayed a previously unrecognized capacity to change response modes dynamically within a remarkably stable accommodation framework. Because the soma is interposed in the signal conduction pathway, these specializations can potentially lead to shaping and filtering of the transmitted signal. These results suggest that spiral ganglion neurons possess electrophysiological mechanisms that enable them to adapt their response properties to the characteristics of incoming stimuli and thus have the capacity to encode a wide spectrum of auditory information. Copyright © 2014 the authors 0270-6474/14/349688-15$15.00/0.

  9. Processing Complex Sounds Passing through the Rostral Brainstem: The New Early Filter Model

    PubMed Central

    Marsh, John E.; Campbell, Tom A.

    2016-01-01

    The rostral brainstem receives both “bottom-up” input from the ascending auditory system and “top-down” descending corticofugal connections. Speech information passing through the inferior colliculus of elderly listeners reflects the periodicity envelope of a speech syllable. This information arguably also reflects a composite of temporal-fine-structure (TFS) information from the higher frequency vowel harmonics of that repeated syllable. The amplitude of those higher frequency harmonics, bearing even higher frequency TFS information, correlates positively with the word recognition ability of elderly listeners under reverberatory conditions. Also relevant is that working memory capacity (WMC), which is subject to age-related decline, constrains the processing of sounds at the level of the brainstem. Turning to the effects of a visually presented sensory or memory load on auditory processes, there is a load-dependent reduction of that processing, as manifest in the auditory brainstem responses (ABR) evoked by to-be-ignored clicks. Wave V decreases in amplitude with increases in the visually presented memory load. A visually presented sensory load also produces a load-dependent reduction of a slightly different sort: The sensory load of visually presented information limits the disruptive effects of background sound upon working memory performance. A new early filter model is thus advanced whereby systems within the frontal lobe (affected by sensory or memory load) cholinergically influence top-down corticofugal connections. Those corticofugal connections constrain the processing of complex sounds such as speech at the level of the brainstem. Selective attention thereby limits the distracting effects of background sound entering the higher auditory system via the inferior colliculus. Processing TFS in the brainstem relates to perception of speech under adverse conditions. Attentional selectivity is crucial when the signal heard is degraded or masked: e.g., speech in noise, speech in reverberatory environments. The assumptions of a new early filter model are consistent with these findings: A subcortical early filter, with a predictive selectivity based on acoustical (linguistic) context and foreknowledge, is under cholinergic top-down control. A prefrontal capacity limitation constrains this top-down control as is guided by the cholinergic processing of contextual information in working memory. PMID:27242396

  10. Processing Complex Sounds Passing through the Rostral Brainstem: The New Early Filter Model.

    PubMed

    Marsh, John E; Campbell, Tom A

    2016-01-01

    The rostral brainstem receives both "bottom-up" input from the ascending auditory system and "top-down" descending corticofugal connections. Speech information passing through the inferior colliculus of elderly listeners reflects the periodicity envelope of a speech syllable. This information arguably also reflects a composite of temporal-fine-structure (TFS) information from the higher frequency vowel harmonics of that repeated syllable. The amplitude of those higher frequency harmonics, bearing even higher frequency TFS information, correlates positively with the word recognition ability of elderly listeners under reverberatory conditions. Also relevant is that working memory capacity (WMC), which is subject to age-related decline, constrains the processing of sounds at the level of the brainstem. Turning to the effects of a visually presented sensory or memory load on auditory processes, there is a load-dependent reduction of that processing, as manifest in the auditory brainstem responses (ABR) evoked by to-be-ignored clicks. Wave V decreases in amplitude with increases in the visually presented memory load. A visually presented sensory load also produces a load-dependent reduction of a slightly different sort: The sensory load of visually presented information limits the disruptive effects of background sound upon working memory performance. A new early filter model is thus advanced whereby systems within the frontal lobe (affected by sensory or memory load) cholinergically influence top-down corticofugal connections. Those corticofugal connections constrain the processing of complex sounds such as speech at the level of the brainstem. Selective attention thereby limits the distracting effects of background sound entering the higher auditory system via the inferior colliculus. Processing TFS in the brainstem relates to perception of speech under adverse conditions. Attentional selectivity is crucial when the signal heard is degraded or masked: e.g., speech in noise, speech in reverberatory environments. The assumptions of a new early filter model are consistent with these findings: A subcortical early filter, with a predictive selectivity based on acoustical (linguistic) context and foreknowledge, is under cholinergic top-down control. A prefrontal capacity limitation constrains this top-down control as is guided by the cholinergic processing of contextual information in working memory.

  11. Novel programmable microwave photonic filter with arbitrary filtering shape and linear phase.

    PubMed

    Zhu, Xiaoqi; Chen, Feiya; Peng, Huanfa; Chen, Zhangyuan

    2017-04-17

    We propose and demonstrate a novel optical frequency comb (OFC) based microwave photonic filter which is able to realize an arbitrary filtering shape with a linear phase response. The shape of the filter response is software programmable using the finite impulse response (FIR) filter design method. By shaping the OFC spectrum with a programmable waveshaper, we can realize the designed amplitudes of the FIR taps. Positive and negative signs of the FIR taps are achieved by balanced photo-detection. Double sideband (DSB) modulation and a symmetric distribution of filter taps are used to maintain the linear-phase condition. In the experiment, we realize a fully programmable filter in the range from DC to 13.88 GHz. Four basic types of filters (lowpass, highpass, bandpass and bandstop) with different bandwidths, cut-off frequencies and central frequencies are generated. A triple-passband filter is also realized in our experiment. To the best of our knowledge, this is the first demonstration of a programmable multiple-passband MPF with a linear phase response. The experiment shows good agreement with the theoretical results.
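
    The FIR design step referred to in this record can be sketched with a standard frequency-sampling design: a symmetric (hence linear-phase) tap vector is computed for an arbitrary target magnitude response. The target shape, tap count, and mapping of taps onto comb lines below are illustrative assumptions, and the optical implementation (waveshaper, DSB modulation, balanced detection) is not modelled:

```python
import numpy as np
from scipy.signal import firwin2, freqz

# Hypothetical target response within a DC-13.88 GHz span: a flat passband
# from 4 to 6 GHz with 0.5 GHz transition bands (values are illustrative only).
f_span = 13.88e9
freq = [0.0, 3.5e9, 4.0e9, 6.0e9, 6.5e9, f_span]
gain = [0.0, 0.0,   1.0,   1.0,   0.0,   0.0]

num_taps = 101                                   # hypothetical number of comb lines
taps = firwin2(num_taps, [f / f_span for f in freq], gain)

# Symmetric taps guarantee a linear phase response; negative taps would be
# realized by the balanced photo-detection mentioned in the abstract.
assert np.allclose(taps, taps[::-1])

w, h = freqz(taps, worN=8192)
f_ghz = w / np.pi * f_span / 1e9
band = (f_ghz >= 4.2) & (f_ghz <= 5.8)
print(f"passband magnitude spans {np.abs(h[band]).min():.3f}-{np.abs(h[band]).max():.3f}")
```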

  12. Analysis of the relationship between cognitive skills and unilateral sensory hearing loss.

    PubMed

    Calderón-Leyva, I; Díaz-Leines, S; Arch-Tirado, E; Lino-González, A L

    2018-06-01

    To analyse cognitive skills in patients with severe unilateral hearing loss versus those in subjects with normal hearing. 40 adults participated: 20 patients (10 women and 10 men) with severe unilateral hearing loss and 20 healthy subjects matched to the study group. Cognitive abilities were measured with the Spanish version of the Woodcock Johnson Battery-Revised; central auditory processing was assessed with monaural psychoacoustic tests. Box plots were drawn and t tests were performed for samples with a significance of P≤.05. A comparison of performances on the filtered word testing and time-compressed disyllabic word tests between patients and controls revealed a statistically significant difference (P≤.05) with greater variability among responses by hearing impaired subjects. This same group also showed a better cognitive performance on the numbers reversed, visual auditory learning, analysis synthesis, concept formation, and incomplete words tests. Patients with hearing loss performed more poorly than controls on the filtered word and time-compressed disyllabic word tests, but more competently on memory, reasoning, and auditory processing tasks. Complementary tests, such as those assessing central auditory processes and cognitive ability tests, are important and helpful for designing habilitation/rehabilitation and therapeutic strategies intended to optimise and stimulate cognitive skills in subjects with unilateral hearing impairment. Copyright © 2016 Sociedad Española de Neurología. Publicado por Elsevier España, S.L.U. All rights reserved.

  13. TU-CD-207-10: Dedicated Cone-Beam Breast CT: Design of a 3-D Beam-Shaping Filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vedantham, S; Shi, L; Karellas, A

    2015-06-15

    Purpose: To design a 3-D beam-shaping filter for cone-beam breast CT for equalizing x-ray photon fluence incident on the detector along both fan and cone angle directions. Methods: The 3-D beam-shaping filter was designed as the sum of two filters: a bow-tie filter assuming cylindrical breast and a 3D difference filter equivalent to the difference in projected thickness between the cylinder and the real breast. Both filters were designed with breast-equivalent material and converted to Al for the targeted x-ray spectrum. The bow-tie was designed for the largest diameter cylindrical breast by determining the fan-angle dependent path-length and the filter thickness needed to equalize the fluence. A total of 23,760 projections (180 projections of 132 binary breast CT volumes) were averaged, scaled for the largest breast, and subtracted from the projection of the largest diameter cylindrical breast to provide the 3D difference filter. The 3-D beam shaping filter was obtained by summing the two filters. Numerical simulations with semi-ellipsoidal breasts of 10–18 cm diameter (chest-wall to nipple length=0.75 x diameter) were conducted to evaluate beam equalization. Results: The proposed 3-D beam-shaping filter showed a 140%-300% improvement in equalizing the photon fluence along the chest-wall to nipple (cone-angle) direction compared to a bow-tie filter. The improvement over bow-tie filter was larger for breasts with longer chest-wall to nipple length. Along the radial (fan-angle) direction, the performance of the 3-D beam shaping filter was marginally better than the bow-tie filter, with 4%-10% improvement in equalizing the photon fluence. For a ray traversing the chest-wall diameter of the breast, the filter transmission ratio was >0.95. Conclusion: The 3-D beam shaping filter provided substantial advantage over bow-tie filter in equalizing the photon fluence along the cone-angle direction. In conjunction with a 2-axis positioner, the filter can accommodate breasts of varying dimensions and chest-wall inclusion. Supported in part by NIH R01 CA128906 and R21 CA134128. The contents are solely the responsibility of the authors and do not reflect the official views of the NIH or NCI.
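
    The bow-tie component of such a filter follows from simple geometry: attenuating material is added where the ray path through a cylindrical object is short, so that the combined path (object plus filter, in object-equivalent material) is constant across fan angle. The sketch below uses made-up dimensions and omits the conversion to Al and the 3D difference filter:

```python
import numpy as np

R = 9.0          # cm, radius of a hypothetical largest cylindrical breast
SID = 65.0       # cm, source-to-isocentre distance (illustrative)

def breast_path(fan_angle_rad):
    """Chord length through a centred cylinder for a ray at the given fan angle."""
    d = SID * np.sin(fan_angle_rad)          # ray's closest distance to the cylinder axis
    return 2.0 * np.sqrt(np.clip(R ** 2 - d ** 2, 0.0, None))

def bowtie_thickness(fan_angle_rad):
    """Breast-equivalent filter thickness that equalizes the total path length."""
    return breast_path(0.0) - breast_path(fan_angle_rad)

for deg in (0, 2, 4, 6, 7.9):
    a = np.deg2rad(deg)
    print(f"fan angle {deg:4.1f} deg: breast path {breast_path(a):5.2f} cm, "
          f"filter {bowtie_thickness(a):5.2f} cm breast-equivalent")
```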

  14. Temporal properties of responses to sound in the ventral nucleus of the lateral lemniscus.

    PubMed

    Recio-Spinoso, Alberto; Joris, Philip X

    2014-02-01

    Besides the rapid fluctuations in pressure that constitute the "fine structure" of a sound stimulus, slower fluctuations in the sound's envelope represent an important temporal feature. At various stages in the auditory system, neurons exhibit tuning to envelope frequency and have been described as modulation filters. We examine such tuning in the ventral nucleus of the lateral lemniscus (VNLL) of the pentobarbital-anesthetized cat. The VNLL is a large but poorly accessible auditory structure that provides a massive inhibitory input to the inferior colliculus. We test whether envelope filtering effectively applies to the envelope spectrum when multiple envelope components are simultaneously present. We find two broad classes of response with often complementary properties. The firing rate of onset neurons is tuned to a band of modulation frequencies, over which they also synchronize strongly to the envelope waveform. Although most sustained neurons show little firing rate dependence on modulation frequency, some of them are weakly tuned. The latter neurons are usually band-pass or low-pass tuned in synchronization, and a reverse-correlation approach demonstrates that their modulation tuning is preserved to nonperiodic, noisy envelope modulations of a tonal carrier. Modulation tuning to this type of stimulus is weaker for onset neurons. In response to broadband noise, sustained and onset neurons tend to filter out envelope components over a frequency range consistent with their modulation tuning to periodically modulated tones. The results support a role for VNLL in providing temporal reference signals to the auditory midbrain.

  15. Improvement of uniformity of the negative ion beams by tent-shaped magnetic field in the JT-60 negative ion source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoshida, Masafumi, E-mail: yoshida.masafumi@jaea.go.jp; Hanada, Masaya; Kojima, Atsushi

    2014-02-15

    Non-uniformity of the negative ion beams in the JT-60 negative ion source with the world's largest ion extraction area was improved by modifying the magnetic filter in the source from the plasma grid (PG) filter to a tent-shaped filter. The magnetic design via electron trajectory calculation showed that the tent-shaped filter was expected to suppress the localization of the primary electrons emitted from the filaments and to create a uniform plasma of positive ions and atoms, the parent particles for the negative ions. By modifying the magnetic filter to the tent-shaped filter, the uniformity, defined as the deviation from the averaged beam intensity, was reduced from 14% with the PG filter to ∼10%, without a reduction of negative ion production.

  16. Linking prenatal experience to the emerging musical mind.

    PubMed

    Ullal-Gupta, Sangeeta; Vanden Bosch der Nederlanden, Christina M; Tichko, Parker; Lahav, Amir; Hannon, Erin E

    2013-09-03

    The musical brain is built over time through experience with a multitude of sounds in the auditory environment. However, learning the melodies, timbres, and rhythms unique to the music and language of one's culture begins already within the mother's womb during the third trimester of human development. We review evidence that the intrauterine auditory environment plays a key role in shaping later auditory development and musical preferences. We describe evidence that externally and internally generated sounds influence the developing fetus, and argue that such prenatal auditory experience may set the trajectory for the development of the musical mind.

  17. Dislocation of the incus into the external auditory canal after mountain-biking accident.

    PubMed

    Saito, T; Kono, Y; Fukuoka, Y; Yamamoto, H; Saito, H

    2001-01-01

    We report a rare case of incus dislocation to the external auditory canal after a mountain-biking accident. Otoscopy showed ossicular protrusion in the upper part of the left external auditory canal. CT indicated the disappearance of the incus, and an incus-like bone was found in the left external auditory canal. There was another bony and board-like structure in the attic. During the surgery, a square-shaped bony plate (1 x 1 cm) was found in the attic. It was determined that the bony plate had fallen from the tegmen of the attic. The fracture line in the posterosuperior auditory canal extending to the fossa incudis was identified. According to these findings, it was considered that the incus was pushed into the external auditory canal by the impact of skull injury through the fractured posterosuperior auditory canal, which opened widely enough for incus dislocation. Copyright 2001 S. Karger AG, Basel

  18. Techniques and applications for binaural sound manipulation in human-machine interfaces

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Wenzel, Elizabeth M.

    1990-01-01

    The implementation of binaural sound to speech and auditory sound cues (auditory icons) is addressed from both an applications and technical standpoint. Techniques overviewed include processing by means of filtering with head-related transfer functions. Application to advanced cockpit human interface systems is discussed, although the techniques are extendable to any human-machine interface. Research issues pertaining to three-dimensional sound displays under investigation at the Aerospace Human Factors Division at NASA Ames Research Center are described.

  19. APEX 3: a multi-purpose test platform for auditory psychophysical experiments.

    PubMed

    Francart, Tom; van Wieringen, Astrid; Wouters, Jan

    2008-07-30

    APEX 3 is a software test platform for auditory behavioral experiments. It provides a generic means of setting up experiments without any programming. The supported output devices include sound cards and cochlear implants from Cochlear Corporation and Advanced Bionics Corporation. Many psychophysical procedures are provided and there is an interface to add custom procedures. Plug-in interfaces are provided for data filters and external controllers. APEX 3 is supported under Linux and Windows and is available free of charge.

  20. Techniques and applications for binaural sound manipulation in human-machine interfaces

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Wenzel, Elizabeth M.

    1992-01-01

    The implementation of binaural sound to speech and auditory sound cues (auditory icons) is addressed from both an applications and technical standpoint. Techniques overviewed include processing by means of filtering with head-related transfer functions. Application to advanced cockpit human interface systems is discussed, although the techniques are extendable to any human-machine interface. Research issues pertaining to three-dimensional sound displays under investigation at the Aerospace Human Factors Division at NASA Ames Research Center are described.

  1. MO-FG-CAMPUS-IeP2-01: Characterization of Beam Shaping Filters and Photon Spectra From HVL Profiles in CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bujila, R; Royal Institute of Technology, Stockholm; Kull, L

    Purpose: Advanced dosimetry in CT (e.g. the Monte Carlo method) requires an accurate characterization of the shaped filter and radiation quality used during a scan. The purpose of this work was to develop a method whereby half-value layer (HVL) profiles along shaped filters can be measured. From the HVL profiles, the beam-shaping properties and effective photon spectrum for a particular scan can be inferred. Methods: A measurement rig was developed to allow determinations of the HVL under a scatter-free, narrow-beam geometry and a constant focal spot to ionization chamber distance for different fan angles. For each fan angle, the HVL is obtained by fitting the transmission of radiation through different thicknesses of an Al absorber (type 1100) using an appropriate model. The effective Al thickness of shaped filters and effective photon spectra are estimated using a model of photon emission from a tungsten anode. This method is used to obtain the effective photon spectra and effective Al thickness of shaped filters for a CT scanner recently introduced to the market. Results: This study resulted in a set of effective photon spectra (central ray) for each kVp along with effective Al thicknesses of the different shaped filters. The effective photon spectra and effective Al thicknesses of shaped filters were used to obtain numerically approximated HVL profiles, which were compared to measured HVL profiles (mean absolute percentage error = 0.02). The central-axis HVL values found in the vendor's technical documentation were compared to the approximated HVL values (mean absolute percentage error = 0.03). Conclusion: This work has resulted in a unique method of measuring HVL profiles along shaped filters in CT. Further, the effective photon spectra and the effective Al thicknesses of shaped filters that were obtained can be incorporated into Monte Carlo simulations.
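
    For a single fan angle, the HVL extraction amounts to fitting measured transmission versus added Al thickness and converting the fitted attenuation coefficient to a half-value layer. The sketch below uses made-up readings and a mono-exponential model for illustration; a polychromatic CT beam generally calls for a more flexible model, as the record implies:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical transmission measurements: normalized chamber reading vs. added
# Al thickness (mm) for one fan angle.
thickness_mm = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 9.0])
reading      = np.array([1.00, 0.83, 0.70, 0.51, 0.38, 0.25])

def mono_exponential(x, mu):
    # T(x) = exp(-mu * x); only a first approximation for a polychromatic
    # spectrum, since beam hardening makes the effective mu decrease with depth.
    return np.exp(-mu * x)

(mu_fit,), _ = curve_fit(mono_exponential, thickness_mm, reading, p0=[0.2])
hvl_mm = np.log(2.0) / mu_fit
print(f"effective mu = {mu_fit:.3f} /mm, HVL = {hvl_mm:.2f} mm Al")
```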

  2. Loudness of steady sounds - A new theory

    NASA Technical Reports Server (NTRS)

    Howes, W. L.

    1979-01-01

    A new mathematical theory for calculating the loudness of steady sounds from power summation and frequency interaction, based on psychoacoustic and physiological information, assumes that loudness is a subjective measure of the electrical energy transmitted along the auditory nerve to the central nervous system. The auditory system consists of a mechanical part, modeled by a bandpass filter with a transfer function dependent on the sound pressure, and an electrical part, in which the signal is transformed into a half-wave reproduction represented by the electrical power in impulsive discharges transmitted along the neurons comprising the auditory nerve. In the electrical part the neurons are distributed among artificial parallel channels with frequency bandwidths equal to 'critical bandwidths for loudness', within which loudness is constant for constant sound pressure. The total energy transmitted to the central nervous system is the sum of the energy transmitted in all channels, and the loudness is proportional to the square root of the total filtered sound energy distributed over all channels. The theory explains many psychoacoustic phenomena, such as audible beats resulting from closely spaced tones, the influence on loudness of sound stimuli that affect the same neurons, and individually subliminal sounds becoming audible when they lie within the same critical band.
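
    The core computation of the theory, as summarized above, is a square-root-of-summed-energy rule over critical-band channels. A minimal numerical illustration with made-up band energies and an arbitrary proportionality constant:

```python
import numpy as np

# Hypothetical filtered sound energies in parallel critical-band channels
# (arbitrary units) and an arbitrary proportionality constant k.
band_energy = np.array([0.0, 0.4, 1.2, 2.5, 1.1, 0.3, 0.0])
k = 1.0

# Loudness is taken to be proportional to the square root of the total
# energy summed over all channels: L = k * sqrt(sum_i E_i).
loudness = k * np.sqrt(band_energy.sum())
print(f"total energy = {band_energy.sum():.2f}, loudness = {loudness:.2f} (arbitrary units)")

# Doubling the energy in every channel raises loudness by sqrt(2), not 2 --
# one consequence of the power-summation rule.
print(f"after doubling all band energies: {k * np.sqrt(2 * band_energy.sum()):.2f}")
```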

  3. Taking off the training wheels: Measuring auditory P3 during outdoor cycling using an active wet EEG system.

    PubMed

    Scanlon, Joanna E M; Townsend, Kimberley A; Cormier, Danielle L; Kuziek, Jonathan W P; Mathewson, Kyle E

    2017-12-14

    Mobile EEG allows the investigation of brain activity in increasingly complex environments. In this study, EEG equipment was adapted for use and transportation in a backpack while cycling. Participants performed an auditory oddball task while cycling outside and while sitting in an isolated chamber inside the lab. Cycling increased EEG noise and marginally diminished alpha amplitude. However, this increased noise did not influence the ability to measure reliable event-related potentials (ERPs). The P3 was similar in topography and morphology when outside on the bike, with a lower amplitude in the outside cycling condition. There was only a minor decrease in the statistical power to measure reliable ERP effects. Unexpectedly, when biking outside, a significantly decreased P2 and an increased N1 amplitude were observed in response to both standards and targets compared with sitting in the lab. This may be due to attentional processes filtering out environmental sounds that overlap in frequency with the tones used. This study established methods for mobile recording of ERP signals. Future directions include investigating auditory P2 filtering inside the laboratory. Copyright © 2017. Published by Elsevier B.V.

  4. Classification of single-trial auditory events using dry-wireless EEG during real and motion simulated flight.

    PubMed

    Callan, Daniel E; Durantin, Gautier; Terzibas, Cengiz

    2015-01-01

    Application of neuro-augmentation technology based on dry-wireless EEG may be considerably beneficial for aviation and space operations because of the inherent dangers involved. In this study we evaluate classification performance of perceptual events using a dry-wireless EEG system during motion-platform-based flight simulation and actual flight in an open-cockpit biplane, to determine if the system can be used in the presence of considerable environmental and physiological artifacts. A passive task involving 200 random auditory presentations of a chirp sound was used for evaluation. The advantage of this auditory task is that it does not interfere with the perceptual motor processes involved with piloting the plane. Classification was based on identifying the presentation of a chirp sound vs. silent periods. The use of independent component analysis (ICA) and Kalman filtering to enhance classification performance, by extracting brain activity related to the auditory event from other non-task-related brain activity and artifacts, was also assessed. The results of permutation testing revealed that single-trial classification of the presence or absence of an auditory event was significantly above chance for all conditions on a novel test set. The best performance was achieved with both ICA and Kalman filtering relative to no processing: Platform Off (83.4% vs. 78.3%), Platform On (73.1% vs. 71.6%), Biplane Engine Off (81.1% vs. 77.4%), and Biplane Engine On (79.2% vs. 66.1%). This experiment demonstrates that dry-wireless EEG can be used in environments with considerable vibration, wind, acoustic noise, and physiological artifacts and achieve the good single-trial classification performance that is necessary for future successful application of neuro-augmentation technology based on brain-machine interfaces.
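
    One ingredient of the reported pipeline, ICA-based separation of an auditory evoked component from large artifact-like activity, can be sketched with scikit-learn's FastICA on simulated multichannel data; the Kalman-filtering stage and the actual classifier are omitted, and all signal parameters below are invented:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
fs, n_ch, dur = 250, 8, 20.0
t = np.arange(0, dur, 1 / fs)

# Simulated sources: an "auditory" response locked to chirp onsets, a large
# slow artifact (e.g., vibration), and white sensor noise.
onsets = (np.arange(0, dur, 0.8) * fs).astype(int)
auditory = np.zeros_like(t)
for o in onsets:                                   # a crude evoked waveform
    idx = np.arange(o, min(o + 50, t.size))
    auditory[idx] += np.hanning(idx.size) * np.sin(2 * np.pi * 10 * t[:idx.size])
artifact = 5.0 * np.sin(2 * np.pi * 0.7 * t) + 2.0 * rng.standard_normal(t.size)

sources = np.vstack([auditory, artifact])
mixing = rng.standard_normal((n_ch, 2))            # unknown scalp mixing
eeg = mixing @ sources + 0.3 * rng.standard_normal((n_ch, t.size))

# Unmix with FastICA; pick the component most correlated with the known
# onset-locked source (only possible here because the data are simulated).
ica = FastICA(n_components=2, random_state=0)
components = ica.fit_transform(eeg.T).T
corr = [abs(np.corrcoef(c, auditory)[0, 1]) for c in components]
print("correlation of each component with the evoked source:", np.round(corr, 2))
```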

  5. Sequencing the Cortical Processing of Pitch-Evoking Stimuli using EEG Analysis and Source Estimation

    PubMed Central

    Butler, Blake E.; Trainor, Laurel J.

    2012-01-01

    Cues to pitch include spectral cues that arise from tonotopic organization and temporal cues that arise from firing patterns of auditory neurons. fMRI studies suggest a common pitch center is located just beyond primary auditory cortex along the lateral aspect of Heschl’s gyrus, but little work has examined the stages of processing for the integration of pitch cues. Using electroencephalography, we recorded cortical responses to high-pass filtered iterated rippled noise (IRN) and high-pass filtered complex harmonic stimuli, which differ in temporal and spectral content. The two stimulus types were matched for pitch saliency, and a mismatch negativity (MMN) response was elicited by infrequent pitch changes. The P1 and N1 components of event-related potentials (ERPs) are thought to arise from primary and secondary auditory areas, respectively, and to result from simple feature extraction. MMN is generated in secondary auditory cortex and is thought to act on feature-integrated auditory objects. We found that peak latencies of both P1 and N1 occur later in response to IRN stimuli than to complex harmonic stimuli, but found no latency differences between stimulus types for MMN. The location of each ERP component was estimated based on iterative fitting of regional sources in the auditory cortices. The sources of both the P1 and N1 components elicited by IRN stimuli were located dorsal to those elicited by complex harmonic stimuli, whereas no differences were observed for MMN sources across stimuli. Furthermore, the MMN component was located between the P1 and N1 components, consistent with fMRI studies indicating a common pitch region in lateral Heschl’s gyrus. These results suggest that while the spectral and temporal processing of different pitch-evoking stimuli involves different cortical areas during early processing, by the time the object-related MMN response is formed, these cues have been integrated into a common representation of pitch. PMID:22740836

  6. Pulse Shaped 8-PSK Bandwidth Efficiency and Spectral Spike Elimination

    NASA Technical Reports Server (NTRS)

    Tao, Jian-Ping

    1998-01-01

    The most bandwidth-efficient communication methods are imperative to cope with congested frequency bands. Pulse shaping methods have excellent effects on narrowing bandwidth and increasing band utilization. The position of the baseband filters for the pulse shaping is crucial. Post-modulation pulse shaping (a low-pass filter located after the modulator) can change signals from constant envelope to non-constant envelope, and non-constant envelope signals passing through a non-linear device (an SSPA or TWT) can further spread the power spectra. Pre-modulation pulse shaping (a filter located before the modulator) retains a constant envelope. These two pulse shaping methods have different effects on narrowing the bandwidth and producing bit errors. This report studied the effect of various pre-modulation pulse shaping filters with respect to bandwidth, spectral spikes and bit error rate. A pre-modulation pulse shaped 8-ary Phase Shift Keying (8PSK) modulation was used throughout the simulations. In addition to traditional pulse shaping filters, such as Bessel, Butterworth and Square Root Raised Cosine (SRRC), other kinds of filters or pulse waveforms were also studied in the pre-modulation pulse shaping method. Simulations were conducted using the Signal Processing Worksystem (SPW) software package on HP workstations, which simulated the power spectral density of pulse shaped 8-PSK signals, end-to-end system performance and bit error rates (BERs) as a function of Eb/No using pulse shaping in an AWGN channel. These results are compared with the post-modulation pulse shaped 8-PSK results. The simulations indicate traditional pulse shaping filters used in pre-modulation pulse shaping may produce narrower bandwidth, but with worse BER than those in post-modulation pulse shaping. Theory and simulations show pre-modulation pulse shaping could also produce discrete line power spectra (spikes) at regular frequency intervals. These spikes may cause interference with adjacent channels and reduce power efficiency. Some particular pulses (filters), such as trapezoidal pulses and pulses with different transitions (such as a weighted raised-cosine transition), were found to reduce bandwidth and not generate spectral spikes. Although a solid state power amplifier (SSPA) was simulated in the non-linear (saturation) region, output power spectra did not spread due to the constant-envelope 8-PSK signals.
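
    The envelope argument can be illustrated directly: low-pass filtering an 8-PSK waveform after the modulator (post-modulation shaping) turns a constant-envelope signal into one with amplitude fluctuations, which is what invites spectral regrowth in a saturated amplifier. The filter and rates below are illustrative choices, not those used in the report:

```python
import numpy as np
from scipy.signal import firwin, lfilter

rng = np.random.default_rng(2)
sps = 8                                     # samples per symbol
n_sym = 2000
symbols = np.exp(1j * 2 * np.pi * rng.integers(0, 8, n_sym) / 8)   # 8-PSK points

# Rectangular-pulse 8-PSK baseband: perfectly constant envelope.
baseband = np.repeat(symbols, sps)
print("unfiltered envelope spread :", np.ptp(np.abs(baseband)).round(3))

# Post-modulation low-pass shaping (applied to the modulated waveform):
# the bandwidth shrinks, but the envelope is no longer constant.
lp = firwin(numtaps=65, cutoff=2.0 / sps)   # cutoff near the symbol rate (rel. Nyquist)
shaped = lfilter(lp, 1.0, baseband)
env = np.abs(shaped[200:])                  # skip the filter transient
print("post-filter envelope spread:", np.ptp(env).round(3),
      f"(peak-to-average {env.max() / env.mean():.2f})")
```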

  7. Computational modeling of the human auditory periphery: Auditory-nerve responses, evoked potentials and hearing loss.

    PubMed

    Verhulst, Sarah; Altoè, Alessandro; Vasilkov, Viacheslav

    2018-03-01

    Models of the human auditory periphery range from very basic functional descriptions of auditory filtering to detailed computational models of cochlear mechanics, inner-hair cell (IHC), auditory-nerve (AN) and brainstem signal processing. It is challenging to include detailed physiological descriptions of cellular components into human auditory models because single-cell data stems from invasive animal recordings while human reference data only exists in the form of population responses (e.g., otoacoustic emissions, auditory evoked potentials). To embed physiological models within a comprehensive human auditory periphery framework, it is important to capitalize on the success of basic functional models of hearing and render their descriptions more biophysical where possible. At the same time, comprehensive models should capture a variety of key auditory features, rather than fitting their parameters to a single reference dataset. In this study, we review and improve existing models of the IHC-AN complex by updating their equations and expressing their fitting parameters into biophysical quantities. The quality of the model framework for human auditory processing is evaluated using recorded auditory brainstem response (ABR) and envelope-following response (EFR) reference data from normal and hearing-impaired listeners. We present a model with 12 fitting parameters from the cochlea to the brainstem that can be rendered hearing impaired to simulate how cochlear gain loss and synaptopathy affect human population responses. The model description forms a compromise between capturing well-described single-unit IHC and AN properties and human population response features. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.

  8. Auditory perception bias in speech imitation

    PubMed Central

    Postma-Nilsenová, Marie; Postma, Eric

    2013-01-01

    In an experimental study, we explored the role of auditory perception bias in vocal pitch imitation. Psychoacoustic tasks involving a missing fundamental indicate that some listeners are attuned to the relationship between all the higher harmonics present in the signal, which supports their perception of the fundamental frequency (the primary acoustic correlate of pitch). Other listeners focus on the lowest harmonic constituents of the complex sound signal which may hamper the perception of the fundamental. These two listener types are referred to as fundamental and spectral listeners, respectively. We hypothesized that the individual differences in speakers' capacity to imitate F0 found in earlier studies, may at least partly be due to the capacity to extract information about F0 from the speech signal. Participants' auditory perception bias was determined with a standard missing fundamental perceptual test. Subsequently, speech data were collected in a shadowing task with two conditions, one with a full speech signal and one with high-pass filtered speech above 300 Hz. The results showed that perception bias toward fundamental frequency was related to the degree of F0 imitation. The effect was stronger in the condition with high-pass filtered speech. The experimental outcomes suggest advantages for fundamental listeners in communicative situations where F0 imitation is used as a behavioral cue. Future research needs to determine to what extent auditory perception bias may be related to other individual properties known to improve imitation, such as phonetic talent. PMID:24204361

  9. Influence of size, shape, and flexibility on bacterial passage through micropore membrane filters.

    PubMed

    Wang, Yingying; Hammes, Frederik; Düggelin, Marcel; Egli, Thomas

    2008-09-01

    Sterilization of fluids by means of microfiltration is commonly applied in research laboratories as well as in pharmaceutical and industrial processes. Sterile micropore filters are subject to microbiological validation, where Brevundimonas diminuta is used as a standard test organism. However, several recent reports on the ubiquitous presence of filterable bacteria in aquatic environments have cast doubt on the accuracy and validity of the standard filter-testing method. Six different bacterial species of various sizes and shapes (Hylemonella gracilis, Escherichia coli, Sphingopyxis alaskensis, Vibrio cholerae, Legionella pneumophila, and B. diminuta) were tested for their filterability through sterile micropore filters. In all cases, the slender spirillum-shaped Hylemonella gracilis cells showed a superior ability to pass through sterile membrane filters. Our results provide solid evidence that the overall shape (including flexibility), instead of biovolume, is the determining factor for the filterability of bacteria, whereas cultivation conditions also play a crucial role. Furthermore, the filtration volume has a more important effect on the passage percentage in comparison with other technical variables tested (including flux and filter material). Based on our findings, we recommend a re-evaluation of the grading system for sterile filters, and suggest that the species Hylemonella should be considered as an alternative filter-testing organism for the quality assessment of micropore filters.

  10. Sound localization by echolocating bats

    NASA Astrophysics Data System (ADS)

    Aytekin, Murat

    Echolocating bats emit ultrasonic vocalizations and listen to echoes reflected back from objects in the path of the sound beam to build a spatial representation of their surroundings. Important to understanding the representation of space through echolocation are detailed studies of the cues used for localization, the sonar emission patterns and how this information is assembled. This thesis includes three studies, one on the directional properties of the sonar receiver, one on the directional properties of the sonar transmitter, and a model that demonstrates the role of action in building a representation of auditory space. The general importance of this work to a broader understanding of spatial localization is discussed. Investigations of the directional properties of the sonar receiver reveal that interaural level difference and monaural spectral notch cues are both dependent on sound source azimuth and elevation. This redundancy allows flexibility that an echolocating bat may need when coping with complex computational demands for sound localization. Using a novel method to measure bat sonar emission patterns from freely behaving bats, I show that the sonar beam shape varies between vocalizations. Consequently, the auditory system of a bat may need to adapt its computations to accurately localize objects using changing acoustic inputs. Extra-auditory signals that carry information about pinna position and beam shape are required for auditory localization of sound sources. The auditory system must learn associations between extra-auditory signals and acoustic spatial cues. Furthermore, the auditory system must adapt to changes in acoustic input that occur with changes in pinna position and vocalization parameters. These demands on the nervous system suggest that sound localization is achieved through the interaction of behavioral control and acoustic inputs. A sensorimotor model demonstrates how an organism can learn space through auditory-motor contingencies. The model also reveals how different aspects of sound localization, such as experience-dependent acquisition, adaptation, and extra-auditory influences, can be brought together under a comprehensive framework. This thesis presents a foundation for understanding the representation of auditory space that builds upon acoustic cues, motor control, and learning dynamic associations between action and auditory inputs.

  11. Classification of underwater target echoes based on auditory perception characteristics

    NASA Astrophysics Data System (ADS)

    Li, Xiukun; Meng, Xiangxia; Liu, Hang; Liu, Mingye

    2014-06-01

    In underwater target detection, bottom reverberation shares some properties with the target echo, which has a great impact on detection performance. It is therefore essential to study the differences between the target echo and reverberation. In this paper, motivated by the unique advantage of human listening ability in distinguishing objects, the Gammatone filter is taken as the auditory model, and time-frequency perception features and auditory spectral features are extracted to separate active sonar target echoes from bottom reverberation. For the experimental data, the features cluster tightly within each class and differ substantially between classes, which shows that this method can effectively distinguish between the target echo and reverberation.

  12. Shaped, lead-loaded acrylic filters for patient exposure reduction and image-quality improvement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gray, J.E.; Stears, J.G.; Frank, E.D.

    1983-03-01

    Shaped filters that are constructed of lead-loaded acrylic material for use in patient radiography are discussed. Use of the filters will result in improved overall image quality with significant exposure reduction to the patient (approximately a 2X reduction in breast exposure and a 3X reduction in thyroid gland exposure). Detailed drawings of the shaped filters for scoliosis radiography, cervical spine radiography, and for long film changers in special procedures are provided. The use of the scoliosis filters is detailed and includes phantom and patient radiographs and dose reduction information.

  13. Multichannel signal enhancement

    DOEpatents

    Lewis, Paul S.

    1990-01-01

    A mixed adaptive filter is formulated for the signal processing problem where desired a priori signal information is not available. The formulation generates a least squares problem which enables the filter output to be calculated directly from an input data matrix. In one embodiment, a folded processor array enables bidirectional data flow to solve the recursive problem by back substitution without global communications. In another embodiment, a balanced processor array solves the recursive problem by forward elimination through the array. In a particular application to magnetoencephalography, the mixed adaptive filter enables an evoked response to an auditory stimulus to be identified from only a single trial.
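
    The idea of computing the filter output directly from an input data matrix via least squares can be sketched generically, as below; this is a plain multichannel least-squares interference canceller for illustration, not the patented mixed adaptive filter or its processor-array implementations:

```python
import numpy as np

rng = np.random.default_rng(3)
n_samples, n_ref = 4000, 4

# Simulated measurement: an evoked-response-like waveform buried in
# interference that is also picked up (linearly mixed) by reference channels.
t = np.linspace(0, 1, n_samples)
evoked = np.exp(-((t - 0.3) / 0.02) ** 2) * np.sin(2 * np.pi * 40 * t)
interference_src = rng.standard_normal((n_ref, n_samples))
primary = evoked + np.array([0.8, -0.5, 0.3, 0.2]) @ interference_src
references = interference_src + 0.05 * rng.standard_normal((n_ref, n_samples))

# Least-squares weights: minimize || primary - references^T w ||^2.
# The cleaned output follows directly from the data matrix, with no prior
# knowledge of the desired signal.
A = references.T                             # data matrix, one column per reference
w, *_ = np.linalg.lstsq(A, primary, rcond=None)
cleaned = primary - A @ w

print("residual interference power (relative):",
      round(float(np.var(cleaned - evoked) / np.var(primary - evoked)), 4))
```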

  14. Challenges of recording human fetal auditory-evoked response using magnetoencephalography.

    PubMed

    Eswaran, H; Lowery, C L; Robinson, S E; Wilson, J D; Cheyne, D; McKenzie, D

    2000-01-01

    Our goals were to successfully record fetal auditory-evoked responses using the magnetoencephalography technique, to understand its problems and limitations, and to propose instrument design modifications to improve the signal quality and success rate. Fetal auditory-evoked responses were recorded from four fetuses with gestational ages ranging from 33 to 40+ weeks. The signals were recorded using a gantry-based superconducting quantum interference device. The auditory stimulus was a 1 kHz tone burst. The evoked signals were digitized and averaged over an 800 ms window. After several trials of positioning and repositioning the subjects, we were able to record auditory-evoked responses in three out of the four fetuses. Since the superconducting quantum interference device array was not shaped to fit over the mother's abdomen, we experienced difficulty in positioning the sensors over the fetal head. Based on this pilot study, we propose instrument design changes that may improve the signal quality and success rate of the fetal magnetic auditory-evoked response.

  15. Anthropometry of external auditory canal by non-contactable measurement.

    PubMed

    Yu, Jen-Fang; Lee, Kun-Che; Wang, Ren-Hung; Chen, Yen-Sheng; Fan, Chun-Chieh; Peng, Ying-Chin; Tu, Tsung-Hsien; Chen, Ching-I; Lin, Kuei-Yi

    2015-09-01

    Human ear canals cannot be measured directly with existing general measurement tools. Furthermore, general non-contact optical methods can only conduct simple peripheral measurements of the auricle and cannot capture the shape of the internal ear canal. Therefore, this study uses computed tomography (CT) to measure the geometric shape of the ear canal non-invasively and to complete the anthropometry of the external auditory canal. The results show that the average height and width of the ear canal opening and the average depth of the first bend are generally greater for men than for women. In addition, the difference between the height and width of the ear canal opening is about 40% (p < 0.05). Hence, the circular cross-section of earplugs should be replaced with an elliptical cross-section during manufacturing for better fitting. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  16. The stabilized, wavelet-Mellin transform for analyzing the size and shape information of vocalized sounds

    NASA Astrophysics Data System (ADS)

    Irino, Toshio; Patterson, Roy

    2005-04-01

We hear vowels produced by men, women, and children as approximately the same although there is considerable variability in glottal pulse rate and vocal tract length. At the same time, we can identify the speaker group. Recent experiments show that it is possible to identify vowels even when the glottal pulse rate and vocal tract length are condensed or expanded beyond the range of natural vocalization. This suggests that the auditory system has an automatic process to segregate information about the shape and size of the vocal tract. Recently we proposed that the auditory system uses some form of Stabilized, Wavelet-Mellin Transform (SWMT) to analyze scale information in bio-acoustic sounds as a general framework for auditory processing from cochlea to cortex. This talk explains the theoretical background of the model and how the vocal information is normalized in the representation. [Work supported by GASR(B)(2) No. 15300061, JSPS.]

  17. A Re-examination of the Effect of Masker Phase Curvature on Non-simultaneous Masking.

    PubMed

    Carlyon, Robert P; Flanagan, Sheila; Deeks, John M

    2017-12-01

Forward masking of a sinusoidal signal is determined not only by the masker's power spectrum but also by its phase spectrum. Specifically, when the phase spectrum is such that the output of an auditory filter centred on the signal has a highly modulated ("peaked") envelope, there is less masking than when that envelope is flat. This finding has been attributed to non-linearities, such as compression, reducing the average neural response to maskers that produce more peaked auditory filter outputs (Carlyon and Datta, J Acoust Soc Am 101:3636-3647, 1997). Here we evaluate an alternative explanation proposed by Wojtczak and Oxenham (Wojtczak and Oxenham, J Assoc Res Otolaryngol 10:595-607, 2009). They reported a masker phase effect for 6-kHz signals when the masker components were at least an octave below the signal frequency. Wojtczak and Oxenham argued that this effect was inconsistent with cochlear compression, and, because it did not occur at lower signal frequencies, was also inconsistent with more central compression. It was instead attributed to activation of the efferent system reducing the response to the subsequent probe. Here, experiment 1 replicated their main findings. Experiment 2 showed that the phase effect on off-frequency forward masking is similar at signal frequencies of 2 and 6 kHz, provided that one equates the number of components likely to interact within an auditory filter centred on the signal, thereby roughly equating the effect of masker phase on the peakiness of that filter output. Experiment 3 showed that for some subjects, masker phase also had a strong influence on off-frequency backward masking of the signal, and that the size of this effect correlated across subjects with that observed in forward masking. We conclude that the masker phase effect is mediated mainly by cochlear non-linearities, with a possible additional effect of more central compression. The data are not consistent with a role for the efferent system.
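
    The "peakiness" argument above turns on the fact that, for a fixed power spectrum, the phase spectrum alone controls how modulated the waveform envelope is. Schroeder-phase complexes are a standard way to impose a given phase curvature (the exact stimuli used in these experiments may differ); the toy sketch below compares the envelope crest factor of a cosine-phase and a Schroeder-phase harmonic complex. In the actual experiments the relevant envelope is the one at the output of an auditory filter centred on the signal, which this simplified sketch omits.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    fs, dur, f0 = 16000, 0.5, 100.0
    t = np.arange(int(fs * dur)) / fs
    harmonics = np.arange(1, 41)                         # components at 100-4000 Hz
    N = harmonics.size

    def harmonic_complex(phases):
        return sum(np.cos(2 * np.pi * n * f0 * t + p) for n, p in zip(harmonics, phases))

    cosine_phase = np.zeros(N)                           # all components aligned -> peaked envelope
    schroeder = np.pi * harmonics * (harmonics + 1) / N  # curved phase -> flat envelope

    for label, ph in (("cosine phase", cosine_phase), ("Schroeder phase", schroeder)):
        env = np.abs(hilbert(harmonic_complex(ph)))
        crest = env.max() / np.sqrt(np.mean(env ** 2))   # envelope "peakiness"
        print(f"{label:16s} envelope crest factor ~ {crest:.1f}")
    ```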

  18. Auditory Imagery Shapes Movement Timing and Kinematics: Evidence from a Musical Task

    ERIC Educational Resources Information Center

    Keller, Peter E.; Dalla Bella, Simone; Koch, Iring

    2010-01-01

    The role of anticipatory auditory imagery in music-like sequential action was investigated by examining timing accuracy and kinematics using a motion capture system. Musicians responded to metronomic pacing signals by producing three unpaced taps on three vertically aligned keys at the given tempo. Taps triggered tones in two out of three blocked…

  19. Fully reconfigurable photonic microwave transversal filter based on digital micromirror device and continuous-wave, incoherent supercontinuum source.

    PubMed

    Lee, Ju Han; Chang, You Min; Han, Young-Geun; Lee, Sang Bae; Chung, Hae Yang

    2007-08-01

The combined use of a programmable, digital micromirror device (DMD) and an ultrabroadband, cw, incoherent supercontinuum (SC) source is experimentally demonstrated to fully explore various aspects of the reconfiguration of a microwave filter transfer function by creating a range of multiwavelength optical filter shapes. Owing to both the unique characteristic of the DMD that an arbitrary optical filter shape can be readily produced and the ultrabroad bandwidth of the cw SC source that is 3 times larger than that of Er-amplified spontaneous emission, a multiwavelength optical beam pattern can be generated with a large number of wavelength filter taps apodized by an arbitrary amplitude window. Therefore various types of high-quality microwave filters can be readily achieved through the spectrum slicing-based photonic microwave transversal filter scheme. The experimental demonstration is performed in three aspects: the tuning of the filter resonance bandwidth at a fixed resonance frequency, the tuning of the filter resonance frequency at a fixed resonance bandwidth, and flexible microwave filter shape reconstruction.
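
    The underlying signal-processing model is the standard transversal (tapped-delay-line) filter: the sliced optical spectrum defines the tap weights, dispersion sets the tap delay, and apodising the weights reshapes the microwave response. The sketch below is a generic illustration with hypothetical tap delay and tap count, not the reported experimental parameters; it compares uniform taps with Gaussian-apodised taps.

    ```python
    import numpy as np

    T = 100e-12                           # tap delay from dispersion (hypothetical), FSR = 1/T = 10 GHz
    K = 40                                # number of wavelength taps sliced by the DMD (hypothetical)
    f = np.linspace(1e6, 5e9, 10000)      # evaluate up to half the free spectral range

    def mag_dB(taps):
        k = np.arange(K)[:, None]
        H = np.sum(taps[:, None] * np.exp(-2j * np.pi * k * T * f[None, :]), axis=0)
        return 20 * np.log10(np.abs(H) / K)              # normalised to the DC resonance

    uniform = np.ones(K)
    gauss = np.exp(-0.5 * ((np.arange(K) - (K - 1) / 2) / (K / 6)) ** 2)
    gauss *= K / gauss.sum()                             # keep unit gain at the resonance

    for label, taps in (("uniform taps", uniform), ("Gaussian-apodised taps", gauss)):
        sidelobe = mag_dB(taps)[f > 1e9].max()           # worst sidelobe away from the resonance
        print(f"{label:24s} peak sidelobe ~ {sidelobe:6.1f} dB")
    ```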

  20. Psychometric functions for informational masking

    NASA Astrophysics Data System (ADS)

    Lutfi, Robert A.; Kistler, Doris J.; Callahan, Michael R.; Wightman, Frederic L.

    2003-04-01

The method of constant stimuli was used to obtain complete psychometric functions (PFs) from 44 normal-hearing listeners in conditions known to produce varying amounts of informational masking. The task was to detect a pure-tone signal in the presence of a broadband noise and in the presence of multitone maskers with frequencies and amplitudes that varied at random from one presentation to the next. Relative to the broadband noise condition, significant reductions were observed in both the slope and the upper asymptote of the PF for multitone maskers producing large amounts of informational masking. Slope was affected more for some listeners while asymptote was affected more for others. Mean slopes and asymptotes varied nonmonotonically with the number of masker components in much the same manner as mean thresholds. The results are consistent with a model that assumes trial-by-trial judgments are based on a weighted sum of dB levels at the output of independent auditory filters. For many listeners, however, the weights appear to reflect how often a nonsignal auditory filter is mistaken for the signal filter. For these listeners adaptive procedures may produce a significant bias in the estimates of threshold for conditions of informational masking. [Work supported by NIDCD.]
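
    The decision model described in this abstract can be made concrete with a small Monte Carlo sketch: the trial-by-trial decision variable is a weighted sum of the dB levels at the outputs of independent auditory filters, and spreading weight onto non-signal filters dilutes the signal information. The masker statistics, filter count, and weight profiles below are hypothetical, not the authors' fitted values.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_filters, n_trials = 10, 50000
    signal_filter = 0

    def filter_levels(signal_dB):
        # Random multitone masker: the level in every auditory filter varies across trials
        L = rng.uniform(40, 80, size=(n_trials, n_filters))
        L[:, signal_filter] += signal_dB     # crude: the signal raises the level in its own filter
        return L

    weights = {
        "signal filter only": np.eye(n_filters)[signal_filter],
        "equal weights":      np.full(n_filters, 1.0 / n_filters),
    }
    for label, w in weights.items():
        for sig in (6.0, 12.0):
            dv_sig, dv_ref = filter_levels(sig) @ w, filter_levels(0.0) @ w
            pc = np.mean(dv_sig > dv_ref)    # 2AFC proportion correct
            print(f"{label:20s} signal +{sig:4.1f} dB -> P(correct) ~ {pc:.2f}")
    ```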

  1. Neural spike-timing patterns vary with sound shape and periodicity in three auditory cortical fields

    PubMed Central

    Lee, Christopher M.; Osman, Ahmad F.; Volgushev, Maxim; Escabí, Monty A.

    2016-01-01

Mammals perceive a wide range of temporal cues in natural sounds, and the auditory cortex is essential for their detection and discrimination. The rat primary (A1), ventral (VAF), and caudal suprarhinal (cSRAF) auditory cortical fields have separate thalamocortical pathways that may support unique temporal cue sensitivities. To explore this, we record responses of single neurons in the three fields to variations in envelope shape and modulation frequency of periodic noise sequences. Spike rate, relative synchrony, and first-spike latency metrics have previously been used to quantify neural sensitivities to temporal sound cues; however, such metrics do not measure absolute spike timing of sustained responses to sound shape. To address this, in this study we quantify two forms of spike-timing precision: jitter and reliability. In all three fields, we find that jitter decreases logarithmically with increasing basis spline (B-spline) cutoff frequency used to shape the sound envelope. In contrast, reliability decreases logarithmically with increasing sound envelope modulation frequency. In A1, jitter and reliability vary independently, whereas in ventral cortical fields, jitter and reliability covary. Jitter time scales increase (A1 < VAF < cSRAF) and modulation frequency upper cutoffs decrease (A1 > VAF > cSRAF) with ventral progression from A1. These results suggest a transition from independent encoding of shape and periodicity sound cues on short time scales in A1 to a joint encoding of these same cues on longer time scales in ventral nonprimary cortices. PMID:26843599
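
    The two precision metrics can be illustrated with simulated spike trains. The definitions below are generic choices (jitter as the standard deviation of spike times around stimulus-locked events, reliability as the mean pairwise correlation of smoothed spike trains across trials); the paper's exact computations may differ.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_trials, fs, dur = 20, 1000, 1.0                 # 1 ms bins, 1 s trials
    event_times = np.arange(0.1, 1.0, 0.1)            # stimulus-locked response events (s)

    # Simulated spike trains: one spike per event with Gaussian timing jitter, 20% misses
    sigma_ms = 3.0
    trains = np.zeros((n_trials, int(fs * dur)))
    all_times = []
    for tr in range(n_trials):
        times = event_times + rng.normal(0.0, sigma_ms / 1000.0, size=event_times.size)
        times = times[rng.random(times.size) > 0.2]
        all_times.append(times)
        trains[tr, (times * fs).astype(int)] = 1.0

    # Jitter: SD of spike times relative to the nearest event, pooled across trials
    devs = [t - event_times[np.argmin(np.abs(event_times - t))]
            for times in all_times for t in times]
    jitter_ms = 1000.0 * np.std(devs)

    # Reliability: mean pairwise correlation of smoothed (5 ms boxcar) spike trains
    kernel = np.ones(5) / 5.0
    smooth = np.array([np.convolve(tr, kernel, mode="same") for tr in trains])
    reliability = np.corrcoef(smooth)[np.triu_indices(n_trials, k=1)].mean()

    print(f"jitter ~ {jitter_ms:.1f} ms, reliability ~ {reliability:.2f}")
    ```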

  2. Music training for the development of auditory skills.

    PubMed

    Kraus, Nina; Chandrasekaran, Bharath

    2010-08-01

    The effects of music training in relation to brain plasticity have caused excitement, evident from the popularity of books on this topic among scientists and the general public. Neuroscience research has shown that music training leads to changes throughout the auditory system that prime musicians for listening challenges beyond music processing. This effect of music training suggests that, akin to physical exercise and its impact on body fitness, music is a resource that tones the brain for auditory fitness. Therefore, the role of music in shaping individual development deserves consideration.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dietrich, K.N.; Succop, P.A.; Berger, O.G.

This analysis examined the relationship between lead exposure as registered in whole blood (PbB) and the central auditory processing abilities and cognitive developmental status of the Cincinnati cohort (N = 259) at age 5 years. Although the effects were small, higher prenatal, neonatal, and postnatal PbB levels were associated with poorer central auditory processing abilities on the Filtered Word Subtest of the SCAN (a screening test for auditory processing disorders). Higher postnatal PbB levels were associated with poorer performance on all cognitive developmental subscales of the Kaufman Assessment Battery for Children (K-ABC). However, following adjustment for measures of the home environment and maternal intelligence, few statistically or near statistically significant associations remained. Our findings are discussed in the context of the related issues of confounding and the detection of weak associations in high-risk populations.

  4. Neurite-specific Ca2+ dynamics underlying sound processing in an auditory interneurone.

    PubMed

    Baden, T; Hedwig, B

    2007-01-01

    Concepts on neuronal signal processing and integration at a cellular and subcellular level are driven by recording techniques and model systems available. The cricket CNS with the omega-1-neurone (ON1) provides a model system for auditory pattern recognition and directional processing. Exploiting ON1's planar structure we simultaneously imaged free intracellular Ca(2+) at both input and output neurites and recorded the membrane potential in vivo during acoustic stimulation. In response to a single sound pulse the rate of Ca(2+) rise followed the onset spike rate of ON1, while the final Ca(2+) level depended on the mean spike rate. Ca(2+) rapidly increased in both dendritic and axonal arborizations and only gradually in the axon and the cell body. Ca(2+) levels were particularly high at the spike-generating zone. Through the activation of a Ca(2+)-sensitive K(+) current this may exhibit a specific control over the cell's electrical response properties. In all cellular compartments presentation of species-specific calling song caused distinct oscillations of the Ca(2+) level in the chirp rhythm, but not the faster syllable rhythm. The Ca(2+)-mediated hyperpolarization of ON1 suppressed background spike activity between chirps, acting as a noise filter. During directional auditory processing, the functional interaction of Ca(2+)-mediated inhibition and contralateral synaptic inhibition was demonstrated. Upon stimulation with different sound frequencies, the dendrites, but not the axonal arborizations, demonstrated a tonotopic response profile. This mirrored the dominance of the species-specific carrier frequency and resulted in spatial filtering of high frequency auditory inputs. (c) 2006 Wiley Periodicals, Inc.

  5. Noise-invariant Neurons in the Avian Auditory Cortex: Hearing the Song in Noise

    PubMed Central

    Moore, R. Channing; Lee, Tyler; Theunissen, Frédéric E.

    2013-01-01

    Given the extraordinary ability of humans and animals to recognize communication signals over a background of noise, describing noise invariant neural responses is critical not only to pinpoint the brain regions that are mediating our robust perceptions but also to understand the neural computations that are performing these tasks and the underlying circuitry. Although invariant neural responses, such as rotation-invariant face cells, are well described in the visual system, high-level auditory neurons that can represent the same behaviorally relevant signal in a range of listening conditions have yet to be discovered. Here we found neurons in a secondary area of the avian auditory cortex that exhibit noise-invariant responses in the sense that they responded with similar spike patterns to song stimuli presented in silence and over a background of naturalistic noise. By characterizing the neurons' tuning in terms of their responses to modulations in the temporal and spectral envelope of the sound, we then show that noise invariance is partly achieved by selectively responding to long sounds with sharp spectral structure. Finally, to demonstrate that such computations could explain noise invariance, we designed a biologically inspired noise-filtering algorithm that can be used to separate song or speech from noise. This novel noise-filtering method performs as well as other state-of-the-art de-noising algorithms and could be used in clinical or consumer oriented applications. Our biologically inspired model also shows how high-level noise-invariant responses could be created from neural responses typically found in primary auditory cortex. PMID:23505354

  6. Noise-invariant neurons in the avian auditory cortex: hearing the song in noise.

    PubMed

    Moore, R Channing; Lee, Tyler; Theunissen, Frédéric E

    2013-01-01

    Given the extraordinary ability of humans and animals to recognize communication signals over a background of noise, describing noise invariant neural responses is critical not only to pinpoint the brain regions that are mediating our robust perceptions but also to understand the neural computations that are performing these tasks and the underlying circuitry. Although invariant neural responses, such as rotation-invariant face cells, are well described in the visual system, high-level auditory neurons that can represent the same behaviorally relevant signal in a range of listening conditions have yet to be discovered. Here we found neurons in a secondary area of the avian auditory cortex that exhibit noise-invariant responses in the sense that they responded with similar spike patterns to song stimuli presented in silence and over a background of naturalistic noise. By characterizing the neurons' tuning in terms of their responses to modulations in the temporal and spectral envelope of the sound, we then show that noise invariance is partly achieved by selectively responding to long sounds with sharp spectral structure. Finally, to demonstrate that such computations could explain noise invariance, we designed a biologically inspired noise-filtering algorithm that can be used to separate song or speech from noise. This novel noise-filtering method performs as well as other state-of-the-art de-noising algorithms and could be used in clinical or consumer oriented applications. Our biologically inspired model also shows how high-level noise-invariant responses could be created from neural responses typically found in primary auditory cortex.

  7. Audio-tactile integration and the influence of musical training.

    PubMed

    Kuchenbuch, Anja; Paraskevopoulos, Evangelos; Herholz, Sibylle C; Pantev, Christo

    2014-01-01

    Perception of our environment is a multisensory experience; information from different sensory systems like the auditory, visual and tactile is constantly integrated. Complex tasks that require high temporal and spatial precision of multisensory integration put strong demands on the underlying networks but it is largely unknown how task experience shapes multisensory processing. Long-term musical training is an excellent model for brain plasticity because it shapes the human brain at functional and structural levels, affecting a network of brain areas. In the present study we used magnetoencephalography (MEG) to investigate how audio-tactile perception is integrated in the human brain and if musicians show enhancement of the corresponding activation compared to non-musicians. Using a paradigm that allowed the investigation of combined and separate auditory and tactile processing, we found a multisensory incongruency response, generated in frontal, cingulate and cerebellar regions, an auditory mismatch response generated mainly in the auditory cortex and a tactile mismatch response generated in frontal and cerebellar regions. The influence of musical training was seen in the audio-tactile as well as in the auditory condition, indicating enhanced higher-order processing in musicians, while the sources of the tactile MMN were not influenced by long-term musical training. Consistent with the predictive coding model, more basic, bottom-up sensory processing was relatively stable and less affected by expertise, whereas areas for top-down models of multisensory expectancies were modulated by training.

  8. Cross-Domain Effects of Music and Language Experience on the Representation of Pitch in the Human Auditory Brainstem

    ERIC Educational Resources Information Center

    Bidelman, Gavin M.; Gandour, Jackson T.; Krishnan, Ananthanarayan

    2011-01-01

    Neural encoding of pitch in the auditory brainstem is known to be shaped by long-term experience with language or music, implying that early sensory processing is subject to experience-dependent neural plasticity. In language, pitch patterns consist of sequences of continuous, curvilinear contours; in music, pitch patterns consist of relatively…

  9. The Development of the Orthographic Consistency Effect in Speech Recognition: From Sublexical to Lexical Involvement

    ERIC Educational Resources Information Center

    Ventura, Paulo; Morais, Jose; Kolinsky, Regine

    2007-01-01

    The influence of orthography on children's on-line auditory word recognition was studied from the end of Grade 2 to the end of Grade 4, by examining the orthographic consistency effect [Ziegler, J. C., & Ferrand, L. (1998). Orthography shapes the perception of speech: The consistency effect in auditory recognition. "Psychonomic Bulletin & Review",…

  10. Sound texture perception via statistics of the auditory periphery: Evidence from sound synthesis

    PubMed Central

    McDermott, Josh H.; Simoncelli, Eero P.

    2014-01-01

    Rainstorms, insect swarms, and galloping horses produce “sound textures” – the collective result of many similar acoustic events. Sound textures are distinguished by temporal homogeneity, suggesting they could be recognized with time-averaged statistics. To test this hypothesis, we processed real-world textures with an auditory model containing filters tuned for sound frequencies and their modulations, and measured statistics of the resulting decomposition. We then assessed the realism and recognizability of novel sounds synthesized to have matching statistics. Statistics of individual frequency channels, capturing spectral power and sparsity, generally failed to produce compelling synthetic textures. However, combining them with correlations between channels produced identifiable and natural-sounding textures. Synthesis quality declined if statistics were computed from biologically implausible auditory models. The results suggest that sound texture perception is mediated by relatively simple statistics of early auditory representations, presumably computed by downstream neural populations. The synthesis methodology offers a powerful tool for their further investigation. PMID:21903084
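
    A minimal sketch of the kind of statistics involved, under simplifying assumptions: a bank of bandpass filters stands in for the cochlear model, and only per-channel envelope moments plus cross-channel envelope correlations are computed (the full model also includes modulation-filter statistics). Filter design, channel count, and band edges are hypothetical.

    ```python
    import numpy as np
    from scipy.signal import butter, sosfilt, hilbert

    def texture_statistics(x, fs, n_channels=16, fmin=100.0, fmax=6000.0):
        """Toy texture statistics: subband envelope moments plus cross-channel correlations."""
        edges = np.geomspace(fmin, fmax, n_channels + 1)        # log-spaced band edges
        envelopes = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            sos = butter(2, [lo, hi], btype="bandpass", fs=fs, output="sos")
            envelopes.append(np.abs(hilbert(sosfilt(sos, x))))  # subband Hilbert envelope
        E = np.vstack(envelopes)
        mu, sd = E.mean(axis=1, keepdims=True), E.std(axis=1, keepdims=True)
        return {
            "mean": mu.ravel(),
            "coef_var": (sd / mu).ravel(),                      # crude envelope-depth / sparsity measure
            "skew": (((E - mu) / sd) ** 3).mean(axis=1),
            "corr": np.corrcoef(E),                             # cross-channel envelope correlations
        }

    fs = 16000
    x = np.random.default_rng(3).standard_normal(2 * fs)        # stand-in for a recorded texture
    stats = texture_statistics(x, fs)
    print(stats["mean"].shape, stats["corr"].shape)
    ```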

  11. The central role of recognition in auditory perception: a neurobiological model.

    PubMed

    McLachlan, Neil; Wilson, Sarah

    2010-01-01

    The model presents neurobiologically plausible accounts of sound recognition (including absolute pitch), neural plasticity involved in pitch, loudness and location information integration, and streaming and auditory recall. It is proposed that a cortical mechanism for sound identification modulates the spectrotemporal response fields of inferior colliculus neurons and regulates the encoding of the echoic trace in the thalamus. Identification involves correlation of sequential spectral slices of the stimulus-driven neural activity with stored representations in association with multimodal memories, verbal lexicons, and contextual information. Identities are then consolidated in auditory short-term memory and bound with attribute information (usually pitch, loudness, and direction) that has been integrated according to the identities' spectral properties. Attention to, or recall of, a particular identity will excite a particular sequence in the identification hierarchies and so lead to modulation of thalamus and inferior colliculus neural spectrotemporal response fields. This operates as an adaptive filter for identities, or their attributes, and explains many puzzling human auditory behaviors, such as the cocktail party effect, selective attention, and continuity illusions.

  12. Assessment of central auditory processing in a group of workers exposed to solvents.

    PubMed

    Fuente, Adrian; McPherson, Bradley; Muñoz, Verónica; Pablo Espina, Juan

    2006-12-01

Although they had normal hearing and speech recognition thresholds, a group of workers exposed to solvents showed abnormal results on central auditory tests. Workers exposed to solvents may have difficulties in everyday listening situations that are not related to a decrement in hearing thresholds. A central auditory processing disorder may underlie these difficulties. The aim was to study central auditory processing abilities in a group of workers occupationally exposed to a mix of organic solvents. Ten workers exposed to a mix of organic solvents and 10 matched non-exposed workers were studied. The test battery comprised pure-tone audiometry, tympanometry, acoustic reflex measurement, acoustic reflex decay, dichotic digit, pitch pattern sequence, masking level difference, filtered speech, random gap detection and hearing-in-noise tests. All the workers presented normal hearing thresholds and no signs of middle ear abnormalities. On the majority of the tests, workers exposed to solvents scored lower than both the control group and previously reported normative data.

  13. A Task-Optimized Neural Network Replicates Human Auditory Behavior, Predicts Brain Responses, and Reveals a Cortical Processing Hierarchy.

    PubMed

    Kell, Alexander J E; Yamins, Daniel L K; Shook, Erica N; Norman-Haignere, Sam V; McDermott, Josh H

    2018-05-02

A core goal of auditory neuroscience is to build quantitative models that predict cortical responses to natural sounds. Reasoning that a complete model of auditory cortex must solve ecologically relevant tasks, we optimized hierarchical neural networks for speech and music recognition. The best-performing network contained separate music and speech pathways following early shared processing, potentially replicating human cortical organization. The network performed both tasks as well as humans and exhibited human-like errors despite not being optimized to do so, suggesting common constraints on network and human performance. The network predicted fMRI voxel responses substantially better than traditional spectrotemporal filter models throughout auditory cortex. It also provided a quantitative signature of cortical representational hierarchy: primary and non-primary responses were best predicted by intermediate and late network layers, respectively. The results suggest that task optimization provides a powerful set of tools for modeling sensory systems. Copyright © 2018 Elsevier Inc. All rights reserved.
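
    The branched architecture can be pictured with a toy sketch: shared early layers operating on a cochleagram-like input, followed by separate speech and music heads. This is not the authors' network; the layer sizes, class counts, and input dimensions below are placeholders, and no training is shown.

    ```python
    import torch
    import torch.nn as nn

    class ToyBranchedNet(nn.Module):
        """Toy illustration of shared early processing with separate speech/music pathways."""
        def __init__(self, n_words=500, n_genres=40):          # placeholder class counts
            super().__init__()
            self.shared = nn.Sequential(                       # shared early layers
                nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            def head(n_out):                                   # one task-specific pathway
                return nn.Sequential(
                    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n_out),
                )
            self.speech_head = head(n_words)
            self.music_head = head(n_genres)

        def forward(self, x):
            z = self.shared(x)
            return self.speech_head(z), self.music_head(z)

    net = ToyBranchedNet()
    cochleagram = torch.randn(4, 1, 128, 200)                  # dummy batch: (batch, 1, freq, time)
    speech_logits, music_logits = net(cochleagram)
    print(speech_logits.shape, music_logits.shape)
    ```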

  14. Evaluation of a new breast-shaped compensation filter for a newly built breast imaging system

    NASA Astrophysics Data System (ADS)

    Cai, Weixing; Ning, Ruola; Zhang, Yan; Conover, David

    2007-03-01

A new breast-shaped compensation filter has been designed and fabricated for our newly built breast imaging (CBCTBI) system, which is able to scan an uncompressed breast with pendant geometry. The shape of this compensation filter is designed based on an average-sized breast phantom. Unlike conventional bow-tie compensation filters, its cross-sectional profile varies along the chest wall-to-nipple direction to better compensate for the shape of the breast. Breast phantoms of three different sizes are used to evaluate the performance of this compensation filter. The reconstructed image quality was studied and compared to that obtained without the compensation filter in place. The uniformity of the linear attenuation coefficient and the uniformity of the noise distribution are significantly improved, and the contrast-to-noise ratios (CNR) of small lesions near the chest wall are increased as well. A multi-normal image method is used in the reconstruction process to correct the compensation flood field and to reduce ring artifacts.
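
    The contrast-to-noise ratio used to quantify lesion conspicuity can be computed with the common definition CNR = |mean(lesion) - mean(background)| / std(background); the paper may use a variant, and the image and regions of interest below are synthetic.

    ```python
    import numpy as np

    def cnr(image, lesion_mask, background_mask):
        """Contrast-to-noise ratio of a lesion ROI relative to a background ROI."""
        lesion, background = image[lesion_mask], image[background_mask]
        return abs(lesion.mean() - background.mean()) / background.std()

    # Synthetic reconstructed slice with a faint lesion near the "chest wall"
    rng = np.random.default_rng(4)
    img = rng.normal(0.20, 0.01, size=(128, 128))    # background attenuation + noise
    img[60:68, 10:18] += 0.02                        # small low-contrast lesion
    lesion = np.zeros_like(img, dtype=bool); lesion[60:68, 10:18] = True
    backgr = np.zeros_like(img, dtype=bool); backgr[60:68, 30:38] = True
    print(f"CNR ~ {cnr(img, lesion, backgr):.1f}")
    ```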

  15. Reconstructing spectral cues for sound localization from responses to rippled noise stimuli.

    PubMed

    Van Opstal, A John; Vliegen, Joyce; Van Esch, Thamar

    2017-01-01

Human sound localization in the mid-sagittal plane (elevation) relies on an analysis of the idiosyncratic spectral shape cues provided by the head and pinnae. However, because the actual free-field stimulus spectrum is a priori unknown to the auditory system, the problem of extracting the elevation angle from the sensory spectrum is ill-posed. Here we test different spectral localization models by eliciting head movements toward broad-band noise stimuli with randomly shaped, rippled amplitude spectra emanating from a speaker at a fixed location, while varying the ripple bandwidth between 1.5 and 5.0 cycles/octave. Six listeners participated in the experiments. From the distributions of localization responses toward the individual stimuli, we estimated the listeners' spectral-shape cues underlying their elevation percepts, by applying maximum-likelihood estimation. The reconstructed spectral cues proved to be invariant to the considerable variation in ripple bandwidth, and for each listener they had a remarkable resemblance to the idiosyncratic head-related transfer functions (HRTFs). These results are not in line with models that rely on the detection of a single peak or notch in the amplitude spectrum, nor with a local analysis of first- and second-order spectral derivatives. Instead, our data support a model in which the auditory system performs a cross-correlation between the sensory input at the eardrum-auditory nerve and stored representations of HRTF spectral shapes, to extract the perceived elevation angle.
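
    The supported model, correlating the sensory spectrum against stored HRTF spectral shapes and selecting the best-matching elevation, can be sketched in a few lines. The stand-in HRTFs, ripple, and elevation grid below are synthetic; this is an illustration of the model class, not the authors' maximum-likelihood analysis.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n_freqs, n_elevations = 64, 25
    elevations = np.linspace(-55, 85, n_elevations)

    # Stand-in "stored" HRTF magnitude spectra (dB) per elevation: smooth random shapes
    hrtf_dB = np.cumsum(rng.standard_normal((n_elevations, n_freqs)), axis=1)

    def perceived_elevation(source_dB, true_elev_idx):
        sensory = source_dB + hrtf_dB[true_elev_idx]          # proximal spectrum at the eardrum
        scores = [np.corrcoef(sensory, h)[0, 1] for h in hrtf_dB]
        return elevations[int(np.argmax(scores))]             # best-correlated template wins

    # Rippled source spectrum, unknown to the "listener"
    ripple = 10 * np.sin(2 * np.pi * 3.0 * np.linspace(0, 1, n_freqs))
    true_idx = 12
    print("true elevation", elevations[true_idx], "-> perceived", perceived_elevation(ripple, true_idx))
    ```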

  16. Specialization of the auditory system for the processing of bio-sonar information in the frequency domain: Mustached bats.

    PubMed

    Suga, Nobuo

    2018-04-01

For echolocation, mustached bats emit velocity-sensitive orientation sounds (pulses) containing a constant-frequency component consisting of four harmonics (CF1-4). They show unique behavior called Doppler-shift compensation for Doppler-shifted echoes and hunting behavior for frequency and amplitude modulated echoes from fluttering insects. Their peripheral auditory system is highly specialized for fine frequency analysis of CF2 (∼61.0 kHz) and detecting echo CF2 from fluttering insects. In their central auditory system, lateral inhibition occurring at multiple levels sharpens V-shaped frequency-tuning curves at the periphery and creates sharp spindle-shaped tuning curves and amplitude tuning. The large CF2-tuned area of the auditory cortex systematically represents the frequency and amplitude of CF2 in a frequency-versus-amplitude map. "CF/CF" neurons are tuned to a specific combination of pulse CF1 and Doppler-shifted echo CF2 or CF3. They are tuned to specific velocities. CF/CF neurons cluster in the CC ("C" stands for CF) and DIF (dorsal intrafossa) areas of the auditory cortex. The CC area has the velocity map for Doppler imaging. The DIF area serves particularly for Doppler imaging of other bats approaching in cruising flight. To optimize the processing of behaviorally relevant sounds, cortico-cortical interactions and corticofugal feedback modulate the frequency tuning of cortical and sub-cortical auditory neurons and cochlear hair cells through a neural net consisting of positive feedback associated with lateral inhibition. Copyright © 2018 Elsevier B.V. All rights reserved.

  17. Auditory-Cortex Short-Term Plasticity Induced by Selective Attention

    PubMed Central

    Jääskeläinen, Iiro P.; Ahveninen, Jyrki

    2014-01-01

The ability to concentrate on relevant sounds in the acoustic environment is crucial for everyday function and communication. Converging lines of evidence suggest that transient functional changes in auditory-cortex neurons, “short-term plasticity”, might explain this fundamental function. Under conditions of strongly focused attention, enhanced processing of attended sounds can take place at very early latencies (~50 ms from sound onset) in primary auditory cortex and possibly even at earlier latencies in subcortical structures. More robust selective-attention short-term plasticity is manifested as modulation of responses peaking at ~100 ms from sound onset in functionally specialized nonprimary auditory-cortical areas by way of stimulus-specific reshaping of neuronal receptive fields that supports filtering of selectively attended sound features from task-irrelevant ones. Such effects have been shown to take effect within seconds of a shift in attentional focus. There are findings suggesting that the reshaping of neuronal receptive fields is even stronger at longer auditory-cortex response latencies (~300 ms from sound onset). These longer-latency short-term plasticity effects seem to build up more gradually, within tens of seconds after shifting the focus of attention. Importantly, some of the auditory-cortical short-term plasticity effects observed during selective attention predict enhancements in behaviorally measured sound discrimination performance. PMID:24551458

  18. Arbitrary-shaped Brillouin microwave photonic filter by manipulating a directly modulated pump.

    PubMed

    Wei, Wei; Yi, Lilin; Jaouën, Yves; Hu, Weisheng

    2017-10-15

    We present a cost-effective gigahertz-wide arbitrary-shaped microwave photonic filter based on stimulated Brillouin scattering in fiber using a directly modulated laser (DML). After analyzing the relationship between the spectral power density and the modulation current of the DML, we manage to precisely adjust the optical spectrum of the DML, thereby controlling the Brillouin filter response arbitrarily for the first time, to the best of our knowledge. The filter performance is evaluated by amplifying a 500 Mb/s non-return-to-zero on-off keying signal using a 1 GHz rectangular filter. The comparison between the proposed DML approach and the previous approach adopting a complex IQ modulator shows similar filter flexibility, shape fidelity, and noise performance, proving that the DML-based Brillouin filter technique is a cost-effective and valid solution for microwave photonic applications.

  19. Is a High Tone Pointy? Speakers of Different Languages Match Mandarin Chinese Tones to Visual Shapes Differently

    PubMed Central

    Shang, Nan; Styles, Suzy J.

    2017-01-01

Studies investigating cross-modal correspondences between auditory pitch and visual shapes have shown children and adults consistently match high pitch to pointy shapes and low pitch to curvy shapes, yet no studies have investigated linguistic uses of pitch. In the present study, we used a bouba/kiki style task to investigate the sound/shape mappings for Tones of Mandarin Chinese, for three groups of participants with different language backgrounds. We recorded the vowels [i] and [u] articulated in each of the four tones of Mandarin Chinese. In Study 1 a single auditory stimulus was presented with two images (one curvy, one spiky). In Study 2 a single image was presented with two auditory stimuli differing only in tone. Participants were asked to select the best match in an online ‘Quiz.’ Across both studies, we replicated the previously observed ‘u-curvy, i-pointy’ sound/shape cross-modal correspondence in all groups. However, Tones were mapped differently by people with different language backgrounds: speakers of Mandarin Chinese classified as Chinese-dominant systematically matched Tone 1 (high, steady) to the curvy shape and Tone 4 (falling) to the pointy shape, while English speakers with no knowledge of Chinese preferred to match Tone 1 (high, steady) to the pointy shape and Tone 3 (low, dipping) to the curvy shape. These effects were observed most clearly in Study 2 where tone-pairs were contrasted explicitly. These findings are in line with the dominant patterns of linguistic pitch perception for speakers of these languages (pitch change and pitch height, respectively). Chinese-English balanced bilinguals showed a bivalent pattern, swapping between the Chinese pitch-change pattern and the English pitch-height pattern depending on the task. These findings show that the supposedly universal pattern of mapping linguistic sounds to shape is modulated by the sensory properties of a speaker’s language system, and that people who are highly proficient in more than one language can dynamically shift between patterns. PMID:29270147

  20. Conserved mechanisms of vocalization coding in mammalian and songbird auditory midbrain.

    PubMed

    Woolley, Sarah M N; Portfors, Christine V

    2013-11-01

    The ubiquity of social vocalizations among animals provides the opportunity to identify conserved mechanisms of auditory processing that subserve communication. Identifying auditory coding properties that are shared across vocal communicators will provide insight into how human auditory processing leads to speech perception. Here, we compare auditory response properties and neural coding of social vocalizations in auditory midbrain neurons of mammalian and avian vocal communicators. The auditory midbrain is a nexus of auditory processing because it receives and integrates information from multiple parallel pathways and provides the ascending auditory input to the thalamus. The auditory midbrain is also the first region in the ascending auditory system where neurons show complex tuning properties that are correlated with the acoustics of social vocalizations. Single unit studies in mice, bats and zebra finches reveal shared principles of auditory coding including tonotopy, excitatory and inhibitory interactions that shape responses to vocal signals, nonlinear response properties that are important for auditory coding of social vocalizations and modulation tuning. Additionally, single neuron responses in the mouse and songbird midbrain are reliable, selective for specific syllables, and rely on spike timing for neural discrimination of distinct vocalizations. We propose that future research on auditory coding of vocalizations in mouse and songbird midbrain neurons adopt similar experimental and analytical approaches so that conserved principles of vocalization coding may be distinguished from those that are specialized for each species. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives". Copyright © 2013 Elsevier B.V. All rights reserved.

  1. Computational Modeling of Blood Flow in the TrapEase Inferior Vena Cava Filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singer, M A; Henshaw, W D; Wang, S L

To evaluate the flow hemodynamics of the TrapEase vena cava filter using three dimensional computational fluid dynamics, including simulated thrombi of multiple shapes, sizes, and trapping positions. The study was performed to identify potential areas of recirculation and stagnation and areas in which trapped thrombi may influence intrafilter thrombosis. Computer models of the TrapEase filter, thrombi (volumes ranging from 0.25 mL to 2 mL, 3 different shapes), and a 23 mm diameter cava were constructed. The hemodynamics of steady-state flow at Reynolds number 600 was examined for the unoccluded and partially occluded filter. Axial velocity contours and wall shear stresses were computed. Flow in the unoccluded TrapEase filter experienced minimal disruption, except near the superior and inferior tips where low velocity flow was observed. For spherical thrombi in the superior trapping position, stagnant and recirculating flow was observed downstream of the thrombus; the volume of stagnant flow and the peak wall shear stress increased monotonically with thrombus volume. For inferiorly trapped spherical thrombi, marked disruption to the flow was observed along the cava wall ipsilateral to the thrombus and in the interior of the filter. Spherically shaped thrombus produced a lower peak wall shear stress than conically shaped thrombus and a larger peak stress than ellipsoidal thrombus. We have designed and constructed a computer model of the flow hemodynamics of the TrapEase IVC filter with varying shapes, sizes, and positions of thrombi. The computer model offers several advantages over in vitro techniques including: improved resolution, ease of evaluating different thrombus sizes and shapes, and easy adaptation for new filter designs and flow parameters. Results from the model also support a previously reported finding from photochromic experiments that suggest the inferior trapping position of the TrapEase IVC filter leads to an intra-filter region of recirculating/stagnant flow with very low shear stress that may be thrombogenic.
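
    As a quick back-of-the-envelope check of the simulated flow condition, Re = rho*V*D/mu can be inverted for the mean velocity implied by Reynolds number 600 in a 23 mm cava; the blood density and viscosity below are assumed textbook values, not parameters reported in the study.

    ```python
    # Back-of-the-envelope check (assumed blood properties, not values from the study)
    rho = 1060.0     # blood density, kg/m^3 (assumed)
    mu = 3.5e-3      # blood dynamic viscosity, Pa*s (assumed)
    D = 0.023        # cava diameter, m
    Re = 600.0

    V = Re * mu / (rho * D)                      # Re = rho*V*D/mu  ->  V = Re*mu/(rho*D)
    print(f"implied mean velocity ~ {V * 100:.1f} cm/s")   # roughly 9 cm/s
    ```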

  2. Controlling the perceived distance of an auditory object by manipulation of loudspeaker directivity.

    PubMed

    Laitinen, Mikko-Ville; Politis, Archontis; Huhtakallio, Ilkka; Pulkki, Ville

    2015-06-01

    This work presents a method to control the perceived distance of an auditory object by changing the directivity pattern of a loudspeaker and consequently the direct-to-reverberant ratio at the listening spot. Control of the directivity pattern is achieved by beamforming using a compact multi-driver loudspeaker unit. A small-sized cubic array consisting of six drivers is assembled, and per driver beamforming filters are derived from directional measurements of the array. The proposed method is evaluated using formal listening tests. The results show that the perceived distance can be controlled effectively by directivity pattern modification.

  3. Auditory brainstem response to complex sounds: a tutorial

    PubMed Central

    Skoe, Erika; Kraus, Nina

    2010-01-01

    This tutorial provides a comprehensive overview of the methodological approach to collecting and analyzing auditory brainstem responses to complex sounds (cABRs). cABRs provide a window into how behaviorally relevant sounds such as speech and music are processed in the brain. Because temporal and spectral characteristics of sounds are preserved in this subcortical response, cABRs can be used to assess specific impairments and enhancements in auditory processing. Notably, subcortical function is neither passive nor hardwired but dynamically interacts with higher-level cognitive processes to refine how sounds are transcribed into neural code. This experience-dependent plasticity, which can occur on a number of time scales (e.g., life-long experience with speech or music, short-term auditory training, online auditory processing), helps shape sensory perception. Thus, by being an objective and non-invasive means for examining cognitive function and experience-dependent processes in sensory activity, cABRs have considerable utility in the study of populations where auditory function is of interest (e.g., auditory experts such as musicians, persons with hearing loss, auditory processing and language disorders). This tutorial is intended for clinicians and researchers seeking to integrate cABRs into their clinical and/or research programs. PMID:20084007

  4. Synaptic integration in dendrites: exceptional need for speed

    PubMed Central

    Golding, Nace L; Oertel, Donata

    2012-01-01

Some neurons in the mammalian auditory system are able to detect and report the coincident firing of inputs with remarkable temporal precision. A strong, low-voltage-activated potassium conductance (gKL) at the cell body and dendrites gives these neurons sensitivity to the rate of depolarization by EPSPs, allowing neurons to assess the coincidence of the rising slopes of unitary EPSPs. Two groups of neurons in the brain stem, octopus cells in the posteroventral cochlear nucleus and principal cells of the medial superior olive (MSO), extract acoustic information by assessing coincident firing of their inputs over a submillisecond timescale and convey that information at rates of up to 1000 spikes/s. Octopus cells detect the coincident activation of groups of auditory nerve fibres by broadband transient sounds, compensating for the travelling wave delay by dendritic filtering, while MSO neurons detect coincident activation of similarly tuned neurons from each of the two ears through separate dendritic tufts. Each makes use of filtering that is introduced by the spatial distribution of inputs on dendrites. PMID:22930273

  5. Brian hears: online auditory processing using vectorization over channels.

    PubMed

    Fontaine, Bertrand; Goodman, Dan F M; Benichoux, Victor; Brette, Romain

    2011-01-01

    The human cochlea includes about 3000 inner hair cells which filter sounds at frequencies between 20 Hz and 20 kHz. This massively parallel frequency analysis is reflected in models of auditory processing, which are often based on banks of filters. However, existing implementations do not exploit this parallelism. Here we propose algorithms to simulate these models by vectorizing computation over frequency channels, which are implemented in "Brian Hears," a library for the spiking neural network simulator package "Brian." This approach allows us to use high-level programming languages such as Python, because with vectorized operations, the computational cost of interpretation represents a small fraction of the total cost. This makes it possible to define and simulate complex models in a simple way, while all previous implementations were model-specific. In addition, we show that these algorithms can be naturally parallelized using graphics processing units, yielding substantial speed improvements. We demonstrate these algorithms with several state-of-the-art cochlear models, and show that they compare favorably with existing, less flexible, implementations.
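
    The "vectorization over channels" idea can be shown with a generic sketch (this is not the Brian Hears API): a bank of one-pole complex resonators whose per-channel state is updated with a single NumPy operation per sample, so only the loop over samples remains in Python. The centre frequencies and bandwidth are arbitrary choices.

    ```python
    import numpy as np

    def resonator_bank(x, fs, cfs, bw=100.0):
        """Bank of one-pole complex resonators, vectorised across channels."""
        a = np.exp((-2 * np.pi * bw + 2j * np.pi * cfs) / fs)   # per-channel pole
        y = np.zeros(len(cfs), dtype=complex)
        out = np.empty((len(x), len(cfs)))
        for n, xn in enumerate(x):          # loop over samples only
            y = xn + a * y                  # all channels updated at once
            out[n] = np.abs(y)              # channel envelopes
        return out

    fs = 16000
    cfs = np.geomspace(100, 6000, 32)       # 32 log-spaced centre frequencies
    t = np.arange(fs) / fs
    env = resonator_bank(np.sin(2 * np.pi * 1000 * t), fs, cfs)   # 1 kHz probe tone
    print("most responsive channel ~", round(float(cfs[env[-1].argmax()]), 1), "Hz")
    ```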

  6. Sound Spectrum Influences Auditory Distance Perception of Sound Sources Located in a Room Environment

    PubMed Central

    Spiousas, Ignacio; Etchemendy, Pablo E.; Eguia, Manuel C.; Calcagno, Esteban R.; Abregú, Ezequiel; Vergara, Ramiro O.

    2017-01-01

    Previous studies on the effect of spectral content on auditory distance perception (ADP) focused on the physically measurable cues occurring either in the near field (low-pass filtering due to head diffraction) or when the sound travels distances >15 m (high-frequency energy losses due to air absorption). Here, we study how the spectrum of a sound arriving from a source located in a reverberant room at intermediate distances (1–6 m) influences the perception of the distance to the source. First, we conducted an ADP experiment using pure tones (the simplest possible spectrum) of frequencies 0.5, 1, 2, and 4 kHz. Then, we performed a second ADP experiment with stimuli consisting of continuous broadband and bandpass-filtered (with center frequencies of 0.5, 1.5, and 4 kHz and bandwidths of 1/12, 1/3, and 1.5 octave) pink-noise clips. Our results showed an effect of the stimulus frequency on the perceived distance both for pure tones and filtered noise bands: ADP was less accurate for stimuli containing energy only in the low-frequency range. Analysis of the frequency response of the room showed that the low accuracy observed for low-frequency stimuli can be explained by the presence of sparse modal resonances in the low-frequency region of the spectrum, which induced a non-monotonic relationship between binaural intensity and source distance. The results obtained in the second experiment suggest that ADP can also be affected by stimulus bandwidth but in a less straightforward way (i.e., depending on the center frequency, increasing stimulus bandwidth could have different effects). Finally, the analysis of the acoustical cues suggests that listeners judged source distance using mainly changes in the overall intensity of the auditory stimulus with distance rather than the direct-to-reverberant energy ratio, even for low-frequency noise bands (which typically induce high amount of reverberation). The results obtained in this study show that, depending on the spectrum of the auditory stimulus, reverberation can degrade ADP rather than improve it. PMID:28690556

  7. Sound Spectrum Influences Auditory Distance Perception of Sound Sources Located in a Room Environment.

    PubMed

    Spiousas, Ignacio; Etchemendy, Pablo E; Eguia, Manuel C; Calcagno, Esteban R; Abregú, Ezequiel; Vergara, Ramiro O

    2017-01-01

    Previous studies on the effect of spectral content on auditory distance perception (ADP) focused on the physically measurable cues occurring either in the near field (low-pass filtering due to head diffraction) or when the sound travels distances >15 m (high-frequency energy losses due to air absorption). Here, we study how the spectrum of a sound arriving from a source located in a reverberant room at intermediate distances (1-6 m) influences the perception of the distance to the source. First, we conducted an ADP experiment using pure tones (the simplest possible spectrum) of frequencies 0.5, 1, 2, and 4 kHz. Then, we performed a second ADP experiment with stimuli consisting of continuous broadband and bandpass-filtered (with center frequencies of 0.5, 1.5, and 4 kHz and bandwidths of 1/12, 1/3, and 1.5 octave) pink-noise clips. Our results showed an effect of the stimulus frequency on the perceived distance both for pure tones and filtered noise bands: ADP was less accurate for stimuli containing energy only in the low-frequency range. Analysis of the frequency response of the room showed that the low accuracy observed for low-frequency stimuli can be explained by the presence of sparse modal resonances in the low-frequency region of the spectrum, which induced a non-monotonic relationship between binaural intensity and source distance. The results obtained in the second experiment suggest that ADP can also be affected by stimulus bandwidth but in a less straightforward way (i.e., depending on the center frequency, increasing stimulus bandwidth could have different effects). Finally, the analysis of the acoustical cues suggests that listeners judged source distance using mainly changes in the overall intensity of the auditory stimulus with distance rather than the direct-to-reverberant energy ratio, even for low-frequency noise bands (which typically induce high amount of reverberation). The results obtained in this study show that, depending on the spectrum of the auditory stimulus, reverberation can degrade ADP rather than improve it.
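
    The two candidate cues compared above, overall received intensity and the direct-to-reverberant energy ratio, can both be computed from a room impulse response split a few milliseconds after the direct sound. The sketch below uses synthetic impulse responses and a hypothetical 2.5 ms direct-sound window; it illustrates the cue definitions, not the authors' stimuli.

    ```python
    import numpy as np

    def drr_and_level(rir, fs, direct_window_ms=2.5):
        """Direct-to-reverberant ratio (dB) and overall energy level (dB) of an impulse response."""
        onset = int(np.argmax(np.abs(rir)))                   # arrival of the direct sound
        split = onset + int(direct_window_ms * 1e-3 * fs)
        direct, reverb = np.sum(rir[:split] ** 2), np.sum(rir[split:] ** 2)
        return 10 * np.log10(direct / reverb), 10 * np.log10(direct + reverb)

    # Synthetic RIRs: direct path scaled by 1/r plus a fixed exponentially decaying tail
    fs = 16000
    tail = np.random.default_rng(6).standard_normal(fs) * np.exp(-np.arange(fs) / (0.3 * fs)) * 0.02
    for r in (1.0, 2.0, 4.0, 6.0):
        rir = tail.copy()
        rir[0] += 1.0 / r                                     # direct sound, inverse-distance law
        drr, level = drr_and_level(rir, fs)
        print(f"r = {r:.0f} m: overall level {level:+5.1f} dB, direct-to-reverberant {drr:+5.1f} dB")
    ```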

  8. Flying in tune: sexual recognition in mosquitoes.

    PubMed

    Gibson, Gabriella; Russell, Ian

    2006-07-11

    Mosquitoes hear with their antennae, which in most species are sexually dimorphic. Johnston, who discovered the mosquito auditory organ at the base of the antenna 150 years ago, speculated that audition was involved with mating behaviour. Indeed, male mosquitoes are attracted to female flight tones. The male auditory organ has been proposed to act as an acoustic filter for female flight tones, but female auditory behavior is unknown. We show, for the first time, interactive auditory behavior between males and females that leads to sexual recognition. Individual males and females both respond to pure tones by altering wing-beat frequency. Behavioral auditory tuning curves, based on minimum threshold sound levels that elicit a change in wing-beat frequency to pure tones, are sharper than the mechanical tuning of the antennae, with males being more sensitive than females. We flew opposite-sex pairs of tethered Toxorhynchites brevipalpis and found that each mosquito alters its wing-beat frequency in response to the flight tone of the other, so that within seconds their flight-tone frequencies are closely matched, if not completely synchronized. The flight tones of same-sex pairs may converge in frequency but eventually diverge dramatically.

  9. Temporal integration at consecutive processing stages in the auditory pathway of the grasshopper.

    PubMed

    Wirtssohn, Sarah; Ronacher, Bernhard

    2015-04-01

    Temporal integration in the auditory system of locusts was quantified by presenting single clicks and click pairs while performing intracellular recordings. Auditory neurons were studied at three processing stages, which form a feed-forward network in the metathoracic ganglion. Receptor neurons and most first-order interneurons ("local neurons") encode the signal envelope, while second-order interneurons ("ascending neurons") tend to extract more complex, behaviorally relevant sound features. In different neuron types of the auditory pathway we found three response types: no significant temporal integration (some ascending neurons), leaky energy integration (receptor neurons and some local neurons), and facilitatory processes (some local and ascending neurons). The receptor neurons integrated input over very short time windows (<2 ms). Temporal integration on longer time scales was found at subsequent processing stages, indicative of within-neuron computations and network activity. These different strategies, realized at separate processing stages and in parallel neuronal pathways within one processing stage, could enable the grasshopper's auditory system to evaluate longer time windows and thus to implement temporal filters, while at the same time maintaining a high temporal resolution. Copyright © 2015 the American Physiological Society.
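
    The "leaky energy integration" response type can be illustrated with a generic exponentially decaying integrator driven by click pairs: summation of the two clicks appears only when the inter-click interval is short relative to the integration time constant. The 2 ms time constant and unit-energy clicks below are arbitrary assumptions, not fitted values from the study.

    ```python
    import numpy as np

    def peak_leaky_integration(click_times_ms, tau_ms=2.0, fs=100000, dur_ms=20.0):
        """Peak output of a leaky energy integrator driven by unit-energy clicks."""
        x = np.zeros(int(dur_ms * 1e-3 * fs))
        for t in click_times_ms:
            x[int(t * 1e-3 * fs)] = 1.0               # click "energy" impulses
        alpha = np.exp(-1.0 / (tau_ms * 1e-3 * fs))   # per-sample decay of the integrator
        y, peak = 0.0, 0.0
        for xn in x:
            y = alpha * y + xn                        # leaky integration
            peak = max(peak, y)
        return peak

    single = peak_leaky_integration([1.0])
    for ici in (0.5, 1.0, 2.0, 4.0, 8.0):
        pair = peak_leaky_integration([1.0, 1.0 + ici])
        print(f"inter-click interval {ici:4.1f} ms -> summation ~ {10 * np.log10(pair / single):.1f} dB")
    ```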

  10. Central Auditory Processing of Temporal and Spectral-Variance Cues in Cochlear Implant Listeners

    PubMed Central

    Pham, Carol Q.; Bremen, Peter; Shen, Weidong; Yang, Shi-Ming; Middlebrooks, John C.; Zeng, Fan-Gang; Mc Laughlin, Myles

    2015-01-01

    Cochlear implant (CI) listeners have difficulty understanding speech in complex listening environments. This deficit is thought to be largely due to peripheral encoding problems arising from current spread, which results in wide peripheral filters. In normal hearing (NH) listeners, central processing contributes to segregation of speech from competing sounds. We tested the hypothesis that basic central processing abilities are retained in post-lingually deaf CI listeners, but processing is hampered by degraded input from the periphery. In eight CI listeners, we measured auditory nerve compound action potentials to characterize peripheral filters. Then, we measured psychophysical detection thresholds in the presence of multi-electrode maskers placed either inside (peripheral masking) or outside (central masking) the peripheral filter. This was intended to distinguish peripheral from central contributions to signal detection. Introduction of temporal asynchrony between the signal and masker improved signal detection in both peripheral and central masking conditions for all CI listeners. Randomly varying components of the masker created spectral-variance cues, which seemed to benefit only two out of eight CI listeners. Contrastingly, the spectral-variance cues improved signal detection in all five NH listeners who listened to our CI simulation. Together these results indicate that widened peripheral filters significantly hamper central processing of spectral-variance cues but not of temporal cues in post-lingually deaf CI listeners. As indicated by two CI listeners in our study, however, post-lingually deaf CI listeners may retain some central processing abilities similar to NH listeners. PMID:26176553

  11. Diminished auditory sensory gating during active auditory verbal hallucinations.

    PubMed

    Thoma, Robert J; Meier, Andrew; Houck, Jon; Clark, Vincent P; Lewine, Jeffrey D; Turner, Jessica; Calhoun, Vince; Stephen, Julia

    2017-10-01

    Auditory sensory gating, assessed in a paired-click paradigm, indicates the extent to which incoming stimuli are filtered, or "gated", in auditory cortex. Gating is typically computed as the ratio of the peak amplitude of the event related potential (ERP) to a second click (S2) divided by the peak amplitude of the ERP to a first click (S1). Higher gating ratios are purportedly indicative of incomplete suppression of S2 and considered to represent sensory processing dysfunction. In schizophrenia, hallucination severity is positively correlated with gating ratios, and it was hypothesized that a failure of sensory control processes early in auditory sensation (gating) may represent a larger system failure within the auditory data stream; resulting in auditory verbal hallucinations (AVH). EEG data were collected while patients (N=12) with treatment-resistant AVH pressed a button to indicate the beginning (AVH-on) and end (AVH-off) of each AVH during a paired click protocol. For each participant, separate gating ratios were computed for the P50, N100, and P200 components for each of the AVH-off and AVH-on states. AVH trait severity was assessed using the Psychotic Symptoms Rating Scales AVH Total score (PSYRATS). The results of a mixed model ANOVA revealed an overall effect for AVH state, such that gating ratios were significantly higher during the AVH-on state than during AVH-off for all three components. PSYRATS score was significantly and negatively correlated with N100 gating ratio only in the AVH-off state. These findings link onset of AVH with a failure of an empirically-defined auditory inhibition system, auditory sensory gating, and pave the way for a sensory gating model of AVH. Copyright © 2017 Elsevier B.V. All rights reserved.
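
    The gating ratio defined above (peak response to S2 divided by peak response to S1) is straightforward to compute from averaged, baseline-corrected ERP epochs; the search window and the synthetic waveforms below are hypothetical choices for illustration.

    ```python
    import numpy as np

    def gating_ratio(erp_s1, erp_s2, fs, window_ms=(40, 80)):
        """Sensory gating ratio: S2 peak amplitude divided by S1 peak amplitude.

        erp_s1, erp_s2: averaged ERP epochs time-locked to the first and second click.
        The 40-80 ms search window (roughly the P50 range) is a hypothetical choice.
        """
        lo, hi = (int(w * 1e-3 * fs) for w in window_ms)
        return erp_s2[lo:hi].max() / erp_s1[lo:hi].max()

    # Synthetic example: the S2 response is partially suppressed relative to S1
    fs = 1000
    t = np.arange(0, 0.3, 1 / fs)
    p50 = lambda amp: amp * np.exp(-((t - 0.055) ** 2) / (2 * 0.008 ** 2))   # Gaussian "P50"
    print(f"gating ratio ~ {gating_ratio(p50(4.0), p50(2.0), fs):.2f}")      # -> ~0.50
    ```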

  12. Respiratory sinus arrhythmia and auditory processing in autism: modifiable deficits of an integrated social engagement system?

    PubMed

    Porges, Stephen W; Macellaio, Matthew; Stanfill, Shannon D; McCue, Kimberly; Lewis, Gregory F; Harden, Emily R; Handelman, Mika; Denver, John; Bazhenova, Olga V; Heilman, Keri J

    2013-06-01

    The current study evaluated processes underlying two common symptoms (i.e., state regulation problems and deficits in auditory processing) associated with a diagnosis of autism spectrum disorders. Although these symptoms have been treated in the literature as unrelated, when informed by the Polyvagal Theory, these symptoms may be viewed as the predictable consequences of depressed neural regulation of an integrated social engagement system, in which there is down regulation of neural influences to the heart (i.e., via the vagus) and to the middle ear muscles (i.e., via the facial and trigeminal cranial nerves). Respiratory sinus arrhythmia (RSA) and heart period were monitored to evaluate state regulation during a baseline and two auditory processing tasks (i.e., the SCAN tests for Filtered Words and Competing Words), which were used to evaluate auditory processing performance. Children with a diagnosis of autism spectrum disorders (ASD) were contrasted with aged matched typically developing children. The current study identified three features that distinguished the ASD group from a group of typically developing children: 1) baseline RSA, 2) direction of RSA reactivity, and 3) auditory processing performance. In the ASD group, the pattern of change in RSA during the attention demanding SCAN tests moderated the relation between performance on the Competing Words test and IQ. In addition, in a subset of ASD participants, auditory processing performance improved and RSA increased following an intervention designed to improve auditory processing. Copyright © 2012 Elsevier B.V. All rights reserved.

  13. A New Acoustic Portal into the Odontocete Ear and Vibrational Analysis of the Tympanoperiotic Complex

    PubMed Central

    Cranford, Ted W.; Krysl, Petr; Amundin, Mats

    2010-01-01

    Global concern over the possible deleterious effects of noise on marine organisms was catalyzed when toothed whales stranded and died in the presence of high-intensity sound. The lack of knowledge about mechanisms of hearing in toothed whales prompted our group to study the anatomy and build a finite element model to simulate sound reception in odontocetes. The primary auditory pathway in toothed whales is an evolutionary novelty, compensating for the impedance mismatch experienced by whale ancestors as they moved from hearing in air to hearing in water. The mechanism by which high-frequency vibrations pass from the low-density fats of the lower jaw into the dense bones of the auditory apparatus is a key to understanding odontocete hearing. Here we identify a new acoustic portal into the ear complex, the tympanoperiotic complex (TPC), and a plausible mechanism by which sound is transduced into the bony components. We reveal the intact anatomic geometry using CT scanning, and test functional preconceptions using finite element modeling and vibrational analysis. We show that the mandibular fat bodies bifurcate posteriorly, attaching to the TPC in two distinct locations. The smaller branch is an inconspicuous, previously undescribed channel, a cone-shaped fat body that fits into a thin-walled bony funnel just anterior to the sigmoid process of the TPC. The TPC also contains regions of thin translucent bone that define zones of differential flexibility, enabling the TPC to bend in response to sound pressure, thus providing a mechanism for vibrations to pass through the ossicular chain. The techniques used to discover the new acoustic portal in toothed whales provide a means to decipher auditory filtering, beam formation, impedance matching, and transduction. These tools can also be used to address concerns about the potential deleterious effects of high-intensity sound in a broad spectrum of marine organisms, from whales to fish. PMID:20694149

  14. A new acoustic portal into the odontocete ear and vibrational analysis of the tympanoperiotic complex.

    PubMed

    Cranford, Ted W; Krysl, Petr; Amundin, Mats

    2010-08-04

    Global concern over the possible deleterious effects of noise on marine organisms was catalyzed when toothed whales stranded and died in the presence of high-intensity sound. The lack of knowledge about mechanisms of hearing in toothed whales prompted our group to study the anatomy and build a finite element model to simulate sound reception in odontocetes. The primary auditory pathway in toothed whales is an evolutionary novelty, compensating for the impedance mismatch experienced by whale ancestors as they moved from hearing in air to hearing in water. The mechanism by which high-frequency vibrations pass from the low-density fats of the lower jaw into the dense bones of the auditory apparatus is a key to understanding odontocete hearing. Here we identify a new acoustic portal into the ear complex, the tympanoperiotic complex (TPC), and a plausible mechanism by which sound is transduced into the bony components. We reveal the intact anatomic geometry using CT scanning, and test functional preconceptions using finite element modeling and vibrational analysis. We show that the mandibular fat bodies bifurcate posteriorly, attaching to the TPC in two distinct locations. The smaller branch is an inconspicuous, previously undescribed channel, a cone-shaped fat body that fits into a thin-walled bony funnel just anterior to the sigmoid process of the TPC. The TPC also contains regions of thin translucent bone that define zones of differential flexibility, enabling the TPC to bend in response to sound pressure, thus providing a mechanism for vibrations to pass through the ossicular chain. The techniques used to discover the new acoustic portal in toothed whales provide a means to decipher auditory filtering, beam formation, impedance matching, and transduction. These tools can also be used to address concerns about the potential deleterious effects of high-intensity sound in a broad spectrum of marine organisms, from whales to fish.

  15. Reconstructing spectral cues for sound localization from responses to rippled noise stimuli

    PubMed Central

    Vliegen, Joyce; Van Esch, Thamar

    2017-01-01

    Human sound localization in the mid-sagittal plane (elevation) relies on an analysis of the idiosyncratic spectral-shape cues provided by the head and pinnae. However, because the actual free-field stimulus spectrum is a priori unknown to the auditory system, the problem of extracting the elevation angle from the sensory spectrum is ill-posed. Here we test different spectral localization models by eliciting head movements toward broad-band noise stimuli with randomly shaped, rippled amplitude spectra emanating from a speaker at a fixed location, while varying the ripple bandwidth between 1.5 and 5.0 cycles/octave. Six listeners participated in the experiments. From the distributions of localization responses toward the individual stimuli, we estimated the listeners’ spectral-shape cues underlying their elevation percepts by applying maximum-likelihood estimation. The reconstructed spectral cues turned out to be invariant to the considerable variation in ripple bandwidth, and for each listener they bore a remarkable resemblance to the idiosyncratic head-related transfer functions (HRTFs). These results are not in line with models that rely on the detection of a single peak or notch in the amplitude spectrum, nor with a local analysis of first- and second-order spectral derivatives. Instead, our data support a model in which the auditory system performs a cross-correlation between the sensory input at the eardrum-auditory nerve, and stored representations of HRTF spectral shapes, to extract the perceived elevation angle. PMID:28333967
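
    A minimal sketch of the template-correlation model described above: the sensory spectrum is correlated with stored HRTF spectral shapes and the best-matching elevation is returned. The templates here are synthetic random stand-ins, not measured HRTFs, and the elevation grid is an assumption for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_freq = 64                                   # dB spectrum sampled on a log-frequency axis
    elevations = np.arange(-45, 46, 5)            # hypothetical elevation grid (deg)
    templates = rng.standard_normal((elevations.size, n_freq))   # stand-in HRTF shapes (dB)

    def estimate_elevation(sensory_spectrum_db):
        # Mean-removed correlation makes the match insensitive to overall source level.
        x = sensory_spectrum_db - sensory_spectrum_db.mean()
        t = templates - templates.mean(axis=1, keepdims=True)
        scores = t @ x / (np.linalg.norm(t, axis=1) * np.linalg.norm(x))
        return elevations[np.argmax(scores)]

    # A stimulus shaped by the +20 deg template should be "localized" near +20 deg.
    true_idx = np.where(elevations == 20)[0][0]
    print(estimate_elevation(templates[true_idx] + 0.1 * rng.standard_normal(n_freq)))
    ```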

  16. Complex Auditory Signals

    DTIC Science & Technology

    1988-09-01

    ability to detect a change in spectral shape. This question also bears on that of how the auditory system codes intensity. There are, at least, two... This prior experience with the diotic presentations... disparity leads us to speculate that the tasks of detecting an... We also considered how binaural... quite complex. One... (Colburn and Durlach, 1978), one prerequisite for binaural... may not be able to simply extrapolate from one to the other... interaction

  17. Auditory Temporal Resolution in Individuals with Diabetes Mellitus Type 2.

    PubMed

    Mishra, Rajkishor; Sanju, Himanshu Kumar; Kumar, Prawin

    2016-10-01

    Introduction  "Diabetes mellitus is a group of metabolic disorders characterized by elevated blood sugar and abnormalities in insulin secretion and action" (American Diabetes Association). Previous literature has reported a connection between diabetes mellitus and hearing impairment. There is a dearth of literature on auditory temporal resolution ability in individuals with diabetes mellitus type 2. Objective  The main objective of the present study was to assess auditory temporal resolution ability through the Gap Detection Threshold (GDT) test in individuals with diabetes mellitus type 2 with high-frequency hearing loss. Methods  Fifteen subjects with diabetes mellitus type 2 with high-frequency hearing loss in the age range of 30 to 40 years participated in the study as the experimental group. Fifteen age-matched non-diabetic individuals with normal hearing served as the control group. We administered the Gap Detection Threshold (GDT) test to all participants to assess their temporal resolution ability. Results  We used the independent t-test to compare the groups. Results showed that the diabetic group (experimental) performed significantly more poorly than the non-diabetic group (control). Conclusion  It is possible to conclude that widening of auditory filters and changes in the central auditory nervous system contributed to poorer performance on the temporal resolution task (Gap Detection Threshold) in individuals with diabetes mellitus type 2. The findings of the present study revealed the deteriorating effect of diabetes mellitus type 2 at the central auditory processing level.
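
    A minimal sketch of the between-group comparison described above, assuming hypothetical gap detection thresholds in milliseconds (the values below are illustrative, not the study's data):

    ```python
    from scipy import stats

    # Hypothetical GDT values (ms); larger GDT = poorer temporal resolution.
    gdt_diabetic = [6.2, 7.1, 5.8, 8.0, 6.9, 7.4, 6.5, 7.8, 6.1, 7.0, 8.3, 6.6, 7.2, 5.9, 7.7]
    gdt_control  = [3.1, 2.8, 3.5, 2.9, 3.3, 3.0, 2.7, 3.6, 3.2, 2.6, 3.4, 3.1, 2.9, 3.3, 3.0]

    # Independent-samples t-test comparing the two groups.
    t_stat, p_value = stats.ttest_ind(gdt_diabetic, gdt_control)
    print(f"t = {t_stat:.2f}, p = {p_value:.4g}")
    ```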

  18. A sound advantage: Increased auditory capacity in autism.

    PubMed

    Remington, Anna; Fairnie, Jake

    2017-09-01

    Autism Spectrum Disorder (ASD) has an intriguing auditory processing profile. Individuals show enhanced pitch discrimination, yet often find seemingly innocuous sounds distressing. This study used two behavioural experiments to examine whether an increased capacity for processing sounds in ASD could underlie both the difficulties and enhanced abilities found in the auditory domain. Autistic and non-autistic young adults performed a set of auditory detection and identification tasks designed to tax processing capacity and establish the extent of perceptual capacity in each population. Tasks were constructed to highlight both the benefits and disadvantages of increased capacity. Autistic people were better at detecting additional unexpected and expected sounds (increased distraction and superior performance respectively). This suggests that they have increased auditory perceptual capacity relative to non-autistic people. This increased capacity may offer an explanation for the auditory superiorities seen in autism (e.g. heightened pitch detection). Somewhat counter-intuitively, this same 'skill' could result in the sensory overload that is often reported - which subsequently can interfere with social communication. Reframing autistic perceptual processing in terms of increased capacity, rather than a filtering deficit or inability to maintain focus, increases our understanding of this complex condition, and has important practical implications that could be used to develop intervention programs to minimise the distress that is often seen in response to sensory stimuli. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  19. Text as a Supplement to Speech in Young and Older Adults

    PubMed Central

    Krull, Vidya; Humes, Larry E.

    2015-01-01

    Objective The purpose of this experiment was to quantify the contribution of visual text to auditory speech recognition in background noise. Specifically, we tested the hypothesis that partially accurate visual text from an automatic speech recognizer could be used successfully to supplement speech understanding in difficult listening conditions in older adults with normal or impaired hearing. Our working hypotheses were based on what is known regarding audiovisual speech perception in the elderly from the speechreading literature. We hypothesized that: 1) combining auditory and visual text information will result in improved recognition accuracy compared to auditory or visual text information alone; 2) benefit from supplementing speech with visual text (auditory and visual enhancement) in young adults will be greater than that in older adults; and 3) individual differences in performance on perceptual measures will be associated with cognitive abilities. Design Fifteen young adults with normal hearing, fifteen older adults with normal hearing, and fifteen older adults with hearing loss participated in this study. All participants completed sentence recognition tasks in auditory-only, text-only, and combined auditory-text conditions. The auditory sentence stimuli were spectrally shaped to restore audibility for the older participants with impaired hearing. All participants also completed various cognitive measures, including measures of working memory, processing speed, verbal comprehension, perceptual and cognitive speed, processing efficiency, inhibition, and the ability to form wholes from parts. Group effects were examined for each of the perceptual and cognitive measures. Audiovisual benefit was calculated relative to performance on auditory-only and visual-text-only conditions. Finally, the relationships between perceptual measures and other independent measures were examined using principal-component factor analyses, followed by regression analyses. Results Both young and older adults performed similarly on nine out of ten perceptual measures (auditory, visual, and combined measures). Combining degraded speech with partially correct text from an automatic speech recognizer improved the understanding of speech in both young and older adults, relative to both auditory- and text-only performance. In all subjects, cognition emerged as a key predictor for a general speech-text integration ability. Conclusions These results suggest that neither age nor hearing loss affected the ability of subjects to benefit from text when used to support speech, after ensuring audibility through spectral shaping. These results also suggest that the benefit obtained by supplementing auditory input with partially accurate text is modulated by cognitive ability, specifically lexical and verbal skills. PMID:26458131

  20. Instantaneous and Frequency-Warped Signal Processing Techniques for Auditory Source Separation.

    NASA Astrophysics Data System (ADS)

    Wang, Avery Li-Chun

    This thesis summarizes several contributions to the areas of signal processing and auditory source separation. The philosophy of Frequency-Warped Signal Processing is introduced as a means for separating the AM and FM contributions to the bandwidth of a complex-valued, frequency-varying sinusoid p(n), transforming it into a signal with slowly-varying parameters. This transformation facilitates the removal of p(n) from an additive mixture while minimizing the amount of damage done to other signal components. The average winding rate of a complex-valued phasor is explored as an estimate of the instantaneous frequency. Theorems are provided showing the robustness of this measure. To implement frequency tracking, a Frequency-Locked Loop algorithm is introduced which uses the complex winding error to update its frequency estimate. The input signal is dynamically demodulated and filtered to extract the envelope. This envelope may then be remodulated to reconstruct the target partial, which may be subtracted from the original signal mixture to yield a new, quickly-adapting form of notch filtering. Enhancements to the basic tracker are made which, under certain conditions, attain the Cramér-Rao bound for the instantaneous frequency estimate. To improve tracking, the novel idea of Harmonic-Locked Loop tracking, using N harmonically constrained trackers, is introduced for tracking signals, such as voices and certain musical instruments. The estimated fundamental frequency is computed from a maximum-likelihood weighting of the N tracking estimates, making it highly robust. The result is that harmonic signals, such as voices, can be isolated from complex mixtures in the presence of other spectrally overlapping signals. Additionally, since phase information is preserved, the resynthesized harmonic signals may be removed from the original mixtures with relatively little damage to the residual signal. Finally, a new methodology is given for designing linear-phase FIR filters which require a small fraction of the computational power of conventional FIR implementations. This design strategy is based on truncated and stabilized IIR filters. These signal-processing methods have been applied to the problem of auditory source separation, resulting in voice separation from complex music that is significantly better than previous results at far lower computational cost.
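
    The winding-rate idea above lends itself to a short illustration. The sketch below estimates the instantaneous frequency of a gliding tone from the sample-to-sample phase rotation of its analytic (complex-valued) phasor; it is a simplified stand-in for the thesis's tracking machinery, with the test signal and smoothing length assumed.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    fs = 8000.0
    t = np.arange(0, 1.0, 1 / fs)
    x = np.cos(2 * np.pi * (440 + 30 * t) * t)          # test tone gliding 440 -> 500 Hz

    z = hilbert(x)                                      # complex-valued phasor
    winding = np.angle(z[1:] * np.conj(z[:-1]))         # per-sample phase increment (rad)
    inst_freq = winding * fs / (2 * np.pi)              # instantaneous frequency (Hz)

    # Smoothing the winding rate gives a robust, slowly varying frequency track.
    smoothed = np.convolve(inst_freq, np.ones(64) / 64, mode="same")
    print(smoothed[len(smoothed) // 2])                 # ~470 Hz near the midpoint of the glide
    ```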

  1. Music and language: relations and disconnections.

    PubMed

    Kraus, Nina; Slater, Jessica

    2015-01-01

    Music and language provide an important context in which to understand the human auditory system. While they perform distinct and complementary communicative functions, music and language are both rooted in the human desire to connect with others. Since sensory function is ultimately shaped by what is biologically important to the organism, the human urge to communicate has been a powerful driving force in both the evolution of auditory function and the ways in which it can be changed by experience within an individual lifetime. This chapter emphasizes the highly interactive nature of the auditory system as well as the depth of its integration with other sensory and cognitive systems. From the origins of music and language to the effects of auditory expertise on the neural encoding of sound, we consider key themes in auditory processing, learning, and plasticity. We emphasize the unique role of the auditory system as the temporal processing "expert" in the brain, and explore relationships between communication and cognition. We demonstrate how experience with music and language can have a significant impact on underlying neural function, and that auditory expertise strengthens some of the very same aspects of sound encoding that are deficient in impaired populations. © 2015 Elsevier B.V. All rights reserved.

  2. Evolutionary conservation and neuronal mechanisms of auditory perceptual restoration.

    PubMed

    Petkov, Christopher I; Sutter, Mitchell L

    2011-01-01

    Auditory perceptual 'restoration' occurs when the auditory system restores an occluded or masked sound of interest. Behavioral work on auditory restoration in humans began over 50 years ago using it to model a noisy environmental scene with competing sounds. It has become clear that not only humans experience auditory restoration: restoration has been broadly conserved in many species. Behavioral studies in humans and animals provide a necessary foundation to link the insights being obtained from human EEG and fMRI to those from animal neurophysiology. The aggregate of data resulting from multiple approaches across species has begun to clarify the neuronal bases of auditory restoration. Different types of neural responses supporting restoration have been found, supportive of multiple mechanisms working within a species. Yet a general principle has emerged that responses correlated with restoration mimic the response that would have been given to the uninterrupted sound of interest. Using the same technology to study different species will help us to better harness animal models of 'auditory scene analysis' to clarify the conserved neural mechanisms shaping the perceptual organization of sound and to advance strategies to improve hearing in natural environmental settings. © 2010 Elsevier B.V. All rights reserved.

  3. Auditory salience using natural soundscapes.

    PubMed

    Huang, Nicholas; Elhilali, Mounya

    2017-03-01

    Salience describes the phenomenon by which an object stands out from a scene. While its underlying processes are extensively studied in vision, mechanisms of auditory salience remain largely unknown. Previous studies have used well-controlled auditory scenes to shed light on some of the acoustic attributes that drive the salience of sound events. Unfortunately, the use of constrained stimuli in addition to a lack of well-established benchmarks of salience judgments hampers the development of comprehensive theories of sensory-driven auditory attention. The present study explores auditory salience in a set of dynamic natural scenes. A behavioral measure of salience is collected by having human volunteers listen to two concurrent scenes and indicate continuously which one attracts their attention. By using natural scenes, the study takes a data-driven rather than experimenter-driven approach to exploring the parameters of auditory salience. The findings indicate that the space of auditory salience is multidimensional (spanning loudness, pitch, spectral shape, as well as other acoustic attributes), nonlinear and highly context-dependent. Importantly, the results indicate that contextual information about the entire scene over both short and long scales needs to be considered in order to properly account for perceptual judgments of salience.

  4. Phonological and orthographic influences in the bouba-kiki effect.

    PubMed

    Cuskley, Christine; Simner, Julia; Kirby, Simon

    2017-01-01

    We examine a high-profile phenomenon known as the bouba-kiki effect, in which non-word names are assigned to abstract shapes in systematic ways (e.g. rounded shapes are preferentially labelled bouba over kiki). In a detailed evaluation of the literature, we show that most accounts of the effect point to predominantly or entirely iconic cross-sensory mappings between acoustic or articulatory properties of sound and shape as the mechanism underlying the effect. However, these accounts have tended to confound the acoustic or articulatory properties of non-words with another fundamental property: their written form. We compare traditional accounts of direct audio or articulatory-visual mapping with an account in which the effect is heavily influenced by matching between the shapes of graphemes and the abstract shape targets. The results of our two studies suggest that the dominant mechanism underlying the effect for literate subjects is matching based on aligning letter curvature and shape roundedness (i.e. non-words with curved letters are matched to round shapes). We show that letter curvature is strong enough to significantly influence word-shape associations even in auditory tasks, where written word forms are never presented to participants. However, we also find an additional phonological influence in that voiced sounds are preferentially linked with rounded shapes, although this arises only in a purely auditory word-shape association task. We conclude that many previous investigations of the bouba-kiki effect may not have given appropriate consideration or weight to the influence of orthography among literate subjects.

  5. Speech training alters consonant and vowel responses in multiple auditory cortex fields

    PubMed Central

    Engineer, Crystal T.; Rahebi, Kimiya C.; Buell, Elizabeth P.; Fink, Melyssa K.; Kilgard, Michael P.

    2015-01-01

    Speech sounds evoke unique neural activity patterns in primary auditory cortex (A1). Extensive speech sound discrimination training alters A1 responses. While the neighboring auditory cortical fields each contain information about speech sound identity, each field processes speech sounds differently. We hypothesized that while all fields would exhibit training-induced plasticity following speech training, there would be unique differences in how each field changes. In this study, rats were trained to discriminate speech sounds by consonant or vowel in quiet and in varying levels of background speech-shaped noise. Local field potential and multiunit responses were recorded from four auditory cortex fields in rats that had received 10 weeks of speech discrimination training. Our results reveal that training alters speech evoked responses in each of the auditory fields tested. The neural response to consonants was significantly stronger in anterior auditory field (AAF) and A1 following speech training. The neural response to vowels following speech training was significantly weaker in ventral auditory field (VAF) and posterior auditory field (PAF). This differential plasticity of consonant and vowel sound responses may result from the greater paired pulse depression, expanded low frequency tuning, reduced frequency selectivity, and lower tone thresholds, which occurred across the four auditory fields. These findings suggest that alterations in the distributed processing of behaviorally relevant sounds may contribute to robust speech discrimination. PMID:25827927

  6. Can You Hear Me Now? Musical Training Shapes Functional Brain Networks for Selective Auditory Attention and Hearing Speech in Noise

    PubMed Central

    Strait, Dana L.; Kraus, Nina

    2011-01-01

    Even in the quietest of rooms, our senses are perpetually inundated by a barrage of sounds, requiring the auditory system to adapt to a variety of listening conditions in order to extract signals of interest (e.g., one speaker's voice amidst others). Brain networks that promote selective attention are thought to sharpen the neural encoding of a target signal, suppressing competing sounds and enhancing perceptual performance. Here, we ask: does musical training benefit cortical mechanisms that underlie selective attention to speech? To answer this question, we assessed the impact of selective auditory attention on cortical auditory-evoked response variability in musicians and non-musicians. Outcomes indicate strengthened brain networks for selective auditory attention in musicians in that musicians but not non-musicians demonstrate decreased prefrontal response variability with auditory attention. Results are interpreted in the context of previous work documenting perceptual and subcortical advantages in musicians for the hearing and neural encoding of speech in background noise. Musicians’ neural proficiency for selectively engaging and sustaining auditory attention to language indicates a potential benefit of music for auditory training. Given the importance of auditory attention for the development and maintenance of language-related skills, musical training may aid in the prevention, habilitation, and remediation of individuals with a wide range of attention-based language, listening and learning impairments. PMID:21716636

  7. The Effect of Early Visual Deprivation on the Neural Bases of Auditory Processing.

    PubMed

    Guerreiro, Maria J S; Putzar, Lisa; Röder, Brigitte

    2016-02-03

    Transient congenital visual deprivation affects visual and multisensory processing. In contrast, the extent to which it affects auditory processing has not been investigated systematically. Research in permanently blind individuals has revealed brain reorganization during auditory processing, involving both intramodal and crossmodal plasticity. The present study investigated the effect of transient congenital visual deprivation on the neural bases of auditory processing in humans. Cataract-reversal individuals and normally sighted controls performed a speech-in-noise task while undergoing functional magnetic resonance imaging. Although there were no behavioral group differences, groups differed in auditory cortical responses: in the normally sighted group, auditory cortex activation increased with increasing noise level, whereas in the cataract-reversal group, no activation difference was observed across noise levels. An auditory activation of visual cortex was not observed at the group level in cataract-reversal individuals. The present data suggest prevailing auditory processing advantages after transient congenital visual deprivation, even many years after sight restoration. The present study demonstrates that people whose sight was restored after a transient period of congenital blindness show more efficient cortical processing of auditory stimuli (here speech), similarly to what has been observed in congenitally permanently blind individuals. These results underscore the importance of early sensory experience in permanently shaping brain function. Copyright © 2016 the authors.

  8. Predictive motor control of sensory dynamics in Auditory Active Sensing

    PubMed Central

    Morillon, Benjamin; Hackett, Troy A.; Kajikawa, Yoshinao; Schroeder, Charles E.

    2016-01-01

    Neuronal oscillations present potential physiological substrates for brain operations that require temporal prediction. We review this idea in the context of auditory perception. Using speech as an exemplar, we illustrate how hierarchically organized oscillations can be used to parse and encode complex input streams. We then consider the motor system as a major source of rhythms (temporal priors) in auditory processing, that act in concert with attention to sharpen sensory representations and link them across areas. We discuss the anatomo-functional pathways that could mediate this audio-motor interaction, and notably the potential role of the somatosensory cortex. Finally, we reposition temporal predictions in the context of internal models, discussing how they interact with feature-based or spatial predictions. We argue that complementary predictions interact synergistically according to the organizational principles of each sensory system, forming multidimensional filters crucial to perception. PMID:25594376

  9. A MEMS disk resonator-based band pass filter electrical equivalent circuit simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sundaram, G. M.; Angira, Mahesh; Gupta, Navneet

    In this paper, a coupled-beam bandpass disk filter is designed for 1 MHz bandwidth. The filter's electrical equivalent circuit simulation is performed using circuit simulators. Important filter parameters such as insertion loss, shape factor, and Q factor are estimated using CoventorWare simulation. The disk-resonator-based radial contour mode filter provides 1.5 MHz bandwidth, with unloaded quality factors of the resonator and the filter of 233480 and 21797, respectively. From the simulation results it is found that the minimum insertion loss is 151.49 dB, the maximum insertion loss is 213.94 dB, and the 40 dB shape factor is 4.17.
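
    For reference, the figures of merit quoted above follow directly from the filter's bandwidths. The short sketch below applies the usual definitions with an assumed centre frequency (the record does not state one) and a 40 dB bandwidth back-computed from the reported shape factor, so those two numbers are assumptions for illustration only.

    ```python
    # Filter figures of merit from bandwidths (illustrative values, not reported data).
    f_center = 60e6        # assumed resonator centre frequency (Hz)
    bw_3db   = 1.5e6       # 3 dB bandwidth (Hz), as reported above
    bw_40db  = 6.25e6      # assumed 40 dB bandwidth (Hz), consistent with a ~4.17 shape factor

    q_loaded     = f_center / bw_3db       # loaded quality factor of the filter
    shape_factor = bw_40db / bw_3db        # 40 dB shape factor (1.0 would be an ideal brick wall)
    print(f"Q = {q_loaded:.0f}, 40 dB shape factor = {shape_factor:.2f}")
    ```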

  10. Shape Perception and Navigation in Blind Adults

    PubMed Central

    Gori, Monica; Cappagli, Giulia; Baud-Bovy, Gabriel; Finocchietti, Sara

    2017-01-01

    Different sensory systems interact to generate a representation of space and to navigate. Vision plays a critical role in the development of spatial representation. During navigation, vision is integrated with auditory and mobility cues. In blind individuals, visual experience is not available and navigation therefore lacks this important sensory signal. In blind individuals, compensatory mechanisms can be adopted to improve spatial and navigation skills. On the other hand, the limitations of these compensatory mechanisms are not completely clear. Both enhanced and impaired reliance on auditory cues in blind individuals have been reported. Here, we develop a new paradigm to test both auditory perception and navigation skills in blind and sighted individuals and to investigate the effect that visual experience has on the ability to reproduce simple and complex paths. During the navigation task, early blind, late blind and sighted individuals were required first to listen to an audio shape and then to recognize and reproduce it by walking. After each audio shape was presented, a static sound was played and the participants were asked to reach it. Movements were recorded with a motion tracking system. Our results show three main impairments specific to early blind individuals. The first is the tendency to compress the shapes reproduced during navigation. The second is the difficulty in recognizing complex audio stimuli, and finally, the third is the difficulty in reproducing the desired shape: early blind participants occasionally reported perceiving a square but they actually reproduced a circle during the navigation task. We discuss these results in terms of compromised spatial reference frames due to lack of visual input during the early period of development. PMID:28144226

  11. Perception of stochastically undersampled sound waveforms: a model of auditory deafferentation

    PubMed Central

    Lopez-Poveda, Enrique A.; Barrios, Pablo

    2013-01-01

    Auditory deafferentation, or permanent loss of auditory nerve afferent terminals, occurs after noise overexposure and aging and may accompany many forms of hearing loss. It could cause significant auditory impairment but is undetected by regular clinical tests and so its effects on perception are poorly understood. Here, we hypothesize and test a neural mechanism by which deafferentation could deteriorate perception. The basic idea is that the spike train produced by each auditory afferent resembles a stochastically digitized version of the sound waveform and that the quality of the waveform representation in the whole nerve depends on the number of aggregated spike trains or auditory afferents. We reason that because spikes occur stochastically in time with a higher probability for high- than for low-intensity sounds, more afferents would be required for the nerve to faithfully encode high-frequency or low-intensity waveform features than low-frequency or high-intensity features. Deafferentation would thus degrade the encoding of these features. We further reason that due to the stochastic nature of nerve firing, the degradation would be greater in noise than in quiet. This hypothesis is tested using a vocoder. Sounds were filtered through ten adjacent frequency bands. For the signal in each band, multiple stochastically subsampled copies were obtained to roughly mimic different stochastic representations of that signal conveyed by different auditory afferents innervating a given cochlear region. These copies were then aggregated to obtain an acoustic stimulus. Tone detection and speech identification tests were performed by young, normal-hearing listeners using different numbers of stochastic samplers per frequency band in the vocoder. Results support the hypothesis that stochastic undersampling of the sound waveform, inspired by deafferentation, impairs speech perception in noise more than in quiet, consistent with auditory aging effects. PMID:23882176
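
    The band-splitting and stochastic subsampling logic described above can be sketched in a few lines. The following is a loose illustration of that idea, not the authors' vocoder: the filter design, the number of bands and "fibers", and the sampling-probability rule are all assumptions.

    ```python
    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    fs = 16000
    rng = np.random.default_rng(1)
    t = np.arange(0, 0.5, 1 / fs)
    x = np.sin(2 * np.pi * 500 * t) + 0.3 * rng.standard_normal(t.size)   # tone in noise

    band_edges = np.geomspace(100, 6000, 11)      # ten adjacent frequency bands
    n_fibers = 5                                  # stochastic samplers per band
    y = np.zeros_like(x)
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        # Sampling probability grows with instantaneous magnitude (louder -> more "spikes").
        p = np.clip(np.abs(band) / (np.abs(band).max() + 1e-12), 0, 1)
        for _ in range(n_fibers):
            keep = rng.random(band.size) < p      # stochastic undersampling of this copy
            y += np.where(keep, band, 0.0) / n_fibers
    ```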

  12. LANGUAGE EXPERIENCE SHAPES PROCESSING OF PITCH RELEVANT INFORMATION IN THE HUMAN BRAINSTEM AND AUDITORY CORTEX: ELECTROPHYSIOLOGICAL EVIDENCE.

    PubMed

    Krishnan, Ananthanarayan; Gandour, Jackson T

    2014-12-01

    Pitch is a robust perceptual attribute that plays an important role in speech, language, and music. As such, it provides an analytic window to evaluate how neural activity relevant to pitch undergoes transformation from early sensory to later cognitive stages of processing in a well-coordinated hierarchical network that is subject to experience-dependent plasticity. We review recent evidence of language experience-dependent effects in pitch processing based on comparisons of native vs. nonnative speakers of a tonal language from electrophysiological recordings in the auditory brainstem and auditory cortex. We present evidence that shows enhanced representation of linguistically-relevant pitch dimensions or features at both the brainstem and cortical levels with a stimulus-dependent preferential activation of the right hemisphere in native speakers of a tone language. We argue that neural representation of pitch-relevant information in the brainstem and early sensory-level processing in the auditory cortex is shaped by the perceptual salience of domain-specific features. While both stages of processing are shaped by language experience, neural representations are transformed and fundamentally different at each biological level of abstraction. The representation of pitch-relevant information in the brainstem is more fine-grained spectrotemporally as it reflects sustained neural phase-locking to pitch-relevant periodicities contained in the stimulus. In contrast, the cortical pitch-relevant neural activity reflects primarily a series of transient temporal neural events synchronized to certain temporal attributes of the pitch contour. We argue that experience-dependent enhancement of pitch representation for Chinese listeners most likely reflects an interaction between higher-level cognitive processes and early sensory-level processing to improve representations of behaviorally-relevant features that contribute optimally to perception. It is our view that long-term experience shapes this adaptive process wherein the top-down connections provide selective gating of inputs to both cortical and subcortical structures to enhance neural responses to specific behaviorally-relevant attributes of the stimulus. A theoretical framework for a neural network is proposed involving coordination between local, feedforward, and feedback components that can account for experience-dependent enhancement of pitch representations at multiple levels of the auditory pathway. The ability to record brainstem and cortical pitch-relevant responses concurrently may provide a new window to evaluate the online interplay between feedback, feedforward, and local intrinsic components in the hierarchical processing of pitch-relevant information.

  13. LANGUAGE EXPERIENCE SHAPES PROCESSING OF PITCH RELEVANT INFORMATION IN THE HUMAN BRAINSTEM AND AUDITORY CORTEX: ELECTROPHYSIOLOGICAL EVIDENCE

    PubMed Central

    Krishnan, Ananthanarayan; Gandour, Jackson T.

    2015-01-01

    Pitch is a robust perceptual attribute that plays an important role in speech, language, and music. As such, it provides an analytic window to evaluate how neural activity relevant to pitch undergoes transformation from early sensory to later cognitive stages of processing in a well-coordinated hierarchical network that is subject to experience-dependent plasticity. We review recent evidence of language experience-dependent effects in pitch processing based on comparisons of native vs. nonnative speakers of a tonal language from electrophysiological recordings in the auditory brainstem and auditory cortex. We present evidence that shows enhanced representation of linguistically-relevant pitch dimensions or features at both the brainstem and cortical levels with a stimulus-dependent preferential activation of the right hemisphere in native speakers of a tone language. We argue that neural representation of pitch-relevant information in the brainstem and early sensory-level processing in the auditory cortex is shaped by the perceptual salience of domain-specific features. While both stages of processing are shaped by language experience, neural representations are transformed and fundamentally different at each biological level of abstraction. The representation of pitch-relevant information in the brainstem is more fine-grained spectrotemporally as it reflects sustained neural phase-locking to pitch-relevant periodicities contained in the stimulus. In contrast, the cortical pitch-relevant neural activity reflects primarily a series of transient temporal neural events synchronized to certain temporal attributes of the pitch contour. We argue that experience-dependent enhancement of pitch representation for Chinese listeners most likely reflects an interaction between higher-level cognitive processes and early sensory-level processing to improve representations of behaviorally-relevant features that contribute optimally to perception. It is our view that long-term experience shapes this adaptive process wherein the top-down connections provide selective gating of inputs to both cortical and subcortical structures to enhance neural responses to specific behaviorally-relevant attributes of the stimulus. A theoretical framework for a neural network is proposed involving coordination between local, feedforward, and feedback components that can account for experience-dependent enhancement of pitch representations at multiple levels of the auditory pathway. The ability to record brainstem and cortical pitch-relevant responses concurrently may provide a new window to evaluate the online interplay between feedback, feedforward, and local intrinsic components in the hierarchical processing of pitch-relevant information. PMID:25838636

  14. Auditory Weighting Functions and TTS/PTS Exposure Functions for Marine Mammals Exposed to Underwater Noise

    DTIC Science & Technology

    2016-12-01

    weighting functions utilized the “M-weighting” functions at lower frequencies, where no TTS existed at that time. Since derivation of the Phase 2... [Fragments from figure captions and front matter: resulting shapes of the weighting functions (left) and exposure functions (right), with arrows indicating the direction of change when the designated parameter is varied; thresholds are in dB re 1 μPa; species group designations for Navy Phase 3 auditory weighting functions.]

  15. P50 suppression in children with selective mutism: a preliminary report.

    PubMed

    Henkin, Yael; Feinholz, Maya; Arie, Miri; Bar-Haim, Yair

    2010-01-01

    Evidence suggests that children with selective mutism (SM) display significant aberrations in auditory efferent activity at the brainstem level that may underlie inefficient auditory processing during vocalization and lead to speech avoidance. The objective of the present study was to explore auditory filtering processes at the cortical level in children with SM. The classic paired-click paradigm was utilized to assess suppression of the P50 event-related potential to the second of two sequentially presented clicks in 10 children with SM and 10 control children. A significant suppression of P50 to the second click was evident in the SM group, whereas no suppression effect was observed in controls. Suppression was evident in 90% of the SM group and in 40% of controls, whereas augmentation was found in 10% and 60%, respectively, yielding a significant association between group and suppression of P50. P50 to the first click was comparable in children with SM and controls. The adult-like, mature P50 suppression effect exhibited by children with SM may reflect a cortical mechanism of compensatory inhibition of irrelevant repetitive information that was not properly suppressed at lower levels of their auditory system. The current data extend our previous findings suggesting that differential auditory processing may be involved in speech selectivity in SM.

  16. Development of auditory sensitivity in budgerigars (Melopsittacus undulatus)

    NASA Astrophysics Data System (ADS)

    Brittan-Powell, Elizabeth F.; Dooling, Robert J.

    2004-06-01

    Auditory feedback influences the development of vocalizations in songbirds and parrots; however, little is known about the development of hearing in these birds. The auditory brainstem response was used to track the development of auditory sensitivity in budgerigars from hatch to 6 weeks of age. Responses were first obtained from 1-week-old birds at high stimulation levels at frequencies at or below 2 kHz, showing that budgerigars do not hear well at hatch. Over the next week, thresholds improved markedly, and responses were obtained for almost all test frequencies throughout the range of hearing by 14 days. By 3 weeks posthatch, birds' best sensitivity shifted from 2 to 2.86 kHz, and the shape of the auditory brainstem response (ABR) audiogram became similar to that of adult budgerigars. About a week before leaving the nest, ABR audiograms of young budgerigars are very similar to those of adult birds. These data complement what is known about vocal development in budgerigars and show that hearing is fully developed by the time that vocal learning begins.

  17. Auditory experience controls the maturation of song discrimination and sexual response in Drosophila

    PubMed Central

    Li, Xiaodong; Ishimoto, Hiroshi

    2018-01-01

    In birds and higher mammals, auditory experience during development is critical to discriminate sound patterns in adulthood. However, the neural and molecular nature of this acquired ability remains elusive. In fruit flies, acoustic perception has been thought to be innate. Here we report, surprisingly, that auditory experience of a species-specific courtship song in developing Drosophila shapes adult song perception and resultant sexual behavior. Preferences in the song-response behaviors of both males and females were tuned by social acoustic exposure during development. We examined the molecular and cellular determinants of this social acoustic learning and found that GABA signaling acting on the GABAA receptor Rdl in the pC1 neurons, the integration node for courtship stimuli, regulated auditory tuning and sexual behavior. These findings demonstrate that maturation of auditory perception in flies is unexpectedly plastic and is acquired socially, providing a model to investigate how song learning regulates mating preference in insects. PMID:29555017

  18. Attention-driven auditory cortex short-term plasticity helps segregate relevant sounds from noise.

    PubMed

    Ahveninen, Jyrki; Hämäläinen, Matti; Jääskeläinen, Iiro P; Ahlfors, Seppo P; Huang, Samantha; Lin, Fa-Hsuan; Raij, Tommi; Sams, Mikko; Vasios, Christos E; Belliveau, John W

    2011-03-08

    How can we concentrate on relevant sounds in noisy environments? A "gain model" suggests that auditory attention simply amplifies relevant and suppresses irrelevant afferent inputs. However, it is unclear whether this suffices when attended and ignored features overlap to stimulate the same neuronal receptive fields. A "tuning model" suggests that, in addition to gain, attention modulates feature selectivity of auditory neurons. We recorded magnetoencephalography, EEG, and functional MRI (fMRI) while subjects attended to tones delivered to one ear and ignored opposite-ear inputs. The attended ear was switched every 30 s to quantify how quickly the effects evolve. To produce overlapping inputs, the tones were presented alone vs. during white-noise masking notch-filtered ±1/6 octaves around the tone center frequencies. Amplitude modulation (39 vs. 41 Hz in opposite ears) was applied for "frequency tagging" of attention effects on maskers. Noise masking reduced early (50-150 ms; N1) auditory responses to unattended tones. In support of the tuning model, selective attention canceled out this attenuating effect but did not modulate the gain of 50-150 ms activity to nonmasked tones or steady-state responses to the maskers themselves. These tuning effects originated at nonprimary auditory cortices, purportedly occupied by neurons that, without attention, have wider frequency tuning than ±1/6 octaves. The attentional tuning evolved rapidly, during the first few seconds after attention switching, and correlated with behavioral discrimination performance. In conclusion, a simple gain model alone cannot explain auditory selective attention. In nonprimary auditory cortices, attention-driven short-term plasticity retunes neurons to segregate relevant sounds from noise.
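
    A minimal sketch of the masker construction described above: broadband noise with a notch of +/- 1/6 octave around the tone frequency, amplitude-modulated for frequency tagging. The tone frequency, filter order, and relative levels are assumptions for illustration, not the study's exact stimulus parameters.

    ```python
    import numpy as np
    from scipy.signal import butter, sosfilt

    fs, dur, f_tone, am_rate = 16000, 1.0, 1000.0, 39.0
    t = np.arange(0, dur, 1 / fs)
    rng = np.random.default_rng(0)

    noise = rng.standard_normal(t.size)
    lo, hi = f_tone * 2 ** (-1 / 6), f_tone * 2 ** (1 / 6)     # notch edges (+/- 1/6 octave)
    sos = butter(4, [lo, hi], btype="bandstop", fs=fs, output="sos")
    masker = sosfilt(sos, noise)                               # notched-noise masker

    masker *= 0.5 * (1 + np.sin(2 * np.pi * am_rate * t))      # 39 Hz amplitude modulation ("tag")
    stimulus = masker + 0.1 * np.sin(2 * np.pi * f_tone * t)   # add the target tone
    ```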

  19. Hearing Shapes: Event-related Potentials Reveal the Time Course of Auditory-Visual Sensory Substitution.

    PubMed

    Graulty, Christian; Papaioannou, Orestis; Bauer, Phoebe; Pitts, Michael A; Canseco-Gonzalez, Enriqueta

    2018-04-01

    In auditory-visual sensory substitution, visual information (e.g., shape) can be extracted through strictly auditory input (e.g., soundscapes). Previous studies have shown that image-to-sound conversions that follow simple rules [such as the Meijer algorithm; Meijer, P. B. L. An experimental system for auditory image representation. Transactions on Biomedical Engineering, 39, 111-121, 1992] are highly intuitive and rapidly learned by both blind and sighted individuals. A number of recent fMRI studies have begun to explore the neuroplastic changes that result from sensory substitution training. However, the time course of cross-sensory information transfer in sensory substitution is largely unexplored and may offer insights into the underlying neural mechanisms. In this study, we recorded ERPs to soundscapes before and after sighted participants were trained with the Meijer algorithm. We compared these posttraining versus pretraining ERP differences with those of a control group who received the same set of 80 auditory/visual stimuli but with arbitrary pairings during training. Our behavioral results confirmed the rapid acquisition of cross-sensory mappings, and the group trained with the Meijer algorithm was able to generalize their learning to novel soundscapes at impressive levels of accuracy. The ERP results revealed an early cross-sensory learning effect (150-210 msec) that was significantly enhanced in the algorithm-trained group compared with the control group as well as a later difference (420-480 msec) that was unique to the algorithm-trained group. These ERP modulations are consistent with previous fMRI results and provide additional insight into the time course of cross-sensory information transfer in sensory substitution.
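
    As a rough illustration of the image-to-sound conversion idea discussed above, here is a simplified sketch in which columns are scanned over time, row position maps to frequency, and pixel brightness maps to amplitude. It follows the spirit of a Meijer-style mapping, but the frequency range, column duration, and mapping details are assumptions, not the published algorithm.

    ```python
    import numpy as np

    fs, col_dur, f_lo, f_hi = 16000, 0.05, 500.0, 5000.0   # assumed conversion parameters

    def image_to_soundscape(image):
        n_rows, n_cols = image.shape                        # pixel values assumed in [0, 1]
        freqs = np.geomspace(f_lo, f_hi, n_rows)[::-1]      # top row -> highest frequency
        t = np.arange(int(col_dur * fs)) / fs
        cols = []
        for c in range(n_cols):
            tones = np.sin(2 * np.pi * freqs[:, None] * t)  # one sinusoid per row
            cols.append((image[:, c:c + 1] * tones).sum(axis=0))
        return np.concatenate(cols)

    # Example: a diagonal line produces a rising sweep across the soundscape.
    soundscape = image_to_soundscape(np.eye(16)[::-1])
    ```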

  20. Temporal Fine Structure and Applications to Cochlear Implants

    ERIC Educational Resources Information Center

    Li, Xing

    2013-01-01

    Complex broadband sounds are decomposed by the auditory filters into a series of relatively narrowband signals, each of which conveys information about the sound by time-varying features. The slow changes in the overall amplitude constitute envelope, while the more rapid events, such as zero crossings, constitute temporal fine structure (TFS).…
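
    The envelope/TFS decomposition described above can be illustrated with the Hilbert transform of a single narrowband signal; this is a generic sketch, with the test signal and sample rate assumed.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    fs = 16000
    t = np.arange(0, 0.2, 1 / fs)
    band = (1 + 0.8 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 1000 * t)  # AM tone

    analytic = hilbert(band)
    envelope = np.abs(analytic)                 # slow overall amplitude changes (envelope)
    tfs = np.cos(np.angle(analytic))            # rapid oscillations at near-constant amplitude (TFS)

    # Envelope * TFS reconstructs the band signal (up to numerical/edge error).
    print(np.max(np.abs(envelope * tfs - band)))
    ```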

  1. Brian Hears: Online Auditory Processing Using Vectorization Over Channels

    PubMed Central

    Fontaine, Bertrand; Goodman, Dan F. M.; Benichoux, Victor; Brette, Romain

    2011-01-01

    The human cochlea includes about 3000 inner hair cells which filter sounds at frequencies between 20 Hz and 20 kHz. This massively parallel frequency analysis is reflected in models of auditory processing, which are often based on banks of filters. However, existing implementations do not exploit this parallelism. Here we propose algorithms to simulate these models by vectorizing computation over frequency channels, which are implemented in “Brian Hears,” a library for the spiking neural network simulator package “Brian.” This approach allows us to use high-level programming languages such as Python, because with vectorized operations, the computational cost of interpretation represents a small fraction of the total cost. This makes it possible to define and simulate complex models in a simple way, while all previous implementations were model-specific. In addition, we show that these algorithms can be naturally parallelized using graphics processing units, yielding substantial speed improvements. We demonstrate these algorithms with several state-of-the-art cochlear models, and show that they compare favorably with existing, less flexible, implementations. PMID:21811453
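
    The channel-vectorization idea can be illustrated generically: rather than filtering each cochlear channel in its own loop, the time loop updates the filter states of all channels at once with array operations. The sketch below is a plain NumPy/SciPy illustration of that strategy, not the Brian Hears API; the filter type and channel count are assumptions.

    ```python
    import numpy as np
    from scipy.signal import butter

    fs, n_ch = 16000, 32
    centers = np.geomspace(100, 6000, n_ch)               # channel centre frequencies
    b = np.empty((n_ch, 5))
    a = np.empty((n_ch, 5))
    for k, fc in enumerate(centers):
        b[k], a[k] = butter(2, [fc / 1.1, fc * 1.1], btype="bandpass", fs=fs)

    x = np.random.default_rng(0).standard_normal(fs // 10)   # 100 ms of noise
    y = np.zeros((n_ch, x.size))
    z = np.zeros((n_ch, 4))                               # per-channel filter state (DF2T)
    for n, xn in enumerate(x):                            # loop over time; vectorize over channels
        yn = b[:, 0] * xn + z[:, 0]
        z[:, :-1] = b[:, 1:-1] * xn - a[:, 1:-1] * yn[:, None] + z[:, 1:]
        z[:, -1] = b[:, -1] * xn - a[:, -1] * yn
        y[:, n] = yn
    ```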

  2. Root Raised Cosine (RRC) Filters and Pulse Shaping in Communication Systems

    NASA Technical Reports Server (NTRS)

    Cubukcu, Erkin

    2012-01-01

    This presentation briefly discusses the application of Root Raised Cosine (RRC) pulse shaping in space telecommunication. RRC filtering (i.e., pulse shaping) is adopted in commercial communications, such as cellular technology, and is used extensively. However, its use in space communication is still relatively new. This will possibly change as crowding of the frequency spectrum used in space communication becomes a problem. The two conflicting requirements in telecommunication are the demand for high data rates per channel (or user) and the need for more channels, i.e., more users. Theoretically, as the channel bandwidth is increased to provide higher data rates, the number of channels allocated in a fixed spectrum must be reduced. Tackling these two conflicting requirements at the same time led to the development of RRC filters. More channels with wider bandwidth might be tightly packed in the frequency spectrum, achieving the desired goals. A link model with RRC filters has been developed and simulated. Using the 90% power bandwidth (BW) measurement definition showed that RRC filtering might improve spectrum efficiency by more than 75%. Furthermore, using matching RRC filters in both the transmitter and the receiver provides improved Bit Error Rate (BER) performance. In this presentation, the theory of three related concepts, namely pulse shaping, Intersymbol Interference (ISI), and Bandwidth (BW), will be touched upon. Additionally, the concept of RRC filtering and some facts about RRC filters will be presented.
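
    A minimal sketch of RRC pulse shaping as discussed above: build the root-raised-cosine impulse response and shape a random symbol stream. The roll-off, oversampling factor, and filter span below are illustrative assumptions, not values from the presentation.

    ```python
    import numpy as np

    def rrc_taps(beta, sps, span):
        """RRC impulse response: beta = roll-off, sps = samples/symbol, span in symbols."""
        t = np.arange(-span * sps / 2, span * sps / 2 + 1) / sps   # time in symbol periods
        h = np.empty_like(t, dtype=float)
        for i, ti in enumerate(t):
            if np.isclose(ti, 0.0):
                h[i] = 1.0 + beta * (4 / np.pi - 1)
            elif np.isclose(abs(ti), 1 / (4 * beta)):
                h[i] = (beta / np.sqrt(2)) * ((1 + 2 / np.pi) * np.sin(np.pi / (4 * beta))
                                              + (1 - 2 / np.pi) * np.cos(np.pi / (4 * beta)))
            else:
                num = np.sin(np.pi * ti * (1 - beta)) + 4 * beta * ti * np.cos(np.pi * ti * (1 + beta))
                h[i] = num / (np.pi * ti * (1 - (4 * beta * ti) ** 2))
        return h / np.sqrt(np.sum(h ** 2))                          # unit-energy normalisation

    sps, beta = 8, 0.35
    taps = rrc_taps(beta, sps, span=10)
    symbols = np.random.default_rng(0).choice([-1.0, 1.0], size=100)   # BPSK symbols
    upsampled = np.zeros(symbols.size * sps)
    upsampled[::sps] = symbols
    tx = np.convolve(upsampled, taps)        # transmit pulse shaping
    rx = np.convolve(tx, taps)               # matched RRC filter at the receiver
    ```

    Because the transmit and receive RRC filters combine into an overall raised-cosine response, the cascaded pulse is approximately free of intersymbol interference at the symbol-spaced sampling instants, which is the motivation for using the matched pair described above.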

  3. Multichannel electrical stimulation of the auditory nerve in man. I. Basic psychophysics.

    PubMed

    Shannon, R V

    1983-08-01

    Basic psychophysical measurements were obtained from three patients implanted with multichannel cochlear implants. This paper presents measurements from stimulation of a single channel at a time (either monopolar or bipolar). The shape of the threshold vs. frequency curve can be partially related to the membrane biophysics of the remaining spiral ganglion and/or dendrites. Nerve survival in the region of the electrode may produce some increase in the dynamic range on that electrode. Loudness was related to the stimulus amplitude by a power law with exponents between 1.6 and 3.4, depending on frequency. Intensity discrimination was better than for normal auditory stimulation, but not enough to offset the small dynamic range for electrical stimulation. Measures of temporal integration were comparable to normals, indicating a central mechanism that is still intact in implant patients. No frequency analysis of the electrical signal was observed. Each electrode produced a unique pitch sensation, but they were not simply related to the tonotopic position of the stimulated electrode. Pitch increased over more than 4 octaves (for one patient) as the frequency was increased from 100 to 300 Hz, but above 300 Hz no pitch change was observed. Possibly the major limitation of single channel cochlear implants is the 1-2 ms integration time (probably due to the capacitative properties of the nerve membrane which acts as a low-pass filter at 100 Hz). Another limitation of electrical stimulation is that there is no spectral analysis of the electrical waveform so that temporal waveform alone determines the effective stimulus.

  4. Courtship song preferences in female zebra finches are shaped by developmental auditory experience.

    PubMed

    Chen, Yining; Clark, Oliver; Woolley, Sarah C

    2017-05-31

    The performance of courtship signals provides information about the behavioural state and quality of the signaller, and females can use such information for social decision-making (e.g. mate choice). However, relatively little is known about the degree to which the perception of and preference for differences in motor performance are shaped by developmental experiences. Furthermore, the neural substrates that development could act upon to influence the processing of performance features remains largely unknown. In songbirds, females use song to identify males and select mates. Moreover, female songbirds are often sensitive to variation in male song performance. Consequently, we investigated how developmental exposure to adult male song affected behavioural and neural responses to song in a small, gregarious songbird, the zebra finch. Zebra finch males modulate their song performance when courting females, and previous work has shown that females prefer the high-performance, female-directed courtship song. However, unlike females allowed to hear and interact with an adult male during development, females reared without developmental song exposure did not demonstrate behavioural preferences for high-performance courtship songs. Additionally, auditory responses to courtship and non-courtship song were altered in adult females raised without developmental song exposure. These data highlight the critical role of developmental auditory experience in shaping the perception and processing of song performance. © 2017 The Author(s).

  5. Matched Behavioral and Neural Adaptations for Low Sound Level Echolocation in a Gleaning Bat, Antrozous pallidus.

    PubMed

    Measor, Kevin R; Leavell, Brian C; Brewton, Dustin H; Rumschlag, Jeffrey; Barber, Jesse R; Razak, Khaleel A

    2017-01-01

    In active sensing, animals make motor adjustments to match sensory inputs to specialized neural circuitry. Here, we describe an active sensing system for sound level processing. The pallid bat uses downward frequency-modulated (FM) sweeps as echolocation calls for general orientation and obstacle avoidance. The bat's auditory cortex contains a region selective for these FM sweeps (FM sweep-selective region, FMSR). We show that the vast majority of FMSR neurons are sensitive and strongly selective for relatively low levels (30-60 dB SPL). Behavioral testing shows that when a flying bat approaches a target, it reduces output call levels to keep echo levels between ∼30 and 55 dB SPL. Thus, the pallid bat behaviorally matches echo levels to an optimized neural representation of sound levels. FMSR neurons are more selective for sound levels of FM sweeps than tones, suggesting that across-frequency integration enhances level tuning. Level-dependent timing of high-frequency sideband inhibition in the receptive field shapes increased level selectivity for FM sweeps. Together with previous studies, these data indicate that the same receptive field properties shape multiple filters (sweep direction, rate, and level) for FM sweeps, a sound common in multiple vocalizations, including human speech. The matched behavioral and neural adaptations for low-intensity echolocation in the pallid bat will facilitate foraging with reduced probability of acoustic detection by prey.

  6. Matched Behavioral and Neural Adaptations for Low Sound Level Echolocation in a Gleaning Bat, Antrozous pallidus

    PubMed Central

    Measor, Kevin R.; Leavell, Brian C.; Brewton, Dustin H.; Rumschlag, Jeffrey; Barber, Jesse R.

    2017-01-01

    In active sensing, animals make motor adjustments to match sensory inputs to specialized neural circuitry. Here, we describe an active sensing system for sound level processing. The pallid bat uses downward frequency-modulated (FM) sweeps as echolocation calls for general orientation and obstacle avoidance. The bat’s auditory cortex contains a region selective for these FM sweeps (FM sweep-selective region, FMSR). We show that the vast majority of FMSR neurons are sensitive and strongly selective for relatively low levels (30-60 dB SPL). Behavioral testing shows that when a flying bat approaches a target, it reduces output call levels to keep echo levels between ∼30 and 55 dB SPL. Thus, the pallid bat behaviorally matches echo levels to an optimized neural representation of sound levels. FMSR neurons are more selective for sound levels of FM sweeps than tones, suggesting that across-frequency integration enhances level tuning. Level-dependent timing of high-frequency sideband inhibition in the receptive field shapes increased level selectivity for FM sweeps. Together with previous studies, these data indicate that the same receptive field properties shape multiple filters (sweep direction, rate, and level) for FM sweeps, a sound common in multiple vocalizations, including human speech. The matched behavioral and neural adaptations for low-intensity echolocation in the pallid bat will facilitate foraging with reduced probability of acoustic detection by prey. PMID:28275715

  7. A Multi Directional Perfect Reconstruction Filter Bank Designed with 2-D Eigenfilter Approach: Application to Ultrasound Speckle Reduction.

    PubMed

    Nagare, Mukund B; Patil, Bhushan D; Holambe, Raghunath S

    2017-02-01

    B-Mode ultrasound images are degraded by inherent noise called speckle, which creates a considerable impact on image quality. This noise reduces the accuracy of image analysis and interpretation. Therefore, reduction of speckle noise is an essential task which improves the accuracy of clinical diagnostics. In this paper, a multi-directional perfect-reconstruction (PR) filter bank is proposed based on a 2-D eigenfilter approach. The proposed method is used to design a two-dimensional (2-D) two-channel linear-phase FIR perfect-reconstruction filter bank. In this method, fan-shaped, diamond-shaped, and checkerboard-shaped filters are designed. The quadratic measure of the error function between the passband and stopband of the filter has been used as the objective function. First, the low-pass analysis filter is designed and then the PR condition is expressed as a set of linear constraints on the corresponding synthesis low-pass filter. Subsequently, the corresponding synthesis filter is designed using the eigenfilter design method with linear constraints. The newly designed 2-D filters are used in a translation-invariant pyramidal directional filter bank (TIPDFB) for reduction of speckle noise in ultrasound images. The proposed 2-D filters give better symmetry, regularity and frequency selectivity in comparison to existing design methods. The proposed method is validated on synthetic and real ultrasound data, which confirms improvement in the quality of ultrasound images and efficient suppression of speckle noise compared to existing methods.
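
    A one-dimensional illustration of the eigenfilter idea used in the record above (without the 2-D geometry or the PR constraints): the quadratic passband/stopband error is written as b'Pb and minimized by the eigenvector of P with the smallest eigenvalue; the band edges and filter order below are assumed values.

```python
import numpy as np

# 1-D eigenfilter sketch (not the paper's 2-D PR design): a Type-I linear-phase
# low-pass FIR filter whose amplitude response is A(w) = sum_k b[k] * cos(k*w).
# The quadratic passband + stopband error is b^T P b; under ||b|| = 1 the
# minimizer is the eigenvector of P with the smallest eigenvalue.
M = 16                                   # half-order; filter length is 2*M + 1
wp, ws = 0.4 * np.pi, 0.6 * np.pi        # assumed band edges
grid = np.linspace(0, np.pi, 512)
k = np.arange(M + 1)

def c(w):
    return np.cos(k * w)                 # cosine basis vector at frequency w

P = np.zeros((M + 1, M + 1))
for w in grid[grid <= wp]:               # passband: penalize deviation from A(0)
    d = c(0.0) - c(w)
    P += np.outer(d, d)
for w in grid[grid >= ws]:               # stopband: penalize energy
    P += np.outer(c(w), c(w))

eigvals, eigvecs = np.linalg.eigh(P)
b = eigvecs[:, 0]                        # eigenvector for the smallest eigenvalue
h = np.concatenate([b[:0:-1] / 2, [b[0]], b[1:] / 2])   # symmetric impulse response
h /= np.sum(h)                           # normalize DC gain to 1
print("filter length:", h.size)
```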

  8. Sensory Intelligence for Extraction of an Abstract Auditory Rule: A Cross-Linguistic Study.

    PubMed

    Guo, Xiao-Tao; Wang, Xiao-Dong; Liang, Xiu-Yuan; Wang, Ming; Chen, Lin

    2018-02-21

    In a complex linguistic environment, while speech sounds can greatly vary, some shared features are often invariant. These invariant features constitute so-called abstract auditory rules. Our previous study has shown that with auditory sensory intelligence, the human brain can automatically extract the abstract auditory rules in the speech sound stream, presumably serving as the neural basis for speech comprehension. However, whether the sensory intelligence for extraction of abstract auditory rules in speech is inherent or experience-dependent remains unclear. To address this issue, we constructed a complex speech sound stream using auditory materials in Mandarin Chinese, in which syllables had a flat lexical tone but differed in other acoustic features to form an abstract auditory rule. This rule was occasionally and randomly violated by the syllables with the rising, dipping or falling tone. We found that both Chinese and foreign speakers detected the violations of the abstract auditory rule in the speech sound stream at a pre-attentive stage, as revealed by the whole-head recordings of mismatch negativity (MMN) in a passive paradigm. However, MMNs peaked earlier in Chinese speakers than in foreign speakers. Furthermore, Chinese speakers showed different MMN peak latencies for the three deviant types, which paralleled recognition points. These findings indicate that the sensory intelligence for extraction of abstract auditory rules in speech sounds is innate but shaped by language experience. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.

  9. You can't stop the music: reduced auditory alpha power and coupling between auditory and memory regions facilitate the illusory perception of music during noise.

    PubMed

    Müller, Nadia; Keil, Julian; Obleser, Jonas; Schulz, Hannah; Grunwald, Thomas; Bernays, René-Ludwig; Huppertz, Hans-Jürgen; Weisz, Nathan

    2013-10-01

    Our brain has the capacity of providing an experience of hearing even in the absence of auditory stimulation. This can be seen as illusory conscious perception. While increasing evidence postulates that conscious perception requires specific brain states that systematically relate to specific patterns of oscillatory activity, the relationship between auditory illusions and oscillatory activity remains mostly unexplained. To investigate this we recorded brain activity with magnetoencephalography and collected intracranial data from epilepsy patients while participants listened to familiar as well as unknown music that was partly replaced by sections of pink noise. We hypothesized that participants have a stronger experience of hearing music throughout noise when the noise sections are embedded in familiar compared to unfamiliar music. This was supported by the behavioral results showing that participants rated the perception of music during noise as stronger when noise was presented in a familiar context. Time-frequency data show that the illusory perception of music is associated with a decrease in auditory alpha power pointing to increased auditory cortex excitability. Furthermore, the right auditory cortex is concurrently synchronized with the medial temporal lobe, putatively mediating memory aspects associated with the music illusion. We thus assume that neuronal activity in the highly excitable auditory cortex is shaped through extensive communication between the auditory cortex and the medial temporal lobe, thereby generating the illusion of hearing music during noise. Copyright © 2013 Elsevier Inc. All rights reserved.

  10. Selective attention in normal and impaired hearing.

    PubMed

    Shinn-Cunningham, Barbara G; Best, Virginia

    2008-12-01

    A common complaint among listeners with hearing loss (HL) is that they have difficulty communicating in common social settings. This article reviews how normal-hearing listeners cope in such settings, especially how they focus attention on a source of interest. Results of experiments with normal-hearing listeners suggest that the ability to selectively attend depends on the ability to analyze the acoustic scene and to form perceptual auditory objects properly. Unfortunately, sound features important for auditory object formation may not be robustly encoded in the auditory periphery of HL listeners. In turn, impaired auditory object formation may interfere with the ability to filter out competing sound sources. Peripheral degradations are also likely to reduce the salience of higher-order auditory cues such as location, pitch, and timbre, which enable normal-hearing listeners to select a desired sound source out of a sound mixture. Degraded peripheral processing is also likely to increase the time required to form auditory objects and focus selective attention so that listeners with HL lose the ability to switch attention rapidly (a skill that is particularly important when trying to participate in a lively conversation). Finally, peripheral deficits may interfere with strategies that normal-hearing listeners employ in complex acoustic settings, including the use of memory to fill in bits of the conversation that are missed. Thus, peripheral hearing deficits are likely to cause a number of interrelated problems that challenge the ability of HL listeners to communicate in social settings requiring selective attention.

  11. Selective Attention in Normal and Impaired Hearing

    PubMed Central

    Shinn-Cunningham, Barbara G.; Best, Virginia

    2008-01-01

    A common complaint among listeners with hearing loss (HL) is that they have difficulty communicating in common social settings. This article reviews how normal-hearing listeners cope in such settings, especially how they focus attention on a source of interest. Results of experiments with normal-hearing listeners suggest that the ability to selectively attend depends on the ability to analyze the acoustic scene and to form perceptual auditory objects properly. Unfortunately, sound features important for auditory object formation may not be robustly encoded in the auditory periphery of HL listeners. In turn, impaired auditory object formation may interfere with the ability to filter out competing sound sources. Peripheral degradations are also likely to reduce the salience of higher-order auditory cues such as location, pitch, and timbre, which enable normal-hearing listeners to select a desired sound source out of a sound mixture. Degraded peripheral processing is also likely to increase the time required to form auditory objects and focus selective attention so that listeners with HL lose the ability to switch attention rapidly (a skill that is particularly important when trying to participate in a lively conversation). Finally, peripheral deficits may interfere with strategies that normal-hearing listeners employ in complex acoustic settings, including the use of memory to fill in bits of the conversation that are missed. Thus, peripheral hearing deficits are likely to cause a number of interrelated problems that challenge the ability of HL listeners to communicate in social settings requiring selective attention. PMID:18974202

  12. Tuning of Human Modulation Filters Is Carrier-Frequency Dependent

    PubMed Central

    Simpson, Andrew J. R.; Reiss, Joshua D.; McAlpine, David

    2013-01-01

    Recent studies employing speech stimuli to investigate ‘cocktail-party’ listening have focused on entrainment of cortical activity to modulations at syllabic (5 Hz) and phonemic (20 Hz) rates. The data suggest that cortical modulation filters (CMFs) are dependent on the sound-frequency channel in which modulations are conveyed, potentially underpinning a strategy for separating speech from background noise. Here, we characterize modulation filters in human listeners using a novel behavioral method. Within an ‘inverted’ adaptive forced-choice increment detection task, listening level was varied whilst contrast was held constant for ramped increments with effective modulation rates between 0.5 and 33 Hz. Our data suggest that modulation filters are tonotopically organized (i.e., vary along the primary, frequency-organized, dimension). This suggests that the human auditory system is optimized to track rapid (phonemic) modulations at high sound-frequencies and slow (prosodic/syllabic) modulations at low frequencies. PMID:24009759

  13. Binding in visual working memory: the role of the episodic buffer.

    PubMed

    Baddeley, Alan D; Allen, Richard J; Hitch, Graham J

    2011-05-01

    The episodic buffer component of working memory is assumed to play a central role in the binding of features into objects, a process that was initially assumed to depend upon executive resources. Here, we review a program of work in which we specifically tested this assumption by studying the effects of a range of attentionally demanding concurrent tasks on the capacity to encode and retain both individual features and bound objects. We found no differential effect of concurrent load, even when the process of binding was made more demanding by separating the shape and color features spatially, temporally or across visual and auditory modalities. Bound features were however more readily disrupted by subsequent stimuli, a process we studied using a suffix paradigm. This suggested a need to assume a feature-based attentional filter followed by an object based storage process. Our results are interpreted within a modified version of the multicomponent working memory model. We also discuss work examining the role of the hippocampus in visual feature binding. Copyright © 2011 Elsevier Ltd. All rights reserved.

  14. Multichannel spatial auditory display for speech communications

    NASA Technical Reports Server (NTRS)

    Begault, D. R.; Erbe, T.; Wenzel, E. M. (Principal Investigator)

    1994-01-01

    A spatial auditory display for multiple speech communications was developed at NASA/Ames Research Center. Input is spatialized by the use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four-letter call signs used by launch personnel at NASA against diotic speech babble. Spatial positions at 30 degrees azimuth increments were evaluated. The results from eight subjects showed a maximum intelligibility improvement of about 6-7 dB when the signal was spatialized to 60 or 90 degrees azimuth positions.
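
    The core spatialization step in the display described above is FIR filtering of each voice with a left/right head-related impulse response pair; a minimal sketch in which the HRIR taps are random placeholders (hypothetical), not the measured filters used on the Motorola 56001 hardware:

```python
import numpy as np
from scipy.signal import fftconvolve

def spatialize(mono, hrir_left, hrir_right):
    """Render a mono voice as a binaural (2-channel) signal by FIR filtering
    with a left/right head-related impulse response pair."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)

# Placeholder HRIRs (random taps) stand in for measured head-related transfer
# functions; a real display would load filters for e.g. 60 or 90 deg azimuth.
rng = np.random.default_rng(0)
hrir_l, hrir_r = rng.normal(size=128), rng.normal(size=128)
voice = rng.normal(size=16000)           # 1 s of stand-in speech at 16 kHz
binaural = spatialize(voice, hrir_l, hrir_r)
print(binaural.shape)
```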

  15. Multi-channel spatial auditory display for speech communications

    NASA Astrophysics Data System (ADS)

    Begault, Durand; Erbe, Tom

    1993-10-01

    A spatial auditory display for multiple speech communications was developed at NASA-Ames Research Center. Input is spatialized by use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four letter call signs used by launch personnel at NASA, against diotic speech babble. Spatial positions at 30 deg azimuth increments were evaluated. The results from eight subjects showed a maximal intelligibility improvement of about 6 to 7 dB when the signal was spatialized to 60 deg or 90 deg azimuth positions.

  16. Multichannel spatial auditory display for speech communications.

    PubMed

    Begault, D R; Erbe, T

    1994-10-01

    A spatial auditory display for multiple speech communications was developed at NASA/Ames Research Center. Input is spatialized by the use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four-letter call signs used by launch personnel at NASA against diotic speech babble. Spatial positions at 30 degrees azimuth increments were evaluated. The results from eight subjects showed a maximum intelligibility improvement of about 6-7 dB when the signal was spatialized to 60 or 90 degrees azimuth positions.

  17. Multichannel Spatial Auditory Display for Speech Communications

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Erbe, Tom

    1994-01-01

    A spatial auditory display for multiple speech communications was developed at NASA/Ames Research Center. Input is spatialized by the use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four-letter call signs used by launch personnel at NASA against diotic speech babble. Spatial positions at 30 degree azimuth increments were evaluated. The results from eight subjects showed a maximum intelligibility improvement of about 6-7 dB when the signal was spatialized to 60 or 90 degree azimuth positions.

  18. Multi-channel spatial auditory display for speech communications

    NASA Technical Reports Server (NTRS)

    Begault, Durand; Erbe, Tom

    1993-01-01

    A spatial auditory display for multiple speech communications was developed at NASA-Ames Research Center. Input is spatialized by use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four letter call signs used by launch personnel at NASA, against diotic speech babble. Spatial positions at 30 deg azimuth increments were evaluated. The results from eight subjects showed a maximal intelligibility improvement of about 6 to 7 dB when the signal was spatialized to 60 deg or 90 deg azimuth positions.

  19. In-air hearing of a diving duck: A comparison of psychoacoustic and auditory brainstem response thresholds

    USGS Publications Warehouse

    Crowell, Sara E.; Wells-Berlin, Alicia M.; Therrien, Ronald E.; Yannuzzi, Sally E.; Carr, Catherine E.

    2016-01-01

    Auditory sensitivity was measured in a species of diving duck that is not often kept in captivity, the lesser scaup. Behavioral (psychoacoustics) and electrophysiological [the auditory brainstem response (ABR)] methods were used to measure in-air auditory sensitivity, and the resulting audiograms were compared. Both approaches yielded audiograms with similar U-shapes and regions of greatest sensitivity (2000−3000 Hz). However, ABR thresholds were higher than psychoacoustic thresholds at all frequencies. This difference was least at the highest frequency tested using both methods (5700 Hz) and greatest at 1000 Hz, where the ABR threshold was 26.8 dB higher than the behavioral measure of threshold. This difference is commonly reported in studies involving many different species. These results highlight the usefulness of each method, depending on the testing conditions and availability of the animals.

  20. In-air hearing of a diving duck: A comparison of psychoacoustic and auditory brainstem response thresholds.

    PubMed

    Crowell, Sara E; Wells-Berlin, Alicia M; Therrien, Ronald E; Yannuzzi, Sally E; Carr, Catherine E

    2016-05-01

    Auditory sensitivity was measured in a species of diving duck that is not often kept in captivity, the lesser scaup. Behavioral (psychoacoustics) and electrophysiological [the auditory brainstem response (ABR)] methods were used to measure in-air auditory sensitivity, and the resulting audiograms were compared. Both approaches yielded audiograms with similar U-shapes and regions of greatest sensitivity (2000-3000 Hz). However, ABR thresholds were higher than psychoacoustic thresholds at all frequencies. This difference was least at the highest frequency tested using both methods (5700 Hz) and greatest at 1000 Hz, where the ABR threshold was 26.8 dB higher than the behavioral measure of threshold. This difference is commonly reported in studies involving many different species. These results highlight the usefulness of each method, depending on the testing conditions and availability of the animals.

  1. Seeing Circles and Drawing Ellipses: When Sound Biases Reproduction of Visual Motion

    PubMed Central

    Aramaki, Mitsuko; Bringoux, Lionel; Ystad, Sølvi; Kronland-Martinet, Richard

    2016-01-01

    The perception and production of biological movements is characterized by the 1/3 power law, a relation linking the curvature and the velocity of an intended action. In particular, motions are perceived and reproduced distorted when their kinematics deviate from this biological law. Whereas most studies dealing with this perceptual-motor relation focused on visual or kinaesthetic modalities in a unimodal context, in this paper we show that auditory dynamics strikingly biases visuomotor processes. Biologically consistent or inconsistent circular visual motions were used in combination with circular or elliptical auditory motions. Auditory motions were synthesized friction sounds mimicking those produced by the friction of the pen on a paper when someone is drawing. Sounds were presented diotically and the auditory motion velocity was evoked through the friction sound timbre variations without any spatial cues. Remarkably, when subjects were asked to reproduce circular visual motion while listening to sounds that evoked elliptical kinematics without seeing their hand, they drew elliptical shapes. Moreover, distortion induced by inconsistent elliptical kinematics in both visual and auditory modalities added up linearly. These results bring to light the substantial role of auditory dynamics in the visuo-motor coupling in a multisensory context. PMID:27119411
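
    The biological-motion relation referred to above is commonly written as tangential velocity proportional to curvature raised to the power -1/3 (equivalently, the two-thirds power law for angular velocity); a small sketch in which the gain constant K and the curvature values are illustrative assumptions:

```python
import numpy as np

# One-third power law: v(t) = K * curvature(t) ** (-1/3).
# K is an assumed velocity gain; the curvature values are illustrative.
K = 1.0
curvature = np.array([0.5, 1.0, 2.0, 4.0])     # 1/m
velocity = K * curvature ** (-1.0 / 3.0)
for kap, v in zip(curvature, velocity):
    print(f"curvature {kap:4.1f} 1/m -> velocity {v:.3f} (arbitrary units)")
```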

  2. New HRCT-based measurement of the human outer ear canal as a basis for acoustical methods.

    PubMed

    Grewe, Johanna; Thiele, Cornelia; Mojallal, Hamidreza; Raab, Peter; Sankowsky-Rothe, Tobias; Lenarz, Thomas; Blau, Matthias; Teschner, Magnus

    2013-06-01

    As the form and size of the external auditory canal determine its transmitting function and hence the sound pressure in front of the eardrum, it is important to understand its anatomy in order to develop, optimize, and compare acoustical methods. High-resolution computed tomography (HRCT) data were measured retrospectively for 100 patients who had received a cochlear implant. In order to visualize the anatomy of the auditory canal, its length, radius, and the angle at which it runs were determined for the patients’ right and left ears. The canal’s volume was calculated, and a radius function was created. The determined length of the auditory canal averaged 23.6 mm for the right ear and 23.5 mm for the left ear. The calculated auditory canal volume (Vtotal) was 0.7 ml for the right ear and 0.69 ml for the left ear. The auditory canal was found to be significantly longer in men than in women, and the volume greater. The values obtained can be employed to develop a method that represents the shape of the auditory canal as accurately as possible to allow the best possible outcomes for hearing aid fitting.

  3. Auditory access, language access, and implicit sequence learning in deaf children.

    PubMed

    Hall, Matthew L; Eigsti, Inge-Marie; Bortfeld, Heather; Lillo-Martin, Diane

    2018-05-01

    Developmental psychology plays a central role in shaping evidence-based best practices for prelingually deaf children. The Auditory Scaffolding Hypothesis (Conway et al., 2009) asserts that a lack of auditory stimulation in deaf children leads to impoverished implicit sequence learning abilities, measured via an artificial grammar learning (AGL) task. However, prior research is confounded by a lack of both auditory and language input. The current study examines implicit learning in deaf children who were (Deaf native signers) or were not (oral cochlear implant users) exposed to language from birth, and in hearing children, using both AGL and Serial Reaction Time (SRT) tasks. Neither deaf nor hearing children across the three groups show evidence of implicit learning on the AGL task, but all three groups show robust implicit learning on the SRT task. These findings argue against the Auditory Scaffolding Hypothesis, and suggest that implicit sequence learning may be resilient to both auditory and language deprivation, within the tested limits. A video abstract of this article can be viewed at: https://youtu.be/EeqfQqlVHLI [Correction added on 07 August 2017, after first online publication: The video abstract link was added.]. © 2017 John Wiley & Sons Ltd.

  4. The use of linear programming techniques to design optimal digital filters for pulse shaping and channel equalization

    NASA Technical Reports Server (NTRS)

    Houts, R. C.; Burlage, D. W.

    1972-01-01

    A time domain technique is developed to design finite-duration impulse response digital filters using linear programming. Two related applications of this technique in data transmission systems are considered. The first is the design of pulse shaping digital filters to generate or detect signaling waveforms transmitted over bandlimited channels that are assumed to have ideal low pass or bandpass characteristics. The second is the design of digital filters to be used as preset equalizers in cascade with channels that have known impulse response characteristics. Example designs are presented which illustrate that excellent waveforms can be generated with frequency-sampling filters and the ease with which digital transversal filters can be designed for preset equalization.
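
    A hedged sketch of the time-domain linear-programming idea (not the authors' exact formulation): choose FIR equalizer taps that minimize the peak deviation of the cascaded channel-plus-filter response from a desired target, posed as a linear program; the channel impulse response, target, and filter length below are assumptions.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.optimize import linprog

# Minimax (Chebyshev) time-domain design via LP: find FIR taps h so that the
# cascade of a known channel c and h approximates a target response d.
# Channel, target, and filter length below are illustrative assumptions.
c = np.array([1.0, 0.6, 0.3, 0.1])           # known channel impulse response
N = 8                                        # equalizer length
L = len(c) + N - 1                           # length of the cascade response
d = np.zeros(L); d[0] = 1.0                  # target: approximate a unit impulse

# Convolution matrix A such that A @ h equals the convolution of c and h.
A = toeplitz(np.r_[c, np.zeros(N - 1)], np.r_[c[0], np.zeros(N - 1)])

# Variables x = [h (N taps), delta]; minimize delta subject to |A h - d| <= delta.
obj = np.r_[np.zeros(N), 1.0]
A_ub = np.block([[A, -np.ones((L, 1))], [-A, -np.ones((L, 1))]])
b_ub = np.r_[d, -d]
bounds = [(None, None)] * N + [(0, None)]
res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
h = res.x[:N]
print("peak time-domain error:", res.x[-1])
```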

  5. A shape-preserving oriented partial differential equation based on a new fidelity term for electronic speckle pattern interferometry fringe patterns denoising

    NASA Astrophysics Data System (ADS)

    Xu, Wenjun; Tang, Chen; Zheng, Tingyue; Qiu, Yue

    2018-07-01

    Oriented partial differential equations (OPDEs) have been demonstrated to be a powerful tool for preserving the integrity of fringes while filtering electronic speckle pattern interferometry (ESPI) fringe patterns. However, the main drawback of OPDE-based methods is that many iterations are often needed, which changes the shape of the fringes. Changes in fringe shape affect the accuracy of subsequent fringe analysis. In this paper, we focus on preserving the shape of the fringes while filtering, addressed here for the first time. We propose a shape-preserving OPDE for ESPI fringe pattern denoising by introducing a new fidelity term to the previous second-order single oriented PDE (SOOPDE). In our proposed fidelity term, the evolution image is subtracted from the shearlet-shrinkage result of the original noisy image. Our proposed shape-preserving OPDE is capable of eliminating noise effectively, keeping the integrity of fringes, and, more importantly, preserving the shape of fringes. We test the proposed shape-preserving OPDE on three computer-simulated and three experimentally obtained ESPI fringe patterns of poor quality. Furthermore, we compare our model with three representative filtering methods, including the widely used SOOPDE, the shearlet transform, and coherence-enhancing diffusion (CED). We also compare our proposed fidelity term with the traditional fidelity term. Experimental results show that the proposed shape-preserving OPDE not only yields filtered images with visual quality on par with those produced by CED, the state-of-the-art method for ESPI fringe pattern denoising, but also preserves the shape of the ESPI fringe patterns.
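
    A much-simplified sketch of the diffusion-plus-fidelity structure described above, using isotropic Laplacian diffusion and a fidelity pull toward a reference image in place of the oriented PDE and the shearlet-shrinkage term (which are not reproduced here); the step size, weight, and test image are assumptions:

```python
import numpy as np

def laplacian(u):
    """5-point Laplacian with replicated borders (simple finite differences)."""
    up = np.pad(u, 1, mode="edge")
    return up[:-2, 1:-1] + up[2:, 1:-1] + up[1:-1, :-2] + up[1:-1, 2:] - 4.0 * u

def denoise(noisy, reference, n_iter=100, dt=0.2, lam=0.1):
    """Explicit iteration u <- u + dt * (diffusion(u) + lam * (reference - u)).
    `reference` plays the role of the fidelity-term image (in the paper, a
    shearlet-shrinkage result; here simply the noisy image itself)."""
    u = noisy.copy()
    for _ in range(n_iter):
        u = u + dt * (laplacian(u) + lam * (reference - u))
    return u

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 6 * np.pi, 128))[None, :] * np.ones((128, 1))  # fringe-like test image
noisy = clean + 0.3 * rng.normal(size=clean.shape)
filtered = denoise(noisy, reference=noisy)
print("residual std:", np.std(filtered - clean))
```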

  6. Auditory Perceptual Abilities Are Associated with Specific Auditory Experience

    PubMed Central

    Zaltz, Yael; Globerson, Eitan; Amir, Noam

    2017-01-01

    The extent to which auditory experience can shape general auditory perceptual abilities is still under constant debate. Some studies show that specific auditory expertise may have a general effect on auditory perceptual abilities, while others show a more limited influence, exhibited only in a relatively narrow range associated with the area of expertise. The current study addresses this issue by examining experience-dependent enhancement in perceptual abilities in the auditory domain. Three experiments were performed. In the first experiment, 12 pop and rock musicians and 15 non-musicians were tested in frequency discrimination (DLF), intensity discrimination, spectrum discrimination (DLS), and time discrimination (DLT). Results showed significant superiority of the musician group only for the DLF and DLT tasks, illuminating enhanced perceptual skills in the key features of pop music, in which miniscule changes in amplitude and spectrum are not critical to performance. The next two experiments attempted to differentiate between generalization and specificity in the influence of auditory experience, by comparing subgroups of specialists. First, seven guitar players and eight percussionists were tested in the DLF and DLT tasks that were found superior for musicians. Results showed superior abilities on the DLF task for guitar players, though no difference between the groups in DLT, demonstrating some dependency of auditory learning on the specific area of expertise. Subsequently, a third experiment was conducted, testing a possible influence of vowel density in native language on auditory perceptual abilities. Ten native speakers of German (a language characterized by a dense vowel system of 14 vowels), and 10 native speakers of Hebrew (characterized by a sparse vowel system of five vowels), were tested in a formant discrimination task. This is the linguistic equivalent of a DLS task. Results showed that German speakers had superior formant discrimination, demonstrating highly specific effects for auditory linguistic experience as well. Overall, results suggest that auditory superiority is associated with the specific auditory exposure. PMID:29238318

  7. Ontogenetic development of the inner ear saccule and utricle in the Lusitanian toadfish: Potential implications for auditory sensitivity.

    PubMed

    Chaves, Patrícia P; Valdoria, Ciara M C; Amorim, M Clara P; Vasconcelos, Raquel O

    2017-09-01

    Studies addressing structure-function relationships of the fish auditory system during development are sparse compared to other taxa. The Batrachoididae has become an important group to investigate mechanisms of auditory plasticity and evolution of auditory-vocal systems. A recent study reported ontogenetic improvements in the inner ear saccule sensitivity of the Lusitanian toadfish, Halobatrachus didactylus, but whether this results from changes in the sensory morphology remains unknown. We investigated how the macula and organization of auditory receptors in the saccule and utricle change during growth in this species. Inner ear sensory epithelia were removed from the end organs of previously PFA-fixed specimens, from non-vocal posthatch fry (<1.4 cm, standard length) to adults (>23 cm). Epithelia were phalloidin-stained and analysed for area, shape, number and orientation patterns of hair cells (HC), and number and size of saccular supporting cells (SC). Saccular macula area expanded 41x in total, and significantly more (relative to body length) among vocal juveniles (2.3-2.9 cm). Saccular HC number increased 25x but HC density decreased, suggesting that HC addition is slower relative to epithelial growth. While SC density decreased, SC apical area increased, contributing to the epithelial expansion. The utricle revealed increased HC density (striolar region) and less epithelial expansion (5x) with growth, contrasting with the saccule, which may have a different developmental pattern due to its larger size and main auditory functions. Both macula shape and HC orientation patterns were already established in the posthatch fry and retained throughout growth in both end organs. We suggest that previously reported ontogenetic improvements in saccular sensitivity might be associated with changes in HC number (not density), size and/or molecular mechanisms controlling HC sensitivity. This is one of the first studies investigating the ontogenetic development of the saccule and utricle in a vocal fish and how it potentially relates to auditory enhancement for acoustic communication. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Spectral and Temporal Processing in Rat Posterior Auditory Cortex

    PubMed Central

    Pandya, Pritesh K.; Rathbun, Daniel L.; Moucha, Raluca; Engineer, Navzer D.; Kilgard, Michael P.

    2009-01-01

    The rat auditory cortex is divided anatomically into several areas, but little is known about the functional differences in information processing between these areas. To determine the filter properties of rat posterior auditory field (PAF) neurons, we compared neurophysiological responses to simple tones, frequency modulated (FM) sweeps, and amplitude modulated noise and tones with responses of primary auditory cortex (A1) neurons. PAF neurons have excitatory receptive fields that are on average 65% broader than A1 neurons. The broader receptive fields of PAF neurons result in responses to narrow and broadband inputs that are stronger than A1. In contrast to A1, we found little evidence for an orderly topographic gradient in PAF based on frequency. These neurons exhibit latencies that are twice as long as A1. In response to modulated tones and noise, PAF neurons adapt to repeated stimuli at significantly slower rates. Unlike A1, neurons in PAF rarely exhibit facilitation to rapidly repeated sounds. Neurons in PAF do not exhibit strong selectivity for rate or direction of narrowband one octave FM sweeps. These results indicate that PAF, like nonprimary visual fields, processes sensory information on larger spectral and longer temporal scales than primary cortex. PMID:17615251

  9. A crossmodal crossover: opposite effects of visual and auditory perceptual load on steady-state evoked potentials to irrelevant visual stimuli.

    PubMed

    Jacoby, Oscar; Hall, Sarah E; Mattingley, Jason B

    2012-07-16

    Mechanisms of attention are required to prioritise goal-relevant sensory events under conditions of stimulus competition. According to the perceptual load model of attention, the extent to which task-irrelevant inputs are processed is determined by the relative demands of discriminating the target: the more perceptually demanding the target task, the less unattended stimuli will be processed. Although much evidence supports the perceptual load model for competing stimuli within a single sensory modality, the effects of perceptual load in one modality on distractor processing in another is less clear. Here we used steady-state evoked potentials (SSEPs) to measure neural responses to irrelevant visual checkerboard stimuli while participants performed either a visual or auditory task that varied in perceptual load. Consistent with perceptual load theory, increasing visual task load suppressed SSEPs to the ignored visual checkerboards. In contrast, increasing auditory task load enhanced SSEPs to the ignored visual checkerboards. This enhanced neural response to irrelevant visual stimuli under auditory load suggests that exhausting capacity within one modality selectively compromises inhibitory processes required for filtering stimuli in another. Copyright © 2012 Elsevier Inc. All rights reserved.

  10. A Temporal Model of Level-Invariant, Tone-in-Noise Detection

    ERIC Educational Resources Information Center

    Berg, Bruce G.

    2004-01-01

    Level-invariant detection refers to findings that thresholds in tone-in-noise detection are unaffected by roving-level procedures that degrade energy cues. Such data are inconsistent with ideas that detection is based on the energy passed by an auditory filter. A hypothesis that detection is based on a level-invariant temporal cue is advanced.…

  11. Attention-driven auditory cortex short-term plasticity helps segregate relevant sounds from noise

    PubMed Central

    Ahveninen, Jyrki; Hämäläinen, Matti; Jääskeläinen, Iiro P.; Ahlfors, Seppo P.; Huang, Samantha; Raij, Tommi; Sams, Mikko; Vasios, Christos E.; Belliveau, John W.

    2011-01-01

    How can we concentrate on relevant sounds in noisy environments? A “gain model” suggests that auditory attention simply amplifies relevant and suppresses irrelevant afferent inputs. However, it is unclear whether this suffices when attended and ignored features overlap to stimulate the same neuronal receptive fields. A “tuning model” suggests that, in addition to gain, attention modulates feature selectivity of auditory neurons. We recorded magnetoencephalography, EEG, and functional MRI (fMRI) while subjects attended to tones delivered to one ear and ignored opposite-ear inputs. The attended ear was switched every 30 s to quantify how quickly the effects evolve. To produce overlapping inputs, the tones were presented alone vs. during white-noise masking notch-filtered ±1/6 octaves around the tone center frequencies. Amplitude modulation (39 vs. 41 Hz in opposite ears) was applied for “frequency tagging” of attention effects on maskers. Noise masking reduced early (50–150 ms; N1) auditory responses to unattended tones. In support of the tuning model, selective attention canceled out this attenuating effect but did not modulate the gain of 50–150 ms activity to nonmasked tones or steady-state responses to the maskers themselves. These tuning effects originated at nonprimary auditory cortices, purportedly occupied by neurons that, without attention, have wider frequency tuning than ±1/6 octaves. The attentional tuning evolved rapidly, during the first few seconds after attention switching, and correlated with behavioral discrimination performance. In conclusion, a simple gain model alone cannot explain auditory selective attention. In nonprimary auditory cortices, attention-driven short-term plasticity retunes neurons to segregate relevant sounds from noise. PMID:21368107
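
    A rough sketch of the masker construction described above, white noise notch-filtered ±1/6 octave around a tone's center frequency, implemented here with a Butterworth band-stop filter; the filter order, sampling rate, durations, and mixing level are assumptions rather than the study's exact stimulus parameters:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def notched_noise(fc, fs=44100, dur=1.0, octaves=1 / 6, order=4, seed=0):
    """White noise with a spectral notch of +/- `octaves` around fc (Hz)."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(size=int(fs * dur))
    lo, hi = fc * 2.0 ** (-octaves), fc * 2.0 ** octaves
    sos = butter(order, [lo, hi], btype="bandstop", fs=fs, output="sos")
    return sosfiltfilt(sos, noise)

fc = 1000.0                                   # assumed tone center frequency
fs = 44100
masker = notched_noise(fc, fs=fs)
t = np.arange(masker.size) / fs
tone = np.sin(2 * np.pi * fc * t)             # the to-be-attended tone
stimulus = masker + 0.1 * tone                # mixing level is illustrative
print(stimulus.shape)
```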

  12. The role of auditory and cognitive factors in understanding speech in noise by normal-hearing older listeners

    PubMed Central

    Schoof, Tim; Rosen, Stuart

    2014-01-01

    Normal-hearing older adults often experience increased difficulties understanding speech in noise. In addition, they benefit less from amplitude fluctuations in the masker. These difficulties may be attributed to an age-related auditory temporal processing deficit. However, a decline in cognitive processing likely also plays an important role. This study examined the relative contribution of declines in both auditory and cognitive processing to the speech in noise performance in older adults. Participants included older (60–72 years) and younger (19–29 years) adults with normal hearing. Speech reception thresholds (SRTs) were measured for sentences in steady-state speech-shaped noise (SS), 10-Hz sinusoidally amplitude-modulated speech-shaped noise (AM), and two-talker babble. In addition, auditory temporal processing abilities were assessed by measuring thresholds for gap, amplitude-modulation, and frequency-modulation detection. Measures of processing speed, attention, working memory, Text Reception Threshold (a visual analog of the SRT), and reading ability were also obtained. Of primary interest was the extent to which the various measures correlate with listeners' abilities to perceive speech in noise. SRTs were significantly worse for older adults in the presence of two-talker babble but not SS and AM noise. In addition, older adults showed some cognitive processing declines (working memory and processing speed) although no declines in auditory temporal processing. However, working memory and processing speed did not correlate significantly with SRTs in babble. Despite declines in cognitive processing, normal-hearing older adults do not necessarily have problems understanding speech in noise as SRTs in SS and AM noise did not differ significantly between the two groups. Moreover, while older adults had higher SRTs in two-talker babble, this could not be explained by age-related cognitive declines in working memory or processing speed. PMID:25429266

  13. The Effect of Pulse Shaping QPSK on Bandwidth Efficiency

    NASA Technical Reports Server (NTRS)

    Purba, Josua Bisuk Mubyarto; Horan, Shelia

    1997-01-01

    This research investigates the effect of pulse shaping QPSK on bandwidth efficiency over a non-linear channel. The investigation includes software simulations and a hardware implementation. Three kinds of filters have been investigated as pulse-shaping filters: a 5th-order Butterworth filter, a 3rd-order Bessel filter, and a square-root raised-cosine filter with roll-off factors (alpha) of 0.25, 0.5, and 1. Two different high-power amplifiers, one a Traveling Wave Tube Amplifier (TWTA) and the other a Solid State Power Amplifier (SSPA), have been investigated in the hardware implementation. A significant improvement in the bandwidth utilization (rho) for the filtered data compared to unfiltered data through the non-linear channel is shown in the results. This method promises strong performance gains in a bandlimited channel when compared to unfiltered systems. This work was conducted at NMSU in the Center for Space Telemetering and Telecommunications Systems in the Klipsch School of Electrical and Computer Engineering and is supported by a grant from the National Aeronautics and Space Administration (NASA), NAG5-1491.
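
    A minimal sketch of one of the configurations above, QPSK symbols shaped by a 5th-order Butterworth low-pass filter; the oversampling factor, cutoff, and symbol count are assumptions, and the nonlinear amplifier stage is not modeled:

```python
import numpy as np
from scipy.signal import butter, lfilter

# QPSK symbols -> upsample -> 5th-order Butterworth pulse-shaping filter.
# Oversampling factor, cutoff, and symbol count are illustrative assumptions.
rng = np.random.default_rng(1)
n_sym, sps = 256, 8                                  # symbols, samples per symbol
bits = rng.integers(0, 2, size=(n_sym, 2))
symbols = (2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)   # QPSK constellation
x = np.zeros(n_sym * sps, dtype=complex)
x[::sps] = symbols                                   # impulse train at the symbol rate

b, a = butter(5, 1.0 / sps)                          # cutoff ~ half the symbol rate (normalized)
shaped = lfilter(b, a, x)                            # band-limited baseband waveform
papr = np.max(np.abs(shaped) ** 2) / np.mean(np.abs(shaped) ** 2)
print(f"peak-to-average power ratio: {papr:.2f}")
```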

  14. Segmentation of risk structures for otologic surgery using the Probabilistic Active Shape Model (PASM)

    NASA Astrophysics Data System (ADS)

    Becker, Meike; Kirschner, Matthias; Sakas, Georgios

    2014-03-01

    Our research project investigates a multi-port approach for minimally-invasive otologic surgery. For planning such a surgery, an accurate segmentation of the risk structures is crucial. However, the segmentation of these risk structures is a challenging task: The anatomical structures are very small and some have a complex shape, low contrast and vary both in shape and appearance. Therefore, prior knowledge is needed which is why we apply model-based approaches. In the present work, we use the Probabilistic Active Shape Model (PASM), which is a more flexible and specific variant of the Active Shape Model (ASM), to segment the following risk structures: cochlea, semicircular canals, facial nerve, chorda tympani, ossicles, internal auditory canal, external auditory canal and internal carotid artery. For the evaluation we trained and tested the algorithm on 42 computed tomography data sets using leave-one-out tests. Visual assessment of the results shows in general a good agreement of manual and algorithmic segmentations. Further, we achieve a good Average Symmetric Surface Distance while the maximum error is comparatively large due to low contrast at start and end points. Last, we compare the PASM to the standard ASM and show that the PASM leads to a higher accuracy.

  15. Investigation of FPGA-Based Real-Time Adaptive Digital Pulse Shaping for High-Count-Rate Applications

    NASA Astrophysics Data System (ADS)

    Saxena, Shefali; Hawari, Ayman I.

    2017-07-01

    Digital signal processing techniques have been widely used in radiation spectrometry to provide improved stability and performance with compact physical size compared to traditional analog signal processing. In this paper, field-programmable gate array (FPGA)-based adaptive digital pulse shaping techniques are investigated for real-time signal processing. A National Instruments (NI) 5761 14-bit, 250-MS/s adapter module is used to digitize the preamplifier pulses of a high-purity germanium (HPGe) detector. Digital pulse processing algorithms are implemented on the NI PXIe-7975R reconfigurable FPGA (Kintex-7) using the LabVIEW FPGA module. Based on the time separation between successive input pulses, the adaptive shaping algorithm selects the optimum shaping parameters (rise time and flat-top time of the trapezoid-shaping filter) for each incoming signal. A digital Sallen-Key low-pass filter is implemented to enhance the signal-to-noise ratio and reduce baseline drifting in trapezoid shaping. A recursive trapezoid-shaping filter algorithm is employed for pole-zero compensation of the exponentially decaying (with two decay constants) preamplifier pulses of an HPGe detector. It allows extraction of pulse height information at the beginning of each pulse, thereby reducing pulse pileup and increasing throughput. The algorithms for the RC-CR2 timing filter, baseline restoration, pile-up rejection, and pulse height determination are digitally implemented for radiation spectroscopy. Traditionally, under high-count-rate conditions, a shorter shaping time is preferred to achieve high throughput, which deteriorates energy resolution. In this paper, experimental results are presented for varying count-rate and pulse shaping conditions. With adaptive shaping, increased throughput is achieved while preserving the energy resolution observed with longer shaping times.
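
    A sketch of a standard recursive trapezoid shaper with pole-zero compensation for a single exponential decay (the paper's algorithm handles two decay constants and runs on an FPGA; the rise time, flat top, and decay constant below are assumed values):

```python
import numpy as np

def trapezoid_shaper(v, k, l, tau):
    """Recursive trapezoidal shaping of a sampled preamplifier signal v.
    k = rise time in samples, l = k + flat top in samples, tau = decay constant
    in samples (single exponential; the paper compensates two decay constants)."""
    M = 1.0 / (np.exp(1.0 / tau) - 1.0)          # pole-zero compensation factor
    n = v.size
    d = np.zeros(n)
    for i in range(n):                            # d(n) = v(n) - v(n-k) - v(n-l) + v(n-k-l)
        d[i] = (v[i]
                - (v[i - k] if i >= k else 0.0)
                - (v[i - l] if i >= l else 0.0)
                + (v[i - k - l] if i >= k + l else 0.0))
    p = np.cumsum(d)                              # first accumulator
    s = np.cumsum(p + M * d)                      # second accumulator -> trapezoid
    return s / (k * (M + 1.0))                    # flat top then equals the pulse amplitude

# Synthetic exponentially decaying pulse (illustrative parameters).
tau, amp = 500.0, 1.0
t = np.arange(4000)
v = np.zeros_like(t, dtype=float)
v[1000:] = amp * np.exp(-(t[1000:] - 1000) / tau)
out = trapezoid_shaper(v, k=100, l=150, tau=tau)
print("flat-top amplitude estimate:", out.max())
```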

  16. A Filtering Approach for Image-Guided Surgery With a Highly Articulated Surgical Snake Robot.

    PubMed

    Tully, Stephen; Choset, Howie

    2016-02-01

    The objective of this paper is to introduce a probabilistic filtering approach to estimate the pose and internal shape of a highly flexible surgical snake robot during minimally invasive surgery. Our approach renders a depiction of the robot that is registered to preoperatively reconstructed organ models to produce a 3-D visualization that can be used for surgical feedback. Our filtering method estimates the robot shape using an extended Kalman filter that fuses magnetic tracker data with kinematic models that define the motion of the robot. Using Lie derivative analysis, we show that this estimation problem is observable, and thus, the shape and configuration of the robot can be successfully recovered with a sufficient number of magnetic tracker measurements. We validate this study with benchtop and in-vivo image-guidance experiments in which the surgical robot was driven along the epicardial surface of a porcine heart. This paper introduces a filtering approach for shape estimation that can be used for image guidance during minimally invasive surgery. The methods being introduced in this paper enable informative image guidance for highly articulated surgical robots, which benefits the advancement of robotic surgery.
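
    A generic extended Kalman filter predict/update skeleton of the kind referred to above; the robot's kinematic motion model and magnetic-tracker measurement model are not reproduced here, so the placeholder models f and h below are assumptions for illustration only:

```python
import numpy as np

def ekf_step(x, P, u, z, f, F_jac, h, H_jac, Q, R):
    """One extended Kalman filter cycle: predict with motion model f,
    then update with measurement z through measurement model h."""
    # Predict
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q
    # Update
    H = H_jac(x_pred)
    innovation = z - h(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x_pred + K @ innovation
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Placeholder random-walk motion model and direct (identity) measurement model;
# these stand in for the robot's kinematic and magnetic-tracker models.
f = lambda x, u: x + u
F_jac = lambda x, u: np.eye(len(x))
h = lambda x: x
H_jac = lambda x: np.eye(len(x))

x, P = np.zeros(3), np.eye(3)
Q, R = 0.01 * np.eye(3), 0.05 * np.eye(3)
z = np.array([0.10, -0.05, 0.02])                 # e.g. one tracker position reading
x, P = ekf_step(x, P, np.zeros(3), z, f, F_jac, h, H_jac, Q, R)
print(x)
```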

  17. Reducing Conservatism of Analytic Transient Response Bounds via Shaping Filters

    NASA Technical Reports Server (NTRS)

    Kwan, Aiyueh; Bedrossian, Nazareth; Jan, Jiann-Woei; Grigoriadis, Karolos; Hua, Tuyen (Technical Monitor)

    1999-01-01

    Recent results show that the peak transient response of a linear system to bounded energy inputs can be computed using the energy-to-peak gain of the system. However, the analytically computed peak response bound can be conservative for a class of bounded energy signals, specifically pulse trains generated from jet firings encountered in space vehicles. In this paper, shaping filters are proposed as a methodology to reduce the conservatism of analytic peak response bounds. This methodology was applied to a realistic Space Station assembly operation subject to jet firings. The results indicate that shaping filters indeed reduce the predicted peak response bounds.
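
    A small sketch of the bound in question: for a stable state-space system (A, B, C), the energy-to-peak gain can be computed from the controllability Gramian, and appending a shaping filter to the input channel before computing the gain encodes knowledge of the input's spectral content; the plant and the first-order shaping filter below are assumed values, not the Space Station model:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def energy_to_peak_gain(A, B, C):
    """gamma such that peak |y(t)| <= gamma * (input energy)^(1/2) for a stable
    LTI system xdot = A x + B u, y = C x."""
    P = solve_continuous_lyapunov(A, -B @ B.T)    # A P + P A^T + B B^T = 0
    return float(np.sqrt(np.max(np.linalg.eigvalsh(C @ P @ C.T))))

# Illustrative lightly damped plant (assumed values).
A = np.array([[0.0, 1.0], [-4.0, -0.4]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
print("unshaped bound:", energy_to_peak_gain(A, B, C))

# Append a first-order low-pass shaping filter W(s) = wc / (s + wc) on the input
# (series interconnection) to model the known frequency content of the input.
wc = 1.0
A_aug = np.block([[A, B * wc], [np.zeros((1, 2)), np.array([[-wc]])]])
B_aug = np.vstack([np.zeros((2, 1)), np.array([[1.0]])])
C_aug = np.hstack([C, np.zeros((1, 1))])
print("shaped bound:  ", energy_to_peak_gain(A_aug, B_aug, C_aug))
```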

  18. Perceptual integration of faces and voices depends on the interaction of emotional content and spatial frequency.

    PubMed

    Kokinous, Jenny; Tavano, Alessandro; Kotz, Sonja A; Schröger, Erich

    2017-02-01

    The role of spatial frequencies (SF) is highly debated in emotion perception, but previous work suggests the importance of low SFs for detecting emotion in faces. Furthermore, emotion perception essentially relies on the rapid integration of multimodal information from faces and voices. We used EEG to test the functional relevance of SFs in the integration of emotional and non-emotional audiovisual stimuli. While viewing dynamic face-voice pairs, participants were asked to identify auditory interjections, and the electroencephalogram (EEG) was recorded. Audiovisual integration was measured as auditory facilitation, indexed by the extent of the auditory N1 amplitude suppression in the audiovisual compared to the auditory-only condition. We found an interaction of SF filtering and emotion in the auditory response suppression. For neutral faces, larger N1 suppression ensued in the unfiltered and high SF conditions as compared to the low SF condition. Angry face perception led to a larger N1 suppression in the low SF condition. While the results for the neutral faces indicate that perceptual quality in terms of SF content plays a major role in audiovisual integration, the results for angry faces suggest that early multisensory integration of emotional information favors low SF neural processing pathways, overruling the predictive value of the visual signal per se. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Integrating Information from Different Senses in the Auditory Cortex

    PubMed Central

    King, Andrew J.; Walker, Kerry M.M.

    2015-01-01

    Multisensory integration was once thought to be the domain of brain areas high in the cortical hierarchy, with early sensory cortical fields devoted to unisensory processing of inputs from their given set of sensory receptors. More recently, a wealth of evidence documenting visual and somatosensory responses in auditory cortex, even as early as the primary fields, has changed this view of cortical processing. These multisensory inputs may serve to enhance responses to sounds that are accompanied by other sensory cues, effectively making them easier to hear, but may also act more selectively to shape the receptive field properties of auditory cortical neurons to the location or identity of these events. We discuss the new, converging evidence that multiplexing of neural signals may play a key role in informatively encoding and integrating signals in auditory cortex across multiple sensory modalities. We highlight some of the many open research questions that exist about the neural mechanisms that give rise to multisensory integration in auditory cortex, which should be addressed in future experimental and theoretical studies. PMID:22798035

  20. Temporal Organization of Sound Information in Auditory Memory.

    PubMed

    Song, Kun; Luo, Huan

    2017-01-01

    Memory is a constructive and organizational process. Instead of being stored with all the fine details, external information is reorganized and structured at certain spatiotemporal scales. It is well acknowledged that time plays a central role in audition by segmenting sound inputs into temporal chunks of appropriate length. However, it remains largely unknown whether critical temporal structures exist to mediate sound representation in auditory memory. To address the issue, here we designed an auditory memory transferring study, by combining a previously developed unsupervised white noise memory paradigm with a reversed sound manipulation method. Specifically, we systematically measured the memory transferring from a random white noise sound to its locally temporal reversed version on various temporal scales in seven experiments. We demonstrate a U-shape memory-transferring pattern with the minimum value around temporal scale of 200 ms. Furthermore, neither auditory perceptual similarity nor physical similarity as a function of the manipulating temporal scale can account for the memory-transferring results. Our results suggest that sounds are not stored with all the fine spectrotemporal details but are organized and structured at discrete temporal chunks in long-term auditory memory representation.
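
    The stimulus manipulation described above, reversing a sound locally within fixed-length temporal chunks while keeping chunk order intact, can be sketched directly; the sampling rate and chunk duration are assumptions:

```python
import numpy as np

def locally_reverse(sound, fs, chunk_ms):
    """Reverse a waveform inside successive chunks of `chunk_ms` milliseconds,
    leaving the order of the chunks themselves intact."""
    n = int(round(fs * chunk_ms / 1000.0))
    out = sound.copy()
    for start in range(0, len(sound), n):
        out[start:start + n] = out[start:start + n][::-1]
    return out

fs = 16000                                    # assumed sampling rate
rng = np.random.default_rng(0)
noise = rng.normal(size=5 * fs)               # 5 s white-noise memory stimulus
reversed_200ms = locally_reverse(noise, fs, chunk_ms=200)   # scale near the U-shape minimum
print(reversed_200ms.shape)
```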

  1. Evaluation of an auditory model for echo delay accuracy in wideband biosonar.

    PubMed

    Sanderson, Mark I; Neretti, Nicola; Intrator, Nathan; Simmons, James A

    2003-09-01

    In a psychophysical task with echoes that jitter in delay, big brown bats can detect changes as small as 10-20 ns at an echo signal-to-noise ratio of approximately 49 dB and 40 ns at approximately 36 dB. This performance can be achieved with ideal coherent processing of the wideband echoes, but it is widely assumed that the bat's peripheral auditory system is incapable of encoding signal waveforms to represent delay with the requisite precision or phase at ultrasonic frequencies. This assumption was examined by modeling inner-ear transduction with a bank of parallel bandpass filters followed by low-pass smoothing. Several versions of the filterbank model were tested to learn how the smoothing filters, which are the most critical parameter for controlling the coherence of the representation, affect replication of the bat's performance. When tested at a signal-to-noise ratio of 36 dB, the model achieved a delay acuity of 83 ns using a second-order smoothing filter with a cutoff frequency of 8 kHz. The same model achieved a delay acuity of 17 ns when tested with a signal-to-noise ratio of 50 dB. Jitter detection thresholds were an order of magnitude worse than the bat for fifth-order smoothing or for lower cutoff frequencies. Most surprising is that effectively coherent reception is possible with filter cutoff frequencies well below any of the ultrasonic frequencies contained in the bat's sonar sounds. The results suggest that only a modest rise in the frequency response of smoothing in the bat's inner ear can confer full phase sensitivity on subsequent processing and account for the bat's fine acuity for delay.
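
    A rough sketch of the model structure described above, a bank of band-pass filters followed by half-wave rectification and second-order low-pass smoothing at 8 kHz; the center frequencies, bandwidths, sampling rate, and test sweep are assumptions rather than the study's parameters:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def filterbank_envelopes(x, fs, centers, smooth_hz=8000.0, q=5.0):
    """Band-pass filterbank -> half-wave rectification -> 2nd-order low-pass smoothing."""
    smooth = butter(2, smooth_hz, btype="low", fs=fs, output="sos")
    envs = []
    for fc in centers:
        bp = butter(2, [fc * (1 - 1 / (2 * q)), fc * (1 + 1 / (2 * q))],
                    btype="bandpass", fs=fs, output="sos")
        band = sosfilt(bp, x)
        envs.append(sosfilt(smooth, np.maximum(band, 0.0)))   # rectify, then smooth
    return np.array(envs)

fs = 250000                                   # assumed rate high enough for ultrasound
t = np.arange(int(0.002 * fs)) / fs
# Stand-in FM sweep from 100 kHz down to 25 kHz over 2 ms (illustrative, not a bat call).
chirp = np.sin(2 * np.pi * (100e3 * t - (75e3 / (2 * 0.002)) * t ** 2))
centers = np.linspace(30e3, 90e3, 7)
envelopes = filterbank_envelopes(chirp, fs, centers)
print(envelopes.shape)
```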

  2. Habituation deficit of auditory N100m in patients with fibromyalgia.

    PubMed

    Choi, W; Lim, M; Kim, J S; Chung, C K

    2016-11-01

    Habituation refers to the brain's inhibitory mechanism against sensory overload and its brain correlate has been investigated in the form of a well-defined event-related potential, N100 (N1). Fibromyalgia is an extensively described chronic pain syndrome with concurrent manifestations of reduced tolerance and enhanced sensation of painful and non-painful stimulation, suggesting an association with central amplification of all sensory domains. Among diverse sensory modalities, we utilized repetitive auditory stimulation to explore the anomalous sensory information processing in fibromyalgia as evidenced by N1 habituation. Auditory N1 was assessed in 19 fibromyalgia patients and 21 age-, education- and gender-matched healthy control subjects under the duration-deviant passive oddball paradigm and magnetoencephalography recording. The brain signal of the first standard stimulus (following each deviant) and last standard stimulus (preceding each deviant) were analysed to identify N1 responses. N1 amplitude difference and adjusted amplitude ratio were computed as habituation indices. Fibromyalgia patients showed lower N1 amplitude difference (left hemisphere: p = 0.004; right hemisphere: p = 0.034) and adjusted N1 amplitude ratio (left hemisphere: p = 0.001; right hemisphere: p = 0.052) than healthy control subjects, indicating deficient auditory habituation. Further, augmented N1 amplitude pattern (p = 0.029) during the stimulus repetition was observed in fibromyalgia patients. Fibromyalgia patients failed to demonstrate auditory N1 habituation to repetitively presented stimuli, which indicates their compromised early auditory information processing. Our findings provide neurophysiological evidence of inhibitory failure and cortical augmentation in fibromyalgia. WHAT'S ALREADY KNOWN ABOUT THIS TOPIC?: Fibromyalgia has been associated with altered filtering of irrelevant somatosensory input. However, whether this abnormality can extend to the auditory sensory system remains controversial. N100, an event-related potential, has been widely utilized to assess the brain's habituation capacity against sensory overload. WHAT DOES THIS STUDY ADD?: Fibromyalgia patients showed a deficit in N100 habituation to repetitive auditory stimuli, indicating compromised early auditory functioning. This study identified deficient inhibitory control over irrelevant auditory stimuli in fibromyalgia. © 2016 European Pain Federation - EFIC®.

  3. Auditory capacities in Middle Pleistocene humans from the Sierra de Atapuerca in Spain.

    PubMed

    Martínez, I; Rosa, M; Arsuaga, J-L; Jarabo, P; Quam, R; Lorenzo, C; Gracia, A; Carretero, J-M; Bermúdez de Castro, J-M; Carbonell, E

    2004-07-06

    Human hearing differs from that of chimpanzees and most other anthropoids in maintaining a relatively high sensitivity from 2 kHz up to 4 kHz, a region that contains relevant acoustic information in spoken language. Knowledge of the auditory capacities in human fossil ancestors could greatly enhance the understanding of when this human pattern emerged during the course of our evolutionary history. Here we use a comprehensive physical model to analyze the influence of skeletal structures on the acoustic filtering of the outer and middle ears in five fossil human specimens from the Middle Pleistocene site of the Sima de los Huesos in the Sierra de Atapuerca of Spain. Our results show that the skeletal anatomy in these hominids is compatible with a human-like pattern of sound power transmission through the outer and middle ear at frequencies up to 5 kHz, suggesting that they already had auditory capacities similar to those of living humans in this frequency range.

  4. Auditory capacities in Middle Pleistocene humans from the Sierra de Atapuerca in Spain

    PubMed Central

    Martínez, I.; Rosa, M.; Arsuaga, J.-L.; Jarabo, P.; Quam, R.; Lorenzo, C.; Gracia, A.; Carretero, J.-M.; de Castro, J.-M. Bermúdez; Carbonell, E.

    2004-01-01

    Human hearing differs from that of chimpanzees and most other anthropoids in maintaining a relatively high sensitivity from 2 kHz up to 4 kHz, a region that contains relevant acoustic information in spoken language. Knowledge of the auditory capacities in human fossil ancestors could greatly enhance the understanding of when this human pattern emerged during the course of our evolutionary history. Here we use a comprehensive physical model to analyze the influence of skeletal structures on the acoustic filtering of the outer and middle ears in five fossil human specimens from the Middle Pleistocene site of the Sima de los Huesos in the Sierra de Atapuerca of Spain. Our results show that the skeletal anatomy in these hominids is compatible with a human-like pattern of sound power transmission through the outer and middle ear at frequencies up to 5 kHz, suggesting that they already had auditory capacities similar to those of living humans in this frequency range. PMID:15213327

  5. Pulse shaping system research of CdZnTe radiation detector for high energy x-ray diagnostic

    NASA Astrophysics Data System (ADS)

    Li, Miao; Zhao, Mingkun; Ding, Keyu; Zhou, Shousen; Zhou, Benjie

    2018-02-01

    As a typical wide band-gap semiconductor material, CdZnTe offers high detection efficiency and excellent energy resolution for hard X-rays and gamma rays. The signal generated by a CdZnTe detector needs to be transformed into a pseudo-Gaussian pulse with a small pulse width to remove noise and improve the energy resolution achieved by the downstream nuclear spectrometry data acquisition system. In this paper, a multi-stage pseudo-Gaussian shaping filter is investigated based on nuclear electronics principles. Optimized circuit parameters were obtained from an analysis of the characteristics of the pseudo-Gaussian shaping filter in simulations. Based on the simulation results, the fall time of the output pulse decreased and a faster response was obtained with a decreasing shaping time τs-k, and the undershoot was removed when the ratio of the input resistors was set to 1 to 2.5. A two-stage Sallen-Key Gaussian shaping filter was then designed and fabricated using a low-noise voltage-feedback operational amplifier (LMH6628). A detection experiment platform was built using the precision pulse generator CAKE831 to produce imitation radiation pulses equivalent to the signal of a CdZnTe semiconductor detector. Experimental results show that the output pulse of the two-stage pseudo-Gaussian shaping filter has a minimum pulse width (FWHM) of 200 ns, and the output pulse of each stage was consistent with the simulation results. Based on this performance, the multi-stage pseudo-Gaussian shaping filter can reduce event loss caused by pile-up in a CdZnTe semiconductor detector and effectively improve the energy resolution.
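    A hedged sketch of the shaping idea described above: a CR differentiator followed by several RC integrators turns an exponentially decaying detector pulse into a narrow pseudo-Gaussian pulse. The time constant, stage count, and decay time below are assumptions for illustration, not the reported Sallen-Key component values.

    ```python
    import numpy as np
    from scipy.signal import TransferFunction, lsim

    # Time is expressed in microseconds to keep the polynomial coefficients well scaled.
    tau = 0.1                                # shaping time constant, 100 ns (assumed)
    n_int = 4                                # number of RC integrator stages (assumed)
    t = np.linspace(0.0, 5.0, 5001)          # 0-5 microseconds
    pulse = np.exp(-t / 1.0)                 # exponentially decaying detector pulse, 1 us decay

    # CR differentiator followed by n identical RC integrators:
    #   H(s) = (s * tau) / (1 + s * tau)^(n_int + 1)
    num = [tau, 0.0]
    den = [1.0]
    for _ in range(n_int + 1):
        den = np.polymul(den, [tau, 1.0])
    shaper = TransferFunction(num, den)

    _, shaped, _ = lsim(shaper, U=pulse, T=t)
    dt = t[1] - t[0]
    fwhm_us = np.count_nonzero(shaped >= shaped.max() / 2) * dt   # approximate FWHM
    print(f"shaped-pulse FWHM ~ {fwhm_us * 1000:.0f} ns")
    ```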

  6. Modification of computational auditory scene analysis (CASA) for noise-robust acoustic feature

    NASA Astrophysics Data System (ADS)

    Kwon, Minseok

    While there have been many attempts to mitigate interference from background noise, the performance of automatic speech recognition (ASR) can still be easily degraded by various factors. Normal-hearing listeners, however, can accurately perceive the sounds they attend to, which is believed to be a result of Auditory Scene Analysis (ASA). As a first attempt, a simulation of human auditory processing, called computational auditory scene analysis (CASA), was built on physiological and psychological investigations of ASA. The CASA front end comprised the Zilany-Bruce auditory model, followed by fundamental-frequency tracking for voiced segmentation and detection of onset/offset pairs at each characteristic frequency (CF) for unvoiced segmentation. The resulting time-frequency (T-F) representation of the acoustic stimulation was converted into an acoustic feature, gammachirp-tone frequency cepstral coefficients (GFCC). Eleven keywords recorded under various environmental conditions were used, and the robustness of GFCC was evaluated by spectral distance (SD) and dynamic time warping distance (DTW). In "clean" and "noisy" conditions, the application of CASA generally improved the noise robustness of the acoustic feature compared to a conventional method with or without noise suppression using an MMSE estimator. The initial study, however, not only showed noise-type dependency at low SNR but also called the evaluation methods into question. Modifications were therefore made to capture better spectral continuity from the acoustic feature matrix, to obtain faster processing, and to describe the human auditory system more precisely. The proposed framework includes: 1) multi-scale integration to capture more accurate continuity in feature extraction, 2) contrast enhancement (CE) of each CF by competition with neighboring frequency bands, and 3) auditory model modifications. The model modifications include a higher Q factor, a middle-ear filter more analogous to the human auditory system, regulation of the time-constant update for filters in the signal/control paths, and level-independent frequency glides with fixed frequency modulation. First, we examined performance in keyword recognition using the proposed methods in quiet and noise-corrupted environments. The results indicate that multi-scale integration should be used along with CE to avoid ambiguous continuity in unvoiced segments. Moreover, including all of the modifications was observed to guarantee noise-type-independent robustness, particularly under severe interference. The CASA with the auditory model was also implemented in a single/dual-channel ASR using the reference TIMIT corpus to obtain a more general result. The Hidden Markov Model Toolkit (HTK) was used for phone recognition in various environmental conditions. In a single-channel ASR, the results indicate that unmasked acoustic features (unmasked GFCC) should be combined with target estimates from the mask to compensate for missing information. From observation of a dual-channel ASR, the combined GFCC gives the highest performance regardless of interference within speech. Moreover, the consistent improvement of noise robustness by GFCC (unmasked or combined) shows the validity of the proposed CASA implementation in a dual-microphone system. In conclusion, the proposed framework demonstrates the robustness of the acoustic features under various background interferences via both direct distance evaluation and statistical assessment. In addition, the introduction of a dual-microphone system using this framework shows the potential for effective implementation of auditory-model-based CASA in ASR.
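    The gammachirp-tone cepstral feature of the study is not reproduced here, but the following simplified sketch illustrates the general recipe of gammatone-style cepstral coefficients: filter the signal with a bank of gammatone impulse responses, take log band energies, and apply a DCT. The filter count, center-frequency spacing, and whole-utterance (frame-free) energy computation are simplifying assumptions.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve
    from scipy.fft import dct

    def erb(f_hz):
        return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

    def gammatone_ir(fc, fs, dur=0.05, order=4):
        t = np.arange(int(dur * fs)) / fs
        b = 1.019 * erb(fc)                       # standard gammatone bandwidth scaling
        return t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)

    def gfcc_like(x, fs, n_bands=32, n_ceps=13):
        """Log gammatone band energies followed by a DCT (whole-utterance, frame-free)."""
        fcs = np.geomspace(50.0, 0.45 * fs, n_bands)   # log-spaced centers (assumed)
        energies = [np.mean(fftconvolve(x, gammatone_ir(fc, fs), mode="same") ** 2)
                    for fc in fcs]
        return dct(np.log(np.asarray(energies) + 1e-12), type=2, norm="ortho")[:n_ceps]

    # Example: features for 0.5 s of noise at a 16 kHz sampling rate
    print(gfcc_like(np.random.randn(8000), 16000.0))
    ```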

  7. Auditory processing deficits in growth restricted fetuses affect later language development.

    PubMed

    Kisilevsky, Barbara S; Davies, Gregory A L

    2007-01-01

    An increased risk for language deficits in infants born growth restricted has been reported in follow-up studies for more than 20 years, suggesting a relation between fetal auditory system development and later language learning. Work with animal models indicates that there are at least two ways in which growth restriction could affect the development of auditory perception in human fetuses: a delay in myelination or conduction and an increase in sensorineural threshold. Systematic study of auditory function in growth restricted human fetuses has not been reported. However, results of studies of low-risk fetuses later delivered as healthy full-term infants demonstrate that, by late gestation, the fetus can hear, sound properties modulate behavior, and sensory information is available from both inside (e.g., maternal vascular sounds) and outside (e.g., noise, voices, music) the maternal body. These data provide substantive evidence that the auditory system is functioning and that environmental sounds are available for shaping neural networks and laying the foundation for language acquisition before birth. We hypothesize that fetal growth restriction affects auditory system development, resulting in atypical auditory information processing compared to healthy, appropriately-grown-for-gestational-age fetuses. Speech perception, which lays the foundation for later language competence, will differ in growth restricted compared to normally grown fetuses and be associated with later language abilities.

  8. Tympanal spontaneous oscillations reveal mechanisms for the control of amplified frequency in tree crickets

    NASA Astrophysics Data System (ADS)

    Mhatre, Natasha; Robert, Daniel

    2018-05-01

    Tree cricket hearing shows all the features of an actively amplified auditory system, particularly spontaneous oscillations (SOs) of the tympanal membrane. As expected from an actively amplified auditory system, SO frequency and the peak frequency in evoked responses as observed in sensitivity spectra are correlated. Sensitivity spectra also show compressive non-linearity at this frequency, i.e. a reduction in peak height and sharpness with increasing stimulus amplitude. Both SO and amplified frequency also change with ambient temperature, allowing the auditory system to maintain a filter that is matched to song frequency. In tree crickets, remarkably, song frequency varies with ambient temperature. Interestingly, active amplification has been reported to be switched ON and OFF. The mechanism of this switch is as yet unknown. In order to gain insights into this switch, we recorded and analysed SOs as the auditory system transitioned from the passive (OFF) state to the active (ON) state. We found that while SO amplitude did not follow a fixed pattern, SO frequency changed during the ON-OFF transition. SOs were first detected above noise levels at low frequencies, sometimes well below the known song frequency range (0.5-1 kHz lower). SO frequency was observed to increase over the next ˜30 minutes, in the absence of any ambient temperature change, before settling at a frequency within the range of conspecific song. We examine the frequency shift in SO spectra with temperature and during the ON/OFF transition and discuss the mechanistic implications. To our knowledge, such modulation of active auditory amplification, and its dynamics are unique amongst auditory animals.

  9. SU-C-207-06: In Vivo Quantification of Gold Nanoparticles Using K-Edge Imaging Via Spectrum Shaping by Gold Filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, H; Cormack, R; Bhagwat, M

    Purpose: Gold nanoparticles (AuNPs) are multifunctional platforms ideal for drug delivery, targeted imaging and radiosensitization. We have investigated quantitative imaging of AuNPs using on-board imager (OBI) cone beam computed tomography (CBCT). To this end, we also present, for the first time, a novel method for K-edge imaging of AuNPs by filter-based spectral shaping. Methods: We used a digital 25 cm diameter water phantom, embedded with 3 cm spheres filled with AuNPs of different concentrations (0 mg/ml to 16 mg/ml). A poly-energetic 140 kVp X-ray spectrum from a conventional X-ray tube is shaped by balanced K-edge filters to create an excess of photons just above the K-edge of gold at 80.7 keV. The filters consist of gold, tin, copper and aluminum foils. The phantom, with appropriately assigned attenuation coefficients, is forward projected onto a detector for each energy bin and then integrated. FDK reconstruction is performed on the integrated projections. Scatter, detector efficiency and noise are included. Results: We found that subtracting the results of two filter sets (Filter A: 127 µm gold foil with 254 µm tin, 330 µm copper and 1 mm aluminum; Filter B: 635 µm tin with 264 µm copper and 1 mm aluminum) provides substantial image contrast. The resulting filtered spectra match well below 80.7 keV while maintaining sufficient X-ray quanta just above it. Voxel intensities of AuNP-containing spheres increase linearly with AuNP concentration. K-edge imaging provides 18% more sensitivity than the tin filter alone, and 38% more sensitivity than the gold filter alone. Conclusion: We have shown that it is feasible to quantitatively detect AuNP distributions in a patient-sized phantom using clinical CBCT and K-edge spectral shaping.

  10. Advanced photonic filters based on cascaded Sagnac loop reflector resonators in silicon-on-insulator nanowires

    NASA Astrophysics Data System (ADS)

    Wu, Jiayang; Moein, Tania; Xu, Xingyuan; Moss, David J.

    2018-04-01

    We demonstrate advanced integrated photonic filters in silicon-on-insulator (SOI) nanowires implemented by cascaded Sagnac loop reflector (CSLR) resonators. We investigate mode splitting in these standing-wave (SW) resonators and demonstrate its use for engineering the spectral profile of on-chip photonic filters. By changing the reflectivity of the Sagnac loop reflectors (SLRs) and the phase shifts along the connecting waveguides, we tailor mode splitting in the CSLR resonators to achieve a wide range of filter shapes for diverse applications including enhanced light trapping, flat-top filtering, Q factor enhancement, and signal reshaping. We present the theoretical designs and compare the CSLR resonators with three, four, and eight SLRs fabricated in SOI. We achieve versatile filter shapes in the measured transmission spectra via diverse mode splitting that agree well with theory. This work confirms the effectiveness of using CSLR resonators as integrated multi-functional SW filters for flexible spectral engineering.

  11. The effect of filtered speech feedback on the frequency of stuttering

    NASA Astrophysics Data System (ADS)

    Rami, Manish Krishnakant

    2000-10-01

    This study investigated the effects of filtered components of speech and whispered speech on the frequency of stuttering. Choral speech, shadowing, and altered auditory feedback are the only known conditions that induce fluency in people who stutter without requiring any effort beyond that normally needed to speak. All of these conditions use speech as a second signal. This experiment examined the role of components of the speech signal as delineated by the source-filter theory of speech production. Three filtered speech signals, a whispered speech signal, and a choral speech signal formed the stimuli. It was postulated that if the speech signal as a whole were necessary for producing fluency in people who stutter, then all conditions except choral speech should fail to enhance fluency. If the glottal source alone were adequate to restore fluency, then only the NAF and whispered speech conditions should fail to promote fluency. If the full filter characteristics were necessary for the fluency-creating effects, then all conditions except choral speech and whispered speech should fail to produce fluency. If any part of the filter characteristics were sufficient to yield fluency, then only NAF and the approximate glottal source should fail to show an increase in fluency. Twelve adults who stuttered read passages while receiving auditory feedback consisting of one of six experimental conditions: (a) NAF; (b) approximate glottal source; (c) glottal source and first formant; (d) glottal source and first two formants; (e) whispered speech; and (f) choral speech. Frequencies of stuttering were obtained for each condition and submitted to descriptive and inferential statistical analysis. Statistically significant differences in means were found among the feedback conditions. Specifically, the choral speech, source and first formant, source and first two formants, and whispered speech conditions all decreased the frequency of stuttering, while the approximate glottal source did not. It is suggested that articulatory events, chiefly the encoded speech output of vocal tract origin, afford effective cues and induce fluent speech in people who stutter.

  12. GABAergic Local Interneurons Shape Female Fruit Fly Response to Mating Songs.

    PubMed

    Yamada, Daichi; Ishimoto, Hiroshi; Li, Xiaodong; Kohashi, Tsunehiko; Ishikawa, Yuki; Kamikouchi, Azusa

    2018-05-02

    Many animals use acoustic signals to attract a potential mating partner. In fruit flies (Drosophila melanogaster), the courtship pulse song has a species-specific interpulse interval (IPI) that activates mating. Although a series of auditory neurons in the fly brain exhibit different tuning patterns to IPIs, it is unclear how the response of each neuron is tuned. Here, we studied the neural circuitry regulating the activity of antennal mechanosensory and motor center (AMMC)-B1 neurons, key secondary auditory neurons in the excitatory neural pathway that relays song information. By performing Ca2+ imaging in female flies, we found that the IPI selectivity observed in AMMC-B1 neurons differs from that of upstream auditory sensory neurons [Johnston's organ (JO)-B]. Selective knock-down of a GABAA receptor subunit in AMMC-B1 neurons increased their response to short IPIs, suggesting that GABA suppresses AMMC-B1 activity at these IPIs. Connection mapping identified two GABAergic local interneurons that synapse with AMMC-B1 and JO-B. Ca2+ imaging combined with neuronal silencing revealed that these local interneurons, AMMC-LN and AMMC-B2, shape the response pattern of AMMC-B1 neurons at a 15 ms IPI. Neuronal silencing studies further suggested that both GABAergic local interneurons suppress the behavioral response to artificial pulse songs in flies, particularly those with a 15 ms IPI. Altogether, we identified a circuit containing two GABAergic local interneurons that affects the temporal tuning of AMMC-B1 neurons in the song relay pathway and the behavioral response to the courtship song. Our findings suggest that feedforward inhibitory pathways adjust the behavioral response to courtship pulse songs in female flies. SIGNIFICANCE STATEMENT: To understand how the brain detects time intervals between sound elements, we studied the neural pathway that relays species-specific courtship song information in female Drosophila melanogaster. We demonstrate that signal transmission from auditory sensory neurons to the key secondary auditory neurons, antennal mechanosensory and motor center (AMMC)-B1, is the first step in generating the time-interval selectivity of neurons in the song relay pathway. Two GABAergic local interneurons are suggested to shape the interval selectivity of AMMC-B1 neurons by receiving auditory inputs and, in turn, providing feedforward inhibition onto AMMC-B1 neurons. Furthermore, these GABAergic local interneurons suppress the song response behavior in an interval-dependent manner. Our results provide new insights into the neural circuit basis for adjusting neuronal and behavioral responses to a species-specific communication sound. Copyright © 2018 the authors.

  13. Perception of dissonance by people with normal hearing and sensorineural hearing loss

    NASA Astrophysics Data System (ADS)

    Tufts, Jennifer B.; Molis, Michelle R.; Leek, Marjorie R.

    2005-08-01

    The purpose of this study was to determine whether the perceived sensory dissonance of pairs of pure tones (PT dyads) or pairs of harmonic complex tones (HC dyads) is altered by sensorineural hearing loss. Four normal-hearing (NH) and four hearing-impaired (HI) listeners judged the sensory dissonance of PT dyads geometrically centered at 500 and 2000 Hz, and of HC dyads with fundamental frequencies geometrically centered at 500 Hz. The frequency separation of the members of the dyads varied from 0 Hz to just over an octave. In addition, frequency selectivity was assessed at 500 and 2000 Hz for each listener. Maximum dissonance was perceived at frequency separations smaller than the auditory filter bandwidth for both groups of listeners, but maximum dissonance for HI listeners occurred at a greater proportion of their bandwidths at 500 Hz than at 2000 Hz. Further, their auditory filter bandwidths at 500 Hz were significantly wider than those of the NH listeners. For both the PT and HC dyads, curves displaying dissonance as a function of frequency separation were more compressed for the HI listeners, possibly reflecting less contrast between their perceptions of consonance and dissonance compared with the NH listeners.
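    For reference, the normal-hearing auditory filter bandwidths discussed above are commonly summarized by the equivalent rectangular bandwidth (ERB) formula of Glasberg and Moore (1990); a quick calculation at the two test frequencies:

    ```python
    def erb_hz(f_hz: float) -> float:
        """Equivalent rectangular bandwidth of the normal auditory filter (Glasberg & Moore, 1990)."""
        return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

    print(erb_hz(500.0))    # ~79 Hz at 500 Hz
    print(erb_hz(2000.0))   # ~241 Hz at 2000 Hz
    ```

    Hearing-impaired listeners typically have broader filters than these normal-hearing estimates, which is the comparison at issue in the study above.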

  14. Pulse shaping in mode-locked fiber lasers by in-cavity spectral filter.

    PubMed

    Boscolo, Sonia; Finot, Christophe; Karakuzu, Huseyin; Petropoulos, Periklis

    2014-02-01

    We numerically show the possibility of pulse shaping in a passively mode-locked fiber laser by inclusion of a spectral filter into the laser cavity. Depending on the amplitude transfer function of the filter, we are able to achieve various regimes of advanced temporal waveform generation, including ones featuring bright and dark parabolic-, flat-top-, triangular- and saw-tooth-profiled pulses. The results demonstrate the strong potential of an in-cavity spectral pulse shaper for controlling the dynamics of mode-locked fiber lasers.
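    A minimal numerical illustration of spectral-domain pulse shaping in the spirit of the in-cavity filter above: apply a hypothetical parabolic amplitude transfer function to the spectrum of a Gaussian pulse and transform back. The pulse width, filter bandwidth, and grid are assumptions, and the sketch ignores the laser's gain, dispersion, and nonlinearity.

    ```python
    import numpy as np

    # Work in picoseconds / THz for convenience.
    N = 4096
    dt = 0.02                                               # 20 fs sampling step
    t = (np.arange(N) - N // 2) * dt
    field = np.exp(-t**2 / (2 * 1.0**2)).astype(complex)    # 1 ps Gaussian input pulse

    f = np.fft.fftshift(np.fft.fftfreq(N, dt))              # frequency offset, THz
    spectrum = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(field)))

    bw = 0.1                                                # filter half-width, 0.1 THz (assumed)
    H = np.clip(1.0 - (f / bw) ** 2, 0.0, None)             # parabolic amplitude transfer function
    shaped = np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(spectrum * H)))

    print("peak power ratio:", np.abs(shaped).max() ** 2 / np.abs(field).max() ** 2)
    ```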

  15. Multiscale characterization and analysis of shapes

    DOEpatents

    Prasad, Lakshman; Rao, Ramana

    2002-01-01

    An adaptive multiscale method approximates shapes with continuous or uniformly and densely sampled contours, with the purpose of sparsely and nonuniformly discretizing the boundaries of shapes at any prescribed resolution, while at the same time retaining the salient shape features at that resolution. In another aspect, a fundamental geometric filtering scheme using the Constrained Delaunay Triangulation (CDT) of polygonized shapes creates an efficient parsing of shapes into components that have semantic significance dependent only on the shapes' structure and not on their representations per se. A shape skeletonization process generalizes to sparsely discretized shapes, with the additional benefit of prunability to filter out irrelevant and morphologically insignificant features. The skeletal representation of characters of varying thickness and the elimination of insignificant and noisy spurs and branches from the skeleton greatly increases the robustness, reliability and recognition rates of character recognition algorithms.

  16. A pulse-shape discrimination method for improving Gamma-ray spectrometry based on a new digital shaping filter

    NASA Astrophysics Data System (ADS)

    Qin, Zhang-jian; Chen, Chuan; Luo, Jun-song; Xie, Xing-hong; Ge, Liang-quan; Wu, Qi-fan

    2018-04-01

    In the development of nuclear spectroscopy, it is usual practice to improve spectrum quality by designing a good shaping filter that improves the signal-to-noise ratio. This paper proposes another method, based on discriminating pulse shape and discarding bad pulses whose shapes are distorted by abnormal noise, unusual ballistic deficit, or severe pulse pile-up. An exponentially decaying pulse (EDP) generated in a nuclear particle detector can be transformed into a Mexican hat wavelet pulse (MHWP), and the derivation of the transform is given. After the transform is performed, the baseline drift is removed in the new MHWP. Moreover, the MHWP shape can be discriminated with three parameters: the time difference between the two minima of the MHWP, and the two ratios of the amplitudes of the two minima to the amplitude of the maximum of the MHWP. A new nuclear spectroscopy system was implemented based on the new digital shaping filter, and gamma-ray spectra were acquired with a variety of pulse-shape discrimination levels. The results show that both the energy resolution and the peak-to-Compton ratio improved after the pulse-shape discrimination method was applied.
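    A hedged sketch of the transform described above: convolving an exponentially decaying pulse with a Mexican-hat (Ricker) kernel yields an MHWP from which the three discrimination parameters can be read off. The kernel width, sampling rate, and decay constant are illustrative assumptions, not the paper's filter coefficients.

    ```python
    import numpy as np

    def ricker(n, a):
        """Mexican hat (Ricker) kernel of n samples with width parameter a (in samples)."""
        t = np.arange(n) - (n - 1) / 2.0
        return (1.0 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

    fs = 100e6                                               # 100 MHz sampling (assumed)
    t = np.arange(4000) / fs
    edp = np.where(t >= 5e-6, np.exp(-(t - 5e-6) / 2e-6), 0.0)   # exponentially decaying pulse

    mhwp = np.convolve(edp, ricker(301, 30.0), mode="same")      # Mexican hat wavelet pulse

    peak = int(np.argmax(mhwp))
    left_min = int(np.argmin(mhwp[:peak]))
    right_min = peak + int(np.argmin(mhwp[peak:]))

    dt_minima = (right_min - left_min) / fs        # time difference between the two minima
    r_left = abs(mhwp[left_min]) / mhwp[peak]      # |first minimum| / maximum
    r_right = abs(mhwp[right_min]) / mhwp[peak]    # |second minimum| / maximum
    print(dt_minima, r_left, r_right)
    ```

    A distorted pulse (e.g., one affected by pile-up or ballistic deficit) would shift these three parameters away from the values produced by a clean EDP, which is the basis for accepting or discarding it.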

  17. Conventional and cross-correlation brain-stem auditory evoked responses in the white leghorn chick: rate manipulations

    NASA Technical Reports Server (NTRS)

    Burkard, R.; Jones, S.; Jones, T.

    1994-01-01

    Rate-dependent changes in the chick brain-stem auditory evoked response (BAER) using conventional averaging and a cross-correlation technique were investigated. Five 15- to 19-day-old white leghorn chicks were anesthetized with Chloropent. In each chick, the left ear was acoustically stimulated. Electrical pulses of 0.1-ms duration were shaped, attenuated, and passed through a current driver to an Etymotic ER-2 which was sealed in the ear canal. Electrical activity from stainless-steel electrodes was amplified, filtered (300-3000 Hz) and digitized at 20 kHz. Click levels included 70 and 90 dB peSPL. In each animal, conventional BAERs were obtained at rates ranging from 5 to 90 Hz. BAERs were also obtained using a cross-correlation technique involving pseudorandom pulse sequences called maximum length sequences (MLSs). The minimum time between pulses, called the minimum pulse interval (MPI), ranged from 0.5 to 6 ms. Two BAERs were obtained for each condition. Dependent variables included the latency and amplitude of the cochlear microphonic (CM), wave 2 and wave 3. BAERs were observed in all chicks, for all level by rate combinations for both conventional and MLS BAERs. There was no effect of click level or rate on the latency of the CM. The latency of waves 2 and 3 increased with decreasing click level and increasing rate. CM amplitude decreased with decreasing click level, but was not influenced by click rate for the 70 dB peSPL condition. For the 90 dB peSPL click, CM amplitude was uninfluenced by click rate for conventional averaging. For MLS BAERs, CM amplitude was similar to conventional averaging for longer MPIs.(ABSTRACT TRUNCATED AT 250 WORDS).
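    For readers unfamiliar with the MLS technique mentioned above, the sketch below shows the basic idea: responses to a maximum length sequence of clicks overlap in the recording, and circular cross-correlation with the sequence unscrambles them. The sequence length and the synthetic response waveform are illustrative assumptions, not the chick BAER data.

    ```python
    import numpy as np
    from scipy.signal import max_len_seq

    rng = np.random.default_rng(0)
    mls = max_len_seq(9)[0].astype(float) * 2.0 - 1.0   # +/-1 pseudorandom sequence, length 511
    L = mls.size

    # Hypothetical evoked response occupying a few stimulus slots
    h = np.zeros(L)
    h[3:8] = [0.2, 0.6, 1.0, 0.5, 0.1]

    # Overlapped responses to the whole MLS click train, plus measurement noise
    recording = np.real(np.fft.ifft(np.fft.fft(mls) * np.fft.fft(h)))
    recording += 0.05 * rng.standard_normal(L)

    # Circular cross-correlation with the MLS recovers the single-response waveform
    recovered = np.real(np.fft.ifft(np.fft.fft(recording) * np.conj(np.fft.fft(mls)))) / L
    print(np.round(recovered[:10], 2))
    ```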

  18. The plastic ear and perceptual relearning in auditory spatial perception

    PubMed Central

    Carlile, Simon

    2014-01-01

    The auditory system of adult listeners has been shown to accommodate to altered spectral cues to sound location, which presumably provides the basis for recalibration to changes in the shape of the ear over a lifetime. Here we review the role of auditory and non-auditory inputs to the perception of sound location and consider a range of recent experiments looking at the role of non-auditory inputs in the process of accommodation to these altered spectral cues. A number of studies have used small ear molds to modify the spectral cues, which results in significant degradation in localization performance. Following chronic exposure (10–60 days), performance recovers to some extent, and recent work has demonstrated that this occurs for both audio-visual and audio-only regions of space. This raises the question of what the teacher signal is for this remarkable functional plasticity in the adult nervous system. Following a brief review of the influence of the motor state in auditory localization, we consider the potential role of auditory-motor learning in the perceptual recalibration of the spectral cues. Several recent studies have considered how multi-modal and sensory-motor feedback might influence accommodation to altered spectral cues produced by ear molds or through virtual auditory space stimulation using non-individualized spectral cues. The work with ear molds demonstrates that a relatively short period of training involving audio-motor feedback (5–10 days) significantly improves both the rate and extent of accommodation to altered spectral cues. This has significant implications not only for the mechanisms by which this complex sensory information is encoded to provide spatial cues but also for adaptive training to altered auditory inputs. The review concludes by considering the implications for rehabilitative training with hearing aids and cochlear prostheses. PMID:25147497

  19. How musical expertise shapes speech perception: evidence from auditory classification images.

    PubMed

    Varnet, Léo; Wang, Tianyun; Peter, Chloe; Meunier, Fanny; Hoen, Michel

    2015-09-24

    It is now well established that extensive musical training percolates to higher levels of cognition, such as speech processing. However, the lack of a precise technique for investigating the specific listening strategy involved in speech comprehension has made it difficult to determine how musicians' higher performance in non-speech tasks contributes to their enhanced speech comprehension. The recently developed Auditory Classification Image approach reveals the precise time-frequency regions used by participants when performing phonemic categorizations in noise. Here we used this technique on 19 non-musicians and 19 professional musicians. We found that both groups used very similar listening strategies, but the musicians relied more heavily on the two main acoustic cues: the onset of the first formant and the onsets of the second and third formants. Additionally, they responded more consistently to stimuli. These observations provide a direct visualization of auditory plasticity resulting from extensive musical training and shed light on the level of functional transfer between auditory processing and speech perception.

  20. Early experience shapes vocal neural coding and perception in songbirds

    PubMed Central

    Woolley, Sarah M. N.

    2012-01-01

    Songbirds, like humans, are highly accomplished vocal learners. The many parallels between speech and birdsong and conserved features of mammalian and avian auditory systems have led to the emergence of the songbird as a model system for studying the perceptual mechanisms of vocal communication. Laboratory research on songbirds allows the careful control of early life experience and high-resolution analysis of brain function during vocal learning, production and perception. Here, I review what songbird studies have revealed about the role of early experience in the development of vocal behavior, auditory perception and the processing of learned vocalizations by auditory neurons. The findings of these studies suggest general principles for how exposure to vocalizations during development and into adulthood influences the perception of learned vocal signals. PMID:22711657

  1. A Bio-Realistic Analog CMOS Cochlea Filter With High Tunability and Ultra-Steep Roll-Off.

    PubMed

    Wang, Shiwei; Koickal, Thomas Jacob; Hamilton, Alister; Cheung, Rebecca; Smith, Leslie S

    2015-06-01

    This paper presents the design and experimental results of a cochlea filter in analog very large scale integration (VLSI) that closely resembles the physiologically measured response of the mammalian cochlea. The filter consists of three specialized sub-filter stages which respectively provide a passive response at low frequencies, an actively tunable response at mid-band frequencies, and an ultra-steep roll-off at the transition from pass-band to stop-band. The sub-filters are implemented in a balanced ladder topology using floating active inductors. Measured results from the fabricated chip show that a wide range of mid-band tuning, including gain tuning of over 20 dB, Q factor tuning from 2 to 19, and the bio-realistic center frequency shift, is achieved by adjusting only one circuit parameter. In addition, the filter has an ultra-steep roll-off reaching over 300 dB/decade. By changing biasing currents, the filter can be configured to operate with center frequencies from 31 Hz to 8 kHz. The filter is 9th order, consumes 59.5–90.0 μW of power and occupies 0.9 mm2 of chip area. A parallel bank of the proposed filter can be used as the front end in hearing prostheses, speech processors and other bio-inspired auditory systems owing to its bio-realistic behavior, low power consumption and small size.

  2. Experimental investigation of the effect of inlet particle properties on the capture efficiency in an exhaust particulate filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Viswanathan, Sandeep; Rothamer, David; Zelenyuk, Alla

    The impact of inlet particle properties on the filtration performance of clean and particulate matter (PM) laden cordierite filter samples was evaluated using PM generated by a spark-ignition direct-injection (SIDI) engine fuelled with tier II EEE certification gasoline. Prior to the filtration experiments, a scanning mobility particle spectrometer (SMPS) was used to measure the electrical-mobility-based particle size distribution (PSD) in the SIDI exhaust for distinct engine operating conditions. An advanced aerosol characterization system comprising a centrifugal particle mass analyser (CPMA), a differential mobility analyser (DMA), and a single particle mass spectrometer (SPLAT II) was used to obtain additional information on the SIDI particulate, including particle composition, mass, and dynamic shape factors (DSFs) in the transition and free-molecular flow regimes. During the filtration experiments, real-time measurements of PSDs upstream and downstream of the filter sample were used to estimate the filtration performance and the total trapped mass within the filter using an integrated particle size distribution method. The filter loading process was paused multiple times to evaluate the filtration performance in the partially loaded state. The change in the vacuum aerodynamic diameter distribution of mass-selected particles was examined for flow through the filter to identify whether preferential capture of particles of certain shapes occurred in the filter. The filter was also probed using different inlet PSDs to understand their impact on particle capture within the filter sample. Results from the filtration experiments suggest that pausing the filter loading process and subsequently performing the filter probing experiments did not impact the overall evolution of filtration performance. Within the present distribution of particle sizes, filter efficiency was independent of particle shape, potentially due to the diffusion-dominant filtration process. Particle mobility diameter and trapped mass within the filter appeared to be the dominant parameters that impacted filter performance.
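    A simple illustration of the two quantities derived from the upstream/downstream PSDs above: size-resolved capture efficiency and an integrated-PSD estimate of trapped mass. The bin values, particle density, and sampled volume are made-up placeholders, not data from this study.

    ```python
    import numpy as np

    d_nm = np.array([30, 50, 80, 120, 200])                       # mobility diameter bin centers, nm
    upstream = np.array([2.0e6, 5.0e6, 4.0e6, 1.5e6, 0.3e6])      # particles/cm^3 per bin
    downstream = np.array([0.4e6, 0.6e6, 0.3e6, 0.1e6, 0.02e6])

    efficiency = 1.0 - downstream / upstream                      # size-resolved capture efficiency

    # Integrated-PSD estimate of trapped mass, assuming spherical particles with an
    # effective density of 1 g/cm^3 and a sampled exhaust volume of 1 cm^3 (placeholders)
    mass_per_particle_g = (np.pi / 6.0) * (d_nm * 1e-7) ** 3 * 1.0
    trapped_mass_g = np.sum((upstream - downstream) * mass_per_particle_g)
    print(efficiency, trapped_mass_g)
    ```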

  3. PubChem3D: Shape compatibility filtering using molecular shape quadrupoles

    PubMed Central

    2011-01-01

    Background PubChem provides a 3-D neighboring relationship, which involves finding the maximal shape overlap between two static compound 3-D conformations, a computationally intensive step. It is highly desirable to avoid this overlap computation, especially if it can be determined with certainty that a conformer pair cannot meet the criteria to be a 3-D neighbor. As such, PubChem employs a series of pre-filters, based on the concept of volume, to remove approximately 65% of all conformer neighbor pairs prior to shape overlap optimization. Given that molecular volume, a somewhat vague concept, is rather effective, it leads one to wonder: can the existing PubChem 3-D neighboring relationship, which consists of billions of shape similar conformer pairs from tens of millions of unique small molecules, be used to identify additional shape descriptor relationships? Or, put more specifically, can one place an upper bound on shape similarity using other "fuzzy" shape-like concepts like length, width, and height? Results Using a basis set of 4.18 billion 3-D neighbor pairs identified from single conformer per compound neighboring of 17.1 million molecules, shape descriptors were computed for all conformers. These steric shape descriptors included several forms of molecular volume and shape quadrupoles, which essentially embody the length, width, and height of a conformer. For a given 3-D neighbor conformer pair, the volume and each quadrupole component (Qx, Qy, and Qz) were binned and their frequency of occurrence was examined. Per molecular volume type, this effectively produced three different maps, one per quadrupole component (Qx, Qy, and Qz), of allowed values for the similarity metric, shape Tanimoto (ST) ≥ 0.8. The efficiency of these relationships (in terms of true positive, true negative, false positive and false negative) as a function of ST threshold was determined in a test run of 13.2 billion conformer pairs not previously considered by the 3-D neighbor set. At an ST ≥ 0.8, a filtering efficiency of 40.4% of true negatives was achieved with only 32 false negatives out of 24 million true positives, when applying the separate Qx, Qy, and Qz maps in a series (Qxyz). This efficiency increased linearly as a function of ST threshold in the range 0.8-0.99. The Qx filter was consistently the most efficient followed by Qy and then by Qz. Use of a monopole volume showed the best overall performance, followed by the self-overlap volume and then by the analytic volume. Application of the monopole-based Qxyz filter in a "real world" test of 3-D neighboring of 4,218 chemicals of biomedical interest against 26.1 million molecules in PubChem reduced the total CPU cost of neighboring by between 24-38% and, if used as the initial filter, removed from consideration 48.3% of all conformer pairs at almost negligible computational overhead. Conclusion Basic shape descriptors, such as those embodied by size, length, width, and height, can be highly effective in identifying shape incompatible compound conformer pairs. When performing a 3-D search using a shape similarity cut-off, computation can be avoided by identifying conformer pairs that cannot meet the result criteria. Applying this methodology as a filter for PubChem 3-D neighboring computation, an improvement of 31% was realized, increasing the average conformer pair throughput from 154,000 to 202,000 per second per CPU core. PMID:21774809
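    As a rough illustration of "length, width, height"-style steric descriptors, the sketch below computes quadrupole-like components as the principal second moments of a conformer's atomic coordinates. This is a simplified stand-in and does not reproduce PubChem's exact descriptor definitions or binning.

    ```python
    import numpy as np

    def shape_quadrupoles(coords: np.ndarray) -> np.ndarray:
        """Return (Qx, Qy, Qz) as the sorted principal second moments of the coordinates."""
        centered = coords - coords.mean(axis=0)
        cov = centered.T @ centered / len(coords)
        return np.sort(np.linalg.eigvalsh(cov))[::-1]   # Qx >= Qy >= Qz

    # Hypothetical heavy-atom coordinates (angstroms) for two conformers
    conf_a = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [3.0, 0.2, 0.0], [4.5, 0.0, 0.3]])
    conf_b = np.array([[0.0, 0.0, 0.0], [1.2, 0.9, 0.0], [1.8, 2.2, 0.1], [0.9, 3.4, 0.0]])

    qa, qb = shape_quadrupoles(conf_a), shape_quadrupoles(conf_b)
    # A pre-filter of the kind described above could reject the pair if any |Qa - Qb|
    # falls outside the bins empirically allowed for shape Tanimoto >= 0.8.
    print(qa, qb, np.abs(qa - qb))
    ```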

  4. Loud Music Exposure and Cochlear Synaptopathy in Young Adults: Isolated Auditory Brainstem Response Effects but No Perceptual Consequences.

    PubMed

    Grose, John H; Buss, Emily; Hall, Joseph W

    2017-01-01

    The purpose of this study was to test the hypothesis that listeners with frequent exposure to loud music exhibit deficits in suprathreshold auditory performance consistent with cochlear synaptopathy. Young adults with normal audiograms were recruited who either did (n = 31) or did not (n = 30) have a history of frequent attendance at loud music venues where the typical sound levels could be expected to result in temporary threshold shifts. A test battery was administered that comprised three sets of procedures: (a) electrophysiological tests including distortion product otoacoustic emissions, auditory brainstem responses, envelope following responses, and the acoustic change complex evoked by an interaural phase inversion; (b) psychoacoustic tests including temporal modulation detection, spectral modulation detection, and sensitivity to interaural phase; and (c) speech tests including filtered phoneme recognition and speech-in-noise recognition. The results demonstrated that a history of loud music exposure can lead to a profile of peripheral auditory function that is consistent with an interpretation of cochlear synaptopathy in humans, namely, modestly abnormal auditory brainstem response Wave I/Wave V ratios in the presence of normal distortion product otoacoustic emissions and normal audiometric thresholds. However, there were no other electrophysiological, psychophysical, or speech perception effects. The absence of any behavioral effects in suprathreshold sound processing indicated that, even if cochlear synaptopathy is a valid pathophysiological condition in humans, its perceptual sequelae are either too diffuse or too inconsequential to permit a simple differential diagnosis of hidden hearing loss.

  5. The effects of the activation of the inner-hair-cell basolateral K+ channels on auditory nerve responses.

    PubMed

    Altoè, Alessandro; Pulkki, Ville; Verhulst, Sarah

    2018-07-01

    The basolateral membrane of the mammalian inner hair cell (IHC) expresses large voltage- and Ca2+-gated outward K+ currents. To quantify how the voltage-dependent activation of the K+ channels affects the functionality of the auditory nerve innervating the IHC, this study adopts a model of mechanical-to-neural transduction in which the basolateral K+ conductances of the IHC can be made voltage-dependent or not. The model shows that the voltage-dependent activation of the K+ channels (i) enhances the phase-locking properties of the auditory fiber (AF) responses; (ii) enables the auditory nerve to encode a large dynamic range of sound levels; (iii) enables the AF responses to synchronize precisely with the envelope of amplitude-modulated stimuli; and (iv) is responsible for the steep offset responses of the AFs. These results suggest that the basolateral K+ channels play a major role in determining the well-known response properties of the AFs and challenge the classical view that describes the IHC membrane as an electrical low-pass filter. In contrast to previous models of the IHC-AF complex, this study ascribes many of the AF response properties to fairly basic mechanisms in the IHC membrane rather than to complex mechanisms in the synapse. Copyright © 2018 Elsevier B.V. All rights reserved.
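    A minimal sketch contrasting the two membrane assumptions compared above: a voltage-dependent (Boltzmann-activated) basolateral K+ conductance versus a fixed passive one. The half-activation voltage, slope, and maximal conductance are illustrative assumptions, not the values of the published IHC model.

    ```python
    import numpy as np

    def g_k(v_mv, voltage_dependent=True, g_max_ns=30.0, v_half=-35.0, slope=8.0):
        """Basolateral K+ conductance (nS) at membrane potential v_mv (mV)."""
        if not voltage_dependent:
            return g_max_ns * 0.5            # fixed conductance: a linear, low-pass membrane
        # Boltzmann activation: conductance grows steeply with depolarization
        return g_max_ns / (1.0 + np.exp(-(v_mv - v_half) / slope))

    for v in (-60.0, -45.0, -30.0):
        print(v, g_k(v), g_k(v, voltage_dependent=False))
    ```

    The depolarization-dependent increase in conductance is what lets the voltage-dependent version respond faster to stimulus onsets and offsets than a fixed-conductance (purely low-pass) membrane would.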

  6. Mate choice in the eye and ear of the beholder? Female multimodal sensory configuration influences her preferences.

    PubMed

    Ronald, Kelly L; Fernández-Juricic, Esteban; Lucas, Jeffrey R

    2018-05-16

    A common assumption in sexual selection studies is that receivers decode signal information similarly. However, receivers may vary in how they rank signallers if signal perception varies with an individual's sensory configuration. Furthermore, receivers may vary in their weighting of different elements of multimodal signals based on their sensory configuration. This could lead to complex levels of selection on signalling traits. We tested whether multimodal sensory configuration could affect preferences for multimodal signals. We used brown-headed cowbird (Molothrus ater) females to examine how auditory sensitivity and auditory filters, which influence auditory spectral and temporal resolution, affect song preferences, and how visual spatial resolution and visual temporal resolution, which influence resolution of a moving visual signal, affect visual display preferences. Our results show that multimodal sensory configuration significantly affects preferences for male displays: females with better auditory temporal resolution preferred songs that were shorter, with lower Wiener entropy, and higher frequency; and females with better visual temporal resolution preferred males with less intense visual displays. Our findings provide new insights into mate-choice decisions and receiver signal processing. Furthermore, our results challenge a long-standing assumption in animal communication which can affect how we address honest signalling, assortative mating and sensory drive. © 2018 The Author(s).

  7. Evaluation of psychoacoustic tests and P300 event-related potentials in elderly patients with hyperhomocysteinemia.

    PubMed

    Díaz-Leines, Sergio; Peñaloza-López, Yolanda R; Serrano-Miranda, Tirzo A; Flores-Ávalos, Blanca; Vidal-Ixta, Martha T; Jiménez-Herrera, Blanca

    2013-01-01

    Hyperhomocysteinemia as a risk factor for hearing impairment, neuronal damage and cognitive impairment in elderly patients is controversial, and the evidence is limited by the small number of studies. The aim of this work was to determine whether elderly patients with hyperhomocysteinemia have an increased risk of developing abnormalities in central auditory processes compared with a group of patients with appropriate homocysteine levels, and to define the behaviour of psychoacoustic tests and long-latency potentials (P300) in these patients. This was a cross-sectional, comparative and analytical study. We formed a group of patients with hyperhomocysteinemia and a control group with normal levels of homocysteine. All patients underwent audiometry, tympanometry and a selection of psychoacoustic tests (dichotic digits, low-pass filtered words, speech in noise and masking level difference), auditory evoked brainstem potentials and P300. Patients with hyperhomocysteinemia had higher values on the masking level difference test than did the control group (P=.049) and longer P300 latencies (P=.000). Hyperhomocysteinemia is a factor that alters central auditory functions. Alterations in psychoacoustic tests and disturbances in electrophysiological tests suggest that the central portion of the auditory pathway is affected in patients with hyperhomocysteinemia. Copyright © 2012 Elsevier España, S.L. All rights reserved.

  8. Some Behavioral and Neurobiological Constraints on Theories of Audiovisual Speech Integration: A Review and Suggestions for New Directions

    PubMed Central

    Altieri, Nicholas; Pisoni, David B.; Townsend, James T.

    2012-01-01

    Summerfield (1987) proposed several accounts of audiovisual speech perception, a field of research that has burgeoned in recent years. The proposed accounts included the integration of discrete phonetic features, vectors describing the values of independent acoustical and optical parameters, the filter function of the vocal tract, and articulatory dynamics of the vocal tract. The latter two accounts assume that the representations of audiovisual speech perception are based on abstract gestures, while the former two assume that the representations consist of symbolic or featural information obtained from visual and auditory modalities. Recent converging evidence from several different disciplines reveals that the general framework of Summerfield’s feature-based theories should be expanded. An updated framework building upon the feature-based theories is presented. We propose a processing model arguing that auditory and visual brain circuits provide facilitatory information when the inputs are correctly timed, and that auditory and visual speech representations do not necessarily undergo translation into a common code during information processing. Future research on multisensory processing in speech perception should investigate the connections between auditory and visual brain regions, and utilize dynamic modeling tools to further understand the timing and information processing mechanisms involved in audiovisual speech integration. PMID:21968081

  9. Some behavioral and neurobiological constraints on theories of audiovisual speech integration: a review and suggestions for new directions.

    PubMed

    Altieri, Nicholas; Pisoni, David B; Townsend, James T

    2011-01-01

    Summerfield (1987) proposed several accounts of audiovisual speech perception, a field of research that has burgeoned in recent years. The proposed accounts included the integration of discrete phonetic features, vectors describing the values of independent acoustical and optical parameters, the filter function of the vocal tract, and articulatory dynamics of the vocal tract. The latter two accounts assume that the representations of audiovisual speech perception are based on abstract gestures, while the former two assume that the representations consist of symbolic or featural information obtained from visual and auditory modalities. Recent converging evidence from several different disciplines reveals that the general framework of Summerfield's feature-based theories should be expanded. An updated framework building upon the feature-based theories is presented. We propose a processing model arguing that auditory and visual brain circuits provide facilitatory information when the inputs are correctly timed, and that auditory and visual speech representations do not necessarily undergo translation into a common code during information processing. Future research on multisensory processing in speech perception should investigate the connections between auditory and visual brain regions, and utilize dynamic modeling tools to further understand the timing and information processing mechanisms involved in audiovisual speech integration.

  10. A biophysical model for modulation frequency encoding in the cochlear nucleus.

    PubMed

    Eguia, Manuel C; Garcia, Guadalupe C; Romano, Sebastian A

    2010-01-01

    Encoding of amplitude modulated (AM) acoustical signals is one of the most compelling tasks for the mammalian auditory system: environmental sounds, after being filtered and transduced by the cochlea, become narrowband AM signals. Despite much experimental work devoted to understanding how the auditory system extracts and encodes AM information, the neural mechanisms underlying this remarkable feature are far from being understood (Joris et al., 2004). One of the most widely accepted theories for this processing is the existence of a periodotopic organization (based on temporal information) across the more studied tonotopic axis (Frisina et al., 1990b). In this work, we review some recent advances in the study of the mechanisms involved in neural processing of AM sounds, and propose an integrated model that runs from the external ear, through the cochlea and the auditory nerve, up to a sub-circuit of the cochlear nucleus (the first processing unit in the central auditory system). We show that by varying the amount of inhibition in our model we can obtain a range of best modulation frequencies (BMFs) in some principal cells of the cochlear nucleus. This could be a basis for a synchrony-based, low-level periodotopic organization. Copyright (c) 2009 Elsevier Ltd. All rights reserved.

  11. Inhibitory Network Interactions Shape the Auditory Processing of Natural Communication Signals in the Songbird Auditory Forebrain

    PubMed Central

    Pinaud, Raphael; Terleph, Thomas A.; Tremere, Liisa A.; Phan, Mimi L.; Dagostin, André A.; Leão, Ricardo M.; Mello, Claudio V.; Vicario, David S.

    2008-01-01

    The role of GABA in the central processing of complex auditory signals is not fully understood. We have studied the involvement of GABAA-mediated inhibition in the processing of birdsong, a learned vocal communication signal requiring intact hearing for its development and maintenance. We focused on caudomedial nidopallium (NCM), an area analogous to parts of the mammalian auditory cortex with selective responses to birdsong. We present evidence that GABAA-mediated inhibition plays a pronounced role in NCM's auditory processing of birdsong. Using immunocytochemistry, we show that approximately half of NCM's neurons are GABAergic. Whole cell patch-clamp recordings in a slice preparation demonstrate that, at rest, spontaneously active GABAergic synapses inhibit excitatory inputs onto NCM neurons via GABAA receptors. Multi-electrode electrophysiological recordings in awake birds show that local blockade of GABAA-mediated inhibition in NCM markedly affects the temporal pattern of song-evoked responses in NCM without modifications in frequency tuning. Surprisingly, this blockade increases the phasic and largely suppresses the tonic response component, reflecting dynamic relationships of inhibitory networks that could include disinhibition. Thus processing of learned natural communication sounds in songbirds, and possibly other vocal learners, may depend on complex interactions of inhibitory networks. PMID:18480371

  12. Shaping the aging brain: role of auditory input patterns in the emergence of auditory cortical impairments

    PubMed Central

    Kamal, Brishna; Holman, Constance; de Villers-Sidani, Etienne

    2013-01-01

    Age-related impairments in the primary auditory cortex (A1) include poor tuning selectivity, neural desynchronization, and degraded responses to low-probability sounds. These changes have been largely attributed to reduced inhibition in the aged brain, and are thought to contribute to substantial hearing impairment in both humans and animals. Since many of these changes can be partially reversed with auditory training, it has been speculated that they might not be purely degenerative, but might rather represent negative plastic adjustments to noisy or distorted auditory signals reaching the brain. To test this hypothesis, we examined the impact of exposing young adult rats to 8 weeks of low-grade broadband noise on several aspects of A1 function and structure. We then characterized the same A1 elements in aging rats for comparison. We found that the impact of noise exposure on A1 tuning selectivity, temporal processing of auditory signal and responses to oddball tones was almost indistinguishable from the effect of natural aging. Moreover, noise exposure resulted in a reduction in the population of parvalbumin inhibitory interneurons and cortical myelin as previously documented in the aged group. Most of these changes reversed after returning the rats to a quiet environment. These results support the hypothesis that age-related changes in A1 have a strong activity-dependent component and indicate that the presence or absence of clear auditory input patterns might be a key factor in sustaining adult A1 function. PMID:24062649

  13. Neural Biomarkers for Dyslexia, ADHD, and ADD in the Auditory Cortex of Children.

    PubMed

    Serrallach, Bettina; Groß, Christine; Bernhofs, Valdis; Engelmann, Dorte; Benner, Jan; Gündert, Nadine; Blatow, Maria; Wengenroth, Martina; Seitz, Angelika; Brunner, Monika; Seither, Stefan; Parncutt, Richard; Schneider, Peter; Seither-Preisler, Annemarie

    2016-01-01

    Dyslexia, attention deficit hyperactivity disorder (ADHD), and attention deficit disorder (ADD) show distinct clinical profiles that may include auditory and language-related impairments. Currently, an objective brain-based diagnosis of these developmental disorders is still unavailable. We investigated the neuro-auditory systems of dyslexic, ADHD, ADD, and age-matched control children (N = 147) using neuroimaging, magnetoencephalography and psychoacoustics. All disorder subgroups exhibited an oversized left planum temporale and an abnormal interhemispheric asynchrony (10-40 ms) of the primary auditory evoked P1-response. Considering right auditory cortex morphology, bilateral P1 source waveform shapes, and auditory performance, the three disorder subgroups could be reliably differentiated with outstanding accuracies of 89-98%. We therefore for the first time provide differential biomarkers for a brain-based diagnosis of dyslexia, ADHD, and ADD. The method not only allowed for clear discrimination between two subtypes of attentional disorders (ADHD and ADD), a topic controversially discussed for decades in the scientific community, but also revealed the potential for objectively identifying comorbid cases. Notably, in children playing a musical instrument, the observed interhemispheric asynchronies were reduced by about two thirds after three and a half years of training, suggesting a strong beneficial influence of musical experience on brain development. These findings might have far-reaching implications for both research and practice and enable a profound understanding of the brain-related etiology, diagnosis, and musically based therapy of common auditory-related developmental disorders and learning disabilities.

  14. Neural Biomarkers for Dyslexia, ADHD, and ADD in the Auditory Cortex of Children

    PubMed Central

    Serrallach, Bettina; Groß, Christine; Bernhofs, Valdis; Engelmann, Dorte; Benner, Jan; Gündert, Nadine; Blatow, Maria; Wengenroth, Martina; Seitz, Angelika; Brunner, Monika; Seither, Stefan; Parncutt, Richard; Schneider, Peter; Seither-Preisler, Annemarie

    2016-01-01

    Dyslexia, attention deficit hyperactivity disorder (ADHD), and attention deficit disorder (ADD) show distinct clinical profiles that may include auditory and language-related impairments. Currently, an objective brain-based diagnosis of these developmental disorders is still unavailable. We investigated the neuro-auditory systems of dyslexic, ADHD, ADD, and age-matched control children (N = 147) using neuroimaging, magnetoencephalography and psychoacoustics. All disorder subgroups exhibited an oversized left planum temporale and an abnormal interhemispheric asynchrony (10–40 ms) of the primary auditory evoked P1-response. Considering right auditory cortex morphology, bilateral P1 source waveform shapes, and auditory performance, the three disorder subgroups could be reliably differentiated with outstanding accuracies of 89–98%. We therefore provide, for the first time, differential biomarkers for a brain-based diagnosis of dyslexia, ADHD, and ADD. The method not only allowed for clear discrimination between two subtypes of attentional disorders (ADHD and ADD), a topic controversially discussed for decades in the scientific community, but also revealed the potential for objectively identifying comorbid cases. Notably, in children playing a musical instrument, after three and a half years of training the observed interhemispheric asynchronies were reduced by about 2/3, thus suggesting a strong beneficial influence of music experience on brain development. These findings might have far-reaching implications for both research and practice and enable a profound understanding of the brain-related etiology, diagnosis, and musically based therapy of common auditory-related developmental disorders and learning disabilities. PMID:27471442

  15. Dimension reduction: additional benefit of an optimal filter for independent component analysis to extract event-related potentials.

    PubMed

    Cong, Fengyu; Leppänen, Paavo H T; Astikainen, Piia; Hämäläinen, Jarmo; Hietanen, Jari K; Ristaniemi, Tapani

    2011-09-30

    The present study addresses benefits of a linear optimal filter (OF) for independent component analysis (ICA) in extracting brain event-related potentials (ERPs). A filter such as a digital filter is usually considered a denoising tool. Importantly, when ERP recordings are filtered with an OF, the ERP's topography should not be changed by the filter, and the output should remain describable by a linear transformation. Moreover, an OF designed for a specific ERP source or component may remove noise, as well as reduce the overlap of sources and even reject some non-targeted sources in the ERP recordings. The OF can thus accomplish both denoising and dimension reduction (reducing the number of sources) simultaneously. We demonstrated these effects using two datasets, one containing visual and the other auditory ERPs. The results showed that the method combining OF and ICA extracted much more reliable components than ICA alone did, and that the OF removed some non-targeted sources and made the underdetermined model of EEG recordings approach the determined one. Thus, we suggest designing an OF based on the properties of an ERP to filter recordings before using ICA decomposition to extract the targeted ERP component. Copyright © 2011 Elsevier B.V. All rights reserved.
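
    The pipeline described above (a band-limiting filter matched to the target ERP component, followed by ICA on the filtered recordings) can be illustrated with a minimal sketch. The 1-8 Hz passband, the synthetic epoch dimensions, and the use of FastICA are illustrative assumptions, not the authors' exact optimal-filter design.

      # Minimal sketch: band-limit EEG around a target ERP component, then run ICA.
      # The 1-8 Hz passband and the synthetic data are assumptions for illustration.
      import numpy as np
      from scipy.signal import butter, filtfilt
      from sklearn.decomposition import FastICA

      fs = 250.0                                     # sampling rate (Hz), assumed
      rng = np.random.default_rng(0)
      eeg = rng.standard_normal((32, 10 * int(fs)))  # 32 channels x 10 s of fake EEG

      # Stand-in "optimal" filter: zero-phase band-pass covering the ERP's energy band.
      b, a = butter(4, [1.0 / (fs / 2), 8.0 / (fs / 2)], btype="band")
      eeg_filtered = filtfilt(b, a, eeg, axis=1)

      # ICA on the filtered recordings; requesting fewer components than channels
      # reflects the dimension reduction the optimal filter is argued to provide.
      ica = FastICA(n_components=10, random_state=0, max_iter=1000)
      sources = ica.fit_transform(eeg_filtered.T)    # (samples, components)
      mixing = ica.mixing_                           # (channels, components) topographies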

  16. Octave-Band Thresholds for Modeled Reverberant Fields

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Wenzel, Elizabeth M.; Tran, Laura L.; Anderson, Mark R.; Trejo, Leonard J. (Technical Monitor)

    1998-01-01

    Auditory thresholds for 10 subjects were obtained for speech stimuli presented in modeled reverberation. The reverberation was produced and manipulated by 3-D audio modeling based on an actual room. The independent variables were octave-band filtering (bypassed, 0.25-2.0 kHz Fc) and reverberation time (0.2-1.1 sec). An ANOVA revealed significant effects (threshold range: -19 to -35 dB re 60 dB SRL).

  17. Unenhanced third-generation dual-source chest CT using a tin filter for spectral shaping at 100kVp.

    PubMed

    Haubenreisser, Holger; Meyer, Mathias; Sudarski, Sonja; Allmendinger, Thomas; Schoenberg, Stefan O; Henzler, Thomas

    2015-08-01

    To prospectively investigate image quality and radiation dose of 100kVp spectral shaping chest CT using a dedicated tin filter on a 3rd generation dual-source CT (DSCT) in comparison to standard 100kVp chest CT. Sixty patients referred for a non-contrast chest CT on a 3rd generation DSCT were prospectively included and examined at 100kVp with a dedicated tin filter. These patients were retrospectively matched with patients that were examined on a 2nd generation DSCT at 100kVp without a tin filter. Objective and subjective image quality was assessed in various anatomic regions and radiation dose was compared. Radiation dose was decreased by 90% using the tin filter (3.0 vs 0.32 mSv). Soft tissue attenuation and image noise were not statistically different for the two examination techniques (p>0.05); however, image noise was found to be significantly higher in the trachea when using the additional tin filter (p=0.002). SNR was found to be statistically similar in pulmonary tissue, significantly lower when measured in air and significantly higher in the aorta for the scans on the 3rd generation DSCT. Subjective image quality with regard to overall quality, image noise and sharpness was not statistically significantly different (p>0.05). 100kVp spectral shaping chest CT by means of a tube-based tin filter on a 3rd generation DSCT allows a 90% dose reduction when compared to 100kVp chest CT on a 2nd generation DSCT without spectral shaping. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  18. A biological inspired fuzzy adaptive window median filter (FAWMF) for enhancing DNA signal processing.

    PubMed

    Ahmad, Muneer; Jung, Low Tan; Bhuiyan, Al-Amin

    2017-10-01

    Digital signal processing techniques commonly employ fixed-length window filters to process the signal contents. DNA signals differ in characteristics from common digital signals since they carry nucleotides as contents. The nucleotides carry genetic-code context and exhibit fuzzy behavior due to their special structure and order in the DNA strand. Employing conventional fixed-length window filters for DNA signal processing produces spectral leakage and hence introduces signal noise. A biological-context-aware adaptive window filter is required to process DNA signals. This paper introduces a biologically inspired fuzzy adaptive window median filter (FAWMF) which computes the fuzzy membership strength of nucleotides in each slide of the window and filters nucleotides based on median filtering with a combination of s-shaped and z-shaped filters. Since coding regions exhibit 3-base periodicity arising from an unbalanced nucleotide distribution (a relatively high bias in nucleotide usage), this fundamental characteristic of nucleotides has been exploited in the FAWMF to suppress signal noise. Along with the adaptive response of the FAWMF, a strong correlation between median nucleotides and the Π-shaped filter was observed, which produced enhanced discrimination between coding and non-coding regions compared with fixed-length conventional window filters. The proposed FAWMF attains a significant enhancement in coding-region identification, i.e., 40% to 125% compared with other conventional window filters, tested over more than 250 benchmarked and randomly taken DNA datasets of different organisms. This study proves that conventional fixed-length window filters applied to DNA signals do not achieve significant results since the nucleotides carry genetic-code context. The proposed FAWMF algorithm is adaptive and significantly outperforms them in processing DNA signal content. Applied to a variety of DNA datasets, the algorithm produced noteworthy discrimination between coding and non-coding regions in contrast to fixed-length conventional window filters. Copyright © 2017 Elsevier B.V. All rights reserved.
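
    For readers unfamiliar with the 3-base periodicity that the abstract exploits, the sketch below shows the standard spectral measure of that periodicity (power of the four binary nucleotide indicator sequences at DFT bin N/3). It is background for the FAWMF, not the fuzzy adaptive filter itself, and the example sequences are made up.

      # Minimal sketch of the classic 3-base periodicity measure used in coding-region
      # detection: power of the four binary indicator sequences at frequency k = N/3.
      # This is background to the FAWMF described above, not the FAWMF algorithm.
      import numpy as np

      def period3_power(dna: str) -> float:
          dna = dna.upper()
          n = len(dna)
          k = n // 3                                # DFT bin corresponding to period 3
          total = 0.0
          for base in "ACGT":
              indicator = np.array([1.0 if c == base else 0.0 for c in dna])
              spectrum = np.fft.fft(indicator)
              total += abs(spectrum[k]) ** 2
          return total / n                          # normalized spectral content at N/3

      # Hypothetical example: a coding-like repeat vs. a random-looking stretch.
      print(period3_power("ATGGCCATGGCCATGGCCATGGCC"))
      print(period3_power("ATCGTACGGATCCGTAAGCTTACG"))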

  19. The relationship between mismatch response and the acoustic change complex in normal hearing infants.

    PubMed

    Uhler, Kristin M; Hunter, Sharon K; Tierney, Elyse; Gilley, Phillip M

    2018-06-01

    To examine the utility of the mismatch response (MMR) and acoustic change complex (ACC) for assessing speech discrimination in infants. Continuous EEG was recorded during sleep from 48 normal-hearing infants (24 male, 20 female) aged 1.77 to 4.57 months in response to two auditory discrimination tasks. ACC was recorded in response to a three-vowel sequence (/i/-/a/-/i/). MMR was recorded in response to a standard vowel, /a/ (probability 85%), and to a deviant vowel, /i/ (probability 15%). A priori comparisons included age, sex, and sleep state. These comparisons were conducted separately for each of three bandpass filter settings (1-18, 1-30, and 1-40 Hz). A priori tests revealed no differences in MMR or ACC for age, sex, or sleep state for any of the three filter settings. ACC and MMR responses were prominently observed in all 44 sleeping infants (data from four infants were excluded). Significant ACC differences were observed at the onset and offset of stimuli; however, neither group nor individual differences were observed for changes in speech stimuli in the ACC. The MMR revealed two prominent peaks occurring at the stimulus onset and at the stimulus offset. Permutation t-tests revealed significant differences between the standard and deviant stimuli for both the onset and offset MMR peaks (p < 0.01). The 1-18 Hz filter setting revealed significant differences for all participants in the MMR paradigm. Both ACC and MMR responses were observed in response to auditory stimulation, suggesting that infants perceive and process speech information even during sleep. Significant differences between the standard and deviant responses were observed in the MMR, but not the ACC, paradigm. These findings suggest that the MMR is sensitive to detecting auditory/speech discrimination processing. This paper identified that the MMR can be used to identify discrimination in normal-hearing infants, suggesting that the MMR has potential for use in infants with hearing loss to validate hearing aid fittings. Copyright © 2018 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.

  20. Spectral shaping of an all-fiber torsional acousto-optic tunable filter.

    PubMed

    Ko, Jeakwon; Lee, Kwang Jo; Kim, Byoung Yoon

    2014-12-20

    Spectral shaping of an all-fiber torsional acousto-optic (AO) tunable filter is studied. The technique is based on the axial modulation of AO coupling strength along a highly birefringent optical fiber, which is achieved by tailoring the outer diameter of the fiber along its propagation axis. Two kinds of filter spectral shaping schemes, Gaussian apodization and matched filtering with triple resonance peaks, are proposed and numerically investigated under realistic experimental conditions: a 50-cm-long AO interaction length of the fiber and a minimum thickness of the tailored fiber section equal to half of the original fiber diameter. The results show that the highest peak of the sidelobe spectra in filter transmission is suppressed from 11.64% to 0.54% via Gaussian modulation of the AO coupling coefficient (κ). Matched filtering with triple resonance peaks operating with a single radio frequency signal is also achieved by cosine modulation of κ, of which the modulation period determines the spectral distance between two satellite peaks located in both wings of the main resonance peak. The splitting of the two satellite peaks in the filter spectra reaches 48.2 nm while the modulation period varies from 7.7 to 50 cm. The overall peak power of the two satellite resonances is calculated to be 22% of the main resonance power. The results confirm the validity and practicality of our approach, and we predict robust and stable operation of the designed all-fiber torsional AO filters.

  1. Size and shape variations of the bony components of sperm whale cochleae.

    PubMed

    Schnitzler, Joseph G; Frédérich, Bruno; Früchtnicht, Sven; Schaffeld, Tobias; Baltzer, Johannes; Ruser, Andreas; Siebert, Ursula

    2017-04-25

    Several mass strandings of sperm whales occurred in the North Sea during January and February 2016. Twelve animals were necropsied and sampled around 48 h after their discovery on the German coast of Schleswig-Holstein. The present study aims to explore the morphological variation of the primary sensory organ of sperm whales, the left and right auditory system, using high-resolution computerised tomography imaging. We performed a quantitative analysis of the size and shape of the cochleae using landmark-based geometric morphometrics to reveal inter-individual anatomical variation. A hierarchical cluster analysis based on thirty-one external morphometric characters classified these 12 individuals into two stranding clusters. A portion of the shape variation could be attributed to geographical differences among stranding locations and clusters. Our geometric data allowed the discrimination of distinct bachelor schools among the sperm whales that stranded on German coasts. We argue that the cochleae are individually shaped, varying greatly in dimensions, and that the intra-specific variation observed in the morphology of the cochleae may partially reflect affiliation to a particular bachelor school. There are increasing concerns about the impact of noise on cetaceans, and describing the auditory periphery of odontocetes is a key conservation issue for further assessing the effect of noise pollution.

  2. Mother’s voice and heartbeat sounds elicit auditory plasticity in the human brain before full gestation

    PubMed Central

    Webb, Alexandra R.; Heller, Howard T.; Benson, Carol B.; Lahav, Amir

    2015-01-01

    Brain development is largely shaped by early sensory experience. However, it is currently unknown whether, how early, and to what extent the newborn’s brain is shaped by exposure to maternal sounds when the brain is most sensitive to early life programming. The present study examined this question in 40 infants born extremely prematurely (between 25- and 32-wk gestation) in the first month of life. Newborns were randomized to receive auditory enrichment in the form of audio recordings of maternal sounds (including their mother’s voice and heartbeat) or routine exposure to hospital environmental noise. The groups were otherwise medically and demographically comparable. Cranial ultrasonography measurements were obtained at 30 ± 3 d of life. Results show that newborns exposed to maternal sounds had a significantly larger auditory cortex (AC) bilaterally compared with control newborns receiving standard care. The magnitude of the right and left AC thickness was significantly correlated with gestational age but not with the duration of sound exposure. Measurements of head circumference and the widths of the frontal horn (FH) and the corpus callosum (CC) were not significantly different between the two groups. This study provides evidence for experience-dependent plasticity in the primary AC before the brain has reached full-term maturation. Our results demonstrate that despite the immaturity of the auditory pathways, the AC is more adaptive to maternal sounds than environmental noise. Further studies are needed to better understand the neural processes underlying this early brain plasticity and its functional implications for future hearing and language development. PMID:25713382

  3. Developmental Experience Alters Information Coding in Auditory Midbrain and Forebrain Neurons

    PubMed Central

    Woolley, Sarah M. N.; Hauber, Mark E.; Theunissen, Frederic E.

    2010-01-01

    In songbirds, species identity and developmental experience shape vocal behavior and behavioral responses to vocalizations. The interaction of species identity and developmental experience may also shape the coding properties of sensory neurons. We tested whether responses of auditory midbrain and forebrain neurons to songs differed between species and between groups of conspecific birds with different developmental exposure to song. We also compared responses of individual neurons to conspecific and heterospecific songs. Zebra and Bengalese finches that were raised and tutored by conspecific birds, and zebra finches that were cross-tutored by Bengalese finches were studied. Single-unit responses to zebra and Bengalese finch songs were recorded and analyzed by calculating mutual information, response reliability, mean spike rate, fluctuations in time-varying spike rate, distributions of time-varying spike rates, and neural discrimination of individual songs. Mutual information quantifies a response’s capacity to encode information about a stimulus. In midbrain and forebrain neurons, mutual information was significantly higher in normal zebra finch neurons than in Bengalese finch and cross-tutored zebra finch neurons, but not between Bengalese finch and cross-tutored zebra finch neurons. Information rate differences were largely due to spike rate differences. Mutual information did not differ between responses to conspecific and heterospecific songs. Therefore, neurons from normal zebra finches encoded more information about songs than did neurons from other birds, but conspecific and heterospecific songs were encoded equally. Neural discrimination of songs and mutual information were highly correlated. Results demonstrate that developmental exposure to vocalizations shapes the information coding properties of songbird auditory neurons. PMID:20039264

  4. Voice responses to changes in pitch of voice or tone auditory feedback

    NASA Astrophysics Data System (ADS)

    Sivasankar, Mahalakshmi; Bauer, Jay J.; Babu, Tara; Larson, Charles R.

    2005-02-01

    The present study was undertaken to examine if a subject's voice F0 responded not only to perturbations in pitch of voice feedback but also to changes in pitch of a side tone presented congruent with voice feedback. Small-magnitude, brief-duration perturbations in pitch of voice or tone auditory feedback were randomly introduced during sustained vowel phonations. Results demonstrated a higher rate and larger magnitude of voice F0 responses to changes in pitch of the voice compared with a triangular-shaped tone (experiment 1) or a pure tone (experiment 2). However, response latencies did not differ across voice or tone conditions. Data suggest that subjects responded to the change in F0 rather than harmonic frequencies of auditory feedback because voice F0 response prevalence, magnitude, or latency did not statistically differ across triangular-shaped tone or pure-tone feedback. Results indicate the audio-vocal system is sensitive to the change in pitch of a variety of sounds, which may represent a flexible system capable of adapting to changes in the subject's voice. However, lower prevalence and smaller responses to tone pitch-shifted signals suggest that the audio-vocal system may resist changes to the pitch of other environmental sounds when voice feedback is present.

  5. The Effect of Photon Statistics and Pulse Shaping on the Performance of the Wiener Filter Crystal Identification Algorithm Applied to LabPET Phoswich Detectors

    NASA Astrophysics Data System (ADS)

    Yousefzadeh, Hoorvash Camilia; Lecomte, Roger; Fontaine, Réjean

    2012-06-01

    A fast Wiener filter-based crystal identification (WFCI) algorithm was recently developed to discriminate crystals with close scintillation decay times in phoswich detectors. Despite the promising performance of WFCI, the influence of various physical factors and electrical noise sources of the data acquisition chain (DAQ) on the crystal identification process was not fully investigated. This paper examines the effect of different noise sources, such as photon statistics, avalanche photodiode (APD) excess multiplication noise, and front-end electronic noise, as well as the influence of different shaping filters on the performance of the WFCI algorithm. To this end, a PET-like signal simulator based on a model of the LabPET DAQ, a small animal APD-based digital PET scanner, was developed. Simulated signals were generated under various noise conditions with CR-RC shapers of order 1, 3, and 5 having different time constants (τ). Applying the WFCI algorithm to these simulated signals showed that the non-stationary Poisson photon statistics is the main contributor to the identification error of WFCI algorithm. A shaping filter of order 1 with τ = 50 ns yielded the best WFCI performance (error 1%), while a longer shaping time of τ = 100 ns slightly degraded the WFCI performance (error 3%). Filters of higher orders with fast shaping time constants (10-33 ns) also produced good WFCI results (error 1.4% to 1.6%). This study shows the advantage of the pulse simulator in evaluating various DAQ conditions and confirms the influence of the detection chain on the WFCI performance.
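
    The CR-RC shapers of order n referred to above are commonly written as H(s) = (sτ / (1 + sτ)) · (1 / (1 + sτ))^n, i.e., one differentiation followed by n integrations. The sketch below generates the shaped impulse responses for orders 1, 3, and 5 so the trade-off between shaping order and time constant can be inspected; the component values are generic textbook CR-RC shaping, not the LabPET front-end parameters.

      # Sketch: impulse responses of generic CR-RC^n shaping filters (orders n = 1, 3, 5)
      # with a 50 ns time constant, as stand-ins for the shapers discussed above.
      # Time is handled in microseconds to keep the polynomial coefficients well scaled.
      import numpy as np
      from scipy.signal import lti

      tau = 0.05                              # shaping time constant: 50 ns, in microseconds
      t = np.linspace(0.0, 1.0, 2000)         # 1 microsecond observation window

      for n in (1, 3, 5):
          # H(s) = (s*tau / (1 + s*tau)) * (1 / (1 + s*tau))**n, coefficients in decreasing powers
          den = np.array([1.0])
          for _ in range(n + 1):
              den = np.convolve(den, [tau, 1.0])   # multiply by (tau*s + 1)
          system = lti([tau, 0.0], den)
          _, h = system.impulse(T=t)
          print(f"order {n}: peaking time ~ {t[np.argmax(h)] * 1000:.0f} ns")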

  6. Characterization Of Improved Binary Phase-Only Filters In A Real-Time Coherent Optical Correlation System

    NASA Astrophysics Data System (ADS)

    Flannery, D.; Keller, P.; Cartwright, S.; Loomis, J.

    1987-06-01

    Attractive correlation system performance potential is possible using magneto-optic spatial light modulators (SLM) to implement binary phase-only reference filters at high rates, provided the correlation performance of such reduced-information-content filters is adequate for the application. In the case studied here, the desired filter impulse response is a rectangular shape, which cannot be achieved with the usual binary phase-only filter formulation. The correlation application problem is described and techniques for synthesizing improved filter impulse response are considered. A compromise solution involves the cascading of a fixed amplitude-only weighting mask with the binary phase-only SLM. Based on simulations presented, this approach provides improved impulse responses and good correlation performance, while retaining the critical feature of real-time variations of the size, shape, and orientation of the rectangle by electronic programming of the phase pattern in the SLM. Simulations indicate that, for at least one very challenging input scene clutter situation, these filters provide higher correlation signal-to-noise than does "ideal" correlation, i.e. using a perfect rectangle filter response.

  7. [Communication and auditory behavior obtained by auditory evoked potentials in mammals, birds, amphibians, and reptiles].

    PubMed

    Arch-Tirado, Emilio; Collado-Corona, Miguel Angel; Morales-Martínez, José de Jesús

    2004-01-01

    The species studied were: amphibians, Rana catesbeiana (bullfrog, 30 animals); reptiles, Sceloporus torquatus (common small lizard, 22 animals); birds, Columba livia (common dove, 20 animals); and mammals, Cavia porcellus (guinea pig, 20 animals). With regard to housing, all animals were maintained at the Institute of Human Communication Disorders, were fed food appropriate to each species, and had water available ad libitum. For the recording of auditory brainstem evoked potentials, amphibians, birds, and mammals were anesthetized by injection of ketamine at 20, 25, and 50 mg/kg, respectively; reptiles were anesthetized by cooling (6 degrees C). Needle electrodes were placed along an imaginary midsagittal line between both ears and eyes, behind the right ear, and behind the left ear. Stimulation was delivered in a noise-free room through a loudspeaker in free field. The signal was filtered between 100 and 3,000 Hz and analyzed with an evoked-potential computer (Racia APE 78). In amphibians, the evoked wave responses showed greater latencies than those of the other species. In reptiles, latencies were shorter than in amphibians. Birds showed the smallest latency values, whereas in guinea pigs latencies were greater than in doves, although guinea pigs responded to stimulation at 10 dB, demonstrating the best auditory threshold of the four species studied. Finally, it was corroborated that the auditory threshold of each species decreases as one advances along the phylogenetic scale. From these recordings, we can state that brainstem evoked responses become more complex and show shorter absolute latencies with advancement along the phylogenetic scale; correspondingly, the auditory thresholds of the studied species improve in line with their phylogenetic position. These data indicate that the processing of auditory information is more complex in more evolved species.

  8. Incorporating Midbrain Adaptation to Mean Sound Level Improves Models of Auditory Cortical Processing

    PubMed Central

    Schoppe, Oliver; King, Andrew J.; Schnupp, Jan W.H.; Harper, Nicol S.

    2016-01-01

    Adaptation to stimulus statistics, such as the mean level and contrast of recently heard sounds, has been demonstrated at various levels of the auditory pathway. It allows the nervous system to operate over the wide range of intensities and contrasts found in the natural world. Yet current standard models of the response properties of auditory neurons do not incorporate such adaptation. Here we present a model of neural responses in the ferret auditory cortex (the IC Adaptation model), which takes into account adaptation to mean sound level at a lower level of processing: the inferior colliculus (IC). The model performs high-pass filtering with frequency-dependent time constants on the sound spectrogram, followed by half-wave rectification, and passes the output to a standard linear–nonlinear (LN) model. We find that the IC Adaptation model consistently predicts cortical responses better than the standard LN model for a range of synthetic and natural stimuli. The IC Adaptation model introduces no extra free parameters, so it improves predictions without sacrificing parsimony. Furthermore, the time constants of adaptation in the IC appear to be matched to the statistics of natural sounds, suggesting that neurons in the auditory midbrain predict the mean level of future sounds and adapt their responses appropriately. SIGNIFICANCE STATEMENT An ability to accurately predict how sensory neurons respond to novel stimuli is critical if we are to fully characterize their response properties. Attempts to model these responses have had a distinguished history, but it has proven difficult to improve their predictive power significantly beyond that of simple, mostly linear receptive field models. Here we show that auditory cortex receptive field models benefit from a nonlinear preprocessing stage that replicates known adaptation properties of the auditory midbrain. This improves their predictive power across a wide range of stimuli but keeps model complexity low as it introduces no new free parameters. Incorporating the adaptive coding properties of neurons will likely improve receptive field models in other sensory modalities too. PMID:26758822
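
    As a rough illustration of the preprocessing stage described above, the sketch below applies a first-order high-pass (running-mean-subtracting) filter with a frequency-dependent time constant to each channel of a spectrogram and then half-wave rectifies the result; the output would then feed a standard LN model. The time-constant range and the toy spectrogram are assumptions, not the authors' fitted values.

      # Sketch of an IC-style adaptation stage: per-frequency-channel high-pass filtering
      # (subtracting an exponentially weighted running mean) plus half-wave rectification.
      # Time constants and the toy spectrogram are illustrative assumptions.
      import numpy as np

      def ic_adaptation(spectrogram: np.ndarray, frame_rate: float, taus: np.ndarray) -> np.ndarray:
          """spectrogram: (n_freq, n_time) log-energy; taus: (n_freq,) time constants in s."""
          n_freq, n_time = spectrogram.shape
          out = np.zeros_like(spectrogram)
          running_mean = spectrogram[:, 0].copy()
          alphas = np.exp(-1.0 / (taus * frame_rate))   # per-channel smoothing factors
          for t in range(n_time):
              running_mean = alphas * running_mean + (1.0 - alphas) * spectrogram[:, t]
              out[:, t] = np.maximum(spectrogram[:, t] - running_mean, 0.0)  # half-wave rectify
          return out

      # Toy example: 32 frequency channels, 200 frames at 100 frames/s,
      # with slower adaptation assumed in the low-frequency channels.
      rng = np.random.default_rng(1)
      spec = rng.standard_normal((32, 200)) + np.linspace(0, 3, 200)   # drifting mean level
      taus = np.linspace(0.5, 0.1, 32)
      adapted = ic_adaptation(spec, frame_rate=100.0, taus=taus)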

  9. Subthalamic nucleus deep brain stimulation affects distractor interference in auditory working memory.

    PubMed

    Camalier, Corrie R; Wang, Alice Y; McIntosh, Lindsey G; Park, Sohee; Neimat, Joseph S

    2017-03-01

    Computational and theoretical accounts hypothesize the basal ganglia play a supramodal "gating" role in the maintenance of working memory representations, especially in preservation from distractor interference. There are currently two major limitations to this account. The first is that supporting experiments have focused exclusively on the visuospatial domain, leaving questions as to whether such "gating" is domain-specific. The second is that current evidence relies on correlational measures, as it is extremely difficult to causally and reversibly manipulate subcortical structures in humans. To address these shortcomings, we examined non-spatial, auditory working memory performance during reversible modulation of the basal ganglia, an approach afforded by deep brain stimulation of the subthalamic nucleus. We found that subthalamic nucleus stimulation impaired auditory working memory performance, specifically in the group tested in the presence of distractors, even though the distractors were predictable and completely irrelevant to the encoding of the task stimuli. This study provides key causal evidence that the basal ganglia act as a supramodal filter in working memory processes, further adding to our growing understanding of their role in cognition. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Bimanual tapping of a syncopated rhythm reveals hemispheric preferences for relative movement frequencies.

    PubMed

    Pflug, Anja; Gompf, Florian; Kell, Christian Alexander

    2017-08-01

    In bimanual multifrequency tapping, right-handers commonly use the right hand to tap the relatively higher rate and the left hand to tap the relatively lower rate. This could be due to hemispheric specializations for the processing of relative frequencies. An extension of the double-filtering-by-frequency theory to motor control proposes a left hemispheric specialization for the control of relatively high and a right hemispheric specialization for the control of relatively low tapping rates. We investigated timing variability and rhythmic accentuation in right handers tapping mono- and multifrequent bimanual rhythms to test the predictions of the double-filtering-by-frequency theory. Yet, hemispheric specializations for the processing of relative tapping rates could be masked by a left hemispheric dominance for the control of known sequences. Tapping was thus either performed in an overlearned quadruple meter (tap of the slow rhythm on the first auditory beat) or in a syncopated quadruple meter (tap of the slow rhythm on the fourth auditory beat). Independent of syncopation, the right hand outperformed the left hand in timing accuracy for fast tapping. A left hand timing benefit for slow tapping rates as predicted by the double-filtering-by-frequency theory was only found in the syncopated tapping group. This suggests a right hemisphere preference for the control of slow tapping rates when rhythms are not overlearned. Error rates indicate that overlearned rhythms represent hierarchically structured meters that are controlled by a single timer that could potentially reside in the left hemisphere. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. MRT letter: Guided filtering of image focus volume for 3D shape recovery of microscopic objects.

    PubMed

    Mahmood, Muhammad Tariq

    2014-12-01

    In this letter, a shape from focus (SFF) method is proposed that utilizes guided image filtering to enhance the image focus volume efficiently. First, the image focus volume is computed using a conventional focus measure. Then each layer of the image focus volume is filtered using guided filtering. In this work, the all-in-focus image, which can be obtained from the initial focus volume, is used as the guidance image. Finally, an improved depth map is obtained from the filtered image focus volume by maximizing the focus measure along the optical axis. The proposed SFF method is efficient and provides better depth maps. The improved performance is highlighted by conducting several experiments using image sequences of simulated and real microscopic objects. The comparative analysis demonstrates the effectiveness of the proposed SFF method. © 2014 Wiley Periodicals, Inc.
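
    A compact sketch of the pipeline described above: a conventional focus measure (here an absolute-Laplacian stand-in) builds the focus volume, each layer is smoothed with a guided filter that uses the all-in-focus image as guidance, and the depth index is the per-pixel argmax along the stack. The focus measure, the guided-filter radius and regularization, and the way the all-in-focus image is formed are assumptions for illustration.

      # Sketch of guided-filtered shape-from-focus: focus volume -> guided filtering of
      # each layer (all-in-focus image as guidance) -> per-pixel argmax depth map.
      # Focus measure, filter radius/eps, and the toy stack are illustrative assumptions.
      import numpy as np
      from scipy.ndimage import laplace, uniform_filter

      def guided_filter(guide: np.ndarray, src: np.ndarray, radius: int = 4, eps: float = 1e-3) -> np.ndarray:
          """Classic box-filter guided filter with a single-channel guidance image."""
          size = 2 * radius + 1
          mean_g = uniform_filter(guide, size)
          mean_s = uniform_filter(src, size)
          var_g = uniform_filter(guide * guide, size) - mean_g * mean_g
          cov_gs = uniform_filter(guide * src, size) - mean_g * mean_s
          a = cov_gs / (var_g + eps)
          b = mean_s - a * mean_g
          return uniform_filter(a, size) * guide + uniform_filter(b, size)

      def depth_from_focus(stack: np.ndarray) -> np.ndarray:
          """stack: (n_slices, H, W) grayscale focal stack -> integer depth map (H, W)."""
          focus_volume = np.array([np.abs(laplace(frame)) for frame in stack])  # focus measure
          all_in_focus = stack[focus_volume.argmax(axis=0),
                               np.arange(stack.shape[1])[:, None],
                               np.arange(stack.shape[2])[None, :]]              # guidance image
          filtered = np.array([guided_filter(all_in_focus, layer) for layer in focus_volume])
          return filtered.argmax(axis=0)

      # Toy 5-slice stack of random images, just to exercise the code path.
      depth = depth_from_focus(np.random.default_rng(2).random((5, 64, 64)))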

  12. Hydraulic modeling of clay ceramic water filters for point-of-use water treatment.

    PubMed

    Schweitzer, Ryan W; Cunningham, Jeffrey A; Mihelcic, James R

    2013-01-02

    The acceptability of ceramic filters for point-of-use water treatment depends not only on the quality of the filtered water, but also on the quantity of water the filters can produce. This paper presents two mathematical models for the hydraulic performance of ceramic water filters under typical usage. A model is developed for two common filter geometries: paraboloid- and frustum-shaped. Both models are calibrated and evaluated by comparison to experimental data. The hydraulic models are able to predict the following parameters as functions of time: water level in the filter (h), instantaneous volumetric flow rate of filtrate (Q), and cumulative volume of water produced (V). The models' utility is demonstrated by applying them to estimate how the volume of water produced depends on factors such as the filter shape and the frequency of filling. Both models predict that the volume of water produced can be increased by about 45% if users refill the filter three times per day versus only once per day. Also, the models predict that filter geometry affects the volume of water produced: for two filters with equal volume, equal wall thickness, and equal hydraulic conductivity, a filter that is tall and thin will produce as much as 25% more water than one which is shallow and wide. We suggest that the models can be used as tools to help optimize filter performance.
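
    As a very rough companion to the paraboloid/frustum models above, the sketch below integrates a Darcy-flux water balance for a frustum-shaped filter to obtain h(t), Q(t), and cumulative V(t). The geometry, wall thickness, hydraulic conductivity, and the assumption that each wall element is driven by the water depth above it are illustrative simplifications, not the calibrated models from the paper.

      # Rough sketch: Darcy-flux drainage of a frustum-shaped ceramic filter.
      # Geometry, conductivity, and wall thickness are made-up illustrative values;
      # the driving head at each wall element is taken as the water depth above it.
      import numpy as np

      K = 1e-6        # hydraulic conductivity of the ceramic (m/s), assumed
      wall = 0.012    # wall thickness (m), assumed
      r0, r1, H = 0.09, 0.12, 0.25   # bottom radius, top radius, height (m), assumed

      def radius(z):
          return r0 + (r1 - r0) * z / H

      def outflow(h: float, n_slices: int = 50) -> float:
          """Instantaneous filtrate flow Q (m^3/s) for water level h."""
          q = K * h / wall * np.pi * r0 ** 2                 # through the base
          z = np.linspace(0.0, h, n_slices)
          dz = h / max(n_slices - 1, 1)
          # lateral wall: local flux K*(h - z)/wall over ring area 2*pi*r(z)*dz
          q += np.sum(K * (h - z) / wall * 2.0 * np.pi * radius(z) * dz)
          return q

      h, volume, dt, t = H, 0.0, 10.0, 0.0      # start full; 10 s Euler steps
      while h > 1e-4:
          Q = outflow(h)
          h -= Q / (np.pi * radius(h) ** 2) * dt  # dh/dt = -Q / A(h)
          volume += Q * dt
          t += dt
      print(f"Drained {volume * 1000:.1f} L in {t / 3600:.1f} h")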

  13. On the influence of high-pass filtering on ICA-based artifact reduction in EEG-ERP.

    PubMed

    Winkler, Irene; Debener, Stefan; Müller, Klaus-Robert; Tangermann, Michael

    2015-01-01

    Standard artifact removal methods for electroencephalographic (EEG) signals are either based on Independent Component Analysis (ICA) or they regress out ocular activity measured at electrooculogram (EOG) channels. Successful ICA-based artifact reduction relies on suitable pre-processing. Here we systematically evaluate the effects of high-pass filtering at different frequencies. Offline analyses were based on event-related potential data from 21 participants performing a standard auditory oddball task and an automatic artifactual component classifier method (MARA). As a pre-processing step for ICA, high-pass filtering between 1-2 Hz consistently produced good results in terms of signal-to-noise ratio (SNR), single-trial classification accuracy and the percentage of 'near-dipolar' ICA components. Relative to no artifact reduction, ICA-based artifact removal significantly improved SNR and classification accuracy. This was not the case for a regression-based approach to remove EOG artifacts.
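
    A minimal sketch of the pre-processing comparison described above: the same (synthetic) recording is high-pass filtered at several cutoffs before ICA, and each decomposition could then be scored, e.g., by SNR, classification accuracy, or a component classifier such as MARA (not reimplemented here). Cutoffs, data, and the use of FastICA are illustrative assumptions.

      # Sketch: compare high-pass cutoffs as ICA pre-processing (the 1-2 Hz range
      # recommended above vs. lower cutoffs). Data and scoring are placeholders.
      import numpy as np
      from scipy.signal import butter, sosfiltfilt
      from sklearn.decomposition import FastICA

      fs = 100.0
      rng = np.random.default_rng(3)
      eeg = np.cumsum(rng.standard_normal((16, 60 * int(fs))), axis=1)  # drifting fake EEG

      for cutoff in (0.1, 0.5, 1.0, 2.0):                 # high-pass cutoffs in Hz
          sos = butter(2, cutoff / (fs / 2), btype="highpass", output="sos")
          filtered = sosfiltfilt(sos, eeg, axis=1)
          ica = FastICA(n_components=16, random_state=0, max_iter=1000)
          sources = ica.fit_transform(filtered.T)
          # Placeholder quality score; in practice use SNR, single-trial classification
          # accuracy, or the share of near-dipolar components, as in the study above.
          print(cutoff, np.mean(np.abs(sources)))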

  14. Quantitative analysis of neuronal response properties in primary and higher-order auditory cortical fields of awake house mice (Mus musculus)

    PubMed Central

    Joachimsthaler, Bettina; Uhlmann, Michaela; Miller, Frank; Ehret, Günter; Kurt, Simone

    2014-01-01

    Because of its great genetic potential, the mouse (Mus musculus) has become a popular model species for studies on hearing and sound processing along the auditory pathways. Here, we present the first comparative study on the representation of neuronal response parameters to tones in primary and higher-order auditory cortical fields of awake mice. We quantified 12 neuronal properties of tone processing in order to estimate similarities and differences of function between the fields, and to discuss how far auditory cortex (AC) function in the mouse is comparable to that in awake monkeys and cats. Extracellular recordings were made from 1400 small clusters of neurons from cortical layers III/IV in the primary fields AI (primary auditory field) and AAF (anterior auditory field), and the higher-order fields AII (second auditory field) and DP (dorsoposterior field). Field specificity was shown with regard to spontaneous activity, correlation between spontaneous and evoked activity, tone response latency, sharpness of frequency tuning, temporal response patterns (occurrence of phasic responses, phasic-tonic responses, tonic responses, and off-responses), and degree of variation between the characteristic frequency (CF) and the best frequency (BF) (CF–BF relationship). Field similarities were noted as significant correlations between CFs and BFs, V-shaped frequency tuning curves, similar minimum response thresholds and non-monotonic rate-level functions in approximately two-thirds of the neurons. Comparative and quantitative analyses showed that the measured response characteristics were, to various degrees, susceptible to influences of anesthetics. Therefore, studies of neuronal responses in the awake AC are important in order to establish adequate relationships between neuronal data and auditory perception and acoustic response behavior. PMID:24506843

  15. Estimating the relative weights of visual and auditory tau versus heuristic-based cues for time-to-contact judgments in realistic, familiar scenes by older and younger adults.

    PubMed

    Keshavarz, Behrang; Campos, Jennifer L; DeLucia, Patricia R; Oberfeld, Daniel

    2017-04-01

    Estimating time to contact (TTC) involves multiple sensory systems, including vision and audition. Previous findings suggested that the ratio of an object's instantaneous optical size/sound intensity to its instantaneous rate of change in optical size/sound intensity (τ) drives TTC judgments. Other evidence has shown that heuristic-based cues are used, including final optical size or final sound pressure level. Most previous studies have used decontextualized and unfamiliar stimuli (e.g., geometric shapes on a blank background). Here we evaluated TTC estimates by using a traffic scene with an approaching vehicle to evaluate the weights of visual and auditory TTC cues under more realistic conditions. Younger (18-39 years) and older (65+ years) participants made TTC estimates in three sensory conditions: visual-only, auditory-only, and audio-visual. Stimuli were presented within an immersive virtual-reality environment, and cue weights were calculated for both visual cues (e.g., visual τ, final optical size) and auditory cues (e.g., auditory τ, final sound pressure level). The results demonstrated the use of visual τ as well as heuristic cues in the visual-only condition. TTC estimates in the auditory-only condition, however, were primarily based on an auditory heuristic cue (final sound pressure level), rather than on auditory τ. In the audio-visual condition, the visual cues dominated overall, with the highest weight being assigned to visual τ by younger adults, and a more equal weighting of visual τ and heuristic cues in older adults. Overall, better characterizing the effects of combined sensory inputs, stimulus characteristics, and age on the cues used to estimate TTC will provide important insights into how these factors may affect everyday behavior.
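
    For concreteness, the τ variable referred to above is the ratio of the object's instantaneous optical angle (or sound intensity) to its rate of change, which approximates time to contact for a constant approach speed. A minimal sketch with made-up numbers:

      # Sketch: visual tau as optical angle divided by its rate of change, which
      # approximates time-to-contact for constant approach speed. Numbers are made up.
      import numpy as np

      distance0, speed, width = 50.0, 10.0, 2.0          # m, m/s, m (assumed vehicle)
      t = np.linspace(0.0, 4.0, 401)
      distance = distance0 - speed * t
      theta = 2.0 * np.arctan(width / (2.0 * distance))  # optical angle subtended
      theta_dot = np.gradient(theta, t)
      tau = theta / theta_dot                            # tau-based TTC estimate
      true_ttc = distance / speed
      print(tau[100], true_ttc[100])                     # close for small optical angles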

  16. Pulse Shaped Constant Envelope 8-PSK Modulation Study

    NASA Technical Reports Server (NTRS)

    Tao, Jianping; Horan, Sheila

    1997-01-01

    This report provides simulation results for constant envelope pulse shaped 8-level Phase Shift Keying (8-PSK) modulation for end-to-end system performance. In order to increase bandwidth utilization, pulse shaping is applied to signals before they are modulated. Simulation results of power spectra and measurements of bit errors produced by pulse shaping in a non-linear channel with Additive White Gaussian Noise (AWGN) are presented. The pulse shaping filters can be placed before (Type B) or after (Type A) the signals are modulated. Three kinds of baseband filters, 5th-order Butterworth, 3rd-order Bessel and Square-Root Raised Cosine with different BTs or roll-off factors, are utilized in the simulations. The simulations were performed on a Signal Processing Worksystem (SPW).
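
    A minimal baseband sketch of Type A (post-modulation) pulse shaping as described above: Gray-mapped 8-PSK symbols are upsampled, low-pass filtered with a 5th-order Butterworth filter, and the occupied spectrum is estimated. The symbol rate, BT product, and the use of Welch's method are assumptions rather than the report's SPW setup.

      # Sketch of Type A (post-modulation) pulse shaping for 8-PSK at complex baseband:
      # map tribits to 8-PSK symbols, upsample, filter with a 5th-order Butterworth
      # low-pass, and estimate the power spectrum. Parameters are illustrative.
      import numpy as np
      from scipy.signal import butter, lfilter, welch

      rng = np.random.default_rng(4)
      sps = 8                                   # samples per symbol, assumed
      n_symbols = 4096
      symbol_rate = 1.0                         # normalized symbol rate
      fs = sps * symbol_rate

      tribits = rng.integers(0, 8, n_symbols)
      gray = np.array([0, 1, 3, 2, 6, 7, 5, 4])          # Gray mapping of 3-bit values
      phases = 2.0 * np.pi * gray[tribits] / 8.0
      symbols = np.exp(1j * phases)                      # unit-amplitude 8-PSK

      upsampled = np.zeros(n_symbols * sps, dtype=complex)
      upsampled[::sps] = symbols                         # impulse train at symbol instants

      bt = 0.5                                           # bandwidth-time product, assumed
      b, a = butter(5, bt * symbol_rate / (fs / 2))
      shaped = lfilter(b, a, upsampled)                  # complex baseband I/Q after shaping

      freqs, psd = welch(shaped, fs=fs, nperseg=1024, return_onesided=False)
      print(f"Peak PSD at {freqs[np.argmax(psd)]:.3f} (normalized to the symbol rate)")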

  17. Cortical potentials evoked by confirming and disconfirming feedback following an auditory discrimination.

    NASA Technical Reports Server (NTRS)

    Squires, K. C.; Hillyard, S. A.; Lindsay, P. H.

    1973-01-01

    Vertex potentials elicited by visual feedback signals following an auditory intensity discrimination have been studied with eight subjects. Feedback signals which confirmed the prior sensory decision elicited small P3s, while disconfirming feedback elicited P3s that were larger. On the average, the latency of P3 was also found to increase with increasing disparity between the judgment and the feedback information. These effects were part of an overall dichotomy in wave shape following confirming vs disconfirming feedback. These findings are incorporated in a general model of the role of P3 in perceptual decision making.

  18. Novel non-periodic spoof surface plasmon polaritons with H-shaped cells and its application to high selectivity wideband bandpass filter.

    PubMed

    Gao, Xin; Che, Wenquan; Feng, Wenjie

    2018-02-06

    In this paper, a novel kind of non-periodic spoof surface plasmon polariton (SSPP) structure with H-shaped cells is proposed. A cutoff frequency exists inherently for the conventional comb-shaped SSPP structure, which is a periodic grooved structure fed by a conventional coplanar waveguide (CPW). In this work, instead of increasing the depth of all the grooves, two H-shaped cells are introduced to effectively reduce the cutoff frequency of the conventional comb-shaped SSPPs (about 12 GHz) for a compact design. More importantly, the guided waves can be gradually transformed into SSPP waves with high efficiency, and better impedance matching from 50 Ω to the novel SSPP strip is achieved. Based on the proposed non-periodic SSPPs with H-shaped cells, a wideband bandpass filter (3-dB fractional bandwidth of 68%) is realized by integrating a spiral-shaped defected ground structure (DGS) etched on the CPW. Specifically, the filter shows high passband selectivity (Δf3dB/Δf20dB = 0.91) and a wide upper stopband with -20 dB rejection. A prototype is fabricated for demonstration. Good agreement is observed between the measured and simulated results, indicating potential applications in integrated plasmonic devices and circuits at microwave and even THz frequencies.

  19. An ERP study on whether semantic integration exists in processing ecologically unrelated audio-visual information.

    PubMed

    Liu, Baolin; Meng, Xianyao; Wang, Zhongning; Wu, Guangning

    2011-11-14

    In the present study, we used event-related potentials (ERPs) to examine whether semantic integration occurs for ecologically unrelated audio-visual information. Videos with synchronous audio-visual information were used as stimuli, where the auditory stimuli were sine-wave sounds with different sound levels, and the visual stimuli were simple geometric figures with different areas. In the experiment, participants were shown an initial display containing a single shape (drawn from a set of 6 shapes) with a fixed size (14 cm²) simultaneously with a 3500 Hz tone of a fixed intensity (80 dB). Following a short delay, another shape/tone pair was presented and the relationship between the size of the shape and the intensity of the tone varied across trials: in the V+A- condition, a large shape was paired with a soft tone; in the V+A+ condition, a large shape was paired with a loud tone, and so forth. The ERP results revealed that an N400 effect was elicited under the VA- condition (V+A- and V-A+) as compared to the VA+ condition (V+A+ and V-A-). This shows that semantic integration can occur when simultaneous, ecologically unrelated auditory and visual stimuli enter the human brain. We consider that this semantic integration is based on the semantic constraint of audio-visual information, which might come from long-term learned associations stored in the human brain and short-term experience of incoming information. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  20. Optimum constrained image restoration filters

    NASA Technical Reports Server (NTRS)

    Riemer, T. E.; Mcgillem, C. D.

    1974-01-01

    The filter was developed in Hilbert space by minimizing the radius of gyration of the overall or composite system point-spread function subject to constraints on the radius of gyration of the restoration filter point-spread function, the total noise power in the restored image, and the shape of the composite system frequency spectrum. An iterative technique is introduced which alters the shape of the optimum composite system point-spread function, producing a suboptimal restoration filter which suppresses undesirable secondary oscillations. Finally this technique is applied to multispectral scanner data obtained from the Earth Resources Technology Satellite to provide resolution enhancement. An experimental approach to the problems involving estimation of the effective scanner aperture and matching the ERTS data to available restoration functions is presented.

  1. A deterministic compressive sensing model for bat biosonar.

    PubMed

    Hague, David A; Buck, John R; Bilik, Igal

    2012-12-01

    The big brown bat (Eptesicus fuscus) uses frequency modulated (FM) echolocation calls to accurately estimate range and resolve closely spaced objects in clutter and noise. They resolve glints spaced down to 2 μs in time delay, which surpasses what traditional signal processing techniques can achieve using the same echolocation call. The Matched Filter (MF) attains 10-12 μs resolution while the Inverse Filter (IF) achieves higher resolution at the cost of significantly degraded detection performance. Recent work by Fontaine and Peremans [J. Acoust. Soc. Am. 125, 3052-3059 (2009)] demonstrated that a sparse representation of bat echolocation calls coupled with a decimating sensing method facilitates distinguishing closely spaced objects at realistic SNRs. Their work raises the intriguing question of whether sensing approaches structured more like a mammalian auditory system contain the necessary information for the hyper-resolution observed in behavioral tests. This research estimates sparse echo signatures using a gammatone filterbank decimation sensing method which loosely models the processing of the bat's auditory system. The decimated filterbank outputs are processed with ℓ1 minimization. Simulations demonstrate that this model maintains higher resolution than the MF and significantly better detection performance than the IF for SNRs of 5-45 dB while undersampling the return signal by a factor of six.
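
    The sensing and recovery chain described above can be caricatured as follows: the echo is passed through a small gammatone filterbank, the outputs are decimated, and a sparse glint sequence is recovered by ℓ1-regularized least squares (here via plain ISTA). The filter center frequencies and bandwidths, the decimation factor, and the solver are illustrative stand-ins for the authors' model.

      # Caricature of the sparse-echo model: gammatone filterbank + decimation as the
      # sensing operator, ISTA for l1-regularized recovery of a sparse glint sequence.
      # Center frequencies, bandwidths, decimation, and lambda are made-up values.
      import numpy as np
      from scipy.linalg import toeplitz

      fs, n = 100_000, 512
      t = np.arange(n) / fs

      def gammatone(fc: float, bw: float) -> np.ndarray:
          g = t ** 3 * np.exp(-2.0 * np.pi * bw * t) * np.cos(2.0 * np.pi * fc * t)
          return g / np.linalg.norm(g)

      # Sensing matrix: convolution with each filter, then keeping every 6th output sample.
      filters = [gammatone(fc, 0.2 * fc) for fc in (20e3, 28e3, 36e3, 44e3)]
      A = np.vstack([toeplitz(g, np.zeros(n))[::6] for g in filters])

      rng = np.random.default_rng(5)
      x_true = np.zeros(n)
      x_true[[100, 103]] = [1.0, 0.8]                 # two closely spaced glints
      y = A @ x_true + 0.01 * rng.standard_normal(A.shape[0])

      # ISTA for min_x 0.5*||y - Ax||^2 + lam*||x||_1
      lam, step = 0.01, 1.0 / np.linalg.norm(A, 2) ** 2
      x = np.zeros(n)
      for _ in range(500):
          grad = A.T @ (A @ x - y)
          z = x - step * grad
          x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold
      print(np.flatnonzero(np.abs(x) > 0.1))          # significant components (ideally near 100, 103)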

  2. Auditory “bubbles”: Efficient classification of the spectrotemporal modulations essential for speech intelligibility

    PubMed Central

    Venezia, Jonathan H.; Hickok, Gregory; Richards, Virginia M.

    2016-01-01

    Speech intelligibility depends on the integrity of spectrotemporal patterns in the signal. The current study is concerned with the speech modulation power spectrum (MPS), which is a two-dimensional representation of energy at different combinations of temporal and spectral (i.e., spectrotemporal) modulation rates. A psychophysical procedure was developed to identify the regions of the MPS that contribute to successful reception of auditory sentences. The procedure, based on the two-dimensional image classification technique known as “bubbles” (Gosselin and Schyns (2001). Vision Res. 41, 2261–2271), involves filtering (i.e., degrading) the speech signal by removing parts of the MPS at random, and relating filter patterns to observer performance (keywords identified) over a number of trials. The result is a classification image (CImg) or “perceptual map” that emphasizes regions of the MPS essential for speech intelligibility. This procedure was tested using normal-rate and 2×-time-compressed sentences. The results indicated: (a) CImgs could be reliably estimated in individual listeners in relatively few trials, (b) CImgs tracked changes in spectrotemporal modulation energy induced by time compression, though not completely, indicating that “perceptual maps” deviated from physical stimulus energy, and (c) the bubbles method captured variance in intelligibility not reflected in a common modulation-based intelligibility metric (spectrotemporal modulation index or STMI). PMID:27586738
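
    The core of the "bubbles" analysis described above reduces to a weighted sum of the random filter masks keyed to trial performance. A toy sketch over a discretized modulation-power-spectrum grid, with a simulated observer, is shown below; the grid size, bubble count, and simulated observer are invented for illustration and do not reproduce the study's stimuli or listeners.

      # Toy sketch of the "bubbles" classification-image computation on an MPS grid:
      # random Gaussian bubbles define which spectrotemporal modulations are retained,
      # a simulated observer succeeds when a "critical" region is revealed, and the
      # classification image is the mean mask on correct minus incorrect trials.
      import numpy as np

      rng = np.random.default_rng(6)
      n_temporal, n_spectral, n_trials, n_bubbles = 40, 30, 2000, 10
      tt, ss = np.meshgrid(np.arange(n_temporal), np.arange(n_spectral), indexing="ij")
      critical = np.exp(-(((tt - 8) / 4.0) ** 2 + ((ss - 5) / 3.0) ** 2))  # assumed key region

      masks = np.zeros((n_trials, n_temporal, n_spectral))
      correct = np.zeros(n_trials, dtype=bool)
      for i in range(n_trials):
          mask = np.zeros((n_temporal, n_spectral))
          for _ in range(n_bubbles):                      # place random Gaussian bubbles
              ct, cs = rng.uniform(0, n_temporal), rng.uniform(0, n_spectral)
              mask += np.exp(-(((tt - ct) / 3.0) ** 2 + ((ss - cs) / 3.0) ** 2))
          masks[i] = np.clip(mask, 0.0, 1.0)
          # Simulated observer: correct when enough of the critical region is passed,
          # plus a small guessing rate.
          correct[i] = (masks[i] * critical).sum() / critical.sum() > 0.5 or rng.random() < 0.1

      cimg = masks[correct].mean(axis=0) - masks[~correct].mean(axis=0)
      peak = np.unravel_index(np.argmax(cimg), cimg.shape)
      print("classification image peaks near", peak)      # typically close to (8, 5)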

  3. Biologically inspired circuitry that mimics mammalian hearing

    NASA Astrophysics Data System (ADS)

    Hubbard, Allyn; Cohen, Howard; Karl, Christian; Freedman, David; Mountain, David; Ziph-Schatzberg, Leah; Nourzad Karl, Marianne; Kelsall, Sarah; Gore, Tyler; Pu, Yirong; Yang, Zibing; Xing, Xinyu; Deligeorges, Socrates

    2009-05-01

    We are developing low-power microcircuitry that implements classification and direction-finding systems of very small size and small acoustic aperture. Our approach was inspired by the fact that small mammals are able to localize sounds even though their ears may be separated by as little as a centimeter. Gerbils, in particular, are good low-frequency localizers, which is a particularly difficult task, since a wavelength at 500 Hz is on the order of two feet. Given such signals, cross-correlation-based methods to determine direction fail badly in the presence of a small amount of noise, e.g., wind noise and noise clutter common to almost any realistic environment. Circuits are being developed using both analog and digital techniques, each of which processes signals in fundamentally the same way the peripheral auditory system of mammals processes sound. A filter bank represents the filtering done by the cochlea. The auditory nerve is implemented using a combination of an envelope detector, an automatic gain stage, and a unique one-bit A/D, which creates what amounts to a neural impulse. These impulses are used to extract pitch characteristics, which we use to classify sounds such as vehicles and small and large weaponry from AK-47s to 155 mm cannon, including mortar launches and impacts. In addition to the pitchograms, we also use neural nets for classification.

  4. Comparing auditory filter bandwidths, spectral ripple modulation detection, spectral ripple discrimination, and speech recognition: Normal and impaired hearing

    PubMed Central

    Davies-Venn, Evelyn; Nelson, Peggy; Souza, Pamela

    2015-01-01

    Some listeners with hearing loss show poor speech recognition scores in spite of using amplification that optimizes audibility. Beyond audibility, studies have suggested that suprathreshold abilities such as spectral and temporal processing may explain differences in amplified speech recognition scores. A variety of different methods has been used to measure spectral processing. However, the relationship between spectral processing and speech recognition is still inconclusive. This study evaluated the relationship between spectral processing and speech recognition in listeners with normal hearing and with hearing loss. Narrowband spectral resolution was assessed using auditory filter bandwidths estimated from simultaneous notched-noise masking. Broadband spectral processing was measured using the spectral ripple discrimination (SRD) task and the spectral ripple depth detection (SMD) task. Three different measures were used to assess unamplified and amplified speech recognition in quiet and noise. Stepwise multiple linear regression revealed that SMD at 2.0 cycles per octave (cpo) significantly predicted speech scores for amplified and unamplified speech in quiet and noise. Commonality analyses revealed that SMD at 2.0 cpo combined with SRD and equivalent rectangular bandwidth measures to explain most of the variance captured by the regression model. Results suggest that SMD and SRD may be promising clinical tools for diagnostic evaluation and predicting amplification outcomes. PMID:26233047

  5. Comparing auditory filter bandwidths, spectral ripple modulation detection, spectral ripple discrimination, and speech recognition: Normal and impaired hearing.

    PubMed

    Davies-Venn, Evelyn; Nelson, Peggy; Souza, Pamela

    2015-07-01

    Some listeners with hearing loss show poor speech recognition scores in spite of using amplification that optimizes audibility. Beyond audibility, studies have suggested that suprathreshold abilities such as spectral and temporal processing may explain differences in amplified speech recognition scores. A variety of different methods has been used to measure spectral processing. However, the relationship between spectral processing and speech recognition is still inconclusive. This study evaluated the relationship between spectral processing and speech recognition in listeners with normal hearing and with hearing loss. Narrowband spectral resolution was assessed using auditory filter bandwidths estimated from simultaneous notched-noise masking. Broadband spectral processing was measured using the spectral ripple discrimination (SRD) task and the spectral ripple depth detection (SMD) task. Three different measures were used to assess unamplified and amplified speech recognition in quiet and noise. Stepwise multiple linear regression revealed that SMD at 2.0 cycles per octave (cpo) significantly predicted speech scores for amplified and unamplified speech in quiet and noise. Commonality analyses revealed that SMD at 2.0 cpo combined with SRD and equivalent rectangular bandwidth measures to explain most of the variance captured by the regression model. Results suggest that SMD and SRD may be promising clinical tools for diagnostic evaluation and predicting amplification outcomes.

  6. Systemic Nicotine Increases Gain and Narrows Receptive Fields in A1 via Integrated Cortical and Subcortical Actions

    PubMed Central

    Intskirveli, Irakli

    2017-01-01

    Nicotine enhances sensory and cognitive processing via actions at nicotinic acetylcholine receptors (nAChRs), yet the precise circuit- and systems-level mechanisms remain unclear. In sensory cortex, nicotinic modulation of receptive fields (RFs) provides a model to probe mechanisms by which nAChRs regulate cortical circuits. Here, we examine RF modulation in mouse primary auditory cortex (A1) using a novel electrophysiological approach: current-source density (CSD) analysis of responses to tone-in-notched-noise (TINN) acoustic stimuli. TINN stimuli consist of a tone at the characteristic frequency (CF) of the recording site embedded within a white noise stimulus filtered to create a spectral “notch” of variable width centered on CF. Systemic nicotine (2.1 mg/kg) enhanced responses to the CF tone and to narrow-notch stimuli, yet reduced the response to wider-notch stimuli, indicating increased response gain within a narrowed RF. Subsequent manipulations showed that modulation of cortical RFs by systemic nicotine reflected effects at several levels in the auditory pathway: nicotine suppressed responses in the auditory midbrain and thalamus, with suppression increasing with spectral distance from CF so that RFs became narrower, and facilitated responses in the thalamocortical pathway, while nicotinic actions within A1 further contributed to both suppression and facilitation. Thus, multiple effects of systemic nicotine integrate along the ascending auditory pathway. These actions at nAChRs in cortical and subcortical circuits, which mimic effects of auditory attention, likely contribute to nicotinic enhancement of sensory and cognitive processing. PMID:28660244

  7. Systemic Nicotine Increases Gain and Narrows Receptive Fields in A1 via Integrated Cortical and Subcortical Actions.

    PubMed

    Askew, Caitlin; Intskirveli, Irakli; Metherate, Raju

    2017-01-01

    Nicotine enhances sensory and cognitive processing via actions at nicotinic acetylcholine receptors (nAChRs), yet the precise circuit- and systems-level mechanisms remain unclear. In sensory cortex, nicotinic modulation of receptive fields (RFs) provides a model to probe mechanisms by which nAChRs regulate cortical circuits. Here, we examine RF modulation in mouse primary auditory cortex (A1) using a novel electrophysiological approach: current-source density (CSD) analysis of responses to tone-in-notched-noise (TINN) acoustic stimuli. TINN stimuli consist of a tone at the characteristic frequency (CF) of the recording site embedded within a white noise stimulus filtered to create a spectral "notch" of variable width centered on CF. Systemic nicotine (2.1 mg/kg) enhanced responses to the CF tone and to narrow-notch stimuli, yet reduced the response to wider-notch stimuli, indicating increased response gain within a narrowed RF. Subsequent manipulations showed that modulation of cortical RFs by systemic nicotine reflected effects at several levels in the auditory pathway: nicotine suppressed responses in the auditory midbrain and thalamus, with suppression increasing with spectral distance from CF so that RFs became narrower, and facilitated responses in the thalamocortical pathway, while nicotinic actions within A1 further contributed to both suppression and facilitation. Thus, multiple effects of systemic nicotine integrate along the ascending auditory pathway. These actions at nAChRs in cortical and subcortical circuits, which mimic effects of auditory attention, likely contribute to nicotinic enhancement of sensory and cognitive processing.

  8. Neural networks for data compression and invariant image recognition

    NASA Technical Reports Server (NTRS)

    Gardner, Sheldon

    1989-01-01

    An approach to invariant image recognition (I2R), based upon a model of biological vision in the mammalian visual system (MVS), is described. The complete I2R model incorporates several biologically inspired features: exponential mapping of retinal images, Gabor spatial filtering, and a neural network associative memory. In the I2R model, exponentially mapped retinal images are filtered by a hierarchical set of Gabor spatial filters (GSF) which provide compression of the information contained within a pixel-based image. A neural network associative memory (AM) is used to process the GSF coded images. We describe a 1-D shape function method for coding of scale and rotationally invariant shape information. This method reduces image shape information to a periodic waveform suitable for coding as an input vector to a neural network AM. The shape function method is suitable for near term applications on conventional computing architectures equipped with VLSI FFT chips to provide a rapid image search capability.
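
    A toy sketch of the two biologically inspired front-end stages mentioned in the abstract, an exponential (log-polar) resampling of the retinal image followed by a small bank of Gabor spatial filters; the kernel sizes, wavelengths, and sampling grid are arbitrary placeholders, and the associative-memory and 1-D shape-function stages are not shown.

        import numpy as np
        from scipy.signal import convolve2d

        def gabor_kernel(size=31, wavelength=8.0, theta=0.0, sigma=4.0):
            """2-D Gabor kernel: a sinusoidal carrier under a Gaussian envelope."""
            half = size // 2
            y, x = np.mgrid[-half:half + 1, -half:half + 1]
            xr = x * np.cos(theta) + y * np.sin(theta)
            envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
            return envelope * np.cos(2 * np.pi * xr / wavelength)

        def log_polar_map(image, n_r=64, n_theta=64):
            """Exponential (log-polar) resampling of an image about its centre."""
            h, w = image.shape
            cy, cx = h / 2.0, w / 2.0
            r_max = min(cy, cx)
            rs = np.exp(np.linspace(0, np.log(r_max), n_r))            # exponentially spaced radii
            thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
            rr, tt = np.meshgrid(rs, thetas, indexing="ij")
            ys = np.clip((cy + rr * np.sin(tt)).astype(int), 0, h - 1)
            xs = np.clip((cx + rr * np.cos(tt)).astype(int), 0, w - 1)
            return image[ys, xs]

        # Filter a log-polar mapped image with a small bank of oriented Gabors.
        img = np.random.rand(128, 128)
        mapped = log_polar_map(img)
        bank = [gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 4, endpoint=False)]
        responses = [convolve2d(mapped, k, mode="same") for k in bank]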

  9. Learning Pitch with STDP: A Computational Model of Place and Temporal Pitch Perception Using Spiking Neural Networks.

    PubMed

    Erfanian Saeedi, Nafise; Blamey, Peter J; Burkitt, Anthony N; Grayden, David B

    2016-04-01

    Pitch perception is important for understanding speech prosody, music perception, recognizing tones in tonal languages, and perceiving speech in noisy environments. The two principal pitch perception theories consider the place of maximum neural excitation along the auditory nerve and the temporal pattern of the auditory neurons' action potentials (spikes) as pitch cues. This paper describes a biophysical mechanism by which fine-structure temporal information can be extracted from the spikes generated at the auditory periphery. Deriving meaningful pitch-related information from spike times requires neural structures specialized in capturing synchronous or correlated activity from amongst neural events. The emergence of such pitch-processing neural mechanisms is described through a computational model of auditory processing. Simulation results show that a correlation-based, unsupervised, spike-based form of Hebbian learning can explain the development of neural structures required for recognizing the pitch of simple and complex tones, with or without the fundamental frequency. The temporal code is robust to variations in the spectral shape of the signal and thus can explain the phenomenon of pitch constancy.
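
    A minimal sketch of the pair-based spike-timing-dependent plasticity (STDP) update that spiking-network models of this kind typically build on; the time constants and learning rates below are generic placeholders rather than values from the paper.

        import numpy as np

        def stdp_dw(pre_spikes, post_spikes, a_plus=0.01, a_minus=0.012,
                    tau_plus=20.0, tau_minus=20.0):
            """Pair-based STDP: potentiate when pre precedes post, depress otherwise.

            pre_spikes, post_spikes: spike times in ms for one synapse.
            Returns the summed weight change over all spike pairs.
            """
            dw = 0.0
            for t_post in post_spikes:
                for t_pre in pre_spikes:
                    dt = t_post - t_pre
                    if dt > 0:                      # pre before post -> potentiation
                        dw += a_plus * np.exp(-dt / tau_plus)
                    elif dt < 0:                    # post before pre -> depression
                        dw -= a_minus * np.exp(dt / tau_minus)
            return dw

        # Synchronous (correlated) input tends to strengthen the synapse:
        print(stdp_dw(pre_spikes=[10.0, 30.0, 50.0], post_spikes=[12.0, 32.0, 52.0]))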

  10. Perinatal exposure to a noncoplanar polychlorinated biphenyl alters tonotopy, receptive fields, and plasticity in rat primary auditory cortex

    PubMed Central

    Kenet, T.; Froemke, R. C.; Schreiner, C. E.; Pessah, I. N.; Merzenich, M. M.

    2007-01-01

    Noncoplanar polychlorinated biphenyls (PCBs) are widely dispersed in human environment and tissues. Here, an exemplar noncoplanar PCB was fed to rat dams during gestation and throughout three subsequent nursing weeks. Although the hearing sensitivity and brainstem auditory responses of pups were normal, exposure resulted in the abnormal development of the primary auditory cortex (A1). A1 was irregularly shaped and marked by internal nonresponsive zones, its topographic organization was grossly abnormal or reversed in about half of the exposed pups, the balance of neuronal inhibition to excitation for A1 neurons was disturbed, and the critical period plasticity that underlies normal postnatal auditory system development was significantly altered. These findings demonstrate that developmental exposure to this class of environmental contaminant alters cortical development. It is proposed that exposure to noncoplanar PCBs may contribute to common developmental disorders, especially in populations with heritable imbalances in neurotransmitter systems that regulate the ratio of inhibition and excitation in the brain. We conclude that the health implications associated with exposure to noncoplanar PCBs in human populations merit a more careful examination. PMID:17460041

  11. Learning Pitch with STDP: A Computational Model of Place and Temporal Pitch Perception Using Spiking Neural Networks

    PubMed Central

    Erfanian Saeedi, Nafise; Blamey, Peter J.; Burkitt, Anthony N.; Grayden, David B.

    2016-01-01

    Pitch perception is important for understanding speech prosody, music perception, recognizing tones in tonal languages, and perceiving speech in noisy environments. The two principal pitch perception theories consider the place of maximum neural excitation along the auditory nerve and the temporal pattern of the auditory neurons’ action potentials (spikes) as pitch cues. This paper describes a biophysical mechanism by which fine-structure temporal information can be extracted from the spikes generated at the auditory periphery. Deriving meaningful pitch-related information from spike times requires neural structures specialized in capturing synchronous or correlated activity from amongst neural events. The emergence of such pitch-processing neural mechanisms is described through a computational model of auditory processing. Simulation results show that a correlation-based, unsupervised, spike-based form of Hebbian learning can explain the development of neural structures required for recognizing the pitch of simple and complex tones, with or without the fundamental frequency. The temporal code is robust to variations in the spectral shape of the signal and thus can explain the phenomenon of pitch constancy. PMID:27049657

  12. Angle-resolved and polarization-dependent investigation of cross-shaped frequency-selective surface terahertz filters

    NASA Astrophysics Data System (ADS)

    Ferraro, A.; Zografopoulos, D. C.; Caputo, R.; Beccherelli, R.

    2017-04-01

    The spectral response of a terahertz (THz) filter is investigated in detail for different angles of incidence and polarization of the incoming THz wave. The filter is fabricated by patterning an aluminum frequency-selective surface of cross-shaped apertures on a thin foil of the low-loss cyclo-olefin polymer Zeonor. Two different types of resonances are observed, namely, a broadline resonance stemming from the transmittance of the slot apertures and a series of narrowline guided-mode resonances, with the latter being investigated by employing the grating theory. Numerical simulations of the filter transmittance based on the finite-element method agree with experimental measurements by means of THz time domain spectroscopy (THz-TDS). The results reveal extensive possibilities for tuning the guided-mode resonances by mechanically adjusting the incidence or polarization angle, while the fundamental broadline resonance is not significantly affected. Such filters are envisaged as functional elements in emerging THz systems for filtering or sensing applications.

  13. A probabilistic Poisson-based model accounts for an extensive set of absolute auditory threshold measurements.

    PubMed

    Heil, Peter; Matysiak, Artur; Neubauer, Heinrich

    2017-09-01

    Thresholds for detecting sounds in quiet decrease with increasing sound duration in every species studied. The neural mechanisms underlying this trade-off, often referred to as temporal integration, are not fully understood. Here, we probe the human auditory system with a large set of tone stimuli differing in duration, shape of the temporal amplitude envelope, duration of silent gaps between bursts, and frequency. Duration was varied by varying the plateau duration of plateau-burst (PB) stimuli, the duration of the onsets and offsets of onset-offset (OO) stimuli, and the number of identical bursts of multiple-burst (MB) stimuli. Absolute thresholds for a large number of ears (>230) were measured using a 3-interval-3-alternative forced choice (3I-3AFC) procedure. Thresholds decreased with increasing sound duration in a manner that depended on the temporal envelope. Most commonly, thresholds for MB stimuli were highest followed by thresholds for OO and PB stimuli of corresponding durations. Differences in the thresholds for MB and OO stimuli and in the thresholds for MB and PB stimuli, however, varied widely across ears, were negative in some ears, and were tightly correlated. We show that the variation and correlation of MB-OO and MB-PB threshold differences are linked to threshold microstructure, which affects the relative detectability of the sidebands of the MB stimuli and affects estimates of the bandwidth of auditory filters. We also found that thresholds for MB stimuli increased with increasing duration of the silent gaps between bursts. We propose a new model and show that it accurately accounts for our results and does so considerably better than a leaky-integrator-of-intensity model and a probabilistic model proposed by others. Our model is based on the assumption that sensory events are generated by a Poisson point process with a low rate in the absence of stimulation and higher, time-varying rates in the presence of stimulation. A subject in a 3I-3AFC task is assumed to choose the interval in which the greatest number of events occurred or randomly chooses among intervals which are tied for the greatest number of events. The subject is further assumed to count events over the duration of an evaluation interval that has the same timing and duration as the expected stimulus. The increase in the rate of the events caused by stimulation is proportional to the time-varying amplitude envelope of the bandpass-filtered signal raised to an exponent. We find the exponent to be about 3, consistent with our previous studies. This challenges models that are based on the assumption of the integration of a neural response that is directly proportional to the stimulus amplitude or proportional to its square (i.e., proportional to the stimulus intensity or power). Copyright © 2017 Elsevier B.V. All rights reserved.
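
    A schematic Monte-Carlo version of the Poisson observer described above, assuming a constant spontaneous rate, a stimulus-driven rate proportional to the envelope raised to an exponent of 3, and a 3I-3AFC decision rule that picks the interval with the most events (ties broken at random); all numerical values are illustrative.

        import numpy as np

        rng = np.random.default_rng(1)

        def p_correct(envelope, r0=20.0, k=1e-4, exponent=3.0, dt=1e-3, n_trials=5000):
            """Proportion correct in a simulated 3I-3AFC detection task.

            envelope: amplitude envelope of the bandpass-filtered signal, sampled at dt.
            Event rate = r0 (spontaneous) + k * envelope**exponent during the signal interval.
            """
            rate_signal = r0 + k * envelope ** exponent
            mean_signal = rate_signal.sum() * dt          # expected count, signal interval
            mean_noise = r0 * envelope.size * dt          # expected count, non-signal intervals

            correct = 0
            for _ in range(n_trials):
                counts = np.array([rng.poisson(mean_signal),
                                   rng.poisson(mean_noise),
                                   rng.poisson(mean_noise)])
                winners = np.flatnonzero(counts == counts.max())
                if rng.choice(winners) == 0:              # interval 0 holds the signal
                    correct += 1
            return correct / n_trials

        env = np.full(200, 50.0)                          # 200-ms flat envelope, arbitrary units
        print(p_correct(env))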

  14. Ventral-stream-like shape representation: from pixel intensity values to trainable object-selective COSFIRE models

    PubMed Central

    Azzopardi, George; Petkov, Nicolai

    2014-01-01

    The remarkable abilities of the primate visual system have inspired the construction of computational models of some visual neurons. We propose a trainable hierarchical object recognition model, which we call S-COSFIRE (S stands for Shape and COSFIRE stands for Combination Of Shifted FIlter REsponses) and use it to localize and recognize objects of interest embedded in complex scenes. It is inspired by the visual processing in the ventral stream (V1/V2 → V4 → TEO). Recognition and localization of objects embedded in complex scenes is important for many computer vision applications. Most existing methods require prior segmentation of the objects from the background, which in turn requires recognition. An S-COSFIRE filter is automatically configured to be selective for an arrangement of contour-based features that belong to a prototype shape specified by an example. The configuration comprises selecting relevant vertex detectors and determining certain blur and shift parameters. The response is computed as the weighted geometric mean of the blurred and shifted responses of the selected vertex detectors. S-COSFIRE filters share similar properties with some neurons in inferotemporal cortex, which provided inspiration for this work. We demonstrate the effectiveness of S-COSFIRE filters in two applications: letter and keyword spotting in handwritten manuscripts and object spotting in complex scenes for the computer vision system of a domestic robot. S-COSFIRE filters are effective at recognizing and localizing (deformable) objects in images of complex scenes without requiring prior segmentation. They are versatile trainable shape detectors, conceptually simple and easy to implement. The presented hierarchical shape representation contributes to a better understanding of the brain and to more robust computer vision algorithms. PMID:25126068
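
    A stripped-down sketch of the S-COSFIRE-style combination step, a weighted geometric mean of blurred and shifted feature-detector responses; the feature maps, shifts, blur widths, and weights below are placeholders for whatever an automatically configured filter would prescribe.

        import numpy as np
        from scipy.ndimage import gaussian_filter, shift

        def cosfire_response(feature_maps, tuples):
            """Combine feature-detector response maps as a weighted geometric mean.

            feature_maps: dict name -> 2-D response map (all the same shape).
            tuples: list of (name, dy, dx, sigma, weight) describing the configured parts:
                    blur map 'name' with std 'sigma', shift it by (dy, dx), weight the factor.
            """
            acc, wsum = None, 0.0
            for name, dy, dx, sigma, w in tuples:
                r = gaussian_filter(feature_maps[name], sigma)       # tolerate deformation
                r = shift(r, (dy, dx), order=1, mode="constant")     # move the part to the centre
                factor = np.power(np.maximum(r, 1e-12), w)           # weighted geometric-mean factor
                acc = factor if acc is None else acc * factor
                wsum += w
            return np.power(acc, 1.0 / wsum)

        # Two hypothetical vertex-detector maps combined into one shape-selective output.
        maps = {"corner_a": np.random.rand(64, 64), "corner_b": np.random.rand(64, 64)}
        out = cosfire_response(maps, [("corner_a", 5, 0, 2.0, 1.0), ("corner_b", -5, 3, 2.0, 1.0)])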

  15. Thresholding of auditory cortical representation by background noise

    PubMed Central

    Liang, Feixue; Bai, Lin; Tao, Huizhong W.; Zhang, Li I.; Xiao, Zhongju

    2014-01-01

    It is generally thought that background noise can mask auditory information. However, how the noise specifically transforms neuronal auditory processing in a level-dependent manner remains to be carefully determined. Here, with in vivo loose-patch cell-attached recordings in layer 4 of the rat primary auditory cortex (A1), we systematically examined how continuous wideband noise of different levels affected receptive field properties of individual neurons. We found that the background noise, when above a certain critical/effective level, resulted in an elevation of intensity threshold for tone-evoked responses. This increase of threshold was linearly dependent on the noise intensity above the critical level. As such, the tonal receptive field (TRF) of individual neurons was translated upward as an entirety toward high intensities along the intensity domain. This resulted in preserved preferred characteristic frequency (CF) and the overall shape of TRF, but reduced frequency responding range and an enhanced frequency selectivity for the same stimulus intensity. Such translational effects on intensity threshold were observed in both excitatory and fast-spiking inhibitory neurons, as well as in both monotonic and nonmonotonic (intensity-tuned) A1 neurons. Our results suggest that in a noise background, fundamental auditory representations are modulated through a background level-dependent linear shifting along intensity domain, which is equivalent to reducing stimulus intensity. PMID:25426029
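
    A one-line model of the reported effect, assuming that above a critical noise level the tone threshold rises linearly with noise level (slope near 1 dB/dB), so that the tonal receptive field translates upward along the intensity axis; the numbers are illustrative only.

        def tone_threshold(quiet_threshold_db, noise_level_db, critical_level_db=30.0, slope=1.0):
            """Tone-evoked response threshold under continuous wideband background noise.

            Below the critical noise level the threshold is unchanged; above it the
            threshold is elevated linearly with the noise level (the TRF shifts upward).
            """
            excess = max(0.0, noise_level_db - critical_level_db)
            return quiet_threshold_db + slope * excess

        for noise in (20, 40, 60):
            print(noise, tone_threshold(quiet_threshold_db=15.0, noise_level_db=noise))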

  16. Using auditory-visual speech to probe the basis of noise-impaired consonant-vowel perception in dyslexia and auditory neuropathy

    NASA Astrophysics Data System (ADS)

    Ramirez, Joshua; Mann, Virginia

    2005-08-01

    Both dyslexics and auditory neuropathy (AN) subjects show inferior consonant-vowel (CV) perception in noise, relative to controls. To better understand these impairments, natural acoustic speech stimuli that were masked in speech-shaped noise at various intensities were presented to dyslexic, AN, and control subjects either in isolation or accompanied by visual articulatory cues. AN subjects were expected to benefit from the pairing of visual articulatory cues and auditory CV stimuli, provided that their speech perception impairment reflects a relatively peripheral auditory disorder. Assuming that dyslexia reflects a general impairment of speech processing rather than a disorder of audition, dyslexics were not expected to similarly benefit from an introduction of visual articulatory cues. The results revealed an increased effect of noise masking on the perception of isolated acoustic stimuli by both dyslexic and AN subjects. More importantly, dyslexics showed less effective use of visual articulatory cues in identifying masked speech stimuli and lower visual baseline performance relative to AN subjects and controls. Last, a significant positive correlation was found between reading ability and the ameliorating effect of visual articulatory cues on speech perception in noise. These results suggest that some reading impairments may stem from a central deficit of speech processing.

  17. Modelling auditory attention

    PubMed Central

    Kaya, Emine Merve

    2017-01-01

    Sounds in everyday life seldom appear in isolation. Both humans and machines are constantly flooded with a cacophony of sounds that need to be sorted through and scoured for relevant information—a phenomenon referred to as the ‘cocktail party problem’. A key component in parsing acoustic scenes is the role of attention, which mediates perception and behaviour by focusing both sensory and cognitive resources on pertinent information in the stimulus space. The current article provides a review of modelling studies of auditory attention. The review highlights how the term attention refers to a multitude of behavioural and cognitive processes that can shape sensory processing. Attention can be modulated by ‘bottom-up’ sensory-driven factors, as well as ‘top-down’ task-specific goals, expectations and learned schemas. Essentially, it acts as a selection process or processes that focus both sensory and cognitive resources on the most relevant events in the soundscape; with relevance being dictated by the stimulus itself (e.g. a loud explosion) or by a task at hand (e.g. listen to announcements in a busy airport). Recent computational models of auditory attention provide key insights into its role in facilitating perception in cluttered auditory scenes. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044012

  18. Numerical simulation of terahertz transmission of bilayer metallic meshes with different thickness of substrates

    NASA Astrophysics Data System (ADS)

    Zhang, Gaohui; Zhao, Guozhong; Zhang, Shengbo

    2012-12-01

    The terahertz transmission characteristics of bilayer metallic meshes are studied using the finite-difference time-domain method. Three structures were investigated: the bilayer well-shaped grid, the bilayer array of complementary square metallic pills, and the bilayer cross wire-hole array. The results show that the bilayer well-shaped grid acts as a high-pass filter, the bilayer array of complementary square metallic pills acts as a low-pass filter, and the bilayer cross wire-hole array acts as a band-pass filter. A dielectric medium must be deposited between the two metallic microstructures, and its thickness influences the terahertz transmission characteristics. Simulation results show that as the medium thickness increases, the cut-off frequencies of the high-pass and low-pass filters shift to lower frequencies, while the bilayer cross wire-hole array exhibits two transmission peaks that display a competition effect.

  19. ASSESSMENT OF LOW-FREQUENCY HEARING WITH NARROW-BAND CHIRP EVOKED 40-HZ SINUSOIDAL AUDITORY STEADY STATE RESPONSE

    PubMed Central

    Wilson, Uzma S.; Kaf, Wafaa A.; Danesh, Ali A.; Lichtenhan, Jeffery T.

    2016-01-01

    Objective: To determine the clinical utility of narrow-band chirp evoked 40-Hz sinusoidal auditory steady state responses (s-ASSR) in the assessment of low-frequency hearing in noisy participants. Design: Tone bursts and narrow-band chirps were used to evoke, respectively, auditory brainstem response (tb-ABR) and 40-Hz s-ASSR thresholds with the Kalman-weighted filtering technique, and these were compared to behavioral thresholds at 500, 2000, and 4000 Hz. A repeated-measures ANOVA, post-hoc t-tests, and simple regression analyses were performed for each of the three stimulus frequencies. Study Sample: Thirty young adults aged 18–25 with normal hearing participated in this study. Results: When 4000 equivalent response averages were used, mean s-ASSR thresholds at 500, 2000, and 4000 Hz were 17–22 dB lower (better) than when 2000 averages were used. Mean tb-ABR thresholds for 2000 and 4000 Hz were 11–15 dB lower when twice as many equivalent response averages were used, while mean tb-ABR thresholds for 500 Hz were indistinguishable regardless of additional response averaging. Conclusion: Narrow-band chirp evoked 40-Hz s-ASSR requires a ~15 dB smaller correction factor than tb-ABR for estimating low-frequency auditory thresholds in noisy participants when adequate response averaging is used. PMID:26795555

  20. Human cortical organization for processing vocalizations indicates representation of harmonic structure as a signal attribute

    PubMed Central

    Lewis, James W.; Talkington, William J.; Walker, Nathan A.; Spirou, George A.; Jajosky, Audrey; Frum, Chris

    2009-01-01

    The ability to detect and rapidly process harmonic sounds, which in nature are typical of animal vocalizations and speech, can be critical for communication among conspecifics and for survival. Single-unit studies have reported neurons in auditory cortex sensitive to specific combinations of frequencies (e.g. harmonics), theorized to rapidly abstract or filter for specific structures of incoming sounds, where large ensembles of such neurons may constitute spectral templates. We studied the contribution of harmonic structure to activation of putative spectral templates in human auditory cortex by using a wide variety of animal vocalizations, as well as artificially constructed iterated rippled noises (IRNs). Both the IRNs and vocalization sounds were quantitatively characterized by calculating a global harmonics-to-noise ratio (HNR). Using fMRI we identified HNR-sensitive regions when presenting either artificial IRNs and/or recordings of natural animal vocalizations. This activation included regions situated between functionally defined primary auditory cortices and regions preferential for processing human non-verbal vocalizations or speech sounds. These results demonstrate that the HNR of sound reflects an important second-order acoustic signal attribute that parametrically activates distinct pathways of human auditory cortex. Thus, these results provide novel support for putative spectral templates, which may subserve a major role in the hierarchical processing of vocalizations as a distinct category of behaviorally relevant sound. PMID:19228981
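
    A rough autocorrelation-based estimate of a global harmonics-to-noise ratio of the kind used to characterize the stimuli; this is a generic HNR definition (periodic-to-aperiodic power ratio in dB), not necessarily the exact metric computed in the study.

        import numpy as np

        def hnr_db(x, fs, fmin=60.0, fmax=1000.0):
            """Global harmonics-to-noise ratio from the normalized autocorrelation peak."""
            x = x - np.mean(x)
            ac = np.correlate(x, x, mode="full")[len(x) - 1:]
            ac = ac / ac[0]                                    # normalize so ac[0] == 1
            lo, hi = int(fs / fmax), int(fs / fmin)            # candidate pitch-period lags
            r = np.max(ac[lo:hi])                              # periodic fraction of the power
            r = np.clip(r, 1e-6, 1 - 1e-6)
            return 10.0 * np.log10(r / (1.0 - r))

        fs, n = 8000, 4000
        t = np.arange(n) / fs
        harmonic = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 400 * t)
        print(hnr_db(harmonic + 0.1 * np.random.randn(n), fs))   # high HNR: harmonic sound
        print(hnr_db(np.random.randn(n), fs))                    # low HNR: noise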

  1. Neural networks mediating sentence reading in the deaf

    PubMed Central

    Hirshorn, Elizabeth A.; Dye, Matthew W. G.; Hauser, Peter C.; Supalla, Ted R.; Bavelier, Daphne

    2014-01-01

    The present work addresses the neural bases of sentence reading in deaf populations. To better understand the relative role of deafness and spoken language knowledge in shaping the neural networks that mediate sentence reading, three populations with different degrees of English knowledge and depth of hearing loss were included—deaf signers, oral deaf and hearing individuals. The three groups were matched for reading comprehension and scanned while reading sentences. A similar neural network of left perisylvian areas was observed, supporting the view of a shared network of areas for reading despite differences in hearing and English knowledge. However, differences were observed, in particular in the auditory cortex, with deaf signers and oral deaf showing greatest bilateral superior temporal gyrus (STG) recruitment as compared to hearing individuals. Importantly, within deaf individuals, the same STG area in the left hemisphere showed greater recruitment as hearing loss increased. To further understand the functional role of such auditory cortex re-organization after deafness, connectivity analyses were performed from the STG regions identified above. Connectivity from the left STG toward areas typically associated with semantic processing (BA45 and thalami) was greater in deaf signers and in oral deaf as compared to hearing. In contrast, connectivity from left STG toward areas identified with speech-based processing was greater in hearing and in oral deaf as compared to deaf signers. These results support the growing literature indicating recruitment of auditory areas after congenital deafness for visually-mediated language functions, and establish that both auditory deprivation and language experience shape its functional reorganization. Implications for differential reliance on semantic vs. phonological pathways during reading in the three groups are discussed. PMID:24959127

  2. Effects of sound intensity on temporal properties of inhibition in the pallid bat auditory cortex.

    PubMed

    Razak, Khaleel A

    2013-01-01

    Auditory neurons in bats that use frequency modulated (FM) sweeps for echolocation are selective for the behaviorally-relevant rates and direction of frequency change. Such selectivity arises through spectrotemporal interactions between excitatory and inhibitory components of the receptive field. In the pallid bat auditory system, the relationship between FM sweep direction/rate selectivity and spectral and temporal properties of sideband inhibition has been characterized. Of note is the temporal asymmetry in sideband inhibition, with low-frequency inhibition (LFI) exhibiting faster arrival times compared to high-frequency inhibition (HFI). Using the two-tone inhibition over time (TTI) stimulus paradigm, this study investigated the interactions between two sound parameters in shaping sideband inhibition: intensity and time. Specifically, the impact of changing the relative intensities of the excitatory and inhibitory tones on the arrival time of inhibition was studied. Using this stimulation paradigm, single-unit data from the auditory cortex of pentobarbital-anesthetized pallid bats show that the threshold for LFI is on average ~8 dB lower than HFI. For equal intensity tones near threshold, LFI is stronger than HFI. When the inhibitory tone intensity is increased further from threshold, the strength asymmetry decreased. The temporal asymmetry in LFI vs. HFI arrival time is strongest when the excitatory and inhibitory tones are of equal intensities or if the excitatory tone is louder. As inhibitory tone intensity is increased, temporal asymmetry decreased, suggesting that the relative magnitudes of excitatory and inhibitory inputs shape the arrival time of inhibition and FM sweep rate and direction selectivity. Given that most FM bats use downward sweeps as echolocation calls, a similar asymmetry in threshold and strength of LFI vs. HFI may be a general adaptation to enhance direction selectivity while maintaining sweep-rate selective responses to downward sweeps.

  3. Speaking-related changes in cortical functional connectivity associated with assisted and spontaneous recovery from developmental stuttering.

    PubMed

    Kell, Christian A; Neumann, Katrin; Behrens, Marion; von Gudenberg, Alexander W; Giraud, Anne-Lise

    2018-03-01

    We previously reported speaking-related activity changes associated with assisted recovery induced by a fluency shaping therapy program and unassisted recovery from developmental stuttering (Kell et al., Brain 2009). While assisted recovery re-lateralized activity to the left hemisphere, unassisted recovery was specifically associated with the activation of the left BA 47/12 in the lateral orbitofrontal cortex. These findings suggested plastic changes in speaking-related functional connectivity between left hemispheric speech network nodes. We reanalyzed these data involving 13 stuttering men before and after fluency shaping, 13 men who recovered spontaneously from their stuttering, and 13 male control participants, and examined functional connectivity during overt vs. covert reading by means of psychophysiological interactions computed across left cortical regions involved in articulation control. Persistent stuttering was associated with reduced auditory-motor coupling and enhanced integration of somatosensory feedback between the supramarginal gyrus and the prefrontal cortex. Assisted recovery reduced this hyper-connectivity and increased functional connectivity between the articulatory motor cortex and the auditory feedback processing anterior superior temporal gyrus. In spontaneous recovery, both auditory-motor coupling and integration of somatosensory feedback were normalized. In addition, activity in the left orbitofrontal cortex and superior cerebellum appeared uncoupled from the rest of the speech production network. These data suggest that therapy and spontaneous recovery normalizes the left hemispheric speaking-related activity via an improvement of auditory-motor mapping. By contrast, long-lasting unassisted recovery from stuttering is additionally supported by a functional isolation of the superior cerebellum from the rest of the speech production network, through the pivotal left BA 47/12. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Low-Dose Contrast-Enhanced Breast CT Using Spectral Shaping Filters: An Experimental Study.

    PubMed

    Makeev, Andrey; Glick, Stephen J

    2017-12-01

    Iodinated contrast-enhanced X-ray imaging of the breast has been studied with various modalities, including full-field digital mammography (FFDM), digital breast tomosynthesis (DBT), and dedicated breast CT. Contrast imaging with breast CT has a number of advantages over FFDM and DBT, including the lack of breast compression, and generation of fully isotropic 3-D reconstructions. Nonetheless, for breast CT to be considered as a viable tool for routine clinical use, it would be desirable to reduce radiation dose. One approach for dose reduction in breast CT is spectral shaping using X-ray filters. In this paper, two high atomic number filter materials are studied, namely, gadolinium (Gd) and erbium (Er), and compared with Al and Cu filters currently used in breast CT systems. Task-based performance is assessed by imaging a cylindrical poly(methyl methacrylate) phantom with iodine inserts on a benchtop breast CT system that emulates clinical breast CT. To evaluate detectability, a channelized Hotelling observer (CHO) is used with sums of Laguerre-Gauss channels. It was observed that spectral shaping using Er and Gd filters substantially increased the dose efficiency (defined as signal-to-noise ratio of the CHO divided by mean glandular dose) as compared with kilovolt peak and filter settings used in commercial and prototype breast CT systems. These experimental phantom study results are encouraging for reducing the dose of breast CT; however, further evaluation involving patients is needed.
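
    A compact sketch of a channelized Hotelling observer with Laguerre-Gauss channels, the figure of merit named above; the channel width, image size, and synthetic data are placeholders, and dose efficiency would further divide the resulting SNR by the mean glandular dose.

        import numpy as np
        from scipy.special import eval_laguerre

        def lg_channels(size=64, a=15.0, n_channels=6):
            """Rotationally symmetric Laguerre-Gauss channels on a size x size grid."""
            half = size // 2
            y, x = np.mgrid[-half:half, -half:half]
            r2 = (x**2 + y**2).astype(float)
            chans = []
            for j in range(n_channels):
                u = np.exp(-np.pi * r2 / a**2) * eval_laguerre(j, 2 * np.pi * r2 / a**2)
                chans.append((u / np.linalg.norm(u)).ravel())
            return np.array(chans)                      # shape (n_channels, size*size)

        def cho_snr(signal_present, signal_absent, channels):
            """Channelized Hotelling observer detectability (SNR) from two image stacks."""
            vp = signal_present.reshape(len(signal_present), -1) @ channels.T
            va = signal_absent.reshape(len(signal_absent), -1) @ channels.T
            ds = vp.mean(0) - va.mean(0)                               # mean channel signal
            cov = 0.5 * (np.cov(vp, rowvar=False) + np.cov(va, rowvar=False))
            w = np.linalg.solve(cov, ds)                               # Hotelling template
            return float(np.sqrt(ds @ w))

        # Synthetic example: a faint Gaussian blob in white noise.
        rng = np.random.default_rng(0)
        size, n = 64, 200
        y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2]
        blob = 0.5 * np.exp(-(x**2 + y**2) / (2 * 4.0**2))
        present = rng.standard_normal((n, size, size)) + blob
        absent = rng.standard_normal((n, size, size))
        print(cho_snr(present, absent, lg_channels(size)))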

  5. Optimized digital filtering techniques for radiation detection with HPGe detectors

    NASA Astrophysics Data System (ADS)

    Salathe, Marco; Kihm, Thomas

    2016-02-01

    This paper describes state-of-the-art digital filtering techniques that are part of GEANA, an automatic data analysis software used for the GERDA experiment. The discussed filters include a novel, nonlinear correction method for ballistic deficits, which is combined with one of three shaping filters: a pseudo-Gaussian, a modified trapezoidal, or a modified cusp filter. The performance of the filters is demonstrated with a 762 g Broad Energy Germanium (BEGe) detector, produced by Canberra, that measures γ-ray lines from radioactive sources in an energy range between 59.5 and 2614.5 keV. At 1332.5 keV, together with the ballistic deficit correction method, all filters produce a comparable energy resolution of 1.61 keV FWHM. This value is superior to those measured by the manufacturer and those found in publications with detectors of a similar design and mass. At 59.5 keV, the modified cusp filter without a ballistic deficit correction produced the best result, with an energy resolution of 0.46 keV. It is observed that the loss in resolution by using a constant shaping time over the entire energy range is small when using the ballistic deficit correction method.
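
    A minimal digital trapezoidal shaper of the general kind discussed above, implemented as a one-pole deconvolution of the exponential preamplifier decay followed by convolution with a trapezoid kernel; the decay constant, rise time, and flat-top length are arbitrary, and the paper's ballistic-deficit correction is not reproduced.

        import numpy as np

        def trapezoidal_shaper(v, k, m, tau):
            """Trapezoidal shaping of exponential preamplifier pulses.

            Step 1: deconvolve the exponential decay (time constant tau, in samples)
                    so each pulse becomes a step.
            Step 2: convolve the step train with a trapezoid kernel built from two
                    box-car averages separated by a flat-top gap of m samples.
            """
            a = np.exp(-1.0 / tau)
            impulses = v - a * np.roll(v, 1)        # inverse of the one-pole decay
            impulses[0] = v[0]
            steps = np.cumsum(impulses)

            kernel = np.concatenate([np.ones(k), np.zeros(m), -np.ones(k)]) / k
            return np.convolve(steps, kernel, mode="full")[:len(v)]

        # A noiseless exponential pulse of amplitude 1 becomes a flat-topped trapezoid.
        tau, n = 5000.0, 20000
        t = np.arange(n)
        pulse = np.where(t >= 2000, np.exp(-(t - 2000) / tau), 0.0)
        shaped = trapezoidal_shaper(pulse, k=500, m=100, tau=tau)
        print(round(shaped.max(), 3))   # ~1.000: the flat top reads the pulse amplitude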

  6. Caution and Warning Alarm Design and Evaluation for NASA CEV Auditory Displays: SHFE Information Presentation Directed Research Project (DRPP) report 12.07

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Godfroy, Martine; Sandor, Aniko; Holden, Kritina

    2008-01-01

    The design of caution-warning signals for NASA's Crew Exploration Vehicle (CEV) and other future spacecraft will be based on both best practices drawn from current research and evaluation of current alarms. A design approach is presented based upon cross-disciplinary examination of psychoacoustic research, human factors experience, aerospace practices, and acoustical engineering requirements. A listening test with thirteen participants was performed involving ranking and grading of current and newly developed caution-warning stimuli under three conditions: (1) alarm levels adjusted for compliance with ISO 7731, "Danger signals for work places - Auditory Danger Signals", (2) alarm levels adjusted to an overall 15 dBA s/n ratio, and (3) simulated codec low-pass filtering. Questionnaire data yielded useful insights regarding cognitive associations with the sounds.

  7. Rapid tuning shifts in human auditory cortex enhance speech intelligibility

    PubMed Central

    Holdgraf, Christopher R.; de Heer, Wendy; Pasley, Brian; Rieger, Jochem; Crone, Nathan; Lin, Jack J.; Knight, Robert T.; Theunissen, Frédéric E.

    2016-01-01

    Experience shapes our perception of the world on a moment-to-moment basis. This robust perceptual effect of experience parallels a change in the neural representation of stimulus features, though the nature of this representation and its plasticity are not well-understood. Spectrotemporal receptive field (STRF) mapping describes the neural response to acoustic features, and has been used to study contextual effects on auditory receptive fields in animal models. We performed a STRF plasticity analysis on electrophysiological data from recordings obtained directly from the human auditory cortex. Here, we report rapid, automatic plasticity of the spectrotemporal response of recorded neural ensembles, driven by previous experience with acoustic and linguistic information, and with a neurophysiological effect in the sub-second range. This plasticity reflects increased sensitivity to spectrotemporal features, enhancing the extraction of more speech-like features from a degraded stimulus and providing the physiological basis for the observed ‘perceptual enhancement' in understanding speech. PMID:27996965
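
    A generic sketch of how a spectrotemporal receptive field (STRF) can be estimated from a stimulus spectrogram and a neural response by ridge regression on time-lagged stimulus features; this is the standard STRF recipe, not the specific analysis pipeline of the study.

        import numpy as np

        def lag_design(spectrogram, n_lags):
            """Design matrix whose column blocks are the spectrogram at lags 0..n_lags-1."""
            n_times, n_freqs = spectrogram.shape
            X = np.zeros((n_times, n_lags * n_freqs))
            for lag in range(n_lags):
                X[lag:, lag * n_freqs:(lag + 1) * n_freqs] = spectrogram[:n_times - lag]
            return X

        def estimate_strf(spectrogram, response, n_lags=20, ridge=1.0):
            """Ridge-regression STRF: the linear spectrotemporal filter from stimulus to response."""
            X = lag_design(spectrogram, n_lags)
            w = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ response)
            return w.reshape(n_lags, spectrogram.shape[1])

        # Toy check: simulate a response from a known filter, then recover it.
        rng = np.random.default_rng(0)
        spec = rng.standard_normal((5000, 16))
        true_w = rng.standard_normal(20 * 16)
        resp = lag_design(spec, 20) @ true_w + 0.1 * rng.standard_normal(5000)
        strf = estimate_strf(spec, resp)
        print(np.corrcoef(strf.ravel(), true_w)[0, 1])   # close to 1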

  8. Application of Micropore Filter Technology: Exploring the Blood Flow Path in Arterial-Line Filters and Its Effect on Bubble Trapping Functions

    PubMed Central

    Herbst, Daniel P.

    2017-01-01

    Abstract: Conventional arterial-line filters commonly use a large volume circular shaped housing, a wetted micropore screen, and a purge port to trap, separate, and remove gas bubbles from extracorporeal blood flow. Focusing on the bubble trapping function, this work attempts to explore how the filter housing shape and its resulting blood flow path affect the clinical application of arterial-line filters in terms of gross air handling. A video camera was used in a wet-lab setting to record observations made during gross air-bolus injections in three different radially designed filters using a 30–70% glycerol–saline mixture flowing at 4.5 L/min. Two of the filters both had inlet ports attached near the filter-housing top with bottom oriented outlet ports at the bottom, whereas the third filter had its inlet and outlet ports both located at the bottom of the filter housing. The two filters with top-in bottom-out fluid paths were shown to direct the incoming flow downward as it passed through the filter, placing the forces of buoyancy and viscous drag in opposition to each other. This contrasted with the third filter's bottom-in bottom-out fluid path, which was shown to direct the incoming flow upward so that the forces of buoyancy and viscous drag work together. The direction of the blood flow path through a filter may be important to the application of arterial-line filter technology as it helps determine how the forces of buoyancy and flow are aligned with one another. PMID:28298665

  9. Application of Micropore Filter Technology: Exploring the Blood Flow Path in Arterial-Line Filters and Its Effect on Bubble Trapping Functions.

    PubMed

    Herbst, Daniel P

    2017-03-01

    Conventional arterial-line filters commonly use a large volume circular shaped housing, a wetted micropore screen, and a purge port to trap, separate, and remove gas bubbles from extracorporeal blood flow. Focusing on the bubble trapping function, this work attempts to explore how the filter housing shape and its resulting blood flow path affect the clinical application of arterial-line filters in terms of gross air handling. A video camera was used in a wet-lab setting to record observations made during gross air-bolus injections in three different radially designed filters using a 30-70% glycerol-saline mixture flowing at 4.5 L/min. Two of the filters both had inlet ports attached near the filter-housing top with bottom oriented outlet ports at the bottom, whereas the third filter had its inlet and outlet ports both located at the bottom of the filter housing. The two filters with top-in bottom-out fluid paths were shown to direct the incoming flow downward as it passed through the filter, placing the forces of buoyancy and viscous drag in opposition to each other. This contrasted with the third filter's bottom-in bottom-out fluid path, which was shown to direct the incoming flow upward so that the forces of buoyancy and viscous drag work together. The direction of the blood flow path through a filter may be important to the application of arterial-line filter technology as it helps determine how the forces of buoyancy and flow are aligned with one another.

  10. Intracerebral evidence of rhythm transform in the human auditory cortex.

    PubMed

    Nozaradan, Sylvie; Mouraux, André; Jonas, Jacques; Colnat-Coulbois, Sophie; Rossion, Bruno; Maillard, Louis

    2017-07-01

    Musical entrainment is shared by all human cultures and the perception of a periodic beat is a cornerstone of this entrainment behavior. Here, we investigated whether beat perception might have its roots in the earliest stages of auditory cortical processing. Local field potentials were recorded from 8 patients implanted with depth-electrodes in Heschl's gyrus and the planum temporale (55 recording sites in total), usually considered as human primary and secondary auditory cortices. Using a frequency-tagging approach, we show that both low-frequency (<30 Hz) and high-frequency (>30 Hz) neural activities in these structures faithfully track auditory rhythms through frequency-locking to the rhythm envelope. A selective gain in amplitude of the response frequency-locked to the beat frequency was observed for the low-frequency activities but not for the high-frequency activities, and was sharper in the planum temporale, especially for the more challenging syncopated rhythm. Hence, this gain process is not systematic in all activities produced in these areas and depends on the complexity of the rhythmic input. Moreover, this gain was disrupted when the rhythm was presented at fast speed, revealing low-pass response properties which could account for the propensity to perceive a beat only within the musical tempo range. Together, these observations show that, even though part of these neural transforms of rhythms could already take place in subcortical auditory processes, the earliest auditory cortical processes shape the neural representation of rhythmic inputs in favor of the emergence of a periodic beat.

  11. Temporal pattern of acoustic imaging noise asymmetrically modulates activation in the auditory cortex.

    PubMed

    Ranaweera, Ruwan D; Kwon, Minseok; Hu, Shuowen; Tamer, Gregory G; Luh, Wen-Ming; Talavage, Thomas M

    2016-01-01

    This study investigated the hemisphere-specific effects of the temporal pattern of imaging related acoustic noise on auditory cortex activation. Hemodynamic responses (HDRs) to five temporal patterns of imaging noise corresponding to noise generated by unique combinations of imaging volume and effective repetition time (TR), were obtained using a stroboscopic event-related paradigm with extra-long (≥27.5 s) TR to minimize inter-acquisition effects. In addition to confirmation that fMRI responses in auditory cortex do not behave in a linear manner, temporal patterns of imaging noise were found to modulate both the shape and spatial extent of hemodynamic responses, with classically non-auditory areas exhibiting responses to longer duration noise conditions. Hemispheric analysis revealed the right primary auditory cortex to be more sensitive than the left to the presence of imaging related acoustic noise. Right primary auditory cortex responses were significantly larger during all the conditions. This asymmetry of response to imaging related acoustic noise could lead to different baseline activation levels during acquisition schemes using short TR, inducing an observed asymmetry in the responses to an intended acoustic stimulus through limitations of dynamic range, rather than due to differences in neuronal processing of the stimulus. These results emphasize the importance of accounting for the temporal pattern of the acoustic noise when comparing findings across different fMRI studies, especially those involving acoustic stimulation. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Characteristics of hearing and echolocation in under-studied odontocete species

    NASA Astrophysics Data System (ADS)

    Smith, Adam B.

    All odontocetes (toothed whales and dolphins) studied to date have been shown to echolocate. They use sound as their primary means for foraging, navigation, and communication with conspecifics and are thus considered acoustic specialists. However, the vast majority of what is known about odontocete acoustic systems comes from only a handful of the 76 recognized extant species. The research presented in this dissertation investigated basic characteristics of odontocete hearing and echolocation, including auditory temporal resolution, auditory pathways, directional hearing, and transmission beam characteristics, in individuals of five different odontocete species that are understudied. Modulation rate transfer functions were measured from formerly stranded individuals of four different species (Stenella longirostris, Feresa attenuata, Globicephala melas, Mesoplodon densirostris) using non-invasive auditory evoked potential methods. All individuals showed acute auditory temporal resolution that was comparable to other studied odontocete species. Using the same electrophysiological methods, auditory pathways and directional hearing were investigated in a Risso's dolphin (Grampus griseus) using both localized and far-field acoustic stimuli. The dolphin's hearing showed significant, frequency-dependent asymmetry to localized sound presented on the right and left sides of its head. The dolphin also showed acute, but mostly symmetrical, directional auditory sensitivity to sounds presented in the far-field. Furthermore, characteristics of the echolocation transmission beam of this same individual Risso's dolphin were measured using a 16-element hydrophone array. The dolphin exhibited both single- and dual-lobed beam shapes that were more directional than similar measurements from a bottlenose dolphin, harbor porpoise, and false killer whale.

  13. Synchronisation signatures in the listening brain: a perspective from non-invasive neuroelectrophysiology.

    PubMed

    Weisz, Nathan; Obleser, Jonas

    2014-01-01

    Human magneto- and electroencephalography (M/EEG) are capable of tracking brain activity at millisecond temporal resolution in an entirely non-invasive manner, a feature that offers unique opportunities to uncover the spatiotemporal dynamics of the hearing brain. In general, precise synchronisation of neural activity within as well as across distributed regions is likely to subserve any cognitive process, with auditory cognition being no exception. Brain oscillations, in a range of frequencies, are a putative hallmark of this synchronisation process. Embedded in a larger effort to relate human cognition to brain oscillations, a field of research is emerging on how synchronisation within, as well as between, brain regions may shape auditory cognition. Combined with much improved source localisation and connectivity techniques, it has become possible to study directly the neural activity of auditory cortex with unprecedented spatio-temporal fidelity and to uncover frequency-specific long-range connectivities across the human cerebral cortex. In the present review, we will summarise recent contributions mainly of our laboratories to this emerging domain. We present (1) a more general introduction on how to study local as well as interareal synchronisation in human M/EEG; (2) how these networks may subserve and influence illusory auditory perception (clinical and non-clinical) and (3) auditory selective attention; and (4) how oscillatory networks further reflect and impact on speech comprehension. This article is part of a Special Issue entitled Human Auditory Neuroimaging. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. Geometrical superresolved imaging using nonperiodic spatial masking.

    PubMed

    Borkowski, Amikam; Zalevsky, Zeev; Javidi, Bahram

    2009-03-01

    The resolution of every imaging system is limited either by the F-number of its optics or by the geometry of its detection array. The geometrical limitation is caused by lack of spatial sampling points as well as by the shape of every sampling pixel that generates spectral low-pass filtering. We present a novel approach to overcome the low-pass filtering that is due to the shape of the sampling pixels. The approach combines special algorithms together with spatial masking placed in the intermediate image plane and eventually allows geometrical superresolved imaging without relation to the actual shape of the pixels.
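
    A small illustration of the geometrical limitation described above: an ideal square sampling pixel acts as a spatial low-pass filter whose modulation transfer function is a sinc of the pixel width, independent of the optics; the pixel size below is an arbitrary example.

        import numpy as np

        def pixel_mtf(spatial_freq_cyc_per_mm, pixel_width_mm):
            """MTF of an ideal square pixel aperture: |sinc(f * w)|, first zero at f = 1/w."""
            return np.abs(np.sinc(spatial_freq_cyc_per_mm * pixel_width_mm))

        f = np.linspace(0, 400, 5)                 # cycles/mm
        print(pixel_mtf(f, pixel_width_mm=0.005))  # 5-um pixels: first null at 200 cycles/mm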

  15. Acousto-Optic Tunable Filter for Time-Domain Processing of Ultra-Short Optical Pulses,

    DTIC Science & Technology

    The application of acousto-optic tunable filters for shaping of ultra-fast pulses in the time domain is analyzed and demonstrated. With the rapid...advance of acousto-optic tunable filter (AOTF) technology, the opportunity for sophisticated signal processing capabilities arises. AOTFs offer unique

  16. Art and science: how musical training shapes the brain

    PubMed Central

    Barrett, Karen Chan; Ashley, Richard; Strait, Dana L.; Kraus, Nina

    2013-01-01

    What makes a musician? In this review, we discuss innate and experience-dependent factors that mold the musician brain in addition to presenting new data in children that indicate that some neural enhancements in musicians unfold with continued training over development. We begin by addressing effects of training on musical expertise, presenting neural, perceptual, and cognitive evidence to support the claim that musicians are shaped by their musical training regimes. For example, many musician-advantages in the neural encoding of sound, auditory perception, and auditory-cognitive skills correlate with their extent of musical training, are not observed in young children just initiating musical training, and differ based on the type of training pursued. Even amidst innate characteristics that contribute to the biological building blocks that make up the musician, musicians demonstrate further training-related enhancements through extensive education and practice. We conclude by reviewing evidence from neurobiological and epigenetic approaches to frame biological markers of musicianship in the context of interactions between genetic and experience-related factors. PMID:24137142

  17. Art and science: how musical training shapes the brain.

    PubMed

    Barrett, Karen Chan; Ashley, Richard; Strait, Dana L; Kraus, Nina

    2013-01-01

    What makes a musician? In this review, we discuss innate and experience-dependent factors that mold the musician brain in addition to presenting new data in children that indicate that some neural enhancements in musicians unfold with continued training over development. We begin by addressing effects of training on musical expertise, presenting neural, perceptual, and cognitive evidence to support the claim that musicians are shaped by their musical training regimes. For example, many musician-advantages in the neural encoding of sound, auditory perception, and auditory-cognitive skills correlate with their extent of musical training, are not observed in young children just initiating musical training, and differ based on the type of training pursued. Even amidst innate characteristics that contribute to the biological building blocks that make up the musician, musicians demonstrate further training-related enhancements through extensive education and practice. We conclude by reviewing evidence from neurobiological and epigenetic approaches to frame biological markers of musicianship in the context of interactions between genetic and experience-related factors.

  18. Manipulation of BDNF signaling modifies the experience-dependent plasticity induced by pure tone exposure during the critical period in the primary auditory cortex.

    PubMed

    Anomal, Renata; de Villers-Sidani, Etienne; Merzenich, Michael M; Panizzutti, Rogerio

    2013-01-01

    Sensory experience powerfully shapes cortical sensory representations during an early developmental "critical period" of plasticity. In the rat primary auditory cortex (A1), the experience-dependent plasticity is exemplified by significant, long-lasting distortions in frequency representation after mere exposure to repetitive frequencies during the second week of life. In the visual system, the normal unfolding of critical period plasticity is strongly dependent on the elaboration of brain-derived neurotrophic factor (BDNF), which promotes the establishment of inhibition. Here, we tested the hypothesis that BDNF signaling plays a role in the experience-dependent plasticity induced by pure tone exposure during the critical period in the primary auditory cortex. Elvax resin implants filled with either a blocking antibody against BDNF or the BDNF protein were placed on the A1 of rat pups throughout the critical period window. These pups were then exposed to 7 kHz pure tone for 7 consecutive days and their frequency representations were mapped. BDNF blockade completely prevented the shaping of cortical tuning by experience and resulted in poor overall frequency tuning in A1. By contrast, BDNF infusion on the developing A1 amplified the effect of 7 kHz tone exposure compared to control. These results indicate that BDNF signaling participates in the experience-dependent plasticity induced by pure tone exposure during the critical period in A1.

  19. Experimental investigation of shaping disturbance observer design for motion control of precision mechatronic stages with resonances

    NASA Astrophysics Data System (ADS)

    Yang, Jin; Hu, Chuxiong; Zhu, Yu; Wang, Ze; Zhang, Ming

    2017-08-01

    In this paper, a shaping disturbance observer (SDOB) is investigated for precision mechatronic stages with middle-frequency zero/pole-type resonance, to achieve good motion control performance in practical manufacturing situations. Compared with the traditional standard disturbance observer (DOB), in the SDOB a pole-zero-cancellation-based shaping filter is cascaded with the mechatronic stage plant to meet the challenge of motion control performance deterioration caused by actual resonance. Noting that pole-zero cancellation is inevitably imperfect and the controller may consequently even become unstable in practice, a frequency-domain stability analysis is conducted to find out how each parameter of the shaping filter affects control stability. Moreover, a robust design criterion for the shaping filter and a design procedure for the SDOB are proposed to guide the actual design and facilitate practical implementation. The SDOB with the proposed design criterion is applied to a linear motor driven stage and a voice motor driven stage, respectively. Experimental results consistently validate the effectiveness of the proposed SDOB scheme in practical mechatronic motion applications. The proposed SDOB design could serve as an effective unit in the controller design for motion stages of mechanical manufacturing equipment.
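
    A sketch of the kind of pole-zero-cancellation shaping filter discussed above, written as a continuous-time biquad whose zeros cancel a lightly damped plant resonance and whose poles re-insert a well-damped pair at the same frequency; the resonance frequency and damping ratios are placeholders, and the surrounding SDOB loop (nominal model and Q-filter) is not shown.

        import numpy as np
        from scipy import signal

        def resonance_shaping_filter(w_r, zeta_z, zeta_p):
            """F(s) = (s^2 + 2*zeta_z*w_r*s + w_r^2) / (s^2 + 2*zeta_p*w_r*s + w_r^2).

            The zeros cancel the plant's lightly damped resonant poles (zeta_z ~ plant damping);
            the new poles at the same frequency, with zeta_p >> zeta_z, flatten the response.
            """
            num = [1.0, 2.0 * zeta_z * w_r, w_r**2]
            den = [1.0, 2.0 * zeta_p * w_r, w_r**2]
            return signal.TransferFunction(num, den)

        # Lightly damped resonance at 80 Hz, alone and cascaded with its shaping filter.
        w_r = 2 * np.pi * 80.0
        plant_res = signal.TransferFunction([w_r**2], [1.0, 2.0 * 0.02 * w_r, w_r**2])
        shaper = resonance_shaping_filter(w_r, zeta_z=0.02, zeta_p=0.7)
        w, mag_plant, _ = signal.bode(plant_res, np.logspace(1, 4, 200))
        cascade = signal.TransferFunction(np.polymul(plant_res.num, shaper.num),
                                          np.polymul(plant_res.den, shaper.den))
        w, mag_shaped, _ = signal.bode(cascade, w)
        print(mag_plant.max(), mag_shaped.max())   # the resonant peak is strongly reduced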

  20. Tunable multiwavelength fiber laser based on a θ-shaped microfiber filter

    NASA Astrophysics Data System (ADS)

    Li, Yue; Xu, Zhilin; Luo, Yiyang; Xiang, Yang; Yan, Zhijun; Liu, Deming; Sun, Qizhen

    2018-06-01

    We propose and experimentally demonstrate a flexibly tunable multiwavelength fiber ring laser based on a θ-shaped microfiber filter in conjunction with an erbium-doped fiber amplifier. The stable operation of the multiwavelength lasing is successfully achieved at room temperature, with the peak power fluctuation less than 0.519 dB. By micro-adjusting the cavity length of the filter, the channel spacing can be independently tuned within the gain range of the optical amplifier. We have achieved 0.084 nm-spacing 48 channel, 0.147 nm-spacing 25 channel, 0.190 nm-spacing 20 channel and 0.302 nm-spacing 15 channel lasing wavelengths at room temperature.

  1. Light field image denoising using a linear 4D frequency-hyperfan all-in-focus filter

    NASA Astrophysics Data System (ADS)

    Dansereau, Donald G.; Bongiorno, Daniel L.; Pizarro, Oscar; Williams, Stefan B.

    2013-02-01

    Imaging in low light is problematic as sensor noise can dominate imagery, and increasing illumination or aperture size is not always effective or practical. Computational photography offers a promising solution in the form of the light field camera, which by capturing redundant information offers an opportunity for elegant noise rejection. We show that the light field of a Lambertian scene has a 4D hyperfan-shaped frequency-domain region of support at the intersection of a dual-fan and a hypercone. By designing and implementing a filter with an appropriately shaped passband we accomplish denoising with a single all-in-focus linear filter. Drawing examples from the Stanford Light Field Archive and images captured using a commercially available lenslet-based plenoptic camera, we demonstrate that the hyperfan outperforms competing methods including synthetic focus, fan-shaped antialiasing filters, and a range of modern nonlinear image and video denoising techniques. We show the hyperfan preserves depth of field, making it a single-step all-in-focus denoising filter suitable for general-purpose light field rendering. We include results for different noise types and levels, over a variety of metrics, and in real-world scenarios. Finally, we show that the hyperfan's performance scales with aperture count.
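
    An illustrative construction of a 4D frequency-domain passband shaped as the intersection of a dual-fan and a hypercone, following the geometric description above; the grid size, slope (depth) range, and hard-edged mask are arbitrary simplifications, and this is not the authors' implementation.

        import numpy as np

        def hyperfan_mask(n=32, k_min=-1.0, k_max=1.0, tol=0.1):
            """Boolean 4-D passband: a dual-fan (both plane slopes within [k_min, k_max])
            intersected with a hypercone constraint (the two slopes agree)."""
            f = np.fft.fftfreq(n)
            ws, wt, wu, wv = np.meshgrid(f, f, f, f, indexing="ij", sparse=True)
            eps = 1e-9
            slope_su = -wu / (ws + eps)          # depth-related slope in the (s,u) plane
            slope_tv = -wv / (wt + eps)          # depth-related slope in the (t,v) plane
            fan_su = (slope_su >= k_min) & (slope_su <= k_max)
            fan_tv = (slope_tv >= k_min) & (slope_tv <= k_max)
            hypercone = np.abs(ws * wv - wt * wu) <= tol * np.sqrt(
                ws**2 + wt**2 + wu**2 + wv**2 + eps)
            return fan_su & fan_tv & hypercone

        def hyperfan_denoise(light_field):
            """Apply the all-in-focus passband to an (s, t, u, v) light field (cubic dims assumed)."""
            mask = hyperfan_mask(light_field.shape[0])
            return np.real(np.fft.ifftn(np.fft.fftn(light_field) * mask))

        lf = np.random.rand(32, 32, 32, 32)      # placeholder light field
        out = hyperfan_denoise(lf)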

  2. Neural correlates of audiovisual integration in music reading.

    PubMed

    Nichols, Emily S; Grahn, Jessica A

    2016-10-01

    Integration of auditory and visual information is important to both language and music. In the linguistic domain, audiovisual integration alters event-related potentials (ERPs) at early stages of processing (the mismatch negativity (MMN)) as well as later stages (P300; Andres et al., 2011). However, the role of experience in audiovisual integration is unclear, as reading experience is generally confounded with developmental stage. Here we tested whether audiovisual integration of music appears similar to reading, and how musical experience altered integration. We compared brain responses in musicians and non-musicians on an auditory pitch-interval oddball task that evoked the MMN and P300, while manipulating whether visual pitch-interval information was congruent or incongruent with the auditory information. We predicted that the MMN and P300 would be largest when both auditory and visual stimuli deviated, because audiovisual integration would increase the neural response when the deviants were congruent. The results indicated that scalp topography differed between musicians and non-musicians for both the MMN and P300 response to deviants. Interestingly, musicians' musical training modulated integration of congruent deviants at both early and late stages of processing. We propose that early in the processing stream, visual information may guide interpretation of auditory information, leading to a larger MMN when auditory and visual information mismatch. At later attentional stages, integration of the auditory and visual stimuli leads to a larger P300 amplitude. Thus, experience with musical visual notation shapes the way the brain integrates abstract sound-symbol pairings, suggesting that musicians can indeed inform us about the role of experience in audiovisual integration. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  3. Testing the importance of auditory detections in avian point counts

    USGS Publications Warehouse

    Brewster, J.P.; Simons, T.R.

    2009-01-01

    Recent advances in the methods used to estimate detection probability during point counts suggest that the detection process is shaped by the types of cues available to observers. For example, models of the detection process based on distance-sampling or time-of-detection methods may yield different results for auditory versus visual cues because of differences in the factors that affect the transmission of these cues from a bird to an observer or differences in an observer's ability to localize cues. Previous studies suggest that auditory detections predominate in forested habitats, but it is not clear how often observers hear birds prior to detecting them visually. We hypothesized that auditory cues might be even more important than previously reported, so we conducted an experiment in a forested habitat in North Carolina that allowed us to better separate auditory and visual detections. Three teams of three observers each performed simultaneous 3-min unlimited-radius point counts at 30 points in a mixed-hardwood forest. One team member could see, but not hear birds, one could hear, but not see, and the third was nonhandicapped. Of the total number of birds detected, 2.9% were detected by deafened observers, 75.1% by blinded observers, and 78.2% by nonhandicapped observers. Detections by blinded and nonhandicapped observers were the same only 54% of the time. Our results suggest that the detection of birds in forest habitats is almost entirely by auditory cues. Because many factors affect the probability that observers will detect auditory cues, the accuracy and precision of avian point count estimates are likely lower than assumed by most field ornithologists. © 2009 Association of Field Ornithologists.

  4. To Modulate and Be Modulated: Estrogenic Influences on Auditory Processing of Communication Signals within a Socio-Neuro-Endocrine Framework

    PubMed Central

    Yoder, Kathleen M.; Vicario, David S.

    2012-01-01

    Gonadal hormones modulate behavioral responses to sexual stimuli, and communication signals can also modulate circulating hormone levels. In several species, these combined effects appear to underlie a two-way interaction between circulating gonadal hormones and behavioral responses to socially salient stimuli. Recent work in songbirds has shown that manipulating local estradiol levels in the auditory forebrain produces physiological changes that affect discrimination of conspecific vocalizations and can affect behavior. These studies provide new evidence that estrogens can directly alter auditory processing and indirectly alter the behavioral response to a stimulus. These studies show that: 1. Local estradiol action within an auditory area is necessary for socially-relevant sounds to induce normal physiological responses in the brains of both sexes; 2. These physiological effects occur much more quickly than predicted by the classical time-frame for genomic effects; 3. Estradiol action within the auditory forebrain enables behavioral discrimination among socially-relevant sounds in males; and 4. Estradiol is produced locally in the male brain during exposure to particular social interactions. The accumulating evidence suggests a socio-neuro-endocrinology framework in which estradiol is essential to auditory processing, is increased by a socially relevant stimulus, acts rapidly to shape perception of subsequent stimuli experienced during social interactions, and modulates behavioral responses to these stimuli. Brain estrogens are likely to function similarly in both songbird sexes because aromatase and estrogen receptors are present in both male and female forebrain. Estrogenic modulation of perception in songbirds and perhaps other animals could fine-tune male advertising signals and female ability to discriminate them, facilitating mate selection by modulating behaviors. PMID:22201281

  5. Fast multiview three-dimensional reconstruction method using cost volume filtering

    NASA Astrophysics Data System (ADS)

    Lee, Seung Joo; Park, Min Ki; Jang, In Yeop; Lee, Kwan H.

    2014-03-01

    As the number of customers who want to record three-dimensional (3-D) information using a mobile electronic device increases, it becomes increasingly important to develop a method that quickly reconstructs a 3-D model from multiview images. A fast multiview-based 3-D reconstruction method is presented that is suitable for the mobile environment, based on constructing a cost volume of the 3-D height field. This method consists of two steps: the construction of a reliable base surface and the recovery of shape details. In each step, the cost volume is constructed using photoconsistency and then filtered at multiple scales. The multiscale cost volume filtering allows the 3-D reconstruction to maintain the overall shape and to preserve the shape details. We demonstrate the strength of the proposed method in terms of computation time, accuracy, and unconstrained acquisition environment.
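
    As a rough illustration of the idea of cost-volume filtering over a height field (not the authors' algorithm), the Python sketch below takes a precomputed photoconsistency cost volume, smooths each height-hypothesis slice at several spatial scales, and picks the per-pixel hypothesis with the lowest combined cost; the scales, weights, and random test volume are invented for the example.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def filtered_height_field(cost_volume, sigmas=(1.0, 2.0, 4.0), weights=(0.5, 0.3, 0.2)):
          """Pick a height per pixel from a photoconsistency cost volume.

          cost_volume[h, y, x] holds the (lower-is-better) photoconsistency cost
          of height hypothesis h at pixel (y, x). Each slice is smoothed at
          several spatial scales and the weighted sum is minimized along the
          hypothesis axis. A generic stand-in for multiscale cost-volume filtering.
          """
          filtered = sum(w * gaussian_filter(cost_volume, sigma=(0, s, s))
                         for s, w in zip(sigmas, weights))
          return np.argmin(filtered, axis=0)   # index of the best height hypothesis

      # Toy example: 32 height hypotheses over a 64 x 64 image with random costs.
      rng = np.random.default_rng(1)
      height = filtered_height_field(rng.random((32, 64, 64)))
      print(height.shape, height.min(), height.max())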

  6. Mass peak shape improvement of a quadrupole mass filter when operating with a rectangular wave power supply.

    PubMed

    Luo, Chan; Jiang, Dan; Ding, Chuan-Fan; Konenkov, Nikolai V

    2009-09-01

    Numerical experiments were performed to study the first and second stability regions and find the optimal configurations of a quadrupole mass filter constructed of circular quadrupole rods with a rectangular wave power supply. The ion transmission contours were calculated using ion trajectory simulations. For the first stability region, the optimal rod set configuration has a ratio r/r0 of 1.110-1.115; for the second stability region, it is 1.128-1.130. Low-frequency direct current (DC) modulation with the parameters m = 0.04-0.16 and ν = ω/Ω = 1/8-1/14 improves the mass peak shape of the circular rod quadrupole mass filter at the optimal r/r0 ratio of 1.130. Amplitude modulation does not improve the mass peak shape. Copyright © 2009 John Wiley & Sons, Ltd.
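
    For readers unfamiliar with this kind of numerical experiment, the sketch below integrates the idealized equations of motion of a single ion in a quadrupole field driven by a rectangular waveform and reports whether the ion stays within the field radius. It is only schematic: the field of circular rods, the DC modulation, and the parameter values studied in the paper are not modeled, and every number here is a placeholder.

      import numpy as np

      E_CHARGE, AMU = 1.602e-19, 1.661e-27      # C, kg
      R0 = 4.0e-3                               # field radius (m), illustrative
      F_RF = 1.0e6                              # rectangular drive frequency (Hz)

      def transmitted(mass_amu, U, V, x0=2e-4, n_rf=200, steps_per_rf=400):
          """Integrate x/y motion of one ion; True if it never reaches r0."""
          m = mass_amu * AMU
          k = 2.0 * E_CHARGE / (m * R0 ** 2)    # ideal quadrupole field factor
          dt = 1.0 / (F_RF * steps_per_rf)
          x = y = x0
          vx = vy = 0.0
          for i in range(n_rf * steps_per_rf):
              phase = (i / steps_per_rf) % 1.0
              drive = U + (V if phase < 0.5 else -V)   # rectangular wave supply
              vx += -k * drive * x * dt                # semi-implicit Euler step
              x += vx * dt
              vy += +k * drive * y * dt
              y += vy * dt
              if x * x + y * y > R0 ** 2:
                  return False
          return True

      # RF-only operation (U = 0): light ions should go unstable, heavier ones pass
      # (untuned, illustrative values).
      for mass in (50, 100, 200):
          print(mass, transmitted(mass, U=0.0, V=100.0))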

  7. The iso-response method: measuring neuronal stimulus integration with closed-loop experiments

    PubMed Central

    Gollisch, Tim; Herz, Andreas V. M.

    2012-01-01

    Throughout the nervous system, neurons integrate high-dimensional input streams and transform them into an output of their own. This integration of incoming signals involves filtering processes and complex non-linear operations. The shapes of these filters and non-linearities determine the computational features of single neurons and their functional roles within larger networks. A detailed characterization of signal integration is thus a central ingredient to understanding information processing in neural circuits. Conventional methods for measuring single-neuron response properties, such as reverse correlation, however, are often limited by the implicit assumption that stimulus integration occurs in a linear fashion. Here, we review a conceptual and experimental alternative that is based on exploring the space of those sensory stimuli that result in the same neural output. As demonstrated by recent results in the auditory and visual system, such iso-response stimuli can be used to identify the non-linearities relevant for stimulus integration, disentangle consecutive neural processing steps, and determine their characteristics with unprecedented precision. Automated closed-loop experiments are crucial for this advance, allowing rapid search strategies for identifying iso-response stimuli during experiments. Prime targets for the method are feed-forward neural signaling chains in sensory systems, but the method has also been successfully applied to feedback systems. Depending on the specific question, “iso-response” may refer to a predefined firing rate, single-spike probability, first-spike latency, or other output measures. Examples from different studies show that substantial progress in understanding neural dynamics and coding can be achieved once rapid online data analysis and stimulus generation, adaptive sampling, and computational modeling are tightly integrated into experiments. PMID:23267315

  8. Input-current shaped ac to dc converters

    NASA Technical Reports Server (NTRS)

    1986-01-01

    The problem of achieving near-unity power factor while supplying power to a dc load from a single-phase ac power source is examined. Power processors for this application must perform three functions: input current shaping, energy storage, and output voltage regulation. The methods available for performing each of these three functions are reviewed. Input current shaping methods are either active or passive, with the active methods divided into buck-like and boost-like techniques. In addition to large reactances, energy storage methods include resonant filters, active filters, and active storage schemes. Fast voltage regulation can be achieved by post regulation or by supplementing the current shaping topology with an extra switch. The discussion concludes with indications of which methods are best suited to particular applications.

  9. Speech enhancement using the modified phase-opponency model.

    PubMed

    Deshmukh, Om D; Espy-Wilson, Carol Y; Carney, Laurel H

    2007-06-01

    In this paper we present a model called the Modified Phase-Opponency (MPO) model for single-channel speech enhancement when the speech is corrupted by additive noise. The MPO model is based on the auditory PO model, proposed for detection of tones in noise. The PO model includes a physiologically realistic mechanism for processing the information in neural discharge times and exploits the frequency-dependent phase properties of the tuned filters in the auditory periphery by using a cross-auditory-nerve-fiber coincidence detection for extracting temporal cues. The MPO model alters the components of the PO model such that the basic functionality of the PO model is maintained but the properties of the model can be analyzed and modified independently. The MPO-based speech enhancement scheme does not need to estimate the noise characteristics nor does it assume that the noise satisfies any statistical model. The MPO technique leads to the lowest value of the LPC-based objective measures and the highest value of the perceptual evaluation of speech quality measure compared to other methods when the speech signals are corrupted by fluctuating noise. Combining the MPO speech enhancement technique with our aperiodicity, periodicity, and pitch detector further improves its performance.

  10. Habituation of Auditory Steady State Responses Evoked by Amplitude-Modulated Acoustic Signals in Rats

    PubMed Central

    Prado-Gutierrez, Pavel; Castro-Fariñas, Anisleidy; Morgado-Rodriguez, Lisbet; Velarde-Reyes, Ernesto; Martínez, Agustín D.; Martínez-Montes, Eduardo

    2015-01-01

    Generation of the auditory steady state responses (ASSR) is commonly explained by the linear combination of random background noise activity and the stationary response. Based on this model, the decrease of amplitude that occurs over the sequential averaging of epochs of the raw data has been exclusively linked to the cancelation of noise. Nevertheless, this behavior might also reflect the non-stationary response of the ASSR generators. We tested this hypothesis by characterizing the ASSR time course in rats at different auditory maturational stages. ASSR were evoked by 8-kHz tones of different supra-threshold intensities, modulated in amplitude at 115 Hz. Results show that the ASSR amplitude habituated to the sustained stimulation and that dishabituation occurred when deviant stimuli were presented. ASSR habituation increased as animals became adults, suggesting that the ability to filter acoustic stimuli with non-relevant temporal information increased with age. Results are discussed in terms of the current model of ASSR generation and analysis procedures. They might have implications for audiometric tests designed to assess hearing in subjects who cannot provide reliable results in psychophysical trials. PMID:26557360
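
    A toy simulation makes the competing interpretations concrete: if each epoch contains the same stationary response plus noise, the amplitude of the cumulative average settles at the response amplitude as the noise cancels, whereas a habituating (non-stationary) response drives the averaged amplitude steadily downward. The sketch below is purely illustrative; the sampling rate, noise level, and habituation time constant are invented.

      import numpy as np

      rng = np.random.default_rng(0)
      fs, f_mod = 1000, 115                     # Hz; 1-s epochs give 1-Hz frequency bins
      t = np.arange(fs) / fs

      def averaged_assr_amplitude(epoch_amplitudes, noise_sd=2.0):
          """Amplitude at the modulation frequency of the cumulative epoch average."""
          avg = np.zeros_like(t)
          for k, a in enumerate(epoch_amplitudes, start=1):
              epoch = a * np.sin(2 * np.pi * f_mod * t) + rng.normal(0, noise_sd, t.size)
              avg += (epoch - avg) / k          # running mean of the first k epochs
          spectrum = np.abs(np.fft.rfft(avg)) * 2 / t.size
          return spectrum[f_mod]                # bin at 115 Hz

      n = 200
      stationary = averaged_assr_amplitude(np.full(n, 1.0))
      habituating = averaged_assr_amplitude(np.exp(-np.arange(n) / 50.0))
      print(stationary, habituating)            # ~1.0 versus a clearly smaller value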

  11. Effects of background noise on acoustic characteristics of Bengalese finch songs.

    PubMed

    Shiba, Shintaro; Okanoya, Kazuo; Tachibana, Ryosuke O

    2016-12-01

    Online regulation of vocalization in response to auditory feedback is one of the essential issues for vocal communication. One such audio-vocal interaction is the Lombard effect, an involuntary increase in vocal amplitude in response to the presence of background noise. Along with vocal amplitude, other acoustic characteristics, including fundamental frequency (F0), also change in some species. Bengalese finches (Lonchura striata var. domestica) are a suitable model for comparative, ethological, and neuroscientific studies on audio-vocal interaction because they require real-time auditory feedback of their own songs to maintain normal singing. Here, changes in amplitude and F0 under noise presentation are demonstrated, with a focus on the distinct song elements (i.e., notes) of Bengalese finch song. To accurately analyze these acoustic characteristics, two different bandpass-filtered noises at two levels of sound intensity were used. The results confirmed that the Lombard effect occurs at the note level of Bengalese finch song. Further, individually specific modes of changes in F0 are shown. These behavioral changes suggest that the auditory-feedback-based vocal control mechanisms have a predictable effect on amplitude, but complex spectral effects on individual note production.

  12. Relating binaural pitch perception to the individual listener's auditory profile.

    PubMed

    Santurette, Sébastien; Dau, Torsten

    2012-04-01

    The ability of eight normal-hearing listeners and fourteen listeners with sensorineural hearing loss to detect and identify pitch contours was measured for binaural-pitch stimuli and salience-matched monaurally detectable pitches. In an effort to determine whether impaired binaural pitch perception was linked to a specific deficit, the auditory profiles of the individual listeners were characterized using measures of loudness perception, cognitive ability, binaural processing, temporal fine structure processing, and frequency selectivity, in addition to common audiometric measures. Two of the listeners were found not to perceive binaural pitch at all, despite a clear detection of monaural pitch. While both binaural and monaural pitches were detectable by all other listeners, identification scores were significantly lower for binaural than for monaural pitch. A total absence of binaural pitch sensation coexisted with a loss of a binaural signal-detection advantage in noise, without implying reduced cognitive function. Auditory filter bandwidths did not correlate with the difference in pitch identification scores between binaural and monaural pitches. However, subjects with impaired binaural pitch perception showed deficits in temporal fine structure processing. Whether the observed deficits stemmed from peripheral or central mechanisms could not be resolved here, but the present findings may be useful for hearing loss characterization.

  13. Explaining the high voice superiority effect in polyphonic music: evidence from cortical evoked potentials and peripheral auditory models.

    PubMed

    Trainor, Laurel J; Marie, Céline; Bruce, Ian C; Bidelman, Gavin M

    2014-02-01

    Natural auditory environments contain multiple simultaneously-sounding objects and the auditory system must parse the incoming complex sound wave they collectively create into parts that represent each of these individual objects. Music often similarly requires processing of more than one voice or stream at the same time, and behavioral studies demonstrate that human listeners show a systematic perceptual bias in processing the highest voice in multi-voiced music. Here, we review studies utilizing event-related brain potentials (ERPs), which support the notions that (1) separate memory traces are formed for two simultaneous voices (even without conscious awareness) in auditory cortex and (2) adults show more robust encoding (i.e., larger ERP responses) to deviant pitches in the higher than in the lower voice, indicating better encoding of the former. Furthermore, infants also show this high-voice superiority effect, suggesting that the perceptual dominance observed across studies might result from neurophysiological characteristics of the peripheral auditory system. Although musically untrained adults show smaller responses in general than musically trained adults, both groups similarly show a more robust cortical representation of the higher than of the lower voice. Finally, years of experience playing a bass-range instrument reduces but does not reverse the high-voice superiority effect, indicating that although it can be modified, it is not highly neuroplastic. New modeling experiments examined the possibility that characteristics of middle-ear filtering and cochlear dynamics (e.g., suppression) reflected in auditory nerve firing patterns might account for the high-voice superiority effect. Simulations show that both place and temporal auditory nerve (AN) coding schemes predict a high-voice superiority well across a wide range of interval spacings and registers. Collectively, we infer an innate, peripheral origin for the high-voice superiority observed in human ERP and psychophysical music listening studies. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. Central auditory processing effects induced by solvent exposure.

    PubMed

    Fuente, Adrian; McPherson, Bradley

    2007-01-01

    Various studies have demonstrated that organic solvent exposure may induce auditory damage. Studies conducted in workers occupationally exposed to solvents suggest, on the one hand, poorer hearing thresholds than in matched non-exposed workers, and on the other hand, central auditory damage due to solvent exposure. Taking into account the potential auditory damage induced by solvent exposure due to the neurotoxic properties of such substances, the present research aimed at studying the possible auditory processing disorder (APD), and possible hearing difficulties in daily life listening situations that solvent-exposed workers may acquire. Fifty workers exposed to a mixture of organic solvents (xylene, toluene, methyl ethyl ketone) and 50 non-exposed workers matched by age, gender and education were assessed. Only subjects with no history of ear infections, high blood pressure, kidney failure, metabolic and neurological diseases, or alcoholism were selected. The subjects had either normal hearing or sensorineural hearing loss, and normal tympanometric results. Hearing-in-noise (HINT), dichotic digit (DD), filtered speech (FS), pitch pattern sequence (PPS), and random gap detection (RGD) tests were carried out in the exposed and non-exposed groups. A self-report inventory of each subject's performance in daily life listening situations, the Amsterdam Inventory for Auditory Disability and Handicap, was also administered. Significant threshold differences between exposed and non-exposed workers were found at some of the hearing test frequencies, for both ears. However, exposed workers still presented normal hearing thresholds as a group (equal or better than 20 dB HL). Also, for the HINT, DD, PPS, FS and RGD tests, non-exposed workers obtained better results than exposed workers. Finally, solvent-exposed workers reported significantly more hearing complaints in daily life listening situations than non-exposed workers. It is concluded that subjects exposed to solvents may acquire an APD and thus the sole use of pure-tone audiometry is insufficient to assess hearing in solvent-exposed populations.

  15. Flexible RF filter using a nonuniform SCISSOR.

    PubMed

    Zhuang, Leimeng

    2016-03-15

    This work presents a flexible radiofrequency (RF) filter using an integrated microwave photonic circuit that comprises a nonuniform side-coupled integrated spaced sequence of resonators (N-SCISSOR). The filter passband can be reconfigured by varying the N-SCISSOR parameters. When employing a dual-parallel Mach-Zehnder modulator, the filter is also able to perform frequency down-conversion. In the experiment, various filter response shapes are shown, ranging from a flat-top band-pass filter to its total opposite, a high-rejection (>40 dB) notch filter, with a frequency coverage of greater than two octaves. The frequency down-conversion function is also demonstrated.

  16. Recursive Algorithms for Real-Time Digital CR-RCn Pulse Shaping

    NASA Astrophysics Data System (ADS)

    Nakhostin, M.

    2011-10-01

    This paper reports on recursive algorithms for real-time implementation of CR-(RC)n filters in digital nuclear spectroscopy systems. The algorithms are derived by calculating the Z-transfer function of the filters for filter orders up to n = 4. The performance of these filters is compared with that of the conventional digital trapezoidal filter using a noise generator that separately generates pure series, 1/f, and parallel noise. The results of our study enable one to select the optimum digital filter for different noise and rate conditions.
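
    The recursive structure such shapers take can be illustrated with a simple discretization (a backward-difference approximation, not the Z-transfer functions derived in the paper): a CR differentiator followed by n identical RC integrator stages. The time constant, sampling period, and test pulse below are arbitrary.

      import numpy as np

      def cr_rc_n(x, tau, T, n=4):
          """Apply a CR-(RC)n shaper to the sample sequence x.

          tau: shaping time constant (s); T: sampling period (s); n: RC stages.
          Backward-difference discretization, one possible recursive realization.
          """
          a = tau / (tau + T)                  # common pole of the CR and RC stages
          y = np.zeros(len(x))
          y[0] = a * x[0]                      # CR (high-pass): y[k] = a*(y[k-1] + x[k] - x[k-1])
          for k in range(1, len(x)):
              y[k] = a * (y[k - 1] + x[k] - x[k - 1])
          for _ in range(n):                   # RC (low-pass): z[k] = (1-a)*y[k] + a*z[k-1]
              z = np.zeros_like(y)
              z[0] = (1 - a) * y[0]
              for k in range(1, len(y)):
                  z[k] = (1 - a) * y[k] + a * z[k - 1]
              y = z
          return y

      # Shape a step-like detector pulse sampled every 10 ns with tau = 1 us.
      pulse = np.concatenate([np.zeros(50), np.ones(4000)])
      shaped = cr_rc_n(pulse, tau=1e-6, T=10e-9, n=4)
      print(round(shaped.max(), 3), shaped.argmax())

    With equal CR and RC time constants the unit-step response peaks near n·tau, which is one way to sanity-check the recursion.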

  17. Gravitoinertial force magnitude and direction influence head-centric auditory localization

    NASA Technical Reports Server (NTRS)

    DiZio, P.; Held, R.; Lackner, J. R.; Shinn-Cunningham, B.; Durlach, N.

    2001-01-01

    We measured the influence of gravitoinertial force (GIF) magnitude and direction on head-centric auditory localization to determine whether a true audiogravic illusion exists. In experiment 1, supine subjects adjusted computer-generated dichotic stimuli until they heard a fused sound straight ahead in the midsagittal plane of the head under a variety of GIF conditions generated in a slow-rotation room. The dichotic stimuli were constructed by convolving broadband noise with head-related transfer function pairs that model the acoustic filtering at the listener's ears. These stimuli give rise to the perception of externally localized sounds. When the GIF was increased from 1 to 2 g and rotated 60 degrees rightward relative to the head and body, subjects on average set an acoustic stimulus 7.3 degrees right of their head's median plane to hear it as straight ahead. When the GIF was doubled and rotated 60 degrees leftward, subjects set the sound 6.8 degrees leftward of baseline values to hear it as centered. In experiment 2, increasing the GIF in the median plane of the supine body to 2 g did not influence auditory localization. In experiment 3, tilts up to 75 degrees of the supine body relative to the normal 1 g GIF led to small shifts, 1-2 degrees, of auditory setting toward the up ear to maintain a head-centered sound localization. These results show that head-centric auditory localization is affected by azimuthal rotation and increase in magnitude of the GIF and demonstrate that an audiogravic illusion exists. Sound localization is shifted in the direction opposite GIF rotation by an amount related to the magnitude of the GIF and its angular deviation relative to the median plane.

  18. Compact wideband filter element based on complementary split-ring resonators

    NASA Astrophysics Data System (ADS)

    Horestani, Ali K.; Shaterian, Zahra; Withayachumnankul, Withawat; Fumeaux, Christophe; Al-Sarawi, Said; Abbott, Derek

    2011-12-01

    A double resonance defected ground structure is proposed as a filter element. The structure involves a transmission line loaded with complementary split ring resonators embedded in a dumbbell shape defected ground structure. By using a parametric study, it is demonstrated that the two resonance frequencies can be independently tuned. Therefore the structure can be used for different applications such as dual bandstop filters and wide bandstop filters.

  19. Optimal Recursive Digital Filters for Active Bending Stabilization

    NASA Technical Reports Server (NTRS)

    Orr, Jeb S.

    2013-01-01

    In the design of flight control systems for large flexible boosters, it is common practice to utilize active feedback control of the first lateral structural bending mode so as to suppress transients and reduce gust loading. Typically, active stabilization or phase stabilization is achieved by carefully shaping the loop transfer function in the frequency domain via the use of compensating filters combined with the frequency response characteristics of the nozzle/actuator system. In this paper we present a new approach for parameterizing and determining optimal low-order recursive linear digital filters so as to satisfy phase shaping constraints for bending and sloshing dynamics while simultaneously maximizing attenuation in other frequency bands of interest, e.g. near higher frequency parasitic structural modes. By parameterizing the filter directly in the z-plane with certain restrictions, the search space of candidate filter designs that satisfy the constraints is restricted to stable, minimum phase recursive low-pass filters with well-conditioned coefficients. Combined with optimal output feedback blending from multiple rate gyros, the present approach enables rapid and robust parametrization of autopilot bending filters to attain flight control performance objectives. Numerical results are presented that illustrate the application of the present technique to the development of rate gyro filters for an exploration-class multi-engined space launch vehicle.
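
    The core idea of parameterizing directly in the z-plane can be sketched as follows: choose pole and zero radii strictly inside the unit circle so that stability and minimum phase hold by construction, then evaluate the resulting gain and phase in the frequency bands of interest. This is only a schematic second-order example with made-up sample rate and mode frequencies, not the optimization or sensor-blending scheme described in the paper.

      import numpy as np
      from scipy.signal import freqz, zpk2tf

      def zplane_filter(pole_r, pole_theta, zero_r, zero_theta):
          """Second-order recursive filter defined by one conjugate pole/zero pair.

          Radii < 1 guarantee a stable, minimum-phase design by construction.
          """
          assert 0.0 <= pole_r < 1.0 and 0.0 <= zero_r < 1.0
          zeros = [zero_r * np.exp(1j * zero_theta), zero_r * np.exp(-1j * zero_theta)]
          poles = [pole_r * np.exp(1j * pole_theta), pole_r * np.exp(-1j * pole_theta)]
          b, a = zpk2tf(zeros, poles, k=1.0)
          b = np.real(b) * (np.sum(np.real(a)) / np.sum(np.real(b)))   # unit DC gain
          return b, np.real(a)

      fs = 200.0                                         # control-loop rate (Hz), invented
      b, a = zplane_filter(0.70, 2 * np.pi * 3.0 / fs,   # pole pair placed near a 3-Hz bending mode
                           0.95, 2 * np.pi * 12.0 / fs)  # zero pair attenuating a 12-Hz parasitic mode

      for f in (3.0, 12.0, 40.0):                        # gain (dB) and phase (deg) at test frequencies
          w, h = freqz(b, a, worN=[2 * np.pi * f / fs])
          print(f, 20 * np.log10(abs(h[0])), np.angle(h[0], deg=True))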

  20. Irregular Speech Rate Dissociates Auditory Cortical Entrainment, Evoked Responses, and Frontal Alpha

    PubMed Central

    Kayser, Stephanie J.; Ince, Robin A.A.; Gross, Joachim

    2015-01-01

    The entrainment of slow rhythmic auditory cortical activity to the temporal regularities in speech is considered to be a central mechanism underlying auditory perception. Previous work has shown that entrainment is reduced when the quality of the acoustic input is degraded, but has also linked rhythmic activity at similar time scales to the encoding of temporal expectations. To understand these bottom-up and top-down contributions to rhythmic entrainment, we manipulated the temporal predictive structure of speech by parametrically altering the distribution of pauses between syllables or words, thereby rendering the local speech rate irregular while preserving intelligibility and the envelope fluctuations of the acoustic signal. Recording EEG activity in human participants, we found that this manipulation did not alter neural processes reflecting the encoding of individual sound transients, such as evoked potentials. However, the manipulation significantly reduced the fidelity of auditory delta (but not theta) band entrainment to the speech envelope. It also reduced left frontal alpha power and this alpha reduction was predictive of the reduced delta entrainment across participants. Our results show that rhythmic auditory entrainment in delta and theta bands reflect functionally distinct processes. Furthermore, they reveal that delta entrainment is under top-down control and likely reflects prefrontal processes that are sensitive to acoustical regularities rather than the bottom-up encoding of acoustic features. SIGNIFICANCE STATEMENT The entrainment of rhythmic auditory cortical activity to the speech envelope is considered to be critical for hearing. Previous work has proposed divergent views in which entrainment reflects either early evoked responses related to sound encoding or high-level processes related to expectation or cognitive selection. Using a manipulation of speech rate, we dissociated auditory entrainment at different time scales. Specifically, our results suggest that delta entrainment is controlled by frontal alpha mechanisms and thus support the notion that rhythmic auditory cortical entrainment is shaped by top-down mechanisms. PMID:26538641

  1. Cell nuclei and cytoplasm joint segmentation using the sliding band filter.

    PubMed

    Quelhas, Pedro; Marcuzzo, Monica; Mendonça, Ana Maria; Campilho, Aurélio

    2010-08-01

    Microscopy cell image analysis is a fundamental tool for biological research. In particular, multivariate fluorescence microscopy is used to observe different aspects of cells in cultures. It is still common practice to perform analysis tasks by visual inspection of individual cells, which is time consuming, exhausting, and prone to subjective bias. This makes automatic cell image analysis essential for large scale, objective studies of cell cultures. Traditionally the task of automatic cell analysis is approached through the use of image segmentation methods for extraction of cells' locations and shapes. Image segmentation, although fundamental, is neither an easy task in computer vision nor robust to image quality changes. This makes image segmentation for cell detection semi-automated, requiring frequent tuning of parameters. We introduce a new approach for cell detection and shape estimation in multivariate images based on the sliding band filter (SBF). This filter's design makes it adequate to detect overall convex shapes and as such it performs well for cell detection. Furthermore, the parameters involved are intuitive as they are directly related to the expected cell size. Using the SBF we detect the locations and shapes of cell nuclei and cytoplasm. Based on the assumption that each cell has the same approximate shape center in both nuclei and cytoplasm fluorescence channels, we guide cytoplasm shape estimation by the nuclear detections, improving performance and reducing errors. Then we validate cell detection by gathering evidence from nuclei and cytoplasm channels. Additionally, we include overlap correction and shape regularization steps which further improve the estimated cell shapes. The approach is evaluated using two datasets with different types of data: a 20-image benchmark set of simulated cell culture images, containing 1000 simulated cells; and a 16-image Drosophila melanogaster Kc167 dataset containing 1255 cells, stained for DNA and actin. Both image datasets present a difficult problem due to the high variability of cell shapes and frequent cluster overlap between cells. On the Drosophila dataset our approach achieved a precision/recall of 95%/69% and 82%/90% for nuclei and cytoplasm detection, respectively, and an overall accuracy of 76%.

  2. Live CT imaging of sound reception anatomy and hearing measurements in the pygmy killer whale, Feresa attenuata.

    PubMed

    Montie, Eric W; Manire, Charlie A; Mann, David A

    2011-03-15

    In June 2008, two pygmy killer whales (Feresa attenuata) were stranded alive near Boca Grande, FL, USA, and were taken into rehabilitation. We used this opportunity to learn about the peripheral anatomy of the auditory system and hearing sensitivity of these rare toothed whales. Three-dimensional (3-D) reconstructions of head structures from X-ray computed tomography (CT) images revealed mandibles that were hollow, lacked a bony lamina medial to the pan bone and contained mandibular fat bodies that extended caudally and abutted the tympanoperiotic complex. Using auditory evoked potential (AEP) procedures, the modulation rate transfer function was determined. Maximum evoked potential responses occurred at modulation frequencies of 500 and 1000 Hz. The AEP-derived audiograms were U-shaped. The lowest hearing thresholds occurred between 20 and 60 kHz, with the best hearing sensitivity at 40 kHz. The auditory brainstem response (ABR) was composed of seven waves and resembled the ABR of the bottlenose and common dolphins. By changing electrode locations, creating 3-D reconstructions of the brain from CT images and measuring the amplitude of the ABR waves, we provided evidence that the neuroanatomical sources of ABR waves I, IV and VI were the auditory nerve, inferior colliculus and the medial geniculate body, respectively. The combination of AEP testing and CT imaging provided a new synthesis of methods for studying the auditory system of cetaceans.

  3. Impact imaging of aircraft composite structure based on a model-independent spatial-wavenumber filter.

    PubMed

    Qiu, Lei; Liu, Bin; Yuan, Shenfang; Su, Zhongqing

    2016-01-01

    The spatial-wavenumber filtering technique is an effective approach to distinguish the propagating direction and wave mode of Lamb wave in spatial-wavenumber domain. Therefore, it has been gradually studied for damage evaluation in recent years. But for on-line impact monitoring in practical application, the main problem is how to realize the spatial-wavenumber filtering of impact signal when the wavenumber of high spatial resolution cannot be measured or the accurate wavenumber curve cannot be modeled. In this paper, a new model-independent spatial-wavenumber filter based impact imaging method is proposed. In this method, a 2D cross-shaped array constructed by two linear piezoelectric (PZT) sensor arrays is used to acquire impact signal on-line. The continuous complex Shannon wavelet transform is adopted to extract the frequency narrowband signals from the frequency wideband impact response signals of the PZT sensors. A model-independent spatial-wavenumber filter is designed based on the spatial-wavenumber filtering technique. Based on the designed filter, a wavenumber searching and best match mechanism is proposed to implement the spatial-wavenumber filtering of the frequency narrowband signals without modeling, which can be used to obtain a wavenumber-time image of the impact relative to a linear PZT sensor array. By using the two wavenumber-time images of the 2D cross-shaped array, the impact direction can be estimated without blind angle. The impact distance relative to the 2D cross-shaped array can be calculated by using the difference of time-of-flight between the frequency narrowband signals of two different central frequencies and the corresponding group velocities. The validations performed on a carbon fiber composite laminate plate and an aircraft composite oil tank show a good impact localization accuracy of the model-independent spatial-wavenumber filter based impact imaging method. Copyright © 2015 Elsevier B.V. All rights reserved.
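
    The basic quantity such a filter operates on can be illustrated with a plain frequency-wavenumber (f-k) decomposition of a linear array: a 2D FFT over sensor position and time, followed by inspecting the energy versus wavenumber in a narrow frequency band. The sketch below uses a synthetic single-mode wave and invented sampling parameters; it is not the wavenumber-searching and best-match algorithm proposed in the paper.

      import numpy as np

      fs, dx = 1.0e6, 0.01                  # temporal rate (Hz) and sensor spacing (m), invented
      n_sensors, n_samples = 16, 1024
      t = np.arange(n_samples) / fs
      x = np.arange(n_sensors) * dx

      # Synthetic narrowband wave crossing the linear PZT array.
      f0, k0 = 50e3, 200.0                  # Hz, rad/m
      signals = np.sin(2 * np.pi * f0 * t[None, :] - k0 * x[:, None])

      # 2D FFT: axis 0 -> wavenumber, axis 1 -> frequency.
      spec = np.fft.fftshift(np.fft.fft2(signals), axes=0)
      k_axis = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(n_sensors, d=dx))   # rad/m
      f_axis = np.fft.fftfreq(n_samples, d=1 / fs)                            # Hz

      # Energy versus wavenumber in the frequency bin nearest f0; the peak sits at
      # +/- k0 (sign depends on the FFT convention), identifying the propagating mode.
      f_bin = np.argmin(np.abs(f_axis - f0))
      energy_vs_k = np.abs(spec[:, f_bin]) ** 2
      print(k_axis[np.argmax(energy_vs_k)])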

  4. Biodegradable microfabricated plug-filters for glaucoma drainage devices.

    PubMed

    Maleki, Teimour; Chitnis, Girish; Park, Jun Hyeong; Cantor, Louis B; Ziaie, Babak

    2012-06-01

    We report on the development of a batch-fabricated, biodegradable, truncated-cone-shaped plug filter to overcome postoperative hypotony in nonvalved glaucoma drainage devices. Plug filters are composed of biodegradable polymers that disappear once wound healing and bleb formation have progressed past the stage where hypotony from overfiltration may cause complications in the human eye. The biodegradable nature of the device eliminates the risks associated with permanent valves that may become blocked or influence the aqueous fluid flow rate in the long term. The plug-filter geometry simplifies its integration with commercial shunts. Aqueous humor outflow regulation is achieved by controlling the diameter of a laser-drilled through-hole. The batch-compatible fabrication involves a modified SU-8 molding to achieve truncated-cone-shaped pillars, polydimethylsiloxane micromolding, and hot embossing of biodegradable polymers. The developed plug filter is 500 μm long with base and apex plane diameters of 500 and 300 μm, respectively, and incorporates a laser-drilled through-hole with 44-μm effective diameter in the center.

  5. The FPase properties and morphology changes of a cellulolytic bacterium, Sporocytophaga sp. JL-01, on decomposing filter paper cellulose.

    PubMed

    Wang, Xiuran; Peng, Zhongqi; Sun, Xiaoling; Liu, Dongbo; Chen, Shan; Li, Fan; Xia, Hongmei; Lu, Tiancheng

    2012-01-01

    Sporocytophaga sp. JL-01 is a gliding, cellulose-degrading bacterium that can decompose filter paper (FP), carboxymethyl cellulose (CMC) and cellulose CF11. In this paper, the morphological characteristics of S. sp. JL-01 growing in FP liquid medium were studied by scanning electron microscopy (SEM), and one of the FPase components of this bacterium was analyzed. The results showed that the cell shapes were variable during the process of filter paper cellulose decomposition and that the rod shape might be connected with filter paper decomposition. After incubating for 120 h, the filter paper was decomposed significantly, and it was degraded completely within 144 h. FPase1 was purified from the supernatant and its characteristics were analyzed. The molecular weight of the FPase1 was 55 kDa. The optimum pH was 7.2 and the optimum temperature was 50°C under the experimental conditions. Zn(2+) and Co(2+) enhanced the enzyme activity, but Fe(3+) inhibited it.

  6. Auditory-nerve single-neuron thresholds to electrical stimulation from scala tympani electrodes.

    PubMed

    Parkins, C W; Colombo, J

    1987-12-31

    Single auditory-nerve neuron thresholds were studied in sensory-deafened squirrel monkeys to determine the effects of electrical stimulus shape and frequency on single-neuron thresholds. Frequency was separated into its components, pulse width and pulse rate, which were analyzed separately. Square and sinusoidal pulse shapes were compared. There were no, or only marginally significant, threshold differences in charge per phase between sinusoidal and square pulses of the same pulse width. There was a small (less than 0.5 dB) but significant threshold advantage for 200 microseconds/phase pulses delivered at low pulse rates (156 pps) compared to higher pulse rates (625 pps and 2500 pps). Pulse width was demonstrated to be the prime determinant of single-neuron threshold, resulting in strength-duration curves similar to those of other mammalian myelinated neurons, but with longer chronaxies. The most efficient electrical stimulus pulse width to use for cochlear implant stimulation was determined to be 100 microseconds/phase. This pulse width delivers the lowest charge/phase at threshold. The single-neuron strength-duration curves were compared to strength-duration curves of a computer model based on the specific anatomy of auditory-nerve neurons. The membrane capacitance and resulting chronaxie of the model can be varied by altering the length of the unmyelinated termination of the neuron, representing the unmyelinated portion of the neuron between the habenula perforata and the hair cell. This unmyelinated segment of the auditory-nerve neuron may be subject to aminoglycoside damage. Simulating a 10 micron unmyelinated termination for this model neuron produces a strength-duration curve that closely fits the single-neuron data obtained from aminoglycoside-deafened animals. Both the model and the single-neuron strength-duration curves differ significantly from behavioral threshold data obtained from monkeys and humans with cochlear implants. This discrepancy can best be explained by the involvement of higher-level neurologic processes in the behavioral responses. These findings suggest that the basic principles of neural membrane function must be considered in developing or analyzing electrical stimulation strategies for cochlear prostheses if the appropriate stimulation of frequency-specific populations of auditory-nerve neurons is the objective.
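
    The classical Weiss form of the strength-duration relation (used here only as a stand-in for the anatomy-based model in the paper) makes the trade-off between threshold current and charge per phase easy to tabulate; the rheobase and chronaxie below are placeholders, not fitted values.

      import numpy as np

      def weiss_threshold_current(pulse_width, rheobase, chronaxie):
          """Weiss strength-duration law: I_th = I_rheobase * (1 + chronaxie / pulse_width)."""
          return rheobase * (1.0 + chronaxie / pulse_width)

      rheobase, chronaxie = 50e-6, 350e-6                # 50 uA, 350 us (placeholders)
      for pw in (25e-6, 50e-6, 100e-6, 200e-6, 400e-6):  # pulse widths in seconds
          i_th = weiss_threshold_current(pw, rheobase, chronaxie)
          print(f"{pw * 1e6:5.0f} us  {i_th * 1e6:7.1f} uA  {i_th * pw * 1e9:6.2f} nC/phase")

    Under this simple law the threshold charge per phase, I_th x PW = I_rheobase x (PW + chronaxie), keeps falling as the pulse narrows, so the 100 microseconds/phase optimum reported above cannot be read off this toy relation; it reflects the authors' anatomy-based model and practical stimulator constraints that the Weiss form does not capture.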

  7. An Analysis of The Parameters Used In Speech ABR Assessment Protocols.

    PubMed

    Sanfins, Milaine D; Hatzopoulos, Stavros; Donadon, Caroline; Diniz, Thais A; Borges, Leticia R; Skarzynski, Piotr H; Colella-Santos, Maria Francisca

    2018-04-01

    The aim of this study was to assess the parameters of choice, such as duration, intensity, rate, polarity, number of sweeps, window length, stimulated ear, fundamental frequency, first formant, and second formant, from previously published speech ABR studies. To identify candidate articles, five databases were assessed using the following keyword descriptors: speech ABR, ABR-speech, speech auditory brainstem response, auditory evoked potential to speech, speech-evoked brainstem response, and complex sounds. The search identified 1288 articles published between 2005 and 2015. After filtering the total number of papers according to the inclusion and exclusion criteria, 21 studies were selected. Analyzing the protocol details used in 21 studies suggested that there is no consensus to date on a speech-ABR protocol and that the parameters of analysis used are quite variable between studies. This inhibits the wider generalization and extrapolation of data across languages and studies.

  8. Speech Data Analysis for Semantic Indexing of Video of Simulated Medical Crises

    DTIC Science & Technology

    2015-05-01

    scheduled approximately twice per week and are recorded as video data. During each session, the physician/instructor must manually review and annotate... spectrum, y, using the regression line y = ln(1 + Jx), where x is the auditory power spectral amplitude and J is a signal-dependent positive constant... The amplitude-warping transform is linear-like for J << 1 and logarithmic-like for J >> 1. 3. RASTA filtering: reintegrate the log critical-band...
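
    A minimal sketch of the amplitude-warping transform quoted in the snippet (the constant J is signal dependent in the source; a fixed value is used here purely for illustration):

      import numpy as np

      def amplitude_warp(power_spectrum, J):
          """Amplitude warping y = ln(1 + J*x) of an auditory power spectrum.

          For J*x << 1 the mapping is roughly linear in x; for J*x >> 1 it is
          roughly logarithmic, i.e. strongly compressive.
          """
          return np.log1p(J * np.asarray(power_spectrum, dtype=float))

      x = np.linspace(0.0, 10.0, 5)          # toy critical-band power values
      print(amplitude_warp(x, J=0.01))       # near-linear regime, roughly 0.01 * x
      print(amplitude_warp(x, J=100.0))      # logarithmic-like compression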

  9. The Brain as a Mixer, II. A Pilot Study of Central Auditory Integration Abilities of Normal and Retarded Children. Studies in Language and Language Behavior, Progress Report Number VII.

    ERIC Educational Resources Information Center

    Semmel, Melvyn I.; And Others

    To explore the binaural integration abilities of six educable mentally retarded boys (ages 8 to 13) and six normal boys (ages 7 to 12) to detect possible brain injury, an adaptation of Matzker's (1958) technique involving separating words into high and low frequencies was used. One frequency filter system presented frequencies from 425 to 1275…

  10. A Layer-specific Corticofugal Input to the Mouse Superior Colliculus.

    PubMed

    Zurita, Hector; Rock, Crystal; Perkins, Jessica; Apicella, Alfonso Junior

    2017-07-05

    In the auditory cortex (AC), corticofugal projections arise from each level of the auditory system and are considered to provide feedback "loops" important to modulate the flow of ascending information. It is well established that the cortex can influence the response of neurons in the superior colliculus (SC) via descending corticofugal projections. However, little is known about the relative contribution of different pyramidal neurons to these projections in the SC. We addressed this question by taking advantage of anterograde and retrograde neuronal tracing to directly examine the laminar distribution, long-range projections, and electrophysiological properties of pyramidal neurons projecting from the AC to the SC of the mouse brain. Here we show that layer 5 cortico-superior-collicular pyramidal neurons act as bandpass filters, resonating with a broad peak at ∼3 Hz, whereas layer 6 neurons act as low-pass filters. The dissimilar subthreshold properties of layer 5 and layer 6 cortico-superior-collicular pyramidal neurons can be described by differences in the hyperpolarization-activated cyclic nucleotide-gated cation h-current (Ih). Ih also reduced the summation of short trains of artificial excitatory postsynaptic potentials injected at the soma of layer 5, but not layer 6, cortico-superior-collicular pyramidal neurons, indicating a differential dampening effect of Ih on these neurons. © The Author 2017. Published by Oxford University Press. All rights reserved.

  11. The sensory substrate of multimodal communication in brown-headed cowbirds: are females sensory 'specialists' or 'generalists'?

    PubMed

    Ronald, Kelly L; Sesterhenn, Timothy M; Fernandez-Juricic, Esteban; Lucas, Jeffrey R

    2017-11-01

    Many animals communicate with multimodal signals. While we have an understanding of multimodal signal production, we know relatively less about receiver filtering of multimodal signals and whether filtering capacity in one modality influences filtering in a second modality. Most multimodal signals contain a temporal element, such as change in frequency over time or a dynamic visual display. We examined the relationship in temporal resolution across two modalities to test whether females are (1) sensory 'specialists', where a trade-off exists between the sensory modalities, (2) sensory 'generalists', where a positive relationship exists between the modalities, or (3) whether no relationship exists between modalities. We used female brown-headed cowbirds (Molothrus ater) to investigate this question as males court females with an audiovisual display. We found a significant positive relationship between female visual and auditory temporal resolution, suggesting that females are sensory 'generalists'. Females appear to resolve information well across multiple modalities, which may select for males that signal their quality similarly across modalities.

  12. Spin filter and molecular switch based on bowtie-shaped graphene nanoflake

    NASA Astrophysics Data System (ADS)

    Kang, Jun; Wu, Fengmin; Li, Jingbo

    2012-11-01

    The magnetic and transport properties of a bowtie-shaped graphene nanoflake (BGNF) are investigated from first-principles calculations. The eigenstates of the ferromagnetic (FM) BGNF near the Fermi level are found to be delocalized over the whole flake, whereas those of the antiferromagnetic (AFM) BGNF are localized on one side. These different characters result in different transport properties for FM and AFM BGNFs. The FM BGNF exhibits a perfect spin-filtering effect and can serve as a spin filter. Moreover, the conductance of the BGNF is much larger in the FM state than in the AFM state, so the BGNF can also serve as a molecular switch. These results suggest that the BGNF is a good candidate for future nanoelectronics.

  13. Picosecond and sub-picosecond flat-top pulse generation using uniform long-period fiber gratings

    NASA Astrophysics Data System (ADS)

    Park, Y.; Kulishov, M.; Slavík, R.; Azaña, J.

    2006-12-01

    We propose a novel linear filtering scheme based on ultrafast all-optical differentiation for re-shaping of ultrashort pulses generated from a mode-locked laser into flat-top pulses. The technique is demonstrated using simple all-fiber optical filters, more specifically uniform long period fiber gratings (LPGs) operated in transmission. The large bandwidth typical for these fiber filters allows scaling the technique to the sub-picosecond regime. In the experiments reported here, 600-fs and 1.8-ps Gaussian-like optical pulses (@ 1535 nm) have been re-shaped into 1-ps and 3.2-ps flat-top pulses, respectively, using a single 9-cm long uniform LPG.

  14. Coding of sounds in the auditory system and its relevance to signal processing and coding in cochlear implants.

    PubMed

    Moore, Brian C J

    2003-03-01

    To review how the properties of sounds are "coded" in the normal auditory system and to discuss the extent to which cochlear implants can and do represent these codes. Data are taken from published studies of the response of the cochlea and auditory nerve to simple and complex stimuli, in both the normal and the electrically stimulated ear. REVIEW CONTENT: The review describes: 1) the coding in the normal auditory system of overall level (which partly determines perceived loudness), spectral shape (which partly determines perceived timbre and the identity of speech sounds), periodicity (which partly determines pitch), and sound location; 2) the role of the active mechanism in the cochlea, and particularly the fast-acting compression associated with that mechanism; 3) the neural response patterns evoked by cochlear implants; and 4) how the response patterns evoked by implants differ from those observed in the normal auditory system in response to sound. A series of specific issues is then discussed, including: 1) how to compensate for the loss of cochlear compression; 2) the effective number of independent channels in a normal ear and in cochlear implantees; 3) the importance of independence of responses across neurons; 4) the stochastic nature of normal neural responses; 5) the possible role of across-channel coincidence detection; and 6) potential benefits of binaural implantation. Current cochlear implants do not adequately reproduce several aspects of the neural coding of sound in the normal auditory system. Improved electrode arrays and coding systems may lead to improved coding and, it is hoped, to better performance.

  15. Reconstructing the spectrotemporal modulations of real-life sounds from fMRI response patterns

    PubMed Central

    Santoro, Roberta; Moerel, Michelle; De Martino, Federico; Valente, Giancarlo; Ugurbil, Kamil; Yacoub, Essa; Formisano, Elia

    2017-01-01

    Ethological views of brain functioning suggest that sound representations and computations in the auditory neural system are optimized finely to process and discriminate behaviorally relevant acoustic features and sounds (e.g., spectrotemporal modulations in the songs of zebra finches). Here, we show that modeling of neural sound representations in terms of frequency-specific spectrotemporal modulations enables accurate and specific reconstruction of real-life sounds from high-resolution functional magnetic resonance imaging (fMRI) response patterns in the human auditory cortex. Region-based analyses indicated that response patterns in separate portions of the auditory cortex are informative of distinctive sets of spectrotemporal modulations. Most relevantly, results revealed that in early auditory regions, and progressively more in surrounding regions, temporal modulations in a range relevant for speech analysis (∼2–4 Hz) were reconstructed more faithfully than other temporal modulations. In early auditory regions, this effect was frequency-dependent and only present for lower frequencies (<∼2 kHz), whereas for higher frequencies, reconstruction accuracy was higher for faster temporal modulations. Further analyses suggested that auditory cortical processing optimized for the fine-grained discrimination of speech and vocal sounds underlies this enhanced reconstruction accuracy. In sum, the present study introduces an approach to embed models of neural sound representations in the analysis of fMRI response patterns. Furthermore, it reveals that, in the human brain, even general purpose and fundamental neural processing mechanisms are shaped by the physical features of real-world stimuli that are most relevant for behavior (i.e., speech, voice). PMID:28420788

  16. EGR-1 Expression in Catecholamine-synthesizing Neurons Reflects Auditory Learning and Correlates with Responses in Auditory Processing Areas.

    PubMed

    Dai, Jennifer B; Chen, Yining; Sakata, Jon T

    2018-05-21

    Distinguishing between familiar and unfamiliar individuals is an important task that shapes the expression of social behavior. As such, identifying the neural populations involved in processing and learning the sensory attributes of individuals is important for understanding mechanisms of behavior. Catecholamine-synthesizing neurons have been implicated in sensory processing, but relatively little is known about their contribution to auditory learning and processing across various vertebrate taxa. Here we investigated the extent to which immediate early gene expression in catecholaminergic circuitry reflects information about the familiarity of social signals and predicts immediate early gene expression in sensory processing areas in songbirds. We found that male zebra finches readily learned to differentiate between familiar and unfamiliar acoustic signals ('songs') and that playback of familiar songs led to fewer catecholaminergic neurons in the locus coeruleus (but not in the ventral tegmental area, substantia nigra, or periaqueductal gray) expressing the immediate early gene, EGR-1, than playback of unfamiliar songs. The pattern of EGR-1 expression in the locus coeruleus was similar to that observed in two auditory processing areas implicated in auditory learning and memory, namely the caudomedial nidopallium (NCM) and the caudal medial mesopallium (CMM), suggesting a contribution of catecholamines to sensory processing. Consistent with this, the pattern of catecholaminergic innervation onto auditory neurons co-varied with the degree to which song playback affected the relative intensity of EGR-1 expression. Together, our data support the contention that catecholamines like norepinephrine contribute to social recognition and the processing of social information. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.

  17. The effect of aborting ongoing movements on end point position estimation.

    PubMed

    Itaguchi, Yoshihiro; Fukuzawa, Kazuyoshi

    2013-11-01

    The present study investigated the impact of motor commands to abort ongoing movement on position estimation. Participants carried out visually guided reaching movements on a horizontal plane with their eyes open. By setting a mirror above their arm, however, they could not see the arm, only the start and target points. They estimated the position of their fingertip based solely on proprioception after their reaching movement was stopped before reaching the target. The participants stopped reaching as soon as they heard an auditory cue or were mechanically prevented from moving any further by an obstacle in their path. These reaching movements were carried out at two different speeds (fast or slow). It was assumed that additional motor commands to abort ongoing movement were required and that their magnitude was high, low, and zero, in the auditory-fast condition, the auditory-slow condition, and both the obstacle conditions, respectively. There were two main results. (1) When the participants voluntarily stopped a fast movement in response to the auditory cue (the auditory-fast condition), they showed more underestimates than in the other three conditions. This underestimate effect was positively related to movement velocity. (2) An inverted-U-shaped bias pattern as a function of movement distance was observed consistently, except in the auditory-fast condition. These findings indicate that voluntarily stopping fast ongoing movement created a negative bias in the position estimate, supporting the idea that additional motor commands or efforts to abort planned movement are involved with the position estimation system. In addition, spatially probabilistic inference and signal-dependent noise may explain the underestimate effect of aborting ongoing movement.

  18. Additive Routes to Action Learning: Layering Experience Shapes Engagement of the Action Observation Network.

    PubMed

    Kirsch, Louise P; Cross, Emily S

    2015-12-01

    The way in which we perceive others in action is biased by one's prior experience with an observed action. For example, we can have auditory, visual, or motor experience with actions we observe others perform. How action experience via 1, 2, or all 3 of these modalities shapes action perception remains unclear. Here, we combine pre- and post-training functional magnetic resonance imaging measures with a dance training manipulation to address how building experience (from auditory to audiovisual to audiovisual plus motor) with a complex action shapes subsequent action perception. Results indicate that layering experience across these 3 modalities activates a number of sensorimotor cortical regions associated with the action observation network (AON) in such a way that the more modalities through which one experiences an action, the greater the response is within these AON regions during action perception. Moreover, a correlation between left premotor activity and participants' scores for reproducing an action suggests that the better an observer can perform an observed action, the stronger the neural response is. The findings suggest that the number of modalities through which an observer experiences an action impacts AON activity additively, and that premotor cortical activity might serve as an index of embodiment during action observation. © The Author 2015. Published by Oxford University Press.

  19. Environmental filtering drives the shape and breadth of the seed germination niche in coastal plant communities

    PubMed Central

    Pérez-Arcoiza, Adrián; Prieto, José Alberto; Díaz, Tomás E.

    2017-01-01

    Background and Aims: A phylogenetic comparative analysis of the seed germination niche was conducted in coastal plant communities of western Europe. Two hypotheses were tested, that (1) the germination niche shape (i.e. the preference for a set of germination cues as opposed to another) would differ between beaches and cliffs to prevent seedling emergence in the less favourable season (winter and summer, respectively); and (2) the germination niche breadth (i.e. the amplitude of germination cues) would be narrower in the seawards communities, where environmental filtering is stronger. Methods: Seeds of 30 specialist species of coastal plant communities were collected in natural populations of northern Spain. Their germination was measured in six laboratory treatments based on field temperatures. Germination niche shape was estimated as the best germination temperature. Germination niche breadth was calculated using Pielou’s evenness index. Differences between plant communities in their germination niche shape and breadth were tested using phylogenetic generalized least squares regression (PGLS). Key Results: Germination niche shape differed between communities, being warm-cued in beaches (best germination temperature = 20 °C) and cold-cued in cliffs (14 °C). Germination niche was narrowest in seawards beaches (Pielou’s index = 0.89) and broadest in landwards beaches (0.99). Cliffs had an intermediate germination niche breadth (0.95). The relationship between niche and plant community had a positive phylogenetic signal for shape (Pagel’s λ = 0.64) and a negative one for breadth (Pagel’s λ = −1.71). Conclusion: Environmental filters shape the germination niche to prevent emergence in the season of highest threat for seedling establishment. The germination niche breadth is narrower in the communities with stronger environmental filters, but only in beaches. This study provides empirical support to a community-level generalization of the hypotheses about the environmental drivers of the germination niche. It highlights the role of germination traits in community assembly. PMID:28334139
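
    Niche breadth as Pielou's evenness is straightforward to compute from germination responses across the temperature treatments; the sketch below uses invented germination proportions simply to show that a flat response yields an evenness near 1 while a strongly peaked (e.g. warm-cued) response yields a lower value.

      import numpy as np

      def pielou_evenness(germination):
          """Germination niche breadth as Pielou's evenness J' = H' / ln(S).

          `germination` holds the germination response in each of S treatments
          (e.g. six temperature regimes); values are normalized to proportions
          before the Shannon index H' is computed. Illustrative only.
          """
          g = np.asarray(germination, dtype=float)
          p = g / g.sum()
          p = p[p > 0]                              # treat 0 * ln(0) as 0
          shannon = -np.sum(p * np.log(p))
          return shannon / np.log(len(g))

      flat   = [0.55, 0.60, 0.58, 0.52, 0.57, 0.54]   # similar germination in all treatments
      peaked = [0.05, 0.10, 0.80, 0.15, 0.05, 0.02]   # strongly warm-cued (hypothetical)
      print(round(pielou_evenness(flat), 2), round(pielou_evenness(peaked), 2))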

  20. A simulation framework for auditory discrimination experiments: Revealing the importance of across-frequency processing in speech perception.

    PubMed

    Schädler, Marc René; Warzybok, Anna; Ewert, Stephan D; Kollmeier, Birger

    2016-05-01

    A framework for simulating auditory discrimination experiments, based on an approach from Schädler, Warzybok, Hochmuth, and Kollmeier [(2015). Int. J. Audiol. 54, 100-107] which was originally designed to predict speech recognition thresholds, is extended to also predict psychoacoustic thresholds. The proposed framework is used to assess the suitability of different auditory-inspired feature sets for a range of auditory discrimination experiments that included psychoacoustic as well as speech recognition experiments in noise. The considered experiments were 2 kHz tone-in-broadband-noise simultaneous masking depending on the tone length, spectral masking with simultaneously presented tone signals and narrow-band noise maskers, and German Matrix sentence test reception threshold in stationary and modulated noise. The employed feature sets included spectro-temporal Gabor filter bank features, Mel-frequency cepstral coefficients, logarithmically scaled Mel-spectrograms, and the internal representation of the Perception Model from Dau, Kollmeier, and Kohlrausch [(1997). J. Acoust. Soc. Am. 102(5), 2892-2905]. The proposed framework was successfully employed to simulate all experiments with a common parameter set and obtain objective thresholds with fewer assumptions compared to traditional modeling approaches. Depending on the feature set, the simulated reference-free thresholds were found to agree with, and hence to predict, empirical data from the literature. Across-frequency processing was found to be crucial for accurately modeling the lower speech reception thresholds observed in modulated noise compared with stationary noise.
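
    As a rough illustration of the kind of front-end/back-end pairing such a framework compares, the sketch below computes log-Mel features for a 2 kHz tone-in-noise trial and makes a detection decision by template matching. The signal parameters, the librosa-based front end, and the nearest-template back end are assumptions for illustration, not the authors' implementation:

```python
import numpy as np
import librosa

sr = 16000
t = np.arange(0, 0.5, 1 / sr)
rng = np.random.default_rng(0)

def log_mel(x):
    """Illustrative log-Mel front end (one of the feature types compared in the paper)."""
    S = librosa.feature.melspectrogram(y=x, sr=sr, n_fft=512, hop_length=160, n_mels=40)
    return librosa.power_to_db(S)

tone = np.sin(2 * np.pi * 2000 * t)                           # 2 kHz target tone
template_noise = log_mel(rng.normal(0, 1, t.size))            # "noise only" template
template_signal = log_mel(rng.normal(0, 1, t.size) + tone)    # "tone + noise" template

probe = log_mel(rng.normal(0, 1, t.size) + tone)              # fresh trial containing the tone

# Minimal back end: pick the template with the smaller Euclidean distance.
closer_to_signal = np.linalg.norm(probe - template_signal) < np.linalg.norm(probe - template_noise)
print("tone detected" if closer_to_signal else "tone missed")
```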

  1. Auditory integration training and other sound therapies for autism spectrum disorders (ASD).

    PubMed

    Sinha, Yashwant; Silove, Natalie; Hayen, Andrew; Williams, Katrina

    2011-12-07

    Auditory integration therapy was developed as a technique for improving abnormal sound sensitivity in individuals with behavioural disorders including autism spectrum disorders. Other sound therapies bearing similarities to auditory integration therapy include the Tomatis Method and Samonas Sound Therapy. To determine the effectiveness of auditory integration therapy or other methods of sound therapy in individuals with autism spectrum disorders. For this update, we searched the following databases in September 2010: CENTRAL (2010, Issue 2), MEDLINE (1950 to September week 2, 2010), EMBASE (1980 to Week 38, 2010), CINAHL (1937 to current), PsycINFO (1887 to current), ERIC (1966 to current), LILACS (September 2010) and the reference lists of published papers. One new study was found for inclusion. Randomised controlled trials involving adults or children with autism spectrum disorders. Treatment was auditory integration therapy or other sound therapies involving listening to music modified by filtering and modulation. Control groups could involve no treatment, a waiting list, usual therapy or a placebo equivalent. The outcomes were changes in core and associated features of autism spectrum disorders, auditory processing, quality of life and adverse events. Two independent review authors performed data extraction. All outcome data in the included papers were continuous. We calculated point estimates and standard errors from t-test scores and post-intervention means. Meta-analysis was inappropriate for the available data. We identified six randomised controlled trials of auditory integration therapy and one of Tomatis therapy, involving a total of 182 individuals aged three to 39 years. Two were cross-over trials. Five trials had fewer than 20 participants. Allocation concealment was inadequate for all studies. Twenty different outcome measures were used and only two outcomes were used by three or more studies. Meta-analysis was not possible due to very high heterogeneity or the presentation of data in unusable forms. Three studies (Bettison 1996; Zollweg 1997; Mudford 2000) did not demonstrate any benefit of auditory integration therapy over control conditions. Three studies (Veale 1993; Rimland 1995; Edelson 1999) reported improvements at three months for the auditory integration therapy group based on the Aberrant Behaviour Checklist, but they used a total score rather than subgroup scores, which is of questionable validity, and Veale's results did not reach statistical significance. Rimland 1995 also reported improvements at three months in the auditory integration therapy group for the Aberrant Behaviour Checklist subgroup scores. The study addressing Tomatis therapy (Corbett 2008) described an improvement in language with no difference between treatment and control conditions and did not report on the behavioural outcomes that were used in the auditory integration therapy trials. There is no evidence that auditory integration therapy or other sound therapies are effective as treatments for autism spectrum disorders. As synthesis of existing data has been limited by the disparate outcome measures used between studies, there is not sufficient evidence to prove that this treatment is not effective.
However, of the seven studies including 182 participants that have been reported to date, only two (with an author in common), involving a total of 35 participants, report statistically significant improvements in the auditory integration therapy group and for only two outcome measures (Aberrant Behaviour Checklist and Fisher's Auditory Problems Checklist). As such, there is no evidence to support the use of auditory integration therapy at this time.

  2. Thermally controlled femtosecond pulse shaping using metasurface based optical filters

    NASA Astrophysics Data System (ADS)

    Rahimi, Eesa; Şendur, Kürşat

    2018-02-01

    Shaping the temporal profile of ultrashort pulses and compensating for pulse deformations caused by phase shifts during transmission and amplification are of interest in various optical applications. To address these problems, in this study we have demonstrated an ultra-thin reconfigurable localized surface plasmon (LSP) band-stop optical filter driven by the insulator-metal phase transition of vanadium dioxide. A Joule heating mechanism is proposed to control the thermal phase transition of the material. The resulting permittivity variation of vanadium dioxide tailors the spectral response of the transmitted pulse from the stack. Depending on how the pulse's spectrum is located with respect to the resonance of the band-stop filter, the thin-film stack can dynamically compress or expand the output pulse span by up to 20% or shift its phase by up to 360°. Multi-stacked filters have shown the ability to dynamically compensate for input carrier frequency shifts and pulse span variations, in addition to offering higher span expansion rates.

  3. X-ray filter for x-ray powder diffraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sinsheimer, John Jay; Conley, Raymond P.; Bouet, Nathalie C. D.

    Technologies are described for apparatus, methods and systems effective for filtering. The filters may comprise a first plate. The first plate may include an x-ray absorbing material and walls defining first slits. The first slits may include arc shaped openings through the first plate. The walls of the first plate may be configured to absorb at least some of first x-rays when the first x-rays are incident on the x-ray absorbing material, and to output second x-rays. The filters may comprise a second plate spaced from the first plate. The second plate may include the x-ray absorbing material and walls defining second slits. The second slits may include arc shaped openings through the second plate. The walls of the second plate may be configured to absorb at least some of second x-rays and to output third x-rays.

  4. Improvement of the energy resolution via an optimized digital signal processing in GERDA Phase I

    NASA Astrophysics Data System (ADS)

    Agostini, M.; Allardt, M.; Bakalyarov, A. M.; Balata, M.; Barabanov, I.; Barros, N.; Baudis, L.; Bauer, C.; Becerici-Schmidt, N.; Bellotti, E.; Belogurov, S.; Belyaev, S. T.; Benato, G.; Bettini, A.; Bezrukov, L.; Bode, T.; Borowicz, D.; Brudanin, V.; Brugnera, R.; Budjáš, D.; Caldwell, A.; Cattadori, C.; Chernogorov, A.; D'Andrea, V.; Demidova, E. V.; Vacri, A. di; Domula, A.; Doroshkevich, E.; Egorov, V.; Falkenstein, R.; Fedorova, O.; Freund, K.; Frodyma, N.; Gangapshev, A.; Garfagnini, A.; Grabmayr, P.; Gurentsov, V.; Gusev, K.; Hegai, A.; Heisel, M.; Hemmer, S.; Heusser, G.; Hofmann, W.; Hult, M.; Inzhechik, L. V.; Janicskó Csáthy, J.; Jochum, J.; Junker, M.; Kazalov, V.; Kihm, T.; Kirpichnikov, I. V.; Kirsch, A.; Klimenko, A.; Knöpfle, K. T.; Kochetov, O.; Kornoukhov, V. N.; Kuzminov, V. V.; Laubenstein, M.; Lazzaro, A.; Lebedev, V. I.; Lehnert, B.; Liao, H. Y.; Lindner, M.; Lippi, I.; Lubashevskiy, A.; Lubsandorzhiev, B.; Lutter, G.; Macolino, C.; Majorovits, B.; Maneschg, W.; Medinaceli, E.; Misiaszek, M.; Moseev, P.; Nemchenok, I.; Palioselitis, D.; Panas, K.; Pandola, L.; Pelczar, K.; Pullia, A.; Riboldi, S.; Rumyantseva, N.; Sada, C.; Salathe, M.; Schmitt, C.; Schneider, B.; Schönert, S.; Schreiner, J.; Schütz, A.-K.; Schulz, O.; Schwingenheuer, B.; Selivanenko, O.; Shirchenko, M.; Simgen, H.; Smolnikov, A.; Stanco, L.; Stepaniuk, M.; Ur, C. A.; Vanhoefer, L.; Vasenko, A. A.; Veresnikova, A.; von Sturm, K.; Wagner, V.; Walter, M.; Wegmann, A.; Wester, T.; Wilsenach, H.; Wojcik, M.; Yanovich, E.; Zavarise, P.; Zhitnikov, I.; Zhukov, S. V.; Zinatulina, D.; Zuber, K.; Zuzel, G.

    2015-06-01

    An optimized digital shaping filter has been developed for the Gerda experiment, which searches for neutrinoless double beta decay in 76Ge. The Gerda Phase I energy calibration data have been reprocessed and an average improvement of 0.3 keV in energy resolution (FWHM), corresponding to 10% at the Q-value for double beta decay of 76Ge, is obtained. This is possible thanks to the enhanced low-frequency noise rejection of this Zero Area Cusp (ZAC) signal shaping filter.
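
    As a very rough numerical sketch of the zero-area idea (not the GERDA filter itself; the sinh-shaped cusp, flat top, parabolic lobes and all lengths below are illustrative assumptions), a shaping kernel whose positive cusp is exactly balanced by negative side lobes has zero total area and therefore rejects low-frequency baseline noise:

```python
import numpy as np

def zero_area_cusp(length=2000, tau=300.0, flat_top=400, lobe=400):
    """Build a cusp-like kernel (sinh edges, flat top) flanked by negative
    parabolic lobes scaled so that the whole kernel integrates to zero.
    All lengths are in samples and are illustrative, not tuned values."""
    half = (length - flat_top) // 2
    rise = np.sinh(np.arange(half) / tau) / np.sinh(half / tau)   # cusp edges
    cusp = np.concatenate([rise, np.ones(flat_top), rise[::-1]])
    x = np.linspace(-1, 1, lobe)
    lobe_shape = 1 - x ** 2                                       # parabolic side lobe
    kernel = np.concatenate([-lobe_shape, cusp, -lobe_shape])
    pos = kernel[kernel > 0].sum()
    neg = -kernel[kernel < 0].sum()
    kernel[kernel < 0] *= pos / neg                               # enforce zero total area
    return kernel

k = zero_area_cusp()
print(abs(k.sum()) < 1e-6)   # True: zero net area suppresses slow baseline fluctuations
```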

  5. Practical aspects of monochromators developed for transmission electron microscopy

    PubMed Central

    Kimoto, Koji

    2014-01-01

    A few practical aspects of monochromators recently developed for transmission electron microscopy are briefly reviewed. The basic structures and properties of four monochromators, a single Wien filter monochromator, a double Wien filter monochromator, an omega-shaped electrostatic monochromator and an alpha-shaped magnetic monochromator, are outlined. The advantages and side effects of these monochromators in spectroscopy and imaging are pointed out. A few properties of the monochromators in imaging, such as spatial or angular chromaticity, are also discussed. PMID:25125333

  6. Optimized method for atmospheric signal reduction in irregular sampled InSAR time series assisted by external atmospheric information

    NASA Astrophysics Data System (ADS)

    Gong, W.; Meyer, F. J.

    2013-12-01

    It is well known that spatio-temporal tropospheric phase signatures complicate the interpretation and detection of smaller magnitude deformation signals or unstudied motion fields. Several advanced time-series InSAR techniques were developed in the last decade that make assumptions about the stochastic properties of the signal components in interferometric phases to reduce atmospheric delay effects on surface deformation estimates. However, their need for large datasets to successfully separate the different phase contributions limits their performance if data is scarce and irregularly sampled. Limited SAR data coverage is true for many areas affected by geophysical deformation. This is either due to their low priority in mission programming, unfavorable ground coverage conditions, or turbulent seasonal weather effects. In this paper, we present new adaptive atmospheric phase filtering algorithms that are specifically designed to reconstruct surface deformation signals from atmosphere-affected and irregularly sampled InSAR time series. The filters take advantage of auxiliary atmospheric delay information that is extracted from various sources, e.g. atmospheric weather models. They are embedded into a model-free Persistent Scatterer Interferometry (PSI) approach that was selected to accommodate non-linear deformation patterns that are often observed near volcanoes and earthquake zones. Two types of adaptive phase filters were developed that operate in the time dimension and separate atmosphere from deformation based on their different temporal correlation properties. Both filter types use the fact that atmospheric models can reliably predict the spatial statistics and signal power of atmospheric phase delay fields in order to automatically optimize the filter's shape parameters. In essence, both filter types will attempt to maximize the linear correlation between a-priori and the extracted atmospheric phase information. Topography-related phase components, orbit errors and the master atmospheric delays are first removed in a pre-processing step before the atmospheric filters are applied. The first adaptive filter type uses a filter kernel of Gaussian shape and adaptively adjusts the width (defined in days) of this filter until the correlation of extracted and modeled atmospheric signal power is maximized. If atmospheric properties vary along the time series, this approach will lead to filter settings that are adapted to best reproduce atmospheric conditions at a certain observation epoch. Despite the superior performance of this first filter design, its Gaussian shape imposes non-physical relative weights onto acquisitions that ignore the known atmospheric noise in the data. Hence, in our second approach we use atmospheric a-priori information to adaptively define the full shape of the atmospheric filter. For this process, we use a so-called normalized convolution (NC) approach that is often used in image reconstruction. Several NC designs will be presented in this paper and studied for relative performance. A cross-validation of all developed algorithms was done using both synthetic and real data. This validation showed that the designed filters outperform conventional filtering methods and are particularly useful for regions with limited data coverage or without prior knowledge of the deformation field.
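
    A minimal sketch of the first (Gaussian) filter idea, assuming a per-scatterer phase time series, acquisition times in days, and a modeled atmospheric phase series from an external weather model; the width search and correlation criterion below illustrate the adaptive principle, not the authors' implementation:

```python
import numpy as np

def gaussian_smooth(values, times, width_days):
    """Gaussian-weighted temporal smoothing for an irregularly sampled series."""
    values = np.asarray(values, dtype=float)
    times = np.asarray(times, dtype=float)
    out = np.empty_like(values)
    for i, t0 in enumerate(times):
        w = np.exp(-0.5 * ((times - t0) / width_days) ** 2)
        out[i] = np.sum(w * values) / np.sum(w)
    return out

def adaptive_atmo_filter(phase, times, modeled_aps, widths=range(10, 200, 10)):
    """Choose the Gaussian width (in days) whose high-pass residual correlates
    best with the externally modeled atmospheric phase screen (APS)."""
    phase = np.asarray(phase, dtype=float)
    best = None
    for w in widths:
        deformation = gaussian_smooth(phase, times, w)   # slow, temporally correlated part
        atmosphere = phase - deformation                 # fast residual attributed to the atmosphere
        r = np.corrcoef(atmosphere, modeled_aps)[0, 1]
        if best is None or r > best[0]:
            best = (r, w, deformation, atmosphere)
    return best  # (correlation, chosen width in days, deformation estimate, atmospheric estimate)
```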

  7. Effects of blindness on production-perception relationships: Compensation strategies for a lip-tube perturbation of the French [u].

    PubMed

    Ménard, Lucie; Turgeon, Christine; Trudeau-Fisette, Paméla; Bellavance-Courtemanche, Marie

    2016-01-01

    The impact of congenital visual deprivation on speech production in adults was examined in an ultrasound study of compensation strategies for lip-tube perturbation. Acoustic and articulatory analyses of the rounded vowel /u/ produced by 12 congenitally blind adult French speakers and 11 sighted adult French speakers were conducted under two conditions: normal and perturbed (with a 25-mm diameter tube inserted between the lips). Vowels were produced with auditory feedback and without auditory feedback (masked noise) to evaluate the extent to which both groups relied on this type of feedback to control speech movements. The acoustic analyses revealed that all participants mainly altered F2 and F0 and, to a lesser extent, F1 in the perturbed condition - only when auditory feedback was available. There were group differences in the articulatory strategies recruited to compensate; while all speakers moved their tongues more backward in the perturbed condition, blind speakers modified tongue-shape parameters to a greater extent than sighted speakers.

  8. It's about time: Presentation in honor of Ira Hirsh

    NASA Astrophysics Data System (ADS)

    Grant, Ken

    2002-05-01

    Over his long and illustrious career, Ira Hirsh has returned time and time again to his interest in the temporal aspects of pattern perception. Although Hirsh has studied and published articles and books pertaining to many aspects of the auditory system, such as sound conduction in the ear, cochlear mechanics, masking, auditory localization, psychoacoustic behavior in animals, speech perception, medical and audiological applications, coupling between psychophysics and physiology, and ecological acoustics, it is his work on the auditory timing of simple and complex rhythmic patterns, the backbone of speech and music, that is at the heart of his more recent research. Here, we will focus on several aspects of temporal processing of simple and complex signals, both within and across sensory systems. Data will be reviewed on temporal order judgments of simple tones, and simultaneity judgments and intelligibility of unimodal and bimodal complex stimuli where stimulus components are presented either synchronously or asynchronously. Differences in the symmetry and shape of ``temporal windows'' derived from these data sets will be highlighted.

  9. The Processing of Biologically Plausible and Implausible forms in American Sign Language: Evidence for Perceptual Tuning.

    PubMed

    Almeida, Diogo; Poeppel, David; Corina, David

    The human auditory system distinguishes speech-like information from general auditory signals in a remarkably fast and efficient way. Combining psychophysics and neurophysiology (MEG), we demonstrate a similar result for the processing of visual information used for language communication in users of sign languages. We demonstrate that the earliest visual cortical responses in deaf signers viewing American Sign Language (ASL) signs show specific modulations to violations of anatomic constraints that would make the sign either possible or impossible to articulate. These neural data are accompanied by a significantly increased perceptual sensitivity to the anatomical incongruity. The differential effects in the early visual evoked potentials arguably reflect an expectation-driven assessment of somatic representational integrity, suggesting that language experience and/or auditory deprivation may shape the neuronal mechanisms underlying the analysis of complex human form. The data demonstrate that the perceptual tuning that underlies the discrimination of language and non-language information is not limited to spoken languages but extends to languages expressed in the visual modality.

  10. Local inhibition modulates learning-dependent song encoding in the songbird auditory cortex

    PubMed Central

    Thompson, Jason V.; Jeanne, James M.

    2013-01-01

    Changes in inhibition during development are well documented, but the role of inhibition in adult learning-related plasticity is not understood. In songbirds, vocal recognition learning alters the neural representation of songs across the auditory forebrain, including the caudomedial nidopallium (NCM), a region analogous to mammalian secondary auditory cortices. Here, we block local inhibition with the iontophoretic application of gabazine, while simultaneously measuring song-evoked spiking activity in NCM of European starlings trained to recognize sets of conspecific songs. We find that local inhibition differentially suppresses the responses to learned and unfamiliar songs and enhances spike-rate differences between learned categories of songs. These learning-dependent response patterns emerge, in part, through inhibitory modulation of selectivity for song components and the masking of responses to specific acoustic features without altering spectrotemporal tuning. The results describe a novel form of inhibitory modulation of the encoding of learned categories and demonstrate that inhibition plays a central role in shaping the responses of neurons to learned, natural signals. PMID:23155175

  11. DNA tetrominoes: the construction of DNA nanostructures using self-organised heterogeneous deoxyribonucleic acids shapes.

    PubMed

    Ong, Hui San; Rahim, Mohd Syafiq; Firdaus-Raih, Mohd; Ramlan, Effirul Ikhwan

    2015-01-01

    The unique programmability of nucleic acids offers an alternative route to constructing excitable and functional nanostructures. This work introduces an autonomous protocol to construct DNA Tetris shapes (L-Shape, B-Shape, T-Shape and I-Shape) using modular DNA blocks. The protocol exploits the rich number of sequence combinations available from the nucleic acid alphabet, thus allowing for diversity to be applied in designing various DNA nanostructures. Instead of a deterministic set of sequences corresponding to a particular design, the protocol promotes a large pool of DNA shapes that can assemble to conform to any desired structures. By utilising evolutionary programming in the design stage, DNA blocks are subjected to processes such as sequence insertion, deletion and base shifting in order to enrich the diversity of the resulting shapes based on a set of cascading filters. The optimisation algorithm allows mutation to be exerted indefinitely on the candidate sequences until these sequences comply with all four fitness criteria. Generated candidates from the protocol are in agreement with the filter cascades and thermodynamic simulation. Further validation using gel electrophoresis indicated the formation of the designed shapes. These results support the plausibility of constructing DNA nanostructures in a more hierarchical, modular and interchangeable manner.
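
    A toy sketch of the mutate-until-compliant loop in Python; the four filter functions below are illustrative stand-ins (GC content, homopolymer runs, length, a crude self-complementarity check), not the paper's actual fitness criteria or sequence designs:

```python
import random

BASES = "ACGT"

def mutate(seq):
    """Apply one random edit: base substitution ("shift"), insertion, or deletion."""
    i = random.randrange(len(seq))
    op = random.choice(("shift", "insert", "delete"))
    if op == "shift":
        return seq[:i] + random.choice(BASES) + seq[i + 1:]
    if op == "insert":
        return seq[:i] + random.choice(BASES) + seq[i:]
    return seq[:i] + seq[i + 1:] if len(seq) > 10 else seq

# Hypothetical cascade of fitness filters (stand-ins for the paper's four criteria).
def gc_ok(seq):
    return 0.4 <= (seq.count("G") + seq.count("C")) / len(seq) <= 0.6

def no_homopolymer(seq):
    return all(b * 5 not in seq for b in BASES)

def length_ok(seq):
    return 16 <= len(seq) <= 24

def no_hairpin(seq):
    # crude check: no 6-mer of the sequence appears in its own reverse complement
    comp = seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]
    return all(seq[i:i + 6] not in comp for i in range(len(seq) - 5))

FILTERS = (gc_ok, no_homopolymer, length_ok, no_hairpin)

def evolve(seq="ATGCATGCATGCATGCATGC"):
    """Mutate indefinitely until the candidate passes every filter in the cascade."""
    while not all(f(seq) for f in FILTERS):
        seq = mutate(seq)
    return seq

print(evolve())
```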

  12. Compact triple band-stop filter using novel epsilon-shaped metamaterial with lumped capacitor

    NASA Astrophysics Data System (ADS)

    Ali, W. A. E.; Hamdalla, M. Z. M.

    2018-04-01

    This paper presents the design of a novel epsilon-shaped metamaterial unit cell structure that is applicable for single-band and multi-band applications. Closed-form formulas to control the resonance frequencies of the proposed design are included. The proposed unit cell, which exhibits negative permeability at its frequency bands, is etched from the ground plane to form a band-stop filter. The filter design is constructed to validate the band-notched characteristics of the proposed unit cell. A lumped capacitor is inserted for size reduction purposes in addition to multi-resonance generation. The fundamental resonance frequency is translated from 3.62 GHz to 2.45 GHz, which means that the filter becomes more compact (more than 32% size reduction). The overall size of the proposed filter is 13 × 6 × 1.524 mm3, where the electrical size is 0.221λg × 0.102λg × 0.026λg at the lower frequency band (2.45 GHz). Two other resonance frequencies are generated at 5.3 GHz and 9.2 GHz, which confirm the multi-band behavior of the proposed filter. Good agreement between simulated and measured characteristics of the fabricated filter prototype is achieved.

  13. Tensiometer with removable wick

    DOEpatents

    Gee, Glendon W.; Campbell, Melvin D.

    1992-01-01

    The present invention relates to improvements in tensiometers for measuring soil water tension comprising a rod shaped wick. The rod shaped wick is a shoestring, rolled paper towel, rolled glass microfiber filter, or solid ceramic. The rod shaped wick is secured to the tensiometer by a cone washer and a threaded fitting.

  14. FIR Filter of DS-CDMA UWB Modem Transmitter

    NASA Astrophysics Data System (ADS)

    Kang, Kyu-Min; Cho, Sang-In; Won, Hui-Chul; Choi, Sang-Sung

    This letter presents low-complexity digital pulse shaping filter structures of a direct sequence code division multiple access (DS-CDMA) ultra wide-band (UWB) modem transmitter with a ternary spreading code. The proposed finite impulse response (FIR) filter structures using a look-up table (LUT) have the effect of saving the amount of memory by about 50% to 80% in comparison to the conventional FIR filter structures, and consequently are suitable for a high-speed parallel data process.
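
    The LUT idea can be sketched as follows: for a ternary symbol alphabet and K filter taps, every possible K-symbol window maps to a precomputed output value, so pulse shaping becomes a table lookup instead of K multiply-accumulates. The taps below are hypothetical and the sketch uses one output sample per symbol (no oversampling), unlike a real pulse-shaping filter:

```python
import itertools
import numpy as np

# Hypothetical symmetric pulse-shaping taps; the modem's actual coefficients and
# word lengths are not given in the abstract.
taps = np.array([0.1, 0.4, 0.8, 1.0, 0.8, 0.4, 0.1])
K = len(taps)
LEVELS = (-1, 0, 1)                      # ternary spreading-code alphabet

# Precompute the output for every possible window of K ternary symbols (3**K entries).
lut = {combo: float(np.dot(taps, combo)) for combo in itertools.product(LEVELS, repeat=K)}

def shape(symbols):
    """LUT-based FIR pulse shaping: each output sample is a single table lookup."""
    padded = [0] * (K - 1) + list(symbols)
    return [lut[tuple(padded[i:i + K])] for i in range(len(symbols))]

print(shape([1, -1, 0, 1, 1, -1]))
```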

  15. Simulation of synthetic discriminant function optical implementation

    NASA Astrophysics Data System (ADS)

    Riggins, J.; Butler, S.

    1984-12-01

    The optical implementation of geometrical shape and synthetic discriminant function matched filters is computer modeled. The filter implementation utilizes the Allebach-Keegan computer-generated hologram algorithm. Signal-to-noise and efficiency measurements were made on the resultant correlation planes.

  16. Input current shaped ac-to-dc converters

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Input current shaping techniques for ac-to-dc converters were investigated. Input frequencies much higher than normal, up to 20 kHz, were emphasized. Several methods of shaping the input current waveform in ac-to-dc converters were reviewed. The simplest method is the LC filter following the rectifier. The next simplest method is the resistor emulation approach, in which the inductor size is determined by the converter switching frequency and not by the line input frequency. Other methods require complicated switch drive algorithms to construct the input current waveshape. For a high-frequency line input, on the order of 20 kHz, the simple LC filter cannot be discarded so peremptorily, since its inductor size is comparable to that for the resistor emulation method. In fact, since a dc regulator will normally be required after the filter anyway, the total component count is almost the same as for the resistor emulation method, in which the filter is effectively incorporated into the regulator.

  17. Human detection in sensitive security areas through recognition of omega shapes using MACH filters

    NASA Astrophysics Data System (ADS)

    Rehman, Saad; Riaz, Farhan; Hassan, Ali; Liaquat, Muwahida; Young, Rupert

    2015-03-01

    Human detection has gained considerable importance in aggravated security scenarios over recent times. An effective security application relies strongly on detailed information regarding the scene under consideration. A larger accumulation of humans than the number of personnel authorized to visit a security-controlled area must be effectively detected, amicably alarmed and immediately monitored. A framework involving a novel combination of some existing techniques allows an immediate detection of an undesirable crowd in a region under observation. Frame differencing provides a clear visibility of moving objects while highlighting those objects in each frame acquired by a real-time camera. Training of a correlation pattern recognition based filter on desired shapes such as elliptical representations of human faces (variants of an Omega Shape) yields correct detections. The inherent ability of correlation pattern recognition filters caters for angular rotations in the target object and renders a decision regarding whether the number of persons in the monitored area exceeds an allowed figure.
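
    A simplified sketch of the detection chain, assuming synthetic images: a correlation filter is built in the frequency domain from training chips of the target shape (a stripped-down stand-in for a full MACH filter, which additionally weights noise and similarity terms) and applied to a frame-differenced image; a sharp correlation peak indicates the trained shape:

```python
import numpy as np

def build_correlation_filter(training_images, eps=1e-3):
    """Frequency-domain correlation filter: average training spectrum divided by
    the average power spectrum (a stripped-down relative of the MACH filter)."""
    X = np.fft.fft2(np.asarray(training_images, dtype=float), axes=(-2, -1))
    mean_spec = X.mean(axis=0)
    avg_power = (np.abs(X) ** 2).mean(axis=0)
    return np.conj(mean_spec) / (avg_power + eps)

def detect(frame, prev_frame, H):
    """Frame differencing followed by correlation; a strong, sharp peak in the
    correlation plane suggests the trained (omega-like) shape is present."""
    moving = np.abs(np.asarray(frame, dtype=float) - np.asarray(prev_frame, dtype=float))
    plane = np.real(np.fft.ifft2(np.fft.fft2(moving) * H))
    return plane.max(), np.unravel_index(plane.argmax(), plane.shape)

# Toy usage with random stand-ins for training chips and camera frames.
rng = np.random.default_rng(0)
H = build_correlation_filter([rng.random((64, 64)) for _ in range(5)])
peak, location = detect(rng.random((64, 64)), rng.random((64, 64)), H)
print(peak, location)
```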

  18. Union operation image processing of data cubes separately processed by different objective filters and its application to void analysis in an all-solid-state lithium-ion battery.

    PubMed

    Yamamoto, Yuta; Iriyama, Yasutoshi; Muto, Shunsuke

    2016-04-01

    In this article, we propose a smart image-analysis method suitable for extracting target features with hierarchical dimensions from the original data. The method was applied to three-dimensional volume data of an all-solid-state lithium-ion battery obtained by the automated sequential sample milling and imaging process using a focused ion beam/scanning electron microscope to investigate the spatial configuration of voids inside the battery. To automatically and fully extract the shape and location of the voids, three types of filters were consecutively applied: a median blur filter to extract relatively larger voids, a morphological opening operation filter for small dot-shaped voids and a morphological closing operation filter for small voids with concave contrasts. Three data cubes separately processed by the above-mentioned filters were integrated by a union operation into the final unified volume data, which confirmed the correct extraction of the voids over the entire dimension contained in the original data. © The Author 2015. Published by Oxford University Press on behalf of The Japanese Society of Microscopy. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
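
    The union step can be sketched with SciPy, assuming a grey-level volume in which voids appear dark; the filter sizes, threshold and mapping of each filter to a void class are assumptions for illustration, the key point being the logical OR of the three separately processed data cubes:

```python
import numpy as np
from scipy import ndimage

def extract_voids(volume, threshold):
    """Merge void candidates from three differently filtered versions of the
    same volume with a logical OR (the union operation)."""
    ball = np.ones((3, 3, 3), dtype=bool)
    binary = volume < threshold                                   # dark voxels = void candidates

    large = ndimage.median_filter(volume, size=5) < threshold     # median blur suppresses speckle, keeps larger voids
    opened = ndimage.binary_opening(binary, structure=ball)       # opening removes candidates smaller than the ball
    closed = ndimage.binary_closing(binary, structure=ball)       # closing fills small gaps in concave-contrast candidates

    return large | opened | closed                                # union of the three data cubes

volume = np.random.default_rng(0).random((32, 32, 32))            # stand-in for the FIB/SEM grey-level volume
print(extract_voids(volume, 0.2).sum(), "void voxels")
```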

  19. Vestibular Influence on Auditory Metrical Interpretation

    ERIC Educational Resources Information Center

    Phillips-Silver, J.; Trainor, L.J.

    2008-01-01

    When we move to music we feel the beat, and this feeling can shape the sound we hear. Previous studies have shown that when people listen to a metrically ambiguous rhythm pattern, moving the body on a certain beat-adults, by actively bouncing themselves in synchrony with the experimenter, and babies, by being bounced passively in the…

  20. Speech Experience Shapes the Speechreading Network and Subsequent Deafness Facilitates It

    ERIC Educational Resources Information Center

    Suh, Myung-Whan; Lee, Hyo-Jeong; Kim, June Sic; Chung, Chun Kee; Oh, Seung-Ha

    2009-01-01

    Speechreading is a visual communicative skill for perceiving speech. In this study, we tested the effects of speech experience and deafness on the speechreading neural network in normal hearing controls and in two groups of deaf patients who became deaf either before (prelingual deafness) or after (postlingual deafness) auditory language…

  1. Tensiometer with removable wick

    DOEpatents

    Gee, G.W.; Campbell, M.D.

    1992-04-14

    The present invention relates to improvements in tensiometers for measuring soil water tension comprising a rod shaped wick. The rod shaped wick is a shoestring, rolled paper towel, rolled glass microfiber filter, or solid ceramic. The rod shaped wick is secured to the tensiometer by a cone washer and a threaded fitting. 2 figs.

  2. The shadow of a doubt? Evidence for perceptuo-motor linkage during auditory and audiovisual close-shadowing

    PubMed Central

    Scarbel, Lucie; Beautemps, Denis; Schwartz, Jean-Luc; Sato, Marc

    2014-01-01

    One classical argument in favor of a functional role of the motor system in speech perception comes from the close-shadowing task, in which a subject has to identify and to repeat as quickly as possible an auditory speech stimulus. The fact that close-shadowing can occur very rapidly and much faster than manual identification of the speech target is taken to suggest that perceptually induced speech representations are already shaped in a motor-compatible format. Another argument is provided by audiovisual interactions often interpreted as referring to a multisensory-motor framework. In this study, we attempted to combine these two paradigms by testing whether the visual modality could speed motor response in a close-shadowing task. To this aim, both oral and manual responses were evaluated during the perception of auditory and audiovisual speech stimuli, clear or embedded in white noise. Overall, oral responses were faster than manual ones, but it also appeared that they were less accurate in noise, which suggests that motor representations evoked by the speech input could be rough at a first processing stage. In the presence of acoustic noise, the audiovisual modality led to both faster and more accurate responses than the auditory modality. No interaction was, however, observed between modality and response. Altogether, these results are interpreted within a two-stage sensory-motor framework, in which the auditory and visual streams are integrated together and with internally generated motor representations before a final decision may be available. PMID:25009512

  3. Areas Recruited during Action Understanding Are Not Modulated by Auditory or Sign Language Experience.

    PubMed

    Fang, Yuxing; Chen, Quanjing; Lingnau, Angelika; Han, Zaizhu; Bi, Yanchao

    2016-01-01

    The observation of other people's actions recruits a network of areas including the inferior frontal gyrus (IFG), the inferior parietal lobule (IPL), and posterior middle temporal gyrus (pMTG). These regions have been shown to be activated through both visual and auditory inputs. Intriguingly, previous studies found no engagement of IFG and IPL for deaf participants during non-linguistic action observation, leading to the proposal that auditory experience or sign language usage might shape the functionality of these areas. To understand which variables induce plastic changes in areas recruited during the processing of other people's actions, we examined the effects of tasks (action understanding and passive viewing) and effectors (arm actions vs. leg actions), as well as sign language experience in a group of 12 congenitally deaf signers and 13 hearing participants. In Experiment 1, we found a stronger activation during an action recognition task in comparison to a low-level visual control task in IFG, IPL and pMTG in both deaf signers and hearing individuals, but no effect of auditory or sign language experience. In Experiment 2, we replicated the results of the first experiment using a passive viewing task. Together, our results provide robust evidence demonstrating that the response obtained in IFG, IPL, and pMTG during action recognition and passive viewing is not affected by auditory or sign language experience, adding further support for the supra-modal nature of these regions.

  4. Stimulus-specific adaptation and deviance detection in the inferior colliculus

    PubMed Central

    Ayala, Yaneri A.; Malmierca, Manuel S.

    2013-01-01

    Deviancy detection in the continuous flow of sensory information into the central nervous system is of vital importance for animals. The task requires neuronal mechanisms that allow for an efficient representation of the environment by removing statistically redundant signals. Recently, the neuronal principles of auditory deviance detection have been approached by studying the phenomenon of stimulus-specific adaptation (SSA). SSA is a reduction in the responsiveness of a neuron to a common or repetitive sound while the neuron remains highly sensitive to rare sounds (Ulanovsky et al., 2003). This phenomenon could enhance the saliency of unexpected, deviant stimuli against a background of repetitive signals. SSA shares many similarities with the evoked potential known as the “mismatch negativity,” (MMN) and it has been linked to cognitive process such as auditory memory and scene analysis (Winkler et al., 2009) as well as to behavioral habituation (Netser et al., 2011). Neurons exhibiting SSA can be found at several levels of the auditory pathway, from the inferior colliculus (IC) up to the auditory cortex (AC). In this review, we offer an account of the state-of-the art of SSA studies in the IC with the aim of contributing to the growing interest in the single-neuron electrophysiology of auditory deviance detection. The dependence of neuronal SSA on various stimulus features, e.g., probability of the deviant stimulus and repetition rate, and the roles of the AC and inhibition in shaping SSA at the level of the IC are addressed. PMID:23335883

  5. Neural bases of rhythmic entrainment in humans: critical transformation between cortical and lower-level representations of auditory rhythm.

    PubMed

    Nozaradan, Sylvie; Schönwiesner, Marc; Keller, Peter E; Lenc, Tomas; Lehmann, Alexandre

    2018-02-01

    The spontaneous ability to entrain to meter periodicities is central to music perception and production across cultures. There is increasing evidence that this ability involves selective neural responses to meter-related frequencies. This phenomenon has been observed in the human auditory cortex, yet it could be the product of evolutionarily older lower-level properties of brainstem auditory neurons, as suggested by recent recordings from rodent midbrain. We addressed this question by taking advantage of a new method to simultaneously record human EEG activity originating from cortical and lower-level sources, in the form of slow (< 20 Hz) and fast (> 150 Hz) responses to auditory rhythms. Cortical responses showed increased amplitudes at meter-related frequencies compared to meter-unrelated frequencies, regardless of the prominence of the meter-related frequencies in the modulation spectrum of the rhythmic inputs. In contrast, frequency-following responses showed increased amplitudes at meter-related frequencies only in rhythms with prominent meter-related frequencies in the input but not for a more complex rhythm requiring more endogenous generation of the meter. This interaction with rhythm complexity suggests that the selective enhancement of meter-related frequencies does not fully rely on subcortical auditory properties, but is critically shaped at the cortical level, possibly through functional connections between the auditory cortex and other, movement-related, brain structures. This process of temporal selection would thus enable endogenous and motor entrainment to emerge with substantial flexibility and invariance with respect to the rhythmic input in humans in contrast with non-human animals. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  6. Effect of perceptual load on conceptual processing: an extension of Vermeulen's theory.

    PubMed

    Xie, Jiushu; Wang, Ruiming; Sun, Xun; Chang, Song

    2013-10-01

    The effect of color and shape load on conceptual processing was studied. Perceptual load effects have been found in visual and auditory conceptual processing, supporting the theory of embodied cognition. However, whether different types of visual concepts, such as color and shape, share the same perceptual load effects is unknown. In the current experiment, 32 participants were administered simultaneous perceptual and conceptual tasks to assess the relation between perceptual load and conceptual processing. Keeping a color load in mind obstructed color conceptual processing. Hence, perceptual load and conceptual processing shared the same resources, suggesting embodied cognition. Color conceptual processing was not affected by shape pictures, indicating that different types of properties within vision are processed separately.

  7. A tympanal insect ear exploits a critical oscillator for active amplification and tuning.

    PubMed

    Mhatre, Natasha; Robert, Daniel

    2013-10-07

    A dominant theme of acoustic communication is the partitioning of acoustic space into exclusive, species-specific niches to enable efficient information transfer. In insects, acoustic niche partitioning is achieved through auditory frequency filtering, brought about by the mechanical properties of their ears. The tuning of the antennal ears of mosquitoes and flies, however, arises from active amplification, a process similar to that at work in the mammalian cochlea. Yet, the presence of active amplification in the other type of insect ears--tympanal ears--has remained uncertain. Here we demonstrate the presence of active amplification and adaptive tuning in the tympanal ear of a phylogenetically basal insect, a tree cricket. We also show that the tree cricket exploits critical oscillator-like mechanics, enabling high auditory sensitivity and tuning to conspecific songs. These findings imply that sophisticated auditory mechanisms may have appeared even earlier in the evolution of hearing and acoustic communication than currently appreciated. Our findings also raise the possibility that frequency discrimination and directional hearing in tympanal systems may rely on physiological nonlinearities, in addition to mechanical properties, effectively lifting some of the physical constraints placed on insects by their small size [6] and prompting an extensive reexamination of invertebrate audition. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.

  8. Firing-rate resonances in the peripheral auditory system of the cricket, Gryllus bimaculatus.

    PubMed

    Rau, Florian; Clemens, Jan; Naumov, Victor; Hennig, R Matthias; Schreiber, Susanne

    2015-11-01

    In many communication systems, information is encoded in the temporal pattern of signals. For rhythmic signals that carry information in specific frequency bands, a neuronal system may profit from tuning its inherent filtering properties towards a peak sensitivity in the respective frequency range. The cricket Gryllus bimaculatus evaluates acoustic communication signals of both conspecifics and predators. The song signals of conspecifics exhibit a characteristic pulse pattern that contains only a narrow range of modulation frequencies. We examined individual neurons (AN1, AN2, ON1) in the peripheral auditory system of the cricket for tuning towards specific modulation frequencies by assessing their firing-rate resonance. Acoustic stimuli with a swept-frequency envelope allowed an efficient characterization of the cells' modulation transfer functions. Some of the examined cells exhibited tuned band-pass properties. Using simple computational models, we demonstrate how different, cell-intrinsic or network-based mechanisms such as subthreshold resonances, spike-triggered adaptation, as well as an interplay of excitation and inhibition can account for the experimentally observed firing-rate resonances. Therefore, basic neuronal mechanisms that share negative feedback as a common theme may contribute to selectivity in the peripheral auditory pathway of crickets that is designed towards mate recognition and predator avoidance.

  9. Neural correlates of distraction and conflict resolution for nonverbal auditory events.

    PubMed

    Stewart, Hannah J; Amitay, Sygal; Alain, Claude

    2017-05-09

    In everyday situations auditory selective attention requires listeners to suppress task-irrelevant stimuli and to resolve conflicting information in order to make appropriate goal-directed decisions. Traditionally, these two processes (i.e. distractor suppression and conflict resolution) have been studied separately. In the present study we measured neuroelectric activity while participants performed a new paradigm in which both processes are quantified. In separate blocks of trials, participants indicate whether two sequential tones share the same pitch or location, depending on the block's instruction. For the distraction measure, a positive component peaking at ~250 ms was found, termed a distraction positivity. Brain electrical source analysis of this component suggests different generators when listeners attended to frequency and location, with the distraction by location more posterior than the distraction by frequency, providing support for the dual-pathway theory. For the conflict resolution measure, a negative frontocentral component (270-450 ms) was found, which showed similarities with that of prior studies on auditory and visual conflict resolution tasks. The timing and distribution are consistent with two distinct neural processes, with suppression of task-irrelevant information occurring before conflict resolution. This new paradigm may prove useful in clinical populations to assess impairments in filtering out task-irrelevant information and/or resolving conflicting information.

  10. Roughness modelling based on human auditory perception for sound quality evaluation of vehicle interior noise

    NASA Astrophysics Data System (ADS)

    Wang, Y. S.; Shen, G. Q.; Guo, H.; Tang, X. L.; Hamade, T.

    2013-08-01

    In this paper, a roughness model, which is based on human auditory perception (HAP) and known as HAP-RM, is developed for the sound quality evaluation (SQE) of vehicle noise. First, the interior noise signals are measured for a sample vehicle and prepared for roughness modelling. The HAP-RM model is based on the process of sound transfer and perception in the human auditory system by combining the structural filtering function and nonlinear perception characteristics of the ear. The HAP-RM model is applied to the measured vehicle interior noise signals by considering the factors that affect hearing, such as the modulation and carrier frequencies, the time and frequency maskings and the correlations of the critical bands. The HAP-RM model is validated by jury tests. An anchor-scaled scoring method (ASM) is used for subjective evaluations in the jury tests. The verification results show that the newly developed model can accurately calculate vehicle noise roughness below 0.6 asper. Further investigation shows that the total roughness of the vehicle interior noise can mainly be attributed to frequency components below 12 Bark. The time-masking effects included in the modelling procedure enable the application of the HAP-RM model to stationary and nonstationary vehicle noise signals and to the SQE of other sound-related signals in engineering problems.
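
    One ingredient that such roughness models share is the modulation content of the envelope within auditory-like carrier bands. A minimal sketch (not HAP-RM itself; the band edges, modulation range and synthetic test signal below are assumptions):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def band_envelope_modulation(x, sr, f_lo, f_hi, mod_band=(20.0, 150.0)):
    """Band-pass a carrier band, take its Hilbert envelope, and measure the
    fraction of envelope power in the roughness-relevant modulation range."""
    sos = butter(4, [f_lo, f_hi], btype="bandpass", fs=sr, output="sos")
    band = sosfiltfilt(sos, x)
    env = np.abs(hilbert(band))                    # envelope of the carrier band
    env = env - env.mean()                         # drop the DC component
    spec = np.abs(np.fft.rfft(env)) ** 2
    freqs = np.fft.rfftfreq(env.size, 1 / sr)
    in_band = (freqs >= mod_band[0]) & (freqs <= mod_band[1])
    return spec[in_band].sum() / spec.sum()        # fraction of envelope power from "rough" modulations

sr = 16000
t = np.arange(0, 1.0, 1 / sr)
rough = (1 + 0.9 * np.sin(2 * np.pi * 70 * t)) * np.sin(2 * np.pi * 1000 * t)   # 70 Hz AM tone, perceptually rough
print(band_envelope_modulation(rough, sr, 800, 1200))
```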

  11. Silicon cross-connect filters using microring resonator coupled multimode-interference-based waveguide crossings.

    PubMed

    Xu, Fang; Poon, Andrew W

    2008-06-09

    We report silicon cross-connect filters using microring resonator coupled multimode-interference (MMI) based waveguide crossings. Our experiments reveal that the MMI-based cross-connect filters impose lower crosstalk at the crossing than the conventional cross-connect filters using plain crossings, while offering a nearly symmetric resonance line shape in the drop-port transmission. As a proof-of-concept for cross-connection applications, we demonstrate on a silicon-on-insulator substrate (i) a 4-channel 1 x 4 linear-cascaded MMI-based cross-connect filter, and (ii) a 2-channel 2 x 2 array-cascaded MMI-based cross-connect filter.

  12. Spatial filters for high-peak-power multistage laser amplifiers.

    PubMed

    Potemkin, A K; Barmashova, T V; Kirsanov, A V; Martyanov, M A; Khazanov, E A; Shaykin, A A

    2007-07-10

    We describe spatial filters used in a Nd:glass laser with an output pulse energy up to 300 J and a pulse duration of 1 ns. This laser is designed for pumping of a chirped-pulse optical parametric amplifier. We present data required to choose the shape and diameter of a spatial filter lens, taking into account aberrations caused by spherical surfaces. Calculation of the optimal pinhole diameter is presented. Design features of the spatial filters and the procedure of their alignment are discussed in detail.

  13. Neural practice effect during cross-modal selective attention: Supra-modal and modality-specific effects.

    PubMed

    Xia, Jing; Zhang, Wei; Jiang, Yizhou; Li, You; Chen, Qi

    2018-05-16

    Practice and experiences gradually shape the central nervous system, from the synaptic level to large-scale neural networks. In natural multisensory environments, even when inundated by streams of information from multiple sensory modalities, our brain does not give equal weight to different modalities. Rather, visual information more frequently receives preferential processing and eventually dominates consciousness and behavior, i.e., visual dominance. It remains unknown, however, how supra-modal and modality-specific practice effects operate during cross-modal selective attention, and whether the practice effect shows similar modality preferences as the visual dominance effect in the multisensory environment. To answer the above two questions, we adopted a cross-modal selective attention paradigm in conjunction with the hybrid fMRI design. Behaviorally, visual performance significantly improved while auditory performance remained constant with practice, indicating that visual attention more flexibly adapted behavior with practice than auditory attention. At the neural level, the practice effect was associated with decreasing neural activity in the frontoparietal executive network and increasing activity in the default mode network, which occurred independently of the modality attended, i.e., the supra-modal mechanisms. On the other hand, functional decoupling between the auditory and the visual system was observed with the progress of practice, which varied as a function of the modality attended. The auditory system was functionally decoupled from both the dorsal and ventral visual streams during auditory attention, while it was decoupled only from the ventral visual stream during visual attention. To efficiently suppress the irrelevant visual information with practice, auditory attention needs to additionally decouple the auditory system from the dorsal visual stream. The modality-specific mechanisms, together with the behavioral effect, thus support the visual dominance model in terms of the practice effect during cross-modal selective attention. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. Contribution of auditory working memory to speech understanding in mandarin-speaking cochlear implant users.

    PubMed

    Tao, Duoduo; Deng, Rui; Jiang, Ye; Galvin, John J; Fu, Qian-Jie; Chen, Bing

    2014-01-01

    To investigate how auditory working memory relates to speech perception performance by Mandarin-speaking cochlear implant (CI) users. Auditory working memory and speech perception was measured in Mandarin-speaking CI and normal-hearing (NH) participants. Working memory capacity was measured using forward digit span and backward digit span; working memory efficiency was measured using articulation rate. Speech perception was assessed with: (a) word-in-sentence recognition in quiet, (b) word-in-sentence recognition in speech-shaped steady noise at +5 dB signal-to-noise ratio, (c) Chinese disyllable recognition in quiet, (d) Chinese lexical tone recognition in quiet. Self-reported school rank was also collected regarding performance in schoolwork. There was large inter-subject variability in auditory working memory and speech performance for CI participants. Working memory and speech performance were significantly poorer for CI than for NH participants. All three working memory measures were strongly correlated with each other for both CI and NH participants. Partial correlation analyses were performed on the CI data while controlling for demographic variables. Working memory efficiency was significantly correlated only with sentence recognition in quiet when working memory capacity was partialled out. Working memory capacity was correlated with disyllable recognition and school rank when efficiency was partialled out. There was no correlation between working memory and lexical tone recognition in the present CI participants. Mandarin-speaking CI users experience significant deficits in auditory working memory and speech performance compared with NH listeners. The present data suggest that auditory working memory may contribute to CI users' difficulties in speech understanding. The present pattern of results with Mandarin-speaking CI users is consistent with previous auditory working memory studies with English-speaking CI users, suggesting that the lexical importance of voice pitch cues (albeit poorly coded by the CI) did not influence the relationship between working memory and speech perception.
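
    The partial correlations reported above can be computed by correlating the residuals left after regressing out the control variables. A minimal sketch with synthetic data (the paper's actual demographic covariates are not reproduced here):

```python
import numpy as np

def partial_corr(x, y, covariates):
    """Partial correlation of x and y, controlling for the covariates: correlate
    the residuals left after regressing the covariates out of each variable."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    Z = np.column_stack([np.ones(len(x)), covariates])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# Synthetic example: a working-memory score and a speech score both depend on a
# demographic variable; the partial correlation removes that shared dependence.
rng = np.random.default_rng(0)
age = rng.uniform(5, 18, 60)
memory = 0.2 * age + rng.normal(0, 1, 60)
speech = 0.2 * age + 0.5 * memory + rng.normal(0, 1, 60)
print(np.corrcoef(memory, speech)[0, 1], partial_corr(memory, speech, age))
```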

  15. A comparison of auditory brainstem responses across diving bird species

    USGS Publications Warehouse

    Crowell, Sara E.; Berlin, Alicia; Carr, Catherine E.; Olsen, Glenn H.; Therrien, Ronald E.; Yannuzzi, Sally E.; Ketten, Darlene R.

    2015-01-01

    There is little biological data available for diving birds because many live in hard-to-study, remote habitats. Only one species of diving bird, the black-footed penguin (Spheniscus demersus), has been studied in respect to auditory capabilities (Wever et al., Proc Natl Acad Sci USA 63:676–680, 1969). We, therefore, measured in-air auditory threshold in ten species of diving birds, using the auditory brainstem response (ABR). The average audiogram obtained for each species followed the U-shape typical of birds and many other animals. All species tested shared a common region of the greatest sensitivity, from 1000 to 3000 Hz, although audiograms differed significantly across species. Thresholds of all duck species tested were more similar to each other than to the two non-duck species tested. The red-throated loon (Gavia stellata) and northern gannet (Morus bassanus) exhibited the highest thresholds while the lowest thresholds belonged to the duck species, specifically the lesser scaup (Aythya affinis) and ruddy duck (Oxyura jamaicensis). Vocalization parameters were also measured for each species, and showed that with the exception of the common eider (Somateria mollisima), the peak frequency, i.e., frequency at the greatest intensity, of all species' vocalizations measured here fell between 1000 and 3000 Hz, matching the bandwidth of the most sensitive hearing range.

  16. A comparison of auditory brainstem responses across diving bird species

    PubMed Central

    Crowell, Sara E.; Wells-Berlin, Alicia M.; Carr, Catherine E.; Olsen, Glenn H.; Therrien, Ronald E.; Yannuzzi, Sally E.; Ketten, Darlene R.

    2015-01-01

    There is little biological data available for diving birds because many live in hard-to-study, remote habitats. Only one species of diving bird, the black-footed penguin (Spheniscus demersus), has been studied in respect to auditory capabilities (Wever et al. 1969). We therefore measured in-air auditory threshold in ten species of diving birds, using the auditory brainstem response (ABR). The average audiogram obtained for each species followed the U-shape typical of birds and many other animals. All species tested shared a common region of greatest sensitivity, from 1000 to 3000 Hz, although audiograms differed significantly across species. Thresholds of all duck species tested were more similar to each other than to the two non-duck species tested. The red-throated loon (Gavia stellata) and northern gannet (Morus bassanus) exhibited the highest thresholds while the lowest thresholds belonged to the duck species, specifically the lesser scaup (Aythya affinis) and ruddy duck (Oxyura jamaicensis). Vocalization parameters were also measured for each species, and showed that with the exception of the common eider (Somateria mollisima), the peak frequency, i.e. frequency at the greatest intensity, of all species’ vocalizations measured here fell between 1000 and 3000 Hz, matching the bandwidth of the most sensitive hearing range. PMID:26156644

  17. Enhanced auditory spatial localization in blind echolocators.

    PubMed

    Vercillo, Tiziana; Milne, Jennifer L; Gori, Monica; Goodale, Melvyn A

    2015-01-01

    Echolocation is the extraordinary ability to represent the external environment by using reflected sound waves from self-generated auditory pulses. Blind human expert echolocators show extremely precise spatial acuity and high accuracy in determining the shape and motion of objects by using echoes. In the current study, we investigated whether or not the use of echolocation would improve the representation of auditory space, which is severely compromised in congenitally blind individuals (Gori et al., 2014). The performance of three blind expert echolocators was compared to that of 6 blind non-echolocators and 11 sighted participants. Two tasks were performed: (1) a space bisection task in which participants judged whether the second of a sequence of three sounds was closer in space to the first or the third sound and (2) a minimum audible angle task in which participants reported which of two sounds presented successively was located more to the right. The blind non-echolocating group showed a severe impairment only in the space bisection task compared to the sighted group. Remarkably, the three blind expert echolocators performed both spatial tasks with similar or even better precision and accuracy than the sighted group. These results suggest that echolocation may improve the general sense of auditory space, most likely through a process of sensory calibration. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Prestimulus influences on auditory perception from sensory representations and decision processes.

    PubMed

    Kayser, Stephanie J; McNair, Steven W; Kayser, Christoph

    2016-04-26

    The qualities of perception depend not only on the sensory inputs but also on the brain state before stimulus presentation. Although the collective evidence from neuroimaging studies for a relation between prestimulus state and perception is strong, the interpretation in the context of sensory computations or decision processes has remained difficult. In the auditory system, for example, previous studies have reported a wide range of effects in terms of the perceptually relevant frequency bands and state parameters (phase/power). To dissociate influences of state on earlier sensory representations and higher-level decision processes, we collected behavioral and EEG data in human participants performing two auditory discrimination tasks relying on distinct acoustic features. Using single-trial decoding, we quantified the relation between prestimulus activity, relevant sensory evidence, and choice in different task-relevant EEG components. Within auditory networks, we found that phase had no direct influence on choice, whereas power in task-specific frequency bands affected the encoding of sensory evidence. Within later-activated frontoparietal regions, theta and alpha phase had a direct influence on choice, without involving sensory evidence. These results delineate two consistent mechanisms by which prestimulus activity shapes perception. However, the timescales of the relevant neural activity depend on the specific brain regions engaged by the respective task.
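
    In the spirit of the single-trial decoding used here, the relation between a prestimulus state parameter (e.g. band-limited power) and the upcoming choice can be quantified with a cross-validated classifier. A sketch on synthetic trials (all data and effect sizes below are made up for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical single-trial data: prestimulus alpha power (one value per trial,
# e.g. averaged over a frontoparietal component) and the binary choice.
rng = np.random.default_rng(1)
n_trials = 400
alpha_power = rng.gamma(shape=2.0, scale=1.0, size=n_trials)
choice = (0.6 * alpha_power + rng.normal(0, 1.5, n_trials) > 1.2).astype(int)

# Single-trial decoding: does prestimulus state predict the choice above chance?
clf = LogisticRegression()
scores = cross_val_score(clf, alpha_power.reshape(-1, 1), choice, cv=5, scoring="roc_auc")
print("cross-validated AUC:", scores.mean())
```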

  19. Prestimulus influences on auditory perception from sensory representations and decision processes

    PubMed Central

    McNair, Steven W.

    2016-01-01

    The qualities of perception depend not only on the sensory inputs but also on the brain state before stimulus presentation. Although the collective evidence from neuroimaging studies for a relation between prestimulus state and perception is strong, the interpretation in the context of sensory computations or decision processes has remained difficult. In the auditory system, for example, previous studies have reported a wide range of effects in terms of the perceptually relevant frequency bands and state parameters (phase/power). To dissociate influences of state on earlier sensory representations and higher-level decision processes, we collected behavioral and EEG data in human participants performing two auditory discrimination tasks relying on distinct acoustic features. Using single-trial decoding, we quantified the relation between prestimulus activity, relevant sensory evidence, and choice in different task-relevant EEG components. Within auditory networks, we found that phase had no direct influence on choice, whereas power in task-specific frequency bands affected the encoding of sensory evidence. Within later-activated frontoparietal regions, theta and alpha phase had a direct influence on choice, without involving sensory evidence. These results delineate two consistent mechanisms by which prestimulus activity shapes perception. However, the timescales of the relevant neural activity depend on the specific brain regions engaged by the respective task. PMID:27071110

  20. Efficient coding of spectrotemporal binaural sounds leads to emergence of the auditory space representation

    PubMed Central

    Młynarski, Wiktor

    2014-01-01

    To date, a number of studies have shown that receptive field shapes of early sensory neurons can be reproduced by optimizing coding efficiency of natural stimulus ensembles. A still unresolved question is whether the efficient coding hypothesis explains the formation of neurons which explicitly represent environmental features of different functional importance. This paper proposes that the spatial selectivity of higher auditory neurons emerges as a direct consequence of learning efficient codes for natural binaural sounds. First, it is demonstrated that a linear efficient coding transform, Independent Component Analysis (ICA), trained on spectrograms of naturalistic simulated binaural sounds extracts spatial information present in the signal. A simple hierarchical ICA extension allowing for decoding of sound position is proposed. Furthermore, it is shown that units revealing spatial selectivity can be learned from a binaural recording of a natural auditory scene. In both cases a relatively small subpopulation of learned spectrogram features suffices to perform accurate sound localization. Representation of the auditory space is therefore learned in a purely unsupervised way by maximizing coding efficiency and without any task-specific constraints. These results imply that efficient coding is a useful strategy for learning structures which allow for making behaviorally vital inferences about the environment. PMID:24639644
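
    The ICA step described above can be sketched in a few lines. The following is a minimal illustration, not the paper's pipeline: it stacks log-magnitude spectrogram frames from the two ears into feature vectors and runs FastICA on them. The hierarchical extension and the naturalistic binaural simulation are omitted, and the random arrays stand in for real recordings.

    ```python
    import numpy as np
    from scipy.signal import stft
    from sklearn.decomposition import FastICA

    def binaural_spectrogram_features(left, right, fs, nperseg=256):
        """Stack log-magnitude spectrogram frames of both ears into feature vectors."""
        _, _, L = stft(left, fs=fs, nperseg=nperseg)
        _, _, R = stft(right, fs=fs, nperseg=nperseg)
        feats = np.vstack([np.log(np.abs(L) + 1e-8), np.log(np.abs(R) + 1e-8)])
        return feats.T        # one row per time frame

    # Hypothetical binaural recording (replace with real data)
    fs = 16000
    rng = np.random.default_rng(0)
    left = rng.standard_normal(fs * 5)
    right = rng.standard_normal(fs * 5)

    X = binaural_spectrogram_features(left, right, fs)
    ica = FastICA(n_components=20, random_state=0, max_iter=1000)
    sources = ica.fit_transform(X)        # per-frame activations of learned features
    features = ica.mixing_                # learned binaural spectrogram features
    ```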

  1. Hearing shapes our perception of time: temporal discrimination of tactile stimuli in deaf people.

    PubMed

    Bolognini, Nadia; Cecchetto, Carlo; Geraci, Carlo; Maravita, Angelo; Pascual-Leone, Alvaro; Papagno, Costanza

    2012-02-01

    Confronted with the loss of one type of sensory input, we compensate using information conveyed by other senses. However, losing one type of sensory information at specific developmental times may lead to deficits across all sensory modalities. We addressed the effect of auditory deprivation on the development of tactile abilities, taking into account changes occurring at the behavioral and cortical level. Congenitally deaf and hearing individuals performed two tactile tasks, the first requiring the discrimination of the temporal duration of touches and the second requiring the discrimination of their spatial length. Compared with hearing individuals, deaf individuals were impaired only in tactile temporal processing. To explore the neural substrate of this difference, we ran a TMS experiment. In deaf individuals, the auditory association cortex was involved in temporal and spatial tactile processing, with the same chronometry as the primary somatosensory cortex. In hearing participants, the involvement of the auditory association cortex occurred at a later stage and selectively for temporal discrimination. The different chronometry in the recruitment of the auditory cortex in deaf individuals correlated with the tactile temporal impairment. Thus, early hearing experience seems to be crucial for developing efficient temporal processing across modalities, suggesting that plasticity does not necessarily result in behavioral compensation.

  2. Musical experience sharpens human cochlear tuning.

    PubMed

    Bidelman, Gavin M; Nelms, Caitlin; Bhagat, Shaum P

    2016-05-01

    The mammalian cochlea functions as a filter bank that performs a spectral, Fourier-like decomposition on the acoustic signal. While tuning can be compromised (e.g., broadened with hearing impairment), whether or not human cochlear frequency resolution can be sharpened through experiential factors (e.g., training or learning) has not yet been established. Previous studies have demonstrated sharper psychophysical tuning curves in trained musicians compared to nonmusicians, implying superior peripheral tuning. However, these findings are based on perceptual masking paradigms, and reflect engagement of the entire auditory system rather than cochlear tuning, per se. Here, by directly mapping physiological tuning curves from stimulus frequency otoacoustic emissions (SFOAEs), i.e. cochlear-emitted sounds, we show that estimates of human cochlear tuning in a high-frequency cochlear region (4 kHz) are sharper (by a factor of 1.5×) in musicians and improve with the number of years of their auditory training. These findings were corroborated by measurements of psychophysical tuning curves (PTCs) derived via simultaneous masking, which similarly showed sharper tuning in musicians. Comparisons between SFOAEs and PTCs revealed closer correspondence between physiological and behavioral curves in musicians, indicating that tuning is also more consistent between different levels of auditory processing in trained ears. Our findings demonstrate an experience-dependent enhancement in the resolving power of the cochlear sensory epithelium and the spectral resolution of human hearing and provide a peripheral account for the auditory perceptual benefits observed in musicians. Both local and feedback (e.g., medial olivocochlear efferent) processes are discussed as potential mechanisms for experience-dependent tuning. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Tactile and bone-conduction auditory brain computer interface for vision and hearing impaired users.

    PubMed

    Rutkowski, Tomasz M; Mori, Hiromu

    2015-04-15

    The paper presents a report on a recently developed BCI alternative for users suffering from impaired vision (lack of focus or eye movements) or from the so-called "ear-blocking syndrome" (limited hearing). We report on our recent studies of the extent to which vibrotactile stimuli delivered to the head of a user can serve as a platform for a brain computer interface (BCI) paradigm. In the proposed tactile and bone-conduction auditory BCI, multiple novel head positions are used to evoke combined somatosensory and auditory (via the bone-conduction effect) P300 brain responses, in order to define a multimodal tactile and bone-conduction auditory brain computer interface (tbcaBCI). To further remove EEG interferences and to improve P300 response classification, the synchrosqueezing transform (SST) is applied. SST outperforms the classical time-frequency analysis methods for non-linear and non-stationary signals such as EEG, and is also computationally more efficient than empirical mode decomposition. SST filtering allows for online EEG preprocessing, which is essential in the case of BCI. Experimental results with healthy BCI-naive users performing online tbcaBCI validate the paradigm, and the feasibility of the concept is illustrated through information transfer rate case studies. We present a comparison of the proposed SST-based preprocessing method, combined with a logistic regression (LR) classifier, with classical preprocessing and LDA-based classification BCI techniques. The proposed tbcaBCI paradigm, together with data-driven preprocessing methods, is a step forward in robust BCI applications research. Copyright © 2014 Elsevier B.V. All rights reserved.
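
    As a rough illustration of the classification stage only, the sketch below trains a logistic regression classifier on windowed-mean EEG features and evaluates it with cross-validation. The SST preprocessing described above is not reproduced here (simple time-window averages stand in for it), and all array shapes, labels, and names are hypothetical.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Hypothetical P300 epochs: (n_trials, n_channels, n_samples), labels 1 = target
    rng = np.random.default_rng(1)
    epochs = rng.standard_normal((200, 16, 128))
    labels = rng.integers(0, 2, 200)

    # Simple time-domain features (mean amplitude in consecutive windows) stand in
    # for the SST-filtered features described in the abstract.
    windows = epochs.reshape(200, 16, 8, 16).mean(axis=-1)   # 8 windows per channel
    X = windows.reshape(200, -1)

    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    scores = cross_val_score(clf, X, labels, cv=5)
    print("mean CV accuracy:", scores.mean())
    ```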

  4. Pendulation control system and method for rotary boom cranes

    DOEpatents

    Robinett, III, Rush D.; Groom, Kenneth N.; Feddema, John T.; Parker, Gordon G.

    2002-01-01

    A command shaping control system and method for rotary boom cranes provides a way to reduce payload pendulation caused by real-time input signals, from either operator command or automated crane maneuvers. The method can take input commands and can apply a command shaping filter to reduce contributors to payload pendulation due to rotation, elevation, and hoisting movements in order to control crane response and reduce tangential and radial payload pendulation. A filter can be applied to a pendulation excitation frequency to reduce residual radial pendulation and tangential pendulation amplitudes.
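
    The general idea of command shaping can be illustrated with a textbook two-impulse zero-vibration (ZV) shaper, which delays and scales the operator command so that the second impulse cancels the payload swing excited by the first. This is only a generic sketch under assumed pendulation parameters, not the patented filter described above.

    ```python
    import numpy as np

    def zv_shaper(natural_freq_hz, damping_ratio, dt):
        """Return impulse amplitudes and sample delays for a zero-vibration (ZV) shaper.

        Textbook two-impulse shaper, shown only to illustrate the general idea of
        command shaping; it is not the patented method described above.
        """
        wn = 2 * np.pi * natural_freq_hz
        wd = wn * np.sqrt(1 - damping_ratio**2)          # damped pendulation frequency
        K = np.exp(-damping_ratio * np.pi / np.sqrt(1 - damping_ratio**2))
        amps = np.array([1.0, K]) / (1.0 + K)            # impulse amplitudes
        delays = np.array([0.0, np.pi / wd])             # impulse times (s)
        return amps, np.round(delays / dt).astype(int)

    def shape_command(command, amps, delay_samples):
        """Convolve an operator command with the shaper impulses."""
        shaped = np.zeros(len(command) + delay_samples[-1])
        for a, d in zip(amps, delay_samples):
            shaped[d:d + len(command)] += a * command
        return shaped[:len(command)]

    # Example: shape a step slew command for a payload swinging at an assumed 0.25 Hz
    dt = 0.01
    amps, delays = zv_shaper(natural_freq_hz=0.25, damping_ratio=0.02, dt=dt)
    command = np.ones(2000)                              # unit step command
    shaped = shape_command(command, amps, delays)
    ```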

  5. Technology optimization techniques for multicomponent optical band-pass filter manufacturing

    NASA Astrophysics Data System (ADS)

    Baranov, Yuri P.; Gryaznov, Georgiy M.; Rodionov, Andrey Y.; Obrezkov, Andrey V.; Medvedev, Roman V.; Chivanov, Alexey N.

    2016-04-01

    Narrowband optical devices (such as IR-sensing devices, celestial navigation systems, solar-blind UV systems and many others) are one of the fastest-growing areas in optical manufacturing. However, the signal strength in this type of application is quite low, and device performance depends on the attenuation level of wavelengths outside the operating range. Modern detectors (photodiodes, matrix detectors, photomultiplier tubes and others) usually do not have the required selectivity or, at worst, have higher sensitivity to the background spectrum. Manufacturing a single-component band-pass filter with a high attenuation level is a resource-intensive task, and sometimes it is not possible to solve this problem using existing technologies. Different types of filters show technology-related variations of transmittance profile shape due to various production factors. At the same time there are multiple tasks with strict requirements for background spectrum attenuation in narrowband optical devices; for example, in a solar-blind UV system, wavelengths above 290-300 nm must be attenuated by 180 dB. In this paper, techniques for assembling multi-component optical band-pass filters from multiple single elements with technology-related variations of transmittance profile shape, so as to optimize the signal-to-noise ratio (SNR), were proposed. Relationships between signal-to-noise ratio and different characteristics of the transmittance profile shape were shown. The practical results obtained were in good agreement with our calculations.

  6. Pulse shape discrimination of Cs2LiYCl6:Ce3+ detectors at high count rate based on triangular and trapezoidal filters

    NASA Astrophysics Data System (ADS)

    Wen, Xianfei; Enqvist, Andreas

    2017-09-01

    Cs2LiYCl6:Ce3+ (CLYC) detectors have demonstrated the capability to simultaneously detect γ-rays and thermal and fast neutrons with medium energy resolution, reasonable detection efficiency, and substantially high pulse shape discrimination performance. A disadvantage of CLYC detectors is their long scintillation decay times, which cause pulse pile-up at moderate input count rates. Pulse processing algorithms based on triangular and trapezoidal filters were developed to discriminate between neutrons and γ-rays at high count rates. The algorithms were first tested using low-rate data, where they exhibit pulse-shape discrimination performance comparable to that of the charge comparison method. They were then evaluated at high count rates: neutrons and γ-rays were adequately identified with high throughput at rates of up to 375 kcps. The algorithm based on the triangular filter exhibits marginally higher discrimination capability than the trapezoidal-filter-based algorithm at both low and high rates. The algorithms have low computational complexity and are executable on an FPGA in real time. They are also suitable for other radiation detectors whose pulses pile up at high rates owing to long scintillation decay times.
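
    A triangular or trapezoidal shaping filter of the kind referred to above can be realized as an FIR kernel formed by convolving two boxcars; with a zero-length flat top the trapezoid degenerates to a triangle. The sketch below is a generic illustration with assumed rise and flat-top lengths and a synthetic pulse, not the authors' FPGA implementation.

    ```python
    import numpy as np

    def trapezoidal_kernel(rise, flat):
        """FIR kernel whose shape is a trapezoid: `rise` samples of ramp on each
        side and `flat` samples of flat top. With flat=0 it degenerates to a
        triangular shaping filter."""
        k = np.convolve(np.ones(rise), np.ones(rise + flat))
        return k / k.max()

    def shape_pulse(waveform, rise, flat):
        """Apply the shaping filter to a digitized detector waveform (1-D array)."""
        return np.convolve(waveform, trapezoidal_kernel(rise, flat), mode="same")

    # Hypothetical digitized scintillation pulse: fast rise, slow exponential decay
    t = np.arange(2000)
    pulse = (1 - np.exp(-t / 5.0)) * np.exp(-t / 600.0)

    triangular  = shape_pulse(pulse, rise=100, flat=0)    # triangular shaping
    trapezoidal = shape_pulse(pulse, rise=100, flat=50)   # trapezoidal shaping
    # A PSD metric could then compare, e.g., shaped amplitudes at two delays.
    ```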

  7. All-fiber optical filter with an ultranarrow and rectangular spectral response.

    PubMed

    Zou, Xihua; Li, Ming; Pan, Wei; Yan, Lianshan; Azaña, José; Yao, Jianping

    2013-08-15

    Optical filters with an ultranarrow and rectangular spectral response are highly desired for high-resolution optical/electrical signal processing. An all-fiber optical filter based on a fiber Bragg grating with a large number of phase shifts is designed and fabricated. The measured spectral response shows a 3 dB bandwidth of 650 MHz and a rectangular shape factor of 0.513 at the 25 dB bandwidth. This is the narrowest rectangular bandpass response ever reported for an all-fiber filter, to the best of our knowledge. The filter has also the intrinsic advantages of an all-fiber implementation.

  8. Analysis of silicon on insulator (SOI) optical microring add-drop filter based on waveguide intersections

    NASA Astrophysics Data System (ADS)

    Kaźmierczak, Andrzej; Bogaerts, Wim; Van Thourhout, Dries; Drouard, Emmanuel; Rojo-Romeo, Pedro; Giannone, Domenico; Gaffiot, Frederic

    2008-04-01

    We present a compact passive optical add-drop filter which incorporates two microring resonators and a waveguide intersection in silicon-on-insulator (SOI) technology. Such a filter is a key element for designing simple layouts of highly integrated complex optical networks-on-chip. The filter occupies an area smaller than 10 μm × 10 μm and exhibits relatively high quality factors (up to 4000) and efficient signal dropping capabilities. In the present work, the influence of filter parameters such as the microring-resonator radii and the coupling section shape is analyzed theoretically and experimentally.

  9. Hair cell regeneration in the chick inner ear following acoustic trauma: ultrastructural and immunohistochemical studies.

    PubMed

    Umemoto, M; Sakagami, M; Fukazawa, K; Ashida, K; Kubo, T; Senda, T; Yoneda, Y

    1995-09-01

    The regeneration of hair cells in the chick inner ear following acoustic trauma was examined using transmission electron microscopy. In addition, the localization of proliferating cell nuclear antigen (PCNA) and basic fibroblast growth factor (b-FGF) was demonstrated immunohistochemically. The auditory sensory epithelium of the normal chick consists of short and tall hair cells and supporting cells. Immediately after noise exposure to a 1500-Hz pure tone at a sound pressure level of 120 decibels for 48 h, all the short hair cells disappeared in the middle region of the auditory epithelium. Twelve hours to 1 day after exposure, mitotic cells, binucleate cells and PCNA-positive supporting cells were observed, and b-FGF immunoreactivity was shown in the supporting cells and glial cells near the habenula perforata. Spindle-shaped hair cells with immature stereocilia and a kinocilium appeared 3 days after exposure; these cells had synaptic connections with the newly developed nerve endings. The spindle-shaped hair cell is considered to be a transitional cell in the lineage from the supporting cell to the mature short hair cell. These results indicate that, after acoustic trauma, the supporting cells divide and differentiate into new short hair cells via spindle-shaped hair cells. Furthermore, it is suggested that b-FGF is related to the proliferation of the supporting cells and the extension of the nerve fibers.

  10. Attention selectively modulates cortical entrainment in different regions of the speech spectrum

    PubMed Central

    Baltzell, Lucas S.; Horton, Cort; Shen, Yi; Richards, Virginia M.; D'Zmura, Michael; Srinivasan, Ramesh

    2016-01-01

    Recent studies have uncovered a neural response that appears to track the envelope of speech, and have shown that this tracking process is mediated by attention. It has been argued that this tracking reflects a process of phase-locking to the fluctuations of stimulus energy, ensuring that this energy arrives during periods of high neuronal excitability. Because all acoustic stimuli are decomposed into spectral channels at the cochlea, and this spectral decomposition is maintained along the ascending auditory pathway and into auditory cortex, we hypothesized that the overall stimulus envelope is not as relevant to cortical processing as the individual frequency channels; attention may be mediating envelope tracking differentially across these spectral channels. To test this we reanalyzed data reported by Horton et al. (2013), where high-density EEG was recorded while adults attended to one of two competing naturalistic speech streams. In order to simulate cochlear filtering, the stimuli were passed through a gammatone filterbank, and temporal envelopes were extracted at each filter output. Following Horton et al. (2013), the attended and unattended envelopes were cross-correlated with the EEG, and local maxima were extracted at three different latency ranges corresponding to distinct peaks in the cross-correlation function (N1, P2, and N2). We found that the ratio between the attended and unattended cross-correlation functions varied across frequency channels in the N1 latency range, consistent with the hypothesis that attention differentially modulates envelope-tracking activity across spectral channels. PMID:27195825
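
    The analysis described above, extracting per-channel envelopes with a gammatone filterbank and cross-correlating them with the EEG, can be sketched as follows. The example assumes SciPy's gammatone filter design (available in recent SciPy releases) and uses random arrays in place of the speech and EEG recordings; it is an illustration of the method, not the authors' code.

    ```python
    import numpy as np
    from scipy.signal import gammatone, lfilter, hilbert, correlate, resample

    def band_envelope(audio, fs, center_freq):
        """Envelope of one gammatone channel (requires SciPy >= 1.6 for gammatone)."""
        b, a = gammatone(center_freq, 'iir', fs=fs)
        band = lfilter(b, a, audio)
        return np.abs(hilbert(band))

    # Hypothetical data: speech at 16 kHz and one EEG channel at 128 Hz
    fs_audio, fs_eeg = 16000, 128
    rng = np.random.default_rng(2)
    speech = rng.standard_normal(fs_audio * 10)
    eeg = rng.standard_normal(fs_eeg * 10)

    center_freqs = [250, 500, 1000, 2000, 4000]
    for cf in center_freqs:
        env = band_envelope(speech, fs_audio, cf)
        env = resample(env, len(eeg))                 # match the EEG sampling rate
        xcorr = correlate(eeg - eeg.mean(), env - env.mean(), mode='full')
        lags = np.arange(-len(eeg) + 1, len(eeg)) / fs_eeg
        peak_lag = lags[np.argmax(np.abs(xcorr))]
        print(f"{cf} Hz channel: peak |xcorr| at {peak_lag * 1e3:.0f} ms lag")
    ```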

  11. Tcf4 transgenic female mice display delayed adaptation in an auditory latent inhibition paradigm.

    PubMed

    Brzózka, M M; Rossner, M J; de Hoz, L

    2016-09-01

    Schizophrenia (SZ) is a severe mental disorder affecting about 1 % of the human population. Patients show severe deficits in cognitive processing often characterized by an improper filtering of environmental stimuli. Independent genome-wide association studies confirmed a number of risk variants for SZ including several associated with the gene encoding the transcription factor 4 (TCF4). TCF4 is widely expressed in the central nervous system of mice and humans and seems to be important for brain development. Transgenic mice overexpressing murine Tcf4 (Tcf4tg) in the adult brain display cognitive impairments and sensorimotor gating disturbances. To address the question of whether increased Tcf4 gene dosage may affect cognitive flexibility in an auditory associative task, we tested latent inhibition (LI) in female Tcf4tg mice. LI is a widely accepted translational endophenotype of SZ and results from a maladaptive delay in switching a response to a previously unconditioned stimulus when this becomes conditioned. Using an Audiobox, we pre-exposed Tcf4tg mice and their wild-type littermates to either a 3- or a 12-kHz tone before conditioning them to a 12-kHz tone. Tcf4tg animals pre-exposed to a 12-kHz tone showed significantly delayed conditioning when the previously unconditioned tone became associated with an air puff. These results support findings that associate TCF4 dysfunction with cognitive inflexibility and improper filtering of sensory stimuli observed in SZ patients.

  12. Rerouting the external auditory canal. A method of correcting congenital stenosis.

    PubMed

    Baron, S H

    1975-04-01

    An hourglass- or funnel-shaped, stenosed external auditory meatus with a normal tympanic membrane, middle and inner ear is one of the congenital anomalies that occasionally occurs. Such an abnormality was present in both ears of a woman and caused chronic otitis externa and deafness. A routine meatoplasty on the right ear failed because of an unusual cephalad position of the drumhead in relation to a "downhill" position of the stenosed outer meatus. Rerouting the ear canal to a horizontal position by removing bone of the canal superiorly, posteriorly, and inferiorly, and grafting the now horizontal canal with skin taken from the postauricular fold produced a good result. This is a satisfactory procedure for a woman, but would be cosmetically unacceptable for a man.

  13. Ultra-wide bandpass filter based on long-period fiber gratings and the evanescent field coupling between two fibers.

    PubMed

    Kim, Myoung Jin; Jung, Yong Min; Kim, Bok Hyeon; Han, Won-Taek; Lee, Byeong Ha

    2007-08-20

    We demonstrate a fiber-based bandpass filter with an ultra-wide spectral bandwidth. The ultra-wide band feature is achieved by inscribing a long-period fiber grating (LPG) in a specially designed low-index-core single-mode fiber. To obtain the bandpass function, the evanescent field coupling between two attached fibers is utilized. By applying strain, the spectral shape of the pass-band can be adjusted between flat-top and Gaussian shapes. For the flat-top case, a bandwidth of ~160 nm is obtained with an insertion loss of ~2 dB. With strain, the spectral shape is switched to a Gaussian one, which has ~120 nm FWHM and ~4.18 dB insertion loss at the peak.

  14. Tunable band-stop plasmonic waveguide filter with symmetrical multiple-teeth-shaped structure.

    PubMed

    Wang, Hongqing; Yang, Junbo; Zhang, Jingjing; Huang, Jie; Wu, Wenjun; Chen, Dingbo; Xiao, Gongli

    2016-03-15

    A nanometric plasmonic filter with a symmetrical multiple-teeth-shaped structure is investigated theoretically and numerically. A tunable wide bandgap is achievable by adjusting the depth and number of teeth. This phenomenon can be attributed to the interference superposition of the reflected and transmitted waves from each tooth. Moreover, the effects of varying the number of identical teeth are also discussed. It is found that the bandgap width increases continuously with the increasing number of teeth. The finite difference time domain method is used to simulate and compute the coupling of surface plasmon polariton waves with different structures in this Letter. The plasmonic waveguide filter that we propose here may have meaningful applications in ultra-fine spectrum analysis and high-density nanoplasmonic integrated circuits.

  15. Simultaneous shape and deformation measurements in a blood vessel model by two wavelength interferometry

    NASA Astrophysics Data System (ADS)

    Andrés, Nieves; Pinto, Cristina; Lobera, Julia; Palero, Virginia; Arroyo, M. Pilar

    2017-06-01

    Holographic techniques have been used to measure the shape and the radial deformation of a blood vessel model and a real sheep aorta. Measurements are obtained from several holograms recorded for different object states. For each object state, two holograms with two different wavelengths are multiplexed in the same digital recording; both holograms are thus recorded simultaneously, but the information from each of them is obtained separately. The shape analysis gives a wrapped phase map whose fringes are related to a synthetic wavelength. After a filtering and unwrapping process, the 3D shape can be obtained. The shape data for each line are fitted to a circle in order to determine the local vessel radius and center. The deformation analysis also results in a wrapped phase map, but the fringes are related to the laser wavelength used in the corresponding hologram. After the filtering and unwrapping process, a 2D map of the deformation in the out-of-plane direction is reconstructed. The radial deformation is then calculated by using the shape information.
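
    Two of the processing steps mentioned above, unwrapping the phase map and fitting each shape profile to a circle, can be sketched on a synthetic one-dimensional profile as follows. The synthetic wavelength, geometry, and offset handling are illustrative assumptions, not the values used in the study.

    ```python
    import numpy as np

    def fit_circle(x, z):
        """Least-squares circle fit (Kasa method): returns center (x0, z0) and radius."""
        A = np.column_stack([x, z, np.ones_like(x)])
        b = x**2 + z**2
        c, _, _, _ = np.linalg.lstsq(A, b, rcond=None)
        x0, z0 = c[0] / 2, c[1] / 2
        r = np.sqrt(c[2] + x0**2 + z0**2)
        return x0, z0, r

    # Synthetic shape profile: circular vessel cross-section encoded as a wrapped
    # phase at an assumed synthetic wavelength (illustrative value only).
    synthetic_wavelength = 0.5e-3                      # metres, assumed
    x = np.linspace(-2e-3, 2e-3, 200)                  # lateral position (m)
    true_r, true_z0 = 3e-3, -2.2e-3
    z = true_z0 + np.sqrt(true_r**2 - x**2)            # circular cross-section
    wrapped = np.angle(np.exp(1j * 4 * np.pi * z / synthetic_wavelength))

    height = np.unwrap(wrapped) * synthetic_wavelength / (4 * np.pi)
    height += z[0] - height[0]                         # align offset (known here only
                                                       # because the example is synthetic)
    x0, z0, r = fit_circle(x, height)
    print(f"fitted radius: {r * 1e3:.2f} mm")
    ```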

  16. Neural Changes Associated with Nonspeech Auditory Category Learning Parallel Those of Speech Category Acquisition

    ERIC Educational Resources Information Center

    Liu, Ran; Holt, Lori L.

    2011-01-01

    Native language experience plays a critical role in shaping speech categorization, but the exact mechanisms by which it does so are not well understood. Investigating category learning of nonspeech sounds with which listeners have no prior experience allows their experience to be systematically controlled in a way that is impossible to achieve by…

  17. Effects of the Presence of Audio and Type of Game Controller on Learning of Rhythmic Accuracy

    ERIC Educational Resources Information Center

    Thomas, James William

    2017-01-01

    "Guitar Hero III" and similar games potentially offer a vehicle for improvement of musical rhythmic accuracy with training delivered in both visual and auditory formats and by use of its novel guitar-shaped interface; however, some theories regarding multimedia learning suggest sound is a possible source of extraneous cognitive load…

  18. Removal of central obscuration and spider arm effects with beam-shaping coronagraphy

    NASA Astrophysics Data System (ADS)

    Abe, L.; Murakami, N.; Nishikawa, J.; Tamura, M.

    2006-05-01

    This paper describes a method for removing the effect of a centrally obscured aperture with additional spider arms in arbitrary geometrical configurations. The proposed method is based on a two-stage process where the light beam is first shaped to remove the central obscuration and spider arms, in order to feed a second, highly efficient coronagraph. The beam-shaping stage is a combination of a diffraction mask in the first focal plane and a complex amplitude filter located in the conjugate pupil. This paper specifically describes the case of using Lyot occulting masks and circular phase-shifting masks as diffracting components. The basic principle of the method is given along with an analytical description and numerical simulations. Substantial improvement in the performance of high-contrast coronagraphs can be obtained with this method, even if the beam-shaping filter is not perfectly manufactured.

  19. Binocular contrast-gain control for natural scenes: Image structure and phase alignment.

    PubMed

    Huang, Pi-Chun; Dai, Yu-Ming

    2018-05-01

    In the context of natural scenes, we applied the pattern-masking paradigm to investigate how image structure and phase alignment affect contrast-gain control in binocular vision. We measured the discrimination thresholds of bandpass-filtered natural-scene images (targets) under various types of pedestals. Our first experiment had four pedestal types: bandpass-filtered pedestals, unfiltered pedestals, notch-filtered pedestals (which enabled removal of the spatial frequency), and misaligned pedestals (which involved rotation of unfiltered pedestals). Our second experiment featured six types of pedestals: bandpass-filtered, unfiltered, and notch-filtered pedestals, and the corresponding phase-scrambled pedestals. The thresholds were compared for monocular, binocular, and dichoptic viewing configurations. The bandpass-filtered and unfiltered pedestals showed classic dipper shapes; the dipper shapes of the notch-filtered, misaligned, and phase-scrambled pedestals were weak. We adopted a two-stage binocular contrast-gain control model to describe our results. We deduced that the phase-alignment information influenced the contrast-gain control mechanism before the binocular summation stage and that the phase-alignment information and structural misalignment information caused relatively strong divisive inhibition in the monocular and interocular suppression stages. When the pedestals were phase-scrambled, eliminating interocular suppression processing provided the most convincing explanation of the results. Thus, our results indicated that both phase-alignment information and similar image structures cause strong interocular suppression. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. New similarity of triangular fuzzy number and its application.

    PubMed

    Zhang, Xixiang; Ma, Weimin; Chen, Liping

    2014-01-01

    The similarity of triangular fuzzy numbers is an important metric for their application. Several approaches exist for measuring the similarity of triangular fuzzy numbers; however, some of them are apt to produce overly large values. To make the similarity well distributed, a new method, SIAM (Shape's Indifferent Area and Midpoint), for measuring the similarity of triangular fuzzy numbers is put forward, which takes the shape's indifferent area and the midpoint of two triangular fuzzy numbers into consideration. Comparison with other similarity measurements shows the effectiveness of the proposed method. It is then applied to collaborative filtering recommendation to measure users' similarity. A collaborative filtering case is used to illustrate users' similarity based on the cloud model and triangular fuzzy numbers; the result indicates that users' similarity based on triangular fuzzy numbers obtains better discrimination. Finally, a simulated collaborative filtering recommendation system is developed which uses the cloud model and triangular fuzzy numbers to express users' comprehensive evaluations of items, and the results show that the accuracy of collaborative filtering recommendation based on triangular fuzzy numbers is higher.
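
    Since the record does not spell out the SIAM formula, the sketch below shows only the general pattern of a triangular-fuzzy-number similarity measure (combining vertex distance with midpoint agreement) as it might be used to compare users' fuzzy ratings. It is a hypothetical stand-in, not the SIAM measure itself.

    ```python
    import numpy as np

    def tfn_similarity(a, b):
        """Similarity of two triangular fuzzy numbers a=(a1,a2,a3), b=(b1,b2,b3).

        Generic illustration combining average vertex distance with midpoint
        agreement; NOT the exact SIAM measure of the paper. Assumes values in [0, 1].
        """
        a, b = np.asarray(a, float), np.asarray(b, float)
        vertex_term = 1.0 - np.mean(np.abs(a - b))      # closeness of the three vertices
        midpoint_term = 1.0 - abs(a[1] - b[1])          # closeness of the peaks
        return max(0.0, vertex_term * midpoint_term)

    # Users' ratings of an item expressed as triangular fuzzy numbers on [0, 1]
    user_a = (0.5, 0.7, 0.9)
    user_b = (0.4, 0.6, 0.8)
    user_c = (0.0, 0.1, 0.3)
    print(tfn_similarity(user_a, user_b))   # high similarity
    print(tfn_similarity(user_a, user_c))   # low similarity
    ```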

  1. The sound of music: differentiating musicians using a fast, musical multi-feature mismatch negativity paradigm.

    PubMed

    Vuust, Peter; Brattico, Elvira; Seppänen, Miia; Näätänen, Risto; Tervaniemi, Mari

    2012-06-01

    Musicians' skills in auditory processing depend highly on instrument, performance practice, and level of expertise. Yet, it is not known whether the style/genre of music might shape auditory processing in the brains of musicians. Here, we aimed at tackling the role of musical style/genre in modulating neural and behavioral responses to changes in musical features. Using a novel, fast and musical-sounding multi-feature paradigm, we measured the mismatch negativity (MMN), a pre-attentive brain response, to six types of musical feature change in musicians playing three distinct styles of music (classical, jazz, rock/pop) and in non-musicians. Jazz and classical musicians scored higher in the musical aptitude test than band musicians and non-musicians, especially with regard to tonal abilities. These results were extended by the MMN findings: jazz musicians had larger MMN amplitudes than all other experimental groups across the six different sound features, indicating a greater overall sensitivity to auditory outliers. In particular, we found enhanced processing of pitch and of slides up to pitches in jazz musicians only. Furthermore, we observed a more frontal MMN to pitch and location compared to the other deviants in jazz musicians and left lateralization of the MMN to timbre in classical musicians. These findings indicate that the characteristics of the style/genre of music played by musicians influence their perceptual skills and the brain processing of sound features embedded in a musical context. Musicians' brains are hence shaped by the type of training, musical style/genre, and listening experiences. Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. Development of echolocation calls and neural selectivity for echolocation calls in the pallid bat.

    PubMed

    Razak, Khaleel A; Fuzessery, Zoltan M

    2015-10-01

    Studies of birdsongs and neural selectivity for songs have provided important insights into principles of concurrent behavioral and auditory system development. Relatively little is known about mammalian auditory system development in terms of vocalizations or other behaviorally relevant sounds. This review suggests echolocating bats are suitable mammalian model systems to understand development of auditory behaviors. The simplicity of echolocation calls with known behavioral relevance and strong neural selectivity provides a platform to address how natural experience shapes cortical receptive field (RF) mechanisms. We summarize recent studies in the pallid bat that followed development of echolocation calls and cortical processing of such calls. We also discuss similar studies in the mustached bat for comparison. These studies suggest: (1) there are different developmental sensitive periods for different acoustic features of the same vocalization. The underlying basis is the capacity for some components of the RF to be modified independent of others. Some RF computations and maps involved in call processing are present even before the cochlea is mature and well before use of echolocation in flight. Others develop over a much longer time course. (2) Normal experience is required not just for refinement, but also for maintenance, of response properties that develop in an experience independent manner. (3) Experience utilizes millisecond range changes in timing of inhibitory and excitatory RF components as substrates to shape vocalization selectivity. We suggest that bat species and call diversity provide a unique opportunity to address developmental constraints in the evolution of neural mechanisms of vocalization processing. © 2014 Wiley Periodicals, Inc.

  3. Development of echolocation calls and neural selectivity for echolocation calls in the pallid bat

    PubMed Central

    Razak, Khaleel A.; Fuzessery, Zoltan M.

    2014-01-01

    Studies of birdsongs and neural selectivity for songs have provided important insights into principles of concurrent behavioral and auditory system development. Relatively little is known about mammalian auditory system development in terms of vocalizations, or other behaviorally relevant sounds. This review suggests echolocating bats are suitable mammalian model systems to understand development of auditory behaviors. The simplicity of echolocation calls with known behavioral relevance and strong neural selectivity provides a platform to address how natural experience shapes cortical receptive field (RF) mechanisms. We summarize recent studies in the pallid bat that followed development of echolocation calls and cortical processing of such calls. We also discuss similar studies in the mustached bat for comparison. These studies suggest: (1) there are different developmental sensitive periods for different acoustic features of the same vocalization. The underlying basis is the capacity for some components of the RF to be modified independent of others. Some RF computations and maps involved in call processing are present even before the cochlea is mature and well before use of echolocation in flight. Others develop over a much longer time course. (2) Normal experience is required not just for refinement, but also for maintenance, of response properties that develop in an experience independent manner. (3) Experience utilizes millisecond range changes in timing of inhibitory and excitatory RF components as substrates to shape vocalization selectivity. We suggest that bat species and call diversity provide a unique opportunity to address developmental constraints in the evolution of neural mechanisms of vocalization processing. PMID:25142131

  4. Effect of the environment on the dendritic morphology of the rat auditory cortex

    PubMed Central

    Bose, Mitali; Muñoz-Llancao, Pablo; Roychowdhury, Swagata; Nichols, Justin A.; Jakkamsetti, Vikram; Porter, Benjamin; Byrapureddy, Rajasekhar; Salgado, Humberto; Kilgard, Michael P.; Aboitiz, Francisco; Dagnino-Subiabre, Alexies; Atzori, Marco

    2010-01-01

    The present study aimed to identify morphological correlates of environment-induced changes at excitatory synapses of the primary auditory cortex (A1). We used the Golgi-Cox stain technique to compare the dendritic properties of pyramidal cells in Sprague-Dawley rats exposed to different environmental manipulations. Sholl analysis, dendritic length measures, and spine density counts were used to monitor the effects of sensory deafness and an auditory version of environmental enrichment (EE). We found that deafness decreased apical dendritic length while leaving basal dendritic length unchanged, whereas EE selectively increased basal dendritic length without changing apical dendritic length. In contrast, deafness decreased, whereas EE increased, spine density in both basal and apical dendrites of A1 layer 2/3 (LII/III) neurons. To determine whether stress contributed to the observed morphological changes in A1, we studied neural morphology in a restraint-induced model that lacked behaviorally relevant acoustic cues. We found that stress selectively decreased apical dendritic length in the auditory but not in the visual primary cortex. Similar to the acoustic manipulation, stress-induced changes in dendritic length showed a layer-specific pattern: LII/III neurons from stressed animals had normal apical dendrites but shorter basal dendrites, while infragranular neurons (layers V and VI) displayed shorter apical dendrites but normal basal dendrites. The same treatment did not induce similar changes in the visual cortex, demonstrating that the auditory cortex is an exquisitely sensitive target of neocortical plasticity, and that prolonged exposure to different acoustic as well as emotional environmental manipulations may produce specific changes in dendritic shape and spine density. PMID:19771593

  5. Talking back: Development of the olivocochlear efferent system.

    PubMed

    Frank, Michelle M; Goodrich, Lisa V

    2018-06-26

    Developing sensory systems must coordinate the growth of neural circuitry spanning from receptors in the peripheral nervous system (PNS) to multilayered networks within the central nervous system (CNS). This breadth presents particular challenges, as nascent processes must navigate across the CNS-PNS boundary and coalesce into a tightly intermingled wiring pattern, thereby enabling reliable integration from the PNS to the CNS and back. In the auditory system, feedforward spiral ganglion neurons (SGNs) from the periphery collect sound information via tonotopically organized connections in the cochlea and transmit this information to the brainstem for processing via the VIII cranial nerve. In turn, feedback olivocochlear neurons (OCNs) housed in the auditory brainstem send projections into the periphery, also through the VIII nerve. OCNs are motor neuron-like efferent cells that influence auditory processing within the cochlea and protect against noise damage in adult animals. These aligned feedforward and feedback systems develop in parallel, with SGN central axons reaching the developing auditory brainstem around the same time that the OCN axons extend out toward the developing inner ear. Recent findings have begun to unravel the genetic and molecular mechanisms that guide OCN development, from their origins in a generic pool of motor neuron precursors to their specialized roles as modulators of cochlear activity. One recurrent theme is the importance of efferent-afferent interactions, as afferent SGNs guide OCNs to their final locations within the sensory epithelium, and efferent OCNs shape the activity of the developing auditory system. This article is categorized under: Nervous System Development > Vertebrates: Regional Development. © 2018 Wiley Periodicals, Inc.

  6. Sensori-Motor Learning with Movement Sonification: Perspectives from Recent Interdisciplinary Studies.

    PubMed

    Bevilacqua, Frédéric; Boyer, Eric O; Françoise, Jules; Houix, Olivier; Susini, Patrick; Roby-Brami, Agnès; Hanneton, Sylvain

    2016-01-01

    This article reports on an interdisciplinary research project on movement sonification for sensori-motor learning. First, we describe different research fields which have contributed to movement sonification, from music technology including gesture-controlled sound synthesis, sonic interaction design, to research on sensori-motor learning with auditory-feedback. In particular, we propose to distinguish between sound-oriented tasks and movement-oriented tasks in experiments involving interactive sound feedback. We describe several research questions and recently published results on movement control, learning and perception. In particular, we studied the effect of the auditory feedback on movements considering several cases: from experiments on pointing and visuo-motor tracking to more complex tasks where interactive sound feedback can guide movements, or cases of sensory substitution where the auditory feedback can inform on object shapes. We also developed specific methodologies and technologies for designing the sonic feedback and movement sonification. We conclude with a discussion on key future research challenges in sensori-motor learning with movement sonification. We also point out toward promising applications such as rehabilitation, sport training or product design.

  7. Transmittance measurements of ultra violet and visible wavelength interference filters flown aboard LDEF

    NASA Technical Reports Server (NTRS)

    Mooney, Thomas A.; Smajkiewicz, Ali

    1991-01-01

    A set of ten interference filters for the UV and VIS spectral region were flown on the surface of the Long Duration Exposure Facility (LDEF) Tray B-8 along with earth radiation budget (ERB) components from the Eppley Laboratory. Transmittance changes and other degradation observed after the return of the filters to Barr are reported. Substrates, coatings, and (where applicable) cement materials are identified. In general, all filters except those containing lead compounds survived well. Metal dielectric filters for the UV developed large numbers of pinholes which caused an increase in transmittance. Band shapes and spectral positioning, however, did not change.

  8. Multiple filters affect tree species assembly in mid-latitude forest communities.

    PubMed

    Kubota, Y; Kusumoto, B; Shiono, T; Ulrich, W

    2018-05-01

    Species assembly patterns of local communities are shaped by the balance between multiple abiotic/biotic filters and dispersal that both select individuals from species pools at the regional scale. Knowledge regarding functional assembly can provide insight into the relative importance of the deterministic and stochastic processes that shape species assembly. We evaluated the hierarchical roles of the α niche and β niches by analyzing the influence of environmental filtering relative to functional traits on geographical patterns of tree species assembly in mid-latitude forests. Using forest plot datasets, we examined the α niche traits (leaf and wood traits) and β niche properties (cold/drought tolerance) of tree species, and tested non-randomness (clustering/over-dispersion) of trait assembly based on null models that assumed two types of species pools related to biogeographical regions. For most plots, species assembly patterns fell within the range of random expectation. However, particularly for cold/drought tolerance-related β niche properties, deviation from randomness was frequently found; non-random clustering was predominant in higher latitudes with harsh climates. Our findings demonstrate that both randomness and non-randomness in trait assembly emerged as a result of the α and β niches, although we suggest the potential role of dispersal processes and/or species equalization through trait similarities in generating the prevalence of randomness. Clustering of β niche traits along latitudinal climatic gradients provides clear evidence of species sorting by filtering particular traits. Our results reveal that multiple filters through functional niches and stochastic processes jointly shape geographical patterns of species assembly across mid-latitude forests.

  9. Methods and apparatuses using filter banks for multi-carrier spread-spectrum signals

    DOEpatents

    Moradi, Hussein; Farhang, Behrouz; Kutsche, Carl A

    2014-10-14

    A transmitter includes a synthesis filter bank to spread a data symbol to a plurality of frequencies by encoding the data symbol on each frequency, apply a common pulse-shaping filter, and apply gains to the frequencies such that a power level of each frequency is less than a noise level of other communication signals within the spectrum. Each frequency is modulated onto a different evenly spaced subcarrier. A demodulator in a receiver converts a radio frequency input to a spread-spectrum signal in a baseband. A matched filter filters the spread-spectrum signal with a common filter having characteristics matched to the synthesis filter bank in the transmitter by filtering each frequency to generate a sequence of narrow pulses. A carrier recovery unit generates control signals responsive to the sequence of narrow pulses suitable for generating a phase-locked loop between the demodulator, the matched filter, and the carrier recovery unit.

  10. Methods and apparatuses using filter banks for multi-carrier spread-spectrum signals

    DOEpatents

    Moradi, Hussein; Farhang, Behrouz; Kutsche, Carl A

    2014-05-20

    A transmitter includes a synthesis filter bank to spread a data symbol to a plurality of frequencies by encoding the data symbol on each frequency, apply a common pulse-shaping filter, and apply gains to the frequencies such that a power level of each frequency is less than a noise level of other communication signals within the spectrum. Each frequency is modulated onto a different evenly spaced subcarrier. A demodulator in a receiver converts a radio frequency input to a spread-spectrum signal in a baseband. A matched filter filters the spread-spectrum signal with a common filter having characteristics matched to the synthesis filter bank in the transmitter by filtering each frequency to generate a sequence of narrow pulses. A carrier recovery unit generates control signals responsive to the sequence of narrow pulses suitable for generating a phase-locked loop between the demodulator, the matched filter, and the carrier recovery unit.
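
    The transmit-side idea common to the two patent records above, spreading one data symbol across evenly spaced subcarriers with a common pulse shape and per-carrier gains low enough to sit below the noise floor, can be sketched as follows. The window choice, subcarrier spacing, and gain values are illustrative assumptions, not the patented design.

    ```python
    import numpy as np

    def spread_symbol(symbol, n_subcarriers, samples_per_symbol, gains, fs):
        """Spread one data symbol across evenly spaced subcarriers.

        Minimal illustration of the synthesis-filter-bank idea: the same symbol is
        modulated onto evenly spaced subcarriers, each weighted by a common pulse
        shape (here a Hann window) and a per-carrier gain. This is only a sketch,
        not the patented implementation.
        """
        n = samples_per_symbol
        t = np.arange(n) / fs
        pulse = np.hanning(n)                         # common pulse-shaping window
        spacing = fs / (2.0 * n_subcarriers)          # even subcarrier spacing (Hz)
        signal = np.zeros(n, dtype=complex)
        for k in range(n_subcarriers):
            carrier = np.exp(2j * np.pi * (k + 1) * spacing * t)
            signal += gains[k] * symbol * pulse * carrier
        return signal

    # Example: one QPSK symbol spread over 16 subcarriers at very low gain,
    # so each subcarrier sits well below the level of coexisting signals.
    fs = 1.0e6
    tx = spread_symbol(symbol=(1 + 1j) / np.sqrt(2), n_subcarriers=16,
                       samples_per_symbol=256, gains=np.full(16, 1e-3), fs=fs)
    ```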

  11. Effect of production variables on microbiological removal in locally-produced ceramic filters for household water treatment.

    PubMed

    Lantagne, Daniele; Klarman, Molly; Mayer, Ally; Preston, Kelsey; Napotnik, Julie; Jellison, Kristen

    2010-06-01

    Diarrhoeal diseases cause an estimated 1.87 million child deaths per year. Point-of-use filtration using locally made ceramic filters improves the microbiological quality of stored drinking water and prevents diarrhoeal disease. Scaling up ceramic filtration is inhibited by a lack of universal quality control standards. We investigated filter production variables to determine their effect on microbiological removal during 5-6 weeks of simulated normal use. Decreases in the clay:sawdust ratio and changes in the burnable decreased the effectiveness of the filter. The method of silver application and the shape of the filter did not impact filter effectiveness. A maximum flow rate of 1.7 l/hr was established as a potential quality control measure for one particular filter to ensure 99% (2-log10) removal of total coliforms. Further research is indicated to determine additional production variables associated with filter effectiveness and to develop standardized filter production procedures prior to scaling up.

  12. Climate tolerances and trait choices shape continental patterns of urban tree biodiversity

    Treesearch

    G. Darrel Jenerette; Lorraine W. Clarke; Meghan L. Avolio; Diane E. Pataki; Thomas W. Gillespie; Stephanie Pincetl; Dave J. Nowak; Lucy R. Hutyra; Melissa McHale; Joseph P. McFadden; Michael Alonzo

    2016-01-01

    Aim. We propose and test a climate tolerance and trait choice hypothesis of urban macroecological variation in which strong filtering associated with low winter temperatures restricts urban biodiversity while weak filtering associated with warmer temperatures and irrigation allows dispersal of species from a global source pool, thereby...

  13. Far-infrared bandpass filters from cross-shaped grids

    NASA Technical Reports Server (NTRS)

    Tomaselli, V. P.; Edewaard, D. C.; Gillan, P.; Moller, K. D.

    1981-01-01

    The optical transmission characteristics of electroformed metal grids with inductive and capacitive cross patterns have been investigated in the far-infrared spectral region. The transmission characteristics of one- and two-grid devices are represented by transmission line theory parameters. Results are used to suggest construction guidelines for two-grid bandpass filters.

  14. Evaluating the Performance of a Visually Guided Hearing Aid Using a Dynamic Auditory-Visual Word Congruence Task.

    PubMed

    Roverud, Elin; Best, Virginia; Mason, Christine R; Streeter, Timothy; Kidd, Gerald

    2017-12-15

    The "visually guided hearing aid" (VGHA), consisting of a beamforming microphone array steered by eye gaze, is an experimental device being tested for effectiveness in laboratory settings. Previous studies have found that beamforming without visual steering can provide significant benefits (relative to natural binaural listening) for speech identification in spatialized speech or noise maskers when sound sources are fixed in location. The aim of the present study was to evaluate the performance of the VGHA in listening conditions in which target speech could switch locations unpredictably, requiring visual steering of the beamforming. To address this aim, the present study tested an experimental simulation of the VGHA in a newly designed dynamic auditory-visual word congruence task. Ten young normal-hearing (NH) and 11 young hearing-impaired (HI) adults participated. On each trial, three simultaneous spoken words were presented from three source positions (-30°, 0°, and 30° azimuth). An auditory-visual word congruence task was used in which participants indicated whether there was a match between the word printed on a screen at a location corresponding to the target source and the spoken target word presented acoustically from that location. Performance was compared for a natural binaural condition (stimuli presented using impulse responses measured on KEMAR), a simulated VGHA condition (BEAM), and a hybrid condition that combined lowpass-filtered KEMAR and highpass-filtered BEAM information (BEAMAR). In some blocks, the target remained fixed at one location across trials, and in other blocks, the target could transition in location between one trial and the next with a fixed but low probability. Large individual variability in performance was observed. There were significant benefits for the hybrid BEAMAR condition relative to the KEMAR condition on average for both NH and HI groups when the targets were fixed. Although not apparent in the averaged data, some individuals showed BEAM benefits relative to KEMAR. Under dynamic conditions, BEAM and BEAMAR performance dropped significantly immediately following a target location transition. However, performance recovered by the second word in the sequence and was sustained until the next transition. When performance was assessed using an auditory-visual word congruence task, the benefits of beamforming reported previously were generally preserved under dynamic conditions in which the target source could move unpredictably from one location to another (i.e., performance recovered rapidly following source transitions) while the observer steered the beamforming via eye gaze, for both young NH and young HI groups.

  15. Class III myosins shape the auditory hair bundles by limiting microvilli and stereocilia growth

    PubMed Central

    Lelli, Andrea; Michel, Vincent; Boutet de Monvel, Jacques; Cortese, Matteo; Bosch-Grau, Montserrat; Aghaie, Asadollah; Perfettini, Isabelle; Dupont, Typhaine; Avan, Paul

    2016-01-01

    The precise architecture of hair bundles, the arrays of mechanosensitive microvilli-like stereocilia crowning the auditory hair cells, is essential to hearing. Myosin IIIa, defective in the late-onset deafness form DFNB30, has been proposed to transport espin-1 to the tips of stereocilia, thereby promoting their elongation. We show that Myo3a−/−Myo3b−/− mice lacking myosin IIIa and myosin IIIb are profoundly deaf, whereas Myo3a-cKO Myo3b−/− mice lacking myosin IIIb and losing myosin IIIa postnatally have normal hearing. Myo3a−/−Myo3b−/− cochlear hair bundles display robust mechanoelectrical transduction currents with normal kinetics but show severe embryonic abnormalities whose features rapidly change. These include abnormally tall and numerous microvilli or stereocilia, ungraded stereocilia bundles, and bundle rounding and closure. Surprisingly, espin-1 is properly targeted to Myo3a−/−Myo3b−/− stereocilia tips. Our results uncover the critical role that class III myosins play redundantly in hair-bundle morphogenesis; they unexpectedly limit the elongation of stereocilia and of subsequently regressing microvilli, thus contributing to the early hair bundle shaping. PMID:26754646

  16. Soil bacterial communities are shaped by temporal and environmental filtering: evidence from a long-term chronosequence.

    PubMed

    Freedman, Zachary; Zak, Donald R

    2015-09-01

    Soil microbial communities are abundant, hyper-diverse and mediate global biogeochemical cycles, but we do not yet understand the processes mediating their assembly. Current hypothetical frameworks suggest temporal (e.g. dispersal limitation) and environmental (e.g. soil pH) filters shape microbial community composition; however, there is limited empirical evidence supporting this framework in the hyper-diverse soil environment, particularly at large spatial (i.e. regional to continental) and temporal (i.e. 100 to 1000 years) scales. Here, we present evidence from a long-term chronosequence (4000 years) that temporal and environmental filters do indeed shape soil bacterial community composition. Furthermore, nearly 20 years of environmental monitoring allowed us to control for potentially confounding environmental variation. Soil bacterial communities were phylogenetically distinct across the chronosequence. We determined that temporal and environmental factors accounted for significant portions of bacterial phylogenetic structure using distance-based linear models. Environmental factors together accounted for the majority of phylogenetic structure, namely, soil temperature (19%), pH (17%) and litter carbon:nitrogen (C:N; 17%). However, of all individual factors, time since deglaciation accounted for the greatest proportion of bacterial phylogenetic structure (20%). Taken together, our results provide empirical evidence that temporal and environmental filters act together to structure soil bacterial communities across large spatial and long-term temporal scales. © 2015 Society for Applied Microbiology and John Wiley & Sons Ltd.

  17. Enhanced Vertical Perception through Head-Related Impulse Response Customization Based on Pinna Response Tuning in the Median Plane

    NASA Astrophysics Data System (ADS)

    Shin, Ki Hoon; Park, Youngjin

    The human ability to perceive the elevation of a sound and to distinguish whether a sound is coming from the front or rear depends strongly on the monaural spectral features of the pinnae. In order to realize an effective virtual auditory display by HRTF (head-related transfer function) customization, the pinna responses were isolated from the 45 individual median-plane HRIRs (head-related impulse responses) in the CIPIC HRTF database and modeled as linear combinations of 4 or 5 basic temporal shapes (basis functions) per elevation on the median plane by PCA (principal components analysis) in the time domain. By tuning the weights of the basis functions computed for a specific height, replacing the pinna response in the KEMAR HRIR at the same height with the resulting customized pinna response, and listening to the filtered stimuli over headphones, 4 individuals with normal hearing sensitivity were able to create a set of HRIRs that outperformed the KEMAR HRIRs in producing vertical effects with reduced front/back ambiguity in the median plane. Since the monaural spectral features of the pinnae are almost independent of azimuthal variation of the source direction, similar vertical effects could also be generated at different azimuthal directions simply by varying the ITD (interaural time difference) according to the direction as well as the size of each individual's own head.
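
    The decomposition and customization steps described above can be sketched with an ordinary PCA. The arrays below are random stand-ins for the 45 isolated pinna responses at one elevation (the CIPIC data are not loaded here), and the weight adjustment is only a placeholder for the listener's tuning.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    # Hypothetical stand-in for 45 isolated pinna impulse responses at one
    # elevation (e.g. 64 samples each); in the study these come from the CIPIC
    # median-plane HRIRs.
    rng = np.random.default_rng(3)
    pinna_responses = rng.standard_normal((45, 64))

    # Model each response as mean + weighted sum of a few basic temporal shapes.
    pca = PCA(n_components=5)
    weights = pca.fit_transform(pinna_responses)        # per-subject weights
    basis = pca.components_                             # basic temporal shapes

    # Customization: start from the average weights and let a listener tune them.
    custom_weights = weights.mean(axis=0)
    custom_weights[0] *= 1.2                            # e.g. boost the first shape
    custom_pinna = pca.mean_ + custom_weights @ basis   # customized pinna response

    # The customized pinna response would then replace the pinna segment of a
    # generic (e.g. KEMAR) HRIR before filtering the stimulus.
    ```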

  18. Brainstem auditory evoked responses in man. 1: Effect of stimulus rise-fall time and duration

    NASA Technical Reports Server (NTRS)

    Hecox, K.; Squires, N.; Galambos, R.

    1975-01-01

    Short latency (under 10 msec) evoked responses elicited by bursts of white noise were recorded from the scalp of human subjects. Response alterations produced by changes in the noise burst duration (on-time), inter-burst interval (off-time), and onset and offset shapes are reported and evaluated. The latency of the most prominent response component, wave V, was markedly delayed with increases in stimulus rise-time but was unaffected by changes in fall-time. The amplitude of wave V was insensitive to changes in signal rise-and-fall times, while increasing signal on-time produced smaller amplitude responses only for sufficiently short off-times. It is concluded that wave V of the human auditory brainstem evoked response is solely an onset response.

  19. Differential Receptive Field Properties of Parvalbumin and Somatostatin Inhibitory Neurons in Mouse Auditory Cortex.

    PubMed

    Li, Ling-Yun; Xiong, Xiaorui R; Ibrahim, Leena A; Yuan, Wei; Tao, Huizhong W; Zhang, Li I

    2015-07-01

    Cortical inhibitory circuits play important roles in shaping sensory processing. In auditory cortex, however, functional properties of genetically identified inhibitory neurons are poorly characterized. By two-photon imaging-guided recordings, we specifically targeted 2 major types of cortical inhibitory neuron, parvalbumin (PV) and somatostatin (SOM) expressing neurons, in superficial layers of mouse auditory cortex. We found that PV cells exhibited broader tonal receptive fields with lower intensity thresholds and stronger tone-evoked spike responses compared with SOM neurons. The latter exhibited similar frequency selectivity as excitatory neurons. The broader/weaker frequency tuning of PV neurons was attributed to a broader range of synaptic inputs and stronger subthreshold responses elicited, which resulted in a higher efficiency in the conversion of input to output. In addition, onsets of both the input and spike responses of SOM neurons were significantly delayed compared with PV and excitatory cells. Our results suggest that PV and SOM neurons engage in auditory cortical circuits in different manners: while PV neurons may provide broadly tuned feedforward inhibition for a rapid control of ascending inputs to excitatory neurons, the delayed and more selective inhibition from SOM neurons may provide a specific modulation of feedback inputs on their distal dendrites. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  20. Disrupting vagal feedback affects birdsong motor control.

    PubMed

    Méndez, Jorge M; Dall'asén, Analía G; Goller, Franz

    2010-12-15

    Coordination of different motor systems for sound production involves the use of feedback mechanisms. Song production in oscines is a well-established animal model for studying learned vocal behavior. Whereas the online use of auditory feedback has been studied in the songbird model, very little is known about the role of other feedback mechanisms. Auditory feedback is required for the maintenance of stereotyped adult song. In addition, the use of somatosensory feedback to maintain pressure during song has been demonstrated with experimentally induced fluctuations in air sac pressure. Feedback information mediating this response is thought to be routed to the central nervous system via afferent fibers of the vagus nerve. Here, we tested the effects of unilateral vagotomy on the peripheral motor patterns of song production and the acoustic features. Unilateral vagotomy caused a variety of disruptions and alterations to the respiratory pattern of song, some of which affected the acoustic structure of vocalizations. These changes were most pronounced a few days after nerve resection and varied between individuals. In the most extreme cases, the motor gestures of respiration were so severely disrupted that individual song syllables or the song motif were atypically terminated. Acoustic changes also suggest altered use of the two sound generators and upper vocal tract filtering, indicating that the disruption of vagal feedback caused changes to the motor program of all motor systems involved in song production and modification. This evidence for the use of vagal feedback by the song system with disruption of song during the first days after nerve cut provides a contrast to the longer-term effects of auditory feedback disruption. It suggests a significant role for somatosensory feedback that differs from that of auditory feedback.

  1. Disrupting vagal feedback affects birdsong motor control

    PubMed Central

    Méndez, Jorge M.; Dall'Asén, Analía G.; Goller, Franz

    2010-01-01

    Coordination of different motor systems for sound production involves the use of feedback mechanisms. Song production in oscines is a well-established animal model for studying learned vocal behavior. Whereas the online use of auditory feedback has been studied in the songbird model, very little is known about the role of other feedback mechanisms. Auditory feedback is required for the maintenance of stereotyped adult song. In addition, the use of somatosensory feedback to maintain pressure during song has been demonstrated with experimentally induced fluctuations in air sac pressure. Feedback information mediating this response is thought to be routed to the central nervous system via afferent fibers of the vagus nerve. Here, we tested the effects of unilateral vagotomy on the peripheral motor patterns of song production and the acoustic features. Unilateral vagotomy caused a variety of disruptions and alterations to the respiratory pattern of song, some of which affected the acoustic structure of vocalizations. These changes were most pronounced a few days after nerve resection and varied between individuals. In the most extreme cases, the motor gestures of respiration were so severely disrupted that individual song syllables or the song motif were atypically terminated. Acoustic changes also suggest altered use of the two sound generators and upper vocal tract filtering, indicating that the disruption of vagal feedback caused changes to the motor program of all motor systems involved in song production and modification. This evidence for the use of vagal feedback by the song system with disruption of song during the first days after nerve cut provides a contrast to the longer-term effects of auditory feedback disruption. It suggests a significant role for somatosensory feedback that differs from that of auditory feedback. PMID:21113000

  2. A rapid form of activity-dependent recovery from short-term synaptic depression in the intensity pathway of the auditory brainstem

    PubMed Central

    Horiuchi, Timothy K.

    2011-01-01

    Short-term synaptic plasticity acts as a time- and firing rate-dependent filter that mediates the transmission of information across synapses. In the avian auditory brainstem, specific forms of plasticity are expressed at different terminals of the same auditory nerve fibers and contribute to the divergence of acoustic timing and intensity information. To identify key differences in the plasticity properties, we made patch-clamp recordings from neurons in the cochlear nucleus responsible for intensity coding, nucleus angularis, and measured the time course of the recovery of excitatory postsynaptic currents following short-term synaptic depression. These synaptic responses showed a very rapid recovery, following a bi-exponential time course with a fast time constant of ~40 ms and a dependence on the presynaptic activity levels, resulting in a crossing over of the recovery trajectories following high-rate versus low-rate stimulation trains. We also show that the recorded recovery in the intensity pathway differs from similar recordings in the timing pathway, specifically the cochlear nucleus magnocellularis, in two ways: (1) a fast recovery that was not due to recovery from postsynaptic receptor desensitization and (2) a recovery trajectory that was characterized by a non-monotonic bump that may be due in part to facilitation mechanisms more prevalent in the intensity pathway. We tested whether a previously proposed model of synaptic transmission based on vesicle depletion and sequential steps of vesicle replenishment could account for the recovery responses, and found it was insufficient, suggesting an activity-dependent feedback mechanism is present. We propose that the rapid recovery following depression allows improved coding of natural auditory signals that often consist of sound bursts separated by short gaps. PMID:21409439
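
    As a worked illustration of the bi-exponential recovery described above, the sketch below fits a two-time-constant recovery function to synthetic amplitude data with scipy. The recovery intervals, depression depth and noise level are assumptions for the example, not values from the study.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def biexp_recovery(t, depth, frac_fast, tau_fast, tau_slow):
        """Normalized EPSC amplitude at recovery interval t (ms) after a conditioning train.
        depth: depression depth (0-1); frac_fast: fraction recovering with tau_fast."""
        return 1.0 - depth * (frac_fast * np.exp(-t / tau_fast)
                              + (1.0 - frac_fast) * np.exp(-t / tau_slow))

    # Illustrative recovery intervals (ms) and synthetic amplitudes; real data would come
    # from train/test EPSC measurements at each interval.
    t = np.array([10, 25, 50, 100, 200, 400, 800, 1600], float)
    rng = np.random.default_rng(1)
    amp = biexp_recovery(t, 0.6, 0.7, 40.0, 600.0) + 0.02 * rng.standard_normal(t.size)

    p0 = (0.5, 0.5, 30.0, 500.0)
    params, _ = curve_fit(biexp_recovery, t, amp, p0=p0,
                          bounds=([0, 0, 1, 50], [1, 1, 200, 5000]))
    print("fast time constant ~%.0f ms, slow ~%.0f ms" % (params[2], params[3]))
    ```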

  3. Functional and structural aspects of tinnitus-related enhancement and suppression of auditory cortex activity.

    PubMed

    Diesch, Eugen; Andermann, Martin; Flor, Herta; Rupp, Andre

    2010-05-01

    The steady-state auditory evoked magnetic field was recorded in tinnitus patients and controls, both either musicians or non-musicians, all of them with high-frequency hearing loss. Stimuli were AM-tones with two modulation frequencies and three carrier frequencies matching the "audiometric edge", i.e. the frequency above which hearing loss increases more rapidly, the tinnitus frequency or the frequency 1 1/2 octaves above the audiometric edge in controls, and a frequency 1 1/2 octaves below the audiometric edge. Stimuli equated in carrier frequency, but differing in modulation frequency, were simultaneously presented to the two ears. The modulation frequency-specific components of the dual steady-state response were recovered by bandpass filtering. In both hemispheres, the source amplitude of the response was larger for contralateral than ipsilateral input. In non-musicians with tinnitus, this laterality effect was enhanced in the hemisphere contralateral and reduced in the hemisphere ipsilateral to the tinnitus ear, especially for the tinnitus frequency. The hemisphere-by-input laterality dominance effect was smaller in musicians than in non-musicians. In both patient groups, source amplitude change over time, i.e. amplitude slope, was increasing with tonal frequency for contralateral input and decreasing for ipsilateral input. However, slope was smaller for musicians than non-musicians. In patients, source amplitude was negatively correlated with the MRI-determined volume of the medial partition of Heschl's gyrus. Tinnitus patients show an altered excitatory-inhibitory balance reflecting the downregulation of inhibition and resulting in a steeper dominance hierarchy among simultaneous processes in auditory cortex. Direction and extent of this alteration are modulated by musicality and auditory cortex volume. 2010 Elsevier Inc. All rights reserved.
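
    The modulation-frequency tagging described above can be sketched with a simple zero-phase band-pass step: each ear's contribution to the dual steady-state response is recovered by filtering the recorded trace narrowly around its own modulation frequency. The sampling rate and the 39/43 Hz tags below are illustrative assumptions, not the study's parameters.

    ```python
    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    fs = 1000.0                      # sampling rate of the MEG/EEG trace (Hz), assumed
    t = np.arange(0, 10, 1 / fs)

    # Synthetic "dual steady-state response": two modulation-frequency tags plus noise.
    f1, f2 = 39.0, 43.0              # hypothetical modulation frequencies, one per ear
    x = 0.8 * np.sin(2 * np.pi * f1 * t) + 0.5 * np.sin(2 * np.pi * f2 * t)
    x += 0.3 * np.random.default_rng(2).standard_normal(t.size)

    def narrowband(signal, f0, half_width=1.0):
        """Zero-phase band-pass around f0 to isolate one modulation-tagged component."""
        sos = butter(4, [f0 - half_width, f0 + half_width], btype="bandpass",
                     fs=fs, output="sos")
        return sosfiltfilt(sos, signal)

    resp_f1 = narrowband(x, f1)      # component driven by the f1-modulated input
    resp_f2 = narrowband(x, f2)      # component driven by the f2-modulated input
    print(resp_f1.std(), resp_f2.std())
    ```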

  4. Combination of binaural and harmonic masking release effects in the detection of a single component in complex tones.

    PubMed

    Klein-Hennig, Martin; Dietz, Mathias; Hohmann, Volker

    2018-03-01

    Both harmonic and binaural signal properties are relevant for auditory processing. To investigate how these cues combine in the auditory system, detection thresholds for an 800-Hz tone masked by a diotic (i.e., identical between the ears) harmonic complex tone were measured in six normal-hearing subjects. The target tone was presented either diotically or with an interaural phase difference (IPD) of 180° and in either harmonic or "mistuned" relationship to the diotic masker. Three different maskers were used, a resolved and an unresolved complex tone (fundamental frequency: 160 and 40 Hz) with four components below and above the target frequency and a broadband unresolved complex tone with 12 additional components. The target IPD provided release from masking in most masker conditions, whereas mistuning led to a significant release from masking only in the diotic conditions with the resolved and the narrowband unresolved maskers. A significant effect of mistuning was neither found in the diotic condition with the wideband unresolved masker nor in any of the dichotic conditions. An auditory model with a single analysis frequency band and different binaural processing schemes was employed to predict the data of the unresolved masker conditions. Sensitivity to modulation cues was achieved by including an auditory-motivated modulation filter in the processing pathway. The predictions of the diotic data were in line with the experimental results and literature data in the narrowband condition, but not in the broadband condition, suggesting that across-frequency processing is involved in processing modulation information. The experimental and model results in the dichotic conditions show that the binaural processor cannot exploit modulation information in binaurally unmasked conditions. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Study on the fabrication of low-pass metal powder filters for use at cryogenic temperatures

    NASA Astrophysics Data System (ADS)

    Lee, Sung Hoon; Lee, Soon-Gul

    2016-08-01

    We fabricated compact low-pass stainless-steel powder filters for use in low-noise measurements at cryogenic temperatures and investigated their attenuation characteristics for different wire lengths, filter shapes, and preparation methods at frequencies up to 20 GHz. We used nominally 30- μm-sized SUS 304L powder and mixed it with Stycast 2850FT (Emerson and Cumming) with catalyst 23LV. A 0.1-mm insulated copper wire was wound on preformed powder-mixture spools in the shape of a right-circular cylinder, a flattened elliptic cylinder and a toroid, and the coils were encapsulated in metal tubes or boxes filled with the powder mixture. All the fabricated powder filters showed a large attenuation at high frequencies with a cut-off frequency near 1 GHz. However, the toroidal filter showed prominent ripples corresponding to resonance modes in the 0.5-m-long coil wire. A filter with a 2:1 powder/epoxy mixture mass ratio and a wire length of 1.53 m showed an attenuation of -93 dB at 4 GHz, and the attenuation was linearly proportional to the wire's length. As the powder-to-epoxy ratio was increased, the high-frequency attenuation increased. An equally-spaced single-layer coil structure was found to be more efficient in attenuation than a double-layer coil. The geometry of the metal filter's case affected the noise ripples, with the least noise being found for a circular tube.

  6. Functional modeling of the human auditory brainstem response to broadband stimulation

    PubMed Central

    Verhulst, Sarah; Bharadwaj, Hari M.; Mehraei, Golbarg; Shera, Christopher A.; Shinn-Cunningham, Barbara G.

    2015-01-01

    Population responses such as the auditory brainstem response (ABR) are commonly used for hearing screening, but the relationship between single-unit physiology and scalp-recorded population responses is not well understood. Computational models that integrate physiologically realistic models of single-unit auditory-nerve (AN), cochlear nucleus (CN) and inferior colliculus (IC) cells with models of broadband peripheral excitation can be used to simulate ABRs and thereby link detailed knowledge of animal physiology to human applications. Existing functional ABR models fail to capture the empirically observed 1.2–2 ms ABR wave-V latency-vs-intensity decrease that is thought to arise from level-dependent changes in cochlear excitation and firing synchrony across different tonotopic sections. This paper proposes an approach where level-dependent cochlear excitation patterns, which reflect human cochlear filter tuning parameters, drive AN fibers to yield realistic level-dependent properties of the ABR wave-V. The number of free model parameters is minimal, producing a model in which various sources of hearing impairment can easily be simulated on an individualized and frequency-dependent basis. The model fits latency-vs-intensity functions observed in human ABRs and otoacoustic emissions while maintaining rate-level and threshold characteristics of single-unit AN fibers. The simulations help to reveal which tonotopic regions dominate ABR waveform peaks at different stimulus intensities. PMID:26428802

  7. Spherical harmonics coefficients for ligand-based virtual screening of cyclooxygenase inhibitors.

    PubMed

    Wang, Quan; Birod, Kerstin; Angioni, Carlo; Grösch, Sabine; Geppert, Tim; Schneider, Petra; Rupp, Matthias; Schneider, Gisbert

    2011-01-01

    Molecular descriptors are essential for many applications in computational chemistry, such as ligand-based similarity searching. Spherical harmonics have previously been suggested as comprehensive descriptors of molecular structure and properties. We investigate a spherical harmonics descriptor for shape-based virtual screening. We introduce and validate a partially rotation-invariant three-dimensional molecular shape descriptor based on the norm of spherical harmonics expansion coefficients. Using this molecular representation, we parameterize molecular surfaces, i.e., isosurfaces of spatial molecular property distributions. We validate the shape descriptor in a comprehensive retrospective virtual screening experiment. In a prospective study, we virtually screen a large compound library for cyclooxygenase inhibitors, using a self-organizing map as a pre-filter and the shape descriptor for candidate prioritization. 12 compounds were tested in vitro for direct enzyme inhibition and in a whole blood assay. Active compounds containing a triazole scaffold were identified as direct cyclooxygenase-1 inhibitors. This outcome corroborates the usefulness of spherical harmonics for representation of molecular shape in virtual screening of large compound collections. The combination of pharmacophore and shape-based filtering of screening candidates proved to be a straightforward approach to finding novel bioactive chemotypes with minimal experimental effort.
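
    A minimal sketch of the descriptor idea discussed above: for a surface function sampled on a sphere, the expansion coefficients of each spherical-harmonic degree are computed by quadrature, and the per-degree L2 norms form a (partially) rotation-invariant shape signature. The toy surface, grid resolution and maximum degree below are illustrative assumptions, not the authors' parameterization.

    ```python
    import numpy as np
    from scipy.special import sph_harm

    # Sample a surface radius function r(theta, phi) on a sphere grid.
    n_theta, n_phi = 72, 36
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)        # azimuth
    phi = (np.arange(n_phi) + 0.5) * np.pi / n_phi                     # polar angle
    T, P = np.meshgrid(theta, phi, indexing="ij")
    r = 1.0 + 0.3 * np.cos(3 * T) * np.sin(P) ** 3                     # toy star-shaped surface

    dA = (2 * np.pi / n_theta) * (np.pi / n_phi) * np.sin(P)           # quadrature weights

    l_max = 6
    descriptor = []
    for l in range(l_max + 1):
        coeffs = []
        for m in range(-l, l + 1):
            Y = sph_harm(m, l, T, P)                                   # scipy convention: sph_harm(m, n, theta, phi)
            coeffs.append(np.sum(r * np.conj(Y) * dA))
        # Within each degree l, the coefficient vector rotates unitarily under 3D rotations,
        # so its L2 norm is rotation invariant.
        descriptor.append(np.linalg.norm(coeffs))

    print(np.round(descriptor, 3))
    ```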

  8. Computer simulations and models for the performance characteristics of spectrally equivalent X-ray beams in medical diagnostic radiology

    PubMed Central

    Okunade, Akintunde A.

    2007-01-01

    In order to achieve uniformity in radiological imaging, it is recommended that the concept of equivalence in shape (quality) and size (quantity) of clinical X-ray beams should be used for carrying out the comparative evaluation of image and patient dose. When used under the same irradiation geometry, X-ray beams that are strictly or relatively equivalent in terms of shape and size will produce identical or relatively identical image quality and patient dose. Simple mathematical models and the software program EQSPECT.FOR were developed for the comparative evaluation of the performance characteristics in terms of contrast (C), contrast-to-noise ratio (CNR) and figure-of-merit (FOM = CNR²/DOSE) for spectrally equivalent beams transmitted through filter materials referred to as conventional and k-edged. At the same value of operating potential (kVp), results show that a spectrally equivalent beam transmitted through a conventional filter with a higher atomic number (Z-value), in comparison with one transmitted through a conventional filter with a lower Z-value, resulted in the same values of C and FOM. However, in comparison with the spectrally equivalent beam transmitted through the filter of lower Z-value, the beam through the filter of higher Z-value produced higher values of CNR and DOSE at equal tube loading (mAs) and kVp. Under the condition of equivalence of spectrum, at scaled (or reduced) tube loading and the same kVp, filter materials of higher Z-value can produce the same values of C, CNR, DOSE and FOM as filter materials of lower Z-value. Unlike the case of comparison of a spectrally equivalent beam transmitted through one conventional filter and that through another conventional filter, it is not possible to derive simple mathematical formulations for the relative performance of a spectrally equivalent beam transmitted through a given conventional filter material and that through a k-edge filter material. PMID:21224928
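
    The figure-of-merit used above is simple enough to evaluate directly; the snippet below shows the FOM = CNR²/DOSE comparison for two hypothetical beams (the CNR and dose numbers are invented for illustration).

    ```python
    # Hypothetical image-quality numbers for two spectrally equivalent beams.
    def figure_of_merit(cnr, dose):
        """FOM = CNR^2 / DOSE, as defined in the abstract; units cancel in relative comparisons."""
        return cnr ** 2 / dose

    beam_a = figure_of_merit(cnr=12.0, dose=2.0)   # filter A: CNR 12 at 2 mGy (assumed)
    beam_b = figure_of_merit(cnr=15.0, dose=3.2)   # filter B: higher CNR but more dose (assumed)
    print(beam_a, beam_b)                          # 72.0 vs ~70.3: beam A is slightly more dose-efficient
    ```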

  9. Enhancing Auditory Selective Attention Using a Visually Guided Hearing Aid.

    PubMed

    Kidd, Gerald

    2017-10-17

    Listeners with hearing loss, as well as many listeners with clinically normal hearing, often experience great difficulty segregating talkers in a multiple-talker sound field and selectively attending to the desired "target" talker while ignoring the speech from unwanted "masker" talkers and other sources of sound. This listening situation forms the classic "cocktail party problem" described by Cherry (1953) that has received a great deal of study over the past few decades. In this article, a new approach to improving sound source segregation and enhancing auditory selective attention is described. The conceptual design, current implementation, and results obtained to date are reviewed and discussed in this article. This approach, embodied in a prototype "visually guided hearing aid" (VGHA) currently used for research, employs acoustic beamforming steered by eye gaze as a means for improving the ability of listeners to segregate and attend to one sound source in the presence of competing sound sources. The results from several studies demonstrate that listeners with normal hearing are able to use an attention-based "spatial filter" operating primarily on binaural cues to selectively attend to one source among competing spatially distributed sources. Furthermore, listeners with sensorineural hearing loss generally are less able to use this spatial filter as effectively as are listeners with normal hearing especially in conditions high in "informational masking." The VGHA enhances auditory spatial attention for speech-on-speech masking and improves signal-to-noise ratio for conditions high in "energetic masking." Visual steering of the beamformer supports the coordinated actions of vision and audition in selective attention and facilitates following sound source transitions in complex listening situations. Both listeners with normal hearing and with sensorineural hearing loss may benefit from the acoustic beamforming implemented by the VGHA, especially for nearby sources in less reverberant sound fields. Moreover, guiding the beam using eye gaze can be an effective means of sound source enhancement for listening conditions where the target source changes frequently over time as often occurs during turn-taking in a conversation. http://cred.pubs.asha.org/article.aspx?articleid=2601621.

  10. Biased relevance filtering in the auditory system: A test of confidence-weighted first-impressions.

    PubMed

    Mullens, D; Winkler, I; Damaso, K; Heathcote, A; Whitson, L; Provost, A; Todd, J

    2016-03-01

    Although first-impressions are known to impact decision-making and to have prolonged effects on reasoning, it is less well known that the same type of rapidly formed assumptions can explain biases in automatic relevance filtering outside of deliberate behavior. This paper features two studies in which participants have been asked to ignore sequences of sound while focusing attention on a silent movie. The sequences consisted of blocks, each with a high-probability repetition interrupted by rare acoustic deviations (i.e., a sound of different pitch or duration). The probabilities of the two different sounds alternated across the concatenated blocks within the sequence (i.e., short-to-long and long-to-short). The sound probabilities are rapidly and automatically learned for each block and a perceptual inference is formed predicting the most likely characteristics of the upcoming sound. Deviations elicit a prediction-error signal known as mismatch negativity (MMN). Computational models of MMN generally assume that its elicitation is governed by transition statistics that define what sound attributes are most likely to follow the current sound. MMN amplitude reflects prediction confidence, which is derived from the stability of the current transition statistics. However, our prior research showed that MMN amplitude is modulated by a strong first-impression bias that outweighs transition statistics. Here we test the hypothesis that this bias can be attributed to assumptions about predictable vs. unpredictable nature of each tone within the first encountered context, which is weighted by the stability of that context. The results of Study 1 show that this bias is initially prevented if there is no 1:1 mapping between sound attributes and probability, but it returns once the auditory system determines which properties provide the highest predictive value. The results of Study 2 show that confidence in the first-impression bias drops if assumptions about the temporal stability of the transition-statistics are violated. Both studies provide compelling evidence that the auditory system extrapolates patterns on multiple timescales to adjust its response to prediction-errors, while profoundly distorting the effects of transition-statistics by the assumptions formed on the basis of first-impressions. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Language experience enhances early cortical pitch-dependent responses

    PubMed Central

    Krishnan, Ananthanarayan; Gandour, Jackson T.; Ananthakrishnan, Saradha; Vijayaraghavan, Venkatakrishnan

    2014-01-01

    Pitch processing at cortical and subcortical stages of processing is shaped by language experience. We recently demonstrated that specific components of the cortical pitch response (CPR) index the more rapidly-changing portions of the high rising Tone 2 of Mandarin Chinese, in addition to marking pitch onset and sound offset. In this study, we examine how language experience (Mandarin vs. English) shapes the processing of different temporal attributes of pitch reflected in the CPR components using stimuli representative of within-category variants of Tone 2. Results showed that the magnitude of CPR components (Na-Pb and Pb-Nb) and the correlation between these two components and pitch acceleration were stronger for the Chinese listeners compared to English listeners for stimuli that fell within the range of Tone 2 citation forms. Discriminant function analysis revealed that the Na-Pb component was more than twice as important as Pb-Nb in grouping listeners by language affiliation. In addition, a stronger stimulus-dependent, rightward asymmetry was observed for the Chinese group at the temporal, but not frontal, electrode sites. This finding may reflect selective recruitment of experience-dependent, pitch-specific mechanisms in right auditory cortex to extract more complex, time-varying pitch patterns. Taken together, these findings suggest that long-term language experience shapes early sensory level processing of pitch in the auditory cortex, and that the sensitivity of the CPR may vary depending on the relative linguistic importance of specific temporal attributes of dynamic pitch. PMID:25506127

  12. Phage-based biomolecular filter for the capture of bacterial pathogens in liquid streams

    NASA Astrophysics Data System (ADS)

    Du, Songtao; Chen, I.-Hsuan; Horikawa, Shin; Lu, Xu; Liu, Yuzhe; Wikle, Howard C.; Suh, Sang Jin; Chin, Bryan A.

    2017-05-01

    This paper investigates a phage-based biomolecular filter that enables the evaluation of large volumes of liquids for the presence of small quantities of bacterial pathogens. The filter is a planar arrangement of phage-coated, strip-shaped magnetoelastic (ME) biosensors (4 mm × 0.8 mm × 0.03 mm), magnetically coupled to a filter frame structure, through which a liquid of interest flows. This "phage filter" is designed to capture specific bacterial pathogens and allow non-specific debris to pass, eliminating the common clogging issue in conventional bead filters. ANSYS Maxwell was used to simulate the magnetic field pattern required to hold ME biosensors densely and to optimize the frame design. Based on the simulation results, a phage filter structure was constructed, and a proof-of-concept experiment was conducted in which a Salmonella solution of known concentration was passed through the filter, and the number of captured Salmonella was quantified by plate counting.

  13. Integrated programmable photonic filter on the silicon-on-insulator platform.

    PubMed

    Liao, Shasha; Ding, Yunhong; Peucheret, Christophe; Yang, Ting; Dong, Jianji; Zhang, Xinliang

    2014-12-29

    We propose and demonstrate a silicon-on-insulator (SOI) on-chip programmable filter based on a four-tap finite impulse response structure. The photonic filter is programmable thanks to amplitude and phase modulation of each tap controlled by thermal heaters. We further demonstrate the tunability of the filter central wavelength, bandwidth and variable passband shape. The tuning range of the central wavelength is at least 42% of the free spectral range. The bandwidth tuning range is at least half of the free spectral range. Our scheme has distinct advantages of compactness, capability for integrating with electronics.
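
    A minimal numerical sketch of the four-tap finite-impulse-response idea described above: each tap carries a programmable amplitude and phase, and the resulting complex tap weights determine the passband shape over one free spectral range. The tap values below are illustrative, not the device's calibrated settings.

    ```python
    import numpy as np

    # Four programmable taps: amplitude and phase of each (illustrative values).
    amps = np.array([1.0, 0.8, 0.8, 1.0])
    phases = np.array([0.0, 0.3, -0.3, 0.0]) * np.pi
    taps = amps * np.exp(1j * phases)

    # Frequency response of an N-tap FIR over one free spectral range (FSR):
    # H(f) = sum_k c_k * exp(-j 2 pi k f / FSR)
    f = np.linspace(0, 1, 512)                     # frequency normalized to the FSR
    k = np.arange(taps.size)
    H = (taps[None, :] * np.exp(-2j * np.pi * np.outer(f, k))).sum(axis=1)

    power_db = 20 * np.log10(np.abs(H) / np.abs(H).max())
    centre = f[np.argmax(np.abs(H))]
    print("passband centre at %.2f FSR, deepest stopband notch %.1f dB" % (centre, power_db.min()))
    ```

    Applying a linear phase ramp across the taps shifts the passband within the free spectral range, which is consistent with the centre-wavelength tunability reported above.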

  14. Particle filters, a quasi-Monte-Carlo-solution for segmentation of coronaries.

    PubMed

    Florin, Charles; Paragios, Nikos; Williams, Jim

    2005-01-01

    In this paper we propose a Particle Filter-based approach for the segmentation of coronary arteries. To this end, successive planes of the vessel are modeled as unknown states of a sequential process. Such states consist of the orientation, position, shape model and appearance (in statistical terms) of the vessel that are recovered in an incremental fashion, using a sequential Bayesian filter (Particle Filter). In order to account for bifurcations and branchings, we consider a Monte Carlo sampling rule that propagates in parallel multiple hypotheses. Promising results on the segmentation of coronary arteries demonstrate the potential of the proposed approach.
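
    The sequential Bayesian (particle filter) machinery referred to above can be illustrated with a generic bootstrap filter on a one-dimensional toy state; the vessel-specific state (plane position, orientation, shape and appearance) and the parallel handling of bifurcation hypotheses are beyond this sketch.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def particle_filter(observations, n_particles=500, proc_std=0.5, obs_std=1.0):
        """Minimal bootstrap particle filter for a 1D random-walk state.
        In vessel tracking the state would instead hold the cross-sectional plane's
        position, orientation and shape/appearance parameters."""
        particles = rng.standard_normal(n_particles)          # initial hypotheses
        weights = np.full(n_particles, 1.0 / n_particles)
        estimates = []
        for z in observations:
            # Predict: propagate each hypothesis with the process model.
            particles = particles + proc_std * rng.standard_normal(n_particles)
            # Update: weight hypotheses by the observation likelihood.
            weights *= np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
            weights /= weights.sum()
            estimates.append(np.sum(weights * particles))
            # Resample to avoid degeneracy (multiple hypotheses survive in parallel).
            idx = rng.choice(n_particles, size=n_particles, p=weights)
            particles, weights = particles[idx], np.full(n_particles, 1.0 / n_particles)
        return np.array(estimates)

    true_state = np.cumsum(0.5 * rng.standard_normal(50))
    obs = true_state + rng.standard_normal(50)
    print(np.abs(particle_filter(obs) - true_state).mean())
    ```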

  15. Neuroestrogen signaling in the songbird auditory cortex propagates into a sensorimotor network via an `interface' nucleus

    PubMed Central

    Pawlisch, Benjamin A.; Remage-Healey, Luke

    2014-01-01

    Neuromodulators rapidly alter activity of neural circuits and can therefore shape higher-order functions, such as sensorimotor integration. Increasing evidence suggests that brain-derived estrogens, such as 17-β-estradiol, can act rapidly to modulate sensory processing. However, less is known about how rapid estrogen signaling can impact downstream circuits. Past studies have demonstrated that estradiol levels increase within the songbird auditory cortex (the caudomedial nidopallium, NCM) during social interactions. Local estradiol signaling enhances the auditory-evoked firing rate of neurons in NCM to a variety of stimuli, while also enhancing the selectivity of auditory-evoked responses of neurons in a downstream sensorimotor brain region, HVC (proper name). Since these two brain regions are not directly connected, we employed dual extracellular recordings in HVC and the upstream nucleus interfacialis of the nidopallium (NIf) during manipulations of estradiol within NCM to better understand the pathway by which estradiol signaling propagates to downstream circuits. NIf has direct input into HVC, passing auditory information into the vocal motor output pathway, and is a possible source of the neural selectivity within HVC. Here, during acute estradiol administration in NCM, NIf neurons showed increases in baseline firing rates and auditory-evoked firing rates to all stimuli. Furthermore, when estradiol synthesis was blocked in NCM, we observed simultaneous decreases in the selectivity of NIf and HVC neurons. These effects were not due to direct estradiol actions because NIf has little to no capability for local estrogen synthesis or estrogen receptors, and these effects were specific to NIf because other neurons immediately surrounding NIf did not show these changes. Our results demonstrate that transsynaptic, rapid fluctuations in neuroestrogens are transmitted into NIf and subsequently HVC, both regions important for sensorimotor integration. Overall, these findings support the hypothesis that acute neurosteroid actions can propagate within and between neural circuits to modulate their functional connectivity. PMID:25453773

  16. Neuroestrogen signaling in the songbird auditory cortex propagates into a sensorimotor network via an 'interface' nucleus.

    PubMed

    Pawlisch, B A; Remage-Healey, L

    2015-01-22

    Neuromodulators rapidly alter activity of neural circuits and can therefore shape higher order functions, such as sensorimotor integration. Increasing evidence suggests that brain-derived estrogens, such as 17-β-estradiol, can act rapidly to modulate sensory processing. However, less is known about how rapid estrogen signaling can impact downstream circuits. Past studies have demonstrated that estradiol levels increase within the songbird auditory cortex (the caudomedial nidopallium, NCM) during social interactions. Local estradiol signaling enhances the auditory-evoked firing rate of neurons in NCM to a variety of stimuli, while also enhancing the selectivity of auditory-evoked responses of neurons in a downstream sensorimotor brain region, HVC (proper name). Since these two brain regions are not directly connected, we employed dual extracellular recordings in HVC and the upstream nucleus interfacialis of the nidopallium (NIf) during manipulations of estradiol within NCM to better understand the pathway by which estradiol signaling propagates to downstream circuits. NIf has direct input into HVC, passing auditory information into the vocal motor output pathway, and is a possible source of the neural selectivity within HVC. Here, during acute estradiol administration in NCM, NIf neurons showed increases in baseline firing rates and auditory-evoked firing rates to all stimuli. Furthermore, when estradiol synthesis was blocked in NCM, we observed simultaneous decreases in the selectivity of NIf and HVC neurons. These effects were not due to direct estradiol actions because NIf has little to no capability for local estrogen synthesis or estrogen receptors, and these effects were specific to NIf because other neurons immediately surrounding NIf did not show these changes. Our results demonstrate that transsynaptic, rapid fluctuations in neuroestrogens are transmitted into NIf and subsequently HVC, both regions important for sensorimotor integration. Overall, these findings support the hypothesis that acute neurosteroid actions can propagate within and between neural circuits to modulate their functional connectivity. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.

  17. 14 CFR 25.1385 - Position light system installation.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...) Light covers and color filters. Each light cover or color filter must be at least flame resistant and may not change color or shape or lose any appreciable light transmission during normal use. [Doc. No...

  18. Optical Implementation Of The Synthetic Discrimination Function

    NASA Astrophysics Data System (ADS)

    Butler, Steve; Riggins, James

    1985-01-01

    Computer-generated holograms of geometrical shape and synthetic discriminant function (SDF) matched filters are modeled and produced. The models include ideal correlations and Allebach-Keegan binary holograms. A distinction between Phase-Only-Information and Phase-Only-Material Filters is demonstrated. Signal-to-noise and efficiency measurements were made on the resultant correlation planes.
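
    For context, a common way to build a synthetic discriminant function is as a linear combination of training images constrained to give prescribed correlation values; the numpy sketch below shows that classic closed form on random stand-in data. The computer-generated hologram encoding and phase-only filtering steps from the paper are not modeled here.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Training images (flattened) for several in-class views; columns of X.
    n_pixels, n_views = 64 * 64, 6
    X = rng.standard_normal((n_pixels, n_views))

    # Classic SDF: choose the filter h as a linear combination of the training images
    # whose correlation with every training image equals a prescribed value u_i.
    u = np.ones(n_views)                          # equal correlation peaks for all views
    coef = np.linalg.solve(X.conj().T @ X, u)     # (X^H X)^{-1} u
    h = X @ coef                                  # the synthetic discriminant function

    print(np.allclose(X.conj().T @ h, u))         # True: correlation constraints are satisfied
    ```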

  19. Power line interference attenuation in multi-channel sEMG signals: Algorithms and analysis.

    PubMed

    Soedirdjo, S D H; Ullah, K; Merletti, R

    2015-08-01

    Electromyogram (EMG) recordings are often corrupted by power line interference (PLI) even when the skin is prepared and well-designed instruments are used. This study focuses on the analysis of several recent and classical digital signal processing approaches that have been used to attenuate, if not eliminate, power line interference in EMG signals. A comparison of the signal-to-interference ratio (SIR) of the output signals is presented for four methods: a classical notch filter, spectral interpolation, an adaptive noise canceller with phase-locked loop (ANC-PLL) and an adaptive filter, applied to simulated multichannel monopolar EMG signals with different SIRs. The effect of each method on the shape of the EMG signals is also analyzed. The results show that the ANC-PLL method gives the best output SIR and the lowest shape distortion compared to the other methods. Classical notch filtering is the simplest method, but some information may be lost because it removes part of the EMG signal along with the interference. Thus, the notch filter has the lowest performance and introduces distortion into the resulting signals.
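
    Of the four methods compared above, the classical notch filter is simple enough to sketch directly; the snippet below applies an IIR notch at the power-line frequency to a synthetic contaminated channel and reports a before/after SIR. The sampling rate, line frequency, Q factor and interference amplitude are assumptions for illustration.

    ```python
    import numpy as np
    from scipy.signal import iirnotch, filtfilt

    fs = 2048.0                 # sEMG sampling rate (Hz), assumed
    f_line = 50.0               # power-line frequency (60 Hz in some regions)
    t = np.arange(0, 2, 1 / fs)

    rng = np.random.default_rng(5)
    emg = rng.standard_normal(t.size)                    # stand-in for a monopolar sEMG channel
    contaminated = emg + 0.8 * np.sin(2 * np.pi * f_line * t)

    # Classical notch: cheap, but it removes EMG energy at f_line along with the interference,
    # which is why some shape distortion is reported for this method.
    b, a = iirnotch(w0=f_line, Q=30.0, fs=fs)
    cleaned = filtfilt(b, a, contaminated)

    def sir_db(clean, noisy):
        """Signal-to-interference ratio in dB, taking (noisy - clean) as the interference/residual."""
        return 10 * np.log10(np.sum(clean ** 2) / np.sum((noisy - clean) ** 2))

    print("SIR before: %.1f dB, after: %.1f dB" % (sir_db(emg, contaminated), sir_db(emg, cleaned)))
    ```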

  20. An evaluation on the design of beam shaping assembly based on the D-T reaction for BNCT

    NASA Astrophysics Data System (ADS)

    Asnal, M.; Liamsuwan, T.; Onjun, T.

    2015-05-01

    Boron Neutron Capture Therapy (BNCT) can be achieved by using a compact neutron generator such as a compact D-T neutron source, in which neutron energy must be in the epithermal energy range with sufficient flux. For these requirements, a Beam Shaping Assembly (BSA) is needed. In this paper, three BSA designs based on the D-T reaction for BNCT are discussed. It is found that the BSA configuration designed by Rasouli et al. satisfies all of the International Atomic Energy Agency (IAEA) criteria. It consists of 14 cm uranium as multiplier, 23 cm TiF3 and 36 cm Fluental as moderator, 4 cm Fe as fast neutron filter, 1 mm Li as thermal neutron filter, 2.6 cm Bi as gamma ray filter, and Pb as collimator and reflector. It is also found that use of specific filters is important for removing the fast and thermal neutrons and gamma contamination. Moreover, an appropriate neutron source plays a key role in providing a proper epithermal flux.

  1. Sensorineural hearing loss degrades behavioral and physiological measures of human spatial selective auditory attention

    PubMed Central

    Dai, Lengshi; Best, Virginia; Shinn-Cunningham, Barbara G.

    2018-01-01

    Listeners with sensorineural hearing loss often have trouble understanding speech amid other voices. While poor spatial hearing is often implicated, direct evidence is weak; moreover, studies suggest that reduced audibility and degraded spectrotemporal coding may explain such problems. We hypothesized that poor spatial acuity leads to difficulty deploying selective attention, which normally filters out distracting sounds. In listeners with normal hearing, selective attention causes changes in the neural responses evoked by competing sounds, which can be used to quantify the effectiveness of attentional control. Here, we used behavior and electroencephalography to explore whether control of selective auditory attention is degraded in hearing-impaired (HI) listeners. Normal-hearing (NH) and HI listeners identified a simple melody presented simultaneously with two competing melodies, each simulated from different lateral angles. We quantified performance and attentional modulation of cortical responses evoked by these competing streams. Compared with NH listeners, HI listeners had poorer sensitivity to spatial cues, performed more poorly on the selective attention task, and showed less robust attentional modulation of cortical responses. Moreover, across NH and HI individuals, these measures were correlated. While both groups showed cortical suppression of distracting streams, this modulation was weaker in HI listeners, especially when attending to a target at midline, surrounded by competing streams. These findings suggest that hearing loss interferes with the ability to filter out sound sources based on location, contributing to communication difficulties in social situations. These findings also have implications for technologies aiming to use neural signals to guide hearing aid processing. PMID:29555752

  2. Dynamic Granger causality based on Kalman filter for evaluation of functional network connectivity in fMRI data

    PubMed Central

    Havlicek, Martin; Jan, Jiri; Brazdil, Milan; Calhoun, Vince D.

    2015-01-01

    Increasing interest in understanding dynamic interactions of brain neural networks leads to formulation of sophisticated connectivity analysis methods. Recent studies have applied Granger causality based on standard multivariate autoregressive (MAR) modeling to assess brain connectivity. Nevertheless, one important flaw of this commonly proposed method is that it requires the analyzed time series to be stationary, whereas this assumption is mostly violated due to the weakly nonstationary nature of functional magnetic resonance imaging (fMRI) time series. Therefore, we propose an approach to dynamic Granger causality in the frequency domain for evaluating functional network connectivity in fMRI data. The effectiveness and robustness of the dynamic approach were significantly improved by combining forward and backward Kalman filters, which improved estimates compared to standard time-invariant MAR modeling. In our method, the functional networks were first detected by independent component analysis (ICA), a computational method for separating a multivariate signal into maximally independent components. Then the measure of Granger causality was evaluated using generalized partial directed coherence, which is suitable for bivariate as well as multivariate data. Moreover, this metric provides identification of causal relations in the frequency domain, which allows one to distinguish the frequency components related to the experimental paradigm. The procedure of evaluating Granger causality via dynamic MAR was demonstrated on simulated time series as well as on two sets of group fMRI data collected during auditory sensorimotor (SM) or auditory oddball discrimination (AOD) tasks. Finally, a comparison with the results obtained from a standard time-invariant MAR model was provided. PMID:20561919
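
    The core of the dynamic MAR idea above, tracking time-varying autoregressive coefficients with a Kalman filter, can be sketched on a bivariate toy system in which a directed coupling switches on halfway through the recording. This shows only the coefficient-tracking step; the ICA decomposition, the forward-backward combination and the generalized partial directed coherence computation are not reproduced here, and all noise levels are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Simulate a bivariate AR(1) whose coupling from channel 1 -> 2 switches on halfway.
    T = 400
    y = np.zeros((T, 2))
    for t in range(1, T):
        A = np.array([[0.5, 0.0],
                      [0.6 if t > T // 2 else 0.0, 0.5]])   # time-varying MAR coefficients
        y[t] = A @ y[t - 1] + 0.1 * rng.standard_normal(2)

    # Kalman filter over the vectorized coefficient matrix a_t = vec(A_t), modeled as a
    # random walk: a_t = a_{t-1} + w_t,  y_t = H_t a_t + v_t with H_t built from y_{t-1}.
    n = 4
    a_est = np.zeros(n)
    P = np.eye(n)
    Q, R = 1e-4 * np.eye(n), 0.01 * np.eye(2)
    track = np.zeros((T, n))
    for t in range(1, T):
        H = np.kron(np.eye(2), y[t - 1][None, :])           # 2 x 4 observation matrix
        P = P + Q                                           # predict (random-walk state)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                      # Kalman gain
        a_est = a_est + K @ (y[t] - H @ a_est)              # update with the innovation
        P = (np.eye(n) - K @ H) @ P
        track[t] = a_est

    # track[:, 2] is the coefficient from channel 1 to channel 2; its rise after T/2 is the
    # kind of directed, time-varying influence a Granger-type analysis would interpret.
    print(track[T // 4, 2].round(2), track[-1, 2].round(2))
    ```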

  3. Nyquist-WDM filter shaping with a high-resolution colorless photonic spectral processor.

    PubMed

    Sinefeld, David; Ben-Ezra, Shalva; Marom, Dan M

    2013-09-01

    We employ a spatial-light-modulator-based colorless photonic spectral processor with a spectral addressability of 100 MHz over a 100 GHz bandwidth for multichannel, high-resolution reshaping of a Gaussian channel response into a square-like shape, compatible with Nyquist-WDM requirements.

  4. Mapping Frequency-Specific Tone Predictions in the Human Auditory Cortex at High Spatial Resolution.

    PubMed

    Berlot, Eva; Formisano, Elia; De Martino, Federico

    2018-05-23

    Auditory inputs reaching our ears are often incomplete, but our brains nevertheless transform them into rich and complete perceptual phenomena such as meaningful conversations or pleasurable music. It has been hypothesized that our brains extract regularities in inputs, which enables us to predict the upcoming stimuli, leading to efficient sensory processing. However, it is unclear whether tone predictions are encoded with similar specificity as perceived signals. Here, we used high-field fMRI to investigate whether human auditory regions encode one of the most defining characteristics of auditory perception: the frequency of predicted tones. Two pairs of tone sequences were presented in ascending or descending directions, with the last tone omitted in half of the trials. Every pair of incomplete sequences contained identical sounds, but was associated with different expectations about the last tone (a high- or low-frequency target). This allowed us to disambiguate predictive signaling from sensory-driven processing. We recorded fMRI responses from eight female participants during passive listening to complete and incomplete sequences. Inspection of specificity and spatial patterns of responses revealed that target frequencies were encoded similarly during their presentations, as well as during omissions, suggesting frequency-specific encoding of predicted tones in the auditory cortex (AC). Importantly, frequency specificity of predictive signaling was observed already at the earliest levels of auditory cortical hierarchy: in the primary AC. Our findings provide evidence for content-specific predictive processing starting at the earliest cortical levels. SIGNIFICANCE STATEMENT Given the abundance of sensory information around us in any given moment, it has been proposed that our brain uses contextual information to prioritize and form predictions about incoming signals. However, there remains a surprising lack of understanding of the specificity and content of such prediction signaling; for example, whether a predicted tone is encoded with similar specificity as a perceived tone. Here, we show that early auditory regions encode the frequency of a tone that is predicted yet omitted. Our findings contribute to the understanding of how expectations shape sound processing in the human auditory cortex and provide further insights into how contextual information influences computations in neuronal circuits. Copyright © 2018 the authors 0270-6474/18/384934-09$15.00/0.

  5. In-group biases and oculomotor responses: beyond simple approach motivation.

    PubMed

    Moradi, Zahra Zargol; Manohar, Sanjay; Duta, Mihaela; Enock, Florence; Humphreys, Glyn W

    2018-05-01

    An in-group bias describes an individual's bias towards a group that they belong to. Previous studies suggest that in-group bias facilitates approach motor responses, but disrupts avoidance ones. Such motor biases are shown to be more robust when the out-group is threatening. We investigated whether, under controlled visual familiarity and complexity, in-group biases still promote pro-saccade and hinder anti-saccades oculomotor responses. Participants first learned to associate an in-group or out-group label with an arbitrary shape. They were then instructed to listen to the group-relevant auditory cue (name of own and a rival university) followed by one of the shapes. Half of the participants were instructed to look towards the visual target if it matched the preceding group-relevant auditory cue and to look away from it if it did not match. The other half of the participants received reversed instructions. This design allowed us to orthogonally manipulate the effect of in-group bias and cognitive control demand on oculomotor responses. Both pro- and anti-saccades were faster and more accurate following the in-group auditory cue. Independently, pro-saccades were performed better than anti-saccades, and match judgements were faster and more accurate than non-match judgements. Our findings indicate that under higher cognitive control demands individuals' oculomotor responses improved following the motivationally salient cue (in-group). Our findings have important implications for learning and cognitive control in a social context. As we included rival groups, our results might to some extent reflect the effects of out-group threat. Future studies could extend our findings using non-threatening out-groups instead.

  6. Modeling source-filter interaction in belting and high-pitched operatic male singing

    PubMed Central

    Titze, Ingo R.; Worley, Albert S.

    2009-01-01

    Nonlinear source-filter theory is applied to explain some acoustic differences between two contrasting male singing productions at high pitches: operatic style versus jazz belt or theater belt. Several stylized vocal tract shapes (caricatures) are discussed that form the bases of these styles. It is hypothesized that operatic singing uses vowels that are modified toward an inverted megaphone mouth shape for transitioning into the high-pitch range. This allows all the harmonics except the fundamental to be “lifted” over the first formant. Belting, on the other hand, uses vowels that are consistently modified toward the megaphone (trumpet-like) mouth shape. Both the fundamental and the second harmonic are then kept below the first formant. The vocal tract shapes provide collective reinforcement to multiple harmonics in the form of inertive supraglottal reactance and compliant subglottal reactance. Examples of lip openings from four well-known artists are used to infer vocal tract area functions and the corresponding reactances. PMID:19739766

  7. Robust Audio Watermarking by Using Low-Frequency Histogram

    NASA Astrophysics Data System (ADS)

    Xiang, Shijun

    In continuation of earlier work, in which the problem of time-scale modification (TSM) was addressed [1] by modifying the shape of the audio time-domain histogram, here we consider the additional requirement of resisting additive noise-like operations such as Gaussian noise, lossy compression and low-pass filtering. In other words, we study the robustness of the watermark against both TSM and additive noise. To this end, in this paper we extract the histogram from a Gaussian-filtered low-frequency component for audio watermarking. The watermark is inserted by shaping the histogram: two consecutive bins are treated as a group, and a bit is hidden by reassigning their relative populations. The watermarked signals are perceptibly similar to the original. Compared with the previous time-domain watermarking scheme [1], the proposed method is more robust against additive noise, MP3 compression, low-pass filtering, etc.
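
    A toy version of the bin-pair embedding rule described above is sketched below: one bit is encoded in the population ratio of two consecutive histogram bins of the (low-frequency) host signal and enforced by nudging a few samples across the shared bin edge. The bin edges, ratio margin and host signal are assumptions for illustration, not the parameters of the published scheme.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    host = rng.normal(0.0, 0.3, 20000)            # stand-in for the low-frequency audio component

    edges = np.array([-0.15, 0.0, 0.15])          # one pair of consecutive bins sharing edge 0.0
    ratio_margin = 1.2                            # population ratio required to signal a bit (assumed)

    def embed_bit(x, bit):
        """Force count(bin1)/count(bin2) >= margin for bit 1 (or the reverse for bit 0)
        by moving samples just across the shared bin edge."""
        x = x.copy()
        in1 = (x >= edges[0]) & (x < edges[1])
        in2 = (x >= edges[1]) & (x < edges[2])
        n1, n2 = in1.sum(), in2.sum()
        need = int(np.ceil((ratio_margin * (n2 if bit else n1) - (n1 if bit else n2))
                           / (1 + ratio_margin)))
        if need > 0:
            src = np.flatnonzero(in2 if bit else in1)[:need]
            x[src] = edges[1] + (-1e-3 if bit else 1e-3)   # small, low-audibility change
        return x

    def extract_bit(x):
        n1 = ((x >= edges[0]) & (x < edges[1])).sum()
        n2 = ((x >= edges[1]) & (x < edges[2])).sum()
        return int(n1 / max(n2, 1) >= 1.0)

    print(extract_bit(embed_bit(host, 1)), extract_bit(embed_bit(host, 0)))   # 1 0
    ```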

  8. Design Optimization of Vena Cava Filters: An application to dual filtration devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singer, M A; Wang, S L; Diachin, D P

    Pulmonary embolism (PE) is a significant medical problem that results in over 300,000 fatalities per year. A common preventative treatment for PE is the insertion of a metallic filter into the inferior vena cava that traps thrombi before they reach the lungs. The goal of this work is to use methods of mathematical modeling and design optimization to determine the configuration of trapped thrombi that minimizes the hemodynamic disruption. The resulting configuration has implications for constructing an optimally designed vena cava filter. Computational fluid dynamics is coupled with a nonlinear optimization algorithm to determine the optimal configuration of trapped model thrombus in the inferior vena cava. The location and shape of the thrombus are parameterized, and an objective function, based on wall shear stresses, determines the worthiness of a given configuration. The methods are fully automated and demonstrate the capabilities of a design optimization framework that is broadly applicable. Changes to thrombus location and shape alter the velocity contours and wall shear stress profiles significantly. For vena cava filters that trap two thrombi simultaneously, the undesirable flow dynamics past one thrombus can be mitigated by leveraging the flow past the other thrombus. Streamlining the shape of thrombus trapped along the cava wall reduces the disruption to the flow, but increases the area exposed to abnormal wall shear stress. Computer-based design optimization is a useful tool for developing vena cava filters. Characterizing and parameterizing the design requirements and constraints is essential for constructing devices that address clinical complications. In addition, formulating a well-defined objective function that quantifies clinical risks and benefits is needed for designing devices that are clinically viable.
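
    The outer loop of the optimization framework described above can be sketched with a derivative-free optimizer and a surrogate objective. In the actual framework the objective would mesh the vena cava with the parameterized thrombus, run the CFD solver and score the wall-shear-stress distribution; the analytic stand-in and the (position, aspect-ratio) parameterization below are purely illustrative.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def wall_shear_objective(params):
        """Surrogate for the CFD-derived objective: a real evaluation would re-mesh the
        geometry, solve the flow field and summarize the wall-shear-stress profile."""
        position, aspect = params              # thrombus centre (normalized) and shape aspect ratio
        return (position - 0.35) ** 2 + 0.5 * (aspect - 2.0) ** 2 + 0.1 * np.sin(5 * position) ** 2

    # Nelder-Mead is a reasonable choice when each objective evaluation is an expensive,
    # possibly noisy simulation with no analytic gradients.
    x0 = np.array([0.5, 1.0])                  # initial guess: mid-wall position, spherical thrombus
    res = minimize(wall_shear_objective, x0, method="Nelder-Mead",
                   options={"xatol": 1e-4, "fatol": 1e-6})
    print("optimal (position, aspect):", np.round(res.x, 3), "objective:", round(res.fun, 5))
    ```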

  9. The effects of postnatal phthalate exposure on the development of auditory temporal processing in rats.

    PubMed

    Kim, Bong Jik; Kim, Jungyoon; Keoboutdy, Vanhnansy; Kwon, Ho-Jang; Oh, Seung-Ha; Jung, Jae Yun; Park, Il Yong; Paik, Ki Chung

    2017-06-01

    The central auditory pathway is known to continue its development during the postnatal critical periods and is shaped by experience and sensory inputs. Phthalate, a known neurotoxic material, has been reported to be associated with attention deficits in children, impacting many infant neurobehaviors. The objective of this study was to investigate the potential effects of neonatal phthalate exposure on the development of auditory temporal processing. Neonatal Sprague-Dawley rats were randomly assigned into two groups: the phthalate group (n = 6) and the control group (n = 6). Phthalate was given once per day from postnatal day 8 (P8) to P28. Upon completion, at P28, the Auditory Brainstem Response (ABR) and Gap Prepulse Inhibition of Acoustic Startle response (GPIAS) at each gap duration (2, 5, 10, 20, 50 and 80 ms) were measured, and the gap detection threshold (GDT) was calculated. These outcomes were compared between the two groups. Hearing thresholds by ABR showed no significant differences at any frequency between the two groups. Regarding GPIAS, no significant difference was observed, except at a gap duration of 20 ms (p = 0.037). The mean GDT of the phthalate group (44.0 ms) was higher than that of the control group (20.0 ms), but without statistical significance (p = 0.065). Moreover, the phthalate group tended to show a more scattered GDT distribution than the control group. Neonatal phthalate exposure may disrupt the development of auditory temporal processing in rats. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. A novel shape from focus method based on 3D steerable filters for improved performance on treating textureless region

    NASA Astrophysics Data System (ADS)

    Fan, Tiantian; Yu, Hongbin

    2018-03-01

    A novel shape-from-focus method based on 3D steerable filters, with improved performance on textureless regions, is proposed in this paper. Unlike conventional spatial methods, which estimate the depth map by searching for the maximum edge response, the proposed method takes both the edge response and the degree of axial imaging blur into consideration. As a result, more robust and accurate identification of the focused location can be achieved, especially for textureless objects. Improved performance in depth measurement is demonstrated by both simulation and experimental results.

  11. Signal Statistics and Maximum Likelihood Sequence Estimation in Intensity Modulated Fiber Optic Links Containing a Single Optical Pre-amplifier.

    PubMed

    Alić, Nikola; Papen, George; Saperstein, Robert; Milstein, Laurence; Fainman, Yeshaiahu

    2005-06-13

    Exact signal statistics for fiber-optic links containing a single optical pre-amplifier are calculated and applied to sequence estimation for electronic dispersion compensation. The performance is evaluated and compared with results based on the approximate chi-square statistics. We show that detection in existing systems based on exact statistics can be improved relative to using a chi-square distribution for realistic filter shapes. In contrast, for high-spectral efficiency systems the difference between the two approaches diminishes, and performance tends to be less dependent on the exact shape of the filter used.

  12. Characterizing Rapidly Rotating Asteroids with Filtered Photometry

    NASA Astrophysics Data System (ADS)

    Arion, Douglas

    2018-01-01

    It is challenging to characterize rapidly rotating asteroids, as their aspect changes significantly between exposures taken through different filters. Indeed, small asteroids may well be agglomerations of smaller components with differing compositions, and thus the shape and composition of the body may be incorrectly inferred. We have observed a number of smaller, rapidly rotating bodies to try to separate compositional and shape elements from light curves in B, V, R, and I. Results from these observations will be presented, and the challenges of conducting this research will be discussed. This work has been supported by the Wisconsin Space Grant Consortium.

  13. Multilayer modal actuator-based piezoelectric transformers.

    PubMed

    Huang, Yao-Tien; Wu, Wen-Jong; Wang, Yen-Chieh; Lee, Chih-Kung

    2007-02-01

    An innovative, multilayer piezoelectric transformer equipped with a full modal filtering input electrode is reported herein. This modal-shaped electrode, based on the orthogonal property of structural vibration modes, is characterized by full modal filtering to ensure that only the desired vibration mode is excited during operation. The newly developed piezoelectric transformer is comprised of three layers: a multilayered input layer, an insulation layer, and a single output layer. The electrode shape of the input layer is derived from its structural vibration modal shape, which takes advantage of the orthogonal property of the vibration modes to achieve a full modal filtering effect. The insulation layer possesses two functions: first, to couple the mechanical vibration energy between the input and output, and second, to provide electrical insulation between the two layers. To meet the two functions, a low temperature, co-fired ceramic (LTCC) was used to provide the high mechanical rigidity and high electrical insulation. It can be shown that this newly developed piezoelectric transformer has the advantage of possessing a more efficient energy transfer and a wider optimal working frequency range when compared to traditional piezoelectric transformers. A multilayer piezoelectric, transformer-based inverter applicable for use in LCD monitors or portable displays is presented as well.

  14. A strain energy filter for 3D vessel enhancement with application to pulmonary CT images.

    PubMed

    Xiao, Changyan; Staring, Marius; Shamonin, Denis; Reiber, Johan H C; Stolk, Jan; Stoel, Berend C

    2011-02-01

    The traditional Hessian-related vessel filters often suffer from detecting complex structures like bifurcations due to an over-simplified cylindrical model. To solve this problem, we present a shape-tuned strain energy density function to measure vessel likelihood in 3D medical images. This method is initially inspired by established stress-strain principles in mechanics. By considering the Hessian matrix as a stress tensor, the three invariants from orthogonal tensor decomposition are used independently or combined to formulate distinctive functions for vascular shape discrimination, brightness contrast and structure strength measuring. Moreover, a mathematical description of Hessian eigenvalues for general vessel shapes is obtained, based on an intensity continuity assumption, and a relative Hessian strength term is presented to ensure the dominance of second-order derivatives as well as suppress undesired step-edges. Finally, we adopt the multi-scale scheme to find an optimal solution through scale space. The proposed method is validated in experiments with a digital phantom and non-contrast-enhanced pulmonary CT data. It is shown that our model performed more effectively in enhancing vessel bifurcations and preserving details, compared to three existing filters. Copyright © 2010 Elsevier B.V. All rights reserved.
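    As a point of reference, a minimal sketch of a generic multi-scale Hessian-eigenvalue vesselness measure (in the style of Frangi's filter, i.e., the class of filters this paper improves on) is given below; the strain-energy density function itself is not reproduced, and the scales and constants are illustrative assumptions.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def hessian_eigvals_3d(vol, sigma):
          """Eigenvalues of the scale-normalized Hessian at every voxel, sorted by |value|."""
          H = np.empty(vol.shape + (3, 3))
          for i, j in [(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2)]:
              order = [0, 0, 0]
              order[i] += 1
              order[j] += 1
              H[..., i, j] = H[..., j, i] = (sigma ** 2) * gaussian_filter(vol, sigma, order=order)
          eig = np.linalg.eigvalsh(H)
          idx = np.argsort(np.abs(eig), axis=-1)
          return np.take_along_axis(eig, idx, axis=-1)        # |l1| <= |l2| <= |l3|

      def vesselness(vol, sigmas=(1.0, 2.0, 4.0), alpha=0.5, beta=0.5, c=50.0):
          out = np.zeros(vol.shape)
          for s in sigmas:
              l1, l2, l3 = np.moveaxis(hessian_eigvals_3d(vol.astype(float), s), -1, 0)
              ra = np.abs(l2) / (np.abs(l3) + 1e-12)               # plate vs line
              rb = np.abs(l1) / np.sqrt(np.abs(l2 * l3) + 1e-12)   # blob vs line
              S = np.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)             # structure strength
              v = (1 - np.exp(-ra ** 2 / (2 * alpha ** 2))) * \
                  np.exp(-rb ** 2 / (2 * beta ** 2)) * (1 - np.exp(-S ** 2 / (2 * c ** 2)))
              v[(l2 > 0) | (l3 > 0)] = 0.0                         # bright vessels on dark background
              out = np.maximum(out, v)                             # best response across scales
          return out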

  15. High-Resolution Radar Waveforms Based on Randomized Latin Square Sequences

    DTIC Science & Technology

    2017-04-18

    Excerpts from the report (only fragments of the abstract were recovered): ... the familiar Costas sequence [17] ... The ambiguity function, first introduced by Woodward in [13], is used to evaluate the matched-filter output of a radar waveform ... the zero-delay cut shows that the result takes the shape of a sinc function, which shows, even for significant Doppler shifts, the matched-filter output ... a bad feature, as the high ridge of the LFM waveform will still result in a large matched-filter response from the target, just not at the correct delay.
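    To make the excerpts concrete, a minimal sketch that numerically evaluates the narrowband ambiguity function of an LFM chirp (the delay-Doppler ridge referred to above) is given below; the sample rate, pulse width, and sweep bandwidth are illustrative assumptions.

      import numpy as np

      def ambiguity(u, fs, delays, dopplers):
          """|chi(tau, fd)| = |sum_n u(n) conj(u(n - tau)) exp(j 2 pi fd n / fs)|, integer-sample delays."""
          n = np.arange(len(u))
          out = np.zeros((len(dopplers), len(delays)))
          for i, fd in enumerate(dopplers):
              mod = u * np.exp(2j * np.pi * fd * n / fs)
              for k, d in enumerate(delays):
                  shifted = np.roll(np.conj(u), d)
                  if d >= 0:
                      shifted[:d] = 0                   # discard samples wrapped from the end
                  else:
                      shifted[d:] = 0                   # discard samples wrapped from the start
                  out[i, k] = np.abs(np.sum(mod * shifted))
          return out / np.sum(np.abs(u) ** 2)           # normalize so chi(0, 0) = 1

      fs, T, B = 1e6, 1e-3, 100e3                        # sample rate, pulse width, sweep bandwidth
      t = np.arange(int(fs * T)) / fs
      lfm = np.exp(1j * np.pi * (B / T) * t ** 2)        # linear FM chirp
      af = ambiguity(lfm, fs, delays=np.arange(-200, 201, 5), dopplers=np.linspace(-20e3, 20e3, 41))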

  16. Real-Time Wavelength Discrimination for Improved Neutron Discrimination in CLYC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hornback, Donald Eric; Hu, Michael Z.; Bell, Zane W.

    We investigated the effects of optical filters on the pulse shape discrimination properties of Cs2LiYCl6:Ce (CLYC) scintillator crystals. By viewing the scintillation light through various optical filters, we attempted to better distinguish between neutron and gamma ray events in the crystal. We applied commercial interference and colored glass filters in addition to fabricating quantum dot (QD) filters by suspending QDs in plastic films and glass. QD filters ultimately failed because of instability of the QDs with respect to oxidation when exposed to ambient air, and the tendency of the QDs to aggregate in the plastic. Of the commercial filters, the best results were obtained with a bandpass interference filter covering the spectral region containing core-valence luminescence (CVL) light. However, the PSD response of filtered CLYC light was always poorer than the response exhibited by unfiltered light because filters always reduced the amount of light available for signal processing.

  17. Methods and apparatuses using filter banks for multi-carrier spread spectrum signals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moradi, Hussein; Farhang, Behrouz; Kutsche, Carl A

    2017-01-31

    A transmitter includes a synthesis filter bank to spread a data symbol to a plurality of frequencies by encoding the data symbol on each frequency, apply a common pulse-shaping filter, and apply gains to the frequencies such that a power level of each frequency is less than a noise level of other communication signals within the spectrum. Each frequency is modulated onto a different evenly spaced subcarrier. A demodulator in a receiver converts a radio frequency input to a spread-spectrum signal in a baseband. A matched filter filters the spread-spectrum signal with a common filter having characteristics matched to the synthesis filter bank in the transmitter by filtering each frequency to generate a sequence of narrow pulses. A carrier recovery unit generates control signals responsive to the sequence of narrow pulses suitable for generating a phase-locked loop between the demodulator, the matched filter, and the carrier recovery unit.
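    As a rough illustration of the synthesis-filter-bank idea in this record (not the patented implementation), the sketch below spreads one data symbol over evenly spaced subcarriers, each shaped by a common prototype pulse-shaping filter and scaled by a per-subcarrier gain; the prototype filter, gain values, and subcarrier spacing are assumptions.

      import numpy as np
      from scipy.signal import firwin

      def synthesize(symbol, n_sub=16, sps=8, gains=None, taps=64):
          """Spread one complex data symbol over n_sub subcarriers (sps samples per symbol)."""
          gains = np.ones(n_sub) if gains is None else np.asarray(gains)
          proto = firwin(taps, cutoff=1.0 / (2 * sps))          # common pulse-shaping prototype
          tx = np.zeros(sps + taps - 1, dtype=complex)
          for k in range(n_sub):
              pulse = np.zeros(sps, dtype=complex)
              pulse[0] = symbol * gains[k]                      # impulse carrying the symbol
              shaped = np.convolve(pulse, proto)                # pulse shaping
              n = np.arange(len(shaped))
              tx += shaped * np.exp(2j * np.pi * k * n / n_sub) # modulate onto subcarrier k
          return tx

      # At the receiver, filtering each demodulated subcarrier with the time-reversed
      # prototype (the matched filter) collapses the shaped pulse back to a narrow peak.
      tx_block = synthesize(1 + 1j)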

  18. Methods and apparatuses using filter banks for multi-carrier spread spectrum signals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moradi, Hussein; Farhang, Behrouz; Kutsche, Carl A.

    2016-06-14

    A transmitter includes a synthesis filter bank to spread a data symbol to a plurality of frequencies by encoding the data symbol on each frequency, apply a common pulse-shaping filter, and apply gains to the frequencies such that a power level of each frequency is less than a noise level of other communication signals within the spectrum. Each frequency is modulated onto a different evenly spaced subcarrier. A demodulator in a receiver converts a radio frequency input to a spread-spectrum signal in a baseband. A matched filter filters the spread-spectrum signal with a common filter having characteristics matched to the synthesis filter bank in the transmitter by filtering each frequency to generate a sequence of narrow pulses. A carrier recovery unit generates control signals responsive to the sequence of narrow pulses suitable for generating a phase-locked loop between the demodulator, the matched filter, and the carrier recovery unit.

  19. Presynaptic Neuronal Nicotinic Receptors Differentially Shape Select Inputs to Auditory Thalamus and Are Negatively Impacted by Aging.

    PubMed

    Sottile, Sarah Y; Hackett, Troy A; Cai, Rui; Ling, Lynne; Llano, Daniel A; Caspary, Donald M

    2017-11-22

    Acetylcholine (ACh) is a potent neuromodulator capable of modifying patterns of acoustic information flow. In auditory cortex, cholinergic systems have been shown to increase salience/gain while suppressing extraneous information. However, the mechanism by which cholinergic circuits shape signal processing in the auditory thalamus (medial geniculate body, MGB) is poorly understood. The present study, in male Fischer Brown Norway rats, seeks to determine the location and function of presynaptic neuronal nicotinic ACh receptors (nAChRs) at the major inputs to MGB and characterize how nAChRs change during aging. In vitro electrophysiological/optogenetic methods were used to examine responses of MGB neurons after activation of nAChRs during a paired-pulse paradigm. Presynaptic nAChR activation increased responses evoked by stimulation of excitatory corticothalamic and inhibitory tectothalamic terminals. Conversely, nAChR activation appeared to have little effect on evoked responses from inhibitory thalamic reticular nucleus and excitatory tectothalamic terminals. In situ hybridization data showed nAChR subunit transcripts in GABAergic inferior colliculus neurons and glutamatergic auditory cortical neurons supporting the present slice findings. Responses to nAChR activation at excitatory corticothalamic and inhibitory tectothalamic inputs were diminished by aging. These findings suggest that cholinergic input to the MGB increases the strength of tectothalamic inhibitory projections, potentially improving the signal-to-noise ratio and signal detection while increasing corticothalamic gain, which may facilitate top-down identification of stimulus identity. These mechanisms appear to be affected negatively by aging, potentially diminishing speech perception in noisy environments. Cholinergic inputs to the MGB appear to maximize sensory processing by adjusting both top-down and bottom-up mechanisms in conditions of attention and arousal. SIGNIFICANCE STATEMENT The pedunculopontine tegmental nucleus is the source of cholinergic innervation for sensory thalamus and is a critical part of an ascending arousal system that controls the firing mode of thalamic cells based on attentional demand. The present study describes the location and impact of aging on presynaptic neuronal nicotinic acetylcholine receptors (nAChRs) within the circuitry of the auditory thalamus (medial geniculate body, MGB). We show that nAChRs are located on ascending inhibitory and descending excitatory presynaptic inputs onto MGB neurons, likely increasing gain selectively and improving temporal clarity. In addition, we show that aging has a deleterious effect on nAChR efficacy. Cholinergic dysfunction at the level of MGB may affect speech understanding negatively in the elderly population. Copyright © 2017 the authors 0270-6474/17/3711378-13$15.00/0.

  20. Presynaptic Neuronal Nicotinic Receptors Differentially Shape Select Inputs to Auditory Thalamus and Are Negatively Impacted by Aging

    PubMed Central

    Sottile, Sarah Y.; Hackett, Troy A.

    2017-01-01

    Acetylcholine (ACh) is a potent neuromodulator capable of modifying patterns of acoustic information flow. In auditory cortex, cholinergic systems have been shown to increase salience/gain while suppressing extraneous information. However, the mechanism by which cholinergic circuits shape signal processing in the auditory thalamus (medial geniculate body, MGB) is poorly understood. The present study, in male Fischer Brown Norway rats, seeks to determine the location and function of presynaptic neuronal nicotinic ACh receptors (nAChRs) at the major inputs to MGB and characterize how nAChRs change during aging. In vitro electrophysiological/optogenetic methods were used to examine responses of MGB neurons after activation of nAChRs during a paired-pulse paradigm. Presynaptic nAChR activation increased responses evoked by stimulation of excitatory corticothalamic and inhibitory tectothalamic terminals. Conversely, nAChR activation appeared to have little effect on evoked responses from inhibitory thalamic reticular nucleus and excitatory tectothalamic terminals. In situ hybridization data showed nAChR subunit transcripts in GABAergic inferior colliculus neurons and glutamatergic auditory cortical neurons supporting the present slice findings. Responses to nAChR activation at excitatory corticothalamic and inhibitory tectothalamic inputs were diminished by aging. These findings suggest that cholinergic input to the MGB increases the strength of tectothalamic inhibitory projections, potentially improving the signal-to-noise ratio and signal detection while increasing corticothalamic gain, which may facilitate top-down identification of stimulus identity. These mechanisms appear to be affected negatively by aging, potentially diminishing speech perception in noisy environments. Cholinergic inputs to the MGB appear to maximize sensory processing by adjusting both top-down and bottom-up mechanisms in conditions of attention and arousal. SIGNIFICANCE STATEMENT The pedunculopontine tegmental nucleus is the source of cholinergic innervation for sensory thalamus and is a critical part of an ascending arousal system that controls the firing mode of thalamic cells based on attentional demand. The present study describes the location and impact of aging on presynaptic neuronal nicotinic acetylcholine receptors (nAChRs) within the circuitry of the auditory thalamus (medial geniculate body, MGB). We show that nAChRs are located on ascending inhibitory and descending excitatory presynaptic inputs onto MGB neurons, likely increasing gain selectively and improving temporal clarity. In addition, we show that aging has a deleterious effect on nAChR efficacy. Cholinergic dysfunction at the level of MGB may affect speech understanding negatively in the elderly population. PMID:29061702

  1. Active field control (AFC) -electro-acoustic enhancement system using acoustical feedback control

    NASA Astrophysics Data System (ADS)

    Miyazaki, Hideo; Watanabe, Takayuki; Kishinaga, Shinji; Kawakami, Fukushi

    2003-10-01

    AFC is an electro-acoustic enhancement system that uses FIR filters to optimize auditory impressions such as liveness, loudness, and spaciousness. The system has been under development at Yamaha Corporation for more than 15 years and has been installed in approximately 50 venues in Japan to date. AFC uses feedback control techniques to recreate reverberation from the physical reverberation of the room. To prevent coloration problems caused by the closed-loop condition, two time-varying control techniques are implemented in the AFC system to ensure smooth loop gain and a sufficient stability margin in the frequency response: (a) EMR (electric microphone rotator), which smooths the frequency responses between microphones and loudspeakers by periodically changing the combinations of inputs and outputs; and (b) fluctuating-FIR, which smooths the frequency responses of the FIR filters and prevents the coloration caused by fixed FIR filters by moving each FIR tap periodically along the time axis, each with a different phase and period. In this paper, these techniques are summarized. A block diagram of AFC using new equipment named AFC1, which has been developed at Yamaha Corporation and recently released in the US, is also presented.
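    A minimal sketch of the fluctuating-FIR idea is given below: each tap of a reverberation FIR is periodically displaced along the time axis with its own phase and period, so the loop response decorrelates over time and coloration is reduced. The tap values, periods, and update scheme are illustrative assumptions, not Yamaha's implementation.

      import numpy as np

      rng = np.random.default_rng(0)
      n_taps, fir_len = 64, 4096
      base_pos = np.sort(rng.integers(0, fir_len, n_taps))           # nominal tap delays (samples)
      gains = rng.normal(0, 1, n_taps) * np.exp(-base_pos / 2000.0)  # decaying reverb-like taps
      periods = rng.uniform(2.0, 6.0, n_taps)                        # per-tap fluctuation period (s)
      phases = rng.uniform(0, 2 * np.pi, n_taps)                     # per-tap phase
      max_shift = 8                                                  # maximum displacement (samples)

      def fluctuating_fir(t_seconds):
          """Return the FIR coefficients in effect at time t (taps drift sinusoidally)."""
          h = np.zeros(fir_len)
          shift = np.round(max_shift * np.sin(2 * np.pi * t_seconds / periods + phases)).astype(int)
          pos = np.clip(base_pos + shift, 0, fir_len - 1)
          np.add.at(h, pos, gains)                                   # place taps at drifted delays
          return h

      # e.g., recompute the filter every block and convolve the microphone signal with it:
      h_now = fluctuating_fir(t_seconds=1.25)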

  2. 14 CFR 29.1385 - Position light system installation.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... position lights and the rear position light must make a single circuit. (e) Light covers and color filters. Each light cover or color filter must be at least flame resistant and may not change color or shape or... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Position light system installation. 29.1385...

  3. 14 CFR 27.1385 - Position light system installation.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... position lights and the rear position light must make a single circuit. (e) Light covers and color filters. Each light cover or color filter must be at least flame resistant and may not change color or shape or... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Position light system installation. 27.1385...

  4. The Development of Performance-Based Auditory Aviation Classification Standards in the U.S. Navy,

    DTIC Science & Technology

    1987-12-01

    No abstract text was recovered for this record; the retrieved excerpt contains only reference fragments (e.g., Palva, A. and Jokinen, K., "The Role of the Binaural Test in Filtered Speech Audiometry," Acta Oto...) and portions of monosyllabic word lists (e.g., BEAD, BEAT, BEAN; REEL, HEEL, EEL; PAVE, PALE, PAY).

  5. Sex differences present in auditory looming perception, absent in auditory recession

    NASA Astrophysics Data System (ADS)

    Neuhoff, John G.; Seifritz, Erich

    2005-04-01

    When predicting the arrival time of an approaching sound source, listeners typically exhibit an anticipatory bias that affords a margin of safety in dealing with looming objects. The looming bias has been demonstrated behaviorally in the laboratory and in the field (Neuhoff 1998, 2001), neurally in fMRI studies (Seifritz et al., 2002), and comparatively in non-human primates (Ghazanfar, Neuhoff, and Logothetis, 2002). In the current work, male and female listeners were presented with three-dimensional looming sound sources and asked to press a button when the source was at the point of closest approach. Females exhibited a significantly greater anticipatory bias than males. Next, listeners were presented with sounds that either approached or receded and then stopped at three different terminal distances. Consistent with the time-to-arrival judgments, female terminal distance judgments for looming sources were significantly closer than male judgments. However, there was no difference between male and female terminal distance judgments for receding sounds. Taken together with the converging behavioral, neural, and comparative evidence, the current results illustrate the environmental salience of looming sounds and suggest that the anticipatory bias for auditory looming may have been shaped by evolution to provide a selective advantage in dealing with looming objects.

  6. Psychophysics of human echolocation.

    PubMed

    Schörnich, Sven; Wallmeier, Ludwig; Gessele, Nikodemus; Nagy, Andreas; Schranner, Michael; Kish, Daniel; Wiegrebe, Lutz

    2013-01-01

    The skills of some blind humans orienting in their environment through the auditory analysis of reflections from self-generated sounds have received only little scientific attention to date. Here we present data from a series of formal psychophysical experiments with sighted subjects trained to evaluate features of a virtual echo-acoustic space, allowing for rigid and fine-grain control of the stimulus parameters. The data show how subjects shape both their vocalisations and auditory analysis of the echoes to serve specific echo-acoustic tasks. First, we show that humans can echo-acoustically discriminate target distances with a resolution of less than 1 m for reference distances above 3.4 m. For a reference distance of 1.7 m, corresponding to an echo delay of only 10 ms, distance JNDs were typically around 0.5 m. Second, we explore the interplay between the precedence effect and echolocation. We show that the strong perceptual asymmetry between lead and lag is weakened during echolocation. Finally, we show that through the auditory analysis of self-generated sounds, subjects discriminate room-size changes as small as 10%. In summary, the current data confirm the practical efficacy of human echolocation, and they provide a rigid psychophysical basis for addressing its neural foundations.

  7. Promises of formal and informal musical activities in advancing neurocognitive development throughout childhood.

    PubMed

    Putkinen, Vesa; Tervaniemi, Mari; Saarikivi, Katri; Huotilainen, Minna

    2015-03-01

    Adult musicians show superior neural sound discrimination when compared to nonmusicians. However, it is unclear whether these group differences reflect the effects of experience or preexisting neural enhancement in individuals who seek out musical training. Tracking how brain function matures over time in musically trained and nontrained children can shed light on this issue. Here, we review our recent longitudinal event-related potential (ERP) studies that examine how formal musical training and less formal musical activities influence the maturation of brain responses related to sound discrimination and auditory attention. These studies found that musically trained school-aged children and preschool-aged children attending a musical playschool show more rapid maturation of neural sound discrimination than their control peers. Importantly, we found no evidence for pretraining group differences. In a related cross-sectional study, we found ERP and behavioral evidence for improved executive functions and control over auditory novelty processing in musically trained school-aged children and adolescents. Taken together, these studies provide evidence for the causal role of formal musical training and less formal musical activities in shaping the development of important neural auditory skills and suggest transfer effects with domain-general implications. © 2015 New York Academy of Sciences.

  8. Computational Design of Tunable UV-Vis-IR Filters Based on Silver Nanoparticle Arrays

    NASA Astrophysics Data System (ADS)

    Waters, Michael; Shi, Guangsha; Kioupakis, Emmanouil

    We propose design strategies to develop selective optical filters in the UV-Vis-IR spectrum using the surface plasmon response of silver nanoparticle arrays. Our finite-difference time-domain simulations allow us to rapidly evaluate many nanostructures comprising simple geometries while varying their shape, height, width, and spacing. Our results allow us to identify trends in the filtering spectra as well as the relative amount of absorption and reflection. Optical filtering with nanoparticles is applicable to any transparent substrate and can be easily adapted to existing manufacturing processes while keeping the total cost of materials low. This work was supported by Guardian Industries Corp.

  9. Encoding frequency contrast in primate auditory cortex

    PubMed Central

    Scott, Brian H.; Semple, Malcolm N.

    2014-01-01

    Changes in amplitude and frequency jointly determine much of the communicative significance of complex acoustic signals, including human speech. We have previously described responses of neurons in the core auditory cortex of awake rhesus macaques to sinusoidal amplitude modulation (SAM) signals. Here we report a complementary study of sinusoidal frequency modulation (SFM) in the same neurons. Responses to SFM were analogous to SAM responses in that changes in multiple parameters defining SFM stimuli (e.g., modulation frequency, modulation depth, carrier frequency) were robustly encoded in the temporal dynamics of the spike trains. For example, changes in the carrier frequency produced highly reproducible changes in shapes of the modulation period histogram, consistent with the notion that the instantaneous probability of discharge mirrors the moment-by-moment spectrum at low modulation rates. The upper limit for phase locking was similar across SAM and SFM within neurons, suggesting shared biophysical constraints on temporal processing. Using spike train classification methods, we found that neural thresholds for modulation depth discrimination are typically far lower than would be predicted from frequency tuning to static tones. This “dynamic hyperacuity” suggests a substantial central enhancement of the neural representation of frequency changes relative to the auditory periphery. Spike timing information was superior to average rate information when discriminating among SFM signals, and even when discriminating among static tones varying in frequency. This finding held even when differences in total spike count across stimuli were normalized, indicating both the primacy and generality of temporal response dynamics in cortical auditory processing. PMID:24598525

  10. Dynamic Reweighting of Auditory Modulation Filters.

    PubMed

    Joosten, Eva R M; Shamma, Shihab A; Lorenzi, Christian; Neri, Peter

    2016-07-01

    Sound waveforms convey information largely via amplitude modulations (AM). A large body of experimental evidence has provided support for a modulation (bandpass) filterbank. Details of this model have varied over time partly reflecting different experimental conditions and diverse datasets from distinct task strategies, contributing uncertainty to the bandwidth measurements and leaving important issues unresolved. We adopt here a solely data-driven measurement approach in which we first demonstrate how different models can be subsumed within a common 'cascade' framework, and then proceed to characterize the cascade via system identification analysis using a single stimulus/task specification and hence stable task rules largely unconstrained by any model or parameters. Observers were required to detect a brief change in level superimposed onto random level changes that served as AM noise; the relationship between trial-by-trial noisy fluctuations and corresponding human responses enables targeted identification of distinct cascade elements. The resulting measurements exhibit a dynamic complex picture in which human perception of auditory modulations appears adaptive in nature, evolving from an initial lowpass to bandpass modes (with broad tuning, Q∼1) following repeated stimulus exposure.

  11. Spatiotemporal dynamics of auditory attention synchronize with speech

    PubMed Central

    Wöstmann, Malte; Herrmann, Björn; Maess, Burkhard

    2016-01-01

    Attention plays a fundamental role in selectively processing stimuli in our environment despite distraction. Spatial attention induces increasing and decreasing power of neural alpha oscillations (8–12 Hz) in brain regions ipsilateral and contralateral to the locus of attention, respectively. This study tested whether the hemispheric lateralization of alpha power codes not just the spatial location but also the temporal structure of the stimulus. Participants attended to spoken digits presented to one ear and ignored tightly synchronized distracting digits presented to the other ear. In the magnetoencephalogram, spatial attention induced lateralization of alpha power in parietal, but notably also in auditory cortical regions. This alpha power lateralization was not maintained steadily but fluctuated in synchrony with the speech rate and lagged the time course of low-frequency (1–5 Hz) sensory synchronization. Higher amplitude of alpha power modulation at the speech rate was predictive of a listener’s enhanced performance of stream-specific speech comprehension. Our findings demonstrate that alpha power lateralization is modulated in tune with the sensory input and acts as a spatiotemporal filter controlling the read-out of sensory content. PMID:27001861

  12. SABRE: ligand/structure-based virtual screening approach using consensus molecular-shape pattern recognition.

    PubMed

    Wei, Ning-Ning; Hamza, Adel

    2014-01-27

    We present an efficient and rational ligand/structure shape-based virtual screening approach combining our previous ligand shape-based similarity SABRE (shape-approach-based routines enhanced) and the 3D shape of the receptor binding site. Our approach exploits the pharmacological preferences of a number of known active ligands to take advantage of the structural diversities and chemical similarities, using a linear combination of weighted molecular shape density. Furthermore, the algorithm generates a consensus molecular-shape pattern recognition that is used to filter and place the candidate structure into the binding pocket. The descriptor pool used to construct the consensus molecular-shape pattern consists of four-dimensional (4D) fingerprints generated from the distribution of conformer states available to a molecule and the 3D shapes of a set of active ligands computed using SABRE software. The virtual screening efficiency of SABRE was validated using the Database of Useful Decoys (DUD) and the filtered version (WOMBAT) of 10 DUD targets. The ligand/structure shape-based similarity SABRE algorithm outperforms several other widely used virtual screening methods that use data fusion of multiple screening tools (2D and 3D fingerprints) and demonstrates a superior early retrieval rate of active compounds (EF(0.1%) = 69.0% and EF(1%) = 98.7%) from a large ligand database (∼95,000 structures). Therefore, our similarity approach can be of particular use for identifying active compounds that are similar to reference molecules and predicting activity against other targets (chemogenomics). An academic license of the SABRE program is available on request.

  13. Aging Affects Adaptation to Sound-Level Statistics in Human Auditory Cortex.

    PubMed

    Herrmann, Björn; Maess, Burkhard; Johnsrude, Ingrid S

    2018-02-21

    Optimal perception requires efficient and adaptive neural processing of sensory input. Neurons in nonhuman mammals adapt to the statistical properties of acoustic feature distributions such that they become sensitive to sounds that are most likely to occur in the environment. However, whether human auditory responses adapt to stimulus statistical distributions and how aging affects adaptation to stimulus statistics is unknown. We used MEG to study how exposure to different distributions of sound levels affects adaptation in auditory cortex of younger (mean: 25 years; n = 19) and older (mean: 64 years; n = 20) adults (male and female). Participants passively listened to two sound-level distributions with different modes (either 15 or 45 dB sensation level). In a control block with long interstimulus intervals, allowing neural populations to recover from adaptation, neural response magnitudes were similar between younger and older adults. Critically, both age groups demonstrated adaptation to sound-level stimulus statistics, but adaptation was altered for older compared with younger people: in the older group, neural responses continued to be sensitive to sound level under conditions in which responses were fully adapted in the younger group. The lack of full adaptation to the statistics of the sensory environment may be a physiological mechanism underlying the known difficulty that older adults have with filtering out irrelevant sensory information. SIGNIFICANCE STATEMENT Behavior requires efficient processing of acoustic stimulation. Animal work suggests that neurons accomplish efficient processing by adjusting their response sensitivity depending on statistical properties of the acoustic environment. Little is known about the extent to which this adaptation to stimulus statistics generalizes to humans, particularly to older humans. We used MEG to investigate how aging influences adaptation to sound-level statistics. Listeners were presented with sounds drawn from sound-level distributions with different modes (15 vs 45 dB). Auditory cortex neurons adapted to sound-level statistics in younger and older adults, but adaptation was incomplete in older people. The data suggest that the aging auditory system does not fully capitalize on the statistics available in sound environments to tune the perceptual system dynamically. Copyright © 2018 the authors 0270-6474/18/381989-11$15.00/0.

  14. A target detection multi-layer matched filter for color and hyperspectral cameras

    NASA Astrophysics Data System (ADS)

    Miyanishi, Tomoya; Preece, Bradley L.; Reynolds, Joseph P.

    2018-05-01

    In this article, a method for applying matched filters to a 3-dimensional hyperspectral data cube is discussed. In many applications, color visible cameras or hyperspectral cameras are used for target detection in which the color or spectral optical properties of the imaged materials are partially known in advance. Therefore, matched filtering on spectral data along with shape data is an effective method for detecting certain targets. Since many methods for 2D image filtering have been researched, we propose a multi-layer filter in which ordinary spatial matched filters are applied before the spectral filters. We discuss a way to layer the spectral filters for a 3D hyperspectral data cube, accompanied by a detectability metric for calculating the SNR of the filter. This method is appropriate for visible color cameras and hyperspectral cameras. We also demonstrate an analysis using the Night Vision Integrated Performance Model (NV-IPM) and a Monte Carlo simulation to confirm the effectiveness of the filtering in providing a higher output SNR and a lower false alarm rate.
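    As a rough sketch of the layering idea (spatial matched filtering per band followed by a spectral matched filter across bands), the code below produces a detection score map from a hyperspectral cube; it is a generic illustration rather than the NV-IPM analysis, and the target template, spectral signature, and noise covariance are assumptions.

      import numpy as np
      from scipy.signal import fftconvolve

      def multilayer_matched_filter(cube, spatial_template, spectral_signature, noise_cov):
          """cube: (B, H, W) hyperspectral data; spatial_template: (h, w) expected target shape;
          spectral_signature: (B,) expected target spectrum; noise_cov: (B, B) noise covariance."""
          t = spatial_template - spatial_template.mean()
          t /= np.linalg.norm(t)                                     # unit-energy spatial filter
          spatial_out = np.stack([fftconvolve(band, t[::-1, ::-1], mode='same')
                                  for band in cube])                 # correlate each band with the shape
          w = np.linalg.solve(noise_cov, spectral_signature)         # noise-whitened spectral filter
          w /= np.sqrt(spectral_signature @ w)
          return np.tensordot(w, spatial_out, axes=(0, 0))           # (H, W) detection score map

      # Output SNR at a pixel can be gauged as its score divided by the score spread
      # over background pixels, which is one way to set a false-alarm threshold.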

  15. The Influence of Visual and Auditory Information on the Perception of Speech and Non-Speech Oral Movements in Patients with Left Hemisphere Lesions

    ERIC Educational Resources Information Center

    Schmid, Gabriele; Thielmann, Anke; Ziegler, Wolfram

    2009-01-01

    Patients with lesions of the left hemisphere often suffer from oral-facial apraxia, apraxia of speech, and aphasia. In these patients, visual features often play a critical role in speech and language therapy, when pictured lip shapes or the therapist's visible mouth movements are used to facilitate speech production and articulation. This demands…

  16. Background noise can enhance cortical auditory evoked potentials under certain conditions

    PubMed Central

    Papesh, Melissa A.; Billings, Curtis J.; Baltzell, Lucas S.

    2017-01-01

    Objective To use cortical auditory evoked potentials (CAEPs) to understand neural encoding in background noise and the conditions under which noise enhances CAEP responses. Methods CAEPs from 16 normal-hearing listeners were recorded using the speech syllable /ba/ presented in quiet and speech-shaped noise at signal-to-noise ratios of 10 and 30 dB. The syllable was presented binaurally and monaurally at two presentation rates. Results The amplitudes of N1 and N2 peaks were often significantly enhanced in the presence of low-level background noise relative to quiet conditions, while P1 and P2 amplitudes were consistently reduced in noise. P1 and P2 amplitudes were significantly larger during binaural compared to monaural presentations, while N1 and N2 peaks were similar between binaural and monaural conditions. Conclusions Methodological choices impact CAEP peaks in very different ways. Negative peaks can be enhanced by background noise in certain conditions, while positive peaks are generally enhanced by binaural presentations. Significance Methodological choices significantly impact CAEPs acquired in quiet and in noise. If CAEPs are to be used as a tool to explore signal encoding in noise, scientists must be cognizant of how differences in acquisition and processing protocols selectively shape CAEP responses. PMID:25453611

  17. Speech Processing to Improve the Perception of Speech in Background Noise for Children With Auditory Processing Disorder and Typically Developing Peers.

    PubMed

    Flanagan, Sheila; Zorilă, Tudor-Cătălin; Stylianou, Yannis; Moore, Brian C J

    2018-01-01

    Auditory processing disorder (APD) may be diagnosed when a child has listening difficulties but has normal audiometric thresholds. For adults with normal hearing and with mild-to-moderate hearing impairment, an algorithm called spectral shaping with dynamic range compression (SSDRC) has been shown to increase the intelligibility of speech when background noise is added after the processing. Here, we assessed the effect of such processing using 8 children with APD and 10 age-matched control children. The loudness of the processed and unprocessed sentences was matched using a loudness model. The task was to repeat back sentences produced by a female speaker when presented with either speech-shaped noise (SSN) or a male competing speaker (CS) at two signal-to-background ratios (SBRs). Speech identification was significantly better with SSDRC processing than without, for both groups. The benefit of SSDRC processing was greater for the SSN than for the CS background. For the SSN, scores were similar for the two groups at both SBRs. For the CS, the APD group performed significantly more poorly than the control group. The overall improvement produced by SSDRC processing could be useful for enhancing communication in a classroom where the teacher's voice is broadcast using a wireless system.

  18. A correlational method to concurrently measure envelope and temporal fine structure weights: effects of age, cochlear pathology, and spectral shaping.

    PubMed

    Fogerty, Daniel; Humes, Larry E

    2012-09-01

    The speech signal may be divided into spectral frequency-bands, each band containing temporal properties of the envelope and fine structure. This study measured the perceptual weights for the envelope and fine structure in each of three frequency bands for sentence materials in young normal-hearing listeners, older normal-hearing listeners, aided older hearing-impaired listeners, and spectrally matched young normal-hearing listeners. The availability of each acoustic property was independently varied through noisy signal extraction. Thus, the full speech stimulus was presented with noise used to mask six different auditory channels. Perceptual weights were determined by correlating a listener's performance with the signal-to-noise ratio of each acoustic property on a trial-by-trial basis. Results demonstrate that temporal fine structure perceptual weights remain stable across the four listener groups. However, a different weighting topography was observed across the listener groups for envelope cues. Results suggest that spectral shaping used to preserve the audibility of the speech stimulus may alter the allocation of perceptual resources. The relative perceptual weighting of envelope cues may also change with age. Concurrent testing of sentences repeated once on a previous day demonstrated that weighting strategies for all listener groups can change, suggesting an initial stabilization period or susceptibility to auditory training.
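    A minimal sketch of the correlational weighting analysis is given below: each trial provides the signal-to-noise ratio of each acoustic property (envelope or fine structure in each band) and a correct/incorrect response, and the perceptual weight of a property is its trial-by-trial correlation with performance. The variable names, data layout, and normalization are assumptions.

      import numpy as np

      def perceptual_weights(snr_per_trial, correct):
          """snr_per_trial: (n_trials, n_properties) SNR of each cue on each trial.
          correct: (n_trials,) 1 if the sentence was repeated correctly, else 0.
          Returns one (point-biserial) correlation per property, normalized to relative weights."""
          snr = np.asarray(snr_per_trial, dtype=float)
          y = np.asarray(correct, dtype=float)
          y = (y - y.mean()) / y.std()
          z = (snr - snr.mean(axis=0)) / snr.std(axis=0)
          raw = z.T @ y / len(y)                          # correlation per acoustic property
          return raw / np.abs(raw).sum()                  # relative weights

      # Example with simulated data: 6 properties (envelope + fine structure in 3 bands), 300 trials.
      rng = np.random.default_rng(1)
      snrs = rng.uniform(-10, 10, (300, 6))
      resp = (snrs @ np.array([0.4, 0.2, 0.1, 0.1, 0.1, 0.1]) + rng.normal(0, 5, 300)) > 0
      print(perceptual_weights(snrs, resp.astype(int)))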

  19. Sound symbolism scaffolds language development in preverbal infants.

    PubMed

    Asano, Michiko; Imai, Mutsumi; Kita, Sotaro; Kitajo, Keiichi; Okada, Hiroyuki; Thierry, Guillaume

    2015-02-01

    A fundamental question in language development is how infants start to assign meaning to words. Here, using three Electroencephalogram (EEG)-based measures of brain activity, we establish that preverbal 11-month-old infants are sensitive to the non-arbitrary correspondences between language sounds and concepts, that is, to sound symbolism. In each trial, infant participants were presented with a visual stimulus (e.g., a round shape) followed by a novel spoken word that either sound-symbolically matched ("moma") or mismatched ("kipi") the shape. Amplitude increase in the gamma band showed perceptual integration of visual and auditory stimuli in the match condition within 300 msec of word onset. Furthermore, phase synchronization between electrodes at around 400 msec revealed intensified large-scale, left-hemispheric communication between brain regions in the mismatch condition as compared to the match condition, indicating heightened processing effort when integration was more demanding. Finally, event-related brain potentials showed an increased adult-like N400 response - an index of semantic integration difficulty - in the mismatch as compared to the match condition. Together, these findings suggest that 11-month-old infants spontaneously map auditory language onto visual experience by recruiting a cross-modal perceptual processing system and a nascent semantic network within the first year of life. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  20. Using Simplistic Shape/Surface Models to Predict Brightness in Estimation Filters

    NASA Astrophysics Data System (ADS)

    Wetterer, C.; Sheppard, D.; Hunt, B.

    The prerequisite for using brightness (radiometric flux intensity) measurements in an estimation filter is a measurement function that accurately predicts a space object's brightness as the parameters of interest vary. These parameters include changes in attitude and articulations of particular components (e.g., solar panel east-west offsets relative to direct sun-tracking). Typically, shape models and bidirectional reflectance distribution functions are combined to provide this forward light-curve modeling capability. To achieve precise orbit predictions that include shape/surface-dependent forces such as radiation pressure, relatively complex and sophisticated modeling is required. Unfortunately, increasing the complexity of the models makes it difficult to estimate all of the parameters simultaneously, because changes in light-curve features can then be explained by variations in a number of different properties. The classic example is the coupling between the albedo and the area of a surface. If, however, the desire is to extract information about a single, specific parameter or feature from the light curve, a simple shape/surface model can be used. This paper details an example in which a complex model is used to create simulated light curves, and a simple model is then used in an estimation filter to extract a particular feature of interest. For this to be successful, however, the simple model must first be constructed using training data in which the feature of interest is known, or at least known to be constant.
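    A minimal sketch of such a simple measurement function is given below: a single flat Lambertian facet (standing in for a solar panel) whose normal is offset from the sun direction by an angle the estimation filter would carry as a state parameter. The geometry, albedo, area, and range are illustrative assumptions; a real filter would add a bus model and a measured BRDF.

      import numpy as np

      def predicted_flux(offset_angle, sun_hat, obs_hat, area=10.0, albedo=0.3, range_m=4e7):
          """Reflected flux (W/m^2 at the sensor) from one Lambertian facet whose normal is
          the sun direction rotated by offset_angle about the axis normal to the sun/observer plane."""
          axis = np.cross(sun_hat, obs_hat)
          axis /= np.linalg.norm(axis)
          c, s = np.cos(offset_angle), np.sin(offset_angle)
          # Rodrigues rotation of the sun direction gives the facet normal
          n = c * sun_hat + s * np.cross(axis, sun_hat) + (1 - c) * axis * (axis @ sun_hat)
          mu_sun, mu_obs = max(n @ sun_hat, 0.0), max(n @ obs_hat, 0.0)
          solar_const = 1361.0                                    # W/m^2 at 1 AU
          return solar_const * albedo * area * mu_sun * mu_obs / (np.pi * range_m ** 2)

      # In an estimation filter the residual is observed_flux - predicted_flux(x), and the
      # offset angle (the single feature of interest) is updated from that residual.
      sun = np.array([1.0, 0.0, 0.0]); obs = np.array([0.6, 0.8, 0.0])
      print(predicted_flux(np.radians(5.0), sun, obs))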
