Vonderschen, Katrin; Wagner, Hermann
2012-04-25
Birds and mammals exploit interaural time differences (ITDs) for sound localization. Subsequent to ITD detection by brainstem neurons, ITD processing continues in parallel midbrain and forebrain pathways. In the barn owl, both ITD detection and processing in the midbrain are specialized to extract ITDs independent of frequency, which amounts to a pure time delay representation. Recent results have elucidated different mechanisms of ITD detection in mammals, which lead to a representation of small ITDs in high-frequency channels and large ITDs in low-frequency channels, resembling a phase delay representation. However, the detection mechanism does not prevent a change in ITD representation at higher processing stages. Here we analyze ITD tuning across frequency channels with pure tone and noise stimuli in neurons of the barn owl's auditory arcopallium, a nucleus at the endpoint of the forebrain pathway. To extend the analysis of ITD representation across frequency bands to a large neural population, we employed Fourier analysis for the spectral decomposition of ITD curves recorded with noise stimuli. This method was validated using physiological as well as model data. We found that low frequencies convey sensitivity to large ITDs, whereas high frequencies convey sensitivity to small ITDs. Moreover, different linear phase frequency regimes in the high-frequency and low-frequency ranges suggested an independent convergence of inputs from these frequency channels. Our results are consistent with ITD being remodeled toward a phase delay representation along the forebrain pathway. This indicates that sensory representations may undergo substantial reorganization, presumably in relation to specific behavioral output.
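The Fourier-based decomposition described in this abstract can be illustrated with a minimal sketch. This is not the authors' code: the frequency bands, delay, and ITD sampling below are invented for illustration. The idea is to project an ITD tuning curve onto a set of carrier frequencies and read a time delay off the slope of phase versus frequency, since a linear phase regime corresponds to a constant time delay.

```python
import numpy as np

itds = np.arange(-500e-6, 500e-6, 10e-6)     # ITD axis in seconds (assumed)
freqs = np.array([2000.0, 4000.0, 6000.0])   # hypothetical frequency bands (Hz)
true_delay = 100e-6                          # assumed pure time delay

# Synthetic noise-ITD curve: each band contributes a cosine shifted by the delay
curve = sum(np.cos(2 * np.pi * f * (itds - true_delay)) for f in freqs)

# Complex projection gives amplitude and phase of each frequency component
coefs = np.array([np.sum(curve * np.exp(-2j * np.pi * f * itds)) for f in freqs])
phases = np.unwrap(np.angle(coefs))

# For a pure time delay, phase = -2*pi*f*delay, so the phase-vs-frequency
# slope recovers the delay
slope = np.polyfit(freqs, phases, 1)[0]
delay_est = -slope / (2 * np.pi)             # ~100 microseconds
```

With real spike data the phases are noisy and the fit is done per frequency regime; here the components are exact, so the recovered delay matches the assumed one.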
Forebrain pathway for auditory space processing in the barn owl.
Cohen, Y E; Miller, G L; Knudsen, E I
1998-02-01
The forebrain plays an important role in many aspects of sound localization behavior. Yet, the forebrain pathway that processes auditory spatial information is not known for any species. Using standard anatomic labeling techniques, we took a "top-down" approach to trace the flow of auditory spatial information from an output area of the forebrain sound localization pathway (the auditory archistriatum, AAr), back through the forebrain, and into the auditory midbrain. Previous work has demonstrated that AAr units are specialized for auditory space processing. The results presented here show that the AAr receives afferent input from Field L both directly and indirectly via the caudolateral neostriatum. Afferent input to Field L originates mainly in the auditory thalamus, nucleus ovoidalis, which, in turn, receives input from the central nucleus of the inferior colliculus. In addition, we confirmed previously reported projections of the AAr to the basal ganglia, the external nucleus of the inferior colliculus (ICX), the deep layers of the optic tectum, and various brain stem nuclei. A series of inactivation experiments demonstrated that the sharp tuning of AAr sites for binaural spatial cues depends on Field L input but not on input from the auditory space map in the midbrain ICX: pharmacological inactivation of Field L completely eliminated auditory responses in the AAr, whereas bilateral ablation of the midbrain ICX had no appreciable effect on AAr responses. We conclude, therefore, that the forebrain sound localization pathway can process auditory spatial information independently of the midbrain localization pathway.
2017-01-01
While a topographic map of auditory space exists in the vertebrate midbrain, it is absent in the forebrain. Yet, both brain regions are implicated in sound localization. The heterogeneous spatial tuning of adjacent sites in the forebrain compared to the midbrain reflects different underlying circuitries, which is expected to affect the correlation structure, i.e., signal (similarity of tuning) and noise (trial-by-trial variability) correlations. Recent studies have drawn attention to the impact of response correlations on the information readout from a neural population. We thus analyzed the correlation structure in midbrain and forebrain regions of the barn owl’s auditory system. Tetrodes were used to record in the midbrain and two forebrain regions, Field L and the downstream auditory arcopallium (AAr), in anesthetized owls. Nearby neurons in the midbrain showed high signal correlations and high response noise correlations (RNCs), consistent with shared inputs. As previously reported, Field L was arranged in random clusters of similarly tuned neurons. Interestingly, AAr neurons displayed homogeneous monotonic azimuth tuning, while response variability of nearby neurons was significantly less correlated than in the midbrain. Using a decoding approach, we demonstrate that low RNC in AAr limits the potentially detrimental effect noise correlation can have on information, assuming a rate code as proposed for mammalian sound localization. This study harnesses the power of correlation structure analysis to investigate the coding of auditory space. Our findings demonstrate distinct correlation structures in the auditory midbrain and forebrain, which would be beneficial for a rate-code framework for sound localization in the nontopographic forebrain representation of auditory space. PMID:28674698
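The two correlation measures contrasted in this abstract can be computed as in the sketch below. This uses simulated neurons, not the study's data: signal correlation is the correlation of trial-averaged tuning curves, and noise correlation is the correlation of trial-by-trial residuals after subtracting each stimulus's mean response.

```python
import numpy as np

rng = np.random.default_rng(0)
azimuths = np.linspace(-90, 90, 13)   # stimulus conditions (assumed)
n_trials = 200

# Hypothetical monotonic azimuth tuning for two nearby neurons
tuning_a = 10 + 0.10 * azimuths
tuning_b = 12 + 0.08 * azimuths

# A shared noise source induces trial-by-trial (noise) correlation
shared = rng.normal(0, 1, (n_trials, azimuths.size))
resp_a = tuning_a + shared + rng.normal(0, 1, (n_trials, azimuths.size))
resp_b = tuning_b + shared + rng.normal(0, 1, (n_trials, azimuths.size))

# Signal correlation: similarity of the trial-averaged tuning curves
sig_corr = np.corrcoef(resp_a.mean(0), resp_b.mean(0))[0, 1]

# Noise correlation: correlation of residuals after removing the mean
# response to each stimulus, pooled across conditions
res_a = (resp_a - resp_a.mean(0)).ravel()
res_b = (resp_b - resp_b.mean(0)).ravel()
noise_corr = np.corrcoef(res_a, res_b)[0, 1]
# sig_corr ~ 1 (similar tuning); noise_corr ~ 0.5 (the shared-noise fraction)
```

The "low RNC" finding in AAr corresponds to a small `noise_corr` despite a high `sig_corr`, which is the combination that least degrades a rate-code readout.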
Rapid Effects of Hearing Song on Catecholaminergic Activity in the Songbird Auditory Pathway
Matragrano, Lisa L.; Beaulieu, Michaël; Phillip, Jessica O.; Rae, Ali I.; Sanford, Sara E.; Sockman, Keith W.; Maney, Donna L.
2012-01-01
Catecholaminergic (CA) neurons innervate sensory areas and affect the processing of sensory signals. For example, in birds, CA fibers innervate the auditory pathway at each level, including the midbrain, thalamus, and forebrain. We have shown previously that in female European starlings, CA activity in the auditory forebrain can be enhanced by exposure to attractive male song for one week. It is not known, however, whether hearing song can initiate that activity more rapidly. Here, we exposed estrogen-primed, female white-throated sparrows to conspecific male song and looked for evidence of rapid synthesis of catecholamines in auditory areas. In one hemisphere of the brain, we used immunohistochemistry to detect the phosphorylation of tyrosine hydroxylase (TH), a rate-limiting enzyme in the CA synthetic pathway. We found that immunoreactivity for TH phosphorylated at serine 40 increased dramatically in the auditory forebrain, but not the auditory thalamus and midbrain, after 15 min of song exposure. In the other hemisphere, we used high pressure liquid chromatography to measure catecholamines and their metabolites. We found that two dopamine metabolites, dihydroxyphenylacetic acid and homovanillic acid, increased in the auditory forebrain but not the auditory midbrain after 30 min of exposure to conspecific song. Our results are consistent with the hypothesis that exposure to a behaviorally relevant auditory stimulus rapidly induces CA activity, which may play a role in auditory responses. PMID:22724011
Matragrano, Lisa L.; Sanford, Sara E.; Salvante, Katrina G.; Beaulieu, Michaël; Sockman, Keith W.; Maney, Donna L.
2011-01-01
Because no organism lives in an unchanging environment, sensory processes must remain plastic so that in any context, they emphasize the most relevant signals. As the behavioral relevance of sociosexual signals changes along with reproductive state, the perception of those signals is altered by reproductive hormones such as estradiol (E2). We showed previously that in white-throated sparrows, immediate early gene responses in the auditory pathway of females are selective for conspecific male song only when plasma E2 is elevated to breeding-typical levels. In this study, we looked for evidence that E2-dependent modulation of auditory responses is mediated by serotonergic systems. In female nonbreeding white-throated sparrows treated with E2, the density of fibers immunoreactive for serotonin transporter innervating the auditory midbrain and rostral auditory forebrain increased compared with controls. E2 treatment also increased the concentration of the serotonin metabolite 5-HIAA in the caudomedial mesopallium of the auditory forebrain. In a second experiment, females exposed to 30 min of conspecific male song had higher levels of 5-HIAA in the caudomedial nidopallium of the auditory forebrain than birds not exposed to song. Overall, we show that in this seasonal breeder, (1) serotonergic fibers innervate auditory areas; (2) the density of those fibers is higher in females with breeding-typical levels of E2 than in nonbreeding, untreated females; and (3) serotonin is released in the auditory forebrain within minutes in response to conspecific vocalizations. Our results are consistent with the hypothesis that E2 acts via serotonin systems to alter auditory processing. PMID:21942431
Parthasarathy, Aravindakshan; Bartlett, Edward
2012-07-01
Auditory brainstem responses (ABRs) and envelope and frequency following responses (EFRs and FFRs) are widely used to study aberrant auditory processing in conditions such as aging. We have previously reported age-related deficits in auditory processing for rapid amplitude modulation (AM) frequencies using EFRs recorded from a single channel. However, sensitive testing of EFRs along a wide range of modulation frequencies is required to gain a more complete understanding of the auditory processing deficits. In this study, ABRs and EFRs were recorded simultaneously from two electrode configurations in young and old Fischer-344 rats, a common auditory aging model. Analysis shows that the two channels respond most sensitively to complementary AM frequencies. Channel 1, recorded from Fz to mastoid, responds better to faster AM frequencies in the 100-700 Hz range, while Channel 2, recorded from the interaural line to the mastoid, responds better to slower AM frequencies in the 16-100 Hz range. Simultaneous recording of Channels 1 and 2 using AM stimuli with varying sound levels and modulation depths shows that age-related deficits in temporal processing are not present at slower AM frequencies but only at more rapid ones, which would not have been apparent when recording from either channel alone. Comparison of EFRs between un-anesthetized and isoflurane-anesthetized recordings in young animals, as well as comparison with previously published ABR waveforms, suggests that the generators of Channel 1 may emphasize more caudal brainstem structures, while those of Channel 2 may emphasize more rostral auditory nuclei including the inferior colliculus and the forebrain, with the boundary of separation potentially along the cochlear nucleus/superior olivary complex.
Simultaneous two-channel recording of EFRs helps to give a more complete understanding of the properties of auditory temporal processing over a wide range of modulation frequencies, which is useful in understanding neural representations of sound stimuli in normal, developmental, or pathological conditions. Copyright © 2012 Elsevier B.V. All rights reserved.
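A minimal sketch of how an EFR at a given AM frequency is commonly quantified (an invented signal and parameters, not the authors' recording pipeline): synthesize a noisy envelope-following response at a 40 Hz modulation frequency, then read its amplitude from the FFT bin at that frequency.

```python
import numpy as np

fs = 8000.0                          # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)        # 1 s epoch -> 1 Hz bin resolution
am_freq = 40.0                       # modulation frequency of interest (Hz)

# Toy "EFR": a 40 Hz envelope-following component buried in noise
rng = np.random.default_rng(1)
efr = 2.0 * np.sin(2 * np.pi * am_freq * t) + rng.normal(0, 1, t.size)

# Single-sided amplitude spectrum; the 40 Hz bin recovers the component
spec = np.fft.rfft(efr) / t.size
fbins = np.fft.rfftfreq(t.size, 1 / fs)
amp = 2 * np.abs(spec[np.argmin(np.abs(fbins - am_freq))])  # ~2.0
```

Sweeping `am_freq` over, e.g., 16-700 Hz and comparing `amp` between channels is the kind of analysis that reveals the complementary sensitivity ranges described above.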
The cholinergic basal forebrain in the ferret and its inputs to the auditory cortex
Bajo, Victoria M; Leach, Nicholas D; Cordery, Patricia M; Nodal, Fernando R; King, Andrew J
2014-01-01
Cholinergic inputs to the auditory cortex can modulate sensory processing and regulate stimulus-specific plasticity according to the behavioural state of the subject. In order to understand how acetylcholine achieves this, it is essential to elucidate the circuitry by which cholinergic inputs influence the cortex. In this study, we described the distribution of cholinergic neurons in the basal forebrain and their inputs to the auditory cortex of the ferret, a species used increasingly in studies of auditory learning and plasticity. Cholinergic neurons in the basal forebrain, visualized by choline acetyltransferase and p75 neurotrophin receptor immunocytochemistry, were distributed through the medial septum, diagonal band of Broca, and nucleus basalis magnocellularis. Epipial tracer deposits and injections of the immunotoxin ME20.4-SAP (monoclonal antibody specific for the p75 neurotrophin receptor conjugated to saporin) in the auditory cortex showed that cholinergic inputs originate almost exclusively in the ipsilateral nucleus basalis. Moreover, tracer injections in the nucleus basalis revealed a pattern of labelled fibres and terminal fields that resembled acetylcholinesterase fibre staining in the auditory cortex, with the heaviest labelling in layers II/III and in the infragranular layers. Labelled fibres with small en-passant varicosities and simple terminal swellings were observed throughout all auditory cortical regions. The widespread distribution of cholinergic inputs from the nucleus basalis to both primary and higher level areas of the auditory cortex suggests that acetylcholine is likely to be involved in modulating many aspects of auditory processing. PMID:24945075
Transcriptional maturation of the mouse auditory forebrain.
Hackett, Troy A; Guo, Yan; Clause, Amanda; Hackett, Nicholas J; Garbett, Krassimira; Zhang, Pan; Polley, Daniel B; Mirnics, Karoly
2015-08-14
The maturation of the brain involves the coordinated expression of thousands of genes, proteins and regulatory elements over time. In sensory pathways, gene expression profiles are modified by age and sensory experience in a manner that differs between brain regions and cell types. In the auditory system of altricial animals, neuronal activity increases markedly after the opening of the ear canals, initiating events that culminate in the maturation of auditory circuitry in the brain. This window provides a unique opportunity to study how gene expression patterns are modified by the onset of sensory experience through maturity. As a tool for capturing these features, next-generation sequencing of total RNA (RNAseq) has tremendous utility, because the entire transcriptome can be screened to index expression of any gene. To date, whole transcriptome profiles have not been generated for any central auditory structure in any species at any age. In the present study, RNAseq was used to profile two regions of the mouse auditory forebrain (A1, primary auditory cortex; MG, medial geniculate) at key stages of postnatal development (P7, P14, P21, adult) before and after the onset of hearing (~P12). Hierarchical clustering, differential expression, and functional geneset enrichment analyses (GSEA) were used to profile the expression patterns of all genes. Selected genesets related to neurotransmission, developmental plasticity, critical periods and brain structure were highlighted. An accessible repository of the entire dataset was also constructed that permits extraction and screening of all data from the global through single-gene levels. To our knowledge, this is the first whole transcriptome sequencing study of the forebrain of any mammalian sensory system. Although the data are most relevant for the auditory system, they are generally applicable to forebrain structures in the visual and somatosensory systems, as well. 
The main findings were: (1) Global gene expression patterns were tightly clustered by postnatal age and brain region; (2) comparing A1 and MG, the total numbers of differentially expressed genes were comparable from P7 to P21, then dropped to nearly half by adulthood; (3) comparing successive age groups, the greatest numbers of differentially expressed genes were found between P7 and P14 in both regions, followed by a steady decline in numbers with age; (4) maturational trajectories in expression levels varied at the single gene level (increasing, decreasing, static, other); (5) between regions, the profiles of single genes were often asymmetric; (6) GSEA revealed that genesets related to neural activity and plasticity were typically upregulated from P7 to adult, while those related to structure tended to be downregulated; (7) GSEA and pathways analysis of selected functional networks were not predictive of expression patterns in the auditory forebrain for all genes, reflecting regional specificity at the single gene level. Gene expression in the auditory forebrain during postnatal development is in constant flux and becomes increasingly stable with age. Maturational changes are evident at the global through single gene levels. Transcriptome profiles in A1 and MG are distinct at all ages, and differ from other brain regions. The database generated by this study provides a rich foundation for the identification of novel developmental biomarkers, functional gene pathways, and targeted studies of postnatal maturation in the auditory forebrain.
Song exposure regulates known and novel microRNAs in the zebra finch auditory forebrain
2011-01-01
Background: In an important model for neuroscience, songbirds learn to discriminate songs they hear during tape-recorded playbacks, as demonstrated by song-specific habituation of both behavioral and neurogenomic responses in the auditory forebrain. We hypothesized that microRNAs (miRNAs or miRs) may participate in the changing pattern of gene expression induced by song exposure. To test this, we used massively parallel Illumina sequencing to analyse small RNAs from the auditory forebrain of adult zebra finches exposed to tape-recorded birdsong or silence. Results: In the auditory forebrain, we identified 121 known miRNAs conserved in other vertebrates. We also identified 34 novel miRNAs that do not align to human or chicken genomes. Five conserved miRNAs showed significant and consistent changes in copy number after song exposure across three biological replications of the song-silence comparison, with two increasing (tgu-miR-25, tgu-miR-192) and three decreasing (tgu-miR-92, tgu-miR-124, tgu-miR-129-5p). We also detected a locus on the Z sex chromosome that produces three different novel miRNAs, with supporting evidence from Northern blot and TaqMan qPCR assays for differential expression in males and females and in response to song playbacks. One of these, tgu-miR-2954-3p, is predicted (by TargetScan) to regulate eight song-responsive mRNAs that all have functions in cellular proliferation and neuronal differentiation. Conclusions: The experience of hearing another bird singing alters the profile of miRNAs in the auditory forebrain of zebra finches. The response involves both known conserved miRNAs and novel miRNAs described so far only in the zebra finch, including a novel sex-linked, song-responsive miRNA. These results indicate that miRNAs are likely to contribute to the unique behavioural biology of learned song communication in songbirds. PMID:21627805
Male song quality modulates c-Fos expression in the auditory forebrain of the female canary
Monbureau, Marie; Barker, Jennifer M.; Leboucher, Gérard; Balthazart, Jacques
2015-01-01
In canaries, specific phrases of male song (sexy songs, SS) that are difficult to produce are especially attractive for females. Females exposed to SS produce more copulation displays and deposit more testosterone into their eggs than females exposed to non-sexy songs (NS). Increased expression of the immediate early genes c-Fos or zenk (a.k.a. egr-1) has been observed in the auditory forebrain of female songbirds hearing attractive songs. C-Fos immunoreactive (Fos-ir) cell numbers were quantified here in the brain of female canaries that had been collected 30 min after they had been exposed for 60 min to the playback of SS or NS or control white noise. Fos-ir cell numbers increased in the caudomedial mesopallium (CMM) and caudomedial nidopallium (NCM) of SS birds as compared to controls. Song playback (pooled SS and NS) also tended to increase average Fos-ir cell numbers in the mediobasal hypothalamus (MBH) but this effect did not reach full statistical significance. At the individual level, Fos expression in CMM was correlated with its expression in NCM and in MBH but also with the frequency of calls that females produced in response to the playbacks. These data thus indicate that male songs of different qualities induce a differential metabolic activation of NCM and CMM. The correlation between activation of auditory regions and of the MBH might reflect the link between auditory stimulation and changes in behavior and reproductive physiology. PMID:25846435
Yoder, Kathleen M.; Vicario, David S.
2012-01-01
Gonadal hormones modulate behavioral responses to sexual stimuli, and communication signals can also modulate circulating hormone levels. In several species, these combined effects appear to underlie a two-way interaction between circulating gonadal hormones and behavioral responses to socially salient stimuli. Recent work in songbirds has shown that manipulating local estradiol levels in the auditory forebrain produces physiological changes that affect discrimination of conspecific vocalizations and can affect behavior. These studies provide new evidence that estrogens can directly alter auditory processing and indirectly alter the behavioral response to a stimulus. These studies show that: 1. Local estradiol action within an auditory area is necessary for socially-relevant sounds to induce normal physiological responses in the brains of both sexes; 2. These physiological effects occur much more quickly than predicted by the classical time-frame for genomic effects; 3. Estradiol action within the auditory forebrain enables behavioral discrimination among socially-relevant sounds in males; and 4. Estradiol is produced locally in the male brain during exposure to particular social interactions. The accumulating evidence suggests a socio-neuro-endocrinology framework in which estradiol is essential to auditory processing, is increased by a socially relevant stimulus, acts rapidly to shape perception of subsequent stimuli experienced during social interactions, and modulates behavioral responses to these stimuli. Brain estrogens are likely to function similarly in both songbird sexes because aromatase and estrogen receptors are present in both male and female forebrain. Estrogenic modulation of perception in songbirds and perhaps other animals could fine-tune male advertising signals and female ability to discriminate them, facilitating mate selection by modulating behaviors. PMID:22201281
Estradiol selectively enhances auditory function in avian forebrain neurons
Caras, Melissa L.; O’Brien, Matthew; Brenowitz, Eliot A.; Rubel, Edwin W
2012-01-01
Sex steroids modulate vertebrate sensory processing, but the impact of circulating hormone levels on forebrain function remains unclear. We tested the hypothesis that circulating sex steroids modulate single-unit responses in the avian telencephalic auditory nucleus, field L. We mimicked breeding or non-breeding conditions by manipulating plasma 17β-estradiol levels in wild-caught female Gambel’s white-crowned sparrows (Zonotrichia leucophrys gambelii). Extracellular responses of single neurons to tones and conspecific songs presented over a range of intensities revealed that estradiol selectively enhanced auditory function in cells that exhibited monotonic rate-level functions to pure tones. In these cells, estradiol treatment increased spontaneous and maximum evoked firing rates, increased pure tone response strengths and sensitivity, and expanded the range of intensities over which conspecific song stimuli elicited significant responses. Estradiol did not significantly alter the sensitivity or dynamic ranges of cells that exhibited non-monotonic rate-level functions. Notably, there was a robust correlation between plasma estradiol concentrations in individual birds and physiological response properties in monotonic, but not non-monotonic neurons. These findings demonstrate that functionally distinct classes of anatomically overlapping forebrain neurons are differentially regulated by sex steroid hormones in a dose-dependent manner. PMID:23223283
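The monotonic/non-monotonic distinction that organizes these results can be made concrete with a small sketch. The numbers and threshold below are illustrative assumptions, not the study's criterion: a simple monotonicity index, the firing rate at the highest sound level divided by the peak rate across levels, is one common way to separate the two cell classes.

```python
import numpy as np

levels = np.arange(20, 90, 10)   # sound levels in dB SPL (assumed)

# Hypothetical rate-level functions (spikes/s) for two example cells
monotonic_rates = np.array([5.0, 9.0, 15.0, 22.0, 30.0, 38.0, 45.0])
nonmonotonic_rates = np.array([5.0, 15.0, 30.0, 42.0, 35.0, 22.0, 12.0])

def monotonicity_index(rates):
    """Rate at the highest level relative to the peak rate; values near 1
    indicate a monotonic rate-level function, small values a non-monotonic
    (peaked) one."""
    return rates[-1] / rates.max()

mi_mono = monotonicity_index(monotonic_rates)       # 1.0
mi_non = monotonicity_index(nonmonotonic_rates)     # ~0.29
```

A threshold (e.g., index below 0.5 counts as non-monotonic) then partitions a population into the two classes whose estradiol sensitivity the abstract compares.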
Pinaud, Raphael; Terleph, Thomas A.; Tremere, Liisa A.; Phan, Mimi L.; Dagostin, André A.; Leão, Ricardo M.; Mello, Claudio V.; Vicario, David S.
2008-01-01
The role of GABA in the central processing of complex auditory signals is not fully understood. We have studied the involvement of GABAA-mediated inhibition in the processing of birdsong, a learned vocal communication signal requiring intact hearing for its development and maintenance. We focused on caudomedial nidopallium (NCM), an area analogous to parts of the mammalian auditory cortex with selective responses to birdsong. We present evidence that GABAA-mediated inhibition plays a pronounced role in NCM's auditory processing of birdsong. Using immunocytochemistry, we show that approximately half of NCM's neurons are GABAergic. Whole cell patch-clamp recordings in a slice preparation demonstrate that, at rest, spontaneously active GABAergic synapses inhibit excitatory inputs onto NCM neurons via GABAA receptors. Multi-electrode electrophysiological recordings in awake birds show that local blockade of GABAA-mediated inhibition in NCM markedly affects the temporal pattern of song-evoked responses in NCM without modifications in frequency tuning. Surprisingly, this blockade increases the phasic and largely suppresses the tonic response component, reflecting dynamic relationships of inhibitory networks that could include disinhibition. Thus processing of learned natural communication sounds in songbirds, and possibly other vocal learners, may depend on complex interactions of inhibitory networks. PMID:18480371
Vicario, David S.
2017-01-01
Sensory and motor brain structures work in collaboration during perception. To evaluate their respective contributions, the present study recorded neural responses to auditory stimulation at multiple sites simultaneously in both the higher-order auditory area NCM and the premotor area HVC of the songbird brain in awake zebra finches (Taeniopygia guttata). Bird’s own song (BOS) and various conspecific songs (CON) were presented in both blocked and shuffled sequences. Neural responses showed plasticity in the form of stimulus-specific adaptation, with markedly different dynamics between the two structures. In NCM, the response decrease with repetition of each stimulus was gradual and long-lasting and did not differ between the stimuli or the stimulus presentation sequences. In contrast, HVC responses to CON stimuli decreased much more rapidly in the blocked than in the shuffled sequence. Furthermore, this decrease was more transient in HVC than in NCM, as shown by differential dynamics in the shuffled sequence. Responses to BOS in HVC decreased more gradually than to CON stimuli. The quality of neural representations, computed as the mutual information between stimuli and neural activity, was higher in NCM than in HVC. Conversely, internal functional correlations, estimated as the coherence between recording sites, were greater in HVC than in NCM. The cross-coherence between the two structures was weak and limited to low frequencies. These findings suggest that auditory communication signals are processed according to very different but complementary principles in NCM and HVC, a contrast that may inform study of the auditory and motor pathways for human speech processing. NEW & NOTEWORTHY Neural responses to auditory stimulation in sensory area NCM and premotor area HVC of the songbird forebrain show plasticity in the form of stimulus-specific adaptation with markedly different dynamics. 
These two structures also differ in stimulus representations and internal functional correlations. Accordingly, NCM seems to process the individually specific complex vocalizations of others based on prior familiarity, while HVC responses appear to be modulated by transitions and/or timing in the ongoing sequence of sounds. PMID:28031398
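The mutual-information comparison between NCM and HVC can be illustrated with a toy calculation. This is a hypothetical sketch with simulated Poisson spike counts, not the study's analysis pipeline: the plug-in estimate I(S;R) = H(R) - H(R|S) for a small discrete stimulus set.

```python
import numpy as np

rng = np.random.default_rng(2)

stimuli = rng.integers(0, 4, 5000)            # 4 song stimuli, many trials
rates = np.array([2.0, 5.0, 9.0, 14.0])       # assumed mean spike counts
counts = np.clip(rng.poisson(rates[stimuli]), 0, 24)  # fixed response alphabet

def entropy(p):
    """Shannon entropy (bits) of a probability vector, ignoring zeros."""
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# H(R): entropy of the marginal response distribution
pr = np.bincount(counts, minlength=25) / counts.size
h_r = entropy(pr)

# H(R|S): response entropy within each stimulus, weighted by P(S)
h_r_given_s = 0.0
for s in range(4):
    sel = counts[stimuli == s]
    ps = np.bincount(sel, minlength=25) / sel.size
    h_r_given_s += (sel.size / counts.size) * entropy(ps)

mi = h_r - h_r_given_s   # bits per response; bounded above by H(S) = 2 bits
```

A higher `mi` in one area (as reported for NCM relative to HVC) means its responses distinguish the stimuli better; in practice such plug-in estimates are also bias-corrected, which this sketch omits.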
Endoplasmic Reticulum Stress as a Mediator of Neurotoxin-Induced Dopamine Neuron Death
2006-07-01
…reversible reduction in choline acetyltransferase concentration in rat hypoglossal nucleus after hypoglossal nerve transection. Nature 275, 324–325 … analogs were evaluated for their ability to enhance choline acetyltransferase (ChAT) activity in embryonic rat spinal cord and basal forebrain … of ibotenate, CEP1347 protected basal forebrain cholinergic neurons. In a model of apoptosis induced in auditory hair cells by noise trauma, CEP1347 …
Mechanisms of spectral and temporal integration in the mustached bat inferior colliculus
Wenstrup, Jeffrey James; Nataraj, Kiran; Sanchez, Jason Tait
2012-01-01
This review describes mechanisms and circuitry underlying combination-sensitive response properties in the auditory brainstem and midbrain. Combination-sensitive neurons, performing a type of auditory spectro-temporal integration, respond to specific, properly timed combinations of spectral elements in vocal signals and other acoustic stimuli. While these neurons are known to occur in the auditory forebrain of many vertebrate species, the work described here establishes their origin in the auditory brainstem and midbrain. Focusing on the mustached bat, we review several major findings: (1) Combination-sensitive responses involve facilitatory interactions, inhibitory interactions, or both when activated by distinct spectral elements in complex sounds. (2) Combination-sensitive responses are created in distinct stages: inhibition arises mainly in lateral lemniscal nuclei of the auditory brainstem, while facilitation arises in the inferior colliculus (IC) of the midbrain. (3) Spectral integration underlying combination-sensitive responses requires a low-frequency input tuned well below a neuron's characteristic frequency (ChF). Low-ChF neurons in the auditory brainstem project to high-ChF regions in brainstem or IC to create combination sensitivity. (4) At their sites of origin, both facilitatory and inhibitory combination-sensitive interactions depend on glycinergic inputs and are eliminated by glycine receptor blockade. Surprisingly, facilitatory interactions in IC depend almost exclusively on glycinergic inputs and are largely independent of glutamatergic and GABAergic inputs. (5) The medial nucleus of the trapezoid body (MNTB), the lateral lemniscal nuclei, and the IC play critical roles in creating combination-sensitive responses. We propose that these mechanisms, based on work in the mustached bat, apply to a broad range of mammals and other vertebrates that depend on temporally sensitive integration of information across the audible spectrum. PMID:23109917
How the songbird brain listens to its own songs
Hahnloser, Richard
2010-03-01
Songbirds are capable of vocal learning and communication and are ideally suited to the study of neural mechanisms of auditory feedback processing. When a songbird is deafened in the early sensorimotor phase after tutoring, it fails to imitate the song of its tutor and develops a highly aberrant song. It is also known that birds are capable of storing a long-term memory of tutor song and that they need intact auditory feedback to match their own vocalizations to the tutor's song. Based on these behavioral observations, we investigate feedback processing in single auditory forebrain neurons of juvenile zebra finches that are in a late developmental stage of song learning. We implant birds with miniature motorized microdrives that allow us to record the electrical activity of single neurons while birds are freely moving and singing in their cages. Occasionally, we deliver a brief sound through a loudspeaker to perturb the auditory feedback the bird experiences during singing. These acoustic perturbations of auditory feedback reveal complex sensitivity that cannot be predicted from passive playback responses. Some neurons are highly feedback sensitive in that they respond vigorously to song perturbations, but not to unperturbed songs or perturbed playback. These findings suggest that a computational function of forebrain auditory areas may be to detect errors between actual feedback and mirrored feedback deriving from an internal model of the bird's own song or that of its tutor.
Acoustic imprinting leads to differential 2-deoxy-D-glucose uptake in the chick forebrain.
Maier, V; Scheich, H
1983-01-01
This report describes experiments in which successful acoustic imprinting correlates with differential uptake of D-2-deoxy[14C]glucose in particular forebrain areas that are not considered primarily auditory. Newly hatched guinea fowl chicks (Numida meleagris meleagris) were imprinted by playing 1.8-kHz or 2.5-kHz tone bursts for prolonged periods. Chicks were considered imprinted if they approached the imprinting stimulus (emitted from a loudspeaker) and preferred it over a novel stimulus in a simultaneous discrimination test. In the 2-deoxy-D-glucose experiment all chicks, imprinted and naive, were exposed to 1.8-kHz tone bursts for 1 hr. As shown by autoradiographic analysis of the brains, neurons in the 1.8-kHz isofrequency plane of the auditory "cortex" (field L) were activated in all chicks, whether imprinted or not. In the most rostral forebrain, however, striking differences were found. Imprinted chicks showed increased 2-deoxy-D-glucose uptake in three areas compared to naive chicks: (i) the lateral neostriatum and hyperstriatum ventrale, (ii) a medial magnocellular field (medial neostriatum/hyperstriatum ventrale), and (iii) the most dorsal layers of the hyperstriatum. Based on these findings we conclude that these areas are involved in the processing of auditory stimuli once those stimuli have become meaningful through experience. PMID:6574519
Song decrystallization in adult zebra finches does not require the song nucleus NIf.
Roy, Arani; Mooney, Richard
2009-08-01
In adult male zebra finches, transecting the vocal nerve causes previously stable (i.e., crystallized) song to slowly degrade, presumably because of the resulting distortion in auditory feedback. How and where distorted feedback interacts with song motor networks to induce this process of song decrystallization remains unknown. The song premotor nucleus HVC is a potential site where auditory feedback signals could interact with song motor commands. Although the forebrain nucleus interface of the nidopallium (NIf) appears to be the primary auditory input to HVC, NIf lesions made in adult zebra finches do not trigger song decrystallization. One possibility is that NIf lesions do not interfere with song maintenance, but do compromise the adult zebra finch's ability to express renewed vocal plasticity in response to feedback perturbations. To test this idea, we bilaterally lesioned NIf and then transected the vocal nerve in adult male zebra finches. We found that bilateral NIf lesions did not prevent nerve section-induced song decrystallization. To test the extent to which the NIf lesions disrupted auditory processing in the song system, we made in vivo extracellular recordings in HVC and a downstream anterior forebrain pathway (AFP) in NIf-lesioned birds. We found strong and selective auditory responses to the playback of the birds' own song persisted in HVC and the AFP following NIf lesions. These findings suggest that auditory inputs to the song system other than NIf, such as the caudal mesopallium, could act as a source of auditory feedback signals to the song motor network.
Beckers, Gabriël J L; Gahr, Manfred
2012-08-01
Auditory systems bias responses to sounds that are unexpected on the basis of recent stimulus history, a phenomenon that has been widely studied using sequences of unmodulated tones (mismatch negativity; stimulus-specific adaptation). Such a paradigm, however, does not directly reflect problems that neural systems normally solve for adaptive behavior. We recorded multiunit responses in the caudomedial auditory forebrain of anesthetized zebra finches (Taeniopygia guttata) at 32 sites simultaneously, to contact calls that recur probabilistically at a rate that is used in communication. Neurons in secondary, but not primary, auditory areas respond preferentially to calls when they are unexpected (deviant) compared with the same calls when they are expected (standard). This response bias is predominantly due to sites more often not responding to standard events than to deviant events. When two call stimuli alternate between standard and deviant roles, most sites exhibit a response bias to deviant events of both stimuli. This suggests that biases are not based on a use-dependent decrease in response strength but involve a more complex mechanism that is sensitive to auditory deviance per se. Furthermore, between many secondary sites, responses are tightly synchronized, a phenomenon that is driven by internal neuronal interactions rather than by the timing of stimulus acoustic features. We hypothesize that this deviance-sensitive, internally synchronized network of neurons is involved in the involuntary capturing of attention by unexpected and behaviorally potentially relevant events in natural auditory scenes.
Tellers, Philipp; Lehmann, Jessica; Führ, Hartmut; Wagner, Hermann
2017-09-01
Birds and mammals use the interaural time difference (ITD) for azimuthal sound localization. While barn owls can use the ITD of the stimulus carrier frequency over nearly their entire hearing range, mammals have to utilize the ITD of the stimulus envelope to extend the upper frequency limit of ITD-based sound localization. ITD is computed and processed in a dedicated neural circuit that consists of two pathways. In the barn owl, ITD representation is more complex in the forebrain than in the midbrain pathway because of the combination of two inputs that represent different ITDs. We speculated that one of the two inputs includes an envelope contribution. To estimate the envelope contribution, we recorded ITD response functions for correlated and anticorrelated noise stimuli in the barn owl's auditory arcopallium. Our findings indicate that barn owls, like mammals, represent both carrier and envelope ITDs of overlapping frequency ranges, supporting the hypothesis that carrier and envelope ITD-based localization are complementary beyond a mere extension of the upper frequency limit. NEW & NOTEWORTHY The results presented in this study show for the first time that the barn owl is able to extract and represent the interaural time difference (ITD) information conveyed by the envelope of a broadband acoustic signal. Like mammals, the barn owl extracts the ITD of the envelope and the carrier of a signal from the same frequency range. These results are of general interest, since they reinforce a trend found in neural signal processing across different species.
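The logic of the anticorrelated-noise stimulus can be illustrated compactly: inverting a broadband waveform flips its carrier but leaves its amplitude envelope unchanged, so a delay that is still recoverable from the envelopes of anticorrelated signals reflects envelope ITD. The NumPy sketch below is only an illustration of that principle, not the authors' analysis code; the function names (`envelope`, `best_lag`), the FFT-based Hilbert construction, and the sample-based delay are our own assumptions.

```python
import numpy as np

def envelope(x):
    """Amplitude envelope of x via an FFT-based analytic signal (Hilbert transform)."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.abs(np.fft.ifft(np.fft.fft(x) * h))

def best_lag(a, b, max_lag):
    """Shift of b relative to a (in samples) that maximizes their cross-correlation."""
    lags = np.arange(-max_lag, max_lag + 1)
    cc = [np.dot(a[max_lag:-max_lag], np.roll(b, -k)[max_lag:-max_lag]) for k in lags]
    return int(lags[np.argmax(cc)])

# Amplitude-modulated noise; the "anticorrelated" copy is inverted and delayed,
# so its carrier is flipped while its envelope is intact.
rng = np.random.default_rng(1)
n = 2048
t = np.arange(n)
signal = (1.0 + 0.8 * np.sin(2 * np.pi * 4 * t / n)) * rng.standard_normal(n)
delayed_anti = -np.roll(signal, 9)

# The envelope cross-correlation still reveals the 9-sample delay.
print(best_lag(envelope(signal), envelope(delayed_anti), 20))  # 9
```

In a physiological ITD curve the analogous signature is that anticorrelated noise inverts or abolishes carrier-driven ITD tuning while envelope-driven tuning survives.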
Thalamic and cortical pathways supporting auditory processing
Lee, Charles C.
2012-01-01
The neural processing of auditory information engages pathways that begin initially at the cochlea and that eventually reach forebrain structures. At these higher levels, the computations necessary for extracting auditory source and identity information rely on the neuroanatomical connections between the thalamus and cortex. Here, the general organization of these connections in the medial geniculate body (thalamus) and the auditory cortex is reviewed. In addition, we consider two models organizing the thalamocortical pathways of the non-tonotopic and multimodal auditory nuclei. Overall, the transfer of information to the cortex via the thalamocortical pathways is complemented by the numerous intracortical and corticocortical pathways. Although interrelated, the convergent interactions among thalamocortical, corticocortical, and commissural pathways enable the computations necessary for the emergence of higher auditory perception. PMID:22728130
Developmental Experience Alters Information Coding in Auditory Midbrain and Forebrain Neurons
Woolley, Sarah M. N.; Hauber, Mark E.; Theunissen, Frederic E.
2010-01-01
In songbirds, species identity and developmental experience shape vocal behavior and behavioral responses to vocalizations. The interaction of species identity and developmental experience may also shape the coding properties of sensory neurons. We tested whether responses of auditory midbrain and forebrain neurons to songs differed between species and between groups of conspecific birds with different developmental exposure to song. We also compared responses of individual neurons to conspecific and heterospecific songs. Zebra and Bengalese finches that were raised and tutored by conspecific birds, and zebra finches that were cross-tutored by Bengalese finches were studied. Single-unit responses to zebra and Bengalese finch songs were recorded and analyzed by calculating mutual information, response reliability, mean spike rate, fluctuations in time-varying spike rate, distributions of time-varying spike rates, and neural discrimination of individual songs. Mutual information quantifies a response’s capacity to encode information about a stimulus. In midbrain and forebrain neurons, mutual information was significantly higher in normal zebra finch neurons than in Bengalese finch and cross-tutored zebra finch neurons, but not between Bengalese finch and cross-tutored zebra finch neurons. Information rate differences were largely due to spike rate differences. Mutual information did not differ between responses to conspecific and heterospecific songs. Therefore, neurons from normal zebra finches encoded more information about songs than did neurons from other birds, but conspecific and heterospecific songs were encoded equally. Neural discrimination of songs and mutual information were highly correlated. Results demonstrate that developmental exposure to vocalizations shapes the information coding properties of songbird auditory neurons. PMID:20039264
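Mutual information between stimulus identity and a discretized neural response can be estimated directly from a joint count table. The sketch below shows the plug-in (direct) estimator as a minimal illustration; it is not the estimator used in the study, which presumably applied bias corrections for finite sampling that this sketch omits.

```python
import numpy as np

def mutual_information(counts):
    """Plug-in mutual information (bits) from a joint count table:
    rows = stimuli (e.g., songs), columns = binned response patterns."""
    p = counts / counts.sum()                    # joint probabilities
    ps = p.sum(axis=1, keepdims=True)            # stimulus marginal
    pr = p.sum(axis=0, keepdims=True)            # response marginal
    nz = p > 0                                   # skip zero cells (log 0)
    return float((p[nz] * np.log2(p[nz] / (ps @ pr)[nz])).sum())

# A response that identifies one of two equiprobable stimuli perfectly
# carries 1 bit; a response independent of the stimulus carries 0 bits.
print(mutual_information(np.array([[10, 0], [0, 10]])))  # 1.0
print(mutual_information(np.array([[5, 5], [5, 5]])))    # 0.0
```

Under this measure, the species and tutoring differences reported above would appear as differences in bits per response across neuron populations.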
Cardin, Jessica A; Raksin, Jonathan N; Schmidt, Marc F
2005-04-01
Sensorimotor integration in the avian song system is crucial for both learning and maintenance of song, a vocal motor behavior. Although a number of song system areas demonstrate both sensory and motor characteristics, their exact roles in auditory and premotor processing are unclear. In particular, it is unknown whether input from the forebrain nucleus interface of the nidopallium (NIf), which exhibits both sensory and premotor activity, is necessary for both auditory and premotor processing in its target, HVC. Here we show that bilateral NIf lesions result in long-term loss of HVC auditory activity but do not impair song production. NIf is thus a major source of auditory input to HVC, but an intact NIf is not necessary for motor output in adult zebra finches.
Estradiol-dependent modulation of auditory processing and selectivity in songbirds
Maney, Donna; Pinaud, Raphael
2011-01-01
The steroid hormone estradiol plays an important role in reproductive development and behavior and modulates a wide array of physiological and cognitive processes. Recently, reports from several research groups have converged to show that estradiol also powerfully modulates sensory processing, specifically, the physiology of central auditory circuits in songbirds. These investigators have discovered that (1) behaviorally-relevant auditory experience rapidly increases estradiol levels in the auditory forebrain; (2) estradiol instantaneously enhances the responsiveness and coding efficiency of auditory neurons; (3) these changes are mediated by a non-genomic effect of brain-generated estradiol on the strength of inhibitory neurotransmission; and (4) estradiol regulates biochemical cascades that induce the expression of genes involved in synaptic plasticity. Together, these findings have established estradiol as a central regulator of auditory function and intensified the need to consider brain-based mechanisms, in addition to peripheral organ dysfunction, in hearing pathologies associated with estrogen deficiency. PMID:21146556
Shared neural substrates for song discrimination in parental and parasitic songbirds.
Louder, Matthew I M; Voss, Henning U; Manna, Thomas J; Carryl, Sophia S; London, Sarah E; Balakrishnan, Christopher N; Hauber, Mark E
2016-05-27
In many social animals, early exposure to conspecific stimuli is critical for the development of accurate species recognition. Obligate brood parasitic songbirds, however, forego parental care and young are raised by heterospecific hosts in the absence of conspecific stimuli. Having evolved from non-parasitic, parental ancestors, how brood parasites recognize their own species remains unclear. In parental songbirds (e.g. zebra finch Taeniopygia guttata), the primary and secondary auditory forebrain areas are known to be critical in the differential processing of conspecific vs. heterospecific songs. Here we demonstrate that the same auditory brain regions underlie song discrimination in adult brood parasitic pin-tailed whydahs (Vidua macroura), a close relative of the zebra finch lineage. Similar to zebra finches, whydahs showed stronger behavioral responses during conspecific vs. heterospecific song and tone pips as well as increased neural responses within the auditory forebrain, as measured by both functional magnetic resonance imaging (fMRI) and immediate early gene (IEG) expression. Given parallel behavioral and neuroanatomical patterns of song discrimination, our results suggest that the evolutionary transition to brood parasitism from parental songbirds likely involved an "evolutionary tinkering" of existing proximate mechanisms, rather than the wholesale reworking of the neural substrates of species recognition.
The representation of sound localization cues in the barn owl's inferior colliculus
Singheiser, Martin; Gutfreund, Yoram; Wagner, Hermann
2012-01-01
The barn owl is a well-known model system for studying auditory processing and sound localization. This article reviews the morphological and functional organization, as well as the role of the underlying microcircuits, of the barn owl's inferior colliculus (IC). We focus on the processing of frequency and interaural time (ITD) and level differences (ILD). We first summarize the morphology of the sub-nuclei belonging to the IC and their differentiation by antero- and retrograde labeling and by staining with various antibodies. We then focus on the response properties of neurons in the three major sub-nuclei of IC [core of the central nucleus of the IC (ICCc), lateral shell of the central nucleus of the IC (ICCls), and the external nucleus of the IC (ICX)]. ICCc projects to ICCls, which in turn sends its information to ICX. The responses of neurons in ICCc are sensitive to changes in ITD but not to changes in ILD. The distribution of ITD sensitivity with frequency in ICCc can only partly be explained by optimal coding. We continue with the tuning properties of ICCls neurons, the first station in the midbrain where the ITD and ILD pathways merge after they have split at the level of the cochlear nucleus. The ICCc and ICCls share similar ITD and frequency tuning. By contrast, ICCls shows sigmoidal ILD tuning which is absent in ICCc. Both ICCc and ICCls project to the forebrain, and ICCls also projects to ICX, where space-specific neurons are found. Space-specific neurons exhibit side peak suppression in ITD tuning, bell-shaped ILD tuning, and are broadly tuned to frequency. These neurons respond only to restricted positions of auditory space and form a map of two-dimensional auditory space. Finally, we briefly review major IC features, including multiplication-like computations, correlates of echo suppression, plasticity, and adaptation. PMID:22798945
Thode, C; Bock, J; Braun, K; Darlison, M G
2005-01-01
The immediate-early gene zenk (an acronym for the avian orthologue of the mammalian genes zif-268, egr-1, ngfi-a and krox-24) has been extensively employed, in studies on oscine birds, as a marker of neuronal activity to reveal forebrain structures that are involved in the memory processes associated with the acquisition, perception and production of song. Audition-induced expression of this gene, in brain, has also recently been reported for the domestic chicken (Gallus gallus domesticus) and the Japanese quail (Coturnix coturnix japonica). Whilst the anatomical distribution of zenk expression was described for the quail, corresponding data for the chicken were not reported. We have, therefore, used in situ hybridisation to localise the mRNA that encodes the product of the zenk gene (which we call ZENK) within the brain of the 1-day-old chick. We demonstrate that this transcript is present in a number of forebrain structures including the medio-rostral neostriatum/hyperstriatum ventrale (MNH), a region that has been strongly implicated in auditory imprinting (which is a form of recognition memory), and Field L, the avian analog of the mammalian auditory cortex. Because of this pattern of gene expression, we have compared the level of the ZENK mRNA in chicks that have been subjected to a 30-min acoustic imprinting paradigm and in untrained controls. Our results reveal a significant increase (P ≤ 0.05) in the level of the ZENK mRNA in MNH and Field L, and in the two forebrain hemispheres; no increase was seen in the ectostriatum, which is a visual projection area. The data obtained implicate the immediate-early gene, zenk, in auditory imprinting, which is an established model of juvenile learning. In addition, our results indicate that the ZENK mRNA may be used as a molecular marker for MNH, a region that is difficult to anatomically and histochemically delineate.
Bock, Jörg; Braun, Katharina
1999-01-01
Auditory filial imprinting in the domestic chicken is accompanied by a dramatic loss of spine synapses in two higher associative forebrain areas, the mediorostral neostriatum/hyperstriatum ventrale (MNH) and the dorsocaudal neostriatum (Ndc). The cellular mechanisms that underlie this learning-induced synaptic reorganization are unclear. We found that local pharmacological blockade of N-methyl-d-aspartate (NMDA) receptors in the MNH, a manipulation that has been shown previously to impair auditory imprinting, suppresses the learning-induced spine reduction in this region. Chicks treated with the NMDA receptor antagonist 2-amino-5-phosphonovaleric acid (APV) during the behavioral training for imprinting (postnatal day 0–2) displayed similar spine frequencies at postnatal day 7 as naive control animals, which, in both groups, were significantly higher than in imprinted animals. Because the average dendritic length did not differ between the experimental groups, the reduced spine frequency can be interpreted as a reduction of the total number of spine synapses per neuron. In the Ndc, which is reciprocally connected with the MNH and not directly influenced by the injected drug, learning-induced spine elimination was partly suppressed. Spine frequencies of the APV-treated, behaviorally trained but nonimprinted animals were higher than in the imprinted animals but lower than in the naive animals. These results provide evidence that NMDA receptor activation is required for the learning-induced selective reduction of spine synapses, which may serve as a mechanism of information storage specific for juvenile emotional learning events. PMID:10051669
Information flow in the auditory cortical network
Hackett, Troy A.
2011-01-01
Auditory processing in the cerebral cortex is comprised of an interconnected network of auditory and auditory-related areas distributed throughout the forebrain. The nexus of auditory activity is located in temporal cortex among several specialized areas, or fields, that receive dense inputs from the medial geniculate complex. These areas are collectively referred to as auditory cortex. Auditory activity is extended beyond auditory cortex via connections with auditory-related areas elsewhere in the cortex. Within this network, information flows between areas to and from countless targets, but in a manner that is characterized by orderly regional, areal and laminar patterns. These patterns reflect some of the structural constraints that passively govern the flow of information at all levels of the network. In addition, the exchange of information within these circuits is dynamically regulated by intrinsic neurochemical properties of projecting neurons and their targets. This article begins with an overview of the principal circuits and how each is related to information flow along major axes of the network. The discussion then turns to a description of neurochemical gradients along these axes, highlighting recent work on glutamate transporters in the thalamocortical projections to auditory cortex. The article concludes with a brief discussion of relevant neurophysiological findings as they relate to structural gradients in the network. PMID:20116421
Precise auditory-vocal mirroring in neurons for learned vocal communication.
Prather, J F; Peters, S; Nowicki, S; Mooney, R
2008-01-17
Brain mechanisms for communication must establish a correspondence between sensory and motor codes used to represent the signal. One idea is that this correspondence is established at the level of single neurons that are active when the individual performs a particular gesture or observes a similar gesture performed by another individual. Although neurons that display a precise auditory-vocal correspondence could facilitate vocal communication, they have yet to be identified. Here we report that a certain class of neurons in the swamp sparrow forebrain displays a precise auditory-vocal correspondence. We show that these neurons respond in a temporally precise fashion to auditory presentation of certain note sequences in this songbird's repertoire and to similar note sequences in other birds' songs. These neurons display nearly identical patterns of activity when the bird sings the same sequence, and disrupting auditory feedback does not alter this singing-related activity, indicating it is motor in nature. Furthermore, these neurons innervate striatal structures important for song learning, raising the possibility that singing-related activity in these cells is compared to auditory feedback to guide vocal learning.
Gruss, M; Bock, J; Braun, K
2003-11-01
In vivo microdialysis and behavioural studies in the domestic chick have shown that glutamatergic as well as monoaminergic neurotransmission in the medio-rostral neostriatum/hyperstriatum ventrale (MNH) is altered after auditory filial imprinting. In the present study, using pharmaco-behavioural and in vivo microdialysis approaches, the role of dopaminergic neurotransmission in this juvenile learning event was further evaluated. The results revealed that: (i) the systemic application of the potent dopamine receptor antagonist haloperidol (7.5 mg/kg) strongly impairs auditory filial imprinting; (ii) systemic haloperidol induces a tetrodotoxin-sensitive increase of extracellular levels of the dopamine metabolite, homovanillic acid, in the MNH, whereas the levels of glutamate, taurine and the serotonin metabolite, 5-hydroxyindole-3-acetic acid, remain unchanged; (iii) haloperidol (0.01, 0.1, 1 mM) infused locally into the MNH increases glutamate, taurine and 5-hydroxyindole-3-acetic acid levels in a dose-dependent manner, whereas homovanillic acid levels remain unchanged; (iv) systemic haloperidol infusion reinforces the N-methyl-D-aspartate receptor-mediated inhibitory modulation of the dopaminergic neurotransmission within the MNH. These results indicate that the modulation of dopaminergic function and its interaction with other neurotransmitter systems in a higher associative forebrain region of the juvenile avian brain displays neurochemical characteristics similar to those of the adult mammalian prefrontal cortex. Furthermore, we were able to show that the pharmacological manipulation of monoaminergic regulatory mechanisms interferes with learning and memory formation, events which in a similar fashion might occur in young or adult mammals.
Scully, Erin N; Hahn, Allison H; Campbell, Kimberley A; McMillan, Neil; Congdon, Jenna V; Sturdy, Christopher B
2017-07-28
Zebra finches (Taeniopygia guttata) are sexually dimorphic songbirds, not only in appearance but also in vocal production: while males produce both calls and songs, females only produce calls. This dimorphism provides a means to contrast the auditory perception of vocalizations produced by songbird species of varying degrees of relatedness in a dimorphic species to that of a monomorphic species, species in which both males and females produce calls and songs (e.g., black-capped chickadees, Poecile atricapillus). In the current study, we examined neuronal expression after playback of acoustically similar hetero- and conspecific calls produced by species of differing phylogenetic relatedness to our subject species, zebra finch. We measured the immediate early gene (IEG) ZENK in two auditory areas of the forebrain (caudomedial mesopallium, CMM, and caudomedial nidopallium, NCM). We found no significant differences in ZENK expression in either male or female zebra finches regardless of playback condition. We also discuss comparisons between our results and the results of a previous study conducted by Avey et al. [1] on black-capped chickadees that used similar stimulus types. These results are consistent with the previous study which also found no significant differences in expression following playback of calls produced by various heterospecific species and conspecifics [1]. Our results suggest that, similar to black-capped chickadees, IEG expression in zebra finch CMM and NCM is tied to the acoustic similarity of vocalizations and not the phylogenetic relatedness of the species producing the vocalizations.
Mohr, Robert A; Chang, Yiran; Bhandiwad, Ashwin A; Forlano, Paul M; Sisneros, Joseph A
2018-01-01
While the peripheral auditory system of fish has been well studied, less is known about how the fish's brain and central auditory system process complex social acoustic signals. The plainfin midshipman fish, Porichthys notatus, has become a good species for investigating the neural basis of acoustic communication because the production and reception of acoustic signals is paramount for this species' reproductive success. Nesting males produce long-duration advertisement calls that females detect and localize among the noise in the intertidal zone to successfully find mates and spawn. How female midshipman are able to discriminate male advertisement calls from environmental noise and other acoustic stimuli is unknown. Using the immediate early gene product cFos as a marker for neural activity, we quantified neural activation of the ascending auditory pathway in female midshipman exposed to conspecific advertisement calls, heterospecific white seabass calls, or ambient environment noise. We hypothesized that auditory hindbrain nuclei would be activated by general acoustic stimuli (ambient noise and other biotic acoustic stimuli) whereas auditory neurons in the midbrain and forebrain would be selectively activated by conspecific advertisement calls. We show that neural activation in two regions of the auditory hindbrain, i.e., the rostral intermediate division of the descending octaval nucleus and the ventral division of the secondary octaval nucleus, did not differ via cFos immunoreactive (cFos-ir) activity when exposed to different acoustic stimuli. In contrast, female midshipman exposed to conspecific advertisement calls showed greater cFos-ir in the nucleus centralis of the midbrain torus semicircularis compared to fish exposed only to ambient noise. No difference in cFos-ir was observed in the torus semicircularis of animals exposed to conspecific versus heterospecific calls. However, cFos-ir was greater in two forebrain structures that receive auditory input, i.e., the central posterior nucleus of the thalamus and the anterior tuberal hypothalamus, when exposed to conspecific calls versus either ambient noise or heterospecific calls. Our results suggest that higher-order neurons in the female midshipman midbrain torus semicircularis, thalamic central posterior nucleus, and hypothalamic anterior tuberal nucleus may be necessary for the discrimination of complex social acoustic signals. Furthermore, neurons in the central posterior and anterior tuberal nuclei are differentially activated by exposure to conspecific versus other acoustic stimuli.
Forlano, Paul M; Marchaterre, Margaret; Deitcher, David L; Bass, Andrew H
2010-02-15
Across all major vertebrate groups, androgen receptors (ARs) have been identified in neural circuits that shape reproductive-related behaviors, including vocalization. The vocal control network of teleost fishes presents an archetypal example of how a vertebrate nervous system produces social, context-dependent sounds. We cloned a partial cDNA of AR that was used to generate specific probes to localize AR expression throughout the central nervous system of the vocal plainfin midshipman fish (Porichthys notatus). In the forebrain, AR mRNA is abundant in proposed homologs of the mammalian striatum and amygdala, and in anterior and posterior parvocellular and magnocellular nuclei of the preoptic area, nucleus preglomerulosus, and posterior, ventral and anterior tuberal nuclei of the hypothalamus. Many of these nuclei are part of the known vocal and auditory circuitry in midshipman. The midbrain periaqueductal gray, an essential link between forebrain and hindbrain vocal circuitry, and the lateral line recipient nucleus medialis in the rostral hindbrain also express abundant AR mRNA. In the caudal hindbrain-spinal vocal circuit, high AR mRNA is found in the vocal prepacemaker nucleus and along the dorsal periphery of the vocal motor nucleus congruent with the known pattern of expression of aromatase-containing glial cells. Additionally, abundant AR mRNA expression is shown for the first time in the inner ear of a vertebrate. The distribution of AR mRNA strongly supports the role of androgens as modulators of behaviorally defined vocal, auditory, and neuroendocrine circuits in teleost fish and vertebrates in general.
Hackett, Troy A; Clause, Amanda R; Takahata, Toru; Hackett, Nicholas J; Polley, Daniel B
2016-06-01
Vesicular transporter proteins are an essential component of the presynaptic machinery that regulates neurotransmitter storage and release. They also provide a key point of control for homeostatic signaling pathways that maintain balanced excitation and inhibition following changes in activity levels, including the onset of sensory experience. To advance understanding of their roles in the developing auditory forebrain, we tracked the expression of the vesicular transporters of glutamate (VGluT1, VGluT2) and GABA (VGAT) in primary auditory cortex (A1) and medial geniculate body (MGB) of developing mice (P7, P11, P14, P21, adult) before and after ear canal opening (~P11-P13). RNA sequencing, in situ hybridization, and immunohistochemistry were combined to track changes in transporter expression and document regional patterns of transcript and protein localization. Overall, vesicular transporter expression changed the most between P7 and P21. The expression patterns and maturational trajectories of each marker varied by brain region, cortical layer, and MGB subdivision. VGluT1 expression was highest in A1, moderate in MGB, and increased with age in both regions. VGluT2 mRNA levels were low in A1 at all ages, but high in MGB, where adult levels were reached by P14. VGluT2 immunoreactivity was prominent in both regions. VGluT1 (+) and VGluT2 (+) transcripts were co-expressed in MGB and A1 somata, but co-localization of immunoreactive puncta was not detected. In A1, VGAT mRNA levels were relatively stable from P7 to adult, while immunoreactivity increased steadily. VGAT (+) transcripts were rare in MGB neurons, whereas VGAT immunoreactivity was robust at all ages. Morphological changes in immunoreactive puncta were found in two regions after ear canal opening. In the ventral MGB, a decrease in VGluT2 puncta density was accompanied by an increase in puncta size. In A1, perisomatic VGAT and VGluT1 terminals became prominent around the neuronal somata. 
Overall, the observed changes in gene and protein expression, regional architecture, and morphology relate to, and to some extent may enable, the emergence of mature sound-evoked activity patterns. In that regard, the findings of this study expand our understanding of the presynaptic mechanisms that regulate critical period formation associated with experience-dependent refinement of sound processing in auditory forebrain circuits.
Obál, F; Benedek, G; Szikszay, M; Obál, F
1979-01-01
A study was made of the effects of high mesencephalic transection (cerveau isolé) and low doses of pentobarbital on the cortical synchronizations elicited in acute immobilized cats by (a) low-frequency stimulation of the lateral hypothalamus (HL) and nucleus ventralis anterior thalami (VA) and (b) low- and high-frequency stimulation of the laterobasal preoptic region (RPO) and olfactory tubercle (TbOf). The results were as follows: (1) The synchronizations induced by basal forebrain stimulation survived in acute cerveau isolé cats; moreover, a facilitation of the synchronizing effect was observed. (2) A gradual facilitation was observed upon TbOf and RPO stimulation, while in the case of VA and HL stimulation the facilitation appeared immediately after the transection. (3) Low doses of pentobarbital depressed the cortical effects of TbOf stimulation, while an increase in the synchronizing effect of low-frequency VA and HL stimulation was found. These observations suggest that (i) the synchronizing mechanism in the ventral part of the basal forebrain (RPO and TbOf) differs from that of the thalamus and HL; (ii) the basal forebrain synchronizing mechanism is effective without the contribution of the brain stem; and (iii) the mechanism responsible for the synchronizing effect of low-frequency HL stimulation is similar to that described for the thalamus.
Chaves-Coira, Irene; Barros-Zulaica, Natali; Rodrigo-Angulo, Margarita; Núñez, Ángel
2016-01-01
Neocortical cholinergic activity plays a fundamental role in sensory processing and cognitive functions. Previous results have suggested a refined anatomical and functional topographical organization of basal forebrain (BF) projections that may control cortical sensory processing in a specific manner. We have used retrograde anatomical procedures to demonstrate the existence of specific neuronal groups in the BF involved in the control of specific sensory cortices. Fluoro-Gold (FlGo) and Fast Blue (FB) fluorescent retrograde tracers were deposited into the primary somatosensory (S1) and primary auditory (A1) cortices in mice. Our results revealed that the BF is a heterogeneous area in which neurons projecting to different cortical areas are segregated into different neuronal groups. Most of the neurons located in the horizontal limb of the diagonal band of Broca (HDB) projected to the S1 cortex, indicating that this area is specialized in the sensory processing of tactile stimuli. However, the nucleus basalis magnocellularis (B) contains a similar number of cells projecting to the S1 and A1 cortices. In addition, we analyzed the cholinergic effects on the S1 and A1 cortical sensory responses by optogenetic stimulation of the BF neurons in urethane-anesthetized transgenic mice. We used transgenic mice expressing the light-activated cation channel, channelrhodopsin-2, tagged with a fluorescent protein (ChR2-YFP) under the control of the choline acetyltransferase (ChAT) promoter. Cortical evoked potentials were induced by whisker deflections or by auditory clicks. Consistent with the anatomical results, optogenetic HDB stimulation induced more extensive facilitation of tactile evoked potentials in S1 than of auditory evoked potentials in A1, while optogenetic stimulation of the B nucleus facilitated tactile and auditory evoked potentials equally. 
Consequently, our results suggest that cholinergic projections to the cortex are organized into segregated pools of neurons that may modulate specific cortical areas.
Calcium Imaging of Basal Forebrain Activity during Innate and Learned Behaviors
Harrison, Thomas C.; Pinto, Lucas; Brock, Julien R.; Dan, Yang
2016-01-01
The basal forebrain (BF) plays crucial roles in arousal, attention, and memory, and its impairment is associated with a variety of cognitive deficits. The BF consists of cholinergic, GABAergic, and glutamatergic neurons. Electrical or optogenetic stimulation of BF cholinergic neurons enhances cortical processing and behavioral performance, but the natural activity of these cells during behavior is only beginning to be characterized. Even less is known about GABAergic and glutamatergic neurons. Here, we performed microendoscopic calcium imaging of BF neurons as mice engaged in spontaneous behaviors in their home cages (innate) or performed a go/no-go auditory discrimination task (learned). Cholinergic neurons were consistently excited during movement, including running and licking, but GABAergic and glutamatergic neurons exhibited diverse responses. All cell types were activated by overt punishment, either inside or outside of the discrimination task. These findings reveal functional similarities and distinctions between BF cell types during both spontaneous and task-related behaviors.
Arriaga, Gustavo; Zhou, Eric P.; Jarvis, Erich D.
2012-01-01
Humans and song-learning birds communicate acoustically using learned vocalizations. The characteristic features of this social communication behavior include vocal control by forebrain motor areas, a direct cortical projection to brainstem vocal motor neurons, and dependence on auditory feedback to develop and maintain learned vocalizations. These features have so far not been found in closely related primate and avian species that do not learn vocalizations. Male mice produce courtship ultrasonic vocalizations with acoustic features similar to songs of song-learning birds. However, it is assumed that mice lack a forebrain system for vocal modification and that their ultrasonic vocalizations are innate. Here we investigated the mouse song system and discovered that it includes a motor cortex region that is active during singing, projects directly to brainstem vocal motor neurons, and is necessary for keeping song more stereotyped and on pitch. We also discovered that male mice depend on auditory feedback to maintain some ultrasonic song features, and that sub-strains with differences in their songs can match each other's pitch when cross-housed under competitive social conditions. We conclude that male mice have some limited vocal modification abilities with at least some neuroanatomical features thought to be unique to humans and song-learning birds. To explain our findings, we propose a continuum hypothesis of vocal learning.
Local inhibition modulates learning-dependent song encoding in the songbird auditory cortex
Thompson, Jason V.; Jeanne, James M.
2013-01-01
Changes in inhibition during development are well documented, but the role of inhibition in adult learning-related plasticity is not understood. In songbirds, vocal recognition learning alters the neural representation of songs across the auditory forebrain, including the caudomedial nidopallium (NCM), a region analogous to mammalian secondary auditory cortices. Here, we block local inhibition with the iontophoretic application of gabazine, while simultaneously measuring song-evoked spiking activity in NCM of European starlings trained to recognize sets of conspecific songs. We find that local inhibition differentially suppresses the responses to learned and unfamiliar songs and enhances spike-rate differences between learned categories of songs. These learning-dependent response patterns emerge, in part, through inhibitory modulation of selectivity for song components and the masking of responses to specific acoustic features without altering spectrotemporal tuning. The results describe a novel form of inhibitory modulation of the encoding of learned categories and demonstrate that inhibition plays a central role in shaping the responses of neurons to learned, natural signals.
Simmons, J M; Ackermann, R F; Gallistel, C R
1998-10-15
Lesions in the medial forebrain bundle rostral to a stimulating electrode have variable effects on the rewarding efficacy of self-stimulation. We attempted to account for this variability by measuring the anatomical and functional effects of electrolytic lesions at the level of the lateral hypothalamus (LH) and by correlating these effects to postlesion changes in threshold pulse frequency (pps) for self-stimulation in the ventral tegmental area (VTA). We implanted True Blue in the VTA and compared cell labeling patterns in forebrain regions of intact and lesioned animals. We also compared stimulation-induced regional [14C]deoxyglucose (DG) accumulation patterns in the forebrains of intact and lesioned animals. As expected, postlesion threshold shifts varied: threshold pps remained the same or decreased in eight animals, increased by small but significant amounts in three rats, and increased substantially in six subjects. Unexpectedly, LH lesions did not anatomically or functionally disconnect all forebrain nuclei from the VTA. Most septal and preoptic regions contained equivalent levels of True Blue label in intact and lesioned animals. In both intact and lesioned groups, VTA stimulation increased metabolic activity in the fundus of the striatum (FS), the nucleus of the diagonal band, and the medial preoptic area. On the other hand, True Blue labeling demonstrated anatomical disconnection of the accumbens, FS, substantia innominata/magnocellular preoptic nucleus (SI/MA), and bed nucleus of the stria terminalis. [14C]DG autoradiography indicated functional disconnection of the lateral preoptic area and SI/MA. Correlations between patterns of True Blue labeling or [14C]deoxyglucose accumulation and postlesion shifts in threshold pulse frequency were weak and generally negative. These direct measures of connectivity concord with the behavioral measures in suggesting a diffuse net-like connection between forebrain nuclei and the VTA.
[Auditory processing and high frequency audiometry in students of São Paulo].
Ramos, Cristina Silveira; Pereira, Liliane Desgualdo
2005-01-01
Auditory processing and auditory sensitivity to high-frequency sounds. To characterize sound localization, temporal ordering, auditory pattern recognition, and the detection of high-frequency sounds, looking for possible relations between these factors. 32 normal-hearing fourth-grade students, born in the city of São Paulo, underwent: a simplified evaluation of auditory processing; the duration pattern test; and high-frequency audiometry. Three (9.4%) individuals presented an auditory processing disorder (APD), and in one of them this coexisted with lowered hearing thresholds in high-frequency audiometry. APD associated with a loss of auditory sensitivity at high frequencies should be further investigated.
A frontal cortex event-related potential driven by the basal forebrain
Nguyen, David P; Lin, Shih-Chieh
2014-01-01
Event-related potentials (ERPs) are widely used in both healthy and neuropsychiatric conditions as physiological indices of cognitive functions. Contrary to the common belief that cognitive ERPs are generated by local activity within the cerebral cortex, here we show that an attention-related ERP in the frontal cortex is correlated with, and likely generated by, subcortical inputs from the basal forebrain (BF). In rats performing an auditory oddball task, both the amplitude and timing of the frontal ERP were coupled with BF neuronal activity in single trials. The local field potentials (LFPs) associated with the frontal ERP, concentrated in deep cortical layers corresponding to the zone of BF input, were similarly coupled with BF activity and consistently triggered by BF electrical stimulation within 5–10 msec. These results highlight the important and previously unrecognized role of long-range subcortical inputs from the BF in the generation of cognitive ERPs. DOI: http://dx.doi.org/10.7554/eLife.02148.001
Baltus, Alina; Herrmann, Christoph Siegfried
2016-06-01
Oscillatory EEG activity in the human brain with frequencies in the gamma range (approximately 30–80 Hz) is known to be relevant for a large number of cognitive processes. Interestingly, each subject reveals an individual frequency of the auditory gamma-band response (GBR) that coincides with the peak in the auditory steady state response (ASSR). A common resonance frequency of auditory cortex seems to underlie both the individual frequency of the GBR and the peak of the ASSR. This review sheds light on the functional role of oscillatory gamma activity for auditory processing. For successful processing, the auditory system has to track changes in auditory input over time and store information about past events in memory, which allows the construction of auditory objects. Recent findings support the idea of gamma oscillations being involved in the partitioning of auditory input into discrete samples to facilitate higher order processing. We review experiments that suggest that inter-individual differences in the resonance frequency are behaviorally relevant for gap detection and speech processing. A possible application of these resonance frequencies for brain computer interfaces is illustrated with regard to optimized individual presentation rates for auditory input to correspond with endogenous oscillatory activity. This article is part of a Special Issue entitled SI: Auditory working memory.
Avey, Marc T; Phillmore, Leslie S; MacDougall-Shackleton, Scott A
2005-12-07
Sensory-driven immediate early gene (IEG) expression has been a key tool to explore auditory perceptual areas in the avian brain. Most work on IEG expression in songbirds such as zebra finches has focused on playback of acoustic stimuli and its effect on auditory processing areas such as the caudal medial mesopallium (CMM) and caudal medial nidopallium (NCM). However, in a natural setting, the courtship displays of songbirds (including zebra finches) include visual as well as acoustic components. To determine whether the visual stimulus of a courting male modifies song-induced expression of the IEG ZENK in the auditory forebrain, we exposed male and female zebra finches to acoustic (song) and visual (dancing) components of courtship. Birds were played digital movies with either combined audio and video, audio only, video only, or neither audio nor video (control). We found significantly increased levels of ZENK response in the auditory region CMM in the two treatment groups exposed to acoustic stimuli compared to the control group. The video only group had an intermediate response, suggesting a potential effect of visual input on activity in these auditory brain regions. Finally, we unexpectedly found a lateralization of ZENK response that was independent of sex, brain region, or treatment condition, such that ZENK immunoreactivity was consistently higher in the left hemisphere than in the right, and the majority of individual birds were left-hemisphere dominant.
Morphogenetic interaction of presumptive neural and mesodermal cells mixed in different ratios.
Toivonen, S; Saxen, L
1968-02-02
Cells of the presumptive forebrain region and axial mesoderm of Triturus neurulae were disaggregated and combined in different ratios. The differentiation of the central nervous system in these explants was dependent on the relative amount of mesodermal cells present: an increase of mesodermal cells resulted in a corresponding increase in the frequency with which caudal structures of the central nervous system developed and a gradual loss of the forebrain formations.
Hemispheric differences in processing of vocalizations depend on early experience.
Phan, Mimi L; Vicario, David S
2010-02-02
An intriguing phenomenon in the neurobiology of language is lateralization: the dominant role of one hemisphere in a particular function. Lateralization is not exclusive to language because lateral differences are observed in other sensory modalities, behaviors, and animal species. Despite much scientific attention, the function of lateralization, its possible dependence on experience, and the functional implications of such dependence have yet to be clearly determined. We have explored the role of early experience in the development of lateralized sensory processing in the brain, using the songbird model of vocal learning. By controlling exposure to natural vocalizations (through isolation, song tutoring, and muting), we manipulated the postnatal auditory environment of developing zebra finches, and then assessed effects on hemispheric specialization for communication sounds in adulthood. Using bilateral multielectrode recordings from a forebrain auditory area known to selectively process species-specific vocalizations, we found that auditory responses to species-typical songs and long calls, in both male and female birds, were stronger in the right hemisphere than in the left, and that right-side responses adapted more rapidly to stimulus repetition. We describe specific instances, particularly in males, where these lateral differences show an influence of auditory experience with song and/or the bird's own voice during development.
Seasonal Plasticity of Precise Spike Timing in the Avian Auditory System
Sen, Kamal; Rubel, Edwin W; Brenowitz, Eliot A.
2015-01-01
Vertebrate audition is a dynamic process, capable of exhibiting both short- and long-term adaptations to varying listening conditions. Precise spike timing has long been known to play an important role in auditory encoding, but its role in sensory plasticity remains largely unexplored. We addressed this issue in Gambel's white-crowned sparrow (Zonotrichia leucophrys gambelii), a songbird that shows pronounced seasonal fluctuations in circulating levels of sex-steroid hormones, which are known to be potent neuromodulators of auditory function. We recorded extracellular single-unit activity in the auditory forebrain of males and females under different breeding conditions and used a computational approach to explore two potential strategies for the neural discrimination of sound level: one based on spike counts and one based on spike timing reliability. We report that breeding condition has robust sex-specific effects on spike timing. Specifically, in females, breeding condition increases the proportion of cells that rely solely on spike timing information and increases the temporal resolution required for optimal intensity encoding. Furthermore, in a functionally distinct subset of cells that are particularly well suited for amplitude encoding, female breeding condition enhances spike timing-based discrimination accuracy. No effects of breeding condition were observed in males. Our results suggest that high-resolution temporal discharge patterns may provide a plastic neural substrate for sensory coding.
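The two decoding strategies compared above, spike counts versus spike timing, can be illustrated with a generic binned nearest-neighbor discriminator. This is not the authors' analysis code; the function `discriminate`, the simulated spike trains, and all parameter choices below are illustrative assumptions. The key idea is that a bin width as long as the trial reduces the code to spike counts, while finer bins admit timing information.

```python
import numpy as np

def discriminate(trains, labels, resolution):
    """Leave-one-out nearest-neighbor discrimination of spike trains.

    trains: list of 1-D arrays of spike times (s), one per trial.
    labels: stimulus label for each trial.
    resolution: bin width (s). A bin as long as the trial (here 1 s)
    reduces the code to a pure spike-count strategy; finer bins add
    spike-timing information.
    Returns the fraction of trials assigned to the correct stimulus.
    """
    T = 1.0  # assumed trial duration (s)
    n_bins = max(1, int(round(T / resolution)))
    # Represent each trial as a binned spike-count vector.
    vecs = np.array([np.histogram(t, bins=n_bins, range=(0.0, T))[0]
                     for t in trains], dtype=float)
    correct = 0
    for i in range(len(vecs)):
        dist = np.linalg.norm(vecs - vecs[i], axis=1)
        dist[i] = np.inf  # exclude the trial itself (leave-one-out)
        correct += labels[int(np.argmin(dist))] == labels[i]
    return correct / len(vecs)

# Simulated example: two stimuli evoke the same spike count (3 spikes)
# but at different, jittered times, so only timing can separate them.
rng = np.random.default_rng(1)
trains, labels = [], []
for label, centers in (("A", [0.2, 0.5, 0.8]), ("B", [0.3, 0.6, 0.9])):
    for _ in range(20):
        trains.append(np.clip(np.array(centers) + rng.normal(0, 0.02, 3), 0, 1))
        labels.append(label)

acc_timing = discriminate(trains, labels, resolution=0.05)  # timing code
acc_count = discriminate(trains, labels, resolution=1.0)    # count code
```

With these matched-count stimuli, the count-based decoder stays at chance while the timing-based decoder separates the two stimuli, mirroring the count-versus-timing comparison described in the abstract.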
Temporal processing and adaptation in the songbird auditory forebrain.
Nagel, Katherine I; Doupe, Allison J
2006-09-21
Songbird auditory neurons must encode the dynamics of natural sounds at many volumes. We investigated how neural coding depends on the distribution of stimulus intensities. Using reverse correlation, we modeled responses to amplitude-modulated sounds as the output of a linear filter and a nonlinear gain function, then asked how filters and nonlinearities depend on the stimulus mean and variance. Filter shape depended strongly on mean amplitude (volume): at low mean, most neurons integrated sound over many milliseconds, while at high mean, neurons responded more to local changes in amplitude. Increasing the variance (contrast) of amplitude modulations had less effect on filter shape but decreased the gain of firing in most cells. Both filter and gain changes occurred rapidly after a change in statistics, suggesting that they represent nonlinearities in processing. These changes may permit neurons to signal effectively over a wider dynamic range and are reminiscent of findings in other sensory systems.
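The linear-filter-plus-nonlinear-gain (LN) description above can be sketched with a generic reverse-correlation estimate: the filter is approximated by the spike-triggered average of a white stimulus, and the gain function by binning mean spike counts against the filtered stimulus. This is not the authors' analysis code; `fit_ln_model`, the white-noise assumption, and the parameter choices are illustrative.

```python
import numpy as np

def fit_ln_model(stimulus, spikes, filt_len=40, n_bins=10):
    """Estimate an LN model by reverse correlation.

    stimulus: 1-D array of (assumed white) amplitude samples.
    spikes:   1-D array of spike counts per time bin, same length.
    Returns (filt, centers, gain): the spike-triggered-average filter,
    and a binned estimate of the static nonlinearity (mean spike count
    versus filtered stimulus value).
    """
    T = len(stimulus)
    # Spike-triggered average: mean stimulus history around each spike.
    sta = np.zeros(filt_len)
    n_spikes = 0
    for t in range(filt_len, T):
        if spikes[t] > 0:
            sta += spikes[t] * stimulus[t - filt_len + 1:t + 1][::-1]
            n_spikes += spikes[t]
    filt = sta / max(n_spikes, 1)

    # "Generator signal": stimulus passed through the estimated filter.
    gen = np.convolve(stimulus, filt, mode="full")[:T]

    # Static nonlinearity: mean spike count per quantile bin of gen.
    edges = np.quantile(gen[filt_len:], np.linspace(0, 1, n_bins + 1))
    centers, gain = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (gen[filt_len:] >= lo) & (gen[filt_len:] <= hi)
        if mask.any():
            centers.append(0.5 * (lo + hi))
            gain.append(spikes[filt_len:][mask].mean())
    return filt, np.array(centers), np.array(gain)
```

As a check, simulating a Poisson neuron with a known exponential filter and a rectifying nonlinearity lets the estimated filter be compared against the true one; with a white Gaussian stimulus the spike-triggered average recovers the filter up to a scale factor.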
Sensory maps in the claustrum of the cat.
Olson, C R; Graybiel, A M
1980-12-04
The claustrum is a telencephalic cell group (Fig. 1A, B) possessing widespread reciprocal connections with the neocortex. In this regard, it bears a unique and striking resemblance to the thalamus. We have now examined the anatomical ordering of pathways linking the claustrum with sensory areas of the cat neocortex and, in parallel electrophysiological experiments, have studied the functional organization of claustral sensory zones so identified. Our findings indicate that there are discrete visual and somatosensory subdivisions in the claustrum interconnected with the corresponding primary sensory areas of the neocortex and that the respective zones contain orderly retinotopic and somatotopic maps. A third claustral region receiving fibre projections from the auditory cortex in or near area Ep was found to contain neurones responsive to auditory stimulation. We conclude that loops connecting sensory areas of the neocortex with satellite zones in the claustrum contribute to the early processing of exteroceptive information by the forebrain.
Integration of auditory and vibrotactile stimuli: Effects of frequency
Wilson, E. Courtenay; Reed, Charlotte M.; Braida, Louis D.
2010-01-01
Perceptual integration of vibrotactile and auditory sinusoidal tone pulses was studied in detection experiments as a function of stimulation frequency. Vibrotactile stimuli were delivered through a single channel vibrator to the left middle fingertip. Auditory stimuli were presented diotically through headphones in a background of 50 dB sound pressure level broadband noise. Detection performance for combined auditory-tactile presentations was measured using stimulus levels that yielded 63% to 77% correct unimodal performance. In Experiment 1, the vibrotactile stimulus was 250 Hz and the auditory stimulus varied between 125 and 2000 Hz. In Experiment 2, the auditory stimulus was 250 Hz and the tactile stimulus varied between 50 and 400 Hz. In Experiment 3, the auditory and tactile stimuli were always equal in frequency and ranged from 50 to 400 Hz. The highest rates of detection for the combined-modality stimulus were obtained when stimulating frequencies in the two modalities were equal or closely spaced (and within the Pacinian range). Combined-modality detection for closely spaced frequencies was generally consistent with an algebraic sum model of perceptual integration; wider-frequency spacings were generally better fit by a Pythagorean sum model. Thus, perceptual integration of auditory and tactile stimuli at near-threshold levels appears to depend both on absolute frequency and relative frequency of stimulation within each modality.
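The two integration models named above have standard signal-detection forms: an algebraic sum adds the unimodal sensitivities directly, while a Pythagorean sum combines them as independent channels. A minimal sketch (the function name and the d'-based framing are assumptions for illustration, not the paper's code):

```python
import math

def predicted_dprime(d_a, d_t, model="pythagorean"):
    """Predicted combined-modality sensitivity from unimodal d' values.

    'algebraic'   : d_AT = d_A + d_T          (full summation of effects)
    'pythagorean' : d_AT = sqrt(d_A^2 + d_T^2) (independent-channel sum)
    """
    if model == "algebraic":
        return d_a + d_t
    if model == "pythagorean":
        return math.hypot(d_a, d_t)
    raise ValueError(f"unknown model: {model!r}")
```

For equal unimodal sensitivities the algebraic sum predicts a larger bimodal gain than the Pythagorean sum, which is why near-threshold data can distinguish the two.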
Baltus, Alina; Vosskuhl, Johannes; Boetzel, Cindy; Herrmann, Christoph Siegfried
2018-05-13
Recent research provides evidence for a functional role of brain oscillations in perception. For example, auditory temporal resolution seems to be linked to the individual gamma frequency of auditory cortex. Individual gamma frequency not only correlates with performance in between-channel gap detection tasks but can be modulated via auditory transcranial alternating current stimulation. Modulation of individual gamma frequency is accompanied by an improvement in gap detection performance. Aging changes electrophysiological frequency components and sensory processing mechanisms. Therefore, we conducted a study to investigate the link between individual gamma frequency and gap detection performance in elderly people using auditory transcranial alternating current stimulation. In a within-subject design, twelve participants were electrically stimulated with two individualized transcranial alternating current stimulation frequencies: 3 Hz above their individual gamma frequency (experimental condition) and 4 Hz below their individual gamma frequency (control condition) while they performed a between-channel gap detection task. As expected, individual gamma frequencies correlated significantly with gap detection performance at baseline, and in the experimental condition transcranial alternating current stimulation modulated gap detection performance. In the control condition, stimulation did not modulate gap detection performance. In addition, in the elderly, the effect of transcranial alternating current stimulation on auditory temporal resolution seems to depend on endogenous frequencies in auditory cortex: elderly listeners with slower individual gamma frequencies and lower auditory temporal resolution benefit from auditory transcranial alternating current stimulation and show increased gap detection performance during stimulation. Our results strongly suggest individualized transcranial alternating current stimulation protocols for successful modulation of performance. 
Modulation frequency as a cue for auditory speed perception.
Senna, Irene; Parise, Cesare V; Ernst, Marc O
2017-07-12
Unlike vision, the mechanisms underlying auditory motion perception are poorly understood. Here we describe an auditory motion illusion revealing a novel cue to auditory speed perception: the temporal frequency of amplitude modulation (AM-frequency), typical for rattling sounds. Naturally, corrugated objects sliding across each other generate rattling sounds whose AM-frequency tends to directly correlate with speed. We found that AM-frequency modulates auditory speed perception in a highly systematic fashion: moving sounds with higher AM-frequency are perceived as moving faster than sounds with lower AM-frequency. Even more interestingly, sounds with higher AM-frequency also induce stronger motion aftereffects. This reveals the existence of specialized neural mechanisms for auditory motion perception, which are sensitive to AM-frequency. Thus, in spatial hearing, the brain successfully capitalizes on the AM-frequency of rattling sounds to estimate the speed of moving objects. This tightly parallels previous findings in motion vision, where spatio-temporal frequency of moving displays systematically affects both speed perception and the magnitude of the motion aftereffects. Such an analogy with vision suggests that motion detection may rely on canonical computations, with similar neural mechanisms shared across the different modalities. © 2017 The Author(s).
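A rattling-like stimulus of the kind described can be approximated as a carrier tone with a sinusoidal amplitude envelope, where the envelope rate is the AM-frequency that co-varies with perceived speed. A minimal sketch; all parameter values and names are illustrative assumptions, not the study's actual stimuli:

```python
import numpy as np

def am_rattle(duration_s=1.0, fs=44100, carrier_hz=2000.0, am_hz=20.0, depth=1.0):
    """Sketch of a rattling-like sound: a tonal carrier whose amplitude
    is modulated at am_hz. Per the abstract, sounds with higher AM
    frequency tend to be perceived as moving faster."""
    t = np.arange(int(duration_s * fs)) / fs
    envelope = 1.0 + depth * np.sin(2.0 * np.pi * am_hz * t)
    return envelope * np.sin(2.0 * np.pi * carrier_hz * t)

slow = am_rattle(am_hz=10.0)  # lower AM rate: perceived as slower motion
fast = am_rattle(am_hz=40.0)  # higher AM rate: perceived as faster motion
```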
Dick, Frederic K; Lehet, Matt I; Callaghan, Martina F; Keller, Tim A; Sereno, Martin I; Holt, Lori L
2017-12-13
Auditory selective attention is vital in natural soundscapes. But it is unclear how attentional focus on the primary dimension of auditory representation-acoustic frequency-might modulate basic auditory functional topography during active listening. In contrast to visual selective attention, which is supported by motor-mediated optimization of input across saccades and pupil dilation, the primate auditory system has fewer means of differentially sampling the world. This makes spectrally-directed endogenous attention a particularly crucial aspect of auditory attention. Using a novel functional paradigm combined with quantitative MRI, we establish in male and female listeners that human frequency-band-selective attention drives activation in both myeloarchitectonically estimated auditory core, and across the majority of tonotopically mapped nonprimary auditory cortex. The attentionally driven best-frequency maps show strong concordance with sensory-driven maps in the same subjects across much of the temporal plane, with poor concordance in areas outside traditional auditory cortex. There is significantly greater activation across most of auditory cortex when best frequency is attended, versus ignored; the same regions do not show this enhancement when attending to the least-preferred frequency band. Finally, the results demonstrate that there is spatial correspondence between the degree of myelination and the strength of the tonotopic signal across a number of regions in auditory cortex. Strong frequency preferences across tonotopically mapped auditory cortex spatially correlate with R1-estimated myeloarchitecture, indicating shared functional and anatomical organization that may underlie intrinsic auditory regionalization. SIGNIFICANCE STATEMENT Perception is an active process, especially sensitive to attentional state. Listeners direct auditory attention to track a violin's melody within an ensemble performance, or to follow a voice in a crowded cafe.
Although diverse pathologies reduce quality of life by impacting such spectrally directed auditory attention, its neurobiological bases are unclear. We demonstrate that human primary and nonprimary auditory cortical activation is modulated by spectrally directed attention in a manner that recapitulates its tonotopic sensory organization. Further, the graded activation profiles evoked by single-frequency bands are correlated with attentionally driven activation when these bands are presented in complex soundscapes. Finally, we observe a strong concordance in the degree of cortical myelination and the strength of tonotopic activation across several auditory cortical regions. Copyright © 2017 Dick et al.
Fergus, Daniel J.; Bass, Andrew H.
2013-01-01
Estrogens play a salient role in the development and maintenance of both male and female nervous systems and behaviors. The plainfin midshipman (Porichthys notatus), a teleost fish, has two male reproductive morphs that follow alternative mating tactics and diverge in multiple somatic, hormonal and neural traits, including the central control of morph-specific vocal behaviors. After we identified duplicate estrogen receptors (ERβ1 and ERβ2) in midshipman, we developed antibodies to localize protein expression in the central vocal-acoustic networks and saccule, the auditory division of the inner ear. As in other teleost species, ERβ1 and ERβ2 were robustly expressed in the telencephalon and hypothalamus in vocal-acoustic and other brain regions shown previously to exhibit strong expression of ERα and aromatase (estrogen synthetase, CYP19) in midshipman. Like aromatase, ERβ1 label co-localized with glial fibrillary acidic protein (GFAP) in telencephalic radial glial cells. Quantitative PCR revealed similar patterns of transcript abundance across reproductive morphs for ERβ1, ERβ2, ERα and aromatase in the forebrain and saccule. In contrast, transcript abundance for ERs and aromatase varied significantly between morphs in and around the sexually polymorphic vocal motor nucleus (VMN). Together, the results suggest that VMN is the major estrogen target within the estrogen-sensitive hindbrain vocal network that directly determines the duration, frequency and amplitude of morph-specific vocalizations. Comparable regional differences in steroid receptor abundances likely regulate morph-specific behaviors in males and females of other species exhibiting alternative reproductive tactics. PMID:23460422
A basic study on universal design of auditory signals in automobiles.
Yamauchi, Katsuya; Choi, Jong-dae; Maiguma, Ryo; Takada, Masayuki; Iwamiya, Shin-ichiro
2004-11-01
In this paper, the impressions made by various kinds of auditory signals currently used in automobiles were measured, and a comprehensive evaluation performed, using the semantic differential method. The desirable acoustic characteristics were examined for each type of auditory signal. Sharp sounds with dominant high-frequency components were not suitable for auditory signals in automobiles. This trend is expedient for the aged, whose auditory sensitivity in the high-frequency region is lower. When intermittent sounds were used, a longer OFF time was suitable. Generally, "dull" (not sharp) and "calm" sounds were appropriate for auditory signals. Furthermore, a comparison between the frequency spectrum of interior noise in automobiles and that of sounds suitable for various auditory signals indicates that the suitable sounds are not easily masked. Designing suitable auditory signals for each purpose is thus a good solution from the viewpoint of universal design.
Mahendra Prashanth, K V; Venugopalachar, Sridhar
2011-01-01
Noise is a common occupational health hazard in most industrial settings. An assessment of noise and its adverse health effects based on noise intensity alone is inadequate; for an efficient evaluation of noise effects, frequency spectrum analysis should also be included. This paper aims to substantiate the importance of studying the contribution of noise frequencies in evaluating health effects and their association with physiological behavior within the human body. Additionally, a review of studies published between 1988 and 2009 that investigate the impact of industrial/occupational noise on auditory and non-auditory effects, and the probable association and contribution of noise frequency components to these effects, is presented. The relevant studies in English were identified in Medknow, Medline, Wiley, Elsevier, and Springer publications. Data were extracted from studies that fulfilled the following criterion: the title and/or abstract involved industrial/occupational noise exposure in relation to auditory, non-auditory, or other health effects. Significant data on the study characteristics, including noise frequency characteristics, were considered in the assessment. It is demonstrated that only a few studies have considered frequency contributions in their investigations, and then only for auditory rather than non-auditory effects. The data suggest that significant adverse health effects of industrial noise include auditory and heart-related problems. The study provides strong evidence for the claim that noise with a dominant frequency component around 4 kHz has auditory effects but, owing to a lack of data, fails to show any influence of noise frequency components on non-auditory effects. Furthermore, specific noise levels and frequencies predicting the corresponding health impacts have not yet been validated. Further research is needed to clarify the importance of the dominant noise frequency contribution in evaluating health effects.
Ciaramitaro, Vivian M; Chow, Hiu Mei; Eglington, Luke G
2017-03-01
We used a cross-modal dual task to examine how changing visual-task demands influenced auditory processing, namely auditory thresholds for amplitude- and frequency-modulated sounds. Observers had to attend to two consecutive intervals of sounds and report which interval contained the auditory stimulus that was modulated in amplitude (Experiment 1) or frequency (Experiment 2). During auditory-stimulus presentation, observers simultaneously attended to a rapid sequential visual presentation-two consecutive intervals of streams of visual letters-and had to report which interval contained a particular color (low load, demanding less attentional resources) or, in separate blocks of trials, which interval contained more of a target letter (high load, demanding more attentional resources). We hypothesized that if attention is a shared resource across vision and audition, an easier visual task should free up more attentional resources for auditory processing on an unrelated task, hence improving auditory thresholds. Auditory detection thresholds were lower-that is, auditory sensitivity was improved-for both amplitude- and frequency-modulated sounds when observers engaged in a less demanding (compared to a more demanding) visual task. In accord with previous work, our findings suggest that visual-task demands can influence the processing of auditory information on an unrelated concurrent task, providing support for shared attentional resources. More importantly, our results suggest that attending to information in a different modality, cross-modal attention, can influence basic auditory contrast sensitivity functions, highlighting potential similarities between basic mechanisms for visual and auditory attention.
Aizenberg, Mark; Mwilambwe-Tshilobo, Laetitia; Briguglio, John J.; Natan, Ryan G.; Geffen, Maria N.
2015-01-01
The ability to discriminate tones of different frequencies is fundamentally important for everyday hearing. While neurons in the primary auditory cortex (AC) respond differentially to tones of different frequencies, whether and how AC regulates auditory behaviors that rely on frequency discrimination remains poorly understood. Here, we find that the level of activity of inhibitory neurons in AC controls frequency specificity in innate and learned auditory behaviors that rely on frequency discrimination. Photoactivation of parvalbumin-positive interneurons (PVs) improved the ability of the mouse to detect a shift in tone frequency, whereas photosuppression of PVs impaired the performance. Furthermore, photosuppression of PVs during discriminative auditory fear conditioning increased generalization of conditioned response across tone frequencies, whereas PV photoactivation preserved normal specificity of learning. The observed changes in behavioral performance were correlated with bidirectional changes in the magnitude of tone-evoked responses, consistent with predictions of a model of a coupled excitatory-inhibitory cortical network. Direct photoactivation of excitatory neurons, which did not change tone-evoked response magnitude, did not affect behavioral performance in either task. Our results identify a new function for inhibition in the auditory cortex, demonstrating that it can improve or impair acuity of innate and learned auditory behaviors that rely on frequency discrimination. PMID:26629746
Surbhi; Rastogi, Ashutosh; Malik, Shalie; Rani, Sangeeta; Kumar, Vinod
2016-10-01
This study examines whether differences in annual life-history states (LHSs) among the inhabitants of two latitudes would have an impact on the neuronal plasticity of the song-control system in songbirds. At the times of equinoxes and solstices during the year (n = 4 per year) corresponding to different LHSs, we measured the volumetric changes and expression of doublecortin (DCX; an endogenous marker of the neuronal recruitment) in the song-control nuclei and higher order auditory forebrain regions of the subtropical resident Indian weaverbirds (Ploceus philippinus) and Palearctic-Indian migratory redheaded buntings (Emberiza bruniceps). Area X in basal ganglia, lateral magnocellular nucleus of the anterior nidopallium (LMAN), HVC (proper name), and robust nucleus of the arcopallium (RA) were enlarged during the breeding LHS. Both round and fusiform DCX-immunoreactive (DCX-ir) cells were found in area X and HVC but not in LMAN or RA, with a significant seasonal difference. Also, as shown by increase in volume and by dense, round DCX-ir cells, the neuronal incorporation was increased in HVC alone during the breeding LHS. This suggests differences in the response of song-control nuclei to photoperiod-induced changes in LHSs. Furthermore, DCX immunoreactivity indicated participation of the cortical caudomedial nidopallium and caudomedial mesopallium in the song-control system, albeit with differences between the weaverbirds and the buntings. Overall, these results show seasonal neuronal plasticity in the song-control system closely associated with annual reproductive LHS in both of the songbirds. Differences between species probably account for the differences in the photoperiod-response system between the relative refractory weaverbirds and absolute refractory redheaded buntings. J. Comp. Neurol. 524:2914-2929, 2016. © 2016 Wiley Periodicals, Inc. © 2016 Wiley Periodicals, Inc.
Sewall, Kendra B.; Caro, Samuel P.; Sockman, Keith W.
2013-01-01
Male animals often change their behavior in response to the level of competition for mates. Male Lincoln's sparrows (Melospiza lincolnii) modulate their competitive singing over the period of a week as a function of the level of challenge associated with competitors' songs. Differences in song challenge and associated shifts in competitive state should be accompanied by neural changes, potentially in regions that regulate perception and song production. The monoamines mediate neural plasticity in response to environmental cues to achieve shifts in behavioral state. Therefore, using high pressure liquid chromatography with electrochemical detection, we compared levels of monoamines and their metabolites from male Lincoln's sparrows exposed to songs categorized as more or less challenging. We compared levels of norepinephrine and its principal metabolite in two perceptual regions of the auditory telencephalon, the caudomedial nidopallium and the caudomedial mesopallium (CMM), because this chemical is implicated in modulating auditory sensitivity to song. We also measured the levels of dopamine and its principal metabolite in two song control nuclei, area X and the robust nucleus of the arcopallium (RA), because dopamine is implicated in regulating song output. We measured the levels of serotonin and its principal metabolite in all four brain regions because this monoamine is implicated in perception and behavioral output and is found throughout the avian forebrain. After controlling for recent singing, we found that males exposed to more challenging song had higher levels of norepinephrine metabolite in the CMM and lower levels of serotonin in the RA. Collectively, these findings are consistent with norepinephrine in perceptual brain regions and serotonin in song control regions contributing to neuroplasticity that underlies socially-induced changes in behavioral state. PMID:23555809
Happel, Max F. K.; Ohl, Frank W.
2017-01-01
Robust perception of auditory objects over a large range of sound intensities is a fundamental feature of the auditory system. However, firing characteristics of single neurons across the entire auditory system, like the frequency tuning, can change significantly with stimulus intensity. Physiological correlates of level-constancy of auditory representations hence should be manifested on the level of larger neuronal assemblies or population patterns. In this study we have investigated how information of frequency and sound level is integrated on the circuit-level in the primary auditory cortex (AI) of the Mongolian gerbil. We used a combination of pharmacological silencing of corticocortically relayed activity and laminar current source density (CSD) analysis. Our data demonstrate that with increasing stimulus intensities progressively lower frequencies lead to the maximal impulse response within cortical input layers at a given cortical site inherited from thalamocortical synaptic inputs. We further identified a temporally precise intercolumnar synaptic convergence of early thalamocortical and horizontal corticocortical inputs. Later tone-evoked activity in upper layers showed a preservation of broad tonotopic tuning across sound levels without shifts towards lower frequencies. Synaptic integration within corticocortical circuits may hence contribute to a level-robust representation of auditory information on a neuronal population level in the auditory cortex. PMID:28046062
Bell, Brittany A; Phan, Mimi L; Vicario, David S
2015-03-01
How do social interactions form and modulate the neural representations of specific complex signals? This question can be addressed in the songbird auditory system. Like humans, songbirds learn to vocalize by imitating tutors heard during development. These learned vocalizations are important in reproductive and social interactions and in individual recognition. As a model for the social reinforcement of particular songs, male zebra finches were trained to peck for a food reward in response to one song stimulus (GO) and to withhold responding for another (NoGO). After performance reached criterion, single and multiunit neural responses to both trained and novel stimuli were obtained from multiple electrodes inserted bilaterally into two songbird auditory processing areas [caudomedial mesopallium (CMM) and caudomedial nidopallium (NCM)] of awake, restrained birds. Neurons in these areas undergo stimulus-specific adaptation to repeated song stimuli, and responses to familiar stimuli adapt more slowly than to novel stimuli. The results show that auditory responses differed in NCM and CMM for trained (GO and NoGO) stimuli vs. novel song stimuli. When subjects were grouped by the number of training days required to reach criterion, fast learners showed larger neural responses and faster stimulus-specific adaptation to all stimuli than slow learners in both areas. Furthermore, responses in NCM of fast learners were more strongly left-lateralized than in slow learners. Thus auditory responses in these sensory areas not only encode stimulus familiarity, but also reflect behavioral reinforcement in our paradigm, and can potentially be modulated by social interactions. Copyright © 2015 the American Physiological Society.
Raksin, Jonathan N; Glaze, Christopher M; Smith, Sarah; Schmidt, Marc F
2012-04-01
Motor-related forebrain areas in higher vertebrates also show responses to passively presented sensory stimuli. However, sensory tuning properties in these areas, especially during wakefulness, and their relation to perception, are poorly understood. In the avian song system, HVC (proper name) is a vocal-motor structure with auditory responses well defined under anesthesia but poorly characterized during wakefulness. We used a large set of stimuli including the bird's own song (BOS) and many conspecific songs (CON) to characterize auditory tuning properties in putative interneurons (HVC(IN)) during wakefulness. Our findings suggest that HVC contains a diversity of responses that vary in overall excitability to auditory stimuli, as well as bias in spike rate increases to BOS over CON. We used statistical tests to classify cells in order to further probe auditory responses, yielding one-third of neurons that were either unresponsive or suppressed and two-thirds with excitatory responses to one or more stimuli. A subset of excitatory neurons were tuned exclusively to BOS and showed very low linearity as measured by spectrotemporal receptive field analysis (STRF). The remaining excitatory neurons responded well to CON stimuli, although many cells still expressed a bias toward BOS. These findings suggest the concurrent presence of a nonlinear and a linear component to responses in HVC, even within the same neuron. These characteristics are consistent with perceptual deficits in distinguishing BOS from CON stimuli following lesions of HVC and other song nuclei and suggest mirror neuronlike qualities in which "self" (here BOS) is used as a referent to judge "other" (here CON).
Species-specific calls evoke asymmetric activity in the monkey's temporal poles.
Poremba, Amy; Malloy, Megan; Saunders, Richard C; Carson, Richard E; Herscovitch, Peter; Mishkin, Mortimer
2004-01-29
It has often been proposed that the vocal calls of monkeys are precursors of human speech, in part because they provide critical information to other members of the species who rely on them for survival and social interactions. Both behavioural and lesion studies suggest that monkeys, like humans, use the auditory system of the left hemisphere preferentially to process vocalizations. To investigate the pattern of neural activity that might underlie this particular form of functional asymmetry in monkeys, we measured local cerebral metabolic activity while the animals listened passively to species-specific calls compared with a variety of other classes of sound. Within the superior temporal gyrus, significantly greater metabolic activity occurred on the left side than on the right, only in the region of the temporal pole and only in response to monkey calls. This functional asymmetry was absent when these regions were separated by forebrain commissurotomy, suggesting that the perception of vocalizations elicits concurrent interhemispheric interactions that focus the auditory processing within a specialized area of one hemisphere.
Effects of Sound Frequency on Audiovisual Integration: An Event-Related Potential Study.
Yang, Weiping; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Ren, Yanna; Takahashi, Satoshi; Wu, Jinglong
2015-01-01
A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the task of the participant was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli, comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190-210 ms, for 1 kHz stimuli from 170-200 ms, for 2.5 kHz stimuli from 140-200 ms, and for 5 kHz stimuli from 100-200 ms. These findings suggest that a higher frequency sound signal paired with visual stimuli might be processed or integrated earlier, despite the auditory stimuli being task-irrelevant information. Furthermore, audiovisual integration in late-latency (300-340 ms) ERPs with fronto-central topography was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirm that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a multisensory visual signal paired with auditory stimuli of different frequencies.
Tuning In to Sound: Frequency-Selective Attentional Filter in Human Primary Auditory Cortex
Da Costa, Sandra; van der Zwaag, Wietske; Miller, Lee M.; Clarke, Stephanie
2013-01-01
Cocktail parties, busy streets, and other noisy environments pose a difficult challenge to the auditory system: how to focus attention on selected sounds while ignoring others? Neurons of primary auditory cortex, many of which are sharply tuned to sound frequency, could help solve this problem by filtering selected sound information based on frequency-content. To investigate whether this occurs, we used high-resolution fMRI at 7 tesla to map the fine-scale frequency-tuning (1.5 mm isotropic resolution) of primary auditory areas A1 and R in six human participants. Then, in a selective attention experiment, participants heard low (250 Hz)- and high (4000 Hz)-frequency streams of tones presented at the same time (dual-stream) and were instructed to focus attention onto one stream versus the other, switching back and forth every 30 s. Attention to low-frequency tones enhanced neural responses within low-frequency-tuned voxels relative to high, and when attention switched the pattern quickly reversed. Thus, like a radio, human primary auditory cortex is able to tune into attended frequency channels and can switch channels on demand. PMID:23365225
Different auditory feedback control for echolocation and communication in horseshoe bats.
Liu, Ying; Feng, Jiang; Metzner, Walter
2013-01-01
Auditory feedback from the animal's own voice is essential during bat echolocation: to optimize signal detection, bats continuously adjust various call parameters in response to changing echo signals. Auditory feedback seems also necessary for controlling many bat communication calls, although it remains unclear how auditory feedback control differs in echolocation and communication. We tackled this question by analyzing echolocation and communication in greater horseshoe bats, whose echolocation pulses are dominated by a constant frequency component that matches the frequency range they hear best. To maintain echoes within this "auditory fovea", horseshoe bats constantly adjust their echolocation call frequency depending on the frequency of the returning echo signal. This Doppler-shift compensation (DSC) behavior represents one of the most precise forms of sensory-motor feedback known. We examined the variability of echolocation pulses emitted at rest (resting frequencies, RFs) and one type of communication signal which resembles an echolocation pulse but is much shorter (short constant frequency communication calls, SCFs) and produced only during social interactions. We found that while RFs varied from day to day, corroborating earlier studies in other constant frequency bats, SCF-frequencies remained unchanged. In addition, RFs overlapped for some bats whereas SCF-frequencies were always distinctly different. This indicates that auditory feedback during echolocation changed with varying RFs but remained constant or may have been absent during emission of SCF calls for communication. This fundamentally different feedback mechanism for echolocation and communication may have enabled these bats to use SCF calls for individual recognition whereas they adjusted RF calls to accommodate the daily shifts of their auditory fovea.
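Doppler-shift compensation can be illustrated with the standard two-way Doppler formula for a source moving toward a stationary reflector. The sketch below is illustrative: the function names, the flight speed, and the use of a reference frequency of roughly 83 kHz (near the greater horseshoe bat's auditory fovea) are assumptions, not values reported in the study.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C (assumed)

def echo_frequency(f_emit_hz, bat_speed_ms, c=SPEED_OF_SOUND):
    """Two-way Doppler-shifted echo frequency for a bat flying at
    bat_speed_ms toward a stationary target."""
    return f_emit_hz * (c + bat_speed_ms) / (c - bat_speed_ms)

def compensated_call(f_ref_hz, bat_speed_ms, c=SPEED_OF_SOUND):
    """Call frequency the bat should emit so that the echo returns
    at the reference (foveal) frequency f_ref_hz."""
    return f_ref_hz * (c - bat_speed_ms) / (c + bat_speed_ms)

# Hypothetical example: keep echoes at ~83 kHz while flying at 5 m/s
f_call = compensated_call(83_000.0, 5.0)   # bat lowers its call frequency
f_echo = echo_frequency(f_call, 5.0)       # echo returns near 83 kHz
```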
NASA Astrophysics Data System (ADS)
Mhatre, Natasha; Robert, Daniel
2018-05-01
Tree cricket hearing shows all the features of an actively amplified auditory system, particularly spontaneous oscillations (SOs) of the tympanal membrane. As expected from an actively amplified auditory system, SO frequency and the peak frequency in evoked responses as observed in sensitivity spectra are correlated. Sensitivity spectra also show compressive non-linearity at this frequency, i.e. a reduction in peak height and sharpness with increasing stimulus amplitude. Both SO and amplified frequency also change with ambient temperature, allowing the auditory system to maintain a filter that is matched to song frequency. In tree crickets, remarkably, song frequency varies with ambient temperature. Interestingly, active amplification has been reported to be switched ON and OFF. The mechanism of this switch is as yet unknown. In order to gain insights into this switch, we recorded and analysed SOs as the auditory system transitioned from the passive (OFF) state to the active (ON) state. We found that while SO amplitude did not follow a fixed pattern, SO frequency changed during the ON/OFF transition. SOs were first detected above noise levels at low frequencies, sometimes well below the known song frequency range (0.5-1 kHz lower). SO frequency was observed to increase over the next ~30 minutes, in the absence of any ambient temperature change, before settling at a frequency within the range of conspecific song. We examine the frequency shift in SO spectra with temperature and during the ON/OFF transition and discuss the mechanistic implications. To our knowledge, such modulation of active auditory amplification and its dynamics are unique amongst auditory animals.
Boumans, Tiny; Gobes, Sharon M. H.; Poirier, Colline; Theunissen, Frederic E.; Vandersmissen, Liesbeth; Pintjens, Wouter; Verhoye, Marleen; Bolhuis, Johan J.; Van der Linden, Annemie
2008-01-01
Background Male songbirds learn their songs from an adult tutor when they are young. A network of brain nuclei known as the ‘song system’ is the likely neural substrate for sensorimotor learning and production of song, but the neural networks involved in processing the auditory feedback signals necessary for song learning and maintenance remain unknown. Determining which regions show preferential responsiveness to the bird's own song (BOS) is of great importance because neurons sensitive to self-generated vocalisations could mediate this auditory feedback process. Neurons in the song nuclei and in a secondary auditory area, the caudal medial mesopallium (CMM), show selective responses to the BOS. The aim of the present study is to investigate the emergence of BOS selectivity within the network of primary auditory sub-regions in the avian pallium. Methods and Findings Using blood oxygen level-dependent (BOLD) fMRI, we investigated neural responsiveness to natural and manipulated self-generated vocalisations and compared the selectivity for BOS and conspecific song in different sub-regions of the thalamo-recipient area Field L. Zebra finch males were exposed to conspecific song, BOS and to synthetic variations on BOS that differed in spectro-temporal and/or modulation phase structure. We found significant differences in the strength of BOLD responses between regions L2a, L2b and CMM, but no inter-stimuli differences within regions. In particular, we have shown that the overall signal strength to song and synthetic variations thereof was different within two sub-regions of Field L2: zone L2a was significantly more activated compared to the adjacent sub-region L2b. Conclusions Based on our results we suggest that unlike nuclei in the song system, sub-regions in the primary auditory pallium do not show selectivity for the BOS, but appear to show different levels of activity with exposure to any sound according to their place in the auditory processing stream. 
PMID:18781203
Lina, Ioan A; Lauer, Amanda M
2013-04-01
The notched noise method is an effective procedure for measuring frequency resolution and auditory filter shapes in both human and animal models of hearing. Briefly, auditory filter shape and bandwidth estimates are derived from masked thresholds for tones presented in noise containing widening spectral notches. As the spectral notch widens, increasingly less of the noise falls within the auditory filter and the tone becomes more detectable until the notch width exceeds the filter bandwidth. Behavioral procedures have been used for the derivation of notched noise auditory filter shapes in mice; however, the time and effort needed to train and test animals on these tasks places a constraint on the widespread application of this testing method. As an alternative procedure, we combined relatively non-invasive auditory brainstem response (ABR) measurements and the notched noise method to estimate auditory filters in normal-hearing mice at center frequencies of 8, 11.2, and 16 kHz. A complete set of simultaneous masked thresholds for a particular tone frequency were obtained in about an hour. ABR-derived filter bandwidths broadened with increasing frequency, consistent with previous studies. The ABR notched noise procedure provides a fast alternative for estimating frequency selectivity in mice that is well-suited to high-throughput or time-sensitive screening. Copyright © 2013 Elsevier B.V. All rights reserved.
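The filter-shape derivation behind notched-noise data is commonly framed as a power-spectrum model with a rounded-exponential (roex) filter. The sketch below is a generic illustration of that model, not the paper's fitting code; the filter slope `p`, noise level, and efficiency constant `k_db` are assumed values.

```python
import math

def roex_weight(g, p):
    # roex(p) filter weighting at normalized frequency deviation
    # g = |f - fc| / fc; larger p means a sharper filter.
    return (1.0 + p * g) * math.exp(-p * g)

def predicted_threshold_db(notch_g, p, noise_level_db=40.0, k_db=0.0):
    # Power-spectrum model: masked threshold tracks the noise power the
    # filter passes. Closed-form integral of the roex weighting from the
    # notch edge outward, doubled for the two flanking noise bands.
    passed = 2.0 * (2.0 + p * notch_g) * math.exp(-p * notch_g) / p
    return noise_level_db + 10.0 * math.log10(passed) + k_db
```

As the notch widens, less noise falls inside the filter and the predicted threshold drops, mirroring the behavior the abstract describes; the equivalent rectangular bandwidth of this filter shape is 4fc/p.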
Effects of Sound Frequency on Audiovisual Integration: An Event-Related Potential Study
Yang, Weiping; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Ren, Yanna; Takahashi, Satoshi; Wu, Jinglong
2015-01-01
A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, which is one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the task of the participant was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli, comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190–210 ms, for 1 kHz stimuli from 170–200 ms, for 2.5 kHz stimuli from 140–200 ms, and for 5 kHz stimuli from 100–200 ms. These findings suggest that a higher frequency sound signal paired with visual stimuli might be processed or integrated earlier, despite the auditory stimuli being task-irrelevant information. Furthermore, audiovisual integration in late-latency (300–340 ms) ERPs with fronto-central topography was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirmed that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a multisensory visual signal and auditory stimuli of different frequencies. PMID:26384256
Sustained selective attention to competing amplitude-modulations in human auditory cortex.
Riecke, Lars; Scharke, Wolfgang; Valente, Giancarlo; Gutschalk, Alexander
2014-01-01
Auditory selective attention plays an essential role for identifying sounds of interest in a scene, but the neural underpinnings are still incompletely understood. Recent findings demonstrate that neural activity that is time-locked to a particular amplitude-modulation (AM) is enhanced in the auditory cortex when the modulated stream of sounds is selectively attended to under sensory competition with other streams. However, the target sounds used in the previous studies differed not only in their AM, but also in other sound features, such as carrier frequency or location. Thus, it remains uncertain whether the observed enhancements reflect AM-selective attention. The present study aims at dissociating the effect of AM frequency on response enhancement in auditory cortex by using an ongoing auditory stimulus that contains two competing targets differing exclusively in their AM frequency. Electroencephalography results showed a sustained response enhancement for auditory attention compared to visual attention, but not for AM-selective attention (attended AM frequency vs. ignored AM frequency). In contrast, the response to the ignored AM frequency was enhanced, although a brief trend toward response enhancement occurred during the initial 15 s. Together with the previous findings, these observations indicate that selective enhancement of attended AMs in auditory cortex is adaptive under sustained AM-selective attention. This finding has implications for our understanding of cortical mechanisms for feature-based attentional gain control.
Control of Phasic Firing by a Background Leak Current in Avian Forebrain Auditory Neurons
Dagostin, André A.; Lovell, Peter V.; Hilscher, Markus M.; Mello, Claudio V.; Leão, Ricardo M.
2015-01-01
Central neurons express a variety of neuronal types and ion channels that promote firing heterogeneity among their distinct neuronal populations. Action potential (AP) phasic firing, produced by low-threshold voltage-activated potassium currents (VAKCs), is commonly observed in mammalian brainstem neurons involved in the processing of temporal properties of the acoustic information. The avian caudomedial nidopallium (NCM) is an auditory area analogous to portions of the mammalian auditory cortex that is involved in the perceptual discrimination and memorization of birdsong and shows complex responses to auditory stimuli. We performed in vitro whole-cell patch-clamp recordings in brain slices from adult zebra finches (Taeniopygia guttata) and observed that half of NCM neurons fire APs phasically in response to membrane depolarizations, while the rest fire transiently or tonically. Phasic neurons fired APs faster and with more temporal precision than tonic and transient neurons. These neurons had similar membrane resting potentials, but phasic neurons had lower membrane input resistance and time constant. Surprisingly, phasic neurons did not express the low-threshold VAKCs that curtail firing in phasic mammalian brainstem neurons; their VAKCs were similar to those of other NCM neurons. The phasic firing was determined not by VAKCs but by potassium background leak conductances, which were more prominently expressed in phasic neurons, a result corroborated by pharmacological, dynamic-clamp, and modeling experiments. These results reveal a new role for leak currents in generating firing diversity in central neurons. PMID:26696830
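How a background leak conductance can limit sustained firing is easy to caricature with a leaky integrate-and-fire neuron. This toy model is our illustration, not the authors' model, and every parameter value is arbitrary; it shows only that a larger leak lowers input resistance and can keep the same current step from ever reaching threshold.

```python
def lif_spike_count(g_leak_ns, i_step_pa=300.0, t_ms=200.0, dt_ms=0.05):
    # Leaky integrate-and-fire caricature. Units: capacitance C in pF,
    # voltages in mV, leak conductance in nS, current in pA, so
    # g*(E_L - v) + I is in pA and dividing by C gives mV/ms.
    C, E_L, V_th, V_reset = 100.0, -70.0, -50.0, -65.0
    v, spikes = E_L, 0
    for _ in range(int(t_ms / dt_ms)):
        v += (g_leak_ns * (E_L - v) + i_step_pa) / C * dt_ms
        if v >= V_th:  # threshold crossing: count a spike and reset
            v, spikes = V_reset, spikes + 1
    return spikes
```

With a 300 pA step, a 10 nS leak lets the membrane settle above threshold and fire repeatedly, while a 30 nS leak pins the steady-state voltage below threshold. The real phasic/tonic distinction in the paper involves more than this passive effect, but the excitability trend is the point being sketched.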
Incorporating Auditory Models in Speech/Audio Applications
NASA Astrophysics Data System (ADS)
Krishnamoorthi, Harish
2011-12-01
Following the success in incorporating perceptual models in audio coding algorithms, their application in other speech/audio processing systems is expanding. In general, all perceptual speech/audio processing algorithms involve minimization of an objective function that directly/indirectly incorporates properties of human perception. This dissertation primarily investigates the problems associated with directly embedding an auditory model in the objective function formulation and proposes possible solutions to overcome high complexity issues for use in real-time speech/audio algorithms. Specific problems addressed in this dissertation include: 1) the development of approximate but computationally efficient auditory model implementations that are consistent with the principles of psychoacoustics, 2) the development of a mapping scheme that allows synthesizing a time/frequency domain representation from its equivalent auditory model output. The first problem is aimed at addressing the high computational complexity involved in solving perceptual objective functions that require repeated application of the auditory model for the evaluation of different candidate solutions. In this dissertation, frequency-pruning and detector-pruning algorithms are developed that efficiently implement the various auditory model stages. The performance of the pruned model is compared to that of the original auditory model for different types of test signals in the SQAM database. Experimental results indicate only a 4-7% relative error in loudness while attaining up to 80-90% reduction in computational complexity. Similarly, a hybrid algorithm is developed specifically for use with sinusoidal signals that employs the proposed auditory pattern combining technique together with a look-up table to store representative auditory patterns.
The second problem concerns obtaining an estimate of the auditory representation that minimizes a perceptual objective function and transforming that auditory pattern back to its equivalent time/frequency representation. This avoids the repeated application of auditory model stages to test different candidate time/frequency vectors when minimizing perceptual objective functions. In this dissertation, a constrained mapping scheme is developed by linearizing certain auditory model stages, ensuring that a time/frequency mapping corresponding to the estimated auditory representation is obtained. This paradigm was successfully incorporated in a perceptual speech enhancement algorithm and a sinusoidal component selection task.
Daliri, Ayoub; Max, Ludo
2018-02-01
Auditory modulation during speech movement planning is limited in adults who stutter (AWS), but the functional relevance of the phenomenon itself remains unknown. We investigated for AWS and adults who do not stutter (AWNS) (a) a potential relationship between pre-speech auditory modulation and auditory feedback contributions to speech motor learning and (b) the effect on pre-speech auditory modulation of real-time versus delayed auditory feedback. Experiment I used a sensorimotor adaptation paradigm to estimate auditory-motor speech learning. Using acoustic speech recordings, we quantified subjects' formant frequency adjustments across trials when continually exposed to formant-shifted auditory feedback. In Experiment II, we used electroencephalography to determine the same subjects' extent of pre-speech auditory modulation (reductions in auditory evoked potential N1 amplitude) when probe tones were delivered prior to speaking versus not speaking. To manipulate subjects' ability to monitor real-time feedback, we included speaking conditions with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF). Experiment I showed that auditory-motor learning was limited for AWS versus AWNS, and the extent of learning was negatively correlated with stuttering frequency. Experiment II yielded several key findings: (a) our prior finding of limited pre-speech auditory modulation in AWS was replicated; (b) DAF caused a decrease in auditory modulation for most AWNS but an increase for most AWS; and (c) for AWS, the amount of auditory modulation when speaking with DAF was positively correlated with stuttering frequency. Lastly, AWNS showed no correlation between pre-speech auditory modulation (Experiment II) and extent of auditory-motor learning (Experiment I) whereas AWS showed a negative correlation between these measures. 
Thus, findings suggest that AWS show deficits in both pre-speech auditory modulation and auditory-motor learning; however, limited pre-speech modulation is not directly related to limited auditory-motor adaptation; and in AWS, DAF paradoxically tends to normalize their otherwise limited pre-speech auditory modulation. Copyright © 2017 Elsevier Ltd. All rights reserved.
Lee, Shao-Hsuan; Fang, Tuan-Jen; Yu, Jen-Fang; Lee, Guo-She
2017-09-01
Auditory feedback can elicit reflexive responses during sustained vocalizations. Among them, the middle-frequency power of F0 (MFP) may provide a sensitive index to assess the subtle changes in different auditory feedback conditions. Phonatory airflow temperature was obtained from 20 healthy adults at two vocal intensity ranges under four auditory feedback conditions: (1) natural auditory feedback (NO); (2) binaural speech noise masking (SN); (3) bone-conducted feedback of self-generated voice (BAF); and (4) SN and BAF simultaneously. The modulations of F0 in low-frequency (0.2 Hz-3 Hz), middle-frequency (3 Hz-8 Hz), and high-frequency (8 Hz-25 Hz) bands were acquired using power spectral analysis of F0. Acoustic and aerodynamic analyses were used to acquire vocal intensity, maximum phonation time (MPT), phonatory airflow, and MFP-based vocal efficiency (MBVE). SN and high vocal intensity decreased MFP and raised MBVE and MPT significantly. BAF showed no effect on MFP but significantly lowered MBVE. Moreover, BAF significantly increased the perception of voice feedback and the sensation of vocal effort. Altered auditory feedback significantly changed the middle-frequency modulations of F0. MFP and MBVE could well detect these subtle responses of audio-vocal feedback. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
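A band-limited modulation power such as the MFP can be computed from a periodogram of the F0 contour. This is a minimal sketch: the band edges come from the abstract (0.2-3, 3-8, 8-25 Hz), but the F0 sampling rate and the plain-periodogram estimator are our assumptions, not the study's exact procedure.

```python
import numpy as np

def modulation_band_power(f0_hz, fs_hz, band_lo_hz, band_hi_hz):
    # Periodogram of the demeaned F0 contour, integrated over the band.
    x = np.asarray(f0_hz, dtype=float)
    x = x - x.mean()
    psd = (np.abs(np.fft.rfft(x)) ** 2) / (fs_hz * len(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs_hz)
    in_band = (freqs >= band_lo_hz) & (freqs < band_hi_hz)
    return psd[in_band].sum() * (freqs[1] - freqs[0])

# MFP would be modulation_band_power(f0, fs, 3.0, 8.0) on this scheme.
```

A 5 Hz wobble in the F0 contour lands almost entirely in the 3-8 Hz (middle-frequency) band, which is the sensitivity the abstract exploits.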
He hears, she hears: are there sex differences in auditory processing?
Yoder, Kathleen M; Phan, Mimi L; Lu, Kai; Vicario, David S
2015-03-01
Songbirds learn individually unique songs through vocal imitation and use them in courtship and territorial displays. Previous work has identified a forebrain auditory area, the caudomedial nidopallium (NCM), that appears specialized for discriminating and remembering conspecific vocalizations. In zebra finches (ZFs), only males produce learned vocalizations, but both sexes process these and other signals. This study assessed sex differences in auditory processing by recording extracellular multiunit activity at multiple sites within NCM. Juvenile female ZFs (n = 46) were reared in individual isolation and artificially tutored with song. In adulthood, songs were played back to assess auditory responses, stimulus-specific adaptation, neural bias for conspecific song, and memory for the tutor's song, as well as recently heard songs. In a subset of females (n = 36), estradiol (E2) levels were manipulated to test the contribution of E2, known to be synthesized in the brain, to auditory responses. Untreated females (n = 10) showed significant differences in response magnitude and stimulus-specific adaptation compared to males reared in the same paradigm (n = 9). In hormone-manipulated females, E2 augmentation facilitated the memory for recently heard songs in adulthood, but neither E2 augmentation (n = 15) nor E2 synthesis blockade (n = 9) affected tutor song memory or the neural bias for conspecific song. The results demonstrate subtle sex differences in processing communication signals, and show that E2 levels in female songbirds can affect the memory for songs of potential suitors, thus contributing to the process of mate selection. The results also have potential relevance to clinical interventions that manipulate E2 in human patients. © 2014 Wiley Periodicals, Inc.
Intracerebral evidence of rhythm transform in the human auditory cortex.
Nozaradan, Sylvie; Mouraux, André; Jonas, Jacques; Colnat-Coulbois, Sophie; Rossion, Bruno; Maillard, Louis
2017-07-01
Musical entrainment is shared by all human cultures and the perception of a periodic beat is a cornerstone of this entrainment behavior. Here, we investigated whether beat perception might have its roots in the earliest stages of auditory cortical processing. Local field potentials were recorded from 8 patients implanted with depth-electrodes in Heschl's gyrus and the planum temporale (55 recording sites in total), usually considered as human primary and secondary auditory cortices. Using a frequency-tagging approach, we show that both low-frequency (<30 Hz) and high-frequency (>30 Hz) neural activities in these structures faithfully track auditory rhythms through frequency-locking to the rhythm envelope. A selective gain in amplitude of the response frequency-locked to the beat frequency was observed for the low-frequency activities but not for the high-frequency activities, and was sharper in the planum temporale, especially for the more challenging syncopated rhythm. Hence, this gain process is not systematic in all activities produced in these areas and depends on the complexity of the rhythmic input. Moreover, this gain was disrupted when the rhythm was presented at fast speed, revealing low-pass response properties which could account for the propensity to perceive a beat only within the musical tempo range. Together, these observations show that, even though part of these neural transforms of rhythms could already take place in subcortical auditory processes, the earliest auditory cortical processes shape the neural representation of rhythmic inputs in favor of the emergence of a periodic beat.
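The frequency-tagging readout used in such studies amounts to measuring spectral amplitude at the stimulation frequency, usually after subtracting a noise estimate from neighboring bins. The sketch below is a generic version of that readout; the neighbor count and the example frequencies are illustrative, not the paper's parameters.

```python
import numpy as np

def tagged_amplitude(signal, fs_hz, target_hz, n_neighbors=5):
    # Amplitude spectrum of the recording, read out at the bin closest
    # to the tagged frequency, minus the mean of the flanking bins
    # (a simple noise-floor estimate).
    spec = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs_hz)
    k = int(np.argmin(np.abs(freqs - target_hz)))
    flank = np.r_[spec[k - n_neighbors:k], spec[k + 1:k + 1 + n_neighbors]]
    return spec[k] - flank.mean()
```

For a unit-amplitude sinusoid that falls exactly on a frequency bin, this readout returns about 0.5 (the single-sided amplitude convention used here), and near zero at untagged frequencies.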
NASA Astrophysics Data System (ADS)
Martens, William
2005-04-01
Several attributes of auditory spatial imagery associated with stereophonic sound reproduction are strongly modulated by variation in interaural cross correlation (IACC) within low frequency bands. Nonetheless, a standard practice in bass management for two-channel and multichannel loudspeaker reproduction is to mix low-frequency musical content to a single channel for reproduction via a single driver (e.g., a subwoofer). This paper reviews the results of psychoacoustic studies which support the conclusion that reproduction via multiple drivers of decorrelated low-frequency signals significantly affects such important spatial attributes as auditory source width (ASW), auditory source distance (ASD), and listener envelopment (LEV). A variety of methods have been employed in these tests, including forced choice discrimination and identification, and direct ratings of both global dissimilarity and distinct attributes. Contrary to assumptions that underlie industrial standards established in 1994 by ITU-R Recommendation BS.775-1, these findings imply that substantial stereophonic spatial information exists within audio signals at frequencies below the 80 to 120 Hz range of prescribed subwoofer cutoff frequencies, and that loudspeaker reproduction of decorrelated signals at frequencies as low as 50 Hz can have an impact upon auditory spatial imagery. [Work supported by VRQ.]
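IACC is conventionally the maximum of the normalized cross-correlation between the two ear (or channel) signals over lags within about ±1 ms. The sketch below follows that common definition (as in room-acoustics practice), not necessarily this paper's exact measurement procedure; the lag window is an assumption.

```python
import numpy as np

def iacc(left, right, fs_hz, max_lag_ms=1.0):
    # Normalized cross-correlation between the two signals, maximized
    # over interaural lags within +/- max_lag_ms.
    l = left - left.mean()
    r = right - right.mean()
    denom = np.sqrt((l ** 2).sum() * (r ** 2).sum())
    max_lag = int(fs_hz * max_lag_ms / 1000.0)
    full = np.correlate(l, r, mode="full")   # lag 0 sits at index N-1
    mid = len(l) - 1
    return full[mid - max_lag: mid + max_lag + 1].max() / denom
```

Identical channels give IACC = 1 (fully correlated bass, the mono-subwoofer case); decorrelated low-frequency channels give lower values, which is the variable the reviewed studies manipulate.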
Rota-Donahue, Christine; Schwartz, Richard G.; Shafer, Valerie; Sussman, Elyse S.
2016-01-01
Background Frequency discrimination is often impaired in children developing language atypically. However, findings in the detection of small frequency changes in these children are conflicting. Previous studies on children’s auditory perceptual abilities usually involved establishing differential sensitivity thresholds in sample populations who were not tested for auditory deficits. To date, there are no data comparing suprathreshold frequency discrimination ability in children tested for both auditory processing and language skills. Purpose This study examined the perception of small frequency differences (Δf) in children with auditory processing disorder (APD) and/or specific language impairment (SLI). The aim was to determine whether children with APD and children with SLI showed differences in their behavioral responses to frequency changes. Results were expected to identify different degrees of impairment and shed some light on the auditory perceptual overlap between pediatric APD and SLI. Research Design An experimental group design using a two-alternative forced-choice procedure was used to determine frequency discrimination ability for three magnitudes of Δf from the 1000-Hz base frequency. Study Sample Thirty children between 10 years and 12 years, 11 months of age participated: 17 children with APD and/or SLI, and 13 typically developing (TD) peers. The clinical groups included four children with APD only, four children with SLI only, and nine children with both APD and SLI. Data Collection and Analysis Behavioral data collected using headphone delivery were analyzed using the sensitivity index d′, calculated for three Δf magnitudes: 2%, 5%, and 15% of the base frequency (20, 50, and 150 Hz). Correlations between the dependent variable d′ and the independent variables measuring auditory processing and language skills were also obtained. A stepwise regression analysis was then performed.
Results TD children and children with APD and/or SLI differed in the detection of small-tone Δf. In addition, APD or SLI status affected behavioral results differently. Comparisons between auditory processing test scores or language test scores and the sensitivity index d′ showed different strengths of correlation based on the magnitudes of the Δf. Auditory processing scores showed stronger correlation to the sensitivity index d′ for the small Δf, while language scores showed stronger correlation to the sensitivity index d′ for the large Δf. Conclusion Although children with APD and/or SLI have difficulty with behavioral frequency discrimination, this difficulty may stem from two different levels: a basic auditory level for children with APD and a higher language processing level for children with SLI; the frequency discrimination performance seemed to be affected by the labeling demands of the same versus different frequency discrimination task for the children with SLI. PMID:27310407
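The sensitivity index d′ used above is computed from hit and false-alarm rates via the inverse of the standard normal CDF. A standard sketch follows; the 1/(2N) correction for perfect rates is one common convention, not necessarily the one used in this study.

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate, n_trials=None):
    # d' = z(H) - z(FA), where z is the inverse standard normal CDF.
    # If n_trials is given, rates of exactly 0 or 1 are pulled in by
    # 1/(2N) so that z stays finite.
    if n_trials:
        lo, hi = 1.0 / (2 * n_trials), 1.0 - 1.0 / (2 * n_trials)
        hit_rate = min(max(hit_rate, lo), hi)
        fa_rate = min(max(fa_rate, lo), hi)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# e.g., 84% hits with 16% false alarms gives d' close to 2;
# chance performance (equal rates) gives d' = 0.
```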
The relationship between auditory exostoses and cold water: a latitudinal analysis.
Kennedy, G E
1986-12-01
The frequency of auditory exostoses was examined by latitude. It was found that discrete bony lesions of the external auditory canal were, with very few exceptions, either absent or in very low frequency (less than 3.0%) in 0-30 degrees N and S latitudes and above 45 degrees N. The highest frequencies of auditory exostoses were found in the middle latitudes (30-45 degrees N and S) among populations who exploit either marine or fresh water resources. Clinical and experimental data are discussed, and these data are found to support strongly the hypothesis that there is a causative relationship between the formation of auditory exostoses and exploitation of resources in cold water, particularly through diving. It is therefore suggested that since auditory exostoses are behavioral rather than genetic in etiology, they should not be included in estimates of population distance based on nonmetric variables.
Fonseca, P J; Correia, T
2007-05-01
The effects of temperature on hearing in the cicada Tettigetta josei were studied. The activity of the auditory nerve and the responses of auditory interneurons to stimuli of different frequencies and intensities were recorded at different temperatures ranging from 16 degrees C to 29 degrees C. Firstly, in order to investigate the temperature dependence of hearing processes, we analyzed its effects on auditory tuning, sensitivity, latency and Q(10dB). Increasing temperature led to an upward shift of the characteristic hearing frequency, to an increase in sensitivity and to a decrease in the latency of the auditory response both in the auditory nerve recordings (periphery) and in some interneurons at the metathoracic-abdominal ganglionic complex (MAC). Characteristic frequency shifts were only observed at low frequencies (3-8 kHz). No changes were seen in Q(10dB). Different tuning mechanisms underlying frequency selectivity may explain the results observed. Secondly, we investigated the role of the mechanical sensory structures that participate in the transduction process. Laser vibrometry measurements revealed that the vibrations of the tympanum and tympanal apodeme are temperature independent in the biologically relevant range (18-35 degrees C). Since the above mentioned effects of temperature are present in the auditory nerve recordings, the observed shifts in frequency tuning must be performed by mechanisms intrinsic to the receptor cells. Finally, the role of potassium channels in the response of the auditory system was investigated using a specific inhibitor of these channels, tetraethylammonium (TEA). TEA caused shifts in tuning and sensitivity of the summed response of the receptors similar to the effects of temperature. Thus, potassium channels are implicated in the tuning of the receptor cells.
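The Q(10dB) values mentioned above quantify tuning sharpness: the characteristic frequency divided by the bandwidth measured 10 dB above the threshold at that frequency. A trivial helper makes the definition concrete; the example values are made up, not taken from the paper.

```python
def q_10db(cf_khz, f_lo_khz, f_hi_khz):
    # Sharpness of tuning: characteristic frequency divided by the
    # bandwidth between the lower and upper tuning-curve edges
    # measured 10 dB above threshold at CF.
    if f_hi_khz <= f_lo_khz:
        raise ValueError("upper edge must exceed lower edge")
    return cf_khz / (f_hi_khz - f_lo_khz)

# e.g., CF 8 kHz with 10 dB edges at 6 and 10 kHz gives Q(10dB) = 2.
```

A temperature shift that moves CF and both edges proportionally leaves Q(10dB) unchanged, which is consistent with the abstract's finding of shifted tuning but constant Q(10dB).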
Infant Auditory Sensitivity to Pure Tones and Frequency-Modulated Tones
ERIC Educational Resources Information Center
Leibold, Lori J.; Werner, Lynne A.
2007-01-01
It has been suggested that infants respond preferentially to infant-directed speech because their auditory sensitivity to sounds with extensive frequency modulation (FM) is better than their sensitivity to less modulated sounds. In this experiment, auditory thresholds for FM tones and for unmodulated, or pure, tones in a background of noise were…
Ghai, Shashank; Schmitz, Gerd; Hwang, Tong-Hun; Effenberg, Alfred O.
2018-01-01
The purpose of the study was to assess the influence of real-time auditory feedback on knee proprioception. Thirty healthy participants were randomly allocated to a control group (n = 15) and experimental group I (n = 15). The participants performed an active knee-repositioning task using their dominant leg, with/without additional real-time auditory feedback where the frequency was mapped in a convergent manner to two different target angles (40 and 75°). Statistical analysis revealed significant enhancement in knee re-positioning accuracy for the constant and absolute error with real-time auditory feedback, within and across the groups. Besides this convergent condition, we established a second, divergent condition, in which a step-wise transposition of frequency was performed to explore whether a systematic tuning between auditory and proprioceptive repositioning exists. No significant effects were identified in this divergent auditory feedback condition. An additional experimental group II (n = 20) was then included to investigate the influence of a larger magnitude and directional change of the step-wise frequency transposition. In a first step, the results confirmed the findings of experiment I. Moreover, significant effects on knee repositioning were evident when divergent auditory feedback was applied: during the step-wise transposition, participants systematically modulated knee movements in the direction opposite to the transposition. We confirm that knee re-positioning accuracy can be enhanced with concurrent application of real-time auditory feedback and that knee re-positioning can be modulated in a goal-directed manner with step-wise transposition of frequency. Clinical implications are discussed with respect to joint position sense in rehabilitation settings. PMID:29568259
Salicylate-induced cochlear impairments, cortical hyperactivity and re-tuning, and tinnitus.
Chen, Guang-Di; Stolzberg, Daniel; Lobarinas, Edward; Sun, Wei; Ding, Dalian; Salvi, Richard
2013-01-01
High doses of sodium salicylate (SS) have long been known to induce temporary hearing loss and tinnitus, effects attributed to cochlear dysfunction. However, our recent publications reviewed here show that SS can induce profound, permanent, and unexpected changes in the cochlea and central nervous system. Prolonged treatment with SS permanently decreased the cochlear compound action potential (CAP) amplitude in vivo. In vitro, high-dose SS resulted in a permanent loss of spiral ganglion neurons and nerve fibers, but did not damage hair cells. Acute treatment with high-dose SS produced a frequency-dependent decrease in the amplitude of distortion product otoacoustic emissions and CAP. Losses were greatest at low and high frequencies and smallest at the mid-frequencies (10-20 kHz), the band that corresponds to the tinnitus pitch measured behaviorally. In the auditory cortex, medial geniculate body and amygdala, high-dose SS enhanced sound-evoked neural responses at high stimulus levels, but suppressed activity at low intensities and elevated response thresholds. When SS was applied directly to the auditory cortex or amygdala, it enhanced sound-evoked activity but did not elevate response thresholds. Current source density analysis revealed enhanced current flow into the supragranular layer of auditory cortex following systemic SS treatment. Systemic SS treatment also altered tuning in auditory cortex and amygdala; low-frequency and high-frequency multiunit clusters shifted their characteristic frequencies up or down into the 10-20 kHz range, thereby altering auditory cortex tonotopy and enhancing neural activity at mid-frequencies corresponding to the tinnitus pitch.
These results suggest that SS-induced hyperactivity in auditory cortex originates in the central nervous system, that the amygdala potentiates these effects and that the SS-induced tonotopic shifts in auditory cortex, the putative neural correlate of tinnitus, arises from the interaction between the frequency-dependent losses in the cochlea and hyperactivity in the central nervous system. Copyright © 2012 Elsevier B.V. All rights reserved.
Penatti, Carlos A A; Porter, Donna M; Henderson, Leslie P
2009-01-01
Anabolic androgenic steroids (AAS) can promote detrimental effects on social behaviors for which γ-aminobutyric acid type A (GABAA) receptor-mediated circuits in the forebrain play a critical role. While all AAS bind to androgen receptors (AR), they may also be aromatized to estrogens and thus potentially impart effects via estrogen receptors (ER). Chronic exposure of wild type male mice to a combination of chemically distinct AAS increased action potential (AP) frequency, selective GABAA receptor subunit mRNAs, and GABAergic synaptic current decay in the medial preoptic area (mPOA). Experiments performed with pharmacological agents and in AR-deficient Tfm mutant mice suggest that the AAS-dependent enhancement of GABAergic transmission in wild type mice is AR-mediated. In AR-deficient mice, the AAS elicited dramatically different effects, decreasing AP frequency, sIPSC amplitude and frequency and the expression of selective GABAA receptor subunit mRNAs. Surprisingly, in the absence of AR signaling, the data indicate that the AAS do not act as ER agonists, but rather suggest a novel in vivo action in which the AAS inhibit aromatase and impair endogenous ER signaling. These results show that the AAS have the capacity to alter neuronal function in the forebrain via multiple steroid signaling mechanisms and suggest that effects of these steroids in the brain will depend not only on the balance of AR- vs. ER-mediated regulation for different target genes, but also on the ability of these drugs to alter steroid metabolism and thus the endogenous steroid milieu. PMID:19812324
Suga, Nobuo
2018-04-01
For echolocation, mustached bats emit velocity-sensitive orientation sounds (pulses) containing a constant-frequency component consisting of four harmonics (CF1-4). They show a unique behavior called Doppler-shift compensation for Doppler-shifted echoes, and hunting behavior for frequency- and amplitude-modulated echoes from fluttering insects. Their peripheral auditory system is highly specialized for fine frequency analysis of CF2 (∼61.0 kHz) and for detecting echo CF2 from fluttering insects. In their central auditory system, lateral inhibition occurring at multiple levels sharpens the V-shaped frequency-tuning curves of the periphery and creates sharp spindle-shaped tuning curves and amplitude tuning. The large CF2-tuned area of the auditory cortex systematically represents the frequency and amplitude of CF2 in a frequency-versus-amplitude map. "CF/CF" neurons are tuned to a specific combination of pulse CF1 and Doppler-shifted echo CF2 or CF3, and thus to specific velocities. CF/CF neurons cluster in the CC ("C" stands for CF) and DIF (dorsal intrafossa) areas of the auditory cortex. The CC area contains the velocity map for Doppler imaging. The DIF area is dedicated to Doppler imaging of other bats approaching in cruising flight. To optimize the processing of behaviorally relevant sounds, cortico-cortical interactions and corticofugal feedback modulate the frequency tuning of cortical and sub-cortical auditory neurons and cochlear hair cells through a neural net consisting of positive feedback associated with lateral inhibition. Copyright © 2018 Elsevier B.V. All rights reserved.
Buchholz, Jörg M
2011-07-01
Coloration detection thresholds (CDTs) were measured for a single reflection as a function of spectral content and reflection delay for diotic stimulus presentation. The direct sound was a 320-ms long burst of bandpass-filtered noise with varying lower and upper cut-off frequencies. The resulting threshold data revealed that: (1) sensitivity decreases with decreasing bandwidth and increasing reflection delay and (2) high-frequency components contribute less to detection than low-frequency components. The auditory processes that may be involved in coloration detection (CD) are discussed in terms of a spectrum-based auditory model, which is conceptually similar to the pattern-transformation model of pitch (Wightman, 1973). Hence, the model derives an auto-correlation function of the input stimulus by applying a frequency analysis to an auditory representation of the power spectrum. It was found that, to successfully describe the quantitative behavior of the CDT data, three important mechanisms need to be included: (1) auditory bandpass filters with a narrower bandwidth than classic Gammatone filters, an increase in spectral resolution linked here to cochlear suppression; (2) a spectral contrast enhancement process that reflects neural inhibition mechanisms; and (3) integration of information across auditory frequency bands. Copyright © 2011 Elsevier B.V. All rights reserved.
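The model's step of deriving an autocorrelation function by applying a frequency analysis to the power spectrum follows the Wiener-Khinchin relation. A minimal sketch of that relation for a generic discrete signal (not the model's auditory front end, whose filterbank stages the abstract only summarizes):

```python
import numpy as np

def acf_via_power_spectrum(x):
    """Wiener-Khinchin: the autocorrelation function equals the inverse
    Fourier transform of the power spectrum. Zero-padding to 2n converts
    circular correlation into linear correlation."""
    n = len(x)
    X = np.fft.rfft(x, 2 * n)                  # zero-padded spectrum
    return np.fft.irfft(np.abs(X) ** 2)[:n]    # lags 0 .. n-1
```

The result matches a direct time-domain autocorrelation (`np.correlate`) up to floating-point error.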
The effect of superior auditory skills on vocal accuracy
NASA Astrophysics Data System (ADS)
Amir, Ofer; Amir, Noam; Kishon-Rabin, Liat
2003-02-01
The relationship between auditory perception and vocal production has been typically investigated by evaluating the effect of either altered or degraded auditory feedback on speech production in either normal hearing or hearing-impaired individuals. Our goal in the present study was to examine this relationship in individuals with superior auditory abilities. Thirteen professional musicians and thirteen nonmusicians, with no vocal or singing training, participated in this study. For vocal production accuracy, subjects were presented with three tones. They were asked to reproduce the pitch using the vowel /a/. This procedure was repeated three times. The fundamental frequency of each production was measured using an autocorrelation pitch detection algorithm designed for this study. The musicians' superior auditory abilities (compared to the nonmusicians) were established in a frequency discrimination task reported elsewhere. Results indicate that (a) musicians had better vocal production accuracy than nonmusicians (production errors of half a semitone compared to 1.3 semitones, respectively); (b) frequency discrimination thresholds explain 43% of the variance of the production data, and (c) all subjects with superior frequency discrimination thresholds showed accurate vocal production; the reverse relationship, however, does not hold true. In this study we provide empirical evidence of the importance of auditory feedback on vocal production in listeners with superior auditory skills.
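The study's custom autocorrelation pitch detector is not published in the abstract; a generic sketch of the technique, together with the standard semitone-error conversion (12·log2 of the frequency ratio) used to express production accuracy, might look as follows. The 80-500 Hz search range and function names are my assumptions:

```python
import numpy as np

def autocorr_pitch(x, fs, fmin=80.0, fmax=500.0):
    """Estimate fundamental frequency as the lag of the highest
    autocorrelation peak within a plausible pitch range."""
    x = x - np.mean(x)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0 .. n-1
    lo, hi = int(fs / fmax), int(fs / fmin)             # lag search window
    lag = lo + int(np.argmax(ac[lo:hi]))
    return fs / lag

def semitone_error(f_produced, f_target):
    """Signed pitch-production error in semitones."""
    return 12.0 * np.log2(f_produced / f_target)
```

For example, a produced pitch one octave above the target gives an error of exactly 12 semitones.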
Encoding of frequency-modulation (FM) rates in human auditory cortex.
Okamoto, Hidehiko; Kakigi, Ryusuke
2015-12-14
Frequency-modulated sounds play an important role in our daily social life. However, it currently remains unclear whether frequency modulation rates affect neural activity in the human auditory cortex. In the present study, using magnetoencephalography, we investigated the auditory evoked N1m and sustained field responses elicited by temporally repeated and superimposed frequency-modulated sweeps that were matched in the spectral domain but differed in frequency modulation rate (1, 4, 16, and 64 octaves per sec). The results demonstrated that higher-rate frequency-modulated sweeps elicited smaller N1m and larger sustained field responses. Frequency modulation rate thus had a significant impact on human brain responses, providing a key for disentangling a series of natural frequency-modulated sounds such as speech and music.
Time-Varying Vocal Folds Vibration Detection Using a 24 GHz Portable Auditory Radar
Hong, Hong; Zhao, Heng; Peng, Zhengyu; Li, Hui; Gu, Chen; Li, Changzhi; Zhu, Xiaohua
2016-01-01
Time-varying vocal folds vibration information is of crucial importance in speech processing, and the traditional devices used to acquire speech signals are easily corrupted by high background noise and voice interference. In this paper, we present a non-acoustic way to capture human vocal folds vibration using a 24-GHz portable auditory radar. Since the vocal folds vibration reaches only several millimeters, a high operating frequency and a 4 × 4 antenna array are applied to achieve high sensitivity. A Variational Mode Decomposition (VMD) based algorithm is proposed to first decompose the radar-detected auditory signal into a sequence of intrinsic modes and then extract the time-varying vocal folds vibration frequency from the corresponding mode. Feasibility demonstration, evaluation, and comparison are conducted with tonal and non-tonal languages; the low relative errors show a high consistency between the radar-detected time-varying vocal folds vibration and the acoustic fundamental frequency, with the auditory radar significantly improving the frequency-resolving power. PMID:27483261
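The paper's pipeline uses VMD, which is not part of the standard scientific Python stack. As a hedged illustration of only the final step, extracting a time-varying frequency from a single oscillatory component, here is a simple short-time Fourier peak-tracking sketch (an alternative to VMD, not the authors' method; test frequencies and the function name are arbitrary):

```python
import numpy as np
from scipy.signal import stft

def track_frequency(x, fs, nperseg=1024):
    """Track the dominant frequency over time by picking the largest
    spectral peak in each short-time Fourier frame."""
    f, t, Z = stft(x, fs=fs, nperseg=nperseg)
    return t, f[np.argmax(np.abs(Z), axis=0)]
```

On a signal whose vibration frequency steps from 150 Hz to 250 Hz, the tracked estimate follows the step to within one frequency bin (fs/nperseg).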
Cortical contributions to the auditory frequency-following response revealed by MEG
Coffey, Emily B. J.; Herholz, Sibylle C.; Chepesiuk, Alexander M. P.; Baillet, Sylvain; Zatorre, Robert J.
2016-01-01
The auditory frequency-following response (FFR) to complex periodic sounds is used to study the subcortical auditory system, and has been proposed as a biomarker for disorders that feature abnormal sound processing. Despite its value in fundamental and clinical research, the neural origins of the FFR are unclear. Using magnetoencephalography, we observe a strong, right-asymmetric contribution to the FFR from the human auditory cortex at the fundamental frequency of the stimulus, in addition to signal from cochlear nucleus, inferior colliculus and medial geniculate. This finding is highly relevant for our understanding of plasticity and pathology in the auditory system, as well as higher-level cognition such as speech and music processing. It suggests that previous interpretations of the FFR may need re-examination using methods that allow for source separation. PMID:27009409
Binaural auditory beats affect long-term memory.
Garcia-Argibay, Miguel; Santed, Miguel A; Reales, José M
2017-12-08
The presentation of two pure tones to each ear separately with a slight difference in their frequency results in the perception of a single tone that fluctuates in amplitude at a frequency that equals the difference of interaural frequencies. This perceptual phenomenon is known as binaural auditory beats, and it is thought to entrain electrocortical activity and enhance cognition functions such as attention and memory. The aim of this study was to determine the effect of binaural auditory beats on long-term memory. Participants (n = 32) were kept blind to the goal of the study and performed both the free recall and recognition tasks after being exposed to binaural auditory beats, either in the beta (20 Hz) or theta (5 Hz) frequency bands and white noise as a control condition. Exposure to beta-frequency binaural beats yielded a greater proportion of correctly recalled words and a higher sensitivity index d' in recognition tasks, while theta-frequency binaural-beat presentation lessened the number of correctly remembered words and the sensitivity index. On the other hand, we could not find differences in the conditional probability for recall given recognition between beta and theta frequencies and white noise, suggesting that the observed changes in recognition were due to the recollection component. These findings indicate that the presentation of binaural auditory beats can affect long-term memory both positively and negatively, depending on the frequency used.
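The stimulus construction described above, two pure tones differing slightly in frequency, one per ear, can be sketched directly; the abstract specifies the beat frequencies (20 Hz beta, 5 Hz theta) but not the carrier, so the 400 Hz carrier below is an assumption:

```python
import numpy as np

def binaural_beat(f_carrier, f_beat, dur, fs=44100):
    """Dichotic stimulus: left ear at f_carrier, right ear at
    f_carrier + f_beat. The perceived beat rate equals the
    interaural frequency difference f_beat."""
    t = np.arange(int(dur * fs)) / fs
    left = np.sin(2 * np.pi * f_carrier * t)
    right = np.sin(2 * np.pi * (f_carrier + f_beat) * t)
    return np.stack([left, right], axis=1)   # shape (samples, 2)
```

If the two channels are mixed acoustically instead of presented dichotically, the envelope of the sum fluctuates at exactly f_beat, which is the monaural analogue of the percept.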
A possible role for a paralemniscal auditory pathway in the coding of slow temporal information
Abrams, Daniel A.; Nicol, Trent; Zecker, Steven; Kraus, Nina
2010-01-01
Low frequency temporal information present in speech is critical for normal perception; however, the neural mechanism underlying the differentiation of slow rates in acoustic signals is not known. Data from the rat trigeminal system suggest that the paralemniscal pathway may be specifically tuned to code low-frequency temporal information. We tested whether this phenomenon occurs in the auditory system by measuring the representation of temporal rate in lemniscal and paralemniscal auditory thalamus and cortex in guinea pig. Similar to the trigeminal system, responses measured in auditory thalamus indicate that slow rates are differentially represented in a paralemniscal pathway. In cortex, both lemniscal and paralemniscal neurons showed sensitivity to slow rates. We speculate that a paralemniscal pathway in the auditory system may be specifically tuned to code low-frequency temporal information present in acoustic signals. These data suggest that the somatosensory and auditory modalities have parallel sub-cortical pathways that separately process slow rates and the spatial representation of the sensory periphery. PMID:21094680
ERIC Educational Resources Information Center
Bishop, Dorothy V. M.; Hardiman, Mervyn J.; Barry, Johanna G.
2011-01-01
Behavioural and electrophysiological studies give differing impressions of when auditory discrimination is mature. Ability to discriminate frequency and speech contrasts reaches adult levels only around 12 years of age, yet an electrophysiological index of auditory discrimination, the mismatch negativity (MMN), is reported to be as large in…
Fundamental deficits of auditory perception in Wernicke's aphasia.
Robson, Holly; Grube, Manon; Lambon Ralph, Matthew A; Griffiths, Timothy D; Sage, Karen
2013-01-01
This work investigates the nature of the comprehension impairment in Wernicke's aphasia (WA), by examining the relationship between deficits in auditory processing of fundamental, non-verbal acoustic stimuli and auditory comprehension. WA, a condition resulting in severely disrupted auditory comprehension, primarily occurs following a cerebrovascular accident (CVA) to the left temporo-parietal cortex. Whilst damage to posterior superior temporal areas is associated with auditory linguistic comprehension impairments, functional imaging indicates that these areas may not be specific to speech processing but part of a network for generic auditory analysis. We examined analysis of basic acoustic stimuli in WA participants (n = 10) using auditory stimuli reflective of theories of cortical auditory processing and of speech cues. Auditory spectral, temporal and spectro-temporal analysis was assessed using pure-tone frequency discrimination, frequency modulation (FM) detection and the detection of dynamic modulation (DM) in "moving ripple" stimuli. All tasks used criterion-free, adaptive measures of threshold to ensure reliable results at the individual level. Participants with WA showed normal frequency discrimination but significant impairments in FM and DM detection, relative to age- and hearing-matched controls at the group level (n = 10). At the individual level, there was considerable variation in performance, and thresholds for both FM and DM detection correlated significantly with auditory comprehension abilities in the WA participants. These results demonstrate the co-occurrence of a deficit in fundamental auditory processing of temporal and spectro-temporal non-verbal stimuli in WA, which may have a causal contribution to the auditory language comprehension impairment. Results are discussed in the context of traditional neuropsychology and current models of cortical auditory processing. Copyright © 2012 Elsevier Ltd. All rights reserved.
The auditory nerve overlapped waveform (ANOW): A new objective measure of low-frequency hearing
NASA Astrophysics Data System (ADS)
Lichtenhan, Jeffery T.; Salt, Alec N.; Guinan, John J.
2015-12-01
One of the most pressing problems today in the mechanics of hearing is to understand the mechanical motions in the apical half of the cochlea. Almost all available measurements of basilar membrane or other organ-of-Corti transverse motion in the cochlear apex have been made from ears in which the health, or sensitivity, of the apical half of the cochlea was not known. A key step in understanding the mechanics of the cochlear base was to trust mechanical measurements only when objective measures from auditory-nerve compound action potentials (CAPs) showed good preparation sensitivity. However, such traditional objective measures are not adequate monitors of cochlear health in the very low-frequency regions of the apex that are accessible for mechanical measurements. To address this problem, we developed the Auditory Nerve Overlapped Waveform (ANOW), which originates from auditory nerve output in the apex. When responses recorded at the round window to alternating low-frequency tones are averaged, the cochlear microphonic is canceled and phase-locked neural firing interleaves in time (i.e., overlaps). The result is a waveform that oscillates at twice the probe frequency. We have demonstrated that the ANOW originates from auditory nerve fibers in the cochlear apex [8], relates well to single-auditory-nerve-fiber thresholds, and can provide an objective estimate of low-frequency sensitivity [7]. Our new experiments demonstrate that the ANOW is a highly sensitive indicator of apical cochlear function. During four different manipulations of the scala media along the cochlear spiral, ANOW amplitude changed when either no, or only small, changes occurred in CAP thresholds. Overall, our results demonstrate that the ANOW can be used to monitor the sensitivity of low-frequency cochlear regions during experiments that measure apical basilar membrane motion.
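The cancellation logic behind the ANOW can be illustrated with a toy simulation: a microphonic component that follows stimulus polarity cancels when responses to opposite-polarity tones are averaged, while half-wave-rectified (phase-locked) neural firing does not, leaving a residue at twice the probe frequency. The gains and the rectifier below are illustrative assumptions, not a physiological model:

```python
import numpy as np

fs, f_probe, dur = 10000, 100, 0.2
t = np.arange(int(fs * dur)) / fs
tone = np.sin(2 * np.pi * f_probe * t)

def response(stimulus):
    cm = 0.8 * stimulus                  # microphonic: follows stimulus polarity
    neural = np.maximum(stimulus, 0.0)   # phase-locked firing: half-wave rectified
    return cm + neural

# average the responses to condensation and rarefaction polarities
anow = 0.5 * (response(tone) + response(-tone))

spec = np.abs(np.fft.rfft(anow))
freqs = np.fft.rfftfreq(len(anow), 1 / fs)
# the polarity-following microphonic cancels exactly, so no energy remains
# at f_probe; the rectified neural residue (0.5*|tone|) oscillates at 2*f_probe
```

Algebraically, the average reduces to 0.5·|tone|, whose lowest oscillatory Fourier component sits at twice the probe frequency.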
Oscillatory support for rapid frequency change processing in infants.
Musacchia, Gabriella; Choudhury, Naseem A; Ortiz-Mantilla, Silvia; Realpe-Bonilla, Teresa; Roesler, Cynthia P; Benasich, April A
2013-11-01
Rapid auditory processing and auditory change detection abilities are crucial aspects of speech and language development, particularly in the first year of life. Animal models and adult studies suggest that oscillatory synchrony, and in particular low-frequency oscillations play key roles in this process. We hypothesize that infant perception of rapid pitch and timing changes is mediated, at least in part, by oscillatory mechanisms. Using event-related potentials (ERPs), source localization and time-frequency analysis of event-related oscillations (EROs), we examined the neural substrates of rapid auditory processing in 4-month-olds. During a standard oddball paradigm, infants listened to tone pairs with invariant standard (STD, 800-800 Hz) and variant deviant (DEV, 800-1200 Hz) pitch. STD and DEV tone pairs were first presented in a block with a short inter-stimulus interval (ISI) (Rapid Rate: 70 ms ISI), followed by a block of stimuli with a longer ISI (Control Rate: 300 ms ISI). Results showed greater ERP peak amplitude in response to the DEV tone in both conditions and later and larger peaks during Rapid Rate presentation, compared to the Control condition. Sources of neural activity, localized to right and left auditory regions, showed larger and faster activation in the right hemisphere for both rate conditions. Time-frequency analysis of the source activity revealed clusters of theta band enhancement to the DEV tone in right auditory cortex for both conditions. Left auditory activity was enhanced only during Rapid Rate presentation. These data suggest that local low-frequency oscillatory synchrony underlies rapid processing and can robustly index auditory perception in young infants. Furthermore, left hemisphere recruitment during rapid frequency change discrimination suggests a difference in the spectral and temporal resolution of right and left hemispheres at a very young age. © 2013 Elsevier Ltd. All rights reserved.
Nozaradan, Sylvie; Schönwiesner, Marc; Keller, Peter E; Lenc, Tomas; Lehmann, Alexandre
2018-02-01
The spontaneous ability to entrain to meter periodicities is central to music perception and production across cultures. There is increasing evidence that this ability involves selective neural responses to meter-related frequencies. This phenomenon has been observed in the human auditory cortex, yet it could be the product of evolutionarily older lower-level properties of brainstem auditory neurons, as suggested by recent recordings from rodent midbrain. We addressed this question by taking advantage of a new method to simultaneously record human EEG activity originating from cortical and lower-level sources, in the form of slow (< 20 Hz) and fast (> 150 Hz) responses to auditory rhythms. Cortical responses showed increased amplitudes at meter-related frequencies compared to meter-unrelated frequencies, regardless of the prominence of the meter-related frequencies in the modulation spectrum of the rhythmic inputs. In contrast, frequency-following responses showed increased amplitudes at meter-related frequencies only in rhythms with prominent meter-related frequencies in the input but not for a more complex rhythm requiring more endogenous generation of the meter. This interaction with rhythm complexity suggests that the selective enhancement of meter-related frequencies does not fully rely on subcortical auditory properties, but is critically shaped at the cortical level, possibly through functional connections between the auditory cortex and other, movement-related, brain structures. This process of temporal selection would thus enable endogenous and motor entrainment to emerge with substantial flexibility and invariance with respect to the rhythmic input in humans in contrast with non-human animals. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Artieda, J; Valencia, M; Alegre, M; Olaziregi, O; Urrestarazu, E; Iriarte, J
2004-03-01
Steady-state potentials are oscillatory responses generated by a rhythmic stimulation of a sensory pathway. The frequency of the response, which follows the frequency of stimulation, is maximal at a stimulus rate of 40 Hz for auditory stimuli. The exact cause of these maximal responses is not known, although some authors have suggested that they might be related to the 'working frequency' of the auditory cortex. Testing of the responses to different frequencies of stimulation may be lengthy if a single frequency is studied at a time. Our aim was to develop a fast technique to explore the oscillatory response to auditory stimuli, using a tone modulated in amplitude by a sinusoid whose frequency increases linearly ('chirp') from 1 to 120 Hz. Time-frequency transforms were used for the analysis of the evoked responses in 10 subjects. Also, we analyzed whether the peaks in these responses were due to increases of amplitude or to phase-locking phenomena, using single-sweep time-frequency transforms and inter-trial phase analysis. The pattern observed in the time-frequency transform of the chirp-evoked potential was very similar in all subjects: a diagonal band of energy was observed, corresponding to the frequency of modulation at each time instant. Two components were present in the band, one around 45 Hz (30-60 Hz) and a smaller one between 80 and 120 Hz. Inter-trial phase analysis showed that these components were mainly due to phase-locking phenomena. A simultaneous testing of the amplitude-modulation-following oscillatory responses to auditory stimulation is feasible using a tone modulated in amplitude at increasing frequencies. The maximal energies found at stimulation frequencies around 40 Hz are probably due to increased phase-locking of the individual responses.
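The stimulus described above, a tone whose amplitude modulator sweeps linearly from 1 to 120 Hz, can be sketched with `scipy.signal.chirp` as the modulator. The 1 kHz carrier, 5 s duration, and 90% modulation depth are my assumptions; the abstract specifies only the modulation-rate sweep:

```python
import numpy as np
from scipy.signal import chirp

fs, dur = 44100, 5.0
t = np.arange(int(fs * dur)) / fs
carrier = np.sin(2 * np.pi * 1000 * t)             # assumed 1 kHz carrier tone
modulator = chirp(t, f0=1.0, t1=dur, f1=120.0)     # AM rate sweeps 1 -> 120 Hz
stimulus = (1.0 + 0.9 * modulator) * carrier / 1.9  # normalize to stay within [-1, 1]
```

Because the modulation rate grows linearly in time, the evoked response's energy traces the diagonal band in the time-frequency plane that the study reports.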
Flying in tune: sexual recognition in mosquitoes.
Gibson, Gabriella; Russell, Ian
2006-07-11
Mosquitoes hear with their antennae, which in most species are sexually dimorphic. Johnston, who discovered the mosquito auditory organ at the base of the antenna 150 years ago, speculated that audition was involved with mating behaviour. Indeed, male mosquitoes are attracted to female flight tones. The male auditory organ has been proposed to act as an acoustic filter for female flight tones, but female auditory behavior is unknown. We show, for the first time, interactive auditory behavior between males and females that leads to sexual recognition. Individual males and females both respond to pure tones by altering wing-beat frequency. Behavioral auditory tuning curves, based on minimum threshold sound levels that elicit a change in wing-beat frequency to pure tones, are sharper than the mechanical tuning of the antennae, with males being more sensitive than females. We flew opposite-sex pairs of tethered Toxorhynchites brevipalpis and found that each mosquito alters its wing-beat frequency in response to the flight tone of the other, so that within seconds their flight-tone frequencies are closely matched, if not completely synchronized. The flight tones of same-sex pairs may converge in frequency but eventually diverge dramatically.
Deviance-Related Responses along the Auditory Hierarchy: Combined FFR, MLR and MMN Evidence.
Shiga, Tetsuya; Althen, Heike; Cornella, Miriam; Zarnowiec, Katarzyna; Yabe, Hirooki; Escera, Carles
2015-01-01
The mismatch negativity (MMN) provides a correlate of automatic auditory discrimination in human auditory cortex that is elicited in response to violation of any acoustic regularity. Recently, deviance-related responses were found at much earlier cortical processing stages as reflected by the middle latency response (MLR) of the auditory evoked potential, and even at the level of the auditory brainstem as reflected by the frequency following response (FFR). However, no study has reported deviance-related responses in the FFR, MLR and long latency response (LLR) concurrently in a single recording protocol. Amplitude-modulated (AM) sounds were presented to healthy human participants in a frequency oddball paradigm to investigate deviance-related responses along the auditory hierarchy in the ranges of FFR, MLR and LLR. AM frequency deviants modulated the FFR, the Na and Nb components of the MLR, and the LLR eliciting the MMN. These findings demonstrate that it is possible to elicit deviance-related responses at three different levels (FFR, MLR and LLR) in one single recording protocol, highlight the involvement of the whole auditory hierarchy in deviance detection and have implications for cognitive and clinical auditory neuroscience. Moreover, the present protocol provides a new research tool into clinical neuroscience so that the functional integrity of the auditory novelty system can now be tested as a whole in a range of clinical populations where the MMN was previously shown to be defective.
Schnitzler, Hans-Ulrich; Denzinger, Annette
2011-05-01
Rhythmical modulations in insect echoes caused by the moving wings of fluttering insects are behaviourally relevant information for bats emitting CF-FM signals with a high duty cycle. Transmitter and receiver of the echolocation system in flutter detecting foragers are especially adapted for the processing of flutter information. The adaptations of the transmitter are indicated by a flutter induced increase in duty cycle, and by Doppler shift compensation (DSC) that keeps the carrier frequency of the insect echoes near a reference frequency. An adaptation of the receiver is the auditory fovea on the basilar membrane, a highly expanded frequency representation centred to the reference frequency. The afferent projections from the fovea lead to foveal areas with an overrepresentation of sharply tuned neurons with best frequencies near the reference frequency throughout the entire auditory pathway. These foveal neurons are very sensitive to stimuli with natural and simulated flutter information. The frequency range of the foveal areas with their flutter processing neurons overlaps exactly with the frequency range where DS compensating bats most likely receive echoes from fluttering insects. This tight match indicates that auditory fovea and DSC are adaptations for the detection and evaluation of insects flying in clutter.
Park, Hyojin; Ince, Robin A A; Schyns, Philippe G; Thut, Gregor; Gross, Joachim
2015-06-15
Humans show a remarkable ability to understand continuous speech even under adverse listening conditions. This ability critically relies on dynamically updated predictions of incoming sensory information, but exactly how top-down predictions improve speech processing is still unclear. Brain oscillations are a likely mechanism for these top-down predictions [1, 2]. Quasi-rhythmic components in speech are known to entrain low-frequency oscillations in auditory areas [3, 4], and this entrainment increases with intelligibility [5]. We hypothesize that top-down signals from frontal brain areas causally modulate the phase of brain oscillations in auditory cortex. We use magnetoencephalography (MEG) to monitor brain oscillations in 22 participants during continuous speech perception. We characterize prominent spectral components of speech-brain coupling in auditory cortex and use causal connectivity analysis (transfer entropy) to identify the top-down signals driving this coupling more strongly during intelligible speech than during unintelligible speech. We report three main findings. First, frontal and motor cortices significantly modulate the phase of speech-coupled low-frequency oscillations in auditory cortex, and this effect depends on intelligibility of speech. Second, top-down signals are significantly stronger for left auditory cortex than for right auditory cortex. Third, speech-auditory cortex coupling is enhanced as a function of stronger top-down signals. Together, our results suggest that low-frequency brain oscillations play a role in implementing predictive top-down control during continuous speech perception and that top-down control is largely directed at left auditory cortex. This suggests a close relationship between (left-lateralized) speech production areas and the implementation of top-down control in continuous speech perception. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Picoloto, Luana Altran; Cardoso, Ana Cláudia Vieira; Cerqueira, Amanda Venuti; Oliveira, Cristiane Moço Canhetti de
2017-12-07
To verify the effect of delayed auditory feedback on the speech fluency of individuals who stutter, with and without central auditory processing disorders. The participants were twenty individuals who stutter, aged 7 to 17 years, divided into two groups: the Stuttering Group with Auditory Processing Disorders (SGAPD), 10 individuals with central auditory processing disorders, and the Stuttering Group (SG), 10 individuals without central auditory processing disorders. The procedures were fluency assessment with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF), and assessment of stuttering severity and central auditory processing (CAP). Phono Tools software was used to impose a delay of 100 milliseconds in the auditory feedback. The Wilcoxon signed-rank test was used in the intragroup analysis and the Mann-Whitney test in the intergroup analysis. DAF caused a statistically significant reduction in the SG in the frequency score of stuttering-like disfluencies in the Stuttering Severity Instrument analysis, in the number of blocks and repetitions of monosyllabic words, and in the frequency of duration-related stuttering-like disfluencies. DAF had no statistically significant effect on the fluency of the SGAPD, the individuals who stutter with central auditory processing disorders. Thus, the effect of delayed auditory feedback on the speech fluency of individuals who stutter differed between the groups: fluency improved only in the individuals without an auditory processing disorder.
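The 100-millisecond delayed-auditory-feedback manipulation amounts to a simple delay line on the speech signal. A minimal sketch; the sampling rate is an assumed illustrative value (the study used the Phono Tools software, whose internals are not described here):

```python
import numpy as np

def delayed_feedback(signal, delay_ms=100.0, fs=16000):
    """Return the input delayed by `delay_ms`, zero-padded at the start.
    This mimics the 100 ms offset between a speaker's voice and what they
    hear; `fs` is an assumed sampling rate, not taken from the study."""
    x = np.asarray(signal, dtype=float)
    n = int(round(delay_ms * fs / 1000.0))  # delay expressed in samples
    out = np.zeros_like(x)
    out[n:] = x[:x.size - n]
    return out

x = np.ones(16000)       # 1 s of a dummy "speech" signal
y = delayed_feedback(x)  # first 100 ms (1600 samples) are now silent
```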
Binaural auditory beats affect vigilance performance and mood.
Lane, J D; Kasian, S J; Owens, J E; Marsh, G R
1998-01-01
When two tones of slightly different frequency are presented separately to the left and right ears the listener perceives a single tone that varies in amplitude at a frequency equal to the frequency difference between the two tones, a perceptual phenomenon known as the binaural auditory beat. Anecdotal reports suggest that binaural auditory beats within the electroencephalograph frequency range can entrain EEG activity and may affect states of consciousness, although few scientific studies have been published. This study compared the effects of binaural auditory beats in the EEG beta and EEG theta/delta frequency ranges on mood and on performance of a vigilance task to investigate their effects on subjective and objective measures of arousal. Participants (n = 29) performed a 30-min visual vigilance task on three different days while listening to pink noise containing simple tones or binaural beats either in the beta range (16 and 24 Hz) or the theta/delta range (1.5 and 4 Hz). However, participants were kept blind to the presence of binaural beats to control expectation effects. Presentation of beta-frequency binaural beats yielded more correct target detections and fewer false alarms than presentation of theta/delta frequency binaural beats. In addition, the beta-frequency beats were associated with less negative mood. Results suggest that the presentation of binaural auditory beats can affect psychomotor performance and mood. This technology may have applications for the control of attention and arousal and the enhancement of human performance.
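The beat percept arises when the two ears receive tones differing by the desired beat frequency. A minimal sketch of synthesizing such a stereo stimulus; the 400 Hz carrier is an assumed illustrative value (the study embedded 16 and 24 Hz or 1.5 and 4 Hz beats in pink noise, which is omitted here):

```python
import numpy as np

def binaural_beat(carrier_hz, beat_hz, duration_s, fs=44100):
    """Return a stereo (n, 2) array: the left ear gets the carrier, the
    right ear gets the carrier offset by `beat_hz`, so the perceived beat
    rate equals the frequency difference between the two tones."""
    t = np.arange(int(duration_s * fs)) / fs
    left = np.sin(2 * np.pi * carrier_hz * t)
    right = np.sin(2 * np.pi * (carrier_hz + beat_hz) * t)
    return np.column_stack([left, right])

# A 16 Hz (beta-range) beat on an assumed 400 Hz carrier, 1 s long.
sig = binaural_beat(400.0, 16.0, 1.0)  # shape (44100, 2)
```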
Effects of auditory selective attention on chirp evoked auditory steady state responses.
Bohr, Andreas; Bernarding, Corinna; Strauss, Daniel J; Corona-Strauss, Farah I
2011-01-01
Auditory steady state responses (ASSRs) are frequently used to assess auditory function. Recently, the interest in effects of attention on ASSRs has increased. In this paper, we investigated for the first time possible effects of attention on AS-SRs evoked by amplitude modulated and frequency modulated chirps paradigms. Different paradigms were designed using chirps with low and high frequency content, and the stimulation was presented in a monaural and dichotic modality. A total of 10 young subjects participated in the study, they were instructed to ignore the stimuli and after a second repetition they had to detect a deviant stimulus. In the time domain analysis, we found enhanced amplitudes for the attended conditions. Furthermore, we noticed higher amplitudes values for the condition using frequency modulated low frequency chirps evoked by a monaural stimulation. The most difference between attended and unattended modality was exhibited at the dichotic case of the amplitude modulated condition using chirps with low frequency content.
Binaural beats increase interhemispheric alpha-band coherence between auditory cortices.
Solcà, Marco; Mottaz, Anaïs; Guggisberg, Adrian G
2016-02-01
Binaural beats (BBs) are an auditory illusion occurring when two tones of slightly different frequency are presented separately to each ear. BBs have been suggested to alter physiological and cognitive processes through synchronization of the brain hemispheres. To test this, we recorded electroencephalograms (EEG) at rest and while participants listened to BBs or a monaural control condition during which both tones were presented to both ears. We calculated for each condition the interhemispheric coherence, which expressed the synchrony between neural oscillations of both hemispheres. Compared to monaural beats and resting state, BBs enhanced interhemispheric coherence between the auditory cortices. Beat frequencies in the alpha (10 Hz) and theta (4 Hz) frequency range both increased interhemispheric coherence selectively at alpha frequencies. In a second experiment, we evaluated whether this coherence increase has a behavioral aftereffect on binaural listening. No effects were observed in a dichotic digit task performed immediately after BBs presentation. Our results suggest that BBs enhance alpha-band oscillation synchrony between the auditory cortices during auditory stimulation. This effect seems to reflect binaural integration rather than entrainment. Copyright © 2015 Elsevier B.V. All rights reserved.
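Interhemispheric coherence of the kind computed in this study can be illustrated on simulated data. A sketch using Welch-based magnitude-squared coherence; the two channels share a common 10 Hz (alpha) component plus independent noise, and all signal parameters are illustrative assumptions, not values from the study:

```python
import numpy as np
from scipy.signal import coherence

fs = 256.0
rng = np.random.default_rng(0)
t = np.arange(int(30 * fs)) / fs
alpha = np.sin(2 * np.pi * 10 * t)  # shared alpha-band oscillation

# Stand-ins for left and right auditory-cortex signals.
left = alpha + 0.5 * rng.standard_normal(t.size)
right = alpha + 0.5 * rng.standard_normal(t.size)

f, Cxy = coherence(left, right, fs=fs, nperseg=512)
coh_at_10hz = Cxy[np.argmin(np.abs(f - 10.0))]  # high: shared oscillation
coh_at_50hz = Cxy[np.argmin(np.abs(f - 50.0))]  # low: independent noise
```

The coherence spectrum peaks at the shared 10 Hz component and stays near the bias floor elsewhere, which is the signature the study reports selectively in the alpha band.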
Establishing the Response of Low Frequency Auditory Filters
NASA Technical Reports Server (NTRS)
Rafaelof, Menachem; Christian, Andrew; Shepherd, Kevin; Rizzi, Stephen; Stephenson, James
2017-01-01
The response of auditory filters is central to the frequency selectivity of the human auditory system. This is especially true for the realistic complex sounds encountered in many applications, such as modeling the audibility of sound, voice recognition, noise cancellation, and the development of advanced hearing aid devices. The purpose of this study was to establish the response of low-frequency (below 100 Hz) auditory filters. Two experiments were designed and executed: the first measured subjects' hearing thresholds for pure tones (at 25, 31.5, 40, 50, 63 and 80 Hz), and the second measured psychophysical tuning curves (PTCs) at two signal frequencies (Fs = 40 and 63 Hz). Experiment 1 involved 36 subjects, while experiment 2 used 20 subjects selected from experiment 1. Both experiments were based on a 3-down 1-up 3AFC adaptive staircase test procedure using either a variable-level narrow-band noise masker or a tone. A summary of the results includes masked threshold data in the form of PTCs, the response of auditory filters, their distribution, and a comparison with similar recently published data.
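The 3-down 1-up adaptive staircase used in both experiments converges near the 79.4%-correct point of the psychometric function. A minimal simulation with a hypothetical hard-threshold listener; the guessing rate of 1/3 reflects the 3AFC design, and all other values are illustrative:

```python
import random

def staircase_3down1up(true_threshold_db, start_db=60.0, step_db=2.0,
                       n_reversals=8, seed=0):
    """Minimal 3-down 1-up adaptive staircase. The simulated listener is
    always correct at or above a hard threshold and guesses (p = 1/3)
    below it; the threshold estimate is the mean of the reversal levels."""
    rng = random.Random(seed)
    level, correct_run, direction = start_db, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        correct = level >= true_threshold_db or rng.random() < 1 / 3
        if correct:
            correct_run += 1
            if correct_run == 3:        # three correct in a row -> step down
                correct_run = 0
                if direction == +1:     # was going up: record a reversal
                    reversals.append(level)
                direction = -1
                level -= step_db
        else:                           # any miss -> step up
            correct_run = 0
            if direction == -1:
                reversals.append(level)
            direction = +1
            level += step_db
    return sum(reversals) / len(reversals)

estimate = staircase_3down1up(40.0)  # should land near the simulated 40 dB
```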
Reduced variability of auditory alpha activity in chronic tinnitus.
Schlee, Winfried; Schecklmann, Martin; Lehner, Astrid; Kreuzer, Peter M; Vielsmeier, Veronika; Poeppl, Timm B; Langguth, Berthold
2014-01-01
Subjective tinnitus is characterized by the conscious perception of a phantom sound, which is usually more prominent in silence. Resting-state recordings without any auditory stimulation have demonstrated a decrease of cortical alpha activity in temporal areas of subjects with an ongoing tinnitus percept. This is often interpreted as an indicator of enhanced excitability of the auditory cortex in tinnitus. In this study we investigated this effect further by analysing the moment-to-moment variability of alpha activity in temporal areas. Magnetoencephalographic resting-state recordings of 21 tinnitus subjects and 21 healthy controls were analysed with respect to the mean and the variability of spectral power in the alpha frequency band over temporal areas. A significant decrease of auditory alpha activity was detected for the low alpha frequency band (8-10 Hz) but not for the upper alpha band (10-12 Hz). Furthermore, we found a significant decrease of alpha variability in the tinnitus group. This result was significant for the lower alpha frequency range but not for the upper alpha frequencies. Tinnitus subjects with a longer history of tinnitus showed less variability of their auditory alpha activity, which might indicate reduced adaptability of the auditory cortex in chronic tinnitus.
Frequency encoded auditory display of the critical tracking task
NASA Technical Reports Server (NTRS)
Stevenson, J.
1984-01-01
The use of auditory displays for selected cockpit instruments was examined. Auditory, visual, and combined auditory-visual compensatory displays of a vertical-axis critical tracking task were studied. The visual display encoded vertical error as the position of a dot on a 17.78 cm, center-marked CRT. The auditory display encoded vertical error as log frequency over a six-octave range; the center point at 1 kHz was marked by a 20-dB amplitude notch, one-third octave wide. Asymptotic performance on the critical tracking task was slightly, but significantly, better with the combined display than with the visual-only mode. The maximum controllable bandwidth using the auditory mode was only 60% of that using the visual mode. Redundant cueing increased both the rate of improvement of tracking performance and the asymptotic performance level, and this enhancement increased with the amount of redundant cueing used. The effect appears most prominent when the bandwidth of the forcing function is substantially less than the upper limit of controllable frequency.
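The display's error-to-frequency mapping (six octaves on a log-frequency scale, centered at 1 kHz) can be sketched directly; the exact scaling of the error variable onto the octave span is not given in the abstract, so the linear-in-log-frequency mapping below is an assumption:

```python
import math

def error_to_frequency(error, center_hz=1000.0, octave_span=6.0):
    """Map a normalized vertical tracking error in [-1, 1] to a tone
    frequency on a log scale spanning `octave_span` octaves centered on
    `center_hz`. A plausible sketch, not the study's exact mapping."""
    error = max(-1.0, min(1.0, error))  # clamp to full-scale error
    return center_hz * 2.0 ** (error * octave_span / 2.0)

f_center = error_to_frequency(0.0)   # 1000.0 Hz: on target (notch marker)
f_high = error_to_frequency(1.0)     # 8000.0 Hz: full-scale error, +3 octaves
f_low = error_to_frequency(-1.0)     # 125.0 Hz: full-scale error, -3 octaves
```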
Vu, Michael T.; Du, Guizhi; Bayliss, Douglas A.
2015-01-01
Basal forebrain cholinergic neurons are the main source of cortical acetylcholine, and their activation by histamine elicits cortical arousal. TWIK-like acid-sensitive K+ (TASK) channels modulate neuronal excitability and are expressed on basal forebrain cholinergic neurons, but the role of TASK channels in the histamine-basal forebrain cholinergic arousal circuit is unknown. We first expressed TASK channel subunits and histamine Type 1 receptors in HEK cells. Application of histamine in vitro inhibited the acid-sensitive K+ current, indicating a functionally coupled signaling mechanism. We then studied the role of TASK channels in modulating electrocortical activity in vivo using freely behaving wild-type (n = 12) and ChAT-Cre:TASKf/f mice (n = 12), the latter lacking TASK-1/3 channels on cholinergic neurons. TASK channel deletion on cholinergic neurons significantly altered endogenous electroencephalogram oscillations in multiple frequency bands. We then identified the effect of TASK channel deletion during microperfusion of histamine into the basal forebrain. In non-rapid eye movement sleep, TASK channel deletion on cholinergic neurons significantly attenuated the histamine-induced increase in 30–50 Hz activity, consistent with TASK channels contributing to histamine action on basal forebrain cholinergic neurons. In contrast, during active wakefulness, histamine significantly increased 30–50 Hz activity in ChAT-Cre:TASKf/f mice but not wild-type mice, showing that the histamine response depended upon the prevailing cortical arousal state. In summary, we identify TASK channel modulation in response to histamine receptor activation in vitro, as well as a role of TASK channels on cholinergic neurons in modulating endogenous oscillations in the electroencephalogram and the electrocortical response to histamine at the basal forebrain in vivo. SIGNIFICANCE STATEMENT Attentive states and cognitive function are associated with the generation of γ EEG activity. 
Basal forebrain cholinergic neurons are important modulators of cortical arousal and γ activity, and in this study we investigated the mechanism by which these neurons are activated by the wake-active neurotransmitter histamine. We found that histamine inhibited a class of K+ leak channels called TASK channels and that deletion of TASK channels selectively on cholinergic neurons modulated baseline EEG activity as well as histamine-induced changes in γ activity. By identifying a discrete brain circuit where TASK channels can influence γ activity, these results represent new knowledge that enhances our understanding of how subcortical arousal systems may contribute to the generation of attentive states. PMID:26446210
Ozdamar, Ozcan; Bohorquez, Jorge; Mihajloski, Todor; Yavuz, Erdem; Lachowska, Magdalena
2011-01-01
Electrophysiological indices of the auditory binaural beat illusion are studied using late-latency evoked responses. Binaural beats are generated by continuous monaural FM tones with slightly different ascending and descending frequencies, lasting about 25 ms and presented at 1 s intervals. The frequency changes are carefully adjusted to avoid creating abrupt waveform changes. Binaural interaction component (BIC) analysis is used to separate the neural responses due to binaural involvement. The results show that transient auditory evoked responses can be obtained from the auditory illusion of binaural beats.
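BIC analysis rests on a simple subtraction: the response to binaural stimulation minus the sum of the two monaural responses, with any nonzero remainder attributed to binaural interaction. A toy sketch on stand-in waveforms (the arrays below are illustrative, not recorded data):

```python
import numpy as np

def binaural_interaction_component(binaural, left_only, right_only):
    """BIC = binaural response - (left monaural + right monaural).
    Inputs are averaged evoked-response waveforms on a common time base."""
    b, l, r = (np.asarray(w, dtype=float) for w in (binaural, left_only, right_only))
    return b - (l + r)

# Toy averaged responses (arbitrary units); the binaural response is
# smaller than the monaural sum, giving a negative-going BIC.
left = np.array([0.0, 1.0, 0.5])
right = np.array([0.0, 0.9, 0.6])
binaural = np.array([0.0, 1.5, 0.9])
bic = binaural_interaction_component(binaural, left, right)
```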
Sleifer, Pricila; Didoné, Dayane Domeneghini; Keppeler, Ísis Bicca; Bueno, Claudine Devicari; Riesgo, Rudimar dos Santos
2017-01-01
Introduction The tone-evoked auditory brainstem response (tone-ABR) enables differential diagnosis in the evaluation of children up to 12 months of age, including those with external and/or middle ear malformations. The use of frequency-specific auditory stimuli by air and bone conduction allows characterization of the hearing profile. Objective The objective of our study was to compare the results obtained with tone-ABR by air and bone conduction in children up to 12 months of age with agenesis of the external auditory canal. Method The study was cross-sectional, observational, individual, and contemporary. We conducted tone-ABR by air and bone conduction at 500 Hz and 2000 Hz in 32 children (23 boys), aged one to 12 months, with agenesis of the external auditory canal. Results The tone-ABR thresholds for air conduction were significantly elevated at 500 Hz and 2000 Hz, while the bone-conduction thresholds were within normal values in both ears. We found no statistically significant differences between genders or ears for most comparisons. Conclusion Conductive hearing loss elevated all air-conduction thresholds but did not alter the thresholds obtained by bone conduction. Tone-ABR by bone conduction is an important tool for assessing cochlear integrity in children under 12 months with agenesis of the external auditory canal. PMID:29018492
Impact of Educational Level on Performance on Auditory Processing Tests.
Murphy, Cristina F B; Rabelo, Camila M; Silagi, Marcela L; Mansur, Letícia L; Schochat, Eliane
2016-01-01
Research has demonstrated that a higher level of education is associated with better performance on cognitive tests among middle-aged and elderly people. However, the effects of education on auditory processing skills have not yet been evaluated. Previous demonstrations of sensory-cognitive interactions in the aging process indicate the potential importance of this topic. Therefore, the primary purpose of this study was to investigate the performance of middle-aged and elderly people with different levels of formal education on auditory processing tests. A total of 177 adults with no evidence of cognitive, psychological or neurological conditions took part in the research. The participants completed a series of auditory assessments, including dichotic digit, frequency pattern and speech-in-noise tests. A working memory test was also performed to investigate the extent to which auditory processing and cognitive performance were associated. The results demonstrated positive but weak correlations between years of schooling and performance on all of the tests applied. The factor "years of schooling" was also one of the best predictors of frequency pattern and speech-in-noise test performance. Additionally, performance on the working memory, frequency pattern and dichotic digit tests was also correlated, suggesting that the influence of educational level on auditory processing performance might be associated with the cognitive demand of the auditory processing tests rather than with auditory sensory aspects themselves. Longitudinal research is required to investigate the causal relationship between educational level and auditory processing skills.
Moraes, Michele M; Rabelo, Patrícia C R; Pinto, Valéria A; Pires, Washington; Wanner, Samuel P; Szawka, Raphael E; Soares, Danusa D
2018-04-23
Listening to melodic music is regarded as a non-pharmacological intervention that ameliorates various disease symptoms, likely by changing the activity of brain monoaminergic systems. Here, we investigated the effects of exposure to melodic music on the concentrations of dopamine (DA), serotonin (5-HT) and their respective metabolites in the caudate-putamen (CPu) and nucleus accumbens (NAcc), areas linked to reward and motor control. Male adult Wistar rats were randomly assigned to a control group or a group exposed to music. The music group was submitted to 8 music sessions [Mozart's Sonata for Two Pianos (K. 448) at an average sound pressure level of 65 dB]. The control rats were handled in the same way but were not exposed to music. Immediately after the last exposure or control session, the rats were euthanized, and their brains were quickly removed to analyze the concentrations of 5-HT, DA, 5-hydroxyindoleacetic acid (5-HIAA) and 3,4-dihydroxyphenylacetic acid (DOPAC) in the CPu and NAcc. Auditory stimuli affected the monoaminergic system in these two brain structures. In the CPu, auditory stimuli increased the concentrations of DA and 5-HIAA but did not change the DOPAC or 5-HT levels. In the NAcc, music markedly increased the DOPAC/DA ratio, suggesting an increase in DA turnover. Our data indicate that auditory stimuli, such as exposure to melodic music, increase DA levels and the release of 5-HT in the CPu as well as DA turnover in the NAcc, suggesting that the music had a direct impact on monoamine activity in these brain areas. Copyright © 2018 Elsevier B.V. All rights reserved.
Pantev, Christo; Okamoto, Hidehiko; Teismann, Henning
2012-01-01
Over the past 15 years, we have studied plasticity in the human auditory cortex by means of magnetoencephalography (MEG). Two main topics nurtured our curiosity: the effects of musical training on plasticity in the auditory system, and the effects of lateral inhibition. One of our plasticity studies found that listening to notched music for 3 h inhibited the neuronal activity in the auditory cortex that corresponded to the center-frequency of the notch, suggesting suppression of neural activity by lateral inhibition. Subsequent research on this topic found that suppression was notably dependent upon the notch width employed, that the lower notch-edge induced stronger attenuation of neural activity than the higher notch-edge, and that auditory focused attention strengthened the inhibitory networks. Crucially, the overall effects of lateral inhibition on human auditory cortical activity were stronger than the habituation effects. Based on these results we developed a novel treatment strategy for tonal tinnitus: tailor-made notched music training (TMNMT). By notching the music energy spectrum around the individual tinnitus frequency, we intended to attract lateral inhibition to auditory neurons involved in tinnitus perception. So far, the training strategy has been evaluated in two studies. The results of the initial long-term controlled study (12 months) supported the validity of the treatment concept: subjective tinnitus loudness and annoyance were significantly reduced after TMNMT but not when notching spared the tinnitus frequencies. Correspondingly, tinnitus-related auditory evoked fields (AEFs) were significantly reduced after training. The subsequent short-term (5 days) training study indicated that training was more effective in the case of tinnitus frequencies ≤ 8 kHz compared to tinnitus frequencies >8 kHz, and that training should be employed over the long term in order to induce more persistent effects.
Further development and evaluation of TMNMT therapy are planned. A goal is to transfer this novel, completely non-invasive and low-cost treatment approach for tonal tinnitus into routine clinical practice.
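The spectral notching underlying TMNMT can be sketched as a band-stop filter around the individual tinnitus frequency. The crude FFT brick-wall notch below, its one-octave default width, and the test tones are all illustrative assumptions; the studies' actual filtering procedure is not specified here:

```python
import numpy as np

def notch_around(signal, fs, center_hz, half_octave=0.5):
    """Zero out spectral energy in a band of +/- `half_octave` octaves
    around `center_hz` (the tinnitus frequency in TMNMT). A brick-wall
    FFT notch for illustration only; real audio would use a smooth filter."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    lo = center_hz / 2.0 ** half_octave
    hi = center_hz * 2.0 ** half_octave
    spec[(freqs >= lo) & (freqs <= hi)] = 0.0
    return np.fft.irfft(spec, n=len(signal))

# A 6 kHz tone (inside a notch centered on 6 kHz) is removed,
# while a 1 kHz tone (outside the notch) passes through.
fs = 44100
t = np.arange(fs) / fs
mix = np.sin(2 * np.pi * 1000 * t) + np.sin(2 * np.pi * 6000 * t)
filtered = notch_around(mix, fs, 6000.0)
```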
Is the Role of External Feedback in Auditory Skill Learning Age Dependent?
Zaltz, Yael; Roth, Daphne Ari-Even; Kishon-Rabin, Liat
2017-12-20
The purpose of this study was to investigate the role of external feedback in the auditory perceptual learning of school-age children as compared with that of adults. Forty-eight children (7-9 years of age) and 64 adults (20-35 years of age) completed a training session on an auditory frequency discrimination (difference limen for frequency, DLF) task, with external feedback (EF) provided to half of them. The data supported the following findings: (a) Children learned the DLF task only when EF was provided. (b) The ability of the children to benefit from EF was associated with better cognitive skills. (c) Adults showed significant learning whether or not EF was provided. (d) In children, within-session learning following training was dependent on the provision of feedback, whereas between-session learning occurred irrespective of feedback. EF was thus beneficial for the auditory skill learning of 7-9-year-old children but not for young adults. The data support the supervised Hebbian model of auditory skill learning, which combines bottom-up internal neural feedback with top-down monitoring; where executive functions are immature, EF enhances auditory skill learning. This study has implications for the design of auditory training protocols for different age groups, as well as for special populations.
Joachimsthaler, Bettina; Uhlmann, Michaela; Miller, Frank; Ehret, Günter; Kurt, Simone
2014-01-01
Because of its great genetic potential, the mouse (Mus musculus) has become a popular model species for studies on hearing and sound processing along the auditory pathways. Here, we present the first comparative study on the representation of neuronal response parameters to tones in primary and higher-order auditory cortical fields of awake mice. We quantified 12 neuronal properties of tone processing in order to estimate similarities and differences of function between the fields, and to discuss how far auditory cortex (AC) function in the mouse is comparable to that in awake monkeys and cats. Extracellular recordings were made from 1400 small clusters of neurons from cortical layers III/IV in the primary fields AI (primary auditory field) and AAF (anterior auditory field), and the higher-order fields AII (second auditory field) and DP (dorsoposterior field). Field specificity was shown with regard to spontaneous activity, correlation between spontaneous and evoked activity, tone response latency, sharpness of frequency tuning, temporal response patterns (occurrence of phasic responses, phasic-tonic responses, tonic responses, and off-responses), and degree of variation between the characteristic frequency (CF) and the best frequency (BF) (CF–BF relationship). Field similarities were noted as significant correlations between CFs and BFs, V-shaped frequency tuning curves, similar minimum response thresholds and non-monotonic rate-level functions in approximately two-thirds of the neurons. Comparative and quantitative analyses showed that the measured response characteristics were, to various degrees, susceptible to influences of anesthetics. Therefore, studies of neuronal responses in the awake AC are important in order to establish adequate relationships between neuronal data and auditory perception and acoustic response behavior. PMID:24506843
Perception of stochastically undersampled sound waveforms: a model of auditory deafferentation
Lopez-Poveda, Enrique A.; Barrios, Pablo
2013-01-01
Auditory deafferentation, or permanent loss of auditory nerve afferent terminals, occurs after noise overexposure and aging and may accompany many forms of hearing loss. It could cause significant auditory impairment but is undetected by regular clinical tests, so its effects on perception are poorly understood. Here, we hypothesize and test a neural mechanism by which deafferentation could deteriorate perception. The basic idea is that the spike train produced by each auditory afferent resembles a stochastically digitized version of the sound waveform and that the quality of the waveform representation in the whole nerve depends on the number of aggregated spike trains or auditory afferents. We reason that because spikes occur stochastically in time with a higher probability for high- than for low-intensity sounds, more afferents would be required for the nerve to faithfully encode high-frequency or low-intensity waveform features than low-frequency or high-intensity features. Deafferentation would thus degrade the encoding of these features. We further reason that, due to the stochastic nature of nerve firing, the degradation would be greater in noise than in quiet. This hypothesis is tested using a vocoder. Sounds were filtered through ten adjacent frequency bands. For the signal in each band, multiple stochastically subsampled copies were obtained to roughly mimic different stochastic representations of that signal conveyed by different auditory afferents innervating a given cochlear region. These copies were then aggregated to obtain an acoustic stimulus. Tone detection and speech identification tests were performed by young, normal-hearing listeners using different numbers of stochastic samplers per frequency band in the vocoder. Results support the hypothesis that stochastic undersampling of the sound waveform, inspired by deafferentation, impairs speech perception in noise more than in quiet, consistent with auditory aging effects. PMID:23882176
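The stochastic-undersampling idea above can be made concrete with a toy simulation. This is a simplified sketch, not the authors' ten-band vocoder: the sampling rule (keep a sample with probability proportional to its instantaneous intensity) and all parameters are illustrative assumptions.

```python
import numpy as np

def stochastic_sampler(x, rng):
    # One simulated afferent: keep each waveform sample with probability
    # proportional to instantaneous intensity (|x|), zero elsewhere.
    p = np.abs(x) / np.max(np.abs(x))
    keep = rng.random(x.shape) < p
    return np.where(keep, x, 0.0)

def aggregate(x, n_afferents, seed=0):
    # Pool many stochastic copies, as the whole nerve would.
    rng = np.random.default_rng(seed)
    return np.mean([stochastic_sampler(x, rng) for _ in range(n_afferents)], axis=0)

fs = 16000
t = np.arange(0, 0.05, 1.0 / fs)
x = np.sin(2 * np.pi * 440 * t)      # a 440 Hz pure-tone "waveform"

few = aggregate(x, n_afferents=1)    # severe deafferentation
many = aggregate(x, n_afferents=64)  # larger complement of afferents

corr_few = np.corrcoef(x, few)[0, 1]
corr_many = np.corrcoef(x, many)[0, 1]
```

With more simulated afferents the pooled signal correlates more strongly with the original waveform, mirroring the claim that deafferentation degrades the fidelity of the whole-nerve representation.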
Effect of EEG Referencing Methods on Auditory Mismatch Negativity
Mahajan, Yatin; Peter, Varghese; Sharma, Mridula
2017-01-01
Auditory event-related potentials (ERPs) have consistently been used in the investigation of auditory and cognitive processing in research and clinical laboratories. There is currently no consensus on the choice of an appropriate reference for auditory ERPs. The most commonly used references in auditory ERP research are the mathematically linked mastoids (LM) and average referencing (AVG). Since LM and AVG referencing procedures do not solve the issue of an electrically neutral reference, the Reference Electrode Standardization Technique (REST) was developed to create a neutral reference for EEG recordings. The aim of the current research was to compare the influence of the reference on the amplitude and latency of the auditory mismatch negativity (MMN) as a function of the magnitude of frequency deviance across three commonly used electrode montages (16, 32, and 64-channel) using the REST, LM, and AVG reference procedures. The current study was designed to determine whether the three reference methods capture the variation in amplitude and latency of MMN with deviance magnitude. We recorded MMN from 12 normal-hearing young adults in an auditory oddball paradigm with a 1,000 Hz pure tone as the standard and 1,030, 1,100, and 1,200 Hz tones as small, medium, and large frequency deviants, respectively. The EEG data recorded in response to these sounds were re-referenced using the REST, LM, and AVG methods across 16-, 32-, and 64-channel EEG electrode montages. Results revealed that while the latency of MMN decreased with increasing frequency of the deviant sounds, there was no effect of frequency deviance on the amplitude of MMN. There was no effect of referencing procedure on the experimental effect tested. The amplitude of MMN was largest when the ERP was computed using LM referencing, and REST referencing produced the largest MMN amplitude for the 64-channel montage. There was no effect of electrode montage on AVG-referenced ERPs.
Contrary to our predictions, the results suggest that the auditory MMN elicited as a function of increments in frequency deviance does not depend on the choice of referencing procedure. The results also suggest that auditory ERPs generated using REST referencing are more contingent on the electrode array than those generated using AVG referencing. PMID:29066945
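For readers unfamiliar with the referencing schemes compared above, the two conventional ones (LM and AVG) reduce to simple array operations. A minimal sketch with synthetic data (REST itself requires a head model and is not reproduced here; the channel count and mastoid indices are hypothetical):

```python
import numpy as np

def average_reference(eeg):
    # AVG: subtract the instantaneous mean across all channels.
    return eeg - eeg.mean(axis=0, keepdims=True)

def linked_mastoid_reference(eeg, m1, m2):
    # LM: subtract the mean of the two mastoid channels (rows m1, m2).
    return eeg - (eeg[m1] + eeg[m2]) / 2.0

rng = np.random.default_rng(0)
eeg = rng.standard_normal((16, 1000))   # 16 channels x 1000 samples (toy data)

avg = average_reference(eeg)
lm = linked_mastoid_reference(eeg, m1=14, m2=15)
```

After AVG referencing the mean across channels is zero at every sample; after LM referencing the two mastoid channels become mirror images of each other, which is why neither scheme yields an electrically neutral reference.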
Rohmann, Kevin N.; Bass, Andrew H.
2011-01-01
Vertebrates displaying seasonal shifts in reproductive behavior provide the opportunity to investigate bidirectional plasticity in sensory function. The midshipman teleost fish exhibits steroid-dependent plasticity in frequency encoding by eighth nerve auditory afferents. In this study, evoked potentials were recorded in vivo from the saccule, the main auditory division of the inner ear of most teleosts, to test the hypothesis that males and females exhibit seasonal changes in hair cell physiology in relation to seasonal changes in plasma levels of steroids. Thresholds across the predominant frequency range of natural vocalizations were significantly less in both sexes in reproductive compared with non-reproductive conditions, with differences greatest at frequencies corresponding to call upper harmonics. A subset of non-reproductive males exhibiting an intermediate saccular phenotype had elevated testosterone levels, supporting the hypothesis that rising steroid levels induce non-reproductive to reproductive transitions in saccular physiology. We propose that elevated levels of steroids act via long-term (days to weeks) signaling pathways to upregulate ion channel expression generating higher resonant frequencies characteristic of non-mammalian auditory hair cells, thereby lowering acoustic thresholds. PMID:21562181
Auditory-motor Mapping for Pitch Control in Singers and Nonsingers
Jones, Jeffery A.; Keough, Dwayne
2009-01-01
Little is known about the basic processes underlying the behavior of singing. This experiment was designed to examine differences in the representation of the mapping between fundamental frequency (F0) feedback and the vocal production system in singers and nonsingers. Auditory feedback regarding F0 was shifted down in frequency while participants sang the consonant-vowel /ta/. During the initial frequency-altered trials, singers compensated to a lesser degree than nonsingers, but this difference was reduced with continued exposure to frequency-altered feedback. After brief exposure to frequency altered auditory feedback, both singers and nonsingers suddenly heard their F0 unaltered. When participants received this unaltered feedback, only singers' F0 values were found to be significantly higher than their F0 values produced during baseline and control trials. These aftereffects in singers were replicated when participants sang a different note than the note they produced while hearing altered feedback. Together, these results suggest that singers rely more on internal models than nonsingers to regulate vocal productions rather than real time auditory feedback. PMID:18592224
Clinical applications of the human brainstem responses to auditory stimuli
NASA Technical Reports Server (NTRS)
Galambos, R.; Hecox, K.
1975-01-01
A technique utilizing the frequency following response (FFR) (obtained by auditory stimulation, whereby the stimulus frequency and duration are mirror-imaged in the resulting brainwaves) as a clinical tool for hearing disorders in humans of all ages is presented. Various medical studies are discussed to support the clinical value of the technique. The discovery and origin of the FFR, and of another significant brainstem auditory response involved in studying the eighth nerve, are also discussed.
Auditory Discrimination and Auditory Sensory Behaviours in Autism Spectrum Disorders
ERIC Educational Resources Information Center
Jones, Catherine R. G.; Happe, Francesca; Baird, Gillian; Simonoff, Emily; Marsden, Anita J. S.; Tregay, Jenifer; Phillips, Rebecca J.; Goswami, Usha; Thomson, Jennifer M.; Charman, Tony
2009-01-01
It has been hypothesised that auditory processing may be enhanced in autism spectrum disorders (ASD). We tested auditory discrimination ability in 72 adolescents with ASD (39 childhood autism; 33 other ASD) and 57 IQ and age-matched controls, assessing their capacity for successful discrimination of the frequency, intensity and duration…
The effects of divided attention on auditory priming.
Mulligan, Neil W; Duke, Marquinn; Cooper, Angela W
2007-09-01
Traditional theorizing stresses the importance of attentional state during encoding for later memory, based primarily on research with explicit memory. Recent research has begun to investigate the role of attention in implicit memory but has focused almost exclusively on priming in the visual modality. The present experiments examined the effect of divided attention on auditory implicit memory, using auditory perceptual identification, word-stem completion and word-fragment completion. Participants heard study words under full attention conditions or while simultaneously carrying out a distractor task (the divided attention condition). In Experiment 1, a distractor task with low response frequency failed to disrupt later auditory priming (but diminished explicit memory as assessed with auditory recognition). In Experiment 2, a distractor task with greater response frequency disrupted priming on all three of the auditory priming tasks as well as the explicit test. These results imply that although auditory priming is less reliant on attention than explicit memory, it is still greatly affected by at least some divided-attention manipulations. These results are consistent with research using visual priming tasks and have relevance for hypotheses regarding attention and auditory priming.
Mapping Frequency-Specific Tone Predictions in the Human Auditory Cortex at High Spatial Resolution.
Berlot, Eva; Formisano, Elia; De Martino, Federico
2018-05-23
Auditory inputs reaching our ears are often incomplete, but our brains nevertheless transform them into rich and complete perceptual phenomena such as meaningful conversations or pleasurable music. It has been hypothesized that our brains extract regularities in inputs, which enables us to predict the upcoming stimuli, leading to efficient sensory processing. However, it is unclear whether tone predictions are encoded with similar specificity as perceived signals. Here, we used high-field fMRI to investigate whether human auditory regions encode one of the most defining characteristics of auditory perception: the frequency of predicted tones. Two pairs of tone sequences were presented in ascending or descending directions, with the last tone omitted in half of the trials. Every pair of incomplete sequences contained identical sounds, but was associated with different expectations about the last tone (a high- or low-frequency target). This allowed us to disambiguate predictive signaling from sensory-driven processing. We recorded fMRI responses from eight female participants during passive listening to complete and incomplete sequences. Inspection of specificity and spatial patterns of responses revealed that target frequencies were encoded similarly during their presentations, as well as during omissions, suggesting frequency-specific encoding of predicted tones in the auditory cortex (AC). Importantly, frequency specificity of predictive signaling was observed already at the earliest levels of auditory cortical hierarchy: in the primary AC. Our findings provide evidence for content-specific predictive processing starting at the earliest cortical levels. SIGNIFICANCE STATEMENT Given the abundance of sensory information around us in any given moment, it has been proposed that our brain uses contextual information to prioritize and form predictions about incoming signals. 
However, there remains a surprising lack of understanding of the specificity and content of such prediction signaling; for example, whether a predicted tone is encoded with similar specificity as a perceived tone. Here, we show that early auditory regions encode the frequency of a tone that is predicted yet omitted. Our findings contribute to the understanding of how expectations shape sound processing in the human auditory cortex and provide further insights into how contextual information influences computations in neuronal circuits. Copyright © 2018 the authors.
A lateralized functional auditory network is involved in anuran sexual selection.
Xue, Fei; Fang, Guangzhan; Yue, Xizi; Zhao, Ermi; Brauth, Steven E; Tang, Yezhong
2016-12-01
Right ear advantage (REA) exists in many land vertebrates, in which the right ear and left hemisphere preferentially process conspecific acoustic stimuli such as those related to sexual selection. Although ecological and neural mechanisms for sexual selection have been widely studied, the brain networks involved are still poorly understood. In this study we used multi-channel electroencephalographic data in combination with Granger causal connectivity analysis to demonstrate, for the first time, that the auditory neural network interconnecting the left and right midbrain and forebrain functions asymmetrically in the Emei music frog (Babina daunchina), an anuran species that exhibits REA. The results showed that the network was lateralized. Ascending connections between the mesencephalon and telencephalon were stronger on the left side, while descending ones were stronger on the right, a pattern that matches the REA in this species and implies that inhibition from the forebrain may partly underlie the REA. Connections from the telencephalon to the ipsilateral mesencephalon in response to white noise were highest in the non-reproductive stage, whereas those in response to advertisement calls were highest in the reproductive stage, implying that attentional resources and behavioral strategies shift when the animals enter the reproductive season. Finally, these connection changes were sexually dimorphic, revealing sex differences in reproductive roles.
Mhatre, Natasha; Pollack, Gerald; Mason, Andrew
2016-04-01
Tree cricket males produce tonal songs, used for mate attraction and male-male interactions. Active mechanics tunes hearing to conspecific song frequency. However, tree cricket song frequency increases with temperature, presenting a problem for tuned listeners. We show that the actively amplified frequency increases with temperature, thus shifting mechanical and neuronal auditory tuning to maintain a match with conspecific song frequency. Active auditory processes are known from several taxa, but their adaptive function has rarely been demonstrated. We show that tree crickets harness active processes to ensure that auditory tuning remains matched to conspecific song frequency, despite changing environmental conditions and signal characteristics. Adaptive tuning allows tree crickets to selectively detect potential mates or rivals over large distances and is likely to bestow a strong selective advantage by reducing mate-finding effort and facilitating intermale interactions. © 2016 The Author(s).
Foran, Lindsey; Blackburn, Kaitlyn; Kulesza, Randy J
2017-03-06
Glutamate is the most abundant excitatory neurotransmitter in the central nervous system, and is stored and released by both neurons and astrocytes. Despite the important role of glutamate as a neurotransmitter, elevated extracellular glutamate can result in excitotoxicity and apoptosis. Monosodium glutamate (MSG) is a naturally occurring sodium salt of glutamic acid that is used as a flavor enhancer in many processed foods. Previous studies have shown that MSG administration during the early postnatal period results in neurodegenerative changes in several forebrain regions, characterized by neuronal loss and neuroendocrine abnormalities. Systemic delivery of MSG during the neonatal period and induction of glutamate neurotoxicity in the cochlea have both been shown to result in fewer neurons in the spiral ganglion. We hypothesized that an MSG-induced loss of neurons in the spiral ganglion would have a significant impact on the number of neurons in the cochlear nuclei and superior olivary complex (SOC). Indeed, we found that exposure to MSG from postnatal days 4 through 10 resulted in significantly fewer neurons in the cochlear nuclei and SOC and significant dysmorphology in surviving neurons. Moreover, we found that neonatal MSG exposure resulted in a significant decrease in the expression of both calretinin and calbindin. These results suggest that neonatal exposure to MSG interferes with early development of the auditory brainstem and impacts expression of calcium binding proteins, both of which may lead to diminished auditory function. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
Kadner, Alexander; Berrebi, Albert S.
2008-01-01
Neurons in the superior paraolivary nucleus (SPON) respond to the offset of pure tones with a brief burst of spikes. Medial nucleus of the trapezoid body (MNTB) neurons, which inhibit the SPON, produce a sustained pure tone response followed by an offset response characterized by a period of suppressed spontaneous activity. This MNTB offset response is duration dependent and critical to the formation of SPON offset spikes (Kadner et al., 2006; Kulesza, Jr. et al., 2007). Here we examine the temporal resolution of the MNTB/SPON circuit by assessing its capability to (i) detect gaps in tones, and (ii) synchronize to sinusoidally amplitude modulated (SAM) tones. Gap detection was tested by presenting two identical pure tone markers interrupted by gaps ranging from 0 to 25 ms in duration. SPON neurons responded to the offset of the leading marker even when the two markers were separated only by their ramps (i.e., a 0 ms gap); longer gap durations elicited progressively larger responses. MNTB neurons produced an offset response at gap durations of 2 ms or longer, with a subset of neurons responding to 0 ms gaps. SAM tone stimuli used the unit's characteristic frequency as a carrier, and modulation rates ranged from 40 to 1160 Hz. MNTB neurons synchronized to modulation rates up to ~1 kHz, whereas spiking of SPON neurons decreased sharply at modulation rates ≥ 400 Hz. Modulation transfer functions based on spike count were all-pass for MNTB neurons and low-pass for SPON neurons; the modulation transfer functions based on vector strength were low-pass for both nuclei, with a steeper cut-off for SPON neurons. Thus, the MNTB/SPON circuit encodes episodes of low stimulus energy, such as gaps in pure tones and troughs in amplitude modulated tones. The output of this circuit consists of brief SPON spiking episodes; their potential effects on the auditory midbrain and forebrain are discussed. PMID:18155850
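Vector strength, the synchronization metric used above, has a standard definition: each spike time is mapped to a phase of the modulation cycle and the resulting unit vectors are averaged. A minimal sketch with made-up spike trains (not the recorded data; the modulation rate and spike times are invented for illustration):

```python
import numpy as np

def vector_strength(spike_times, mod_freq):
    # Map each spike to a phase of the modulation cycle and average the
    # unit vectors: 1 = perfect phase locking, near 0 = no locking.
    phases = 2.0 * np.pi * mod_freq * np.asarray(spike_times)
    return np.abs(np.mean(np.exp(1j * phases)))

fm = 100.0                                   # modulation rate, Hz
locked = np.arange(50) / fm + 0.002          # one spike per cycle, fixed phase
rng = np.random.default_rng(1)
unlocked = rng.uniform(0.0, 0.5, 50)         # spike times unrelated to the cycle

vs_locked = vector_strength(locked, fm)      # close to 1
vs_unlocked = vector_strength(unlocked, fm)  # close to 0
```

Computing this quantity across modulation rates yields the vector-strength-based modulation transfer functions described in the abstract.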
Keough, Dwayne; Hawco, Colin; Jones, Jeffery A
2013-03-09
Auditory feedback is important for accurate control of voice fundamental frequency (F(0)). The purpose of this study was to address whether task instructions could influence the compensatory responding and sensorimotor adaptation that has been previously found when participants are presented with a series of frequency-altered feedback (FAF) trials. Trained singers and musically untrained participants (nonsingers) were informed that their auditory feedback would be manipulated in pitch while they sang the target vowel [/α /]. Participants were instructed to either 'compensate' for, or 'ignore' the changes in auditory feedback. Whole utterance auditory feedback manipulations were either gradually presented ('ramp') in -2 cent increments down to -100 cents (1 semitone) or were suddenly ('constant') shifted down by 1 semitone. Results indicated that singers and nonsingers could not suppress their compensatory responses to FAF, nor could they reduce the sensorimotor adaptation observed during both the ramp and constant FAF trials. Compared to previous research, these data suggest that musical training is effective in suppressing compensatory responses only when FAF occurs after vocal onset (500-2500 ms). Moreover, our data suggest that compensation and adaptation are automatic and are influenced little by conscious control.
Shepard, Kathryn N; Chong, Kelly K; Liu, Robert C
2016-01-01
Tonotopic map plasticity in the adult auditory cortex (AC) is a well established and oft-cited measure of auditory associative learning in classical conditioning paradigms. However, its necessity as an enduring memory trace has been debated, especially given a recent finding that the areal expansion of core AC tuned to a newly relevant frequency range may arise only transiently to support auditory learning. This has been reinforced by an ethological paradigm showing that map expansion is not observed for ultrasonic vocalizations (USVs) or for ultrasound frequencies in postweaning dams for whom USVs emitted by pups acquire behavioral relevance. However, whether transient expansion occurs during maternal experience is not known, and could help to reveal the generality of cortical map expansion as a correlate for auditory learning. We thus mapped the auditory cortices of maternal mice at postnatal time points surrounding the peak in pup USV emission, but found no evidence of frequency map expansion for the behaviorally relevant high ultrasound range in AC. Instead, regions tuned to low frequencies outside of the ultrasound range show progressively greater suppression of activity in response to the playback of ultrasounds or pup USVs for maternally experienced animals assessed at their pups' postnatal day 9 (P9) to P10, or postweaning. This provides new evidence for a lateral-band suppression mechanism elicited by behaviorally meaningful USVs, likely enhancing their population-level signal-to-noise ratio. These results demonstrate that tonotopic map enlargement has limits as a construct for conceptualizing how experience leaves neural memory traces within sensory cortex in the context of ethological auditory learning.
Norman-Haignere, Sam; Kanwisher, Nancy; McDermott, Josh H
2013-12-11
Pitch is a defining perceptual property of many real-world sounds, including music and speech. Classically, theories of pitch perception have differentiated between temporal and spectral cues. These cues are rendered distinct by the frequency resolution of the ear, such that some frequencies produce "resolved" peaks of excitation in the cochlea, whereas others are "unresolved," providing a pitch cue only via their temporal fluctuations. Despite longstanding interest, the neural structures that process pitch, and their relationship to these cues, have remained controversial. Here, using fMRI in humans, we report the following: (1) consistent with previous reports, all subjects exhibited pitch-sensitive cortical regions that responded substantially more to harmonic tones than frequency-matched noise; (2) the response of these regions was mainly driven by spectrally resolved harmonics, although they also exhibited a weak but consistent response to unresolved harmonics relative to noise; (3) the response of pitch-sensitive regions to a parametric manipulation of resolvability tracked psychophysical discrimination thresholds for the same stimuli; and (4) pitch-sensitive regions were localized to specific tonotopic regions of anterior auditory cortex, extending from a low-frequency region of primary auditory cortex into a more anterior and less frequency-selective region of nonprimary auditory cortex. These results demonstrate that cortical pitch responses are located in a stereotyped region of anterior auditory cortex and are predominantly driven by resolved frequency components in a way that mirrors behavior.
Stimulus-specific suppression preserves information in auditory short-term memory.
Linke, Annika C; Vicente-Grabovetsky, Alejandro; Cusack, Rhodri
2011-08-02
Philosophers and scientists have puzzled for millennia over how perceptual information is stored in short-term memory. Some have suggested that early sensory representations are involved, but their precise role has remained unclear. The current study asks whether auditory cortex shows sustained frequency-specific activation while sounds are maintained in short-term memory using high-resolution functional MRI (fMRI). Investigating short-term memory representations within regions of human auditory cortex with fMRI has been difficult because of their small size and high anatomical variability between subjects. However, we overcame these constraints by using multivoxel pattern analysis. It clearly revealed frequency-specific activity during the encoding phase of a change detection task, and the degree of this frequency-specific activation was positively related to performance in the task. Although the sounds had to be maintained in memory, activity in auditory cortex was significantly suppressed. Strikingly, patterns of activity in this maintenance period correlated negatively with the patterns evoked by the same frequencies during encoding. Furthermore, individuals who used a rehearsal strategy to remember the sounds showed reduced frequency-specific suppression during the maintenance period. Although negative activations are often disregarded in fMRI research, our findings imply that decreases in blood oxygenation level-dependent response carry important stimulus-specific information and can be related to cognitive processes. We hypothesize that, during auditory change detection, frequency-specific suppression protects short-term memory representations from being overwritten by inhibiting the encoding of interfering sounds.
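The negative relationship between maintenance-period and encoding-period activity described above is, at its core, a correlation between voxel activation patterns. A schematic sketch with synthetic patterns (the voxel count, suppression factor, and noise level are invented for illustration, not taken from the study):

```python
import numpy as np

def pattern_correlation(p1, p2):
    # Pearson correlation between two voxel activation patterns.
    return np.corrcoef(p1, p2)[0, 1]

rng = np.random.default_rng(2)
encoding = rng.standard_normal(200)   # pattern evoked during encoding
# maintenance pattern: the encoding pattern inverted (suppressed) plus noise
maintenance = -0.5 * encoding + 0.3 * rng.standard_normal(200)

r = pattern_correlation(encoding, maintenance)  # negative, as reported
```

A negative r here captures the reported finding that frequency-specific suppression during maintenance mirrors the frequency-specific activation seen during encoding.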
Paavilainen, P; Simola, J; Jaramillo, M; Näätänen, R; Winkler, I
2001-03-01
Brain mechanisms extracting invariant information from varying auditory inputs were studied using the mismatch-negativity (MMN) brain response. We wished to determine whether the preattentive sound-analysis mechanisms, reflected by MMN, are capable of extracting invariant relationships based on abstract conjunctions between two sound features. The standard stimuli varied over a large range in frequency and intensity dimensions following the rule that the higher the frequency, the louder the intensity. The occasional deviant stimuli violated this frequency-intensity relationship and elicited an MMN. The results demonstrate that preattentive processing of auditory stimuli extends to unexpectedly complex relationships between the stimulus features.
Carroll, Christine A; Kieffaber, Paul D; Vohs, Jenifer L; O'Donnell, Brian F; Shekhar, Anantha; Hetrick, William P
2008-11-01
The present study investigated event-related brain potential (ERP) indices of auditory processing and sensory gating in bipolar disorder and subgroups of bipolar patients with or without a history of psychosis using the P50 dual-click procedure. Auditory-evoked activity in two discrete frequency bands also was explored to distinguish between sensory registration and selective attention deficits. Thirty-one individuals with bipolar disorder and 28 non-psychiatric controls were compared on ERP indices of auditory processing using a dual-click procedure. In addition to conventional P50 ERP peak-picking techniques, quantitative frequency analyses were applied to the ERP data to isolate stages of information processing associated with sensory registration (20-50 Hz; gamma band) and selective attention (0-20 Hz; low-frequency band). Compared to the non-psychiatric control group, patients with bipolar disorder exhibited reduced S1 response magnitudes for the conventional P50 peak-picking and low-frequency response analyses. A bipolar subgroup effect suggested that the attenuated S1 magnitudes from the P50 peak-picking and low-frequency analyses were largely attributable to patients without a history of psychosis. The analysis of distinct frequency bands of the auditory-evoked response elicited during the dual-click procedure allowed further specification of the nature of auditory sensory processing and gating deficits in bipolar disorder with or without a history of psychosis. The observed S1 effects in the low-frequency band suggest selective attention deficits in bipolar patients, especially those patients without a history of psychosis, which may reflect a diminished capacity to selectively attend to salient stimuli as opposed to impairments of inhibitory sensory processes.
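Separating an evoked response into a gamma band (20-50 Hz, sensory registration) and a low-frequency band (0-20 Hz, selective attention), as in the analysis above, can be sketched with a simple FFT-based band-power measure (the toy waveform and its component amplitudes are illustrative, not patient data):

```python
import numpy as np

def band_power(x, fs, lo, hi):
    # Total spectral power of x between lo and hi Hz.
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    return spec[(freqs >= lo) & (freqs < hi)].sum()

fs = 1000
t = np.arange(0, 0.5, 1.0 / fs)
# toy evoked response: a slow 8 Hz component plus a smaller 40 Hz component
erp = 2.0 * np.sin(2 * np.pi * 8 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

low_band = band_power(erp, fs, 0, 20)     # "selective attention" range
gamma_band = band_power(erp, fs, 20, 50)  # "sensory registration" range
```

Comparing response magnitudes in the two bands separately is what allows the registration and attention stages of the P50 response to be distinguished.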
Loudspeaker equalization for auditory research.
MacDonald, Justin A; Tran, Phuong K
2007-02-01
The equalization of loudspeaker frequency response is necessary to conduct many types of well-controlled auditory experiments. This article introduces a program that includes functions to measure a loudspeaker's frequency response, design equalization filters, and apply the filters to a set of stimuli to be used in an auditory experiment. The filters can compensate for both magnitude and phase distortions introduced by the loudspeaker. A MATLAB script is included in the Appendix to illustrate the details of the equalization algorithm used in the program.
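The pipeline described (measure the loudspeaker's response, design a filter compensating both magnitude and phase, then filter the stimuli) can be sketched outside MATLAB as well. Below is a rough Python analogue using regularized frequency-domain inversion; the function names, FFT length, and regularization constant are assumptions for illustration, not the article's implementation.

```python
import numpy as np

def design_inverse_filter(impulse_response, n_fft=4096, reg=1e-3):
    """Design a regularized inverse (equalization) filter for a measured
    loudspeaker impulse response, compensating magnitude and phase."""
    H = np.fft.rfft(impulse_response, n_fft)
    # Regularized inversion avoids blowing up where |H| is near zero.
    H_inv = np.conj(H) / (np.abs(H) ** 2 + reg)
    h_inv = np.fft.irfft(H_inv, n_fft)
    # Center the filter so the pre-response is representable, then window.
    h_inv = np.roll(h_inv, n_fft // 2)
    h_inv *= np.hanning(n_fft)
    return h_inv

def equalize(stimulus, h_inv):
    """Apply the equalization filter to a stimulus via convolution."""
    return np.convolve(stimulus, h_inv, mode="full")
```

Convolving the measured impulse response with `h_inv` should give an approximately flat combined response; the regularization term trades flatness against amplifying noise at frequencies the loudspeaker barely reproduces.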
Plasticity of peripheral auditory frequency sensitivity in Emei music frog.
Zhang, Dian; Cui, Jianguo; Tang, Yezhong
2012-01-01
In anurans, reproductive behavior is strongly seasonal. During the spring, frogs emerge from hibernation and males vocalize for mating or to advertise territories. Female frogs are able to evaluate the quality of the males' resources on the basis of these vocalizations. Although studies have revealed that single neurons in the frog's central auditory midbrain nucleus, the torus semicircularis, exhibit seasonal plasticity, the plasticity of peripheral auditory sensitivity in frogs is unknown. In this study, the seasonal plasticity of peripheral auditory sensitivity was tested in the Emei music frog (Babina daunchina) by comparing thresholds and latencies of auditory brainstem responses (ABRs) evoked by tone pips and clicks in the reproductive and non-reproductive seasons. The results show that both ABR thresholds and latencies differed significantly between the reproductive and non-reproductive seasons. Thresholds of tone-pip-evoked ABRs in the non-reproductive season were about 10 dB higher than those in the reproductive season for frequencies from 1 kHz to 6 kHz. At near-threshold stimulus levels, ABR latencies to waveform valleys for tone pips were longer in the non-reproductive season for frequencies in the 1.5 to 6 kHz range, although they were shorter in the non-reproductive season from 0.2 to 1.5 kHz. These results demonstrate that peripheral auditory frequency sensitivity exhibits seasonal plasticity, which may be adaptive for seasonal reproductive behavior in frogs.
ERIC Educational Resources Information Center
Boets, Bart; Verhoeven, Judith; Wouters, Jan; Steyaert, Jean
2015-01-01
We investigated low-level auditory spectral and temporal processing in adolescents with autism spectrum disorder (ASD) and early language delay compared to matched typically developing controls. Auditory measures were designed to target right versus left auditory cortex processing (i.e. frequency discrimination and slow amplitude modulation (AM)…
Electrically-evoked frequency-following response (EFFR) in the auditory brainstem of guinea pigs.
He, Wenxin; Ding, Xiuyong; Zhang, Ruxiang; Chen, Jing; Zhang, Daoxing; Wu, Xihong
2014-01-01
It is still a difficult clinical issue to decide whether a patient is a suitable candidate for a cochlear implant and to plan postoperative rehabilitation, especially for some special cases, such as auditory neuropathy. A partial solution to these problems is to preoperatively evaluate the functional integrity of the auditory neural pathways. For evaluating the strength of phase-locking of auditory neurons, which was not reflected in previous methods using electrically evoked auditory brainstem response (EABR), a new method for recording phase-locking related auditory responses to electrical stimulation, called the electrically evoked frequency-following response (EFFR), was developed and evaluated using guinea pigs. The main objective was to assess feasibility of the method by testing whether the recorded signals reflected auditory neural responses or artifacts. The results showed the following: 1) the recorded signals were evoked by neuron responses rather than by artifact; 2) responses evoked by periodic signals were significantly higher than those evoked by white noise; 3) the latency of the responses fell in the expected range; 4) the responses decreased significantly after death of the guinea pigs; and 5) the responses decreased significantly when the animal was replaced by an electrical resistance. All of these results suggest the method was valid. Recordings obtained using complex tones with a missing fundamental component and using pure tones with various frequencies were consistent with those obtained using acoustic stimulation in previous studies.
Cortical mechanisms for the segregation and representation of acoustic textures.
Overath, Tobias; Kumar, Sukhbinder; Stewart, Lauren; von Kriegstein, Katharina; Cusack, Rhodri; Rees, Adrian; Griffiths, Timothy D
2010-02-10
Auditory object analysis requires two fundamental perceptual processes: the definition of the boundaries between objects, and the abstraction and maintenance of an object's characteristic features. Although it is intuitive to assume that the detection of the discontinuities at an object's boundaries precedes the subsequent precise representation of the object, the specific underlying cortical mechanisms for segregating and representing auditory objects within the auditory scene are unknown. We investigated the cortical bases of these two processes for one type of auditory object, an "acoustic texture," composed of multiple frequency-modulated ramps. In these stimuli, we independently manipulated the statistical rules governing (1) the frequency-time space within individual textures (comprising ramps with a given spectrotemporal coherence) and (2) the boundaries between textures (adjacent textures with different spectrotemporal coherences). Using functional magnetic resonance imaging, we show mechanisms defining boundaries between textures with different coherences in primary and association auditory cortices, whereas texture coherence is represented only in association cortex. Furthermore, participants' superior detection of boundaries across which texture coherence increased (as opposed to decreased) was reflected in a greater neural response in auditory association cortex at these boundaries. The results suggest a hierarchical mechanism for processing acoustic textures that is relevant to auditory object analysis: boundaries between objects are first detected as a change in statistical rules over frequency-time space, before a representation that corresponds to the characteristics of the perceived object is formed.
Auditory Pattern Recognition and Brief Tone Discrimination of Children with Reading Disorders
Walker, Marianna M.; Givens, Gregg D.; Cranford, Jerry L.; Holbert, Don; Walker, Letitia
2006-01-01
Auditory pattern recognition skills in children with reading disorders were investigated using perceptual tests involving discrimination of frequency and duration tonal patterns. A behavioral test battery involving recognition of the pattern of presentation of tone triads was used in which individual components differed in either frequency or…
Contrast Gain Control in Auditory Cortex
Rabinowitz, Neil C.; Willmore, Ben D.B.; Schnupp, Jan W.H.; King, Andrew J.
2011-01-01
The auditory system must represent sounds with a wide range of statistical properties. One important property is the spectrotemporal contrast in the acoustic environment: the variation in sound pressure in each frequency band, relative to the mean pressure. We show that neurons in ferret auditory cortex rescale their gain to partially compensate for the spectrotemporal contrast of recent stimulation. When contrast is low, neurons increase their gain, becoming more sensitive to small changes in the stimulus, although the effectiveness of contrast gain control is reduced at low mean levels. Gain is primarily determined by contrast near each neuron's preferred frequency, but there is also a contribution from contrast in more distant frequency bands. Neural responses are modulated by contrast over timescales of ∼100 ms. By using contrast gain control to expand or compress the representation of its inputs, the auditory system may be seeking an efficient coding of natural sounds. PMID:21689603
Blast-induced tinnitus and hyperactivity in the auditory cortex of rats.
Luo, Hao; Pace, Edward; Zhang, Jinsheng
2017-01-06
Blast exposure can cause tinnitus and hearing impairment by damaging the auditory periphery and direct impact to the brain, which trigger neural plasticity in both auditory and non-auditory centers. However, the underlying neurophysiological mechanisms of blast-induced tinnitus are still unknown. In this study, we induced tinnitus in rats using blast exposure and investigated changes in spontaneous firing and bursting activity in the auditory cortex (AC) at one day, one month, and three months after blast exposure. Our results showed that spontaneous activity in the tinnitus-positive group began changing at one month after blast exposure, and manifested as robust hyperactivity at all frequency regions at three months after exposure. We also observed an increased bursting rate in the low-frequency region at one month after blast exposure and in all frequency regions at three months after exposure. Taken together, spontaneous firing and bursting activity in the AC played an important role in blast-induced chronic tinnitus as opposed to acute tinnitus, thus favoring a bottom-up mechanism.
Auditory word recognition: extrinsic and intrinsic effects of word frequency.
Connine, C M; Titone, D; Wang, J
1993-01-01
Two experiments investigated the influence of word frequency in a phoneme identification task. Speech voicing continua were constructed so that one endpoint was a high-frequency word and the other endpoint was a low-frequency word (e.g., best-pest). Experiment 1 demonstrated that ambiguous tokens were labeled such that a high-frequency word was formed (intrinsic frequency effect). Experiment 2 manipulated the frequency composition of the list (extrinsic frequency effect). A high-frequency list bias produced an exaggerated influence of frequency; a low-frequency list bias showed a reverse frequency effect. Reaction time effects were discussed in terms of activation and postaccess decision models of frequency coding. The results support a late use of frequency in auditory word recognition.
Sisneros, Joseph A
2009-03-01
The plainfin midshipman fish (Porichthys notatus Girard, 1854) is a vocal species of batrachoidid fish that generates acoustic signals for intraspecific communication during social and reproductive activity and has become a good model for investigating the neural and endocrine mechanisms of vocal-acoustic communication. Reproductively active female plainfin midshipman fish use their auditory sense to detect and locate "singing" males, which produce a multiharmonic advertisement call to attract females for spawning. The seasonal onset of male advertisement calling in the midshipman fish coincides with an increase in the range of frequency sensitivity of the female's inner ear saccule, the main organ of hearing, thus leading to enhanced encoding of the dominant frequency components of male advertisement calls. Non-reproductive females treated with either testosterone or 17β-estradiol exhibit a dramatic increase in the inner ear's frequency sensitivity that mimics the reproductive female's auditory phenotype and leads to an increased detection of the male's advertisement call. This novel form of auditory plasticity provides an adaptable mechanism that enhances coupling between sender and receiver in vocal communication. This review focuses on recent evidence for seasonal reproductive-state and steroid-dependent plasticity of auditory frequency sensitivity in the peripheral auditory system of the midshipman fish. The potential steroid-dependent mechanism(s) that lead to this novel form of auditory and behavioral plasticity are also discussed.
Broadened population-level frequency tuning in the auditory cortex of tinnitus patients.
Sekiya, Kenichi; Takahashi, Mariko; Murakami, Shingo; Kakigi, Ryusuke; Okamoto, Hidehiko
2017-03-01
Tinnitus is a phantom auditory perception without an external sound source and is one of the most common public health concerns that impair the quality of life of many individuals. However, its neural mechanisms remain unclear. We herein examined population-level frequency tuning in the auditory cortex of unilateral tinnitus patients with similar hearing levels in both ears using magnetoencephalography. We compared auditory-evoked neural activities elicited by a stimulation to the tinnitus and nontinnitus ears. Objective magnetoencephalographic data suggested that population-level frequency tuning corresponding to the tinnitus ear was significantly broader than that corresponding to the nontinnitus ear in the human auditory cortex. The results obtained support the hypothesis that pathological alterations in inhibitory neural networks play an important role in the perception of subjective tinnitus. NEW & NOTEWORTHY Although subjective tinnitus is one of the most common public health concerns that impair the quality of life of many individuals, no standard treatment or objective diagnostic method currently exists. We herein revealed that population-level frequency tuning was significantly broader in the tinnitus ear than in the nontinnitus ear. The results of the present study provide an insight into the development of an objective diagnostic method for subjective tinnitus.
Encoding frequency contrast in primate auditory cortex
Scott, Brian H.; Semple, Malcolm N.
2014-01-01
Changes in amplitude and frequency jointly determine much of the communicative significance of complex acoustic signals, including human speech. We have previously described responses of neurons in the core auditory cortex of awake rhesus macaques to sinusoidal amplitude modulation (SAM) signals. Here we report a complementary study of sinusoidal frequency modulation (SFM) in the same neurons. Responses to SFM were analogous to SAM responses in that changes in multiple parameters defining SFM stimuli (e.g., modulation frequency, modulation depth, carrier frequency) were robustly encoded in the temporal dynamics of the spike trains. For example, changes in the carrier frequency produced highly reproducible changes in shapes of the modulation period histogram, consistent with the notion that the instantaneous probability of discharge mirrors the moment-by-moment spectrum at low modulation rates. The upper limit for phase locking was similar across SAM and SFM within neurons, suggesting shared biophysical constraints on temporal processing. Using spike train classification methods, we found that neural thresholds for modulation depth discrimination are typically far lower than would be predicted from frequency tuning to static tones. This “dynamic hyperacuity” suggests a substantial central enhancement of the neural representation of frequency changes relative to the auditory periphery. Spike timing information was superior to average rate information when discriminating among SFM signals, and even when discriminating among static tones varying in frequency. This finding held even when differences in total spike count across stimuli were normalized, indicating both the primacy and generality of temporal response dynamics in cortical auditory processing. PMID:24598525
De Martino, Federico; Moerel, Michelle; Ugurbil, Kamil; Goebel, Rainer; Yacoub, Essa; Formisano, Elia
2015-12-29
Columnar arrangements of neurons with similar preference have been suggested as the fundamental processing units of the cerebral cortex. Within these columnar arrangements, feed-forward information enters at middle cortical layers whereas feedback information arrives at superficial and deep layers. This interplay of feed-forward and feedback processing is at the core of perception and behavior. Here we provide in vivo evidence consistent with a columnar organization of the processing of sound frequency in the human auditory cortex. We measure submillimeter functional responses to sound frequency sweeps at high magnetic fields (7 tesla) and show that frequency preference is stable through cortical depth in primary auditory cortex. Furthermore, we demonstrate that, in this highly columnar cortex, task demands sharpen the frequency tuning in superficial cortical layers more than in middle or deep layers. These findings are pivotal to understanding mechanisms of neural information processing and flow during the active perception of sounds.
Perceptual consequences of disrupted auditory nerve activity.
Zeng, Fan-Gang; Kong, Ying-Yee; Michalewski, Henry J; Starr, Arnold
2005-06-01
Perceptual consequences of disrupted auditory nerve activity were systematically studied in 21 subjects who had been clinically diagnosed with auditory neuropathy (AN), a recently defined disorder characterized by normal outer hair cell function but disrupted auditory nerve function. Neurological and electrophysiological evidence suggests that disrupted auditory nerve activity is due to desynchronized or reduced neural activity or both. Psychophysical measures showed that the disrupted neural activity has minimal effects on intensity-related perception, such as loudness discrimination, pitch discrimination at high frequencies, and sound localization using interaural level differences. In contrast, the disrupted neural activity significantly impairs timing related perception, such as pitch discrimination at low frequencies, temporal integration, gap detection, temporal modulation detection, backward and forward masking, signal detection in noise, binaural beats, and sound localization using interaural time differences. These perceptual consequences are the opposite of what is typically observed in cochlear-impaired subjects who have impaired intensity perception but relatively normal temporal processing after taking their impaired intensity perception into account. These differences in perceptual consequences between auditory neuropathy and cochlear damage suggest the use of different neural codes in auditory perception: a suboptimal spike count code for intensity processing, a synchronized spike code for temporal processing, and a duplex code for frequency processing. We also proposed two underlying physiological models based on desynchronized and reduced discharge in the auditory nerve to successfully account for the observed neurological and behavioral data. These methods and measures cannot differentiate between these two AN models, but future studies using electric stimulation of the auditory nerve via a cochlear implant might. These results not only show the unique contribution of neural synchrony to sensory perception but also provide guidance for translational research in terms of better diagnosis and management of human communication disorders.
Scheerer, N E; Jacobson, D S; Jones, J A
2016-02-09
Auditory feedback plays an important role in the acquisition of fluent speech; however, this role may change once speech is acquired and individuals no longer experience persistent developmental changes to the brain and vocal tract. For this reason, we investigated whether the role of auditory feedback in sensorimotor learning differs across children and adult speakers. Participants produced vocalizations while they heard their vocal pitch predictably or unpredictably shifted downward one semitone. The participants' vocal pitches were measured at the beginning of each vocalization, before auditory feedback was available, to assess the extent to which the deviant auditory feedback modified subsequent speech motor commands. Sensorimotor learning was observed in both children and adults, with participants' initial vocal pitch increasing following trials where they were exposed to predictable, but not unpredictable, frequency-altered feedback. Participants' vocal pitch was also measured across each vocalization, to index the extent to which the deviant auditory feedback was used to modify ongoing vocalizations. While both children and adults were found to increase their vocal pitch following predictable and unpredictable changes to their auditory feedback, adults produced larger compensatory responses. The results of the current study demonstrate that both children and adults rapidly integrate information derived from their auditory feedback to modify subsequent speech motor commands. However, these results also demonstrate that children and adults differ in their ability to use auditory feedback to generate compensatory vocal responses during ongoing vocalization. Since vocal variability also differed across the children and adult groups, these results also suggest that compensatory vocal responses to frequency-altered feedback manipulations initiated at vocalization onset may be modulated by vocal variability.
Shepard, Kathryn N.; Chong, Kelly K.
2016-01-01
Tonotopic map plasticity in the adult auditory cortex (AC) is a well established and oft-cited measure of auditory associative learning in classical conditioning paradigms. However, its necessity as an enduring memory trace has been debated, especially given a recent finding that the areal expansion of core AC tuned to a newly relevant frequency range may arise only transiently to support auditory learning. This has been reinforced by an ethological paradigm showing that map expansion is not observed for ultrasonic vocalizations (USVs) or for ultrasound frequencies in postweaning dams for whom USVs emitted by pups acquire behavioral relevance. However, whether transient expansion occurs during maternal experience is not known, and could help to reveal the generality of cortical map expansion as a correlate for auditory learning. We thus mapped the auditory cortices of maternal mice at postnatal time points surrounding the peak in pup USV emission, but found no evidence of frequency map expansion for the behaviorally relevant high ultrasound range in AC. Instead, regions tuned to low frequencies outside of the ultrasound range show progressively greater suppression of activity in response to the playback of ultrasounds or pup USVs for maternally experienced animals assessed at their pups’ postnatal day 9 (P9) to P10, or postweaning. This provides new evidence for a lateral-band suppression mechanism elicited by behaviorally meaningful USVs, likely enhancing their population-level signal-to-noise ratio. These results demonstrate that tonotopic map enlargement has limits as a construct for conceptualizing how experience leaves neural memory traces within sensory cortex in the context of ethological auditory learning. PMID:27957529
Processing of frequency-modulated sounds in the lateral auditory belt cortex of the rhesus monkey.
Tian, Biao; Rauschecker, Josef P
2004-11-01
Single neurons were recorded from the lateral belt areas, anterolateral (AL), mediolateral (ML), and caudolateral (CL), of nonprimary auditory cortex in 4 adult rhesus monkeys under gas anesthesia, while the neurons were stimulated with frequency-modulated (FM) sweeps. Responses to FM sweeps, measured as the firing rate of the neurons, were invariably greater than those to tone bursts. In our stimuli, frequency changed linearly from low to high frequencies (FM direction "up") or high to low frequencies ("down") at varying speeds (FM rates). Neurons were highly selective to the rate and direction of the FM sweep. Significant differences were found between the 3 lateral belt areas with regard to their FM rate preferences: whereas neurons in ML responded to the whole range of FM rates, AL neurons responded better to slower FM rates in the range of naturally occurring communication sounds. CL neurons generally responded best to fast FM rates at a speed of several hundred Hz/ms, which have the broadest frequency spectrum. These selectivities are consistent with a role of AL in the decoding of communication sounds and of CL in the localization of sounds, which works best with broader bandwidths. Together, the results support the hypothesis of parallel streams for the processing of different aspects of sounds, including auditory objects and auditory space.
Lustenberger, Caroline; Patel, Yogi A; Alagapan, Sankaraleengam; Page, Jessica M; Price, Betsy; Boyle, Michael R; Fröhlich, Flavio
2018-04-01
Auditory rhythmic sensory stimulation modulates brain oscillations by increasing phase-locking to the temporal structure of the stimuli and by increasing the power of specific frequency bands, resulting in Auditory Steady State Responses (ASSR). The ASSR is altered in different diseases of the central nervous system such as schizophrenia. However, in order to use the ASSR as biological markers for disease states, it needs to be understood how different vigilance states and underlying brain activity affect the ASSR. Here, we compared the effects of auditory rhythmic stimuli on EEG brain activity during wake and NREM sleep, investigated the influence of the presence of dominant sleep rhythms on the ASSR, and delineated the topographical distribution of these modulations. Participants (14 healthy males, 20-33 years) completed on the same day a 60 min nap session and two 30 min wakefulness sessions (before and after the nap). During these sessions, amplitude modulated (AM) white noise auditory stimuli at different frequencies were applied. High-density EEG was continuously recorded and time-frequency analyses were performed to assess ASSR during wakefulness and NREM periods. Our analysis revealed that depending on the electrode location, stimulation frequency applied and window/frequencies analysed, the ASSR was significantly modulated by sleep pressure (before and after sleep), vigilance state (wake vs. NREM sleep), and the presence of slow wave activity and sleep spindles. Furthermore, AM stimuli increased spindle activity during NREM sleep but not during wakefulness. Thus, (1) electrode location, sleep history, vigilance state and ongoing brain activity need to be carefully considered when investigating ASSR and (2) auditory rhythmic stimuli during sleep might represent a powerful tool to boost sleep spindles.
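Phase-locking of the EEG to amplitude-modulated stimulation, the basis of the ASSR measure discussed above, is often quantified as inter-trial phase coherence at the modulation frequency. The following is a minimal illustration of that quantity, not the authors' time-frequency pipeline; the function name and parameters are assumptions.

```python
import numpy as np

def intertrial_phase_coherence(trials, fs, freq):
    """Inter-trial phase coherence (ITC) at one frequency.
    trials: (n_trials, n_samples) array of single-trial EEG epochs.
    Returns a value in [0, 1]; 1 means perfect phase-locking across trials."""
    t = np.arange(trials.shape[1]) / fs
    # Project each trial onto a complex sinusoid at the target frequency.
    basis = np.exp(-2j * np.pi * freq * t)
    coeffs = trials @ basis             # one complex amplitude per trial
    phases = coeffs / np.abs(coeffs)    # keep only the phase (unit phasors)
    return np.abs(phases.mean())        # length of the mean phasor
```

Stimulus-locked responses yield phasors that point in a consistent direction across trials, so their mean has length near 1; background EEG with random phase averages toward 0.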
[Functional anatomy of the cochlear nerve and the central auditory system].
Simon, E; Perrot, X; Mertens, P
2009-04-01
The auditory pathways are a system of afferent fibers (through the cochlear nerve) and efferent fibers (through the vestibular nerve), which are not limited to a simple information transmitting system but create a veritable integration of the sound stimulus at the different levels, by analyzing its three fundamental elements: frequency (pitch), intensity, and spatial localization of the sound source. From the cochlea to the primary auditory cortex, the auditory fibers are organized anatomically in relation to the characteristic frequency of the sound signal that they transmit (tonotopy). Coding the intensity of the sound signal is based on temporal recruitment (the number of action potentials) and spatial recruitment (the number of inner hair cells recruited near the cell of the frequency that is characteristic of the stimulus). Because of binaural hearing, commissural pathways at each level of the auditory system and integration of the phase shift and the difference in intensity between signals coming from both ears, spatial localization of the sound source is possible. Finally, through the efferent fibers in the vestibular nerve, higher centers exercise control over the activity of the cochlea and adjust the peripheral hearing organ to external sound conditions, thus protecting the auditory system or increasing sensitivity by the attention given to the signal.
Penna, Mario; Velásquez, Nelson; Solís, Rigoberto
2008-04-01
Thresholds for evoked vocal responses and thresholds of multiunit midbrain auditory responses to pure tones and synthetic calls were investigated in males of Pleurodema thaul, as behavioral thresholds well above auditory sensitivity have been reported for other anurans. Thresholds for evoked vocal responses to synthetic advertisement calls played back at increasing intensity averaged 43 dB RMS SPL (range 31-52 dB RMS SPL), measured at the subjects' position. Number of pulses increased with stimulus intensities, reaching a plateau at about 18-39 dB above threshold and decreased at higher intensities. Latency to call followed inverse trends relative to number of pulses. Neural audiograms yielded an average best threshold in the high frequency range of 46.6 dB RMS SPL (range 41-51 dB RMS SPL) and a center frequency of 1.9 kHz (range 1.7-2.6 kHz). Auditory thresholds for a synthetic call having a carrier frequency of 2.1 kHz averaged 44 dB RMS SPL (range 39-47 dB RMS SPL). The similarity between thresholds for advertisement calling and auditory thresholds for the advertisement call indicates that male P. thaul use the full extent of their auditory sensitivity in acoustic interactions, likely an evolutionary adaptation allowing chorusing activity in low-density aggregations.
Keough, Dwayne; Jones, Jeffery A.
2009-01-01
Singing requires accurate control of the fundamental frequency (F0) of the voice. This study examined trained singers’ and untrained singers’ (nonsingers’) sensitivity to subtle manipulations in auditory feedback and the subsequent effect on the mapping between F0 feedback and vocal control. Participants produced the consonant-vowel /ta/ while receiving auditory feedback that was shifted up and down in frequency. Results showed that singers and nonsingers compensated to a similar degree when presented with frequency-altered feedback (FAF); however, singers’ F0 values were consistently closer to the intended pitch target. Moreover, singers initiated their compensatory responses when auditory feedback was shifted up or down 6 cents or more, compared to nonsingers who began compensating when feedback was shifted up 26 cents and down 22 cents. Additionally, examination of the first 50 ms of vocalization indicated that participants commenced subsequent vocal utterances, during FAF, near the F0 value on previous shift trials. Interestingly, nonsingers commenced F0 productions below the pitch target and increased their F0 until they matched the note. Thus, singers and nonsingers rely on an internal model to regulate voice F0, but singers’ models appear to be more sensitive in response to subtle discrepancies in auditory feedback. PMID:19640048
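The shift sizes in these frequency-altered feedback studies are given in cents, i.e. hundredths of a semitone on a log-frequency scale, so one semitone corresponds to a frequency ratio of 2^(1/12). A small helper (hypothetical, for illustration only) makes the magnitudes concrete:

```python
def cents_to_ratio(cents: float) -> float:
    """Frequency ratio corresponding to a pitch shift in cents
    (100 cents = 1 semitone, 1200 cents = 1 octave)."""
    return 2.0 ** (cents / 1200.0)

# The 6-cent detection threshold reported for singers is a frequency shift
# of roughly 0.35%, versus roughly 1.3-1.5% for the nonsingers'
# 22- and 26-cent thresholds.
```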
Bhandiwad, Ashwin A; Whitchurch, Elizabeth A; Colleye, Orphal; Zeddies, David G; Sisneros, Joseph A
2017-03-01
Adult female and nesting (type I) male midshipman fish (Porichthys notatus) exhibit an adaptive form of auditory plasticity for the enhanced detection of social acoustic signals. Whether this adaptive plasticity also occurs in "sneaker" type II males is unknown. Here, we characterize auditory-evoked potentials recorded from hair cells in the saccule of reproductive and non-reproductive "sneaker" type II male midshipman to determine whether this sexual phenotype exhibits seasonal, reproductive state-dependent changes in auditory sensitivity and frequency response to behaviorally relevant auditory stimuli. Saccular potentials were recorded from the middle and caudal region of the saccule while sound was presented via an underwater speaker. Our results indicate saccular hair cells from reproductive type II males had thresholds based on measures of sound pressure and acceleration (re 1 µPa and 1 m s⁻², respectively) that were ~8-21 dB lower than non-reproductive type II males across a broad range of frequencies, which include the dominant higher frequencies in type I male vocalizations. This increase in type II auditory sensitivity may potentially facilitate eavesdropping by sneaker males and their assessment of vocal type I males for the selection of cuckoldry sites during the breeding season.
Basic Auditory Processing and Developmental Dyslexia in Chinese
ERIC Educational Resources Information Center
Wang, Hsiao-Lan Sharon; Huss, Martina; Hamalainen, Jarmo A.; Goswami, Usha
2012-01-01
The present study explores the relationship between basic auditory processing of sound rise time, frequency, duration and intensity, phonological skills (onset-rime and tone awareness, sound blending, RAN, and phonological memory) and reading disability in Chinese. A series of psychometric, literacy, phonological, auditory, and character…
Psychoacoustics
NASA Astrophysics Data System (ADS)
Moore, Brian C. J.
ERIC Educational Resources Information Center
Hill, P. R.; Hogben, J. H.; Bishop, D. M. V.
2005-01-01
It has been proposed that specific language impairment (SLI) is caused by an impairment of auditory processing, but it is unclear whether this problem affects temporal processing, frequency discrimination (FD), or both. Furthermore, there are few longitudinal studies in this area, making it hard to establish whether any deficit represents a…
Auditory Attention to Frequency and Time: An Analogy to Visual Local-Global Stimuli
ERIC Educational Resources Information Center
Justus, Timothy; List, Alexandra
2005-01-01
Two priming experiments demonstrated exogenous attentional persistence to the fundamental auditory dimensions of frequency (Experiment 1) and time (Experiment 2). In a divided-attention task, participants responded to an independent dimension, the identification of three-tone sequence patterns, for both prime and probe stimuli. The stimuli were…
ERIC Educational Resources Information Center
Hämäläinen, Jarmo A.; Salminen, Hanne K.; Leppänen, Paavo H. T.
2013-01-01
A review of research that uses behavioral, electroencephalographic, and/or magnetoencephalographic methods to investigate auditory processing deficits in individuals with dyslexia is presented. Findings show that measures of frequency, rise time, and duration discrimination as well as amplitude modulation and frequency modulation detection were…
Auditory Stream Segregation and the Perception of Across-Frequency Synchrony
ERIC Educational Resources Information Center
Micheyl, Christophe; Hunter, Cynthia; Oxenham, Andrew J.
2010-01-01
This study explored the extent to which sequential auditory grouping affects the perception of temporal synchrony. In Experiment 1, listeners discriminated between 2 pairs of asynchronous "target" tones at different frequencies, A and B, in which the B tone either led or lagged. Thresholds were markedly higher when the target tones were temporally…
Social Context–Induced Song Variation Affects Female Behavior and Gene Expression
Woolley, Sarah C; Doupe, Allison J
2008-01-01
Social cues modulate the performance of communicative behaviors in a range of species, including humans, and such changes can make the communication signal more salient. In songbirds, males use song to attract females, and song organization can differ depending on the audience to which a male sings. For example, male zebra finches (Taeniopygia guttata) change their songs in subtle ways when singing to a female (directed song) compared with when they sing in isolation (undirected song), and some of these changes depend on altered neural activity from a specialized forebrain-basal ganglia circuit, the anterior forebrain pathway (AFP). In particular, variable activity in the AFP during undirected song is thought to actively enable syllable variability, whereas the lower and less-variable AFP firing during directed singing is associated with more stereotyped song. Consequently, directed song has been suggested to reflect a “performance” state, and undirected song a form of vocal motor “exploration.” However, this hypothesis predicts that directed–undirected song differences, despite their subtlety, should matter to female zebra finches, which is a question that has not been investigated. We tested female preferences for this natural variation in song in a behavioral approach assay, and we found that both mated and socially naive females could discriminate between directed and undirected song—and strongly preferred directed song. These preferences, which appeared to reflect attention especially to aspects of song variability controlled by the AFP, were enhanced by experience, as they were strongest for mated females responding to their mate's directed songs. We then measured neural activity using expression of the immediate early gene product ZENK, and found that social context and song familiarity differentially modulated the number of ZENK-expressing cells in telencephalic auditory areas. 
Specifically, the number of ZENK-expressing cells in the caudomedial mesopallium (CMM) was most affected by whether a song was directed or undirected, whereas the caudomedial nidopallium (NCM) was most affected by whether a song was familiar or unfamiliar. Together these data demonstrate that females detect and prefer the features of directed song and suggest that high-level auditory areas including the CMM are involved in this social perception. PMID:18351801
Volume of the human septal forebrain region is a predictor of source memory accuracy.
Butler, Tracy; Blackmon, Karen; Zaborszky, Laszlo; Wang, Xiuyuan; DuBois, Jonathan; Carlson, Chad; Barr, William B; French, Jacqueline; Devinsky, Orrin; Kuzniecky, Ruben; Halgren, Eric; Thesen, Thomas
2012-01-01
Septal nuclei, components of basal forebrain, are strongly and reciprocally connected with hippocampus, and have been shown in animals to play a critical role in memory. In humans, the septal forebrain has received little attention. To examine the role of human septal forebrain in memory, we acquired high-resolution magnetic resonance imaging scans from 25 healthy subjects and calculated septal forebrain volume using recently developed probabilistic cytoarchitectonic maps. We indexed memory with the California Verbal Learning Test-II. Linear regression showed that bilateral septal forebrain volume was a significant positive predictor of recognition memory accuracy. More specifically, larger septal forebrain volume was associated with the ability to accurately recall item source/context. Results indicate specific involvement of septal forebrain in human source memory, and underscore the need for additional research into the role of septal nuclei in memory and in the memory impairments associated with human diseases.
Diesch, Eugen; Andermann, Martin; Flor, Herta; Rupp, Andre
2010-05-01
The steady-state auditory evoked magnetic field was recorded in tinnitus patients and controls, both either musicians or non-musicians, all of them with high-frequency hearing loss. Stimuli were AM-tones with two modulation frequencies and three carrier frequencies matching the "audiometric edge", i.e. the frequency above which hearing loss increases more rapidly, the tinnitus frequency or the frequency 1 1/2 octaves above the audiometric edge in controls, and a frequency 1 1/2 octaves below the audiometric edge. Stimuli equated in carrier frequency, but differing in modulation frequency, were simultaneously presented to the two ears. The modulation frequency-specific components of the dual steady-state response were recovered by bandpass filtering. In both hemispheres, the source amplitude of the response was larger for contralateral than ipsilateral input. In non-musicians with tinnitus, this laterality effect was enhanced in the hemisphere contralateral and reduced in the hemisphere ipsilateral to the tinnitus ear, especially for the tinnitus frequency. The hemisphere-by-input laterality dominance effect was smaller in musicians than in non-musicians. In both patient groups, source amplitude change over time, i.e. amplitude slope, increased with tonal frequency for contralateral input and decreased for ipsilateral input. However, slope was smaller for musicians than non-musicians. In patients, source amplitude was negatively correlated with the MRI-determined volume of the medial partition of Heschl's gyrus. Tinnitus patients show an altered excitatory-inhibitory balance reflecting the downregulation of inhibition and resulting in a steeper dominance hierarchy among simultaneous processes in auditory cortex. Direction and extent of this alteration are modulated by musicality and auditory cortex volume.
Kloepper, L N; Nachtigall, P E; Gisiner, R; Breese, M
2010-11-01
Toothed whales and dolphins possess a hypertrophied auditory system that allows for the production and hearing of ultrasonic signals. Although the fossil record provides information on the evolution of the auditory structures found in extant odontocetes, it cannot provide information on the evolutionary pressures leading to the hypertrophied auditory system. Investigating the effect of hearing loss may provide evidence for the reason for the development of high-frequency hearing in echolocating animals by demonstrating how high-frequency hearing assists in the functioning echolocation system. The discrimination abilities of a false killer whale (Pseudorca crassidens) were measured prior to and after documented high-frequency hearing loss. In 1992, the subject had good hearing and could hear at frequencies up to 100 kHz. In 2008, the subject had lost hearing at frequencies above 40 kHz. First in 1992, and then again in 2008, the subject performed an identical echolocation task, discriminating between machined hollow aluminum cylinder targets of differing wall thickness. Performances were recorded for individual target differences and compared between both experimental years. Performances on individual targets dropped between 1992 and 2008, with a maximum performance reduction of 36.1%. These data indicate that, with a loss in high-frequency hearing, there was a concomitant reduction in echolocation discrimination ability, and suggest that the development of a hypertrophied auditory system capable of hearing at ultrasonic frequencies evolved in response to pressures for fine-scale echolocation discrimination.
Functional Topography of Human Auditory Cortex
Rauschecker, Josef P.
2016-01-01
Functional and anatomical studies have clearly demonstrated that auditory cortex is populated by multiple subfields. However, functional characterization of those fields has been largely the domain of animal electrophysiology, limiting the extent to which human and animal research can inform each other. In this study, we used high-resolution functional magnetic resonance imaging to characterize human auditory cortical subfields using a variety of low-level acoustic features in the spectral and temporal domains. Specifically, we show that topographic gradients of frequency preference, or tonotopy, extend along two axes in human auditory cortex, thus reconciling historical accounts of a tonotopic axis oriented medial to lateral along Heschl's gyrus and more recent findings emphasizing tonotopic organization along the anterior–posterior axis. Contradictory findings regarding topographic organization according to temporal modulation rate in acoustic stimuli, or “periodotopy,” are also addressed. Although isolated subregions show a preference for high rates of amplitude-modulated white noise (AMWN) in our data, large-scale “periodotopic” organization was not found. Organization by AM rate was correlated with dominant pitch percepts in AMWN in many regions. In short, our data expose early auditory cortex chiefly as a frequency analyzer, and spectral frequency, as imposed by the sensory receptor surface in the cochlea, seems to be the dominant feature governing large-scale topographic organization across human auditory cortex. SIGNIFICANCE STATEMENT In this study, we examine the nature of topographic organization in human auditory cortex with fMRI. Topographic organization by spectral frequency (tonotopy) extended in two directions: medial to lateral, consistent with early neuroimaging studies, and anterior to posterior, consistent with more recent reports. 
Large-scale organization by rates of temporal modulation (periodotopy) was correlated with confounding spectral content of amplitude-modulated white-noise stimuli. Together, our results suggest that the organization of human auditory cortex is driven primarily by its response to spectral acoustic features, and large-scale periodotopy spanning across multiple regions is not supported. This fundamental information regarding the functional organization of early auditory cortex will inform our growing understanding of speech perception and the processing of other complex sounds. PMID:26818527
Brainstem Correlates of Temporal Auditory Processing in Children with Specific Language Impairment
ERIC Educational Resources Information Center
Basu, Madhavi; Krishnan, Ananthanarayan; Weber-Fox, Christine
2010-01-01
Deficits in identification and discrimination of sounds with short inter-stimulus intervals or short formant transitions in children with specific language impairment (SLI) have been taken to reflect an underlying temporal auditory processing deficit. Using the sustained frequency following response (FFR) and the onset auditory brainstem responses…
Injury- and Use-Related Plasticity in the Adult Auditory System.
ERIC Educational Resources Information Center
Irvine, Dexter R. F.
2000-01-01
This article discusses findings concerning the plasticity of auditory cortical processing mechanisms in adults, including the effects of restricted cochlear damage or behavioral training with acoustic stimuli on the frequency selectivity of auditory cortical neurons and evidence for analogous injury- and use-related plasticity in the adult human…
Henry, Kenneth S.; Heinz, Michael G.
2013-01-01
People with sensorineural hearing loss have substantial difficulty understanding speech under degraded listening conditions. Behavioral studies suggest that this difficulty may be caused by changes in auditory processing of the rapidly-varying temporal fine structure (TFS) of acoustic signals. In this paper, we review the presently known effects of sensorineural hearing loss on processing of TFS and slower envelope modulations in the peripheral auditory system of mammals. Cochlear damage has relatively subtle effects on phase locking by auditory-nerve fibers to the temporal structure of narrowband signals under quiet conditions. In background noise, however, sensorineural loss does substantially reduce phase locking to the TFS of pure-tone stimuli. For auditory processing of broadband stimuli, sensorineural hearing loss has been shown to severely alter the neural representation of temporal information along the tonotopic axis of the cochlea. Notably, auditory-nerve fibers innervating the high-frequency part of the cochlea grow increasingly responsive to low-frequency TFS information and less responsive to temporal information near their characteristic frequency (CF). Cochlear damage also increases the correlation of the response to TFS across fibers of varying CF, decreases the traveling-wave delay between TFS responses of fibers with different CFs, and can increase the range of temporal modulation frequencies encoded in the periphery for broadband sounds. Weaker neural coding of temporal structure in background noise and degraded coding of broadband signals along the tonotopic axis of the cochlea are expected to contribute considerably to speech perception problems in people with sensorineural hearing loss. PMID:23376018
Reconstructing the spectrotemporal modulations of real-life sounds from fMRI response patterns
Santoro, Roberta; Moerel, Michelle; De Martino, Federico; Valente, Giancarlo; Ugurbil, Kamil; Yacoub, Essa; Formisano, Elia
2017-01-01
Ethological views of brain functioning suggest that sound representations and computations in the auditory neural system are optimized finely to process and discriminate behaviorally relevant acoustic features and sounds (e.g., spectrotemporal modulations in the songs of zebra finches). Here, we show that modeling of neural sound representations in terms of frequency-specific spectrotemporal modulations enables accurate and specific reconstruction of real-life sounds from high-resolution functional magnetic resonance imaging (fMRI) response patterns in the human auditory cortex. Region-based analyses indicated that response patterns in separate portions of the auditory cortex are informative of distinctive sets of spectrotemporal modulations. Most relevantly, results revealed that in early auditory regions, and progressively more in surrounding regions, temporal modulations in a range relevant for speech analysis (∼2–4 Hz) were reconstructed more faithfully than other temporal modulations. In early auditory regions, this effect was frequency-dependent and only present for lower frequencies (<∼2 kHz), whereas for higher frequencies, reconstruction accuracy was higher for faster temporal modulations. Further analyses suggested that auditory cortical processing optimized for the fine-grained discrimination of speech and vocal sounds underlies this enhanced reconstruction accuracy. In sum, the present study introduces an approach to embed models of neural sound representations in the analysis of fMRI response patterns. Furthermore, it reveals that, in the human brain, even general purpose and fundamental neural processing mechanisms are shaped by the physical features of real-world stimuli that are most relevant for behavior (i.e., speech, voice). PMID:28420788
ERIC Educational Resources Information Center
Megnin-Viggars, Odette; Goswami, Usha
2013-01-01
Visual speech inputs can enhance auditory speech information, particularly in noisy or degraded conditions. The natural statistics of audiovisual speech highlight the temporal correspondence between visual and auditory prosody, with lip, jaw, cheek and head movements conveying information about the speech envelope. Low-frequency spatial and…
Kojima, Satoshi; Doupe, Allison J.
2008-01-01
Acoustic experience critically influences auditory cortical development as well as emergence of highly selective auditory neurons in the songbird sensorimotor circuit. In adult zebra finches, these “song-selective” neurons respond better to the bird's own song (BOS) than to songs of other conspecifics. Birds learn their songs by memorizing a tutor's song and then matching auditory feedback of their voice to the tutor song memory. Song-selective neurons in the pallial-basal ganglia circuit called the anterior forebrain pathway (AFP) reflect the development of BOS. However, during learning, they also respond strongly to tutor song and are compromised in their adult selectivity when birds are prevented from matching BOS to tutor, suggesting that selectivity depends on tutor song learning as well as sensorimotor matching of BOS feedback to the tutor song memory. We examined the contribution of sensory learning of tutor song to song selectivity by recording from AFP neurons in birds reared without exposure to adult conspecifics. We found that AFP neurons in these “isolate” birds had highly tuned responses to isolate BOS. The selectivity was as high, and in the striato-pallidal nucleus Area X, even higher than that in normal birds, due to abnormally weak responsiveness to conspecific song. These results demonstrate that sensory learning of tutor song is not necessary for BOS tuning of AFP neurons. Because isolate birds develop their song via sensorimotor learning, our data further illustrate the importance of individual sensorimotor learning for song selectivity and provide insight into possible functions of song-selective neurons. PMID:17625059
Social experience affects neuronal responses to male calls in adult female zebra finches.
Menardy, F; Touiki, K; Dutrieux, G; Bozon, B; Vignal, C; Mathevon, N; Del Negro, C
2012-04-01
Plasticity studies have consistently shown that behavioural relevance can change the neural representation of sounds in the auditory system, but what occurs in the context of natural acoustic communication where significance could be acquired through social interaction remains to be explored. The zebra finch, a highly social songbird species that forms lifelong pair bonds and uses a vocalization, the distance call, to identify its mate, offers an opportunity to address this issue. Here, we recorded spiking activity in females while presenting distance calls that differed in their degree of familiarity: calls produced by the mate, by a familiar male, or by an unfamiliar male. We focused on the caudomedial nidopallium (NCM), a secondary auditory forebrain region. Both the mate's call and the familiar call evoked responses that differed in magnitude from responses to the unfamiliar call. This distinction between responses was seen both in single unit recordings from anesthetized females and in multiunit recordings from awake freely moving females. In contrast, control females that had not heard them previously displayed responses of similar magnitudes to all three calls. In addition, more cells showed highly selective responses in mated than in control females, suggesting that experience-dependent plasticity in call-evoked responses resulted in enhanced discrimination of auditory stimuli. Our results as a whole demonstrate major changes in the representation of natural vocalizations in the NCM within the context of individual recognition. The functional properties of NCM neurons may thus change continuously to adapt to the social environment.
Forlano, Paul M; Licorish, Roshney R; Ghahramani, Zachary N; Timothy, Miky; Ferrari, Melissa; Palmer, William C; Sisneros, Joseph A
2017-10-01
Little is known regarding the coordination of audition with decision-making and subsequent motor responses that initiate social behavior including mate localization during courtship. Using the midshipman fish model, we tested the hypothesis that the time spent by females attending and responding to the advertisement call is correlated with the activation of a specific subset of catecholaminergic (CA) and social decision-making network (SDM) nuclei underlying auditory-driven sexual motivation. In addition, we quantified the relationship of neural activation between CA and SDM nuclei in all responders with the goal of providing a map of functional connectivity of the circuitry underlying a motivated state responsive to acoustic cues during mate localization. In order to make a baseline qualitative comparison of this functional brain map to unmotivated females, we made a similar correlative comparison of brain activation in females who were unresponsive to the advertisement call playback. Our results support an important role for dopaminergic neurons in the periventricular posterior tuberculum and ventral thalamus, putative A11 and A13 tetrapod homologues, respectively, as well as the posterior parvocellular preoptic area and dorsomedial telencephalon (laterobasal amygdala homologue), in auditory attention and appetitive sexual behavior in fishes. These findings may also offer insights into the function of these highly conserved nuclei in the context of auditory-driven reproductive social behavior across vertebrates.
Hearing conspecific vocal signals alters peripheral auditory sensitivity
Gall, Megan D.; Wilczynski, Walter
2015-01-01
We investigated whether hearing advertisement calls over several nights, as happens in natural frog choruses, modified the responses of the peripheral auditory system in the green treefrog, Hyla cinerea. Using auditory evoked potentials (AEP), we found that exposure to 10 nights of a simulated male chorus lowered auditory thresholds in males and females, while exposure to random tones had no effect in males, but did result in lower thresholds in females. The threshold change was larger at the lower frequencies stimulating the amphibian papilla than at higher frequencies stimulating the basilar papilla. Suprathreshold responses to tonal stimuli were assessed for two peaks in the AEP recordings. For the peak P1 (assessed for 0.8–1.25 kHz), peak amplitude increased following chorus exposure. For peak P2 (assessed for 2–4 kHz), peak amplitude decreased at frequencies between 2.5 and 4.0 kHz, but remained unaltered at 2.0 kHz. Our results show for the first time, to our knowledge, that hearing dynamic social stimuli, like frog choruses, can alter the responses of the auditory periphery in a way that could enhance the detection of and response to conspecific acoustic communication signals. PMID:25972471
Okamoto, Hidehiko; Stracke, Henning; Lagemann, Lothar; Pantev, Christo
2010-01-01
The capability of involuntarily tracking certain sound signals in the presence of noise is essential in human daily life. Previous studies have demonstrated that top-down auditory focused attention can enhance excitatory and inhibitory neural activity, resulting in sharpening of frequency tuning of auditory neurons. In the present study, we investigated bottom-up driven involuntary neural processing of sound signals in noisy environments by means of magnetoencephalography. We contrasted two sound signal sequencing conditions: "constant sequencing" versus "random sequencing." Based on a pool of 16 different frequencies, either identical (constant sequencing) or pseudorandomly chosen (random sequencing) test frequencies were presented blockwise together with band-eliminated noises to nonattending subjects. The results demonstrated that the auditory evoked fields elicited in the constant sequencing condition were significantly enhanced compared with the random sequencing condition. However, the enhancement was not significantly different between different band-eliminated noise conditions. Thus the present study confirms that constant sound signal sequencing during nonattentive listening can enhance, but not sharpen, neural activity in human auditory cortex. Our results indicate that bottom-up driven involuntary neural processing may mainly amplify excitatory neural networks, but may not effectively enhance inhibitory neural circuits.
ERIC Educational Resources Information Center
Swink, Shannon; Stuart, Andrew
2012-01-01
The effect of gender on the N1-P2 auditory complex was examined while listening and speaking with altered auditory feedback. Fifteen normal hearing adult males and 15 females participated. N1-P2 components were evoked while listening to self-produced nonaltered and frequency shifted /a/ tokens and during production of /a/ tokens during nonaltered…
Vasconcelos, Raquel O.; Fonseca, Paulo J.; Amorim, M. Clara P.; Ladich, Friedrich
2011-01-01
Many fishes rely on their auditory skills to interpret crucial information about predators and prey, and to communicate intraspecifically. Few studies, however, have examined how complex natural sounds are perceived in fishes. We investigated the representation of conspecific mating and agonistic calls in the auditory system of the Lusitanian toadfish Halobatrachus didactylus, and analysed auditory responses to heterospecific signals from ecologically relevant species: a sympatric vocal fish (meagre Argyrosomus regius) and a potential predator (dolphin Tursiops truncatus). Using auditory evoked potential (AEP) recordings, we showed that both sexes can resolve fine features of conspecific calls. The toadfish auditory system was most sensitive to frequencies well represented in the conspecific vocalizations (namely the mating boatwhistle), and revealed a fine representation of duration and pulsed structure of agonistic and mating calls. Stimuli and corresponding AEP amplitudes were highly correlated, indicating an accurate encoding of amplitude modulation. Moreover, Lusitanian toadfish were able to detect T. truncatus foraging sounds and A. regius calls, although at higher amplitudes. We provide strong evidence that the auditory system of a vocal fish, lacking accessory hearing structures, is capable of resolving fine features of complex vocalizations that are probably important for intraspecific communication and other relevant stimuli from the auditory scene. PMID:20861044
Speech training alters consonant and vowel responses in multiple auditory cortex fields
Engineer, Crystal T.; Rahebi, Kimiya C.; Buell, Elizabeth P.; Fink, Melyssa K.; Kilgard, Michael P.
2015-01-01
Speech sounds evoke unique neural activity patterns in primary auditory cortex (A1). Extensive speech sound discrimination training alters A1 responses. While the neighboring auditory cortical fields each contain information about speech sound identity, each field processes speech sounds differently. We hypothesized that while all fields would exhibit training-induced plasticity following speech training, there would be unique differences in how each field changes. In this study, rats were trained to discriminate speech sounds by consonant or vowel in quiet and in varying levels of background speech-shaped noise. Local field potential and multiunit responses were recorded from four auditory cortex fields in rats that had received 10 weeks of speech discrimination training. Our results reveal that training alters speech evoked responses in each of the auditory fields tested. The neural response to consonants was significantly stronger in anterior auditory field (AAF) and A1 following speech training. The neural response to vowels following speech training was significantly weaker in ventral auditory field (VAF) and posterior auditory field (PAF). This differential plasticity of consonant and vowel sound responses may result from the greater paired pulse depression, expanded low frequency tuning, reduced frequency selectivity, and lower tone thresholds, which occurred across the four auditory fields. These findings suggest that alterations in the distributed processing of behaviorally relevant sounds may contribute to robust speech discrimination. PMID:25827927
Lahav, Amir; Skoe, Erika
2014-01-01
The intrauterine environment allows the fetus to begin hearing low-frequency sounds in a protected fashion, ensuring initial optimal development of the peripheral and central auditory system. However, the auditory nursery provided by the womb vanishes once the preterm newborn enters the high-frequency (HF) noisy environment of the neonatal intensive care unit (NICU). The present article draws a concerning line between auditory system development and HF noise in the NICU, which we argue is not necessarily conducive to fostering this development. Overexposure to HF noise during critical periods disrupts the functional organization of auditory cortical circuits. As a result, we theorize that the ability to tune out noise and extract acoustic information in a noisy environment may be impaired, leading to increased risks for a variety of auditory, language, and attention disorders. Additionally, HF noise in the NICU often masks human speech sounds, further limiting quality exposure to linguistic stimuli. Understanding the impact of the sound environment on the developing auditory system is an important first step in meeting the developmental demands of preterm newborns undergoing intensive care.
Auditory Discrimination Learning: Role of Working Memory.
Zhang, Yu-Xuan; Moore, David R; Guiraud, Jeanne; Molloy, Katharine; Yan, Ting-Ting; Amitay, Sygal
2016-01-01
Perceptual training is generally assumed to improve perception by modifying the encoding or decoding of sensory information. However, this assumption is incompatible with recent demonstrations that transfer of learning can be enhanced by across-trial variation of training stimuli or task. Here we present three lines of evidence from healthy adults in support of the idea that the enhanced transfer of auditory discrimination learning is mediated by working memory (WM). First, the ability to discriminate small differences in tone frequency or duration was correlated with WM measured with a tone n-back task. Second, training frequency discrimination around a variable frequency transferred to and from WM learning, but training around a fixed frequency did not. The transfer of learning in both directions was correlated with a reduction of the influence of stimulus variation in the discrimination task, linking WM and its improvement to across-trial stimulus interaction in auditory discrimination. Third, while WM training transferred broadly to other WM and auditory discrimination tasks, variable-frequency training on duration discrimination did not improve WM, indicating that stimulus variation challenges and trains WM only if the task demands stimulus updating in the varied dimension. The results provide empirical evidence as well as a theoretic framework for interactions between cognitive and sensory plasticity during perceptual experience.
Kates, James M; Arehart, Kathryn H
2015-10-01
This paper uses mutual information to quantify the relationship between envelope modulation fidelity and perceptual responses. Data from several previous experiments that measured speech intelligibility, speech quality, and music quality are evaluated for normal-hearing and hearing-impaired listeners. A model of the auditory periphery is used to generate envelope signals, and envelope modulation fidelity is calculated using the normalized cross-covariance of the degraded signal envelope with that of a reference signal. Two procedures are used to describe the envelope modulation: (1) modulation within each auditory frequency band and (2) spectro-temporal processing that analyzes the modulation of spectral ripple components fit to successive short-time spectra. The results indicate that low modulation rates provide the highest information for intelligibility, while high modulation rates provide the highest information for speech and music quality. The low-to-mid auditory frequencies are most important for intelligibility, while mid frequencies are most important for speech quality and high frequencies are most important for music quality. Differences between the spectral ripple components used for the spectro-temporal analysis were not significant in five of the six experimental conditions evaluated. The results indicate that different modulation-rate and auditory-frequency weights may be appropriate for indices designed to predict different types of perceptual relationships.
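The fidelity measure named above, the normalized cross-covariance between a degraded envelope and a reference envelope, can be sketched as follows. The function name, the synthetic 4-Hz envelope, and the omission of the auditory-periphery front end are all illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def envelope_fidelity(reference, degraded):
    """Normalized cross-covariance of two envelope signals.

    Both envelopes are mean-removed, then their inner product is
    normalized by the product of their norms, giving a value in [-1, 1].
    This is a generic sketch of the measure described in the abstract.
    """
    r = reference - np.mean(reference)
    d = degraded - np.mean(degraded)
    denom = np.sqrt(np.sum(r ** 2) * np.sum(d ** 2))
    if denom == 0.0:
        return 0.0
    return float(np.sum(r * d) / denom)

# Assumed toy input: a 4-Hz modulation envelope, clean vs. noise-degraded.
t = np.linspace(0.0, 1.0, 1000)
env = 1.0 + 0.5 * np.sin(2 * np.pi * 4 * t)
rng = np.random.default_rng(0)
noisy = env + rng.normal(0.0, 1.0, env.shape)

clean_score = envelope_fidelity(env, env)    # near 1 for identical envelopes
noisy_score = envelope_fidelity(env, noisy)  # degraded toward 0 by the noise
```

A perfectly preserved envelope scores near 1, and additive noise pulls the score toward 0, which is the sense in which the measure quantifies modulation fidelity.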
Specialization of the auditory processing in harbor porpoise, characterized by brain-stem potentials
NASA Astrophysics Data System (ADS)
Bibikov, Nikolay G.
2002-05-01
Brain-stem auditory evoked potentials (BAEPs) were recorded from the head surface of three awake harbor porpoises (Phocoena phocoena). A silver disk placed on the skin above the vertex bone served as the active electrode. The experiments were performed at the Karadag biological station (Crimean peninsula). Clicks and tone bursts were used as stimuli. The temporal and frequency selectivity of the auditory system was estimated using simultaneous and forward masking. A clear minimum in BAEP thresholds was observed in the range of 125-135 kHz, where the main spectral component of the species-specific echolocation signal is located. In this frequency range, tonal forward masking demonstrated strong frequency selectivity. An off-response to such tone bursts was a typical observation. An evident BAEP could be recorded up to frequencies of 190-200 kHz; however, outside the acoustic fovea the frequency selectivity was rather poor. Temporal resolution was estimated by measuring BAEP recovery functions for double clicks, double tone bursts, and double noise bursts. The half-time of BAEP recovery was in the range of 0.1-0.2 ms. The data indicate that the porpoise auditory system is strongly adapted to detect closely spaced ultrasonic sounds such as species-specific locating signals and their echoes.
Frequency-specific attentional modulation in human primary auditory cortex and midbrain.
Riecke, Lars; Peters, Judith C; Valente, Giancarlo; Poser, Benedikt A; Kemper, Valentin G; Formisano, Elia; Sorger, Bettina
2018-07-01
Paying selective attention to an audio frequency selectively enhances activity within primary auditory cortex (PAC) at the tonotopic site (frequency channel) representing that frequency. Animal PAC neurons achieve this 'frequency-specific attentional spotlight' by adapting their frequency tuning, yet comparable evidence in humans is scarce. Moreover, whether the spotlight operates in human midbrain is unknown. To address these issues, we studied the spectral tuning of frequency channels in human PAC and inferior colliculus (IC), using 7-T functional magnetic resonance imaging (FMRI) and frequency mapping, while participants focused on different frequency-specific sounds. We found that shifts in frequency-specific attention alter the response gain, but not tuning profile, of PAC frequency channels. The gain modulation was strongest in low-frequency channels and varied near-monotonically across the tonotopic axis, giving rise to the attentional spotlight. We observed less prominent, non-tonotopic spatial patterns of attentional modulation in IC. These results indicate that the frequency-specific attentional spotlight in human PAC as measured with FMRI arises primarily from tonotopic gain modulation, rather than adapted frequency tuning. Moreover, frequency-specific attentional modulation of afferent sound processing in human IC seems to be considerably weaker, suggesting that the spotlight diminishes toward this lower-order processing stage. Our study sheds light on how the human auditory pathway adapts to the different demands of selective hearing. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Noise Trauma Induced Plastic Changes in Brain Regions outside the Classical Auditory Pathway
Chen, Guang-Di; Sheppard, Adam; Salvi, Richard
2017-01-01
The effects of intense noise exposure on the classical auditory pathway have been extensively investigated; however, little is known about the effects of noise-induced hearing loss on non-classical auditory areas in the brain such as the lateral amygdala (LA) and striatum (Str). To address this issue, we compared the noise-induced changes in spontaneous and tone-evoked responses from multiunit clusters (MUC) in the LA and Str with those seen in auditory cortex (AC). High-frequency octave band noise (10–20 kHz) and narrow band noise (16–20 kHz) induced permanent threshold shifts (PTS) at high frequencies within and above the noise band, but not at low frequencies. While the noise trauma significantly elevated the spontaneous discharge rate (SR) in the AC, SRs in the LA and Str were only slightly increased across all frequencies. The high-frequency noise trauma affected tone-evoked firing rates in a frequency- and time-dependent manner, and the changes appeared to be related to the severity of the noise trauma. In the LA, tone-evoked firing rates were reduced at high frequencies (the trauma area), whereas firing rates were enhanced at low frequencies or at the edge frequency, depending on the severity of hearing loss at the high frequencies. The firing rate temporal profile changed from a broad plateau to one sharp, delayed peak. In the AC, tone-evoked firing rates were depressed at high frequencies and enhanced at low frequencies, while the firing rate temporal profiles became substantially broader. In contrast, firing rates in the Str were generally decreased, and firing rate temporal profiles became more phasic and less prolonged. The altered firing rates and patterns at low frequencies induced by high-frequency hearing loss could have perceptual consequences. The tone-evoked hyperactivity in low-frequency MUC could manifest as hyperacusis, whereas the discharge pattern changes could affect temporal resolution and integration. PMID:26701290
Richard, Nelly; Laursen, Bettina; Grupe, Morten; Drewes, Asbjørn M; Graversen, Carina; Sørensen, Helge B D; Bastlund, Jesper F
2017-04-01
Active auditory oddball paradigms are simple tone discrimination tasks used to study the P300 deflection of event-related potentials (ERPs). These ERPs may be quantified by time-frequency analysis. As auditory stimuli cause early high frequency and late low frequency ERP oscillations, the continuous wavelet transform (CWT) is often chosen for decomposition due to its multi-resolution properties. However, as the conventional CWT traditionally applies only one mother wavelet to represent the entire spectrum, the time-frequency resolution is not optimal across all scales. To account for this, we developed and validated a novel method specifically refined to analyse P300-like ERPs in rats. An adapted CWT (aCWT) was implemented to preserve high time-frequency resolution across all scales by commissioning of multiple wavelets operating at different scales. First, decomposition of simulated ERPs was illustrated using the classical CWT and the aCWT. Next, the two methods were applied to EEG recordings obtained from prefrontal cortex in rats performing a two-tone auditory discrimination task. While only early ERP frequency changes between responses to target and non-target tones were detected by the CWT, both early and late changes were successfully described with strong accuracy by the aCWT in rat ERPs. Increased frontal gamma power and phase synchrony was observed particularly within theta and gamma frequency bands during deviant tones. The study suggests superior performance of the aCWT over the CWT in terms of detailed quantification of time-frequency properties of ERPs. Our methodological investigation indicates that accurate and complete assessment of time-frequency components of short-time neural signals is feasible with the novel analysis approach which may be advantageous for characterisation of several types of evoked potentials in particularly rodents.
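The core idea behind the aCWT, pairing differently tuned wavelets with different scales rather than stretching a single mother wavelet, can be illustrated with a complex Morlet transform whose cycle count varies per frequency band. This is a generic sketch of that idea; the specific wavelets and parameters of the paper's aCWT are not specified here, and all names and values below are assumptions:

```python
import numpy as np

def morlet_cwt(signal, fs, freqs, cycles):
    """Morlet wavelet transform where each frequency gets its own
    cycle count, i.e. its own time-frequency trade-off per band."""
    n = len(signal)
    out = np.empty((len(freqs), n), dtype=complex)
    for i, (f, c) in enumerate(zip(freqs, cycles)):
        sigma_t = c / (2 * np.pi * f)                 # temporal width of band i
        t = np.arange(-4 * sigma_t, 4 * sigma_t, 1.0 / fs)
        w = np.exp(2j * np.pi * f * t) * np.exp(-t ** 2 / (2 * sigma_t ** 2))
        w /= np.sqrt(np.sum(np.abs(w) ** 2))          # unit-energy wavelet
        out[i] = np.convolve(signal, w, mode="same")
    return out

# Assumed toy ERP-like signal: early 40-Hz burst, late 5-Hz wave.
fs = 500.0
t = np.arange(0.0, 2.0, 1.0 / fs)
sig = np.sin(2 * np.pi * 40 * t) * (t < 0.3) + np.sin(2 * np.pi * 5 * t) * (t > 1.0)

freqs = np.array([5.0, 40.0])
cycles = np.array([3.0, 7.0])   # fewer cycles at low f -> better time resolution there
power = np.abs(morlet_cwt(sig, fs, freqs, cycles)) ** 2
```

With per-band cycle counts, the low-frequency row keeps reasonable temporal localization of the late component while the high-frequency row retains frequency selectivity for the early burst, which is the resolution trade-off the abstract attributes to using multiple wavelets.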
Auditory and Linguistic Processes in the Perception of Intonation Contours.
ERIC Educational Resources Information Center
Studdert-Kennedy, Michael; Hadding, Kerstin
By examining the relations among sections of the fundamental frequency contour used in judging an utterance as a question or statement, the experiment described in this report seeks a more detailed understanding of auditory-linguistic interaction in the perception of intonation contours. The perceptual process may be divided into stages (auditory,…
Effect of FM Auditory Trainers on Attending Behaviors of Learning-Disabled Children.
ERIC Educational Resources Information Center
Blake, Ruth; And Others
1991-01-01
This study investigated the effect of FM (frequency modulation) auditory trainer use on attending behaviors of 36 students (ages 5-10) with learning disabilities. Children wearing the auditory trainers scored better than control students on eye contact, having body turned toward sound source, and absence of extraneous body movement and vocal…
Responses of auditory-cortex neurons to structural features of natural sounds.
Nelken, I; Rotman, Y; Bar Yosef, O
1999-01-14
Sound-processing strategies that use the highly non-random structure of natural sounds may confer evolutionary advantage to many species. Auditory processing of natural sounds has been studied almost exclusively in the context of species-specific vocalizations, although these form only a small part of the acoustic biotope. To study the relationships between properties of natural soundscapes and neuronal processing mechanisms in the auditory system, we analysed sound from a range of different environments. Here we show that for many non-animal sounds and background mixtures of animal sounds, energy in different frequency bands is coherently modulated. Co-modulation of different frequency bands in background noise facilitates the detection of tones in noise by humans, a phenomenon known as co-modulation masking release (CMR). We show that co-modulation also improves the ability of auditory-cortex neurons to detect tones in noise, and we propose that this property of auditory neurons may underlie behavioural CMR. This correspondence may represent an adaptation of the auditory system for the use of an attribute of natural sounds to facilitate real-world processing tasks.
Wang, Hsiao-Lan S; Chen, I-Chen; Chiang, Chun-Han; Lai, Ying-Hui; Tsao, Yu
2016-10-01
The current study examined the associations between basic auditory perception, speech prosodic processing, and vocabulary development in Chinese kindergartners; specifically, whether early basic auditory perception may be related to linguistic prosodic processing in Chinese Mandarin vocabulary acquisition. A series of language, auditory, and linguistic prosodic tests were given to 100 preschool children who had not yet learned how to read Chinese characters. The results suggested that lexical tone sensitivity and intonation production were significantly correlated with children's general vocabulary abilities. In particular, tone awareness was associated with comprehensive language development, whereas intonation production was associated with both comprehensive and expressive language development. Regression analyses revealed that tone sensitivity accounted for 36% of the unique variance in vocabulary development, whereas intonation production accounted for 6% of the variance in vocabulary development. Moreover, auditory frequency discrimination was significantly correlated with lexical tone sensitivity, syllable duration discrimination, and intonation production in Mandarin Chinese, and it contributed significantly to tone sensitivity and intonation production. © The Author(s) 2016.
Frequency-specific adaptation and its underlying circuit model in the auditory midbrain.
Shen, Li; Zhao, Lingyun; Hong, Bo
2015-01-01
Receptive fields of sensory neurons are considered to be dynamic and depend on the stimulus history. In the auditory system, evidence of dynamic frequency-receptive fields has been found following stimulus-specific adaptation (SSA). However, the underlying mechanism and circuitry of SSA have not been fully elucidated. Here, we studied how frequency-receptive fields of neurons in rat inferior colliculus (IC) changed when exposed to a biased tone sequence. Pure tone with one specific frequency (adaptor) was presented markedly more often than others. The adapted tuning was compared with the original tuning measured with an unbiased sequence. We found inhomogeneous changes in frequency tuning in IC, exhibiting a center-surround pattern with respect to the neuron's best frequency. Central adaptors elicited strong suppressive and repulsive changes while flank adaptors induced facilitative and attractive changes. Moreover, we proposed a two-layer model of the underlying network, which not only reproduced the adaptive changes in the receptive fields but also predicted novelty responses to oddball sequences. These results suggest that frequency-specific adaptation in auditory midbrain can be accounted for by an adapted frequency channel and its lateral spreading of adaptation, which shed light on the organization of the underlying circuitry.
ERIC Educational Resources Information Center
Marcus, Ann; Sinnott, Brigit; Bradley, Stephen; Grey, Ian
2010-01-01
This study aimed to examine the effectiveness of a simplified habit reversal procedure (SHR) using differential reinforcement of incompatible behaviour (DRI) and a stimulus prompt (GaitSpot Auditory Squeakers) to reduce the frequency of idiopathic toe-walking (ITW) and increase the frequency of correct heel-to-toe-walking in three children with…
Cheng, Liang; Wang, Shao-Hui; Peng, Kang; Liao, Xiao-Mei
2017-01-01
Most people are exposed daily to environmental noise at moderate levels for short durations. The aim of the present study was to determine the effects of daily short-term exposure to moderate noise on sound level processing in the auditory midbrain. Sound processing properties of auditory midbrain neurons were recorded in anesthetized mice exposed to moderate noise (80 dB SPL, 2 h/d for 6 weeks) and were compared with those from age-matched controls. Neurons in exposed mice had a higher minimum threshold and maximum response intensity, a longer first spike latency, and a steeper slope and narrower dynamic range of the rate-level function. However, these changes were greater in neurons with best frequencies within the noise exposure frequency range than in those outside it. These sound processing properties also remained abnormal after a 12-week recovery period in a quiet laboratory environment following the noise exposure. In conclusion, even daily short-term exposure to moderate noise can cause long-term impairment of sound level processing in a frequency-specific manner in auditory midbrain neurons.
NASA Astrophysics Data System (ADS)
Fishman, Yonatan I.; Arezzo, Joseph C.; Steinschneider, Mitchell
2004-09-01
Auditory stream segregation refers to the organization of sequential sounds into "perceptual streams" reflecting individual environmental sound sources. In the present study, sequences of alternating high and low tones, "...ABAB...," similar to those used in psychoacoustic experiments on stream segregation, were presented to awake monkeys while neural activity was recorded in primary auditory cortex (A1). Tone frequency separation (ΔF), tone presentation rate (PR), and tone duration (TD) were systematically varied to examine whether neural responses correlate with effects of these variables on perceptual stream segregation. "A" tones were fixed at the best frequency of the recording site, while "B" tones were displaced in frequency from "A" tones by an amount equal to ΔF. As PR increased, "B" tone responses decreased in amplitude to a greater extent than "A" tone responses, yielding neural response patterns dominated by "A" tone responses occurring at half the alternation rate. Increasing TD facilitated the differential attenuation of "B" tone responses. These findings parallel psychoacoustic data and suggest a physiological model of stream segregation whereby increasing ΔF, PR, or TD enhances spatial differentiation of "A" tone and "B" tone responses along the tonotopic map in A1.
Attention-driven auditory cortex short-term plasticity helps segregate relevant sounds from noise.
Ahveninen, Jyrki; Hämäläinen, Matti; Jääskeläinen, Iiro P; Ahlfors, Seppo P; Huang, Samantha; Lin, Fa-Hsuan; Raij, Tommi; Sams, Mikko; Vasios, Christos E; Belliveau, John W
2011-03-08
How can we concentrate on relevant sounds in noisy environments? A "gain model" suggests that auditory attention simply amplifies relevant and suppresses irrelevant afferent inputs. However, it is unclear whether this suffices when attended and ignored features overlap to stimulate the same neuronal receptive fields. A "tuning model" suggests that, in addition to gain, attention modulates feature selectivity of auditory neurons. We recorded magnetoencephalography, EEG, and functional MRI (fMRI) while subjects attended to tones delivered to one ear and ignored opposite-ear inputs. The attended ear was switched every 30 s to quantify how quickly the effects evolve. To produce overlapping inputs, the tones were presented alone vs. during white-noise masking notch-filtered ±1/6 octaves around the tone center frequencies. Amplitude modulation (39 vs. 41 Hz in opposite ears) was applied for "frequency tagging" of attention effects on maskers. Noise masking reduced early (50-150 ms; N1) auditory responses to unattended tones. In support of the tuning model, selective attention canceled out this attenuating effect but did not modulate the gain of 50-150 ms activity to nonmasked tones or steady-state responses to the maskers themselves. These tuning effects originated at nonprimary auditory cortices, purportedly occupied by neurons that, without attention, have wider frequency tuning than ±1/6 octaves. The attentional tuning evolved rapidly, during the first few seconds after attention switching, and correlated with behavioral discrimination performance. In conclusion, a simple gain model alone cannot explain auditory selective attention. In nonprimary auditory cortices, attention-driven short-term plasticity retunes neurons to segregate relevant sounds from noise.
Horacek, Jiri; Brunovsky, Martin; Novak, Tomas; Skrdlantova, Lucie; Klirova, Monika; Bubenikova-Valesova, Vera; Krajca, Vladimir; Tislerova, Barbora; Kopecek, Milan; Spaniel, Filip; Mohr, Pavel; Höschl, Cyril
2007-01-01
Auditory hallucinations are characteristic symptoms of schizophrenia with high clinical importance. It was repeatedly reported that low frequency (
Bishop, Dorothy V.M.; McArthur, Genevieve M.
2005-01-01
It has frequently been claimed that children with specific language impairment (SLI) have impaired auditory perception, but there is much controversy about the role of such deficits in causing their language problems, and it has been difficult to establish solid, replicable findings in this area. Discrepancies in this field may arise because (a) a focus on mean results obscures the heterogeneity in the population and (b) insufficient attention has been paid to maturational aspects of auditory processing. We conducted a study of 16 young people with specific language impairment (SLI) and 16 control participants, 24 of whom had had auditory event-related potentials (ERPs) and frequency discrimination thresholds assessed 18 months previously. When originally assessed, around one third of the listeners with SLI had poor behavioural frequency discrimination thresholds, and these tended to be the younger participants. However, most of the SLI group had age-inappropriate late components of the auditory ERP, regardless of their frequency discrimination. At follow-up, the behavioural thresholds of those with poor frequency discrimination improved, though some remained outside the control range. At follow-up, ERPs for many of the individuals in the SLI group were still not age-appropriate. In several cases, waveforms of individuals in the SLI group resembled those of younger typically-developing children, though in other cases the waveform was unlike that of control cases at any age. Electrophysiological methods may reveal underlying immaturity or other abnormality of auditory processing even when behavioural thresholds look normal. This study emphasises the variability seen in SLI, and the importance of studying individual cases rather than focusing on group means. PMID:15871598
Amaral, Joice A T; Nogueira, Marcela L; Roque, Adriano L; Guida, Heraldo L; De Abreu, Luiz Carlos; Raimundo, Rodrigo Daminello; Vanderlei, Luiz Carlos M; Ribeiro, Vivian L; Ferreira, Celso; Valenti, Vitor E
2014-03-01
The effects of chronic music auditory stimulation on the cardiovascular system have been investigated in the literature. However, data regarding the acute effects of different styles of music on cardiac autonomic regulation are lacking. The literature has indicated that auditory stimulation with white noise above 50 dB induces cardiac responses. We aimed to evaluate the acute effects of classical baroque and heavy metal music of different intensities on cardiac autonomic regulation. The study was performed in 16 healthy men aged 18-25 years. All procedures were performed in the same soundproof room. We analyzed heart rate variability (HRV) in the time domain (standard deviation of normal-to-normal R-R intervals [SDNN], root-mean square of successive differences [RMSSD], and percentage of adjacent NN intervals differing by more than 50 ms [pNN50]) and the frequency domain (low frequency [LF], high frequency [HF], and LF/HF ratio). HRV was recorded at rest for 10 minutes. Subsequently, the volunteers were exposed to one of the two musical styles (classical baroque or heavy metal music) for five minutes through an earphone, followed by a five-minute period of rest, and then they were exposed to the other style for another five minutes. The subjects were exposed to three equivalent sound levels (60-70 dB, 70-80 dB, and 80-90 dB). The sequence of songs was randomized for each individual. Auditory stimulation with heavy metal music did not influence HRV indices in the time and frequency domains in any of the three equivalent sound level ranges. The same was observed with classical baroque musical auditory stimulation at the three equivalent sound level ranges. Musical auditory stimulation of different intensities did not influence cardiac autonomic regulation in men.
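The time-domain HRV indices named above (SDNN, RMSSD, pNN50) are standard quantities computed from the series of normal-to-normal R-R intervals. A minimal sketch follows, with a synthetic R-R series as an assumed input; the function name and example values are illustrative, not from the study:

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """Time-domain HRV indices from normal-to-normal R-R intervals (ms).

    SDNN  : sample standard deviation of the R-R intervals.
    RMSSD : root-mean-square of successive R-R differences.
    pNN50 : percentage of successive differences exceeding 50 ms.
    """
    rr = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr)
    sdnn = np.std(rr, ddof=1)
    rmssd = np.sqrt(np.mean(diffs ** 2))
    pnn50 = 100.0 * np.mean(np.abs(diffs) > 50.0)
    return sdnn, rmssd, pnn50

# Assumed synthetic recording: ~75 bpm with mild beat-to-beat variability.
rng = np.random.default_rng(1)
rr = 800.0 + rng.normal(0.0, 30.0, 300)
sdnn, rmssd, pnn50 = hrv_time_domain(rr)
```

SDNN reflects overall variability, while RMSSD and pNN50 emphasize short-term, beat-to-beat changes, which is why the abstract groups them together as time-domain indices.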
Deike, Susann; Deliano, Matthias; Brechmann, André
2016-10-01
One hypothesis concerning the neural underpinnings of auditory streaming states that frequency tuning of tonotopically organized neurons in primary auditory fields, in combination with physiological forward suppression, is necessary for the separation of representations of high-frequency A and low-frequency B tones. The extent of spatial overlap between the tonotopic activations of A and B tones is thought to underlie the perceptual organization of streaming sequences into one coherent or two separate streams. The present study attempts to interfere with these mechanisms by transcranial direct current stimulation (tDCS) and to probe behavioral outcomes reflecting the perception of ABAB streaming sequences. We hypothesized that tDCS, by modulating cortical excitability, causes a change in the separateness of the representations of A and B tones, which leads to a change in the proportions of one-stream and two-stream percepts. To test this, 22 subjects were presented with ambiguous ABAB sequences of three different frequency separations (ΔF) and had to decide on their current percept after receiving sham, anodal, or cathodal tDCS over the left auditory cortex. We could confirm our hypothesis at the most ambiguous ΔF condition of 6 semitones. For anodal compared with sham and cathodal stimulation, we found a significant decrease in the proportion of two-stream perception and an increase in the proportion of one-stream perception. The results demonstrate the feasibility of using tDCS to probe mechanisms underlying auditory streaming through the use of various behavioral measures. Moreover, this approach allows one to probe the functions of auditory regions and their interactions with other processing stages. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Human Time-Frequency Acuity Beats the Fourier Uncertainty Principle
NASA Astrophysics Data System (ADS)
Oppenheim, Jacob N.; Magnasco, Marcelo O.
2013-01-01
The time-frequency uncertainty principle states that the product of the temporal and frequency extents of a signal cannot be smaller than 1/(4π). We study human ability to simultaneously judge the frequency and the timing of a sound. Our subjects often exceeded the uncertainty limit, sometimes by more than tenfold, mostly through remarkable timing acuity. Our results establish a lower bound for the nonlinearity and complexity of the algorithms employed by our brains in parsing transient sounds, rule out simple “linear filter” models of early auditory processing, and highlight timing acuity as a central feature in auditory object processing.
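The stated bound is the Gabor limit: for RMS measures of duration and bandwidth, ∆t·∆f ≥ 1/(4π), with equality for a Gaussian envelope. A short numerical check of the equality case (our sketch, not the authors' analysis; the envelope width is an arbitrary choice):

```python
import numpy as np

# Gabor limit: RMS duration x RMS bandwidth >= 1/(4*pi),
# with equality for a Gaussian envelope (checked numerically here).
sigma = 3e-3                          # 3 ms Gaussian envelope (arbitrary)
t = np.linspace(-0.1, 0.1, 1 << 14)  # time grid much wider than sigma
g = np.exp(-t**2 / (2 * sigma**2))

def rms_width(x, w):
    """RMS width of the normalized power distribution |w|^2 over axis x."""
    p = np.abs(w) ** 2
    p = p / p.sum()
    mu = (x * p).sum()
    return np.sqrt(((x - mu) ** 2 * p).sum())

dt = rms_width(t, g)                  # = sigma / sqrt(2) for a Gaussian
G = np.fft.fftshift(np.fft.fft(g))
f = np.fft.fftshift(np.fft.fftfreq(t.size, d=t[1] - t[0]))
df = rms_width(f, G)
product = dt * df                     # equals 1/(4*pi) for the Gaussian
```

Human listeners in the study beat this product, which is only possible because the auditory system is not a linear time-frequency analyzer bound by the theorem's assumptions.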
NADPH-diaphorase activity and neurovascular coupling in the rat cerebral cortex.
Vlasenko, O V; Maisky, V A; Maznychenko, A V; Pilyavskii, A I
2008-01-01
The distribution of NADPH-diaphorase-reactive (NADPH-dr) neurons and neuronal processes in the cerebral cortex and basal forebrain, and their association with parenchymal vessels, were studied in normal adult rats using an NADPH-d histochemical protocol. The intensely stained cortical interneurons, reactive subcortically originating afferents, and stained microvessels were examined through a light microscope at low (×250) and high (×630) magnifications. NADPH-dr interneurons were concentrated in layers 2-6 of the M1 and M2 areas; however, a clear predominance in their concentration (14 ± 0.8 per section, P < 0.05) was found in layer 6. The mean number of labeled neurons in the auditory (AuV) and the granular and agranular (GI, AIP) areas of the insular cortex reached 12.3 ± 0.7, 18.5 ± 1.0 and 23.3 ± 1.7 units per section, respectively (P < 0.05). A distinct apposition of labeled neurons to intracortical vessels was found in M1 and M2. The frequency of neurovascular coupling in different zones of the cerebral cortex followed the sequence AuV (31.2%, n = 1040) > GI (18.0%, n = 640) > S1 (13.3%, n = 720) > M1 (6.3%, n = 1360). The large number of structural associations between labeled cells and vessels in the temporal and insular cortex indicates that NADPH-d-reactive interneurons can contribute to the regulation of regional cerebral blood flow in these areas.
Development of auditory sensitivity in budgerigars (Melopsittacus undulatus)
NASA Astrophysics Data System (ADS)
Brittan-Powell, Elizabeth F.; Dooling, Robert J.
2004-06-01
Auditory feedback influences the development of vocalizations in songbirds and parrots; however, little is known about the development of hearing in these birds. The auditory brainstem response (ABR) was used to track the development of auditory sensitivity in budgerigars from hatch to 6 weeks of age. Responses were first obtained from 1-week-old birds at high stimulation levels at frequencies at or below 2 kHz, showing that budgerigars do not hear well at hatch. Over the next week, thresholds improved markedly, and responses were obtained for almost all test frequencies throughout the range of hearing by 14 days. By 3 weeks posthatch, the birds' best sensitivity had shifted from 2 to 2.86 kHz, and the shape of the ABR audiogram became similar to that of adult budgerigars. About a week before leaving the nest, ABR audiograms of young budgerigars are very similar to those of adult birds. These data complement what is known about vocal development in budgerigars and show that hearing is fully developed by the time that vocal learning begins.
ERIC Educational Resources Information Center
Kargas, Niko; López, Beatriz; Reddy, Vasudevi; Morris, Paul
2015-01-01
Current views suggest that autism spectrum disorders (ASDs) are characterised by enhanced low-level auditory discrimination abilities. Little is known, however, about whether enhanced abilities are universal in ASD and how they relate to symptomatology. We tested auditory discrimination for intensity, frequency and duration in 21 adults with ASD…
ERIC Educational Resources Information Center
Beauchamp, Chris M.; Stelmack, Robert M.
2006-01-01
The relation between intelligence and speed of auditory discrimination was investigated during an auditory oddball task with backward masking. In target discrimination conditions that varied in the interval between the target and the masking stimuli and in the tonal frequency of the target and masking stimuli, higher ability participants (HA)…
ERIC Educational Resources Information Center
Lincoln, Michelle; Packman, Ann; Onslow, Mark; Jones, Mark
2010-01-01
Purpose: To investigate the impact on percentage of syllables stuttered of various durations of delayed auditory feedback (DAF), levels of frequency-altered feedback (FAF), and masking auditory feedback (MAF) during conversational speech. Method: Eleven adults who stuttered produced 10-min conversational speech samples during a control condition…
Neilans, Erikson G; Dent, Micheal L
2015-02-01
Auditory scene analysis has been suggested as a universal process that exists across all animals. Relative to humans, however, little work has been devoted to how animals perceptually isolate different sound sources. Frequency separation of sounds is arguably the most common parameter studied in auditory streaming, but it is not the only factor contributing to how the auditory scene is perceived. Researchers have found that in humans, even at large frequency separations, synchronous tones are heard as a single auditory stream, whereas asynchronous tones with the same frequency separations are perceived as 2 distinct sounds. These findings demonstrate how both the timing and frequency separation of sounds are important for auditory scene analysis. It is unclear how animals, such as budgerigars (Melopsittacus undulatus), perceive synchronous and asynchronous sounds. In this study, budgerigars and humans (Homo sapiens) were tested on their perception of synchronous, asynchronous, and partially overlapping pure tones using the same psychophysical procedures. Species differences were found between budgerigars and humans in how partially overlapping sounds were perceived, with budgerigars more likely to segregate overlapping sounds and humans more apt to fuse the 2 sounds together. The results also illustrated that temporal cues are particularly important for stream segregation of overlapping sounds. Lastly, budgerigars were found to segregate partially overlapping sounds in a manner predicted by computational models of streaming, whereas humans were not. PsycINFO Database Record (c) 2015 APA, all rights reserved.
Wang, Qingcui; Bao, Ming; Chen, Lihan
2014-01-01
Previous studies using auditory sequences with rapid repetition of tones revealed that spatiotemporal cues and spectral cues are important cues used to fuse or segregate sound streams. However, the perceptual grouping was partially driven by the cognitive processing of the periodicity cues of the long sequence. Here, we investigate whether perceptual groupings (spatiotemporal grouping vs. frequency grouping) are also applicable to short auditory sequences, where auditory perceptual organization is mainly subserved by lower levels of perceptual processing. To answer this question, we conducted two experiments using an auditory Ternus display. The display was composed of three speakers (A, B and C), with each speaker consecutively emitting one sound, forming two frames (AB and BC). Experiment 1 manipulated both spatial and temporal factors. We implemented three 'within-frame intervals' (WFIs, the intervals between A and B and between B and C), seven 'inter-frame intervals' (IFIs, the intervals between AB and BC) and two different speaker layouts (inter-speaker distance: near or far). Experiment 2 manipulated the frequency difference between the two auditory frames, in addition to the spatiotemporal cues of Experiment 1. Listeners made two-alternative forced choices (2AFC) to report the perception of a given Ternus display: element motion (auditory apparent motion from sound A to B to C) or group motion (auditory apparent motion from sound 'AB' to 'BC'). The results indicate that the perceptual grouping of short auditory sequences (materialized by the perceptual decisions on the auditory Ternus display) was modulated by temporal and spectral cues, with the latter contributing more to segregating auditory events. Spatial layout plays a lesser role in perceptual organization. These results can be accounted for by the 'peripheral channeling' theory.
ERIC Educational Resources Information Center
Saltuklaroglu, Tim; Kalinowski, Joseph; Robbins, Mary; Crawcour, Stephen; Bowers, Andrew
2009-01-01
Background: Stuttering is prone to strike during speech initiation more so than at any other point in an utterance. The use of altered auditory feedback (AAF) has been found to produce robust decreases in stuttering frequency by creating an electronic rendition of choral speech (i.e., speaking in unison). However, AAF requires users to self-initiate…
Thresholding of auditory cortical representation by background noise
Liang, Feixue; Bai, Lin; Tao, Huizhong W.; Zhang, Li I.; Xiao, Zhongju
2014-01-01
It is generally thought that background noise can mask auditory information. However, how noise specifically transforms neuronal auditory processing in a level-dependent manner remains to be carefully determined. Here, with in vivo loose-patch cell-attached recordings in layer 4 of the rat primary auditory cortex (A1), we systematically examined how continuous wideband noise of different levels affected the receptive field properties of individual neurons. We found that background noise, when above a certain critical/effective level, resulted in an elevation of the intensity threshold for tone-evoked responses. This increase in threshold was linearly dependent on the noise intensity above the critical level. As such, the tonal receptive field (TRF) of individual neurons was translated upward as an entirety toward high intensities along the intensity domain. This preserved the preferred characteristic frequency (CF) and the overall shape of the TRF, but reduced the responsive frequency range and enhanced frequency selectivity at a given stimulus intensity. Such translational effects on intensity threshold were observed in both excitatory and fast-spiking inhibitory neurons, as well as in both monotonic and nonmonotonic (intensity-tuned) A1 neurons. Our results suggest that in a noise background, fundamental auditory representations are modulated through a background level-dependent linear shift along the intensity domain, which is equivalent to reducing stimulus intensity. PMID:25426029
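The level-dependent threshold shift described above can be summarized as a simple piecewise-linear rule. The function below is an illustrative sketch only; the parameter values are hypothetical, not fitted to the recordings:

```python
def tone_threshold(noise_db, base_db=20.0, critical_db=40.0, slope=1.0):
    """Illustrative model: below the critical noise level, the tone-evoked
    intensity threshold is unchanged; above it, the threshold (and with it
    the whole TRF) shifts upward linearly with noise level."""
    if noise_db <= critical_db:
        return base_db
    return base_db + slope * (noise_db - critical_db)

# Thresholds for noise at 30, 40, 50 and 70 dB
shifted = [tone_threshold(n) for n in (30, 40, 50, 70)]  # [20, 20, 30, 50]
```

Because the whole TRF translates by the same amount, CF and tuning shape are preserved while the frequency range responsive at any fixed stimulus level narrows, matching the abstract's description.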
The impact of perilaryngeal vibration on the self-perception of loudness and the Lombard effect.
Brajot, François-Xavier; Nguyen, Don; DiGiovanni, Jeffrey; Gracco, Vincent L
2018-04-05
The role of somatosensory feedback in speech and the perception of loudness was assessed in adults without speech or hearing disorders. Participants completed two tasks: loudness magnitude estimation of a short vowel and oral reading of a standard passage. Both tasks were carried out in each of three conditions: no-masking, auditory masking alone, and mixed auditory masking plus vibration of the perilaryngeal area. A Lombard effect was elicited in both masking conditions: speakers unconsciously increased vocal intensity. Perilaryngeal vibration further increased vocal intensity above what was observed for auditory masking alone. Both masking conditions affected fundamental frequency and the first formant frequency as well, but only vibration was associated with a significant change in the second formant frequency. An additional analysis of pure-tone thresholds found no difference in auditory thresholds between masking conditions. Taken together, these findings indicate that perilaryngeal vibration effectively masked somatosensory feedback, resulting in an enhanced Lombard effect (increased vocal intensity) that did not alter speakers' self-perception of loudness. This implies that the Lombard effect results from a general sensorimotor process, rather than from a specific audio-vocal mechanism, and that the conscious self-monitoring of speech intensity is not directly based on either auditory or somatosensory feedback.
AUDITORY ASSOCIATIVE MEMORY AND REPRESENTATIONAL PLASTICITY IN THE PRIMARY AUDITORY CORTEX
Weinberger, Norman M.
2009-01-01
Historically, the primary auditory cortex has been largely ignored as a substrate of auditory memory, perhaps because studies of associative learning could not reveal the plasticity of receptive fields (RFs). The use of a unified experimental design, in which RFs are obtained before and after standard training (e.g., classical and instrumental conditioning) revealed associative representational plasticity, characterized by facilitation of responses to tonal conditioned stimuli (CSs) at the expense of other frequencies, producing CS-specific tuning shifts. Associative representational plasticity (ARP) possesses the major attributes of associative memory: it is highly specific, discriminative, rapidly acquired, consolidates over hours and days and can be retained indefinitely. The nucleus basalis cholinergic system is sufficient both for the induction of ARP and for the induction of specific auditory memory, including control of the amount of remembered acoustic details. Extant controversies regarding the form, function and neural substrates of ARP appear largely to reflect different assumptions, which are explicitly discussed. The view that the forms of plasticity are task-dependent is supported by ongoing studies in which auditory learning involves CS-specific decreases in threshold or bandwidth without affecting frequency tuning. Future research needs to focus on the factors that determine ARP and their functions in hearing and in auditory memory. PMID:17344002
Auditory steady-state response in cochlear implant patients.
Torres-Fortuny, Alejandro; Arnaiz-Marquez, Isabel; Hernández-Pérez, Heivet; Eimil-Suárez, Eduardo
2018-03-19
Auditory steady-state responses to continuous amplitude-modulated tones at rates between 70 and 110 Hz have been proposed as a feasible alternative for objective frequency-specific audiometry in cochlear implant subjects. The aim of the present study was to obtain physiological thresholds by means of the auditory steady-state response in cochlear implant patients (Clarion HiRes 90K), with acoustic stimulation under free-field conditions, and to verify its biological origin. Eleven subjects comprised the sample. Four amplitude-modulated tones of 500, 1000, 2000 and 4000 Hz were used as stimuli, using the multiple-frequency technique. The auditory steady-state response was also recorded at an intensity of 0 dB HL, with a non-specific stimulus, and using a masking technique. The study enabled electrophysiological thresholds to be obtained for each subject in the sample. There were no auditory steady-state responses in either the 0 dB HL or the non-specific stimulus recordings. It was possible to obtain the masking thresholds. Differences between behavioral and electrophysiological thresholds of -6±16, -2±13, 0±22 and -8±18 dB were identified at 500, 1000, 2000 and 4000 Hz, respectively. The auditory steady-state response seems to be a suitable technique for evaluating the hearing threshold in cochlear implant subjects. Copyright © 2018 Sociedad Española de Otorrinolaringología y Cirugía de Cabeza y Cuello. Publicado por Elsevier España, S.L.U. All rights reserved.
Pre-attentive, context-specific representation of fear memory in the auditory cortex of rat.
Funamizu, Akihiro; Kanzaki, Ryohei; Takahashi, Hirokazu
2013-01-01
Neural representation in the auditory cortex is rapidly modulated by both top-down attention and bottom-up stimulus properties in order to improve perception in a given context. Learning-induced, pre-attentive map plasticity has also been studied in the anesthetized cortex; however, little attention has been paid to rapid, context-dependent modulation. We hypothesize that context-specific learning leads to pre-attentively modulated, multiplex representation in the auditory cortex. Here, we investigate map plasticity in the auditory cortices of anesthetized rats conditioned in a context-dependent manner, such that a conditioned stimulus (CS) of a 20-kHz tone and an unconditioned stimulus (US) of a mild electrical shock were associated only under a noisy auditory context, but not in silence. After the conditioning, although no distinct plasticity was found in the tonotopic map, tone-evoked responses were more noise-resistant than before conditioning. Yet the conditioned group showed a reduced spread of activation to each tone in noise, but not in silence, associated with a sharpening of frequency tuning. The encoding accuracy index of neurons showed that conditioning degraded the accuracy of tone-frequency representations in the noisy condition at off-CS regions, but not at CS regions, suggesting that arbitrary tones around the CS frequency were more likely perceived as the CS in the specific context in which the CS was associated with the US. Together, these results demonstrate that learning-induced plasticity in the auditory cortex occurs in a context-dependent manner.
Modification of computational auditory scene analysis (CASA) for noise-robust acoustic feature
NASA Astrophysics Data System (ADS)
Kwon, Minseok
While there have been many attempts to mitigate interference from background noise, the performance of automatic speech recognition (ASR) can still be degraded easily by various factors. Normal-hearing listeners, however, can accurately perceive the sounds they attend to, which is believed to be a result of auditory scene analysis (ASA). As a first attempt, a simulation of human auditory processing, called computational auditory scene analysis (CASA), was developed through physiological and psychological investigations of ASA. The CASA system comprised the Zilany-Bruce auditory model, followed by fundamental-frequency tracking for voiced segmentation and detection of onset/offset pairs at each characteristic frequency (CF) for unvoiced segmentation. The resulting time-frequency (T-F) representation of the acoustic stimulation was converted into an acoustic feature, gammachirp-tone frequency cepstral coefficients (GFCC). Eleven keywords recorded under various environmental conditions were used, and the robustness of GFCC was evaluated by spectral distance (SD) and dynamic time warping (DTW) distance. In "clean" and "noisy" conditions, the application of CASA generally improved the noise robustness of the acoustic feature compared with a conventional method, with or without noise suppression using an MMSE estimator. The initial study, however, not only showed noise-type dependency at low SNR but also called the evaluation methods into question. Some modifications were made to capture better spectral continuity from an acoustic feature matrix, to obtain faster processing speed, and to describe the human auditory system more precisely. The proposed framework includes: 1) multi-scale integration to capture more accurate continuity in feature extraction, 2) contrast enhancement (CE) of each CF by competition with neighboring frequency bands, and 3) auditory model modifications.
The model modifications include a higher Q factor, a middle-ear filter more analogous to the human auditory system, regulation of the time-constant update for filters in the signal/control paths, and level-independent frequency glides with fixed frequency modulation. First, we scrutinized performance in keyword recognition using the proposed methods in quiet and noise-corrupted environments. The results argue that multi-scale integration should be used along with CE in order to avoid ambiguous continuity in unvoiced segments. Moreover, the inclusion of all the modifications was observed to guarantee noise-type-independent robustness, particularly under severe interference. The CASA system with the auditory model was also implemented in single- and dual-channel ASR using the reference TIMIT corpus to obtain more general results. The hidden Markov model toolkit (HTK) was used for phone recognition in various environmental conditions. In single-channel ASR, the results argue that unmasked acoustic features (unmasked GFCC) should be combined with target estimates from the mask to compensate for missing information. Observations from dual-channel ASR show that the combined GFCC guarantees the highest performance regardless of interference within speech. Moreover, the consistent improvement of noise robustness by GFCC (unmasked or combined) shows the validity of the proposed CASA implementation in a dual-microphone system. In conclusion, the proposed framework demonstrates the robustness of the acoustic features under various background interferences via both direct distance evaluation and statistical assessment. In addition, the introduction of a dual-microphone system using this framework shows the potential for effective implementation of auditory model-based CASA in ASR.
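Dynamic time warping, used above as one of the feature-robustness measures, has a standard dynamic-programming formulation. A minimal sketch for 1-D sequences (for GFCC frame matrices, the scalar local cost would be replaced by a per-frame vector distance such as Euclidean distance):

```python
def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences,
    with absolute difference as the local cost and unit-step moves
    (match, insertion, deletion)."""
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j]: minimal accumulated cost aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

A smaller DTW distance between clean-speech and noisy-speech feature sequences indicates a more noise-robust feature, which is how the measure is used in the evaluation described above.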
Sensitivity and specificity of auditory steady‐state response testing
Rabelo, Camila Maia; Schochat, Eliane
2011-01-01
INTRODUCTION: The ASSR test is an electrophysiological test that evaluates, among other aspects, neural synchrony, based on the frequency or amplitude modulation of tones. OBJECTIVE: The aim of this study was to determine the sensitivity and specificity of auditory steady‐state response testing in detecting lesions and dysfunctions of the central auditory nervous system. METHODS: Seventy volunteers were divided into three groups: those with normal hearing; those with mesial temporal sclerosis; and those with central auditory processing disorder. All subjects underwent auditory steady‐state response testing of both ears at 500 Hz and 2000 Hz (frequency modulation, 46 Hz). The difference between auditory steady‐state response‐estimated thresholds and behavioral thresholds (audiometric evaluation) was calculated. RESULTS: Estimated thresholds were significantly higher in the mesial temporal sclerosis group than in the normal and central auditory processing disorder groups. In addition, the difference between auditory steady‐state response‐estimated and behavioral thresholds was greatest in the mesial temporal sclerosis group when compared to the normal group than in the central auditory processing disorder group compared to the normal group. DISCUSSION: Research focusing on central auditory nervous system (CANS) lesions has shown that individuals with CANS lesions present a greater difference between ASSR‐estimated thresholds and actual behavioral thresholds; ASSR‐estimated thresholds being significantly worse than behavioral thresholds in subjects with CANS insults. This is most likely because the disorder prevents the transmission of the sound stimulus from being in phase with the received stimulus, resulting in asynchronous transmitter release. Another possible cause of the greater difference between the ASSR‐estimated thresholds and the behavioral thresholds is impaired temporal resolution. 
CONCLUSIONS: The overall sensitivity of auditory steady‐state response testing was lower than its overall specificity. Although the overall specificity was high, it was lower in the central auditory processing disorder group than in the mesial temporal sclerosis group. Overall sensitivity was also lower in the central auditory processing disorder group than in the mesial temporal sclerosis group. PMID:21437442
Lavender, Ashley L; Bartol, Soraya M; Bartol, Ian K
2014-07-15
Sea turtles reside in different acoustic environments with each life history stage and may have different hearing capacity throughout ontogeny. For this study, two independent yet complementary techniques for hearing assessment, i.e. behavioral and electrophysiological audiometry, were employed to (1) measure hearing in post-hatchling and juvenile loggerhead sea turtles Caretta caretta (19-62 cm straight carapace length) to determine whether these migratory turtles exhibit an ontogenetic shift in underwater auditory detection and (2) evaluate whether hearing frequency range and threshold sensitivity are consistent in behavioral and electrophysiological tests. Behavioral trials first required training turtles to respond to known frequencies, a multi-stage, time-intensive process, and then recording their behavior when they were presented with sound stimuli from an underwater speaker using a two-response forced-choice paradigm. Electrophysiological experiments involved submerging restrained, fully conscious turtles just below the air-water interface and recording auditory evoked potentials (AEPs) when sound stimuli were presented using an underwater speaker. No significant differences in behavior-derived auditory thresholds or AEP-derived auditory thresholds were detected between post-hatchling and juvenile sea turtles. While hearing frequency range (50-1000/1100 Hz) and highest sensitivity (100-400 Hz) were consistent in audiograms pooled by size class for both behavior and AEP experiments, both post-hatchlings and juveniles had significantly higher AEP-derived than behavior-derived auditory thresholds, indicating that behavioral assessment is a more sensitive testing approach. The results from this study suggest that post-hatchling and juvenile loggerhead sea turtles are low-frequency specialists, exhibiting little differences in threshold sensitivity and frequency bandwidth despite residence in acoustically distinct environments throughout ontogeny. © 2014. Published by The Company of Biologists Ltd.
"Change deafness" arising from inter-feature masking within a single auditory object.
Barascud, Nicolas; Griffiths, Timothy D; McAlpine, David; Chait, Maria
2014-03-01
Our ability to detect prominent changes in complex acoustic scenes depends not only on the ear's sensitivity but also on the capacity of the brain to process competing incoming information. Here, employing a combination of psychophysics and magnetoencephalography (MEG), we investigate listeners' sensitivity in situations in which two features belonging to the same auditory object change in close succession. The auditory object under investigation is a sequence of tone pips characterized by a regularly repeating frequency pattern. Signals consisted of an initial, regularly alternating sequence of three short (60 msec) pure tone pips (in the form ABCABC…) followed by a long pure tone whose frequency either is expected based on the ongoing regular pattern ("LONG-expected") or constitutes a pattern violation ("LONG-unexpected"). The change in LONG-expected is manifest as a change in duration (when the long pure tone exceeds the established duration of a tone pip), whereas the change in LONG-unexpected is manifest as a change in both the frequency pattern and the duration. Our results reveal a form of "change deafness": although changes in both the frequency pattern and the expected duration appear to be processed effectively by the auditory system (cortical signatures of both changes are evident in the MEG data), listeners often fail to detect changes in the frequency pattern when that change is closely followed by a change in duration. By systematically manipulating the properties of the changing features and measuring behavioral and MEG responses, we demonstrate that feature changes within the same auditory object that occur close together in time appear to compete for perceptual resources.
Kerbler, Georg M.; Nedelska, Zuzana; Fripp, Jurgen; Laczó, Jan; Vyhnalek, Martin; Lisý, Jiří; Hamlin, Adam S.; Rose, Stephen; Hort, Jakub; Coulson, Elizabeth J.
2015-01-01
The basal forebrain degenerates in Alzheimer’s disease (AD) and this process is believed to contribute to the cognitive decline observed in AD patients. Impairment in spatial navigation is an early feature of the disease but whether basal forebrain dysfunction in AD is responsible for the impaired navigation skills of AD patients is not known. Our objective was to investigate the relationship between basal forebrain volume and performance in real space as well as computer-based navigation paradigms in an elderly cohort comprising cognitively normal controls, subjects with amnestic mild cognitive impairment and those with AD. We also tested whether basal forebrain volume could predict the participants’ ability to perform allocentric- vs. egocentric-based navigation tasks. The basal forebrain volume was calculated from 1.5 T magnetic resonance imaging (MRI) scans, and navigation skills were assessed using the human analog of the Morris water maze employing allocentric, egocentric, and mixed allo/egocentric real space as well as computerized tests. When considering the entire sample, we found that basal forebrain volume correlated with spatial accuracy in allocentric (cued) and mixed allo/egocentric navigation tasks but not the egocentric (uncued) task, demonstrating an important role of the basal forebrain in mediating cue-based spatial navigation capacity. Regression analysis revealed that, although hippocampal volume reflected navigation performance across the entire sample, basal forebrain volume contributed to mixed allo/egocentric navigation performance in the AD group, whereas hippocampal volume did not. This suggests that atrophy of the basal forebrain contributes to aspects of navigation impairment in AD that are independent of hippocampal atrophy. PMID:26441643
Attention-driven auditory cortex short-term plasticity helps segregate relevant sounds from noise
Ahveninen, Jyrki; Hämäläinen, Matti; Jääskeläinen, Iiro P.; Ahlfors, Seppo P.; Huang, Samantha; Raij, Tommi; Sams, Mikko; Vasios, Christos E.; Belliveau, John W.
2011-01-01
How can we concentrate on relevant sounds in noisy environments? A “gain model” suggests that auditory attention simply amplifies relevant and suppresses irrelevant afferent inputs. However, it is unclear whether this suffices when attended and ignored features overlap to stimulate the same neuronal receptive fields. A “tuning model” suggests that, in addition to gain, attention modulates feature selectivity of auditory neurons. We recorded magnetoencephalography, EEG, and functional MRI (fMRI) while subjects attended to tones delivered to one ear and ignored opposite-ear inputs. The attended ear was switched every 30 s to quantify how quickly the effects evolve. To produce overlapping inputs, the tones were presented alone vs. during white-noise masking notch-filtered ±1/6 octaves around the tone center frequencies. Amplitude modulation (39 vs. 41 Hz in opposite ears) was applied for “frequency tagging” of attention effects on maskers. Noise masking reduced early (50–150 ms; N1) auditory responses to unattended tones. In support of the tuning model, selective attention canceled out this attenuating effect but did not modulate the gain of 50–150 ms activity to nonmasked tones or steady-state responses to the maskers themselves. These tuning effects originated at nonprimary auditory cortices, purportedly occupied by neurons that, without attention, have wider frequency tuning than ±1/6 octaves. The attentional tuning evolved rapidly, during the first few seconds after attention switching, and correlated with behavioral discrimination performance. In conclusion, a simple gain model alone cannot explain auditory selective attention. In nonprimary auditory cortices, attention-driven short-term plasticity retunes neurons to segregate relevant sounds from noise. PMID:21368107
Kantrowitz, Joshua T.; Epstein, Michael L.; Beggel, Odeta; Rohrig, Stephanie; Lehrfeld, Jonathan M.; Revheim, Nadine; Lehrfeld, Nayla P.; Reep, Jacob; Parker, Emily; Silipo, Gail; Ahissar, Merav; Javitt, Daniel C.
2016-01-01
Schizophrenia is associated with deficits in cortical plasticity that affect sensory brain regions and lead to impaired cognitive performance. Here we examined underlying neural mechanisms of auditory plasticity deficits using combined behavioural and neurophysiological assessment, along with neuropharmacological manipulation targeted at the N-methyl-D-aspartate type glutamate receptor (NMDAR). Cortical plasticity was assessed in a cohort of 40 schizophrenia/schizoaffective patients relative to 42 healthy control subjects using a fixed reference tone auditory plasticity task. In a second cohort (n = 21 schizophrenia/schizoaffective patients, n = 13 healthy controls), event-related potential and event-related time–frequency measures of auditory dysfunction were assessed during administration of the NMDAR agonist d-serine. Mismatch negativity was used as a functional read-out of auditory-level function. Clinical trials registration numbers were NCT01474395/NCT02156908. Schizophrenia/schizoaffective patients showed significantly reduced auditory plasticity versus healthy controls (P = 0.001) that correlated with measures of cognitive, occupational and social dysfunction. In event-related potential/time-frequency analyses, patients showed highly significant reductions in sensory N1 that reflected underlying impairments in θ responses (P < 0.001), along with reduced θ and β-power modulation during retention and motor-preparation intervals. Repeated administration of d-serine led to intercorrelated improvements in (i) auditory plasticity (P < 0.001); (ii) θ-frequency response (P < 0.05); and (iii) mismatch negativity generation to trained versus untrained tones (P = 0.02). Schizophrenia/schizoaffective patients show highly significant deficits in auditory plasticity that contribute to cognitive, occupational and social dysfunction. 
d-serine studies suggest first that NMDAR dysfunction may contribute to underlying cortical plasticity deficits and, second, that repeated NMDAR agonist administration may enhance cortical plasticity in schizophrenia. PMID:27913408
Abnormal Auditory Gain in Hyperacusis: Investigation with a Computational Model
Diehl, Peter U.; Schaette, Roland
2015-01-01
Hyperacusis is a frequent auditory disorder that is characterized by abnormal loudness perception where sounds of relatively normal volume are perceived as too loud or even painfully loud. As hyperacusis patients show decreased loudness discomfort levels (LDLs) and steeper loudness growth functions, it has been hypothesized that hyperacusis might be caused by an increase in neuronal response gain in the auditory system. Moreover, since about 85% of hyperacusis patients also experience tinnitus, the conditions might be caused by a common mechanism. However, the mechanisms that give rise to hyperacusis have remained unclear. Here, we have used a computational model of the auditory system to investigate candidate mechanisms for hyperacusis. Assuming that perceived loudness is proportional to the summed activity of all auditory nerve (AN) fibers, the model was tuned to reproduce normal loudness perception. We then evaluated a variety of potential hyperacusis gain mechanisms by determining their effects on model equal-loudness contours and comparing the results to the LDLs of hyperacusis patients with normal hearing thresholds. Hyperacusis was best accounted for by an increase in non-linear gain in the central auditory system. Good fits to the average patient LDLs were obtained for a general increase in gain that affected all frequency channels to the same degree, and also for a frequency-specific gain increase in the high-frequency range. Moreover, the gain needed to be applied after subtraction of spontaneous activity of the AN, which is in contrast to current theories of tinnitus generation based on amplification of spontaneous activity. Hyperacusis and tinnitus might therefore be caused by different changes in neuronal processing in the central auditory system. PMID:26236277
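The model class described above, loudness as the summed activity of all AN fibers with a central gain applied after subtraction of spontaneous activity, can be sketched as follows. All rate-level parameters and the discomfort criterion are illustrative assumptions, not the authors' fitted values:

```python
import numpy as np

def an_rate(level_db, threshold_db=0.0, spont=50.0, max_driven=250.0):
    """Toy auditory-nerve rate-level function: spontaneous rate plus a
    saturating driven component (parameters are illustrative)."""
    drive = np.clip((level_db - threshold_db) / 40.0, 0.0, None)
    return spont + max_driven * drive / (1.0 + drive)

def loudness(level_db, gain=1.0, exponent=1.2, spont=50.0):
    """Central loudness: gain applied AFTER subtracting spontaneous
    activity, as favoured by the model comparison above; an exponent
    above 1 makes the gain non-linear."""
    driven = an_rate(level_db, spont=spont) - spont
    return gain * driven ** exponent

# Increased central gain reaches the same loudness at a lower sound level,
# i.e. a reduced loudness discomfort level (LDL).
levels = np.linspace(0.0, 100.0, 1001)
normal = loudness(levels, gain=1.0)
hyper = loudness(levels, gain=2.0)

ldl_criterion = normal[-1] * 0.8        # arbitrary "uncomfortable" loudness
ldl_normal = levels[np.argmax(normal >= ldl_criterion)]
ldl_hyper = levels[np.argmax(hyper >= ldl_criterion)]
print(ldl_hyper < ldl_normal)  # True: the gain increase lowers the LDL
```

The point of the sketch is qualitative: multiplying the driven (spontaneous-subtracted) response by a larger central gain shifts the level at which a fixed "uncomfortable" loudness is reached downward, mimicking the reduced LDLs of hyperacusis patients.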
Audio–visual interactions for motion perception in depth modulate activity in visual area V3A
Ogawa, Akitoshi; Macaluso, Emiliano
2013-01-01
Multisensory signals can enhance the spatial perception of objects and events in the environment. Changes of visual size and auditory intensity provide us with the main cues about motion direction in depth. However, frequency changes in audition and binocular disparity in vision also contribute to the perception of motion in depth. Here, we presented subjects with several combinations of auditory and visual depth-cues to investigate multisensory interactions during processing of motion in depth. The task was to discriminate the direction of auditory motion in depth according to increasing or decreasing intensity. Rising or falling auditory frequency provided an additional within-audition cue that matched or did not match the intensity change (i.e. intensity-frequency (IF) “matched vs. unmatched” conditions). In two-thirds of the trials, a task-irrelevant visual stimulus moved either in the same or opposite direction of the auditory target, leading to audio–visual “congruent vs. incongruent” between-modalities depth-cues. Furthermore, these conditions were presented either with or without binocular disparity. Behavioral data showed that the best performance was observed in the audio–visual congruent condition with IF matched. Brain imaging results revealed maximal response in visual area V3A when all cues provided congruent and reliable depth information (i.e. audio–visual congruent, IF-matched condition including disparity cues). Analyses of effective connectivity revealed increased coupling from auditory cortex to V3A specifically in audio–visual congruent trials. We conclude that within- and between-modalities cues jointly contribute to the processing of motion direction in depth, and that they do so via dynamic changes of connectivity between visual and auditory cortices. PMID:23333414
Wenstrup, J J
1999-11-01
The auditory cortex of the mustached bat (Pteronotus parnellii) displays some of the most highly developed physiological and organizational features described in mammalian auditory cortex. This study examines response properties and organization in the medial geniculate body (MGB) that may contribute to these features of auditory cortex. About 25% of 427 auditory responses had simple frequency tuning with single excitatory tuning curves. The remainder displayed more complex frequency tuning using two-tone or noise stimuli. Most of these were combination-sensitive, responsive to combinations of different frequency bands within sonar or social vocalizations. They included FM-FM neurons, responsive to different harmonic elements of the frequency modulated (FM) sweep in the sonar signal, and H1-CF neurons, responsive to combinations of the bat's first sonar harmonic (H1) and a higher harmonic of the constant frequency (CF) sonar signal. Most combination-sensitive neurons (86%) showed facilitatory interactions. Neurons tuned to frequencies outside the biosonar range also displayed combination-sensitive responses, perhaps related to analyses of social vocalizations. Complex spectral responses were distributed throughout dorsal and ventral divisions of the MGB, forming a major feature of this bat's analysis of complex sounds. The auditory sector of the thalamic reticular nucleus also was dominated by complex spectral responses to sounds. The ventral division was organized tonotopically, based on best frequencies of singly tuned neurons and higher best frequencies of combination-sensitive neurons. Best frequencies were lowest ventrolaterally, increasing dorsally and then ventromedially. However, representations of frequencies associated with higher harmonics of the FM sonar signal were reduced greatly. 
Frequency organization in the dorsal division was not tonotopic; within the middle one-third of MGB, combination-sensitive responses to second and third harmonic CF sonar signals (60-63 and 90-94 kHz) occurred in adjacent regions. In the rostral one-third, combination-sensitive responses to second, third, and fourth harmonic FM frequency bands predominated. These FM-FM neurons, thought to be selective for delay between an emitted pulse and echo, showed some organization of delay selectivity. The organization of frequency sensitivity in the MGB suggests a major rewiring of the output of the central nucleus of the inferior colliculus, by which collicular neurons tuned to the bat's FM sonar signals mostly project to the dorsal, not the ventral, division. Because physiological differences between collicular and MGB neurons are minor, a major role of the tecto-thalamic projection in the mustached bat may be the reorganization of responses to provide for cortical representations of sonar target features.
Is the Role of External Feedback in Auditory Skill Learning Age Dependent?
ERIC Educational Resources Information Center
Zaltz, Yael; Roth, Daphne Ari-Even; Kishon-Rabin, Liat
2017-01-01
Purpose: The purpose of this study is to investigate the role of external feedback in auditory perceptual learning of school-age children as compared with that of adults. Method: Forty-eight children (7-9 years of age) and 64 adults (20-35 years of age) conducted a training session using an auditory frequency discrimination (difference limen for…
How Hearing Loss Impacts Communication. Tipsheet: Serving Students Who Are Hard of Hearing
ERIC Educational Resources Information Center
Atcherson, Samuel R.; Johnson, Marni I.
2009-01-01
Hearing, or auditory processing, involves the use of many hearing skills in a single or combined fashion. The sounds that humans hear can be characterized by their intensity (loudness), frequency (pitch), and timing. Impairment of any of the auditory structures from the visible ear to the central auditory nervous system within the brain can have a…
Kagerer, Florian A.; Viswanathan, Priya; Contreras-Vidal, Jose L.; Whitall, Jill
2014-01-01
Unilateral tapping studies have shown that adults adjust to both perceptible and subliminal changes in phase or frequency. This study focuses on the phase responses to abrupt/perceptible and gradual/subliminal changes in auditory-motor relations during alternating bilateral tapping. We investigated these responses in participants with and without good perceptual acuity as determined by an auditory threshold test. Non-musician adults (nine per group) alternately tapped their index fingers in synchrony with auditory cues set at a frequency of 1.4 Hz. Both groups modulated their responses (with no after-effects) to perceptible and to subliminal changes as low as a 5° change in phase. Participants with high thresholds were more variable in their responses than those with low thresholds in the gradual condition set (p = 0.05). Both groups demonstrated a synchronization asymmetry between dominant and non-dominant hands associated with the abrupt condition and the later blocks of the gradual condition. Our findings extend previous work in unilateral tapping and suggest (1) no relationship between a discrimination threshold and perceptible auditory-motor integration and (2) a noisier subcortical circuitry in those with higher thresholds. PMID:24449013
Speaker-independent factors affecting the perception of foreign accent in a second language
Levi, Susannah V.; Winters, Stephen J.; Pisoni, David B.
2012-01-01
Previous research on foreign accent perception has largely focused on speaker-dependent factors such as age of learning and length of residence. Factors that are independent of a speaker's language learning history have also been shown to affect perception of second language speech. The present study examined the effects of two such factors, listening context and lexical frequency, on the perception of foreign-accented speech. Listeners rated foreign accent in two listening contexts: auditory-only, where listeners only heard the target stimuli, and auditory+orthography, where listeners were presented with both an auditory signal and an orthographic display of the target word. Results revealed that higher frequency words were consistently rated as less accented than lower frequency words. The effect of listening context emerged in two interactions: the auditory+orthography context reduced the effects of lexical frequency but increased the perceived differences between native and non-native speakers. Acoustic measurements revealed some production differences for words of different levels of lexical frequency, though these differences could not account for all of the observed interactions in the perceptual experiment. These results suggest that factors independent of speakers' actual speech articulations can influence the perceived degree of foreign accent. PMID:17471745
NASA Astrophysics Data System (ADS)
Chen, Huaiyu; Cao, Li
2017-06-01
In order to study multiple sound source localization under room reverberation and background noise, we analyze the shortcomings of the traditional broadband MUSIC method and of ordinary auditory-filtering-based broadband MUSIC, and then propose a new broadband MUSIC algorithm that combines gammatone auditory filtering with frequency-component selection control and detection of the ascending segment of the direct-sound component. The proposed algorithm restricts processing to frequency components within the band of interest at the multichannel bandpass-filtering stage. Detecting the direct-sound component of each source suppresses room-reverberation interference; this is fast to compute and avoids more complex dereverberation algorithms. In addition, the pseudo-spectra of the different frequency channels are weighted by their maximum amplitudes for every speech frame. Simulations and experiments in a real reverberant room show that the proposed method performs well: in dynamic multiple-sound-source localization experiments, the azimuths estimated by the proposed algorithm have a smaller average absolute error, and the histogram of estimates has higher angular resolution.
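The per-channel weighting idea can be sketched with a generic narrowband MUSIC estimator, assuming a far-field source and a four-microphone uniform linear array. This omits the paper's gammatone filterbank and direct-sound detection, and the geometry, channel frequencies, and noise levels below are illustrative assumptions:

```python
import numpy as np

C = 343.0  # speed of sound, m/s

def steering(theta_deg, freq, n_mics=4, spacing=0.05):
    """Far-field steering vector for a uniform linear array."""
    m = np.arange(n_mics)
    tau = m * spacing * np.sin(np.deg2rad(theta_deg)) / C
    return np.exp(-2j * np.pi * freq * tau)

def music_spectrum(X, freq, thetas, n_src=1):
    """Narrowband MUSIC pseudo-spectrum from snapshot matrix X (mics x snapshots)."""
    R = X @ X.conj().T / X.shape[1]
    w, V = np.linalg.eigh(R)          # eigenvalues in ascending order
    En = V[:, : X.shape[0] - n_src]   # noise subspace
    p = np.empty(len(thetas))
    for i, th in enumerate(thetas):
        a = steering(th, freq)
        p[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    return p

rng = np.random.default_rng(0)
thetas = np.arange(-90, 91)
freqs = [500.0, 1000.0, 2000.0]       # "channels" kept after band selection
true_az = 30.0

combined = np.zeros(len(thetas))
for f in freqs:
    a = steering(true_az, f)
    s = rng.standard_normal(200) + 1j * rng.standard_normal(200)
    X = np.outer(a, s) + 0.1 * (rng.standard_normal((4, 200))
                                + 1j * rng.standard_normal((4, 200)))
    p = music_spectrum(X, f, thetas)
    # weight each channel's pseudo-spectrum by its maximum amplitude,
    # mirroring the frame-wise weighting described above
    combined += np.max(np.abs(X)) * p / p.max()

print(thetas[np.argmax(combined)])  # peak near the true azimuth of 30 degrees
```

Each channel's pseudo-spectrum is normalised and then weighted by that channel's maximum amplitude before summation, so louder channels contribute more to the combined broadband estimate.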
Laursen, Bettina; Mørk, Arne; Kristiansen, Uffe; Bastlund, Jesper Frank
2014-01-01
P300 (P3) event-related potentials (ERPs) have been suggested to be an endogenous marker of cognitive function and auditory oddball paradigms are frequently used to evaluate P3 ERPs in clinical settings. Deficits in P3 amplitude and latency reflect some of the neurological dysfunctions related to several psychiatric and neurological diseases, e.g., Alzheimer's disease (AD). However, only a very limited number of rodent studies have addressed the back-translational validity of the P3-like ERPs as suitable markers of cognition. Thus, the potential of rodent P3-like ERPs to predict pro-cognitive effects in humans remains to be fully validated. The current study characterizes P3-like ERPs in the 192-IgG-SAP (SAP) rat model of the cholinergic degeneration associated with AD. Following training in a combined auditory oddball and lever-press setup, rats were subjected to bilateral intracerebroventricular infusion of 1.25 μg SAP or PBS (sham lesion) and recording electrodes were implanted in hippocampal CA1. Relative to sham-lesioned rats, SAP-lesioned rats had significantly reduced amplitude of P3-like ERPs. P3 amplitude was significantly increased in SAP-treated rats following pre-treatment with 1 mg/kg donepezil. Infusion of SAP reduced the hippocampal choline acetyltransferase activity by 75%. Behaviorally defined cognitive performance was comparable between treatment groups. The present study suggests that AD-like deficits in P3-like ERPs may be mimicked by the basal forebrain cholinergic degeneration induced by SAP. SAP-lesioned rats may constitute a suitable model to test the efficacy of pro-cognitive substances in an applied experimental setup.
Sound level-dependent growth of N1m amplitude with low and high-frequency tones.
Soeta, Yoshiharu; Nakagawa, Seiji
2009-04-22
The aim of this study was to determine whether the amplitude and/or latency of the N1m deflection of auditory-evoked magnetic fields are influenced by the level and frequency of sound. The results indicated that the amplitude of the N1m increased with sound level. The growth in amplitude with increasing sound level was almost constant with low frequencies (250-1000 Hz); however, this growth decreased with high frequencies (>2000 Hz). The behavior of the amplitude may reflect a difference in the increase in the activation of the peripheral and/or central auditory systems.
Lee, Seung-Hwan; Wynn, Jonathan K; Green, Michael F; Kim, Hyun; Lee, Kang-Joon; Nam, Min; Park, Joong-Kyu; Chung, Young-Cho
2006-04-01
Electrophysiological studies have demonstrated gamma and beta frequency oscillations in response to auditory stimuli. The purpose of this study was to test whether auditory hallucinations (AH) in schizophrenia patients reflect abnormalities in gamma and beta frequency oscillations and to investigate source generators of these abnormalities. This theory was tested using quantitative electroencephalography (qEEG) and low-resolution electromagnetic tomography (LORETA) source imaging. Twenty-five schizophrenia patients with treatment refractory AH, lasting for at least 2 years, and 23 schizophrenia patients with non-AH (N-AH) in the past 2 years were recruited for the study. Spectral analysis of the qEEG and source imaging of frequency bands of artifact-free 30 s epochs were examined during rest. AH patients showed significantly increased beta 1 and beta 2 frequency amplitude compared with N-AH patients. Gamma and beta (2 and 3) frequencies were significantly correlated in AH but not in N-AH patients. Source imaging revealed significantly increased beta (1 and 2) activity in the left inferior parietal lobule and the left medial frontal gyrus in AH versus N-AH patients. These results imply that AH is reflecting increased beta frequency oscillations with neural generators localized in speech-related areas.
Lelkes, Zoltán; Abdurakhmanova, Shamsiiat; Porkka-Heiskanen, Tarja
2017-09-18
The cholinergic basal forebrain contributes to cortical activation and receives rich innervations from the ascending activating system. It is involved in the mediation of the arousing actions of noradrenaline and histamine. Glutamatergic stimulation in the basal forebrain results in cortical acetylcholine release and suppression of sleep. However, it is not known to what extent the cholinergic versus non-cholinergic basal forebrain projection neurones contribute to the arousing action of glutamate. To clarify this question, we administered N-methyl-D-aspartate (NMDA), a glutamate agonist, into the basal forebrain in intact rats and after destruction of the cholinergic cells in the basal forebrain with 192 immunoglobulin (Ig)G-saporin. In eight Han-Wistar rats with implanted electroencephalogram/electromyogram (EEG/EMG) electrodes and guide cannulas for microdialysis probes, 0.23 μg 192 IgG-saporin was administered into the basal forebrain, while the eight control animals received artificial cerebrospinal fluid. Two weeks later, a microdialysis probe targeted into the basal forebrain was perfused with cerebrospinal fluid on the baseline day and for 3 h with 0.3 mM NMDA on the subsequent day. Sleep-wake activity was recorded for 24 h on both days. NMDA exhibited a robust arousing effect in both the intact and the lesioned rats. Wakefulness was increased and both non-REM and REM sleep were decreased significantly during the 3-h NMDA perfusion. Destruction of the basal forebrain cholinergic neurones did not abolish the wake-enhancing action of NMDA. Thus, the cholinergic basal forebrain structures are not essential for the mediation of the arousing action of glutamate. © 2017 European Sleep Research Society.
Crowell, Sara E.; Wells-Berlin, Alicia M.; Therrien, Ronald E.; Yannuzzi, Sally E.; Carr, Catherine E.
2016-01-01
Auditory sensitivity was measured in a species of diving duck that is not often kept in captivity, the lesser scaup. Behavioral (psychoacoustics) and electrophysiological [the auditory brainstem response (ABR)] methods were used to measure in-air auditory sensitivity, and the resulting audiograms were compared. Both approaches yielded audiograms with similar U-shapes and regions of greatest sensitivity (2000−3000 Hz). However, ABR thresholds were higher than psychoacoustic thresholds at all frequencies. This difference was least at the highest frequency tested using both methods (5700 Hz) and greatest at 1000 Hz, where the ABR threshold was 26.8 dB higher than the behavioral measure of threshold. This difference is commonly reported in studies involving many different species. These results highlight the usefulness of each method, depending on the testing conditions and availability of the animals.
Tonal frequency affects amplitude but not topography of rhesus monkey cranial EEG components.
Teichert, Tobias
2016-06-01
The rhesus monkey is an important model of human auditory function in general and auditory deficits in neuro-psychiatric diseases such as schizophrenia in particular. Several rhesus monkey studies have described homologs of clinically relevant auditory evoked potentials such as pitch-based mismatch negativity, a fronto-central negativity that can be observed when a series of regularly repeating sounds is disrupted by a sound of different tonal frequency. As a result it is well known how differences of tonal frequency are represented in rhesus monkey EEG. However, to date there is no study that systematically quantified how absolute tonal frequency itself is represented. In particular, it is not known if frequency affects rhesus monkey EEG component amplitude and topography in the same way as previously shown for humans. A better understanding of the effect of frequency may strengthen inter-species homology and will provide a more solid foundation on which to build the interpretation of frequency MMN in the rhesus monkey. Using arrays of up to 32 cranial EEG electrodes in 4 rhesus macaques we identified 8 distinct auditory evoked components including the N85, a fronto-central negativity that is the presumed homolog of the human N1. In line with human data, the amplitudes of most components including the N85 peaked around 1000 Hz and were strongly attenuated above ∼1750 Hz. Component topography, however, remained largely unaffected by frequency. This latter finding may be consistent with the known absence of certain anatomical structures in the rhesus monkey that are believed to cause the changes in topography in the human by inducing a rotation of generator orientation as a function of tonal frequency. Overall, the findings are consistent with the assumption of a homolog representation of tonal frequency in human and rhesus monkey EEG. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Richard, Nelly; Laursen, Bettina; Grupe, Morten; Drewes, Asbjørn M.; Graversen, Carina; Sørensen, Helge B. D.; Bastlund, Jesper F.
2017-04-01
Objective. Active auditory oddball paradigms are simple tone discrimination tasks used to study the P300 deflection of event-related potentials (ERPs). These ERPs may be quantified by time-frequency analysis. As auditory stimuli cause early high frequency and late low frequency ERP oscillations, the continuous wavelet transform (CWT) is often chosen for decomposition due to its multi-resolution properties. However, as the conventional CWT traditionally applies only one mother wavelet to represent the entire spectrum, the time-frequency resolution is not optimal across all scales. To account for this, we developed and validated a novel method specifically refined to analyse P300-like ERPs in rats. Approach. An adapted CWT (aCWT) was implemented to preserve high time-frequency resolution across all scales by commissioning of multiple wavelets operating at different scales. First, decomposition of simulated ERPs was illustrated using the classical CWT and the aCWT. Next, the two methods were applied to EEG recordings obtained from prefrontal cortex in rats performing a two-tone auditory discrimination task. Main results. While only early ERP frequency changes between responses to target and non-target tones were detected by the CWT, both early and late changes were described with high accuracy by the aCWT in rat ERPs. Increased frontal gamma power and phase synchrony was observed particularly within theta and gamma frequency bands during deviant tones. Significance. The study suggests superior performance of the aCWT over the CWT in terms of detailed quantification of time-frequency properties of ERPs. Our methodological investigation indicates that accurate and complete assessment of time-frequency components of short-time neural signals is feasible with the novel analysis approach, which may be advantageous for characterisation of several types of evoked potentials, particularly in rodents.
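The idea of adapting the analysing wavelet across scales can be illustrated with a Morlet transform whose cycle count varies per frequency band. The actual aCWT construction differs, and the frequencies and cycle counts below are illustrative assumptions:

```python
import numpy as np

def morlet_power(x, fs, freqs, cycles):
    """Wavelet power via convolution with complex Morlet wavelets whose
    temporal extent (number of cycles) varies per frequency."""
    power = np.empty((len(freqs), len(x)))
    for i, (f, c) in enumerate(zip(freqs, cycles)):
        sigma = c / (2 * np.pi * f)               # Gaussian width in seconds
        t = np.arange(-3 * sigma, 3 * sigma, 1 / fs)
        w = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma**2))
        w /= np.sum(np.abs(w))                    # amplitude normalisation
        power[i] = np.abs(np.convolve(x, w, mode="same")) ** 2
    return power

fs = 500.0
t = np.arange(0, 2, 1 / fs)
# a theta-band (6 Hz) and a gamma-band (40 Hz) component, as in the
# frequency bands highlighted above
x = np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

freqs = np.array([4.0, 6.0, 8.0, 30.0, 40.0, 50.0])
# few cycles at low frequencies for time resolution, more cycles at
# high frequencies for frequency resolution
cycles = np.array([3.0, 3.0, 3.0, 7.0, 7.0, 7.0])

pw = morlet_power(x, fs, freqs, cycles).mean(axis=1)
# the channel with the largest power in each band matches the input tones
print(freqs[np.argmax(pw[:3])], freqs[3 + np.argmax(pw[3:])])
```

Varying the cycle count per frequency is one standard way to keep both time and frequency resolution usable across the spectrum, which is the limitation of a single mother wavelet that the aCWT addresses.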
Brown, Erik C.; Rothermel, Robert; Nishida, Masaaki; Juhász, Csaba; Muzik, Otto; Hoechstetter, Karsten; Sood, Sandeep; Chugani, Harry T.; Asano, Eishi
2008-01-01
We determined if high-frequency gamma-oscillations (50- to 150-Hz) were induced by simple auditory communication over the language network areas in children with focal epilepsy. Four children (ages: 7, 9, 10 and 16 years) with intractable left-hemispheric focal epilepsy underwent extraoperative electrocorticography (ECoG) as well as language mapping using neurostimulation and auditory-language-induced gamma-oscillations on ECoG. The audible communication was recorded concurrently and integrated with the ECoG recording to allow accurate time-locking in the ECoG analysis. In three children, who successfully completed the auditory-language task, high-frequency gamma-augmentation sequentially involved: i) the posterior superior temporal gyrus when listening to the question, ii) the posterior lateral temporal region and the posterior frontal region in the time interval between question completion and the patient’s vocalization, and iii) the pre- and post-central gyri immediately preceding and during the patient’s vocalization. The youngest child, with attention deficits, failed to cooperate during the auditory-language task, and high-frequency gamma-augmentation was noted only in the posterior superior temporal gyrus when audible questions were given. The size of language areas suggested by statistically-significant high-frequency gamma-augmentation was larger than that defined by neurostimulation. The present method can provide in-vivo imaging of electrophysiological activities over the language network areas during language processes. Further studies are warranted to determine whether recording of language-induced gamma-oscillations can supplement language mapping using neurostimulation in presurgical evaluation of children with focal epilepsy. PMID:18455440
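A common way to quantify 50- to 150-Hz gamma augmentation on a single ECoG channel is to compare the band-limited envelope during an event against baseline. The sketch below uses an FFT-based band-limited analytic signal; it is a generic illustration, not the authors' pipeline, and the sampling rate and burst timing are simulated assumptions:

```python
import numpy as np

def band_envelope(x, fs, lo, hi):
    """Band-limited analytic-signal envelope: zero all FFT bins outside
    [lo, hi] Hz and all negative frequencies, then inverse-transform.
    Keeping only positive frequencies yields the analytic signal directly."""
    X = np.fft.fft(x)
    f = np.fft.fftfreq(len(x), 1 / fs)
    mask = (f >= lo) & (f <= hi)      # positive-frequency band only
    return np.abs(np.fft.ifft(2 * X * mask))

fs = 1000.0
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(1)
# a 100 Hz "gamma burst" between 0.8 s and 1.2 s on top of background noise
burst = ((t > 0.8) & (t < 1.2)) * np.sin(2 * np.pi * 100 * t)
x = burst + 0.2 * rng.standard_normal(len(t))

env = band_envelope(x, fs, 50.0, 150.0)
during = env[(t > 0.9) & (t < 1.1)].mean()
baseline = env[t < 0.6].mean()
print(during > 2 * baseline)  # gamma envelope is augmented during the burst
```

Time-locking the envelope to recorded events (here, the simulated burst onset) is what lets task phases such as listening and vocalization be attributed to distinct cortical sites.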
Bmi-1 cooperates with Foxg1 to maintain neural stem cell self-renewal in the forebrain
Fasano, Christopher A.; Phoenix, Timothy N.; Kokovay, Erzsebet; Lowry, Natalia; Elkabetz, Yechiel; Dimos, John T.; Lemischka, Ihor R.; Studer, Lorenz; Temple, Sally
2009-01-01
Neural stem cells (NSCs) persist throughout life in two forebrain areas: the subventricular zone (SVZ) and the hippocampus. Why forebrain NSCs self-renew more extensively than those from other regions remains unclear. Prior studies have shown that the polycomb factor Bmi-1 is necessary for NSC self-renewal and that it represses the cell cycle inhibitors p16, p19, and p21. Here we show that overexpression of Bmi-1 enhances self-renewal of forebrain NSCs significantly more than those derived from spinal cord, demonstrating a regional difference in responsiveness. We show that forebrain NSCs require the forebrain-specific transcription factor Foxg1 for Bmi-1-dependent self-renewal, and that repression of p21 is a focus of this interaction. Bmi-1 enhancement of NSC self-renewal is significantly greater with increasing age and passage. Importantly, when Bmi-1 is overexpressed in cultured adult forebrain NSCs, they expand dramatically and continue to make neurons even after multiple passages, when control NSCs have become restricted to glial differentiation. Together these findings demonstrate the importance of Bmi-1 and Foxg1 cooperation to maintenance of NSC multipotency and self-renewal, and establish a useful method for generating abundant forebrain neurons ex vivo, outside the neurogenic niche. PMID:19270157
Suga, Motomu; Nishimura, Yukika; Kawakubo, Yuki; Yumoto, Masato; Kasai, Kiyoto
2016-07-01
Auditory mismatch negativity (MMN) and its magnetoencephalographic (MEG) counterpart (MMNm) are an established biological index in schizophrenia research. MMN in response to duration and frequency deviants may have differential relevance to the pathophysiology and clinical stages of schizophrenia. MEG has the advantage that it detects almost exclusively the MMNm arising from the auditory cortex. However, few previous MEG studies on schizophrenia have simultaneously assessed MMNm in response to duration and frequency deviants or examined the effect of chronicity on the group difference. Forty-two patients with chronic schizophrenia and 74 matched control subjects participated in the study. Using a whole-head MEG, MMNm in response to duration and frequency deviants of tones was recorded while participants passively listened to an auditory sequence. Compared to healthy subjects, patients with schizophrenia exhibited significantly reduced powers of MMNm in response to duration deviant in both hemispheres, whereas MMNm in response to frequency deviant did not differ between the two groups. These results did not change according to the chronicity of the illness. These results, obtained by using a sequence enabling simultaneous assessment of both types of MMNm, suggest that MEG recording of MMN in response to duration deviant may be a more sensitive biological marker of schizophrenia than MMN in response to frequency deviant. Our findings represent an important first step towards establishing MMN as a biomarker for schizophrenia in real-world clinical psychiatry settings. © 2016 The Authors. Psychiatry and Clinical Neurosciences © 2016 Japanese Society of Psychiatry and Neurology.
Neural mechanisms underlying auditory feedback control of speech
Reilly, Kevin J.; Guenther, Frank H.
2013-01-01
The neural substrates underlying auditory feedback control of speech were investigated using a combination of functional magnetic resonance imaging (fMRI) and computational modeling. Neural responses were measured while subjects spoke monosyllabic words under two conditions: (i) normal auditory feedback of their speech, and (ii) auditory feedback in which the first formant frequency of their speech was unexpectedly shifted in real time. Acoustic measurements showed compensation to the shift within approximately 135 ms of onset. Neuroimaging revealed increased activity in bilateral superior temporal cortex during shifted feedback, indicative of neurons coding mismatches between expected and actual auditory signals, as well as right prefrontal and Rolandic cortical activity. Structural equation modeling revealed increased influence of bilateral auditory cortical areas on right frontal areas during shifted speech, indicating that projections from auditory error cells in posterior superior temporal cortex to motor correction cells in right frontal cortex mediate auditory feedback control of speech. PMID:18035557
Musicians' edge: A comparison of auditory processing, cognitive abilities and statistical learning.
Mandikal Vasuki, Pragati Rao; Sharma, Mridula; Demuth, Katherine; Arciuli, Joanne
2016-12-01
It has been hypothesized that musical expertise is associated with enhanced auditory processing and cognitive abilities. Recent research has examined the relationship between musicians' advantage and implicit statistical learning skills. In the present study, we assessed a variety of auditory processing skills, cognitive processing skills, and statistical learning (auditory and visual forms) in age-matched musicians (N = 17) and non-musicians (N = 18). Musicians performed significantly better than non-musicians on frequency discrimination and backward digit span. A key finding was that musicians had better auditory, but not visual, statistical learning than non-musicians. Performance on the statistical learning tasks was not correlated with performance on auditory and cognitive measures. Musicians' superior performance on auditory (but not visual) statistical learning suggests that musical expertise is associated with an enhanced ability to detect statistical regularities in auditory stimuli. Copyright © 2016 Elsevier B.V. All rights reserved.
Zhong, Ziwei; Henry, Kenneth S.; Heinz, Michael G.
2014-01-01
People with sensorineural hearing loss often have substantial difficulty understanding speech under challenging listening conditions. Behavioral studies suggest that reduced sensitivity to the temporal structure of sound may be responsible, but underlying neurophysiological pathologies are incompletely understood. Here, we investigate the effects of noise-induced hearing loss on coding of envelope (ENV) structure in the central auditory system of anesthetized chinchillas. ENV coding was evaluated noninvasively using auditory evoked potentials recorded from the scalp surface in response to sinusoidally amplitude modulated tones with carrier frequencies of 1, 2, 4, and 8 kHz and a modulation frequency of 140 Hz. Stimuli were presented in quiet and in three levels of white background noise. The latency of scalp-recorded ENV responses was consistent with generation in the auditory midbrain. Hearing loss amplified neural coding of ENV at carrier frequencies of 2 kHz and above. This result may reflect enhanced ENV coding from the periphery and/or an increase in the gain of central auditory neurons. In contrast to expectations, hearing loss was not associated with a stronger adverse effect of increasing masker intensity on ENV coding. The exaggerated neural representation of ENV information shown here at the level of the auditory midbrain helps to explain previous findings of enhanced sensitivity to amplitude modulation in people with hearing loss under some conditions. Furthermore, amplified ENV coding may potentially contribute to speech perception problems in people with cochlear hearing loss by acting as a distraction from more salient acoustic cues, particularly in fluctuating backgrounds. PMID:24315815
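The sinusoidally amplitude modulated (SAM) tones described above have a standard closed form, s(t) = (1 + m·sin 2πf_m t)·sin 2πf_c t. A minimal sketch of such a stimulus in Python (the function name, sampling rate, and peak normalization are illustrative assumptions, not details from the study):

```python
import numpy as np

def sam_tone(fc, fm, dur, fs=48000.0, depth=1.0):
    """Sinusoidally amplitude-modulated (SAM) tone:
    s(t) = (1 + depth * sin(2*pi*fm*t)) * sin(2*pi*fc*t),
    normalized to unit peak amplitude."""
    t = np.arange(int(dur * fs)) / fs
    s = (1.0 + depth * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)
    return s / np.max(np.abs(s))

# A 1 kHz carrier fully modulated at 140 Hz, as in the study above
stim = sam_tone(fc=1000.0, fm=140.0, dur=0.5)
```

A SAM tone concentrates its energy at the carrier frequency and the two sidebands f_c ± f_m, so a neural response at the 140 Hz modulation rate reflects envelope (ENV) coding rather than spectral cues.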
Farahani, Ehsan Darestani; Goossens, Tine; Wouters, Jan; van Wieringen, Astrid
2017-03-01
Investigating the neural generators of auditory steady-state responses (ASSRs), i.e., auditory evoked brain responses, with a wide range of screening and diagnostic applications, has been the focus of various studies for many years. Most of these studies employed a priori assumptions regarding the number and location of neural generators. The aim of this study is to reconstruct ASSR sources with minimal assumptions in order to gain in-depth insight into the number and location of brain regions that are activated in response to low- as well as high-frequency acoustically amplitude modulated signals. In order to reconstruct ASSR sources, we applied independent component analysis with subsequent equivalent dipole modeling to single-subject EEG data (young adults, 20-30 years of age). These data were based on white noise stimuli, amplitude modulated at 4, 20, 40, or 80Hz. The independent components that exhibited a significant ASSR were clustered among all participants by means of a probabilistic clustering method based on a Gaussian mixture model. Results suggest that a widely distributed network of sources, located in cortical as well as subcortical regions, is active in response to 4, 20, 40, and 80Hz amplitude modulated noises. Some of these sources are located beyond the central auditory pathway. Comparison of brain sources in response to different modulation frequencies suggested that the identified brain sources in the brainstem, the left and the right auditory cortex show a higher responsiveness to 40Hz than to the other modulation frequencies. Copyright © 2017 Elsevier Inc. All rights reserved.
Auditory steady state response in sound field.
Hernández-Pérez, H; Torres-Fortuny, A
2013-02-01
Physiological and behavioral responses were compared in normal-hearing subjects via analyses of the auditory steady-state response (ASSR) and conventional audiometry under sound field conditions. The auditory stimuli, presented through a loudspeaker, consisted of four carrier tones (500, 1000, 2000, and 4000 Hz), presented singly for behavioral testing but combined (multiple frequency technique) to estimate thresholds using the ASSR. Twenty normal-hearing adults were examined. The average differences between the physiological and behavioral thresholds were between 17 and 22 dB HL. The Spearman rank correlation between ASSR and behavioral thresholds was significant for all frequencies (p < 0.05). Significant differences were found in ASSR amplitude among frequencies, and strong correlations were found between ASSR amplitude and stimulus level (p < 0.05). The ASSR in sound field testing was found to yield hearing threshold estimates reasonably well correlated with behaviorally assessed thresholds.
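The Spearman rank correlation used above to compare ASSR and behavioral thresholds has a simple closed form when there are no ties. A sketch of that statistic (the threshold values below are invented purely for illustration):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation via the no-ties formula:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)),
    where d_i is the difference between the ranks of x_i and y_i."""
    rx = np.argsort(np.argsort(x))   # ranks 0..n-1
    ry = np.argsort(np.argsort(y))
    d = rx - ry
    n = len(rx)
    return float(1.0 - 6.0 * np.sum(d ** 2) / (n * (n ** 2 - 1)))

# Hypothetical thresholds (dB HL): ASSR estimates sit ~20 dB above the
# behavioral ones but preserve their rank order, so rho = 1.0
behavioral = [5, 10, 0, 15]
assr = [25, 30, 18, 36]
rho = spearman_rho(behavioral, assr)  # -> 1.0
```

Because the statistic depends only on rank order, a constant physiological-behavioral offset (such as the 17-22 dB reported above) does not reduce the correlation.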
Psycho acoustical Measures in Individuals with Congenital Visual Impairment.
Kumar, Kaushlendra; Thomas, Teenu; Bhat, Jayashree S; Ranjan, Rajesh
2017-12-01
In individuals with congenital visual impairment, one sensory modality (vision) is impaired, and this loss is compensated for by the other sensory modalities. There is evidence that visually impaired individuals perform better than normally sighted individuals on auditory tasks such as localization, auditory memory, verbal memory, auditory attention, and other behavioural tasks. The current study aimed to compare temporal resolution, frequency resolution, and speech perception in noise between individuals with congenital visual impairment and normally sighted individuals. Temporal resolution, frequency resolution, and speech perception in noise were measured using MDT, GDT, DDT, SRDT, and SNR50. Twelve participants with congenital visual impairment, aged 18 to 40 years, were recruited, along with an equal number of normally sighted participants. All participants had normal hearing sensitivity and normal middle ear function. Individuals with visual impairment showed superior thresholds on MDT, SRDT, and SNR50 compared to normally sighted individuals. This may reflect the complexity of these tasks: MDT, SRDT, and SNR50 are more complex than GDT and DDT. Thus, individuals with visual impairment showed superior auditory processing and speech perception on the more complex auditory perceptual tasks.
Behavioral and subcortical signatures of musical expertise in Mandarin Chinese speakers
Tervaniemi, Mari; Aalto, Daniel
2018-01-01
Both musical training and native language have been shown to have experience-based plastic effects on auditory processing. However, the combined effects within individuals are unclear. Recent research suggests that musical training and tone language speaking are not clearly additive in their effects on processing of auditory features and that there may be a disconnect between perceptual and neural signatures of auditory feature processing. The literature has only recently begun to investigate the effects of musical expertise on basic auditory processing for different linguistic groups. This work provides a profile of primary auditory feature discrimination for Mandarin-speaking musicians and nonmusicians. The musicians showed enhanced perceptual discrimination for both frequency and duration as well as enhanced duration discrimination in a multifeature discrimination task, compared to nonmusicians. However, there were no differences between the groups in duration processing of nonspeech sounds at a subcortical level or in subcortical frequency representation of a nonnative tone contour, for f0 or for the first or second formant region. The results indicate that musical expertise provides a cognitive, but not subcortical, advantage in a population of Mandarin speakers. PMID:29300756
Effects of underwater noise on auditory sensitivity of a cyprinid fish.
Scholik, A R; Yan, H Y
2001-02-01
The ability of a fish to interpret acoustic information in its environment is crucial for its survival. Thus, it is important to understand how underwater noise affects fish hearing. In this study, the fathead minnow (Pimephales promelas) was used to examine: (1) the immediate effects of white noise exposure (0.3-4.0 kHz, 142 dB re: 1 microPa) on auditory thresholds and (2) recovery after exposure. Audiograms were measured using the auditory brainstem response protocol and compared to baseline audiograms of fathead minnows not exposed to noise. Immediately after exposure to 24 h of white noise, five out of the eight frequencies tested showed a significantly higher threshold compared to the baseline fish. Recovery was found to depend on both duration of noise exposure and auditory frequency. These results support the hypothesis that the auditory threshold of the fathead minnow can be altered by white noise, especially in its most sensitive hearing range (0.8-2.0 kHz), and provide evidence that these effects can be long term (>14 days).
Click train encoding in primary and non-primary auditory cortex of anesthetized macaque monkeys.
Oshurkova, E; Scheich, H; Brosch, M
2008-06-02
We studied encoding of temporally modulated sounds in 28 multiunits in the primary auditory cortical field (AI) and in 35 multiunits in the secondary auditory cortical field (caudomedial auditory cortical field, CM) by presenting periodic click trains with click rates between 1 and 300 Hz lasting for 2-4 s. We found that all multiunits increased or decreased their firing rate during the steady state portion of the click train and that all except two multiunits synchronized their firing to individual clicks in the train. Rate increases and synchronized responses were most prevalent and strongest at low click rates, as expressed by best modulation frequency, limiting frequency, percentage of responsive multiunits, and average rate response and vector strength. Synchronized responses occurred up to 100 Hz; rate response occurred up to 300 Hz. Both auditory fields responded similarly to low click rates but differed at click rates above approximately 12 Hz at which more multiunits in AI than in CM exhibited synchronized responses and increased rate responses and more multiunits in CM exhibited decreased rate responses. These findings suggest that the auditory cortex of macaque monkeys encodes temporally modulated sounds similar to the auditory cortex of other mammals. Together with other observations presented in this and other reports, our findings also suggest that AI and CM have largely overlapping sensitivities for acoustic stimulus features but encode these features differently.
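The vector strength statistic cited above quantifies how tightly spikes lock to the click period: each spike contributes a unit vector at its stimulus phase, and vector strength is the length of the mean vector. A minimal sketch (the spike trains are synthetic):

```python
import numpy as np

def vector_strength(spike_times, period):
    """Vector strength: each spike is a unit vector at phase
    2*pi*t/period; VS is the length of the mean vector
    (1 = perfect phase locking, near 0 = no locking)."""
    phases = 2 * np.pi * np.asarray(spike_times, dtype=float) / period
    return float(np.abs(np.mean(np.exp(1j * phases))))

# Spikes locked to every click of a 10 Hz train (period 0.1 s),
# with a fixed 5 ms response latency
locked = np.arange(0, 2.0, 0.1) + 0.005
vs_locked = vector_strength(locked, 0.1)    # -> 1.0

# Spikes tiling the click period uniformly (no synchronization)
uniform = np.arange(100) * 0.001
vs_uniform = vector_strength(uniform, 0.1)  # -> ~0.0
```

A fixed latency shifts every spike by the same phase and so leaves vector strength at 1, which is why the measure captures synchronization rather than absolute timing.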
Kuriki, Shinya; Kobayashi, Yusuke; Kobayashi, Takanari; Tanaka, Keita; Uchikawa, Yoshinori
2013-02-01
The auditory steady-state response (ASSR) is a weak potential or magnetic response elicited by periodic acoustic stimuli, with a maximum response at about a 40-Hz periodicity. Most previous studies using amplitude-modulated (AM) tones employed long-lasting tones of more than 10 s. However, the characteristics of the ASSR elicited by short AM tones have remained unclear. In this study, we examined the magnetoencephalographic (MEG) ASSR using a sequence of sinusoidal AM tones of 0.78 s in length with tone frequencies of 440-990 Hz, spanning about one octave. We found that the amplitude of the ASSR was invariant with tone frequency when the sound pressure level was adjusted along an equal-loudness curve. The amplitude also did not depend on the presence of a preceding tone or on the frequency of the preceding tone. When the sound level of the AM tones was varied with tone frequency in the same 440-990 Hz range, the amplitude of the ASSR varied in proportion to the sound level. These characteristics are favorable for the use of the ASSR in studying temporal processing of auditory information in the auditory cortex. The lack of adaptation in the ASSR elicited by a sequence of short tones may be ascribed to the neural activity of the widely accepted generator of the magnetic ASSR in the primary auditory cortex. Copyright © 2012 Elsevier B.V. All rights reserved.
Spectral context affects temporal processing in awake auditory cortex
Beitel, Ralph E.; Vollmer, Maike; Heiser, Marc A; Schreiner, Christoph E.
2013-01-01
Amplitude modulation encoding is critical for human speech perception and complex sound processing in general. The modulation transfer function (MTF) is a staple of auditory psychophysics, and has been shown to predict speech intelligibility performance in a range of adverse listening conditions and hearing impairments, including cochlear implant-supported hearing. Although both tonal and broadband carriers have been employed in psychophysical studies of modulation detection and discrimination, relatively little is known about differences in the cortical representation of such signals. We obtained MTFs in response to sinusoidal amplitude modulation (SAM) for both narrowband tonal carriers and 2-octave bandwidth noise carriers in the auditory core of awake squirrel monkeys. MTFs spanning modulation frequencies from 4 to 512 Hz were obtained using 16 channel linear recording arrays sampling across all cortical laminae. Carrier frequency for tonal SAM and center frequency for noise SAM was set at the estimated best frequency for each penetration. Changes in carrier type affected both rate and temporal MTFs in many neurons. Using spike discrimination techniques, we found that discrimination of modulation frequency was significantly better for tonal SAM than for noise SAM, though the differences were modest at the population level. Moreover, spike trains elicited by tonal and noise SAM could be readily discriminated in most cases. Collectively, our results reveal remarkable sensitivity to the spectral content of modulated signals, and indicate substantial interdependence between temporal and spectral processing in neurons of the core auditory cortex. PMID:23719811
Frequency tagging to track the neural processing of contrast in fast, continuous sound sequences.
Nozaradan, Sylvie; Mouraux, André; Cousineau, Marion
2017-07-01
The human auditory system presents a remarkable ability to detect rapid changes in fast, continuous acoustic sequences, as best illustrated in speech and music. However, the neural processing of rapid auditory contrast remains largely unclear, probably due to the lack of methods to objectively dissociate the response components specifically related to the contrast from the other components in response to the sequence of fast continuous sounds. To overcome this issue, we tested a novel use of the frequency-tagging approach allowing contrast-specific neural responses to be tracked based on their expected frequencies. The EEG was recorded while participants listened to 40-s sequences of sounds presented at 8 Hz. A tone or interaural time contrast was embedded every fifth sound (AAAAB), such that a response observed in the EEG at exactly 8 Hz/5 (1.6 Hz) or its harmonics should be the signature of contrast processing by neural populations. Contrast-related responses were successfully identified, even in the case of very fine contrasts. Moreover, analysis of the time course of the responses revealed a stable amplitude over repetitions of the AAAAB patterns in the sequence, except for the response to perceptually salient contrasts, which showed a buildup and decay across repetitions of the sounds. Overall, this new combination of frequency tagging with an oddball design provides a valuable complement to the classic, transient, evoked potentials approach, especially in the context of rapid auditory information. Specifically, we provide objective evidence on the neural processing of contrast embedded in fast, continuous sound sequences. NEW & NOTEWORTHY Recent theories suggest that the basis of neurodevelopmental auditory disorders such as dyslexia might be an impaired processing of fast auditory changes, highlighting how the encoding of rapid acoustic information is critical for auditory communication.
Here, we present a novel electrophysiological approach to capture in humans neural markers of contrasts in fast, continuous tone sequences. Contrast-specific responses were successfully identified, even for very fine contrasts, providing direct insight into the encoding of rapid auditory information. Copyright © 2017 the American Physiological Society.
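The logic of the frequency-tagging analysis above can be sketched numerically: with a 40-s recording the spectral resolution is 1/40 = 0.025 Hz, so the contrast response at 8/5 = 1.6 Hz falls exactly on an FFT bin and can be read out directly. A toy simulation (the sampling rate, amplitudes, and noise level are invented for illustration):

```python
import numpy as np

fs = 250.0                     # EEG sampling rate (assumed)
dur = 40.0                     # 40-s sequence, as in the study
t = np.arange(int(fs * dur)) / fs

# Toy EEG: a base response at the 8 Hz stimulation rate plus a weaker
# contrast-related response at 8/5 = 1.6 Hz (every fifth sound deviant),
# buried in noise
rng = np.random.default_rng(0)
eeg = (np.sin(2 * np.pi * 8.0 * t)
       + 0.3 * np.sin(2 * np.pi * 1.6 * t)
       + 0.5 * rng.standard_normal(t.size))

spec = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

# With a 40-s window, 1.6 Hz falls exactly on bin 1.6 * 40 = 64, so the
# tagged response is isolated there with no spectral leakage
bin_16 = int(round(1.6 * dur))
amp_16 = spec[bin_16]
```

Because the tag frequency is an exact multiple of the frequency resolution, the contrast response lands in a single bin and can be compared directly against the noise floor in neighboring bins.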
Ross, Deborah A.; Puñal, Vanessa M.; Agashe, Shruti; Dweck, Isaac; Mueller, Jerel; Grill, Warren M.; Wilson, Blake S.
2016-01-01
Understanding the relationship between the auditory selectivity of neurons and their contribution to perception is critical to the design of effective auditory brain prosthetics. These prosthetics seek to mimic natural activity patterns to achieve desired perceptual outcomes. We measured the contribution of inferior colliculus (IC) sites to perception using combined recording and electrical stimulation. Monkeys performed a frequency-based discrimination task, reporting whether a probe sound was higher or lower in frequency than a reference sound. Stimulation pulses were paired with the probe sound on 50% of trials (0.5–80 μA, 100–300 Hz, n = 172 IC locations in 3 rhesus monkeys). Electrical stimulation tended to bias the animals' judgments in a fashion that was coarsely but significantly correlated with the best frequency of the stimulation site compared with the reference frequency used in the task. Although there was considerable variability in the effects of stimulation (including impairments in performance and shifts in performance away from the direction predicted based on the site's response properties), the results indicate that stimulation of the IC can evoke percepts correlated with the frequency-tuning properties of the IC. Consistent with the implications of recent human studies, the main avenue for improvement for the auditory midbrain implant suggested by our findings is to increase the number and spatial extent of electrodes, to increase the size of the region that can be electrically activated, and to provide a greater range of evoked percepts. SIGNIFICANCE STATEMENT Patients with hearing loss stemming from causes that interrupt the auditory pathway after the cochlea need a brain prosthetic to restore hearing. Recently, prosthetic stimulation in the human inferior colliculus (IC) was evaluated in a clinical trial. 
Thus far, speech understanding was limited for the subjects and this limitation is thought to be partly due to challenges in harnessing the sound frequency representation in the IC. Here, we tested the effects of IC stimulation in monkeys trained to report the sound frequencies they heard. Our results indicate that the IC can be used to introduce a range of frequency percepts and suggest that placement of a greater number of electrode contacts may improve the effectiveness of such implants. PMID:27147659
[Which colours can we hear?: light stimulation of the hearing system].
Wenzel, G I; Lenarz, T; Schick, B
2014-02-01
The success of conventional hearing aids and electrical auditory prostheses for hearing-impaired patients is still limited in noisy environments and for sounds more complex than speech (e.g. music). This is partially due to the difficulty of frequency-specific activation of the auditory system using these devices. Stimulation of the auditory system with light pulses represents an alternative to mechanical and electrical stimulation. Light is a source of energy that can be very precisely focused and applied with little scattering, thus offering perspectives for optimal activation of the auditory system. Studies investigating light stimulation at different sectors along the auditory pathway have shown that stimulation of the auditory system is possible using light pulses. However, further studies and developments are needed before a new generation of light-stimulation-based auditory prostheses can be made available for clinical application.
Ptok, M; Meisen, R
2008-01-01
The rapid auditory processing deficit theory holds that impaired reading/writing skills are not caused exclusively by a cognitive deficit specific to the representation and processing of speech sounds, but arise from sensory, mainly auditory, deficits. To further explore this theory, we compared different measures of low-level auditory skills with writing skills in school children in a prospective study. Participants were school children attending third and fourth grade. Measures were just noticeable differences for intensity and frequency (JNDI, JNDF), gap detection (GD), and monaural and binaural temporal order judgement (TOJm and TOJb), together with grades in writing, language, and mathematics; the data were subjected to correlation analysis. No relevant correlation was found between any low-level auditory processing variable and writing skills. These data do not support the rapid auditory processing deficit theory.
Auditory Cortex Is Required for Fear Potentiation of Gap Detection
Weible, Aldis P.; Liu, Christine; Niell, Cristopher M.
2014-01-01
Auditory cortex is necessary for the perceptual detection of brief gaps in noise, but is not necessary for many other auditory tasks such as frequency discrimination, prepulse inhibition of startle responses, or fear conditioning with pure tones. It remains unclear why auditory cortex should be necessary for some auditory tasks but not others. One possibility is that auditory cortex is causally involved in gap detection and other forms of temporal processing in order to associate meaning with temporally structured sounds. This predicts that auditory cortex should be necessary for associating meaning with gaps. To test this prediction, we developed a fear conditioning paradigm for mice based on gap detection. We found that pairing a 10 or 100 ms gap with an aversive stimulus caused a robust enhancement of gap detection measured 6 h later, which we refer to as fear potentiation of gap detection. Optogenetic suppression of auditory cortex during pairing abolished this fear potentiation, indicating that auditory cortex is critically involved in associating temporally structured sounds with emotionally salient events. PMID:25392510
Cortical evoked potentials to an auditory illusion: binaural beats.
Pratt, Hillel; Starr, Arnold; Michalewski, Henry J; Dimitrijevic, Andrew; Bleich, Naomi; Mittelman, Nomi
2009-08-01
To define brain activity corresponding to an auditory illusion of 3 and 6 Hz binaural beats in 250 Hz or 1000 Hz base frequencies, and compare it to the sound onset response. Event-Related Potentials (ERPs) were recorded in response to unmodulated tones of 250 or 1000 Hz to one ear and 3 or 6 Hz higher to the other, creating an illusion of amplitude modulations (beats) of 3 Hz and 6 Hz, in base frequencies of 250 Hz and 1000 Hz. Tones were 2000 ms in duration and presented with approximately 1 s intervals. Latency, amplitude and source current density estimates of ERP components to tone onset and subsequent beats-evoked oscillations were determined and compared across beat frequencies with both base frequencies. All stimuli evoked tone-onset P50, N100 and P200 components followed by oscillations corresponding to the beat frequency, and a subsequent tone-offset complex. Beats-evoked oscillations were higher in amplitude with the low base frequency and to the low beat frequency. Sources of the beats-evoked oscillations located mostly to left lateral and inferior temporal lobe areas in all stimulus conditions. Onset-evoked components were not different across stimulus conditions; P50 had significantly different sources than the beats-evoked oscillations; and N100 and P200 sources located to the same temporal lobe regions as beats-evoked oscillations, but were bilateral and also included frontal and parietal contributions. Neural activity with slightly different volley frequencies from left and right ear converges and interacts in the central auditory brainstem pathways to generate beats of neural activity to modulate activities in the left temporal lobe, giving rise to the illusion of binaural beats. Cortical potentials recorded to binaural beats are distinct from onset responses. Brain activity corresponding to an auditory illusion of low frequency beats can be recorded from the scalp.
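The stimulus construction above is simple to reproduce: a pure tone to one ear and a tone a few hertz higher to the other, so that neither ear receives an amplitude-modulated signal and the beat exists only after binaural convergence. A sketch of the 3 Hz beat in the 250 Hz base condition (sampling rate and analysis windows are illustrative):

```python
import numpy as np

fs = 44100
dur = 2.0                      # 2000 ms tones, as in the study
t = np.arange(int(fs * dur)) / fs

base, beat = 250.0, 3.0
left = np.sin(2 * np.pi * base * t)            # 250 Hz to one ear
right = np.sin(2 * np.pi * (base + beat) * t)  # 253 Hz to the other ear

# Neither monaural signal carries any amplitude modulation; the 3 Hz
# "beat" arises only when the two inputs interact. A physical mix of the
# two tones makes the envelope explicit:
#   left + right = 2 * cos(pi * beat * t) * sin(2 * pi * (base + beat/2) * t)
mix = left + right

def rms(x):
    return float(np.sqrt(np.mean(x ** 2)))

# The mixed envelope peaks at t = 0 and collapses to a null at
# t = 1 / (2 * beat), i.e. every 1/6 s for a 3 Hz beat
peak_rms = rms(mix[: int(0.04 * fs)])
null_rms = rms(mix[int(fs / (2 * beat)) - int(0.02 * fs):
                   int(fs / (2 * beat)) + int(0.02 * fs)])
```

The contrast between the flat monaural signals and the strongly modulated mix is what makes the percept an illusion: the 3 Hz modulation is created by neural interaction in the brainstem, not present at either ear.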
Auditory and tactile gap discrimination by observers with normal and impaired hearing.
Desloge, Joseph G; Reed, Charlotte M; Braida, Louis D; Perez, Zachary D; Delhorne, Lorraine A; Villabona, Timothy J
2014-02-01
Temporal processing ability for the senses of hearing and touch was examined through the measurement of gap-duration discrimination thresholds (GDDTs) employing the same low-frequency sinusoidal stimuli in both modalities. GDDTs were measured in three groups of observers (normal-hearing, hearing-impaired, and normal-hearing with simulated hearing loss) covering an age range of 21-69 yr. GDDTs for a baseline gap of 6 ms were measured for four different combinations of 100-ms leading and trailing markers (250-250, 250-400, 400-250, and 400-400 Hz). Auditory measurements were obtained for monaural presentation over headphones and tactile measurements were obtained using sinusoidal vibrations presented to the left middle finger. The auditory GDDTs of the hearing-impaired listeners, which were larger than those of the normal-hearing observers, were well-reproduced in the listeners with simulated loss. The magnitude of the GDDT was generally independent of modality and showed effects of age in both modalities. The use of different-frequency compared to same-frequency markers led to a greater deterioration in auditory GDDTs compared to tactile GDDTs and may reflect differences in bandwidth properties between the two sensory systems.
The temporal representation of speech in a nonlinear model of the guinea pig cochlea
NASA Astrophysics Data System (ADS)
Holmes, Stephen D.; Sumner, Christian J.; O'Mard, Lowel P.; Meddis, Ray
2004-12-01
The temporal representation of speechlike stimuli in the auditory-nerve output of a guinea pig cochlea model is described. The model consists of a bank of dual resonance nonlinear filters that simulate the vibratory response of the basilar membrane, followed by a model of the inner hair cell/auditory nerve complex. The model is evaluated by comparing its output with published physiological auditory nerve data in response to single and double vowels. The evaluation includes analyses of individual fibers, as well as ensemble responses over a wide range of best frequencies. In all cases the model response closely follows the patterns in the physiological data, particularly the tendency for the temporal firing pattern of each fiber to represent the frequency of a nearby formant of the speech sound. In the model this behavior is largely a consequence of filter shapes; nonlinear filtering has only a small contribution at low frequencies. The guinea pig cochlear model produces a useful simulation of the measured physiological response to simple speech sounds and is therefore suitable for use in more advanced applications, including attempts to generalize these principles to the response of the human auditory system, both normal and impaired.
Popov, Vladimir V; Sysueva, Evgeniya V; Nechaev, Dmitry I; Lemazina, Alena A; Supin, Alexander Ya
2016-08-01
Using the auditory evoked response technique, sensitivity to local acoustic stimulation of the ventro-lateral head surface was investigated in a beluga whale (Delphinapterus leucas). The stimuli were tone pip trains of carrier frequencies ranging from 16 to 128 kHz with a pip rate of 1 kHz. For higher frequencies (90-128 kHz), the low-threshold point was located next to the medial side of the middle portion of the lower jaw. For middle (32-64 kHz) and lower (16-22.5 kHz) frequencies, the low-threshold point was located at the lateral side of the middle portion of the lower jaw. For lower frequencies, there was an additional low-threshold point next to the bulla-meatus complex. Based on these data, several frequency-specific paths of sound conduction to the auditory bulla are suggested: (i) through an area on the lateral surface of the lower jaw and further through the intra-jaw fat-body channel (for a wide frequency range); (ii) through an area on the ventro-lateral head surface and further through the medial opening of the lower jaw and intra-jaw fat-body channel (for a high-frequency range); and (iii) through an area on the lateral (near meatus) head surface and further through the lateral fat-body channel (for a low-frequency range).
The Auditory Skills Necessary for Echolocation: A New Explanation.
ERIC Educational Resources Information Center
Carlson-Smith, C.; Wiener, W. R.
1996-01-01
This study employed an audiometric test battery with nine blindfolded undergraduate students to explore success factors in echolocation. Echolocation performance correlated significantly with several specific auditory measures. No relationship was found between high-frequency sensitivity and echolocation performance. (Author/PB)
NASA Astrophysics Data System (ADS)
Markovitz, Craig D.; Hogan, Patrick S.; Wesen, Kyle A.; Lim, Hubert H.
2015-04-01
Objective. The corticofugal system can alter coding along the ascending sensory pathway. Within the auditory system, electrical stimulation of the auditory cortex (AC) paired with a pure tone can cause egocentric shifts in the tuning of auditory neurons, making them more sensitive to the pure tone frequency. Since tinnitus has been linked with hyperactivity across auditory neurons, we sought to develop a new neuromodulation approach that could suppress a wide range of neurons rather than enhance specific frequency-tuned neurons. Approach. We performed experiments in the guinea pig to assess the effects of cortical stimulation paired with broadband noise (PN-Stim) on ascending auditory activity within the central nucleus of the inferior colliculus (CNIC), a widely studied region for AC stimulation paradigms. Main results. All eight stimulated AC subregions induced extensive suppression of activity across the CNIC that was not possible with noise stimulation alone. This suppression built up over time and remained after the PN-Stim paradigm. Significance. We propose that the corticofugal system is designed to decrease the brain’s input gain to irrelevant stimuli and PN-Stim is able to artificially amplify this effect to suppress neural firing across the auditory system. The PN-Stim concept may have potential for treating tinnitus and other neurological disorders.
Iwasaki, Mai; Poulsen, Thomas M.; Oka, Kotaro; Hessler, Neal A.
2013-01-01
A critical function of singing by male songbirds is to attract a female mate. Previous studies have suggested that the anterior forebrain system is involved in this courtship behavior. Neural activity in this system, including the striatal Area X, is strikingly dependent on the function of male singing. When males sing to attract a female bird rather than while alone, less variable neural activity results in less variable song spectral features, which may be attractive to the female. These characteristics of neural activity and singing thus may reflect a male's motivation for courtship. Here, we compared the variability of neural activity and song features between courtship singing directed to a female with whom a male had previously formed a pair-bond and courtship singing directed to other females. Surprisingly, across all units, there was no clear tendency for a difference in variability of neural activity or song features among courtship of paired, nonpaired, or dummy females. However, across the population of recordings, there was a significant relationship between the relative variability of syllable frequency and neural activity: when syllable frequency was less variable to paired than nonpaired females, neural activity was also less variable (and vice versa). These results show that the lower variability of neural activity and syllable frequency during directed singing is not a binary distinction from undirected singing, but can vary in intensity, possibly related to the relative preference of a male for his singing target. PMID:24312344
Wilson, Uzma S.; Kaf, Wafaa A.; Danesh, Ali A.; Lichtenhan, Jeffery T.
2016-01-01
Objective To determine the clinical utility of narrow-band chirp-evoked 40-Hz sinusoidal auditory steady state responses (s-ASSR) in the assessment of low-frequency hearing in noisy participants. Design Tone bursts and narrow-band chirps were used to evoke, respectively, auditory brainstem response (tb-ABR) and 40-Hz s-ASSR thresholds with the Kalman-weighted filtering technique, which were compared to behavioral thresholds at 500, 2000, and 4000 Hz. A repeated-measures ANOVA with post-hoc t-tests, and simple regression analyses, were performed for each of the three stimulus frequencies. Study Sample Thirty young adults aged 18–25 with normal hearing participated in this study. Results When 4000 equivalent response averages were used, mean s-ASSR thresholds at 500, 2000, and 4000 Hz were 17–22 dB lower (better) than when 2000 averages were used. Mean tb-ABR thresholds were lower by 11–15 dB at 2000 and 4000 Hz when twice as many equivalent response averages were used, while mean tb-ABR thresholds at 500 Hz were indistinguishable regardless of additional response averaging. Conclusion Narrow-band chirp-evoked 40-Hz s-ASSR requires a ~15 dB smaller correction factor than tb-ABR for estimating low-frequency auditory thresholds in noisy participants when adequate response averaging is used. PMID:26795555
Hoare, Derek J; Kowalkowski, Victoria L; Hall, Deborah A
2012-08-01
That auditory perceptual training may alleviate tinnitus draws on two observations: (1) tinnitus probably arises from altered activity within the central auditory system following hearing loss and (2) sound-based training can change central auditory activity. Training that provides sound enrichment across hearing loss frequencies has therefore been hypothesised to alleviate tinnitus. We tested this prediction with two randomised trials of frequency discrimination training involving a total of 70 participants with chronic subjective tinnitus. Participants trained on either (1) a pure-tone standard at a frequency within their region of normal hearing, (2) a pure-tone standard within the region of hearing loss or (3) a high-pass harmonic complex tone spanning a region of hearing loss. Analysis of the primary outcome measure revealed an overall reduction in self-reported tinnitus handicap after training that was maintained at a 1-month follow-up assessment, but there were no significant differences between groups. Secondary analyses also report the effects of different domains of tinnitus handicap on the psychoacoustical characteristics of the tinnitus percept (sensation level, bandwidth and pitch) and on duration of training. Our overall findings and conclusions cast doubt on the superiority of a purely acoustic mechanism for tinnitus remediation. Rather, the nonspecific pattern of improvement suggests that auditory perceptual training acts on a contributory mechanism such as selective attention or emotional state.
Joanisse, Marc F; DeSouza, Diedre D
2014-01-01
Functional Magnetic Resonance Imaging (fMRI) was used to investigate the extent, magnitude, and pattern of brain activity in response to rapid frequency-modulated sounds. We examined this by manipulating the direction (rise vs. fall) and the rate (fast vs. slow) of the apparent pitch of iterated rippled noise (IRN) bursts. Acoustic parameters were selected to capture features used in phoneme contrasts; however, the stimuli themselves were not perceived as speech per se. Participants were scanned as they passively listened to sounds in an event-related paradigm. Univariate analyses revealed a greater level and extent of activation in bilateral auditory cortex in response to frequency-modulated sweeps compared to steady-state sounds. This effect was stronger in the left hemisphere. However, no regions showed selectivity for either rate or direction of frequency modulation. In contrast, multivoxel pattern analysis (MVPA) revealed feature-specific encoding for direction of modulation in auditory cortex bilaterally. Moreover, this effect was strongest when analyses were restricted to anatomical regions lying outside Heschl's gyrus. We found no support for feature-specific encoding of frequency modulation rate. The differential findings for modulation rate and direction are discussed with respect to their relevance to phonetic discrimination.
Puschmann, Sebastian; Weerda, Riklef; Klump, Georg; Thiel, Christiane M
2013-05-01
Psychophysical experiments show that auditory change detection can be disturbed in situations in which listeners have to monitor complex auditory input. We made use of this change deafness effect to segregate the neural correlates of physical change in auditory input from brain responses related to conscious change perception in an fMRI experiment. Participants listened to two successively presented complex auditory scenes, which consisted of six auditory streams, and had to decide whether scenes were identical or whether the frequency of one stream was changed between presentations. Our results show that physical changes in auditory input, independent of successful change detection, are represented at the level of auditory cortex. Activations related to conscious change perception, independent of physical change, were found in the insula and the ACC. Moreover, our data provide evidence for significant effective connectivity between auditory cortex and the insula in the case of correctly detected auditory changes, but not for missed changes. This underlines the importance of the insula/anterior cingulate network for conscious change detection.
Lau, Bonnie K; Ruggles, Dorea R; Katyal, Sucharit; Engel, Stephen A; Oxenham, Andrew J
2017-01-01
Short-term training can lead to improvements in behavioral discrimination of auditory and visual stimuli, as well as enhanced EEG responses to those stimuli. In the auditory domain, fluency with tonal languages and musical training has been associated with long-term cortical and subcortical plasticity, but less is known about the effects of shorter-term training. This study combined electroencephalography (EEG) and behavioral measures to investigate short-term learning and neural plasticity in both auditory and visual domains. Forty adult participants were divided into four groups. Three groups trained on one of three tasks, involving discrimination of auditory fundamental frequency (F0), auditory amplitude modulation rate (AM), or visual orientation (VIS). The fourth (control) group received no training. Pre- and post-training tests, as well as retention tests 30 days after training, involved behavioral discrimination thresholds, steady-state visually evoked potentials (SSVEP) to the flicker frequencies of visual stimuli, and auditory envelope-following responses simultaneously evoked and measured in response to rapid stimulus F0 (EFR), thought to reflect subcortical generators, and slow amplitude modulation (ASSR), thought to reflect cortical generators. Enhancement of the ASSR was observed in both auditory-trained groups, not specific to the AM-trained group, whereas enhancement of the SSVEP was found only in the visually-trained group. No evidence was found for changes in the EFR. The results suggest that some aspects of neural plasticity can develop rapidly and may generalize across tasks but not across modalities. Behaviorally, the pattern of learning was complex, with significant cross-task and cross-modal learning effects.
Mina, Faten; Attina, Virginie; Duroc, Yvan; Veuillet, Evelyne; Truy, Eric; Thai-Van, Hung
2017-01-01
Auditory steady state responses (ASSRs) in cochlear implant (CI) patients are contaminated by the spread of a continuous CI electrical stimulation artifact. The aim of this work was to model the electrophysiological mixture of the CI artifact and the corresponding evoked potentials on scalp electrodes in order to evaluate the performance of denoising algorithms in eliminating the CI artifact in a controlled environment. The basis of the proposed computational framework is a neural mass model representing the nodes of the auditory pathways. Six main contributors to auditory evoked potentials from the cochlear level and up to the auditory cortex were taken into consideration. The simulated dynamics were then projected into a 3-layer realistic head model. 32-channel scalp recordings of the CI artifact-response were then generated by solving the electromagnetic forward problem. As an application, the framework’s simulated 32-channel datasets were used to compare the performance of 4 commonly used Independent Component Analysis (ICA) algorithms: infomax, extended infomax, jade and fastICA in eliminating the CI artifact. As expected, two major components were detectable in the simulated datasets, a low frequency component at the modulation frequency and a pulsatile high frequency component related to the stimulation frequency. The first can be attributed to the phase-locked ASSR and the second to the stimulation artifact. Among the ICA algorithms tested, simulations showed that infomax was the most efficient and reliable in denoising the CI artifact-response mixture. Denoising algorithms can induce undesirable deformation of the signal of interest in real CI patient recordings. The proposed framework is a valuable tool for evaluating these algorithms in a controllable environment ahead of experimental or clinical applications. PMID:28350887
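The ICA-based artifact separation evaluated in this framework can be sketched with scikit-learn's FastICA on a toy simulation: a slow 40 Hz "response" component and a pulsatile high-frequency "artifact" component mixed onto a few channels. This is an illustrative stand-in for the authors' controlled evaluation, not their pipeline; the signal parameters, channel count, and mixing matrix below are all assumptions.

```python
# Toy sketch of ICA-based separation of a steady-state response from a
# continuous stimulation artifact, in the spirit of a controlled simulation.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
fs = 1000.0                                      # sampling rate, Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)

assr = np.sin(2 * np.pi * 40 * t)                # stand-in 40 Hz response
artifact = np.sign(np.sin(2 * np.pi * 250 * t))  # stand-in pulsatile artifact
sources = np.c_[assr, artifact]

# Project the two sources onto 4 simulated "scalp channels" plus noise.
mixing = rng.normal(size=(2, 4))
channels = sources @ mixing + 0.05 * rng.normal(size=(len(t), 4))

ica = FastICA(n_components=2, random_state=0)
unmixed = ica.fit_transform(channels)            # shape: (n_samples, 2)

# In a simulation the ground truth is known, so the recovered component
# carrying the 40 Hz response can be identified by correlation -- exactly
# the kind of check a controlled evaluation environment makes possible.
corr = [abs(np.corrcoef(unmixed[:, k], assr)[0, 1]) for k in range(2)]
print("component correlations with 40 Hz source:", corr)
```

One recovered component correlates strongly with the known 40 Hz source, illustrating why simulated ground truth is valuable for benchmarking denoising algorithms before applying them to real CI recordings.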
Eye field requires the function of Sfrp1 as a Wnt antagonist.
Kim, Hyung-Seok; Shin, Jimann; Kim, Seok-Hyung; Chun, Hang-Suk; Kim, Jun-Dae; Kim, Young-Seop; Kim, Myoung-Jin; Rhee, Myungchull; Yeo, Sang-Yeob; Huh, Tae-Lin
2007-02-27
Wnts have been shown to provide a posteriorizing signal that has to be repressed in the specification of the vertebrate forebrain region. Previous studies have shown that Wnt activation by LiCl treatment causes an expansion of the optic stalk and mid-hindbrain boundary, whereas the eye and ventral diencephalon in the forebrain region are reduced. However, the molecular mechanism by which Wnt activity is inhibited in the forebrain remains poorly defined. To investigate the relationship between forebrain specification and Wnt signaling, the zebrafish homologue of secreted frizzled-related protein 1 (sfrp1) has been characterized. The transcripts of sfrp1 are detected in the presumptive forebrain at gastrula stages and in the ventral telencephalon, ventral diencephalon, midbrain and optic vesicles at 24 hours postfertilization (hpf). Overexpression of sfrp1 causes anteriorization of the embryo, with an enlarged head and reduced posterior structures, as in embryos overexpressing a dominant-negative form of Frizzled8a or Dkk1. Its overexpression rescued the eye defects in Wnt8b-overexpressing embryos, but not in LiCl-treated embryos. These results suggest that Sfrp1 expressed in the forebrain and eye field plays a critical role in the extracellular events antagonizing Wnt activity for forebrain specification.
The effect of the inner-hair-cell mediated transduction on the shape of neural tuning curves
NASA Astrophysics Data System (ADS)
Altoè, Alessandro; Pulkki, Ville; Verhulst, Sarah
2018-05-01
The inner hair cells of the mammalian cochlea transform the vibrations of their stereocilia into releases of neurotransmitter at the ribbon synapses, thereby controlling the activity of the afferent auditory fibers. The mechanical-to-neural transduction is a highly nonlinear process and it introduces differences between the frequency tuning of the stereocilia and that of the afferent fibers. Using a computational model of the inner hair cell that is based on in vitro data, we estimated that smaller vibrations of the stereocilia are necessary to drive the afferent fibers above threshold at low (≤0.5 kHz) than at high (≥4 kHz) driving frequencies. In the base of the cochlea, the transduction process affects the low-frequency tails of neural tuning curves. In particular, it introduces differences between the frequency tuning of the stereocilia and that of the auditory fibers resembling those between basilar membrane velocity and auditory fiber tuning curves in the chinchilla base. For units with characteristic frequencies between 1 and 4 kHz, the transduction process yields neural tuning curves that become increasingly shallow, relative to stereocilia tuning curves, as the characteristic frequency decreases. This study proposes that transduction contributes to the progressive broadening of neural tuning curves from the base to the apex.
A bio-inspired auditory perception model for amplitude-frequency clustering (keynote Paper)
NASA Astrophysics Data System (ADS)
Arena, Paolo; Fortuna, Luigi; Frasca, Mattia; Ganci, Gaetana; Patane, Luca
2005-06-01
In this paper a model for auditory perception is introduced. This model is based on a network of integrate-and-fire and resonate-and-fire neurons and is aimed at controlling the phonotaxis behavior of a roving robot. The starting point is the model of phonotaxis in Gryllus bimaculatus: the model consists of four integrate-and-fire neurons and is able to discriminate the calling song of the male cricket and orient the robot towards the sound source. This paper aims to extend the model to include amplitude-frequency clustering. The proposed spiking network shows different behaviors associated with different characteristics of the input signals (amplitude and frequency). The behavior implemented on the robot is similar to the cricket's, where some frequencies are associated with the calling song of male crickets, while others indicate the presence of predators. Therefore, the whole model for auditory perception is devoted to controlling different responses (attractive or repulsive) depending on the input characteristics. The performance of the control system has been evaluated with several experiments carried out on a roving robot.
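The integrate-and-fire neuron that forms the building block of such networks can be sketched in a few lines. This is a generic leaky integrate-and-fire simulation under assumed parameters, not a reproduction of the paper's four-neuron phonotaxis circuit: a subthreshold input produces no spikes, while a suprathreshold input produces regular firing.

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch. All parameters
# (tau, R, threshold) are illustrative assumptions, not the paper's values.
def lif_spike_times(i_input, t_max=0.5, dt=1e-4,
                    tau=0.02, r_m=10.0, v_thresh=1.0, v_reset=0.0):
    """Simulate an LIF neuron with constant input; return spike times (s)."""
    v = 0.0
    spikes = []
    for step in range(int(t_max / dt)):
        # Membrane equation: tau * dv/dt = -v + R * I  (Euler integration)
        v += dt * (-v + r_m * i_input) / tau
        if v >= v_thresh:          # threshold crossing -> spike and reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

weak = lif_spike_times(0.05)   # steady state R*I = 0.5 < threshold: silent
strong = lif_spike_times(0.3)  # steady state R*I = 3.0 > threshold: fires
print(len(weak), len(strong))
```

The threshold behavior is what lets a small network of such units act as a feature detector: only inputs with the right amplitude (and, with resonate-and-fire units, the right frequency) drive the motor side of the circuit.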
Mouterde, Solveig C; Elie, Julie E; Mathevon, Nicolas; Theunissen, Frédéric E
2017-03-29
One of the most complex tasks performed by sensory systems is "scene analysis": the interpretation of complex signals as behaviorally relevant objects. The study of this problem, universal to species and sensory modalities, is particularly challenging in audition, where sounds from various sources and localizations, degraded by propagation through the environment, sum to form a single acoustical signal. Here we investigated in a songbird model, the zebra finch, the neural substrate for ranging and identifying a single source. We relied on ecologically and behaviorally relevant stimuli, contact calls, to investigate the neural discrimination of individual vocal signature as well as sound source distance when calls have been degraded through propagation in a natural environment. Performing electrophysiological recordings in anesthetized birds, we found neurons in the auditory forebrain that discriminate individual vocal signatures despite long-range degradation, as well as neurons discriminating propagation distance, with varying degrees of multiplexing between both information types. Moreover, the neural discrimination performance of individual identity was not affected by propagation-induced degradation beyond what was induced by the decreased intensity. For the first time, neurons with distance-invariant identity discrimination properties as well as distance-discriminant neurons are revealed in the avian auditory cortex. Because these neurons were recorded in animals that had prior experience neither with the vocalizers of the stimuli nor with long-range propagation of calls, we suggest that this neural population is part of a general-purpose system for vocalizer discrimination and ranging. SIGNIFICANCE STATEMENT Understanding how the brain makes sense of the multitude of stimuli that it continually receives in natural conditions is a challenge for scientists. 
Here we provide a new understanding of how the auditory system extracts behaviorally relevant information, namely the vocalizer's identity and its distance from the listener, from acoustic signals that have been degraded by long-range propagation in natural conditions. We show, for the first time, that single neurons in the auditory cortex of zebra finches are capable of discriminating individual identity and sound source distance in conspecific communication calls. The discrimination of identity in propagated calls relies on a neural coding that is robust to changes in intensity, signal quality, and signal-to-noise ratio.
Effect of current on the maximum possible reward.
Gallistel, C R; Leon, M; Waraczynski, M; Hanau, M S
1991-12-01
Using a 2-lever choice paradigm with concurrent variable interval schedules of reward, it was found that when pulse frequency is increased, the preference-determining rewarding effect of 0.5-s trains of brief cathodal pulses delivered to the medial forebrain bundle of the rat saturates (stops increasing) at values ranging from 200 to 631 pulses/s (pps). Raising the current lowered the saturation frequency, which confirms earlier, more extensive findings showing that the rewarding effect of short trains saturates at pulse frequencies that vary from less than 100 pps to more than 800 pps, depending on the current. It was also found that the maximum possible reward--the magnitude of the reward at or beyond the saturation pulse frequency--increases with increasing current. Thus, increasing the current reduces the saturation frequency but increases the subjective magnitude of the maximum possible reward.
Auditory frequency generalization in the goldfish (Carassius auratus)
Fay, Richard R.
1970-01-01
Auditory frequency generalization in the goldfish was studied at five points within the best hearing range through the use of classical respiratory conditioning. Each experimental group received single-stimulus conditioning sessions at one of five stimulus frequencies (100, 200, 400, 800, and 1600 Hz) and was subsequently tested for generalization at eight neighboring frequencies. All stimuli were presented 30 dB above absolute threshold. Significant generalization decrements were found for all subjects. For the subjects conditioned in the range between 100 and 800 Hz, a nearly complete failure to generalize was found at one octave above and below the training frequency. The subjects conditioned at 1600 Hz produced relatively flatter gradients between 900 and 2000 Hz. The widths of the generalization gradients, expressed in Hz, increased as a power function of frequency with a slope greater than one. PMID:16811481
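The reported power-function relationship, gradient width W = a * f^b with exponent b > 1, becomes a straight line of slope b in log-log coordinates, so the exponent can be estimated by linear regression on the logged data. The sketch below uses fabricated widths obeying b = 1.3 purely to illustrate the fitting procedure; these are not the goldfish measurements.

```python
# Estimating the exponent of a power law W = a * f**b by a linear fit in
# log-log space. The widths here are synthetic (assumed b = 1.3), chosen
# only to demonstrate the method on the study's training frequencies.
import numpy as np

freqs = np.array([100.0, 200.0, 400.0, 800.0, 1600.0])  # training freqs (Hz)
widths = 0.05 * freqs ** 1.3                             # synthetic widths

slope, intercept = np.polyfit(np.log10(freqs), np.log10(widths), 1)
print(f"estimated exponent b = {slope:.2f}")
```

A slope greater than one in such a fit means gradient width in Hz grows faster than proportionally with frequency, i.e. tuning broadens in absolute terms toward higher frequencies.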
Zhai, Qian; Lai, Dengming; Cui, Ping; Zhou, Rui; Chen, Qixing; Hou, Jinchao; Su, Yunting; Pan, Libiao; Ye, Hui; Zhao, Jing-Wei; Fang, Xiangming
2017-10-01
Basal forebrain cholinergic neurons are proposed as a major neuromodulatory system in inflammatory modulation. However, the function of basal forebrain cholinergic neurons in sepsis is unknown, and the neural pathways underlying cholinergic anti-inflammation remain unexplored. Design: Animal research. Setting: University research laboratory. Subjects: Male wild-type C57BL/6 mice and ChAT-ChR2-EYFP (ChAT) transgenic mice. Interventions: The cholinergic neuronal activity of the basal forebrain was manipulated optogenetically. Cecal ligation and puncture was performed to induce sepsis. Left cervical vagotomy and 6-hydroxydopamine injection into the spleen were used. Photostimulation of basal forebrain cholinergic neurons induced a significant decrease in the levels of tumor necrosis factor-α and interleukin-6 in the serum and spleen. When cecal ligation and puncture was combined with left cervical vagotomy in photostimulated ChAT mice, these reductions in tumor necrosis factor-α and interleukin-6 were partly reversed. Furthermore, photostimulating basal forebrain cholinergic neurons induced a large increase in c-Fos expression in the basal forebrain, the dorsal motor nucleus of the vagus, and the ventral part of the solitary nucleus. Among them, 35.2% were tyrosine hydroxylase-positive neurons. Furthermore, chemical denervation showed that dopaminergic neurotransmission to the spleen is indispensable for the anti-inflammation. These results are the first to demonstrate that selectively activating basal forebrain cholinergic neurons is sufficient to attenuate systemic inflammation in sepsis. Specifically, photostimulation of basal forebrain cholinergic neurons activated dopaminergic neurons in the dorsal motor nucleus of the vagus/ventral part of the solitary nucleus, and this dopaminergic efferent signal was further transmitted by the vagus nerve to the spleen.
This cholinergic-to-dopaminergic neural circuitry, connecting central cholinergic neurons to the peripheral organ, might have mediated the anti-inflammatory effect in sepsis.
Influence of aging on human sound localization
Dobreva, Marina S.; O'Neill, William E.
2011-01-01
Errors in sound localization, associated with age-related changes in peripheral and central auditory function, can pose threats to self and others in a commonly encountered environment such as a busy traffic intersection. This study aimed to quantify the accuracy and precision (repeatability) of free-field human sound localization as a function of advancing age. Head-fixed young, middle-aged, and elderly listeners localized band-passed targets using visually guided manual laser pointing in a darkened room. Targets were presented in the frontal field by a robotically controlled loudspeaker assembly hidden behind a screen. Broadband targets (0.1–20 kHz) activated all auditory spatial channels, whereas low-pass and high-pass targets selectively isolated interaural time and intensity difference cues (ITDs and IIDs) for azimuth and high-frequency spectral cues for elevation. In addition, to assess the upper frequency limit of ITD utilization across age groups more thoroughly, narrowband targets were presented at 250-Hz intervals from 250 Hz up to ∼2 kHz. Young subjects generally showed horizontal overestimation (overshoot) and vertical underestimation (undershoot) of auditory target location, and this effect varied with frequency band. Accuracy and/or precision worsened in older individuals for broadband, high-pass, and low-pass targets, reflective of peripheral but also central auditory aging. In addition, compared with young adults, middle-aged, and elderly listeners showed pronounced horizontal localization deficiencies (imprecision) for narrowband targets within 1,250–1,575 Hz, congruent with age-related central decline in auditory temporal processing. Findings underscore the distinct neural processing of the auditory spatial cues in sound localization and their selective deterioration with advancing age. PMID:21368004
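The interaural time difference cue discussed above can be made concrete with the classic Woodworth spherical-head approximation, which relates azimuth to ITD through head geometry. This is a textbook formula offered for illustration, not the study's method; the head radius is an assumed typical adult value.

```python
# Woodworth spherical-head approximation: ITD = (a / c) * (theta + sin(theta)),
# where a is head radius, c the speed of sound, theta the source azimuth.
# Head radius and c are assumed typical values, not measured data.
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """ITD in seconds for a source at the given azimuth (0 = straight ahead)."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

for az in (0, 30, 60, 90):
    print(az, round(woodworth_itd(az) * 1e6, 1), "microseconds")
```

The maximum ITD for a source at 90 degrees azimuth comes out in the vicinity of 650-700 microseconds for an adult head, which is the scale of temporal disparity the binaural system must resolve; narrowband ITD cues become ambiguous once the period of the tone approaches this value, consistent with the ~1.5 kHz upper limit probed in the study.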
Moreno-Aguirre, Alma Janeth; Santiago-Rodríguez, Efraín; Harmony, Thalía; Fernández-Bouzas, Antonio
2012-01-01
Approximately 2-4% of newborns with perinatal risk factors present with hearing loss. Our aim was to analyze the effect of hearing aid use on auditory function evaluated based on otoacoustic emissions (OAEs), auditory brain responses (ABRs) and auditory steady state responses (ASSRs) in infants with perinatal brain injury and profound hearing loss. A prospective, longitudinal study of auditory function in infants with profound hearing loss. Right side hearing before and after hearing aid use was compared with left side hearing (not stimulated and used as control). All infants were subjected to OAE, ABR and ASSR evaluations before and after hearing aid use. The average ABR threshold decreased from 90.0 to 80.0 dB (p = 0.003) after six months of hearing aid use. In the left ear, which was used as a control, the ABR threshold decreased from 94.6 to 87.6 dB, which was not significant (p>0.05). In addition, the ASSR threshold in the 4000-Hz frequency decreased from 89 dB to 72 dB (p = 0.013) after six months of right ear hearing aid use; the other frequencies in the right ear and all frequencies in the left ear did not show significant differences in any of the measured parameters (p>0.05). OAEs were absent in the baseline test and showed no changes after hearing aid use in the right ear (p>0.05). This study provides evidence that early hearing aid use decreases the hearing threshold in ABR and ASSR assessments with no functional modifications in the auditory receptor, as evaluated by OAEs.
Kantrowitz, Joshua T; Epstein, Michael L; Beggel, Odeta; Rohrig, Stephanie; Lehrfeld, Jonathan M; Revheim, Nadine; Lehrfeld, Nayla P; Reep, Jacob; Parker, Emily; Silipo, Gail; Ahissar, Merav; Javitt, Daniel C
2016-12-01
Schizophrenia is associated with deficits in cortical plasticity that affect sensory brain regions and lead to impaired cognitive performance. Here we examined underlying neural mechanisms of auditory plasticity deficits using combined behavioural and neurophysiological assessment, along with neuropharmacological manipulation targeted at the N-methyl-D-aspartate type glutamate receptor (NMDAR). Cortical plasticity was assessed in a cohort of 40 schizophrenia/schizoaffective patients relative to 42 healthy control subjects using a fixed reference tone auditory plasticity task. In a second cohort (n = 21 schizophrenia/schizoaffective patients, n = 13 healthy controls), event-related potential and event-related time-frequency measures of auditory dysfunction were assessed during administration of the NMDAR agonist d-serine. Mismatch negativity was used as a functional read-out of auditory-level function. Clinical trials registration numbers were NCT01474395/NCT02156908. Schizophrenia/schizoaffective patients showed significantly reduced auditory plasticity versus healthy controls (P = 0.001) that correlated with measures of cognitive, occupational and social dysfunction. In event-related potential/time-frequency analyses, patients showed highly significant reductions in sensory N1 that reflected underlying impairments in θ responses (P < 0.001), along with reduced θ- and β-power modulation during retention and motor-preparation intervals. Repeated administration of d-serine led to intercorrelated improvements in (i) auditory plasticity (P < 0.001); (ii) θ-frequency response (P < 0.05); and (iii) mismatch negativity generation to trained versus untrained tones (P = 0.02). Schizophrenia/schizoaffective patients show highly significant deficits in auditory plasticity that contribute to cognitive, occupational and social dysfunction.
d-serine studies suggest first that NMDAR dysfunction may contribute to underlying cortical plasticity deficits and, second, that repeated NMDAR agonist administration may enhance cortical plasticity in schizophrenia.
One-year audiologic monitoring of individuals exposed to the 1995 Oklahoma City bombing.
Van Campen, L E; Dennis, J M; Hanlin, R C; King, S B; Velderman, A M
1999-05-01
This longitudinal study evaluated subjective, behavioral, and objective auditory function in 83 explosion survivors. Subjects were evaluated quarterly for 1 year with conventional pure-tone and extended high-frequency audiometry, otoscopic inspections, immittance and speech audiometry, and questionnaires. There was no obvious relationship between subject location and symptoms or test results. Tinnitus, distorted hearing, loudness sensitivity, and otalgia were common symptoms. On average, 76 percent of subjects had predominantly sensorineural hearing loss at one or more frequencies. Twenty-four percent of subjects required amplification. Extended high frequencies showed evidence of acoustic trauma even when conventional frequencies fell within the normal range. Males had significantly poorer responses than females across frequencies. Auditory status of the group was significantly compromised and remained unchanged at 1 year postblast.
Development of the auditory system
Litovsky, Ruth
2015-01-01
Auditory development involves changes in the peripheral and central nervous system along the auditory pathways, and these occur both naturally and in response to stimulation. Human development occurs along a trajectory that can last decades, and is studied using behavioral psychophysics as well as physiologic measurements with neural imaging. The auditory system constructs a perceptual space that takes information from objects and groups, segregates sounds, and provides meaning and access to communication tools such as language. Auditory signals are processed in a series of analysis stages, from peripheral to central. Coding of information has been studied for features of sound, including frequency, intensity, loudness, and location, in quiet and in the presence of maskers. In the latter case, the ability of the auditory system to perform an analysis of the scene becomes highly relevant. While some basic abilities are well developed at birth, there is a clear, prolonged maturation of auditory development well into the teenage years. Maturation involves the auditory pathways; however, non-auditory changes (attention, memory, cognition) also play an important role in auditory development. The ability of the auditory system to adapt in response to novel stimuli is a key feature of development throughout the nervous system, known as neural plasticity. PMID:25726262
The hazard of exposure to impulse noise as a function of frequency, volume 2
NASA Astrophysics Data System (ADS)
Patterson, James H., Jr.; Carrier, Melvin, Jr.; Bordwell, Kevin; Lomba, Ilia M.; Gautier, Roger P.
1991-06-01
The energy spectrum of a noise is known to be an important variable in determining the effects of a traumatic exposure. However, existing criteria for exposure to impulse noise do not consider the frequency spectrum of an impulse as a variable in the evaluation of the hazards to the auditory system. This report presents the results of a study that was designed to determine the relative potential that impulsive energy concentrated at different frequencies has in causing auditory system trauma. One hundred eighteen (118) chinchillas, divided into 20 groups with 5 to 7 animals per group, were used in these experiments. Pre- and post-exposure hearing thresholds were measured at 10 test frequencies between 0.125 and 8 kHz on each animal using avoidance conditioning procedures. Quantitative histology (cochleograms) was used to determine the extent and pattern of the sensory cell damage. The noise exposure stimuli consisted of six different computer-generated narrow band tone bursts having center frequencies located at 0.260, 0.775, 1.025, 1.350, 2.450, and 3.550 kHz. Each narrow band exposure stimulus was presented at two to four different intensities. An analysis of the audiometric and histological data allowed a frequency weighting function to be derived. The weighting function clearly demonstrates that equivalent amounts of impulsive energy concentrated at different frequencies are not equally hazardous to auditory function.
Gender difference in the theta/alpha ratio during the induction of peaceful audiovisual modalities.
Yang, Chia-Yen; Lin, Ching-Po
2015-09-01
Gender differences in emotional perception have been found in numerous psychological and psychophysiological studies. The distinct characteristics of different sensory systems make it interesting to determine how their cooperation and competition contribute to emotional experiences. We have previously estimated the bias from the matched attributes of auditory and visual modalities and revealed specific frequency patterns of brain activity related to a peaceful mood. In that multimodality experiment, we focused on how inner-quiet information is processed in the human brain, and found evidence of auditory domination from the theta-band activity. However, a simple quantitative description of the relevant frequency bands is lacking, and no studies have assessed the effects of peacefulness on the emotional state. Therefore, the aim of this study was to use magnetoencephalography to determine whether, when, and where gender differences exist in the frequency interactions underpinning the perception of peacefulness. This study provides evidence of auditory and visual domination in perceptual bias during multimodality processing of peaceful consciousness. The results of power ratio analyses suggest that the values of the theta/alpha ratio are associated with the modality as well as with hemispheric asymmetries in the anterior-to-posterior direction, which shift from right to left as stimulation moves from auditory to visual in a peaceful mood. This suggests that the theta/alpha ratio might be useful for evaluating emotion. Moreover, the difference was most pronounced for auditory domination and visual sensitivity in the female group.
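The theta/alpha ratio used in this study is, computationally, a band-power quotient. A minimal sketch follows, assuming conventional 4-8 Hz theta and 8-12 Hz alpha bands and a Welch spectral estimate; the authors' actual MEG pipeline is not described in the abstract:

```python
import numpy as np
from scipy.signal import welch

def band_power(freqs, psd, lo, hi):
    """Sum the PSD over the [lo, hi) Hz band."""
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum()

def theta_alpha_ratio(x, fs):
    """Theta (4-8 Hz) over alpha (8-12 Hz) power for a single channel."""
    freqs, psd = welch(x, fs=fs, nperseg=2 * fs)
    return band_power(freqs, psd, 4, 8) / band_power(freqs, psd, 8, 12)

# Synthetic check: a signal dominated by a 6 Hz (theta) component.
fs = 250
t = np.arange(0, 20, 1 / fs)
x = np.sin(2 * np.pi * 6 * t) + 0.2 * np.sin(2 * np.pi * 10 * t)
ratio = theta_alpha_ratio(x, fs)
```

In a study like the one above, this quantity would be computed per sensor and condition, so that hemispheric and modality-related asymmetries can be compared.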
Neural Representation of Concurrent Vowels in Macaque Primary Auditory Cortex
Micheyl, Christophe; Steinschneider, Mitchell
2016-01-01
Successful speech perception in real-world environments requires that the auditory system segregate competing voices that overlap in frequency and time into separate streams. Vowels are major constituents of speech and are comprised of frequencies (harmonics) that are integer multiples of a common fundamental frequency (F0). The pitch and identity of a vowel are determined by its F0 and spectral envelope (formant structure), respectively. When two spectrally overlapping vowels differing in F0 are presented concurrently, they can be readily perceived as two separate “auditory objects” with pitches at their respective F0s. A difference in pitch between two simultaneous vowels provides a powerful cue for their segregation, which in turn, facilitates their individual identification. The neural mechanisms underlying the segregation of concurrent vowels based on pitch differences are poorly understood. Here, we examine neural population responses in macaque primary auditory cortex (A1) to single and double concurrent vowels (/a/ and /i/) that differ in F0 such that they are heard as two separate auditory objects with distinct pitches. We find that neural population responses in A1 can resolve, via a rate-place code, lower harmonics of both single and double concurrent vowels. Furthermore, we show that the formant structures, and hence the identities, of single vowels can be reliably recovered from the neural representation of double concurrent vowels. We conclude that A1 contains sufficient spectral information to enable concurrent vowel segregation and identification by downstream cortical areas. PMID:27294198
Jaeger, Manuela; Bleichner, Martin G; Bauer, Anna-Katharina R; Mirkovic, Bojana; Debener, Stefan
2018-02-27
The acoustic envelope of human speech correlates with the syllabic rate (4-8 Hz) and carries important information for intelligibility, which is typically compromised in multi-talker, noisy environments. In order to better understand the dynamics of selective auditory attention to low frequency modulated sound sources, we conducted a two-stream auditory steady-state response (ASSR) selective attention electroencephalogram (EEG) study. The two streams consisted of 4 and 7 Hz amplitude and frequency modulated sounds presented from the left and right side. One of the two streams had to be attended while the other had to be ignored. The attended stream always contained a target, allowing for the behavioral confirmation of the attention manipulation. EEG ASSR power analysis revealed a significant increase in 7 Hz power for the attend compared to the ignore conditions. There was no significant difference in 4 Hz power when the 4 Hz stream had to be attended compared to when it had to be ignored. This lack of 4 Hz attention modulation could be explained by a distracting effect of a third frequency at 3 Hz (beat frequency) perceivable when the 4 and 7 Hz streams are presented simultaneously. Taken together, our results show that responses to low-frequency modulations at the syllabic rate are shaped by selective spatial attention. Whether attention effects act as enhancement of the attended stream or suppression of the to-be-ignored stream may depend on how well auditory streams can be segregated.
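Reading out ASSR power at the tagging frequencies comes down to evaluating the spectrum at exactly 4 and 7 Hz (their 3 Hz difference is the beat frequency mentioned above). A sketch under simplifying assumptions: a single channel, a rectangular window, and an epoch length chosen so the tags fall on exact FFT bins. This is not the authors' analysis code:

```python
import numpy as np

def power_at(x, fs, f_tag):
    """Power of the DFT bin at f_tag Hz (epoch length must make f_tag an exact bin)."""
    n = len(x)
    spec = np.fft.rfft(x) / n
    k = int(round(f_tag * n / fs))   # bin index of the tagged frequency
    return np.abs(spec[k]) ** 2

# Two "streams" tagged at 4 and 7 Hz; the 7 Hz component is stronger here.
fs, dur = 500, 10
t = np.arange(0, dur, 1 / fs)
x = 0.5 * np.sin(2 * np.pi * 4 * t) + 1.0 * np.sin(2 * np.pi * 7 * t)
p4 = power_at(x, fs, 4)
p7 = power_at(x, fs, 7)
```

Comparing such bin powers between attend and ignore conditions is the essence of an ASSR power analysis.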
Hollins, Mark
2009-01-01
During haptic exploration of surfaces, complex mechanical oscillations—of surface displacement and air pressure—are generated, which are then transduced by receptors in the skin and in the inner ear. Tactile and auditory signals thus convey redundant information about texture, partially carried in the spectral content of these signals. It is no surprise, then, that the representation of temporal frequency is linked in the auditory and somatosensory systems. An emergent hypothesis is that there exists a supramodal representation of temporal frequency, and by extension texture. PMID:19721886
The Rhythm of Perception: Entrainment to Acoustic Rhythms Induces Subsequent Perceptual Oscillation.
Hickok, Gregory; Farahbod, Haleh; Saberi, Kourosh
2015-07-01
Acoustic rhythms are pervasive in speech, music, and environmental sounds. Recent evidence for neural codes representing periodic information suggests that they may be a neural basis for the ability to detect rhythm. Further, rhythmic information has been found to modulate auditory-system excitability, which provides a potential mechanism for parsing the acoustic stream. Here, we explored the effects of a rhythmic stimulus on subsequent auditory perception. We found that a low-frequency (3 Hz), amplitude-modulated signal induces a subsequent oscillation of the perceptual detectability of a brief nonperiodic acoustic stimulus (1-kHz tone); the frequency but not the phase of the perceptual oscillation matches the entrained stimulus-driven rhythmic oscillation. This provides evidence that rhythmic contexts have a direct influence on subsequent auditory perception of discrete acoustic events. Rhythm coding is likely a fundamental feature of auditory-system design that predates the development of explicit human enjoyment of rhythm in music or poetry.
Demodulation processes in auditory perception
NASA Astrophysics Data System (ADS)
Feth, Lawrence L.
1994-08-01
The long-range goal of this project is the understanding of human auditory processing of information conveyed by complex, time-varying signals such as speech, music, or important environmental sounds. Our work is guided by the assumption that human auditory communication is a 'modulation-demodulation' process. That is, we assume that sound sources produce a complex stream of sound pressure waves with information encoded as variations (modulations) of the signal amplitude and frequency. The listener's task is then one of demodulation. Much of past psychoacoustics work has been based on what we characterize as 'spectrum picture processing': complex sounds are Fourier analyzed to produce an amplitude-by-frequency 'picture', and the perception process is modeled as if the listener were analyzing the spectral picture. This approach leads to studies such as 'profile analysis' and the power-spectrum model of masking. Our approach leads us instead to investigate time-varying, complex sounds. We refer to them as dynamic signals, and we have developed auditory signal processing models to help guide our experimental work.
Sound texture perception via statistics of the auditory periphery: Evidence from sound synthesis
McDermott, Josh H.; Simoncelli, Eero P.
2014-01-01
Rainstorms, insect swarms, and galloping horses produce “sound textures” – the collective result of many similar acoustic events. Sound textures are distinguished by temporal homogeneity, suggesting they could be recognized with time-averaged statistics. To test this hypothesis, we processed real-world textures with an auditory model containing filters tuned for sound frequencies and their modulations, and measured statistics of the resulting decomposition. We then assessed the realism and recognizability of novel sounds synthesized to have matching statistics. Statistics of individual frequency channels, capturing spectral power and sparsity, generally failed to produce compelling synthetic textures. However, combining them with correlations between channels produced identifiable and natural-sounding textures. Synthesis quality declined if statistics were computed from biologically implausible auditory models. The results suggest that sound texture perception is mediated by relatively simple statistics of early auditory representations, presumably computed by downstream neural populations. The synthesis methodology offers a powerful tool for their further investigation. PMID:21903084
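At their simplest, the statistics in question are time averages of band envelopes plus their pairwise correlations. Below is a toy sketch in which a Butterworth filter bank and Hilbert envelopes stand in for the full auditory and modulation filter banks of the study; the function names and bands are illustrative choices, not the paper's model:

```python
import numpy as np
from scipy.signal import hilbert, butter, sosfilt

def band_envelopes(x, fs, bands):
    """Bandpass x into each (lo, hi) band and return the Hilbert envelopes."""
    envs = []
    for lo, hi in bands:
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        envs.append(np.abs(hilbert(sosfilt(sos, x))))
    return np.array(envs)

def texture_stats(envs):
    """Time-averaged marginal moments plus cross-band envelope correlations."""
    means = envs.mean(axis=1)
    variances = envs.var(axis=1)
    corr = np.corrcoef(envs)          # cross-channel correlation matrix
    return means, variances, corr

rng = np.random.default_rng(0)
fs = 8000
x = rng.standard_normal(fs * 2)       # 2 s of noise as a stand-in "texture"
envs = band_envelopes(x, fs, [(200, 400), (800, 1600)])
means, variances, corr = texture_stats(envs)
```

Synthesis, as described above, then amounts to iteratively adjusting a noise signal until its measured statistics match those of the target texture.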
Hay, Rachel A; Roach, Brian J; Srihari, Vinod H; Woods, Scott W; Ford, Judith M; Mathalon, Daniel H
2015-02-01
Neurophysiological abnormalities in auditory deviance processing, as reflected by the mismatch negativity (MMN), have been observed across the course of schizophrenia. Studies in early schizophrenia patients have typically shown varying degrees of MMN amplitude reduction for different deviant types, suggesting that different auditory deviants are uniquely processed and may be differentially affected by duration of illness. To explore this further, we examined the MMN response to 4 auditory deviants (duration, frequency, duration+frequency "double deviant", and intensity) in 24 schizophrenia-spectrum patients early in the illness (ESZ) and 21 healthy controls. ESZ showed significantly reduced MMN relative to healthy controls for all deviant types (p<0.05), with no significant interaction with deviant type. No correlations with clinical symptoms were present (all ps>0.05). These findings support the conclusion that neurophysiological mechanisms underlying processing of auditory deviants are compromised early in illness, and these deficiencies are not specific to the type of deviant presented.
Dynamic crossmodal links revealed by steady-state responses in auditory-visual divided attention.
de Jong, Ritske; Toffanin, Paolo; Harbers, Marten
2010-01-01
Frequency tagging has often been used to study intramodal attention but not intermodal attention. We used EEG and simultaneous frequency tagging of auditory and visual sources to study intermodal focused and divided attention in detection and discrimination performance. Divided-attention costs were smaller, but still significant, in detection than in discrimination. The auditory steady-state response (SSR) showed no effects of attention at frontocentral locations, but did so at occipital locations, where it was evident only when attention was divided between audition and vision. Similarly, the visual SSR at occipital locations was substantially enhanced when attention was divided across modalities. Both effects were equally present in detection and discrimination. We suggest that both effects reflect a common cause: an attention-dependent influence of auditory information processing on early cortical stages of visual information processing, mediated by enhanced effective connectivity between the two modalities under conditions of divided attention.
Speech Rhythms and Multiplexed Oscillatory Sensory Coding in the Human Brain
Gross, Joachim; Hoogenboom, Nienke; Thut, Gregor; Schyns, Philippe; Panzeri, Stefano; Belin, Pascal; Garrod, Simon
2013-01-01
Cortical oscillations are likely candidates for segmentation and coding of continuous speech. Here, we monitored continuous speech processing with magnetoencephalography (MEG) to unravel the principles of speech segmentation and coding. We demonstrate that speech entrains the phase of low-frequency (delta, theta) and the amplitude of high-frequency (gamma) oscillations in the auditory cortex. Phase entrainment is stronger in the right and amplitude entrainment is stronger in the left auditory cortex. Furthermore, edges in the speech envelope phase reset auditory cortex oscillations thereby enhancing their entrainment to speech. This mechanism adapts to the changing physical features of the speech envelope and enables efficient, stimulus-specific speech sampling. Finally, we show that within the auditory cortex, coupling between delta, theta, and gamma oscillations increases following speech edges. Importantly, all couplings (i.e., brain-speech and also within the cortex) attenuate for backward-presented speech, suggesting top-down control. We conclude that segmentation and coding of speech relies on a nested hierarchy of entrained cortical oscillations. PMID:24391472
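Phase entrainment of the kind reported here is commonly quantified with a phase-locking value (PLV) between the speech envelope and the band-limited cortical signal. An illustrative sketch using Hilbert phases and the standard PLV formula; the function name and synthetic signals are ours, not the authors':

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(a, b):
    """PLV in [0, 1]: magnitude of the mean phase-difference vector."""
    pa = np.angle(hilbert(a))
    pb = np.angle(hilbert(b))
    return np.abs(np.mean(np.exp(1j * (pa - pb))))

# A theta-rate "speech envelope" and a phase-lagged "cortical" band signal.
fs = 200
t = np.arange(0, 10, 1 / fs)
envelope = np.sin(2 * np.pi * 5 * t)
entrained = np.sin(2 * np.pi * 5 * t + 0.3)   # constant phase lag -> high PLV
rng = np.random.default_rng(1)
unrelated = rng.standard_normal(len(t))        # no consistent phase relation
```

In practice the MEG signal would first be filtered into the delta or theta band before extracting its phase, and the backward-speech control corresponds to recomputing the same measure with a time-reversed envelope.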
The Structural Connectome of the Human Central Homeostatic Network.
Edlow, Brian L; McNab, Jennifer A; Witzel, Thomas; Kinney, Hannah C
2016-04-01
Homeostatic adaptations to stress are regulated by interactions between the brainstem and regions of the forebrain, including limbic sites related to respiratory, autonomic, affective, and cognitive processing. Neuroanatomic connections between these homeostatic regions, however, have not been thoroughly identified in the human brain. In this study, we perform diffusion spectrum imaging tractography using the MGH-USC Connectome MRI scanner to visualize structural connections in the human brain linking autonomic and cardiorespiratory nuclei in the midbrain, pons, and medulla oblongata with forebrain sites critical to homeostatic control. Probabilistic tractography analyses in six healthy adults revealed connections between six brainstem nuclei and seven forebrain regions, several over long distances between the caudal medulla and cerebral cortex. The strongest evidence for brainstem-homeostatic forebrain connectivity in this study was between the brainstem midline raphe and the medial temporal lobe. The subiculum and amygdala were the sampled forebrain nodes with the most extensive brainstem connections. Within the human brainstem-homeostatic forebrain connectome, we observed that a lateral forebrain bundle, whose connectivity is distinct from that of rodents and nonhuman primates, is the primary conduit for connections between the brainstem and medial temporal lobe. This study supports the concept that interconnected brainstem and forebrain nodes form an integrated central homeostatic network (CHN) in the human brain. Our findings provide an initial foundation for elucidating the neuroanatomic basis of homeostasis in the normal human brain, as well as for mapping CHN disconnections in patients with disorders of homeostasis, including sudden and unexpected death, and epilepsy.
Cykowski, Matthew D; Takei, Hidehiro; Van Eldik, Linda J; Schmitt, Frederick A; Jicha, Gregory A; Powell, Suzanne Z; Nelson, Peter T
2016-05-01
Transactivating responsive sequence (TAR) DNA-binding protein 43-kDa (TDP-43) pathology has been described in various brain diseases, but the full anatomical distribution and clinical and biological implications of that pathology are incompletely characterized. Here, we describe TDP-43 neuropathology in the basal forebrain, hypothalamus, and adjacent nuclei in 98 individuals (mean age, 86 years; median final mini-mental state examination score, 27). On examination blinded to clinical and pathologic diagnoses, we identified TDP-43 pathology that most frequently involved the ventromedial basal forebrain in 19 individuals (19.4%). As expected, many of these brains had comorbid pathologies including those of Alzheimer disease (AD), Lewy body disease (LBD), and/or hippocampal sclerosis of aging (HS-Aging). The basal forebrain TDP-43 pathology was strongly associated with comorbid HS-Aging (odds ratio = 6.8, p = 0.001), whereas there was no significant association between basal forebrain TDP-43 pathology and either AD or LBD neuropathology. In this sample, there were some cases with apparent preclinical TDP-43 pathology in the basal forebrain that may indicate that this is an early affected area in HS-Aging. We conclude that TDP-43 pathology in the basal forebrain is strongly associated with HS-Aging. These results raise questions about a specific pathogenetic relationship between basal forebrain TDP-43 and non-HS-Aging comorbid diseases (AD and LBD).
Mochida, Takemi; Gomi, Hiroaki; Kashino, Makio
2010-11-08
There has been plentiful evidence of kinesthetically induced rapid compensation for unanticipated perturbation in speech articulatory movements. However, the role of auditory information in stabilizing articulation has been little studied except for the control of voice fundamental frequency, voice amplitude and vowel formant frequencies. Although the influence of auditory information on the articulatory control process is evident in unintended speech errors caused by delayed auditory feedback, the direct and immediate effect of auditory alteration on the movements of articulators has not been clarified. This work examined whether temporal changes in the auditory feedback of bilabial plosives immediately affect the subsequent lip movement. We conducted experiments with an auditory feedback alteration system that enabled us to replace or block speech sounds in real time. Participants were asked to produce the syllable /pa/ repeatedly at a constant rate. During the repetition, normal auditory feedback was interrupted, and one of three pre-recorded syllables /pa/, /Φa/, or /pi/, spoken by the same participant, was presented once at a different timing from the anticipated production onset, while no feedback was presented for subsequent repetitions. Comparisons of the labial distance trajectories under altered and normal feedback conditions indicated that the movement quickened during the short period immediately after the alteration onset, when /pa/ was presented 50 ms before the expected timing. Such change was not significant under the other feedback conditions we tested. The earlier articulation rapidly induced by the premature auditory input suggests that a compensatory mechanism helps to maintain a constant speech rate by detecting errors between the internally predicted and actually provided auditory information associated with self-movement.
The timing- and context-dependent effects of feedback alteration suggest that the sensory error detection works in a temporally asymmetric window where acoustic features of the syllable to be produced may be coded.
Vocal development and auditory perception in CBA/CaJ mice
NASA Astrophysics Data System (ADS)
Radziwon, Kelly E.
Mice are useful laboratory subjects because of their small size, their modest cost, and the fact that researchers have created many different strains to study a variety of disorders. In particular, researchers have found nearly 100 naturally occurring mouse mutations with hearing impairments. For these reasons, mice have become an important model for studies of human deafness. Although much is known about the genetic makeup and physiology of the laboratory mouse, far less is known about mouse auditory behavior. To fully understand the effects of genetic mutations on hearing, it is necessary to determine the hearing abilities of these mice. Two experiments here examined various aspects of mouse auditory perception using CBA/CaJ mice, a commonly used mouse strain. The frequency difference limens experiment tested the mouse's ability to discriminate one tone from another based solely on the frequency of the tone. The mice had thresholds similar to those of wild mice and gerbils but needed a larger change in frequency than humans and cats. The second psychoacoustic experiment sought to determine which cue, frequency or duration, was more salient when the mice had to identify various tones. In this identification task, the mice overwhelmingly classified the tones based on frequency instead of duration, suggesting that mice use frequency when differentiating one mouse vocalization from another. The other two experiments were more naturalistic and involved both auditory perception and mouse vocal production. Interest in mouse vocalizations is growing because of the potential for mice to become a model of human speech disorders. These experiments traced mouse vocal development from infant to adult, and they tested the mouse's preference for various vocalizations. This was the first known study to analyze the vocalizations of individual mice across development.
Results showed large variation in calling rates among the three cages of adult mice, but were highly consistent across all infant vocalizations. Although the preference experiment did not reveal significant differences between various mouse vocalizations, suggestions are given for future attempts to identify mouse preferences for auditory stimuli.
A prediction of templates in the auditory cortex system
NASA Astrophysics Data System (ADS)
Ghanbeigi, Kimia
In this study variation of human auditory evoked mismatch field amplitudes in response to complex tones as a function of the removal in single partials in the onset period was investigated. It was determined: 1-A single frequency elimination in a sound stimulus plays a significant role in human brain sound recognition. 2-By comparing the mismatches of the brain response due to a single frequency elimination in the "Starting Transient" and "Sustain Part" of the sound stimulus, it is found that the brain is more sensitive to frequency elimination in the Starting Transient. This study involves 4 healthy subjects with normal hearing. Neural activity was recorded with stimulus whole-head MEG. Verification of spatial location in the auditory cortex was determined by comparing with MRI images. In the first set of stimuli, repetitive ('standard') tones with five selected onset frequencies were randomly embedded in the string of rare ('deviant') tones with randomly varying inter stimulus intervals. In the deviant tones one of the frequency components was omitted relative to the deviant tones during the onset period. The frequency of the test partial of the complex tone was intentionally selected to preclude its reinsertion by generation of harmonics or combination tones due to either the nonlinearity of the ear, the electronic equipment or the brain processing. In the second set of stimuli, time structured as above, repetitive ('standard') tones with five selected sustained frequency components were embedded in the string of rare '(deviant') tones for which one of these selected frequencies was omitted in the sustained tone. In both measurements, the carefully frequency selection precluded their reinsertion by generation of harmonics or combination tones due to the nonlinearity of the ear, the electronic equipment and brain processing. The same considerations for selecting the test frequency partial were applied. Results. 
By comparing the MMN of the two data sets, the relative contributions to sound recognition of the omitted partial frequency components in the onset and sustained regions were determined. Conclusion. The presence of significant mismatch negativity, arising from neural activity in auditory cortex, shows that the brain recognizes the elimination of a single, carefully chosen anharmonic frequency, and this mismatch is more pronounced when the elimination occurs in the onset period.
Why is auditory frequency weighting so important in regulation of underwater noise?
Tougaard, Jakob; Dähne, Michael
2017-10-01
A key question in regulating noise from pile driving, air guns, and sonar is how to account for the hearing abilities of different animals by means of auditory frequency weighting. Recordings of pile driving sounds, made both in the presence and in the absence of a bubble curtain, were evaluated against recent thresholds for temporary threshold shift (TTS) in harbor porpoises using four different weighting functions. The assessed effectiveness, expressed as time until TTS, depended strongly on the choice of weighting function: it was 2 orders of magnitude larger for an audiogram-weighted TTS criterion than for an unweighted one, highlighting the importance of selecting the right frequency weighting.
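Auditory frequency weighting of this kind is conventionally applied band by band before energy summation. The sketch below illustrates the arithmetic only; the band levels and weighting values are made up for illustration and are not taken from the study:

```python
import math

def weighted_level(band_levels_db, weights_db):
    """Apply a per-band frequency weighting (in dB) and sum energy.

    Returns the overall weighted level in dB: 10*log10(sum of 10^((L+W)/10)).
    """
    total = sum(10 ** ((lvl + w) / 10) for lvl, w in zip(band_levels_db, weights_db))
    return 10 * math.log10(total)

# Illustrative band levels (dB) in four frequency bands, and a weighting
# that attenuates the low-frequency bands, as a porpoise-style weighting
# would for a high-frequency specialist.
levels = [100, 100, 100, 100]
unweighted = weighted_level(levels, [0, 0, 0, 0])
weighted = weighted_level(levels, [-30, -20, -10, 0])
```

Because pile driving energy is concentrated at low frequencies, a weighting like this can shift the assessed exposure, and hence the predicted time until TTS, by orders of magnitude relative to the unweighted level.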
What the cerveau isolé preparation tells us nowadays about sleep-wake mechanisms?
Gottesmann, C
1988-01-01
The intercollicular transected preparation opened a rich field for investigation of sleep-wake mechanisms. Initial results showed that ascending brain stem influences are essential for maintaining an activated cortex. It was subsequently shown that the forebrain also exerts activating influences, since EEG desynchronization of the cortex reappears in the chronic cerveau isolé preparation, and continuous or almost continuous theta rhythm can occur in the acute cerveau isolé preparation. A brief "intermediate stage" of sleep occurs during natural sleep just prior to and after paradoxical sleep. It is characterized by cortical spindle bursts and hippocampal low-frequency theta activity (two patterns of the acute cerveau isolé preparation) and is accompanied by a very low thalamic transmission level, suggesting a cerveau isolé-like state. The chronic cerveau isolé preparation also demonstrates that the executive processes of paradoxical sleep are located in the lower brain stem, while the occurrence of this sleep stage seems to be modulated by forebrain structures.
Soper, Colin; Wicker, Evan; Kulick, Catherine V.; N’Gouemo, Prosper; Forcelli, Patrick A.
2016-01-01
Because sites of seizure origin may be unknown or multifocal, identifying targets from which activation can suppress seizures originating in diverse networks is essential. We evaluated the ability of optogenetic activation of the deep/intermediate layers of the superior colliculus (DLSC) to fill this role. Optogenetic activation of DLSC suppressed behavioral and electrographic seizures in the pentylenetetrazole (forebrain+brainstem seizures) and Area Tempestas (forebrain/complex partial seizures) models; this effect was specific to activation of DLSC, and not neighboring structures. DLSC activation likewise attenuated seizures evoked by gamma butyrolactone (thalamocortical/absence seizures), or acoustic stimulation of genetically epilepsy-prone rats (brainstem seizures). Anticonvulsant effects were seen with stimulation frequencies as low as 5 Hz. Unlike previous applications of optogenetics for the control of seizures, activation of DLSC exerted broad-spectrum anticonvulsant actions, attenuating seizures originating in diverse and distal brain networks. These data indicate that DLSC is a promising target for optogenetic control of epilepsy. PMID:26721319
Noise-induced tinnitus: auditory evoked potential in symptomatic and asymptomatic patients.
Santos-Filha, Valdete Alves Valentins dos; Samelli, Alessandra Giannella; Matas, Carla Gentile
2014-07-01
We evaluated the central auditory pathways in workers with noise-induced tinnitus with normal hearing thresholds, compared the auditory brainstem response results in groups with and without tinnitus and correlated the tinnitus location to the auditory brainstem response findings in individuals with a history of occupational noise exposure. Sixty individuals participated in the study and the following procedures were performed: anamnesis, immittance measures, pure-tone air conduction thresholds at all frequencies between 0.25-8 kHz and auditory brainstem response. The mean auditory brainstem response latencies were lower in the Control group than in the Tinnitus group, but no significant differences between the groups were observed. Qualitative analysis showed more alterations in the lower brainstem in the Tinnitus group. The strongest relationship between tinnitus location and auditory brainstem response alterations was detected in individuals with bilateral tinnitus and bilateral auditory brainstem response alterations compared with patients with unilateral alterations. Our findings suggest the occurrence of a possible dysfunction in the central auditory nervous system (brainstem) in individuals with noise-induced tinnitus and a normal hearing threshold.
Fergus, Daniel J; Feng, Ni Y; Bass, Andrew H
2015-10-14
Successful animal communication depends on a receiver's ability to detect a sender's signal. Exemplars of adaptive sender-receiver coupling include acoustic communication, often important in the context of seasonal reproduction. During the reproductive summer season, both male and female midshipman fish (Porichthys notatus) exhibit similar increases in the steroid-dependent frequency sensitivity of the saccule, the main auditory division of the inner ear. This form of auditory plasticity enhances detection of the higher frequency components of the multi-harmonic, long-duration advertisement calls produced repetitively by males during summer nights of peak vocal and spawning activity. The molecular basis of this seasonal auditory plasticity has not been fully resolved. Here, we utilize an unbiased transcriptomic RNA sequencing approach to identify differentially expressed transcripts within the saccule's hair cell epithelium of reproductive summer and non-reproductive winter fish. We assembled 74,027 unique transcripts from our saccular epithelial sequence reads. Of these, 6.4 % and 3.0 % were upregulated in the reproductive and non-reproductive saccular epithelium, respectively. Gene ontology (GO) term enrichment analyses of the differentially expressed transcripts showed that the reproductive saccular epithelium was transcriptionally, translationally, and metabolically more active than the non-reproductive epithelium. Furthermore, the expression of a specific suite of candidate genes, including ion channels and components of steroid-signaling pathways, was upregulated in the reproductive compared to the non-reproductive saccular epithelium. We found reported auditory functions for 14 candidate genes upregulated in the reproductive midshipman saccular epithelium, 8 of which are enriched in mouse hair cells, validating their hair cell-specific functions across vertebrates. 
We identified a suite of differentially expressed genes belonging to neurotransmission and steroid-signaling pathways, consistent with previous work showing the importance of these characters in regulating hair cell auditory sensitivity in midshipman fish and, more broadly, vertebrates. The results were also consistent with auditory hair cells being generally more physiologically active when animals are in a reproductive state, a time of enhanced sensory-motor coupling between the auditory periphery and the upper harmonics of vocalizations. Together with several new candidate genes, our results identify discrete patterns of gene expression linked to frequency- and steroid-dependent plasticity of hair cell auditory sensitivity.
Spiousas, Ignacio; Etchemendy, Pablo E.; Eguia, Manuel C.; Calcagno, Esteban R.; Abregú, Ezequiel; Vergara, Ramiro O.
2017-01-01
Previous studies on the effect of spectral content on auditory distance perception (ADP) focused on the physically measurable cues occurring either in the near field (low-pass filtering due to head diffraction) or when the sound travels distances >15 m (high-frequency energy losses due to air absorption). Here, we study how the spectrum of a sound arriving from a source located in a reverberant room at intermediate distances (1–6 m) influences the perception of the distance to the source. First, we conducted an ADP experiment using pure tones (the simplest possible spectrum) of frequencies 0.5, 1, 2, and 4 kHz. Then, we performed a second ADP experiment with stimuli consisting of continuous broadband and bandpass-filtered (with center frequencies of 0.5, 1.5, and 4 kHz and bandwidths of 1/12, 1/3, and 1.5 octave) pink-noise clips. Our results showed an effect of the stimulus frequency on the perceived distance both for pure tones and filtered noise bands: ADP was less accurate for stimuli containing energy only in the low-frequency range. Analysis of the frequency response of the room showed that the low accuracy observed for low-frequency stimuli can be explained by the presence of sparse modal resonances in the low-frequency region of the spectrum, which induced a non-monotonic relationship between binaural intensity and source distance. The results obtained in the second experiment suggest that ADP can also be affected by stimulus bandwidth but in a less straightforward way (i.e., depending on the center frequency, increasing stimulus bandwidth could have different effects). Finally, the analysis of the acoustical cues suggests that listeners judged source distance using mainly changes in the overall intensity of the auditory stimulus with distance rather than the direct-to-reverberant energy ratio, even for low-frequency noise bands (which typically induce a high amount of reverberation). 
The results obtained in this study show that, depending on the spectrum of the auditory stimulus, reverberation can degrade ADP rather than improve it. PMID:28690556
Ponnath, Abhilash; Hoke, Kim L; Farris, Hamilton E
2013-04-01
Neural adaptation, a reduction in the response to a maintained stimulus, is an important mechanism for detecting stimulus change. Contributing to change detection is the fact that adaptation is often stimulus specific: adaptation to a particular stimulus reduces excitability to a specific subset of stimuli, while the ability to respond to other stimuli is unaffected. Phasic cells (e.g., cells responding to stimulus onset) are good candidates for detecting the most rapid changes in natural auditory scenes, as they exhibit fast and complete adaptation to an initial stimulus presentation. We made recordings of single phasic auditory units in the frog midbrain to determine if adaptation was specific to stimulus frequency and ear of input. In response to an instantaneous frequency step in a tone, 28% of phasic cells exhibited frequency specific adaptation based on a relative frequency change (delta-f=±16%). Frequency specific adaptation was not limited to frequency steps, however, as adaptation was also overcome during continuous frequency modulated stimuli and in response to spectral transients interrupting tones. The results suggest that adaptation is separated for peripheral (e.g., frequency) channels. This was tested directly using dichotic stimuli. In 45% of binaural phasic units, adaptation was ear specific: adaptation to stimulation of one ear did not affect responses to stimulation of the other ear. Thus, adaptation exhibited specificity for stimulus frequency and lateralization at the level of the midbrain. This mechanism could be employed to detect rapid stimulus change within and between sound sources in complex acoustic environments.
Nuttall, Helen E.; Moore, David R.; Barry, Johanna G.; Krumbholz, Katrin
2015-01-01
The speech-evoked auditory brain stem response (speech ABR) is widely considered to provide an index of the quality of neural temporal encoding in the central auditory pathway. The aim of the present study was to evaluate the extent to which the speech ABR is shaped by spectral processing in the cochlea. High-pass noise masking was used to record speech ABRs from delimited octave-wide frequency bands between 0.5 and 8 kHz in normal-hearing young adults. The latency of the frequency-delimited responses decreased from the lowest to the highest frequency band by up to 3.6 ms. The observed frequency-latency function was compatible with model predictions based on wave V of the click ABR. The frequency-delimited speech ABR amplitude was largest in the 2- to 4-kHz frequency band and decreased toward both higher and lower frequency bands despite the predominance of low-frequency energy in the speech stimulus. We argue that the frequency dependence of speech ABR latency and amplitude results from the decrease in cochlear filter width with decreasing frequency. The results suggest that the amplitude and latency of the speech ABR may reflect interindividual differences in cochlear, as well as central, processing. The high-pass noise-masking technique provides a useful tool for differentiating between peripheral and central effects on the speech ABR. It can be used for further elucidating the neural basis of the perceptual speech deficits that have been associated with individual differences in speech ABR characteristics. PMID:25787954
Modeling complex tone perception: grouping harmonics with combination-sensitive neurons.
Medvedev, Andrei V; Chiao, Faye; Kanwal, Jagmeet S
2002-06-01
Perception of complex communication sounds is a major function of the auditory system. To create a coherent percept of these sounds, the auditory system may instantaneously group or bind multiple harmonics within complex sounds. This perception strategy simplifies further processing of complex sounds and facilitates their meaningful integration with other sensory inputs. Based on experimental data and a realistic model, we propose that associative learning of combinations of harmonic frequencies and nonlinear facilitation of responses to those combinations, also referred to as "combination-sensitivity," are important for spectral grouping. For our model, we simulated combination sensitivity using Hebbian and associative types of synaptic plasticity in auditory neurons. We also provided a parallel tonotopic input that converges and diverges within the network. Neurons in higher-order layers of the network exhibited an emergent property of multifrequency tuning that is consistent with experimental findings. Furthermore, this network had the capacity to "recognize" the pitch or fundamental frequency of a harmonic tone complex even when the fundamental frequency itself was missing.
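The "missing fundamental" computation that the network performs can be illustrated outside the model: the fundamental is the greatest common divisor of the harmonic frequencies, so it is recoverable even when it is absent from the spectrum. A minimal sketch with illustrative frequencies (this is not the authors' network, just the arithmetic it recovers):

```python
from functools import reduce
from math import gcd

def estimated_f0(harmonics_hz):
    """Estimate the fundamental as the GCD of integer harmonic frequencies.

    Works even when the fundamental itself is absent from the list.
    """
    return reduce(gcd, harmonics_hz)

# Harmonics 4-6 of a 200 Hz fundamental, with 200 Hz itself missing:
f0 = estimated_f0([800, 1000, 1200])  # -> 200
```

Real pitch extraction must tolerate mistuning and noise, which is where learned combination-sensitive facilitation, rather than exact arithmetic, becomes the plausible neural mechanism.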
Strength of German accent under altered auditory feedback
HOWELL, PETER; DWORZYNSKI, KATHARINA
2007-01-01
Borden’s (1979, 1980) hypothesis that speakers with vulnerable speech systems rely more heavily on feedback monitoring than do speakers with less vulnerable systems was investigated. The second language (L2) of a speaker is vulnerable, in comparison with the native language, so alteration to feedback should have a detrimental effect on it, according to this hypothesis. Here, we specifically examined whether altered auditory feedback has an effect on accent strength when speakers speak L2. There were three stages in the experiment. First, 6 German speakers who were fluent in English (their L2) were recorded under six conditions—normal listening, amplified voice level, voice shifted in frequency, delayed auditory feedback, and slowed and accelerated speech rate conditions. Second, judges were trained to rate accent strength. Training was assessed by whether it was successful in separating German speakers speaking English from native English speakers, also speaking English. In the final stage, the judges ranked recordings of each speaker from the first stage as to increasing strength of German accent. The results show that accents were more pronounced under frequency-shifted and delayed auditory feedback conditions than under normal or amplified feedback conditions. Control tests were done to ensure that listeners were judging accent, rather than fluency changes caused by altered auditory feedback. The findings are discussed in terms of Borden’s hypothesis and other accounts about why altered auditory feedback disrupts speech control. PMID:11414137
Auditory Temporal Resolution in Individuals with Diabetes Mellitus Type 2.
Mishra, Rajkishor; Sanju, Himanshu Kumar; Kumar, Prawin
2016-10-01
Introduction "Diabetes mellitus is a group of metabolic disorders characterized by elevated blood sugar and abnormalities in insulin secretion and action" (American Diabetes Association). Previous literature has reported a connection between diabetes mellitus and hearing impairment, but there is a dearth of literature on auditory temporal resolution in individuals with diabetes mellitus type 2. Objective The main objective of the present study was to assess auditory temporal resolution through the Gap Detection Threshold (GDT) test in individuals with diabetes mellitus type 2 with high-frequency hearing loss. Methods Fifteen subjects with diabetes mellitus type 2 with high-frequency hearing loss, aged 30 to 40 years, participated as the experimental group; fifteen age-matched non-diabetic individuals with normal hearing served as the control group. We administered the GDT test to all participants to assess their temporal resolution ability. Result An independent t-test comparing the groups showed that the diabetic (experimental) group performed significantly more poorly than the non-diabetic (control) group. Conclusion It is possible to conclude that widening of auditory filters and changes in the central auditory nervous system contributed to the poorer performance on the temporal resolution task (GDT) in individuals with diabetes mellitus type 2. The findings of the present study reveal the deteriorating effect of diabetes mellitus type 2 at the central auditory processing level.
Cortico-Cortical Connectivity Within Ferret Auditory Cortex.
Bizley, Jennifer K; Bajo, Victoria M; Nodal, Fernando R; King, Andrew J
2015-10-15
Despite numerous studies of auditory cortical processing in the ferret (Mustela putorius), very little is known about the connections between the different regions of the auditory cortex that have been characterized cytoarchitectonically and physiologically. We examined the distribution of retrograde and anterograde labeling after injecting tracers into one or more regions of ferret auditory cortex. Injections of different tracers at frequency-matched locations in the core areas, the primary auditory cortex (A1) and anterior auditory field (AAF), of the same animal revealed the presence of reciprocal connections with overlapping projections to and from discrete regions within the posterior pseudosylvian and suprasylvian fields (PPF and PSF), suggesting that these connections are frequency specific. In contrast, projections from the primary areas to the anterior dorsal field (ADF) on the anterior ectosylvian gyrus were scattered and non-overlapping, consistent with the non-tonotopic organization of this field. The relative strength of the projections originating in each of the primary fields differed, with A1 predominantly targeting the posterior bank fields PPF and PSF, which in turn project to the ventral posterior field, whereas AAF projects more heavily to the ADF, which then projects to the anteroventral field and the pseudosylvian sulcal cortex. These findings suggest that parallel anterior and posterior processing networks may exist, although the connections between different areas often overlap and interactions were present at all levels. © 2015 Wiley Periodicals, Inc.
Comparison of auditory stream segregation in sighted and early blind individuals.
Boroujeni, Fatemeh Moghadasi; Heidari, Fatemeh; Rouzbahani, Masoumeh; Kamali, Mohammad
2017-01-18
An important characteristic of the auditory system is the capacity to analyze complex sounds and make decisions about the sources of their constituent parts. Blind individuals compensate for the lack of visual information by increased input from other sensory modalities, including increased auditory information. The purpose of the current study was to compare the fission boundary (FB) threshold of sighted and early blind individuals along the spectral dimension using a psychoacoustic auditory stream segregation (ASS) test. This study was conducted on 16 sighted and 16 early blind adults. The stimuli were pure tones A and B presented sequentially in a triplet ABA-ABA pattern at an intensity of 40 dB SL. The A tone frequency took base values of 500, 1000, and 2000 Hz, and the B tone was presented at frequencies 4-100% above the A tone frequency. Blind individuals had significantly lower FB thresholds than sighted individuals. FB was independent of the frequency of tone A when expressed as the difference in the number of equivalent rectangular bandwidths (ERBs). Early blindness may enhance perceptual separation of acoustic stimuli, supporting accurate representations of the world. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
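Expressing the fission boundary "in ERBs" converts each tone frequency to a position on the ERB-number scale and takes the difference. The conventional formula for that scale is the Glasberg and Moore expression sketched below; whether this study used exactly these constants is an assumption on our part:

```python
import math

def erb_number(freq_hz):
    """ERB-number (Cam) of a frequency in Hz, per the standard
    Glasberg & Moore formula: 21.4 * log10(4.37 * f_kHz + 1)."""
    return 21.4 * math.log10(4.37 * freq_hz / 1000 + 1)

def delta_erb(f_a, f_b):
    """Separation of two tones expressed in ERB units."""
    return erb_number(f_b) - erb_number(f_a)

# A 10% frequency separation spans a comparable ERB distance at
# different base frequencies, which is why FB thresholds can be
# compared across the 500, 1000, and 2000 Hz conditions on this scale.
sep_500 = delta_erb(500, 550)
sep_2000 = delta_erb(2000, 2200)
```

This is what makes the reported frequency-independence of FB meaningful: a threshold that looks different in Hz can be constant in ERB units.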
Impaired movement timing in neurological disorders: rehabilitation and treatment strategies.
Hove, Michael J; Keller, Peter E
2015-03-01
Timing abnormalities have been reported in many neurological disorders, including Parkinson's disease (PD). In PD, motor-timing impairments are especially debilitating in gait. Despite impaired audiomotor synchronization, PD patients' gait improves when they walk with an auditory metronome or with music. Building on that research, we make recommendations for optimizing sensory cues to improve the efficacy of rhythmic cuing in gait rehabilitation. Adaptive rhythmic metronomes (that synchronize with the patient's walking) might be especially effective. In a recent study we showed that adaptive metronomes synchronized consistently with PD patients' footsteps without requiring attention; this improved stability and reinstated healthy gait dynamics. Other strategies could help optimize sensory cues for gait rehabilitation. Groove music strongly engages the motor system and induces movement; bass-frequency tones are associated with movement and provide strong timing cues. Thus, groove and bass-frequency pulses could deliver potent rhythmic cues. These strategies capitalize on the close neural connections between auditory and motor networks; and auditory cues are typically preferred. However, moving visual cues greatly improve visuomotor synchronization and could warrant examination in gait rehabilitation. Together, a treatment approach that employs groove, auditory, bass-frequency, and adaptive (GABA) cues could help optimize rhythmic sensory cues for treating motor and timing deficits. © 2014 New York Academy of Sciences.
Li, Jianwen; Li, Yan; Zhang, Ming; Ma, Weifang; Ma, Xuezong
2014-01-01
The current use of hearing aids and artificial cochleas for deaf-mute individuals depends on their auditory nerve. Skin-hearing technology, a patented system developed by our group, uses a cutaneous sensory nerve to substitute for the auditory nerve to help deaf-mutes to hear sound. This paper introduces a new solution, multi-channel-array skin-hearing technology, to solve the problem of speech discrimination. Based on the filtering principle of hair cells, external voice signals at different frequencies are converted to current signals at corresponding frequencies using electronic multi-channel bandpass filtering technology. Different positions on the skin can be stimulated by the electrode array, allowing the perception and discrimination of external speech signals to be determined by the skin response to the current signals. Through voice frequency analysis, the frequency range of the band-pass filter can also be determined. These findings demonstrate that the sensory nerves in the skin can help to transfer the voice signal and to distinguish the speech signal, suggesting that the skin sensory nerves are good candidates for the replacement of the auditory nerve in addressing deaf-mutes’ hearing problems. Scientific hearing experiments can be more safely performed on the skin. Compared with the artificial cochlea, multi-channel-array skin-hearing aids have lower operation risk in use, are cheaper and are more easily popularized. PMID:25317171
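The channelization step described above, splitting the voice signal into frequency bands that each drive one skin electrode, can be sketched with a small bank of second-order bandpass filters. The biquad form, channel center frequencies, and Q below are illustrative choices, not parameters of the patented system:

```python
import math

def biquad_bandpass(samples, fc, fs, q=4.0):
    """Filter a signal with a constant-peak-gain bandpass biquad
    (RBJ cookbook coefficients)."""
    w0 = 2 * math.pi * fc / fs
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = alpha, 0.0, -alpha
    a0, a1, a2 = 1 + alpha, -2 * math.cos(w0), 1 - alpha
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = (b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

def channel_energies(samples, centers_hz, fs):
    """Route one signal through the bank; return per-channel energy,
    i.e., how strongly each electrode would be driven."""
    return [sum(y * y for y in biquad_bandpass(samples, fc, fs))
            for fc in centers_hz]

# A 500 Hz tone should land mostly in the 500 Hz channel.
fs = 16000
tone = [math.sin(2 * math.pi * 500 * n / fs) for n in range(2000)]
energies = channel_energies(tone, [250, 500, 1000, 2000], fs)
```

Each channel's energy would then be converted to a current amplitude at the corresponding skin position, mimicking the place coding that hair cells provide along the cochlea.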
Schenk, Barbara S; Baumgartner, Wolf Dieter; Hamzavi, Jafar Sasan
2003-12-01
The most obvious and best documented changes in the speech of postlingually deafened speakers are in rate, fundamental frequency, and volume (energy). These changes are due to the lack of auditory feedback. But auditory feedback affects not only the suprasegmental parameters of speech. The aim of this study was to determine the change at the segmental level of speech in terms of vowel formants. Twenty-three postlingually deafened and 18 normally hearing speakers were recorded reading a German text. The frequencies of the first and second formants and the vowel spaces of selected vowels in word-in-context condition were compared. All first formant frequencies (F1) of the postlingually deafened speakers were significantly different from those of the normally hearing speakers: F1 was higher for the vowels /e/ (418 ± 61 Hz compared with 359 ± 52 Hz, P = 0.006) and /o/ (459 ± 58 Hz compared with 390 ± 45 Hz, P = 0.0003) and lower for /a/ (765 ± 115 Hz compared with 851 ± 146 Hz, P = 0.038). The second formant frequency (F2) showed a significant difference only for the vowel /e/ (2016 ± 347 Hz compared with 2279 ± 250 Hz, P = 0.012). The postlingually deafened speakers were divided into two subgroups according to duration of deafness (shorter/longer than 10 years). There was no significant difference in formant changes between the two groups. Our report demonstrates an effect of auditory feedback on segmental features of speech in postlingually deafened people as well.
Henry, Kenneth S.; Kale, Sushrut; Scheidt, Ryan E.; Heinz, Michael G.
2011-01-01
Non-invasive auditory brainstem responses (ABRs) are commonly used to assess cochlear pathology in both clinical and research environments. In the current study, we evaluated the relationship between ABR characteristics and more direct measures of cochlear function. We recorded ABRs and auditory nerve (AN) single-unit responses in seven chinchillas with noise induced hearing loss. ABRs were recorded for 1–8 kHz tone burst stimuli both before and several weeks after four hours of exposure to a 115 dB SPL, 50 Hz band of noise with a center frequency of 2 kHz. Shifts in ABR characteristics (threshold, wave I amplitude, and wave I latency) following hearing loss were compared to AN-fiber tuning curve properties (threshold and frequency selectivity) in the same animals. As expected, noise exposure generally resulted in an increase in ABR threshold and decrease in wave I amplitude at equal SPL. Wave I amplitude at equal sensation level (SL), however, was similar before and after noise exposure. In addition, noise exposure resulted in decreases in ABR wave I latency at equal SL and, to a lesser extent, at equal SPL. The shifts in ABR characteristics were significantly related to AN-fiber tuning curve properties in the same animal at the same frequency. Larger shifts in ABR thresholds and ABR wave I amplitude at equal SPL were associated with greater AN threshold elevation. Larger reductions in ABR wave I latency at equal SL, on the other hand, were associated with greater loss of AN frequency selectivity. This result is consistent with linear systems theory, which predicts shorter time delays for broader peripheral frequency tuning. Taken together with other studies, our results affirm that ABR thresholds and wave I amplitude provide useful estimates of cochlear sensitivity. Furthermore, comparisons of ABR wave I latency to normative data at the same SL may prove useful for detecting and characterizing loss of cochlear frequency selectivity. PMID:21699970
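The linear-systems prediction invoked above (broader peripheral tuning implies a shorter delay) can be checked with a gammatone-style envelope, t^(n-1)·e^(-2πbt), whose peak occurs at (n-1)/(2πb), so doubling the bandwidth b halves the peak latency. The bandwidth values below are illustrative, not fits to the chinchilla data:

```python
import math

def envelope_peak_time(bandwidth_hz, order=4, dt=1e-5, t_max=0.05):
    """Numerically locate the peak of the gammatone envelope
    t^(n-1) * exp(-2*pi*b*t); analytically the peak is (n-1)/(2*pi*b)."""
    best_t, best_v = 0.0, -1.0
    t = dt
    while t < t_max:
        v = t ** (order - 1) * math.exp(-2 * math.pi * bandwidth_hz * t)
        if v > best_v:
            best_t, best_v = t, v
        t += dt
    return best_t

narrow = envelope_peak_time(50)   # sharply tuned (healthy) channel
broad = envelope_peak_time(200)   # broadened (impaired) channel
```

On this account, the reduced ABR wave I latency after noise exposure follows directly from loss of frequency selectivity: a broader filter rings up faster.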
Bellis, Teri James; Ross, Jody
2011-09-01
It has been suggested that, in order to validate a diagnosis of (C)APD (central auditory processing disorder), testing using direct cross-modal analogs should be performed to demonstrate that deficits exist solely or primarily in the auditory modality (McFarland and Cacace, 1995; Cacace and McFarland, 2005). This modality-specific viewpoint is controversial and not universally accepted (American Speech-Language-Hearing Association [ASHA], 2005; Musiek et al, 2005). Further, no such analogs have been developed to date, and neither the feasibility of such testing in normally functioning individuals nor the concurrent validity of cross-modal analogs has been established. The purpose of this study was to investigate the feasibility of cross-modal testing by examining the performance of normal adults and children on four tests of central auditory function and their corresponding visual analogs. In addition, this study investigated the degree to which concurrent validity of auditory and visual versions of these tests could be demonstrated. An experimental repeated measures design was employed. Participants consisted of two groups (adults, n=10; children, n=10) with normal and symmetrical hearing sensitivity, normal or corrected-to-normal visual acuity, and no family or personal history of auditory/otologic, language, learning, neurologic, or related disorders. Visual analogs of four tests in common clinical use for the diagnosis of (C)APD were developed (Dichotic Digits [Musiek, 1983]; Frequency Patterns [Pinheiro and Ptacek, 1971]; Duration Patterns [Pinheiro and Musiek, 1985]; and the Random Gap Detection Test [RGDT; Keith, 2000]). Participants underwent two 1 hr test sessions separated by at least 1 wk. Order of sessions (auditory, visual) and tests within each session were counterbalanced across participants. 
ANOVAs (analyses of variance) were used to examine effects of group, modality, and laterality (for the Dichotic/Dichoptic Digits tests) or response condition (for the auditory and visual Frequency Patterns and Duration Patterns tests). Pearson product-moment correlations were used to investigate relationships between auditory and visual performance. Adults performed significantly better than children on the Dichotic/Dichoptic Digits tests. Results also revealed a significant effect of modality, with auditory better than visual, and a significant modality×laterality interaction, with a right-ear advantage seen for the auditory task and a left-visual-field advantage seen for the visual task. For the Frequency Patterns test and its visual analog, results revealed a significant modality×response condition interaction, with humming better than labeling for the auditory version but the reversed effect for the visual version. For Duration Patterns testing, visual performance was significantly poorer than auditory performance. Due to poor test-retest reliability and ceiling effects for the auditory and visual gap-detection tasks, analyses could not be performed. No cross-modal correlations were observed for any test. Results demonstrated that cross-modal testing is at least feasible using easily accessible computer hardware and software. The lack of any cross-modal correlations suggests independent processing mechanisms for auditory and visual versions of each task. Examination of performance in individuals with central auditory and pan-sensory disorders is needed to determine the utility of cross-modal analogs in the differential diagnosis of (C)APD. American Academy of Audiology.
Snider, Kaitlin H.; Dziema, Heather; Aten, Sydney; Loeser, Jacob; Norona, Frances E.; Hoyt, Kari; Obrietan, Karl
2017-01-01
A large body of literature has shown that the disruption of circadian clock timing has profound effects on mood, memory and complex thinking. Central to this time keeping process is the master circadian pacemaker located within the suprachiasmatic nucleus (SCN). Of note, within the central nervous system, clock timing is not exclusive to the SCN, but rather, ancillary oscillatory capacity has been detected in a wide range of cell types and brain regions, including forebrain circuits that underlie complex cognitive processes. These observations raise questions about the hierarchical and functional relationship between the SCN and forebrain oscillators, and, relatedly, about the underlying clock-gated synaptic circuitry that modulates cognition. Here, we utilized a clock knockout strategy in which the essential circadian timing gene Bmal1 was selectively deleted from excitatory forebrain neurons, whilst the SCN clock remained intact, to test the role of forebrain clock timing in learning, memory, anxiety, and behavioral despair. With this model system, we observed numerous effects on hippocampus-dependent measures of cognition. Mice lacking forebrain Bmal1 exhibited deficits in both acquisition and recall on the Barnes maze. Notably, loss of forebrain Bmal1 abrogated time-of-day dependent novel object location memory. However, the loss of Bmal1 did not alter performance on the elevated plus maze, open field assay, and tail suspension test, indicating that this phenotype specifically impairs cognition but not affect. Together, these data suggest that forebrain clock timing plays a critical role in shaping the efficiency of learning and memory retrieval over the circadian day. PMID:27091299
Auditory Spectral Integration in the Perception of Static Vowels
ERIC Educational Resources Information Center
Fox, Robert Allen; Jacewicz, Ewa; Chang, Chiung-Yun
2011-01-01
Purpose: To evaluate potential contributions of broadband spectral integration in the perception of static vowels. Specifically, can the auditory system infer formant frequency information from changes in the intensity weighting across harmonics when the formant itself is missing? Does this type of integration produce the same results in the lower…
Auditory Attentional Capture: Effects of Singleton Distractor Sounds
ERIC Educational Resources Information Center
Dalton, Polly; Lavie, Nilli
2004-01-01
The phenomenon of attentional capture by a unique yet irrelevant singleton distractor has typically been studied in visual search. In this article, the authors examine whether a similar phenomenon occurs in the auditory domain. Participants searched sequences of sounds for targets defined by frequency, intensity, or duration. The presence of a…
Visual and Auditory Memory: Relationships to Reading Achievement.
ERIC Educational Resources Information Center
Bruning, Roger H.; And Others
1978-01-01
Good and poor readers' visual and auditory memory were tested. No group differences existed for single mode presentation in recognition frequency or latency. With multimodal presentation, good readers had faster latencies. Dual coding and self-terminating memory search hypotheses were supported. Implications for the reading process and reading…
Using a Function Generator to Produce Auditory and Visual Demonstrations.
ERIC Educational Resources Information Center
Woods, Charles B.
1998-01-01
Identifies a function generator as an instrument that produces time-varying electrical signals of frequency, wavelength, and amplitude. Sending these signals to a speaker or a light-emitting diode can demonstrate how specific characteristics of auditory or visual stimuli relate to perceptual experiences. Provides specific instructions for using…
Firing-rate resonances in the peripheral auditory system of the cricket, Gryllus bimaculatus.
Rau, Florian; Clemens, Jan; Naumov, Victor; Hennig, R Matthias; Schreiber, Susanne
2015-11-01
In many communication systems, information is encoded in the temporal pattern of signals. For rhythmic signals that carry information in specific frequency bands, a neuronal system may profit from tuning its inherent filtering properties towards a peak sensitivity in the respective frequency range. The cricket Gryllus bimaculatus evaluates acoustic communication signals of both conspecifics and predators. The song signals of conspecifics exhibit a characteristic pulse pattern that contains only a narrow range of modulation frequencies. We examined individual neurons (AN1, AN2, ON1) in the peripheral auditory system of the cricket for tuning towards specific modulation frequencies by assessing their firing-rate resonance. Acoustic stimuli with a swept-frequency envelope allowed an efficient characterization of the cells' modulation transfer functions. Some of the examined cells exhibited tuned band-pass properties. Using simple computational models, we demonstrate how different, cell-intrinsic or network-based mechanisms such as subthreshold resonances, spike-triggered adaptation, as well as an interplay of excitation and inhibition can account for the experimentally observed firing-rate resonances. Therefore, basic neuronal mechanisms that share negative feedback as a common theme may contribute to selectivity in the peripheral auditory pathway of crickets that is designed towards mate recognition and predator avoidance.
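One mechanism named above, subtractive adaptation, can produce a band-pass firing-rate resonance even in a minimal linear rate model. The model and time constants below are illustrative assumptions, not the spiking models or cricket parameters used in the study.

```python
import math

def gain(f_hz, tau_r=0.005, tau_a=0.2):
    """|H(f)| of a linear rate model with subtractive adaptation:
       tau_r * dr/dt = -r + I - a,   tau_a * da/dt = -a + r.
    For sinusoidal input I at frequency f, r = H(f) * I with
       H = 1 / (1 + i*w*tau_r + 1 / (1 + i*w*tau_a)),  w = 2*pi*f."""
    w = 2.0 * math.pi * f_hz
    H = 1.0 / (1.0 + 1j * w * tau_r + 1.0 / (1.0 + 1j * w * tau_a))
    return abs(H)

freqs = [0.5 * k for k in range(1, 401)]  # 0.5 .. 200 Hz
peak = max(freqs, key=gain)
# Adaptation suppresses slow modulations, the rate time constant suppresses
# fast ones, so the gain peaks at an intermediate modulation frequency.
print(gain(peak) > gain(0.5) and gain(peak) > gain(200.0))  # True
```

This reproduces only the qualitative band-pass shape of a modulation transfer function; the paper's conclusions rest on fits to AN1, AN2, and ON1 recordings.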
Testing resonating vector strength: Auditory system, electric fish, and noise
NASA Astrophysics Data System (ADS)
Leo van Hemmen, J.; Longtin, André; Vollmayr, Andreas N.
2011-12-01
Quite often a response to some input with a specific frequency ν○ can be described through a sequence of discrete events. Here, we study the synchrony vector, whose length stands for the vector strength, and in doing so focus on neuronal response in terms of spike times. The latter are supposed to be given by experiment. Instead of singling out the stimulus frequency ν○ we study the synchrony vector as a function of the real frequency variable ν. Its length turns out to be a resonating vector strength in that it shows clear maxima in the neighborhood of ν○ and multiples thereof, hence, allowing an easy way of determining response frequencies. We study this "resonating" vector strength for two concrete but rather different cases, viz., a specific midbrain neuron in the auditory system of cat and a primary detector neuron belonging to the electric sense of the wave-type electric fish Apteronotus leptorhynchus. We show that the resonating vector strength always performs a clear resonance correlated with the phase locking that it quantifies. We analyze the influence of noise and demonstrate how well the resonance associated with maximal vector strength indicates the dominant stimulus frequency. Furthermore, we exhibit how one can obtain a specific phase associated with, for instance, a delay in auditory analysis.
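The resonating vector strength described above can be sketched directly from its definition, R(nu) = |(1/N) * sum_k exp(2*pi*i*nu*t_k)|, evaluated as a function of the analysis frequency rather than only at the stimulus frequency. The spike train below is a synthetic toy example, not data from the cat or electric-fish recordings.

```python
import cmath
import random

def vector_strength(spike_times, f_hz):
    """Length of the synchrony vector at analysis frequency f (Hz):
       R(f) = | (1/N) * sum_k exp(2*pi*i*f*t_k) |."""
    s = sum(cmath.exp(2j * cmath.pi * f_hz * t) for t in spike_times)
    return abs(s) / len(spike_times)

# Toy spike train phase-locked to a 100 Hz stimulus: one spike per cycle
# with 0.5 ms Gaussian timing jitter.
random.seed(1)
f0 = 100.0
spikes = [k / f0 + random.gauss(0.0, 0.0005) for k in range(200)]

# Scanning the analysis frequency reveals a clear resonance near f0.
peak = max(range(50, 201), key=lambda f: vector_strength(spikes, float(f)))
print(peak)  # near 100
```

Away from f0 the phases decohere and R(f) collapses toward the noise floor, which is how the resonance picks out the dominant response frequency.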
Tani, Toshiki; Abe, Hiroshi; Hayami, Taku; Banno, Taku; Kitamura, Naohito; Mashiko, Hiromi
2018-01-01
Abstract Natural sound is composed of various frequencies. Although the core region of the primate auditory cortex has functionally defined sound frequency preference maps, how the map is organized in the auditory areas of the belt and parabelt regions is not well known. In this study, we investigated the functional organizations of the core, belt, and parabelt regions encompassed by the lateral sulcus and the superior temporal sulcus in the common marmoset (Callithrix jacchus). Using optical intrinsic signal imaging, we obtained evoked responses to band-pass noise stimuli in a range of sound frequencies (0.5–16 kHz) in anesthetized adult animals and visualized the preferred sound frequency map on the cortical surface. We characterized the functionally defined organization using histologically defined brain areas in the same animals. We found tonotopic representation of a set of sound frequencies (low to high) within the primary (A1), rostral (R), and rostrotemporal (RT) areas of the core region. In the belt region, the tonotopic representation existed only in the mediolateral (ML) area. This representation was symmetric with that found in A1 along the border between areas A1 and ML. The functional structure was not very clear in the anterolateral (AL) area. Low frequencies were mainly preferred in the rostrotemporal lateral (RTL) area, while high frequencies were preferred in the caudolateral (CL) area. There was a portion of the parabelt region that strongly responded to higher sound frequencies (>5.8 kHz) along the border between the rostral parabelt (RPB) and caudal parabelt (CPB) regions. PMID:29736410
Petrova, Ana; Gaskell, M. Gareth; Ferrand, Ludovic
2011-01-01
Many studies have repeatedly shown an orthographic consistency effect in the auditory lexical decision task. Words with phonological rimes that could be spelled in multiple ways (i.e., inconsistent words) typically produce longer auditory lexical decision latencies and more errors than do words with rimes that could be spelled in only one way (i.e., consistent words). These results have been extended to different languages and tasks, suggesting that the effect is quite general and robust. Despite this growing body of evidence, some psycholinguists believe that orthographic effects on spoken language are exclusively strategic, post-lexical, or restricted to peculiar (low-frequency) words. In the present study, we manipulated consistency and word-frequency orthogonally in order to explore whether the orthographic consistency effect extends to high-frequency words. Two different tasks were used: lexical decision and rime detection. Both tasks produced reliable consistency effects for both low- and high-frequency words. Furthermore, in Experiment 1 (lexical decision), an interaction revealed a stronger consistency effect for low-frequency words than for high-frequency words, as initially predicted by Ziegler and Ferrand (1998), whereas no interaction was found in Experiment 2 (rime detection). Our results extend previous findings by showing that the orthographic consistency effect is obtained not only for low-frequency words but also for high-frequency words. Furthermore, these effects were also obtained in a rime detection task, which does not require the explicit processing of orthographic structure. Globally, our results suggest that literacy changes the way people process spoken words, even for frequent words. PMID:22025916
Continuous exposure to low-frequency noise and carbon disulfide: Combined effects on hearing.
Venet, Thomas; Carreres-Pons, Maria; Chalansonnet, Monique; Thomas, Aurélie; Merlen, Lise; Nunge, Hervé; Bonfanti, Elodie; Cosnier, Frédéric; Llorens, Jordi; Campo, Pierre
2017-09-01
Carbon disulfide (CS2) is used in industry; it has been shown to have neurotoxic effects, causing central and distal axonopathies. However, it is not considered cochleotoxic, as it does not affect hair cells in the organ of Corti, and the only auditory effects reported in the literature were confined to the low-frequency region. No reports on the effects of combined exposure to low-frequency noise and CS2 have been published to date. This article focuses on the effects on rat hearing of combined exposure to noise and increasing concentrations of CS2 (0, 63, 250, and 500 ppm; 6 h per day, 5 days per week, for 4 weeks). The noise used was a low-frequency noise ranging from 0.5 to 2 kHz at an intensity of 106 dB SPL. Auditory function was tested using distortion-product otoacoustic emissions, which mainly reflect cochlear performance. Exposure to noise alone caused an auditory deficit in a frequency area ranging from 3.6 to 6 kHz. The damaged area was approximately one octave (6 kHz) above the highest frequency of the exposure noise (2.8 kHz); it was a little wider than expected based on the noise spectrum. Consequently, since maximum hearing sensitivity is located around 8 kHz in rats, low-frequency noise exposure can affect the cochlear regions detecting mid-range frequencies. Co-exposure to CS2 (250 ppm and over) and noise increased the extent of the damaged frequency window, since a significant auditory deficit was measured at 9.6 kHz in these conditions. Moreover, the significance at 9.6 kHz increased with the solvent concentration. Histological data showed that neither hair cells nor ganglion cells were damaged by CS2. This discrepancy between functional and histological data is discussed. Like most aromatic solvents, carbon disulfide should be considered a key parameter in hearing conservation regulations. Copyright © 2017 Elsevier B.V. All rights reserved.
Włodarczyk, Elżbieta; Szkiełkowska, Agata; Skarżyński, Henryk; Piłka, Adam
2011-01-01
To assess the effectiveness of auditory training in children with dyslalia and central auditory processing disorders. Material consisted of 50 children aged 7-9 years. Children with articulation disorders stayed under long-term speech therapy care in the Auditory and Phoniatrics Clinic. All children were examined by a laryngologist and a phoniatrician. Assessment included tonal and impedance audiometry and speech therapists' and psychologist's consultations. Additionally, a set of electrophysiological examinations was performed - registration of N2, P2, and P300 waves - together with a psychoacoustic test of central auditory function, the frequency pattern test (FPT). Next, children took part in regular auditory training and attended speech therapy. After treatment and therapy, speech was assessed, the psychoacoustic tests were repeated, and P300 cortical potentials were again recorded. Statistical analyses were then performed. The analyses revealed that auditory training in patients with dyslalia and other central auditory disorders is very efficient. Auditory training may be an efficient therapy supporting speech therapy in children suffering from dyslalia coexisting with articulation and central auditory disorders, and in children with educational problems of audiogenic origin. Copyright © 2011 Polish Otolaryngology Society. Published by Elsevier Urban & Partner (Poland). All rights reserved.
Tokarev, Kirill; Tiunova, Anna; Scharff, Constance; Anokhin, Konstantin
2011-01-01
Specialized neural pathways, the song system, are required for acquiring, producing, and perceiving learned avian vocalizations. Birds that do not learn to produce their vocalizations lack telencephalic song system components. It is not known whether the song system forebrain regions are exclusively evolved for song or whether they also process information not related to song that might reflect their 'evolutionary history'. To address this question we monitored the induction of two immediate-early genes (IEGs) c-Fos and ZENK in various regions of the song system in zebra finches (Taeniopygia guttata) in response to an aversive food learning paradigm; this involves the association of a food item with a noxious stimulus that affects the oropharyngeal-esophageal cavity and tongue, causing subsequent avoidance of that food item. The motor response results in beak and head movements but not vocalizations. IEGs have been extensively used to map neuro-molecular correlates of song motor production and auditory processing. As previously reported, neurons in two pallial vocal motor regions, HVC and RA, expressed IEGs after singing. Surprisingly, c-Fos was induced equivalently also after food aversion learning in the absence of singing. The density of c-Fos positive neurons was significantly higher than that of birds in control conditions. This was not the case in two other pallial song nuclei important for vocal plasticity, LMAN and Area X, although singing did induce IEGs in these structures, as reported previously. Our results are consistent with the possibility that some of the song nuclei may participate in non-vocal learning and the populations of neurons involved in the two tasks show partial overlap. These findings underscore the previously advanced notion that the specialized forebrain pre-motor nuclei controlling song evolved from circuits involved in behaviors related to feeding.
Opiate modulation of monoamines in the chick forebrain: possible role in emotional regulation?
Baldauf, K; Braun, K; Gruss, M
2005-02-05
Numerous studies have shown that the opiate system is crucially involved in emotionally guided behavior. In the present study, we focussed on the medio-rostral neostriatum/hyperstriatum ventrale (MNH) of the chick forebrain. This avian prefrontal cortex analogue is critically involved in auditory filial imprinting, a well-characterized juvenile emotional learning event. The high density of mu-opiate receptors expressed in the MNH led to the hypothesis that mu-opiate receptor-mediated processes may modulate the glutamatergic, dopaminergic, and/or serotonergic neurotransmission within the MNH and thereby have a critical impact on filial imprinting. Using microdialysis and pharmaco-behavioral approaches in young chicks, we demonstrated that: the systemic application of the mu-opiate receptor antagonist naloxone (5, 50 mg/kg) significantly increased extracellular levels of 5-HIAA and HVA; the systemic application of the specific mu-opiate receptor agonist DAGO (5 mg/kg) increased the levels of HVA and taurine, an effect that was antagonized by simultaneously applied naloxone (5 mg/kg); the local application of DAGO (1 mM) had no effects on 5-HIAA, HVA, glutamate, and taurine, however, the effects of systemically injected naloxone (5 mg/kg) were abolished by simultaneously applied DAGO (1 mM); the systemic application of naloxone (5 mg/kg) increased distress behavior (measured as the duration of distress vocalization during separation from the peer group). These results are in line with our hypothesis that the mu-opiate receptor-mediated modulation of serotonergic and dopaminergic neurotransmission alters the emotional and motivational status of the animal and thereby may play a modulatory role during filial imprinting in the newborn animal. 2004 Wiley Periodicals, Inc
Behavioral Measures of Auditory Streaming in Ferrets (Mustela putorius)
Ma, Ling; Yin, Pingbo; Micheyl, Christophe; Oxenham, Andrew J.; Shamma, Shihab A.
2015-01-01
An important aspect of the analysis of auditory “scenes” relates to the perceptual organization of sound sequences into auditory “streams.” In this study, we adapted two auditory perception tasks, used in recent human psychophysical studies, to obtain behavioral measures of auditory streaming in ferrets (Mustela putorius). One task involved the detection of shifts in the frequency of tones within an alternating tone sequence. The other task involved the detection of a stream of regularly repeating target tones embedded within a randomly varying multitone background. In both tasks, performance was measured as a function of various stimulus parameters, which previous psychophysical studies in humans have shown to influence auditory streaming. Ferret performance in the two tasks was found to vary as a function of these parameters in a way that is qualitatively consistent with the human data. These results suggest that auditory streaming occurs in ferrets, and that the two tasks described here may provide a valuable tool in future behavioral and neurophysiological studies of the phenomenon. PMID:20695663
NASA Astrophysics Data System (ADS)
Natarajan, Ajay; Hansen, John H. L.; Arehart, Kathryn Hoberg; Rossi-Katz, Jessica
2005-12-01
This study describes a new noise suppression scheme for hearing aid applications based on the auditory masking threshold (AMT) in conjunction with a modified generalized minimum mean square error estimator (GMMSE) for individual subjects with hearing loss. The representation of cochlear frequency resolution is achieved in terms of auditory filter equivalent rectangular bandwidths (ERBs). Estimation of AMT and spreading functions for masking are implemented in two ways: with normal auditory thresholds and normal auditory filter bandwidths (GMMSE-AMT[ERB]-NH) and with elevated thresholds and broader auditory filters characteristic of cochlear hearing loss (GMMSE-AMT[ERB]-HI). Evaluation is performed using speech corpora with objective quality measures (segmental SNR, Itakura-Saito), along with formal listener evaluations of speech quality rating and intelligibility. While no measurable changes in intelligibility occurred, evaluations showed quality improvement with both algorithm implementations. However, the customized formulation based on individual hearing losses was similar in performance to the formulation based on the normal auditory system.
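The auditory-filter equivalent rectangular bandwidths (ERBs) referred to above are conventionally computed from the Glasberg & Moore (1990) fit for normal hearing; whether this study uses exactly that fit, and how it broadens the filters for impaired listeners, is not stated, so the sketch below shows only the standard normal-hearing baseline.

```python
def erb_hz(f_hz):
    """Equivalent rectangular bandwidth (Hz) of the normal auditory filter
    centred at f_hz, per the Glasberg & Moore (1990) fit:
        ERB = 24.7 * (4.37 * f / 1000 + 1)."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

# ERB grows roughly proportionally with centre frequency above ~500 Hz.
for f in (250.0, 1000.0, 4000.0):
    print(int(f), round(erb_hz(f), 1))
```

In a hearing-impaired formulation such as GMMSE-AMT[ERB]-HI, these bandwidths would be widened to reflect the listener's broader cochlear filters; the widening factors are subject-specific and not reproduced here.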
Development of auditory sensory memory from 2 to 6 years: an MMN study.
Glass, Elisabeth; Sachse, Steffi; von Suchodoletz, Waldemar
2008-08-01
Short-term storage of auditory information is thought to be a precondition for cognitive development, and deficits in short-term memory are believed to underlie learning disabilities and specific language disorders. We examined the development of the duration of auditory sensory memory in normally developing children between the ages of 2 and 6 years. To probe the lifetime of auditory sensory memory we elicited the mismatch negativity (MMN), a component of the late auditory evoked potential, with tone stimuli of two different frequencies presented with various interstimulus intervals between 500 and 5,000 ms. Our findings suggest that memory traces for tone characteristics have a duration of 1-2 s in 2- and 3-year-old children, more than 2 s in 4-year-olds and 3-5 s in 6-year-olds. The results provide insights into the maturational processes involved in auditory sensory memory during the sensitive period of cognitive development.
Escera, Carles; Leung, Sumie; Grimm, Sabine
2014-07-01
Detection of changes in the acoustic environment is critical for survival, as it prevents missing potentially relevant events outside the focus of attention. In humans, deviance detection based on acoustic regularity encoding has been associated with a brain response derived from the human EEG, the mismatch negativity (MMN) auditory evoked potential, peaking at about 100-200 ms from deviance onset. By its long latency and cerebral generators, the cortical nature of both the processes of regularity encoding and deviance detection has been assumed. Yet, intracellular, extracellular, single-unit and local-field potential recordings in rats and cats have shown much earlier (circa 20-30 ms) and hierarchically lower (primary auditory cortex, medial geniculate body, inferior colliculus) deviance-related responses. Here, we review the recent evidence obtained with the complex auditory brainstem response (cABR), the middle latency response (MLR) and magnetoencephalography (MEG) demonstrating that human auditory deviance detection based on regularity encoding-rather than on refractoriness-occurs at latencies and in neural networks comparable to those revealed in animals. Specifically, encoding of simple acoustic-feature regularities and detection of corresponding deviance, such as an infrequent change in frequency or location, occur in the latency range of the MLR, in separate auditory cortical regions from those generating the MMN, and even at the level of human auditory brainstem. In contrast, violations of more complex regularities, such as those defined by the alternation of two different tones or by feature conjunctions (i.e., frequency and location) fail to elicit MLR correlates but elicit sizable MMNs. 
Altogether, these findings support the emerging view that deviance detection is a basic principle of the functional organization of the auditory system, and that regularity encoding and deviance detection is organized in ascending levels of complexity along the auditory pathway expanding from the brainstem up to higher-order areas of the cerebral cortex.
Basal Forebrain Gating by Somatostatin Neurons Drives Prefrontal Cortical Activity.
Espinosa, Nelson; Alonso, Alejandra; Morales, Cristian; Espinosa, Pedro; Chávez, Andrés E; Fuentealba, Pablo
2017-11-17
The basal forebrain provides modulatory input to the cortex regulating brain states and cognitive processing. Somatostatin-expressing neurons constitute a heterogeneous GABAergic population known to functionally inhibit basal forebrain cortically projecting cells thus favoring sleep and cortical synchronization. However, it remains unclear if somatostatin cells can regulate population activity patterns in the basal forebrain and modulate cortical dynamics. Here, we demonstrate that somatostatin neurons regulate the corticopetal synaptic output of the basal forebrain impinging on cortical activity and behavior. Optogenetic inactivation of somatostatin neurons in vivo rapidly modified neural activity in the basal forebrain, with the consequent enhancement and desynchronization of activity in the prefrontal cortex, reflected in both neuronal spiking and network oscillations. Cortical activation was partially dependent on cholinergic transmission, suppressing slow waves and potentiating gamma oscillations. In addition, recruitment dynamics was cell type-specific, with interneurons showing similar temporal profiles, but stronger responses than pyramidal cells. Finally, optogenetic stimulation of quiescent animals during resting periods prompted locomotor activity, suggesting generalized cortical activation and increased arousal. Altogether, we provide physiological and behavioral evidence indicating that somatostatin neurons are pivotal in gating the synaptic output of the basal forebrain, thus indirectly controlling cortical operations via both cholinergic and non-cholinergic mechanisms. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Estimating human cochlear tuning behaviorally via forward masking
NASA Astrophysics Data System (ADS)
Oxenham, Andrew J.; Kreft, Heather A.
2018-05-01
The cochlea is where sound vibrations are transduced into the initial neural code for hearing. Despite the intervening stages of auditory processing, a surprising number of auditory perceptual phenomena can be explained in terms of the cochlea's biomechanical transformations. The quest to relate perception to these transformations has a long and distinguished history. Given its long history, it is perhaps surprising that something as fundamental as the link between frequency tuning in the cochlea and perception remains a controversial and active topic of investigation. Here we review some recent developments in our understanding of the relationship between cochlear frequency tuning and behavioral measures of frequency selectivity in humans. We show that forward masking using the notched-noise technique can produce reliable estimates of tuning that are in line with predictions from stimulus frequency otoacoustic emissions.
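Notched-noise tuning estimates like those above are conventionally summarized with a rounded-exponential ("roex") filter, whose equivalent rectangular bandwidth follows from the shape parameter p. This is the standard textbook parameterization (Patterson-style roex(p)), assumed here for illustration rather than taken from the study's own analysis.

```python
import math

def roex_weight(g, p):
    """Roex(p) filter weight at normalized frequency offset |g| = |f - f0| / f0."""
    return (1.0 + p * abs(g)) * math.exp(-p * abs(g))

def roex_erb(f0_hz, p, dg=1e-5, g_max=2.0):
    """ERB = f0 * two-sided integral of the weight; analytically 4 * f0 / p."""
    n = int(g_max / dg)
    area = sum(roex_weight(i * dg, p) for i in range(n)) * dg  # one side
    return 2.0 * f0_hz * area

p = 25.0
print(round(roex_erb(1000.0, p)), round(4 * 1000.0 / p))  # nearly equal
```

Larger p means steeper filter skirts and hence a narrower ERB, which is the sense in which forward-masked notched-noise data can yield sharper tuning estimates than simultaneous masking.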
Learning-dependent plasticity in human auditory cortex during appetitive operant conditioning.
Puschmann, Sebastian; Brechmann, André; Thiel, Christiane M
2013-11-01
Animal experiments provide evidence that learning to associate an auditory stimulus with a reward causes representational changes in auditory cortex. However, most studies did not investigate the temporal formation of learning-dependent plasticity during the task but rather compared auditory cortex receptive fields before and after conditioning. We here present a functional magnetic resonance imaging study on learning-related plasticity in the human auditory cortex during operant appetitive conditioning. Participants had to learn to associate a specific category of frequency-modulated tones with a reward. Only participants who learned this association developed learning-dependent plasticity in left auditory cortex over the course of the experiment. No differential responses to reward predicting and nonreward predicting tones were found in auditory cortex in nonlearners. In addition, learners showed similar learning-induced differential responses to reward-predicting and nonreward-predicting tones in the ventral tegmental area and the nucleus accumbens, two core regions of the dopaminergic neurotransmitter system. This may indicate a dopaminergic influence on the formation of learning-dependent plasticity in auditory cortex, as it has been suggested by previous animal studies. Copyright © 2012 Wiley Periodicals, Inc.
Receiver bias and the acoustic ecology of aye-ayes (Daubentonia madagascariensis).
Ramsier, Marissa A; Dominy, Nathaniel J
2012-11-01
The aye-aye is a rare lemur from Madagascar that uses its highly specialized middle digit for percussive foraging. This acoustic behavior, also termed tap-scanning, produces dominant frequencies between 6 and 15 kHz. An enhanced auditory sensitivity to these frequencies raises the possibility that the acoustic and auditory specializations of aye-ayes have imposed constraints on the evolution of their vocal signals, especially their primary long-distance vocalization, the screech. Here we explore this concept, termed receiver bias, and suggest that the dominant frequency of the screech call (~2.7 kHz) represents an evolutionary compromise between the opposing adaptive advantages of long-distance sound propagation and enhanced detection by conspecific receivers.
Salicylate-induced changes in auditory thresholds of adolescent and adult rats.
Brennan, J F; Brown, C A; Jastreboff, P J
1996-01-01
Shifts in auditory intensity thresholds after salicylate administration were examined in postweanling and adult pigmented rats at frequencies ranging from 1 to 35 kHz. A total of 132 subjects from both age levels were tested under two-way active avoidance or one-way active avoidance paradigms. Estimated thresholds were inferred from behavioral responses to presentations of descending and ascending series of intensities for each test frequency value. Reliable threshold estimates were found under both avoidance conditioning methods, and compared to controls, subjects at both age levels showed threshold shifts at selective higher frequency values after salicylate injection, and the extent of shifts was related to salicylate dose level.
Verhey, Jesko L; Epp, Bastian; Stasiak, Arkadiusz; Winter, Ian M
2013-01-01
A common characteristic of natural sounds is that the level fluctuations in different frequency regions are coherent. The ability of the auditory system to use this comodulation is shown when a sinusoidal signal is masked by a masker centred at the signal frequency (on-frequency masker, OFM) and one or more off-frequency components, commonly referred to as flanking bands (FBs). In general, the threshold of the signal masked by comodulated masker components is lower than when masked by masker components with uncorrelated envelopes or in the presence of the OFM only. This effect is commonly referred to as comodulation masking release (CMR). The present study investigates whether CMR is also observed for a sinusoidal signal embedded in the OFM when the centre frequencies of the FBs are swept over time with a sweep rate of one octave per second. Both a common change of different frequencies and comodulation could serve as cues to indicate which of the stimulus components originate from one source. If the common fate of frequency components is the stronger binding cue, the sweeping FBs and the OFM with a fixed centre frequency should no longer form one auditory object and the CMR should be abolished. However, psychoacoustical results with normal-hearing listeners show that a CMR is also observed with sweeping components. The results are consistent with the hypothesis of wideband inhibition as the underlying physiological mechanism, as the CMR should only depend on the spectral position of the flanking bands relative to the inhibitory areas (as seen in physiological recordings using stationary flanking bands). Preliminary physiological results in the cochlear nucleus of the guinea pig show that a correlate of CMR can also be found at this level of the auditory pathway with sweeping flanking bands.
Keppler, H; Degeest, S; Dhooge, I
2017-11-01
Chronic tinnitus is associated with reduced auditory input, which results in changes in the central auditory system. This study aimed to examine the relationship between tinnitus pitch and parameters of audiometry and distortion product otoacoustic emissions. For audiometry, the parameters represented the edge frequency of hearing loss, the frequency of maximum hearing loss and the frequency range of hearing loss. For distortion product otoacoustic emissions, the parameters were the frequency of lowest distortion product otoacoustic emission amplitudes and the frequency range of reduced distortion product otoacoustic emissions. Sixty-seven patients (45 males, 22 females) with subjective chronic tinnitus, aged 18 to 73 years, were included. No correlation was found between tinnitus pitch and parameters of audiometry and distortion product otoacoustic emissions. However, tinnitus pitch fell mostly within the frequency range of hearing loss. The current study seems to confirm the relationship between tinnitus pitch and the frequency range of hearing loss, thus supporting the homeostatic plasticity model.
Evolution of the amniote pallium and the origins of mammalian neocortex
Butler, Ann B.; Reiner, Anton; Karten, Harvey J.
2012-01-01
Karten's neocortex hypothesis holds that many component cell populations of the sauropsid dorsal ventricular ridge (DVR) are homologous to particular cell populations in layers of auditory and visual tectofugal-recipient neocortex of mammals (i.e., temporal neocortex), as well as to some amygdaloid populations. The claustroamygdalar hypothesis, based on gene expression domains, proposes that mammalian homologues of DVR are found in the claustrum, endopiriform nuclei, and/or pallial amygdala. Because hypotheses of homology need to account for the totality of the evidence, the available data on multiple forebrain features of sauropsids and mammals are reviewed here. While some genetic data are compatible with the claustroamygdalar hypothesis, and developmental (epigenetic) data are indecisive, hodological, morphological, and topographical data favor the neocortex hypothesis and are inconsistent with the claustroamygdalar hypothesis. Detailed studies of gene signaling cascades that establish neuronal cell-type identity in DVR, tectofugal-recipient neocortex, and claustroamygdala will be needed to resolve this debate about the evolution of neocortex. PMID:21534989
de Hoz, Livia; Gierej, Dorota; Lioudyno, Victoria; Jaworski, Jacek; Blazejczyk, Magda; Cruces-Solís, Hugo; Beroun, Anna; Lebitko, Tomasz; Nikolaev, Tomasz; Knapska, Ewelina; Nelken, Israel; Kaczmarek, Leszek
2018-05-01
The behavioral changes that comprise operant learning are associated with plasticity in early sensory cortices as well as with modulation of gene expression, but the connection between the behavioral, electrophysiological, and molecular changes is only partially understood. We specifically manipulated c-Fos expression, a hallmark of learning-induced synaptic plasticity, in auditory cortex of adult mice using a novel approach based on RNA interference. Locally blocking c-Fos expression caused a specific behavioral deficit in a sound discrimination task, in parallel with decreased cortical experience-dependent plasticity, without affecting baseline excitability or basic auditory processing. Thus, c-Fos-dependent experience-dependent cortical plasticity is necessary for frequency discrimination in an operant behavioral task. Our results connect behavioral, molecular and physiological changes and demonstrate a role of c-Fos in experience-dependent plasticity and learning.
Snider, Kaitlin H; Dziema, Heather; Aten, Sydney; Loeser, Jacob; Norona, Frances E; Hoyt, Kari; Obrietan, Karl
2016-07-15
A large body of literature has shown that the disruption of circadian clock timing has profound effects on mood, memory and complex thinking. Central to this time keeping process is the master circadian pacemaker located within the suprachiasmatic nucleus (SCN). Of note, within the central nervous system, clock timing is not exclusive to the SCN, but rather, ancillary oscillatory capacity has been detected in a wide range of cell types and brain regions, including forebrain circuits that underlie complex cognitive processes. These observations raise questions about the hierarchical and functional relationship between the SCN and forebrain oscillators, and, relatedly, about the underlying clock-gated synaptic circuitry that modulates cognition. Here, we utilized a clock knockout strategy in which the essential circadian timing gene Bmal1 was selectively deleted from excitatory forebrain neurons, whilst the SCN clock remained intact, to test the role of forebrain clock timing in learning, memory, anxiety, and behavioral despair. With this model system, we observed numerous effects on hippocampus-dependent measures of cognition. Mice lacking forebrain Bmal1 exhibited deficits in both acquisition and recall on the Barnes maze. Notably, loss of forebrain Bmal1 abrogated time-of-day dependent novel object location memory. However, the loss of Bmal1 did not alter performance on the elevated plus maze, open field assay, and tail suspension test, indicating that this phenotype specifically impairs cognition but not affect. Together, these data suggest that forebrain clock timing plays a critical role in shaping the efficiency of learning and memory retrieval over the circadian day. Copyright © 2016 Elsevier B.V. All rights reserved.
Detecting modulated signals in modulated noise: (II) neural thresholds in the songbird forebrain.
Bee, Mark A; Buschermöhle, Michael; Klump, Georg M
2007-10-01
Sounds in the real world fluctuate in amplitude. The vertebrate auditory system exploits patterns of amplitude fluctuations to improve signal detection in noise. One experimental paradigm demonstrating these general effects has been used in psychophysical studies of 'comodulation detection difference' (CDD). The CDD effect refers to the fact that thresholds for detecting a modulated, narrowband noise signal are lower when the envelopes of flanking bands of modulated noise are comodulated with each other, but fluctuate independently of the signal compared with conditions in which the envelopes of the signal and flanking bands are all comodulated. Here, we report results from a study of the neural correlates of CDD in European starlings (Sturnus vulgaris). We manipulated: (i) the envelope correlations between a narrowband noise signal and a masker comprised of six flanking bands of noise; (ii) the signal onset delay relative to masker onset; (iii) signal duration; and (iv) masker spectrum level. Masked detection thresholds were determined from neural responses using signal detection theory. Across conditions, the magnitude of neural CDD ranged between 2 and 8 dB, which is similar to that reported in a companion psychophysical study of starlings [U. Langemann & G.M. Klump (2007) Eur. J. Neurosci., 26, 1969-1978]. We found little evidence to suggest that neural CDD resulted from the across-channel processing of auditory grouping cues related to common envelope fluctuations and synchronous onsets between the signal and flanking bands. We discuss a within-channel model of peripheral processing that explains many of our results.
Two-dimensional adaptation in the auditory forebrain
Nagel, Katherine I.; Doupe, Allison J.
2011-01-01
Sensory neurons exhibit two universal properties: sensitivity to multiple stimulus dimensions, and adaptation to stimulus statistics. How adaptation affects encoding along primary dimensions is well characterized for most sensory pathways, but if and how it affects secondary dimensions is less clear. We studied these effects for neurons in the avian equivalent of primary auditory cortex, responding to temporally modulated sounds. We showed that the firing rate of single neurons in field L was affected by at least two components of the time-varying sound log-amplitude. When overall sound amplitude was low, neural responses were based on nonlinear combinations of the mean log-amplitude and its rate of change (first time differential). At high mean sound amplitude, the two relevant stimulus features became the first and second time derivatives of the sound log-amplitude. Thus a strikingly systematic relationship between dimensions was conserved across changes in stimulus intensity, whereby one of the relevant dimensions approximated the time differential of the other dimension. In contrast to stimulus mean, increases in stimulus variance did not change relevant dimensions, but selectively increased the contribution of the second dimension to neural firing, illustrating a new adaptive behavior enabled by multidimensional encoding. Finally, we demonstrated theoretically that inclusion of time differentials as additional stimulus features, as seen so prominently in the single-neuron responses studied here, is a useful strategy for encoding naturalistic stimuli, because it can lower the necessary sampling rate while maintaining the robustness of stimulus reconstruction to correlated noise. PMID:21753019
On pure word deafness, temporal processing, and the left hemisphere.
Stefanatos, Gerry A; Gershkoff, Arthur; Madigan, Sean
2005-07-01
Pure word deafness (PWD) is a rare neurological syndrome characterized by severe difficulties in understanding and reproducing spoken language, with sparing of written language comprehension and speech production. The pathognomonic disturbance of auditory comprehension appears to be associated with a breakdown in processes involved in mapping auditory input to lexical representations of words, but the functional locus of this disturbance and the localization of the responsible lesion have long been disputed. We report here on a woman with PWD resulting from a circumscribed unilateral infarct involving the left superior temporal lobe who demonstrated significant problems processing transitional spectrotemporal cues in both speech and nonspeech sounds. On speech discrimination tasks, she exhibited poor differentiation of stop consonant-vowel syllables distinguished by voicing onset and brief formant frequency transitions. Isolated formant transitions could be reliably discriminated only at very long durations (> 200 ms). By contrast, click fusion threshold, which depends on millisecond-level resolution of brief auditory events, was normal. These results suggest that the problems with speech analysis in this case were not secondary to general constraints on auditory temporal resolution. Rather, they point to a disturbance of left hemisphere auditory mechanisms that preferentially analyze rapid spectrotemporal variations in frequency. The findings have important implications for our conceptualization of PWD and its subtypes.
Music for the birds: effects of auditory enrichment on captive bird species.
Robbins, Lindsey; Margulis, Susan W
2016-01-01
With the increase of mixed-species exhibits in zoos, targeting enrichment to individual species may be problematic. Often, mammals are the primary targets of enrichment, yet other species that share their environment (such as birds) will unavoidably be exposed to the enrichment as well. The purpose of this study was to determine (1) whether auditory stimuli designed for enrichment of primates influenced the behavior of captive birds in the zoo setting, and (2) whether the specific type of auditory enrichment affected bird behavior. Three different African bird species were observed at the Buffalo Zoo during exposure to natural sounds, classical music, and rock music. The results revealed that the average frequency of flying in all three bird species increased with naturalistic sounds and decreased with rock music (F = 7.63, df = 3,6, P = 0.018); vocalizations in two of the three species (Superb Starlings and Mousebirds) increased in response to all auditory stimuli (F = 18.61, df = 2,6, P = 0.0027), whereas one species (Lady Ross's Turacos) increased its frequency of duetting only in response to rock music (χ² = 18.5, df = 2, P < 0.0001). Auditory enrichment implemented for large mammals may thus influence behavior in non-target species as well, in this case leading to increased activity by birds. © 2016 Wiley Periodicals, Inc.
Audiological and electrophysiological assessment of professional pop/rock musicians.
Samelli, Alessandra G; Matas, Carla G; Carvallo, Renata M M; Gomes, Raquel F; de Beija, Carolina S; Magliaro, Fernanda C L; Rabelo, Camila M
2012-01-01
In the present study, we evaluated peripheral and central auditory pathways in professional musicians (with and without hearing loss) compared to non-musicians, with the goal of verifying whether music exposure could affect the auditory pathways as a whole. This prospective study compared the results obtained in three groups (musicians with and without hearing loss, and non-musicians). Thirty-two male individuals participated and were assessed by immittance measurements, pure-tone air conduction thresholds at all frequencies from 0.25 to 20 kHz, transient evoked otoacoustic emissions (TEOAE), auditory brainstem response (ABR), and cognitive potential. The musicians showed worse hearing thresholds in both conventional and high-frequency audiometry than the non-musicians, and the mean TEOAE amplitude was smaller in the musician group, whereas the mean latencies of the ABR and cognitive potential were shorter in the musicians than in the non-musicians. Our findings suggest that musicians are at risk of developing music-induced hearing loss. However, the electrophysiological evaluation showed that ABR and P300 latencies were shorter in musicians, which may suggest that the auditory training to which these musicians are exposed acts as a facilitator of acoustic signal transmission to the cortex.
Volume Attenuation and High Frequency Loss as Auditory Depth Cues in Stereoscopic 3D Cinema
NASA Astrophysics Data System (ADS)
Manolas, Christos; Pauletto, Sandra
2014-09-01
Assisted by the technological advances of the past decades, stereoscopic 3D (S3D) cinema is currently being established as a mainstream form of entertainment. The main focus of this collaborative effort is the creation of immersive S3D visuals. However, with few exceptions, little attention has been given so far to the potential contribution of the soundtrack to such environments. Sound has considerable potential both to enhance the impact of the S3D visual information and to expand the S3D cinematic world beyond the boundaries of the visuals. This article reports on our research into using auditory depth cues within the soundtrack to affect the perception of depth in cinematic S3D scenes. We study two main distance-related auditory cues: high-frequency loss and overall volume attenuation. A series of experiments explored the effectiveness of these auditory cues. The results, although not conclusive, indicate that the studied auditory cues can influence audience judgements of depth in cinematic 3D scenes, sometimes in unexpected ways. We conclude that 3D filmmaking can benefit from further studies on the effectiveness of specific sound design techniques to enhance S3D cinema.
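The two distance cues studied above are easy to state quantitatively. A minimal sketch, assuming an inverse-square law for volume attenuation and a first-order low-pass model for high-frequency loss (the function names and the low-pass model are illustrative assumptions, not taken from the article):

```python
import math

# Sketch of two distance-related auditory cues: overall volume attenuation
# (inverse-square law, ~6 dB drop per doubling of distance) and
# high-frequency loss, modelled here as a first-order low-pass filter.

def level_attenuation_db(d, d_ref=1.0):
    """Level drop in dB at distance d relative to reference distance d_ref."""
    return 20 * math.log10(d / d_ref)

def lowpass_gain_db(f, cutoff):
    """Magnitude response (dB) of a first-order low-pass at frequency f."""
    return -10 * math.log10(1 + (f / cutoff) ** 2)

attenuation = level_attenuation_db(2.0)    # ~6.02 dB quieter at double distance
hf_loss = lowpass_gain_db(8000.0, 2000.0)  # high frequencies roll off fastest
```

In a sound-design context, the perceived distance of a source could then be manipulated by jointly scaling its level and filtering its top end.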
NASA Astrophysics Data System (ADS)
Modegi, Toshio
We are developing audio watermarking techniques that enable embedded data to be extracted by cell phones. This requires embedding data in frequency ranges where auditory sensitivity is high, so embedding tends to introduce audible noise. We previously proposed exploiting a two-channel stereo playback feature, in which the noise generated by the data-embedded left-channel signal is cancelled by the right-channel signal. However, that approach has the practical drawback of restricting where the extracting terminal can be placed. In this paper, we propose synthesizing the noise-reducing right-channel signal into the left-channel signal, cancelling the noise perceptually by inducing an auditory stream segregation phenomenon in the listener. This new proposal makes a separate noise-reducing right channel unnecessary and supports monaural playback. Moreover, we propose a wide-band embedding method that induces dual auditory stream segregation phenomena, enabling data embedding across the whole public telephone frequency range and stable extraction with 3G mobile phones. With these proposals, extraction precision is higher than with the previously proposed method, while degradation of the embedded signal quality is smaller. We present an outline of the newly proposed method and experimental results compared with those of the previously proposed method.
Auditory mismatch negativity deficits in long-term heavy cannabis users.
Roser, Patrik; Della, Beate; Norra, Christine; Uhl, Idun; Brüne, Martin; Juckel, Georg
2010-09-01
Mismatch negativity (MMN) is an auditory event-related potential indicating auditory sensory memory and information processing. The present study tested the hypothesis that chronic cannabis use is associated with deficient MMN generation. MMN was investigated in age- and gender-matched chronic cannabis users (n = 30) and nonuser controls (n = 30). The cannabis users were divided into two groups according to duration and quantity of cannabis consumption. The MMNs resulting from a pseudorandomized sequence of 2 × 900 auditory stimuli were recorded by 32-channel EEG. The standard stimuli were 1,000 Hz, 80 dB SPL and 90 ms duration. The deviant stimuli differed in duration (50 ms) or frequency (1,200 Hz). There were no significant differences in MMN values between cannabis users and nonuser controls in both deviance conditions. With regard to subgroups, reduced amplitudes of frequency MMN at frontal electrodes were found in long-term (≥8 years of use) and heavy (≥15 joints/week) users compared to short-term and light users. The results indicate that chronic cannabis use may cause a specific impairment of auditory information processing. In particular, duration and quantity of cannabis use could be identified as important factors of deficient MMN generation.
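Computationally, the MMN studied above is a difference wave: the averaged response to deviants minus the averaged response to standards, with the MMN taken as its negative peak. A hedged, synthetic illustration (all amplitudes, latencies, noise levels, and trial counts are invented for demonstration; they are not data from the study):

```python
import numpy as np

# Synthetic MMN difference wave: deviant-minus-standard averaged responses.
rng = np.random.default_rng(0)
fs = 500                       # sampling rate (Hz)
n = fs                         # one-second epochs
t = np.arange(n) / fs

def trials(n_trials, mmn_amp):
    # Deviants carry an extra negative deflection around 200 ms
    erp = -mmn_amp * np.exp(-((t - 0.2) ** 2) / (2 * 0.03 ** 2))
    return erp + rng.normal(0, 0.5, size=(n_trials, n))

standard = trials(900, 0.0).mean(axis=0)   # averaged standard response
deviant = trials(100, 2.0).mean(axis=0)    # averaged deviant response
mmn = deviant - standard                   # difference wave
peak_latency = t[np.argmin(mmn)]           # MMN peak = most negative point
```

Group comparisons such as those reported above then amount to comparing the amplitude (and latency) of this negative peak between users and controls.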
Intracranial mapping of auditory perception: Event-related responses and electrocortical stimulation
Sinai, A.; Crone, N.E.; Wied, H.M.; Franaszczuk, P.J.; Miglioretti, D.; Boatman-Reich, D.
2010-01-01
Objective: We compared intracranial recordings of auditory event-related responses with electrocortical stimulation mapping (ESM) to determine their functional relationship. Methods: Intracranial recordings and ESM were performed, using speech and tones, in adult epilepsy patients with subdural electrodes implanted over lateral left cortex. Evoked N1 responses and induced spectral power changes were obtained by trial averaging and time-frequency analysis. Results: ESM impaired perception and comprehension of speech, not tones, at electrode sites in the posterior temporal lobe. There was high spatial concordance between ESM sites critical for speech perception and the largest spectral power (100% concordance) and N1 (83%) responses to speech. N1 responses showed good sensitivity (0.75) and specificity (0.82), but poor positive predictive value (0.32). Conversely, increased high-frequency power (>60 Hz) showed high specificity (0.98), but poorer sensitivity (0.67) and positive predictive value (0.67). Stimulus-related differences were observed in the spatial-temporal patterns of event-related responses. Conclusions: Intracranial auditory event-related responses to speech were associated with cortical sites critical for auditory perception and comprehension of speech. Significance: These results suggest that the distribution and magnitude of intracranial auditory event-related responses to speech reflect the functional significance of the underlying cortical regions and may be useful for pre-surgical functional mapping. PMID:19070540
Sherlin, Leslie; Budzynski, Thomas; Kogan Budzynski, Helen; Congedo, Marco; Fischer, Mary E; Buchwald, Dedra
2007-02-15
Previous work using quantified EEG has suggested that brain activity in individuals with chronic fatigue syndrome (CFS) and normal persons differs. Our objective was to investigate if specific frequency band-pass regions and spatial locations are associated with CFS using low-resolution electromagnetic brain tomography (LORETA). We conducted a co-twin control study of 17 pairs of monozygotic twins where 1 twin met criteria for CFS and the co-twin was healthy. Twins underwent an extensive battery of tests including a structured psychiatric interview and a quantified EEG. Eyes closed EEG frequency-domain analysis was computed and the entire brain volume was compared of the CFS and healthy twins using a multiple comparison procedure. Compared with their healthy co-twins, twins with CFS differed in current source density. The CFS twins had higher delta in the left uncus and parahippocampal gyrus and higher theta in the cingulate gyrus and right superior frontal gyrus. These findings suggest that neurophysiological activity in specific areas of the brain may differentiate individuals with CFS from those in good health. The study corroborates that slowing of the deeper structures of the limbic system is associated with affect. It also supports the neurobiological model that the right forebrain is associated with sympathetic activity and the left forebrain with the effective management of energy. These preliminary findings await replication.
Electrical Brain Responses to an Auditory Illusion and the Impact of Musical Expertise
Ioannou, Christos I.; Pereda, Ernesto; Lindsen, Job P.; Bhattacharya, Joydeep
2015-01-01
The presentation of two sinusoidal tones, one to each ear, with a slight frequency mismatch yields an auditory illusion of a beating frequency equal to the frequency difference between the two tones; this is known as binaural beat (BB). The effect of brief BB stimulation on scalp EEG is not conclusively demonstrated. Further, no studies have examined the impact of musical training associated with BB stimulation, yet musicians' brains are often associated with enhanced auditory processing. In this study, we analysed EEG brain responses from two groups, musicians and non-musicians, when stimulated by short presentation (1 min) of binaural beats with beat frequency varying from 1 Hz to 48 Hz. We focused our analysis on alpha and gamma band EEG signals, and they were analysed in terms of spectral power, and functional connectivity as measured by two phase synchrony based measures, phase locking value and phase lag index. Finally, these measures were used to characterize the degree of centrality, segregation and integration of the functional brain network. We found that beat frequencies belonging to alpha band produced the most significant steady-state responses across groups. Further, processing of low frequency (delta, theta, alpha) binaural beats had significant impact on cortical network patterns in the alpha band oscillations. Altogether these results provide a neurophysiological account of cortical responses to BB stimulation at varying frequencies, and demonstrate a modulation of cortico-cortical connectivity in musicians' brains, and further suggest a kind of neuronal entrainment of a linear and nonlinear relationship to the beating frequencies. PMID:26065708
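The dichotic stimulus underlying the binaural beat is simple to construct: one pure tone per ear, with the perceived beat rate equal to the frequency difference between the two tones. A minimal sketch (function name and parameter values are illustrative):

```python
import numpy as np

# Binaural-beat stimulus: one pure tone per ear with a slight frequency
# mismatch; the perceived beat rate equals |f_left - f_right|.
def binaural_beat(f_left, f_right, fs=44100, dur=1.0):
    t = np.arange(int(fs * dur)) / fs
    left = np.sin(2 * np.pi * f_left * t)
    right = np.sin(2 * np.pi * f_right * t)
    return np.stack([left, right], axis=1), abs(f_left - f_right)

# 400 Hz vs 410 Hz yields a 10 Hz beat, within the alpha band
stereo, beat_hz = binaural_beat(400.0, 410.0)
```

Sweeping the mismatch from 1 to 48 Hz, as in the study above, simply varies `f_right` while holding `f_left` fixed.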
Lähteenmäki, P M; Krause, C M; Sillanmäki, L; Salmi, T T; Lang, A H
1999-12-01
Event-related desynchronization (ERD) and synchronization (ERS) of the 8-10 and 10-12 Hz frequency bands of the background EEG were studied in 19 adolescent survivors of childhood cancer (11 leukemias, 8 solid tumors) and in 10 healthy control subjects performing an auditory memory task. The stimuli were auditory Finnish words presented in a Sternberg-type memory-scanning paradigm. Each trial started with the presentation of a four-word set for memorization, after which a probe word was presented to be identified by the subject as belonging or not belonging to the memorized set. Encoding of the memory set elicited ERS, and retrieval elicited ERD, at both frequency bands. However, in the survivors of leukemia, ERS was replaced by ERD during encoding at the lower alpha frequency band. ERD lasted longer at the lower frequency band than at the higher frequency band in each study group. At both frequency bands, the maximum ERD was reached later in the cancer survivors than in the control group. The previously reported pattern of ERD/ERS during an auditory memory task was thus reproducible in the survivors of childhood cancer at the different alpha frequency bands. However, the temporal deviance in ERD/ERS magnitudes in the cancer survivors was interpreted to indicate that both survivor groups had prolonged information processing times and/or used ineffective cognitive strategies. This finding was more pronounced in the group of leukemia survivors at the lower alpha frequency band, suggesting that the main problem of this patient group might lie in the domain of attention.
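ERD/ERS as used above is conventionally quantified as the percentage change of band power in a test interval relative to a pre-stimulus reference interval (Pfurtscheller's convention): negative values mark desynchronization, positive values synchronization. A minimal sketch with invented power values:

```python
# Classic ERD/ERS quantification: percentage change of band power in a test
# interval relative to a pre-stimulus reference interval. Negative values
# indicate desynchronization (ERD), positive values synchronization (ERS).
# The power values below are invented for illustration.
def erd_ers_percent(power_test, power_ref):
    return 100.0 * (power_test - power_ref) / power_ref

encoding = erd_ers_percent(13.0, 10.0)    # 30.0  -> ERS during encoding
retrieval = erd_ers_percent(6.0, 10.0)    # -40.0 -> ERD during retrieval
```

The group differences reported above then correspond to differences in the sign, magnitude, and timing of this percentage measure across the task.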
Smit, Jasper V; Jahanshahi, Ali; Janssen, Marcus L F; Stokroos, Robert J; Temel, Yasin
2017-01-01
Recently it has been shown in animal studies that deep brain stimulation (DBS) of auditory structures can reduce tinnitus-like behavior. However, the question arises whether hearing might be impaired when interfering in auditory-related network loops with DBS. The auditory brainstem response (ABR) was measured in rats during high frequency stimulation (HFS) and low frequency stimulation (LFS) in the central nucleus of the inferior colliculus (CIC, n = 5) or the dentate cerebellar nucleus (DCBN, n = 5). Besides hearing thresholds, relative measures of latency and amplitude can be extracted from the ABR. In this study ABR thresholds, interpeak latencies (I-III, III-V, I-V) and the V/I amplitude ratio were measured during the off-stimulation state and during LFS and HFS. In both the CIC and the DCBN groups, no significant differences were observed for any outcome measure. DBS in both the CIC and the DCBN did not have adverse effects on hearing measurements. These findings suggest that DBS does not hamper physiological processing in the auditory circuitry.
Effects of auditory cues on gait initiation and turning in patients with Parkinson's disease.
Gómez-González, J; Martín-Casas, P; Cano-de-la-Cuerda, R
2016-12-08
To review the available scientific evidence about the effectiveness of auditory cues during gait initiation and turning in patients with Parkinson's disease. We conducted a literature search in the following databases: Brain, PubMed, Medline, CINAHL, Scopus, Science Direct, Web of Science, Cochrane Database of Systematic Reviews, Cochrane Library Plus, CENTRAL, Trip Database, PEDro, DARE, OTseeker, and Google Scholar. We included all studies published between 2007 and 2016 and evaluating the influence of auditory cues on independent gait initiation and turning in patients with Parkinson's disease. The methodological quality of the studies was assessed with the Jadad scale. We included 13 studies, all of which had a low methodological quality (Jadad scale score≤2). In these studies, high-intensity, high-frequency auditory cues had a positive impact on gait initiation and turning. More specifically, they 1) improved spatiotemporal and kinematic parameters; 2) decreased freezing, turning duration, and falls; and 3) increased gait initiation speed, muscle activation, and gait speed and cadence in patients with Parkinson's disease. We need studies of better methodological quality to establish the Parkinson's disease stage in which auditory cues are most beneficial, as well as to determine the most effective type and frequency of the auditory cue during gait initiation and turning in patients with Parkinson's disease. Copyright © 2016 Sociedad Española de Neurología. Publicado por Elsevier España, S.L.U. All rights reserved.
Local and Global Auditory Processing: Behavioral and ERP Evidence
Sanders, Lisa D.; Poeppel, David
2007-01-01
Differential processing of local and global visual features is well established. Global precedence effects, differences in event-related potentials (ERPs) elicited when attention is focused on local versus global levels, and hemispheric specialization for local and global features all indicate that relative scale of detail is an important distinction in visual processing. Observing analogous differential processing of local and global auditory information would suggest that scale of detail is a general organizational principle of the brain. However, to date the research on auditory local and global processing has primarily focused on music perception or on the perceptual analysis of relatively higher and lower frequencies. The study described here suggests that temporal aspects of auditory stimuli better capture the local-global distinction. By combining short (40 ms) frequency-modulated tones in series to create global auditory patterns (500 ms), we independently varied whether pitch increased or decreased over short time spans (local) and longer time spans (global). Accuracy and reaction time measures revealed better performance for global judgments and asymmetric interference effects, both modulated by the amount of pitch change. ERPs recorded while participants listened to identical sounds and indicated the direction of pitch change at the local or global level provided evidence for differential processing similar to that found in ERP studies employing hierarchical visual stimuli. ERP measures failed to provide evidence for lateralization of local and global auditory perception, but differences in scalp distributions suggest preferential processing in more ventral and dorsal areas, respectively. PMID:17113115
2012-01-01
Background About 25% of schizophrenia patients with auditory hallucinations are refractory to pharmacotherapy and electroconvulsive therapy. We conducted a deep transcranial magnetic stimulation (TMS) pilot study in order to evaluate the potential clinical benefit of repeated left temporoparietal cortex stimulation in these patients. The results were encouraging, but a sham-controlled study was needed to rule out a placebo effect. Methods A total of 18 schizophrenia patients with refractory auditory hallucinations were recruited from the outpatient populations of Beer Yaakov MHC and other hospitals. Patients received 10 daily treatment sessions with low-frequency (1 Hz for 10 min) deep TMS applied over the left temporoparietal cortex, using the H1 coil at an intensity of 110% of the motor threshold. The procedure was either real or sham according to patient randomization. Patients were evaluated via the Auditory Hallucinations Rating Scale, the Scale for the Assessment of Positive Symptoms-Negative Symptoms, Clinical Global Impressions, and the Quality of Life Questionnaire. Results In all, 10 patients completed the treatment (10 TMS sessions). Auditory hallucination scores of both groups improved; however, there was no statistical difference on any of the scales between the active and the sham-treated groups. Conclusions Low-frequency deep TMS to the left temporoparietal cortex using the protocol mentioned above has no statistically significant effect on auditory hallucinations or the other clinical scales measured in schizophrenia patients. Trial Registration Clinicaltrials.gov identifier: NCT00564096. PMID:22559192
Weisz, Nathan; Obleser, Jonas
2014-01-01
Human magneto- and electroencephalography (M/EEG) are capable of tracking brain activity at millisecond temporal resolution in an entirely non-invasive manner, a feature that offers unique opportunities to uncover the spatiotemporal dynamics of the hearing brain. In general, precise synchronisation of neural activity within as well as across distributed regions is likely to subserve any cognitive process, with auditory cognition being no exception. Brain oscillations, in a range of frequencies, are a putative hallmark of this synchronisation process. Embedded in a larger effort to relate human cognition to brain oscillations, a field of research is emerging on how synchronisation within, as well as between, brain regions may shape auditory cognition. Combined with much improved source localisation and connectivity techniques, it has become possible to study directly the neural activity of auditory cortex with unprecedented spatio-temporal fidelity and to uncover frequency-specific long-range connectivities across the human cerebral cortex. In the present review, we will summarise recent contributions mainly of our laboratories to this emerging domain. We present (1) a more general introduction on how to study local as well as interareal synchronisation in human M/EEG; (2) how these networks may subserve and influence illusory auditory perception (clinical and non-clinical) and (3) auditory selective attention; and (4) how oscillatory networks further reflect and impact on speech comprehension. This article is part of a Special Issue entitled Human Auditory Neuroimaging. Copyright © 2013 Elsevier B.V. All rights reserved.
Ravicz, M E; Rosowski, J J; Voigt, H F
1992-07-01
This is the first paper of a series dealing with sound-power collection by the auditory periphery of the gerbil. The purpose of the series is to quantify the physiological action of the gerbil's relatively large tympanic membrane and middle-ear air cavities. To this end the middle-ear input impedance ZT was measured at frequencies between 10 Hz and 18 kHz before and after manipulations of the middle-ear cavity. The frequency dependence of ZT is consistent with that of the middle-ear transfer function computed from extant data. Comparison of the impedance and transfer function suggests a middle-ear transformer ratio of 50 at frequencies below 1 kHz, substantially smaller than the anatomical value of 90 [Lay, J. Morph. 138, 41-120 (1972)]. Below 1 kHz the data suggest a low-frequency acoustic stiffness KT for the middle ear of 970 Pa/mm3 and a stiffness of the middle-ear cavity of 720 Pa/mm3 (middle-ear cavity volume V MEC of 195 mm3); thus the middle-ear air spaces contribute about 70% of the acoustic stiffness of the auditory periphery. Manipulations of a middle-ear model suggest that decreases in V MEC lead to proportionate increases in KT, but that further increases in middle-ear cavity volume produce only limited decreases in middle-ear stiffness. The data and the model point out that the real part of the middle-ear impedance at frequencies below 100 Hz is determined primarily by losses within the middle-ear cavity. The measured impedance is comparable in magnitude and frequency dependence to the impedance in several larger mammalian species commonly used in auditory research. A comparison of low-frequency stiffness and anatomical dimensions among several species suggests that the large middle-ear cavities in the gerbil act to reduce the middle-ear stiffness at low frequencies. A description of sound-power collection by the gerbil ear requires a description of the function of the external ear.
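The reported cavity stiffness is consistent with the textbook expression for the acoustic stiffness of an enclosed air volume, K = ρc²/V. A quick plausibility check (this calculation is ours, not the paper's; air properties are assumed ambient values):

```python
# Acoustic stiffness of an enclosed air volume: K = rho * c^2 / V.
rho = 1.19        # kg/m^3, air density (assumed ambient conditions)
c = 345.0         # m/s, speed of sound in air (assumed)
V = 195e-9        # m^3 (the reported 195 mm^3 middle-ear cavity volume)
K = rho * c ** 2 / V * 1e-9   # convert Pa/m^3 -> Pa/mm^3
print(round(K))               # ~725, close to the reported 720 Pa/mm^3
# The cavity's share of the total acoustic stiffness, 720/970, is ~74%,
# matching the paper's "about 70%" figure.
```

The inverse dependence on V is also why the model manipulations show proportionate stiffness increases when the cavity volume is reduced.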
NASA Astrophysics Data System (ADS)
Comastri, S. A.; Martin, G.; Simon, J. M.; Angarano, C.; Dominguez, S.; Luzzi, F.; Lanusse, M.; Ranieri, M. V.; Boccio, C. M.
2008-04-01
In Optometry and in Audiology, the routine tests to prescribe correction lenses and headsets are, respectively, the visual acuity test (the first chart with letters was developed by Snellen in 1862) and conventional pure tone audiometry (the first audiometer with electrical current was devised by Hartmann in 1878). At present there are non-invasive psychophysical tests that, besides evaluating visual and auditory performance globally and even in cases catalogued as normal according to routine tests, supply early information regarding diseases such as diabetes, hypertension, renal failure, cardiovascular problems, etc. In Optometry, one of these tests is the achromatic luminance contrast sensitivity test (introduced by Schade in 1956). In Audiology, one of these tests is high frequency pure tone audiometry (introduced a few decades ago), which yields information relative to pathologies affecting the basal cochlea and complements data resulting from conventional audiometry. These utilities of the contrast sensitivity test and of pure tone audiometry derive from the facts that Fourier components constitute the basis for synthesizing the stimuli present at the entrance of the visual and auditory systems; that these systems' responses are frequency dependent; and that the patient's psychophysical state affects frequency processing. The frequency of interest in the former test is the effective spatial frequency (the inverse of the angle subtended at the eye by one cycle of a sinusoidal grating, measured in cycles/degree) and, in the latter, the temporal frequency (measured in cycles/sec). Both tests have similar durations and consist in determining the patient's threshold (corresponding to the multiplicative inverse of the contrast or to the additive inverse of the sound intensity level) for each harmonic stimulus present at the system entrance (sinusoidal grating or pure tone sound).
In this article the frequencies, standard normality curves and abnormal threshold shifts inherent to the contrast sensitivity test (which for simplicity could be termed "visionmetry") and to pure tone audiometry (also termed the auditory sensitivity test) are analyzed, with the purpose of publicizing their ability to supply early information associated with pathologies not solely related to the visual and auditory systems, respectively.
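The effective spatial frequency defined above follows directly from viewing geometry. A minimal sketch (the 10 mm period and 3 m viewing distance are illustrative assumptions, not values from the article):

```python
import math

def cycles_per_degree(period_mm, distance_mm):
    """Effective spatial frequency of a sinusoidal grating: the inverse of the
    visual angle (in degrees) subtended by one cycle at the given distance."""
    angle_deg = 2 * math.degrees(math.atan(period_mm / (2 * distance_mm)))
    return 1.0 / angle_deg

# A grating with a 10 mm period viewed from 3 m subtends ~0.19 deg per cycle,
# i.e. roughly 5 cycles/degree -- mid-range for contrast sensitivity testing.
print(round(cycles_per_degree(10, 3000), 1))
```

Halving the grating period (or doubling the viewing distance) approximately doubles the spatial frequency, since the subtended angle is nearly linear in period/distance for small angles.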
Genomic Perspectives of Transcriptional Regulation in Forebrain Development
Nord, Alex S.; Pattabiraman, Kartik; Visel, Axel; ...
2015-01-07
The forebrain is the seat of higher-order brain functions, and many human neuropsychiatric disorders are due to genetic defects affecting forebrain development, making it imperative to understand the underlying genetic circuitry. We report that recent progress now makes it possible to begin fully elucidating the genomic regulatory mechanisms that control forebrain gene expression. Here, we discuss the current knowledge of how transcription factors drive gene expression programs through their interactions with cis-acting genomic elements, such as enhancers; how analyses of chromatin and DNA modifications provide insights into gene expression states; and how these approaches yield insights into the evolution of the human brain.
Auditory Gap-in-Noise Detection Behavior in Ferrets and Humans
2015-01-01
The precise encoding of temporal features of auditory stimuli by the mammalian auditory system is critical to the perception of biologically important sounds, including vocalizations, speech, and music. In this study, auditory gap-detection behavior was evaluated in adult pigmented ferrets (Mustela putorius furo) using bandpassed stimuli designed to widely sample the ferret’s behavioral and physiological audiogram. Animals were tested under positive operant conditioning, with psychometric functions constructed in response to gap-in-noise lengths ranging from 3 to 270 ms. Using a modified version of this gap-detection task, with the same stimulus frequency parameters, we also tested a cohort of normal-hearing human subjects. Gap-detection thresholds were computed from psychometric curves transformed according to signal detection theory, revealing that for both ferrets and humans, detection sensitivity was worse for silent gaps embedded within low-frequency noise compared with high-frequency or broadband stimuli. Additional psychometric function analysis of ferret behavior indicated effects of stimulus spectral content on aspects of behavioral performance related to decision-making processes, with animals displaying improved sensitivity for broadband gap-in-noise detection. Reaction times derived from unconditioned head-orienting data and the time from stimulus onset to reward spout activation varied with the stimulus frequency content and gap length, as well as the approach-to-target choice and reward location. The present study represents a comprehensive evaluation of gap-detection behavior in ferrets, while similarities in performance with our human subjects confirm the use of the ferret as an appropriate model of temporal processing. PMID:26052794
Ontogenetic Development of Weberian Ossicles and Hearing Abilities in the African Bullhead Catfish
Lechner, Walter; Heiss, Egon; Schwaha, Thomas; Glösmann, Martin; Ladich, Friedrich
2011-01-01
Background The Weberian apparatus of otophysine fishes facilitates sound transmission from the swimbladder to the inner ear to increase hearing sensitivity. It has been of great interest to biologists since the 19th century. No studies, however, are available on the development of the Weberian ossicles and its effect on the development of hearing in catfishes. Methodology/Principal Findings We investigated the development of the Weberian apparatus and auditory sensitivity in the catfish Lophiobagrus cyclurus. Specimens from 11.3 mm to 85.5 mm in standard length were studied. Morphology was assessed using sectioning, histology, and X-ray computed tomography, along with 3D reconstruction. Hearing thresholds were measured utilizing the auditory evoked potentials recording technique. Weberian ossicles and interossicular ligaments were fully developed in all stages investigated except in the smallest size group. In the smallest catfish, the intercalarium and the interossicular ligaments were still missing and the tripus was not yet fully developed. The smallest juveniles showed the lowest auditory sensitivity and were unable to detect frequencies higher than 2 or 3 kHz; in larger specimens, sensitivity increased by up to 40 dB and frequency detection extended up to 6 kHz. Among the size groups capable of perceiving frequencies up to 6 kHz, larger individuals had better hearing abilities at low frequencies (0.05–2 kHz), whereas smaller individuals showed better hearing at the highest frequencies (4–6 kHz). Conclusions/Significance Our data indicate that the ability of otophysine fish to detect sounds at low levels and high frequencies largely depends on the development of the Weberian apparatus. A significant increase in auditory sensitivity was observed as soon as all Weberian ossicles and interossicular ligaments were present and the chain for transmitting sounds from the swimbladder to the inner ear was complete.
This contrasts with findings in another otophysine, the zebrafish, where no threshold changes have been observed. PMID:21533262
Phencyclidine Disrupts the Auditory Steady State Response in Rats
Leishman, Emma; O’Donnell, Brian F.; Millward, James B.; Vohs, Jenifer L.; Rass, Olga; Krishnan, Giri P.; Bolbecker, Amanda R.; Morzorati, Sandra L.
2015-01-01
The Auditory Steady-State Response (ASSR) in the electroencephalogram (EEG) is usually reduced in schizophrenia (SZ), particularly to 40 Hz stimulation. The gamma frequency ASSR deficit has been attributed to N-methyl-D-aspartate receptor (NMDAR) hypofunction. We tested whether the NMDAR antagonist, phencyclidine (PCP), produced similar ASSR deficits in rats. EEG was recorded from awake rats via intracranial electrodes overlaying the auditory cortex and at the vertex of the skull. ASSRs to click trains were recorded at 10, 20, 30, 40, 50, and 55 Hz and measured by ASSR Mean Power (MP) and Phase Locking Factor (PLF). In Experiment 1, the effect of different subcutaneous doses of PCP (1.0, 2.5 and 4.0 mg/kg) on the ASSR in 12 rats was assessed. In Experiment 2, ASSRs were compared in PCP treated rats and control rats at baseline, after acute injection (5 mg/kg), following two weeks of subchronic, continuous administration (5 mg/kg/day), and one week after drug cessation. Acute administration of PCP increased PLF and MP at frequencies of stimulation below 50 Hz, and decreased responses at higher frequencies at the auditory cortex site. Acute administration had a less pronounced effect at the vertex site, with a reduction of either PLF or MP observed at frequencies above 20 Hz. Acute effects increased in magnitude with higher doses of PCP. Consistent effects were not observed after subchronic PCP administration. These data indicate that acute administration of PCP, a NMDAR antagonist, produces an increase in ASSR synchrony and power at low frequencies of stimulation and a reduction of high frequency (> 40 Hz) ASSR activity in rats. Subchronic, continuous administration of PCP, on the other hand, has little impact on ASSRs. Thus, while ASSRs are highly sensitive to NMDAR antagonists, their translational utility as a cross-species biomarker for NMDAR hypofunction in SZ and other disorders may be dependent on dose and schedule. PMID:26258486
Songer, Jocelyn E.; Rosowski, John J.
2006-01-01
A superior semicircular canal dehiscence (SCD) is a break or hole in the bony wall of the superior semicircular canal. Patients with SCD syndrome present with a variety of symptoms: some with vestibular symptoms, others with auditory symptoms (including low-frequency conductive hearing loss) and yet others with both. We are interested in whether or not mechanically altering the superior canal by introducing a dehiscence is sufficient to cause the low-frequency conductive hearing loss associated with SCD syndrome. We evaluated the effect of a surgically introduced dehiscence on auditory responses to air-conducted (AC) stimuli in 11 chinchilla ears. Cochlear potential (CP) was recorded at the round-window before and after a dehiscence was introduced. In each ear, a decrease in CP in response to low frequency (<2 kHz) sound stimuli was observed after the introduction of the dehiscence. The dehiscence was then patched with cyanoacrylate glue leading to a reversal of the dehiscence-induced changes in CP. The reversible decrease in auditory sensitivity observed in chinchilla is consistent with the elevated AC thresholds observed in patients with SCD. According to the ‘third-window’ hypothesis the SCD shunts sound-induced stapes velocity away from the cochlea, resulting in decreased auditory sensitivity to AC sounds. The data collected in this study are consistent with predictions of this hypothesis. PMID:16150562
Kluender, K R; Lotto, A J
1994-02-01
When F1-onset frequency is lower, longer F1 cut-back (VOT) is required for human listeners to perceive synthesized stop consonants as voiceless. K. R. Kluender [J. Acoust. Soc. Am. 90, 83-96 (1991)] found comparable effects of F1-onset frequency on the "labeling" of stop consonants by Japanese quail (coturnix coturnix japonica) trained to distinguish stop consonants varying in F1 cut-back. In that study, CVs were synthesized with natural-like rising F1 transitions, and endpoint training stimuli differed in the onset frequency of F1 because a longer cut-back resulted in a higher F1 onset. In order to assess whether earlier results were due to auditory predispositions or due to animals having learned the natural covariance between F1 cut-back and F1-onset frequency, the present experiment was conducted with synthetic continua having either a relatively low (375 Hz) or high (750 Hz) constant-frequency F1. Six birds were trained to respond differentially to endpoint stimuli from three series of synthesized /CV/s varying in duration of F1 cut-back. Second and third formant transitions were appropriate for labial, alveolar, or velar stops. Despite the fact that there was no opportunity for animal subjects to use experienced covariation of F1-onset frequency and F1 cut-back, quail typically exhibited shorter labeling boundaries (more voiceless stops) for intermediate stimuli of the continua when F1 frequency was higher. Responses by human subjects listening to the same stimuli were also collected. Results lend support to the earlier conclusion that part or all of the effect of F1 onset frequency on perception of voicing may be adequately explained by general auditory processes.(ABSTRACT TRUNCATED AT 250 WORDS)
A New Test of Attention in Listening (TAIL) Predicts Auditory Performance
Zhang, Yu-Xuan; Barry, Johanna G.; Moore, David R.; Amitay, Sygal
2012-01-01
Attention modulates auditory perception, but there are currently no simple tests that specifically quantify this modulation. To fill the gap, we developed a new, easy-to-use test of attention in listening (TAIL) based on reaction time. On each trial, two clearly audible tones were presented sequentially, either at the same or different ears. The frequency of the tones was also either the same or different (by at least two critical bands). When the task required same/different frequency judgments, presentation at the same ear significantly speeded responses and reduced errors. A same/different ear (location) judgment was likewise facilitated by keeping tone frequency constant. Perception was thus influenced by involuntary orienting of attention along the task-irrelevant dimension. When information in the two stimulus dimensions was congruent (same-frequency same-ear, or different-frequency different-ear), response was faster and more accurate than when it was incongruent (same-frequency different-ear, or different-frequency same-ear), suggesting the involvement of executive control to resolve conflicts. In total, the TAIL yielded five independent outcome measures: (1) baseline reaction time, indicating information processing efficiency, (2) involuntary orienting of attention to frequency and (3) location, and (4) conflict resolution for frequency and (5) location. Processing efficiency and conflict resolution accounted for up to 45% of individual variance in the low- and high-threshold variants of three psychoacoustic tasks assessing temporal and spectral processing. Involuntary orienting of attention to the irrelevant dimension did not correlate with perceptual performance on these tasks. Given that TAIL measures are unlikely to be limited by perceptual sensitivity, we suggest that the correlations reflect modulation of perceptual performance by attention.
The TAIL thus has the power to identify and separate contributions of different components of attention to auditory perception. PMID:23300934
Signal Processing in Periodically Forced Gradient Frequency Neural Networks
Kim, Ji Chul; Large, Edward W.
2015-01-01
Oscillatory instability at the Hopf bifurcation is a dynamical phenomenon that has been suggested to characterize active non-linear processes observed in the auditory system. Networks of oscillators poised near Hopf bifurcation points and tuned to tonotopically distributed frequencies have been used as models of auditory processing at various levels, but systematic investigation of the dynamical properties of such oscillatory networks is still lacking. Here we provide a dynamical systems analysis of a canonical model for gradient frequency neural networks driven by a periodic signal. We use linear stability analysis to identify various driven behaviors of canonical oscillators for all possible ranges of model and forcing parameters. The analysis shows that canonical oscillators exhibit qualitatively different sets of driven states and transitions for different regimes of model parameters. We classify the parameter regimes into four main categories based on their distinct signal processing capabilities. This analysis will lead to deeper understanding of the diverse behaviors of neural systems under periodic forcing and can inform the design of oscillatory network models of auditory signal processing. PMID:26733858
Umat, Cila; Mukari, Siti Z; Ezan, Nurul F; Din, Normah C
2011-08-01
To examine the changes in short-term auditory memory following the use of a frequency-modulated (FM) system in children with suspected auditory processing disorders (APDs), and to compare the advantages of bilateral over unilateral FM fitting. This longitudinal study involved 53 children from Sekolah Kebangsaan Jalan Kuantan 2, Kuala Lumpur, Malaysia who fulfilled the inclusion criteria. The study was conducted from September 2007 to October 2008 in the Department of Audiology and Speech Sciences, Faculty of Health Sciences, Universiti Kebangsaan Malaysia, Kuala Lumpur, Malaysia. The children were aged 7-10 years and were assigned to 3 groups: 15 in the control group (not fitted with FM); 19 in the unilateral; and 19 in the bilateral FM-fitting group. Subjects wore the FM system during school time for 12 weeks. Their working memory (WM), best learning (BL), and retention of information (ROI) were measured using the Rey Auditory Verbal Learning Test at pre-fitting, post-fitting (after 12 weeks of FM usage), and at long term (one year after FM system usage ended). There were significant differences in the mean WM (p=0.001), BL (p=0.019), and ROI (p=0.005) scores at the different measurement times, in which the mean scores at long term were consistently higher than at pre-fitting, despite similar performances at baseline (p>0.05). There was no significant difference in performance between the unilateral- and bilateral-fitting groups. The use of FM might give a long-term effect on improving selected short-term auditory memories of some children with suspected APDs, and two FM receivers may not be needed to obtain advantages in auditory memory performance.
Neural coding of high-frequency tones
NASA Technical Reports Server (NTRS)
Howes, W. L.
1976-01-01
Available evidence was presented indicating that neural discharges in the auditory nerve display characteristic periodicities in response to any tonal stimulus including high-frequency stimuli, and that this periodicity corresponds to the subjective pitch.
Saturation of subjective reward magnitude as a function of current and pulse frequency.
Simmons, J M; Gallistel, C R
1994-02-01
In rats with electrodes in the medial forebrain bundle, the upper portion of the function relating the experienced magnitude of the reward to pulse frequency was determined at currents ranging from 100 to 1,000 microA. The pulse frequency required to produce an asymptotic level of reward was inversely proportional to current except at the lowest currents and highest pulse frequencies. At a given current, the subjective reward magnitude functions decelerated to an asymptote over an interval in which the pulse frequency doubled or tripled. The asymptotic level of reward was approximately constant for currents between 200 and 1,000 microA but declined substantially at currents at or below 100 microA and pulse frequencies at or above 250 to 400 pulses per second. The results are consistent with the hypothesis that the magnitude of the experienced reward depends only on the number of action potentials generated by the train of pulses in the bundle of reward-relevant axons.
The electrical properties of auditory hair cells in the frog amphibian papilla.
Smotherman, M S; Narins, P M
1999-07-01
The amphibian papilla (AP) is the principal auditory organ of the frog. Anatomical and neurophysiological evidence suggests that this hearing organ utilizes both mechanical and electrical (hair cell-based) frequency tuning mechanisms, yet relatively little is known about the electrophysiology of AP hair cells. Using the whole-cell patch-clamp technique, we have investigated the electrical properties and ionic currents of isolated hair cells along the rostrocaudal axis of the AP. Electrical resonances were observed in the voltage response of hair cells harvested from the rostral and medial, but not caudal, regions of the AP. Two ionic currents, ICa and IK(Ca), were observed in every hair cell; however, their amplitudes varied substantially along the epithelium. Only rostral hair cells exhibited an inactivating potassium current (IA), whereas an inwardly rectifying potassium current (IK1) was identified only in caudal AP hair cells. Electrically tuned hair cells exhibited resonant frequencies from 50 to 375 Hz, which correlated well with hair cell position and the tonotopic organization of the papilla. Variations in the kinetics of the outward current contribute substantially to the determination of resonant frequency. ICa and IK(Ca) amplitudes increased with resonant frequency, reducing the membrane time constant with increasing resonant frequency. We conclude that a tonotopically organized hair cell substrate exists to support electrical tuning in the rostromedial region of the frog amphibian papilla and that the cellular mechanisms for frequency determination are very similar to those reported for another electrically tuned auditory organ, the turtle basilar papilla.
Processing of band-passed noise in the lateral auditory belt cortex of the rhesus monkey.
Rauschecker, Josef P; Tian, Biao
2004-06-01
Neurons in the lateral belt areas of rhesus monkey auditory cortex were stimulated with band-passed noise (BPN) bursts of different bandwidths and center frequencies. Most neurons responded much more vigorously to these sounds than to tone bursts of a single frequency, and it thus became possible to elicit a clear response in 85% of lateral belt neurons. Tuning to center frequency and bandwidth of the BPN bursts was analyzed. Best center frequency varied along the rostrocaudal direction, with 2 reversals defining borders between areas. We confirmed the existence of 2 belt areas (AL and ML) that were laterally adjacent to the core areas (R and A1, respectively) and a third area (CL) adjacent to area CM on the supratemporal plane (STP). All 3 lateral belt areas were cochleotopically organized with their frequency gradients collinear to those of the adjacent STP areas. Although A1 neurons responded best to pure tones and their responses decreased with increasing bandwidth, 63% of the lateral belt neurons were tuned to bandwidths between 1/3 and 2 octaves and showed either one or multiple peaks. The results are compared with previous data from visual cortex and are discussed in the context of spectral integration, whereby the lateral belt forms a relatively early stage of processing in the cortical hierarchy, giving rise to parallel streams for the identification of auditory objects and their localization in space.
Frequency locking in auditory hair cells: Distinguishing between additive and parametric forcing
NASA Astrophysics Data System (ADS)
Edri, Yuval; Bozovic, Dolores; Yochelis, Arik
2016-10-01
The auditory system displays remarkable sensitivity and frequency discrimination, attributes shown to rely on an amplification process that involves a mechanical as well as a biochemical response. Models that display proximity to an oscillatory onset (also known as Hopf bifurcation) exhibit a resonant response to distinct frequencies of incoming sound, and can explain many features of the amplification phenomenology. To understand the dynamics of this resonance, frequency locking is examined in a system near the Hopf bifurcation and subject to two types of driving forces: additive and parametric. Derivation of a universal amplitude equation that contains both forcing terms enables a study of their relative impact on the hair cell response. In the parametric case, although the resonant solutions are 1 : 1 frequency locked, they show the coexistence of solutions obeying a phase shift of π, a feature typical of the 2 : 1 resonance. Different characteristics are predicted for the transition from unlocked to locked solutions, leading to smooth or abrupt dynamics in response to different types of forcing. The theoretical framework provides a more realistic model of the auditory system, which incorporates a direct modulation of the internal control parameter by an applied drive. The results presented here can be generalized to many other media, including Faraday waves, chemical reactions, and elastically driven cardiomyocytes, which are known to exhibit resonant behavior.
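The two forcing types contrasted above can be made concrete with a minimal numerical sketch built on the generic Hopf normal form. This is not the amplitude equation derived in the paper, and all parameter values are illustrative: additive forcing enters as a constant drive term, parametric forcing as a drive multiplying the state.

```python
import numpy as np

def forced_hopf_response(mu, w0, wf, F, parametric=False, dt=1e-3, T=200.0):
    """Steady-state response amplitude of a driven Hopf normal form.

    Additive:    dz/dt = (mu + i*w0) z - |z|^2 z + F exp(i*wf*t)
    Parametric:  dz/dt = (mu + i*w0) z - |z|^2 z + F exp(i*wf*t) conj(z)
    (Illustrative sketch, not the amplitude equation from the paper.)
    """
    z = 0.01 + 0j                        # small initial perturbation
    for k in range(int(T / dt)):
        drive = F * np.exp(1j * wf * k * dt)
        if parametric:
            drive *= np.conj(z)          # drive modulates the state itself
        z += dt * ((mu + 1j * w0) * z - abs(z) ** 2 * z + drive)
    return abs(z)
```

Just below the oscillatory onset (mu < 0), a weak additive drive at the natural frequency produces the characteristic resonant amplification relative to a detuned drive, while the same weak drive applied parametrically cannot destabilize the quiescent state.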
Psychoacoustic and cognitive aspects of auditory roughness: definitions, models, and applications
NASA Astrophysics Data System (ADS)
Vassilakis, Pantelis N.; Kendall, Roger A.
2010-02-01
The term "auditory roughness" was first introduced in the 19th century to describe the buzzing, rattling auditory sensation accompanying narrow harmonic intervals (i.e., two tones with a frequency difference in the range of ~15-150 Hz, presented simultaneously). A broader definition and an overview of the psychoacoustic correlates of the auditory roughness sensation, also referred to as sensory dissonance, are followed by an examination of efforts to quantify it over the past one hundred and fifty years, leading to the introduction of a new roughness calculation model and an application that automates spectral and roughness analysis of sound signals. Implementation of spectral and roughness analysis is briefly discussed in the context of two pilot perceptual experiments designed to assess the relationship among cultural background, music performance practice, and aesthetic attitudes toward the auditory roughness sensation.
Auditory beat stimulation and its effects on cognition and mood states.
Chaieb, Leila; Wilpert, Elke Caroline; Reber, Thomas P; Fell, Juergen
2015-01-01
Auditory beat stimulation may be a promising new tool for the manipulation of cognitive processes and the modulation of mood states. Here, we aim to review the literature examining the most current applications of auditory beat stimulation (ABS) and its targets. We give a brief overview of research on auditory steady-state responses and their relationship to ABS. We summarize relevant studies investigating the neurophysiological changes related to ABS and how they impact upon the design of appropriate stimulation protocols. Focusing on binaural-beat stimulation, we then discuss the role of monaural- and binaural-beat frequencies in cognition and mood states, in addition to their efficacy in targeting disease symptoms. We aim to highlight important points concerning stimulation parameters and try to address why there are often contradictory findings with regard to the outcomes of ABS.
Ferreira, G; Meurisse, M; Tillet, Y; Lévy, F
2001-01-01
The basal forebrain cholinergic system is involved in different forms of memory. To study its role in social memory in sheep, an immunotoxin, ME20.4 immunoglobulin G (IgG)-saporin, was developed that is specific to basal forebrain cholinergic neurons bearing the p75 neurotrophin receptor. The distribution of sheep cholinergic neurons was mapped with an antibody against choline acetyltransferase. To assess the localization of the p75 receptor on basal forebrain cholinergic neurons, the distribution of p75 receptor-immunoreactive neurons with ME20.4 IgG was examined, and a double-labeling study with antibodies against choline acetyltransferase and p75 receptor was undertaken. The loss of basal forebrain cholinergic neurons and acetylcholinesterase fibers in basal forebrain projection areas was assessed in ewes that had received intracerebroventricular injections of the immunotoxin (50, 100 or 150 µg) alone and, in some of the ewes treated with the highest dose, combined with bilateral immunotoxin injections into the nucleus basalis (11 µg/side). Results indicated that choline acetyltransferase- and p75 receptor-immunoreactive cells had similar distributions in the medial septum, the vertical and horizontal limbs of the band of Broca, and the nucleus basalis. The double-labeling procedure revealed that 100% of the cholinergic neurons were also p75 receptor positive in the medial septum and in the vertical and horizontal limbs of the band of Broca, and 82% in the nucleus basalis. Moreover, 100% of the p75 receptor-immunoreactive cells of these four nuclei were cholinergic. Combined immunotoxin injections into the ventricles and the nucleus basalis produced a near-complete loss (80-95%) of basal forebrain cholinergic neurons and acetylcholinesterase-positive fibers in the hippocampus, olfactory bulb and entorhinal cortex. This study provides the first anatomical data concerning the basal forebrain cholinergic system in ungulates.
The availability of a selective cholinergic immunotoxin effective in sheep provides a new tool to probe the involvement of basal forebrain cholinergic neurons in cognitive processes in this species.
Idealized Computational Models for Auditory Receptive Fields
Lindeberg, Tony; Friberg, Anders
2015-01-01
We present a theory by which idealized models of auditory receptive fields can be derived in a principled axiomatic manner, from a set of structural properties to (i) enable invariance of receptive field responses under natural sound transformations and (ii) ensure internal consistency between spectro-temporal receptive fields at different temporal and spectral scales. For defining a time-frequency transformation of a purely temporal sound signal, it is shown that the framework allows for a new way of deriving the Gabor and Gammatone filters as well as a novel family of generalized Gammatone filters, with additional degrees of freedom to obtain different trade-offs between the spectral selectivity and the temporal delay of time-causal temporal window functions. When applied to the definition of a second layer of receptive fields from a spectrogram, it is shown that the framework leads to two canonical families of spectro-temporal receptive fields, in terms of spectro-temporal derivatives of either spectro-temporal Gaussian kernels for non-causal time or a cascade of time-causal first-order integrators over the temporal domain and a Gaussian filter over the log-spectral domain. For each filter family, the spectro-temporal receptive fields can be either separable over the time-frequency domain or be adapted to local glissando transformations that represent variations in logarithmic frequencies over time. Within each domain of either non-causal or time-causal time, these receptive field families are derived by uniqueness from the assumptions. It is demonstrated how the presented framework allows for computation of basic auditory features for audio processing and that it leads to predictions about auditory receptive fields with good qualitative similarity to biological receptive fields measured in the inferior colliculus (ICC) and primary auditory cortex (A1) of mammals. PMID:25822973
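For concreteness, the classical Gammatone filter that the framework re-derives has the familiar impulse response sketched below. The ERB bandwidth formula is the standard Glasberg & Moore parameterization, and the sample rate and duration are arbitrary choices, not values from the paper.

```python
import numpy as np

def gammatone_ir(fc, fs=16000, order=4, duration=0.05):
    """Impulse response of a classical Gammatone filter:

        g(t) = t^(n-1) exp(-2*pi*b*t) cos(2*pi*fc*t)

    with the bandwidth b tied to the equivalent rectangular bandwidth
    (ERB) scale of Glasberg & Moore:
        ERB(fc) = 24.7 * (4.37 * fc / 1000 + 1),  b = 1.019 * ERB(fc).
    """
    t = np.arange(int(duration * fs)) / fs
    erb = 24.7 * (4.37 * fc / 1000.0 + 1.0)
    b = 1.019 * erb
    g = t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)
    return g / np.max(np.abs(g))        # normalize peak amplitude
```

The magnitude spectrum of such a filter peaks at its center frequency, which is easy to verify numerically with an FFT of the impulse response.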
Mismatch negativity to acoustical illusion of beat: how and where the change detection takes place?
Chakalov, Ivan; Paraskevopoulos, Evangelos; Wollbrink, Andreas; Pantev, Christo
2014-10-15
When two tones with slightly different frequencies are presented binaurally, the brainstem structures can no longer follow the interaural time differences (ITDs), resulting in the illusory perception of a beat corresponding to the frequency difference between the two prime tones. Hence, the beat frequency does not exist in the prime tones presented to either ear. This study used binaural beats to explore the nature of acoustic deviance detection in humans by means of magnetoencephalography (MEG). Recent research suggests that auditory change detection is a multistage process. To test this, we employed 26 Hz binaural beats in a classical oddball paradigm. However, the prime tones (250 Hz and 276 Hz) were switched between the ears in the case of the deviant beat. Consequently, when the deviant is presented, the cochleae and auditory nerves receive a "new afferent", although the standards and the deviants sound identical (26 Hz beats). This allowed us to explore the contribution of the auditory periphery to the change detection process and, furthermore, to evaluate its influence on beat-related auditory steady-state responses (ASSRs). LORETA source current density estimates of the evoked fields in a typical mismatch negativity (MMN) time window and the subsequent difference-ASSRs were determined and compared. The results revealed an MMN generated by a complex neural network including the right parietal lobe and the left middle frontal gyrus. Furthermore, the difference-ASSR was generated in the paracentral gyrus. Additionally, psychophysical measures showed no perceptual difference between the standard and deviant beats when isolated by noise. These results suggest that the auditory periphery makes an important contribution to novelty detection already at the subcortical level. Overall, the present findings support the notion of a hierarchically organized acoustic novelty detection system.
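The stimulus arithmetic here is easy to reproduce. The sketch below builds the two prime tones from the study and checks that only an acoustic (monaural) mixture contains a physical difference-frequency component; the sample rate is an arbitrary choice.

```python
import numpy as np

fs = 8000                                 # sample rate in Hz (illustrative)
f_left, f_right = 250.0, 276.0            # prime tones from the study
beat = abs(f_right - f_left)              # 26 Hz illusory beat frequency
t = np.arange(fs) / fs                    # 1 s of samples -> 1 Hz FFT bins
left = np.sin(2 * np.pi * f_left * t)     # tone delivered to one ear
right = np.sin(2 * np.pi * f_right * t)   # tone delivered to the other ear

# Mixed acoustically (a monaural beat), the squared mixture carries an
# explicit 26 Hz component; presented dichotically, neither ear receives
# it, so the 26 Hz percept must arise centrally.
mix = left + right
spectrum = np.abs(np.fft.rfft(mix ** 2))
```

With 1 s of signal the FFT bins fall on integer frequencies, so the difference-frequency component appears exactly at bin 26 while nearby bins stay empty.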
Selective Neuronal Activation by Cochlear Implant Stimulation in Auditory Cortex of Awake Primate
Johnson, Luke A.; Della Santina, Charles C.
2016-01-01
Despite the success of cochlear implants (CIs) in human populations, most users perform poorly in noisy environments and in music and tonal language perception. How CI devices engage the brain at the single neuron level has remained largely unknown, in particular in the primate brain. By comparing neuronal responses with acoustic and CI stimulation in marmoset monkeys unilaterally implanted with a CI electrode array, we discovered that CI stimulation was surprisingly ineffective at activating many neurons in auditory cortex, particularly in the hemisphere ipsilateral to the CI. Further analyses revealed that the CI-nonresponsive neurons were narrowly tuned to frequency and sound level when probed with acoustic stimuli; such neurons likely play a role in perceptual behaviors requiring fine frequency and level discrimination, tasks that CI users find especially challenging. These findings suggest potential deficits in central auditory processing of CI stimulation and provide important insights into factors responsible for poor CI user performance in a wide range of perceptual tasks. SIGNIFICANCE STATEMENT The cochlear implant (CI) is the most successful neural prosthetic device to date and has restored hearing in hundreds of thousands of deaf individuals worldwide. However, despite its huge successes, CI users still face many perceptual limitations, and the brain mechanisms involved in hearing through CI devices remain poorly understood. By directly comparing single-neuron responses to acoustic and CI stimulation in auditory cortex of awake marmoset monkeys, we discovered that neurons unresponsive to CI stimulation were sharply tuned to frequency and sound level. Our results point to a major deficit in central auditory processing of CI stimulation and provide important insights into mechanisms underlying the poor CI user performance in a wide range of perceptual tasks. PMID:27927962
Magnotti, John F; Basu Mallick, Debshila; Feng, Guo; Zhou, Bin; Zhou, Wen; Beauchamp, Michael S
2015-09-01
Humans combine visual information from mouth movements with auditory information from the voice to recognize speech. A common method for assessing multisensory speech perception is the McGurk effect: When presented with particular pairings of incongruent auditory and visual speech syllables (e.g., the auditory speech sounds for "ba" dubbed onto the visual mouth movements for "ga"), individuals perceive a third syllable, distinct from the auditory and visual components. Chinese and American cultures differ in the prevalence of direct facial gaze and in the auditory structure of their languages, raising the possibility of cultural- and language-related group differences in the McGurk effect. There is no consensus in the literature about the existence of these group differences, with some studies reporting less McGurk effect in native Mandarin Chinese speakers than in English speakers and others reporting no difference. However, these studies sampled small numbers of participants tested with a small number of stimuli. Therefore, we collected data on the McGurk effect from large samples of Mandarin-speaking individuals from China and English-speaking individuals from the USA (total n = 307) viewing nine different stimuli. Averaged across participants and stimuli, we found similar frequencies of the McGurk effect between Chinese and American participants (48 vs. 44 %). In both groups, we observed a large range of frequencies both across participants (range from 0 to 100 %) and stimuli (15 to 83 %) with the main effect of culture and language accounting for only 0.3 % of the variance in the data. High individual variability in perception of the McGurk effect necessitates the use of large sample sizes to accurately estimate group differences.
Kolarik, Andrew J; Moore, Brian C J; Zahorik, Pavel; Cirstea, Silvia; Pardhan, Shahina
2016-02-01
Auditory distance perception plays a major role in spatial awareness, enabling location of objects and avoidance of obstacles in the environment. However, it remains under-researched relative to studies of the directional aspect of sound localization. This review focuses on the following four aspects of auditory distance perception: cue processing, development, consequences of visual and auditory loss, and neurological bases. The several auditory distance cues vary in their effective ranges in peripersonal and extrapersonal space. The primary cues are sound level, reverberation, and frequency. Nonperceptual factors, including the importance of the auditory event to the listener, also can affect perceived distance. Basic internal representations of auditory distance emerge at approximately 6 months of age in humans. Although visual information plays an important role in calibrating auditory space, sensorimotor contingencies can be used for calibration when vision is unavailable. Blind individuals often manifest supranormal abilities to judge relative distance but show a deficit in absolute distance judgments. Following hearing loss, the use of auditory level as a distance cue remains robust, while the reverberation cue becomes less effective. Previous studies have not found evidence that hearing-aid processing affects perceived auditory distance. Studies investigating the brain areas involved in processing different acoustic distance cues are described. Finally, suggestions are given for further research on auditory distance perception, including broader investigation of how background noise and multiple sound sources affect perceived auditory distance for those with sensory loss.
Noise Equally Degrades Central Auditory Processing in 2- and 4-Year-Old Children
ERIC Educational Resources Information Center
Niemitalo-Haapola, Elina; Haapala, Sini; Kujala, Teija; Raappana, Antti; Kujala, Tiia; Jansson-Verkasalo, Eira
2017-01-01
Purpose: The aim of this study was to investigate developmental and noise-induced changes in central auditory processing indexed by event-related potentials in typically developing children. Method: P1, N2, and N4 responses as well as mismatch negativities (MMNs) were recorded for standard syllables and consonants, frequency, intensity, vowel, and…
Auditory Temporal Order Discrimination and Backward Recognition Masking in Adults with Dyslexia
ERIC Educational Resources Information Center
Griffiths, Yvonne M.; Hill, Nicholas I.; Bailey, Peter J.; Snowling, Margaret J.
2003-01-01
The ability of 20 adult dyslexic readers to extract frequency information from successive tone pairs was compared with that of IQ-matched controls using temporal order discrimination and auditory backward recognition masking (ABRM) tasks. In both paradigms, the interstimulus interval (ISI) between tones in a pair was either short (20 ms) or long…
Auditory Long Latency Responses to Tonal and Speech Stimuli
ERIC Educational Resources Information Center
Swink, Shannon; Stuart, Andrew
2012-01-01
Purpose: The effects of type of stimuli (i.e., nonspeech vs. speech), speech (i.e., natural vs. synthetic), gender of speaker and listener, speaker (i.e., self vs. other), and frequency alteration in self-produced speech on the late auditory cortical evoked potential were examined. Method: Young adult men (n = 15) and women (n = 15), all with…
Behavioral and Molecular Genetics of Reading-Related AM and FM Detection Thresholds.
Bruni, Matthew; Flax, Judy F; Buyske, Steven; Shindhelm, Amber D; Witton, Caroline; Brzustowicz, Linda M; Bartlett, Christopher W
2017-03-01
Auditory detection thresholds for certain frequencies of both amplitude modulated (AM) and frequency modulated (FM) dynamic auditory stimuli are associated with reading in typically developing and dyslexic readers. We present the first behavioral and molecular genetic characterization of these two auditory traits. Two extant extended family datasets were given reading tasks and psychoacoustic tasks to determine FM 2 Hz and AM 20 Hz sensitivity thresholds. Univariate heritabilities were significant for both AM (h² = 0.20) and FM (h² = 0.29). Bayesian posterior probability of linkage (PPL) analysis found loci for AM (12q, PPL = 81%) and FM (10p, PPL = 32%; 20q, PPL = 65%). Bivariate heritability analyses revealed that FM is genetically correlated with reading, whereas AM is not. Bivariate PPL analysis indicates that FM loci (10p, 20q) are not also associated with reading.
Auditory discrimination therapy (ADT) for tinnitus management.
Herraiz, C; Diges, I; Cobo, P
2007-01-01
Auditory discrimination training (ADT) is a procedure designed to increase the cortical areas responding to trained frequencies (damaged cochlear areas with cortical misrepresentation) and to shrink the neighboring over-represented ones (the tinnitus pitch). In a prospective descriptive study of 27 patients with high-frequency tinnitus, the severity of the tinnitus was measured using a visual analog scale (VAS) and the tinnitus handicap inventory (THI). Patients performed a 10-min auditory discrimination task twice a day for one month. Discontinuous 4 kHz pure tones were mixed randomly with short broadband noise sounds through an MP3 system. After the treatment, mean VAS scores were reduced from 5.2 to 4.5 (p=0.000) and the THI decreased from 26.2% to 21.3% (p=0.000). Forty percent of the patients had improvement in tinnitus perception (RESP). Comparing the ADT group with a control group showed statistically significant improvement of their tinnitus as assessed by RESP, VAS, and THI.
Jia, Jun; Li, Bo; Sun, Zuo-Li; Yu, Fen; Wang, Xuan; Wang, Xiao-Min
2010-04-01
The role of electro-acupuncture (EA) stimulation on motor symptoms in Parkinson's disease (PD) has not been well studied. In a rat hemiparkinsonian model induced by unilateral transection of the medial forebrain bundle (MFB), EA stimulation improved motor impairment in a frequency-dependent manner. Whereas EA stimulation at a low frequency (2 Hz) had no effect, EA stimulation at a high frequency (100 Hz) significantly improved motor coordination. However, neither low nor high EA stimulation could significantly enhance dopamine levels in the striatum. EA stimulation at 100 Hz normalized the MFB lesion-induced increase in midbrain GABA content, but it had no effect on GABA content in the globus pallidus. These results suggest that high-frequency EA stimulation improves motor impairment in MFB-lesioned rats by increasing GABAergic inhibition in the output structure of the basal ganglia.
Representations of Pitch and Timbre Variation in Human Auditory Cortex
2017-01-01
Pitch and timbre are two primary dimensions of auditory perception, but how they are represented in the human brain remains a matter of contention. Some animal studies of auditory cortical processing have suggested modular processing, with different brain regions preferentially coding for pitch or timbre, whereas other studies have suggested a distributed code for different attributes across the same population of neurons. This study tested whether variations in pitch and timbre elicit activity in distinct regions of the human temporal lobes. Listeners were presented with sequences of sounds that varied in either fundamental frequency (eliciting changes in pitch) or spectral centroid (eliciting changes in brightness, an important attribute of timbre), with the degree of pitch or timbre variation in each sequence parametrically manipulated. The BOLD responses from auditory cortex increased with increasing sequence variance along each perceptual dimension. The spatial extent, region, and laterality of the cortical regions most responsive to variations in pitch or timbre at the univariate level of analysis were largely overlapping. However, patterns of activation in response to pitch or timbre variations were discriminable in most subjects at an individual level using multivoxel pattern analysis, suggesting a distributed coding of the two dimensions bilaterally in human auditory cortex. SIGNIFICANCE STATEMENT Pitch and timbre are two crucial aspects of auditory perception. Pitch governs our perception of musical melodies and harmonies, and conveys both prosodic and (in tone languages) lexical information in speech. Brightness—an aspect of timbre or sound quality—allows us to distinguish different musical instruments and speech sounds. Frequency-mapping studies have revealed tonotopic organization in primary auditory cortex, but the use of pure tones or noise bands has precluded the possibility of dissociating pitch from brightness. 
Our results suggest a distributed code, with no clear anatomical distinctions between auditory cortical regions responsive to changes in either pitch or timbre, but also reveal a population code that can differentiate between changes in either dimension within the same cortical regions. PMID:28025255
Killer whale (Orcinus orca) hearing: auditory brainstem response and behavioral audiograms.
Szymanski, M D; Bain, D E; Kiehl, K; Pennington, S; Wong, S; Henry, K R
1999-08-01
Killer whale (Orcinus orca) audiograms were measured using behavioral responses and auditory evoked potentials (AEPs) from two trained adult females. The mean auditory brainstem response (ABR) audiogram to tones between 1 and 100 kHz was 12 dB (re 1 µPa) less sensitive than behavioral audiograms from the same individuals (±8 dB). The ABR and behavioral audiogram curves had shapes that were generally consistent and had the best threshold agreement (5 dB) in the most sensitive range (18-42 kHz), and the least (22 dB) at higher frequencies (60-100 kHz). The most sensitive frequency in the mean Orcinus audiogram was 20 kHz (36 dB), a frequency lower than that of many other odontocetes, but one that matches the peak spectral energy reported for wild killer whale echolocation clicks. A previously reported audiogram of a male Orcinus had greatest sensitivity in this range (15 kHz, approximately 35 dB). Both whales reliably responded to 100-kHz tones (95 dB), and one whale to a 120-kHz tone, a variation from an earlier reported high-frequency limit of 32 kHz for a male Orcinus. Despite smaller-amplitude ABRs than those of smaller delphinids, the results demonstrated that ABR audiometry can provide a useful suprathreshold estimate of hearing range in toothed whales.
Krumm, Bianca; Klump, Georg; Köppl, Christine; Langemann, Ulrike
2017-09-27
We measured the auditory sensitivity of the barn owl (Tyto alba), using a behavioural Go/NoGo paradigm in two different age groups, one younger than 2 years (n = 4) and another more than 13 years of age (n = 3). In addition, we obtained thresholds from one individual aged 23 years, three times during its lifetime. For computing audiograms, we presented test frequencies of between 0.5 and 12 kHz, covering the hearing range of the barn owl. Average thresholds in quiet were below 0 dB sound pressure level (SPL) for frequencies between 1 and 10 kHz. The lowest mean threshold was -12.6 dB SPL at 8 kHz. Thresholds were the highest at 12 kHz, with a mean of 31.7 dB SPL. Test frequency had a significant effect on auditory threshold but age group had no significant effect. There was no significant interaction between age group and test frequency. Repeated threshold estimates over 21 years from a single individual showed only a slight increase in thresholds. We discuss the auditory sensitivity of barn owls with respect to other species and suggest that birds, which generally show a remarkable capacity for regeneration of hair cells in the basilar papilla, are naturally protected from presbycusis.
High-frequency gamma activity (80-150 Hz) is increased in human cortex during selective attention
Ray, Supratim; Niebur, Ernst; Hsiao, Steven S.; Sinai, Alon; Crone, Nathan E.
2008-01-01
Objective: To study the role of gamma oscillations (>30 Hz) in selective attention using subdural electrocorticography (ECoG) in humans. Methods: We recorded ECoG in human subjects implanted with subdural electrodes for epilepsy surgery. Sequences of auditory tones and tactile vibrations of 800 ms duration were presented asynchronously, and subjects were asked to selectively attend to one of the two stimulus modalities in order to detect an amplitude increase at 400 ms in some of the stimuli. Results: Event-related ECoG gamma activity was greater over auditory cortex when subjects attended auditory stimuli and was greater over somatosensory cortex when subjects attended vibrotactile stimuli. Furthermore, gamma activity was also observed over prefrontal cortex when stimuli appeared in either modality, but only when they were attended. Attentional modulation of gamma power began ∼400 ms after stimulus onset, consistent with the temporal demands on attention. The increase in gamma activity was greatest at frequencies between 80 and 150 Hz, in the so-called high gamma frequency range. Conclusions: There appears to be a strong link between activity in the high-gamma range (80-150 Hz) and selective attention. Significance: Selective attention is correlated with increased activity in a frequency range that is significantly higher than what has been reported previously using EEG recordings. PMID:18037343
Hofmann, H; Braun, K
1995-05-26
The persistence of morphological features of neurons in slice cultures of the imprinting-relevant forebrain area MNH (mediorostral neostriatum and hyperstriatum ventrale) of the domestic chick was analysed at 7, 14, 21 and 28 days in vitro. After having been explanted and kept in culture the neurons in vitro have larger soma areas, longer and more extensively branched dendritic trees and lower spine frequencies compared to the neurons in vivo. During the analyzed culturing period, the parameters soma area, total and mean dendritic length, number of dendrites, number of dendritic nodes per dendrite and per neuron as well as the spine densities in different dendritic segments showed no significant differences between early and late periods. Highly correlated in every age group were the total dendritic length and the number of dendritic nodes per neuron, indicating regular ramification during dendritic growth. Since these morphological parameters remain stable during the first 4 weeks in vitro, this culture system may provide a suitable model to investigate experimentally induced morphological changes.
Bidet-Caulet, Aurélie; Fischer, Catherine; Besle, Julien; Aguera, Pierre-Emmanuel; Giard, Marie-Helene; Bertrand, Olivier
2007-08-29
In noisy environments, we use auditory selective attention to actively ignore distracting sounds and select relevant information, as during a cocktail party when following one particular conversation. The present electrophysiological study aims to decipher the spatiotemporal organization of the effect of selective attention on the representation of concurrent sounds in the human auditory cortex. Sound onset asynchrony was manipulated to induce the segregation of two concurrent auditory streams. Each stream consisted of amplitude modulated tones at different carrier and modulation frequencies. Electrophysiological recordings were performed in epileptic patients with pharmacologically resistant partial epilepsy, implanted with depth electrodes in the temporal cortex. Patients were presented with the stimuli while they either performed an auditory distracting task or actively selected one of the two concurrent streams. Selective attention was found to affect steady-state responses in the primary auditory cortex, and transient and sustained evoked responses in secondary auditory areas. The results provide new insights into the neural mechanisms of auditory selective attention: stream selection during sound rivalry would be facilitated not only by enhancing the neural representation of relevant sounds, but also by reducing the representation of irrelevant information in the auditory cortex. Finally, they suggest a specialization of the left hemisphere in the attentional selection of fine-grained acoustic information.
Auditory priming improves neural synchronization in auditory-motor entrainment.
Crasta, Jewel E; Thaut, Michael H; Anderson, Charles W; Davies, Patricia L; Gavin, William J
2018-05-22
Neurophysiological research has shown that auditory and motor systems interact during movement to rhythmic auditory stimuli through a process called entrainment. This study explores the neural oscillations underlying auditory-motor entrainment using electroencephalography. Forty young adults were randomly assigned to one of two control conditions, an auditory-only condition or a motor-only condition, prior to a rhythmic auditory-motor synchronization condition (referred to as the combined condition). Participants assigned to the auditory-only condition (auditory-first group) listened to 400 trials of auditory stimuli presented every 800 ms, while those in the motor-only condition (motor-first group) were asked to tap rhythmically every 800 ms without any external stimuli. Following their control condition, all participants completed an auditory-motor combined condition that required tapping along with auditory stimuli every 800 ms. As expected, the neural processes for the combined condition for each group were different compared to their respective control condition. Time-frequency analysis of total power at an electrode site on the left central scalp (C3) indicated that the neural oscillations elicited by auditory stimuli, especially in the beta and gamma range, drove the auditory-motor entrainment. For the combined condition, the auditory-first group had significantly lower evoked power for a region of interest representing sensorimotor processing (4-20 Hz) and less total power in a region associated with anticipation and predictive timing (13-16 Hz) than the motor-first group. Thus, the auditory-only condition served as a priming facilitator of the neural processes in the combined condition, more so than the motor-only condition. Results suggest that even brief periods of rhythmic training of the auditory system lead to neural efficiency, facilitating the motor system during the process of entrainment.
These findings have implications for interventions using rhythmic auditory stimulation.
Chen, Kevin; Cases, Olivier; Rebrin, Igor; Wu, Weihua; Gallaher, Timothy K; Seif, Isabelle; Shih, Jean Chen
2007-01-05
Previous studies have established that abrogation of monoamine oxidase (MAO) A expression leads to a specific neurochemical, morphological, and behavioral phenotype with increased levels of serotonin (5-HT), norepinephrine, and dopamine, loss of barrel field structure in mouse somatosensory cortex, and an association with increased aggression in adults. Forebrain-specific MAO A transgenic mice were generated from MAO A knock-out (KO) mice by using the promoter of calcium/calmodulin-dependent protein kinase IIalpha (CaMKIIalpha). The presence of the human MAO A transgene and its expression were verified by PCR of genomic DNA, reverse transcription-PCR of mRNA, and Western blot, respectively. Significant MAO A catalytic activity, autoradiographic labeling of 5-HT, and immunocytochemistry of MAO A were found in the frontal cortex, striatum, and hippocampus but not in the cerebellum of the forebrain transgenic mice. Also, compared with MAO A KO mice, lower levels of 5-HT, norepinephrine, and dopamine, and higher levels of the MAO A metabolite 5-hydroxyindoleacetic acid, were found in the forebrain regions but not in the cerebellum of the transgenic mice. These results suggest that MAO A is specifically expressed in the forebrain regions of transgenic mice. This forebrain-specific differential expression resulted in abrogation of the aggressive phenotype. Furthermore, the disorganization of the somatosensory cortex barrel field structure associated with MAO A KO mice was restored and became morphologically similar to wild type. Thus, the lack of MAO A in the forebrain of MAO A KO mice may underlie their phenotypes.
Vocal Responses to Perturbations in Voice Auditory Feedback in Individuals with Parkinson's Disease
Liu, Hanjun; Wang, Emily Q.; Metman, Leo Verhagen; Larson, Charles R.
2012-01-01
Background: One of the most common speech symptoms in individuals with Parkinson's disease (PD) is significantly reduced vocal loudness and pitch range. The present study investigated whether abnormal vocalizations in individuals with PD are related to sensory processing of voice auditory feedback. Perturbations in loudness or pitch of voice auditory feedback are known to elicit short-latency, compensatory responses in voice amplitude or fundamental frequency. Methodology/Principal Findings: Twelve individuals with Parkinson's disease and 13 age- and sex-matched healthy control subjects sustained a vowel sound (/α/) and received unexpected, brief (200 ms) perturbations in the loudness (±3 or 6 dB) or pitch (±100 cents) of voice auditory feedback. Results showed that, while all subjects produced compensatory responses in their voice amplitude or fundamental frequency, individuals with PD exhibited larger response magnitudes than the control subjects. Furthermore, for loudness-shifted feedback, upward stimuli resulted in shorter response latencies than downward stimuli in the control subjects but not in individuals with PD. Conclusions/Significance: The larger response magnitudes in individuals with PD compared with the control subjects suggest that processing of voice auditory feedback is abnormal in PD. Although the precise mechanisms of voice feedback processing are unknown, the results of this study suggest that abnormal voice control in individuals with PD may be related to dysfunctional mechanisms of error detection or correction in sensory feedback processing.
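The ±100-cent pitch perturbations described above are defined on a logarithmic frequency scale: a shift in cents corresponds to a multiplicative frequency ratio (100 cents = 1 semitone, 1200 cents = 1 octave). A minimal sketch of that standard conversion, not code from the study:

```python
def cents_to_ratio(cents: float) -> float:
    """Frequency ratio corresponding to a pitch shift in cents
    (100 cents = 1 semitone; 1200 cents = 1 octave = ratio 2)."""
    return 2.0 ** (cents / 1200.0)

def shift_f0(f0_hz: float, cents: float) -> float:
    """Apply a pitch shift of `cents` to a fundamental frequency in Hz."""
    return f0_hz * cents_to_ratio(cents)
```

For example, a +100-cent perturbation of a 200 Hz voice fundamental moves it up by one semitone, a factor of about 1.0595.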
Memory and learning with rapid audiovisual sequences
Keller, Arielle S.; Sekuler, Robert
2015-01-01
We examined short-term memory for sequences of visual stimuli embedded in varying multisensory contexts. In two experiments, subjects judged the structure of the visual sequences while disregarding concurrent, but task-irrelevant auditory sequences. Stimuli were eight-item sequences in which varying luminances and frequencies were presented concurrently and rapidly (at 8 Hz). Subjects judged whether the final four items in a visual sequence identically replicated the first four items. Luminances and frequencies in each sequence were either perceptually correlated (Congruent) or were unrelated to one another (Incongruent). Experiment 1 showed that, despite encouragement to ignore the auditory stream, subjects' categorization of visual sequences was strongly influenced by the accompanying auditory sequences. Moreover, this influence tracked the similarity between a stimulus's separate audio and visual sequences, demonstrating that task-irrelevant auditory sequences underwent a considerable degree of processing. Using a variant of Hebb's repetition design, Experiment 2 compared musically trained subjects and subjects who had little or no musical training on the same task as used in Experiment 1. Test sequences included some that intermittently and randomly recurred, which produced better performance than sequences that were generated anew for each trial. The auditory component of a recurring audiovisual sequence influenced musically trained subjects more than it did other subjects. This result demonstrates that stimulus-selective, task-irrelevant learning of sequences can occur even when such learning is an incidental by-product of the task being performed.
Noise-induced hearing loss alters the temporal dynamics of auditory-nerve responses
Scheidt, Ryan E.; Kale, Sushrut; Heinz, Michael G.
2010-01-01
Auditory-nerve fibers demonstrate dynamic response properties in that they adapt to rapid changes in sound level, both at the onset and offset of a sound. These dynamic response properties affect temporal coding of stimulus modulations that are perceptually relevant for many sounds such as speech and music. Temporal dynamics have been well characterized in auditory-nerve fibers from normal-hearing animals, but little is known about the effects of sensorineural hearing loss on these dynamics. This study examined the effects of noise-induced hearing loss on the temporal dynamics in auditory-nerve fiber responses from anesthetized chinchillas. Post-stimulus time histograms were computed from responses to 50-ms tones presented at characteristic frequency and 30 dB above fiber threshold. Several response metrics related to temporal dynamics were computed from post-stimulus-time histograms and were compared between normal-hearing and noise-exposed animals. Results indicate that noise-exposed auditory-nerve fibers show significantly reduced response latency, increased onset response and percent adaptation, faster adaptation after onset, and slower recovery after offset. The decrease in response latency only occurred in noise-exposed fibers with significantly reduced frequency selectivity. These changes in temporal dynamics have important implications for temporal envelope coding in hearing-impaired ears, as well as for the design of dynamic compression algorithms for hearing aids.
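The response metrics above are all derived from post-stimulus time histograms. As a minimal sketch of how a PSTH is conventionally computed from per-trial spike times (function name and binning parameters are illustrative, not taken from the study):

```python
import numpy as np

def psth(spike_times_by_trial, t_start=0.0, t_stop=0.05, bin_width=0.001):
    """Post-stimulus time histogram: trial-averaged firing rate
    (spikes/s) per time bin. All times are in seconds."""
    n_bins = int(round((t_stop - t_start) / bin_width))
    edges = t_start + bin_width * np.arange(n_bins + 1)
    counts = np.zeros(n_bins)
    for spikes in spike_times_by_trial:
        hist, _ = np.histogram(spikes, bins=edges)
        counts += hist
    # divide summed counts by (trials x bin width) to get spikes/s
    rate = counts / (len(spike_times_by_trial) * bin_width)
    return edges[:-1], rate
```

Onset response, adaptation, and recovery metrics can then be read off the resulting rate profile.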
Kokinous, Jenny; Tavano, Alessandro; Kotz, Sonja A; Schröger, Erich
2017-02-01
The role of spatial frequencies (SF) is highly debated in emotion perception, but previous work suggests the importance of low SFs for detecting emotion in faces. Furthermore, emotion perception essentially relies on the rapid integration of multimodal information from faces and voices. We used EEG to test the functional relevance of SFs in the integration of emotional and non-emotional audiovisual stimuli. While viewing dynamic face-voice pairs, participants were asked to identify auditory interjections, and the electroencephalogram (EEG) was recorded. Audiovisual integration was measured as auditory facilitation, indexed by the extent of the auditory N1 amplitude suppression in the audiovisual compared to an auditory-only condition. We found an interaction of SF filtering and emotion in the auditory response suppression. For neutral faces, larger N1 suppression ensued in the unfiltered and high SF conditions as compared to the low SF condition. Angry face perception led to a larger N1 suppression in the low SF condition. While the results for the neutral faces indicate that perceptual quality in terms of SF content plays a major role in audiovisual integration, the results for angry faces suggest that early multisensory integration of emotional information favors low SF neural processing pathways, overruling the predictive value of the visual signal per se.
De Cosmo, G; Aceto, P; Clemente, A; Congedo, E
2004-05-01
Auditory evoked potentials (AEPs) are an electrical manifestation of the brain's response to an auditory stimulus. Mid-latency auditory evoked potentials (MLAEPs) and the coherent frequency of the AEP are the most promising measures for monitoring depth of anaesthesia. MLAEPs show graded changes with increasing anaesthetic concentration over the clinical concentration range: the latencies of Pa and Nb lengthen and their amplitudes decrease. These waveform changes are similar for both inhaled and intravenous anaesthetics. Changes in the latency of the Pa and Nb waves are highly correlated with the transition from wakefulness to loss of consciousness. MLAEP recording may also provide information about cerebral processing of auditory input, probably because it reflects activity in the temporal lobe/primary auditory cortex, sites involved in sound processing and in a complex mechanism of implicit (non-declarative) memory. The coherent frequency has been found to be disrupted by anaesthetics and to be implicated in attentional mechanisms. These results support the concept that AEPs reflect the balance between the arousal effects of surgical stimulation and the depressant effects of anaesthetics. However, AEPs are not a perfect measure of anaesthetic depth. They cannot predict patient movement during surgery, and the signal may be affected by muscle artefacts, diathermy and other electrical interference in the operating theatre. In conclusion, once the reliability of AEP recording is established and signal acquisition is improved, it is likely to become a routine feature of clinical anaesthetic practice.
Utilising reinforcement learning to develop strategies for driving auditory neural implants.
Lee, Geoffrey W; Zambetta, Fabio; Li, Xiaodong; Paolini, Antonio G
2016-08-01
In this paper we propose a novel application of reinforcement learning to the area of auditory neural stimulation. We aim to develop a simulation environment based on real neurological responses to auditory and electrical stimulation in the cochlear nucleus (CN) and inferior colliculus (IC) of an animal model. Using this simulator, we implement closed-loop reinforcement learning algorithms to determine which methods are most effective at learning acoustic neural stimulation strategies. By recording a comprehensive set of acoustic frequency presentations and neural responses from a set of animals, we created a large database of neural responses to acoustic stimulation. Extensive electrical stimulation in the CN and recording of neural responses in the IC provide a mapping of how the auditory system responds to electrical stimuli. The combined dataset is the foundation for the simulator, which is used to implement and test learning algorithms. Reinforcement learning, utilising a modified n-Armed Bandit solution, is implemented to demonstrate the model's function. We show the ability to effectively learn stimulation patterns which mimic the cochlea's ability to convert acoustic frequencies to neural activity. Learning effective replication using neural stimulation takes less than 20 min under continuous testing. These results show the utility of reinforcement learning in the field of neural stimulation. They can be coupled with existing sound processing technologies to develop new auditory prosthetics that adapt to the recipient's current auditory pathway. The same process can theoretically be abstracted to other sensory and motor systems to develop similar electrical replication of neural signals.
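The paper's modified n-Armed Bandit is not specified here, but the generic scheme such an approach builds on can be sketched: epsilon-greedy action selection over candidate stimulation patterns, with incremental sample-average value estimates. `reward_fn` is a hypothetical stand-in for the simulator's feedback signal (e.g. similarity between evoked and target IC responses):

```python
import random

def epsilon_greedy_bandit(reward_fn, n_arms, n_steps=500, epsilon=0.1, seed=0):
    """Generic n-Armed Bandit: each arm is a candidate stimulation
    pattern; reward_fn(arm) returns the environment's feedback."""
    rng = random.Random(seed)
    q = [0.0] * n_arms  # estimated value of each arm
    n = [0] * n_arms    # number of times each arm was tried
    for _ in range(n_steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)            # explore
        else:
            arm = max(range(n_arms), key=lambda a: q[a])  # exploit
        r = reward_fn(arm)
        n[arm] += 1
        q[arm] += (r - q[arm]) / n[arm]  # incremental sample-average update
    return q, n
```

In the paper's setting, the closed loop would replace `reward_fn` with a comparison of simulated IC responses against the acoustic-stimulation database.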
Cai, Shanqing; Beal, Deryk S.; Ghosh, Satrajit S.; Tiede, Mark K.; Guenther, Frank H.; Perkell, Joseph S.
2012-01-01
Previous empirical observations have led researchers to propose that auditory feedback (the auditory perception of self-produced sounds when speaking) functions abnormally in the speech motor systems of persons who stutter (PWS). Researchers have theorized that an important neural basis of stuttering is the aberrant integration of auditory information into incipient speech motor commands. Because of the circumstantial support for these hypotheses and the differences and contradictions between them, there is a need for carefully designed experiments that directly examine auditory-motor integration during speech production in PWS. In the current study, we used real-time manipulation of auditory feedback to directly investigate whether the speech motor system of PWS utilizes auditory feedback abnormally during articulation and to characterize potential deficits of this auditory-motor integration. Twenty-one PWS and 18 fluent control participants were recruited. Using a short-latency formant-perturbation system, we examined participants' compensatory responses to unanticipated perturbation of auditory feedback of the first formant frequency during the production of the monophthong [ε]. The PWS showed compensatory responses that were qualitatively similar to the controls' and had close-to-normal latencies (∼150 ms), but the magnitudes of their responses were substantially and significantly smaller than those of the control participants (by 47% on average, p<0.05). Measurements of auditory acuity indicate that the weaker-than-normal compensatory responses in PWS were not attributable to a deficit in low-level auditory processing. These findings are consistent with the hypothesis that stuttering is associated with functional defects in the inverse models responsible for the transformation from the domain of auditory targets and auditory error information into the domain of speech motor commands.
Estimating subglottal pressure via airflow interruption with auditory masking.
Hoffman, Matthew R; Jiang, Jack J
2009-11-01
Current noninvasive measurement of subglottal pressure using airflow interruption often produces inconsistent results due to the elicitation of audio-laryngeal reflexes. Auditory feedback could be considered as a means of ensuring measurement accuracy and precision. The purpose of this study was to determine if auditory masking could be used with the airflow interruption system to improve intrasubject consistency. A prerecorded sample of subject phonation was played on a loop over headphones during the trials with auditory masking. This provided subjects with a target pitch and blocked out distracting ambient noise created by the airflow interrupter. Subglottal pressure was noninvasively measured using the airflow interruption system. Thirty subjects, divided into two equal groups, performed 10 trials without auditory masking and 10 trials with auditory masking. Group one performed the normal trials first, followed by the trials with auditory masking. Group two performed the auditory masking trials first, followed by the normal trials. Intrasubject consistency was improved by adding auditory masking, resulting in a decrease in average intrasubject standard deviation from 0.93 ± 0.51 to 0.47 ± 0.22 cm H2O (P < 0.001). Auditory masking can be used effectively to combat audio-laryngeal reflexes and aid subjects in maintaining constant glottal configuration and frequency, thereby increasing intrasubject consistency when measuring subglottal pressure. By considering auditory feedback, a more reliable method of measurement was developed. This method could be used by clinicians, as reliable, immediately available values of subglottal pressure are useful in evaluating laryngeal health and monitoring treatment progress.
Involvement of the human midbrain and thalamus in auditory deviance detection.
Cacciaglia, Raffaele; Escera, Carles; Slabu, Lavinia; Grimm, Sabine; Sanjuán, Ana; Ventura-Campos, Noelia; Ávila, César
2015-02-01
Prompt detection of unexpected changes in the sensory environment is critical for survival. In the auditory domain, the occurrence of a rare stimulus triggers a cascade of neurophysiological events spanning multiple time-scales. Besides the role of the mismatch negativity (MMN), whose cortical generators are located in supratemporal areas, cumulative evidence suggests that violations of auditory regularities can be detected earlier and lower in the auditory hierarchy. Recent human scalp recordings have shown signatures of auditory mismatch responses at shorter latencies than those of the MMN. Moreover, animal single-unit recordings have demonstrated that rare stimulus changes cause a release from stimulus-specific adaptation in neurons of the primary auditory cortex, the medial geniculate body (MGB), and the inferior colliculus (IC). Although these data suggest that change detection is a pervasive property of the auditory system which may reside upstream of cortical sites, direct evidence for the involvement of subcortical stages in the human auditory novelty system is lacking. Using event-related functional magnetic resonance imaging during a frequency oddball paradigm, we here report that auditory deviance detection occurs in the MGB and the IC of healthy human participants. By implementing a random condition controlling for neural refractoriness effects, we show that auditory change detection in these subcortical stations involves the encoding of statistical regularities from the acoustic input. These results provide the first direct evidence of the existence of multiple mismatch detectors nested at different levels along the human ascending auditory pathway.
Abnormal frequency discrimination in children with SLI as indexed by mismatch negativity (MMN).
Rinker, Tanja; Kohls, Gregor; Richter, Cathrin; Maas, Verena; Schulz, Eberhard; Schecker, Michael
2007-02-14
For several decades, the aetiology of specific language impairment (SLI) has been associated with a central auditory processing deficit disrupting the normal language development of affected children. One important aspect of language acquisition is the discrimination of different acoustic features, such as frequency information. Concerning SLI, studies to date that examined frequency discrimination abilities have been contradictory. We hypothesized that an auditory processing deficit in children with SLI depends on the frequency range and the difference between the tones used. Using a passive mismatch negativity (MMN) design, 13 boys with SLI and 13 age- and IQ-matched controls (7-11 years) were tested with two sine tones of different frequency (700 Hz versus 750 Hz). Reversed hemispheric activity between groups indicated abnormal processing in SLI. In a second time window, MMN2 was absent for the children with SLI. It can therefore be assumed that a frequency discrimination deficit in children with SLI becomes particularly apparent for tones below 750 Hz and for a frequency difference of 50 Hz. This finding may have important implications for future research and integration of various research approaches.
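In passive oddball designs like the one above, the MMN is conventionally obtained as the deviant-minus-standard difference wave, quantified as the most negative deflection within a post-stimulus latency window. A minimal sketch of that standard computation (latency window and array contents are illustrative, not from the study):

```python
import numpy as np

def mmn_difference_wave(deviant_erp, standard_erp):
    """MMN difference wave: deviant-minus-standard averaged ERP."""
    return np.asarray(deviant_erp, dtype=float) - np.asarray(standard_erp, dtype=float)

def mmn_peak(diff_wave, times_ms, window=(100.0, 250.0)):
    """Most negative point of the difference wave within a latency
    window (ms); returns (peak latency, peak amplitude)."""
    d = np.asarray(diff_wave, dtype=float)
    t = np.asarray(times_ms, dtype=float)
    mask = (t >= window[0]) & (t <= window[1])
    idx = np.flatnonzero(mask)[np.argmin(d[mask])]
    return t[idx], d[idx]
```

Group comparisons such as those reported above are then made on the peak amplitude and latency per subject and condition.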
Poncelet, L C; Coppens, A G; Meuris, S I; Deltenre, P F
2000-11-01
Objective: To evaluate auditory maturation in puppies. Animals: Ten clinically normal Beagle puppies. Procedure: Puppies were examined repeatedly from days 11 to 36 after birth (8 measurements). Click-evoked brain stem auditory-evoked potentials (BAEP) were obtained in response to rarefaction and condensation click stimuli from 90 dB normal hearing level to wave V threshold, using steps of 10 dB. Responses were added, providing an equivalent to alternate-polarity clicks, and subtracted, providing the rarefaction-condensation differential potential (RCDP). Steps of 5 dB were used to determine the thresholds of RCDP and wave V. The slope of the low-intensity segment of the wave V latency-intensity curve was calculated. The intensity range at which RCDP could not be recorded (ie, the pre-RCDP range) was calculated by subtracting the threshold of wave V from the threshold of RCDP. Results: The slope of the low-intensity segment of the wave V latency-intensity curve evolved with age, changing from (mean ± SD) -90.8 ± 41.6 to -27.8 ± 4.1 µs/dB. Similar results were obtained from days 23 through 36. The pre-RCDP range diminished as puppies became older, decreasing from 40.0 ± 7.5 to 20.5 ± 6.4 dB. Conclusions: Changes in the slope of the latency-intensity curve with age suggest enlargement of the audible range of frequencies toward high frequencies up to the third week after birth. The decrease in the pre-RCDP range may indicate an increase of the audible range of frequencies toward low frequencies. Age-related reference values will assist clinicians in detecting hearing loss in puppies.
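The derived measures above follow directly from the polarity manipulation: adding the averaged rarefaction and condensation responses approximates the alternate-polarity BAEP, subtracting them yields the RCDP, and the pre-RCDP range is the RCDP threshold minus the wave V threshold. A minimal sketch under those definitions (function names are illustrative):

```python
import numpy as np

def click_polarity_components(rarefaction_avg, condensation_avg):
    """Sum of the averaged rarefaction and condensation responses
    approximates the alternate-polarity BAEP; their difference is
    the rarefaction-condensation differential potential (RCDP)."""
    r = np.asarray(rarefaction_avg, dtype=float)
    c = np.asarray(condensation_avg, dtype=float)
    return r + c, r - c

def pre_rcdp_range(rcdp_threshold_db, wave_v_threshold_db):
    """Intensity range (dB) over which wave V is present but RCDP
    cannot yet be recorded."""
    return rcdp_threshold_db - wave_v_threshold_db
```

On the study's group means, for instance, an RCDP threshold 40 dB above the wave V threshold corresponds to the pre-RCDP range reported for the youngest puppies.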
Milner, Rafał; Lewandowska, Monika; Ganc, Małgorzata; Włodarczyk, Elżbieta; Grudzień, Diana; Skarżyński, Henryk
2018-01-01
In this study, we showed an abnormal resting-state quantitative electroencephalogram (QEEG) pattern in children with central auditory processing disorder (CAPD). Twenty-seven children (16 male, 11 female; mean age = 10.7 years) with CAPD and no symptoms of other developmental disorders, as well as 23 age- and sex-matched, typically developing children (TDC, 11 male, 13 female; mean age = 11.8 years), underwent examination of central auditory processes (CAPs) and QEEG evaluation consisting of two randomly presented blocks of "Eyes Open" (EO) or "Eyes Closed" (EC) recordings. Significant correlations between individual frequency band powers and CAP test performance were found. The QEEG studies revealed that, in CAPD relative to TDC, there was no effect of decreased delta absolute power (1.5-4 Hz) in the EO compared to the EC condition. Furthermore, children with CAPD showed increased theta power (4-8 Hz) in the frontal area, a tendency toward elevated theta power in the EO block, and reduced low-frequency beta power (12-15 Hz) in the bilateral occipital and the left temporo-occipital regions for both EO and EC conditions. Decreased middle-frequency beta power (15-18 Hz) in children with CAPD was observed only in the EC block. The findings of the present study suggest that QEEG could be an adequate tool to discriminate children with CAPD from normally developing children. Correlation analysis showed a relationship between the individual EEG resting frequency bands and the CAPs. Increased power of slow waves and decreased power of fast rhythms could indicate abnormal functioning (hypoarousal of the cortex and/or an immaturity) of brain areas not specialized in auditory information processing.
Perez, Veronica B; Woods, Scott W; Roach, Brian J; Ford, Judith M; McGlashan, Thomas H; Srihari, Vinod H; Mathalon, Daniel H
2014-03-15
Only about one third of patients at high risk for psychosis based on current clinical criteria convert to a psychotic disorder within a 2.5-year follow-up period. Targeting clinical high-risk (CHR) individuals for preventive interventions could expose many to unnecessary treatments, underscoring the need to enhance predictive accuracy with nonclinical measures. Candidate measures include event-related potential components with established sensitivity to schizophrenia. Here, we examined the mismatch negativity (MMN) component of the event-related potential elicited automatically by auditory deviance in CHR and early illness schizophrenia (ESZ) patients. We also examined whether MMN predicted subsequent conversion to psychosis in CHR patients. Mismatch negativity to auditory deviants (duration, frequency, and duration + frequency double deviant) was assessed in 44 healthy control subjects, 19 ESZ, and 38 CHR patients. Within CHR patients, 15 converters to psychosis were compared with 16 nonconverters with at least 12 months of clinical follow-up. Hierarchical Cox regression examined the ability of MMN to predict time to psychosis onset in CHR patients. Irrespective of deviant type, MMN was significantly reduced in ESZ and CHR patients relative to healthy control subjects and in CHR converters relative to nonconverters. Mismatch negativity did not significantly differentiate ESZ and CHR patients. The duration + frequency double deviant MMN, but not the single deviant MMNs, significantly predicted the time to psychosis onset in CHR patients. Neurophysiological mechanisms underlying automatic processing of auditory deviance, as reflected by the duration + frequency double deviant MMN, are compromised before psychosis onset and can enhance the prediction of psychosis risk among CHR patients.
Klein-Hennig, Martin; Dietz, Mathias; Hohmann, Volker
2018-03-01
Both harmonic and binaural signal properties are relevant for auditory processing. To investigate how these cues combine in the auditory system, detection thresholds for an 800-Hz tone masked by a diotic (i.e., identical between the ears) harmonic complex tone were measured in six normal-hearing subjects. The target tone was presented either diotically or with an interaural phase difference (IPD) of 180° and in either harmonic or "mistuned" relationship to the diotic masker. Three different maskers were used, a resolved and an unresolved complex tone (fundamental frequency: 160 and 40 Hz) with four components below and above the target frequency and a broadband unresolved complex tone with 12 additional components. The target IPD provided release from masking in most masker conditions, whereas mistuning led to a significant release from masking only in the diotic conditions with the resolved and the narrowband unresolved maskers. A significant effect of mistuning was neither found in the diotic condition with the wideband unresolved masker nor in any of the dichotic conditions. An auditory model with a single analysis frequency band and different binaural processing schemes was employed to predict the data of the unresolved masker conditions. Sensitivity to modulation cues was achieved by including an auditory-motivated modulation filter in the processing pathway. The predictions of the diotic data were in line with the experimental results and literature data in the narrowband condition, but not in the broadband condition, suggesting that across-frequency processing is involved in processing modulation information. The experimental and model results in the dichotic conditions show that the binaural processor cannot exploit modulation information in binaurally unmasked conditions.
Effects of sound intensity on temporal properties of inhibition in the pallid bat auditory cortex.
Razak, Khaleel A
2013-01-01
Auditory neurons in bats that use frequency modulated (FM) sweeps for echolocation are selective for the behaviorally-relevant rates and direction of frequency change. Such selectivity arises through spectrotemporal interactions between excitatory and inhibitory components of the receptive field. In the pallid bat auditory system, the relationship between FM sweep direction/rate selectivity and spectral and temporal properties of sideband inhibition have been characterized. Of note is the temporal asymmetry in sideband inhibition, with low-frequency inhibition (LFI) exhibiting faster arrival times compared to high-frequency inhibition (HFI). Using the two-tone inhibition over time (TTI) stimulus paradigm, this study investigated the interactions between two sound parameters in shaping sideband inhibition: intensity and time. Specifically, the impact of changing relative intensities of the excitatory and inhibitory tones on arrival time of inhibition was studied. With this stimulation paradigm, single-unit data from the auditory cortex of pentobarbital-anesthetized pallid bats show that the threshold for LFI is on average ~8 dB lower than HFI. For equal intensity tones near threshold, LFI is stronger than HFI. When the inhibitory tone intensity is increased further from threshold, the strength asymmetry decreased. The temporal asymmetry in LFI vs. HFI arrival time is strongest when the excitatory and inhibitory tones are of equal intensities or if the excitatory tone is louder. As inhibitory tone intensity is increased, temporal asymmetry decreased, suggesting that the relative magnitude of excitatory and inhibitory inputs shapes the arrival time of inhibition and FM sweep rate and direction selectivity. Given that most FM bats use downward sweeps as echolocation calls, a similar asymmetry in threshold and strength of LFI vs. HFI may be a general adaptation to enhance direction selectivity while maintaining sweep-rate selective responses to downward sweeps.
Auditory Beat Stimulation and its Effects on Cognition and Mood States
Chaieb, Leila; Wilpert, Elke Caroline; Reber, Thomas P.; Fell, Juergen
2015-01-01
Auditory beat stimulation may be a promising new tool for the manipulation of cognitive processes and the modulation of mood states. Here, we aim to review the literature examining the most current applications of auditory beat stimulation and its targets. We give a brief overview of research on auditory steady-state responses and their relationship to auditory beat stimulation (ABS). We summarize relevant studies investigating the neurophysiological changes related to ABS and how they impact the design of appropriate stimulation protocols. Focusing on binaural-beat stimulation, we then discuss the role of monaural- and binaural-beat frequencies in cognition and mood states, in addition to their efficacy in targeting disease symptoms. We aim to highlight important points concerning stimulation parameters and try to address why there are often contradictory findings with regard to the outcomes of ABS. PMID:26029120
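The monaural- and binaural-beat stimuli discussed in this review are straightforward to synthesize. As an illustrative sketch (function names, carrier frequency, and beat frequency are my own choices, not values from the review), a binaural beat presents two slightly mistuned pure tones, one to each ear, while a monaural beat sums both tones into a single channel so the beat is physically present as an amplitude fluctuation:

```python
import math

def binaural_beat(carrier_hz=400.0, beat_hz=10.0, dur_s=1.0, fs=8000):
    """Left/right channels whose tone frequencies differ by beat_hz.
    The beat percept arises from central binaural interaction."""
    n = int(dur_s * fs)
    left = [math.sin(2 * math.pi * carrier_hz * t / fs) for t in range(n)]
    right = [math.sin(2 * math.pi * (carrier_hz + beat_hz) * t / fs)
             for t in range(n)]
    return left, right

def monaural_beat(carrier_hz=400.0, beat_hz=10.0, dur_s=1.0, fs=8000):
    """Both tones summed into one channel; the beat is a physical
    amplitude modulation at beat_hz rather than a central construct."""
    n = int(dur_s * fs)
    return [0.5 * (math.sin(2 * math.pi * carrier_hz * t / fs)
                   + math.sin(2 * math.pi * (carrier_hz + beat_hz) * t / fs))
            for t in range(n)]
```

Scaling the monaural sum by 0.5 keeps the samples within [-1, 1] for direct playback.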
The spectrotemporal filter mechanism of auditory selective attention
Lakatos, Peter; Musacchia, Gabriella; O’Connell, Monica N.; Falchier, Arnaud Y.; Javitt, Daniel C.; Schroeder, Charles E.
2013-01-01
While we have convincing evidence that attention to auditory stimuli modulates neuronal responses at or before the level of primary auditory cortex (A1), the underlying physiological mechanisms are unknown. We found that attending to rhythmic auditory streams resulted in the entrainment of ongoing oscillatory activity reflecting rhythmic excitability fluctuations in A1. Strikingly, while the rhythm of the entrained oscillations in A1 neuronal ensembles reflected the temporal structure of the attended stream, the phase depended on the attended frequency content. Counter-phase entrainment across differently tuned A1 regions resulted in both the amplification and sharpening of responses at attended time points, in essence acting as a spectrotemporal filter mechanism. Our data suggest that selective attention generates a dynamically evolving model of attended auditory stimulus streams in the form of modulatory subthreshold oscillations across tonotopically organized neuronal ensembles in A1 that enhances the representation of attended stimuli. PMID:23439126
An Experimental Analysis of Memory Processing
Wright, Anthony A
2007-01-01
Rhesus monkeys were trained and tested in visual and auditory list-memory tasks with sequences of four travel pictures or four natural/environmental sounds followed by single test items. Acquisitions of the visual list-memory task are presented. Visual recency (last item) memory diminished with retention delay, and primacy (first item) memory strengthened. Capuchin monkeys, pigeons, and humans showed similar visual-memory changes. Rhesus learned an auditory memory task and showed octave generalization for some lists of notes—tonal, but not atonal, musical passages. In contrast with visual list memory, auditory primacy memory diminished with delay and auditory recency memory strengthened. Manipulations of interitem intervals, list length, and item presentation frequency revealed proactive and retroactive inhibition among items of individual auditory lists. Repeating visual items from prior lists produced interference (on nonmatching tests) revealing how far back memory extended. The possibility of using the interference function to separate familiarity vs. recollective memory processing is discussed. PMID:18047230
Research on Frequency Transposition for Hearing Aids. Final Report.
ERIC Educational Resources Information Center
Gengel, Roy W.; Pickett, J. M.
Reported were studies measuring residual auditory capacities of deaf persons and investigating hearing aids which transpose speech to lower frequencies where deaf persons may have better hearing. Studies on temporal and frequency discrimination indicated that the duration of a signal may have a differential effect on its detectability by…
Research Program Review. Aircrew Physiology.
1982-06-01
15 Visual and Auditory Localization: Normal and Abnormal Relation Leonard Detection of Retinal Ischemia Prior to Blackout by Electrical Evoked...parameters and provision of auditory or tactile feedback to the subject, all promise some improvement. Measurement of the separate responses at 01...Work in Progress A centrifuge program designed to evaluate two different electrode placements and four different frequencies of stimulation is now in
ERIC Educational Resources Information Center
Vandewalle, Ellen; Boets, Bart; Ghesquiere, Pol; Zink, Inge
2012-01-01
This longitudinal study investigated temporal auditory processing (frequency modulation and between-channel gap detection) and speech perception (speech-in-noise and categorical perception) in three groups of 6 years 3 months to 6 years 8 months-old children attending grade 1: (1) children with specific language impairment (SLI) and literacy delay…
Abrams, Daniel A; Nicol, Trent; White-Schwoch, Travis; Zecker, Steven; Kraus, Nina
2017-05-01
Speech perception relies on a listener's ability to simultaneously resolve multiple temporal features in the speech signal. Little is known regarding neural mechanisms that enable the simultaneous coding of concurrent temporal features in speech. Here we show that two categories of temporal features in speech, the low-frequency speech envelope and periodicity cues, are processed by distinct neural mechanisms within the same population of cortical neurons. We measured population activity in primary auditory cortex of anesthetized guinea pig in response to three variants of a naturally produced sentence. Results show that the envelope of population responses closely tracks the speech envelope, and this cortical activity more closely reflects wider bandwidths of the speech envelope compared to narrow bands. Additionally, neuronal populations represent the fundamental frequency of speech robustly with phase-locked responses. Importantly, these two temporal features of speech are simultaneously observed within neuronal ensembles in auditory cortex in response to clear, conversation, and compressed speech exemplars. Results show that auditory cortical neurons are adept at simultaneously resolving multiple temporal features in extended speech sentences using discrete coding mechanisms. Copyright © 2017 Elsevier B.V. All rights reserved.
Spectral and Temporal Processing in Rat Posterior Auditory Cortex
Pandya, Pritesh K.; Rathbun, Daniel L.; Moucha, Raluca; Engineer, Navzer D.; Kilgard, Michael P.
2009-01-01
The rat auditory cortex is divided anatomically into several areas, but little is known about the functional differences in information processing between these areas. To determine the filter properties of rat posterior auditory field (PAF) neurons, we compared neurophysiological responses to simple tones, frequency modulated (FM) sweeps, and amplitude modulated noise and tones with responses of primary auditory cortex (A1) neurons. PAF neurons have excitatory receptive fields that are on average 65% broader than A1 neurons. The broader receptive fields of PAF neurons result in responses to narrow and broadband inputs that are stronger than A1. In contrast to A1, we found little evidence for an orderly topographic gradient in PAF based on frequency. These neurons exhibit latencies that are twice as long as A1. In response to modulated tones and noise, PAF neurons adapt to repeated stimuli at significantly slower rates. Unlike A1, neurons in PAF rarely exhibit facilitation to rapidly repeated sounds. Neurons in PAF do not exhibit strong selectivity for rate or direction of narrowband one octave FM sweeps. These results indicate that PAF, like nonprimary visual fields, processes sensory information on larger spectral and longer temporal scales than primary cortex. PMID:17615251
Eramudugolla, Ranmalee; Mattingley, Jason B
2008-01-01
Patients with unilateral spatial neglect following right hemisphere damage are impaired in detecting contralesional targets in both visual and haptic search tasks, and often show a graded improvement in detection performance for more ipsilesional spatial locations. In audition, multiple simultaneous sounds are most effectively perceived if they are distributed along the frequency dimension. Thus, attention to spectro-temporal features alone can allow detection of a target sound amongst multiple simultaneous distractor sounds, regardless of whether these sounds are spatially separated. Spatial bias in attention associated with neglect should not affect auditory search based on spectro-temporal features of a sound target. We report that a right brain damaged patient with neglect demonstrated a significant gradient favouring the ipsilesional side on a visual search task as well as an auditory search task in which the target was a frequency modulated tone amongst steady distractor tones. No such asymmetry was apparent in the auditory search performance of a control patient with a right hemisphere lesion but no neglect. The results suggest that the spatial bias in attention exhibited by neglect patients affects stimulus processing even when spatial information is irrelevant to the task.
BALDEY: A database of auditory lexical decisions.
Ernestus, Mirjam; Cutler, Anne
2015-01-01
In an auditory lexical decision experiment, 5541 spoken content words and pseudowords were presented to 20 native speakers of Dutch. The words vary in phonological make-up and in number of syllables and stress pattern, and are further representative of the native Dutch vocabulary in that most are morphologically complex, comprising two stems or one stem plus derivational and inflectional suffixes, with inflections representing both regular and irregular paradigms; the pseudowords were matched in these respects to the real words. The BALDEY ("biggest auditory lexical decision experiment yet") data file includes response times and accuracy rates, with, for each item, morphological information plus phonological and acoustic information derived from automatic phonemic segmentation of the stimuli. Two initial analyses illustrate how this data set can be used. First, we discuss several measures of the point at which a word has no further neighbours and compare the degree to which each measure predicts our lexical decision response outcomes. Second, we investigate how well four different measures of frequency of occurrence (from written corpora, spoken corpora, subtitles, and frequency ratings by 75 participants) predict the same outcomes. These analyses motivate general conclusions about the auditory lexical decision task. The (publicly available) BALDEY database lends itself to many further analyses.
Chimaeric sounds reveal dichotomies in auditory perception
Smith, Zachary M.; Delgutte, Bertrand; Oxenham, Andrew J.
2008-01-01
By Fourier's theorem [1], signals can be decomposed into a sum of sinusoids of different frequencies. This is especially relevant for hearing, because the inner ear performs a form of mechanical Fourier transform by mapping frequencies along the length of the cochlear partition. An alternative signal decomposition, originated by Hilbert [2], is to factor a signal into the product of a slowly varying envelope and a rapidly varying fine time structure. Neurons in the auditory brainstem [3-6] sensitive to these features have been found in mammalian physiological studies. To investigate the relative perceptual importance of envelope and fine structure, we synthesized stimuli that we call 'auditory chimaeras', which have the envelope of one sound and the fine structure of another. Here we show that the envelope is most important for speech reception, and the fine structure is most important for pitch perception and sound localization. When the two features are in conflict, the sound of speech is heard at a location determined by the fine structure, but the words are identified according to the envelope. This finding reveals a possible acoustic basis for the hypothesized 'what' and 'where' pathways in the auditory cortex [7-10]. PMID:11882898
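The Hilbert factorization this abstract describes can be sketched in a few lines. This single-band version is a simplification (the study itself synthesized chimaeras within multiple frequency bands of a filter bank, and the function names here are illustrative, not the authors'): the analytic signal is formed by zeroing negative frequencies in the FFT, its magnitude gives the envelope, and its unit-magnitude phase factor gives the fine structure, so one sound's envelope can be imposed on another's fine structure:

```python
import numpy as np

def analytic_signal(x):
    """FFT-based Hilbert transform: zero negative frequencies,
    double positive ones, and inverse-transform."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def chimaera(env_src, fine_src):
    """Impose the Hilbert envelope of env_src on the fine
    structure of fine_src (single broadband channel only)."""
    a_env = analytic_signal(env_src)
    a_fine = analytic_signal(fine_src)
    envelope = np.abs(a_env)
    # Unit-magnitude carrier: analytic signal divided by its envelope.
    fine = a_fine / np.maximum(np.abs(a_fine), 1e-12)
    return envelope * np.real(fine)
```

Pairing a sound with itself returns (approximately) the original signal, which is a convenient sanity check on the decomposition.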
Music and speech listening enhance the recovery of early sensory processing after stroke.
Särkämö, Teppo; Pihko, Elina; Laitinen, Sari; Forsblom, Anita; Soinila, Seppo; Mikkonen, Mikko; Autti, Taina; Silvennoinen, Heli M; Erkkilä, Jaakko; Laine, Matti; Peretz, Isabelle; Hietanen, Marja; Tervaniemi, Mari
2010-12-01
Our surrounding auditory environment has a dramatic influence on the development of basic auditory and cognitive skills, but little is known about how it influences the recovery of these skills after neural damage. Here, we studied the long-term effects of daily music and speech listening on auditory sensory memory after middle cerebral artery (MCA) stroke. In the acute recovery phase, 60 patients who had middle cerebral artery stroke were randomly assigned to a music listening group, an audio book listening group, or a control group. Auditory sensory memory, as indexed by the magnetic MMN (MMNm) response to changes in sound frequency and duration, was measured 1 week (baseline), 3 months, and 6 months after the stroke with whole-head magnetoencephalography recordings. Fifty-four patients completed the study. Results showed that the amplitude of the frequency MMNm increased significantly more in both music and audio book groups than in the control group during the 6-month poststroke period. In contrast, the duration MMNm amplitude increased more in the audio book group than in the other groups. Moreover, changes in the frequency MMNm amplitude correlated significantly with the behavioral improvement of verbal memory and focused attention induced by music listening. These findings demonstrate that merely listening to music and speech after neural damage can induce long-term plastic changes in early sensory processing, which, in turn, may facilitate the recovery of higher cognitive functions. The neural mechanisms potentially underlying this effect are discussed.
The effects of context and musical training on auditory temporal-interval discrimination.
Banai, Karen; Fisher, Shirley; Ganot, Ron
2012-02-01
Nonsensory factors such as stimulus context and musical experience are known to influence auditory frequency discrimination, but whether the context effect extends to auditory temporal processing remains unknown. Whether individual experiences such as musical training alter the context effect is also unknown. The goal of the present study was therefore to investigate the effects of stimulus context and musical experience on auditory temporal-interval discrimination. In experiment 1, temporal-interval discrimination was compared between fixed-context conditions, in which a single base temporal interval was presented repeatedly across all trials, and variable-context conditions, in which one of two base intervals was randomly presented on each trial. Discrimination was significantly better in the fixed than in the variable context conditions. In experiment 2, temporal discrimination thresholds of musicians and non-musicians were compared across three conditions: a fixed-context condition in which the target interval was presented repeatedly across trials, and two variable-context conditions differing in the frequencies of the tones marking the temporal intervals. Musicians outperformed non-musicians in all three conditions, but the effects of context were similar for the two groups. Overall, it appears that, like frequency discrimination, temporal-interval discrimination benefits from having a fixed reference. Musical experience, while improving performance, did not alter the context effect, suggesting that the improved discrimination skills of musicians are probably not an outcome of more sensitive contextual facilitation or predictive coding mechanisms. Copyright © 2011 Elsevier B.V. All rights reserved.
Schädler, Marc René; Warzybok, Anna; Ewert, Stephan D; Kollmeier, Birger
2016-05-01
A framework for simulating auditory discrimination experiments, based on an approach from Schädler, Warzybok, Hochmuth, and Kollmeier [(2015). Int. J. Audiol. 54, 100-107] that was originally designed to predict speech recognition thresholds, is extended to also predict psychoacoustic thresholds. The proposed framework is used to assess the suitability of different auditory-inspired feature sets for a range of auditory discrimination experiments, including psychoacoustic as well as speech recognition experiments in noise. The considered experiments were 2 kHz tone-in-broadband-noise simultaneous masking as a function of tone length, spectral masking with simultaneously presented tone signals and narrow-band noise maskers, and the German Matrix sentence test reception threshold in stationary and modulated noise. The employed feature sets included spectro-temporal Gabor filter bank features, Mel-frequency cepstral coefficients, logarithmically scaled Mel-spectrograms, and the internal representation of the Perception Model from Dau, Kollmeier, and Kohlrausch [(1997). J. Acoust. Soc. Am. 102(5), 2892-2905]. The proposed framework was successfully employed to simulate all experiments with a common parameter set and to obtain objective thresholds with fewer assumptions than traditional modeling approaches. Depending on the feature set, the simulated reference-free thresholds were found to agree with, and hence to predict, empirical data from the literature. Across-frequency processing was found to be crucial for accurately modeling the lower speech reception thresholds in modulated noise compared with stationary noise conditions.
A masking level difference due to harmonicity.
Treurniet, W C; Boucher, D R
2001-01-01
The role of harmonicity in masking was studied by comparing the effect of harmonic and inharmonic maskers on the masked thresholds of noise probes using a three-alternative, forced-choice method. Harmonic maskers were created by selecting sets of partials from a harmonic series with an 88-Hz fundamental and 45 consecutive partials. Inharmonic maskers differed in that the partial frequencies were perturbed to nearby values that were not integer multiples of the fundamental frequency. Average simultaneous-masked thresholds were as much as 10 dB lower with the harmonic masker than with the inharmonic masker, and this difference was unaffected by masker level. It was reduced or eliminated when the harmonic partials were separated by more than 176 Hz, suggesting that the effect is related to the extent to which the harmonics are resolved by auditory filters. The threshold difference was not observed in a forward-masking experiment. Finally, an across-channel mechanism was implicated when the threshold difference was found between a harmonic masker flanked by harmonic bands and a harmonic masker flanked by inharmonic bands. A model developed to explain the observed difference recognizes that an auditory filter output envelope is modulated when the filter passes two or more sinusoids, and that the modulation rate depends on the differences among the input frequencies. For a harmonic masker, the frequency differences of adjacent partials are identical, and all auditory filters have the same dominant modulation rate. For an inharmonic masker, however, the frequency differences are not constant and the envelope modulation rate varies across filters. The model proposes that a lower variability facilitates detection of a probe-induced change in the variability, thus accounting for the masked threshold difference. 
The model was supported by significantly improved predictions of observed thresholds when the predictor variables included envelope modulation rate variance measured using simulated auditory filters.
Feature conjunctions and auditory sensory memory.
Sussman, E; Gomes, H; Nousak, J M; Ritter, W; Vaughan, H G
1998-05-18
This study sought to obtain additional evidence that transient auditory memory stores information about conjunctions of features on an automatic basis. The mismatch negativity of event-related potentials was employed because its operations are based on information that is stored in transient auditory memory. The mismatch negativity was found to be elicited by a tone that differed from standard tones in a combination of its perceived location and frequency. The result lends further support to the hypothesis that the system upon which the mismatch negativity relies processes stimuli in a holistic manner. Copyright 1998 Elsevier Science B.V.
Musical Experience, Auditory Perception and Reading-Related Skills in Children
Banai, Karen; Ahissar, Merav
2013-01-01
Background The relationships between auditory processing and reading-related skills remain poorly understood despite intensive research. Here we focus on the potential role of musical experience as a confounding factor. Specifically we ask whether the pattern of correlations between auditory and reading related skills differ between children with different amounts of musical experience. Methodology/Principal Findings Third grade children with various degrees of musical experience were tested on a battery of auditory processing and reading related tasks. Very poor auditory thresholds and poor memory skills were abundant only among children with no musical education. In this population, indices of auditory processing (frequency and interval discrimination thresholds) were significantly correlated with and accounted for up to 13% of the variance in reading related skills. Among children with more than one year of musical training, auditory processing indices were better, yet reading related skills were not correlated with them. A potential interpretation for the reduction in the correlations might be that auditory and reading-related skills improve at different rates as a function of musical training. Conclusions/Significance Participants’ previous musical training, which is typically ignored in studies assessing the relations between auditory and reading related skills, should be considered. Very poor auditory and memory skills are rare among children with even a short period of musical training, suggesting musical training could have an impact on both. The lack of correlation in the musically trained population suggests that a short period of musical training does not enhance reading related skills of individuals with within-normal auditory processing skills. 
Further studies are required to determine whether the associations between musical training, auditory processing and memory are indeed causal or whether children with poor auditory and memory skills are less likely to study music and if so, why this is the case. PMID:24086654
Mertens, Griet; Van Rompaey, Vincent; Van de Heyning, Paul
2018-05-17
A suggested solution to suppress tinnitus is to restore normal sensory input, based on the auditory deprivation hypothesis. It is known that hearing aids can provide sufficient activation of the auditory nervous system and reduce tinnitus in subjects with mild to moderate hearing loss, and that cochlear implantation can reduce tinnitus in subjects with severe to profound hearing loss. This applies to subjects with single-sided deafness (SSD) or bilateral hearing loss. The aim was to investigate whether electric-acoustic stimulation (EAS) can reduce severe tinnitus in a subject with residual hearing in the ipsilateral ear and contralateral normal hearing (high-frequency SSD) by restoring the auditory input. Tinnitus reduction was followed for 1 year after implantation in a subject with high-frequency SSD who uses EAS, and was compared to that in 11 subjects with SSD using a cochlear implant (CI). The Visual Analogue Scale (VAS) and the Tinnitus Questionnaire (TQ) were administered preoperatively and at 1, 3, 6, and 12 months after implantation. Significant tinnitus reduction was observed on the VAS 1 month after implantation in the subjects with SSD using a CI. Tinnitus reduction was also observed in the subject with high-frequency SSD using EAS, with a further decrease 3 months after implantation. The TQ and VAS scores remained stable up to 1 year after implantation. A CI can significantly reduce ipsilateral severe tinnitus in a subject with SSD, and ipsilateral severe tinnitus can also be reduced using EAS in subjects with high-frequency SSD.
Electrical stimulation of the midbrain excites the auditory cortex asymmetrically.
Quass, Gunnar Lennart; Kurt, Simone; Hildebrandt, Jannis; Kral, Andrej
2018-05-17
Auditory midbrain implant users cannot achieve open speech perception and have limited frequency resolution. It remains unclear whether the spread of excitation contributes to this issue and how much it can be compensated by current-focusing, which is an effective approach in cochlear implants. The present study examined the spread of excitation in the cortex elicited by electric midbrain stimulation. We further tested whether current-focusing via bipolar and tripolar stimulation is effective with electric midbrain stimulation and whether these modes hold any advantage over monopolar stimulation also in conditions when the stimulation electrodes are in direct contact with the target tissue. Using penetrating multielectrode arrays, we recorded cortical population responses to single pulse electric midbrain stimulation in 10 ketamine/xylazine anesthetized mice. We compared monopolar, bipolar, and tripolar stimulation configurations with regard to the spread of excitation and the characteristic frequency difference between the stimulation/recording electrodes. The cortical responses were distributed asymmetrically around the characteristic frequency of the stimulated midbrain region with a strong activation in regions tuned up to one octave higher. We found no significant differences between monopolar, bipolar, and tripolar stimulation in threshold, evoked firing rate, or dynamic range. The cortical responses to electric midbrain stimulation are biased towards higher tonotopic frequencies. Current-focusing is not effective in direct contact electrical stimulation. Electrode maps should account for the asymmetrical spread of excitation when fitting auditory midbrain implants by shifting the frequency-bands downward and stimulating as dorsally as possible. Copyright © 2018 Elsevier Inc. All rights reserved.
Ethridge, Lauren E; White, Stormi P; Mosconi, Matthew W; Wang, Jun; Pedapati, Ernest V; Erickson, Craig A; Byerly, Matthew J; Sweeney, John A
2017-01-01
Studies in the fmr1 KO mouse demonstrate hyper-excitability and increased high-frequency neuronal activity in sensory cortex. These abnormalities may contribute to prominent and distressing sensory hypersensitivities in patients with fragile X syndrome (FXS). The current study investigated functional properties of auditory cortex using a sensory entrainment task in FXS. EEG recordings were obtained from 17 adolescents and adults with FXS and 17 age- and sex-matched healthy controls. Participants heard an auditory chirp stimulus generated using a 1000-Hz tone that was amplitude modulated by a sinusoid linearly increasing in frequency from 0-100 Hz over 2 s. Single trial time-frequency analyses revealed decreased gamma band phase-locking to the chirp stimulus in FXS, which was strongly coupled with broadband increases in gamma power. Abnormalities in gamma phase-locking and power were also associated with theta-gamma amplitude-amplitude coupling during the pre-stimulus period and with parent reports of heightened sensory sensitivities and social communication deficits. This represents the first demonstration of neural entrainment alterations in FXS patients and suggests that fast-spiking interneurons regulating synchronous high-frequency neural activity have reduced functionality. This reduced ability to synchronize high-frequency neural activity was related to the total power of background gamma band activity. These observations extend findings from fmr1 KO models of FXS, characterize a core pathophysiological aspect of FXS, and may provide a translational biomarker strategy for evaluating promising therapeutics.
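The chirp stimulus described here has a simple closed form: the modulator phase is the integral of its linearly increasing instantaneous frequency. A minimal generation sketch (the modulation depth, starting phase, and sample rate are illustrative assumptions, not parameters reported by the study):

```python
import math

def am_chirp(fs=44100, dur_s=2.0, carrier_hz=1000.0, f_mod_max=100.0):
    """1 kHz tone amplitude-modulated by a sinusoid whose frequency
    sweeps linearly from 0 to f_mod_max Hz over dur_s seconds.

    Instantaneous modulator frequency: f(t) = f_mod_max * t / dur_s,
    so the modulator phase is phi(t) = 2*pi*f_mod_max*t**2 / (2*dur_s).
    """
    n = int(fs * dur_s)
    out = []
    for i in range(n):
        t = i / fs
        phi = 2 * math.pi * f_mod_max * t * t / (2 * dur_s)
        m = 0.5 * (1 - math.cos(phi))  # modulator constrained to [0, 1]
        out.append(m * math.sin(2 * math.pi * carrier_hz * t))
    return out
```

Because the modulator sweeps through the gamma range, the evoked response at each moment indexes entrainment at the momentary modulation frequency.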
Muller, Christopher L; Anacker, Allison MJ; Rogers, Tiffany D; Goeden, Nick; Keller, Elizabeth H; Forsberg, C Gunnar; Kerr, Travis M; Wender, Carly LA; Anderson, George M; Stanwood, Gregg D; Blakely, Randy D; Bonnin, Alexandre; Veenstra-VanderWeele, Jeremy
2017-01-01
Biomarker, neuroimaging, and genetic findings implicate the serotonin transporter (SERT) in autism spectrum disorder (ASD). Previously, we found that adult male mice expressing the autism-associated SERT Ala56 variant have altered central serotonin (5-HT) system function, as well as elevated peripheral blood 5-HT levels. Early in gestation, before midbrain 5-HT projections have reached the cortex, peripheral sources supply 5-HT to the forebrain, suggesting that altered maternal or placenta 5-HT system function could impact the developing embryo. We therefore used different combinations of maternal and embryo SERT Ala56 genotypes to examine effects on blood, placenta and embryo serotonin levels and neurodevelopment at embryonic day E14.5, when peripheral sources of 5-HT predominate, and E18.5, when midbrain 5-HT projections have reached the forebrain. Maternal SERT Ala56 genotype was associated with decreased placenta and embryonic forebrain 5-HT levels at E14.5. Low 5-HT in the placenta persisted, but forebrain levels normalized by E18.5. Maternal SERT Ala56 genotype effects on forebrain 5-HT levels were accompanied by a broadening of 5-HT-sensitive thalamocortical axon projections. In contrast, no effect of embryo genotype was seen in concepti from heterozygous dams. Blood 5-HT levels were dynamic across pregnancy and were increased in SERT Ala56 dams at E14.5. Placenta RNA sequencing data at E14.5 indicated substantial impact of maternal SERT Ala56 genotype, with alterations in immune and metabolic-related pathways. Collectively, these findings indicate that maternal SERT function impacts offspring placental 5-HT levels, forebrain 5-HT levels, and neurodevelopment. PMID:27550733
Placenta-derived hypo-serotonin situations in the developing forebrain cause autism.
Sato, Kohji
2013-04-01
Autism is a pervasive developmental disorder that is characterized by the behavioral traits of impaired social cognition and communication, and repetitive and/or obsessive behavior and interests. Although there are many theories and speculations about the pathogenetic causes of autism, the disruption of the serotonergic system is one of the most consistent and well-replicated findings. Recently, it has been reported that placenta-derived serotonin is the main source in the embryonic day (E) 10-15 mouse forebrain; after that period, serotonergic fibers start to supply serotonin to the forebrain. E10-15 is a critical developmental period, during which cortical neurogenesis, migration and initial axon targeting take place. Since all of these events have been implicated in the pathogenesis of autism and are highly controlled by serotonin signaling, a paucity of placenta-derived serotonin is of potential importance to the pathogenesis of autism. I thus postulate the hypothesis that placenta-derived hypo-serotonin situations in the developing forebrain cause autism. The hypothesis is as follows. Various factors, such as inflammation and dysfunction of the placenta, together with genetic predispositions, cause a decrease in placenta-derived serotonin levels. The decrease in placenta-derived serotonin levels leads to hypo-serotonergic situations in the forebrain of the fetus. The paucity of serotonin in the forebrain leads to mis-wiring in important regions which are responsible for the theory of mind. The paucity of serotonin in the forebrain also causes over-growth of serotonergic fibers. These disturbances result in network deficiency and aberration of the serotonergic system, leading to the autistic phenotypes. Copyright © 2013 Elsevier Ltd. All rights reserved.
Zhang, Chi; Kang, Yi; Lundy, Robert F.
2010-01-01
The pontine parabrachial nucleus (PBN) and medullary reticular formation (RF) are hindbrain regions that, respectively, process sensory input and coordinate motor output related to ingestive behavior. Neural processing in each hindbrain site is subject to modulation originating from several forebrain structures including the insular gustatory cortex (IC), bed nucleus of the stria terminalis (BNST), central nucleus of the amygdala (CeA), and lateral hypothalamus (LH). The present study combined electrophysiology and retrograde tracing techniques to determine the extent of overlap between neurons within the IC, BNST, CeA and LH that target both the PBN and RF. One fluorescent retrograde tracer, red (RFB) or green (GFB) latex microbeads, was injected into the gustatory PBN under electrophysiological guidance, and a different retrograde tracer, GFB or fluorogold (FG), was injected into the ipsilateral RF using the location of the gustatory NST as a point of reference. Brain tissue containing each forebrain region was sectioned, scanned using a confocal microscope, and scored for the number of single- and double-labeled neurons. Neurons innervating the RF only, the PBN only, or both the medullary RF and PBN were observed, largely intermingled, in each forebrain region. The CeA contained the largest number of cells retrogradely labeled after tracer injection into either hindbrain region. For each forebrain area except the IC, the origin of descending input to the RF and PBN was almost entirely ipsilateral. Axons from a small percentage of hindbrain-projecting forebrain neurons targeted both the PBN and RF. Target-specific and nonspecific inputs from a variety of forebrain nuclei to the hindbrain likely reflect functional specialization in the control of ingestive behaviors. PMID:21040715
Mechanics of the Mammalian Cochlea
Robles, Luis; Ruggero, Mario A.
2013-01-01
In mammals, environmental sounds stimulate the auditory receptor, the cochlea, via vibrations of the stapes, the innermost of the middle ear ossicles. These vibrations produce displacement waves that travel on the elongated and spirally wound basilar membrane (BM). As they travel, waves grow in amplitude, reaching a maximum and then dying out. The location of maximum BM motion is a function of stimulus frequency, with high-frequency waves being localized to the “base” of the cochlea (near the stapes) and low-frequency waves approaching the “apex” of the cochlea. Thus each cochlear site has a characteristic frequency (CF), to which it responds maximally. BM vibrations produce motion of hair cell stereocilia, which gates stereociliar transduction channels leading to the generation of hair cell receptor potentials and the excitation of afferent auditory nerve fibers. At the base of the cochlea, BM motion exhibits a CF-specific and level-dependent compressive nonlinearity such that responses to low-level, near-CF stimuli are sensitive and sharply frequency-tuned and responses to intense stimuli are insensitive and poorly tuned. The high sensitivity and sharp-frequency tuning, as well as compression and other nonlinearities (two-tone suppression and intermodulation distortion), are highly labile, indicating the presence in normal cochleae of a positive feedback from the organ of Corti, the “cochlear amplifier.” This mechanism involves forces generated by the outer hair cells and controlled, directly or indirectly, by their transduction currents. At the apex of the cochlea, nonlinearities appear to be less prominent than at the base, perhaps implying that the cochlear amplifier plays a lesser role in determining apical mechanical responses to sound. Whether at the base or the apex, the properties of BM vibration adequately account for most frequency-specific properties of the responses to sound of auditory nerve fibers. PMID:11427697
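The place-frequency map described above (high characteristic frequencies at the base, low at the apex) is conventionally summarized by the Greenwood function, which is not given in the abstract itself; a minimal sketch using Greenwood's published human parameters (A = 165.4, a = 2.1, k = 0.88, with position expressed as a fraction of basilar-membrane length measured from the apex) is:

```python
def greenwood_cf(x, A=165.4, a=2.1, k=0.88):
    """Characteristic frequency (Hz) at relative basilar-membrane position x,
    where x = 0 is the apex and x = 1 is the base (Greenwood's human fit)."""
    return A * (10 ** (a * x) - k)

# Low frequencies map to the apex, high frequencies to the base.
print(greenwood_cf(0.0))  # apical CF, roughly 20 Hz
print(greenwood_cf(1.0))  # basal CF, roughly 20 kHz
```

The monotonic growth of CF from apex to base is the point; the exact constants vary by species and fit.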
Synaptic transmission at the endbulb of Held deteriorates during age‐related hearing loss
Manis, Paul B.
2016-01-01
Key points: Synaptic transmission at the endbulb of Held was assessed by whole-cell patch clamp recordings from auditory neurons in mature (2–4 months) and aged (20–26 months) mice. Synaptic transmission is degraded in aged mice, which may contribute to the decline in neural processing of the central auditory system during age-related hearing loss. The changes in synaptic transmission in aged mice can be partially rescued by improving calcium buffering, or by decreasing action potential-evoked calcium influx. These experiments suggest potential mechanisms, such as regulating intraterminal calcium, that could be manipulated to improve the fidelity of transmission at the aged endbulb of Held. Abstract: Age-related hearing loss (ARHL) is associated with changes to the auditory periphery that raise sensory thresholds and alter coding, and is accompanied by alterations in excitatory and inhibitory synaptic transmission, and intrinsic excitability in the circuits of the central auditory system. However, it remains unclear how synaptic transmission changes at the first central auditory synapses during ARHL. Using mature (2–4 months) and old (20–26 months) CBA/CaJ mice, we studied synaptic transmission at the endbulb of Held. Mature and old mice showed no difference in either spontaneous quantal synaptic transmission or low frequency evoked synaptic transmission at the endbulb of Held. However, when challenged with sustained high frequency stimulation, synapses in old mice exhibited increased asynchronous transmitter release and reduced synchronous release. This suggests that the transmission of temporally precise information is degraded at the endbulb during ARHL. Increasing intraterminal calcium buffering with EGTA-AM or decreasing calcium influx with ω-agatoxin IVA decreased the amount of asynchronous release and restored synchronous release in old mice.
In addition, recovery from depression following high frequency trains was faster in old mice, but was restored to a normal time course by EGTA‐AM treatment. These results suggest that intraterminal calcium in old endbulbs may rise to abnormally high levels during high rates of auditory nerve firing, or that calcium‐dependent processes involved in release are altered with age. These observations suggest that ARHL is associated with a decrease in temporal precision of synaptic release at the first central auditory synapse, which may contribute to perceptual deficits in hearing. PMID:27618790
Suda, Yoko; Kokura, Kenji; Kimura, Jun; Kajikawa, Eriko; Inoue, Fumitaka; Aizawa, Shinichi
2010-09-01
We have analyzed Emx2 enhancers to determine how Emx2 functions during forebrain development are regulated. The FB (forebrain) enhancer we identified immediately 3' downstream of the last coding exon is well conserved among tetrapods and unexpectedly directed all of the Emx2 expression in the forebrain: the caudal forebrain primordium at E8.5, the dorsal telencephalon at E9.5-E10.5, and the cortical ventricular zone after E12.5. Otx, Tcf, Smad and two unknown transcription factor binding sites were essential to all of these activities. The mutant that lacked this enhancer demonstrated that Emx2 expression under the enhancer is solely responsible for diencephalon development. However, in the telencephalon, the FB enhancer had no activity in the cortical hem or Cajal-Retzius cells, nor was its activity in the cortex graded. Emx2 expression was greatly reduced, but persisted in the telencephalon of the enhancer mutant, indicating that there exists another enhancer for Emx2 expression unique to the mammalian telencephalon.
Abnormal auditory synchronization in stuttering: A magnetoencephalographic study.
Kikuchi, Yoshikazu; Okamoto, Tsuyoshi; Ogata, Katsuya; Hagiwara, Koichi; Umezaki, Toshiro; Kenjo, Masamutsu; Nakagawa, Takashi; Tobimatsu, Shozo
2017-02-01
In a previous magnetoencephalographic study, we showed both functional and structural reorganization of the right auditory cortex and impaired left auditory cortex function in people who stutter (PWS). In the present work, we reevaluated the same dataset to further investigate how the right and left auditory cortices interact to compensate for stuttering. We evaluated bilateral N100m latencies as well as indices of local and inter-hemispheric phase synchronization of the auditory cortices. The left N100m latency was significantly prolonged relative to the right N100m latency in PWS, while healthy control participants did not show any inter-hemispheric differences in latency. A phase-locking factor (PLF) analysis, which indicates the degree of local phase synchronization, demonstrated enhanced alpha-band synchrony in the right auditory area of PWS. A phase-locking value (PLV) analysis of inter-hemispheric synchronization demonstrated significant elevations in the beta band between the right and left auditory cortices in PWS. In addition, right PLF and PLVs were positively correlated with stuttering frequency in PWS. Taken together, our data suggest that increased right hemispheric local phase synchronization and increased inter-hemispheric phase synchronization are electrophysiological correlates of a compensatory mechanism for impaired left auditory processing in PWS. Published by Elsevier B.V.
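As a sketch of the synchrony measures used in this study: the phase-locking factor (PLF) is the magnitude of the mean unit phase vector across trials at a single site, and the phase-locking value (PLV) applies the same statistic to the phase difference between two sites. A minimal numpy illustration, assuming instantaneous phases per trial have already been extracted (e.g., by a Hilbert transform or wavelet decomposition, neither shown here):

```python
import numpy as np

def plf(phases):
    """Phase-locking factor: |mean of exp(i*phase)| across trials.
    phases: 1-D array of instantaneous phase (radians), one value per trial."""
    return np.abs(np.mean(np.exp(1j * phases)))

def plv(phases_a, phases_b):
    """Phase-locking value between two sites: PLF of their phase difference."""
    return plf(phases_a - phases_b)

rng = np.random.default_rng(0)
locked = rng.normal(0.0, 0.1, 1000)          # tightly clustered phases
uniform = rng.uniform(-np.pi, np.pi, 1000)   # no phase preference
print(plf(locked))   # close to 1 (strong local synchronization)
print(plf(uniform))  # close to 0 (no synchronization)
```

Both measures lie in [0, 1]; values near 1 indicate consistent phase (or phase difference) across trials, which is the sense in which elevated alpha-band PLF and beta-band PLV are reported for PWS.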
Absence of both auditory evoked potentials and auditory percepts dependent on timing cues.
Starr, A; McPherson, D; Patterson, J; Don, M; Luxford, W; Shannon, R; Sininger, Y; Tonakawa, L; Waring, M
1991-06-01
An 11-yr-old girl had an absence of sensory components of auditory evoked potentials (brainstem, middle and long-latency) to click and tone burst stimuli that she could clearly hear. Psychoacoustic tests revealed a marked impairment of those auditory perceptions dependent on temporal cues, that is, lateralization of binaural clicks, change of binaural masked threshold with changes in signal phase, binaural beats, detection of paired monaural clicks, monaural detection of a silent gap in a sound, and monaural threshold elevation for short duration tones. In contrast, auditory functions reflecting intensity or frequency discriminations (difference limens) were only minimally impaired. Pure tone audiometry showed a moderate (50 dB) bilateral hearing loss with a disproportionate severe loss of word intelligibility. Those auditory evoked potentials that were preserved included (1) cochlear microphonics reflecting hair cell activity; (2) cortical sustained potentials reflecting processing of slowly changing signals; and (3) long-latency cognitive components (P300, processing negativity) reflecting endogenous auditory cognitive processes. Both the evoked potential and perceptual deficits are attributed to changes in temporal encoding of acoustic signals perhaps occurring at the synapse between hair cell and eighth nerve dendrites. The results from this patient are discussed in relation to previously published cases with absent auditory evoked potentials and preserved hearing.
Spatial processing in the auditory cortex of the macaque monkey
NASA Astrophysics Data System (ADS)
Recanzone, Gregg H.
2000-10-01
The patterns of cortico-cortical and cortico-thalamic connections of auditory cortical areas in the rhesus monkey have led to the hypothesis that acoustic information is processed in series and in parallel in the primate auditory cortex. Recent physiological experiments in the behaving monkey indicate that the response properties of neurons in different cortical areas are both functionally distinct from each other, which is indicative of parallel processing, and functionally similar to each other, which is indicative of serial processing. Thus, auditory cortical processing may be similar to the serial and parallel "what" and "where" processing by the primate visual cortex. If "where" information is serially processed in the primate auditory cortex, neurons in cortical areas along this pathway should have progressively better spatial tuning properties. This prediction is supported by recent experiments that have shown that neurons in the caudomedial field have better spatial tuning properties than neurons in the primary auditory cortex. Neurons in the caudomedial field are also better than primary auditory cortex neurons at predicting the sound localization ability across different stimulus frequencies and bandwidths in both azimuth and elevation. These data support the hypothesis that the primate auditory cortex processes acoustic information in a serial and parallel manner and suggest that this may be a general cortical mechanism for sensory perception.
Auditory discrimination training for tinnitus treatment: the effect of different paradigms.
Herraiz, Carlos; Diges, I; Cobo, P; Aparicio, J M; Toledano, A
2010-07-01
Acoustic deprivation, i.e. hearing loss, is responsible for a cascade of processes resulting in reorganisation of the cortex. Tinnitus mechanisms are explained by synchronization of neural spontaneous activity and might be related to cortical re-mapping. Auditory discrimination training (ADT) has been demonstrated, in both animals and humans, to induce tonotopic changes in the auditory pathways through neural plasticity. We hypothesize that ADT could have some effect on tinnitus perception. The objective of this study is to compare the effect on tinnitus of two paradigms of ADT. Only patients from 20 to 60 years of age were recruited. Inclusion criteria were pure tone tinnitus with mild or moderate handicap according to the Tinnitus Handicap Inventory score (<56). ADT patients were randomized into two groups: SAME (ADT at the same frequency as the tinnitus pitch, 20 patients) and NONSAME (ADT at the frequency one octave below the tinnitus pitch, 21 patients). Pairs of tones (70% standard tones, ST; 30% deviant tones, ST + 0.1-0.5 kHz) were presented in random order for 20 min/day for 1 month. Patients had to indicate whether the two sounds of each pair were the same or different. The control group included 26 patients from the waiting list (WLG). Patients were also divided according to the trained frequency and the most hearing-impaired frequency. Outcome parameters were the answer to the question "is your tinnitus better, the same, or worse with the treatment?" (RESP), the Tinnitus Handicap Inventory (THI), and a visual analogue scale from 1 to 10 for tinnitus intensity (VAS). Tinnitus improved in 42.2% of the patients (RESP). VAS and THI scores were reduced, but only the THI differences were statistically significant (P = 0.003). ADT patients improved significantly compared with the WLG in RESP and THI scores (P < 0.01).
Training at frequencies one octave below the tinnitus pitch (NONSAME) significantly decreased THI scores compared with training at frequencies similar to the tinnitus pitch (SAME, P = 0.035). RESP and VAS scores decreased more in the NONSAME group, though the differences were not significant. We did not find any differences between the group that trained the most hearing-impaired frequency and the group that trained other frequencies. Auditory discrimination training significantly improved tinnitus handicap compared to a waiting list group. Patients who trained at frequencies one octave below the tinnitus pitch had a better outcome than those who performed ADT at frequencies similar to the tinnitus pitch (P = 0.035).
A novel hybrid auditory BCI paradigm combining ASSR and P300.
Kaongoen, Netiwit; Jo, Sungho
2017-03-01
Brain-computer interface (BCI) is a technology that provides an alternative way of communication by translating brain activities into digital commands. Because patients with visual impairment cannot use vision-dependent BCIs, auditory stimuli have been used as a substitute for the conventional visual stimuli. This paper introduces a hybrid auditory BCI that combines the auditory steady state response (ASSR) and a spatial-auditory P300 BCI to improve the performance of the auditory BCI system. The system works by simultaneously presenting auditory stimuli with different pitches and amplitude modulation (AM) frequencies to the user, with beep sounds occurring randomly among all sound sources. Attention to different auditory stimuli yields different ASSRs, and beep sounds trigger the P300 response when they occur in the target channel, so the system can use both features for classification. The proposed ASSR/P300-hybrid auditory BCI system achieves 85.33% accuracy with an information transfer rate (ITR) of 9.11 bits/min in a binary classification problem, outperforming both the P300 BCI system (74.58% accuracy, 4.18 bits/min ITR) and the ASSR BCI system (66.68% accuracy, 2.01 bits/min ITR). The system is completely vision-independent. This work demonstrates that combining ASSR and P300 BCI into a hybrid system can yield better performance and could help in the development of future auditory BCIs. Copyright © 2017 Elsevier B.V. All rights reserved.
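Information transfer rates like those quoted above are conventionally computed with Wolpaw's formula, B = log2 N + P log2 P + (1 - P) log2((1 - P)/(N - 1)) bits per selection, scaled by the number of selections per minute. A sketch follows; the selections-per-minute value is an arbitrary placeholder, not a figure from the paper:

```python
from math import log2

def wolpaw_bits_per_selection(p, n_classes):
    """Wolpaw ITR in bits per selection for accuracy p over n_classes targets."""
    if not 0.0 < p < 1.0:
        raise ValueError("p must be strictly between 0 and 1 for this form")
    return (log2(n_classes)
            + p * log2(p)
            + (1 - p) * log2((1 - p) / (n_classes - 1)))

def itr_bits_per_min(p, n_classes, selections_per_min):
    """Bits per minute given a selection rate (placeholder rate below)."""
    return wolpaw_bits_per_selection(p, n_classes) * selections_per_min

# 85.33% binary accuracy carries about 0.4 bits per selection; the
# reported bits/min then depends on how many selections fit in a minute.
print(wolpaw_bits_per_selection(0.8533, 2))
print(itr_bits_per_min(0.8533, 2, 10))  # 10 selections/min is illustrative
```

Note that at chance accuracy (p = 0.5 for two classes) the formula correctly yields 0 bits per selection.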
Rapid estimation of high-parameter auditory-filter shapes
Shen, Yi; Sivakumar, Rajeswari; Richards, Virginia M.
2014-01-01
A Bayesian adaptive procedure, the quick-auditory-filter (qAF) procedure, was used to estimate auditory-filter shapes that were asymmetric about their peaks. In three experiments, listeners who were naive to psychoacoustic experiments detected a fixed-level, pure-tone target presented with a spectrally notched noise masker. The qAF procedure adaptively manipulated the masker spectrum level and the position of the masker notch, which was optimized for the efficient estimation of the five parameters of an auditory-filter model. Experiment I demonstrated that the qAF procedure provided a convergent estimate of the auditory-filter shape at 2 kHz within 150 to 200 trials (approximately 15 min to complete) and, for a majority of listeners, excellent test-retest reliability. In experiment II, asymmetric auditory filters were estimated for target frequencies of 1 and 4 kHz and target levels of 30 and 50 dB sound pressure level. The estimated filter shapes were generally consistent with published norms, especially at the low target level. It is known that the auditory-filter estimates are narrower for forward masking than simultaneous masking due to peripheral suppression, a result replicated in experiment III using fewer than 200 qAF trials. PMID:25324086
Neural Correlates of Vocal Production and Motor Control in Human Heschl's Gyrus
Oya, Hiroyuki; Nourski, Kirill V.; Kawasaki, Hiroto; Larson, Charles R.; Brugge, John F.; Howard, Matthew A.; Greenlee, Jeremy D.W.
2016-01-01
The present study investigated how pitch frequency, a perceptually relevant aspect of periodicity in natural human vocalizations, is encoded in Heschl's gyrus (HG), and how this information may be used to influence vocal pitch motor control. We recorded local field potentials from multicontact depth electrodes implanted in HG of 14 neurosurgical epilepsy patients as they vocalized vowel sounds and received brief (200 ms) pitch perturbations at 100 cents in their auditory feedback. Event-related band power responses to vocalizations showed sustained frequency following responses that tracked voice fundamental frequency (F0) and were significantly enhanced in posteromedial HG during speaking compared with when subjects listened to the playback of their own voice. In addition to frequency following responses, a transient response component within the high gamma frequency band (75–150 Hz) was identified. When this response followed the onset of vocalization, the magnitude of the response was the same for the speaking and playback conditions. In contrast, when this response followed a pitch shift, its magnitude was significantly enhanced during speaking compared with playback. We also observed that, in anterolateral HG, the power of high gamma responses to pitch shifts correlated with the magnitude of compensatory vocal responses. These findings demonstrate a functional parcellation of HG with neural activity that encodes pitch in natural human voice, distinguishes between self-generated and passively heard vocalizations, detects discrepancies between the intended and heard vocalization, and contains information about the resulting behavioral vocal compensations in response to auditory feedback pitch perturbations. SIGNIFICANCE STATEMENT The present study is a significant contribution to our understanding of sensorimotor mechanisms of vocal production and motor control.
The findings demonstrate distinct functional parcellation of core and noncore areas within human auditory cortex on Heschl's gyrus that process natural human vocalizations and pitch perturbations in the auditory feedback. In addition, our data provide evidence for distinct roles of high gamma neural oscillations and frequency following responses for processing periodicity in human vocalizations during vocal production and motor control. PMID:26888939
Gransier, Robin; Deprez, Hanne; Hofmann, Michael; Moonen, Marc; van Wieringen, Astrid; Wouters, Jan
2016-05-01
Previous studies have shown that objective measures based on stimulation with low-rate pulse trains fail to predict the threshold levels of cochlear implant (CI) users for high-rate pulse trains, as used in clinical devices. Electrically evoked auditory steady-state responses (EASSRs) can be elicited by modulated high-rate pulse trains, and can potentially be used to objectively determine threshold levels of CI users. The responsiveness of the auditory pathway of profoundly hearing-impaired CI users to different modulation frequencies is, however, not known. In the present study we investigated the responsiveness of the auditory pathway of CI users to a monopolar 500 pulses per second (pps) pulse train modulated at frequencies between 1 and 100 Hz. EASSRs to forty-three modulation frequencies, elicited at the subject's maximum comfort level, were recorded by means of electroencephalography. Stimulation artifacts were removed by linear interpolation between a pre- and post-stimulus sample (i.e., blanking). The phase delay across modulation frequencies was used to differentiate between the neural response and a possible residual stimulation artifact after blanking. Stimulation artifacts were longer than the inter-pulse interval of the 500 pps pulse train for recording electrodes ipsilateral to the CI; as a result, the artifacts at these electrodes could not be removed by linear interpolation. However, artifact-free responses could be obtained in all subjects from recording electrodes contralateral to the CI when subject-specific reference electrodes (Cz or Fpz) were used. Modulation frequencies within the 30-50 Hz range elicited significant EASSRs in all subjects. In contrast, only a small number of significant responses originating from the brain stem (i.e., modulation frequencies in the 80-100 Hz range) could be obtained during a measurement period of 5 min.
This reduced synchronized activity of brain stem responses in long-term, severely hearing-impaired CI users may reflect processes associated with long-term hearing impairment and/or electrical stimulation. Copyright © 2016 Elsevier B.V. All rights reserved.
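The blanking step described in this abstract, linear interpolation between a sample before and a sample after each stimulation artifact, can be sketched in a few lines; the sample indices and amplitudes below are illustrative, not taken from the paper:

```python
import numpy as np

def blank_artifact(signal, start, stop):
    """Replace samples in [start, stop) with a straight line drawn between
    signal[start - 1] and signal[stop] (linear interpolation), excising a
    per-pulse stimulation artifact from the recording."""
    out = signal.astype(float).copy()
    n = stop - start
    y0, y1 = out[start - 1], out[stop]  # clean samples bracketing the artifact
    # linspace includes both endpoints; keep only the interior points
    out[start:stop] = np.linspace(y0, y1, n + 2)[1:-1]
    return out

sig = np.array([0.0, 1.0, 50.0, -40.0, 2.0, 3.0])  # samples 2-3 are "artifact"
clean = blank_artifact(sig, 2, 4)
print(clean)  # artifact replaced by values interpolated between 1.0 and 2.0
```

As the abstract notes, this approach fails when the artifact outlasts the inter-pulse interval: there is then no clean post-stimulus sample to interpolate toward before the next pulse begins.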
Implicit versus explicit frequency comparisons: two mechanisms of auditory change detection.
Demany, Laurent; Semal, Catherine; Pressnitzer, Daniel
2011-04-01
Listeners had to compare, with respect to pitch (frequency), a pure tone (T) to a combination of pure tones presented subsequently (C). The elements of C were either synchronous, and therefore difficult to hear out individually, or asynchronous and therefore easier to hear out individually. In the "present/absent" condition, listeners had to judge if T reappeared in C or not. In the "up/down" condition, the task was to judge if the element of C most similar to T was higher or lower than T. When the elements of C were synchronous, the up/down task was found to be easier than the present/absent task; the converse result was obtained when the elements of C were asynchronous. This provides evidence for a duality of auditory comparisons between tone frequencies: (1) implicit comparisons made by automatic and direction-sensitive "frequency-shift detectors"; (2) explicit comparisons more sensitive to the magnitude of a frequency change than to its direction. Another experiment suggests that although the frequency-shift detectors cannot compare effectively two tones separated by an interfering tone, they are largely insensitive to interfering noise bursts.
Development of a Pitch Discrimination Screening Test for Preschool Children.
Abramson, Maria Kulick; Lloyd, Peter J
2016-04-01
There is a critical need for tests of auditory discrimination for young children, as this skill plays a fundamental role in the development of speaking, prereading, reading, language, and more complex auditory processes. Frequency discrimination is important with regard to basic sensory processing affecting phonological processing, dyslexia, measurements of intelligence, auditory memory, Asperger syndrome, and specific language impairment. This study was performed to determine the clinical feasibility of the Pitch Discrimination Test (PDT) for screening the preschool child's ability to discriminate some of the acoustic demands of speech perception, primarily pitch discrimination, without linguistic content. The PDT used brief tones at speech frequencies to gather normative data from preschool children aged 3 to 5 yrs. A cross-sectional design was used to gather data on the pitch discrimination abilities of a sample of typically developing preschool children between 3 and 5 yrs of age. The PDT consists of ten trials using two pure tones of 100-msec duration each, and was administered in an AA or AB forced-choice response format. Data from 90 typically developing preschool children between the ages of 3 and 5 yrs were used to provide normative values. Nonparametric Mann-Whitney U-testing was used to examine the effect of age, treated as a continuous variable, on pitch discrimination. The Kruskal-Wallis test was used to determine the significance of age effects on PDT performance, and the Spearman rank correlation was used to assess the association between age and PDT scores. Pitch discrimination of brief tones improved significantly from age 3 yrs to age 4 yrs, as well as from age 3 yrs to the combined 4- and 5-yr age group. Results indicated that between ages 3 and 4 yrs, children's auditory discrimination of pitch improved on the PDT. The data showed that children can be screened for auditory discrimination of pitch beginning at age 4 yrs.
The PDT proved to be a time-efficient, feasible tool for a simple form of frequency discrimination screening in the preschool population, before the age at which other diagnostic tests of auditory processing disorders can be used. American Academy of Audiology.
dos Santos Filha, Valdete Alves Valentins; Samelli, Alessandra Giannella; Matas, Carla Gentile
2015-09-11
Tinnitus is an important occupational health concern, but few studies have focused on the central auditory pathways of workers with a history of occupational noise exposure. Thus, we analyzed the central auditory pathways of workers with a history of occupational noise exposure who had normal hearing thresholds, and compared middle latency auditory evoked potentials in those with and without noise-induced tinnitus. Sixty individuals (30 with and 30 without tinnitus) underwent the following procedures: anamnesis, immittance measures, pure-tone air conduction thresholds at all frequencies between 0.25 and 8 kHz, and middle latency auditory evoked potentials. Quantitative analysis of latencies and amplitudes of the middle latency auditory evoked potential showed no significant differences between the groups with and without tinnitus. In the qualitative analysis, we found that both groups showed increased middle latency auditory evoked potential latencies. The study group had more alterations of the "both" type regarding the Na-Pa amplitude, while the control group had more "electrode effect" alterations; these differences between groups, however, were not statistically significant. Individuals with normal hearing, with or without tinnitus, who are exposed to occupational noise have altered middle latency auditory evoked potentials, suggesting impairment of the auditory pathways in cortical and subcortical regions. Although differences did not reach significance, individuals with tinnitus seemed to have more abnormalities in components of the middle latency auditory evoked potential than individuals without tinnitus, suggesting alterations in the generation and transmission of neuroelectrical impulses along the auditory pathway.
Association between heart rhythm and cortical sound processing.
Marcomini, Renata S; Frizzo, Ana Claúdia F; de Góes, Viviane B; Regaçone, Simone F; Garner, David M; Raimundo, Rodrigo D; Oliveira, Fernando R; Valenti, Vitor E
2018-04-26
Sound signal processing is an important factor in human conscious communication and may be assessed through cortical auditory evoked potentials (CAEP). Heart rate variability (HRV) provides information about autonomic regulation of heart rate. We investigated the association between resting HRV and the CAEP. We evaluated resting HRV in the time and frequency domains and the CAEP components. The subjects remained at rest for 10 minutes for HRV recording, then performed the CAEP examinations through frequency and duration protocols in both ears. Linear regression indicated that, in the frequency protocol, the amplitude of the N2 wave of the CAEP in the left ear (but not the right ear) was significantly influenced by two time-domain HRV indices: the standard deviation of normal-to-normal RR intervals (SDNN; 17.7%) and the percentage of adjacent RR intervals differing by more than 50 milliseconds (pNN50; 25.3%). In the duration protocol, the latency of the P2 wave in the left ear was significantly influenced by the low-frequency (LF; 20.8%) and high-frequency (HF) bands in normalized units (21%) and by the LF/HF ratio (22.4%) from HRV spectral analysis. The latency of the N2 wave was significantly influenced by LF (25.8%), HF (25.9%), and LF/HF (28.8%). In conclusion, we propose that resting heart rhythm is associated with thalamo-cortical, cortico-cortical, and auditory cortex pathways involved in auditory processing in the right hemisphere.
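The time-domain indices entering that regression, SDNN (standard deviation of normal-to-normal RR intervals) and pNN50 (percentage of successive RR intervals differing by more than 50 ms), can be computed directly from an RR-interval series; a minimal sketch with made-up intervals (population standard deviation is used here; a sample SD variant is also common):

```python
import numpy as np

def sdnn(rr_ms):
    """Standard deviation of normal-to-normal RR intervals, in ms."""
    return float(np.std(rr_ms))

def pnn50(rr_ms):
    """Percentage of successive RR-interval differences exceeding 50 ms."""
    diffs = np.abs(np.diff(rr_ms))
    return 100.0 * float(np.mean(diffs > 50))

rr = np.array([800.0, 810.0, 790.0, 850.0, 805.0])  # illustrative values
print(sdnn(rr))   # about 20.6 ms
print(pnn50(rr))  # 25.0: one of the four successive differences exceeds 50 ms
```

In practice the RR series would first be cleaned of ectopic beats so that only normal-to-normal intervals enter the computation.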
Jang, Jongmoon; Lee, JangWoo; Woo, Seongyong; Sly, David J; Campbell, Luke J; Cho, Jin-Ho; O'Leary, Stephen J; Park, Min-Hyun; Han, Sungmin; Choi, Ji-Wong; Jang, Jeong Hun; Choi, Hongsoo
2015-07-31
We proposed a piezoelectric artificial basilar membrane (ABM) composed of a microelectromechanical system cantilever array. The ABM mimics the tonotopy of the cochlea: frequency selectivity and mechanoelectric transduction. The fabricated ABM exhibits a clear tonotopy in an audible frequency range (2.92-12.6 kHz). Also, an animal model was used to verify the characteristics of the ABM as a front end for potential cochlear implant applications. For this, a signal processor was used to convert the piezoelectric output from the ABM to an electrical stimulus for auditory neurons. The electrical stimulus for auditory neurons was delivered through an implanted intra-cochlear electrode array. The amplitude of the electrical stimulus was modulated in the range of 0.15 to 3.5 V with incoming sound pressure levels (SPL) of 70.1 to 94.8 dB SPL. The electrical stimulus was used to elicit an electrically evoked auditory brainstem response (EABR) from deafened guinea pigs. EABRs were successfully measured and their magnitude increased upon application of acoustic stimuli from 75 to 95 dB SPL. The frequency selectivity of the ABM was estimated by measuring the magnitude of EABRs while applying sound pressure at the resonance and off-resonance frequencies of the corresponding cantilever of the selected channel. In this study, we demonstrated a novel piezoelectric ABM and verified its characteristics by measuring EABRs.
Correlation between the characteristics of resonance and aging of the external ear.
Silva, Aline Papin Roedas da; Blasca, Wanderléia Quinhoneiro; Lauris, José Roberto Pereira; Oliveira, Jerusa Roberta Massola de
2014-01-01
Aging causes changes in the external ear, such as collapse of the external auditory canal and a senile tympanic membrane. Knowledge of these changes is relevant to the diagnosis of hearing loss and the selection of hearing aids. For this reason, the study aimed to verify the influence of anatomical changes of the external ear on the resonance of the auditory canal in the elderly. The sample consisted of objective measures of the external ear of elderly subjects with canal collapse (group A), with a senile tympanic membrane (group B), and without alteration of the external auditory canal or tympanic membrane (group C), as well as adults without alteration of the external ear (group D). In this retrospective clinical study, measures from individuals with and without alteration of the external ear were compared using the Real Ear Unaided Response (REUR), the Real Ear Unaided Gain (REUG), and the frequency of the primary resonance peak of the right ear. Among groups A, B, and C there were statistically significant differences for REUR and REUG, but not for the peak frequency. Groups A and B showed significant differences in REUR and REUG. Between groups C and D, differences were statistically significant for REUR and REUG, but not for the frequency of the primary peak. Alterations of the external ear thus influence its resonance, decreasing its amplitude; the frequency of the primary peak, however, is not affected.
Acoustic and Auditory Perception Effects of the Voice Therapy Technique Finger Kazoo in Adult Women.
Christmann, Mara Keli; Cielo, Carla Aparecida
2017-05-01
This study aimed to verify and correlate acoustic and auditory-perceptual measures of the glottic source after performance of the finger kazoo (FK) technique. This is an experimental, cross-sectional, and qualitative study. We analyzed the vowel [a:] in 46 adult women with neither vocal complaints nor laryngeal alterations, using the Multi-Dimensional Voice Program Advanced and the RASATI scale, before performing three series of FK, immediately afterward, and after 5 minutes of silence. Kappa, Friedman, Wilcoxon, and Spearman tests were used. We found a significant increase in fundamental frequency and reductions in amplitude variation and in the degree of sub-harmonics immediately after performing FK. Positive correlations were found between aspects of RASATI and measures of frequency and its perturbation, measures of amplitude, the soft phonation index, and the degree and number of unvoiced segments. Negative correlations were found between aspects of RASATI and the voice turbulence index, measures of frequency and its perturbation, and the soft phonation index. There was an increase in fundamental frequency, within normal limits, and a reduction of acoustic measures related to the presence of noise and instability. In general, acoustic measures suggestive of noise and instability decreased along with the auditory-perceptual signs of vocal alteration. This shows that the two instruments are complementary and that the acoustic effect on the voice was positive. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
Anomal, Renata; de Villers-Sidani, Etienne; Merzenich, Michael M; Panizzutti, Rogerio
2013-01-01
Sensory experience powerfully shapes cortical sensory representations during an early developmental "critical period" of plasticity. In the rat primary auditory cortex (A1), the experience-dependent plasticity is exemplified by significant, long-lasting distortions in frequency representation after mere exposure to repetitive frequencies during the second week of life. In the visual system, the normal unfolding of critical period plasticity is strongly dependent on the elaboration of brain-derived neurotrophic factor (BDNF), which promotes the establishment of inhibition. Here, we tested the hypothesis that BDNF signaling plays a role in the experience-dependent plasticity induced by pure tone exposure during the critical period in the primary auditory cortex. Elvax resin implants filled with either a blocking antibody against BDNF or the BDNF protein were placed on the A1 of rat pups throughout the critical period window. These pups were then exposed to 7 kHz pure tone for 7 consecutive days and their frequency representations were mapped. BDNF blockade completely prevented the shaping of cortical tuning by experience and resulted in poor overall frequency tuning in A1. By contrast, BDNF infusion on the developing A1 amplified the effect of 7 kHz tone exposure compared to control. These results indicate that BDNF signaling participates in the experience-dependent plasticity induced by pure tone exposure during the critical period in A1.
Auditory Inhibition of Rapid Eye Movements and Dream Recall from REM Sleep
Stuart, Katrina; Conduit, Russell
2009-01-01
Study Objectives: There is debate in dream research as to whether ponto-geniculo-occipital (PGO) waves or cortical arousal during sleep underlie the biological mechanisms of dreaming. This study comprised 2 experiments. As eye movements (EMs) are currently considered the best noninvasive indicator of PGO burst activity in humans, the aim of the first experiment was to investigate the effect of low-intensity repeated auditory stimulation on EMs (and inferred PGO burst activity) during REM sleep. It was predicted that such auditory stimuli during REM sleep would have a suppressive effect on EMs. The aim of the second experiment was to examine the effects of this auditory stimulation on subsequent dream reporting on awakening. Design: Repeated measures design with counterbalanced order of experimental and control conditions across participants. Setting: Sleep laboratory-based polysomnography (PSG). Participants: Experiment 1: 5 males and 10 females aged 18-35 years (M = 20.8, SD = 5.4). Experiment 2: 7 males and 13 females aged 18-35 years (M = 23.3, SD = 5.5). Interventions: Below-waking-threshold tone presentations during REM sleep compared to control REM sleep conditions without tone presentations. Measurements and Results: PSG records were manually scored for sleep stages, EEG arousals, and EMs. Auditory stimulation during REM sleep was related to: (a) an increase in EEG arousal, (b) a decrease in the amplitude and frequency of EMs, and (c) a decrease in the frequency of visual imagery reports on awakening. Conclusions: The results of this study provide phenomenological support for PGO-based theories of dream reporting on awakening from sleep in humans. Citation: Stuart K; Conduit R. Auditory inhibition of rapid eye movements and dream recall from REM sleep. SLEEP 2009;32(3):399–408. PMID:19294960
Arakaki, Xianghong; Galbraith, Gary; Pikov, Victor; Fonteh, Alfred N.; Harrington, Michael G.
2014-01-01
Migraine symptoms often include auditory discomfort. Nitroglycerin (NTG)-triggered central sensitization (CS) provides a rodent model of migraine, but auditory brainstem pathways have not yet been studied in this model. Our objective was to examine brainstem auditory evoked potentials (BAEPs) in rat CS as a measure of possible auditory abnormalities. We used four subdermal electrodes to record horizontal (h) and vertical (v) dipole channel BAEPs before and after injection of NTG or saline. We measured the peak latencies (PLs), interpeak latencies (IPLs), and amplitudes for detectable waveforms evoked by 8, 16, or 32 kHz auditory stimulation. At 8 kHz stimulation, vertical channel positive PLs of waves 4, 5, and 6 (vP4, vP5, and vP6), and related IPLs from earlier negative or positive peaks (vN1-vP4, vN1-vP5, vN1-vP6; vP3-vP4, vP3-vP6) increased significantly 2 hours after NTG injection compared to the saline group. However, BAEP peak amplitudes at all frequencies, PLs and IPLs from the horizontal channel at all frequencies, and the vertical channel stimulated at 16 and 32 kHz showed no significant or consistent change. For the first time in the rat CS model, we show that BAEP PLs and IPLs ranging from the putative bilateral medial superior olivary nuclei (P4) to more rostral structures such as the medial geniculate body (P6) were prolonged 2 hours after NTG administration. These BAEP alterations could reflect changes in neurotransmitters and/or hypoperfusion in the midbrain. The similarity of our results with previous human studies further validates the rodent CS model for future migraine research. PMID:24680742
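Interpeak latencies of the kind reported here (e.g. vN1-vP4) are pairwise differences between labeled peak latencies. A small sketch; peak labels and values are illustrative, not data from the study:

```python
def interpeak_latencies(peak_latencies_ms):
    """Given labeled BAEP peak latencies (ms), return every interpeak
    latency (IPL) as the difference between a later and an earlier peak,
    keyed 'earlier-later' (e.g. 'vN1-vP4')."""
    labels = list(peak_latencies_ms)
    ipls = {}
    for i, a in enumerate(labels):
        for b in labels[i + 1:]:
            ipls[f"{a}-{b}"] = peak_latencies_ms[b] - peak_latencies_ms[a]
    return ipls
```

A prolongation of an IPL with unchanged earlier peaks localizes slowing to the later segment of the pathway, which is the logic behind comparing vN1-vP4 through vP3-vP6 across conditions.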
1993-05-28
1993 Dissertation. Only fragments of the abstract are recoverable: ...1982; Mesulam et al., 1983; Rye et al., 1984; Saper, 1984). I will refer to the region of the basal forebrain that supplies cholinergic innervation to... topographical organization has been observed for cholinergic projection patterns, with more rostral and medial basal forebrain cell groups supplying...
MEGALEX: A megastudy of visual and auditory word recognition.
Ferrand, Ludovic; Méot, Alain; Spinelli, Elsa; New, Boris; Pallier, Christophe; Bonin, Patrick; Dufau, Stéphane; Mathôt, Sebastiaan; Grainger, Jonathan
2018-06-01
Using the megastudy approach, we report a new database (MEGALEX) of visual and auditory lexical decision times and accuracy rates for tens of thousands of words. We collected visual lexical decision data for 28,466 French words and the same number of pseudowords, and auditory lexical decision data for 17,876 French words and the same number of pseudowords (synthesized tokens were used for the auditory modality). This constitutes the first large-scale database for auditory lexical decision, and the first database to enable a direct comparison of word recognition in different modalities. Different regression analyses were conducted to illustrate potential ways to exploit this megastudy database. First, we compared the proportions of variance accounted for by five word frequency measures. Second, we conducted item-level regression analyses to examine the relative importance of the lexical variables influencing performance in the different modalities (visual and auditory). Finally, we compared the similarities and differences between the two modalities. All data are freely available on our website ( https://sedufau.shinyapps.io/megalex/ ) and are searchable at www.lexique.org , inside the Open Lexique search engine.
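Comparing word-frequency measures by the proportion of variance they account for reduces, in the single-predictor case, to comparing squared correlations between each frequency measure and the decision times. A minimal pure-Python sketch; the data and names are invented for illustration, not drawn from MEGALEX:

```python
def r_squared(x, y):
    """Proportion of variance in y accounted for by a single predictor x
    under simple linear regression (the squared Pearson correlation)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Sums of squares and cross-products about the means
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)
```

Ranking candidate frequency measures by this quantity against item-level lexical decision times is one simple way to carry out the kind of comparison the authors describe (their actual analyses may involve additional covariates).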
Effects of musical training on the auditory cortex in children.
Trainor, Laurel J; Shahin, Antoine; Roberts, Larry E
2003-11-01
Several studies of the effects of musical experience on sound representations in the auditory cortex are reviewed. Auditory evoked potentials are compared in response to pure tones, violin tones, and piano tones in adult musicians versus nonmusicians as well as in 4- to 5-year-old children who have either had or not had extensive musical experience. In addition, the effects of auditory frequency discrimination training in adult nonmusicians on auditory evoked potentials are examined. It was found that the P2-evoked response is larger in both adult and child musicians than in nonmusicians and that auditory training enhances this component in nonmusician adults. The results suggest that the P2 is particularly neuroplastic and that the effects of musical experience can be seen early in development. They also suggest that although the effects of musical training on cortical representations may be greater if training begins in childhood, the adult brain is also open to change. These results are discussed with respect to potential benefits of early musical training as well as potential benefits of musical experience in aging.
Background sounds contribute to spectrotemporal plasticity in primary auditory cortex.
Moucha, Raluca; Pandya, Pritesh K; Engineer, Navzer D; Rathbun, Daniel L; Kilgard, Michael P
2005-05-01
The mammalian auditory system evolved to extract meaningful information from complex acoustic environments. Spectrotemporal selectivity of auditory neurons provides a potential mechanism to represent natural sounds. Experience-dependent plasticity mechanisms can remodel the spectrotemporal selectivity of neurons in primary auditory cortex (A1). Electrical stimulation of the cholinergic nucleus basalis (NB) enables plasticity in A1 that parallels natural learning and is specific to acoustic features associated with NB activity. In this study, we used NB stimulation to explore how cortical networks reorganize after experience with frequency-modulated (FM) sweeps, and how background stimuli contribute to spectrotemporal plasticity in rat auditory cortex. Pairing an 8-4 kHz FM sweep with NB stimulation 300 times per day for 20 days decreased tone thresholds, frequency selectivity, and response latency of A1 neurons in the region of the tonotopic map activated by the sound. In an attempt to modify neuronal response properties across all of A1 the same NB activation was paired in a second group of rats with five downward FM sweeps, each spanning a different octave. No changes in FM selectivity or receptive field (RF) structure were observed when the neural activation was distributed across the cortical surface. However, the addition of unpaired background sweeps of different rates or direction was sufficient to alter RF characteristics across the tonotopic map in a third group of rats. These results extend earlier observations that cortical neurons can develop stimulus specific plasticity and indicate that background conditions can strongly influence cortical plasticity.
[Perception and selectivity of sound duration in the central auditory midbrain].
Wang, Xin; Li, An-An; Wu, Fei-Jian
2010-08-25
Sound duration plays an important role in acoustic communication. Information in acoustic signals is mainly encoded in the amplitude and frequency spectra of components of different durations. Duration-selective neurons exist in the central auditory system, including the inferior colliculus (IC) of frogs, bats, mice, and chinchillas, and they are important for signal recognition and feature detection. Two generally accepted models, the "coincidence detector model" and the "anti-coincidence detector model", have been proposed to explain the mechanism of neural selectivity for sound duration, based on studies of IC neurons in bats. Although they differ in detail, both emphasize the importance of synaptic integration of excitatory and inhibitory inputs, and both can explain the responses of most duration-selective neurons. However, the hypotheses need refinement, since other sound parameters, such as spectral pattern, amplitude, and repetition rate, can affect the duration selectivity of these neurons. Dynamic changes in sound parameters are believed to enable animals to effectively recognize behaviorally relevant acoustic signals. Under free-field sound stimulation, we analyzed the responses of neurons in the IC and auditory cortex of mice and bats to sounds of different duration, frequency, and amplitude, using intracellular and extracellular recording techniques. Based on our work and previous studies, this article reviews the properties of duration selectivity in the central auditory system and discusses the mechanisms of duration selectivity and the effects of other sound parameters on the duration coding of auditory neurons.
Early auditory processing in area V5/MT+ of the congenitally blind brain.
Watkins, Kate E; Shakespeare, Timothy J; O'Donoghue, M Clare; Alexander, Iona; Ragge, Nicola; Cowey, Alan; Bridge, Holly
2013-11-13
Previous imaging studies of congenital blindness have studied individuals with heterogeneous causes of blindness, which may influence the nature and extent of cross-modal plasticity. Here, we scanned a homogeneous group of blind people with bilateral congenital anophthalmia, a condition in which both eyes fail to develop, and, as a result, the visual pathway is not stimulated by either light or retinal waves. This model of congenital blindness presents an opportunity to investigate the effects of very early visual deafferentation on the functional organization of the brain. In anophthalmic animals, the occipital cortex receives direct subcortical auditory input. We hypothesized that this pattern of subcortical reorganization ought to result in a topographic mapping of auditory frequency information in the occipital cortex of anophthalmic people. Using functional MRI, we examined auditory-evoked activity to pure tones of high, medium, and low frequencies. Activity in the superior temporal cortex was significantly reduced in anophthalmic compared with sighted participants. In the occipital cortex, a region corresponding to the cytoarchitectural area V5/MT+ was activated in the anophthalmic participants but not in sighted controls. Whereas previous studies in the blind indicate that this cortical area is activated to auditory motion, our data show it is also active for trains of pure tone stimuli and in some anophthalmic participants shows a topographic mapping (tonotopy). Therefore, this region appears to be performing early sensory processing, possibly served by direct subcortical input from the pulvinar to V5/MT+.
Emergence of band-pass filtering through adaptive spiking in the owl's cochlear nucleus
MacLeod, Katrina M.; Lubejko, Susan T.; Steinberg, Louisa J.; Köppl, Christine; Peña, Jose L.
2014-01-01
In the visual, auditory, and electrosensory modalities, stimuli are defined by first- and second-order attributes. The fast time-pressure signal of a sound, a first-order attribute, is important, for instance, in sound localization and pitch perception, while its slow amplitude-modulated envelope, a second-order attribute, can be used for sound recognition. Ascending the auditory pathway from ear to midbrain, neurons increasingly show a preference for the envelope and are most sensitive to particular envelope modulation frequencies, a tuning considered important for encoding sound identity. The level at which this tuning property emerges along the pathway varies across species, and the mechanism of how this occurs is a matter of debate. In this paper, we target the transition between auditory nerve fibers and the cochlear nucleus angularis (NA). While the owl's auditory nerve fibers simultaneously encode the fast and slow attributes of a sound, one synapse further, NA neurons encode the envelope more efficiently than the auditory nerve. Using in vivo and in vitro electrophysiology and computational analysis, we show that a single-cell mechanism inducing spike threshold adaptation can explain the difference in neural filtering between the two areas. We show that spike threshold adaptation can explain the increased selectivity to modulation frequency, as input level increases in NA. These results demonstrate that a spike generation nonlinearity can modulate the tuning to second-order stimulus features, without invoking network or synaptic mechanisms. PMID:24790170
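The single-cell mechanism described, spike threshold adaptation, can be sketched as a toy spike generator whose threshold jumps after each spike and then relaxes back: a sustained, unmodulated input is quickly silenced, while an input that dips and returns can cross the recovered threshold again, which favors modulated (envelope-carrying) inputs. The discrete-time form and all parameter values are illustrative assumptions, not a fitted model of NA neurons:

```python
def adaptive_threshold_spikes(v_in, dt=0.1, theta0=1.0, dtheta=0.5, tau=5.0):
    """Toy spike generator with an adapting threshold. Each spike raises
    the threshold by dtheta; between spikes the threshold decays back to
    its resting value theta0 with time constant tau (ms). Returns the
    indices of the input samples that elicited spikes."""
    theta = theta0
    spikes = []
    for i, v in enumerate(v_in):
        theta += dt * (theta0 - theta) / tau   # threshold relaxes toward theta0
        if v >= theta:                         # spike when input crosses threshold
            spikes.append(i)
            theta += dtheta                    # spike-triggered threshold increment
    return spikes
```

With a constant suprathreshold input, only the onset elicits a spike; if the input is then withdrawn and reapplied after the threshold has partly recovered, the neuron fires again. This is the sense in which a spike-generation nonlinearity alone, without network or synaptic mechanisms, can produce a preference for modulated over steady inputs.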