Ouimet, Tia; Foster, Nicholas E V; Tryfon, Ana; Hyde, Krista L
2012-04-01
Autism spectrum disorder (ASD) is a complex neurodevelopmental condition characterized by atypical social and communication skills, repetitive behaviors, and atypical visual and auditory perception. Studies in vision have reported enhanced detailed ("local") processing but diminished holistic ("global") processing of visual features in ASD. Individuals with ASD also show enhanced processing of simple visual stimuli but diminished processing of complex visual stimuli. Relative to the visual domain, auditory global-local distinctions, and the effects of stimulus complexity on auditory processing in ASD, are less clear. However, one remarkable finding is that many individuals with ASD have enhanced musical abilities, such as superior pitch processing. This review provides a critical evaluation of behavioral and brain imaging studies of auditory processing with respect to current theories in ASD. We have focused on auditory-musical processing in terms of global versus local processing and simple versus complex sound processing. This review contributes to a better understanding of auditory processing differences in ASD. A deeper comprehension of sensory perception in ASD is key to better defining ASD phenotypes and, in turn, may lead to better interventions. © 2012 New York Academy of Sciences.
Demodulation processes in auditory perception
NASA Astrophysics Data System (ADS)
Feth, Lawrence L.
1994-08-01
The long-range goal of this project is the understanding of human auditory processing of information conveyed by complex, time-varying signals such as speech, music, or important environmental sounds. Our work is guided by the assumption that human auditory communication is a 'modulation - demodulation' process. That is, we assume that sound sources produce a complex stream of sound pressure waves with information encoded as variations (modulations) of the signal amplitude and frequency. The listener's task is then one of demodulation. Much of past psychoacoustics work has been based on what we characterize as 'spectrum picture processing': complex sounds are Fourier analyzed to produce an amplitude-by-frequency 'picture', and the perception process is modeled as if the listener were analyzing the spectral picture. This approach leads to studies such as 'profile analysis' and the power-spectrum model of masking. Our approach leads us to investigate time-varying, complex sounds. We refer to them as dynamic signals, and we have developed auditory signal processing models to help guide our experimental work.
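The modulation-demodulation framing above lends itself to a worked example. The following minimal NumPy/SciPy sketch (parameters are illustrative, not taken from the project) imposes a slow amplitude modulation on a carrier and then recovers the envelope, the simplest form of the 'demodulation' a listener is assumed to perform:

```python
# Illustrative sketch: amplitude modulation by a "source" and envelope
# demodulation by a "listener". Signal parameters are arbitrary.
import numpy as np
from scipy.signal import hilbert

fs = 16000                      # sample rate (Hz)
t = np.arange(0, 0.5, 1 / fs)   # 0.5 s of signal

carrier = np.sin(2 * np.pi * 1000 * t)          # 1 kHz carrier
envelope = 1 + 0.8 * np.sin(2 * np.pi * 4 * t)  # 4 Hz amplitude modulation
am_signal = envelope * carrier                  # "modulation" by the source

# "Demodulation" modeled as envelope extraction via the analytic signal.
recovered = np.abs(hilbert(am_signal))

# The recovered envelope should track the imposed 4 Hz modulation.
err = np.mean(np.abs(recovered - envelope))
print(f"mean envelope recovery error: {err:.3f}")
```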
Enhanced pure-tone pitch discrimination among persons with autism but not Asperger syndrome.
Bonnel, Anna; McAdams, Stephen; Smith, Bennett; Berthiaume, Claude; Bertone, Armando; Ciocca, Valter; Burack, Jacob A; Mottron, Laurent
2010-07-01
Persons with autism spectrum disorders (ASD) display atypical perceptual processing in visual and auditory tasks. In vision, Bertone, Mottron, Jelenic, and Faubert (2005) found that enhanced and diminished visual processing is linked to the level of neural complexity required to process stimuli, as proposed in the neural complexity hypothesis. Based on these findings, Samson, Mottron, Jemel, Belin, and Ciocca (2006) proposed to extend the neural complexity hypothesis to the auditory modality. They hypothesized that persons with ASD should display enhanced performance for simple tones that are processed in primary auditory cortical regions, but diminished performance for complex tones that require additional processing in associative auditory regions, in comparison to typically developing individuals. To assess this hypothesis, we designed four auditory discrimination experiments targeting pitch, non-vocal and vocal timbre, and loudness. Stimuli consisted of spectro-temporally simple and complex tones. The participants were adolescents and young adults with autism, Asperger syndrome, and typical developmental histories, all with IQs in the normal range. Consistent with the neural complexity hypothesis and enhanced perceptual functioning model of ASD (Mottron, Dawson, Soulières, Hubert, & Burack, 2006), the participants with autism, but not with Asperger syndrome, displayed enhanced pitch discrimination for simple tones. However, no discrimination-threshold differences were found between the participants with ASD and the typically developing persons across spectrally and temporally complex conditions. These findings indicate that enhanced pure-tone pitch discrimination may be a cognitive correlate of speech delay among persons with ASD. However, auditory discrimination among this group does not appear to be directly contingent on the spectro-temporal complexity of the stimuli. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
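Discrimination thresholds of the kind compared in this study are typically estimated with adaptive procedures. The sketch below is a hypothetical 2-down-1-up staircase run against a simulated listener; it is meant only to illustrate how such a threshold estimate arises, not to reproduce the authors' protocol:

```python
# Hedged sketch of a 2-down-1-up adaptive staircase for a pitch-discrimination
# threshold. The simulated listener and all parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def listener_correct(delta_hz, true_threshold=4.0):
    """Simulated 2AFC listener: percent correct rises with the pitch difference."""
    p = 0.5 + 0.5 / (1 + np.exp(-(delta_hz - true_threshold)))  # 50%..100%
    return rng.random() < p

delta = 20.0            # starting frequency difference (Hz)
step = 2.0              # step size (Hz)
correct_in_a_row = 0
reversals, direction = [], 0

while len(reversals) < 8:
    if listener_correct(delta):
        correct_in_a_row += 1
        if correct_in_a_row == 2:           # 2 correct -> make the task harder
            correct_in_a_row = 0
            if direction == +1:
                reversals.append(delta)     # staircase turned downward
            direction = -1
            delta = max(delta - step, 0.5)
    else:
        correct_in_a_row = 0
        if direction == -1:
            reversals.append(delta)         # staircase turned upward
        direction = +1
        delta += step                       # 1 wrong -> make the task easier

threshold = np.mean(reversals[-6:])         # average the last reversals
print(f"estimated discrimination threshold: {threshold:.1f} Hz")
```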
Auditory brainstem response to complex sounds: a tutorial
Skoe, Erika; Kraus, Nina
2010-01-01
This tutorial provides a comprehensive overview of the methodological approach to collecting and analyzing auditory brainstem responses to complex sounds (cABRs). cABRs provide a window into how behaviorally relevant sounds such as speech and music are processed in the brain. Because temporal and spectral characteristics of sounds are preserved in this subcortical response, cABRs can be used to assess specific impairments and enhancements in auditory processing. Notably, subcortical function is neither passive nor hardwired but dynamically interacts with higher-level cognitive processes to refine how sounds are transcribed into neural code. This experience-dependent plasticity, which can occur on a number of time scales (e.g., life-long experience with speech or music, short-term auditory training, online auditory processing), helps shape sensory perception. Thus, by being an objective and non-invasive means for examining cognitive function and experience-dependent processes in sensory activity, cABRs have considerable utility in the study of populations where auditory function is of interest (e.g., auditory experts such as musicians, persons with hearing loss, auditory processing and language disorders). This tutorial is intended for clinicians and researchers seeking to integrate cABRs into their clinical and/or research programs. PMID:20084007
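Two analysis steps of the kind the tutorial covers, trial averaging and stimulus-to-response cross-correlation, can be sketched as follows. The signals and the 8 ms lag are synthetic placeholders, not data from the paper:

```python
# Simplified sketch (not the authors' code) of two common cABR analysis steps:
# average responses over trials, then cross-correlate the averaged response
# with the stimulus to estimate the neural lag.
import numpy as np

fs = 10000                                   # sampling rate (Hz)
t = np.arange(0, 0.2, 1 / fs)                # 200 ms epochs
stimulus = np.sin(2 * np.pi * 100 * t)       # stand-in for a 100 Hz speech F0

# Synthetic single-trial responses: stimulus delayed by ~8 ms plus noise.
lag_samples = int(0.008 * fs)
clean = np.roll(stimulus, lag_samples) * 0.1
trials = clean + 0.5 * np.random.default_rng(1).standard_normal((500, t.size))

avg_response = trials.mean(axis=0)           # averaging improves SNR ~ sqrt(N)

# Cross-correlate stimulus and averaged response over plausible lags (0-15 ms).
lags = np.arange(0, int(0.015 * fs))
r = [np.corrcoef(stimulus[: t.size - L], avg_response[L:])[0, 1] for L in lags]
best_lag_ms = lags[int(np.argmax(r))] / fs * 1000
print(f"estimated stimulus-to-response lag: {best_lag_ms:.1f} ms")
```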
Auditory Processing of Complex Sounds Across Frequency Channels.
1992-06-26
towards gaining an understanding of how the auditory system processes complex sounds. The results of binaural psychophysical experiments in human subjects...suggest (1) that spectrally synthetic binaural processing is the rule when the number of components in the tone complex is relatively few (less than...10) and there are no dynamic binaural cues to aid segregation of the target from the background, and (2) that waveforms having large effective
Mechanisms Mediating the Perception of Complex Acoustic Patterns
1990-11-09
units stimulated by the louder sound include the units stimulated by the fainter sound. Thus, auditory induction corresponds to a rather sophisticated... Five studies were... show how auditory mechanisms employed for the processing of complex nonverbal patterns have been modified for the perception of speech.
NASA Astrophysics Data System (ADS)
Moore, Brian C. J.
Psychoacoustics
Auditory connections and functions of prefrontal cortex
Plakke, Bethany; Romanski, Lizabeth M.
2014-01-01
The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition. PMID:25100931
Auditory psychophysics and perception.
Hirsh, I J; Watson, C S
1996-01-01
In this review of auditory psychophysics and perception, we cite some important books, research monographs, and research summaries from the past decade. Within auditory psychophysics, we have singled out some topics of current importance: Cross-Spectral Processing, Timbre and Pitch, and Methodological Developments. Complex sounds and complex listening tasks have been the subject of new studies in auditory perception. We review especially work that concerns auditory pattern perception, with emphasis on temporal aspects of the patterns and on patterns that do not depend on the cognitive structures often involved in the perception of speech and music. Finally, we comment on some aspects of individual difference that are sufficiently important to question the goal of characterizing auditory properties of the typical, average, adult listener. Among the important factors that give rise to these individual differences are those involved in selective processing and attention.
Intact Spectral but Abnormal Temporal Processing of Auditory Stimuli in Autism
ERIC Educational Resources Information Center
Groen, Wouter B.; van Orsouw, Linda; ter Huurne, Niels; Swinkels, Sophie; van der Gaag, Rutger-Jan; Buitelaar, Jan K.; Zwiers, Marcel P.
2009-01-01
The perceptual pattern in autism has been related to either a specific localized processing deficit or a pathway-independent, complexity-specific anomaly. We examined auditory perception in autism using an auditory disembedding task that required spectral and temporal integration. 23 children with high-functioning-autism and 23 matched controls…
Neural basis of processing threatening voices in a crowded auditory world
Mothes-Lasch, Martin; Becker, Michael P. I.; Miltner, Wolfgang H. R.
2016-01-01
In real world situations, we typically listen to voice prosody against a background crowded with auditory stimuli. Voices and background can both contain behaviorally relevant features and both can be selectively in the focus of attention. Adequate responses to threat-related voices under such conditions require that the brain unmixes reciprocally masked features depending on variable cognitive resources. It is unknown which brain systems instantiate the extraction of behaviorally relevant prosodic features under varying combinations of prosody valence, auditory background complexity and attentional focus. Here, we used event-related functional magnetic resonance imaging to investigate the effects of high background sound complexity and attentional focus on brain activation to angry and neutral prosody in humans. Results show that prosody effects in mid superior temporal cortex were gated by background complexity but not attention, while prosody effects in the amygdala and anterior superior temporal cortex were gated by attention but not background complexity, suggesting distinct emotional prosody processing limitations in different regions. Crucially, if attention was focused on the highly complex background, the differential processing of emotional prosody was prevented in all brain regions, suggesting that in a distracting, complex auditory world even threatening voices may go unnoticed. PMID:26884543
Can spectro-temporal complexity explain the autistic pattern of performance on auditory tasks?
Samson, Fabienne; Mottron, Laurent; Jemel, Boutheina; Belin, Pascal; Ciocca, Valter
2006-01-01
To test the hypothesis that the level of neural complexity explains the relative level of performance and brain activity in autistic individuals, available behavioural, ERP, and imaging findings related to the perception of increasingly complex auditory material under various processing tasks in autism were reviewed. Tasks involving simple material (pure tones) and/or low-level operations (detection, labelling, chord disembedding, detection of pitch changes) show a superior level of performance and shorter ERP latencies. In contrast, tasks involving spectrally and temporally dynamic material and/or complex operations (evaluation, attention) are poorly performed by autistics, or generate inferior ERP activity or brain activation. The neural complexity required to perform auditory tasks may therefore explain the pattern of performance and activation of autistic individuals during auditory tasks.
Prenatal Nicotine Exposure Disrupts Infant Neural Markers of Orienting.
King, Erin; Campbell, Alana; Belger, Aysenil; Grewen, Karen
2018-06-07
Prenatal nicotine exposure (PNE) from maternal cigarette smoking is linked to developmental deficits, including impaired auditory processing, language, generalized intelligence, attention, and sleep. Fetal brain undergoes massive growth, organization, and connectivity during gestation, making it particularly vulnerable to neurotoxic insult. Nicotine binds to nicotinic acetylcholine receptors, which are extensively involved in growth, connectivity, and function of developing neural circuitry and neurotransmitter systems. Thus, PNE may have long-term impact on neurobehavioral development. The purpose of this study was to compare the auditory K-complex, an event-related potential reflective of auditory gating, sleep preservation and memory consolidation during sleep, in infants with and without PNE and to relate these neural correlates to neurobehavioral development. We compared brain responses to an auditory paired-click paradigm in 3- to 5-month-old infants during Stage 2 sleep, when the K-complex is best observed. We measured component amplitude and delta activity during the K-complex. Infants with PNE demonstrated significantly smaller amplitude of the N550 component and reduced delta-band power within elicited K-complexes compared to nonexposed infants and also were less likely to orient with a head turn to a novel auditory stimulus (bell ring) when awake. PNE may impair auditory sensory gating, which may contribute to disrupted sleep and to reduced auditory discrimination and learning, attention re-orienting, and/or arousal during wakefulness reported in other studies. Links between PNE and reduced K-complex amplitude and delta power may represent altered cholinergic and GABAergic synaptic programming and possibly reflect early neural bases for PNE-linked disruptions in sleep quality and auditory processing. These may pose significant disadvantage for language acquisition, attention, and social interaction necessary for academic and social success.
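Delta-band activity of the sort quantified here is commonly computed as band-limited spectral power. A hedged illustration, with an arbitrary synthetic epoch standing in for the EEG around a K-complex:

```python
# Illustrative sketch (not the study's pipeline): delta-band (0.5-4 Hz) power
# in an EEG epoch, estimated with Welch's method. The epoch is random noise.
import numpy as np
from scipy.signal import welch

fs = 250                                    # assumed EEG sampling rate (Hz)
rng = np.random.default_rng(2)
epoch = rng.standard_normal(3 * fs)         # placeholder 3 s EEG epoch

freqs, psd = welch(epoch, fs=fs, nperseg=2 * fs)
delta_band = (freqs >= 0.5) & (freqs <= 4.0)
delta_power = psd[delta_band].sum() * (freqs[1] - freqs[0])
print(f"delta-band power: {delta_power:.3f} (arbitrary units)")
```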
Jones, S J; Longe, O; Vaz Pato, M
1998-03-01
Examination of the cortical auditory evoked potentials to complex tones changing in pitch and timbre suggests a useful new method for investigating higher auditory processes, in particular those concerned with 'streaming' and auditory object formation. The main conclusions were: (i) the N1 evoked by a sudden change in pitch or timbre was more posteriorly distributed than the N1 at the onset of the tone, indicating at least partial segregation of the neuronal populations responsive to sound onset and spectral change; (ii) the T-complex was consistently larger over the right hemisphere, consistent with clinical and PET evidence for particular involvement of the right temporal lobe in the processing of timbral and musical material; (iii) responses to timbral change were relatively unaffected by increasing the rate of interspersed changes in pitch, suggesting a mechanism for detecting the onset of a new voice in a constantly modulated sound stream; (iv) responses to onset, offset and pitch change of complex tones were relatively unaffected by interfering tones when the latter were of a different timbre, suggesting these responses must be generated subsequent to auditory stream segregation.
ERIC Educational Resources Information Center
Medwetsky, Larry
2011-01-01
Purpose: This article outlines the author's conceptualization of the key mechanisms that are engaged in the processing of spoken language, referred to as the spoken language processing model. The act of processing what is heard is very complex and involves the successful intertwining of auditory, cognitive, and language mechanisms. Spoken language…
Slevc, L Robert; Shell, Alison R
2015-01-01
Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.
Harmonic template neurons in primate auditory cortex underlying complex sound processing
Feng, Lei
2017-01-01
Harmonicity is a fundamental element of music, speech, and animal vocalizations. How the auditory system extracts harmonic structures embedded in complex sounds and uses them to form a coherent unitary entity is not fully understood. Despite the prevalence of sounds rich in harmonic structures in our everyday hearing environment, it has remained largely unknown what neural mechanisms are used by the primate auditory cortex to extract these biologically important acoustic structures. In this study, we discovered a unique class of harmonic template neurons in the core region of auditory cortex of a highly vocal New World primate, the common marmoset (Callithrix jacchus), across the entire hearing frequency range. Marmosets have a rich vocal repertoire and a similar hearing range to that of humans. Responses of these neurons show nonlinear facilitation to harmonic complex sounds over inharmonic sounds, selectivity for particular harmonic structures beyond two-tone combinations, and sensitivity to harmonic number and spectral regularity. Our findings suggest that the harmonic template neurons in auditory cortex may play an important role in processing sounds with harmonic structures, such as animal vocalizations, human speech, and music. PMID:28096341
Pinaud, Raphael; Terleph, Thomas A.; Tremere, Liisa A.; Phan, Mimi L.; Dagostin, André A.; Leão, Ricardo M.; Mello, Claudio V.; Vicario, David S.
2008-01-01
The role of GABA in the central processing of complex auditory signals is not fully understood. We have studied the involvement of GABAA-mediated inhibition in the processing of birdsong, a learned vocal communication signal requiring intact hearing for its development and maintenance. We focused on caudomedial nidopallium (NCM), an area analogous to parts of the mammalian auditory cortex with selective responses to birdsong. We present evidence that GABAA-mediated inhibition plays a pronounced role in NCM's auditory processing of birdsong. Using immunocytochemistry, we show that approximately half of NCM's neurons are GABAergic. Whole cell patch-clamp recordings in a slice preparation demonstrate that, at rest, spontaneously active GABAergic synapses inhibit excitatory inputs onto NCM neurons via GABAA receptors. Multi-electrode electrophysiological recordings in awake birds show that local blockade of GABAA-mediated inhibition in NCM markedly affects the temporal pattern of song-evoked responses in NCM without modifications in frequency tuning. Surprisingly, this blockade increases the phasic and largely suppresses the tonic response component, reflecting dynamic relationships of inhibitory networks that could include disinhibition. Thus processing of learned natural communication sounds in songbirds, and possibly other vocal learners, may depend on complex interactions of inhibitory networks. PMID:18480371
Franken, Matthias K; Eisner, Frank; Acheson, Daniel J; McQueen, James M; Hagoort, Peter; Schoffelen, Jan-Mathijs
2018-06-21
Speaking is a complex motor skill that requires near-instantaneous integration of sensory and motor-related information. Current theory hypothesizes a complex interplay between motor and auditory processes during speech production, involving the online comparison of the speech output with an internally generated forward model. To examine the neural correlates of this intricate interplay between sensory and motor processes, the current study uses altered auditory feedback (AAF) in combination with magnetoencephalography (MEG). Participants vocalized the vowel /e/ and heard auditory feedback that was temporarily pitch-shifted by only 25 cents, while neural activity was recorded with MEG. As a control condition, participants also heard the recordings of the same auditory feedback that they heard in the first half of the experiment, now without vocalizing. The participants were not aware of any perturbation of the auditory feedback. We found that auditory cortical areas responded more strongly to the pitch shifts during vocalization. In addition, auditory feedback perturbation resulted in spectral power increases in the θ and lower β bands, predominantly in sensorimotor areas. These results are in line with current models of speech production, suggesting auditory cortical areas are involved in an active comparison between a forward model's prediction and the actual sensory input. Subsequently, these areas interact with motor areas to generate a motor response. Furthermore, the results suggest that θ and β power increases support auditory-motor interaction, motor error detection and/or sensory prediction processing. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
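The size of the 25-cent perturbation follows from the standard cents-to-frequency relation, f_shifted = f * 2^(cents/1200). A small worked example with an assumed (illustrative) F0:

```python
# Worked example of the cents-to-frequency conversion behind the perturbation:
# a shift of c cents multiplies frequency by 2**(c / 1200).
f0 = 110.0                          # assumed vocal F0 in Hz (illustrative)
for cents in (25, 100, 200):
    shifted = f0 * 2 ** (cents / 1200)
    print(f"{cents:>3} cents: {f0:.1f} Hz -> {shifted:.2f} Hz")
# 25 cents is ~1.5% in frequency, which helps explain why it went unnoticed.
```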
Mechanism of auditory hypersensitivity in human autism using autism model rats.
Ida-Eto, Michiru; Hara, Nao; Ohkawara, Takeshi; Narita, Masaaki
2017-04-01
Auditory hypersensitivity is one of the major complications in autism spectrum disorder. The aim of this study was to investigate whether the auditory brain center is affected in autism model rats. Autism model rats were prepared by prenatal exposure to thalidomide on embryonic day 9 and 10 in pregnant rats. The superior olivary complex (SOC), a complex of auditory nuclei, was immunostained with anti-calbindin d28k antibody at postnatal day 50. In autism model rats, SOC immunoreactivity was markedly decreased. Strength of immunostaining of SOC auditory fibers was also weak in autism model rats. Surprisingly, the size of the medial nucleus of trapezoid body, a nucleus exerting inhibitory function in SOC, was significantly decreased in autism model rats. Auditory hypersensitivity may be, in part, due to impairment of inhibitory processing by the auditory brain center. © 2016 Japan Pediatric Society.
Halliday, Lorna F; Tuomainen, Outi; Rosen, Stuart
2017-09-01
There is a general consensus that many children and adults with dyslexia and/or specific language impairment display deficits in auditory processing. However, how these deficits are related to developmental disorders of language is uncertain, and at least four categories of model have been proposed: single distal cause models, risk factor models, association models, and consequence models. This study used children with mild to moderate sensorineural hearing loss (MMHL) to investigate the link between auditory processing deficits and language disorders. We examined the auditory processing and language skills of 46 8- to 16-year-old children with MMHL and 44 age-matched typically developing controls. Auditory processing abilities were assessed using child-friendly psychophysical techniques in order to obtain discrimination thresholds. Stimuli incorporated three different timescales (µs, ms, s) and three different levels of complexity (simple nonspeech tones, complex nonspeech sounds, speech sounds), and tasks required discrimination of frequency or amplitude cues. Language abilities were assessed using a battery of standardised assessments of phonological processing, reading, vocabulary, and grammar. We found evidence that three different auditory processing abilities showed different relationships with language: deficits in a general auditory processing component were necessary but not sufficient for language difficulties, and were consistent with a risk factor model; deficits in slow-rate amplitude modulation (envelope) detection were sufficient but not necessary for language difficulties, and were consistent with either a single distal cause or a consequence model; and deficits in the discrimination of a single speech contrast (/bɑ/ vs /dɑ/) were neither necessary nor sufficient for language difficulties, and were consistent with an association model. Our findings suggest that different auditory processing deficits may constitute distinct and independent routes to the development of language difficulties in children. Copyright © 2017 Elsevier B.V. All rights reserved.
Auditory spatial processing in Alzheimer’s disease
Golden, Hannah L.; Nicholas, Jennifer M.; Yong, Keir X. X.; Downey, Laura E.; Schott, Jonathan M.; Mummery, Catherine J.; Crutch, Sebastian J.
2015-01-01
The location and motion of sounds in space are important cues for encoding the auditory world. Spatial processing is a core component of auditory scene analysis, a cognitively demanding function that is vulnerable in Alzheimer’s disease. Here we designed a novel neuropsychological battery based on a virtual space paradigm to assess auditory spatial processing in patient cohorts with clinically typical Alzheimer’s disease (n = 20) and its major variant syndrome, posterior cortical atrophy (n = 12) in relation to healthy older controls (n = 26). We assessed three dimensions of auditory spatial function: externalized versus non-externalized sound discrimination, moving versus stationary sound discrimination and stationary auditory spatial position discrimination, together with non-spatial auditory and visual spatial control tasks. Neuroanatomical correlates of auditory spatial processing were assessed using voxel-based morphometry. Relative to healthy older controls, both patient groups exhibited impairments in detection of auditory motion, and stationary sound position discrimination. The posterior cortical atrophy group showed greater impairment for auditory motion processing and the processing of a non-spatial control complex auditory property (timbre) than the typical Alzheimer’s disease group. Voxel-based morphometry in the patient cohort revealed grey matter correlates of auditory motion detection and spatial position discrimination in right inferior parietal cortex and precuneus, respectively. These findings delineate auditory spatial processing deficits in typical and posterior Alzheimer’s disease phenotypes that are related to posterior cortical regions involved in both syndromic variants and modulated by the syndromic profile of brain degeneration. Auditory spatial deficits contribute to impaired spatial awareness in Alzheimer’s disease and may constitute a novel perceptual model for probing brain network disintegration across the Alzheimer’s disease syndromic spectrum. PMID:25468732
Morphological Effects in Auditory Word Recognition: Evidence from Danish
ERIC Educational Resources Information Center
Balling, Laura Winther; Baayen, R. Harald
2008-01-01
In this study, we investigate the processing of morphologically complex words in Danish using auditory lexical decision. We document a second critical point in auditory comprehension in addition to the Uniqueness Point (UP), namely the point at which competing morphological continuation forms of the base cease to be compatible with the input,…
Information-Processing Modules and Their Relative Modality Specificity
ERIC Educational Resources Information Center
Anderson, John R.; Qin, Yulin; Jung, Kwan-Jin; Carter, Cameron S.
2007-01-01
This research uses fMRI to understand the role of eight cortical regions in a relatively complex information-processing task. Modality of input (visual versus auditory) and modality of output (manual versus vocal) are manipulated. Two perceptual regions (auditory cortex and fusiform gyrus) only reflected perceptual encoding. Two motor regions were…
Tarasenko, Melissa A; Swerdlow, Neal R; Makeig, Scott; Braff, David L; Light, Gregory A
2014-01-01
Cognitive deficits limit psychosocial functioning in schizophrenia. For many patients, cognitive remediation approaches have yielded encouraging results. Nevertheless, therapeutic response is variable, and outcome studies consistently identify individuals who respond minimally to these interventions. Biomarkers that can assist in identifying patients likely to benefit from particular forms of cognitive remediation are needed. Here, we describe an event-related potential (ERP) biomarker - the auditory brain-stem response (ABR) to complex sounds (cABR) - that appears to be particularly well-suited for predicting response to at least one form of cognitive remediation that targets auditory information processing. Uniquely, the cABR quantifies the fidelity of sound encoded at the level of the brainstem and midbrain. This ERP biomarker has revealed auditory processing abnormalities in various neurodevelopmental disorders, correlates with functioning across several cognitive domains, and appears to be responsive to targeted auditory training. We present preliminary cABR data from 18 schizophrenia patients and propose further investigation of this biomarker for predicting and tracking response to cognitive interventions.
The Perception of Auditory Motion
Leung, Johahn
2016-01-01
The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control as well as in entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotations and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception. PMID:27094029
Incorporating Auditory Models in Speech/Audio Applications
NASA Astrophysics Data System (ADS)
Krishnamoorthi, Harish
2011-12-01
Following the success in incorporating perceptual models in audio coding algorithms, their application in other speech/audio processing systems is expanding. In general, all perceptual speech/audio processing algorithms involve minimization of an objective function that directly/indirectly incorporates properties of human perception. This dissertation primarily investigates the problems associated with directly embedding an auditory model in the objective function formulation and proposes possible solutions to overcome high complexity issues for use in real-time speech/audio algorithms. Specific problems addressed in this dissertation include: 1) the development of approximate but computationally efficient auditory model implementations that are consistent with the principles of psychoacoustics, 2) the development of a mapping scheme that allows synthesizing a time/frequency domain representation from its equivalent auditory model output. The first problem is aimed at addressing the high computational complexity involved in solving perceptual objective functions that require repeated application of the auditory model for evaluation of different candidate solutions. In this dissertation, frequency-pruning and detector-pruning algorithms are developed that efficiently implement the various auditory model stages. The performance of the pruned model is compared to that of the original auditory model for different types of test signals in the SQAM database. Experimental results indicate only a 4-7% relative error in loudness while attaining up to 80-90% reduction in computational complexity. Similarly, a hybrid algorithm is developed specifically for use with sinusoidal signals and employs the proposed auditory pattern combining technique together with a look-up table to store representative auditory patterns. The second problem concerns obtaining an estimate of the auditory representation that minimizes a perceptual objective function and transforming the auditory pattern back to its equivalent time/frequency representation. This avoids the repeated application of auditory model stages to test different candidate time/frequency vectors in minimizing perceptual objective functions. In this dissertation, a constrained mapping scheme is developed by linearizing certain auditory model stages, which ensures obtaining a time/frequency mapping corresponding to the estimated auditory representation. This paradigm was successfully incorporated in a perceptual speech enhancement algorithm and a sinusoidal component selection task.
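The frequency-pruning idea, skipping channels that contribute negligibly before the expensive per-channel computation, can be caricatured in a few lines. This toy sketch is not the dissertation's algorithm; the excitation values and the compressive 'loudness law' are placeholders:

```python
# Toy illustration of frequency pruning: skip auditory channels whose
# excitation falls below a threshold before the per-channel loudness
# computation, trading a small loudness error for fewer evaluations.
import numpy as np

rng = np.random.default_rng(3)
excitation = rng.lognormal(mean=0.0, sigma=2.0, size=64)   # fake channel excitations

def specific_loudness(e):
    return e ** 0.23                       # compressive stand-in for a loudness law

full = specific_loudness(excitation).sum()

threshold = 0.01 * excitation.max()        # prune channels far below the peak
kept = excitation[excitation >= threshold]
pruned = specific_loudness(kept).sum()

print(f"channels evaluated: {kept.size}/{excitation.size}")
print(f"relative loudness error: {abs(full - pruned) / full * 100:.2f}%")
```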
ERIC Educational Resources Information Center
Haesen, Birgitt; Boets, Bart; Wagemans, Johan
2011-01-01
This literature review aims to interpret behavioural and electrophysiological studies addressing auditory processing in children and adults with autism spectrum disorder (ASD). Data have been organised according to the applied methodology (behavioural versus electrophysiological studies) and according to stimulus complexity (pure versus complex…
Cortical Representations of Speech in a Multitalker Auditory Scene.
Puvvada, Krishna C; Simon, Jonathan Z
2017-09-20
The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in the auditory cortex contain dominantly spectrotemporal-based representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than that of individual streams being represented separately. We also show that higher-order auditory cortical areas, by contrast, represent the attended stream separately and with significantly higher fidelity than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object rather than as separated objects. Together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of the human auditory cortex. SIGNIFICANCE STATEMENT Using magnetoencephalography recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of the auditory cortex. We show that the primary-like areas in the auditory cortex use a dominantly spectrotemporal-based representation of the entire auditory scene, with both attended and unattended speech streams represented with almost equal fidelity. We also show that higher-order auditory cortical areas, by contrast, represent an attended speech stream separately from, and with significantly higher fidelity than, unattended speech streams. Furthermore, the unattended background streams are represented as a single undivided background object rather than as distinct background objects. Copyright © 2017 the authors 0270-6474/17/379189-08$15.00/0.
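Backward-model stimulus reconstruction of the kind referred to here is often implemented as regularized linear regression from sensor data to the speech envelope. The following is a hedged, synthetic-data sketch (no time lags, closed-form ridge), not the authors' pipeline:

```python
# Hedged sketch of linear (backward-model) stimulus reconstruction: estimate a
# decoder mapping multichannel neural data back to the speech envelope.
import numpy as np

rng = np.random.default_rng(4)
n_samples, n_channels = 2000, 32

envelope = np.convolve(rng.standard_normal(n_samples), np.ones(20) / 20, mode="same")
mixing = rng.standard_normal(n_channels)
neural = np.outer(envelope, mixing) + 0.5 * rng.standard_normal((n_samples, n_channels))

# Train on the first half, test on the second half.
X_tr, X_te = neural[:1000], neural[1000:]
y_tr, y_te = envelope[:1000], envelope[1000:]

lam = 1.0                                                   # ridge regularization
w = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(n_channels), X_tr.T @ y_tr)

reconstruction = X_te @ w
fidelity = np.corrcoef(reconstruction, y_te)[0, 1]          # reconstruction accuracy
print(f"reconstruction correlation on held-out data: {fidelity:.2f}")
```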
Testing the dual-pathway model for auditory processing in human cortex.
Zündorf, Ida C; Lewald, Jörg; Karnath, Hans-Otto
2016-01-01
Analogous to the visual system, auditory information has been proposed to be processed in two largely segregated streams: an anteroventral ("what") pathway mainly subserving sound identification and a posterodorsal ("where") stream mainly subserving sound localization. Despite the popularity of this assumption, the degree of separation of spatial and non-spatial auditory information processing in cortex is still under discussion. In the present study, a statistical approach was implemented to investigate potential behavioral dissociations for spatial and non-spatial auditory processing in stroke patients, and voxel-wise lesion analyses were used to uncover their neural correlates. The results generally provided support for anatomically and functionally segregated auditory networks. However, some degree of anatomo-functional overlap between "what" and "where" aspects of processing was found in the superior pars opercularis of right inferior frontal gyrus (Brodmann area 44), suggesting the potential existence of a shared target area of both auditory streams in this region. Moreover, beyond the typically defined posterodorsal stream (i.e., posterior superior temporal gyrus, inferior parietal lobule, and superior frontal sulcus), occipital lesions were found to be associated with sound localization deficits. These results, indicating anatomically and functionally complex cortical networks for spatial and non-spatial auditory processing, are roughly consistent with the dual-pathway model of auditory processing in its original form, but argue for the need to refine and extend this widely accepted hypothesis. Copyright © 2015 Elsevier Inc. All rights reserved.
Pilcher, June J; Jennings, Kristen S; Phillips, Ginger E; McCubbin, James A
2016-11-01
The current study investigated performance on a dual auditory task during a simulated night shift. Night shifts and sleep deprivation negatively affect performance on vigilance-based tasks, but less is known about the effects on complex tasks. Because language processing is necessary for successful work performance, it is important to understand how it is affected by night work and sleep deprivation. Sixty-two participants completed a simulated night shift resulting in 28 hr of total sleep deprivation. Performance on a vigilance task and a dual auditory language task was examined across four testing sessions. The results indicate that working at night negatively impacts vigilance, auditory attention, and comprehension. The effects on the auditory task varied based on the content of the auditory material. When the material was interesting and easy, the participants performed better. Night work had a greater negative effect when the auditory material was less interesting and more difficult. These findings support research that vigilance decreases during the night. The results suggest that auditory comprehension suffers when individuals are required to work at night. Maintaining attention and controlling effort especially on passages that are less interesting or more difficult could improve performance during night shifts. The results from the current study apply to many work environments where decision making is necessary in response to complex auditory information. Better predicting the effects of night work on language processing is important for developing improved means of coping with shiftwork. © 2016, Human Factors and Ergonomics Society.
Lifespan differences in nonlinear dynamics during rest and auditory oddball performance.
Müller, Viktor; Lindenberger, Ulman
2012-07-01
Electroencephalographic recordings (EEG) were used to assess age-associated differences in nonlinear brain dynamics during both rest and auditory oddball performance in children aged 9.0-12.8 years, younger adults, and older adults. We computed nonlinear coupling dynamics and dimensional complexity, and also determined spectral alpha power as an indicator of cortical reactivity. During rest, both nonlinear coupling and spectral alpha power decreased with age, whereas dimensional complexity increased. In contrast, when attending to the deviant stimulus, nonlinear coupling increased with age, and complexity decreased. Correlational analyses showed that nonlinear measures assessed during auditory oddball performance were reliably related to an independently assessed measure of perceptual speed. We conclude that cortical dynamics during rest and stimulus processing undergo substantial reorganization from childhood to old age, and propose that lifespan age differences in nonlinear dynamics during stimulus processing reflect lifespan changes in the functional organization of neuronal cell assemblies. © 2012 Blackwell Publishing Ltd.
Cichy, Radoslaw Martin; Teng, Santani
2017-02-19
In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique provides a spatio-temporally integrated view. In this opinion piece, we argue that progress can be made by a concerted effort based on three pillars of recent methodological development: (i) sensitive analysis techniques such as decoding and cross-classification, (ii) complex computational modelling using models such as deep neural networks, and (iii) integration across imaging methods (magnetoencephalography/electroencephalography, functional magnetic resonance imaging) and models, e.g. using representational similarity analysis. We showcase two recent efforts that have been undertaken in this spirit and provide novel results about visual and auditory scene analysis. Finally, we discuss the limits of this perspective and sketch a concrete roadmap for future research. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Authors.
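Representational similarity analysis, the integration tool named in point (iii), can be sketched compactly: build a representational dissimilarity matrix (RDM) per measurement and correlate the RDMs. The data below are synthetic and the pattern sizes arbitrary:

```python
# Minimal RSA sketch: condition-by-feature patterns from two "measurements"
# are turned into RDMs, which are then compared with a Spearman correlation.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
n_conditions = 12
shared = rng.standard_normal((n_conditions, 50))   # shared condition geometry

# Fake patterns for two measurement modalities sharing that geometry.
meg_patterns = shared + 0.5 * rng.standard_normal((n_conditions, 50))
fmri_patterns = (shared @ rng.standard_normal((50, 80)) / 50 ** 0.5
                 + 0.5 * rng.standard_normal((n_conditions, 80)))

# Condition-pair dissimilarities (1 - Pearson correlation) -> RDMs.
rdm_meg = pdist(meg_patterns, metric="correlation")
rdm_fmri = pdist(fmri_patterns, metric="correlation")

rho, _ = spearmanr(rdm_meg, rdm_fmri)       # second-order similarity of RDMs
print(f"MEG-fMRI RDM correlation (Spearman rho): {rho:.2f}")
```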
Complex auditory behaviour emerges from simple reactive steering
NASA Astrophysics Data System (ADS)
Hedwig, Berthold; Poulet, James F. A.
2004-08-01
The recognition and localization of sound signals is fundamental to acoustic communication. Complex neural mechanisms are thought to underlie the processing of species-specific sound patterns even in animals with simple auditory pathways. In female crickets, which orient towards the male's calling song, current models propose pattern recognition mechanisms based on the temporal structure of the song. Furthermore, it is thought that localization is achieved by comparing the output of the left and right recognition networks, which then directs the female to the pattern that most closely resembles the species-specific song. Here we show, using a highly sensitive method for measuring the movements of female crickets, that when walking and flying each sound pulse of the communication signal releases a rapid steering response. Thus auditory orientation emerges from reactive motor responses to individual sound pulses. Although the reactive motor responses are not based on the song structure, a pattern recognition process may modulate the gain of the responses on a longer timescale. These findings are relevant to concepts of insect auditory behaviour and to the development of biologically inspired robots performing cricket-like auditory orientation.
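The reactive-steering account can be caricatured as a pulse-triggered control rule. The sketch below is a schematic illustration with made-up gains, not the authors' model of cricket phonotaxis:

```python
# Schematic of reactive steering: each sound pulse triggers a small turn toward
# the louder ear, and orientation toward the source emerges from the
# accumulated pulse-by-pulse responses. All values are hypothetical.
import numpy as np

heading = 40.0                     # initial heading error relative to the source (deg)
gain = 0.2                         # turn size per pulse, as a fraction of the cue

for pulse in range(30):
    # Binaural cue: the intensity difference grows with the heading error.
    left_minus_right = np.sin(np.radians(heading))
    turn = -gain * 90 * left_minus_right      # turn toward the louder side
    heading += turn

print(f"heading error after 30 pulses: {heading:.1f} deg")
```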
A dynamic auditory-cognitive system supports speech-in-noise perception in older adults
Anderson, Samira; White-Schwoch, Travis; Parbery-Clark, Alexandra; Kraus, Nina
2013-01-01
Understanding speech in noise is one of the most complex activities encountered in everyday life, relying on peripheral hearing, central auditory processing, and cognition. These abilities decline with age, and so older adults are often frustrated by a reduced ability to communicate effectively in noisy environments. Many studies have examined these factors independently; in the last decade, however, the idea of the auditory-cognitive system has emerged, recognizing the need to consider the processing of complex sounds in the context of dynamic neural circuits. Here, we use structural equation modeling to evaluate interacting contributions of peripheral hearing, central processing, cognitive ability, and life experiences to understanding speech in noise. We recruited 120 older adults (ages 55 to 79) and evaluated their peripheral hearing status, cognitive skills, and central processing. We also collected demographic measures of life experiences, such as physical activity, intellectual engagement, and musical training. In our model, central processing and cognitive function predicted a significant proportion of variance in the ability to understand speech in noise. To a lesser extent, life experience predicted hearing-in-noise ability through modulation of brainstem function. Peripheral hearing levels did not significantly contribute to the model. Previous musical experience modulated the relative contributions of cognitive ability and lifestyle factors to hearing in noise. Our models demonstrate the complex interactions required to hear in noise and the importance of targeting cognitive function, lifestyle, and central auditory processing in the management of individuals who are having difficulty hearing in noise. PMID:23541911
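As a deliberately simplified stand-in for the structural equation model (not the study's analysis), the following sketch uses ordinary least squares on synthetic data to show how variance in a speech-in-noise score can be apportioned among predictors:

```python
# Simplified stand-in for the SEM analysis: ordinary least squares on synthetic
# data, asking how much variance in a speech-in-noise score each predictor
# set explains. Effect sizes and variable names are hypothetical.
import numpy as np

rng = np.random.default_rng(6)
n = 120
central = rng.standard_normal(n)          # central auditory processing score
cognition = rng.standard_normal(n)        # cognitive composite
hearing = rng.standard_normal(n)          # peripheral hearing level
speech_in_noise = (0.5 * central + 0.4 * cognition + 0.0 * hearing
                   + rng.standard_normal(n))

def r_squared(predictors):
    X = np.column_stack([np.ones(n)] + predictors)
    beta, *_ = np.linalg.lstsq(X, speech_in_noise, rcond=None)
    resid = speech_in_noise - X @ beta
    return 1 - resid.var() / speech_in_noise.var()

print(f"R^2, central + cognition: {r_squared([central, cognition]):.2f}")
print(f"R^2, hearing alone:       {r_squared([hearing]):.2f}")
```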
Response to own name in children: ERP study of auditory social information processing.
Key, Alexandra P; Jones, Dorita; Peters, Sarika U
2016-09-01
Auditory processing is an important component of cognitive development, and names are among the most frequently occurring receptive language stimuli. Although own name processing has been examined in infants and adults, surprisingly little data exist on responses to own name in children. The present ERP study examined spoken name processing in 32 children (M = 7.85 years) using a passive listening paradigm. Our results demonstrated that children differentiate own and close other's names from unknown names, as reflected by the enhanced parietal P300 response. The responses to own and close other names did not differ between each other. Repeated presentations of an unknown name did not result in the same familiarity as the known names. These results suggest that auditory ERPs to known/unknown names are a feasible means to evaluate complex auditory processing without the need for overt behavioral responses. Copyright © 2016 Elsevier B.V. All rights reserved.
Speech target modulates speaking induced suppression in auditory cortex
Ventura, Maria I; Nagarajan, Srikantan S; Houde, John F
2009-01-01
Background: Previous magnetoencephalography (MEG) studies have demonstrated speaking-induced suppression (SIS) in the auditory cortex during vocalization tasks wherein the M100 response to a subject's own speaking is reduced compared to the response when they hear playback of their speech. Results: The present MEG study investigated the effects of utterance rapidity and complexity on SIS: the greatest difference between speak and listen M100 amplitudes (i.e., most SIS) was found in the simple speech task. As the utterances became more rapid and complex, SIS was significantly reduced (p = 0.0003). Conclusion: These findings are highly consistent with our model of how auditory feedback is processed during speaking, where incoming feedback is compared with an efference-copy derived prediction of expected feedback. Thus, the results provide further insights about how speech motor output is controlled, as well as the computational role of auditory cortex in transforming auditory feedback. PMID:19523234
Perception of Long-Period Complex Sounds
1989-11-27
Richard M. Warren, AFOSR Grant No. 88-0320. References: Guttman, N., & Julesz, B. (1963). Lower limits of auditory periodicity analysis. Journal of the Acoustical... order within auditory sequences. Perception & Psychophysics, 12, 86-90. Watson, C.S. (1987). Uncertainty, informational masking, and the capacity of... immediate memory. In W.A. Yost and C.S. Watson (Eds.), Auditory Processing of Complex Sounds. New Jersey: Lawrence Erlbaum Associates, pp. 267-277.
Conserved mechanisms of vocalization coding in mammalian and songbird auditory midbrain.
Woolley, Sarah M N; Portfors, Christine V
2013-11-01
The ubiquity of social vocalizations among animals provides the opportunity to identify conserved mechanisms of auditory processing that subserve communication. Identifying auditory coding properties that are shared across vocal communicators will provide insight into how human auditory processing leads to speech perception. Here, we compare auditory response properties and neural coding of social vocalizations in auditory midbrain neurons of mammalian and avian vocal communicators. The auditory midbrain is a nexus of auditory processing because it receives and integrates information from multiple parallel pathways and provides the ascending auditory input to the thalamus. The auditory midbrain is also the first region in the ascending auditory system where neurons show complex tuning properties that are correlated with the acoustics of social vocalizations. Single unit studies in mice, bats and zebra finches reveal shared principles of auditory coding including tonotopy, excitatory and inhibitory interactions that shape responses to vocal signals, nonlinear response properties that are important for auditory coding of social vocalizations and modulation tuning. Additionally, single neuron responses in the mouse and songbird midbrain are reliable, selective for specific syllables, and rely on spike timing for neural discrimination of distinct vocalizations. We propose that future research on auditory coding of vocalizations in mouse and songbird midbrain neurons adopt similar experimental and analytical approaches so that conserved principles of vocalization coding may be distinguished from those that are specialized for each species. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives". Copyright © 2013 Elsevier B.V. All rights reserved.
Psycho acoustical Measures in Individuals with Congenital Visual Impairment.
Kumar, Kaushlendra; Thomas, Teenu; Bhat, Jayashree S; Ranjan, Rajesh
2017-12-01
In individuals with congenital visual impairment, one modality (vision) is impaired, and this impairment is compensated for by the other sensory modalities. There is evidence that visually impaired individuals perform better than normally sighted individuals on auditory tasks such as localization, auditory memory, verbal memory, auditory attention, and other behavioural tasks. The current study aimed to compare temporal resolution, frequency resolution, and speech perception in noise between individuals with congenital visual impairment and normally sighted individuals. Temporal resolution, frequency resolution, and speech perception in noise were measured using MDT, GDT, DDT, SRDT, and SNR50. Twelve participants with congenital visual impairment, aged 18 to 40 years, and an equal number of normally sighted participants took part. All participants had normal hearing sensitivity and normal middle ear functioning. Individuals with visual impairment showed superior thresholds on MDT, SRDT, and SNR50 compared with normally sighted individuals. This may be due to the complexity of the tasks; MDT, SRDT, and SNR50 are more complex tasks than GDT and DDT. Thus, individuals with visual impairment showed superior auditory processing and speech perception on the more complex auditory perceptual tasks.
Auditory reafferences: the influence of real-time feedback on movement control.
Kennel, Christian; Streese, Lukas; Pizzera, Alexandra; Justen, Christoph; Hohmann, Tanja; Raab, Markus
2015-01-01
Auditory reafferences are real-time auditory products created by a person's own movements. Whereas the interdependency of action and perception is generally well studied, the auditory feedback channel and the influence of perceptual processes during movement execution remain largely unconsidered. We argue that movements have a rhythmic character that is closely connected to sound, making it possible to manipulate auditory reafferences online to understand their role in motor control. We examined if step sounds, occurring as a by-product of running, have an influence on the performance of a complex movement task. Twenty participants completed a hurdling task in three auditory feedback conditions: a control condition with normal auditory feedback, a white noise condition in which sound was masked, and a delayed auditory feedback condition. Overall time and kinematic data were collected. Results show that delayed auditory feedback led to a significantly slower overall time and changed kinematic parameters. Our findings complement previous investigations in a natural movement situation with non-artificial auditory cues. Our results support the existing theoretical understanding of action-perception coupling and hold potential for applied work, where naturally occurring movement sounds can be implemented in the motor learning processes.
Behroozmand, Roozbeh; Korzyukov, Oleg; Larson, Charles R.
2012-01-01
Previous studies have shown that the pitch of a sound is perceived in the absence of its fundamental frequency (F0), suggesting that a distinct mechanism may resolve pitch based on a pattern that exists between harmonic frequencies. The present study investigated whether such a mechanism is active during voice pitch control. ERPs were recorded in response to +200 cents pitch shifts in the auditory feedback of self-vocalizations and complex tones with and without the F0. The absence of the fundamental induced no difference in ERP latencies. However, a right-hemisphere difference was found in the N1 amplitudes with larger responses to complex tones that included the fundamental compared to when it was missing. The P1 and N1 latencies were shorter in the left hemisphere, and the N1 and P2 amplitudes were larger bilaterally for pitch shifts in voice and complex tones compared with pure tones. These findings suggest hemispheric differences in neural encoding of pitch in sounds with missing fundamental. Data from the present study suggest that the right cortical auditory areas, thought to be specialized for spectral processing, may utilize different mechanisms to resolve pitch in sounds with missing fundamental. The left hemisphere seems to perform faster processing to resolve pitch based on the rate of temporal variations in complex sounds compared with pure tones. These effects indicate that the differential neural processing of pitch in the left and right hemispheres may enable the audio-vocal system to detect temporal and spectral variations in the auditory feedback for vocal pitch control. PMID:22386045
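Two quantitative details in this abstract are easy to make concrete in code: converting a pitch shift in cents to a frequency ratio, and building a complex tone whose fundamental (F0) is physically absent. The sketch below is illustrative only; the sampling rate, F0, and harmonic count are assumptions, not the study's stimulus parameters.

import numpy as np

def cents_to_ratio(cents):
    # A +200 cent shift corresponds to multiplying frequency by 2**(200/1200).
    return 2.0 ** (cents / 1200.0)

fs, dur, f0 = 44100, 0.5, 200.0            # assumed values
t = np.arange(int(fs * dur)) / fs

# Complex tone containing harmonics 2-6 only: F0 itself is missing,
# yet the pitch is still heard at F0 (missing-fundamental pitch).
tone = sum(np.sin(2 * np.pi * k * f0 * t) for k in range(2, 7))

shifted_f0 = f0 * cents_to_ratio(200)       # +200 cents is roughly 224.5 Hz here
print(tone.shape, f"shifted F0: {shifted_f0:.1f} Hz")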
The auditory scene: an fMRI study on melody and accompaniment in professional pianists.
Spada, Danilo; Verga, Laura; Iadanza, Antonella; Tettamanti, Marco; Perani, Daniela
2014-11-15
The auditory scene is a mental representation of individual sounds extracted from the summed sound waveform reaching the ears of the listeners. Musical contexts represent particularly complex cases of auditory scenes. In such a scenario, melody may be seen as the main object moving on a background represented by the accompaniment. Both melody and accompaniment vary in time according to harmonic rules, forming a typical texture with melody in the most prominent, salient voice. In the present sparse acquisition functional magnetic resonance imaging study, we investigated the interplay between melody and accompaniment in trained pianists, by observing the activation responses elicited by processing: (1) melody placed in the upper and lower texture voices, leading to, respectively, a higher and lower auditory salience; (2) harmonic violations occurring in either the melody, the accompaniment, or both. The results indicated that the neural activation elicited by the processing of polyphonic compositions in expert musicians depends upon the upper versus lower position of the melodic line in the texture, and showed an overall greater activation for the harmonic processing of melody over accompaniment. Both these two predominant effects were characterized by the involvement of the posterior cingulate cortex and precuneus, among other associative brain regions. We discuss the prominent role of the posterior medial cortex in the processing of melodic and harmonic information in the auditory stream, and propose to frame this processing in relation to the cognitive construction of complex multimodal sensory imagery scenes. Copyright © 2014 Elsevier Inc. All rights reserved.
Milner, Rafał; Rusiniak, Mateusz; Lewandowska, Monika; Wolak, Tomasz; Ganc, Małgorzata; Piątkowska-Janko, Ewa; Bogorodzki, Piotr; Skarżyński, Henryk
2014-01-01
Background: The neural underpinnings of auditory information processing have often been investigated using the odd-ball paradigm, in which infrequent sounds (deviants) are presented within a regular train of frequent stimuli (standards). Traditionally, this paradigm has been applied using either high temporal resolution (EEG) or high spatial resolution (fMRI, PET). However, used separately, these techniques cannot provide information on both the location and time course of particular neural processes. The goal of this study was to investigate the neural correlates of auditory processes with a fine spatio-temporal resolution. A simultaneous auditory evoked potentials (AEP) and functional magnetic resonance imaging (fMRI) technique (AEP-fMRI), together with an odd-ball paradigm, was used. Material/Methods: Six healthy volunteers, aged 20–35 years, participated in an odd-ball simultaneous AEP-fMRI experiment. AEP in response to acoustic stimuli were used to model bioelectric intracerebral generators, and electrophysiological results were integrated with fMRI data. Results: fMRI activation evoked by standard stimuli was found to occur mainly in the primary auditory cortex. Activity in these regions overlapped with intracerebral bioelectric sources (dipoles) of the N1 component. Dipoles of the N1/P2 complex in response to standard stimuli were also found in the auditory pathway between the thalamus and the auditory cortex. Deviant stimuli induced fMRI activity in the anterior cingulate gyrus, insula, and parietal lobes. Conclusions: The present study showed that neural processes evoked by standard stimuli occur predominantly in subcortical and cortical structures of the auditory pathway. Deviants activate areas non-specific for auditory information processing. PMID:24413019
Klein-Hennig, Martin; Dietz, Mathias; Hohmann, Volker
2018-03-01
Both harmonic and binaural signal properties are relevant for auditory processing. To investigate how these cues combine in the auditory system, detection thresholds for an 800-Hz tone masked by a diotic (i.e., identical between the ears) harmonic complex tone were measured in six normal-hearing subjects. The target tone was presented either diotically or with an interaural phase difference (IPD) of 180° and in either harmonic or "mistuned" relationship to the diotic masker. Three different maskers were used, a resolved and an unresolved complex tone (fundamental frequency: 160 and 40 Hz) with four components below and above the target frequency and a broadband unresolved complex tone with 12 additional components. The target IPD provided release from masking in most masker conditions, whereas mistuning led to a significant release from masking only in the diotic conditions with the resolved and the narrowband unresolved maskers. A significant effect of mistuning was neither found in the diotic condition with the wideband unresolved masker nor in any of the dichotic conditions. An auditory model with a single analysis frequency band and different binaural processing schemes was employed to predict the data of the unresolved masker conditions. Sensitivity to modulation cues was achieved by including an auditory-motivated modulation filter in the processing pathway. The predictions of the diotic data were in line with the experimental results and literature data in the narrowband condition, but not in the broadband condition, suggesting that across-frequency processing is involved in processing modulation information. The experimental and model results in the dichotic conditions show that the binaural processor cannot exploit modulation information in binaurally unmasked conditions. Copyright © 2017 Elsevier B.V. All rights reserved.
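The modelling step described above, extracting the envelope in a single analysis band and passing it through an auditory-motivated modulation filter, can be sketched roughly as follows. The filter orders, cutoff frequencies, and use of a Hilbert envelope are assumptions for illustration; they are not the parameters of the authors' model.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 48000                                   # assumed sampling rate
t = np.arange(int(fs * 0.5)) / fs
# Toy "unresolved" masker: 40-Hz harmonic complex centred near 800 Hz.
signal = sum(np.sin(2 * np.pi * f * t) for f in range(680, 961, 40))

# Stage 1: single peripheral analysis band around the 800-Hz target.
b, a = butter(2, [600 / (fs / 2), 1000 / (fs / 2)], btype="band")
band = filtfilt(b, a, signal)

# Stage 2: envelope extraction followed by a modulation band-pass filter.
envelope = np.abs(hilbert(band))
bm, am = butter(2, [20 / (fs / 2), 100 / (fs / 2)], btype="band")
modulation = filtfilt(bm, am, envelope)
print("Envelope modulation RMS:", np.sqrt(np.mean(modulation ** 2)))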
Lelo-de-Larrea-Mancera, E Sebastian; Rodríguez-Agudelo, Yaneth; Solís-Vivanco, Rodolfo
2017-06-01
Music represents a complex form of human cognition. To what extent our auditory system is attuned to music is yet to be clearly understood. Our principal aim was to determine whether the neurophysiological operations underlying pre-attentive auditory change detection (N1 enhancement (N1e)/Mismatch Negativity (MMN)) and the subsequent involuntary attentional reallocation (P3a) towards infrequent sound omissions are influenced by differences in musical content. Specifically, we intended to explore any interaction effects that rhythmic and pitch dimensions of musical organization may have on these processes. Results showed that both the N1e and MMN amplitudes were differentially influenced by rhythm and pitch dimensions. MMN latencies were shorter for musical structures containing both features. This suggests some neurocognitive independence between pitch and rhythm domains, but also calls for further investigation of possible interactions between the two at the level of early, automatic auditory detection. Furthermore, results demonstrate that the N1e reflects basic sensory memory processes. Lastly, we show that the involuntary switch of attention associated with the P3a reflects a general-purpose mechanism not modulated by musical features. Altogether, the N1e/MMN/P3a complex elicited by infrequent sound omissions revealed evidence of musical influence over early stages of auditory perception. Copyright © 2017 Elsevier Ltd. All rights reserved.
Sequencing the Cortical Processing of Pitch-Evoking Stimuli using EEG Analysis and Source Estimation
Butler, Blake E.; Trainor, Laurel J.
2012-01-01
Cues to pitch include spectral cues that arise from tonotopic organization and temporal cues that arise from firing patterns of auditory neurons. fMRI studies suggest a common pitch center is located just beyond primary auditory cortex along the lateral aspect of Heschl’s gyrus, but little work has examined the stages of processing for the integration of pitch cues. Using electroencephalography, we recorded cortical responses to high-pass filtered iterated rippled noise (IRN) and high-pass filtered complex harmonic stimuli, which differ in temporal and spectral content. The two stimulus types were matched for pitch saliency, and a mismatch negativity (MMN) response was elicited by infrequent pitch changes. The P1 and N1 components of event-related potentials (ERPs) are thought to arise from primary and secondary auditory areas, respectively, and to result from simple feature extraction. MMN is generated in secondary auditory cortex and is thought to act on feature-integrated auditory objects. We found that peak latencies of both P1 and N1 occur later in response to IRN stimuli than to complex harmonic stimuli, but found no latency differences between stimulus types for MMN. The location of each ERP component was estimated based on iterative fitting of regional sources in the auditory cortices. The sources of both the P1 and N1 components elicited by IRN stimuli were located dorsal to those elicited by complex harmonic stimuli, whereas no differences were observed for MMN sources across stimuli. Furthermore, the MMN component was located between the P1 and N1 components, consistent with fMRI studies indicating a common pitch region in lateral Heschl’s gyrus. These results suggest that while the spectral and temporal processing of different pitch-evoking stimuli involves different cortical areas during early processing, by the time the object-related MMN response is formed, these cues have been integrated into a common representation of pitch. PMID:22740836
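Iterated rippled noise (IRN), the temporally based pitch-evoking stimulus used here, is generated by repeatedly delaying a noise and adding it back onto itself; the perceived pitch corresponds roughly to 1/delay. A minimal sketch follows, with the delay, gain, iteration count, and sampling rate chosen for illustration rather than taken from the study.

import numpy as np

def iterated_rippled_noise(duration, delay_s, n_iter, gain=1.0, fs=44100, seed=0):
    """Delay-and-add IRN: each iteration adds a delayed copy of the running signal."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(int(duration * fs))
    d = int(round(delay_s * fs))
    for _ in range(n_iter):
        delayed = np.concatenate([np.zeros(d), x[:-d]])
        x = x + gain * delayed
    return x / np.max(np.abs(x))

# 4-ms delay gives a pitch near 250 Hz; 16 iterations give a fairly salient pitch.
irn = iterated_rippled_noise(duration=0.5, delay_s=0.004, n_iter=16)
print(irn.shape)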
A dynamic auditory-cognitive system supports speech-in-noise perception in older adults.
Anderson, Samira; White-Schwoch, Travis; Parbery-Clark, Alexandra; Kraus, Nina
2013-06-01
Understanding speech in noise is one of the most complex activities encountered in everyday life, relying on peripheral hearing, central auditory processing, and cognition. These abilities decline with age, and so older adults are often frustrated by a reduced ability to communicate effectively in noisy environments. Many studies have examined these factors independently; in the last decade, however, the idea of an auditory-cognitive system has emerged, recognizing the need to consider the processing of complex sounds in the context of dynamic neural circuits. Here, we used structural equation modeling to evaluate the interacting contributions of peripheral hearing, central processing, cognitive ability, and life experiences to understanding speech in noise. We recruited 120 older adults (ages 55-79) and evaluated their peripheral hearing status, cognitive skills, and central processing. We also collected demographic measures of life experiences, such as physical activity, intellectual engagement, and musical training. In our model, central processing and cognitive function predicted a significant proportion of variance in the ability to understand speech in noise. To a lesser extent, life experience predicted hearing-in-noise ability through modulation of brainstem function. Peripheral hearing levels did not significantly contribute to the model. Previous musical experience modulated the relative contributions of cognitive ability and lifestyle factors to hearing in noise. Our models demonstrate the complex interactions required to hear in noise and the importance of targeting cognitive function, lifestyle, and central auditory processing in the management of individuals who are having difficulty hearing in noise. Copyright © 2013 Elsevier B.V. All rights reserved.
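The structural-equation analysis described above relates several predictors (peripheral hearing, central processing, cognition, life experience) to speech-in-noise performance. As a rough stand-in, using ordinary least squares rather than full structural equation modeling and entirely simulated scores, the sketch below shows how such joint contributions might be screened; all variable names and values are invented.

import numpy as np

rng = np.random.default_rng(1)
n = 120                                       # matches the reported sample size only by coincidence of illustration
central, cognition, hearing, lifestyle = rng.standard_normal((4, n))
speech_in_noise = 0.5 * central + 0.4 * cognition + 0.1 * lifestyle + 0.5 * rng.standard_normal(n)

# Ordinary least squares as a simplified stand-in for the full SEM.
X = np.column_stack([np.ones(n), central, cognition, hearing, lifestyle])
coef, *_ = np.linalg.lstsq(X, speech_in_noise, rcond=None)
for name, b in zip(["intercept", "central", "cognition", "hearing", "lifestyle"], coef):
    print(f"{name:10s} {b:+.2f}")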
Xiang, Juanjuan; Simon, Jonathan; Elhilali, Mounya
2010-01-01
Processing of complex acoustic scenes depends critically on the temporal integration of sensory information as sounds evolve naturally over time. It has previously been speculated that this process is guided both by innate mechanisms of temporal processing in the auditory system and by top-down mechanisms of attention, and possibly other schema-based processes. In an effort to unravel the neural underpinnings of these processes and their role in scene analysis, we combine Magnetoencephalography (MEG) with behavioral measures in humans in the context of polyrhythmic tone sequences. While maintaining unchanged sensory input, we manipulate subjects' attention to one of two competing rhythmic streams in the same sequence. The results reveal that the neural representation of the attended rhythm is significantly enhanced both in its steady-state power and in its spatial phase coherence relative to its unattended state, closely correlating with its perceptual detectability for each listener. Interestingly, the data reveal a differential efficiency across rhythmic rates on the order of a few hertz during the streaming process, closely following known neural and behavioral measures of temporal modulation sensitivity in the auditory system. These findings establish a direct link between known temporal modulation tuning in the auditory system (particularly at the level of auditory cortex) and the temporal integration of perceptual features in a complex acoustic scene, as mediated by processes of attention. PMID:20826671
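The two neural measures reported here, steady-state power at the rhythm rate and phase coherence, can both be read off the Fourier coefficient at that rate. The sketch below computes steady-state power and an inter-trial phase-coherence analogue from simulated single-channel trials (the paper's measure is spatial phase coherence across sensors); the rhythm rate, trial length, and noise level are assumptions.

import numpy as np

fs, dur, rate = 500, 4.0, 4.0                 # sampling rate (Hz), trial length (s), rhythm rate (Hz); assumed
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(0)

# 50 simulated trials: a weak 4-Hz rhythm buried in noise.
trials = np.array([np.sin(2 * np.pi * rate * t) + 3 * rng.standard_normal(t.size)
                   for _ in range(50)])

# Fourier coefficient of each trial at the rhythm rate.
coeffs = trials @ np.exp(-2j * np.pi * rate * t) / t.size

power = np.mean(np.abs(coeffs) ** 2)                         # steady-state power
phase_coherence = np.abs(np.mean(coeffs / np.abs(coeffs)))   # inter-trial phase coherence, 0 to 1
print(f"power={power:.4f}, phase coherence={phase_coherence:.2f}")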
Linguistic and auditory temporal processing in children with specific language impairment.
Fortunato-Tavares, Talita; Rocha, Caroline Nunes; Andrade, Claudia Regina Furquim de; Befi-Lopes, Débora Maria; Schochat, Eliane; Hestvik, Arild; Schwartz, Richard G
2009-01-01
Several studies suggest an association between specific language impairment (SLI) and deficits in auditory processing. There is evidence that children with SLI show deficits in the discrimination of brief stimuli. Such a deficit would lead to difficulties in developing the phonological abilities necessary to map phonemes and to effectively and automatically code and decode words and sentences. However, the correlation between temporal processing (TP) and specific deficits in language disorders--such as syntactic comprehension abilities--has received little or no attention. The aim was to analyze the correlation between TP (assessed with the Frequency Pattern Test--FPT) and syntactic complexity comprehension (assessed with a sentence comprehension task). Sixteen children with typical language development (8;9 +/- 1;1 years) and seven children with SLI (8;1 +/- 1;2 years) participated in the study. Accuracy in both groups decreased with increasing syntactic complexity (both p < 0.01). In the between-groups comparison, the performance difference on the Test of Syntactic Complexity Comprehension (TSCC) was statistically significant (p = 0.02). As expected, children with SLI presented FPT performance outside reference values. In the SLI group, correlations between TSCC and FPT were positive and higher for high syntactic complexity (r = 0.97) than for low syntactic complexity (r = 0.51). Results suggest that FPT performance is positively correlated with syntactic complexity comprehension abilities. Low performance on the FPT could serve as an additional indicator of deficits in complex linguistic processing. Future studies should consider, besides larger samples, longitudinal designs that investigate the effect of frequency pattern auditory training on performance in high syntactic complexity comprehension tasks.
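The group-level correlations reported above (e.g., r = 0.97 between high-complexity TSCC scores and FPT in the SLI group) are ordinary Pearson correlations; a minimal sketch with made-up scores is shown below (not the study's data).

from scipy.stats import pearsonr

# Hypothetical per-child scores for an SLI group (invented values).
fpt = [20, 35, 40, 55, 60, 72, 80]          # Frequency Pattern Test (% correct)
tscc_high = [30, 42, 48, 60, 66, 75, 84]    # high syntactic complexity comprehension (% correct)

r, p = pearsonr(fpt, tscc_high)
print(f"r = {r:.2f}, p = {p:.3f}")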
Cortical contributions to the auditory frequency-following response revealed by MEG
Coffey, Emily B. J.; Herholz, Sibylle C.; Chepesiuk, Alexander M. P.; Baillet, Sylvain; Zatorre, Robert J.
2016-01-01
The auditory frequency-following response (FFR) to complex periodic sounds is used to study the subcortical auditory system, and has been proposed as a biomarker for disorders that feature abnormal sound processing. Despite its value in fundamental and clinical research, the neural origins of the FFR are unclear. Using magnetoencephalography, we observe a strong, right-asymmetric contribution to the FFR from the human auditory cortex at the fundamental frequency of the stimulus, in addition to signal from cochlear nucleus, inferior colliculus and medial geniculate. This finding is highly relevant for our understanding of plasticity and pathology in the auditory system, as well as higher-level cognition such as speech and music processing. It suggests that previous interpretations of the FFR may need re-examination using methods that allow for source separation. PMID:27009409
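A common way to quantify the FFR component these authors localize is the spectral amplitude of the averaged response at the stimulus fundamental. The following sketch computes that amplitude from a simulated averaged waveform; the F0, sampling rate, and response model are assumptions, not the study's parameters.

import numpy as np

fs, dur, f0 = 1000, 0.3, 100.0               # assumed sampling rate, epoch length, stimulus F0
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(2)

# Simulated averaged response: a small phase-locked component at F0 plus noise.
avg_response = 0.2 * np.sin(2 * np.pi * f0 * t) + 0.05 * rng.standard_normal(t.size)

spectrum = 2 * np.abs(np.fft.rfft(avg_response)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
f0_bin = np.argmin(np.abs(freqs - f0))
print(f"FFR amplitude at {freqs[f0_bin]:.0f} Hz: {spectrum[f0_bin]:.3f} (arb. units)")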
Moossavi, Abdollah; Mehrkian, Saiedeh; Lotfi, Yones; Faghihzadeh, Soghrat; Sajedi, Hamed
2014-11-01
Auditory processing disorder (APD) is a complex and heterogeneous disorder characterized by poor speech perception, especially in noisy environments. APD may be responsible for a range of sensory processing deficits associated with learning difficulties. There is no general consensus about the nature of APD or how the disorder should be assessed and managed. This study assessed the effect of cognitive abilities (working memory capacity) on sound lateralization in children with auditory processing disorder, in order to determine how "auditory cognition" interacts with APD. The participants in this cross-sectional comparative study were 20 typically developing children and 17 children with diagnosed auditory processing disorder (9-11 years old). Sound lateralization abilities were investigated using inter-aural time differences (ITDs) and inter-aural intensity differences (IIDs) with two stimuli (high-pass and low-pass noise) in nine perceived positions. Working memory capacity was evaluated using non-word repetition and forward and backward digit span tasks. Linear regression was employed to measure the degree of association between working memory capacity and lateralization performance in the two groups. Children in the APD group had consistently lower scores than typically developing children on lateralization and working memory capacity measures. Working memory capacity showed a significant negative correlation with ITD errors, especially for the high-pass noise stimulus, but not with IID errors in the APD children. The study highlights the impact of working memory capacity on auditory lateralization. The findings indicate that the extent to which working memory influences auditory processing depends on the type of auditory processing and the nature of the stimulus and listening situation. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
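The association analysis described here amounts to a simple linear regression of lateralization error on working memory score within each group; a sketch with invented values follows (scipy's linregress is used here purely for illustration, not as the study's statistical package).

from scipy.stats import linregress

# Hypothetical APD-group scores: working memory (digit-span-based) vs. ITD error count.
working_memory = [8, 10, 11, 12, 14, 15, 17, 18]
itd_errors_hp = [22, 20, 19, 17, 14, 13, 10, 9]   # high-pass noise condition

fit = linregress(working_memory, itd_errors_hp)
print(f"slope={fit.slope:.2f}, r={fit.rvalue:.2f}, p={fit.pvalue:.4f}")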
Maturation of the auditory t-complex brain response across adolescence.
Mahajan, Yatin; McArthur, Genevieve
2013-02-01
Adolescence is a time of great change in the brain in terms of structure and function. It is possible to track the development of neural function across adolescence using auditory event-related potentials (ERPs). This study tested if the brain's functional processing of sound changed across adolescence. We measured passive auditory t-complex peaks to pure tones and consonant-vowel (CV) syllables in 90 children and adolescents aged 10-18 years, as well as 10 adults. Across adolescence, Na amplitude increased to tones and speech at the right, but not left, temporal site. Ta amplitude decreased at the right temporal site for tones, and at both sites for speech. The Tb remained constant at both sites. The Na and Ta appeared to mature later in the right than left hemisphere. The t-complex peaks Na and Tb exhibited left lateralization and Ta showed right lateralization. Thus, the functional processing of sound continued to develop across adolescence and into adulthood. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
McKeown, Denis; Wellsted, David
2009-01-01
Psychophysical studies are reported examining how the context of recent auditory stimulation may modulate the processing of new sounds. The question posed is how recent tone stimulation may affect ongoing performance in a discrimination task. In the task, two complex sounds occurred in successive intervals. A single target component of one complex…
Agnosia for accents in primary progressive aphasia
Fletcher, Phillip D.; Downey, Laura E.; Agustus, Jennifer L.; Hailstone, Julia C.; Tyndall, Marina H.; Cifelli, Alberto; Schott, Jonathan M.; Warrington, Elizabeth K.; Warren, Jason D.
2013-01-01
As an example of complex auditory signal processing, the analysis of accented speech is potentially vulnerable in the progressive aphasias. However, the brain basis of accent processing and the effects of neurodegenerative disease on this processing are not well understood. Here we undertook a detailed neuropsychological study of a patient, AA with progressive nonfluent aphasia, in whom agnosia for accents was a prominent clinical feature. We designed a battery to assess AA's ability to process accents in relation to other complex auditory signals. AA's performance was compared with a cohort of 12 healthy age and gender matched control participants and with a second patient, PA, who had semantic dementia with phonagnosia and prosopagnosia but no reported difficulties with accent processing. Relative to healthy controls, the patients showed distinct profiles of accent agnosia. AA showed markedly impaired ability to distinguish change in an individual's accent despite being able to discriminate phonemes and voices (apperceptive accent agnosia); and in addition, a severe deficit of accent identification. In contrast, PA was able to perceive changes in accents, phonemes and voices normally, but showed a relatively mild deficit of accent identification (associative accent agnosia). Both patients showed deficits of voice and environmental sound identification, however PA showed an additional deficit of face identification whereas AA was able to identify (though not name) faces normally. These profiles suggest that AA has conjoint (or interacting) deficits involving both apperceptive and semantic processing of accents, while PA has a primary semantic (associative) deficit affecting accents along with other kinds of auditory objects and extending beyond the auditory modality. Brain MRI revealed left peri-Sylvian atrophy in case AA and relatively focal asymmetric (predominantly right sided) temporal lobe atrophy in case PA. These cases provide further evidence for the fractionation of brain mechanisms for complex sound analysis, and for the stratification of progressive aphasia syndromes according to the signature of nonverbal auditory deficits they produce. PMID:23721780
Salisbury, Dean F; McCathern, Alexis G
2016-11-01
The simple mismatch negativity (MMN) to tones deviating physically (in pitch, loudness, duration, etc.) from repeated standard tones is robustly reduced in schizophrenia. Although generally interpreted to reflect memory or cognitive processes, simple MMN likely contains some activity from non-adapted sensory cells, clouding what process is affected in schizophrenia. Research in healthy participants has demonstrated that MMN can be elicited by deviations from abstract auditory patterns and complex rules that do not cause sensory adaptation. Whether persons with schizophrenia show abnormalities in the complex MMN is unknown. Fourteen participants with schizophrenia and 16 matched healthy controls underwent EEG recording while listening to 400 groups of 6 tones presented 330 ms apart, with groups separated by 800 ms. Occasional deviant groups were missing the 4th or 6th tone (50 groups each). Healthy participants generated a robust response to a missing but expected tone. The schizophrenia group was significantly impaired in generating the missing-stimulus MMN, producing no significant activity at all. Schizophrenia affects the ability of "primitive sensory intelligence" and pre-attentive perceptual mechanisms to form implicit groups in the auditory environment. Importantly, this deficit must relate to abnormalities in abstract complex pattern analysis rather than sensory problems in the disorder. The results indicate a deficit in parsing of the complex auditory scene, which likely impacts negatively on successful social navigation in schizophrenia. Knowledge of the location and circuit architecture underlying the true novelty-related MMN and its pathophysiology in schizophrenia will help target future interventions.
Auditory scene analysis in school-aged children with developmental language disorders
Sussman, E.; Steinschneider, M.; Lee, W.; Lawson, K.
2014-01-01
Natural sound environments are dynamic, with overlapping acoustic input originating from simultaneously active sources. A key function of the auditory system is to integrate sensory inputs that belong together and segregate those that come from different sources. We hypothesized that this skill is impaired in individuals with phonological processing difficulties. There is considerable disagreement about whether phonological impairments observed in children with developmental language disorders can be attributed to specific linguistic deficits or to more general acoustic processing deficits. However, most tests of general auditory abilities have been conducted with a single set of sounds. We assessed the ability of school-aged children (7–15 years) to parse complex auditory non-speech input, and determined whether the presence of phonological processing impairments was associated with stream perception performance. A key finding was that children with language impairments did not show the same developmental trajectory for stream perception as typically developing children. In addition, children with language impairments required larger frequency separations between sounds to hear distinct streams compared to age-matched peers. Furthermore, phonological processing ability was a significant predictor of stream perception measures, but only in the older age groups. No such association was found in the youngest children. These results indicate that children with language impairments have difficulty parsing speech streams, or identifying individual sound events when there are competing sound sources. We conclude that language group differences may in part reflect fundamental maturational disparities in the analysis of complex auditory scenes. PMID:24548430
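Stream-perception tasks of this kind typically manipulate the frequency separation between two interleaved tone sequences (for example, an ABA- triplet paradigm), with larger separations making two streams easier to hear. The sketch below generates such a sequence for an arbitrary separation in semitones; all parameters are illustrative, not those of the study.

import numpy as np

def aba_sequence(base_hz=400.0, separation_semitones=6, tone_ms=60, gap_ms=40,
                 n_triplets=10, fs=44100):
    """Build an ABA- tone-triplet sequence; B is `separation_semitones` above A."""
    b_hz = base_hz * 2 ** (separation_semitones / 12)
    t = np.arange(int(fs * tone_ms / 1000)) / fs
    tone = lambda f: np.sin(2 * np.pi * f * t)
    gap = np.zeros(int(fs * gap_ms / 1000))
    triplet = np.concatenate([tone(base_hz), gap, tone(b_hz), gap,
                              tone(base_hz), gap, np.zeros_like(t), gap])
    return np.tile(triplet, n_triplets)

seq = aba_sequence(separation_semitones=9)   # large separation: more likely heard as two streams
print(seq.shape)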
Temporal integration at consecutive processing stages in the auditory pathway of the grasshopper.
Wirtssohn, Sarah; Ronacher, Bernhard
2015-04-01
Temporal integration in the auditory system of locusts was quantified by presenting single clicks and click pairs while performing intracellular recordings. Auditory neurons were studied at three processing stages, which form a feed-forward network in the metathoracic ganglion. Receptor neurons and most first-order interneurons ("local neurons") encode the signal envelope, while second-order interneurons ("ascending neurons") tend to extract more complex, behaviorally relevant sound features. In different neuron types of the auditory pathway we found three response types: no significant temporal integration (some ascending neurons), leaky energy integration (receptor neurons and some local neurons), and facilitatory processes (some local and ascending neurons). The receptor neurons integrated input over very short time windows (<2 ms). Temporal integration on longer time scales was found at subsequent processing stages, indicative of within-neuron computations and network activity. These different strategies, realized at separate processing stages and in parallel neuronal pathways within one processing stage, could enable the grasshopper's auditory system to evaluate longer time windows and thus to implement temporal filters, while at the same time maintaining a high temporal resolution. Copyright © 2015 the American Physiological Society.
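The "leaky energy integration" response type found in receptor neurons can be illustrated with a first-order leaky integrator driven by a click pair; the time constant and click parameters below are assumptions chosen to show the qualitative effect of inter-click interval, not values fitted to the recordings.

import numpy as np

def leaky_integrator_response(click_interval_ms, tau_ms=2.0, fs=40000):
    """Peak output of a first-order leaky integrator driven by two unit clicks."""
    n = int(fs * 0.02)                            # 20-ms simulation window
    drive = np.zeros(n)
    drive[0] = 1.0
    drive[int(fs * click_interval_ms / 1000)] += 1.0
    out = np.zeros(n)
    alpha = np.exp(-1.0 / (fs * tau_ms / 1000))   # per-sample exponential decay
    for i in range(1, n):
        out[i] = alpha * out[i - 1] + drive[i]
    return out.max()

# Clicks falling within the integration window summate; widely spaced clicks do not.
for ici in (0.5, 1.0, 2.0, 5.0):
    print(f"{ici} ms inter-click interval -> peak {leaky_integrator_response(ici):.2f}")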
Terband, H.; Maassen, B.; Guenther, F.H.; Brumberg, J.
2014-01-01
Background/Purpose Differentiating the symptom complex due to phonological-level disorders, speech delay and pediatric motor speech disorders is a controversial issue in the field of pediatric speech and language pathology. The present study investigated the developmental interaction between neurological deficits in auditory and motor processes using computational modeling with the DIVA model. Method In a series of computer simulations, we investigated the effect of a motor processing deficit alone (MPD), and the effect of a motor processing deficit in combination with an auditory processing deficit (MPD+APD) on the trajectory and endpoint of speech motor development in the DIVA model. Results Simulation results showed that a motor programming deficit predominantly leads to deterioration on the phonological level (phonemic mappings) when auditory self-monitoring is intact, and on the systemic level (systemic mapping) if auditory self-monitoring is impaired. Conclusions These findings suggest a close relation between quality of auditory self-monitoring and the involvement of phonological vs. motor processes in children with pediatric motor speech disorders. It is suggested that MPD+APD might be involved in typically apraxic speech output disorders and MPD in pediatric motor speech disorders that also have a phonological component. Possibilities to verify these hypotheses using empirical data collected from human subjects are discussed. PMID:24491630
Wildgruber, Dirk; Szameitat, Diana P; Ethofer, Thomas; Brück, Carolin; Alter, Kai; Grodd, Wolfgang; Kreifelts, Benjamin
2013-01-01
Laughter is an ancient signal of social communication among humans and non-human primates. Laughter types with complex social functions (e.g., taunt and joy) presumably evolved from the unequivocal and reflex-like social bonding signal of tickling laughter already present in non-human primates. Here, we investigated the modulations of cerebral connectivity associated with different laughter types as well as the effects of attention shifts between implicit and explicit processing of social information conveyed by laughter using functional magnetic resonance imaging (fMRI). Complex social laughter types and tickling laughter were found to modulate connectivity in two distinguishable but partially overlapping parts of the laughter perception network irrespective of task instructions. Connectivity changes, presumably related to the higher acoustic complexity of tickling laughter, occurred between areas in the prefrontal cortex and the auditory association cortex, potentially reflecting higher demands on acoustic analysis associated with increased information load on auditory attention, working memory, evaluation and response selection processes. In contrast, the higher degree of socio-relational information in complex social laughter types was linked to increases of connectivity between auditory association cortices, the right dorsolateral prefrontal cortex and brain areas associated with mentalizing as well as areas in the visual associative cortex. These modulations might reflect automatic analysis of acoustic features, attention direction to informative aspects of the laughter signal and the retention of those in working memory during evaluation processes. These processes may be associated with visual imagery supporting the formation of inferences on the intentions of our social counterparts. Here, the right dorsolateral precentral cortex appears as a network node potentially linking the functions of auditory and visual associative sensory cortices with those of the mentalizing-associated anterior mediofrontal cortex during the decoding of social information in laughter.
Binaural fusion and the representation of virtual pitch in the human auditory cortex.
Pantev, C; Elbert, T; Ross, B; Eulitz, C; Terhardt, E
1996-10-01
The auditory system derives the pitch of complex tones from the tone's harmonics. Research in psychoacoustics predicted that binaural fusion was an important feature of pitch processing. Based on neuromagnetic human data, the first neurophysiological confirmation of binaural fusion in hearing is presented. The centre of activation within the cortical tonotopic map corresponds to the location of the perceived pitch and not to the locations that are activated when the single frequency constituents are presented. This is also true when the different harmonics of a complex tone are presented dichotically. We conclude that the pitch processor includes binaural fusion to determine the particular pitch location which is activated in the auditory cortex.
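The dichotic condition described here, with different harmonics of the same complex tone delivered to each ear yet fused into one pitch, can be sketched as a stereo stimulus in a few lines. The F0, harmonic split, and duration are illustrative assumptions, not the stimulus parameters of the study.

import numpy as np

fs, dur, f0 = 44100, 1.0, 250.0              # assumed values
t = np.arange(int(fs * dur)) / fs

# Odd harmonics to the left ear, even harmonics to the right ear;
# listeners typically fuse these into a single pitch at F0.
left = sum(np.sin(2 * np.pi * k * f0 * t) for k in (3, 5, 7))
right = sum(np.sin(2 * np.pi * k * f0 * t) for k in (2, 4, 6))
stereo = np.column_stack([left / np.max(np.abs(left)),
                          right / np.max(np.abs(right))])
print(stereo.shape)                           # (samples, 2), ready for dichotic playback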
Corticofugal modulation of peripheral auditory responses
Terreros, Gonzalo; Delano, Paul H.
2015-01-01
The auditory efferent system originates in the auditory cortex and projects to the medial geniculate body (MGB), inferior colliculus (IC), cochlear nucleus (CN) and superior olivary complex (SOC) reaching the cochlea through olivocochlear (OC) fibers. This unique neuronal network is organized in several afferent-efferent feedback loops including: the (i) colliculo-thalamic-cortico-collicular; (ii) cortico-(collicular)-OC; and (iii) cortico-(collicular)-CN pathways. Recent experiments demonstrate that blocking ongoing auditory-cortex activity with pharmacological and physical methods modulates the amplitude of cochlear potentials. In addition, auditory-cortex microstimulation independently modulates cochlear sensitivity and the strength of the OC reflex. In this mini-review, anatomical and physiological evidence supporting the presence of a functional efferent network from the auditory cortex to the cochlear receptor is presented. Special emphasis is given to the corticofugal effects on initial auditory processing, that is, on CN, auditory nerve and cochlear responses. A working model of three parallel pathways from the auditory cortex to the cochlea and auditory nerve is proposed. PMID:26483647
Cross-modal links among vision, audition, and touch in complex environments.
Ferris, Thomas K; Sarter, Nadine B
2008-02-01
This study sought to determine whether performance effects of cross-modal spatial links that were observed in earlier laboratory studies scale to more complex environments and need to be considered in multimodal interface design. It also revisits the unresolved issue of cross-modal cuing asymmetries. Previous laboratory studies employing simple cues, tasks, and/or targets have demonstrated that the efficiency of processing visual, auditory, and tactile stimuli is affected by the modality, lateralization, and timing of surrounding cues. Very few studies have investigated these cross-modal constraints in the context of more complex environments to determine whether they scale and how complexity affects the nature of cross-modal cuing asymmetries. A microworld simulation of battlefield operations with a complex task set and meaningful visual, auditory, and tactile stimuli was used to investigate cuing effects for all cross-modal pairings. Significant asymmetric performance effects of cross-modal spatial links were observed. Auditory cues shortened response latencies for collocated visual targets but visual cues did not do the same for collocated auditory targets. Responses to contralateral (rather than ipsilateral) targets were faster for tactually cued auditory targets and each visual-tactile cue-target combination, suggesting an inhibition-of-return effect. The spatial relationships between multimodal cues and targets significantly affect target response times in complex environments. The performance effects of cross-modal links and the observed cross-modal cuing asymmetries need to be examined in more detail and considered in future interface design. The findings from this study have implications for the design of multimodal and adaptive interfaces and for supporting attention management in complex, data-rich domains.
Modeling complex tone perception: grouping harmonics with combination-sensitive neurons.
Medvedev, Andrei V; Chiao, Faye; Kanwal, Jagmeet S
2002-06-01
Perception of complex communication sounds is a major function of the auditory system. To create a coherent percept of these sounds, the auditory system may instantaneously group or bind multiple harmonics within complex sounds. This perception strategy simplifies further processing of complex sounds and facilitates their meaningful integration with other sensory inputs. Based on experimental data and a realistic model, we propose that associative learning of combinations of harmonic frequencies and nonlinear facilitation of responses to those combinations, also referred to as "combination-sensitivity," are important for spectral grouping. For our model, we simulated combination sensitivity using Hebbian and associative types of synaptic plasticity in auditory neurons. We also provided a parallel tonotopic input that converges and diverges within the network. Neurons in higher-order layers of the network exhibited an emergent property of multifrequency tuning that is consistent with experimental findings. Furthermore, this network had the capacity to "recognize" the pitch or fundamental frequency of a harmonic tone complex even when the fundamental frequency itself was missing.
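The learning principle proposed above can be illustrated with a toy Hebbian update in which a higher-order unit strengthens connections from frequency channels that are repeatedly co-active across presentations of a harmonic complex. The network size, learning rate, normalization, and input coding are assumptions for illustration, not the authors' model parameters.

import numpy as np

n_channels = 40                                   # tonotopic input channels (assumed)
lr = 0.1                                          # learning rate (assumed)
rng = np.random.default_rng(3)
weights = 0.01 * rng.random(n_channels)           # small random initial weights

def harmonic_input(f0_channel, n_harmonics=4):
    """Binary tonotopic pattern with energy at channels f0, 2*f0, 3*f0, ..."""
    x = np.zeros(n_channels)
    for k in range(1, n_harmonics + 1):
        if k * f0_channel < n_channels:
            x[k * f0_channel] = 1.0
    return x

# Repeated exposure to a complex tone with F0 at channel 5, plus background noise.
for _ in range(200):
    x = harmonic_input(5) + 0.1 * rng.random(n_channels)
    y = weights @ x                                # unit's response
    weights += lr * y * x                          # Hebbian update
    weights /= np.linalg.norm(weights)             # normalization keeps weights bounded

print("Strongest weights at channels:", np.argsort(weights)[-4:])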
The 'F-complex' and MMN tap different aspects of deviance.
Laufer, Ilan; Pratt, Hillel
2005-02-01
The aim was to compare the 'F(fusion)-complex' with the mismatch negativity (MMN), two components associated with the automatic detection of changes in the acoustic stimulus flow. Ten right-handed adult native Hebrew speakers discriminated vowel-consonant-vowel (V-C-V) sequences /ada/ (deviant) and /aga/ (standard) in an active auditory 'Oddball' task, and the brain potentials associated with performance of the task were recorded from 21 electrodes. Stimuli were generated by fusing the acoustic elements of the V-C-V sequences as follows: the base was always presented in front of the subject, and the formant transitions were presented to the front, left, or right in a virtual reality room. An illusion of a lateralized echo (duplex sensation) accompanied base fusion with the lateralized formant locations. Source current density estimates were derived for the net response to the fusion of the speech elements (F-complex) and for the MMN, using low-resolution electromagnetic tomography (LORETA). Statistical non-parametric mapping was used to estimate the current density differences between the brain sources of the F-complex and the MMN. Occipito-parietal regions and prefrontal regions were associated with the F-complex in all formant locations, whereas the vicinity of the supratemporal plane was bilaterally associated with the MMN, but only in the case of front-fusion (no duplex effect). MMN is sensitive to the novelty of the auditory object in relation to other stimuli in a sequence, whereas the F-complex is sensitive to the acoustic features of the auditory object and reflects a process of matching them with target categories. The F-complex and MMN reflect different aspects of auditory processing in a stimulus-rich and changing environment: content analysis of the stimulus and novelty detection, respectively.
Zheng, Zane Z.; Vicente-Grabovetsky, Alejandro; MacDonald, Ewen N.; Munhall, Kevin G.; Cusack, Rhodri; Johnsrude, Ingrid S.
2013-01-01
The everyday act of speaking involves the complex processes of speech motor control. An important component of control is monitoring, detection and processing of errors when auditory feedback does not correspond to the intended motor gesture. Here we show, using fMRI and converging operations within a multi-voxel pattern analysis framework, that this sensorimotor process is supported by functionally differentiated brain networks. During scanning, a real-time speech-tracking system was employed to deliver two acoustically different types of distorted auditory feedback or unaltered feedback while human participants were vocalizing monosyllabic words, and to present the same auditory stimuli while participants were passively listening. Whole-brain analysis of neural-pattern similarity revealed three functional networks that were differentially sensitive to distorted auditory feedback during vocalization, compared to during passive listening. One network of regions appears to encode an ‘error signal’ irrespective of acoustic features of the error: this network, including right angular gyrus, right supplementary motor area, and bilateral cerebellum, yielded consistent neural patterns across acoustically different, distorted feedback types, only during articulation (not during passive listening). In contrast, a fronto-temporal network appears sensitive to the speech features of auditory stimuli during passive listening; this preference for speech features was diminished when the same stimuli were presented as auditory concomitants of vocalization. A third network, showing a distinct functional pattern from the other two, appears to capture aspects of both neural response profiles. Taken together, our findings suggest that auditory feedback processing during speech motor control may rely on multiple, interactive, functionally differentiated neural systems. PMID:23467350
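The "neural-pattern similarity" logic at the heart of this analysis, asking whether voxel-wise activity patterns for two acoustically different distortions are more similar during articulation than during passive listening, reduces to correlating pattern vectors. The sketch below uses simulated voxel patterns and is not the authors' analysis pipeline; all names and values are invented.

import numpy as np

rng = np.random.default_rng(4)
n_voxels = 200

def pattern(shared, noise=0.5):
    """A voxel pattern sharing a common component plus independent noise."""
    return shared + noise * rng.standard_normal(n_voxels)

# Hypothetical 'error signal' region: similar patterns for the two distortion
# types during speaking, but unrelated patterns during passive listening.
error_component = rng.standard_normal(n_voxels)
speak_distA, speak_distB = pattern(error_component), pattern(error_component)
listen_distA, listen_distB = (rng.standard_normal(n_voxels) for _ in range(2))

speak_similarity = np.corrcoef(speak_distA, speak_distB)[0, 1]
listen_similarity = np.corrcoef(listen_distA, listen_distB)[0, 1]
print(f"speak r={speak_similarity:.2f}, listen r={listen_similarity:.2f}")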
Diminished Auditory Responses during NREM Sleep Correlate with the Hierarchy of Language Processing.
Wilf, Meytal; Ramot, Michal; Furman-Haran, Edna; Arzi, Anat; Levkovitz, Yechiel; Malach, Rafael
2016-01-01
Natural sleep provides a powerful model system for studying the neuronal correlates of awareness and state changes in the human brain. To quantitatively map the nature of sleep-induced modulations in sensory responses we presented participants with auditory stimuli possessing different levels of linguistic complexity. Ten participants were scanned using functional magnetic resonance imaging (fMRI) during the waking state and after falling asleep. Sleep staging was based on heart rate measures validated independently on 20 participants using concurrent EEG and heart rate measurements and the results were confirmed using permutation analysis. Participants were exposed to three types of auditory stimuli: scrambled sounds, meaningless word sentences and comprehensible sentences. During non-rapid eye movement (NREM) sleep, we found diminishing brain activation along the hierarchy of language processing, more pronounced in higher processing regions. Specifically, the auditory thalamus showed similar activation levels during sleep and waking states, primary auditory cortex remained activated but showed a significant reduction in auditory responses during sleep, and the high order language-related representation in inferior frontal gyrus (IFG) cortex showed a complete abolishment of responses during NREM sleep. In addition to an overall activation decrease in language processing regions in superior temporal gyrus and IFG, those areas manifested a loss of semantic selectivity during NREM sleep. Our results suggest that the decreased awareness to linguistic auditory stimuli during NREM sleep is linked to diminished activity in high order processing stations.
Morphological Decomposition and Semantic Integration in Word Processing
ERIC Educational Resources Information Center
Meunier, Fanny; Longtin, Catherine-Marie
2007-01-01
In the present study, we looked at cross-modal priming effects produced by auditory presentation of morphologically complex pseudowords in order to investigate semantic integration during the processing of French morphologically complex items. In Experiment 1, we used as primes pseudowords consisting of a non-interpretable combination of roots and…
Coding principles of the canonical cortical microcircuit in the avian brain
Calabrese, Ana; Woolley, Sarah M. N.
2015-01-01
Mammalian neocortex is characterized by a layered architecture and a common or “canonical” microcircuit governing information flow among layers. This microcircuit is thought to underlie the computations required for complex behavior. Despite the absence of a six-layered cortex, birds are capable of complex cognition and behavior. In addition, the avian auditory pallium is composed of adjacent information-processing regions with genetically identified neuron types and projections among regions comparable with those found in the neocortex. Here, we show that the avian auditory pallium exhibits the same information-processing principles that define the canonical cortical microcircuit, long thought to have evolved only in mammals. These results suggest that the canonical cortical microcircuit evolved in a common ancestor of mammals and birds and provide a physiological explanation for the evolution of neural processes that give rise to complex behavior in the absence of cortical lamination. PMID:25691736
Hierarchical auditory processing directed rostrally along the monkey's supratemporal plane.
Kikuchi, Yukiko; Horwitz, Barry; Mishkin, Mortimer
2010-09-29
Connectional anatomical evidence suggests that the auditory core, containing the tonotopic areas A1, R, and RT, constitutes the first stage of auditory cortical processing, with feedforward projections from core outward, first to the surrounding auditory belt and then to the parabelt. Connectional evidence also raises the possibility that the core itself is serially organized, with feedforward projections from A1 to R and with additional projections, although of unknown feed direction, from R to RT. We hypothesized that area RT together with more rostral parts of the supratemporal plane (rSTP) form the anterior extension of a rostrally directed stimulus quality processing stream originating in the auditory core area A1. Here, we analyzed auditory responses of single neurons in three different sectors distributed caudorostrally along the supratemporal plane (STP): sector I, mainly area A1; sector II, mainly area RT; and sector III, principally RTp (the rostrotemporal polar area), including cortex located 3 mm from the temporal tip. Mean onset latency of excitation responses and stimulus selectivity to monkey calls and other sounds, both simple and complex, increased progressively from sector I to III. Also, whereas cells in sector I responded with significantly higher firing rates to the "other" sounds than to monkey calls, those in sectors II and III responded at the same rate to both stimulus types. The pattern of results supports the proposal that the STP contains a rostrally directed, hierarchically organized auditory processing stream, with gradually increasing stimulus selectivity, and that this stream extends from the primary auditory area to the temporal pole.
Connectivity in the human brain dissociates entropy and complexity of auditory inputs☆
Nastase, Samuel A.; Iacovella, Vittorio; Davis, Ben; Hasson, Uri
2015-01-01
Complex systems are described according to two central dimensions: (a) the randomness of their output, quantified via entropy; and (b) their complexity, which reflects the organization of a system's generators. Whereas some approaches hold that complexity can be reduced to uncertainty or entropy, an axiom of complexity science is that signals with very high or very low entropy are generated by relatively non-complex systems, while complex systems typically generate outputs with entropy peaking between these two extremes. In understanding their environment, individuals would benefit from coding for both input entropy and complexity; entropy indexes uncertainty and can inform probabilistic coding strategies, whereas complexity reflects a concise and abstract representation of the underlying environmental configuration, which can serve independent purposes, e.g., as a template for generalization and rapid comparisons between environments. Using functional neuroimaging, we demonstrate that, in response to passively processed auditory inputs, functional integration patterns in the human brain track both the entropy and complexity of the auditory signal. Connectivity between several brain regions scaled monotonically with input entropy, suggesting sensitivity to uncertainty, whereas connectivity between other regions tracked entropy in a convex manner consistent with sensitivity to input complexity. These findings suggest that the human brain simultaneously tracks the uncertainty of sensory data and effectively models their environmental generators. PMID:25536493
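Nastase et al. contrast input entropy (uncertainty) with complexity, which peaks at intermediate entropy. The following sketch estimates Shannon entropy for three hypothetical tone sequences and contrasts a monotonic (uncertainty-tracking) profile with an inverted-U (complexity-tracking) profile; the sequences and the simple inverted-U formula are assumptions for illustration, not the measures used in the study.

```python
import numpy as np

def shannon_entropy(sequence):
    """Shannon entropy (bits) of a discrete symbol sequence, estimated from symbol frequencies."""
    _, counts = np.unique(sequence, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
tones = np.arange(8)  # hypothetical alphabet of 8 tone frequencies

streams = {
    "low":  np.repeat(tones[0], 200),                             # fully predictable
    "mid":  rng.choice(tones[:3], size=200, p=[0.6, 0.3, 0.1]),   # structured
    "high": rng.choice(tones, size=200),                          # uniform random
}

h_max = np.log2(tones.size)
for name, seq in streams.items():
    h = shannon_entropy(seq)
    # Two schematic response profiles: monotonic in entropy (uncertainty tracking)
    # versus peaking at intermediate entropy (complexity tracking).
    print(f"{name:>4}: H = {h:.2f} bits, monotonic = {h:.2f}, inverted-U = {h * (h_max - h):.2f}")
```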
Auditory memory can be object based.
Dyson, Benjamin J; Ishfaq, Feraz
2008-04-01
Identifying how memories are organized remains a fundamental issue in psychology. Previous work has shown that visual short-term memory is organized according to the object of origin, with participants being better at retrieving multiple pieces of information from the same object than from different objects. However, it is not yet clear whether similar memory structures are employed for other modalities, such as audition. Under analogous conditions in the auditory domain, we found that short-term memories for sound can also be organized according to object, with a same-object advantage being demonstrated for the retrieval of information in an auditory scene defined by two complex sounds overlapping in both space and time. Our results provide support for the notion of an auditory object, in addition to the continued identification of similar processing constraints across visual and auditory domains. The identification of modality-independent organizational principles of memory, such as object-based coding, suggests possible mechanisms by which the human processing system remembers multimodal experiences.
Love, Tracy; Haist, Frank; Nicol, Janet; Swinney, David
2009-01-01
Using functional magnetic resonance imaging (fMRI), this study directly examined an issue that bridges the potential language processing and multi-modal views of the role of Broca’s area: the effects of task demands in language comprehension studies. We presented syntactically simple and complex sentences for auditory comprehension under three different (differentially complex) task-demand conditions: passive listening, probe verification, and theme judgment. Contrary to many language imaging findings, we found that both simple and complex syntactic structures activated left inferior frontal cortex (L-IFC). Critically, we found that activation in these frontal regions increased together with increasing task demands. Specifically, tasks that required greater manipulation and comparison of linguistic material recruited L-IFC more strongly, independent of syntactic structure complexity. We argue that many of the presumed syntactic effects previously found in sentence imaging studies of L-IFC may, among other things, reflect the tasks employed in those studies, and that L-IFC is a region underlying mnemonic and other integrative functions on which much language processing may rely. PMID:16881268
Stability of auditory discrimination and novelty processing in physiological aging.
Raggi, Alberto; Tasca, Domenica; Rundo, Francesco; Ferri, Raffaele
2013-01-01
Complex higher-order cognitive functions and their possible changes with aging are key objectives of cognitive neuroscience. Event-related potentials (ERPs) allow investigators to probe the earliest stages of information processing. N100, mismatch negativity (MMN), and P3a are auditory ERP components that reflect automatic sensory discrimination. The aim of the present study was to determine whether N100, MMN, and P3a parameters are stable in healthy aged subjects compared to those of normal young adults. Normal young adults and older participants were assessed using standardized cognitive functional instruments, and their ERPs were obtained with auditory stimulation at two different interstimulus intervals during a passive paradigm. All individuals were within the normal range on cognitive tests. No significant differences were found for any ERP parameter between the two age groups. This study shows that physiological aging is characterized by stability of auditory discrimination and novelty processing. This is important for establishing normative data for the detection of subtle preclinical changes due to abnormal brain aging.
A sound advantage: Increased auditory capacity in autism.
Remington, Anna; Fairnie, Jake
2017-09-01
Autism Spectrum Disorder (ASD) has an intriguing auditory processing profile. Individuals show enhanced pitch discrimination, yet often find seemingly innocuous sounds distressing. This study used two behavioural experiments to examine whether an increased capacity for processing sounds in ASD could underlie both the difficulties and enhanced abilities found in the auditory domain. Autistic and non-autistic young adults performed a set of auditory detection and identification tasks designed to tax processing capacity and establish the extent of perceptual capacity in each population. Tasks were constructed to highlight both the benefits and disadvantages of increased capacity. Autistic people were better at detecting additional unexpected and expected sounds (increased distraction and superior performance respectively). This suggests that they have increased auditory perceptual capacity relative to non-autistic people. This increased capacity may offer an explanation for the auditory superiorities seen in autism (e.g. heightened pitch detection). Somewhat counter-intuitively, this same 'skill' could result in the sensory overload that is often reported - which subsequently can interfere with social communication. Reframing autistic perceptual processing in terms of increased capacity, rather than a filtering deficit or inability to maintain focus, increases our understanding of this complex condition, and has important practical implications that could be used to develop intervention programs to minimise the distress that is often seen in response to sensory stimuli. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Escera, Carles; Leung, Sumie; Grimm, Sabine
2014-07-01
Detection of changes in the acoustic environment is critical for survival, as it prevents missing potentially relevant events outside the focus of attention. In humans, deviance detection based on acoustic regularity encoding has been associated with a brain response derived from the human EEG, the mismatch negativity (MMN) auditory evoked potential, peaking at about 100-200 ms from deviance onset. Because of its long latency and cerebral generators, both regularity encoding and deviance detection have been assumed to be cortical processes. Yet, intracellular, extracellular, single-unit and local-field potential recordings in rats and cats have shown much earlier (circa 20-30 ms) and hierarchically lower (primary auditory cortex, medial geniculate body, inferior colliculus) deviance-related responses. Here, we review the recent evidence obtained with the complex auditory brainstem response (cABR), the middle latency response (MLR) and magnetoencephalography (MEG) demonstrating that human auditory deviance detection based on regularity encoding, rather than on refractoriness, occurs at latencies and in neural networks comparable to those revealed in animals. Specifically, encoding of simple acoustic-feature regularities and detection of corresponding deviance, such as an infrequent change in frequency or location, occur in the latency range of the MLR, in separate auditory cortical regions from those generating the MMN, and even at the level of the human auditory brainstem. In contrast, violations of more complex regularities, such as those defined by the alternation of two different tones or by feature conjunctions (i.e., frequency and location), fail to elicit MLR correlates but elicit sizable MMNs. Altogether, these findings support the emerging view that deviance detection is a basic principle of the functional organization of the auditory system, and that regularity encoding and deviance detection are organized in ascending levels of complexity along the auditory pathway, extending from the brainstem up to higher-order areas of the cerebral cortex.
Getzmann, Stephan; Lewald, Jörg; Falkenstein, Michael
2014-01-01
Speech understanding in complex and dynamic listening environments requires (a) auditory scene analysis, namely auditory object formation and segregation, and (b) allocation of the attentional focus to the talker of interest. There is evidence that pre-information is actively used to facilitate these two aspects of the so-called “cocktail-party” problem. Here, a simulated multi-talker scenario was combined with electroencephalography to study scene analysis and allocation of attention in young and middle-aged adults. Sequences of short words (combinations of brief company names and stock-price values) from four talkers at different locations were simultaneously presented, and the detection of target names and the discrimination between critical target values were assessed. Immediately prior to speech sequences, auditory pre-information was provided via cues that either prepared auditory scene analysis or attentional focusing, or non-specific pre-information was given. While performance was generally better in younger than older participants, both age groups benefited from auditory pre-information. The analysis of the cue-related event-related potentials revealed age-specific differences in the use of pre-cues: Younger adults showed a pronounced N2 component, suggesting early inhibition of concurrent speech stimuli; older adults exhibited a stronger late P3 component, suggesting increased resource allocation to process the pre-information. In sum, the results argue for an age-specific utilization of auditory pre-information to improve listening in complex dynamic auditory environments. PMID:25540608
Verhulst, Sarah; Altoè, Alessandro; Vasilkov, Viacheslav
2018-03-01
Models of the human auditory periphery range from very basic functional descriptions of auditory filtering to detailed computational models of cochlear mechanics, inner-hair cell (IHC), auditory-nerve (AN) and brainstem signal processing. It is challenging to include detailed physiological descriptions of cellular components into human auditory models because single-cell data stem from invasive animal recordings, while human reference data exist only in the form of population responses (e.g., otoacoustic emissions, auditory evoked potentials). To embed physiological models within a comprehensive human auditory periphery framework, it is important to capitalize on the success of basic functional models of hearing and render their descriptions more biophysical where possible. At the same time, comprehensive models should capture a variety of key auditory features, rather than fitting their parameters to a single reference dataset. In this study, we review and improve existing models of the IHC-AN complex by updating their equations and expressing their fitting parameters as biophysical quantities. The quality of the model framework for human auditory processing is evaluated using recorded auditory brainstem response (ABR) and envelope-following response (EFR) reference data from normal and hearing-impaired listeners. We present a model with 12 fitting parameters from the cochlea to the brainstem that can be rendered hearing impaired to simulate how cochlear gain loss and synaptopathy affect human population responses. The model description forms a compromise between capturing well-described single-unit IHC and AN properties and human population response features. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
Musical Sophistication and the Effect of Complexity on Auditory Discrimination in Finnish Speakers
Dawson, Caitlin; Aalto, Daniel; Šimko, Juraj; Vainio, Martti; Tervaniemi, Mari
2017-01-01
Musical experiences and native language are both known to affect auditory processing. The present work aims to disentangle the influences of native language phonology and musicality on behavioral and subcortical sound feature processing in a population of musically diverse Finnish speakers, as well as to investigate the specificity of enhancement from musical training. Finnish speakers are highly sensitive to duration cues, since in Finnish, vowel and consonant duration determine word meaning. Using a correlational approach with a set of behavioral sound feature discrimination tasks, brainstem recordings, and a musical sophistication questionnaire, we find no evidence for an association between musical sophistication and more precise duration processing in Finnish speakers, either in the auditory brainstem response or in behavioral tasks. However, more musically sophisticated speakers do show enhanced pitch discrimination compared to Finnish speakers with less musical experience, and show greater duration modulation in a complex task. These results are consistent with a ceiling effect for certain sound features corresponding to the phonology of the native language, leaving an opportunity for music experience-based enhancement of sound features not explicitly encoded in the language (such as pitch, which is not explicitly encoded in Finnish). Finally, the pattern of duration modulation in more musically sophisticated Finnish speakers suggests integrated feature processing for greater efficiency in a real-world musical situation. These results have implications for research into the specificity of plasticity in the auditory system, as well as into the effects of the interaction of specific language features with musical experiences. PMID:28450829
The effect of changing the secondary task in dual-task paradigms for measuring listening effort.
Picou, Erin M; Ricketts, Todd A
2014-01-01
The purpose of this study was to evaluate the effect of changing the secondary task in dual-task paradigms that measure listening effort. Specifically, the effects of increasing the secondary task complexity or the depth of processing on a paradigm's sensitivity to changes in listening effort were quantified in a series of two experiments. Specific factors investigated within each experiment were background noise and visual cues. Participants in Experiment 1 were adults with normal hearing (mean age 23 years), and participants in Experiment 2 were adults with mild sloping to moderately severe sensorineural hearing loss (mean age 60.1 years). In both experiments, participants were tested using three dual-task paradigms. These paradigms had identical primary tasks, which were always monosyllable word recognition. The secondary tasks were all physical reaction time measures. The stimulus for the secondary task varied by paradigm and was (1) a simple visual probe, (2) a complex visual probe, or (3) the category of the word presented. In this way, the secondary tasks mainly varied from the simple paradigm by either complexity or depth of speech processing. Using all three paradigms, participants were tested in four conditions: (1) auditory-only stimuli in quiet, (2) auditory-only stimuli in noise, (3) auditory-visual stimuli in quiet, and (4) auditory-visual stimuli in noise. During auditory-visual conditions, the talker's face was visible. Signal-to-noise ratios used during conditions with background noise were set individually so that word recognition performance was matched in auditory-only and auditory-visual conditions. In noise, word recognition performance was approximately 80% and 65% for Experiments 1 and 2, respectively. For both experiments, word recognition performance was stable across the three paradigms, confirming that none of the secondary tasks interfered with the primary task. In Experiment 1 (listeners with normal hearing), analysis of median reaction times revealed a significant main effect of background noise on listening effort only with the paradigm that required deep processing. Visual cues did not change listening effort as measured with any of the three dual-task paradigms. In Experiment 2 (listeners with hearing loss), analysis of median reaction times revealed expected significant effects of background noise using all three paradigms, but no significant effects of visual cues. Thus, none of the dual-task paradigms was sensitive to the effects of visual cues. Furthermore, changing the complexity of the secondary task did not change dual-task paradigm sensitivity to the effects of background noise on listening effort for either group of listeners. However, the paradigm whose secondary task involved deeper processing was more sensitive to the effects of background noise for both groups of listeners. While this paradigm differed from the others in several respects, depth of processing may be partially responsible for the increased sensitivity. Therefore, this paradigm may be a valuable tool for evaluating other factors that affect listening effort.
Moving in time: Bayesian causal inference explains movement coordination to auditory beats
Elliott, Mark T.; Wing, Alan M.; Welchman, Andrew E.
2014-01-01
Many everyday skilled actions depend on moving in time with signals that are embedded in complex auditory streams (e.g. musical performance, dancing or simply holding a conversation). Such behaviour is apparently effortless; however, it is not known how humans combine auditory signals to support movement production and coordination. Here, we test how participants synchronize their movements when there are potentially conflicting auditory targets to guide their actions. Participants tapped their fingers in time with two simultaneously presented metronomes of equal tempo, but differing in phase and temporal regularity. Synchronization therefore depended on integrating the two timing cues into a single-event estimate or treating the cues as independent and thereby selecting one signal over the other. We show that a Bayesian inference process explains the situations in which participants choose to integrate or separate signals, and predicts motor timing errors. Simulations of this causal inference process demonstrate that this model provides a better description of the data than other plausible models. Our findings suggest that humans exploit a Bayesian inference process to control movement timing in situations where the origin of auditory signals needs to be resolved. PMID:24850915
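Elliott et al. model synchronization to two metronomes as Bayesian causal inference: integrate the cues if they likely share a cause, otherwise select one. The sketch below is a toy version of such a model (in the spirit of standard causal-inference cue-combination models); the noise parameters, prior, and decision rule are illustrative assumptions rather than the authors' fitted model.

```python
import numpy as np

def causal_inference_tap(t1, t2, sigma1, sigma2, sigma_prior=100.0, p_common=0.5):
    """Toy estimate of a single tap time (ms) from two metronome onsets sensed with noise."""
    # Likelihood of the cue discrepancy under a common cause vs. two independent causes.
    var_c = sigma1**2 + sigma2**2
    var_i = var_c + 2 * sigma_prior**2
    like_c = np.exp(-(t1 - t2)**2 / (2 * var_c)) / np.sqrt(2 * np.pi * var_c)
    like_i = np.exp(-(t1 - t2)**2 / (2 * var_i)) / np.sqrt(2 * np.pi * var_i)
    post_c = like_c * p_common / (like_c * p_common + like_i * (1 - p_common))

    fused = (t1 / sigma1**2 + t2 / sigma2**2) / (1 / sigma1**2 + 1 / sigma2**2)  # integrate
    segregated = t1 if sigma1 <= sigma2 else t2                                  # pick one cue
    return post_c * fused + (1 - post_c) * segregated, post_c

estimate, p_common = causal_inference_tap(t1=0.0, t2=40.0, sigma1=10.0, sigma2=20.0)
print(f"tap estimate = {estimate:.1f} ms, P(common cause) = {p_common:.2f}")
```

With a small discrepancy the model weights the fused (integrated) estimate heavily; as the two metronomes drift apart, the posterior probability of a common cause falls and the estimate shifts toward the single more reliable cue.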
Parthasarathy, Aravindakshan; Bartlett, Edward
2012-07-01
Auditory brainstem responses (ABRs), and envelope and frequency following responses (EFRs and FFRs) are widely used to study aberrant auditory processing in conditions such as aging. We have previously reported age-related deficits in auditory processing for rapid amplitude modulation (AM) frequencies using EFRs recorded from a single channel. However, sensitive testing of EFRs along a wide range of modulation frequencies is required to gain a more complete understanding of the auditory processing deficits. In this study, ABRs and EFRs were recorded simultaneously from two electrode configurations in young and old Fischer-344 rats, a common auditory aging model. Analysis shows that the two channels respond most sensitively to complementary AM frequencies. Channel 1, recorded from Fz to mastoid, responds better to faster AM frequencies in the 100-700 Hz range of frequencies, while Channel 2, recorded from the inter-aural line to the mastoid, responds better to slower AM frequencies in the 16-100 Hz range. Simultaneous recording of Channels 1 and 2 using AM stimuli with varying sound levels and modulation depths shows that age-related deficits in temporal processing are not present at slower AM frequencies but only at more rapid ones, which would not have been apparent recording from either channel alone. Comparison of EFRs between un-anesthetized and isoflurane-anesthetized recordings in young animals, as well as comparison with previously published ABR waveforms, suggests that the generators of Channel 1 may emphasize more caudal brainstem structures while those of Channel 2 may emphasize more rostral auditory nuclei including the inferior colliculus and the forebrain, with the boundary of separation potentially along the cochlear nucleus/superior olivary complex. Simultaneous two-channel recording of EFRs helps to give a more complete understanding of the properties of auditory temporal processing over a wide range of modulation frequencies, which is useful in understanding neural representations of sound stimuli in normal, developmental or pathological conditions. Copyright © 2012 Elsevier B.V. All rights reserved.
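Envelope-following responses such as those above are typically quantified as the spectral amplitude of the averaged response at the stimulus modulation frequency. A minimal sketch of that computation on synthetic data follows; the sampling rate, modulation frequency, and noise level are arbitrary assumptions.

```python
import numpy as np

def efr_amplitude(avg_response, fs, mod_freq):
    """Spectral amplitude of an averaged evoked response at the stimulus modulation frequency."""
    amp = np.abs(np.fft.rfft(avg_response)) / len(avg_response)
    freqs = np.fft.rfftfreq(len(avg_response), d=1.0 / fs)
    return amp[np.argmin(np.abs(freqs - mod_freq))]

# Hypothetical averaged response: a small 40 Hz envelope-following component buried in noise.
fs = 2000.0
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(2)
response = 0.2 * np.sin(2 * np.pi * 40 * t) + rng.normal(0, 0.5, t.size)

print(f"EFR amplitude at 40 Hz: {efr_amplitude(response, fs, 40):.3f} (arbitrary units)")
```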
Turning down the noise: the benefit of musical training on the aging auditory brain.
Alain, Claude; Zendel, Benjamin Rich; Hutka, Stefanie; Bidelman, Gavin M
2014-02-01
Age-related decline in hearing abilities is a ubiquitous part of aging, and commonly impacts speech understanding, especially when there are competing sound sources. While such age effects are partially due to changes within the cochlea, difficulties typically exist beyond measurable hearing loss, suggesting that central brain processes, as opposed to simple peripheral mechanisms (e.g., hearing sensitivity), play a critical role in governing hearing abilities late into life. Current training regimens aimed to improve central auditory processing abilities have experienced limited success in promoting listening benefits. Interestingly, recent studies suggest that in young adults, musical training positively modifies neural mechanisms, providing robust, long-lasting improvements to hearing abilities as well as to non-auditory tasks that engage cognitive control. These results offer the encouraging possibility that musical training might be used to counteract age-related changes in auditory cognition commonly observed in older adults. Here, we reviewed studies that have examined the effects of age and musical experience on auditory cognition with an emphasis on auditory scene analysis. We infer that musical training may offer potential benefits to complex listening and might be utilized as a means to delay or even attenuate declines in auditory perception and cognition that often emerge later in life. Copyright © 2013 Elsevier B.V. All rights reserved.
Karns, Christina M.; Isbell, Elif; Giuliano, Ryan J.; Neville, Helen J.
2015-01-01
Auditory selective attention is a critical skill for goal-directed behavior, especially where noisy distractions may impede focusing attention. To better understand the developmental trajectory of auditory spatial selective attention in an acoustically complex environment, in the current study we measured auditory event-related potentials (ERPs) in human children across five age groups: 3–5 years; 10 years; 13 years; 16 years; and young adults using a naturalistic dichotic listening paradigm, characterizing the ERP morphology for nonlinguistic and linguistic auditory probes embedded in attended and unattended stories. We documented robust maturational changes in auditory evoked potentials that were specific to the types of probes. Furthermore, we found a remarkable interplay between age and attention-modulation of auditory evoked potentials in terms of morphology and latency from the early years of childhood through young adulthood. The results are consistent with the view that attention can operate across age groups by modulating the amplitude of maturing auditory early-latency evoked potentials or by invoking later endogenous attention processes. Development of these processes is not uniform for probes with different acoustic properties within our acoustically dense speech-based dichotic listening task. In light of the developmental differences we demonstrate, researchers conducting future attention studies of children and adolescents should be wary of combining analyses across diverse ages. PMID:26002721
Terband, H; Maassen, B; Guenther, F H; Brumberg, J
2014-01-01
Differentiating the symptom complex due to phonological-level disorders, speech delay and pediatric motor speech disorders is a controversial issue in the field of pediatric speech and language pathology. The present study investigated the developmental interaction between neurological deficits in auditory and motor processes using computational modeling with the DIVA model. In a series of computer simulations, we investigated the effect of a motor processing deficit alone (MPD), and the effect of a motor processing deficit in combination with an auditory processing deficit (MPD+APD) on the trajectory and endpoint of speech motor development in the DIVA model. Simulation results showed that a motor programming deficit predominantly leads to deterioration on the phonological level (phonemic mappings) when auditory self-monitoring is intact, and on the systemic level (systemic mapping) if auditory self-monitoring is impaired. These findings suggest a close relation between quality of auditory self-monitoring and the involvement of phonological vs. motor processes in children with pediatric motor speech disorders. It is suggested that MPD+APD might be involved in typically apraxic speech output disorders and MPD in pediatric motor speech disorders that also have a phonological component. Possibilities to verify these hypotheses using empirical data collected from human subjects are discussed. The reader will be able to: (1) identify the difficulties in studying disordered speech motor development; (2) describe the differences in speech motor characteristics between SSD and subtype CAS; (3) describe the different types of learning that occur in the sensory-motor system during babbling and early speech acquisition; (4) identify the neural control subsystems involved in speech production; (5) describe the potential role of auditory self-monitoring in developmental speech disorders. Copyright © 2014 Elsevier Inc. All rights reserved.
Auditory motion processing after early blindness
Jiang, Fang; Stecker, G. Christopher; Fine, Ione
2014-01-01
Studies showing that occipital cortex responds to auditory and tactile stimuli after early blindness are often interpreted as demonstrating that early blind subjects “see” auditory and tactile stimuli. However, it is not clear whether these occipital responses directly mediate the perception of auditory/tactile stimuli, or simply modulate or augment responses within other sensory areas. We used fMRI pattern classification to categorize the perceived direction of motion for both coherent and ambiguous auditory motion stimuli. In sighted individuals, perceived motion direction was accurately categorized based on neural responses within the planum temporale (PT) and right lateral occipital cortex (LOC). Within early blind individuals, auditory motion decisions for both stimuli were successfully categorized from responses within the human middle temporal complex (hMT+), but not the PT or right LOC. These findings suggest that early blind responses within hMT+ are associated with the perception of auditory motion, and that these responses in hMT+ may usurp some of the functions of nondeprived PT. Thus, our results provide further evidence that blind individuals do indeed “see” auditory motion. PMID:25378368
Soskey, Laura N; Allen, Paul D; Bennetto, Loisa
2017-08-01
One of the earliest observable impairments in autism spectrum disorder (ASD) is a failure to orient to speech and other social stimuli. Auditory spatial attention, a key component of orienting to sounds in the environment, has been shown to be impaired in adults with ASD. Additionally, specific deficits in orienting to social sounds could be related to increased acoustic complexity of speech. We aimed to characterize auditory spatial attention in children with ASD and neurotypical controls, and to determine the effect of auditory stimulus complexity on spatial attention. In a spatial attention task, target and distractor sounds were played randomly in rapid succession from speakers in a free-field array. Participants attended to a central or peripheral location, and were instructed to respond to target sounds at the attended location while ignoring nearby sounds. Stimulus-specific blocks evaluated spatial attention for simple non-speech tones, speech sounds (vowels), and complex non-speech sounds matched to vowels on key acoustic properties. Children with ASD had significantly more diffuse auditory spatial attention than neurotypical children when attending front, indicated by increased responding to sounds at adjacent non-target locations. No significant differences in spatial attention emerged based on stimulus complexity. Additionally, in the ASD group, more diffuse spatial attention was associated with more severe ASD symptoms but not with general inattention symptoms. Spatial attention deficits have important implications for understanding social orienting deficits and atypical attentional processes that contribute to core deficits of ASD. Autism Res 2017, 10: 1405-1416. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.
It's about time: Presentation in honor of Ira Hirsh
NASA Astrophysics Data System (ADS)
Grant, Ken
2002-05-01
Over his long and illustrious career, Ira Hirsh has returned time and time again to his interest in the temporal aspects of pattern perception. Although Hirsh has studied and published articles and books pertaining to many aspects of the auditory system, such as sound conduction in the ear, cochlear mechanics, masking, auditory localization, psychoacoustic behavior in animals, speech perception, medical and audiological applications, coupling between psychophysics and physiology, and ecological acoustics, it is his work on auditory timing of simple and complex rhythmic patterns, the backbone of speech and music, that is at the heart of his more recent work. Here, we will focus on several aspects of temporal processing of simple and complex signals, both within and across sensory systems. Data will be reviewed on temporal order judgments of simple tones, and simultaneity judgments and intelligibility of unimodal and bimodal complex stimuli where stimulus components are presented either synchronously or asynchronously. Differences in the symmetry and shape of "temporal windows" derived from these data sets will be highlighted.
Intracerebral evidence of rhythm transform in the human auditory cortex.
Nozaradan, Sylvie; Mouraux, André; Jonas, Jacques; Colnat-Coulbois, Sophie; Rossion, Bruno; Maillard, Louis
2017-07-01
Musical entrainment is shared by all human cultures and the perception of a periodic beat is a cornerstone of this entrainment behavior. Here, we investigated whether beat perception might have its roots in the earliest stages of auditory cortical processing. Local field potentials were recorded from 8 patients implanted with depth-electrodes in Heschl's gyrus and the planum temporale (55 recording sites in total), usually considered as human primary and secondary auditory cortices. Using a frequency-tagging approach, we show that both low-frequency (<30 Hz) and high-frequency (>30 Hz) neural activities in these structures faithfully track auditory rhythms through frequency-locking to the rhythm envelope. A selective gain in amplitude of the response frequency-locked to the beat frequency was observed for the low-frequency activities but not for the high-frequency activities, and was sharper in the planum temporale, especially for the more challenging syncopated rhythm. Hence, this gain process is not systematic in all activities produced in these areas and depends on the complexity of the rhythmic input. Moreover, this gain was disrupted when the rhythm was presented at fast speed, revealing low-pass response properties which could account for the propensity to perceive a beat only within the musical tempo range. Together, these observations show that, even though part of these neural transforms of rhythms could already take place in subcortical auditory processes, the earliest auditory cortical processes shape the neural representation of rhythmic inputs in favor of the emergence of a periodic beat.
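The frequency-tagging logic described above amounts to asking how much spectral energy is frequency-locked to the beat relative to neighboring frequencies. A minimal sketch on a synthetic signal is given below; the signal, the beat rate, and the neighbor-bin noise estimate are illustrative assumptions, not the recording parameters of the study.

```python
import numpy as np

def tagged_snr(signal, fs, target_freq, n_neighbors=10, gap=2):
    """Amplitude at the target frequency divided by the mean amplitude of surrounding bins."""
    amp = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    idx = int(np.argmin(np.abs(freqs - target_freq)))
    noise = np.r_[amp[idx - gap - n_neighbors: idx - gap],
                  amp[idx + gap + 1: idx + gap + 1 + n_neighbors]]
    return amp[idx] / noise.mean()

# Hypothetical field potential: activity frequency-locked to a 1.25 Hz beat plus noise.
fs = 250.0
t = np.arange(0, 32.0, 1.0 / fs)
rng = np.random.default_rng(3)
lfp = 0.5 * np.sin(2 * np.pi * 1.25 * t) + rng.normal(0, 1.0, t.size)

print(f"SNR at the 1.25 Hz beat frequency: {tagged_snr(lfp, fs, 1.25):.1f}")
```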
Effects of Voice Harmonic Complexity on ERP Responses to Pitch-Shifted Auditory Feedback
Behroozmand, Roozbeh; Korzyukov, Oleg; Larson, Charles R.
2011-01-01
Objective: The present study investigated the neural mechanisms of voice pitch control for different levels of harmonic complexity in the auditory feedback. Methods: Event-related potentials (ERPs) were recorded in response to +200 cents pitch perturbations in the auditory feedback of self-produced natural human vocalizations, complex and pure tone stimuli during active vocalization and passive listening conditions. Results: During active vocal production, ERP amplitudes were largest in response to pitch shifts in the natural voice, moderately large for non-voice complex stimuli and smallest for the pure tones. However, during passive listening, neural responses were equally large for pitch shifts in voice and non-voice complex stimuli but still larger than those for pure tones. Conclusions: These findings suggest that pitch change detection is facilitated for spectrally rich sounds such as natural human voice and non-voice complex stimuli compared with pure tones. Vocalization-induced increase in neural responses for voice feedback suggests that sensory processing of naturally-produced complex sounds such as human voice is enhanced by means of motor-driven mechanisms (e.g. efference copies) during vocal production. Significance: This enhancement may enable the audio-vocal system to more effectively detect and correct for vocal errors in the feedback of natural human vocalizations to maintain an intended vocal output for speaking. PMID:21719346
Seither-Preisler, Annemarie; Parncutt, Richard; Schneider, Peter
2014-08-13
Playing a musical instrument is associated with numerous neural processes that continuously modify the human brain and may facilitate characteristic auditory skills. In a longitudinal study, we investigated the auditory and neural plasticity of musical learning in 111 young children (aged 7-9 y) as a function of the intensity of instrumental practice and musical aptitude. Because of the frequent co-occurrence of central auditory processing disorders and attentional deficits, we also tested 21 children with attention deficit (hyperactivity) disorder [AD(H)D]. Magnetic resonance imaging and magnetoencephalography revealed enlarged Heschl's gyri and enhanced right-left hemispheric synchronization of the primary evoked response (P1) to harmonic complex sounds in children who spent more time practicing a musical instrument. The anatomical characteristics were positively correlated with frequency discrimination, reading, and spelling skills. Conversely, AD(H)D children showed reduced volumes of Heschl's gyri and enhanced volumes of the plana temporalia that were associated with a distinct bilateral P1 asynchrony. This may indicate a risk for central auditory processing disorders that are often associated with attentional and literacy problems. The longitudinal comparisons revealed a very high stability of auditory cortex morphology and gray matter volumes, suggesting that the combined anatomical and functional parameters are neural markers of musicality and attention deficits. Educational and clinical implications are considered. Copyright © 2014 the authors.
A Review of Auditory Prediction and Its Potential Role in Tinnitus Perception.
Durai, Mithila; O'Keeffe, Mary G; Searchfield, Grant D
2018-06-01
The precise mechanisms underlying tinnitus perception and distress are still not fully understood. A recent proposition is that auditory prediction errors and related memory representations may play a role in driving tinnitus perception. It is of interest to further explore this. The aim of this review was to obtain a comprehensive narrative synthesis of current research in relation to auditory prediction and its potential role in tinnitus perception and severity. A narrative review methodological framework was followed. The key words Prediction Auditory, Memory Prediction Auditory, Tinnitus AND Memory, Tinnitus AND Prediction in Article Title, Abstract, and Keywords were extensively searched on four databases: PubMed, Scopus, SpringerLink, and PsychINFO. All study types were selected from 2000-2016 (end of 2016) and had the following exclusion criteria applied: minimum age of participants <18, nonhuman participants, and article not available in English. Reference lists of articles were reviewed to identify any further relevant studies. Articles were short-listed based on title relevance. After reading the abstracts and reaching consensus among coauthors, a total of 114 studies were selected for charting data. The hierarchical predictive coding model based on the Bayesian brain hypothesis, attentional modulation and top-down feedback serves as the fundamental framework in current literature for how auditory prediction may occur. Predictions are integral to speech and music processing, as well as to sequential processing and the identification of auditory objects during auditory streaming. Although deviant responses are observable from middle latency time ranges, the mismatch negativity (MMN) waveform is the most commonly studied electrophysiological index of auditory irregularity detection. However, limitations may apply when interpreting findings because of the debatable origin of the MMN and its restricted ability to model real-life, more complex auditory phenomena. Cortical oscillatory band activity may act as a neurophysiological substrate for auditory prediction. Tinnitus has been modeled as an auditory object that may undergo incomplete processing during auditory scene analysis, resulting in tinnitus salience and therefore difficulty in habituation. Within the electrophysiological domain, there is currently mixed evidence regarding oscillatory band changes in tinnitus. There are theoretical proposals for a relationship between prediction error and tinnitus but few published empirical studies. American Academy of Audiology.
Non-visual spatial tasks reveal increased interactions with stance postural control.
Woollacott, Marjorie; Vander Velde, Timothy
2008-05-07
The current investigation aimed to contrast the level and quality of dual-task interactions resulting from the combined performance of a challenging primary postural task and three specific, yet categorically dissociated, secondary central executive tasks. Experiments determined the extent to which modality (visual vs. auditory) and code (non-spatial vs. spatial) specific cognitive resources contributed to postural interference in young adults (n=9) in a dual-task setting. We hypothesized that the different forms of executive n-back task processing employed (visual-object, auditory-object and auditory-spatial) would display contrasting levels of interactions with tandem Romberg stance postural control, and that interactions within the spatial domain would be revealed as most vulnerable to dual-task interactions. Across all cognitive tasks employed, including auditory-object (aOBJ), auditory-spatial (aSPA), and visual-object (vOBJ) tasks, increasing n-back task complexity produced correlated increases in verbal reaction time measures. Increasing cognitive task complexity also resulted in consistent decreases in judgment accuracy. Postural performance was significantly influenced by the type of cognitive loading delivered. At comparable levels of cognitive task difficulty (n-back demands and accuracy judgments) the performance of challenging auditory-spatial tasks produced significantly greater levels of postural sway than either the auditory-object or visual-object based tasks. These results suggest that it is the employment of limited non-visual spatially based coding resources that may underlie previously observed visual dual-task interference effects with stance postural control in healthy young adults.
Neural network retuning and neural predictors of learning success associated with cello training.
Wollman, Indiana; Penhune, Virginia; Segado, Melanie; Carpentier, Thibaut; Zatorre, Robert J
2018-06-26
The auditory and motor neural systems are closely intertwined, enabling people to carry out tasks such as playing a musical instrument whose mapping between action and sound is extremely sophisticated. While the dorsal auditory stream has been shown to mediate these audio-motor transformations, little is known about how such mapping emerges with training. Here, we use longitudinal training on a cello as a model for brain plasticity during the acquisition of specific complex skills, including continuous and many-to-one audio-motor mapping, and we investigate individual differences in learning. We trained participants with no musical background to play on a specially designed MRI-compatible cello and scanned them before and after 1 and 4 wk of training. Activation of the auditory-to-motor dorsal cortical stream emerged rapidly during the training and was similarly activated during passive listening and cello performance of trained melodies. This network activation was independent of performance accuracy and therefore appears to be a prerequisite of music playing. In contrast, greater recruitment of regions involved in auditory encoding and motor control over the training was related to better musical proficiency. Additionally, pre-supplementary motor area activity and its connectivity with the auditory cortex during passive listening before training was predictive of final training success, revealing the integrative function of this network in auditory-motor information processing. Together, these results clarify the critical role of the dorsal stream and its interaction with auditory areas in complex audio-motor learning.
Mahr, Angela; Wentura, Dirk
2014-02-01
Findings from three experiments support the conclusion that auditory primes facilitate the processing of related targets. In Experiments 1 and 2, we employed a crossmodal Stroop color identification task with auditory color words (as primes) and visual color patches (as targets). Responses were faster for congruent priming, in comparison to neutral or incongruent priming. This effect also emerged for different levels of time compression of the auditory primes (to 30 % and 10 % of the original length; i.e., 120 and 40 ms) and turned out to be even more pronounced under high-perceptual-load conditions (Exps. 1 and 2). In Experiment 3, target-present or -absent decisions for brief target displays had to be made, thereby ruling out response-priming processes as a cause of the congruency effects. Nevertheless, target detection (d') was increased by congruent primes (30 % compression) in comparison to incongruent or neutral primes. Our results suggest semantic object-based auditory-visual interactions, which rapidly increase the denoted target object's salience. This would apply, in particular, to complex visual scenes.
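Experiment 3 above indexes detection sensitivity with d'. For reference, a small sketch of how d' is commonly computed from hit and false-alarm counts (with a standard correction for extreme rates) is shown below; the counts are invented for illustration.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' from a yes/no detection task, with a log-linear correction
    that keeps hit and false-alarm rates away from 0 and 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Invented counts for target-present/absent decisions under two priming conditions.
print(f"congruent prime:   d' = {d_prime(42, 8, 6, 44):.2f}")
print(f"incongruent prime: d' = {d_prime(33, 17, 9, 41):.2f}")
```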
Spectra-temporal patterns underlying mental addition: an ERP and ERD/ERS study.
Ku, Yixuan; Hong, Bo; Gao, Xiaorong; Gao, Shangkai
2010-03-12
Functional neuroimaging data have shown that mental calculation involves fronto-parietal areas that are composed of different subsystems shared with other cognitive functions such as working memory and language. Event-related potential (ERP) analysis has also indicated sequential information changes during the calculation process. However, little is known about the dynamic properties of oscillatory networks in this process. In the present study, we applied both ERP and event-related (de-)synchronization (ERS/ERD) analyses to EEG data recorded from normal human subjects performing tasks for sequential visual/auditory mental addition. Results in the study indicate that the late positive components (LPCs) can be decomposed into two separate parts. The earlier element LPC1 (around 360ms) reflects the computing attribute and is more prominent in calculation tasks. The later element LPC2 (around 590ms) indicates an effect of number size and appears larger only in a more complex 2-digit addition task. The theta ERS and alpha ERD show modality-independent frontal and parietal differential patterns between the mental addition and control groups, and discrepancies are noted in the beta ERD between the 2-digit and 1-digit mental addition groups. The 2-digit addition (both visual and auditory) results in similar beta ERD patterns to the auditory control, which may indicate a reliance on auditory-related resources in mental arithmetic, especially with increasing task difficulty. These results coincide with the theory of simple calculation relying on the visuospatial process and complex calculation depending on the phonological process. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
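ERD/ERS analyses of the kind used above express band-limited power in a post-stimulus window as a percentage change from a pre-stimulus baseline. The following sketch computes that quantity on a hypothetical alpha-band power time course; the window lengths and data are assumptions for the example.

```python
import numpy as np

def erd_ers_percent(band_power, fs, baseline_window, test_window):
    """Percentage change of band-limited power in a test window relative to a baseline window.
    Negative values correspond to ERD (desynchronization), positive values to ERS."""
    def mean_power(window):
        start, end = (int(round(edge * fs)) for edge in window)
        return band_power[start:end].mean()
    reference = mean_power(baseline_window)
    return 100.0 * (mean_power(test_window) - reference) / reference

# Hypothetical alpha-band power: 1 s baseline followed by a post-stimulus power decrease.
fs = 100.0
rng = np.random.default_rng(4)
alpha_power = np.r_[rng.normal(10.0, 1.0, 100), rng.normal(7.0, 1.0, 100)]

print(f"alpha ERD/ERS: {erd_ers_percent(alpha_power, fs, (0.0, 1.0), (1.0, 2.0)):.1f}%")
```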
Information flow in the auditory cortical network
Hackett, Troy A.
2011-01-01
Auditory processing in the cerebral cortex is comprised of an interconnected network of auditory and auditory-related areas distributed throughout the forebrain. The nexus of auditory activity is located in temporal cortex among several specialized areas, or fields, that receive dense inputs from the medial geniculate complex. These areas are collectively referred to as auditory cortex. Auditory activity is extended beyond auditory cortex via connections with auditory-related areas elsewhere in the cortex. Within this network, information flows between areas to and from countless targets, but in a manner that is characterized by orderly regional, areal and laminar patterns. These patterns reflect some of the structural constraints that passively govern the flow of information at all levels of the network. In addition, the exchange of information within these circuits is dynamically regulated by intrinsic neurochemical properties of projecting neurons and their targets. This article begins with an overview of the principal circuits and how each is related to information flow along major axes of the network. The discussion then turns to a description of neurochemical gradients along these axes, highlighting recent work on glutamate transporters in the thalamocortical projections to auditory cortex. The article concludes with a brief discussion of relevant neurophysiological findings as they relate to structural gradients in the network. PMID:20116421
Neuronal basis of speech comprehension.
Specht, Karsten
2014-01-01
Verbal communication does not rely only on the simple perception of auditory signals. It rather involves parallel and integrative processing of linguistic and non-linguistic information, drawing in particular on temporal and frontal areas. This review describes the inherent complexity of auditory speech comprehension from a functional-neuroanatomical perspective. The review is divided into two parts. In the first part, the structural and functional asymmetry of language-relevant structures is discussed. The second part of the review discusses recent neuroimaging studies, which coherently demonstrate that speech comprehension processes rely on a hierarchical network involving the temporal, parietal, and frontal lobes. Further, the results support the dual-stream model for speech comprehension, with a dorsal stream for auditory-motor integration and a ventral stream for extracting meaning as well as for processing sentences and narratives. Specific patterns of functional asymmetry between the left and right hemisphere can also be demonstrated. The review article concludes with a discussion of interactions between the dorsal and ventral streams, particularly the involvement of motor-related areas in speech perception processes, and outlines some remaining unresolved issues. This article is part of a Special Issue entitled Human Auditory Neuroimaging. Copyright © 2013 Elsevier B.V. All rights reserved.
Neuroplasticity in the auditory system.
Gil-Loyzaga, P
2005-01-01
An increasing interest in neuroplasticity and nerve regeneration within the auditory receptor and pathway has developed in recent years. The receptor and the auditory pathway are controlled by highly complex circuits that appear during embryonic development. During this early maturation process of the auditory sensory elements, two types of nerve fibers develop: permanent fibers that remain through full maturity, and transient fibers that ultimately disappear. Both stable and transitory fibers, however, as well as developing sensory cells, express, and probably release, their respective neurotransmitters, which could be involved in neuroplasticity. Cell culture experiments have added significant information; the in vitro administration of glutamate or GABA to isolated spiral ganglion neurons clearly modified neural development. Neuroplasticity has also been found in the adult. Nerve regeneration and neuroplasticity have been demonstrated in the adult auditory receptors as well as throughout the auditory pathway. Neuroplasticity studies could prove useful in developing current or future therapeutic strategies (e.g., cochlear implants or stem cells), and also in understanding the pathogenesis of auditory or language disorders (e.g., deafness, tinnitus, dyslexia).
Audio-tactile integration and the influence of musical training.
Kuchenbuch, Anja; Paraskevopoulos, Evangelos; Herholz, Sibylle C; Pantev, Christo
2014-01-01
Perception of our environment is a multisensory experience; information from different sensory systems like the auditory, visual and tactile is constantly integrated. Complex tasks that require high temporal and spatial precision of multisensory integration put strong demands on the underlying networks but it is largely unknown how task experience shapes multisensory processing. Long-term musical training is an excellent model for brain plasticity because it shapes the human brain at functional and structural levels, affecting a network of brain areas. In the present study we used magnetoencephalography (MEG) to investigate how audio-tactile perception is integrated in the human brain and if musicians show enhancement of the corresponding activation compared to non-musicians. Using a paradigm that allowed the investigation of combined and separate auditory and tactile processing, we found a multisensory incongruency response, generated in frontal, cingulate and cerebellar regions, an auditory mismatch response generated mainly in the auditory cortex and a tactile mismatch response generated in frontal and cerebellar regions. The influence of musical training was seen in the audio-tactile as well as in the auditory condition, indicating enhanced higher-order processing in musicians, while the sources of the tactile MMN were not influenced by long-term musical training. Consistent with the predictive coding model, more basic, bottom-up sensory processing was relatively stable and less affected by expertise, whereas areas for top-down models of multisensory expectancies were modulated by training.
EEG signatures accompanying auditory figure-ground segregation
Tóth, Brigitta; Kocsis, Zsuzsanna; Háden, Gábor P.; Szerafin, Ágnes; Shinn-Cunningham, Barbara; Winkler, István
2017-01-01
In everyday acoustic scenes, figure-ground segregation typically requires one to group together sound elements over both time and frequency. Electroencephalogram was recorded while listeners detected repeating tonal complexes composed of a random set of pure tones within stimuli consisting of randomly varying tonal elements. The repeating pattern was perceived as a figure over the randomly changing background. It was found that detection performance improved both as the number of pure tones making up each repeated complex (figure coherence) increased, and as the number of repeated complexes (duration) increased – i.e., detection was easier when either the spectral or temporal structure of the figure was enhanced. Figure detection was accompanied by the elicitation of the object related negativity (ORN) and the P400 event-related potentials (ERPs), which have been previously shown to be evoked by the presence of two concurrent sounds. Both ERP components had generators within and outside of auditory cortex. The amplitudes of the ORN and the P400 increased with both figure coherence and figure duration. However, only the P400 amplitude correlated with detection performance. These results suggest that 1) the ORN and P400 reflect processes involved in detecting the emergence of a new auditory object in the presence of other concurrent auditory objects; 2) the ORN corresponds to the likelihood of the presence of two or more concurrent sound objects, whereas the P400 reflects the perceptual recognition of the presence of multiple auditory objects and/or preparation for reporting the detection of a target object. PMID:27421185
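For readers unfamiliar with this class of stimuli, the sketch below shows one way such a "stochastic figure-ground" sequence can be constructed: random multi-tone chords in which a fixed subset of frequencies repeats over several consecutive chords, forming the figure against the changing background. This is an illustrative Python sketch only, not the authors' stimulus code; the frequency pool, chord duration, and parameter values are assumptions.

```python
import numpy as np

def sfg_stimulus(n_chords=40, tones_per_chord=10, coherence=4, figure_len=8,
                 freq_pool=None, chord_dur=0.05, fs=44100, rng=None):
    """Stochastic figure-ground stimulus: a sequence of random tone 'chords' in
    which a fixed set of `coherence` frequencies repeats for `figure_len`
    consecutive chords, forming the figure over the varying background."""
    rng = np.random.default_rng(rng)
    if freq_pool is None:
        freq_pool = np.geomspace(200.0, 5000.0, 60)          # illustrative pool
    figure_freqs = rng.choice(freq_pool, coherence, replace=False)
    figure_onset = rng.integers(0, n_chords - figure_len + 1)

    t = np.arange(int(chord_dur * fs)) / fs
    chords = []
    for i in range(n_chords):
        freqs = list(rng.choice(freq_pool, tones_per_chord, replace=False))
        if figure_onset <= i < figure_onset + figure_len:
            freqs[:coherence] = figure_freqs                  # repeat the figure tones
        chord = sum(np.sin(2 * np.pi * f * t) for f in freqs)
        chords.append(chord / tones_per_chord)
    return np.concatenate(chords), figure_onset

stimulus, onset = sfg_stimulus(coherence=6, figure_len=10)    # higher coherence/duration -> easier detection
```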
Sun, Hai-Ying; Hu, Yu-Juan; Zhao, Xue-Yan; Zhong, Yi; Zeng, Ling-Ling; Chen, Xu-Bo; Yuan, Jie; Wu, Jing; Sun, Yu; Kong, Wen; Kong, Wei-Jia
2015-07-01
Age-associated degeneration in the central auditory system, which is defined as central presbycusis, can impair sound localization and speech perception. Research has shown that oxidative stress plays a central role in the pathological process of central presbycusis. Thioredoxin 2 (Trx2), a member of the thioredoxin family, plays a key role in regulating the homeostasis of cellular reactive oxygen species and in anti-apoptosis. The purpose of this study was to explore the association between Trx2 and the phenotype of central presbycusis using a mimetic aging animal model induced by long-term exposure to d-galactose (d-Gal). We also explored changes in thioredoxin-interacting protein (TXNIP), apoptosis signal regulating kinase 1 (ASK1) and phosphorylated ASK1 (p-ASK1) expression, as well as the Trx2-TXNIP/Trx2-ASK1 binding complex, in the auditory cortex of mimetic aging rats. Our results demonstrate that, compared with control groups, the levels of Trx2 and the Trx2-ASK1 binding complex were significantly reduced, whereas TXNIP, ASK1, and p-ASK1 expression, as well as the Trx2-TXNIP binding complex, were significantly increased in the auditory cortex of the mimetic aging groups. These results indicate that changes in Trx2 and the TXNIP-Trx2-ASK1 signaling pathway may participate in the pathogenesis of central presbycusis. © 2015 FEBS.
Near-Term Fetuses Process Temporal Features of Speech
ERIC Educational Resources Information Center
Granier-Deferre, Carolyn; Ribeiro, Aurelie; Jacquet, Anne-Yvonne; Bassereau, Sophie
2011-01-01
The perception of speech and music requires processing of variations in spectra and amplitude over different time intervals. Near-term fetuses can discriminate acoustic features, such as frequencies and spectra, but whether they can process complex auditory streams, such as speech sequences and more specifically their temporal variations, fast or…
Prediction and constraint in audiovisual speech perception
Peelle, Jonathan E.; Sommers, Mitchell S.
2015-01-01
During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing precision of prediction. Electrophysiological studies demonstrate oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to auditory information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported by distinct neuroanatomical mechanisms. PMID:25890390
Paavilainen, P; Simola, J; Jaramillo, M; Näätänen, R; Winkler, I
2001-03-01
Brain mechanisms extracting invariant information from varying auditory inputs were studied using the mismatch-negativity (MMN) brain response. We wished to determine whether the preattentive sound-analysis mechanisms, reflected by MMN, are capable of extracting invariant relationships based on abstract conjunctions between two sound features. The standard stimuli varied over a large range in frequency and intensity dimensions following the rule that the higher the frequency, the louder the intensity. The occasional deviant stimuli violated this frequency-intensity relationship and elicited an MMN. The results demonstrate that preattentive processing of auditory stimuli extends to unexpectedly complex relationships between the stimulus features.
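The abstract conjunction rule used here ("the higher the frequency, the louder the intensity") can be made concrete with a small sketch of how standards and rule-violating deviants might be generated. The frequency and level ranges and the linear mapping below are illustrative assumptions, not the published stimulus parameters.

```python
import numpy as np

rng = np.random.default_rng(5)

def standard_tone():
    """Standard: frequency and intensity co-vary (higher frequency -> louder),
    here via an arbitrary linear mapping over illustrative ranges."""
    f = rng.uniform(500.0, 2000.0)                     # Hz
    level = 50.0 + 20.0 * (f - 500.0) / 1500.0         # dB SPL, rises with frequency
    return f, level

def deviant_tone():
    """Deviant: violates the abstract rule (higher frequency -> softer)."""
    f = rng.uniform(500.0, 2000.0)
    level = 70.0 - 20.0 * (f - 500.0) / 1500.0
    return f, level

sequence = [standard_tone() for _ in range(90)] + [deviant_tone() for _ in range(10)]
rng.shuffle(sequence)                                   # deviants occur occasionally among standards
```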
Reduced auditory efferent activity in childhood selective mutism.
Bar-Haim, Yair; Henkin, Yael; Ari-Even-Roth, Daphne; Tetin-Schneider, Simona; Hildesheimer, Minka; Muchnik, Chava
2004-06-01
Selective mutism (SM) is a psychiatric disorder of childhood characterized by a consistent inability to speak in specific situations despite the ability to speak normally in others. The objective of this study was to test whether auditory efferent activity, which may have a direct bearing on speaking behavior, is compromised in selectively mute children. Participants were 16 children with selective mutism and 16 normally developing control children matched for age and gender. All children were tested for pure-tone audiometry, speech reception thresholds, speech discrimination, middle-ear acoustic reflex thresholds and decay function, transient evoked otoacoustic emission, suppression of transient evoked otoacoustic emission, and auditory brainstem response. Compared with control children, selectively mute children displayed specific deficiencies in auditory efferent activity. These aberrations in efferent activity appear alongside normal pure-tone and speech audiometry and normal brainstem transmission, as indicated by auditory brainstem response latencies. The diminished auditory efferent activity detected in some children with SM may result in desensitization of their auditory pathways by self-vocalization and in reduced control of masking and distortion of incoming speech sounds. These children may gradually learn to restrict vocalization to the minimal amount possible in contexts that require complex auditory processing.
Human Time-Frequency Acuity Beats the Fourier Uncertainty Principle
NASA Astrophysics Data System (ADS)
Oppenheim, Jacob N.; Magnasco, Marcelo O.
2013-01-01
The time-frequency uncertainty principle states that the product of the temporal and frequency extents of a signal cannot be smaller than 1/(4π). We study human ability to simultaneously judge the frequency and the timing of a sound. Our subjects often exceeded the uncertainty limit, sometimes by more than tenfold, mostly through remarkable timing acuity. Our results establish a lower bound for the nonlinearity and complexity of the algorithms employed by our brains in parsing transient sounds, rule out simple “linear filter” models of early auditory processing, and highlight timing acuity as a central feature in auditory object processing.
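The uncertainty relation referenced above is conventionally written with root-mean-square measures of duration and bandwidth, and the bound of 1/(4π) is attained only by Gaussian-envelope (Gabor) signals. A compact statement, included here for clarity:

```latex
\Delta t \,\Delta f \;\ge\; \frac{1}{4\pi},
\qquad
\Delta t^{2} = \frac{\int (t-\bar{t})^{2}\,|x(t)|^{2}\,dt}{\int |x(t)|^{2}\,dt},
\qquad
\Delta f^{2} = \frac{\int (f-\bar{f})^{2}\,|X(f)|^{2}\,df}{\int |X(f)|^{2}\,df}.
```

Joint timing-frequency judgments that beat this product cannot arise from a single linear time-frequency analysis of the stimulus, which is the sense in which the abstract rules out simple "linear filter" models of early auditory processing.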
Martins, Mauricio Dias; Gingras, Bruno; Puig-Waldmueller, Estela; Fitch, W Tecumseh
2017-04-01
The human ability to process hierarchical structures has been a longstanding research topic. However, the nature of the cognitive machinery underlying this faculty remains controversial. Recursion, the ability to embed structures within structures of the same kind, has been proposed as a key component of our ability to parse and generate complex hierarchies. Here, we investigated the cognitive representation of both recursive and iterative processes in the auditory domain. The experiment used a two-alternative forced-choice paradigm: participants were exposed to three-step processes in which pure-tone sequences were built either through recursive or iterative processes, and had to choose the correct completion. Foils were constructed according to generative processes that did not match the previous steps. Both musicians and non-musicians were able to represent recursion in the auditory domain, although musicians performed better. We also observed that general 'musical' aptitudes played a role in both recursion and iteration, although the influence of musical training was somehow independent from melodic memory. Moreover, unlike iteration, recursion in audition was well correlated with its non-auditory (recursive) analogues in the visual and action sequencing domains. These results suggest that the cognitive machinery involved in establishing recursive representations is domain-general, even though this machinery requires access to information resulting from domain-specific processes. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Effects of voice harmonic complexity on ERP responses to pitch-shifted auditory feedback.
Behroozmand, Roozbeh; Korzyukov, Oleg; Larson, Charles R
2011-12-01
The present study investigated the neural mechanisms of voice pitch control for different levels of harmonic complexity in the auditory feedback. Event-related potentials (ERPs) were recorded in response to+200 cents pitch perturbations in the auditory feedback of self-produced natural human vocalizations, complex and pure tone stimuli during active vocalization and passive listening conditions. During active vocal production, ERP amplitudes were largest in response to pitch shifts in the natural voice, moderately large for non-voice complex stimuli and smallest for the pure tones. However, during passive listening, neural responses were equally large for pitch shifts in voice and non-voice complex stimuli but still larger than that for pure tones. These findings suggest that pitch change detection is facilitated for spectrally rich sounds such as natural human voice and non-voice complex stimuli compared with pure tones. Vocalization-induced increase in neural responses for voice feedback suggests that sensory processing of naturally-produced complex sounds such as human voice is enhanced by means of motor-driven mechanisms (e.g. efference copies) during vocal production. This enhancement may enable the audio-vocal system to more effectively detect and correct for vocal errors in the feedback of natural human vocalizations to maintain an intended vocal output for speaking. Copyright © 2011 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
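For readers unfamiliar with the cents scale used for the +200 cents perturbation, the conversion to a frequency ratio is:

```latex
\frac{f_{\text{shifted}}}{f_{0}} \;=\; 2^{\,c/1200},
\qquad
c = +200 \;\Rightarrow\; \frac{f_{\text{shifted}}}{f_{0}} = 2^{1/6} \approx 1.122,
```

i.e., the feedback pitch is raised by roughly 12%, or two equal-tempered semitones, relative to the produced fundamental.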
Leech, Robert; Aydelott, Jennifer; Symons, Germaine; Carnevale, Julia; Dick, Frederic
2007-11-01
How does the development and consolidation of perceptual, attentional, and higher cognitive abilities interact with language acquisition and processing? We explored children's (ages 5-17) and adults' (ages 18-51) comprehension of morphosyntactically varied sentences under several competing speech conditions that varied in the degree of attentional demands, auditory masking, and semantic interference. We also evaluated the relationship between subjects' syntactic comprehension and their word reading efficiency and general 'speed of processing'. We found that the interactions between perceptual and attentional processes and complex sentence interpretation changed considerably over the course of development. Perceptual masking of the speech signal had an early and lasting impact on comprehension, particularly for more complex sentence structures. In contrast, increased attentional demand in the absence of energetic auditory masking primarily affected younger children's comprehension of difficult sentence types. Finally, the predictability of syntactic comprehension abilities by external measures of development and expertise is contingent upon the perceptual, attentional, and semantic milieu in which language processing takes place.
Maskey, Dhiraj; Kim, Hyung Gun; Suh, Myung-Whan; Roh, Gu Seob; Kim, Myeung Ju
2014-08-01
The increasing use of mobile communication has triggered an interest in its possible effects on the regulation of neurotransmitter signals. Because mobile phones are held close to hearing-related brain regions during use, prolonged exposure to radiofrequency (RF) radiation may decrease the ability to segregate sounds, leading to serious auditory dysfunction. The interplay between excitatory and inhibitory molecular signaling plays a major role in auditory function. In particular, inhibitory molecules, such as glycine, are predominantly localized in the auditory brainstem. However, the effects of exposure to RF radiation on auditory function have not been reported to date. Thus, the aim of the present study was to investigate the effects of exposure to RF radiation at 835 MHz, with a specific absorption rate of 4.0 W/kg for three months, on glycine receptor (GlyR) immunoreactivity (IR) in the auditory brainstem region, using free-floating immunohistochemistry. Compared with the sham control (SC) group, a significant loss of staining intensity of neuropils and cells in the different subdivisions of the auditory brainstem regions was observed in the mice exposed to RF radiation (E4 group). A decrease in the number of GlyR-immunoreactive cells was also noted in the cochlear nuclear complex [anteroventral cochlear nucleus (AVCN), 31.09%; dorsal cochlear nucleus (DCN), 14.08%; posteroventral cochlear nucleus (PVCN), 32.79%] and the superior olivary complex (SOC) [lateral superior olivary nucleus (LSO), 36.85%; superior paraolivary nucleus (SPN), 24.33%; medial superior olivary nucleus (MSO), 23.23%; medial nucleus of the trapezoid body (MNTB), 10.15%] of the mice in the E4 group. Auditory brainstem response (ABR) analysis also revealed a significant threshold elevation in the exposed (E4) group, which may be associated with auditory dysfunction. The present study suggests that the auditory brainstem region is susceptible to chronic exposure to RF radiation, which may affect the function of the central auditory system.
NASA Astrophysics Data System (ADS)
Misurelli, Sara M.
The ability to analyze an "auditory scene"---that is, to selectively attend to a target source while simultaneously segregating and ignoring distracting information---is one of the most important and complex skills utilized by normal hearing (NH) adults. The NH adult auditory system and brain work rather well to segregate auditory sources in adverse environments. However, for some children and individuals with hearing loss, selectively attending to one source in noisy environments can be extremely challenging. In a normal auditory system, information arriving at each ear is integrated, and thus these binaural cues aid in speech understanding in noise. A growing number of individuals who are deaf now receive cochlear implants (CIs), which supply hearing through electrical stimulation to the auditory nerve. In particular, bilateral cochlear implants (BiCIs) are now becoming more prevalent, especially in children. However, because CI sound processing lacks both fine structure cues and coordination between stimulation at the two ears, binaural cues may either be absent or inconsistent. For children with NH and with BiCIs, this difficulty in segregating sources is of particular concern because their learning and development commonly occur within the context of complex auditory environments. This dissertation intends to explore and understand the ability of children with NH and with BiCIs to function in everyday noisy environments. The goals of this work are to (1) Investigate source segregation abilities in children with NH and with BiCIs; (2) Examine the effect of target-interferer similarity and the benefits of source segregation for children with NH and with BiCIs; (3) Investigate measures of executive function that may predict performance in complex and realistic auditory tasks of source segregation for listeners with NH; and (4) Examine source segregation abilities in NH listeners, from school-age to adults.
ERIC Educational Resources Information Center
Leech, Robert; Saygin, Ayse Pinar
2011-01-01
Using functional MRI, we investigated whether auditory processing of both speech and meaningful non-linguistic environmental sounds in superior and middle temporal cortex relies on a complex and spatially distributed neural system. We found that evidence for spatially distributed processing of speech and environmental sounds in a substantial…
ERIC Educational Resources Information Center
Swink, Shannon; Stuart, Andrew
2012-01-01
The effect of gender on the N1-P2 auditory complex was examined while listening and speaking with altered auditory feedback. Fifteen normal hearing adult males and 15 females participated. N1-P2 components were evoked while listening to self-produced nonaltered and frequency shifted /a/ tokens and during production of /a/ tokens during nonaltered…
Chernyshev, Boris V; Bryzgalov, Dmitri V; Lazarev, Ivan E; Chernysheva, Elena G
2016-08-03
Current understanding of feature binding remains controversial. Studies involving mismatch negativity (MMN) measurement show a low level of binding, whereas behavioral experiments suggest a higher level. We examined the possibility that the two levels of feature binding coexist and may be shown within one experiment. The electroencephalogram was recorded while participants were engaged in an auditory two-alternative choice task, which was a combination of the oddball and the condensation tasks. Two types of deviant target stimuli were used - complex stimuli, which required feature conjunction to be identified, and simple stimuli, which differed from standard stimuli in a single feature. Two behavioral outcomes - correct responses and errors - were analyzed separately. Responses to complex stimuli were slower and less accurate than responses to simple stimuli. MMN was prominent, and its amplitude was similar for simple and complex stimuli, even though these stimuli differed from the standards in one and two features, respectively. Errors in response to complex stimuli, but not to simple stimuli, were associated with decreased MMN amplitude. P300 amplitude was greater for complex stimuli than for simple stimuli. Our data are compatible with the explanation that feature binding in the auditory modality depends on two concurrent levels of processing. We speculate that the earlier level, related to MMN generation, is an essential and critical stage. Yet a later analysis is also carried out, affecting P300 amplitude and response time. The current findings provide resolution to conflicting views on the nature of feature binding and show that feature binding is a distributed, multilevel process.
Montefinese, Maria; Semenza, Carlo
2018-05-17
It is widely accepted that different number-related tasks, including solving simple addition and subtraction, may induce attentional shifts on the so-called mental number line, which represents larger numbers on the right and smaller numbers on the left. Recently, it has been shown that different number-related tasks also employ spatial attention shifts along with general cognitive processes. Here we investigated for the first time whether number line estimation and complex mental arithmetic recruit a common mechanism in healthy adults. Participants' performance in two-digit mental additions and subtractions using visual stimuli was compared with their performance in a mental bisection task using auditory numerical intervals. Results showed significant correlations between participants' performance in number line bisection and that in two-digit mental arithmetic operations, especially in additions, providing a first proof of a shared cognitive mechanism (or multiple shared cognitive mechanisms) between auditory number bisection and complex mental calculation.
A. Smith, Nicholas; A. Folland, Nicholas; Martinez, Diana M.; Trainor, Laurel J.
2017-01-01
Infants learn to use auditory and visual information to organize the sensory world into identifiable objects with particular locations. Here we use a behavioural method to examine infants' use of harmonicity cues to auditory object perception in a multisensory context. Sounds emitted by different objects sum in the air and the auditory system must figure out which parts of the complex waveform belong to different sources (auditory objects). One important cue to this source separation is that complex tones with pitch typically contain a fundamental frequency and harmonics at integer multiples of the fundamental. Consequently, adults hear a mistuned harmonic in a complex sound as a distinct auditory object (Alain et al., 2003). Previous work by our group demonstrated that 4-month-old infants are also sensitive to this cue. They behaviourally discriminate a complex tone with a mistuned harmonic from the same complex with in-tune harmonics, and show an object-related event-related potential (ERP) electrophysiological (EEG) response to the stimulus with mistuned harmonics. In the present study we use an audiovisual procedure to investigate whether infants perceive a complex tone with an 8% mistuned harmonic as emanating from two objects, rather than merely detecting the mistuned cue. We paired in-tune and mistuned complex tones with visual displays that contained either one or two bouncing balls. Four-month-old infants showed surprise at the incongruous pairings, looking longer at the display of two balls when paired with the in-tune complex and at the display of one ball when paired with the mistuned harmonic complex. We conclude that infants use harmonicity as a cue for source separation when integrating auditory and visual information in object perception. PMID:28346869
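A minimal Python sketch of the kind of stimulus described (a harmonic complex in which one component is mistuned by 8%); the fundamental frequency, number of harmonics, and duration are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def harmonic_complex(f0=200.0, n_harmonics=10, mistuned=None, shift=0.08,
                     dur=1.0, fs=44100):
    """Complex tone with components at integer multiples of f0. If `mistuned`
    names a harmonic number, that component is raised by `shift` (0.08 = 8%),
    which listeners tend to hear as a second, separate auditory object."""
    t = np.arange(int(dur * fs)) / fs
    x = np.zeros_like(t)
    for k in range(1, n_harmonics + 1):
        f = k * f0
        if k == mistuned:
            f *= 1.0 + shift
        x += np.sin(2 * np.pi * f * t)
    return x / n_harmonics

in_tune_tone = harmonic_complex()                 # all harmonics in tune
mistuned_tone = harmonic_complex(mistuned=3)      # 3rd harmonic raised by 8%
```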
Grandin, Cécile B.; Dricot, Laurence; Plaza, Paula; Lerens, Elodie; Rombaux, Philippe; De Volder, Anne G.
2013-01-01
Using functional magnetic resonance imaging (fMRI) in ten early blind humans, we found robust occipital activation during two odor-processing tasks (discrimination or categorization of fruit and flower odors), as well as during control auditory-verbal conditions (discrimination or categorization of fruit and flower names). We also found evidence for reorganization and specialization of the ventral part of the occipital cortex, with dissociation according to stimulus modality: the right fusiform gyrus was most activated during olfactory conditions while part of the left ventral lateral occipital complex showed a preference for auditory-verbal processing. Only little occipital activation was found in sighted subjects, but the same right-olfactory/left-auditory-verbal hemispheric lateralization was found overall in their brain. This difference between the groups was mirrored by superior performance of the blind in various odor-processing tasks. Moreover, the level of right fusiform gyrus activation during the olfactory conditions was highly correlated with individual scores in a variety of odor recognition tests, indicating that the additional occipital activation may play a functional role in odor processing. PMID:23967263
Auditory and audio-vocal responses of single neurons in the monkey ventral premotor cortex.
Hage, Steffen R
2018-03-20
Monkey vocalization is a complex behavioral pattern, which is flexibly used in audio-vocal communication. A recently proposed dual neural network model suggests that cognitive control might be involved in this behavior, originating from a frontal cortical network in the prefrontal cortex and mediated via projections from the rostral portion of the ventral premotor cortex (PMvr) and motor cortex to the primary vocal motor network in the brainstem. For the rapid adjustment of vocal output to external acoustic events, strong interconnections between vocal motor and auditory sites are needed, which are present at cortical and subcortical levels. However, the role of the PMvr in audio-vocal integration processes remains unclear. In the present study, single neurons in the PMvr were recorded in rhesus monkeys (Macaca mulatta) while volitionally producing vocalizations in a visual detection task or passively listening to monkey vocalizations. Ten percent of randomly selected neurons in the PMvr modulated their discharge rate in response to acoustic stimulation with species-specific calls. More than four-fifths of these auditory neurons showed an additional modulation of their discharge rates either before and/or during the monkeys' motor production of the vocalization. Based on these audio-vocal interactions, the PMvr might be well positioned to mediate higher order auditory processing with cognitive control of the vocal motor output to the primary vocal motor network. Such audio-vocal integration processes in the premotor cortex might constitute a precursor for the evolution of complex learned audio-vocal integration systems, ultimately giving rise to human speech. Copyright © 2018 Elsevier B.V. All rights reserved.
Combined mirror visual and auditory feedback therapy for upper limb phantom pain: a case report
2011-01-01
Introduction: Phantom limb sensation and phantom limb pain are very common issues after amputations. In recent years, accumulating data have implicated 'mirror visual feedback' or 'mirror therapy' as helpful in the treatment of phantom limb sensation and phantom limb pain. Case presentation: We present the case of a 24-year-old Caucasian man, a left upper limb amputee, treated with mirror visual feedback combined with auditory feedback, with improved pain relief. Conclusion: This case may suggest that auditory feedback might enhance the effectiveness of mirror visual feedback and serve as a valuable addition to the complex multi-sensory processing of body perception in patients who are amputees. PMID:21272334
Kornilov, Sergey A; Landi, Nicole; Rakhlin, Natalia; Fang, Shin-Yi; Grigorenko, Elena L; Magnuson, James S
2014-01-01
We examined neural indices of pre-attentive phonological and attentional auditory discrimination in children with developmental language disorder (DLD, n = 23) and typically developing (n = 16) peers from a geographically isolated Russian-speaking population with an elevated prevalence of DLD. Pre-attentive phonological MMN components were robust and did not differ between the two groups. Children with DLD showed attenuated P3 and atypically distributed P2 components in the attentional auditory discrimination task; P2 and P3 amplitudes were linked to working memory capacity, development of complex syntax, and vocabulary. The results corroborate findings of reduced processing capacity in DLD and support a multifactorial view of the disorder.
Potes, Cristhian; Brunner, Peter; Gunduz, Aysegul; Knight, Robert T; Schalk, Gerwin
2014-08-15
Neuroimaging approaches have implicated multiple brain sites in musical perception, including the posterior part of the superior temporal gyrus and adjacent perisylvian areas. However, the detailed spatial and temporal relationship of neural signals that support auditory processing is largely unknown. In this study, we applied a novel inter-subject analysis approach to electrophysiological signals recorded from the surface of the brain (electrocorticography (ECoG)) in ten human subjects. This approach allowed us to reliably identify those ECoG features that were related to the processing of a complex auditory stimulus (i.e., continuous piece of music) and to investigate their spatial, temporal, and causal relationships. Our results identified stimulus-related modulations in the alpha (8-12 Hz) and high gamma (70-110 Hz) bands at neuroanatomical locations implicated in auditory processing. Specifically, we identified stimulus-related ECoG modulations in the alpha band in areas adjacent to primary auditory cortex, which are known to receive afferent auditory projections from the thalamus (80 of a total of 15,107 tested sites). In contrast, we identified stimulus-related ECoG modulations in the high gamma band not only in areas close to primary auditory cortex but also in other perisylvian areas known to be involved in higher-order auditory processing, and in superior premotor cortex (412/15,107 sites). Across all implicated areas, modulations in the high gamma band preceded those in the alpha band by 280 ms, and activity in the high gamma band causally predicted alpha activity, but not vice versa (Granger causality, p<1e(-8)). Additionally, detailed analyses using Granger causality identified causal relationships of high gamma activity between distinct locations in early auditory pathways within superior temporal gyrus (STG) and posterior STG, between posterior STG and inferior frontal cortex, and between STG and premotor cortex. Evidence suggests that these relationships reflect direct cortico-cortical connections rather than common driving input from subcortical structures such as the thalamus. In summary, our inter-subject analyses defined the spatial and temporal relationships between music-related brain activity in the alpha and high gamma bands. They provide experimental evidence supporting current theories about the putative mechanisms of alpha and gamma activity, i.e., reflections of thalamo-cortical interactions and local cortical neural activity, respectively, and the results are also in agreement with existing functional models of auditory processing. Copyright © 2014 Elsevier Inc. All rights reserved.
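As a rough illustration of the kind of directional analysis described (testing whether high-gamma power predicts later alpha power), the following hedged Python sketch extracts band-power envelopes from a single channel and runs a bivariate Granger test. It is not the authors' pipeline; the band edges, filter order, downsampling step, and lag order are assumptions, and the random signal is merely a placeholder for real ECoG data.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert
from statsmodels.tsa.stattools import grangercausalitytests

def band_power(x, fs, lo, hi, order=4):
    """Instantaneous power envelope of x in the [lo, hi] Hz band."""
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return np.abs(hilbert(sosfiltfilt(sos, x))) ** 2

fs = 1000
rng = np.random.default_rng(0)
ecog = rng.standard_normal(60 * fs)          # placeholder for one ECoG channel

gamma = band_power(ecog, fs, 70, 110)        # high-gamma envelope
alpha = band_power(ecog, fs, 8, 12)          # alpha envelope

# Does past high-gamma power improve prediction of alpha power?
# Columns: [effect, putative cause]; the test asks whether column 2 Granger-causes column 1.
step = 50                                    # downsample envelopes (1000 Hz -> 20 Hz)
data = np.column_stack([alpha[::step], gamma[::step]])
results = grangercausalitytests(data, maxlag=5, verbose=False)
print(results[5][0]["ssr_ftest"])            # (F statistic, p value, df_denom, df_num)
```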
[Children with specific language impairment: electrophysiological and pedaudiological findings].
Rinker, T; Hartmann, K; Smith, E; Reiter, R; Alku, P; Kiefer, M; Brosch, S
2014-08-01
Auditory deficits may be at the core of the language delay in children with Specific Language Impairment (SLI). It was therefore hypothesized that children with SLI perform poorly on 4 tests typically used to diagnose central auditory processing disorder (CAPD), as well as in the processing of phonetic and tone stimuli in an electrophysiological experiment. 14 children with SLI (mean age 61.7 months) and 16 children without SLI (mean age 64.9 months) were tested with 4 tasks: non-word repetition, language discrimination in noise, directional hearing, and dichotic listening. The electrophysiological recording Mismatch Negativity (MMN) employed sine tones (600 vs. 650 Hz) and phonetic stimuli (/ε/ versus /e/). Control children and children with SLI differed significantly in the non-word repetition as well as in the dichotic listening task, but not in the two other tasks. Only the control children recognized the frequency difference in the MMN experiment. The phonetic difference was discriminated by both groups; however, effects were longer-lasting for the control children. Group differences were not significant. Children with SLI thus show limitations in auditory processing when a complex task involves repeating unfamiliar or difficult material, and they show subtle deficits in auditory processing at the neural level. © Georg Thieme Verlag KG Stuttgart · New York.
Modeling the Development of Audiovisual Cue Integration in Speech Perception
Getz, Laura M.; Nordeen, Elke R.; Vrabic, Sarah C.; Toscano, Joseph C.
2017-01-01
Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues. PMID:28335558
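A minimal sketch of the general modeling idea described here (a Gaussian mixture over joint auditory-visual cue values whose posterior responsibilities act as category probabilities). It is not the authors' implementation; the cue dimensions, category means, and variances are invented for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Hypothetical cue distributions for two phonological categories (e.g., /b/ vs /p/):
# column 0 = auditory cue (VOT-like, ms), column 1 = visual cue (lip-closure-like, 0-1).
cat_a = rng.multivariate_normal([10.0, 0.8], [[9.0, 0.0], [0.0, 0.01]], size=500)
cat_b = rng.multivariate_normal([40.0, 0.3], [[9.0, 0.0], [0.0, 0.01]], size=500)
tokens = np.vstack([cat_a, cat_b])

# Unsupervised learning of the two categories from the joint cue distribution.
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(tokens)

# Posterior category probabilities for a congruent token and a mismatched
# (McGurk-like) token with /p/-like audio but /b/-like visual information.
congruent = np.array([[12.0, 0.75]])
mismatched = np.array([[40.0, 0.80]])
print(gmm.predict_proba(congruent))
print(gmm.predict_proba(mismatched))
```

In this kind of model, how strongly each modality pulls the mismatched token toward one category falls out of the learned cue variances rather than being stipulated, which is one way to capture developmental changes in cue weighting.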
Functional MRI of the vocalization-processing network in the macaque brain
Ortiz-Rios, Michael; Kuśmierek, Paweł; DeWitt, Iain; Archakov, Denis; Azevedo, Frederico A. C.; Sams, Mikko; Jääskeläinen, Iiro P.; Keliris, Georgios A.; Rauschecker, Josef P.
2015-01-01
Using functional magnetic resonance imaging in awake behaving monkeys we investigated how species-specific vocalizations are represented in auditory and auditory-related regions of the macaque brain. We found clusters of active voxels along the ascending auditory pathway that responded to various types of complex sounds: inferior colliculus (IC), medial geniculate nucleus (MGN), auditory core, belt, and parabelt cortex, and other parts of the superior temporal gyrus (STG) and sulcus (STS). Regions sensitive to monkey calls were most prevalent in the anterior STG, but some clusters were also found in frontal and parietal cortex on the basis of comparisons between responses to calls and environmental sounds. Surprisingly, we found that spectrotemporal control sounds derived from the monkey calls (“scrambled calls”) also activated the parietal and frontal regions. Taken together, our results demonstrate that species-specific vocalizations in rhesus monkeys activate preferentially the auditory ventral stream, and in particular areas of the antero-lateral belt and parabelt. PMID:25883546
The Potential Role of the cABR in Assessment and Management of Hearing Impairment
Anderson, Samira; Kraus, Nina
2013-01-01
Hearing aid technology has improved dramatically in the last decade, especially in the ability to adaptively respond to dynamic aspects of background noise. Despite these advancements, however, hearing aid users continue to report difficulty hearing in background noise and having trouble adjusting to amplified sound quality. These difficulties may arise in part from current approaches to hearing aid fittings, which largely focus on increased audibility and management of environmental noise. These approaches do not take into account the fact that sound is processed all along the auditory system from the cochlea to the auditory cortex. Older adults represent the largest group of hearing aid wearers; yet older adults are known to have deficits in temporal resolution in the central auditory system. Here we review evidence that supports the use of the auditory brainstem response to complex sounds (cABR) in the assessment of hearing-in-noise difficulties and auditory training efficacy in older adults. PMID:23431313
How the songbird brain listens to its own songs
NASA Astrophysics Data System (ADS)
Hahnloser, Richard
2010-03-01
Songbirds are capable of vocal learning and communication and are ideally suited to the study of neural mechanisms of auditory feedback processing. When a songbird is deafened in the early sensorimotor phase after tutoring, it fails to imitate the song of its tutor and develops a highly aberrant song. It is also known that birds are capable of storing a long-term memory of tutor song and that they need intact auditory feedback to match their own vocalizations to the tutor's song. Based on these behavioral observations, we investigate feedback processing in single auditory forebrain neurons of juvenile zebra finches that are in a late developmental stage of song learning. We implant birds with miniature motorized microdrives that allow us to record the electrical activity of single neurons while birds are freely moving and singing in their cages. Occasionally, we deliver a brief sound through a loudspeaker to perturb the auditory feedback the bird experiences during singing. These acoustic perturbations of auditory feedback reveal complex sensitivity that cannot be predicted from passive playback responses. Some neurons are highly feedback sensitive in that they respond vigorously to song perturbations, but not to unperturbed songs or perturbed playback. These findings suggest that a computational function of forebrain auditory areas may be to detect errors between actual feedback and mirrored feedback deriving from an internal model of the bird's own song or that of its tutor.
Language-Specific Attention Treatment for Aphasia: Description and Preliminary Findings.
Peach, Richard K; Nathan, Meghana R; Beck, Katherine M
2017-02-01
The need for a specific, language-based treatment approach to aphasic impairments associated with attentional deficits is well documented. We describe language-specific attention treatment, a specific skill-based approach for aphasia that exploits increasingly complex linguistic tasks that focus attention. The program consists of eight tasks, some with multiple phases, to assess and treat lexical and sentence processing. Validation results demonstrate that these tasks load on six attentional domains: (1) executive attention; (2) attentional switching; (3) visual selective attention/processing speed; (4) sustained attention; (5) auditory-verbal working memory; and (6) auditory processing speed. The program demonstrates excellent inter- and intrarater reliability and adequate test-retest reliability. Two of four people with aphasia exposed to this program demonstrated good language recovery whereas three of the four participants showed improvements in auditory-verbal working memory. The results provide support for this treatment program in patients with aphasia having no greater than a moderate degree of attentional impairment.
Procedures for central auditory processing screening in schoolchildren.
Carvalho, Nádia Giulian de; Ubiali, Thalita; Amaral, Maria Isabel Ramos do; Santos, Maria Francisca Colella
2018-03-22
Central auditory processing screening in schoolchildren has led to debates in the literature, both regarding the protocol to be used and the importance of actions aimed at prevention and promotion of auditory health. Defining effective screening procedures for central auditory processing is a challenge in Audiology. This study aimed to analyze the scientific research on central auditory processing screening and discuss the effectiveness of the procedures utilized. A search was performed in the SciELO and PUBMed databases by two researchers. The descriptors used in Portuguese and English were: auditory processing, screening, hearing, auditory perception, children, auditory tests, and their respective terms in Portuguese. Inclusion criteria were original articles involving schoolchildren, auditory screening of central auditory skills, and articles in Portuguese or English; exclusion criteria were studies with adult and/or neonatal populations, peripheral auditory screening only, and duplicate articles. After applying the described criteria, 11 articles were included. At the international level, the central auditory processing screening methods used were: the screening test for auditory processing disorder and its revised version, the screening test for auditory processing, the scale of auditory behaviors, the children's auditory performance scale, and Feather Squadron. In the Brazilian scenario, the procedures used were the simplified auditory processing assessment and Zaidan's battery of tests. At the international level, the screening test for auditory processing and Feather Squadron batteries stand out as the most comprehensive evaluations of hearing skills. At the national level, there is a paucity of studies that use methods evaluating more than four skills and that are normalized by age group. The use of the simplified auditory processing assessment and questionnaires can be complementary in the search for an easily accessible and low-cost alternative for the auditory screening of Brazilian schoolchildren. Interactive tools should be proposed that allow the selection of as many hearing skills as possible, validated by comparison with the battery of tests used in diagnosis. Copyright © 2018 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Auditory dysfunction in schizophrenia: integrating clinical and basic features
Javitt, Daniel C.; Sweet, Robert A.
2015-01-01
Schizophrenia is a complex neuropsychiatric disorder that is associated with persistent psychosocial disability in affected individuals. Although studies of schizophrenia have traditionally focused on deficits in higher-order processes such as working memory and executive function, there is an increasing realization that, in this disorder, deficits can be found throughout the cortex and are manifest even at the level of early sensory processing. These deficits are highly amenable to translational investigation and represent potential novel targets for clinical intervention. Deficits, moreover, have been linked to specific structural abnormalities in post-mortem auditory cortex tissue from individuals with schizophrenia, providing unique insights into underlying pathophysiological mechanisms. PMID:26289573
Erfanian Saeedi, Nafise; Blamey, Peter J; Burkitt, Anthony N; Grayden, David B
2016-04-01
Pitch perception is important for understanding speech prosody, music perception, recognizing tones in tonal languages, and perceiving speech in noisy environments. The two principal pitch perception theories consider the place of maximum neural excitation along the auditory nerve and the temporal pattern of the auditory neurons' action potentials (spikes) as pitch cues. This paper describes a biophysical mechanism by which fine-structure temporal information can be extracted from the spikes generated at the auditory periphery. Deriving meaningful pitch-related information from spike times requires neural structures specialized in capturing synchronous or correlated activity from amongst neural events. The emergence of such pitch-processing neural mechanisms is described through a computational model of auditory processing. Simulation results show that a correlation-based, unsupervised, spike-based form of Hebbian learning can explain the development of neural structures required for recognizing the pitch of simple and complex tones, with or without the fundamental frequency. The temporal code is robust to variations in the spectral shape of the signal and thus can explain the phenomenon of pitch constancy.
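The learning principle invoked here (correlation-based, unsupervised Hebbian strengthening of inputs that fire together with the postsynaptic unit) can be sketched generically as below. This is not the paper's biophysical model; the spike probabilities, threshold rule, and learning rate are illustrative assumptions.

```python
import numpy as np

def hebbian_update(weights, pre_spikes, post_spikes, lr=0.01):
    """Correlation-based Hebbian rule: weights grow where pre- and postsynaptic
    spikes co-occur in the same time bin, then are normalized to keep the total
    synaptic weight bounded."""
    coincidence = pre_spikes @ post_spikes        # per-input correlation with the post unit
    weights = weights + lr * coincidence
    return weights / np.linalg.norm(weights)

rng = np.random.default_rng(2)
n_pre, n_bins = 50, 1000
weights = rng.random(n_pre)
weights /= np.linalg.norm(weights)

for _ in range(200):                              # repeated presentations of a tone
    pre = (rng.random((n_pre, n_bins)) < 0.05).astype(float)
    pre[10] = pre[20] = pre[30]                   # three inputs fire in synchrony
    post = (weights @ pre > weights.sum() * 0.05).astype(float)   # simple threshold unit
    weights = hebbian_update(weights, pre, post)

print(np.argsort(weights)[-3:])                   # largest weights; expected to be the synchronized inputs
```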
Memory for sound, with an ear toward hearing in complex auditory scenes.
Snyder, Joel S; Gregg, Melissa K
2011-10-01
An area of research that has experienced recent growth is the study of memory during perception of simple and complex auditory scenes. These studies have provided important information about how well auditory objects are encoded in memory and how well listeners can notice changes in auditory scenes. These are significant developments because they present an opportunity to better understand how we hear in realistic situations, how higher-level aspects of hearing such as semantics and prior exposure affect perception, and the similarities and differences between auditory perception and perception in other modalities, such as vision and touch. The research also poses exciting challenges for behavioral and neural models of how auditory perception and memory work.
Aedo, Cristian; Terreros, Gonzalo; León, Alex; Delano, Paul H.
2016-01-01
Background and Objective: The auditory efferent system is a complex network of descending pathways, which mainly originate in the primary auditory cortex and are directed to several auditory subcortical nuclei. These descending pathways are connected to olivocochlear neurons, which in turn make synapses with auditory nerve neurons and outer hair cells (OHC) of the cochlea. The olivocochlear function can be studied using contralateral acoustic stimulation, which suppresses auditory nerve and cochlear responses. In the present work, we tested the proposal that the corticofugal effects that modulate the strength of the olivocochlear reflex on auditory nerve responses are produced through cholinergic synapses between medial olivocochlear (MOC) neurons and OHCs via alpha-9/10 nicotinic receptors. Methods: We used wild type (WT) and alpha-9 nicotinic receptor knock-out (KO) mice, which lack cholinergic transmission between MOC neurons and OHC, to record auditory cortex evoked potentials and to evaluate the consequences of auditory cortex electrical microstimulation in the effects produced by contralateral acoustic stimulation on auditory brainstem responses (ABR). Results: Auditory cortex evoked potentials at 15 kHz were similar in WT and KO mice. We found that auditory cortex microstimulation produces an enhancement of contralateral noise suppression of ABR waves I and III in WT mice but not in KO mice. On the other hand, corticofugal modulations of wave V amplitudes were significant in both genotypes. Conclusion: These findings show that the corticofugal modulation of contralateral acoustic suppressions of auditory nerve (ABR wave I) and superior olivary complex (ABR wave III) responses are mediated through MOC synapses. PMID:27195498
The Representation of Prediction Error in Auditory Cortex
Rubin, Jonathan; Ulanovsky, Nachum; Tishby, Naftali
2016-01-01
To survive, organisms must extract information from the past that is relevant for their future. How this process is expressed at the neural level remains unclear. We address this problem by developing a novel approach from first principles. We show here how to generate low-complexity representations of the past that produce optimal predictions of future events. We then illustrate this framework by studying the coding of ‘oddball’ sequences in auditory cortex. We find that for many neurons in primary auditory cortex, trial-by-trial fluctuations of neuronal responses correlate with the theoretical prediction error calculated from the short-term past of the stimulation sequence, under constraints on the complexity of the representation of this past sequence. In some neurons, the effect of prediction error accounted for more than 50% of response variability. Reliable predictions often depended on a representation of the sequence of the last ten or more stimuli, although the representation kept only few details of that sequence. PMID:27490251
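One simple way to obtain a trial-by-trial prediction error from a limited representation of the recent stimulus history, in the spirit of the account above, is to compute the surprisal of each stimulus under a short-context transition model. The sketch below is illustrative only; the context length, smoothing, and oddball sequence are assumptions, not the authors' information-theoretic formulation.

```python
import numpy as np
from collections import defaultdict

def surprisal_per_trial(seq, context_len=2, alphabet=("A", "B")):
    """Surprisal -log2 P(stimulus | last `context_len` stimuli), with counts
    accumulated online and add-one smoothing, so early trials fall back on
    near-uniform estimates."""
    counts = defaultdict(lambda: {s: 1.0 for s in alphabet})
    out = []
    for i, s in enumerate(seq):
        ctx = tuple(seq[max(0, i - context_len):i])
        total = sum(counts[ctx].values())
        out.append(-np.log2(counts[ctx][s] / total))
        counts[ctx][s] += 1.0
    return np.array(out)

# Oddball sequence: frequent standard 'A', rare deviant 'B'.
rng = np.random.default_rng(3)
sequence = ["B" if r < 0.1 else "A" for r in rng.random(500)]
pe = surprisal_per_trial(sequence, context_len=2)
deviant_idx = [i for i, s in enumerate(sequence) if s == "B"]
standard_idx = [i for i, s in enumerate(sequence) if s == "A"]
print(pe[deviant_idx].mean(), pe[standard_idx].mean())   # deviants carry higher surprisal
```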
Auditory Learning. Dimensions in Early Learning Series.
ERIC Educational Resources Information Center
Zigmond, Naomi K.; Cicci, Regina
The monograph discusses the psycho-physiological operations for processing of auditory information, the structure and function of the ear, the development of auditory processes from fetal responses through discrimination, language comprehension, auditory memory, and auditory processes related to written language. Disorders of auditory learning…
The Effect of Cognitive Control on Different Types of Auditory Distraction.
Bell, Raoul; Röer, Jan P; Marsh, John E; Storch, Dunja; Buchner, Axel
2017-09-01
Deviant as well as changing auditory distractors interfere with short-term memory. According to the duplex model of auditory distraction, the deviation effect is caused by a shift of attention while the changing-state effect is due to obligatory order processing. This theory predicts that foreknowledge should reduce the deviation effect, but should have no effect on the changing-state effect. We compared the effect of foreknowledge on the two phenomena directly within the same experiment. In a pilot study, specific foreknowledge was impotent in reducing either the changing-state effect or the deviation effect, but it reduced disruption by sentential speech, suggesting that the effects of foreknowledge on auditory distraction may increase with the complexity of the stimulus material. Given the unexpected nature of this finding, we tested whether the same finding would be obtained in (a) a direct preregistered replication in Germany and (b) an additional replication with translated stimulus materials in Sweden.
Picoloto, Luana Altran; Cardoso, Ana Cláudia Vieira; Cerqueira, Amanda Venuti; Oliveira, Cristiane Moço Canhetti de
2017-12-07
To verify the effect of delayed auditory feedback on the speech fluency of individuals who stutter with and without central auditory processing disorders. The participants were twenty individuals who stutter, aged 7 to 17 years, divided into two groups: the Stuttering Group with Auditory Processing Disorders (SGAPD), 10 individuals with central auditory processing disorders, and the Stuttering Group (SG), 10 individuals without central auditory processing disorders. Procedures were: fluency assessment with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF), and assessment of stuttering severity and central auditory processing (CAP). Phono Tools software was used to produce a delay of 100 milliseconds in the auditory feedback. The Wilcoxon signed-rank test was used in the intragroup analysis and the Mann-Whitney test in the intergroup analysis. The DAF caused a statistically significant reduction in the SG: in the frequency score of stuttering-like disfluencies in the analysis of the Stuttering Severity Instrument, in the number of blocks and repetitions of monosyllabic words, and in the frequency of stuttering-like disfluencies of duration. Delayed auditory feedback did not cause statistically significant effects on the fluency of the SGAPD, the individuals who stutter with auditory processing disorders. The effect of delayed auditory feedback on speech fluency thus differed between the two groups: fluency improved only in the individuals without auditory processing disorders.
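For readers unfamiliar with the manipulation, delayed auditory feedback amounts to playing the speaker's own voice back with a fixed lag. The Python sketch below applies a 100-ms delay to a sampled signal; the study itself used the Phono Tools software, so the sample rate and the test tone here are placeholder assumptions that only illustrate the delay operation.

```python
import numpy as np

def delayed_feedback(signal, fs, delay_ms=100.0):
    """Return a copy of `signal` delayed by `delay_ms` milliseconds.

    Conceptual sketch of the delayed-auditory-feedback manipulation:
    the output is padded with silence at the start and truncated so it
    has the same length as the input.
    """
    n_delay = int(round(fs * delay_ms / 1000.0))
    return np.concatenate([np.zeros(n_delay), signal])[: len(signal)]

# Example: delay a 1-second, 44.1-kHz test tone by 100 ms
fs = 44100
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 220 * t)
tone_daf = delayed_feedback(tone, fs, delay_ms=100.0)
```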
Developmental changes in distinguishing concurrent auditory objects.
Alain, Claude; Theunissen, Eef L; Chevalier, Hélène; Batty, Magali; Taylor, Margot J
2003-04-01
Children have considerable difficulties in identifying speech in noise. In the present study, we examined age-related differences in central auditory functions that are crucial for parsing co-occurring auditory events using behavioral and event-related brain potential measures. Seventeen pre-adolescent children and 17 adults were presented with complex sounds containing multiple harmonics, one of which could be 'mistuned' so that it was no longer an integer multiple of the fundamental. Both children and adults were more likely to report hearing the mistuned harmonic as a separate sound with an increase in mistuning. However, children were less sensitive in detecting mistuning across all levels as revealed by lower d' scores than adults. The perception of two concurrent auditory events was accompanied by a negative wave that peaked at about 160 ms after sound onset. In both age groups, the negative wave, referred to as the 'object-related negativity' (ORN), increased in amplitude with mistuning. The ORN was larger in children than in adults despite a lower d' score. Together, the behavioral and electrophysiological results suggest that concurrent sound segregation is probably adult-like in pre-adolescent children, but that children are inefficient in processing the information following the detection of mistuning. These findings also suggest that processes involved in distinguishing concurrent auditory objects continue to mature during adolescence.
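Mistuned-harmonic stimuli of the kind used in this study are straightforward to construct: a complex of integer-multiple harmonics in which one component is shifted by a small percentage so that it is no longer an integer multiple of the fundamental. The Python sketch below builds such a stimulus; the fundamental frequency, number of harmonics, duration and mistuning percentage are illustrative choices, not the exact parameters of this experiment.

```python
import numpy as np

def mistuned_complex(f0=200.0, n_harmonics=10, mistuned=3,
                     mistuning_pct=8.0, dur=0.15, fs=44100):
    """Harmonic complex with one component shifted off its harmonic frequency.

    All parameter values here are placeholders for illustration.
    """
    t = np.arange(int(dur * fs)) / fs
    x = np.zeros_like(t)
    for h in range(1, n_harmonics + 1):
        f = h * f0
        if h == mistuned:
            f *= 1.0 + mistuning_pct / 100.0   # push this component off-harmonic
        x += np.sin(2 * np.pi * f * t)
    return x / n_harmonics

stim = mistuned_complex(mistuning_pct=8.0)     # 8% mistuning of the 3rd harmonic
```

With sufficient mistuning, listeners report hearing the shifted component as a separate sound popping out of the complex, which is the percept indexed by the object-related negativity described above.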
Fonseca, P J; Correia, T
2007-05-01
The effects of temperature on hearing in the cicada Tettigetta josei were studied. The activity of the auditory nerve and the responses of auditory interneurons to stimuli of different frequencies and intensities were recorded at different temperatures ranging from 16 degrees C to 29 degrees C. Firstly, in order to investigate the temperature dependence of hearing processes, we analyzed its effects on auditory tuning, sensitivity, latency and Q(10dB). Increasing temperature led to an upward shift of the characteristic hearing frequency, to an increase in sensitivity and to a decrease in the latency of the auditory response both in the auditory nerve recordings (periphery) and in some interneurons at the metathoracic-abdominal ganglionic complex (MAC). Characteristic frequency shifts were only observed at low frequency (3-8 kHz). No changes were seen in Q(10dB). Different tuning mechanisms underlying frequency selectivity may explain the results observed. Secondly, we investigated the role of the mechanical sensory structures that participate in the transduction process. Laser vibrometry measurements revealed that the vibrations of the tympanum and tympanal apodeme are temperature independent in the biologically relevant range (18-35 degrees C). Since the above-mentioned effects of temperature are present in the auditory nerve recordings, the observed shifts in frequency tuning must arise from mechanisms intrinsic to the receptor cells. Finally, the role of potassium channels in the response of the auditory system was investigated using a specific inhibitor of these channels, tetraethylammonium (TEA). TEA caused shifts in the tuning and sensitivity of the summed response of the receptors similar to the effects of temperature. Thus, potassium channels are implicated in the tuning of the receptor cells.
Towards a neural basis of music perception.
Koelsch, Stefan; Siebel, Walter A
2005-12-01
Music perception involves complex brain functions underlying acoustic analysis, auditory memory, auditory scene analysis, and processing of musical syntax and semantics. Moreover, music perception potentially affects emotion, influences the autonomic nervous system, the hormonal and immune systems, and activates (pre)motor representations. During the past few years, research activities on different aspects of music processing and their neural correlates have rapidly progressed. This article provides an overview of recent developments and a framework for the perceptual side of music processing. This framework lays out a model of the cognitive modules involved in music perception, and incorporates information about the time course of activity of some of these modules, as well as research findings about where in the brain these modules might be located.
Kornilov, Sergey A.; Landi, Nicole; Rakhlin, Natalia; Fang, Shin-Yi; Grigorenko, Elena L.; Magnuson, James S.
2015-01-01
We examined neural indices of pre-attentive phonological and attentional auditory discrimination in children with developmental language disorder (DLD, n=23) and typically developing (n=16) peers from a geographically isolated Russian-speaking population with an elevated prevalence of DLD. Pre-attentive phonological MMN components were robust and did not differ between the two groups. Children with DLD showed attenuated P3 and atypically distributed P2 components in the attentional auditory discrimination task; P2 and P3 amplitudes were linked to working memory capacity, development of complex syntax, and vocabulary. The results corroborate findings of reduced processing capacity in DLD and support a multifactorial view of the disorder. PMID:25350759
Source analysis of auditory steady-state responses in acoustic and electric hearing.
Luke, Robert; De Vos, Astrid; Wouters, Jan
2017-02-15
Speech is a complex signal containing a broad variety of acoustic information. For accurate speech reception, the listener must perceive modulations over a range of envelope frequencies. Perception of these modulations is particularly important for cochlear implant (CI) users, as all commercial devices use envelope coding strategies. Prolonged deafness affects the auditory pathway. However, little is known of how cochlear implantation affects the neural processing of modulated stimuli. This study investigates and contrasts the neural processing of envelope rate modulated signals in acoustic and CI listeners. Auditory steady-state responses (ASSRs) are used to study the neural processing of amplitude modulated (AM) signals. A beamforming technique is applied to determine the increase in neural activity relative to a control condition, with particular attention paid to defining the accuracy and precision of this technique relative to other tomographies. In a cohort of 44 acoustic listeners, the location, activity and hemispheric lateralisation of ASSRs are characterised while systematically varying the modulation rate (4, 10, 20, 40 and 80Hz) and stimulation ear (right, left and bilateral). We demonstrate a complex pattern of laterality depending on both modulation rate and stimulation ear that is consistent with, and extends, existing literature. We present a novel extension to the beamforming method which facilitates source analysis of electrically evoked auditory steady-state responses (EASSRs). In a cohort of 5 right implanted unilateral CI users, the neural activity is determined for the 40Hz rate and compared to the acoustic cohort. Results indicate that CI users activate typical thalamic locations for 40Hz stimuli. However, complementary to studies of transient stimuli, the CI population has atypical hemispheric laterality, preferentially activating the contralateral hemisphere. Copyright © 2016. Published by Elsevier Inc.
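The amplitude-modulated stimuli that drive ASSRs are simple to synthesize: a carrier multiplied by a slow sinusoidal envelope at the modulation rate. The Python sketch below generates 100%-modulated tones at the five modulation rates named above; the carrier frequency, duration and modulation depth are assumptions for illustration only.

```python
import numpy as np

def am_tone(fc, fm, dur=1.0, fs=44100, depth=1.0):
    """Sinusoidally amplitude-modulated tone: carrier fc, modulation rate fm.

    depth=1.0 gives 100% modulation (envelope swings between 0 and 2).
    """
    t = np.arange(int(dur * fs)) / fs
    envelope = 1.0 + depth * np.sin(2 * np.pi * fm * t)
    return envelope * np.sin(2 * np.pi * fc * t)

# One stimulus per modulation rate used in the study description
stimuli = {fm: am_tone(fc=1000.0, fm=fm) for fm in (4, 10, 20, 40, 80)}
```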
Inter-subject synchronization of brain responses during natural music listening
Abrams, Daniel A.; Ryali, Srikanth; Chen, Tianwen; Chordia, Parag; Khouzam, Amirah; Levitin, Daniel J.; Menon, Vinod
2015-01-01
Music is a cultural universal and a rich part of the human experience. However, little is known about common brain systems that support the processing and integration of extended, naturalistic ‘real-world’ music stimuli. We examined this question by presenting extended excerpts of symphonic music, and two pseudo-musical stimuli in which the temporal and spectral structure of the Natural Music condition was disrupted, to non-musician participants undergoing functional brain imaging and analysing synchronized spatiotemporal activity patterns between listeners. We found that music synchronizes brain responses across listeners in bilateral auditory midbrain and thalamus, primary auditory and auditory association cortex, right-lateralized structures in frontal and parietal cortex, and motor planning regions of the brain. These effects were greater for natural music compared to the pseudo-musical control conditions. Remarkably, inter-subject synchronization in the inferior colliculus and medial geniculate nucleus was also greater for the natural music condition, indicating that synchronization at these early stages of auditory processing is not simply driven by spectro-temporal features of the stimulus. Increased synchronization during music listening was also evident in a right-hemisphere fronto-parietal attention network and bilateral cortical regions involved in motor planning. While these brain structures have previously been implicated in various aspects of musical processing, our results are the first to show that these regions track structural elements of a musical stimulus over extended time periods lasting minutes. Our results show that a hierarchical distributed network is synchronized between individuals during the processing of extended musical sequences, and provide new insight into the temporal integration of complex and biologically salient auditory sequences. PMID:23578016
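Inter-subject synchronization analyses of this kind generally reduce to correlating regional time courses across listeners. The Python sketch below computes a mean pairwise inter-subject correlation for one region from a subjects-by-timepoints array; it is a generic ISC illustration on simulated data, not the specific spatiotemporal synchronization analysis used in this paper.

```python
import numpy as np

def intersubject_correlation(timeseries):
    """Mean pairwise Pearson correlation across subjects for one region.

    `timeseries` is an (n_subjects, n_timepoints) array of regional fMRI
    signals; each subject's series is z-scored before correlating.
    """
    ts = np.asarray(timeseries, dtype=float)
    ts = (ts - ts.mean(axis=1, keepdims=True)) / ts.std(axis=1, keepdims=True)
    r = ts @ ts.T / ts.shape[1]                 # subject-by-subject correlations
    iu = np.triu_indices(ts.shape[0], k=1)      # unique subject pairs
    return r[iu].mean()

# Simulated example: a shared stimulus-driven component plus subject noise
rng = np.random.default_rng(1)
shared = rng.standard_normal(300)
data = shared + 0.8 * rng.standard_normal((10, 300))
print(round(intersubject_correlation(data), 2))
```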
Puschmann, Sebastian; Weerda, Riklef; Klump, Georg; Thiel, Christiane M
2013-05-01
Psychophysical experiments show that auditory change detection can be disturbed in situations in which listeners have to monitor complex auditory input. We made use of this change deafness effect to segregate the neural correlates of physical change in auditory input from brain responses related to conscious change perception in an fMRI experiment. Participants listened to two successively presented complex auditory scenes, which consisted of six auditory streams, and had to decide whether scenes were identical or whether the frequency of one stream was changed between presentations. Our results show that physical changes in auditory input, independent of successful change detection, are represented at the level of auditory cortex. Activations related to conscious change perception, independent of physical change, were found in the insula and the ACC. Moreover, our data provide evidence for significant effective connectivity between auditory cortex and the insula in the case of correctly detected auditory changes, but not for missed changes. This underlines the importance of the insula/anterior cingulate network for conscious change detection.
2013-01-01
Background Previous studies have demonstrated functional and structural temporal lobe abnormalities located close to the auditory cortical regions in schizophrenia. The goal of this study was to determine whether functional abnormalities exist in the cortical processing of musical sound in schizophrenia. Methods Twelve schizophrenic patients and twelve age- and sex-matched healthy controls were recruited, and participants listened to a random sequence of two kinds of sonic entities, intervals (tritones and perfect fifths) and chords (atonal chords, diminished chords, and major triads), of varying degrees of complexity and consonance. The perception of musical sound was investigated with the auditory evoked potential technique. Results Our results showed that schizophrenic patients exhibited significant reductions in the amplitudes of the N1 and P2 components elicited by musical stimuli, to which consonant sounds contributed more significantly than dissonant sounds. Schizophrenic patients could not perceive the dissimilarity between interval and chord stimuli based on the evoked potential responses, as compared with the healthy controls. Conclusion This study provided electrophysiological evidence of functional abnormalities in the cortical processing of sound complexity and music consonance in schizophrenia. The preliminary findings warrant further investigation of the underlying mechanisms. PMID:23721126
De Angelis, Vittoria; De Martino, Federico; Moerel, Michelle; Santoro, Roberta; Hausfeld, Lars; Formisano, Elia
2017-11-13
Pitch is a perceptual attribute related to the fundamental frequency (or periodicity) of a sound. So far, the cortical processing of pitch has been investigated mostly using synthetic sounds. However, the complex harmonic structure of natural sounds may require different mechanisms for the extraction and analysis of pitch. This study investigated the neural representation of pitch in human auditory cortex using model-based encoding and decoding analyses of high field (7 T) functional magnetic resonance imaging (fMRI) data collected while participants listened to a wide range of real-life sounds. Specifically, we modeled the fMRI responses as a function of the sounds' perceived pitch height and salience (related to the fundamental frequency and the harmonic structure respectively), which we estimated with a computational algorithm of pitch extraction (de Cheveigné and Kawahara, 2002). First, using single-voxel fMRI encoding, we identified a pitch-coding region in the antero-lateral Heschl's gyrus (HG) and adjacent superior temporal gyrus (STG). In these regions, the pitch representation model combining height and salience predicted the fMRI responses comparatively better than other models of acoustic processing and, in the right hemisphere, better than pitch representations based on height/salience alone. Second, we assessed with model-based decoding that multi-voxel response patterns of the identified regions are more informative of perceived pitch than the remainder of the auditory cortex. Further multivariate analyses showed that complementing a multi-resolution spectro-temporal sound representation with pitch produces a small but significant improvement to the decoding of complex sounds from fMRI response patterns. In sum, this work extends model-based fMRI encoding and decoding methods - previously employed to examine the representation and processing of acoustic sound features in the human auditory system - to the representation and processing of a relevant perceptual attribute such as pitch. Taken together, the results of our model-based encoding and decoding analyses indicated that the pitch of complex real life sounds is extracted and processed in lateral HG/STG regions, at locations consistent with those indicated in several previous fMRI studies using synthetic sounds. Within these regions, pitch-related sound representations reflect the modulatory combination of height and the salience of the pitch percept. Copyright © 2017 Elsevier Inc. All rights reserved.
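Model-based fMRI encoding, at its simplest, fits a regularized linear mapping from stimulus features to a voxel's responses and evaluates it on held-out sounds. The Python sketch below does this with two features standing in for pitch height and salience; the feature values, ridge penalty and train/test split are made-up illustrations, and the pitch-extraction algorithm itself (de Cheveigné and Kawahara, 2002) is not implemented here.

```python
import numpy as np

def fit_encoding_model(features, voxel_resp, ridge=1.0):
    """Ridge-regression encoding model: predict one voxel's response from
    per-stimulus features (e.g., pitch height and salience)."""
    X = np.column_stack([np.ones(len(features)), features])     # add intercept
    w = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ voxel_resp)
    return w

def predict(features, w):
    X = np.column_stack([np.ones(len(features)), features])
    return X @ w

# Toy example with made-up pitch height/salience features
rng = np.random.default_rng(2)
feats = rng.standard_normal((120, 2))                  # [height, salience]
resp = feats @ np.array([0.8, 0.3]) + 0.5 * rng.standard_normal(120)
w = fit_encoding_model(feats[:80], resp[:80])
r = np.corrcoef(predict(feats[80:], w), resp[80:])[0, 1]
print("held-out prediction correlation:", round(r, 2))
```

Comparing held-out prediction accuracy across candidate feature sets (for example, spectro-temporal features with versus without pitch regressors) is the basic logic behind the model comparisons reported in the abstract.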
Sakurai, Y
2002-01-01
This study reports how hippocampal individual cells and cell assemblies cooperate for neural coding of pitch and temporal information in memory processes for auditory stimuli. Each rat performed two tasks, one requiring discrimination of auditory pitch (high or low) and the other requiring discrimination of their duration (long or short). Some CA1 and CA3 complex-spike neurons showed task-related differential activity between the high and low tones in only the pitch-discrimination task. However, without exception, neurons which showed task-related differential activity between the long and short tones in the duration-discrimination task were always task-related neurons in the pitch-discrimination task. These results suggest that temporal information (long or short), in contrast to pitch information (high or low), cannot be coded independently by specific neurons. The results also indicate that the two different behavioral tasks cannot be fully differentiated by the task-related single neurons alone and suggest a model of cell-assembly coding of the tasks. Cross-correlation analysis among activities of simultaneously recorded multiple neurons supported the suggested cell-assembly model. Considering those results, this study concludes that dual coding by hippocampal single neurons and cell assemblies operates in memory processing of pitch and temporal information of auditory stimuli. The single neurons encode both auditory pitches and their temporal lengths and the cell assemblies encode types of tasks (contexts or situations) in which the pitch and the temporal information are processed.
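The cross-correlation analysis mentioned above can be illustrated with a basic cross-correlogram between two simultaneously recorded spike trains: spike-pair counts as a function of relative lag, where a peak near zero lag suggests coordinated, assembly-like firing. The Python sketch below uses simulated spike times; the bin width, lag range and the way the correlated train is generated are arbitrary assumptions for illustration.

```python
import numpy as np

def cross_correlogram(spikes_a, spikes_b, bin_ms=1.0, max_lag_ms=50.0):
    """Cross-correlogram between two spike trains (spike times in ms).

    For every spike in train A, count spikes in train B at each relative
    lag within +/- max_lag_ms.
    """
    edges = np.arange(-max_lag_ms, max_lag_ms + bin_ms, bin_ms)
    counts = np.zeros(len(edges) - 1)
    for ta in spikes_a:
        d = spikes_b - ta
        d = d[(d >= -max_lag_ms) & (d <= max_lag_ms)]
        counts += np.histogram(d, bins=edges)[0]
    return edges[:-1] + bin_ms / 2, counts

# Simulated pair: train B partly follows train A with ~2 ms lag
rng = np.random.default_rng(3)
a = np.sort(rng.uniform(0, 10000, 400))                       # 400 spikes / 10 s
b = np.sort(np.concatenate([a[:200] + rng.normal(2, 1, 200),  # correlated part
                            rng.uniform(0, 10000, 200)]))     # independent part
centers, ccg = cross_correlogram(a, b)
```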
Auditory Processing of Older Adults with Probable Mild Cognitive Impairment
ERIC Educational Resources Information Center
Edwards, Jerri D.; Lister, Jennifer J.; Elias, Maya N.; Tetlow, Amber M.; Sardina, Angela L.; Sadeq, Nasreen A.; Brandino, Amanda D.; Bush, Aryn L. Harrison
2017-01-01
Purpose: Studies suggest that deficits in auditory processing predict cognitive decline and dementia, but those studies included limited measures of auditory processing. The purpose of this study was to compare older adults with and without probable mild cognitive impairment (MCI) across two domains of auditory processing (auditory performance in…
Pitch perception prior to cortical maturation
NASA Astrophysics Data System (ADS)
Lau, Bonnie K.
Pitch perception plays an important role in many complex auditory tasks including speech perception, music perception, and sound source segregation. Because of the protracted and extensive development of the human auditory cortex, pitch perception might be expected to mature, at least over the first few months of life. This dissertation investigates complex pitch perception in 3-month-olds, 7-month-olds and adults -- time points when the organization of the auditory pathway is distinctly different. Using an observer-based psychophysical procedure, a series of four studies were conducted to determine whether infants (1) discriminate the pitch of harmonic complex tones, (2) discriminate the pitch of unresolved harmonics, (3) discriminate the pitch of missing fundamental melodies, and (4) have comparable sensitivity to pitch and spectral changes as adult listeners. The stimuli used in these studies were harmonic complex tones, with energy missing at the fundamental frequency. Infants at both three and seven months of age discriminated the pitch of missing fundamental complexes composed of resolved and unresolved harmonics as well as missing fundamental melodies, demonstrating perception of complex pitch by three months of age. More surprisingly, infants in both age groups had lower pitch and spectral discrimination thresholds than adult listeners. Furthermore, no differences in performance on any of the tasks presented were observed between infants at three and seven months of age. These results suggest that subcortical processing is not only sufficient to support pitch perception prior to cortical maturation, but provides adult-like sensitivity to pitch by three months.
Scanning silence: mental imagery of complex sounds.
Bunzeck, Nico; Wuestenberg, Torsten; Lutz, Kai; Heinze, Hans-Jochen; Jancke, Lutz
2005-07-15
In this functional magnetic resonance imaging (fMRI) study, we investigated the neural basis of mental auditory imagery of familiar complex sounds that did not contain language or music. In the first condition (perception), the subjects watched familiar scenes and listened to the corresponding sounds that were presented simultaneously. In the second condition (imagery), the same scenes were presented silently and the subjects had to mentally imagine the appropriate sounds. During the third condition (control), the participants watched a scrambled version of the scenes without sound. To overcome the disadvantages of the stray acoustic scanner noise in auditory fMRI experiments, we applied sparse temporal sampling technique with five functional clusters that were acquired at the end of each movie presentation. Compared to the control condition, we found bilateral activations in the primary and secondary auditory cortices (including Heschl's gyrus and planum temporale) during perception of complex sounds. In contrast, the imagery condition elicited bilateral hemodynamic responses only in the secondary auditory cortex (including the planum temporale). No significant activity was observed in the primary auditory cortex. The results show that imagery and perception of complex sounds that do not contain language or music rely on overlapping neural correlates of the secondary but not primary auditory cortex.
NASA Astrophysics Data System (ADS)
Stoelinga, Christophe; Heo, Inseok; Long, Glenis; Lee, Jungmee; Lutfi, Robert; Chang, An-Chieh
2015-12-01
The human auditory system has a remarkable ability to "hear out" a wanted sound (target) in the background of unwanted sounds. One important property of sound which helps us hear out the target is inharmonicity. When a single harmonic component of a harmonic complex is slightly mistuned, that component is heard to separate from the rest. At high harmonic numbers, where components are unresolved, the harmonic segregation effect is thought to result from detection of modulation of the time envelope (roughness cue) resulting from the mistuning. Neurophysiological research provides evidence that such envelope modulations are represented early in the auditory system, at the level of the auditory nerve. When the mistuned harmonic is a low harmonic, where components are resolved, the harmonic segregation is attributed to more centrally-located auditory processes, leading harmonic components to form a perceptual group heard separately from the mistuned component. Here we consider an alternative explanation that attributes the harmonic segregation to detection of modulation when both high and low harmonic numbers are mistuned. Specifically, we evaluate the possibility that distortion products in the cochlea generated by the mistuned component introduce detectable beating patterns for both high and low harmonic numbers. Distortion product otoacoustic emissions (DPOAEs) were measured using 3, 7, or 12-tone harmonic complexes with a fundamental frequency (F0) of 200 or 400 Hz. One of two harmonic components was mistuned at each F0: one from the region where harmonics are expected to be resolved and the other from the unresolved region. Many non-harmonic DPOAEs are present whenever a harmonic component is mistuned. These non-harmonic DPOAEs are often separated by the amount of the mistuning (ΔF). This small frequency difference will generate a slow beating pattern at ΔF. Because this beating is only present when a harmonic component is mistuned, it could provide a cue for behavioral detection of harmonic-complex mistuning and may also be associated with the modulation of auditory nerve responses.
Auditory pathways: anatomy and physiology.
Pickles, James O
2015-01-01
This chapter outlines the anatomy and physiology of the auditory pathways. After a brief analysis of the external and middle ears and the cochlea, the responses of auditory nerve fibers are described. The central nervous system is analyzed in more detail. A scheme is provided to help understand the complex and multiple auditory pathways running through the brainstem. The multiple pathways are based on the need to preserve accurate timing while extracting complex spectral patterns in the auditory input. The auditory nerve fibers branch to give two pathways, a ventral sound-localizing stream, and a dorsal mainly pattern recognition stream, which innervate the different divisions of the cochlear nucleus. The outputs of the two streams, with their two types of analysis, are progressively combined in the inferior colliculus and onwards, to produce the representation of what can be called the "auditory objects" in the external world. The progressive extraction of critical features in the auditory stimulus in the different levels of the central auditory system, from cochlear nucleus to auditory cortex, is described. In addition, the auditory centrifugal system, running from cortex in multiple stages to the organ of Corti of the cochlea, is described. © 2015 Elsevier B.V. All rights reserved.
Auditory Imagery: Empirical Findings
ERIC Educational Resources Information Center
Hubbard, Timothy L.
2010-01-01
The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d)…
Mutation of Dcdc2 in mice leads to impairments in auditory processing and memory ability.
Truong, D T; Che, A; Rendall, A R; Szalkowski, C E; LoTurco, J J; Galaburda, A M; Holly Fitch, R
2014-11-01
Dyslexia is a complex neurodevelopmental disorder characterized by impaired reading ability despite normal intellect, and is associated with specific difficulties in phonological and rapid auditory processing (RAP), visual attention and working memory. Genetic variants in Doublecortin domain-containing protein 2 (DCDC2) have been associated with dyslexia, impairments in phonological processing and in short-term/working memory. The purpose of this study was to determine whether sensory and behavioral impairments can result directly from mutation of the Dcdc2 gene in mice. Several behavioral tasks, including a modified pre-pulse inhibition paradigm (to examine auditory processing), a 4/8 radial arm maze (to assess/dissociate working vs. reference memory) and rotarod (to examine sensorimotor ability and motor learning), were used to assess the effects of Dcdc2 mutation. Behavioral results revealed deficits in RAP, working memory and reference memory in Dcdc2(del2/del2) mice when compared with matched wild types. Current findings parallel clinical research linking genetic variants of DCDC2 with specific impairments of phonological processing and memory ability. © 2014 John Wiley & Sons Ltd and International Behavioural and Neural Genetics Society.
Vasconcelos, Raquel O.; Fonseca, Paulo J.; Amorim, M. Clara P.; Ladich, Friedrich
2011-01-01
Many fishes rely on their auditory skills to interpret crucial information about predators and prey, and to communicate intraspecifically. Few studies, however, have examined how complex natural sounds are perceived in fishes. We investigated the representation of conspecific mating and agonistic calls in the auditory system of the Lusitanian toadfish Halobatrachus didactylus, and analysed auditory responses to heterospecific signals from ecologically relevant species: a sympatric vocal fish (meagre Argyrosomus regius) and a potential predator (dolphin Tursiops truncatus). Using auditory evoked potential (AEP) recordings, we showed that both sexes can resolve fine features of conspecific calls. The toadfish auditory system was most sensitive to frequencies well represented in the conspecific vocalizations (namely the mating boatwhistle), and revealed a fine representation of duration and pulsed structure of agonistic and mating calls. Stimuli and corresponding AEP amplitudes were highly correlated, indicating an accurate encoding of amplitude modulation. Moreover, Lusitanian toadfish were able to detect T. truncatus foraging sounds and A. regius calls, although at higher amplitudes. We provide strong evidence that the auditory system of a vocal fish, lacking accessory hearing structures, is capable of resolving fine features of complex vocalizations that are probably important for intraspecific communication and other relevant stimuli from the auditory scene. PMID:20861044
Prediction and constraint in audiovisual speech perception.
Peelle, Jonathan E; Sommers, Mitchell S
2015-07-01
During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing the precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to acoustic information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported by distinct neuroanatomical mechanisms. Copyright © 2015 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Tillery, Kim L.; Katz, Jack; Keller, Warren D.
2000-01-01
A double-blind, placebo-controlled study examined effects of methylphenidate (Ritalin) on auditory processing in 32 children with both attention deficit hyperactivity disorder and central auditory processing (CAP) disorder. Analyses revealed that Ritalin did not have a significant effect on any of the central auditory processing measures, although…
Maturation of Visual and Auditory Temporal Processing in School-Aged Children
ERIC Educational Resources Information Center
Dawes, Piers; Bishop, Dorothy V. M.
2008-01-01
Purpose: To examine development of sensitivity to auditory and visual temporal processes in children and the association with standardized measures of auditory processing and communication. Methods: Normative data on tests of visual and auditory processing were collected on 18 adults and 98 children aged 6-10 years of age. Auditory processes…
Audio-vocal interaction in single neurons of the monkey ventrolateral prefrontal cortex.
Hage, Steffen R; Nieder, Andreas
2015-05-06
Complex audio-vocal integration systems depend on a strong interconnection between the auditory and the vocal motor system. To gain cognitive control over audio-vocal interaction during vocal motor control, the PFC needs to be involved. Neurons in the ventrolateral PFC (VLPFC) have been shown to separately encode the sensory perceptions and motor production of vocalizations. It is unknown, however, whether single neurons in the PFC reflect audio-vocal interactions. We therefore recorded single-unit activity in the VLPFC of rhesus monkeys (Macaca mulatta) while they produced vocalizations on command or passively listened to monkey calls. We found that 12% of randomly selected neurons in VLPFC modulated their discharge rate in response to acoustic stimulation with species-specific calls. Almost three-fourths of these auditory neurons showed an additional modulation of their discharge rates either before and/or during the monkeys' motor production of vocalization. Based on these audio-vocal interactions, the VLPFC might be well positioned to combine higher order auditory processing with cognitive control of the vocal motor output. Such audio-vocal integration processes in the VLPFC might constitute a precursor for the evolution of complex learned audio-vocal integration systems, ultimately giving rise to human speech. Copyright © 2015 the authors 0270-6474/15/357030-11$15.00/0.
Young children's recall and reconstruction of audio and audiovisual narratives.
Gibbons, J; Anderson, D R; Smith, R; Field, D E; Fischer, C
1986-08-01
It has been claimed that the visual component of audiovisual media dominates young children's cognitive processing. This experiment examines the effects of input modality while controlling the complexity of the visual and auditory content and while varying the comprehension task (recall vs. reconstruction). 4- and 7-year-olds were presented brief stories through either audio or audiovisual media. The audio version consisted of narrated character actions and character utterances. The narrated actions were matched to the utterances on the basis of length and propositional complexity. The audiovisual version depicted the actions visually by means of stop animation instead of by auditory narrative statements. The character utterances were the same in both versions. Audiovisual input produced superior performance on explicit information in the 4-year-olds and produced more inferences at both ages. Because performance on utterances was superior in the audiovisual condition as compared to the audio condition, there was no evidence that visual input inhibits processing of auditory information. Actions were more likely to be produced by the younger children than utterances, regardless of input medium, indicating that prior findings of visual dominance may have been due to the salience of narrative action. Reconstruction, as compared to recall, produced superior depiction of actions at both ages as well as more constrained relevant inferences and narrative conventions.
Martins, Kelly Vasconcelos Chaves; Gil, Daniela
2017-01-01
Introduction Recording of the P1 component of the cortical auditory evoked potential has been widely used to analyze the behavior of auditory pathways in response to cochlear implant stimulation. Objective To determine the influence of aural rehabilitation on the latency and amplitude of the P1 cortical auditory evoked potential component elicited by simple auditory stimuli (tone burst) and complex stimuli (speech) in children with cochlear implants. Method The study included six individuals of both genders aged 5 to 10 years who had been cochlear implant users for at least 12 months and who attended auditory rehabilitation with an aural rehabilitation therapy approach. Participants underwent cortical auditory evoked potential testing at the beginning of the study and after 3 months of aural rehabilitation. To elicit the responses, simple stimuli (tone burst) and complex stimuli (speech) were used and presented in free field at 70 dB HL. The results were statistically analyzed, and both evaluations were compared. Results There was no significant effect of the type of eliciting stimulus on the latency or amplitude of P1. There was a statistically significant difference in P1 latency between the evaluations for both stimuli, with a reduction of the latency in the second evaluation after 3 months of auditory rehabilitation. There was no statistically significant difference regarding the amplitude of P1 under the two types of stimuli or in the two evaluations. Conclusion A decrease in latency of the P1 component elicited by both simple and complex stimuli was observed within a three-month interval in children with cochlear implants undergoing aural rehabilitation. PMID:29018498
Neural Correlates of Auditory Figure-Ground Segregation Based on Temporal Coherence
Teki, Sundeep; Barascud, Nicolas; Picard, Samuel; Payne, Christopher; Griffiths, Timothy D.; Chait, Maria
2016-01-01
To make sense of natural acoustic environments, listeners must parse complex mixtures of sounds that vary in frequency, space, and time. Emerging work suggests that, in addition to the well-studied spectral cues for segregation, sensitivity to temporal coherence—the coincidence of sound elements in and across time—is also critical for the perceptual organization of acoustic scenes. Here, we examine pre-attentive, stimulus-driven neural processes underlying auditory figure-ground segregation using stimuli that capture the challenges of listening in complex scenes where segregation cannot be achieved based on spectral cues alone. Signals (“stochastic figure-ground”: SFG) comprised a sequence of brief broadband chords containing random pure tone components that vary from 1 chord to another. Occasional tone repetitions across chords are perceived as “figures” popping out of a stochastic “ground.” Magnetoencephalography (MEG) measurement in naïve, distracted, human subjects revealed robust evoked responses, commencing from about 150 ms after figure onset, that reflect the emergence of the “figure” from the randomly varying “ground.” Neural sources underlying this bottom-up driven figure-ground segregation were localized to planum temporale, and the intraparietal sulcus, demonstrating that this area, outside the “classic” auditory system, is also involved in the early stages of auditory scene analysis. PMID:27325682
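The stochastic figure-ground (SFG) signal lends itself to a compact description: a sequence of short chords of randomly drawn pure tones, with a fixed subset of tones repeated across chords during the "figure" interval. The Python sketch below generates such a signal; the component counts, chord duration, frequency range and figure timing are illustrative assumptions, not the exact parameters of Teki et al.

```python
import numpy as np

def sfg_stimulus(n_chords=40, chord_dur=0.05, fs=44100,
                 n_background=10, n_figure=4, figure_span=(20, 32)):
    """Toy stochastic figure-ground (SFG) signal.

    Each chord contains random pure-tone components drawn anew; during
    the 'figure' chords a fixed set of components repeats across chords,
    creating a temporally coherent figure against a stochastic ground.
    """
    rng = np.random.default_rng(4)
    t = np.arange(int(chord_dur * fs)) / fs
    figure_freqs = rng.uniform(200, 4000, n_figure)    # repeated "figure" tones
    chords = []
    for c in range(n_chords):
        freqs = list(rng.uniform(200, 4000, n_background))
        if figure_span[0] <= c < figure_span[1]:
            freqs += list(figure_freqs)
        chord = sum(np.sin(2 * np.pi * f * t) for f in freqs)
        chords.append(chord / len(freqs))
    return np.concatenate(chords)

signal = sfg_stimulus()
```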
Neural Correlates of Sound Localization in Complex Acoustic Environments
Zündorf, Ida C.; Lewald, Jörg; Karnath, Hans-Otto
2013-01-01
Listening to and understanding people in a “cocktail-party situation” is a remarkable feature of the human auditory system. Here we investigated the neural correlates of the ability to localize a particular sound among others in an acoustically cluttered environment with healthy subjects. In a sound localization task, five different natural sounds were presented from five virtual spatial locations during functional magnetic resonance imaging (fMRI). Activity related to auditory stream segregation was revealed in posterior superior temporal gyrus bilaterally, anterior insula, supplementary motor area, and frontoparietal network. Moreover, the results indicated critical roles of left planum temporale in extracting the sound of interest among acoustical distracters and the precuneus in orienting spatial attention to the target sound. We hypothesized that the left-sided lateralization of the planum temporale activation is related to the higher specialization of the left hemisphere for analysis of spectrotemporal sound features. Furthermore, the precuneus − a brain area known to be involved in the computation of spatial coordinates across diverse frames of reference for reaching to objects − seems to be also a crucial area for accurately determining locations of auditory targets in an acoustically complex scene of multiple sound sources. The precuneus thus may not only be involved in visuo-motor processes, but may also subserve related functions in the auditory modality. PMID:23691185
Dyslexia risk gene relates to representation of sound in the auditory brainstem.
Neef, Nicole E; Müller, Bent; Liebig, Johanna; Schaadt, Gesa; Grigutsch, Maren; Gunter, Thomas C; Wilcke, Arndt; Kirsten, Holger; Skeide, Michael A; Kraft, Indra; Kraus, Nina; Emmrich, Frank; Brauer, Jens; Boltze, Johannes; Friederici, Angela D
2017-04-01
Dyslexia is a reading disorder with strong associations with KIAA0319 and DCDC2. Both genes play a functional role in spike time precision of neurons. Strikingly, poor readers show an imprecise encoding of fast transients of speech in the auditory brainstem. Whether dyslexia risk genes are related to the quality of sound encoding in the auditory brainstem remains to be investigated. Here, we quantified the response consistency of speech-evoked brainstem responses to the acoustically presented syllable [da] in 159 genotyped, literate and preliterate children. When controlling for age, sex, familial risk and intelligence, partial correlation analyses associated a higher dyslexia risk loading with KIAA0319 with noisier responses. In contrast, a higher risk loading with DCDC2 was associated with a trend towards more stable responses. These results suggest that unstable representation of sound, and thus, reduced neural discrimination ability of stop consonants, occurred in genotypes carrying a higher amount of KIAA0319 risk alleles. Current data provide the first evidence that the dyslexia-associated gene KIAA0319 can alter brainstem responses and impair phoneme processing in the auditory brainstem. This brain-gene relationship provides insight into the complex relationships between phenotype and genotype, thereby improving the understanding of the dyslexia-inherent complex multifactorial condition. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Anthropomorphic Coding of Speech and Audio: A Model Inversion Approach
NASA Astrophysics Data System (ADS)
Feldbauer, Christian; Kubin, Gernot; Kleijn, W. Bastiaan
2005-12-01
Auditory modeling is a well-established methodology that provides insight into human perception and that facilitates the extraction of signal features that are most relevant to the listener. The aim of this paper is to provide a tutorial on perceptual speech and audio coding using an invertible auditory model. In this approach, the audio signal is converted into an auditory representation using an invertible auditory model. The auditory representation is quantized and coded. Upon decoding, it is then transformed back into the acoustic domain. This transformation converts a complex distortion criterion into a simple one, thus facilitating quantization with low complexity. We briefly review past work on auditory models and describe in more detail the components of our invertible model and its inversion procedure, that is, the method to reconstruct the signal from the output of the auditory model. We summarize attempts to use the auditory representation for low-bit-rate coding. Our approach also allows the exploitation of the inherent redundancy of the human auditory system for the purpose of multiple description (joint source-channel) coding.
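The coding pipeline described in this tutorial (analysis with an invertible model, quantization of the representation, then inversion back to the acoustic domain) can be caricatured with any invertible transform. In the Python sketch below a per-frame FFT stands in for the invertible auditory model, so only the overall transform-quantize-invert structure is illustrated, not the auditory representation or the perceptual distortion criterion the paper develops.

```python
import numpy as np

def encode_decode(x, frame=256, n_bits=6):
    """Toy analysis-quantize-resynthesis chain.

    A plain per-frame FFT is used as a stand-in for the invertible
    auditory model: transform each frame, coarsely quantize the
    coefficients, then invert the transform to reconstruct audio.
    """
    n_frames = len(x) // frame
    y = np.zeros(n_frames * frame)
    for i in range(n_frames):
        seg = x[i * frame:(i + 1) * frame]
        coeffs = np.fft.rfft(seg)                        # "analysis" (stand-in)
        scale = np.max(np.abs(coeffs)) or 1.0
        q = np.round(coeffs / scale * 2 ** n_bits) / 2 ** n_bits * scale
        y[i * frame:(i + 1) * frame] = np.fft.irfft(q, n=frame)   # "inversion"
    return y

fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 1320 * t)
x_hat = encode_decode(x)
err = x[:len(x_hat)] - x_hat
print("reconstruction SNR (dB):",
      round(10 * np.log10(np.sum(x[:len(x_hat)] ** 2) / np.sum(err ** 2)), 1))
```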
Processing Complex Sounds Passing through the Rostral Brainstem: The New Early Filter Model
Marsh, John E.; Campbell, Tom A.
2016-01-01
The rostral brainstem receives both “bottom-up” input from the ascending auditory system and “top-down” descending corticofugal connections. Speech information passing through the inferior colliculus of elderly listeners reflects the periodicity envelope of a speech syllable. This information arguably also reflects a composite of temporal-fine-structure (TFS) information from the higher frequency vowel harmonics of that repeated syllable. The amplitude of those higher frequency harmonics, bearing even higher frequency TFS information, correlates positively with the word recognition ability of elderly listeners under reverberatory conditions. Also relevant is that working memory capacity (WMC), which is subject to age-related decline, constrains the processing of sounds at the level of the brainstem. Turning to the effects of a visually presented sensory or memory load on auditory processes, there is a load-dependent reduction of that processing, as manifest in the auditory brainstem responses (ABR) evoked by to-be-ignored clicks. Wave V decreases in amplitude with increases in the visually presented memory load. A visually presented sensory load also produces a load-dependent reduction of a slightly different sort: The sensory load of visually presented information limits the disruptive effects of background sound upon working memory performance. A new early filter model is thus advanced whereby systems within the frontal lobe (affected by sensory or memory load) cholinergically influence top-down corticofugal connections. Those corticofugal connections constrain the processing of complex sounds such as speech at the level of the brainstem. Selective attention thereby limits the distracting effects of background sound entering the higher auditory system via the inferior colliculus. Processing TFS in the brainstem relates to perception of speech under adverse conditions. Attentional selectivity is crucial when the signal heard is degraded or masked: e.g., speech in noise, speech in reverberatory environments. The assumptions of a new early filter model are consistent with these findings: A subcortical early filter, with a predictive selectivity based on acoustical (linguistic) context and foreknowledge, is under cholinergic top-down control. A prefrontal capacity limitation constrains this top-down control, which is guided by the cholinergic processing of contextual information in working memory. PMID:27242396
Källstrand, Johan; Olsson, Olle; Nehlstedt, Sara Fristedt; Sköld, Mia Ling; Nielzén, Sören
2010-01-01
Abnormal auditory information processing has been reported in individuals with autism spectrum disorders (ASD). In the present study auditory processing was investigated by recording auditory brainstem responses (ABRs) elicited by forward masking in adults diagnosed with Asperger syndrome (AS). Sixteen AS subjects were included in the forward masking experiment and compared to three control groups consisting of healthy individuals (n = 16), schizophrenic patients (n = 16) and attention deficit hyperactivity disorder patients (n = 16), respectively, of matching age and gender. The results showed that the AS subjects exhibited abnormally low activity in the early part of their ABRs that distinctly separated them from the three control groups. Specifically, wave III amplitudes were significantly lower in the AS group than for all the control groups in the forward masking condition (P < 0.005), which was not the case in the baseline condition. Thus, electrophysiological measurements of ABRs to complex sound stimuli (eg, forward masking) may lead to a better understanding of the underlying neurophysiology of AS. Future studies may further point to specific ABR characteristics in AS individuals that separate them from individuals diagnosed with other neurodevelopmental diseases. PMID:20628629
Instantaneous and Frequency-Warped Signal Processing Techniques for Auditory Source Separation.
NASA Astrophysics Data System (ADS)
Wang, Avery Li-Chun
This thesis summarizes several contributions to the areas of signal processing and auditory source separation. The philosophy of Frequency-Warped Signal Processing is introduced as a means for separating the AM and FM contributions to the bandwidth of a complex-valued, frequency-varying sinusoid p(n), transforming it into a signal with slowly-varying parameters. This transformation facilitates the removal of p(n) from an additive mixture while minimizing the amount of damage done to other signal components. The average winding rate of a complex-valued phasor is explored as an estimate of the instantaneous frequency. Theorems are provided showing the robustness of this measure. To implement frequency tracking, a Frequency-Locked Loop algorithm is introduced which uses the complex winding error to update its frequency estimate. The input signal is dynamically demodulated and filtered to extract the envelope. This envelope may then be remodulated to reconstruct the target partial, which may be subtracted from the original signal mixture to yield a new, quickly-adapting form of notch filtering. Enhancements to the basic tracker are made which, under certain conditions, attain the Cramér-Rao bound for the instantaneous frequency estimate. To improve tracking, the novel idea of Harmonic-Locked Loop tracking, using N harmonically constrained trackers, is introduced for tracking signals, such as voices and certain musical instruments. The estimated fundamental frequency is computed from a maximum-likelihood weighting of the N tracking estimates, making it highly robust. The result is that harmonic signals, such as voices, can be isolated from complex mixtures in the presence of other spectrally overlapping signals. Additionally, since phase information is preserved, the resynthesized harmonic signals may be removed from the original mixtures with relatively little damage to the residual signal. Finally, a new methodology is given for designing linear-phase FIR filters which require a small fraction of the computational power of conventional FIR implementations. This design strategy is based on truncated and stabilized IIR filters. These signal-processing methods have been applied to the problem of auditory source separation, resulting in voice separation from complex music that is significantly better than previous results at far lower computational cost.
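The "average winding rate" idea is easy to demonstrate: form a complex analytic signal and read the per-sample phase increment of the phasor, which scales directly to instantaneous frequency. The Python sketch below does this for a chirp using a Hilbert-transform analytic signal; the Frequency-Locked Loop, harmonic-locked tracking and notch-filter resynthesis described in the thesis are not reproduced here.

```python
import numpy as np
from scipy.signal import hilbert

def winding_rate_frequency(x, fs):
    """Instantaneous-frequency estimate from the phasor winding rate.

    Converts a real signal to its analytic (complex) form and reads the
    per-sample phase increment, i.e. the angle of x[n] * conj(x[n-1]),
    scaled to Hz.
    """
    z = hilbert(x)
    dphi = np.angle(z[1:] * np.conj(z[:-1]))   # per-sample phase increment
    return dphi * fs / (2 * np.pi)

fs = 8000
t = np.arange(fs) / fs
chirp = np.sin(2 * np.pi * (300 * t + 100 * t ** 2))   # 300 -> 500 Hz sweep
f_inst = winding_rate_frequency(chirp, fs)
print(round(f_inst[100], 1), round(f_inst[-100], 1))    # near 300 Hz and near 500 Hz
```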
A hardware experimental platform for neural circuits in the auditory cortex
NASA Astrophysics Data System (ADS)
Rodellar-Biarge, Victoria; García-Dominguez, Pablo; Ruiz-Rizaldos, Yago; Gómez-Vilda, Pedro
2011-05-01
Speech processing in the human brain is a very complex process that is far from being fully understood, although much progress has been made recently. Neuromorphic Speech Processing is a new research orientation within the bio-inspired systems approach that seeks solutions for the automatic treatment of specific problems (recognition, synthesis, segmentation, diarization, etc.) which cannot be adequately solved using classical algorithms. In this paper a neuromorphic speech processing architecture is presented. The systematic bottom-up synthesis of layered structures reproduces the dynamic feature detection of speech in plausible neural circuits which work as interpretation centres located in the Auditory Cortex. The elementary model is based on Hebbian neuron-like units. For the computation of the architecture a flexible framework is proposed in the environment of Matlab®/Simulink®/HDL, which allows building models in different description styles, complexity and implementation levels. It provides a flexible platform for experimenting on the influence of the number of neurons and interconnections on the precision of the results and on performance evaluation. Experimentation with different architecture configurations may help both in better understanding how neural circuits may work in the brain and in seeing how speech processing can benefit from this understanding.
ERIC Educational Resources Information Center
Mayer, Jennifer L.; Hannent, Ian; Heaton, Pamela F.
2016-01-01
Whilst enhanced perception has been widely reported in individuals with Autism Spectrum Disorders (ASDs), relatively little is known about the developmental trajectory and impact of atypical auditory processing on speech perception in intellectually high-functioning adults with ASD. This paper presents data on perception of complex tones and…
Scheperle, Rachel A; Abbas, Paul J
2015-01-01
The ability to perceive speech is related to the listener's ability to differentiate among frequencies (i.e., spectral resolution). Cochlear implant (CI) users exhibit variable speech-perception and spectral-resolution abilities, which can be attributed in part to the extent of electrode interactions at the periphery (i.e., spatial selectivity). However, electrophysiological measures of peripheral spatial selectivity have not been found to correlate with speech perception. The purpose of this study was to evaluate auditory processing at the periphery and cortex using both simple and spectrally complex stimuli to better understand the stages of neural processing underlying speech perception. The hypotheses were that (1) by more completely characterizing peripheral excitation patterns than in previous studies, significant correlations with measures of spectral selectivity and speech perception would be observed, (2) adding information about processing at a level central to the auditory nerve would account for additional variability in speech perception, and (3) responses elicited with spectrally complex stimuli would be more strongly correlated with speech perception than responses elicited with spectrally simple stimuli. Eleven adult CI users participated. Three experimental processor programs (MAPs) were created to vary the likelihood of electrode interactions within each participant. For each MAP, a subset of 7 of 22 intracochlear electrodes was activated: adjacent (MAP 1), every other (MAP 2), or every third (MAP 3). Peripheral spatial selectivity was assessed using the electrically evoked compound action potential (ECAP) to obtain channel-interaction functions for all activated electrodes (13 functions total). Central processing was assessed by eliciting the auditory change complex with both spatial (electrode pairs) and spectral (rippled noise) stimulus changes. Speech-perception measures included vowel discrimination and the Bamford-Kowal-Bench Speech-in-Noise test. Spatial and spectral selectivity and speech perception were expected to be poorest with MAP 1 (closest electrode spacing) and best with MAP 3 (widest electrode spacing). Relationships among the electrophysiological and speech-perception measures were evaluated using mixed-model and simple linear regression analyses. All electrophysiological measures were significantly correlated with each other and with speech scores for the mixed-model analysis, which takes into account multiple measures per person (i.e., experimental MAPs). The ECAP measures were the best predictor. In the simple linear regression analysis on MAP 3 data, only the cortical measures were significantly correlated with speech scores; spectral auditory change complex amplitude was the strongest predictor. The results suggest that both peripheral and central electrophysiological measures of spatial and spectral selectivity provide valuable information about speech perception. Clinically, it is often desirable to optimize performance for individual CI users. These results suggest that ECAP measures may be most useful for within-subject applications when multiple measures are performed to make decisions about processor options. They also suggest that if the goal is to compare performance across individuals based on a single measure, then processing central to the auditory nerve (specifically, cortical measures of discriminability) should be considered.
Amin, Noopur; Gastpar, Michael; Theunissen, Frédéric E.
2013-01-01
Previous research has shown that postnatal exposure to simple, synthetic sounds can affect the sound representation in the auditory cortex, as reflected by changes in the tonotopic map or other relatively simple tuning properties, such as AM tuning. However, the functional implications of such changes for the neural processing underlying ethologically based perception remain unexplored. Here we examined the effects of noise-rearing and social isolation on the neural processing of communication sounds, such as species-specific song, in the primary auditory cortex analog of adult zebra finches. Our electrophysiological recordings reveal that neural tuning to simple frequency-based synthetic sounds is initially established in all the laminae independent of patterned acoustic experience; however, we provide the first evidence that early exposure to patterned sound statistics, such as those found in native sounds, is required for the subsequent emergence of neural selectivity for complex vocalizations, for shaping neural spiking precision in superficial and deep cortical laminae, and for creating efficient neural representations of song and a less redundant ensemble code in all the laminae. Our study also provides the first causal evidence for ‘sparse coding’: when the statistics of the stimuli were changed during rearing, as in noise-rearing, the sparse (optimal) representation of species-specific vocalizations disappeared. Taken together, these results imply that layer-specific differential development of the auditory cortex requires patterned acoustic input, and that a specialized and robust sensory representation of complex communication sounds in the auditory cortex requires a rich acoustic and social environment. PMID:23630587
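The 'sparse coding' result can be illustrated with a standard lifetime-sparseness index (Vinje & Gallant, 2000); the firing rates below are invented, and this may not be the exact metric used in the study.

```python
# Hedged sketch: lifetime sparseness of a neuron's responses across stimuli.
# Values near 1 indicate a sparse code (strong responses to few stimuli).
import numpy as np

def lifetime_sparseness(rates):
    """Sparseness of responses across n stimuli (0 = dense, 1 = sparse)."""
    r = np.asarray(rates, dtype=float)
    n = r.size
    num = (r.sum() / n) ** 2
    den = (r ** 2).sum() / n
    return (1.0 - num / den) / (1.0 - 1.0 / n)

song_rates  = np.array([0.5, 0.2, 12.0, 0.1, 0.3, 0.4, 9.5, 0.2])  # selective responses
noise_rates = np.array([4.0, 5.1, 3.8, 4.6, 5.0, 4.2, 4.8, 4.4])   # unselective responses
print(lifetime_sparseness(song_rates))   # high -> sparse representation
print(lifetime_sparseness(noise_rates))  # low  -> dense representation
```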
De Cosmo, G; Aceto, P; Clemente, A; Congedo, E
2004-05-01
Auditory evoked potentials (AEPs) are an electrical manifestation of the brain's response to an auditory stimulus. Mid-latency auditory evoked potentials (MLAEPs) and the coherent frequency of the AEP are the most promising for monitoring depth of anaesthesia. MLAEPs show graded changes with increasing anaesthetic concentration over the clinical concentration range: the latencies of Pa and Nb lengthen and their amplitudes decrease. These changes in waveform features are similar with both inhaled and intravenous anaesthetics. Changes in the latency of the Pa and Nb waves are highly correlated with the transition from wakefulness to loss of consciousness. MLAEP recording may also provide information about cerebral processing of auditory input, probably because it reflects activity in the temporal lobe/primary cortex, sites involved in sound elaboration and in complex mechanisms of implicit (non-declarative) memory processing. The coherent frequency has been found to be disrupted by anaesthetics and has also been implicated in attentional mechanisms. These results support the concept that AEPs reflect the balance between the arousal effects of surgical stimulation and the depressant effects of anaesthetics. However, AEPs are not a perfect measure of anaesthetic depth: they cannot predict patient movement during surgery, and the signal may be affected by muscle artefacts, diathermy, and other electrical interference in the operating theatre. In conclusion, once the reliability of AEP recording is established and signal acquisition is improved, it is likely to become a routine feature of clinical anaesthetic practice.
François, Clément; Schön, Daniele
2014-02-01
There is increasing evidence that humans and nonhuman mammals are sensitive to the statistical structure of auditory input. Indeed, neural sensitivity to statistical regularities seems to be a fundamental biological property underlying auditory learning. In the case of speech, statistical regularities play a crucial role in the acquisition of several linguistic features, from phonotactic regularities to more complex rules such as morphosyntactic rules. Interestingly, a similar sensitivity has been shown with non-speech streams: sequences of sounds changing in frequency or timbre can be segmented on the sole basis of the conditional probabilities between adjacent sounds. We recently ran a set of cross-sectional and longitudinal experiments showing that merging music and speech information in song facilitates stream segmentation and, further, that musical practice enhances sensitivity to statistical regularities in speech at both neural and behavioral levels. Based on recent findings showing the involvement of a fronto-temporal network in speech segmentation, we defend the idea that the enhanced auditory learning observed in musicians arises via at least three distinct pathways: enhanced low-level auditory processing, enhanced phono-articulatory mapping via the left inferior frontal gyrus and premotor cortex, and increased functional connectivity within the audio-motor network. Finally, we discuss how these data predict a beneficial use of music for optimizing speech acquisition in both normal and impaired populations. Copyright © 2013 Elsevier B.V. All rights reserved.
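A toy illustration of segmentation from transitional probabilities between adjacent syllables, in the spirit of the statistical-learning work cited above; the syllable stream and the boundary threshold are invented for demonstration.

```python
# Hedged sketch: word boundaries tend to fall where the transitional probability
# (TP) between adjacent syllables dips. The "words" here are made up.
from collections import Counter

words = ["badu", "kitu", "piro", "kitu", "badu", "piro", "badu", "kitu", "piro", "badu"]
stream = "".join(words)
syllables = [stream[i:i + 2] for i in range(0, len(stream), 2)]

pair_counts = Counter(zip(syllables, syllables[1:]))
first_counts = Counter(syllables[:-1])

def tp(a, b):
    """P(b | a): conditional probability of syllable b following syllable a."""
    return pair_counts[(a, b)] / first_counts[a]

# Within-word TPs are 1.0 here; between-word TPs are lower, marking boundaries.
for a, b in zip(syllables, syllables[1:]):
    marker = "  <- boundary?" if tp(a, b) < 0.8 else ""
    print(f"{a} -> {b}: TP = {tp(a, b):.2f}{marker}")
```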
ERIC Educational Resources Information Center
Mokhemar, Mary Ann
This kit for assessing central auditory processing disorders (CAPD) in children in grades 1 through 8 includes 3 books, 14 full-color cards with picture scenes, and a card depicting a phone key pad, all contained in a sturdy carrying case. The units in each of the three books correspond with auditory skill areas most commonly addressed in…
Almeida, Diogo; Poeppel, David; Corina, David
The human auditory system distinguishes speech-like information from general auditory signals in a remarkably fast and efficient way. Combining psychophysics and neurophysiology (MEG), we demonstrate a similar result for the processing of visual information used for language communication in users of sign languages. We demonstrate that the earliest visual cortical responses in deaf signers viewing American Sign Language (ASL) signs show specific modulations to violations of anatomic constraints that would make the sign either possible or impossible to articulate. These neural data are accompanied by a significantly increased perceptual sensitivity to the anatomical incongruity. The differential effects in the early visual evoked potentials arguably reflect an expectation-driven assessment of somatic representational integrity, suggesting that language experience and/or auditory deprivation may shape the neuronal mechanisms underlying the analysis of complex human form. The data demonstrate that the perceptual tuning that underlies the discrimination of language and non-language information is not limited to spoken languages but extends to languages expressed in the visual modality.
Nozaradan, Sylvie; Schönwiesner, Marc; Keller, Peter E; Lenc, Tomas; Lehmann, Alexandre
2018-02-01
The spontaneous ability to entrain to meter periodicities is central to music perception and production across cultures. There is increasing evidence that this ability involves selective neural responses to meter-related frequencies. This phenomenon has been observed in the human auditory cortex, yet it could be the product of evolutionarily older lower-level properties of brainstem auditory neurons, as suggested by recent recordings from rodent midbrain. We addressed this question by taking advantage of a new method to simultaneously record human EEG activity originating from cortical and lower-level sources, in the form of slow (< 20 Hz) and fast (> 150 Hz) responses to auditory rhythms. Cortical responses showed increased amplitudes at meter-related frequencies compared to meter-unrelated frequencies, regardless of the prominence of the meter-related frequencies in the modulation spectrum of the rhythmic inputs. In contrast, frequency-following responses showed increased amplitudes at meter-related frequencies only in rhythms with prominent meter-related frequencies in the input but not for a more complex rhythm requiring more endogenous generation of the meter. This interaction with rhythm complexity suggests that the selective enhancement of meter-related frequencies does not fully rely on subcortical auditory properties, but is critically shaped at the cortical level, possibly through functional connections between the auditory cortex and other, movement-related, brain structures. This process of temporal selection would thus enable endogenous and motor entrainment to emerge with substantial flexibility and invariance with respect to the rhythmic input in humans in contrast with non-human animals. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
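The frequency-tagging logic used in such studies can be sketched as follows: take the spectrum of the response and compare amplitudes at meter-related versus meter-unrelated frequencies after a simple neighboring-bin noise correction. The sampling rate, frequencies, and synthetic signal below are placeholders, not the study's parameters.

```python
# Hedged sketch: amplitude at target frequencies in an EEG-like spectrum,
# with neighboring bins subtracted as a rough noise floor estimate.
import numpy as np

fs = 512.0                                   # sampling rate (Hz), assumed
t = np.arange(0, 60, 1 / fs)                 # 60 s of signal
# toy "EEG": a response at 1.25 Hz (meter-related) buried in noise
eeg = 0.5 * np.sin(2 * np.pi * 1.25 * t) + np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def amplitude_at(f_target, n_neighbors=5):
    """Amplitude at the bin closest to f_target, minus the mean of nearby bins
    (skipping the immediately adjacent bin on each side)."""
    idx = np.argmin(np.abs(freqs - f_target))
    neighbors = np.r_[spectrum[idx - 1 - n_neighbors:idx - 1],
                      spectrum[idx + 2:idx + 2 + n_neighbors]]
    return spectrum[idx] - neighbors.mean()

meter_related   = [1.25, 2.5]    # example frequencies (Hz), illustrative only
meter_unrelated = [1.75, 3.25]
print([round(amplitude_at(f), 3) for f in meter_related])
print([round(amplitude_at(f), 3) for f in meter_unrelated])
```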
Alais, David; Cass, John
2010-06-23
An outstanding question in sensory neuroscience is whether the perceived timing of events is mediated by a central supra-modal timing mechanism, or multiple modality-specific systems. We use a perceptual learning paradigm to address this question. Three groups were trained daily for 10 sessions on an auditory, a visual or a combined audiovisual temporal order judgment (TOJ). Groups were pre-tested on a range of TOJ tasks within and between their group modality prior to learning so that transfer of any learning from the trained task could be measured by post-testing other tasks. Robust TOJ learning (reduced temporal order discrimination thresholds) occurred for all groups, although auditory learning (dichotic 500/2000 Hz tones) was slightly weaker than visual learning (lateralised grating patches). Crossmodal TOJs also displayed robust learning. Post-testing revealed that improvements in temporal resolution acquired during visual learning transferred within modality to other retinotopic locations and orientations, but not to auditory or crossmodal tasks. Auditory learning did not transfer to visual or crossmodal tasks, and neither did it transfer within audition to another frequency pair. In an interesting asymmetry, crossmodal learning transferred to all visual tasks but not to auditory tasks. Finally, in all conditions, learning to make TOJs for stimulus onsets did not transfer at all to discriminating temporal offsets. These data present a complex picture of timing processes. The lack of transfer between unimodal groups indicates no central supramodal timing process for this task; however, the audiovisual-to-visual transfer cannot be explained without some form of sensory interaction. We propose that auditory learning occurred in frequency-tuned processes in the periphery, precluding interactions with more central visual and audiovisual timing processes. Functionally the patterns of featural transfer suggest that perceptual learning of temporal order may be optimised to object-centered rather than viewer-centered constraints.
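A hedged sketch of how a TOJ threshold (just-noticeable difference) and point of subjective simultaneity can be estimated by fitting a cumulative Gaussian to response proportions; the SOAs and data below are invented.

```python
# Hedged sketch: fit a cumulative Gaussian psychometric function to TOJ data.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

soa_ms = np.array([-120, -80, -40, -20, 20, 40, 80, 120])          # negative = B first
p_a_first = np.array([0.05, 0.10, 0.25, 0.40, 0.60, 0.78, 0.92, 0.97])

def cum_gauss(x, mu, sigma):
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(cum_gauss, soa_ms, p_a_first, p0=[0.0, 50.0])

# One common threshold definition: half the SOA range between the 25% and 75% points.
jnd = (norm.ppf(0.75, mu, sigma) - norm.ppf(0.25, mu, sigma)) / 2
print(f"PSS = {mu:.1f} ms, JND = {jnd:.1f} ms")
```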
Mismatch and conflict: neurophysiological and behavioral evidence for conflict priming.
Mager, Ralph; Meuth, Sven G; Kräuchi, Kurt; Schmidlin, Maria; Müller-Spahn, Franz; Falkenstein, Michael
2009-11-01
Conflict-related cognitive processes are critical for adapting to sudden environmental changes that confront the individual with inconsistent or ambiguous information. Thus, these processes play a crucial role in coping with daily life. Generally, conflicts tend to accumulate, especially in complex and threatening situations. Therefore, the question arises as to how conflict-related cognitive processes are modulated by the close succession of conflicts. In the present study, we investigated the effect of interactions between different types of conflict on performance as well as on electrophysiological parameters. A task-irrelevant auditory stimulus and a task-relevant visual stimulus were presented successively. The auditory stimulus consisted of a standard or deviant tone, followed by a congruent or incongruent Stroop stimulus. After standard prestimuli, performance deteriorated for incongruent compared to congruent Stroop stimuli, which was accompanied by a widespread negativity for incongruent versus congruent stimuli in the event-related potentials (ERPs). However, after deviant prestimuli, performance was better for incongruent than for congruent Stroop stimuli and an additional early negativity in the ERP emerged with a fronto-central maximum. Our data show that deviant auditory prestimuli specifically facilitate the processing of stimulus-related conflict, providing evidence for a conflict-priming effect.
Cacace, Anthony T; McFarland, Dennis J
2013-01-01
Tests of auditory perception, such as those used in the assessment of central auditory processing disorders ([C]APDs), represent a domain in audiological assessment where measurement of this theoretical construct is often confounded by nonauditory abilities due to methodological shortcomings. These confounds include the effects of cognitive variables such as memory and attention and suboptimal testing paradigms, including the use of verbal reproduction as a form of response selection. We argue that these factors need to be controlled more carefully and/or modified so that their impact on tests of auditory and visual perception is only minimal. To advocate for a stronger theoretical framework than currently exists and to suggest better methodological strategies to improve assessment of auditory processing disorders (APDs). Emphasis is placed on adaptive forced-choice psychophysical methods and the use of matched tasks in multiple sensory modalities to achieve these goals. Together, this approach has potential to improve the construct validity of the diagnosis, enhance and develop theory, and evolve into a preferred method of testing. Examination of methods commonly used in studies of APDs. Where possible, currently used methodology is compared to contemporary psychophysical methods that emphasize computer-controlled forced-choice paradigms. In many cases, the procedures used in studies of APD introduce confounding factors that could be minimized if computer-controlled forced-choice psychophysical methods were utilized. Ambiguities of interpretation, indeterminate diagnoses, and unwanted confounds can be avoided by minimizing memory and attentional demands on the input end and precluding the use of response-selection strategies that use complex motor processes on the output end. Advocated are the use of computer-controlled forced-choice psychophysical paradigms in combination with matched tasks in multiple sensory modalities to enhance the prospect of obtaining a valid diagnosis. American Academy of Audiology.
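As one concrete example of the computer-controlled, adaptive forced-choice procedures advocated here, the sketch below runs a two-down/one-up staircase (converging near 70.7% correct) against a simulated listener; the psychometric function and all parameters are placeholders, not a clinical protocol.

```python
# Hedged sketch: a 2-down/1-up adaptive staircase for a forced-choice task.
import random

def simulated_listener(level, true_threshold=10.0):
    """Returns True (correct) with probability rising as the level exceeds threshold."""
    p_correct = 0.5 + 0.5 / (1 + 10 ** ((true_threshold - level) / 4))
    return random.random() < p_correct

level, step = 40.0, 4.0
consecutive_correct, reversals, last_direction = 0, [], None

while len(reversals) < 8:
    correct = simulated_listener(level)
    consecutive_correct = consecutive_correct + 1 if correct else 0
    if not correct:                      # one incorrect -> make the task easier
        direction = "up"
        level += step
    elif consecutive_correct == 2:       # two correct in a row -> make it harder
        direction = "down"
        level -= step
        consecutive_correct = 0
    else:
        continue                         # one correct: wait for the second
    if last_direction and direction != last_direction:
        reversals.append(level)          # record the level at each reversal
    last_direction = direction

print("Estimated threshold:", sum(reversals[-6:]) / 6)   # mean of last 6 reversals
```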
Inter-subject synchronization of brain responses during natural music listening.
Abrams, Daniel A; Ryali, Srikanth; Chen, Tianwen; Chordia, Parag; Khouzam, Amirah; Levitin, Daniel J; Menon, Vinod
2013-05-01
Music is a cultural universal and a rich part of the human experience. However, little is known about common brain systems that support the processing and integration of extended, naturalistic 'real-world' music stimuli. We examined this question by presenting extended excerpts of symphonic music, and two pseudomusical stimuli in which the temporal and spectral structure of the Natural Music condition were disrupted, to non-musician participants undergoing functional brain imaging and analysing synchronized spatiotemporal activity patterns between listeners. We found that music synchronizes brain responses across listeners in bilateral auditory midbrain and thalamus, primary auditory and auditory association cortex, right-lateralized structures in frontal and parietal cortex, and motor planning regions of the brain. These effects were greater for natural music compared to the pseudo-musical control conditions. Remarkably, inter-subject synchronization in the inferior colliculus and medial geniculate nucleus was also greater for the natural music condition, indicating that synchronization at these early stages of auditory processing is not simply driven by spectro-temporal features of the stimulus. Increased synchronization during music listening was also evident in a right-hemisphere fronto-parietal attention network and bilateral cortical regions involved in motor planning. While these brain structures have previously been implicated in various aspects of musical processing, our results are the first to show that these regions track structural elements of a musical stimulus over extended time periods lasting minutes. Our results show that a hierarchical distributed network is synchronized between individuals during the processing of extended musical sequences, and provide new insight into the temporal integration of complex and biologically salient auditory sequences. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
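A simplified sketch of the inter-subject correlation idea underlying such synchronization analyses: each listener's regional time course is correlated with the leave-one-out average of the other listeners. The data are simulated and the preprocessing steps of the actual fMRI study are omitted.

```python
# Hedged sketch: leave-one-out inter-subject correlation (ISC) on simulated data.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_timepoints = 10, 300
shared = rng.standard_normal(n_timepoints)                              # stimulus-driven component
data = 0.6 * shared + rng.standard_normal((n_subjects, n_timepoints))   # subject x time

isc = []
for s in range(n_subjects):
    others = np.delete(data, s, axis=0).mean(axis=0)                    # average of all other listeners
    isc.append(np.corrcoef(data[s], others)[0, 1])

print("Mean ISC:", np.mean(isc))
```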
Dykstra, Andrew R; Halgren, Eric; Gutschalk, Alexander; Eskandar, Emad N; Cash, Sydney S
2016-01-01
In complex acoustic environments, even salient supra-threshold sounds sometimes go unperceived, a phenomenon known as informational masking. The neural basis of informational masking (and its release) has not been well-characterized, particularly outside auditory cortex. We combined electrocorticography in a neurosurgical patient undergoing invasive epilepsy monitoring with trial-by-trial perceptual reports of isochronous target-tone streams embedded in random multi-tone maskers. Awareness of such masker-embedded target streams was associated with a focal negativity between 100 and 200 ms and high-gamma activity (HGA) between 50 and 250 ms (both in auditory cortex on the posterolateral superior temporal gyrus) as well as a broad P3b-like potential (between ~300 and 600 ms) with generators in ventrolateral frontal and lateral temporal cortex. Unperceived target tones elicited drastically reduced versions of such responses, if at all. While it remains unclear whether these responses reflect conscious perception, itself, as opposed to pre- or post-perceptual processing, the results suggest that conscious perception of target sounds in complex listening environments may engage diverse neural mechanisms in distributed brain areas.
Auditory cortex of bats and primates: managing species-specific calls for social communication
Kanwal, Jagmeet S.; Rauschecker, Josef P.
2014-01-01
Individuals of many animal species communicate with each other using sounds or “calls” that are made up of basic acoustic patterns and their combinations. We are interested in questions about the processing of communication calls and their representation within the mammalian auditory cortex. Our studies compare in particular two species for which a large body of data has accumulated: the mustached bat and the rhesus monkey. We conclude that the brains of both species share a number of functional and organizational principles, which differ only in the extent to which and how they are implemented. For instance, neurons in both species use “combination-sensitivity” (nonlinear spectral and temporal integration of stimulus components) as a basic mechanism to enable exquisite sensitivity to and selectivity for particular call types. Whereas combination-sensitivity is already found abundantly at the primary auditory cortical and also at subcortical levels in bats, it becomes prevalent only at the level of the lateral belt in the secondary auditory cortex of monkeys. A parallel-hierarchical framework for processing complex sounds up to the level of the auditory cortex in bats and an organization into parallel-hierarchical, cortico-cortical auditory processing streams in monkeys is another common principle. Response specialization of neurons seems to be more pronounced in bats than in monkeys, whereas a functional specialization into “what” and “where” streams in the cerebral cortex is more pronounced in monkeys than in bats. These differences, in part, are due to the increased number and larger size of auditory areas in the parietal and frontal cortex in primates. Accordingly, the computational prowess of neural networks and the functional hierarchy resulting in specializations is established early and accelerated across brain regions in bats. The principles proposed here for the neural “management” of species-specific calls in bats and primates can be tested by studying the details of call processing in additional species. Also, computational modeling in conjunction with coordinated studies in bats and monkeys can help to clarify the fundamental question of perceptual invariance (or “constancy”) in call recognition, which has obvious relevance for understanding speech perception and its disorders in humans. PMID:17485400
The Contribution of Brainstem and Cerebellar Pathways to Auditory Recognition
McLachlan, Neil M.; Wilson, Sarah J.
2017-01-01
The cerebellum has been known to play an important role in motor functions for many years. More recently its role has been expanded to include a range of cognitive and sensory-motor processes, and substantial neuroimaging and clinical evidence now points to cerebellar involvement in most auditory processing tasks. In particular, an increase in the size of the cerebellum over recent human evolution has been attributed in part to the development of speech. Despite this, the auditory cognition literature has largely overlooked afferent auditory connections to the cerebellum that have been implicated in acoustically conditioned reflexes in animals, and could subserve speech and other auditory processing in humans. This review expands our understanding of auditory processing by incorporating cerebellar pathways into the anatomy and functions of the human auditory system. We reason that plasticity in the cerebellar pathways underpins implicit learning of spectrotemporal information necessary for sound and speech recognition. Once learnt, this information automatically recognizes incoming auditory signals and predicts likely subsequent information based on previous experience. Since sound recognition processes involving the brainstem and cerebellum initiate early in auditory processing, learnt information stored in cerebellar memory templates could then support a range of auditory processing functions such as streaming, habituation, the integration of auditory feature information such as pitch, and the recognition of vocal communications. PMID:28373850
Research and Studies Directory for Manpower, Personnel, and Training
1988-01-01
Directory listing fragments (OCR of a research directory; phone numbers and column layout garbled): Psychophysiological Mapping of Cognitive Processes; Suga, N., Washington Univ., St. Louis, MO; Control of Biosonar Behavior by the Auditory Cortex; Visual Perception; Dichotic Listening to Complex Sounds: Effects of Stimulus Characteristics and…
Strategy Choice Mediates the Link between Auditory Processing and Spelling
Kwong, Tru E.; Brachman, Kyle J.
2014-01-01
Relations among linguistic auditory processing, nonlinguistic auditory processing, spelling ability, and spelling strategy choice were examined. Sixty-three undergraduate students completed measures of auditory processing (one involving distinguishing similar tones, one involving distinguishing similar phonemes, and one involving selecting appropriate spellings for individual phonemes). Participants also completed a modified version of a standardized spelling test, and a secondary spelling test with retrospective strategy reports. Once testing was completed, participants were divided into phonological versus nonphonological spellers on the basis of the number of words they spelled using phonological strategies only. Results indicated a) moderate to strong positive correlations among the different auditory processing tasks in terms of reaction time, but not accuracy levels, and b) weak to moderate positive correlations between measures of linguistic auditory processing (phoneme distinction and phoneme spelling choice in the presence of foils) and spelling ability for phonological spellers, but not for nonphonological spellers. These results suggest a possible explanation for past contradictory research on auditory processing and spelling, which has been divided in terms of whether or not disabled spellers seemed to have poorer auditory processing than did typically developing spellers, and suggest implications for teaching spelling to children with good versus poor auditory processing abilities. PMID:25198787
Ma, Xiaoran; McPherson, Bradley; Ma, Lian
2016-03-01
Objective: Children with nonsyndromic cleft lip and/or palate often have a high prevalence of middle ear dysfunction. However, there are also indications that they may have a higher prevalence of (central) auditory processing disorder. This study used Fisher's Auditory Problems Checklist for caregivers to determine whether children with nonsyndromic cleft lip and/or palate have potentially more auditory processing difficulties compared with craniofacially normal children. Methods: Caregivers of 147 school-aged children with nonsyndromic cleft lip and/or palate were recruited for the study. This group was divided into three subgroups: cleft lip, cleft palate, and cleft lip and palate. Caregivers of 60 craniofacially normal children were recruited as a control group. Hearing health tests were conducted to evaluate peripheral hearing. Caregivers of children who passed this assessment battery completed Fisher's Auditory Problems Checklist, which contains 25 questions related to behaviors linked to (central) auditory processing disorder. Results: Children with cleft palate showed the lowest scores on the Fisher's Auditory Problems Checklist questionnaire, consistent with a higher index of suspicion for (central) auditory processing disorder. There was a significant difference in the manifestation of (central) auditory processing disorder-linked behaviors between the cleft palate and control groups. The most common behaviors reported in the nonsyndromic cleft lip and/or palate group were short attention span and reduced learning motivation, along with hearing difficulties in noise. Conclusion: A higher occurrence of (central) auditory processing disorder-linked behaviors was found in children with nonsyndromic cleft lip and/or palate, particularly cleft palate. Auditory processing abilities should not be ignored in children with nonsyndromic cleft lip and/or palate, and it is necessary to consider assessment tests for (central) auditory processing disorder when an auditory diagnosis is made for this population.
[Auditory processing and high frequency audiometry in students of São Paulo].
Ramos, Cristina Silveira; Pereira, Liliane Desgualdo
2005-01-01
Auditory processing and auditory sensitivity to high-frequency sounds. To characterize sound localization processes, temporal ordering, auditory patterns, and the detection of high-frequency sounds, looking for possible relations among these factors. Thirty-two fourth-grade students with normal hearing, born in the city of São Paulo, underwent a simplified evaluation of auditory processing, the duration pattern test, and high-frequency audiometry. Three (9.4%) individuals presented auditory processing disorder (APD), and in one of them lower hearing thresholds in high-frequency audiometry coexisted. APD associated with a loss of auditory sensitivity at high frequencies should be further investigated.
Perrone-Bertolotti, Marcela; Kujala, Jan; Vidal, Juan R; Hamame, Carlos M; Ossandon, Tomas; Bertrand, Olivier; Minotti, Lorella; Kahane, Philippe; Jerbi, Karim; Lachaux, Jean-Philippe
2012-12-05
As you might experience it while reading this sentence, silent reading often involves an imagery speech component: we can hear our own "inner voice" pronouncing words mentally. Recent functional magnetic resonance imaging studies have associated that component with increased metabolic activity in the auditory cortex, including voice-selective areas. It remains to be determined, however, whether this activation arises automatically from early bottom-up visual inputs or whether it depends on late top-down control processes modulated by task demands. To answer this question, we collaborated with four epileptic human patients recorded with intracranial electrodes in the auditory cortex for therapeutic purposes, and measured high-frequency (50-150 Hz) "gamma" activity as a proxy of population level spiking activity. Temporal voice-selective areas (TVAs) were identified with an auditory localizer task and monitored as participants viewed words flashed on screen. We compared neural responses depending on whether words were attended or ignored and found a significant increase of neural activity in response to words, strongly enhanced by attention. In one of the patients, we could record that response at 800 ms in TVAs, but also at 700 ms in the primary auditory cortex and at 300 ms in the ventral occipital temporal cortex. Furthermore, single-trial analysis revealed a considerable jitter between activation peaks in visual and auditory cortices. Altogether, our results demonstrate that the multimodal mental experience of reading is in fact a heterogeneous complex of asynchronous neural responses, and that auditory and visual modalities often process distinct temporal frames of our environment at the same time.
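A hedged sketch of how a high-gamma (50-150 Hz) amplitude envelope might be extracted from a single intracranial channel with a band-pass filter plus Hilbert transform; the signal and sampling rate are synthetic placeholders rather than patient data.

```python
# Hedged sketch: band-pass filter followed by Hilbert envelope for high-gamma activity.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                                   # sampling rate (Hz), assumed
t = np.arange(0, 2, 1 / fs)
# toy signal: a burst of 80 Hz activity between 0.8 and 1.2 s on top of noise
signal = np.random.randn(t.size) * 0.5
burst = (t > 0.8) & (t < 1.2)
signal[burst] += 2.0 * np.sin(2 * np.pi * 80 * t[burst])

b, a = butter(4, [50 / (fs / 2), 150 / (fs / 2)], btype="band")
band = filtfilt(b, a, signal)                 # zero-phase band-pass in the 50-150 Hz range
envelope = np.abs(hilbert(band))              # instantaneous high-gamma amplitude

print("Peak envelope latency (s):", t[np.argmax(envelope)])
```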
Ahnaou, Abdallah; Biermans, Ria; Drinkenburg, Wilhelmus H.
2016-01-01
Improvement of cognitive impairments represents a high medical need in the development of new antipsychotics. Aberrant EEG gamma oscillations and reductions in the P1/N1 complex peak amplitude of the auditory evoked potential (AEP) are neurophysiological biomarkers for schizophrenia that indicate disruption in sensory information processing. Inhibition of phosphodiesterase (i.e. PDE10A) and activation of metabotropic glutamate receptor (mGluR2) signaling are believed to provide antipsychotic efficacy in schizophrenia, but it is unclear whether this occurs with cognition-enhancing potential. The present study used the auditory paired click paradigm in passive awake Sprague Dawley rats to 1) model disruption of AEP waveforms and oscillations as observed in schizophrenia by peripheral administration of amphetamine and the N-methyl-D-aspartate (NMDA) antagonist phencyclidine (PCP); 2) confirm the potential of the antipsychotics risperidone and olanzapine to attenuate these disruptions; 3) evaluate the potential of the mGluR2 agonist LY404039 and the PDE10 inhibitor PQ-10 to improve AEP deficits in both the amphetamine and PCP models. PCP and amphetamine disrupted auditory information processing to the first click, associated with suppression of the P1/N1 complex peak amplitude, and increased cortical gamma oscillations. Risperidone and olanzapine normalized PCP- and amphetamine-induced abnormalities in AEP waveforms and aberrant gamma/alpha oscillations, respectively. LY404039 increased P1/N1 complex peak amplitudes and potently attenuated the disruptive effects of both PCP and amphetamine on AEP amplitudes and oscillations. However, PQ-10 failed to show such an effect in either model. These outcomes indicate that modulation of the mGluR2 results in effective restoration of abnormalities in AEP components in two widely used animal models of psychosis, whereas PDE10A inhibition does not. PMID:26808689
[Auditory processing evaluation in children born preterm].
Gallo, Júlia; Dias, Karin Ziliotto; Pereira, Liliane Desgualdo; Azevedo, Marisa Frasson de; Sousa, Elaine Colombo
2011-01-01
To verify the performance of children born preterm on auditory processing evaluation, to correlate the data with the behavioral hearing assessment carried out at 12 months of age, and to compare the results with the auditory processing evaluation of children born full-term. Participants were 30 children aged between 4 and 7 years, who were divided into two groups: Group 1 (children born preterm) and Group 2 (children born full-term). The auditory processing results of Group 1 were correlated with data obtained from the behavioral auditory evaluation carried out at 12 months of age, and results were compared between groups. Subjects in Group 1 presented at least one risk indicator for hearing loss at birth. In the behavioral auditory assessment carried out at 12 months of age, 38% of the children in Group 1 were at risk for central auditory processing deficits, and 93.75% presented auditory processing deficits on the later evaluation. Significant differences were found between the groups for the temporal order test, the PSI test with ipsilateral competitive message, and the speech-in-noise test. The delay in sound localization ability was associated with temporal processing deficits. Children born preterm have worse performance on auditory processing evaluation than children born full-term. Delay in sound localization at 12 months is associated with deficits in the physiological mechanism of temporal processing in the auditory processing evaluation carried out between 4 and 7 years of age.
Bidelman, Gavin M.; Hutka, Stefanie; Moreno, Sylvain
2013-01-01
Psychophysiological evidence suggests that music and language are intimately coupled such that experience/training in one domain can influence processing required in the other domain. While the influence of music on language processing is now well-documented, evidence of language-to-music effects have yet to be firmly established. Here, using a cross-sectional design, we compared the performance of musicians to that of tone-language (Cantonese) speakers on tasks of auditory pitch acuity, music perception, and general cognitive ability (e.g., fluid intelligence, working memory). While musicians demonstrated superior performance on all auditory measures, comparable perceptual enhancements were observed for Cantonese participants, relative to English-speaking nonmusicians. These results provide evidence that tone-language background is associated with higher auditory perceptual performance for music listening. Musicians and Cantonese speakers also showed superior working memory capacity relative to nonmusician controls, suggesting that in addition to basic perceptual enhancements, tone-language background and music training might also be associated with enhanced general cognitive abilities. Our findings support the notion that tone language speakers and musically trained individuals have higher performance than English-speaking listeners for the perceptual-cognitive processing necessary for basic auditory as well as complex music perception. These results illustrate bidirectional influences between the domains of music and language. PMID:23565267
Interactions across Multiple Stimulus Dimensions in Primary Auditory Cortex.
Sloas, David C; Zhuo, Ran; Xue, Hongbo; Chambers, Anna R; Kolaczyk, Eric; Polley, Daniel B; Sen, Kamal
2016-01-01
Although sensory cortex is thought to be important for the perception of complex objects, its specific role in representing complex stimuli remains unknown. Complex objects are rich in information along multiple stimulus dimensions. The position of cortex in the sensory hierarchy suggests that cortical neurons may integrate across these dimensions to form a more gestalt representation of auditory objects. Yet, studies of cortical neurons typically explore single or few dimensions due to the difficulty of determining optimal stimuli in a high dimensional stimulus space. Evolutionary algorithms (EAs) provide a potentially powerful approach for exploring multidimensional stimulus spaces based on real-time spike feedback, but two important issues arise in their application. First, it is unclear whether it is necessary to characterize cortical responses to multidimensional stimuli or whether it suffices to characterize cortical responses to a single dimension at a time. Second, quantitative methods for analyzing complex multidimensional data from an EA are lacking. Here, we apply a statistical method for nonlinear regression, the generalized additive model (GAM), to address these issues. The GAM quantitatively describes the dependence between neural response and all stimulus dimensions. We find that auditory cortical neurons in mice are sensitive to interactions across dimensions. These interactions are diverse across the population, indicating significant integration across stimulus dimensions in auditory cortex. This result strongly motivates using multidimensional stimuli in auditory cortex. Together, the EA and the GAM provide a novel quantitative paradigm for investigating neural coding of complex multidimensional stimuli in auditory and other sensory cortices.
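A hedged sketch of fitting a GAM to spike counts as smooth functions of two stimulus dimensions plus their interaction, in the spirit of the analysis above. It assumes the third-party pygam package; the neuron and stimulus values are simulated, and the authors' exact model specification is not reproduced.

```python
# Hedged sketch: Poisson GAM with two smooth main effects and a tensor interaction term.
import numpy as np
from pygam import PoissonGAM, s, te   # assumes the pygam package is installed

rng = np.random.default_rng(1)
n = 500
freq = rng.uniform(4, 64, n)          # stimulus dimension 1 (e.g., carrier frequency, kHz)
mod  = rng.uniform(0, 100, n)         # stimulus dimension 2 (e.g., modulation rate, Hz)
# simulated neuron whose rate depends on an interaction between the two dimensions
rate = np.exp(0.5 + 0.02 * freq - 0.01 * mod + 0.0008 * freq * mod)
spikes = rng.poisson(rate)

X = np.column_stack([freq, mod])
gam = PoissonGAM(s(0) + s(1) + te(0, 1)).fit(X, spikes)
gam.summary()                          # inspect whether the interaction (tensor) term is supported
```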
Whispering - The hidden side of auditory communication.
Frühholz, Sascha; Trost, Wiebke; Grandjean, Didier
2016-11-15
Whispering is a unique expression mode that is specific to auditory communication. Individuals switch their vocalization mode to whispering especially when affected by inner emotions in certain social contexts, such as in intimate relationships or intimidating social interactions. Although this context-dependent whispering is adaptive, whispered voices are acoustically far less rich than phonated voices and thus impose higher hearing and neural auditory decoding demands for recognizing their socio-affective value by listeners. The neural dynamics underlying this recognition especially from whispered voices are largely unknown. Here we show that whispered voices in humans are considerably impoverished as quantified by an entropy measure of spectral acoustic information, and this missing information needs large-scale neural compensation in terms of auditory and cognitive processing. Notably, recognizing the socio-affective information from voices was slightly more difficult from whispered voices, probably based on missing tonal information. While phonated voices elicited extended activity in auditory regions for decoding of relevant tonal and time information and the valence of voices, whispered voices elicited activity in a complex auditory-frontal brain network. Our data suggest that a large-scale multidirectional brain network compensates for the impoverished sound quality of socially meaningful environmental signals to support their accurate recognition and valence attribution. Copyright © 2016 Elsevier Inc. All rights reserved.
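The entropy measure mentioned above can be illustrated with a normalized Shannon entropy of the power spectrum, which is higher for noise-like (whisper-like) than for harmonic (phonated) signals; the synthetic signals and normalization choice are illustrative assumptions, not the paper's exact computation.

```python
# Hedged sketch: normalized spectral entropy of a harmonic vs a noise-like signal.
import numpy as np

fs = 16000
t = np.arange(0, 1, 1 / fs)
phonated  = sum(np.sin(2 * np.pi * 120 * k * t) for k in range(1, 10))  # harmonics of 120 Hz
whispered = np.random.randn(t.size)                                      # broadband noise

def spectral_entropy(x):
    psd = np.abs(np.fft.rfft(x)) ** 2
    p = psd / psd.sum()                                 # treat the spectrum as a distribution
    p = p[p > 0]
    return -(p * np.log2(p)).sum() / np.log2(p.size)    # normalized to [0, 1]

print("phonated :", spectral_entropy(phonated))   # low entropy: energy in a few harmonics
print("whispered:", spectral_entropy(whispered))  # high entropy: energy spread broadly
```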
Beyond the real world: attention debates in auditory mismatch negativity.
Chung, Kyungmi; Park, Jin Young
2018-04-11
The aim of this study was to address the potential for the auditory mismatch negativity (aMMN) to be used in applied event-related potential (ERP) studies by determining whether the aMMN would be an attention-dependent ERP component and could be differently modulated across visual tasks or virtual reality (VR) stimuli with different visual properties and visual complexity levels. A total of 80 participants, aged 19-36 years, were assigned to either a reading-task (21 men and 19 women) or a VR-task (22 men and 18 women) group. The two visual-task groups of healthy young adults were matched in age, sex, and handedness. All participants were instructed to focus only on the given visual task and to ignore auditory change detection. While participants in the reading-task group read text slides, those in the VR-task group viewed three 360° VR videos in a random order and rated how visually complex the given virtual environment was immediately after each VR video ended. Although a partially significant difference in perceived visual complexity was found with respect to the brightness of the virtual environments, neither visual property (distance or brightness) significantly modulated aMMN amplitudes. A further analysis compared the aMMN amplitudes elicited by a typical MMN task and by an applied VR task. No significant difference in aMMN amplitudes was found between the two groups, which completed visual tasks with different visual-task demands. In conclusion, the aMMN is a reliable ERP marker of preattentive cognitive processing for auditory deviance detection.
Constructing Noise-Invariant Representations of Sound in the Auditory Pathway
Rabinowitz, Neil C.; Willmore, Ben D. B.; King, Andrew J.; Schnupp, Jan W. H.
2013-01-01
Identifying behaviorally relevant sounds in the presence of background noise is one of the most important and poorly understood challenges faced by the auditory system. An elegant solution to this problem would be for the auditory system to represent sounds in a noise-invariant fashion. Since a major effect of background noise is to alter the statistics of the sounds reaching the ear, noise-invariant representations could be promoted by neurons adapting to stimulus statistics. Here we investigated the extent of neuronal adaptation to the mean and contrast of auditory stimulation as one ascends the auditory pathway. We measured these forms of adaptation by presenting complex synthetic and natural sounds, recording neuronal responses in the inferior colliculus and primary fields of the auditory cortex of anaesthetized ferrets, and comparing these responses with a sophisticated model of the auditory nerve. We find that the strength of both forms of adaptation increases as one ascends the auditory pathway. To investigate whether this adaptation to stimulus statistics contributes to the construction of noise-invariant sound representations, we also presented complex, natural sounds embedded in stationary noise, and used a decoding approach to assess the noise tolerance of the neuronal population code. We find that the code for complex sounds in the periphery is affected more by the addition of noise than the cortical code. We also find that noise tolerance is correlated with adaptation to stimulus statistics, so that populations that show the strongest adaptation to stimulus statistics are also the most noise-tolerant. This suggests that the increase in adaptation to sound statistics from auditory nerve to midbrain to cortex is an important stage in the construction of noise-invariant sound representations in the higher auditory brain. PMID:24265596
Kouni, Sophia N; Giannopoulos, Sotirios; Ziavra, Nausika; Koutsojannis, Constantinos
2013-01-01
Acoustic signals are transmitted through the external and middle ear mechanically to the cochlea, where they are transduced into electrical impulses for further transmission via the auditory nerve. The auditory nerve encodes the acoustic sounds that are conveyed to the auditory brainstem. Multiple brainstem nuclei, the cochlea, the midbrain, the thalamus, and the cortex constitute the central auditory system. In clinical practice, auditory brainstem responses (ABRs) to simple stimuli such as clicks or tones are widely used. Recently, complex stimuli, or complex auditory brainstem responses (cABRs), such as those to monosyllabic speech stimuli and music, have been used as a tool to study the brainstem processing of speech sounds. We used the classic 'click' as well as, for the first time, the artificial successive complex stimuli 'ba', which constitute the Greek word 'baba', corresponding to the English 'daddy'. Twenty young adults institutionally diagnosed as dyslexic (10 subjects) or with 'light' dyslexia (10 subjects) comprised the clinical group. Twenty sex-, age-, education-, hearing sensitivity-, and IQ-matched normal subjects comprised the control group. Measurements included the absolute latencies of waves I through V and the interpeak latencies elicited by the classical acoustic click, as well as the negative peak latencies of the A and C waves and the interpeak latencies of A-C elicited by the verbal stimulus 'baba', created on a digital speech synthesizer. The absolute peak latencies of waves I, III, and V in response to monaural rarefaction clicks, as well as the interpeak latencies I-III, III-V, and I-V, although increased in the dyslexic subjects in comparison with normal subjects, did not reach statistical significance (p<0.05 criterion). However, the absolute peak latencies of the negative wave C and the interpeak latencies of A-C elicited by verbal stimuli were increased in the dyslexic group in comparison with the control group (p=0.0004 and p=0.045, respectively). In the subgroup of 10 patients suffering from 'other learning disabilities', who were characterized as having 'light' dyslexia according to dyslexia tests, no significant delays were found in peak latencies A and C or in interpeak latencies A-C in comparison with the control group. The acoustic representation of a speech sound, and in particular of the disyllabic word 'baba', was found to be abnormal at a level as low as the auditory brainstem. Because ABRs mature in early life, this can help to identify subjects with acoustically based learning problems and to apply early intervention, rehabilitation, and treatment. Further studies and more experience with more patients and pathological conditions, such as plasticity of the auditory system, cochlear implants, hearing aids, presbycusis, or auditory neuropathy, are necessary before this type of testing is ready for clinical application. © 2013 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Wagner, Monica; Shafer, Valerie L.; Haxhari, Evis; Kiprovski, Kevin; Behrmann, Katherine; Griffiths, Tara
2017-01-01
Purpose: Atypical cortical sensory waveforms reflecting impaired encoding of auditory stimuli may result from inconsistency in cortical response to the acoustic feature changes within spoken words. Thus, the present study assessed intrasubject stability of the P1-N1-P2 complex and T-complex to multiple productions of spoken nonwords in 48 adults…
Abnormal auditory pattern perception in schizophrenia.
Haigh, Sarah M; Coffman, Brian A; Murphy, Timothy K; Butera, Christiana D; Salisbury, Dean F
2016-10-01
Mismatch negativity (MMN) in response to deviation from physical sound parameters (e.g., pitch, duration) is reduced in individuals with long-term schizophrenia (Sz), suggesting deficits in deviance detection. However, MMN can appear at several time intervals as part of deviance detection. Understanding which part of the processing stream is abnormal in Sz is crucial for understanding MMN pathophysiology. We measured MMN to complex pattern deviants, which have been shown to produce multiple MMNs in healthy controls (HC). Both simple and complex MMNs were recorded from 27 Sz and 27 matched HC. For simple MMN, pitch- and duration-deviants were presented among frequent standard tones. For complex MMN, patterns of five single tones were repeatedly presented, with the occasional deviant group of tones containing an extra sixth tone. Sz showed smaller pitch MMN (p=0.009, ~110ms) and duration MMN (p=0.030, ~170ms) than healthy controls. For complex MMN, there were two deviance-related negativities. The first (~150ms) was not significantly different between HC and SZ. The second was significantly reduced in Sz (p=0.011, ~400ms). The topography of the late complex MMN was consistent with generators in anterior temporal cortex. Worse late MMN in Sz was associated with increased emotional withdrawal, poor attention, lack of spontaneity/conversation, and increased preoccupation. Late MMN blunting in schizophrenia suggests a deficit in later stages of deviance processing. Correlations with negative symptoms measures are preliminary, but suggest that abnormal complex auditory perceptual processes may compound higher-order cognitive and social deficits in the disorder. Copyright © 2016 Elsevier B.V. All rights reserved.
Return of Function after Hair Cell Regeneration
Ryals, Brenda M.; Dent, Micheal L.; Dooling, Robert J.
2012-01-01
The ultimate goal of hair cell regeneration is to restore functional hearing. Because birds begin perceiving and producing song early in life, they provide a propitious model for studying not only whether regeneration of lost hair cells can return auditory sensitivity but also whether this regenerated periphery can restore complex auditory perception and production. They are the only animal where hair cell regeneration occurs naturally after hair cell loss and where the ability to correctly perceive and produce complex acoustic signals is critical to procreation and survival. The purpose of this review article is to survey the most recent literature on behavioral measures of auditory functional return in adult birds after hair cell regeneration. The first portion of the review summarizes the effect of ototoxic drug induced hair cell loss and regeneration on hearing loss and recovery for pure tones. The second portion reviews studies of complex, species-specific vocalization discrimination and recognition after hair cell regeneration. Finally, we discuss the relevance of temporary hearing loss and recovery through hair cell regeneration on complex call and song production. Hearing sensitivity is restored, except for the highest frequencies, after hair cell regeneration in birds, but there are enduring changes to complex auditory perception. These changes do not appear to provide any obstacle to future auditory or vocal learning. PMID:23202051
Auditory motion-specific mechanisms in the primate brain
Baumann, Simon; Dheerendra, Pradeep; Joly, Olivier; Hunter, David; Balezeau, Fabien; Sun, Li; Rees, Adrian; Petkov, Christopher I.; Thiele, Alexander; Griffiths, Timothy D.
2017-01-01
This work examined the mechanisms underlying auditory motion processing in the auditory cortex of awake monkeys using functional magnetic resonance imaging (fMRI). We tested to what extent auditory motion analysis can be explained by the linear combination of static spatial mechanisms, spectrotemporal processes, and their interaction. We found that the posterior auditory cortex, including A1 and the surrounding caudal belt and parabelt, is involved in auditory motion analysis. Static spatial and spectrotemporal processes were able to fully explain motion-induced activation in most parts of the auditory cortex, including A1, but not in circumscribed regions of the posterior belt and parabelt cortex. We show that in these regions motion-specific processes contribute to the activation, providing the first demonstration that auditory motion is not simply deduced from changes in static spatial location. These results demonstrate that parallel mechanisms for motion and static spatial analysis coexist within the auditory dorsal stream. PMID:28472038
Auditory Processing of Amplitude Envelope Rise Time in Adults Diagnosed with Developmental Dyslexia
ERIC Educational Resources Information Center
Pasquini, Elisabeth S.; Corriveau, Kathleen H.; Goswami, Usha
2007-01-01
Studies of basic (nonspeech) auditory processing in adults thought to have developmental dyslexia have yielded a variety of data. Yet there has been little consensus regarding the explanatory value of auditory processing in accounting for reading difficulties. Recently, however, a number of studies of basic auditory processing in children with…
Lu, Sara A; Wickens, Christopher D; Prinet, Julie C; Hutchins, Shaun D; Sarter, Nadine; Sebok, Angelia
2013-08-01
The aim of this study was to integrate empirical data showing the effects of interrupting task modality on the performance of an ongoing visual-manual task and the interrupting task itself. The goal is to support interruption management and the design of multimodal interfaces. Multimodal interfaces have been proposed as a promising means to support interruption management. To ensure the effectiveness of this approach, their design needs to be based on an analysis of empirical data concerning the effectiveness of individual and redundant channels of information presentation. Three meta-analyses were conducted to contrast performance on an ongoing visual task and interrupting tasks as a function of interrupting task modality (auditory vs. tactile, auditory vs. visual, and single modality vs. redundant auditory-visual). In total, 68 studies were included and six moderator variables were considered. The main findings from the meta-analyses are that response times are faster for tactile interrupting tasks in case of low-urgency messages. Accuracy is higher with tactile interrupting tasks for low-complexity signals but higher with auditory interrupting tasks for high-complexity signals. Redundant auditory-visual combinations are preferable for communication tasks during high workload and with a small visual angle of separation. The three meta-analyses contribute to the knowledge base in multimodal information processing and design. They highlight the importance of moderator variables in predicting the effects of interruption task modality on ongoing and interrupting task performance. The findings from this research will help inform the design of multimodal interfaces in data-rich, event-driven domains.
Harper, Nicol S; Schoppe, Oliver; Willmore, Ben D B; Cui, Zhanfeng; Schnupp, Jan W H; King, Andrew J
2016-11-01
Cortical sensory neurons are commonly characterized using the receptive field, the linear dependence of their response on the stimulus. In primary auditory cortex neurons can be characterized by their spectrotemporal receptive fields, the spectral and temporal features of a sound that linearly drive a neuron. However, receptive fields do not capture the fact that the response of a cortical neuron results from the complex nonlinear network in which it is embedded. By fitting a nonlinear feedforward network model (a network receptive field) to cortical responses to natural sounds, we reveal that primary auditory cortical neurons are sensitive over a substantially larger spectrotemporal domain than is seen in their standard spectrotemporal receptive fields. Furthermore, the network receptive field, a parsimonious network consisting of 1-7 sub-receptive fields that interact nonlinearly, consistently better predicts neural responses to auditory stimuli than the standard receptive fields. The network receptive field reveals separate excitatory and inhibitory sub-fields with different nonlinear properties, and interaction of the sub-fields gives rise to important operations such as gain control and conjunctive feature detection. The conjunctive effects, where neurons respond only if several specific features are present together, enable increased selectivity for particular complex spectrotemporal structures, and may constitute an important stage in sound recognition. In conclusion, we demonstrate that fitting auditory cortical neural responses with feedforward network models expands on simple linear receptive field models in a manner that yields substantially improved predictive power and reveals key nonlinear aspects of cortical processing, while remaining easy to interpret in a physiological context.
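A schematic forward pass of a network-receptive-field style model: each hidden unit applies its own spectrotemporal weight field and sigmoid nonlinearity, and the output combines the hidden units through a second nonlinearity. This is an illustration of the architecture under assumed dimensions, not the authors' fitting code.

```python
# Hedged sketch: forward pass of a small feedforward "network receptive field" model.
import numpy as np

rng = np.random.default_rng(2)
n_freq, n_hist, n_hidden = 32, 20, 4            # frequency channels, time lags, sub-receptive fields

W_hidden = rng.standard_normal((n_hidden, n_freq, n_hist)) * 0.1   # one spectrotemporal field per sub-RF
b_hidden = np.zeros(n_hidden)
w_out = rng.standard_normal(n_hidden) * 0.1
b_out = 0.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict(spectrogram):
    """Predict a firing-rate time course from an (n_freq x n_time) spectrogram."""
    n_time = spectrogram.shape[1]
    rates = np.zeros(n_time)
    for ti in range(n_hist, n_time):
        patch = spectrogram[:, ti - n_hist:ti]                     # recent stimulus history
        hidden = sigmoid(np.tensordot(W_hidden, patch, axes=([1, 2], [0, 1])) + b_hidden)
        rates[ti] = sigmoid(w_out @ hidden + b_out)                # nonlinear combination of sub-RFs
    return rates

stimulus = np.abs(rng.standard_normal((n_freq, 500)))              # stand-in spectrogram
print(predict(stimulus)[n_hist:n_hist + 5], "...")

# In practice the weights would be fit to recorded responses (e.g., by regularized
# gradient descent); only the forward model is sketched here.
```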
Vilela, Nadia; Barrozo, Tatiane Faria; Pagan-Neves, Luciana de Oliveira; Sanches, Seisse Gabriela Gandolfi; Wertzner, Haydée Fiszbein; Carvallo, Renata Mota Mamede
2016-02-01
The aim was to identify a cutoff value based on the Percentage of Consonants Correct-Revised index that could indicate the likelihood of a child with a speech-sound disorder also having a (central) auditory processing disorder. Language, audiological, and (central) auditory processing evaluations were administered. The participants were 27 subjects with speech-sound disorders aged 7 years to 10 years and 11 months, who were divided into two different groups according to their (central) auditory processing evaluation results. When a (central) auditory processing disorder was present in association with a speech disorder, the children tended to have lower scores on phonological assessments. A greater severity of speech disorder was related to a greater probability of the child having a (central) auditory processing disorder. The use of a cutoff value for the Percentage of Consonants Correct-Revised index successfully distinguished between children with and without a (central) auditory processing disorder. The severity of speech-sound disorder in children was influenced by the presence of (central) auditory processing disorder. The attempt to identify a cutoff value based on a severity index was successful.
FTAP: a Linux-based program for tapping and music experiments.
Finney, S A
2001-02-01
This paper describes FTAP, a flexible data collection system for tapping and music experiments. FTAP runs on standard PC hardware with the Linux operating system and can process input keystrokes and auditory output with reliable millisecond resolution. It uses standard MIDI devices for input and output and is particularly flexible in the area of auditory feedback manipulation. FTAP can run a wide variety of experiments, including synchronization/continuation tasks (Wing & Kristofferson, 1973), synchronization tasks combined with delayed auditory feedback (Aschersleben & Prinz, 1997), continuation tasks with isolated feedback perturbations (Wing, 1977), and complex alterations of feedback in music performance (Finney, 1997). Such experiments have often been implemented with custom hardware and software systems, but with FTAP they can be specified by a simple ASCII text parameter file. FTAP is available at no cost in source-code form.
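FTAP-style tapping data typically feed analyses such as the Wing and Kristofferson (1973) two-level timing model cited above, which splits inter-tap-interval variance into central "clock" and peripheral "motor" components. The sketch below applies that decomposition to simulated continuation tapping; the tap series and parameter values are invented for illustration.

```python
import numpy as np

def wing_kristofferson(tap_times_ms):
    """Decompose inter-tap-interval variance into clock and motor components.

    Two-level timing model (Wing & Kristofferson, 1973):
      I_n = C_n + M_{n+1} - M_n
      var(I)            = sigma_C^2 + 2 * sigma_M^2
      cov(I_n, I_{n+1}) = -sigma_M^2
    """
    iti = np.diff(np.asarray(tap_times_ms, dtype=float))   # inter-tap intervals
    iti = iti - iti.mean()
    acov1 = np.mean(iti[:-1] * iti[1:])                    # lag-1 autocovariance
    motor_var = max(-acov1, 0.0)                            # negative estimates clipped at 0
    clock_var = max(iti.var() - 2.0 * motor_var, 0.0)
    return clock_var, motor_var

# Simulated continuation tapping at an 800-ms target interval.
rng = np.random.default_rng(1)
n = 300
clock = 800 + rng.normal(scale=15, size=n)      # central timekeeper intervals (ms)
motor = rng.normal(scale=8, size=n + 1)         # motor delay on each tap (ms)
taps = np.concatenate(([0.0], np.cumsum(clock))) + motor
print(wing_kristofferson(taps))   # roughly (225, 64) ms^2 for these parameters
```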
Demonstrations of simple and complex auditory psychophysics for multiple platforms and environments
NASA Astrophysics Data System (ADS)
Horowitz, Seth S.; Simmons, Andrea M.; Blue, China
2005-09-01
Sound is arguably the most widely perceived and pervasive form of energy in our world, and among the least understood, in part due to the complexity of its underlying principles. A series of interactive displays has been developed that demonstrates how sound propagates energy through space and illustrates psychoacoustics: how listeners map the physical aspects of sound and vibration onto their brains. These displays use auditory illusions and commonly experienced music and sound in novel presentations (using interactive computer algorithms) to show that what you hear is not always what you get. The areas covered in these demonstrations range from simple and complex auditory localization, which illustrates why humans are bad at echolocation but excellent at determining the contents of auditory space, to auditory illusions that manipulate fine phase information and make the listener think their head is changing size. Another demonstration shows how auditory and visual localization coincide and how sound can be used to change visual tracking. These demonstrations are designed to run on a wide variety of student-accessible platforms, including web pages, stand-alone presentations, and even hardware-based systems for museum displays.
A novel method of brainstem auditory evoked potentials using complex verbal stimuli.
Kouni, Sophia N; Koutsojannis, Constantinos; Ziavra, Nausika; Giannopoulos, Sotirios
2014-08-01
Click- and tone-evoked auditory brainstem responses are widely used in clinical practice due to their consistency and predictability. More recently, speech-evoked responses have been used to evaluate subcortical processing of complex signals not revealed by responses to clicks and tones. Disyllabic stimuli corresponding to familiar words can induce a pattern of voltage fluctuations in the brain stem resulting in a familiar waveform, and they can yield better information about brain stem nuclei along the ascending central auditory pathway. We describe a new method using the disyllabic word "baba", corresponding to the English "daddy", which is commonly used in many languages spanning from West Africa through the Eastern Mediterranean to East Asia. This method was applied in 20 young adults institutionally diagnosed as dyslexic (10 subjects) or mildly dyslexic (10 subjects), who were compared with 20 normal subjects matched for sex, age, education, hearing sensitivity, and IQ. The absolute peak latencies of the negative wave C and the A-C interpeak latencies elicited by the verbal stimulus "baba" were significantly increased in the dyslexic group in comparison with the control group. The method is easy to apply and helpful for diagnosing abnormalities affecting the auditory pathway, identifying subjects with early perceptual and cortical representation abnormalities, and selecting suitable therapeutic and rehabilitation management.
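A rough illustration of the kind of measurement reported here: extracting peak latencies from an averaged evoked waveform within fixed latency windows. The simulated waveform, the polarities, and the "A"/"C" windows below are placeholder assumptions, not the paper's recording parameters.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 20000                                   # sampling rate (Hz), assumed
t = np.arange(0, 0.05, 1 / fs) * 1000        # time axis in ms

# Simulated averaged speech-evoked brainstem response: two smooth deflections plus noise.
rng = np.random.default_rng(2)
abr = (0.4 * np.exp(-((t - 7) ** 2) / 2) -
       0.5 * np.exp(-((t - 13) ** 2) / 3) +
       0.02 * rng.normal(size=t.size))

def peak_latency(waveform, time_ms, window, negative=False):
    """Return latency (ms) of the largest peak inside a latency window."""
    sign = -1.0 if negative else 1.0
    mask = (time_ms >= window[0]) & (time_ms <= window[1])
    idx, _ = find_peaks(sign * waveform[mask])
    if idx.size == 0:
        return np.nan
    best = idx[np.argmax(sign * waveform[mask][idx])]
    return time_ms[mask][best]

lat_A = peak_latency(abr, t, window=(5, 10))                   # positive wave "A" (illustrative)
lat_C = peak_latency(abr, t, window=(10, 18), negative=True)   # negative wave "C" (illustrative)
print(f"A: {lat_A:.2f} ms, C: {lat_C:.2f} ms, A-C interpeak: {lat_C - lat_A:.2f} ms")
```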
A Brain for Speech. Evolutionary Continuity in Primate and Human Auditory-Vocal Processing
Aboitiz, Francisco
2018-01-01
In this review article, I propose a continuous evolution from the auditory-vocal apparatus and its mechanisms of neural control in non-human primates, to the peripheral organs and the neural control of human speech. Although there is an overall conservatism both in peripheral systems and in central neural circuits, a few changes were critical for the expansion of vocal plasticity and the elaboration of proto-speech in early humans. Two of the most relevant changes were the acquisition of direct cortical control of the vocal fold musculature and the consolidation of an auditory-vocal articulatory circuit, encompassing auditory areas in the temporoparietal junction and prefrontal and motor areas in the frontal cortex. This articulatory loop, also referred to as the phonological loop, enhanced vocal working memory capacity, enabling early humans to learn increasingly complex utterances. The auditory-vocal circuit became progressively coupled to multimodal systems conveying information about objects and events, which gradually led to the acquisition of modern speech. Gestural communication has accompanied the development of vocal communication since very early in human evolution, and although both systems co-evolved tightly in the beginning, at some point speech became the main channel of communication. PMID:29636657
NASA Astrophysics Data System (ADS)
Leek, Marjorie R.; Neff, Donna L.
2004-05-01
Charles Watson's studies of informational masking and the effects of stimulus uncertainty on auditory perception have had a profound impact on auditory research. His series of seminal studies in the mid-1970s on the detection and discrimination of target sounds in sequences of brief tones with uncertain properties addresses the fundamental problem of extracting target signals from background sounds. As conceptualized by Chuck and others, informational masking results from more central (even "cognitive") processes as a consequence of stimulus uncertainty, and can be distinguished from "energetic" masking, which primarily arises from the auditory periphery. Informational masking techniques are now in common use to study the detection, discrimination, and recognition of complex sounds, the capacity of auditory memory and aspects of auditory selective attention, the often large effects of training to reduce detrimental effects of uncertainty, and the perceptual segregation of target sounds from irrelevant context sounds. This paper will present an overview of past and current research on informational masking, and show how Chuck's work has been expanded in several directions by other scientists to include the effects of informational masking on speech perception and on perception by listeners with hearing impairment. [Work supported by NIDCD.]
A roadmap for the study of conscious audition and its neural basis
Cariani, Peter A.; Gutschalk, Alexander
2017-01-01
How and which aspects of neural activity give rise to subjective perceptual experience—i.e. conscious perception—is a fundamental question of neuroscience. To date, the vast majority of work concerning this question has come from vision, raising the issue of generalizability of prominent resulting theories. However, recent work has begun to shed light on the neural processes subserving conscious perception in other modalities, particularly audition. Here, we outline a roadmap for the future study of conscious auditory perception and its neural basis, paying particular attention to how conscious perception emerges (and of which elements or groups of elements) in complex auditory scenes. We begin by discussing the functional role of the auditory system, particularly as it pertains to conscious perception. Next, we ask: what are the phenomena that need to be explained by a theory of conscious auditory perception? After surveying the available literature for candidate neural correlates, we end by considering the implications that such results have for a general theory of conscious perception as well as prominent outstanding questions and what approaches/techniques can best be used to address them. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044014
Auditory Processing Disorder in Children
Auditory processing disorder (APD) describes a condition ...
Processing of Communication Sounds: Contributions of Learning, Memory, and Experience
Bigelow, James; Rossi, Breein
2013-01-01
Abundant evidence from both field and lab studies has established that conspecific vocalizations (CVs) are of critical ecological significance for a wide variety of species, including humans, nonhuman primates, rodents, and other mammals and birds. Correspondingly, a number of experiments have demonstrated behavioral processing advantages for CVs, such as in discrimination and memory tasks. Further, a wide range of experiments have described brain regions in many species that appear to be specialized for processing CVs. For example, several neural regions have been described in both mammals and birds wherein greater neural responses are elicited by CVs than by comparison stimuli such as heterospecific vocalizations, nonvocal complex sounds, and artificial stimuli. These observations raise the question of whether these regions reflect domain-specific neural mechanisms dedicated to processing CVs, or alternatively, if these regions reflect domain-general neural mechanisms for representing complex sounds of learned significance. Inasmuch as CVs can be viewed as complex combinations of basic spectrotemporal features, the plausibility of the latter position is supported by a large body of literature describing modulated cortical and subcortical representation of a variety of acoustic features that have been experimentally associated with stimuli of natural behavioral significance (such as food rewards). Herein, we review a relatively small body of existing literature describing the roles of experience, learning, and memory in the emergence of species-typical neural representations of CVs and auditory system plasticity. In both songbirds and mammals, manipulations of auditory experience as well as specific learning paradigms are shown to modulate neural responses evoked by CVs, either in terms of overall firing rate or temporal firing patterns. In some cases, CV-sensitive neural regions gradually acquire representation of non-CV stimuli with which subjects have training and experience. These results parallel literature in humans describing modulation of responses in face-sensitive neural regions through learning and experience. Thus, although many questions remain, the available evidence is consistent with the notion that CVs may acquire distinct neural representation through domain-general mechanisms for representing complex auditory objects that are of learned importance to the animal. PMID:23792078
Process Timing and Its Relation to the Coding of Tonal Harmony
ERIC Educational Resources Information Center
Aksentijevic, Aleksandar; Barber, Paul J.; Elliott, Mark A.
2011-01-01
Advances in auditory research suggest that gamma-band synchronization of frequency-specific cortical loci could be responsible for the integration of pure tones (harmonics) into harmonic complex tones. Thus far, evidence for such a mechanism has been revealed in neurophysiological studies, with little corroborative psychophysical evidence. In six…
Auditory Discrimination of Frequency Ratios: The Octave Singularity
ERIC Educational Resources Information Center
Bonnard, Damien; Micheyl, Christophe; Semal, Catherine; Dauman, Rene; Demany, Laurent
2013-01-01
Sensitivity to frequency ratios is essential for the perceptual processing of complex sounds and the appreciation of music. This study assessed the effect of ratio simplicity on ratio discrimination for pure tones presented either simultaneously or sequentially. Each stimulus consisted of four 100-ms pure tones, equally spaced in terms of…
Enhanced Perceptual Functioning in Autism: An Update, and Eight Principles of Autistic Perception
ERIC Educational Resources Information Center
Mottron, Laurent; Dawson, Michelle; Soulieres, Isabelle; Hubert, Benedicte; Burack, Jake
2006-01-01
We propose an "Enhanced Perceptual Functioning" model encompassing the main differences between autistic and non-autistic social and non-social perceptual processing: locally oriented visual and auditory perception, enhanced low-level discrimination, use of a more posterior network in "complex" visual tasks, enhanced perception…
Paltoglou, Aspasia E; Sumner, Christian J; Hall, Deborah A
2011-01-01
Feature-specific enhancement refers to the process by which selectively attending to a particular stimulus feature specifically increases the response in the same region of the brain that codes that stimulus property. Whereas there are many demonstrations of this mechanism in the visual system, the evidence is less clear in the auditory system. The present functional magnetic resonance imaging (fMRI) study examined this process for two complex sound features, namely frequency modulation (FM) and spatial motion. The experimental design enabled us to investigate whether selectively attending to FM and spatial motion enhanced activity in those auditory cortical areas that were sensitive to the two features. To control for attentional effort, the difficulty of the target-detection tasks was matched as closely as possible within listeners. Locations of FM-related and motion-related activation were broadly compatible with previous research. The results also confirmed a general enhancement across the auditory cortex when either feature was being attended to, as compared with passive listening. The feature-specific effects of selective attention revealed the novel finding of enhancement for the nonspatial (FM) feature, but not for the spatial (motion) feature. However, attention to spatial features also recruited several areas outside the auditory cortex. Further analyses led us to conclude that feature-specific effects of selective attention are not statistically robust, and appear to be sensitive to the choice of fMRI experimental design and localizer contrast. PMID:21447093
Gestures, vocalizations, and memory in language origins.
Aboitiz, Francisco
2012-01-01
This article discusses the possible homologies between the human language networks and comparable auditory projection systems in the macaque brain, in an attempt to reconcile two existing views on language evolution: one that emphasizes hand control and gestures, and the other that emphasizes auditory-vocal mechanisms. The capacity for language is based on relatively well defined neural substrates whose rudiments have been traced in the non-human primate brain. At its core, this circuit constitutes an auditory-vocal sensorimotor circuit with two main components, a "ventral pathway" connecting anterior auditory regions with anterior ventrolateral prefrontal areas, and a "dorsal pathway" connecting auditory areas with parietal areas and with posterior ventrolateral prefrontal areas via the arcuate fasciculus and the superior longitudinal fasciculus. In humans, the dorsal circuit is especially important for phonological processing and phonological working memory, capacities that are critical for language acquisition and for complex syntax processing. In the macaque, the homolog of the dorsal circuit overlaps with an inferior parietal-premotor network for hand and gesture selection that is under voluntary control, while vocalizations are largely fixed and involuntary. The recruitment of the dorsal component for vocalization behavior in the human lineage, together with a direct cortical control of the subcortical vocalizing system, are proposed to represent a fundamental innovation in human evolution, generating an inflection point that permitted the explosion of vocal language and human communication. In this context, vocal communication and gesturing have a common history in primate communication.
Speech comprehension training and auditory and cognitive processing in older adults.
Pichora-Fuller, M Kathleen; Levitt, Harry
2012-12-01
To provide a brief history of speech comprehension training systems and an overview of research on auditory and cognitive aging as background to recommendations for future directions for rehabilitation. Two distinct domains were reviewed: one concerning technological and the other concerning psychological aspects of training. Historical trends and advances in these 2 domains were interrelated to highlight converging trends and directions for future practice. Over the last century, technological advances have influenced both the design of hearing aids and training systems. Initially, training focused on children and those with severe loss for whom amplification was insufficient. Now the focus has shifted to older adults with relatively little loss but difficulties listening in noise. Evidence of brain plasticity from auditory and cognitive neuroscience provides new insights into how to facilitate perceptual (re-)learning by older adults. There is a new imperative to complement training to increase bottom-up processing of the signal with more ecologically valid training to boost top-down information processing based on knowledge of language and the world. Advances in digital technologies enable the development of increasingly sophisticated training systems incorporating complex meaningful materials such as music, audiovisual interactive displays, and conversation.
McKeown, Denis; Wellsted, David
2009-06-01
Psychophysical studies are reported examining how the context of recent auditory stimulation may modulate the processing of new sounds. The question posed is how recent tone stimulation may affect ongoing performance in a discrimination task. In the task, two complex sounds occurred in successive intervals. A single target component of one complex was decreased (Experiments 1 and 2) or increased (Experiments 3, 4, and 5) in intensity on half of trials: The task was simply to identify those trials. Prior to each trial, a pure tone inducer was introduced either at the same frequency as the target component or at the frequency of a different component of the complex. Consistent with a frequency-specific form of disruption, discrimination performance was impaired when the inducing tone matched the frequency of the following decrement or increment. A timbre memory model (TMM) is proposed incorporating channel-specific interference allied to inhibition of attending in the coding of sounds in the context of memory traces of recent sounds. (c) 2009 APA, all rights reserved.
Ptok, M; Meisen, R
2008-01-01
The rapid auditory processing deficit theory holds that impaired reading/writing skills are not caused exclusively by a cognitive deficit specific to the representation and processing of speech sounds but arise due to sensory, mainly auditory, deficits. To further explore this theory, we compared different measures of low-level auditory skills to writing skills in school children. Design: prospective study. Participants: school children attending third and fourth grade. Measures: just noticeable differences for intensity and frequency (JNDI, JNDF), gap detection (GD), monaural and binaural temporal order judgement (TOJm, TOJb), and grades in writing, language, and mathematics. Analysis: correlation analysis. No relevant correlation was found between any low-level auditory processing variable and writing skills. These data do not support the rapid auditory processing deficit theory.
Auditory priming improves neural synchronization in auditory-motor entrainment.
Crasta, Jewel E; Thaut, Michael H; Anderson, Charles W; Davies, Patricia L; Gavin, William J
2018-05-22
Neurophysiological research has shown that auditory and motor systems interact during movement to rhythmic auditory stimuli through a process called entrainment. This study explores the neural oscillations underlying auditory-motor entrainment using electroencephalography. Forty young adults were randomly assigned to one of two control conditions, an auditory-only condition or a motor-only condition, prior to a rhythmic auditory-motor synchronization condition (referred to as the combined condition). Participants assigned to the auditory-only condition (auditory-first group) listened to 400 trials of auditory stimuli presented every 800 ms, while those in the motor-only condition (motor-first group) were asked to tap rhythmically every 800 ms without any external stimuli. Following their control condition, all participants completed an auditory-motor combined condition that required tapping along with auditory stimuli every 800 ms. As expected, the neural processes for the combined condition for each group were different compared to their respective control condition. Time-frequency analysis of total power at an electrode site on the left central scalp (C3) indicated that the neural oscillations elicited by auditory stimuli, especially in the beta and gamma range, drove the auditory-motor entrainment. For the combined condition, the auditory-first group had significantly lower evoked power for a region of interest representing sensorimotor processing (4-20 Hz) and less total power in a region associated with anticipation and predictive timing (13-16 Hz) than the motor-first group. Thus, the auditory-only condition served as a priming facilitator of the neural processes in the combined condition, more so than the motor-only condition. Results suggest that even brief periods of rhythmic training of the auditory system lead to neural efficiency facilitating the motor system during the process of entrainment. These findings have implications for interventions using rhythmic auditory stimulation. Copyright © 2018 Elsevier Ltd. All rights reserved.
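A small sketch of the evoked-versus-total-power distinction used in this kind of time-frequency analysis: total power averages single-trial power (keeping non-phase-locked activity), whereas evoked power is computed from the trial-averaged waveform (keeping only phase-locked activity). The sampling rate, trial structure, and Morlet-wavelet settings below are illustrative assumptions.

```python
import numpy as np

fs = 500                                    # sampling rate (Hz), assumed
t = np.arange(-0.2, 0.8, 1 / fs)            # trial time axis (s)
rng = np.random.default_rng(3)

# Simulated trials: a 15-Hz burst whose phase is only partly locked to stimulus onset.
n_trials = 60
trials = np.empty((n_trials, t.size))
for k in range(n_trials):
    jitter = rng.normal(scale=0.02)         # latency jitter -> partly non-phase-locked activity
    burst = np.exp(-((t - 0.2 - jitter) ** 2) / 0.01) * np.cos(2 * np.pi * 15 * (t - jitter))
    trials[k] = burst + rng.normal(scale=0.5, size=t.size)

def morlet(freq, fs, n_cycles=6):
    """Complex Morlet wavelet at `freq` Hz."""
    dur = n_cycles / freq
    tw = np.arange(-dur, dur, 1 / fs)
    sigma = n_cycles / (2 * np.pi * freq)
    return np.exp(2j * np.pi * freq * tw) * np.exp(-tw ** 2 / (2 * sigma ** 2))

def power(signal, freq):
    """Time course of power at `freq` via convolution with a Morlet wavelet."""
    return np.abs(np.convolve(signal, morlet(freq, fs), mode="same")) ** 2

freq = 15
total_power = np.mean([power(tr, freq) for tr in trials], axis=0)   # average of single-trial power
evoked_power = power(trials.mean(axis=0), freq)                     # power of the trial average (ERP)

# Total power retains induced (non-phase-locked) activity; evoked power retains only
# activity phase-locked to stimulus onset, so it is generally smaller.
print(total_power.max(), evoked_power.max())
```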
Connecting the ear to the brain: molecular mechanisms of auditory circuit assembly
Appler, Jessica M.; Goodrich, Lisa V.
2011-01-01
Our sense of hearing depends on precisely organized circuits that allow us to sense, perceive, and respond to complex sounds in our environment, from music and language to simple warning signals. Auditory processing begins in the cochlea of the inner ear, where sounds are detected by sensory hair cells and then transmitted to the central nervous system by spiral ganglion neurons, which faithfully preserve the frequency, intensity, and timing of each stimulus. During the assembly of auditory circuits, spiral ganglion neurons establish precise connections that link hair cells in the cochlea to target neurons in the auditory brainstem, develop specific firing properties, and elaborate unusual synapses both in the periphery and in the CNS. Understanding how spiral ganglion neurons acquire these unique properties is a key goal in auditory neuroscience, as these neurons represent the sole input of auditory information to the brain. In addition, the best currently available treatment for many forms of deafness is the cochlear implant, which compensates for lost hair cell function by directly stimulating the auditory nerve. Historically, studies of the auditory system have lagged behind other sensory systems due to the small size and inaccessibility of the inner ear. With the advent of new molecular genetic tools, this gap is narrowing. Here, we summarize recent insights into the cellular and molecular cues that guide the development of spiral ganglion neurons, from their origin in the proneurosensory domain of the otic vesicle to the formation of specialized synapses that ensure rapid and reliable transmission of sound information from the ear to the brain. PMID:21232575
Geissler, Diana B; Ehret, Günter
2004-02-01
Details of brain areas for acoustical Gestalt perception and the recognition of species-specific vocalizations are not known. Here we show how spectral properties and the recognition of the acoustical Gestalt of wriggling calls of mouse pups based on a temporal property are represented in auditory cortical fields and an association area (dorsal field) of the pups' mothers. We stimulated either with a call model releasing maternal behaviour at a high rate (call recognition) or with two models of low behavioural significance (perception without recognition). Brain activation was quantified using c-Fos immunocytochemistry, counting Fos-positive cells in electrophysiologically mapped auditory cortical fields and the dorsal field. A frequency-specific labelling in two primary auditory fields is related to call perception but not to the discrimination of the biological significance of the call models used. Labelling related to call recognition is present in the second auditory field (AII). A left hemisphere advantage of labelling in the dorsoposterior field seems to reflect an integration of call recognition with maternal responsiveness. The dorsal field is activated only in the left hemisphere. The spatial extent of Fos-positive cells within the auditory cortex and its fields is larger in the left than in the right hemisphere. Our data show that a left hemisphere advantage in processing of a species-specific vocalization up to recognition is present in mice. The differential representation of vocalizations of high vs. low biological significance, as seen only in higher-order and not in primary fields of the auditory cortex, is discussed in the context of perceptual strategies.
The plastic ear and perceptual relearning in auditory spatial perception
Carlile, Simon
2014-01-01
The auditory system of adult listeners has been shown to accommodate to altered spectral cues to sound location, which presumably provides the basis for recalibration to changes in the shape of the ear over a lifetime. Here we review the role of auditory and non-auditory inputs to the perception of sound location and consider a range of recent experiments looking at the role of non-auditory inputs in the process of accommodation to these altered spectral cues. A number of studies have used small ear molds to modify the spectral cues that result in significant degradation in localization performance. Following chronic exposure (10–60 days) performance recovers to some extent, and recent work has demonstrated that this occurs for both audio-visual and audio-only regions of space. This begs the question of what the teacher signal is for this remarkable functional plasticity in the adult nervous system. Following a brief review of the influence of the motor state in auditory localization, we consider the potential role of auditory-motor learning in the perceptual recalibration of the spectral cues. Several recent studies have considered how multi-modal and sensory-motor feedback might influence accommodation to altered spectral cues produced by ear molds or through virtual auditory space stimulation using non-individualized spectral cues. The work with ear molds demonstrates that a relatively short period of training involving audio-motor feedback (5–10 days) significantly improved both the rate and extent of accommodation to altered spectral cues. This has significant implications not only for the mechanisms by which this complex sensory information is encoded to provide spatial cues but also for adaptive training to altered auditory inputs. The review concludes by considering the implications for rehabilitative training with hearing aids and cochlear prosthesis. PMID:25147497
Dick, Frederic K; Lehet, Matt I; Callaghan, Martina F; Keller, Tim A; Sereno, Martin I; Holt, Lori L
2017-12-13
Auditory selective attention is vital in natural soundscapes. But it is unclear how attentional focus on the primary dimension of auditory representation, acoustic frequency, might modulate basic auditory functional topography during active listening. In contrast to visual selective attention, which is supported by motor-mediated optimization of input across saccades and pupil dilation, the primate auditory system has fewer means of differentially sampling the world. This makes spectrally-directed endogenous attention a particularly crucial aspect of auditory attention. Using a novel functional paradigm combined with quantitative MRI, we establish in male and female listeners that human frequency-band-selective attention drives activation in both myeloarchitectonically estimated auditory core, and across the majority of tonotopically mapped nonprimary auditory cortex. The attentionally driven best-frequency maps show strong concordance with sensory-driven maps in the same subjects across much of the temporal plane, with poor concordance in areas outside traditional auditory cortex. There is significantly greater activation across most of auditory cortex when best frequency is attended, versus ignored; the same regions do not show this enhancement when attending to the least-preferred frequency band. Finally, the results demonstrate that there is spatial correspondence between the degree of myelination and the strength of the tonotopic signal across a number of regions in auditory cortex. Strong frequency preferences across tonotopically mapped auditory cortex spatially correlate with R1-estimated myeloarchitecture, indicating shared functional and anatomical organization that may underlie intrinsic auditory regionalization. SIGNIFICANCE STATEMENT Perception is an active process, especially sensitive to attentional state. Listeners direct auditory attention to track a violin's melody within an ensemble performance, or to follow a voice in a crowded cafe. Although diverse pathologies reduce quality of life by impacting such spectrally directed auditory attention, its neurobiological bases are unclear. We demonstrate that human primary and nonprimary auditory cortical activation is modulated by spectrally directed attention in a manner that recapitulates its tonotopic sensory organization. Further, the graded activation profiles evoked by single-frequency bands are correlated with attentionally driven activation when these bands are presented in complex soundscapes. Finally, we observe a strong concordance in the degree of cortical myelination and the strength of tonotopic activation across several auditory cortical regions. Copyright © 2017 Dick et al.
Mind the Gap: Two Dissociable Mechanisms of Temporal Processing in the Auditory System
Anderson, Lucy A.
2016-01-01
High temporal acuity of auditory processing underlies perception of speech and other rapidly varying sounds. A common measure of auditory temporal acuity in humans is the threshold for detection of brief gaps in noise. Gap-detection deficits, observed in developmental disorders, are considered evidence for “sluggish” auditory processing. Here we show, in a mouse model of gap-detection deficits, that auditory brain sensitivity to brief gaps in noise can be impaired even without a general loss of central auditory temporal acuity. Extracellular recordings in three different subdivisions of the auditory thalamus in anesthetized mice revealed a stimulus-specific, subdivision-specific deficit in thalamic sensitivity to brief gaps in noise in experimental animals relative to controls. Neural responses to brief gaps in noise were reduced, but responses to other rapidly changing stimuli unaffected, in lemniscal and nonlemniscal (but not polysensory) subdivisions of the medial geniculate body. Through experiments and modeling, we demonstrate that the observed deficits in thalamic sensitivity to brief gaps in noise arise from reduced neural population activity following noise offsets, but not onsets. These results reveal dissociable sound-onset-sensitive and sound-offset-sensitive channels underlying auditory temporal processing, and suggest that gap-detection deficits can arise from specific impairment of the sound-offset-sensitive channel. SIGNIFICANCE STATEMENT The experimental and modeling results reported here suggest a new hypothesis regarding the mechanisms of temporal processing in the auditory system. Using a mouse model of auditory temporal processing deficits, we demonstrate the existence of specific abnormalities in auditory thalamic activity following sound offsets, but not sound onsets. These results reveal dissociable sound-onset-sensitive and sound-offset-sensitive mechanisms underlying auditory processing of temporally varying sounds. Furthermore, the findings suggest that auditory temporal processing deficits, such as impairments in gap-in-noise detection, could arise from reduced brain sensitivity to sound offsets alone. PMID:26865621
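A toy illustration of the sound-onset versus sound-offset channel idea discussed above: a brief gap is inserted into noise, and two "channels" are modeled as the half-wave-rectified positive and negative derivatives of the stimulus envelope. This is a conceptual sketch under assumed parameters, not the authors' thalamic model.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 44100
dur, gap_ms, gap_start = 0.5, 5, 0.25        # 500-ms noise with a 5-ms gap at 250 ms
rng = np.random.default_rng(4)

noise = rng.normal(size=int(fs * dur))
g0 = int(gap_start * fs)
noise[g0:g0 + int(fs * gap_ms / 1000)] = 0.0        # insert the silent gap

# Envelope: rectify and low-pass (a crude stand-in for peripheral envelope extraction).
b, a = butter(2, 150 / (fs / 2), btype="low")
env = filtfilt(b, a, np.abs(noise))

# Two illustrative channels: half-wave-rectified envelope derivative.
d_env = np.diff(env, prepend=env[0]) * fs
onset_channel = np.maximum(d_env, 0)         # responds to energy increases (gap end, sound onset)
offset_channel = np.maximum(-d_env, 0)       # responds to energy decreases (gap start, sound offset)

# A weaker offset channel would blur the gap while leaving onset responses intact,
# qualitatively mirroring the offset-specific deficit described above.
print("peak onset response :", onset_channel.max())
print("peak offset response:", offset_channel.max())
```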
Estrogenic modulation of auditory processing: a vertebrate comparison
Caras, Melissa L.
2013-01-01
Sex-steroid hormones are well-known regulators of vocal motor behavior in several organisms. A large body of evidence now indicates that these same hormones modulate processing at multiple levels of the ascending auditory pathway. The goal of this review is to provide a comparative analysis of the role of estrogens in vertebrate auditory function. Four major conclusions can be drawn from the literature: First, estrogens may influence the development of the mammalian auditory system. Second, estrogenic signaling protects the mammalian auditory system from noise- and age-related damage. Third, estrogens optimize auditory processing during periods of reproductive readiness in multiple vertebrate lineages. Finally, brain-derived estrogens can act locally to enhance auditory response properties in at least one avian species. This comparative examination may lead to a better appreciation of the role of estrogens in the processing of natural vocalizations and may provide useful insights toward alleviating auditory dysfunctions emanating from hormonal imbalances. PMID:23911849
Reduced auditory processing capacity during vocalization in children with Selective Mutism.
Arie, Miri; Henkin, Yael; Lamy, Dominique; Tetin-Schneider, Simona; Apter, Alan; Sadeh, Avi; Bar-Haim, Yair
2007-02-01
Because abnormal Auditory Efferent Activity (AEA) is associated with auditory distortions during vocalization, we tested whether auditory processing is impaired during vocalization in children with Selective Mutism (SM). Participants were children with SM and abnormal AEA, children with SM and normal AEA, and normally speaking controls, who had to detect aurally presented target words embedded within word lists under two conditions: silence (single task), and while vocalizing (dual task). To ascertain specificity of auditory-vocal deficit, effects of concurrent vocalizing were also examined during a visual task. Children with SM and abnormal AEA showed impaired auditory processing during vocalization relative to children with SM and normal AEA, and relative to control children. This impairment is specific to the auditory modality and does not reflect difficulties in dual task per se. The data extends previous findings suggesting that deficient auditory processing is involved in speech selectivity in SM.
Musicians' edge: A comparison of auditory processing, cognitive abilities and statistical learning.
Mandikal Vasuki, Pragati Rao; Sharma, Mridula; Demuth, Katherine; Arciuli, Joanne
2016-12-01
It has been hypothesized that musical expertise is associated with enhanced auditory processing and cognitive abilities. Recent research has examined the relationship between musicians' advantage and implicit statistical learning skills. In the present study, we assessed a variety of auditory processing skills, cognitive processing skills, and statistical learning (auditory and visual forms) in age-matched musicians (N = 17) and non-musicians (N = 18). Musicians had significantly better performance than non-musicians on frequency discrimination, and backward digit span. A key finding was that musicians had better auditory, but not visual, statistical learning than non-musicians. Performance on the statistical learning tasks was not correlated with performance on auditory and cognitive measures. Musicians' superior performance on auditory (but not visual) statistical learning suggests that musical expertise is associated with an enhanced ability to detect statistical regularities in auditory stimuli. Copyright © 2016 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Fox, Allison M.; Reid, Corinne L.; Anderson, Mike; Richardson, Cassandra; Bishop, Dorothy V. M.
2012-01-01
According to the rapid auditory processing theory, the ability to parse incoming auditory information underpins learning of oral and written language. There is wide variation in this low-level perceptual ability, which appears to follow a protracted developmental course. We studied the development of rapid auditory processing using event-related…
Neural Processing of Musical and Vocal Emotions Through Cochlear Implants Simulation.
Ahmed, Duha G; Paquette, Sebastian; Zeitouni, Anthony; Lehmann, Alexandre
2018-05-01
Cochlear implants (CIs) partially restore the sense of hearing in the deaf. However, the ability to recognize emotions in speech and music is reduced due to the implant's electrical signal limitations and the patient's altered neural pathways. Electrophysiological correlates of these limitations are not yet well established. Here we aimed to characterize the effect of CIs on auditory emotion processing and, for the first time, directly compare vocal and musical emotion processing through a CI simulator. We recorded 16 normal-hearing participants' electroencephalographic activity while listening to vocal and musical emotional bursts in their original form and in a degraded (CI-simulated) condition. We found prolonged P50 latency and reduced N100-P200 complex amplitude in the CI-simulated condition. This points to a limitation in encoding sound signals processed through CI simulation. When comparing the processing of vocal and musical bursts, we found a delay in latency with the musical bursts compared to the vocal bursts in both conditions (original and CI-simulated). This suggests that despite the cochlear implants' limitations, the auditory cortex can distinguish between vocal and musical stimuli. In addition, it adds to the literature supporting the complexity of musical emotion. Replicating this study with actual CI users might lead to characterizing emotional processing in CI users and could ultimately help develop optimal rehabilitation programs or device processing strategies to improve CI users' quality of life.
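A minimal noise-vocoder sketch of the kind of "CI-simulated" degradation referred to above: the signal is split into bands, each band's envelope is extracted and used to modulate band-limited noise, and the channels are summed. The channel count, filter orders, and cutoffs are generic assumptions rather than the study's exact processing chain.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_channels=8, f_lo=100.0, f_hi=7000.0, seed=0):
    """Crude noise vocoder: bandpass -> envelope -> envelope-modulated noise -> sum."""
    rng = np.random.default_rng(seed)
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)     # log-spaced channel edges
    out = np.zeros_like(signal, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)
        env = np.abs(hilbert(band))                       # channel envelope
        carrier = sosfiltfilt(sos, rng.normal(size=signal.size))
        out += env * carrier                              # envelope-modulated band noise
    return out / (np.max(np.abs(out)) + 1e-12)

# Example: vocode a synthetic stand-in for an emotional burst (a frequency glide).
fs = 16000
t = np.arange(0, 0.6, 1 / fs)
burst = np.sin(2 * np.pi * (300 + 400 * t) * t) * np.hanning(t.size)
degraded = noise_vocode(burst, fs)
```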
Musical Experience, Auditory Perception and Reading-Related Skills in Children
Banai, Karen; Ahissar, Merav
2013-01-01
Background The relationships between auditory processing and reading-related skills remain poorly understood despite intensive research. Here we focus on the potential role of musical experience as a confounding factor. Specifically we ask whether the pattern of correlations between auditory and reading related skills differ between children with different amounts of musical experience. Methodology/Principal Findings Third grade children with various degrees of musical experience were tested on a battery of auditory processing and reading related tasks. Very poor auditory thresholds and poor memory skills were abundant only among children with no musical education. In this population, indices of auditory processing (frequency and interval discrimination thresholds) were significantly correlated with and accounted for up to 13% of the variance in reading related skills. Among children with more than one year of musical training, auditory processing indices were better, yet reading related skills were not correlated with them. A potential interpretation for the reduction in the correlations might be that auditory and reading-related skills improve at different rates as a function of musical training. Conclusions/Significance Participants’ previous musical training, which is typically ignored in studies assessing the relations between auditory and reading related skills, should be considered. Very poor auditory and memory skills are rare among children with even a short period of musical training, suggesting musical training could have an impact on both. The lack of correlation in the musically trained population suggests that a short period of musical training does not enhance reading related skills of individuals with within-normal auditory processing skills. Further studies are required to determine whether the associations between musical training, auditory processing and memory are indeed causal or whether children with poor auditory and memory skills are less likely to study music and if so, why this is the case. PMID:24086654
Prefrontal Hemodynamics of Physical Activity and Environmental Complexity During Cognitive Work.
McKendrick, Ryan; Mehta, Ranjana; Ayaz, Hasan; Scheldrup, Melissa; Parasuraman, Raja
2017-02-01
The aim of this study was to assess performance and cognitive states during cognitive work in the presence of physical work and in natural settings. Authors of previous studies have examined the interaction between cognitive and physical work, finding performance decrements in working memory. Neuroimaging has revealed increases and decreases in prefrontal oxygenated hemoglobin during the interaction of cognitive and physical work. The effect of environment on cognitive-physical dual tasking has not been previously considered. Thirteen participants were monitored with wireless functional near-infrared spectroscopy (fNIRS) as they performed an auditory 1-back task while sitting, walking indoors, and walking outdoors. Relative to sitting and walking indoors, auditory working memory performance declined when participants were walking outdoors. Sitting during the auditory 1-back task increased oxygenated hemoglobin and decreased deoxygenated hemoglobin in bilateral prefrontal cortex. Walking reduced the total hemoglobin available to bilateral prefrontal cortex. An increase in environmental complexity reduced oxygenated hemoglobin and increased deoxygenated hemoglobin in bilateral prefrontal cortex. Wireless fNIRS is capable of monitoring cognitive states in naturalistic environments. Selective attention and physical work compete with executive processing. During executive processing, loading of selective attention and physical work results in deactivation of bilateral prefrontal cortex and degraded working memory performance, indicating that physical work and concomitant selective attention may supersede executive processing in the distribution of mental resources. This research informs decision-making procedures in work where working memory, physical activity, and attention interact. Where working memory is paramount, precautions should be taken to eliminate competition from physical work and selective attention.
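For context on how fNIRS yields the oxygenated/deoxygenated hemoglobin changes discussed above, the modified Beer-Lambert law converts optical-density changes at two wavelengths into concentration changes. The sketch below solves that 2x2 system; the extinction coefficients, pathlength, and differential pathlength factors are illustrative placeholders, not calibrated constants.

```python
import numpy as np

# Modified Beer-Lambert law: dOD(lambda) = (eps_HbO*dHbO + eps_HbR*dHbR) * L * DPF(lambda).
# Solving the 2x2 system for two wavelengths gives the concentration changes.
# All numbers below are illustrative placeholders, not calibrated values.

eps = np.array([[1.5, 3.8],       # wavelength 1: [eps_HbO, eps_HbR]
                [2.5, 1.8]])      # wavelength 2: [eps_HbO, eps_HbR]
L = 3.0                           # source-detector separation (cm), assumed
dpf = np.array([6.0, 6.0])        # differential pathlength factors, assumed

def mbll(delta_od):
    """Convert optical-density changes at two wavelengths into (dHbO, dHbR)."""
    A = eps * (L * dpf)[:, None]
    return np.linalg.solve(A, np.asarray(delta_od, dtype=float))

d_hbo, d_hbr = mbll([0.010, 0.018])
print(f"dHbO = {d_hbo:.4f}, dHbR = {d_hbr:.4f} (arbitrary concentration units)")
```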
Redfern, Mark S; Chambers, April J; Jennings, J Richard; Furman, Joseph M
2017-08-01
This study investigated the impact of attention on the sensory and motor actions during postural recovery from underfoot perturbations in young and older adults. A dual-task paradigm was used involving disjunctive and choice reaction time (RT) tasks to auditory and visual stimuli at different delays from the onset of two types of platform perturbations (rotations and translations). The RTs were increased prior to the perturbation (preparation phase) and during the immediate recovery response (response initiation) in young and older adults, but this interference dissipated rapidly after the perturbation response was initiated (<220 ms). The sensory modality of the RT task impacted the results with interference being greater for the auditory task compared to the visual task. As motor complexity of the RT task increased (disjunctive versus choice) there was greater interference from the perturbation. Finally, increasing the complexity of the postural perturbation by mixing the rotational and translational perturbations together increased interference for the auditory RT tasks, but did not affect the visual RT responses. These results suggest that sensory and motoric components of postural control are under the influence of different dynamic attentional processes.
Barker, Matthew D; Purdy, Suzanne C
2016-01-01
This research investigates a novel method for identifying and measuring poor auditory processing in school-aged children through a tablet computer. Feasibility and test-retest reliability are investigated by examining the percentage of Group 1 participants able to complete the tasks and developmental effects on performance. Concurrent validity was investigated against traditional tests of auditory processing using Group 2. There were 847 students aged 5 to 13 years in Group 1, and 46 aged 5 to 14 years in Group 2. Some tasks could not be completed by the youngest participants. Significant correlations were found between results of most auditory processing areas assessed by the Feather Squadron test and traditional auditory processing tests. Test-retest comparisons indicated good reliability for most of the Feather Squadron assessments and some of the traditional tests. The results indicate the Feather Squadron assessment is a time-efficient, feasible, concurrently valid, and reliable approach for measuring auditory processing in school-aged children. Clinically, this may be a useful option for audiologists when performing auditory processing assessments, as it is a relatively fast, engaging, and easy way to assess auditory processing abilities. Research is needed to investigate further the construct validity of this new assessment by examining the association between performance on Feather Squadron and objective evoked potential, lesion studies, and/or functional imaging measures of auditory function.
Demopoulos, Carly; Hopkins, Joyce; Kopald, Brandon E; Paulson, Kim; Doyle, Lauren; Andrews, Whitney E; Lewine, Jeffrey David
2015-11-01
The primary aim of this study was to examine whether there is an association between magnetoencephalography-based (MEG) indices of basic cortical auditory processing and vocal affect recognition (VAR) ability in individuals with autism spectrum disorder (ASD). MEG data were collected from 25 children/adolescents with ASD and 12 control participants using a paired-tone paradigm to measure quality of auditory physiology, sensory gating, and rapid auditory processing. Group differences were examined in auditory processing and vocal affect recognition ability. The relationship between differences in auditory processing and vocal affect recognition deficits was examined in the ASD group. Replicating prior studies, participants with ASD showed longer M1n latencies and impaired rapid processing compared with control participants. These variables were significantly related to VAR, with the linear combination of auditory processing variables accounting for approximately 30% of the variability after controlling for age and language skills in participants with ASD. VAR deficits in ASD are typically interpreted as part of a core, higher order dysfunction of the "social brain"; however, these results suggest they also may reflect basic deficits in auditory processing that compromise the extraction of socially relevant cues from the auditory environment. As such, they also suggest that therapeutic targeting of sensory dysfunction in ASD may have additional positive implications for other functional deficits. (c) 2015 APA, all rights reserved.
Spectral context affects temporal processing in awake auditory cortex
Beitel, Ralph E.; Vollmer, Maike; Heiser, Marc A; Schreiner, Christoph E.
2013-01-01
Amplitude modulation encoding is critical for human speech perception and complex sound processing in general. The modulation transfer function (MTF) is a staple of auditory psychophysics, and has been shown to predict speech intelligibility performance in a range of adverse listening conditions and hearing impairments, including cochlear implant-supported hearing. Although both tonal and broadband carriers have been employed in psychophysical studies of modulation detection and discrimination, relatively little is known about differences in the cortical representation of such signals. We obtained MTFs in response to sinusoidal amplitude modulation (SAM) for both narrowband tonal carriers and 2-octave bandwidth noise carriers in the auditory core of awake squirrel monkeys. MTFs spanning modulation frequencies from 4 to 512 Hz were obtained using 16-channel linear recording arrays sampling across all cortical laminae. Carrier frequency for tonal SAM and center frequency for noise SAM were set at the estimated best frequency for each penetration. Changes in carrier type affected both rate and temporal MTFs in many neurons. Using spike discrimination techniques, we found that discrimination of modulation frequency was significantly better for tonal SAM than for noise SAM, though the differences were modest at the population level. Moreover, spike trains elicited by tonal and noise SAM could be readily discriminated in most cases. Collectively, our results reveal remarkable sensitivity to the spectral content of modulated signals, and indicate substantial interdependence between temporal and spectral processing in neurons of the core auditory cortex. PMID:23719811
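A short sketch of the stimulus and analysis concepts in this abstract: sinusoidal amplitude modulation (SAM) applied to a tonal or noise carrier, and vector strength as a standard measure of temporal (phase-locked) modulation coding. The sampling rate, modulation depth, and toy spike-generation step are assumptions for illustration; the study's band-limited noise carriers and recording details are omitted.

```python
import numpy as np

fs = 48000
dur = 0.5
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(5)

def sam_tone(fc, fm, depth=1.0):
    """Sinusoidally amplitude-modulated (SAM) tone carrier."""
    return (1 + depth * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

def sam_noise(fm, depth=1.0):
    """SAM broadband-noise carrier (bandlimiting omitted for brevity)."""
    return (1 + depth * np.sin(2 * np.pi * fm * t)) * rng.normal(size=t.size)

def vector_strength(spike_times, fm):
    """Phase locking of spike times (s) to the modulation frequency fm (Hz)."""
    phases = 2 * np.pi * fm * np.asarray(spike_times)
    return np.abs(np.mean(np.exp(1j * phases)))

fm = 16.0
tone_stim, noise_stim = sam_tone(4000.0, fm), sam_noise(fm)   # example stimuli

# Toy "temporal MTF" measurement: spikes drawn preferentially near modulation peaks.
prob = (1 + np.sin(2 * np.pi * fm * t)) / 2
spikes = t[rng.random(t.size) < prob * 0.002]                  # inhomogeneous Bernoulli spiking
print(f"vector strength at {fm:.0f} Hz:", round(vector_strength(spikes, fm), 2))
```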
Lewald, Jörg; Hanenberg, Christina; Getzmann, Stephan
2016-10-01
Successful speech perception in complex auditory scenes with multiple competing speakers requires spatial segregation of auditory streams into perceptually distinct and coherent auditory objects and focusing of attention toward the speaker of interest. Here, we focused on the neural basis of this remarkable capacity of the human auditory system and investigated the spatiotemporal sequence of neural activity within the cortical network engaged in solving the "cocktail-party" problem. Twenty-eight subjects localized a target word in the presence of three competing sound sources. The analysis of the ERPs revealed an anterior contralateral subcomponent of the N2 (N2ac), computed as the difference waveform for targets to the left minus targets to the right. The N2ac peaked at about 500 ms after stimulus onset, and its amplitude was correlated with better localization performance. Cortical source localization for the contrast of left versus right targets at the time of the N2ac revealed a maximum in the region around left superior frontal sulcus and frontal eye field, both of which are known to be involved in processing of auditory spatial information. In addition, a posterior-contralateral late positive subcomponent (LPCpc) occurred at a latency of about 700 ms. Both these subcomponents are potential correlates of allocation of spatial attention to the target under cocktail-party conditions. © 2016 Society for Psychophysiological Research.
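A minimal sketch of the difference-waveform logic described for the N2ac, computed here on simulated single-trial data at one electrode as the average response to left targets minus right targets. Real N2ac analyses contrast anterior electrodes contralateral versus ipsilateral to the target; the data and peak window below are invented for illustration.

```python
import numpy as np

# Simulated single-trial ERPs at one left-anterior electrode: trials x time samples.
fs, n_trials, n_samp = 500, 80, 600
rng = np.random.default_rng(6)
time_ms = np.arange(n_samp) / fs * 1000

erp_shape = -1.0 * np.exp(-((time_ms - 500) ** 2) / (2 * 60 ** 2))          # negativity near 500 ms
left_trials = erp_shape + rng.normal(scale=2.0, size=(n_trials, n_samp))    # contralateral targets
right_trials = 0.3 * erp_shape + rng.normal(scale=2.0, size=(n_trials, n_samp))

# N2ac-style difference wave at this site: mean ERP for left targets minus right targets.
n2ac = left_trials.mean(axis=0) - right_trials.mean(axis=0)
window = (time_ms > 300) & (time_ms < 700)
peak_idx = np.argmin(n2ac[window])
print("N2ac peak latency ~", time_ms[window][peak_idx], "ms")
```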
Tokoro, Kazuhiko; Sato, Hironobu; Yamamoto, Mayumi; Nagai, Yoshiko
2015-12-01
Attention is the process by which information selection occurs; the thalamus plays an important role in selective attention to visual and auditory information. Selective attention is a conscious effort; however, it occurs subconsciously as well. The lateral geniculate body (LGB) filters visual information before it reaches the cortex (bottom-up attention). The thalamic reticular nucleus (TRN) provides a strong inhibitory input to both the LGB and the pulvinar. This regulation involves focusing a spotlight on important information as well as inhibiting unnecessary background information. Behavioral contexts more strongly modulate activity of the TRN and pulvinar, influencing feedforward and feedback information transmission between the frontal, temporal, parietal, and occipital cortical areas (top-down attention). The medial geniculate body (MGB) filters auditory information; the TRN inhibits the MGB. Attentional modulation occurring in the auditory pathway among the cochlea, cochlear nucleus, superior olivary complex, and inferior colliculus is more important than that of the MGB and TRN. We also discuss the attentional consequences of thalamic hemorrhage.
Dynamic sound localization in cats
Ruhland, Janet L.; Jones, Amy E.
2015-01-01
Sound localization in cats and humans relies on head-centered acoustic cues. Studies have shown that humans are able to localize sounds during rapid head movements that are directed toward the target or other objects of interest. We studied whether cats are able to utilize similar dynamic acoustic cues to localize acoustic targets delivered during rapid eye-head gaze shifts. We trained cats with visual-auditory two-step tasks in which we presented a brief sound burst during saccadic eye-head gaze shifts toward a prior visual target. No consistent or significant differences in accuracy or precision were found between this dynamic task (2-step saccade) and the comparable static task (single saccade when the head is stable) in either horizontal or vertical direction. Cats appear to be able to process dynamic auditory cues and execute complex motor adjustments to accurately localize auditory targets during rapid eye-head gaze shifts. PMID:26063772
Predictive motor control of sensory dynamics in Auditory Active Sensing
Morillon, Benjamin; Hackett, Troy A.; Kajikawa, Yoshinao; Schroeder, Charles E.
2016-01-01
Neuronal oscillations present potential physiological substrates for brain operations that require temporal prediction. We review this idea in the context of auditory perception. Using speech as an exemplar, we illustrate how hierarchically organized oscillations can be used to parse and encode complex input streams. We then consider the motor system as a major source of rhythms (temporal priors) in auditory processing that act in concert with attention to sharpen sensory representations and link them across areas. We discuss the anatomo-functional pathways that could mediate this audio-motor interaction, and notably the potential role of the somatosensory cortex. Finally, we reposition temporal predictions in the context of internal models, discussing how they interact with feature-based or spatial predictions. We argue that complementary predictions interact synergistically according to the organizational principles of each sensory system, forming multidimensional filters crucial to perception. PMID:25594376
Central auditory neurons have composite receptive fields.
Kozlov, Andrei S; Gentner, Timothy Q
2016-02-02
High-level neurons processing complex, behaviorally relevant signals are sensitive to conjunctions of features. Characterizing the receptive fields of such neurons is difficult with standard statistical tools, however, and the principles governing their organization remain poorly understood. Here, we demonstrate multiple distinct receptive-field features in individual high-level auditory neurons in a songbird, European starling, in response to natural vocal signals (songs). We then show that receptive fields with similar characteristics can be reproduced by an unsupervised neural network trained to represent starling songs with a single learning rule that enforces sparseness and divisive normalization. We conclude that central auditory neurons have composite receptive fields that can arise through a combination of sparseness and normalization in neural circuits. Our results, along with descriptions of random, discontinuous receptive fields in the central olfactory neurons in mammals and insects, suggest general principles of neural computation across sensory systems and animal classes.
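A minimal sketch of the kind of unsupervised model described above, assuming spectrogram patches as input. The rectification, the specific Hebbian update, and the normalization constant are illustrative stand-ins rather than the authors' exact learning rule.

    import numpy as np

    def train_sparse_divisive(patches, n_units=64, lr=0.01, n_epochs=10, sigma=0.1):
        # patches: (n_samples, n_inputs) array of spectrogram patches (assumed input).
        rng = np.random.default_rng(0)
        W = rng.normal(scale=0.1, size=(n_units, patches.shape[1]))
        for _ in range(n_epochs):
            for x in patches:
                r = np.maximum(W @ x, 0.0)                 # rectified linear responses
                r = r / (sigma + r.sum())                  # divisive normalization
                W += lr * np.outer(r, x - W.T @ r)         # Hebbian update with reconstruction term
            W /= np.linalg.norm(W, axis=1, keepdims=True)  # keep filters at unit norm
        return W                                           # rows ~ learned receptive fields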
Perceptual congruency of audio-visual speech affects ventriloquism with bilateral visual stimuli.
Kanaya, Shoko; Yokosawa, Kazuhiko
2011-02-01
Many studies on multisensory processes have focused on performance in simplified experimental situations, with a single stimulus in each sensory modality. However, these results cannot necessarily be applied to explain our perceptual behavior in natural scenes where various signals exist within one sensory modality. We investigated the effect of audio-visual syllable congruency on participants' auditory localization bias (the ventriloquism effect) using spoken utterances and two videos of a talking face. The salience of facial movements was also manipulated. Results indicated that more salient visual utterances attracted participants' auditory localization. Congruent pairing of audio-visual utterances elicited greater localization bias than incongruent pairing, whereas previous studies have reported that ventriloquism depends little on the realism of the stimuli. Moreover, audio-visual illusory congruency, owing to the McGurk effect, caused substantial visual interference with auditory localization. Multisensory performance appears more flexible and adaptive in this complex environment than in previous studies.
Donkers, Franc C.L.; Schipul, Sarah E.; Baranek, Grace T.; Cleary, Katherine M.; Willoughby, Michael T.; Evans, Anna M.; Bulluck, John C.; Lovmo, Jeanne E.; Belger, Aysenil
2015-01-01
Neurobiological underpinnings of unusual sensory features in individuals with autism are unknown. Event-related potentials (ERPs) elicited by task-irrelevant sounds were used to elucidate neural correlates of auditory processing and associations with three common sensory response patterns (hyperresponsiveness; hyporesponsiveness; sensory seeking). Twenty-eight children with autism and 39 typically developing children (4–12 year-olds) completed an auditory oddball paradigm. Results revealed marginally attenuated P1 and N2 to standard tones and attenuated P3a to novel sounds in autism versus controls. Exploratory analyses suggested that within the autism group, attenuated N2 and P3a amplitudes were associated with greater sensory seeking behaviors for specific ranges of P1 responses. Findings suggest that attenuated early sensory as well as later attention-orienting neural responses to stimuli may underlie selective sensory features via complex mechanisms. PMID:24072639
Baltus, Alina; Herrmann, Christoph Siegfried
2016-06-01
Oscillatory EEG activity in the human brain with frequencies in the gamma range (approx. 30-80 Hz) is known to be relevant for a large number of cognitive processes. Interestingly, each subject reveals an individual frequency of the auditory gamma-band response (GBR) that coincides with the peak in the auditory steady-state response (ASSR). A common resonance frequency of auditory cortex seems to underlie both the individual frequency of the GBR and the peak of the ASSR. This review sheds light on the functional role of oscillatory gamma activity for auditory processing. For successful processing, the auditory system has to track changes in auditory input over time and store information about past events in memory, which allows the construction of auditory objects. Recent findings support the idea that gamma oscillations are involved in the partitioning of auditory input into discrete samples to facilitate higher-order processing. We review experiments suggesting that inter-individual differences in the resonance frequency are behaviorally relevant for gap detection and speech processing. A possible application of these resonance frequencies to brain-computer interfaces is illustrated, in which individual presentation rates for auditory input would be optimized to match endogenous oscillatory activity. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.
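For the application sketched in the last point, one simple way (assumed here, not prescribed by the review) to estimate an individual's gamma-range resonance from a recorded EEG channel is to locate the maximum of its power spectrum between 30 and 80 Hz and use that frequency as a candidate presentation rate.

    import numpy as np
    from scipy.signal import welch

    def individual_gamma_peak(eeg, fs, fmin=30.0, fmax=80.0):
        # eeg: 1-D array from a single (preprocessed) channel; fs: sampling rate in Hz.
        freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))   # ~0.5 Hz resolution
        band = (freqs >= fmin) & (freqs <= fmax)
        return freqs[band][np.argmax(psd[band])]              # candidate presentation rate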
Mishra, Jyoti; Zanto, Theodore; Nilakantan, Aneesha; Gazzaley, Adam
2013-01-01
Intrasensory interference during visual working memory (WM) maintenance by object stimuli (such as faces and scenes) has been shown to negatively impact WM performance, with greater detrimental impacts of interference observed in aging. Here we assessed age-related impacts of intrasensory WM interference from lower-level stimulus features such as visual and auditory motion stimuli. We consistently found that interference in the form of ignored distractions and secondary task interruptions presented during a WM maintenance period degraded memory accuracy in both the visual and auditory domain. However, in contrast to prior studies assessing WM for visual object stimuli, feature-based interference effects were not observed to be significantly greater in older adults. Analyses of neural oscillations in the alpha frequency band further revealed preserved mechanisms of interference processing in terms of post-stimulus alpha suppression, which was observed maximally for secondary task interruptions in visual and auditory modalities in both younger and older adults. These results suggest that age-related sensitivity of WM to interference may be limited to complex object stimuli, at least at low WM loads. PMID:23791629
Musical Experience, Sensorineural Auditory Processing, and Reading Subskills in Adults.
Tichko, Parker; Skoe, Erika
2018-04-27
Developmental research suggests that sensorineural auditory processing, reading subskills (e.g., phonological awareness and rapid naming), and musical experience are related during early periods of reading development. Interestingly, recent work suggests that these relations may extend into adulthood, with indices of sensorineural auditory processing relating to global reading ability. However, it is largely unknown whether sensorineural auditory processing relates to specific reading subskills, such as phonological awareness and rapid naming, as well as musical experience in mature readers. To address this question, we recorded electrophysiological responses to a repeating click (auditory stimulus) in a sample of adult readers. We then investigated relations between electrophysiological responses to sound, reading subskills, and musical experience in this same set of adult readers. Analyses suggest that sensorineural auditory processing, reading subskills, and musical experience are related in adulthood, with faster neural conduction times and greater musical experience associated with stronger rapid-naming skills. These results are similar to the developmental findings that suggest reading subskills are related to sensorineural auditory processing and musical experience in children.
Auditory temporal processing skills in musicians with dyslexia.
Bishop-Liebler, Paula; Welch, Graham; Huss, Martina; Thomson, Jennifer M; Goswami, Usha
2014-08-01
The core cognitive difficulty in developmental dyslexia involves phonological processing, but adults and children with dyslexia also have sensory impairments. Impairments in basic auditory processing show particular links with phonological impairments, and recent studies with dyslexic children across languages reveal a relationship between auditory temporal processing and sensitivity to rhythmic timing and speech rhythm. As rhythm is explicit in music, musical training might have a beneficial effect on the auditory perception of acoustic cues to rhythm in dyslexia. Here we took advantage of the presence of musicians with and without dyslexia in musical conservatoires, comparing their auditory temporal processing abilities with those of dyslexic non-musicians matched for cognitive ability. Musicians with dyslexia showed equivalent auditory sensitivity to musicians without dyslexia and also showed equivalent rhythm perception. The data support the view that extensive rhythmic experience initiated during childhood (here in the form of music training) can affect basic auditory processing skills which are found to be deficient in individuals with dyslexia. Copyright © 2014 John Wiley & Sons, Ltd.
Behavioral Indications of Auditory Processing Disorders.
ERIC Educational Resources Information Center
Hartman, Kerry McGoldrick
1988-01-01
Identifies disruptive behaviors of children that may indicate central auditory processing disorders (CAPDs), perceptual handicaps of auditory discrimination or auditory memory not related to hearing ability. Outlines steps to modify the communication environment for CAPD children at home and in the classroom. (SV)
Schoppe, Oliver; King, Andrew J.; Schnupp, Jan W.H.; Harper, Nicol S.
2016-01-01
Adaptation to stimulus statistics, such as the mean level and contrast of recently heard sounds, has been demonstrated at various levels of the auditory pathway. It allows the nervous system to operate over the wide range of intensities and contrasts found in the natural world. Yet current standard models of the response properties of auditory neurons do not incorporate such adaptation. Here we present a model of neural responses in the ferret auditory cortex (the IC Adaptation model), which takes into account adaptation to mean sound level at a lower level of processing: the inferior colliculus (IC). The model performs high-pass filtering with frequency-dependent time constants on the sound spectrogram, followed by half-wave rectification, and passes the output to a standard linear–nonlinear (LN) model. We find that the IC Adaptation model consistently predicts cortical responses better than the standard LN model for a range of synthetic and natural stimuli. The IC Adaptation model introduces no extra free parameters, so it improves predictions without sacrificing parsimony. Furthermore, the time constants of adaptation in the IC appear to be matched to the statistics of natural sounds, suggesting that neurons in the auditory midbrain predict the mean level of future sounds and adapt their responses appropriately. SIGNIFICANCE STATEMENT An ability to accurately predict how sensory neurons respond to novel stimuli is critical if we are to fully characterize their response properties. Attempts to model these responses have had a distinguished history, but it has proven difficult to improve their predictive power significantly beyond that of simple, mostly linear receptive field models. Here we show that auditory cortex receptive field models benefit from a nonlinear preprocessing stage that replicates known adaptation properties of the auditory midbrain. This improves their predictive power across a wide range of stimuli but keeps model complexity low as it introduces no new free parameters. Incorporating the adaptive coding properties of neurons will likely improve receptive field models in other sensory modalities too. PMID:26758822
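The adaptation stage described above can be approximated as follows. This is a schematic reading of the model (a leaky running-mean subtraction as the frequency-dependent high-pass filter, followed by half-wave rectification), with parameter names chosen here for illustration rather than taken from the paper.

    import numpy as np

    def ic_adaptation_stage(spectrogram, frame_rate, tau_per_channel):
        # spectrogram: (n_freqs, n_frames) sound spectrogram; frame_rate: frames/s;
        # tau_per_channel: (n_freqs,) frequency-dependent time constants in seconds.
        alpha = 1.0 - np.exp(-1.0 / (np.asarray(tau_per_channel) * frame_rate))
        mean_level = spectrogram[:, 0].copy()               # running estimate of mean level
        out = np.zeros_like(spectrogram)
        for t in range(spectrogram.shape[1]):
            mean_level += alpha * (spectrogram[:, t] - mean_level)
            out[:, t] = np.maximum(spectrogram[:, t] - mean_level, 0.0)  # high-pass + rectify
        return out                                           # input to a standard LN model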
ERIC Educational Resources Information Center
Boets, Bart; Verhoeven, Judith; Wouters, Jan; Steyaert, Jean
2015-01-01
We investigated low-level auditory spectral and temporal processing in adolescents with autism spectrum disorder (ASD) and early language delay compared to matched typically developing controls. Auditory measures were designed to target right versus left auditory cortex processing (i.e. frequency discrimination and slow amplitude modulation (AM)…
Development of a Pitch Discrimination Screening Test for Preschool Children.
Abramson, Maria Kulick; Lloyd, Peter J
2016-04-01
There is a critical need for tests of auditory discrimination for young children as this skill plays a fundamental role in the development of speaking, prereading, reading, language, and more complex auditory processes. Frequency discrimination is important with regard to basic sensory processing affecting phonological processing, dyslexia, measurements of intelligence, auditory memory, Asperger syndrome, and specific language impairment. This study was performed to determine the clinical feasibility of the Pitch Discrimination Test (PDT) to screen the preschool child's ability to discriminate some of the acoustic demands of speech perception, primarily pitch discrimination, without linguistic content. The PDT used brief tones at speech frequencies to gather normative data from preschool children aged 3 to 5 yrs. A cross-sectional study was used to gather data regarding the pitch discrimination abilities of a sample of typically developing preschool children, between 3 and 5 yrs of age. The PDT consists of ten trials using two pure tones of 100-msec duration each, and was administered in an AA or AB forced-choice response format. Data from 90 typically developing preschool children between the ages of 3 and 5 yrs were used to provide normative data. Nonparametric Mann-Whitney U-testing was used to examine the effects of age as a continuous variable on pitch discrimination. The Kruskal-Wallis test was used to determine the significance of age on performance on the PDT. The Spearman rank correlation was used to assess the relationship between age and performance on the PDT. Pitch discrimination of brief tones improved significantly from age 3 yrs to age 4 yrs, as well as from age 3 yrs to the combined 4- and 5-yr-old group. Results indicated that between ages 3 and 4 yrs, children's auditory discrimination of pitch improved on the PDT. The data showed that children can be screened for auditory discrimination of pitch beginning at age 4 yrs. The PDT proved to be a time-efficient, feasible tool for a simple form of frequency discrimination screening in the preschool population before the age at which other diagnostic tests of auditory processing disorders can be used. American Academy of Audiology.
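The statistical comparisons named above map onto standard SciPy calls. The scores below are entirely made-up placeholder values, included only to show the shape of the analysis, not data from the study.

    import numpy as np
    from scipy.stats import mannwhitneyu, kruskal, spearmanr

    # Hypothetical numbers of correct PDT trials (out of 10) per age group
    scores_3yr = np.array([5, 6, 4, 7, 5, 6])
    scores_4yr = np.array([8, 7, 9, 8, 7, 9])
    scores_5yr = np.array([9, 9, 8, 10, 9, 8])

    u_stat, p_u = mannwhitneyu(scores_3yr, scores_4yr, alternative="two-sided")
    h_stat, p_kw = kruskal(scores_3yr, scores_4yr, scores_5yr)   # effect of age group

    ages = np.repeat([3, 4, 5], [len(scores_3yr), len(scores_4yr), len(scores_5yr)])
    rho, p_rho = spearmanr(ages, np.concatenate([scores_3yr, scores_4yr, scores_5yr]))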
Vicario, David S.
2017-01-01
Sensory and motor brain structures work in collaboration during perception. To evaluate their respective contributions, the present study recorded neural responses to auditory stimulation at multiple sites simultaneously in both the higher-order auditory area NCM and the premotor area HVC of the songbird brain in awake zebra finches (Taeniopygia guttata). Bird’s own song (BOS) and various conspecific songs (CON) were presented in both blocked and shuffled sequences. Neural responses showed plasticity in the form of stimulus-specific adaptation, with markedly different dynamics between the two structures. In NCM, the response decrease with repetition of each stimulus was gradual and long-lasting and did not differ between the stimuli or the stimulus presentation sequences. In contrast, HVC responses to CON stimuli decreased much more rapidly in the blocked than in the shuffled sequence. Furthermore, this decrease was more transient in HVC than in NCM, as shown by differential dynamics in the shuffled sequence. Responses to BOS in HVC decreased more gradually than to CON stimuli. The quality of neural representations, computed as the mutual information between stimuli and neural activity, was higher in NCM than in HVC. Conversely, internal functional correlations, estimated as the coherence between recording sites, were greater in HVC than in NCM. The cross-coherence between the two structures was weak and limited to low frequencies. These findings suggest that auditory communication signals are processed according to very different but complementary principles in NCM and HVC, a contrast that may inform study of the auditory and motor pathways for human speech processing. NEW & NOTEWORTHY Neural responses to auditory stimulation in sensory area NCM and premotor area HVC of the songbird forebrain show plasticity in the form of stimulus-specific adaptation with markedly different dynamics. These two structures also differ in stimulus representations and internal functional correlations. Accordingly, NCM seems to process the individually specific complex vocalizations of others based on prior familiarity, while HVC responses appear to be modulated by transitions and/or timing in the ongoing sequence of sounds. PMID:28031398
Ebbers, Lena; Weber, Maren; Nothwang, Hans Gerd
2017-10-26
In the mammalian superior olivary complex (SOC), synaptic inhibition contributes to the processing of binaural sound cues important for sound localization. Previous analyses demonstrated a tonotopic gradient for postsynaptic proteins mediating inhibitory neurotransmission in the lateral superior olive (LSO), a major nucleus of the SOC. To probe whether a presynaptic molecular gradient exists as well, we investigated immunoreactivity against the vesicular inhibitory amino acid transporter (VIAAT) in the mouse auditory brainstem. Immunoreactivity against VIAAT revealed a gradient in the LSO and the superior paraolivary nucleus (SPN) of NMRI mice, with high expression in the lateral, low frequency processing limb and low expression in the medial, high frequency processing limb of both nuclei. This orientation is opposite to the previously reported gradient of glycine receptors in the LSO. Other nuclei of the SOC showed a uniform distribution of VIAAT-immunoreactivity. No gradient was observed for the glycine transporter GlyT2 and the neuronal protein NeuN. Formation of the VIAAT gradient was developmentally regulated and occurred around hearing-onset between postnatal days 8 and 16. Congenitally deaf Claudin14 -/- mice bred on an NMRI background showed a uniform VIAAT-immunoreactivity in the LSO, whereas cochlear ablation in NMRI mice after hearing-onset did not affect the gradient. Additional analysis of C57Bl6/J, 129/SvJ and CBA/J mice revealed a strain-specific formation of the gradient. Our results identify an activity-regulated gradient of VIAAT in the SOC of NMRI mice. Its absence in other mouse strains adds a novel layer of strain-specific features in the auditory system, i.e. tonotopic organization of molecular gradients. This calls for caution when comparing data from different mouse strains frequently used in studies involving transgenic animals. The presence of strain-specific differences offers the possibility of genetic mapping to identify molecular factors involved in activity-dependent developmental processes in the auditory system. This would provide an important step forward concerning improved auditory rehabilitation in cases of congenital deafness.
Skouras, Stavros; Lohmann, Gabriele
2018-01-01
Sound is a potent elicitor of emotions. Auditory core, belt and parabelt regions have anatomical connections to a large array of limbic and paralimbic structures which are involved in the generation of affective activity. However, little is known about the functional role of auditory cortical regions in emotion processing. Using functional magnetic resonance imaging and music stimuli that evoke joy or fear, our study reveals that anterior and posterior regions of auditory association cortex have emotion-characteristic functional connectivity with limbic/paralimbic (insula, cingulate cortex, and striatum), somatosensory, visual, motor-related, and attentional structures. We found that these regions have remarkably high emotion-characteristic eigenvector centrality, revealing that they have influential positions within emotion-processing brain networks with “small-world” properties. By contrast, primary auditory fields showed surprisingly strong emotion-characteristic functional connectivity with intra-auditory regions. Our findings demonstrate that the auditory cortex hosts regions that are influential within networks underlying the affective processing of auditory information. We anticipate our results to incite research specifying the role of the auditory cortex—and sensory systems in general—in emotion processing, beyond the traditional view that sensory cortices have merely perceptual functions. PMID:29385142
ERIC Educational Resources Information Center
Boets, Bart; Wouters, Jan; van Wieringen, Astrid; Ghesquiere, Pol
2007-01-01
This study investigates whether the core bottleneck of literacy-impairment should be situated at the phonological level or at a more basic sensory level, as postulated by supporters of the auditory temporal processing theory. Phonological ability, speech perception and low-level auditory processing were assessed in a group of 5-year-old pre-school…
Spatial processing in the auditory cortex of the macaque monkey
NASA Astrophysics Data System (ADS)
Recanzone, Gregg H.
2000-10-01
The patterns of cortico-cortical and cortico-thalamic connections of auditory cortical areas in the rhesus monkey have led to the hypothesis that acoustic information is processed in series and in parallel in the primate auditory cortex. Recent physiological experiments in the behaving monkey indicate that the response properties of neurons in different cortical areas are both functionally distinct from each other, which is indicative of parallel processing, and functionally similar to each other, which is indicative of serial processing. Thus, auditory cortical processing may be similar to the serial and parallel "what" and "where" processing by the primate visual cortex. If "where" information is serially processed in the primate auditory cortex, neurons in cortical areas along this pathway should have progressively better spatial tuning properties. This prediction is supported by recent experiments that have shown that neurons in the caudomedial field have better spatial tuning properties than neurons in the primary auditory cortex. Neurons in the caudomedial field are also better than primary auditory cortex neurons at predicting the sound localization ability across different stimulus frequencies and bandwidths in both azimuth and elevation. These data support the hypothesis that the primate auditory cortex processes acoustic information in a serial and parallel manner and suggest that this may be a general cortical mechanism for sensory perception.
From Acoustic Segmentation to Language Processing: Evidence from Optical Imaging
Obrig, Hellmuth; Rossi, Sonja; Telkemeyer, Silke; Wartenburger, Isabell
2010-01-01
During language acquisition in infancy and when learning a foreign language, the segmentation of the auditory stream into words and phrases is a complex process. Intuitively, learners use “anchors” to segment the acoustic speech stream into meaningful units like words and phrases. Regularities on a segmental (e.g., phonological) or suprasegmental (e.g., prosodic) level can provide such anchors. Regarding the neuronal processing of these two kinds of linguistic cues, a left-hemispheric dominance for segmental and a right-hemispheric bias for suprasegmental information has been reported in adults. Though lateralization is common in a number of higher cognitive functions, its prominence in language may also be a key to understanding the rapid emergence of the language network in infants and the ease with which we master our language in adulthood. One question here is whether the hemispheric lateralization is driven by linguistic input per se or whether non-linguistic, especially acoustic factors, “guide” the lateralization process. Methodologically, functional magnetic resonance imaging provides unsurpassed anatomical detail for such an enquiry. However, instrumental noise, experimental constraints and interference with EEG assessment limit its applicability, particularly in infants and when investigating the link between auditory and linguistic processing. Optical methods have the potential to fill this gap. Here we review a number of recent studies using optical imaging to investigate hemispheric differences during segmentation and basic auditory feature analysis in language development. PMID:20725516
The Effect of Early Visual Deprivation on the Neural Bases of Auditory Processing.
Guerreiro, Maria J S; Putzar, Lisa; Röder, Brigitte
2016-02-03
Transient congenital visual deprivation affects visual and multisensory processing. In contrast, the extent to which it affects auditory processing has not been investigated systematically. Research in permanently blind individuals has revealed brain reorganization during auditory processing, involving both intramodal and crossmodal plasticity. The present study investigated the effect of transient congenital visual deprivation on the neural bases of auditory processing in humans. Cataract-reversal individuals and normally sighted controls performed a speech-in-noise task while undergoing functional magnetic resonance imaging. Although there were no behavioral group differences, groups differed in auditory cortical responses: in the normally sighted group, auditory cortex activation increased with increasing noise level, whereas in the cataract-reversal group, no activation difference was observed across noise levels. An auditory activation of visual cortex was not observed at the group level in cataract-reversal individuals. The present data suggest prevailing auditory processing advantages after transient congenital visual deprivation, even many years after sight restoration. The present study demonstrates that people whose sight was restored after a transient period of congenital blindness show more efficient cortical processing of auditory stimuli (here speech), similarly to what has been observed in congenitally permanently blind individuals. These results underscore the importance of early sensory experience in permanently shaping brain function. Copyright © 2016 the authors.
Visual form predictions facilitate auditory processing at the N1.
Paris, Tim; Kim, Jeesun; Davis, Chris
2017-02-20
Auditory-visual (AV) events often involve a leading visual cue (e.g. auditory-visual speech) that allows the perceiver to generate predictions about the upcoming auditory event. Electrophysiological evidence suggests that when an auditory event is predicted, processing is sped up, i.e., the N1 component of the ERP occurs earlier (N1 facilitation). However, it is not clear (1) whether N1 facilitation is based specifically on predictive rather than multisensory integration and (2) which particular properties of the visual cue it is based on. The current experiment used artificial AV stimuli in which visual cues predicted but did not co-occur with auditory cues. Visual form cues (high and low salience) and the auditory-visual pairing were manipulated so that auditory predictions could be based on form and timing or on timing only. The results showed that N1 facilitation occurred only for combined form and temporal predictions. These results suggest that faster auditory processing (as indicated by N1 facilitation) is based on predictive processing generated by a visual cue that clearly predicts both what and when the auditory stimulus will occur. Copyright © 2016. Published by Elsevier Ltd.
Research and Studies Directory for Manpower, Personnel, and Training
1989-05-01
Directory excerpt listing research projects, including: Control of Biosonar Behavior by the Auditory Cortex (Tangney, J., Air Force Office of Scientific Research); A Model for Visual Attention; Auditory Perception of Complex Sounds; Eye Movements and Spatial Pattern Vision.
Auditory Spatial Perception: Auditory Localization
2012-05-01
Figure 5 (caption): Auditory pathways in the central nervous system. LE – left ear, RE – right ear, AN – auditory nerve, CN – cochlear nucleus, TB – trapezoid body, SOC – superior olivary complex, LL – lateral lemniscus, IC – inferior colliculus. Adapted from Aharonson and ... Auditory nerve fibers leaving the left and right inner ear connect directly to the synaptic inputs of the cochlear nucleus (CN) on the same (ipsilateral) side.
ERIC Educational Resources Information Center
Schaadt, Gesa; Männel, Claudia; van der Meer, Elke; Pannekamp, Ann; Friederici, Angela D.
2016-01-01
Successful communication in everyday life crucially involves the processing of auditory and visual components of speech. Viewing our interlocutor and processing visual components of speech facilitates speech processing by triggering auditory processing. Auditory phoneme processing, analyzed by event-related brain potentials (ERP), has been shown…
Impact of Educational Level on Performance on Auditory Processing Tests.
Murphy, Cristina F B; Rabelo, Camila M; Silagi, Marcela L; Mansur, Letícia L; Schochat, Eliane
2016-01-01
Research has demonstrated that a higher level of education is associated with better performance on cognitive tests among middle-aged and elderly people. However, the effects of education on auditory processing skills have not yet been evaluated. Previous demonstrations of sensory-cognitive interactions in the aging process indicate the potential importance of this topic. Therefore, the primary purpose of this study was to investigate the performance of middle-aged and elderly people with different levels of formal education on auditory processing tests. A total of 177 adults with no evidence of cognitive, psychological or neurological conditions took part in the research. The participants completed a series of auditory assessments, including dichotic digit, frequency pattern and speech-in-noise tests. A working memory test was also performed to investigate the extent to which auditory processing and cognitive performance were associated. The results demonstrated positive but weak correlations between years of schooling and performance on all of the tests applied. The factor "years of schooling" was also one of the best predictors of frequency pattern and speech-in-noise test performance. Additionally, performance on the working memory, frequency pattern and dichotic digit tests was also correlated, suggesting that the influence of educational level on auditory processing performance might be associated with the cognitive demands of the auditory processing tests rather than with auditory sensory aspects themselves. Longitudinal research is required to investigate the causal relationship between educational level and auditory processing skills.
A Novel Method of Brainstem Auditory Evoked Potentials Using Complex Verbal Stimuli
Kouni, Sophia N; Koutsojannis, Constantinos; Ziavra, Nausika; Giannopoulos, Sotirios
2014-01-01
Background: The click and tone-evoked auditory brainstem responses are widely used in clinical practice due to their consistency and predictability. More recently, speech-evoked responses have been used to evaluate subcortical processing of complex signals not revealed by responses to clicks and tones. Aims: Disyllable stimuli corresponding to familiar words can induce a pattern of voltage fluctuations in the brain stem resulting in a familiar waveform, and they can yield better information about brain stem nuclei along the ascending central auditory pathway. Materials and Methods: We describe a new method using the disyllable word “baba”, corresponding to English “daddy”, which is commonly used in many languages spanning from West Africa to the Eastern Mediterranean and all the way to East Asia. Results: This method was applied to 20 young adults formally diagnosed as dyslexic (10 subjects) or mildly dyslexic (10 subjects), who were matched for sex, age, education, hearing sensitivity, and IQ with 20 normal subjects. The absolute peak latencies of the negative wave C and the interpeak latencies of A-C elicited by the verbal stimulus “baba” were found to be significantly increased in the dyslexic group in comparison with the control group. Conclusions: The method is easy and helpful for diagnosing abnormalities affecting the auditory pathway, identifying subjects with early perception and cortical representation abnormalities, and applying suitable therapeutic and rehabilitation management. PMID:25210677
Using complex auditory-visual samples to produce emergent relations in children with autism.
Groskreutz, Nicole C; Karsina, Allen; Miguel, Caio F; Groskreutz, Mark P
2010-03-01
Six participants with autism learned conditional relations between complex auditory-visual sample stimuli (dictated words and pictures) and simple visual comparisons (printed words) using matching-to-sample training procedures. Pre- and posttests examined potential stimulus control by each element of the complex sample when presented individually and emergence of additional conditional relations and oral labeling. Tests revealed class-consistent performance for all participants following training.
Fundamental deficits of auditory perception in Wernicke's aphasia.
Robson, Holly; Grube, Manon; Lambon Ralph, Matthew A; Griffiths, Timothy D; Sage, Karen
2013-01-01
This work investigates the nature of the comprehension impairment in Wernicke's aphasia (WA), by examining the relationship between deficits in auditory processing of fundamental, non-verbal acoustic stimuli and auditory comprehension. WA, a condition resulting in severely disrupted auditory comprehension, primarily occurs following a cerebrovascular accident (CVA) to the left temporo-parietal cortex. Whilst damage to posterior superior temporal areas is associated with auditory linguistic comprehension impairments, functional-imaging indicates that these areas may not be specific to speech processing but part of a network for generic auditory analysis. We examined analysis of basic acoustic stimuli in WA participants (n = 10) using auditory stimuli reflective of theories of cortical auditory processing and of speech cues. Auditory spectral, temporal and spectro-temporal analysis was assessed using pure-tone frequency discrimination, frequency modulation (FM) detection and the detection of dynamic modulation (DM) in "moving ripple" stimuli. All tasks used criterion-free, adaptive measures of threshold to ensure reliable results at the individual level. Participants with WA showed normal frequency discrimination but significant impairments in FM and DM detection, relative to age- and hearing-matched controls at the group level (n = 10). At the individual level, there was considerable variation in performance, and thresholds for both FM and DM detection correlated significantly with auditory comprehension abilities in the WA participants. These results demonstrate the co-occurrence of a deficit in fundamental auditory processing of temporal and spectro-temporal non-verbal stimuli in WA, which may have a causal contribution to the auditory language comprehension impairment. Results are discussed in the context of traditional neuropsychology and current models of cortical auditory processing. Copyright © 2012 Elsevier Ltd. All rights reserved.
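Thresholds of the kind reported above are commonly obtained with adaptive staircases. The 2-down/1-up rule sketched below is one such procedure, given only as an illustrative stand-in since the paper's exact adaptive method is not detailed in this summary; the trial interface is a placeholder.

    import numpy as np

    def two_down_one_up(run_trial, start_level, step, n_reversals=8):
        # run_trial(level) -> bool: returns True if the listener responds correctly
        # at the given stimulus level (e.g. FM depth); a hypothetical interface.
        level, correct_in_row, direction = start_level, 0, -1
        reversals = []
        while len(reversals) < n_reversals:
            if run_trial(level):
                correct_in_row += 1
                if correct_in_row == 2:                   # two correct -> make harder
                    correct_in_row = 0
                    if direction == +1:
                        reversals.append(level)           # track direction changes
                    direction = -1
                    level = max(level - step, 0.0)
            else:                                         # any error -> make easier
                correct_in_row = 0
                if direction == -1:
                    reversals.append(level)
                direction = +1
                level += step
        return float(np.mean(reversals[-6:]))             # threshold (~70.7% correct point)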
Auditory processing disorders, verbal disfluency, and learning difficulties: a case study.
Jutras, Benoît; Lagacé, Josée; Lavigne, Annik; Boissonneault, Andrée; Lavoie, Charlen
2007-01-01
This case study reports the findings of auditory behavioral and electrophysiological measures performed on a graduate student (identified as LN) presenting verbal disfluency and learning difficulties. Results of behavioral audiological testing documented the presence of auditory processing disorders, particularly temporal processing and binaural integration. Electrophysiological test results, including middle latency, late latency and cognitive potentials, revealed that LN's central auditory system processes acoustic stimuli differently to a reference group with normal hearing.
Perceptual Bias and Loudness Change: An Investigation of Memory, Masking, and Psychophysiology
NASA Astrophysics Data System (ADS)
Olsen, Kirk N.
Loudness is a fundamental aspect of human auditory perception that is closely associated with a sound's physical acoustic intensity. The dynamic quality of intensity change is an inherent acoustic feature in real-world listening domains such as speech and music. However, perception of loudness change in response to continuous intensity increases (up-ramps) and decreases (down-ramps) has received relatively little empirical investigation. Overestimation of loudness change in response to up-ramps is said to be linked to an adaptive survival response associated with looming (or approaching) motion in the environment. The hypothesised 'perceptual bias' to looming auditory motion suggests why perceptual overestimation of up-ramps may occur; however it does not offer a causal explanation. It is concluded that post-stimulus judgements of perceived loudness change are significantly affected by a cognitive recency response bias that, until now, has been an artefact of experimental procedure. Perceptual end-level differences caused by duration specific sensory adaptation at peripheral and/or central stages of auditory processing may explain differences in post-stimulus judgements of loudness change. Experiments that investigate human responses to acoustic intensity dynamics, encompassing topics from basic auditory psychophysics (e.g., sensory adaptation) to cognitive-emotional appraisal of increasingly complex stimulus events such as music and auditory warnings, are proposed for future research.
Auditory hedonic phenotypes in dementia: A behavioural and neuroanatomical analysis
Fletcher, Phillip D.; Downey, Laura E.; Golden, Hannah L.; Clark, Camilla N.; Slattery, Catherine F.; Paterson, Ross W.; Schott, Jonathan M.; Rohrer, Jonathan D.; Rossor, Martin N.; Warren, Jason D.
2015-01-01
Patients with dementia may exhibit abnormally altered liking for environmental sounds and music but such altered auditory hedonic responses have not been studied systematically. Here we addressed this issue in a cohort of 73 patients representing major canonical dementia syndromes (behavioural variant frontotemporal dementia (bvFTD), semantic dementia (SD), progressive nonfluent aphasia (PNFA), and amnestic Alzheimer's disease (AD)) using a semi-structured caregiver behavioural questionnaire and voxel-based morphometry (VBM) of patients' brain MR images. Behavioural responses signalling abnormal aversion to environmental sounds, aversion to music or heightened pleasure in music (‘musicophilia’) occurred in around half of the cohort but showed clear syndromic and genetic segregation, occurring in most patients with bvFTD but infrequently in PNFA and more commonly in association with MAPT than C9orf72 mutations. Aversion to sounds was the exclusive auditory phenotype in AD whereas more complex phenotypes including musicophilia were common in bvFTD and SD. Auditory hedonic alterations correlated with grey matter loss in a common, distributed, right-lateralised network including antero-mesial temporal lobe, insula, anterior cingulate and nucleus accumbens. Our findings suggest that abnormalities of auditory hedonic processing are a significant issue in common dementias. Sounds may constitute a novel probe of brain mechanisms for emotional salience coding that are targeted by neurodegenerative disease. PMID:25929717
Hearing Scenes: A Neuromagnetic Signature of Auditory Source and Reverberant Space Separation
Oliva, Aude
2017-01-01
Perceiving the geometry of surrounding space is a multisensory process, crucial to contextualizing object perception and guiding navigation behavior. Humans can make judgments about surrounding spaces from reverberation cues, caused by sounds reflecting off multiple interior surfaces. However, it remains unclear how the brain represents reverberant spaces separately from sound sources. Here, we report separable neural signatures of auditory space and source perception during magnetoencephalography (MEG) recording as subjects listened to brief sounds convolved with monaural room impulse responses (RIRs). The decoding signature of sound sources began at 57 ms after stimulus onset and peaked at 130 ms, while space decoding started at 138 ms and peaked at 386 ms. Importantly, these neuromagnetic responses were readily dissociable in form and time: while sound source decoding exhibited an early and transient response, the neural signature of space was sustained and independent of the original source that produced it. The reverberant space response was robust to variations in sound source, and vice versa, indicating a generalized response not tied to specific source-space combinations. These results provide the first neuromagnetic evidence for robust, dissociable auditory source and reverberant space representations in the human brain and reveal the temporal dynamics of how auditory scene analysis extracts percepts from complex naturalistic auditory signals. PMID:28451630
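A sketch of time-resolved decoding of the kind described above, fitting and cross-validating a classifier independently at each time point so that decoding onset and peak latencies can be read from the resulting accuracy curve. The data shapes, classifier, and cross-validation scheme are generic assumptions rather than the authors' pipeline.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    def timewise_decoding(X, y, cv=5):
        # X: (n_trials, n_sensors, n_times) MEG data; y: (n_trials,) labels, e.g.
        # which sound source or which reverberant space was presented.
        clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        acc = np.zeros(X.shape[2])
        for t in range(X.shape[2]):
            acc[t] = cross_val_score(clf, X[:, :, t], y, cv=cv).mean()
        return acc                 # decoding onset/peak can be read from this curve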
Processing of harmonics in the lateral belt of macaque auditory cortex.
Kikuchi, Yukiko; Horwitz, Barry; Mishkin, Mortimer; Rauschecker, Josef P
2014-01-01
Many speech sounds and animal vocalizations contain components, referred to as complex tones, that consist of a fundamental frequency (F0) and higher harmonics. In this study we examined single-unit activity recorded in the core (A1) and lateral belt (LB) areas of auditory cortex in two rhesus monkeys as they listened to pure tones and pitch-shifted conspecific vocalizations ("coos"). The latter consisted of complex-tone segments in which F0 was matched to a corresponding pure-tone stimulus. In both animals, neuronal latencies to pure-tone stimuli at the best frequency (BF) were ~10 to 15 ms longer in LB than in A1. This might be expected, since LB is considered to be at a hierarchically higher level than A1. On the other hand, the latency of LB responses to coos was ~10 to 20 ms shorter than to the corresponding pure-tone BF, suggesting facilitation in LB by the harmonics. This latency reduction by coos was not observed in A1, resulting in similar coo latencies in A1 and LB. Multi-peaked neurons were present in both A1 and LB; however, harmonically-related peaks were observed in LB for both early and late response components, whereas in A1 they were observed only for late components. Our results suggest that harmonic features, such as relationships between specific frequency intervals of communication calls, are processed at relatively early stages of the auditory cortical pathway, but preferentially in LB.
Miyazaki, Takahiro; Thompson, Jessica; Fujioka, Takako; Ross, Bernhard
2013-04-19
Amplitude fluctuations of natural sounds carry multiple types of information represented at different time scales, such as syllables and voice pitch in speech. However, it is not well understood how such amplitude fluctuations at different time scales are processed in the brain. In the present study we investigated the effect of the stimulus rate on cortical evoked responses using magnetoencephalography (MEG). We used a two-tone complex sound, whose envelope fluctuated at the difference frequency and induced an acoustic beat sensation. When the beat rate was continuously swept between 3 Hz and 60 Hz, the auditory evoked response showed distinct transient waves at slow rates, while at fast rates continuous sinusoidal oscillations similar to the auditory steady-state response (ASSR) were observed. We further derived temporal modulation transfer functions (TMTFs) from the amplitudes of the transient responses and from the ASSR. The results identified two critical rates of 12.5 Hz and 25 Hz, at which consecutive transient responses overlapped with each other. These stimulus rates roughly corresponded to the rates at which the perceptual quality of the sound envelope is known to change. Low rates (below about 10 Hz) are perceived as loudness fluctuation, medium rates as acoustical flutter, and rates above 25 Hz as roughness. We conclude that these results reflect cortical processes that integrate successive acoustic events at different time scales for extracting complex features of natural sound. Copyright © 2013 Elsevier B.V. All rights reserved.
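The stimulus described above is straightforward to reconstruct in outline: two pure tones whose summed envelope beats at their difference frequency. The carrier frequency and level below are arbitrary choices for the sketch; sweeping the beat rate, as in the experiment, would make the second frequency a function of time.

    import numpy as np
    from scipy.signal import hilbert

    def two_tone_beat(carrier_hz=500.0, beat_hz=40.0, duration=1.0, fs=44100):
        t = np.arange(int(duration * fs)) / fs
        x = np.sin(2 * np.pi * carrier_hz * t) + np.sin(2 * np.pi * (carrier_hz + beat_hz) * t)
        return x / np.abs(x).max()                     # two-tone complex, normalized

    envelope = np.abs(hilbert(two_tone_beat()))        # fluctuates at ~40 Hz (the beat rate)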
Auditory spatial processing in the human cortex.
Salminen, Nelli H; Tiitinen, Hannu; May, Patrick J C
2012-12-01
The auditory system codes spatial locations in a way that deviates from the spatial representations found in other modalities. This difference is especially striking in the cortex, where neurons form topographical maps of visual and tactile space but where auditory space is represented through a population rate code. In this hemifield code, sound source location is represented in the activity of two widely tuned opponent populations, one tuned to the right and the other to the left side of auditory space. Scientists are only beginning to uncover how this coding strategy adapts to various spatial processing demands. This review presents the current understanding of auditory spatial processing in the cortex. To this end, the authors consider how various implementations of the hemifield code may exist within the auditory cortex and how these may be modulated by the stimulation and task context. As a result, a coherent set of neural strategies for auditory spatial processing emerges.
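The hemifield (opponent-channel) code summarized above can be illustrated with two broadly tuned populations and a read-out of their rate difference. The sigmoidal tuning and slope value are arbitrary choices made for this sketch, not parameters from the review.

    import numpy as np

    def hemifield_rates(azimuth_deg, slope=0.05):
        # Two opponent populations: one increases its rate toward the right,
        # the other toward the left, with broad sigmoidal tuning about the midline.
        right = 1.0 / (1.0 + np.exp(-slope * azimuth_deg))
        left = 1.0 / (1.0 + np.exp(+slope * azimuth_deg))
        return left, right

    def decode_azimuth(left_rate, right_rate, slope=0.05):
        # Invert the tuning: location is recovered from the ratio of the two rates.
        return np.log(right_rate / left_rate) / slope

    left, right = hemifield_rates(30.0)                # source 30 degrees to the right
    print(decode_azimuth(left, right))                 # ~30.0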
Potts, Geoffrey F; Wood, Susan M; Kothmann, Delia; Martin, Laura E
2008-10-21
Attention directs limited-capacity information processing resources to a subset of available perceptual representations. The mechanisms by which attention selects task-relevant representations for preferential processing are not fully known. Treisman and Gelade's [Treisman, A., Gelade, G., 1980. A feature-integration theory of attention. Cognit. Psychol. 12, 97-136.] influential attention model posits that simple features are processed preattentively, in parallel, but that attention is required to serially conjoin multiple features into an object representation. Event-related potentials have provided evidence for this model, showing parallel processing of perceptual features in the posterior Selection Negativity (SN) and serial, hierarchic processing of feature conjunctions in the Frontal Selection Positivity (FSP). Most prior studies have been done on conjunctions within one sensory modality, while many real-world objects have multimodal features. It is not known if the same neural systems of posterior parallel processing of simple features and frontal serial processing of feature conjunctions seen within a sensory modality also operate on conjunctions between modalities. The current study used ERPs and simultaneously presented auditory and visual stimuli in three task conditions: Attend Auditory (auditory feature determines the target, visual features are irrelevant), Attend Visual (visual features relevant, auditory irrelevant), and Attend Conjunction (target defined by the co-occurrence of an auditory and a visual feature). In the Attend Conjunction condition when the auditory but not the visual feature was a target there was an SN over auditory cortex, when the visual but not auditory stimulus was a target there was an SN over visual cortex, and when both auditory and visual stimuli were targets (i.e. conjunction target) there were SNs over both auditory and visual cortex, indicating parallel processing of the simple features within each modality. In contrast, an FSP was present when either the visual only or both auditory and visual features were targets, but not when only the auditory stimulus was a target, indicating that the conjunction target determination was evaluated serially and hierarchically with visual information taking precedence. This indicates that the detection of a target defined by audio-visual conjunction is achieved via the same mechanism as within a single perceptual modality, through separate, parallel processing of the auditory and visual features and serial processing of the feature conjunction elements, rather than by evaluation of a fused multimodal percept.
Rimmele, Johanna Maria; Sussman, Elyse; Poeppel, David
2015-02-01
Listening situations with multiple talkers or background noise are common in everyday communication and are particularly demanding for older adults. Here we review current research on auditory perception in aging individuals in order to gain insights into the challenges of listening under noisy conditions. Informationally rich temporal structure in auditory signals--over a range of time scales from milliseconds to seconds--renders temporal processing central to perception in the auditory domain. We discuss the role of temporal structure in auditory processing, in particular from a perspective relevant for hearing in background noise, and focusing on sensory memory, auditory scene analysis, and speech perception. Interestingly, these auditory processes, usually studied in an independent manner, show considerable overlap of processing time scales, even though each has its own 'privileged' temporal regimes. By integrating perspectives on temporal structure processing in these three areas of investigation, we aim to highlight similarities typically not recognized. Copyright © 2014 Elsevier B.V. All rights reserved.
Mossbridge, Julia; Zweig, Jacob; Grabowecky, Marcia; Suzuki, Satoru
2016-01-01
The perceptual system integrates synchronized auditory-visual signals in part to promote individuation of objects in cluttered environments. The processing of auditory-visual synchrony may more generally contribute to cognition by synchronizing internally generated multimodal signals. Reading is a prime example because the ability to synchronize internal phonological and/or lexical processing with visual orthographic processing may facilitate encoding of words and meanings. Consistent with this possibility, developmental and clinical research has suggested a link between reading performance and the ability to compare visual spatial/temporal patterns with auditory temporal patterns. Here, we provide converging behavioral and electrophysiological evidence suggesting that greater behavioral ability to judge auditory-visual synchrony (Experiment 1) and greater sensitivity of an electrophysiological marker of auditory-visual synchrony processing (Experiment 2) both predict superior reading comprehension performance, accounting for 16% and 25% of the variance, respectively. These results support the idea that the mechanisms that detect auditory-visual synchrony contribute to reading comprehension. PMID:28129060
Single-unit analysis of somatosensory processing in the core auditory cortex of hearing ferrets.
Meredith, M Alex; Allman, Brian L
2015-03-01
The recent findings in several species that the primary auditory cortex processes non-auditory information have largely overlooked the possibility of somatosensory effects. Therefore, the present investigation examined the core auditory cortices (anterior auditory field and primary auditory cortex) for tactile responsivity. Multiple single-unit recordings from anesthetised ferret cortex yielded histologically verified neurons (n = 311) tested with electronically controlled auditory, visual and tactile stimuli, and their combinations. Of the auditory neurons tested, a small proportion (17%) was influenced by visual cues, but a somewhat larger number (23%) was affected by tactile stimulation. Tactile effects rarely occurred alone and spiking responses were observed in bimodal auditory-tactile neurons. However, the broadest tactile effect that was observed, which occurred in all neuron types, was that of suppression of the response to a concurrent auditory cue. The presence of tactile effects in the core auditory cortices was supported by a substantial anatomical projection from the rostral suprasylvian sulcal somatosensory area. Collectively, these results demonstrate that crossmodal effects in the auditory cortex are not exclusively visual and that somatosensation plays a significant role in modulation of acoustic processing, and indicate that crossmodal plasticity following deafness may unmask these existing non-auditory functions. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Binaural speech processing in individuals with auditory neuropathy.
Rance, G; Ryan, M M; Carew, P; Corben, L A; Yiu, E; Tan, J; Delatycki, M B
2012-12-13
Auditory neuropathy disrupts the neural representation of sound and may therefore impair processes contingent upon inter-aural integration. The aims of this study were to investigate binaural auditory processing in individuals with axonal (Friedreich ataxia) and demyelinating (Charcot-Marie-Tooth disease type 1A) auditory neuropathy and to evaluate the relationship between the degree of auditory deficit and overall clinical severity in patients with neuropathic disorders. Twenty-three subjects with genetically confirmed Friedreich ataxia and 12 subjects with Charcot-Marie-Tooth disease type 1A underwent psychophysical evaluation of basic auditory processing (intensity discrimination/temporal resolution) and binaural speech perception assessment using the Listening in Spatialized Noise test. Age, gender and hearing-level-matched controls were also tested. Speech perception in noise for individuals with auditory neuropathy was abnormal for each listening condition, but was particularly affected in circumstances where binaural processing might have improved perception through spatial segregation. Ability to use spatial cues was correlated with temporal resolution suggesting that the binaural-processing deficit was the result of disordered representation of timing cues in the left and right auditory nerves. Spatial processing was also related to overall disease severity (as measured by the Friedreich Ataxia Rating Scale and Charcot-Marie-Tooth Neuropathy Score) suggesting that the degree of neural dysfunction in the auditory system accurately reflects generalized neuropathic changes. Measures of binaural speech processing show promise for application in the neurology clinic. In individuals with auditory neuropathy due to both axonal and demyelinating mechanisms the assessment provides a measure of functional hearing ability, a biomarker capable of tracking the natural history of progressive disease and a potential means of evaluating the effectiveness of interventions. Copyright © 2012 IBRO. Published by Elsevier Ltd. All rights reserved.
Ogawa, Akitoshi; Bordier, Cecile; Macaluso, Emiliano
2013-01-01
The use of naturalistic stimuli to probe sensory functions in the human brain is gaining increasing interest. Previous imaging studies examined brain activity associated with the processing of cinematographic material using both standard “condition-based” designs and “computational” methods based on the extraction of time-varying features of the stimuli (e.g. motion). Here, we exploited both approaches to investigate the neural correlates of complex visual and auditory spatial signals in cinematography. In the first experiment, the participants watched a piece of a commercial movie presented in four blocked conditions: 3D vision with surround sounds (3D-Surround), 3D with monaural sound (3D-Mono), 2D-Surround, and 2D-Mono. In the second experiment, they watched two different segments of the movie, both presented continuously in 3D-Surround. The blocked presentation served for standard condition-based analyses, while all datasets were submitted to computation-based analyses. The latter assessed where activity co-varied with visual disparity signals and the complexity of auditory multi-source signals. The blocked analyses associated 3D viewing with the activation of the dorsal and lateral occipital cortex and superior parietal lobule, while the surround sounds activated the superior and middle temporal gyri (S/MTG). The computation-based analyses revealed the effects of absolute disparity in dorsal occipital and posterior parietal cortices and of disparity gradients in the posterior middle temporal gyrus plus the inferior frontal gyrus. The complexity of the surround sounds was associated with activity in specific sub-regions of S/MTG, even after accounting for changes of sound intensity. These results demonstrate that the processing of naturalistic audio-visual signals entails an extensive set of visual and auditory areas, and that computation-based analyses can track the contribution of complex spatial aspects characterizing such life-like stimuli. PMID:24194828
Auditory Processing Testing: In the Booth versus Outside the Booth.
Lucker, Jay R
2017-09-01
Many audiologists believe that auditory processing testing must be carried out in a soundproof booth. This expectation is especially a problem in places such as elementary schools. Research comparing pure-tone thresholds obtained in sound booths with those obtained in quiet test environments outside of these booths does not support that belief. Auditory processing testing is generally carried out at above-threshold levels, and therefore may be even less likely to require a soundproof booth. The present study was carried out to compare test results in soundproof booths versus quiet rooms. The purpose of this study was to determine whether auditory processing tests can be administered in a quiet test room rather than in the soundproof test suite. The outcomes indicate that audiologists can provide auditory processing testing for children under various test conditions, including quiet rooms at their school. A battery of auditory processing tests was administered at a test level equivalent to 50 dB HL through headphones. Twenty participants identified with normal hearing were included in this study, ten having no auditory processing concerns and ten exhibiting auditory processing problems. All participants underwent a battery of tests, both inside the test booth and outside the booth in a quiet room. Order of testing (inside versus outside) was counterbalanced. Participants were first determined to have normal hearing thresholds for tones and speech. Auditory processing tests were recorded and presented from an HP EliteBook laptop computer with noise-canceling headphones attached to a y-cord that not only presented the test stimuli to the participants but also allowed monitor headphones to be worn by the evaluator. The same equipment was used inside as well as outside the booth. No differences were found on any auditory processing measure as a function of the test setting or the order in which testing was done, that is, in the booth or in the room. Results from the present study indicate that one can obtain the same results on auditory processing tests, regardless of whether testing is completed in a soundproof booth or in a quiet test environment. Therefore, audiologists should not be required to test for auditory processing in a soundproof booth. This study shows that audiologists can conduct testing in a quiet room so long as the background noise is sufficiently controlled. American Academy of Audiology
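Since every listener completed the battery both inside and outside the booth with counterbalanced order, the comparison is naturally a within-subject one. Below is a minimal sketch of such a paired comparison; the scores, effect sizes, and the use of SciPy's paired t-test are illustrative assumptions, not the study's reported analysis.

```python
import numpy as np
from scipy import stats

# Hypothetical percent-correct scores on one auditory processing test,
# for the same 20 listeners tested inside and outside the sound booth.
rng = np.random.default_rng(0)
booth = rng.normal(loc=85, scale=6, size=20)
room = booth + rng.normal(loc=0, scale=2, size=20)  # no systematic setting effect

t, p = stats.ttest_rel(booth, room)  # paired t-test across the two settings
print(f"t(19) = {t:.2f}, p = {p:.3f}")
```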
Kotchoubey, Boris; Pavlov, Yuri G; Kleber, Boris
2015-01-01
According to a prevailing view, the visual system works by dissecting stimuli into primitives, whereas the auditory system processes simple and complex stimuli with their corresponding features in parallel. This makes musical stimulation particularly suitable for patients with disorders of consciousness (DoC), because the processing pathways related to complex stimulus features can be preserved even when those related to simple features are no longer available. An additional factor speaking in favor of musical stimulation in DoC is the low efficiency of visual stimulation due to prevalent maladies of vision or gaze fixation in DoC patients. Hearing disorders, in contrast, are much less frequent in DoC, which allows us to use auditory stimulation at various levels of complexity. The current paper overviews empirical data concerning the four main domains of brain functioning in DoC patients that musical stimulation can address: perception (e.g., pitch, timbre, and harmony), cognition (e.g., musical syntax and meaning), emotions, and motor functions. Music can approach basic levels of patients' self-consciousness, which may even exist when all higher-level cognitions are lost, whereas music-induced emotions and rhythmic stimulation can affect the dopaminergic reward system and activity in the motor system, respectively, thus serving as a starting point for rehabilitation.
Musical Experience, Sensorineural Auditory Processing, and Reading Subskills in Adults
Tichko, Parker; Skoe, Erika
2018-01-01
Developmental research suggests that sensorineural auditory processing, reading subskills (e.g., phonological awareness and rapid naming), and musical experience are related during early periods of reading development. Interestingly, recent work suggests that these relations may extend into adulthood, with indices of sensorineural auditory processing relating to global reading ability. However, it is largely unknown whether sensorineural auditory processing relates to specific reading subskills, such as phonological awareness and rapid naming, as well as musical experience in mature readers. To address this question, we recorded electrophysiological responses to a repeating click (auditory stimulus) in a sample of adult readers. We then investigated relations between electrophysiological responses to sound, reading subskills, and musical experience in this same set of adult readers. Analyses suggest that sensorineural auditory processing, reading subskills, and musical experience are related in adulthood, with faster neural conduction times and greater musical experience associated with stronger rapid-naming skills. These results are similar to the developmental findings that suggest reading subskills are related to sensorineural auditory processing and musical experience in children. PMID:29702572
Comorbidity of Auditory Processing, Language, and Reading Disorders
ERIC Educational Resources Information Center
Sharma, Mridula; Purdy, Suzanne C.; Kelly, Andrea S.
2009-01-01
Purpose: The authors assessed comorbidity of auditory processing disorder (APD), language impairment (LI), and reading disorder (RD) in school-age children. Method: Children (N = 68) with suspected APD and nonverbal IQ standard scores of 80 or more were assessed using auditory, language, reading, attention, and memory measures. Auditory processing…
Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale
2017-04-01
There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6; CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times on auditory and audio-visual stimuli compared with normal-hearing (NH) listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may provide important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.
Farris, Hamilton E; Rand, A Stanley; Ryan, Michael J
2002-01-01
Numerous animals across disparate taxa must identify and locate complex acoustic signals embedded in multiple overlapping signals and ambient noise. A requirement of this task is the ability to group sounds into auditory streams in which sounds are perceived as emanating from the same source. Although numerous studies over the past 50 years have examined aspects of auditory grouping in humans, surprisingly few assays have demonstrated auditory stream formation or the assignment of multicomponent signals to a single source in non-human animals. In our study, we present evidence for auditory grouping in female túngara frogs. In contrast to humans, in which auditory grouping may be facilitated by the cues produced when sounds arrive from the same location, we show that spatial cues play a limited role in grouping, as females group discrete components of the species' complex call over wide angular separations. Furthermore, we show that, once grouped, the separate call components are weighted differently in recognizing and locating the call, so-called 'what' and 'where' decisions, respectively. Copyright 2002 S. Karger AG, Basel
Large-Scale Analysis of Auditory Segregation Behavior Crowdsourced via a Smartphone App.
Teki, Sundeep; Kumar, Sukhbinder; Griffiths, Timothy D
2016-01-01
The human auditory system is adept at detecting sound sources of interest from a complex mixture of several other simultaneous sounds. The ability to selectively attend to the speech of one speaker whilst ignoring other speakers and background noise is of vital biological significance; the capacity to make sense of complex 'auditory scenes' is significantly impaired in aging populations as well as those with hearing loss. We investigated this problem by designing a synthetic signal, termed the 'stochastic figure-ground' stimulus, which captures essential aspects of complex sounds in the natural environment. Previously, we showed that under controlled laboratory conditions, young listeners sampled from the university subject pool (n = 10) performed very well in detecting targets embedded in the stochastic figure-ground signal. Here, we presented a modified version of this cocktail party paradigm as a 'game' featured in a smartphone app (The Great Brain Experiment) and obtained data from a large population with diverse demographical patterns (n = 5148). Despite differences in paradigms and experimental settings, the observed target-detection performance by users of the app was robust and consistent with our previous results from the psychophysical study. Our results highlight the potential use of smartphone apps in capturing robust large-scale auditory behavioral data from normal healthy volunteers, which can also be extended to study auditory deficits in clinical populations with hearing impairments and central auditory disorders.
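The 'stochastic figure-ground' stimulus is described in the source papers as a tone cloud in which a subset of frequency components repeats coherently against a randomly varying background. The sketch below generates a toy version of such a stimulus; all parameters (chord duration, frequency range, component counts) are hypothetical choices for illustration, not the published stimulus specification.

```python
import numpy as np

def stochastic_figure_ground(sr=16000, n_chords=20, chord_dur=0.05,
                             n_background=10, n_figure=4, seed=0):
    """Tone-cloud stimulus: each 50-ms chord has freshly drawn background
    tones plus a fixed set of 'figure' frequencies repeated in every chord."""
    rng = np.random.default_rng(seed)
    freq_pool = np.geomspace(200, 7000, 120)           # candidate frequencies (Hz)
    figure = rng.choice(freq_pool, n_figure, replace=False)
    t = np.arange(int(sr * chord_dur)) / sr
    chords = []
    for _ in range(n_chords):
        background = rng.choice(freq_pool, n_background, replace=False)
        freqs = np.concatenate([background, figure])
        chord = np.sum([np.sin(2 * np.pi * f * t) for f in freqs], axis=0)
        chords.append(chord / len(freqs))               # rough level normalisation
    return np.concatenate(chords)

signal = stochastic_figure_ground()
print(signal.shape)   # (16000,) = 20 chords x 50 ms at 16 kHz
```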
Romero, Ana Carla Leite; Alfaya, Lívia Marangoni; Gonçales, Alina Sanches; Frizzo, Ana Claudia Figueiredo; Isaac, Myriam de Lima
2016-01-01
Introduction The auditory system of HIV-positive children may have deficits at various levels, such as the high incidence of problems in the middle ear that can cause hearing loss. Objective The objective of this study is to characterize the development of children infected by the Human Immunodeficiency Virus (HIV) in the Simplified Auditory Processing Test (SAPT) and the Staggered Spondaic Word Test. Methods We performed behavioral tests composed of the Simplified Auditory Processing Test and the Portuguese version of the Staggered Spondaic Word Test (SSW). The participants were 15 children infected by HIV, all using antiretroviral medication. Results The children had abnormal auditory processing verified by Simplified Auditory Processing Test and the Portuguese version of SSW. In the Simplified Auditory Processing Test, 60% of the children presented hearing impairment. In the SAPT, the memory test for verbal sounds showed more errors (53.33%); whereas in SSW, 86.67% of the children showed deficiencies indicating deficit in figure-ground, attention, and memory auditory skills. Furthermore, there are more errors in conditions of background noise in both age groups, where most errors were in the left ear in the Group of 8-year-olds, with similar results for the group aged 9 years. Conclusion The high incidence of hearing loss in children with HIV and comorbidity with several biological and environmental factors indicate the need for: 1) familiar and professional awareness of the impact on auditory alteration on the developing and learning of the children with HIV, and 2) access to educational plans and follow-up with multidisciplinary teams as early as possible to minimize the damage caused by auditory deficits. PMID:28050213
Veltri, Theresa; Taroyan, Naira; Overton, Paul G
2017-07-01
Nicotine is a psychoactive substance that is commonly consumed in the context of music. However, the reason why music and nicotine are co-consumed is uncertain. One possibility is that nicotine affects cognitive processes relevant to aspects of music appreciation in a beneficial way. Here we investigated this possibility using Event-Related Potentials. Participants underwent a simple decision-making task (to maintain attentional focus), responses to which were signalled by auditory stimuli. Unlike previous research looking at the effects of nicotine on auditory processing, we used complex tones that varied in pitch, a fundamental element of music. In addition, unlike most other studies, we tested non-smoking subjects to avoid withdrawal-related complications. We found that nicotine (4.0 mg, administered as gum) increased P2 amplitude in the frontal region. Since a decrease in P2 amplitude and latency is related to habituation processes, and an enhanced ability to disengage from irrelevant stimuli, our findings suggest that nicotine may cause a reduction in habituation, resulting in non-smokers being less able to adapt to repeated stimuli. A corollary of that decrease in adaptation may be that nicotine extends the temporal window during which a listener is able and willing to engage with a piece of music.
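An ERP component such as the P2 is typically quantified by averaging stimulus-locked epochs and taking the mean amplitude in a fixed post-stimulus window at the channel(s) of interest. The sketch below illustrates that generic procedure on a hypothetical single-channel epochs array; the sampling rate, window, and data are assumptions and are not taken from this study.

```python
import numpy as np

# Hypothetical single-channel (frontal) EEG epochs: trials x samples,
# sampled at 500 Hz, epoch spanning -100 to +500 ms around tone onset.
rng = np.random.default_rng(3)
sr, epoch_start = 500, -0.1
epochs = rng.normal(0, 5, size=(120, 300))        # 120 trials of 600 ms

erp = epochs.mean(axis=0)                         # average across trials
times = epoch_start + np.arange(erp.size) / sr    # seconds relative to onset

# Mean amplitude in a 150-250 ms post-stimulus window as a simple P2 measure.
window = (times >= 0.150) & (times <= 0.250)
p2_amplitude = erp[window].mean()
print(f"P2 mean amplitude: {p2_amplitude:.2f} microvolts")
```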
Neural circuits in auditory and audiovisual memory.
Plakke, B; Romanski, L M
2016-06-01
Working memory is the ability to employ recently seen or heard stimuli and apply them to changing cognitive context. Although much is known about language processing and visual working memory, the neurobiological basis of auditory working memory is less clear. Historically, part of the problem has been the difficulty in obtaining a robust animal model to study auditory short-term memory. In recent years there have been neurophysiological and lesion studies indicating a cortical network involving both temporal and frontal cortices. Studies specifically targeting the role of the prefrontal cortex (PFC) in auditory working memory have suggested that dorsal and ventral prefrontal regions perform different roles during the processing of auditory mnemonic information, with the dorsolateral PFC performing similar functions for both auditory and visual working memory. In contrast, the ventrolateral PFC (VLPFC), which contains cells that respond robustly to auditory stimuli and that process both face and vocal stimuli, may be an essential locus for both auditory and audiovisual working memory. These findings suggest a critical role for the VLPFC in the processing, integrating, and retaining of communication information. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.
Auditory processing theories of language disorders: past, present, and future.
Miller, Carol A
2011-07-01
The purpose of this article is to provide information that will assist readers in understanding and interpreting research literature on the role of auditory processing in communication disorders. A narrative review was used to summarize and synthesize the literature on auditory processing deficits in children with auditory processing disorder (APD), specific language impairment (SLI), and dyslexia. The history of auditory processing theories of these 3 disorders is described, points of convergence and controversy within and among the different branches of research literature are considered, and the influence of research on practice is discussed. The theoretical and clinical contributions of neurophysiological methods are also reviewed, and suggested approaches for critical reading of the research literature are provided. Research on the role of auditory processing in communication disorders springs from a variety of theoretical perspectives and assumptions, and this variety, combined with controversies over the interpretation of research results, makes it difficult to draw clinical implications from the literature. Neurophysiological research methods are a promising route to better understanding of auditory processing. Progress in theory development and its clinical application is most likely to be made when researchers from different disciplines and theoretical perspectives communicate clearly and combine the strengths of their approaches.
Auditory Brainstem Response to Complex Sounds Predicts Self-Reported Speech-in-Noise Performance
ERIC Educational Resources Information Center
Anderson, Samira; Parbery-Clark, Alexandra; White-Schwoch, Travis; Kraus, Nina
2013-01-01
Purpose: To compare the ability of the auditory brainstem response to complex sounds (cABR) to predict subjective ratings of speech understanding in noise on the Speech, Spatial, and Qualities of Hearing Scale (SSQ; Gatehouse & Noble, 2004) relative to the predictive ability of the Quick Speech-in-Noise test (QuickSIN; Killion, Niquette,…
ERIC Educational Resources Information Center
Stevens, Catherine; Gallagher, Melinda
2004-01-01
This experiment investigated relational complexity and relational shift in judgments of auditory patterns. Pitch and duration values were used to construct two-note perceptually similar sequences (unary relations) and four-note relationally similar sequences (binary relations). It was hypothesized that 5-, 8- and 11-year-old children would perform…
The Effect of Lexical Content on Dichotic Speech Recognition in Older Adults.
Findlen, Ursula M; Roup, Christina M
2016-01-01
Age-related auditory processing deficits have been shown to negatively affect speech recognition for older adult listeners. In contrast, older adults gain benefit from their ability to make use of semantic and lexical content of the speech signal (i.e., top-down processing), particularly in complex listening situations. Assessment of auditory processing abilities among aging adults should take into consideration semantic and lexical content of the speech signal. The purpose of this study was to examine the effects of lexical and attentional factors on dichotic speech recognition performance characteristics for older adult listeners. A repeated measures design was used to examine differences in dichotic word recognition as a function of lexical and attentional factors. Thirty-five older adults (61-85 yr) with sensorineural hearing loss participated in this study. Dichotic speech recognition was evaluated using consonant-vowel-consonant (CVC) word and nonsense CVC syllable stimuli administered in the free recall, directed recall right, and directed recall left response conditions. Dichotic speech recognition performance for nonsense CVC syllables was significantly poorer than performance for CVC words. Dichotic recognition performance varied across response condition for both stimulus types, which is consistent with previous studies on dichotic speech recognition. Inspection of individual results revealed that five listeners demonstrated an auditory-based left ear deficit for one or both stimulus types. Lexical content of stimulus materials affects performance characteristics for dichotic speech recognition tasks in the older adult population. The use of nonsense CVC syllable material may provide a way to assess dichotic speech recognition performance while potentially lessening the effects of lexical content on performance (i.e., measuring bottom-up auditory function both with and without top-down processing). American Academy of Audiology.
Visual and auditory perception in preschool children at risk for dyslexia.
Ortiz, Rosario; Estévez, Adelina; Muñetón, Mercedes; Domínguez, Carolina
2014-11-01
Recently, there has been renewed interest in the perceptual problems of dyslexics. A polemic research issue in this area has been the nature of the perception deficit. Another issue is the causal role of this deficit in dyslexia. Most studies have been carried out in adult and child literates; consequently, the observed deficits may be the result rather than the cause of dyslexia. This study addresses these issues by examining visual and auditory perception in children at risk for dyslexia. We compared children from preschool with and without risk for dyslexia in auditory and visual temporal order judgment tasks and same-different discrimination tasks. Identical visual and auditory, linguistic and nonlinguistic stimuli were presented in both tasks. The results revealed that the visual as well as the auditory perception of children at risk for dyslexia is impaired. The comparison between groups in auditory and visual perception shows that the achievement of children at risk was lower than that of children without risk for dyslexia in the temporal tasks. There were no differences between groups in auditory discrimination tasks. The difficulties of children at risk in visual and auditory perceptual processing affected both linguistic and nonlinguistic stimuli. Our conclusion is that children at risk for dyslexia show auditory and visual perceptual deficits for linguistic and nonlinguistic stimuli. The auditory impairment may be explained by temporal processing problems, and these problems are more serious for processing language than for processing other auditory stimuli. These visual and auditory perceptual deficits are not the consequence of failing to learn to read; thus, these findings support the theory of a temporal processing deficit. Copyright © 2014 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Jenstad, Lorienne M.; Souza, Pamela E.
2007-01-01
Purpose: When understanding speech in complex listening situations, older adults with hearing loss face the double challenge of cochlear hearing loss and deficits of the aging auditory system. Wide-dynamic range compression (WDRC) is used in hearing aids as remediation for the loss of audibility associated with hearing loss. WDRC processing has…
ERIC Educational Resources Information Center
Bomba, Marie D.; Singhal, Anthony
2010-01-01
Previous dual-task research pairing complex visual tasks involving non-spatial cognitive processes during dichotic listening have shown effects on the late component (Ndl) of the negative difference selective attention waveform but no effects on the early (Nde) response suggesting that the Ndl, but not the Nde, is affected by non-spatial…
Psychophysical and Neural Correlates of Auditory Attraction and Aversion
NASA Astrophysics Data System (ADS)
Patten, Kristopher Jakob
This study explores the psychophysical and neural processes associated with the perception of sounds as either pleasant or aversive. The underlying psychophysical theory is based on auditory scene analysis, the process through which listeners parse auditory signals into individual acoustic sources. The first experiment tests and confirms that a self-rated pleasantness continuum reliably exists for 20 various stimuli (r = .48). In addition, the pleasantness continuum correlated with the physical acoustic characteristics of consonance/dissonance (r = .78), which can facilitate auditory parsing processes. The second experiment uses an fMRI block design to test blood oxygen level dependent (BOLD) changes elicited by a subset of 5 exemplar stimuli chosen from Experiment 1 that are evenly distributed over the pleasantness continuum. Specifically, it tests and confirms that the pleasantness continuum produces systematic changes in brain activity for unpleasant acoustic stimuli beyond what occurs with pleasant auditory stimuli. Results revealed that the combination of two positively and two negatively valenced experimental sounds compared to one neutral baseline control elicited BOLD increases in the primary auditory cortex, specifically the bilateral superior temporal gyrus, and left dorsomedial prefrontal cortex; the latter being consistent with a frontal decision-making process common in identification tasks. The negatively-valenced stimuli yielded additional BOLD increases in the left insula, which typically indicates processing of visceral emotions. The positively-valenced stimuli did not yield any significant BOLD activation, consistent with consonant, harmonic stimuli being the prototypical acoustic pattern of auditory objects that is optimal for auditory scene analysis. Both the psychophysical findings of Experiment 1 and the neural processing findings of Experiment 2 support that consonance is an important dimension of sound that is processed in a manner that aids auditory parsing and functional representation of acoustic objects and was found to be a principal feature of pleasing auditory stimuli.
Sensory Intelligence for Extraction of an Abstract Auditory Rule: A Cross-Linguistic Study.
Guo, Xiao-Tao; Wang, Xiao-Dong; Liang, Xiu-Yuan; Wang, Ming; Chen, Lin
2018-02-21
In a complex linguistic environment, while speech sounds can greatly vary, some shared features are often invariant. These invariant features constitute so-called abstract auditory rules. Our previous study has shown that with auditory sensory intelligence, the human brain can automatically extract the abstract auditory rules in the speech sound stream, presumably serving as the neural basis for speech comprehension. However, whether the sensory intelligence for extraction of abstract auditory rules in speech is inherent or experience-dependent remains unclear. To address this issue, we constructed a complex speech sound stream using auditory materials in Mandarin Chinese, in which syllables had a flat lexical tone but differed in other acoustic features to form an abstract auditory rule. This rule was occasionally and randomly violated by the syllables with the rising, dipping or falling tone. We found that both Chinese and foreign speakers detected the violations of the abstract auditory rule in the speech sound stream at a pre-attentive stage, as revealed by the whole-head recordings of mismatch negativity (MMN) in a passive paradigm. However, MMNs peaked earlier in Chinese speakers than in foreign speakers. Furthermore, Chinese speakers showed different MMN peak latencies for the three deviant types, which paralleled recognition points. These findings indicate that the sensory intelligence for extraction of abstract auditory rules in speech sounds is innate but shaped by language experience. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.
Tuning in to the Voices: A Multisite fMRI Study of Auditory Hallucinations
Ford, Judith M.; Roach, Brian J.; Jorgensen, Kasper W.; Turner, Jessica A.; Brown, Gregory G.; Notestine, Randy; Bischoff-Grethe, Amanda; Greve, Douglas; Wible, Cynthia; Lauriello, John; Belger, Aysenil; Mueller, Bryon A.; Calhoun, Vincent; Preda, Adrian; Keator, David; O'Leary, Daniel S.; Lim, Kelvin O.; Glover, Gary; Potkin, Steven G.; Mathalon, Daniel H.
2009-01-01
Introduction: Auditory hallucinations or voices are experienced by 75% of people diagnosed with schizophrenia. We presumed that auditory cortex of schizophrenia patients who experience hallucinations is tonically “tuned” to internal auditory channels, at the cost of processing external sounds, both speech and nonspeech. Accordingly, we predicted that patients who hallucinate would show less auditory cortical activation to external acoustic stimuli than patients who did not. Methods: At 9 Functional Imaging Biomedical Informatics Research Network (FBIRN) sites, whole-brain images from 106 patients and 111 healthy comparison subjects were collected while subjects performed an auditory target detection task. Data were processed with the FBIRN processing stream. A region of interest analysis extracted activation values from primary (BA41) and secondary auditory cortex (BA42), auditory association cortex (BA22), and middle temporal gyrus (BA21). Patients were sorted into hallucinators (n = 66) and nonhallucinators (n = 40) based on symptom ratings done during the previous week. Results: Hallucinators had less activation to probe tones in left primary auditory cortex (BA41) than nonhallucinators. This effect was not seen on the right. Discussion: Although “voices” are the anticipated sensory experience, it appears that even primary auditory cortex is “turned on” and “tuned in” to process internal acoustic information at the cost of processing external sounds. Although this study was not designed to probe cortical competition for auditory resources, we were able to take advantage of the data and find significant effects, perhaps because of the power afforded by such a large sample. PMID:18987102
Wegrzyn, Martin; Herbert, Cornelia; Ethofer, Thomas; Flaisch, Tobias; Kissler, Johanna
2017-11-01
Visually presented emotional words are processed preferentially and effects of emotional content are similar to those of explicit attention deployment in that both amplify visual processing. However, auditory processing of emotional words is less well characterized and interactions between emotional content and task-induced attention have not been fully understood. Here, we investigate auditory processing of emotional words, focussing on how auditory attention to positive and negative words impacts their cerebral processing. A functional magnetic resonance imaging (fMRI) study manipulating word valence and attention allocation was performed. Participants heard negative, positive and neutral words to which they either listened passively or attended by counting negative or positive words, respectively. Regardless of valence, active processing compared to passive listening increased activity in primary auditory cortex, left intraparietal sulcus, and right superior frontal gyrus (SFG). The attended valence elicited stronger activity in left inferior frontal gyrus (IFG) and left SFG, in line with these regions' role in semantic retrieval and evaluative processing. No evidence for valence-specific attentional modulation in auditory regions or distinct valence-specific regional activations (i.e., negative > positive or positive > negative) was obtained. Thus, allocation of auditory attention to positive and negative words can substantially increase their processing in higher-order language and evaluative brain areas without modulating early stages of auditory processing. Inferior and superior frontal brain structures mediate interactions between emotional content, attention, and working memory when prosodically neutral speech is processed. Copyright © 2017 Elsevier Ltd. All rights reserved.
Selective attention in normal and impaired hearing.
Shinn-Cunningham, Barbara G; Best, Virginia
2008-12-01
A common complaint among listeners with hearing loss (HL) is that they have difficulty communicating in common social settings. This article reviews how normal-hearing listeners cope in such settings, especially how they focus attention on a source of interest. Results of experiments with normal-hearing listeners suggest that the ability to selectively attend depends on the ability to analyze the acoustic scene and to form perceptual auditory objects properly. Unfortunately, sound features important for auditory object formation may not be robustly encoded in the auditory periphery of HL listeners. In turn, impaired auditory object formation may interfere with the ability to filter out competing sound sources. Peripheral degradations are also likely to reduce the salience of higher-order auditory cues such as location, pitch, and timbre, which enable normal-hearing listeners to select a desired sound source out of a sound mixture. Degraded peripheral processing is also likely to increase the time required to form auditory objects and focus selective attention so that listeners with HL lose the ability to switch attention rapidly (a skill that is particularly important when trying to participate in a lively conversation). Finally, peripheral deficits may interfere with strategies that normal-hearing listeners employ in complex acoustic settings, including the use of memory to fill in bits of the conversation that are missed. Thus, peripheral hearing deficits are likely to cause a number of interrelated problems that challenge the ability of HL listeners to communicate in social settings requiring selective attention.
Auditory Temporal Processing as a Specific Deficit among Dyslexic Readers
ERIC Educational Resources Information Center
Fostick, Leah; Bar-El, Sharona; Ram-Tsur, Ronit
2012-01-01
The present study focuses on examining the hypothesis that auditory temporal perception deficit is a basic cause for reading disabilities among dyslexics. This hypothesis maintains that reading impairment is caused by a fundamental perceptual deficit in processing rapid auditory or visual stimuli. Since the auditory perception involves a number of…
Enhanced attention-dependent activity in the auditory cortex of older musicians.
Zendel, Benjamin Rich; Alain, Claude
2014-01-01
Musical training improves auditory processing abilities, which correlates with neuro-plastic changes in exogenous (input-driven) and endogenous (attention-dependent) components of auditory event-related potentials (ERPs). Evidence suggests that musicians, compared to non-musicians, experience less age-related decline in auditory processing abilities. Here, we investigated whether lifelong musicianship mitigates exogenous or endogenous processing by measuring auditory ERPs in younger and older musicians and non-musicians while they either attended to auditory stimuli or watched a muted subtitled movie of their choice. Both age and musical training-related differences were observed in the exogenous components; however, the differences between musicians and non-musicians were similar across the lifespan. These results suggest that exogenous auditory ERPs are enhanced in musicians, but decline with age at the same rate. On the other hand, attention-related activity, modeled in the right auditory cortex using a discrete spatiotemporal source analysis, was selectively enhanced in older musicians. This suggests that older musicians use a compensatory strategy to overcome age-related decline in peripheral and exogenous processing of acoustic information. Copyright © 2014 Elsevier Inc. All rights reserved.
Distributed neural signatures of natural audiovisual speech and music in the human auditory cortex.
Salmi, Juha; Koistinen, Olli-Pekka; Glerean, Enrico; Jylänki, Pasi; Vehtari, Aki; Jääskeläinen, Iiro P; Mäkelä, Sasu; Nummenmaa, Lauri; Nummi-Kuisma, Katarina; Nummi, Ilari; Sams, Mikko
2017-08-15
During a conversation or when listening to music, auditory and visual information are combined automatically into audiovisual objects. However, it is still poorly understood how specific types of visual information shape neural processing of sounds in lifelike stimulus environments. Here we applied multi-voxel pattern analysis to investigate how naturally matching visual input modulates supratemporal cortex activity during processing of naturalistic acoustic speech, singing and instrumental music. Bayesian logistic regression classifiers with sparsity-promoting priors were trained to predict whether the stimulus was audiovisual or auditory, and whether it contained piano playing, speech, or singing. The predictive performance of the classifiers was tested by leaving out one participant at a time for testing and training the model on the remaining 15 participants. The signature patterns associated with unimodal auditory stimuli encompassed distributed locations mostly in the middle and superior temporal gyrus (STG/MTG). A pattern regression analysis, based on a continuous acoustic model, revealed that activity in some of these MTG and STG areas was associated with acoustic features present in speech and music stimuli. Concurrent visual stimulation modulated activity in bilateral MTG (speech), the lateral aspect of the right anterior STG (singing), and bilateral parietal opercular cortex (piano). Our results suggest that specific supratemporal brain areas are involved in processing complex natural speech, singing, and piano playing, and that other brain areas located in anterior (facial speech) and posterior (music-related hand actions) supratemporal cortex are influenced by related visual information. Those anterior and posterior supratemporal areas have been linked to stimulus identification and sensory-motor integration, respectively. Copyright © 2017 Elsevier Inc. All rights reserved.
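The classification scheme described here (sparse logistic regression with leave-one-participant-out testing) can be caricatured with an L1-penalised logistic regression and grouped cross-validation. The sketch below uses scikit-learn as a non-Bayesian stand-in for the sparsity-promoting priors; the feature matrix, labels, trial counts, and regularisation strength are all hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut

# Hypothetical data: voxel-pattern features per trial, labels
# (0 = auditory, 1 = audiovisual), and a subject index per trial.
rng = np.random.default_rng(1)
n_trials, n_voxels, n_subjects = 320, 500, 16
X = rng.normal(size=(n_trials, n_voxels))
y = rng.integers(0, 2, size=n_trials)
groups = np.repeat(np.arange(n_subjects), n_trials // n_subjects)

clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)  # sparse weights
accs = []
for train, test in LeaveOneGroupOut().split(X, y, groups):
    clf.fit(X[train], y[train])        # train on 15 subjects
    accs.append(clf.score(X[test], y[test]))  # test on the held-out subject

print(f"leave-one-subject-out accuracy: {np.mean(accs):.2f}")
```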
Detecting changes in dynamic and complex acoustic environments
Boubenec, Yves; Lawlor, Jennifer; Górska, Urszula; Shamma, Shihab; Englitz, Bernhard
2017-01-01
Natural sounds, such as wind or rain, are characterized by the statistical occurrence of their constituents. Despite their complexity, listeners readily detect changes in these contexts. Here we address the neural basis of statistical decision-making using a combination of psychophysics, EEG and modelling. In a texture-based, change-detection paradigm, human performance and reaction times improved with longer pre-change exposure, consistent with improved estimation of baseline statistics. Change-locked and decision-related EEG responses were found at a centro-parietal scalp location, whose slope depended on change size, consistent with sensory evidence accumulation. The potential's amplitude scaled with the duration of pre-change exposure, suggesting a time-dependent decision threshold. Auditory cortex-related potentials showed no response to the change. A dual-timescale statistical estimation model accounted for subjects' performance. Furthermore, a decision-augmented auditory cortex model accounted for performance and reaction times, suggesting that the primary cortical representation requires little post-processing to enable change detection in complex acoustic environments. DOI: http://dx.doi.org/10.7554/eLife.24910.001 PMID:28262095
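One way to picture a dual-timescale statistical estimation scheme is to compare a fast running estimate of a texture statistic against a slowly adapting baseline estimate and flag a change when the two diverge. The sketch below is such a caricature, not the published model; the smoothing constants, threshold, and input statistic are hypothetical.

```python
import numpy as np

def detect_change(feature, fast=0.2, slow=0.02, threshold=1.5):
    """Flag a change when a fast running mean of a texture statistic
    departs from a slow (baseline) running mean by more than `threshold`."""
    fast_est = slow_est = feature[0]
    for i, x in enumerate(feature[1:], start=1):
        fast_est += fast * (x - fast_est)    # short-timescale estimate
        slow_est += slow * (x - slow_est)    # long-timescale baseline
        if abs(fast_est - slow_est) > threshold:
            return i                         # first sample index flagged
    return None

# Hypothetical statistic (e.g., mean tone density) that steps up at sample 300.
rng = np.random.default_rng(2)
stat = np.concatenate([rng.normal(5, 1, 300), rng.normal(8, 1, 200)])
print(detect_change(stat))   # typically flags shortly after sample 300
```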
Action planning and predictive coding when speaking
Wang, Jun; Mathalon, Daniel H.; Roach, Brian J.; Reilly, James; Keedy, Sarah; Sweeney, John A.; Ford, Judith M.
2014-01-01
Across the animal kingdom, sensations resulting from an animal's own actions are processed differently from sensations resulting from external sources, with self-generated sensations being suppressed. A forward model has been proposed to explain this process across sensorimotor domains. During vocalization, reduced processing of one's own speech is believed to result from a comparison of speech sounds to corollary discharges of intended speech production generated from efference copies of commands to speak. Until now, anatomical and functional evidence validating this model in humans has been indirect. Using EEG with anatomical MRI to facilitate source localization, we demonstrate that inferior frontal gyrus activity during the 300ms before speaking was associated with suppressed processing of speech sounds in auditory cortex around 100ms after speech onset (N1). These findings indicate that an efference copy from speech areas in prefrontal cortex is transmitted to auditory cortex, where it is used to suppress processing of anticipated speech sounds. About 100ms after N1, a subsequent auditory cortical component (P2) was not suppressed during talking. The combined N1 and P2 effects suggest that although sensory processing is suppressed as reflected in N1, perceptual gaps are filled as reflected in the lack of P2 suppression, explaining the discrepancy between sensory suppression and preserved sensory experiences. These findings, coupled with the coherence between relevant brain regions before and during speech, provide new mechanistic understanding of the complex interactions between action planning and sensory processing that provide for differentiated tagging and monitoring of one's own speech, processes disrupted in neuropsychiatric disorders. PMID:24423729
1981-07-10
Pohlmann, L. D. Some models of observer behavior in two-channel auditory signal detection. Perception and Psychophysics, 1973, 14, 101-109. Spelke...spatial), and processing modalities (auditory versus visual input, vocal versus manual response). If validated, this configuration has both theoretical...conclusion that auditory and visual processes will compete, as will spatial and verbal (albeit to a lesser extent than auditory-auditory, visual-visual
ERIC Educational Resources Information Center
Kuppen, Sarah; Huss, Martina; Fosker, Tim; Fegan, Natasha; Goswami, Usha
2011-01-01
We explore the relationships between basic auditory processing, phonological awareness, vocabulary, and word reading in a sample of 95 children, 55 typically developing children, and 40 children with low IQ. All children received nonspeech auditory processing tasks, phonological processing and literacy measures, and a receptive vocabulary task.…
The influence of (central) auditory processing disorder in speech sound disorders.
Barrozo, Tatiane Faria; Pagan-Neves, Luciana de Oliveira; Vilela, Nadia; Carvallo, Renata Mota Mamede; Wertzner, Haydée Fiszbein
2016-01-01
Considering the importance of auditory information for the acquisition and organization of phonological rules, the assessment of (central) auditory processing contributes to both the diagnosis and the targeting of speech therapy in children with speech sound disorders. The aim was to study phonological measures and (central) auditory processing in children with speech sound disorder. This was a clinical and experimental study of 21 subjects with speech sound disorder, aged between 7.0 and 9.11 years, divided into two groups according to the presence or absence of (central) auditory processing disorder. The assessment comprised tests of phonology, speech inconsistency, and metalinguistic abilities. The group with (central) auditory processing disorder demonstrated greater severity of speech sound disorder. The cutoff value obtained for the process density index was the one that best characterized the occurrence of phonological processes for children above 7 years of age. The comparison of the evaluated tests between the two groups showed differences in some phonological and metalinguistic abilities. Children with an index value above 0.54 demonstrated strong tendencies towards presenting a (central) auditory processing disorder, and this measure was effective in indicating the need for evaluation in children with speech sound disorder. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Neural Responses to Complex Auditory Rhythms: The Role of Attending
Chapin, Heather L.; Zanto, Theodore; Jantzen, Kelly J.; Kelso, Scott J. A.; Steinberg, Fred; Large, Edward W.
2010-01-01
The aim of this study was to explore the role of attention in pulse and meter perception using complex rhythms. We used a selective attention paradigm in which participants attended to either a complex auditory rhythm or a visually presented word list. Performance on a reproduction task was used to gauge whether participants were attending to the appropriate stimulus. We hypothesized that attention to complex rhythms – which contain no energy at the pulse frequency – would lead to activations in motor areas involved in pulse perception. Moreover, because multiple repetitions of a complex rhythm are needed to perceive a pulse, activations in pulse-related areas would be seen only after sufficient time had elapsed for pulse perception to develop. Selective attention was also expected to modulate activity in sensory areas specific to the modality. We found that selective attention to rhythms led to increased BOLD responses in basal ganglia, and basal ganglia activity was observed only after the rhythms had cycled enough times for a stable pulse percept to develop. These observations suggest that attention is needed to recruit motor activations associated with the perception of pulse in complex rhythms. Moreover, attention to the auditory stimulus enhanced activity in an attentional sensory network including primary auditory cortex, insula, anterior cingulate, and prefrontal cortex, and suppressed activity in sensory areas associated with attending to the visual stimulus. PMID:21833279
Sensitivity and specificity of auditory steady‐state response testing
Rabelo, Camila Maia; Schochat, Eliane
2011-01-01
INTRODUCTION: The ASSR test is an electrophysiological test that evaluates, among other aspects, neural synchrony, based on the frequency or amplitude modulation of tones. OBJECTIVE: The aim of this study was to determine the sensitivity and specificity of auditory steady‐state response testing in detecting lesions and dysfunctions of the central auditory nervous system. METHODS: Seventy volunteers were divided into three groups: those with normal hearing; those with mesial temporal sclerosis; and those with central auditory processing disorder. All subjects underwent auditory steady‐state response testing of both ears at 500 Hz and 2000 Hz (frequency modulation, 46 Hz). The difference between auditory steady‐state response‐estimated thresholds and behavioral thresholds (audiometric evaluation) was calculated. RESULTS: Estimated thresholds were significantly higher in the mesial temporal sclerosis group than in the normal and central auditory processing disorder groups. In addition, the difference between auditory steady‐state response‐estimated and behavioral thresholds was greatest in the mesial temporal sclerosis group when compared to the normal group than in the central auditory processing disorder group compared to the normal group. DISCUSSION: Research focusing on central auditory nervous system (CANS) lesions has shown that individuals with CANS lesions present a greater difference between ASSR‐estimated thresholds and actual behavioral thresholds; ASSR‐estimated thresholds being significantly worse than behavioral thresholds in subjects with CANS insults. This is most likely because the disorder prevents the transmission of the sound stimulus from being in phase with the received stimulus, resulting in asynchronous transmitter release. Another possible cause of the greater difference between the ASSR‐estimated thresholds and the behavioral thresholds is impaired temporal resolution. CONCLUSIONS: The overall sensitivity of auditory steady‐state response testing was lower than its overall specificity. Although the overall specificity was high, it was lower in the central auditory processing disorder group than in the mesial temporal sclerosis group. Overall sensitivity was also lower in the central auditory processing disorder group than in the mesial temporal sclerosis group. PMID:21437442
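Sensitivity and specificity of a diagnostic criterion such as an ASSR-minus-behavioural threshold gap follow directly from their definitions: sensitivity = true positives / all affected, specificity = true negatives / all unaffected. The sketch below applies a hypothetical criterion to hypothetical data; neither the values nor the cutoff are taken from this study.

```python
import numpy as np

# Hypothetical ASSR-minus-behavioural threshold differences (dB) and true
# status (1 = CANS lesion/dysfunction, 0 = normal); criterion is illustrative.
diff_db = np.array([22, 18, 30, 8, 5, 12, 25, 6, 9, 28])
lesion  = np.array([ 1,  1,  1, 0, 0,  0,  1, 0, 1,  1])
criterion = 15                                  # flag "positive" above this gap

positive = diff_db > criterion
sensitivity = np.mean(positive[lesion == 1])    # true positives / all affected
specificity = np.mean(~positive[lesion == 0])   # true negatives / all unaffected
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```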
Dong, Xuebao; Suo, Puxia; Yuan, Xin; Yao, Xuefeng
2015-01-01
Auditory evoked potentials (AEPs) have been used as a measure of the depth of anesthesia during the intra-operative process. AEPs are classically divided, on the basis of their latency, into first, fast, middle, slow, and late components. The use of auditory evoked potentials has been advocated for the assessment of intra-operative awareness (IOA), but has not been taken seriously enough to become universal practice. This is because the impact of auditory perception and auditory processing on IOA, as well as the subsequent psychological impact of IOA on the patient, has not been explored sufficiently. More importantly, the phenomena of IOA have seldom been examined from the perspective of consciousness itself. This perspective is especially important because many IOA phenomena exist in the subconscious domain rather than in the conscious domain of explicit recall. Two important forms of these subconscious manifestations of IOA are implicit recall and post-operative dreams related to the operation. Here, we present an integrated auditory consciousness-based model of IOA. We start with a brief description of auditory awareness and the factors affecting it. We then proceed to the evaluation of conscious and subconscious information processing in the auditory modality and how they interact during and after the intra-operative period. Further, we show that both conscious and subconscious auditory processing affect the IOA experience and that both subsequently have serious psychological implications for the patient. These effects could be prevented by using auditory evoked potentials during anesthesia monitoring, especially mid-latency auditory evoked potentials (MLAEPs). To conclude, we propose that the use of auditory evoked potentials should be universal with general anesthesia in order to prevent the distressing outcomes that result from both conscious and subconscious auditory processing during anesthesia.
Auditory Processing Disorder and Foreign Language Acquisition
ERIC Educational Resources Information Center
Veselovska, Ganna
2015-01-01
This article aims at exploring various strategies for coping with the auditory processing disorder in the light of foreign language acquisition. The techniques relevant to dealing with the auditory processing disorder can be attributed to environmental and compensatory approaches. The environmental one involves actions directed at creating a…
Auditory processing deficits in individuals with primary open-angle glaucoma.
Rance, Gary; O'Hare, Fleur; O'Leary, Stephen; Starr, Arnold; Ly, Anna; Cheng, Belinda; Tomlin, Dani; Graydon, Kelley; Chisari, Donella; Trounce, Ian; Crowston, Jonathan
2012-01-01
The high energy demand of the auditory and visual pathways renders these sensory systems prone to diseases that impair mitochondrial function. Primary open-angle glaucoma, a neurodegenerative disease of the optic nerve, has recently been associated with a spectrum of mitochondrial abnormalities. This study sought to investigate auditory processing in individuals with open-angle glaucoma. Twenty-seven subjects with open-angle glaucoma underwent electrophysiologic (auditory brainstem response), auditory temporal processing (amplitude modulation detection), and speech perception (monosyllabic words in quiet and background noise) assessment in each ear. A cohort of age-, gender- and hearing-level-matched control subjects was also tested. While the majority of glaucoma subjects in this study demonstrated normal auditory function, a significant number (6/27 subjects, 22%) showed abnormal auditory brainstem responses and impaired auditory perception in one or both ears. The finding that a significant proportion of subjects with open-angle glaucoma presented with auditory dysfunction provides evidence of systemic neuronal susceptibility. Affected individuals may suffer significant communication difficulties in everyday listening situations.
The Perception of Concurrent Sound Objects in Harmonic Complexes Impairs Gap Detection
ERIC Educational Resources Information Center
Leung, Ada W. S.; Jolicoeur, Pierre; Vachon, Francois; Alain, Claude
2011-01-01
Since the introduction of the concept of auditory scene analysis, there has been a paucity of work focusing on the theoretical explanation of how attention is allocated within a complex auditory scene. Here we examined signal detection in situations that promote either the fusion of tonal elements into a single sound object or the segregation of a…
Decoding power-spectral profiles from FMRI brain activities during naturalistic auditory experience.
Hu, Xintao; Guo, Lei; Han, Junwei; Liu, Tianming
2017-02-01
Recent studies have demonstrated a close relationship between computational acoustic features and neural brain activities, and have largely advanced our understanding of auditory information processing in the human brain. Along this line, we proposed a multidisciplinary study to examine whether power spectral density (PSD) profiles can be decoded from brain activities during naturalistic auditory experience. The study was performed on a high-resolution functional magnetic resonance imaging (fMRI) dataset acquired while participants freely listened to the audio-description of the movie "Forrest Gump". Representative PSD profiles existing in the audio-movie were identified by clustering the audio samples according to their PSD descriptors. Support vector machine (SVM) classifiers were trained to differentiate the representative PSD profiles using the corresponding fMRI brain activities. Based on PSD profile decoding, we explored how the neural decodability correlated with power intensity and frequency deviants. Our experimental results demonstrated that PSD profiles can be reliably decoded from brain activities. We also suggested a sigmoidal relationship between the neural decodability and power intensity deviants of PSD profiles. Our study additionally substantiates the feasibility and advantage of the naturalistic paradigm for studying neural encoding of complex auditory information.
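For readers who want a concrete picture of the decoding pipeline described above (PSD descriptors, clustering into representative profiles, SVM classification of fMRI activity), the following is a minimal sketch. The arrays `audio_psd` and `fmri_beta`, the cluster count, and the classifier settings are illustrative assumptions, not the authors' code, and the placeholder data are random.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical inputs (random placeholders, not from the paper):
#   audio_psd : (n_samples, n_freq_bins) PSD descriptor per audio sample
#   fmri_beta : (n_samples, n_voxels)    fMRI activity aligned to the same samples
rng = np.random.default_rng(0)
audio_psd = rng.random((200, 64))
fmri_beta = rng.random((200, 500))

# 1) Identify representative PSD profiles by clustering the audio descriptors.
n_profiles = 4                    # assumed number of representative profiles
labels = KMeans(n_clusters=n_profiles, n_init=10, random_state=0).fit_predict(audio_psd)

# 2) Train an SVM to differentiate the PSD profiles from fMRI activity and
#    estimate decoding accuracy with cross-validation.
decoder = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
accuracy = cross_val_score(decoder, fmri_beta, labels, cv=5)
print(f"mean decoding accuracy: {accuracy.mean():.2f}")
```

Cross-validated accuracy reliably above chance (here 1 / n_profiles) is what "reliable decoding of PSD profiles" amounts to in this kind of analysis.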
Neurobiology of Everyday Communication: What Have We Learned From Music?
Kraus, Nina; White-Schwoch, Travis
2016-06-09
Sound is an invisible but powerful force that is central to everyday life. Studies in the neurobiology of everyday communication seek to elucidate the neural mechanisms underlying sound processing, their stability, their plasticity, and their links to language abilities and disabilities. This sound processing lies at the nexus of cognitive, sensorimotor, and reward networks. Music provides a powerful experimental model to understand these biological foundations of communication, especially with regard to auditory learning. We review studies of music training that employ a biological approach to reveal the integrity of sound processing in the brain, the bearing these mechanisms have on everyday communication, and how these processes are shaped by experience. Together, these experiments illustrate that music works in synergistic partnerships with language skills and the ability to make sense of speech in complex, everyday listening environments. The active, repeated engagement with sound demanded by music making augments the neural processing of speech, eventually cascading to listening and language. This generalization from music to everyday communications illustrates both that these auditory brain mechanisms have a profound potential for plasticity and that sound processing is biologically intertwined with listening and language skills. A new wave of studies has pushed neuroscience beyond the traditional laboratory by revealing the effects of community music training in underserved populations. These community-based studies reinforce laboratory work and highlight how the auditory system achieves a remarkable balance between stability and flexibility in processing speech. Moreover, these community studies have the potential to inform health care, education, and social policy by lending a neurobiological perspective to their efficacy. © The Author(s) 2016.
Auditory Processing Disorders: An Overview. ERIC Digest.
ERIC Educational Resources Information Center
Ciocci, Sandra R.
This digest presents an overview of children with auditory processing disorders (APDs), children who can typically hear information but have difficulty attending to, storing, locating, retrieving, and/or clarifying that information to make it useful for academic and social purposes. The digest begins by describing central auditory processing and…
Black, Emily; Stevenson, Jennifer L; Bish, Joel P
2017-08-01
The global precedence effect is a phenomenon in which global aspects of visual and auditory stimuli are processed before local aspects. Individuals with musical experience perform better on all aspects of auditory tasks compared with individuals with less musical experience. The hemispheric lateralization of this auditory processing is less well-defined. The present study aimed to replicate the global precedence effect with auditory stimuli and to explore the lateralization of global and local auditory processing in individuals with differing levels of musical experience. A total of 38 college students completed an auditory-directed attention task while electroencephalography was recorded. Individuals with low musical experience responded significantly faster and more accurately in global trials than in local trials regardless of condition, and significantly faster and more accurately when pitches traveled in the same direction (compatible condition) than when pitches traveled in two different directions (incompatible condition) consistent with a global precedence effect. In contrast, individuals with high musical experience showed less of a global precedence effect with regards to accuracy, but not in terms of reaction time, suggesting an increased ability to overcome global bias. Further, a difference in P300 latency between hemispheres was observed. These findings provide a preliminary neurological framework for auditory processing of individuals with differing degrees of musical experience.
Mismatch negativity to acoustical illusion of beat: how and where the change detection takes place?
Chakalov, Ivan; Paraskevopoulos, Evangelos; Wollbrink, Andreas; Pantev, Christo
2014-10-15
When two tones with slightly different frequencies are presented binaurally, brainstem structures can no longer follow the interaural time differences (ITDs), resulting in an illusory perception of a beat corresponding to the frequency difference between the two prime tones. Hence, the beat frequency does not exist in the prime tones presented to either ear. This study used binaural beats to explore the nature of acoustic deviance detection in humans by means of magnetoencephalography (MEG). Recent research suggests that auditory change detection is a multistage process. To test this, we employed 26 Hz binaural beats in a classical oddball paradigm. However, the prime tones (250 Hz and 276 Hz) were switched between the ears in the case of the deviant beat. Consequently, when the deviant is presented, the cochleae and auditory nerves receive a "new afferent" input, although the standards and the deviants are heard as identical (26 Hz beats). This allowed us to explore the contribution of the auditory periphery to the change detection process and, furthermore, to evaluate its influence on beat-related auditory steady-state responses (ASSRs). LORETA source current density estimates of the evoked fields in a typical mismatch negativity (MMN) time window and the subsequent difference-ASSRs were determined and compared. The results revealed an MMN generated by a complex neural network including the right parietal lobe and the left middle frontal gyrus. Furthermore, the difference-ASSR was generated in the paracentral gyrus. Additionally, psychophysical measures showed no perceptual difference between the standard and deviant beats when isolated by noise. These results suggest that the auditory periphery makes an important contribution to novelty detection already at the subcortical level. Overall, the present findings support the notion of a hierarchically organized acoustic novelty detection system. Copyright © 2014 Elsevier Inc. All rights reserved.
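As an illustration of the stimulus construction described above, the standard and deviant binaural-beat stimuli (250 Hz and 276 Hz prime tones yielding a 26 Hz beat, with the prime tones swapped between ears for the deviant) could be generated roughly as follows. The sampling rate and duration are assumptions, and no ramping or level calibration is included.

```python
import numpy as np

fs = 44100                       # assumed sampling rate (Hz)
dur = 1.0                        # assumed stimulus duration (s)
t = np.arange(int(fs * dur)) / fs

def tone(freq_hz):
    """Pure tone at freq_hz, unit amplitude."""
    return np.sin(2 * np.pi * freq_hz * t)

# Standard: 250 Hz to the left ear, 276 Hz to the right ear -> 26 Hz binaural beat.
standard = np.column_stack([tone(250.0), tone(276.0)])

# Deviant: the prime tones are switched between the ears; the 26 Hz beat percept
# is unchanged, but each cochlea receives a "new afferent" input.
deviant = np.column_stack([tone(276.0), tone(250.0)])
```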
"Change deafness" arising from inter-feature masking within a single auditory object.
Barascud, Nicolas; Griffiths, Timothy D; McAlpine, David; Chait, Maria
2014-03-01
Our ability to detect prominent changes in complex acoustic scenes depends not only on the ear's sensitivity but also on the capacity of the brain to process competing incoming information. Here, employing a combination of psychophysics and magnetoencephalography (MEG), we investigate listeners' sensitivity in situations when two features belonging to the same auditory object change in close succession. The auditory object under investigation is a sequence of tone pips characterized by a regularly repeating frequency pattern. Signals consisted of an initial, regularly alternating sequence of three short (60 msec) pure tone pips (in the form ABCABC…) followed by a long pure tone with a frequency that is either expected based on the ongoing regular pattern ("LONG-expected") or constitutes a pattern violation ("LONG-unexpected"). The change in LONG-expected is manifest as a change in duration (when the long pure tone exceeds the established duration of a tone pip), whereas the change in LONG-unexpected is manifest as a change in both the frequency pattern and the duration. Our results reveal a form of "change deafness": although changes in both the frequency pattern and the expected duration appear to be processed effectively by the auditory system (cortical signatures of both changes are evident in the MEG data), listeners often fail to detect changes in the frequency pattern when that change is closely followed by a change in duration. By systematically manipulating the properties of the changing features and measuring behavioral and MEG responses, we demonstrate that feature changes within the same auditory object, which occur close together in time, appear to compete for perceptual resources.
Auditory Reserve and the Legacy of Auditory Experience
Skoe, Erika; Kraus, Nina
2014-01-01
Musical training during childhood has been linked to more robust encoding of sound later in life. We take this as evidence for an auditory reserve: a mechanism by which individuals capitalize on earlier life experiences to promote auditory processing. We assert that early auditory experiences guide how the reserve develops and is maintained over the lifetime. Experiences that occur after childhood, or which are limited in nature, are theorized to affect the reserve, although their influence on sensory processing may be less long-lasting and may potentially fade over time if not repeated. This auditory reserve may help to explain individual differences in how individuals cope with auditory impoverishment or loss of sensorineural function. PMID:25405381
Phonological Processing in Human Auditory Cortical Fields
Woods, David L.; Herron, Timothy J.; Cate, Anthony D.; Kang, Xiaojian; Yund, E. W.
2011-01-01
We used population-based cortical-surface analysis of functional magnetic resonance imaging data to characterize the processing of consonant–vowel–consonant syllables (CVCs) and spectrally matched amplitude-modulated noise bursts (AMNBs) in human auditory cortex as subjects attended to auditory or visual stimuli in an intermodal selective attention paradigm. Average auditory cortical field (ACF) locations were defined using tonotopic mapping in a previous study. Activations in auditory cortex were defined by two stimulus-preference gradients: (1) medial belt ACFs preferred AMNBs whereas lateral belt and parabelt fields preferred CVCs; this preference extended into core ACFs, with medial regions of primary auditory cortex (A1) and the rostral field preferring AMNBs and lateral regions preferring CVCs. (2) Anterior ACFs showed smaller activations but more clearly defined stimulus preferences than did posterior ACFs. Stimulus preference gradients were unaffected by auditory attention, suggesting that ACF preferences reflect the automatic processing of different spectrotemporal sound features. PMID:21541252
Plasticity in the adult human auditory brainstem following short-term linguistic training
Song, Judy H.; Skoe, Erika; Wong, Patrick C. M.; Kraus, Nina
2009-01-01
Peripheral and central structures along the auditory pathway contribute to speech processing and learning. However, because speech requires the use of functionally and acoustically complex sounds, which places high sensory and cognitive demands on the listener, long-term exposure to and experience with these sounds is often attributed to the neocortex, with little emphasis placed on subcortical structures. The present study examines changes in the auditory brainstem, specifically the frequency following response (FFR), as native English-speaking adults learn to incorporate foreign speech sounds (lexical pitch patterns) in word identification. The FFR presumably originates from the auditory midbrain and can be elicited pre-attentively. We measured FFRs to the trained pitch patterns before and after training. Measures of pitch-tracking were then derived from the FFR signals. We found increased accuracy in pitch-tracking after training, including a decrease in the number of pitch-tracking errors and a refinement in the energy devoted to encoding pitch. Most interestingly, this change in pitch-tracking accuracy occurred only for the most acoustically complex pitch contour (the dipping contour), which is also the least familiar to our English-speaking subjects. These results not only demonstrate the contribution of the brainstem to language learning and its plasticity in adulthood, but also demonstrate the specificity of this contribution (i.e., changes in encoding occur only for specific, least familiar stimuli, not all stimuli). Our findings complement existing data showing cortical changes after second language learning, and are consistent with models suggesting that brainstem changes resulting from perceptual learning are most apparent when acuity in encoding is most needed. PMID:18370594
Ciaramitaro, Vivian M; Chow, Hiu Mei; Eglington, Luke G
2017-03-01
We used a cross-modal dual task to examine how changing visual-task demands influenced auditory processing, namely auditory thresholds for amplitude- and frequency-modulated sounds. Observers had to attend to two consecutive intervals of sounds and report which interval contained the auditory stimulus that was modulated in amplitude (Experiment 1) or frequency (Experiment 2). During auditory-stimulus presentation, observers simultaneously attended to a rapid sequential visual presentation (two consecutive intervals of streams of visual letters) and had to report which interval contained a particular color (low load, demanding less attentional resources) or, in separate blocks of trials, which interval contained more of a target letter (high load, demanding more attentional resources). We hypothesized that if attention is a shared resource across vision and audition, an easier visual task should free up more attentional resources for auditory processing on an unrelated task, hence improving auditory thresholds. Auditory detection thresholds were lower (that is, auditory sensitivity was improved) for both amplitude- and frequency-modulated sounds when observers engaged in a less demanding (compared to a more demanding) visual task. In accord with previous work, our findings suggest that visual-task demands can influence the processing of auditory information on an unrelated concurrent task, providing support for shared attentional resources. More importantly, our results suggest that attending to information in a different modality, cross-modal attention, can influence basic auditory contrast sensitivity functions, highlighting potential similarities between basic mechanisms for visual and auditory attention.
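For illustration, amplitude- and frequency-modulated tones of the general kind used in such detection tasks can be synthesized as in the sketch below; the carrier frequency, modulation rate, depth, and sampling rate are arbitrary assumptions rather than the stimulus parameters of this study.

```python
import numpy as np

fs = 44100                              # assumed sampling rate (Hz)
t = np.arange(int(fs * 0.5)) / fs       # 0.5 s stimulus
fc, fm = 1000.0, 8.0                    # assumed carrier and modulation frequencies (Hz)

# Amplitude modulation: the envelope varies at fm with modulation depth m.
m = 0.5
am_tone = (1.0 + m * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

# Frequency modulation: the instantaneous frequency swings by +/- df around fc,
# using the modulation index beta = df / fm in the phase term.
df = 50.0
fm_tone = np.sin(2 * np.pi * fc * t + (df / fm) * np.sin(2 * np.pi * fm * t))
```

Detection thresholds in such tasks are typically expressed as the smallest m (for AM) or df (for FM) the listener can reliably detect.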
Brian hears: online auditory processing using vectorization over channels.
Fontaine, Bertrand; Goodman, Dan F M; Benichoux, Victor; Brette, Romain
2011-01-01
The human cochlea includes about 3000 inner hair cells which filter sounds at frequencies between 20 Hz and 20 kHz. This massively parallel frequency analysis is reflected in models of auditory processing, which are often based on banks of filters. However, existing implementations do not exploit this parallelism. Here we propose algorithms to simulate these models by vectorizing computation over frequency channels, which are implemented in "Brian Hears," a library for the spiking neural network simulator package "Brian." This approach allows us to use high-level programming languages such as Python, because with vectorized operations, the computational cost of interpretation represents a small fraction of the total cost. This makes it possible to define and simulate complex models in a simple way, while all previous implementations were model-specific. In addition, we show that these algorithms can be naturally parallelized using graphics processing units, yielding substantial speed improvements. We demonstrate these algorithms with several state-of-the-art cochlear models, and show that they compare favorably with existing, less flexible, implementations.
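The central idea, vectorizing the filter computation across frequency channels so that the per-sample update operates on all channels at once, can be illustrated independently of the library with plain NumPy. The biquad bandpass design and channel spacing below are illustrative assumptions and are not the Brian Hears implementation.

```python
import numpy as np

def bandpass_bank(x, fs, centre_freqs, q=5.0):
    """Filter a mono signal through a bank of biquad bandpass filters.

    The time loop runs in Python, but each step updates every channel at once
    with vectorized NumPy operations (the "vectorization over channels" idea).
    Returns an array of shape (n_channels, len(x)).
    """
    f0 = np.asarray(centre_freqs, dtype=float)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    a0 = 1 + alpha
    b0, b1, b2 = alpha / a0, 0.0, -alpha / a0        # constant-skirt bandpass biquad
    a1, a2 = -2 * np.cos(w0) / a0, (1 - alpha) / a0

    y = np.zeros((f0.size, x.size))
    z1 = np.zeros(f0.size)
    z2 = np.zeros(f0.size)
    for n, xn in enumerate(x):                       # loop over samples
        yn = b0 * xn + z1                            # all channels updated together
        z1 = b1 * xn - a1 * yn + z2
        z2 = b2 * xn - a2 * yn
        y[:, n] = yn
    return y

# Example: 64 channels, log-spaced between 100 Hz and 8 kHz, applied to 1 s of noise.
fs = 16000
cf = np.geomspace(100, 8000, 64)
noise = np.random.default_rng(0).standard_normal(fs)
out = bandpass_bank(noise, fs, cf)
```

Because each step of the sample loop is a single vectorized operation over all channels, the interpreter overhead is amortized across the whole filterbank, which is the effect the abstract describes.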
Trainor, Laurel J; Marie, Céline; Bruce, Ian C; Bidelman, Gavin M
2014-02-01
Natural auditory environments contain multiple simultaneously-sounding objects and the auditory system must parse the incoming complex sound wave they collectively create into parts that represent each of these individual objects. Music often similarly requires processing of more than one voice or stream at the same time, and behavioral studies demonstrate that human listeners show a systematic perceptual bias toward processing the highest voice in multi-voiced music. Here, we review studies utilizing event-related brain potentials (ERPs), which support the notions that (1) separate memory traces are formed for two simultaneous voices (even without conscious awareness) in auditory cortex and (2) adults show more robust encoding (i.e., larger ERP responses) to deviant pitches in the higher than in the lower voice, indicating better encoding of the former. Furthermore, infants also show this high-voice superiority effect, suggesting that the perceptual dominance observed across studies might result from neurophysiological characteristics of the peripheral auditory system. Although musically untrained adults show smaller responses in general than musically trained adults, both groups similarly show a more robust cortical representation of the higher than of the lower voice. Finally, years of experience playing a bass-range instrument reduces but does not reverse the high-voice superiority effect, indicating that although it can be modified, it is not highly neuroplastic. New modeling experiments examined the possibility that characteristics of middle-ear filtering and cochlear dynamics (e.g., suppression) reflected in auditory nerve (AN) firing patterns might account for the high-voice superiority effect. Simulations show that both place and temporal AN coding schemes predict a high-voice superiority across a wide range of interval spacings and registers. Collectively, we infer an innate, peripheral origin for the high-voice superiority observed in human ERP and psychophysical music listening studies. Copyright © 2013 Elsevier B.V. All rights reserved.
Double dissociation of 'what' and 'where' processing in auditory cortex.
Lomber, Stephen G; Malhotra, Shveta
2008-05-01
Studies of cortical connections or neuronal function in different cerebral areas support the hypothesis that parallel cortical processing streams, similar to those identified in visual cortex, may exist in the auditory system. However, this model has not yet been behaviorally tested. We used reversible cooling deactivation to investigate whether the individual regions in cat nonprimary auditory cortex that are responsible for processing the pattern of an acoustic stimulus or localizing a sound in space could be doubly dissociated in the same animal. We found that bilateral deactivation of the posterior auditory field resulted in deficits in a sound-localization task, whereas bilateral deactivation of the anterior auditory field resulted in deficits in a pattern-discrimination task, but not vice versa. These findings support a model of cortical organization that proposes that identifying an acoustic stimulus ('what') and its spatial location ('where') are processed in separate streams in auditory cortex.
Linguistic processing in idiopathic generalized epilepsy: an auditory event-related potential study.
Henkin, Yael; Kishon-Rabin, Liat; Pratt, Hillel; Kivity, Sara; Sadeh, Michelle; Gadoth, Natan
2003-09-01
Auditory processing of increasing acoustic and linguistic complexity was assessed in children with idiopathic generalized epilepsy (IGE) by using auditory event-related potentials (AERPs) as well as reaction time and performance accuracy. Twenty-four children with IGE [12 with generalized tonic-clonic seizures (GTCSs), and 12 with absence seizures (ASs)] with average intelligence and age-appropriate scholastic skills, uniformly medicated with valproic acid (VPA), and 20 healthy controls, performed oddball discrimination tasks that consisted of the following stimuli: (a) pure tones; (b) nonmeaningful monosyllables that differed by their phonetic features (i.e., phonetic stimuli); and (c) meaningful monosyllabic words from two semantic categories (i.e., semantic stimuli). AERPs elicited by nonlinguistic stimuli were similar in healthy and epilepsy children, whereas those elicited by linguistic stimuli (i.e., phonetic and semantic) differed significantly in latency, amplitude, and scalp distribution. In children with GTCSs, phonetic and semantic processing were characterized by slower processing time, manifested by prolonged N2 and P3 latencies during phonetic processing, and prolongation of all AERPs latencies during semantic processing. In children with ASs, phonetic and semantic processing were characterized by increased allocation of attentional resources, manifested by enhanced N2 amplitudes. Semantic processing also was characterized by prolonged P3 latency. In both patient groups, processing of linguistic stimuli resulted in different patterns of brain-activity lateralization compared with that in healthy controls. Reaction time and performance accuracy did not differ among the study groups. AERPs exposed linguistic-processing deficits related to seizure type in children with IGE. Neurologic follow-up should therefore include evaluation of linguistic functions, and remedial intervention should be provided, accordingly.
NASA Astrophysics Data System (ADS)
Chen, Huaiyu; Cao, Li
2017-06-01
To investigate multiple sound source localization under room reverberation and background noise, we analyze the shortcomings of the traditional broadband MUSIC method and of ordinary auditory-filtering-based broadband MUSIC, and then propose a new broadband MUSIC algorithm that combines gammatone auditory filtering with controlled selection of frequency components and detection of the ascending segment of the direct-sound component. The proposed algorithm restricts processing to frequency components within the frequency band of interest at the multichannel bandpass filtering stage. Detecting the direct-sound component of the source is also proposed as a way to suppress room reverberation interference; its merits are fast computation and avoiding more complex de-reverberation algorithms. In addition, the pseudo-spectra of the different frequency channels are weighted by their maximum amplitude for every speech frame. Simulations and experiments in a real reverberant room show that the proposed method performs well. Dynamic multiple sound source localization experiments indicate that the average absolute azimuth error of the proposed algorithm is smaller and that the histogram results have higher angular resolution.
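A minimal sketch of an incoherent broadband MUSIC combination of the sort described above is given below. The uniform linear array geometry, steering-vector model, and amplitude weighting are illustrative assumptions, and the gammatone filtering and direct-sound onset detection stages of the proposed algorithm are not reproduced here.

```python
import numpy as np

def music_spectrum(snapshots, freq_hz, mic_spacing, n_sources, angles_deg, c=343.0):
    """Narrowband MUSIC pseudo-spectrum for a uniform linear array.

    snapshots : complex array (n_mics, n_snapshots) of narrowband data,
                e.g. STFT bins or the output of one frequency channel.
    Returns the pseudo-spectrum evaluated at angles_deg.
    """
    n_mics = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # spatial covariance
    eigvals, eigvecs = np.linalg.eigh(R)                      # eigenvalues ascending
    En = eigvecs[:, : n_mics - n_sources]                     # noise subspace
    theta = np.deg2rad(angles_deg)
    m = np.arange(n_mics)[:, None]
    # Steering vectors for each candidate angle (far-field plane-wave model).
    A = np.exp(-2j * np.pi * freq_hz * mic_spacing * m * np.sin(theta)[None, :] / c)
    denom = np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
    return 1.0 / denom

def broadband_music(channel_snapshots, channel_freqs, channel_amps,
                    mic_spacing, n_sources, angles_deg):
    """Combine per-channel pseudo-spectra, weighting each channel by its
    maximum amplitude in the current frame (the weighting step described above)."""
    spectra = [music_spectrum(x, f, mic_spacing, n_sources, angles_deg)
               for x, f in zip(channel_snapshots, channel_freqs)]
    weights = np.asarray(channel_amps, dtype=float)
    return np.average(np.vstack(spectra), axis=0, weights=weights)
```

Peaks of the combined pseudo-spectrum over `angles_deg` give the azimuth estimates for the current frame.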
Niederleitner, Bertram; Gutierrez-Ibanez, Cristian; Krabichler, Quirin; Weigel, Stefan; Luksch, Harald
2017-02-15
Processing multimodal sensory information is vital for behaving animals in many contexts. The barn owl, an auditory specialist, is a classic model for studying multisensory integration. In the barn owl, spatial auditory information is conveyed to the optic tectum (TeO) by a direct projection from the external nucleus of the inferior colliculus (ICX). In contrast, evidence of an integration of visual and auditory information in auditory generalist avian species is completely lacking. In particular, it is not known whether in auditory generalist species the ICX projects to the TeO at all. Here we use various retrograde and anterograde tracing techniques both in vivo and in vitro, intracellular fillings of neurons in vitro, and whole-cell patch recordings to characterize the connectivity between ICX and TeO in the chicken. We found that there is a direct projection from ICX to the TeO in the chicken, although this is small and only to the deeper layers (layers 13-15) of the TeO. However, we found a relay area interposed among the IC, the TeO, and the isthmic complex that receives strong synaptic input from the ICX and projects broadly upon the intermediate and deep layers of the TeO. This area is an external portion of the formatio reticularis lateralis (FRLx). In addition to the projection to the TeO, cells in FRLx send, via collaterals, descending projections through tectopontine-tectoreticular pathways. This newly described connection from the inferior colliculus to the TeO provides a solid basis for visual-auditory integration in an auditory generalist bird. J. Comp. Neurol. 525:513-534, 2017. © 2016 Wiley Periodicals, Inc.
Auditory perception in the aging brain: the role of inhibition and facilitation in early processing.
Stothart, George; Kazanina, Nina
2016-11-01
Aging affects the interplay between peripheral and cortical auditory processing. Previous studies have demonstrated that older adults are less able to regulate afferent sensory information and are more sensitive to distracting information. Using auditory event-related potentials we investigated the role of cortical inhibition on auditory and audiovisual processing in younger and older adults. Across puretone, auditory and audiovisual speech paradigms older adults showed a consistent pattern of inhibitory deficits, manifested as increased P50 and/or N1 amplitudes and an absent or significantly reduced N2. Older adults were still able to use congruent visual articulatory information to aid auditory processing but appeared to require greater neural effort to resolve conflicts generated by incongruent visual information. In combination, the results provide support for the Inhibitory Deficit Hypothesis of aging. They extend previous findings into the audiovisual domain and highlight older adults' ability to benefit from congruent visual information during speech processing. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Acetylcholinesterase Inhibition and Information Processing in the Auditory Cortex
1986-04-30
...(9,24,29,30), or for causing auditory hallucinations (2,23,31,32). Thus, compounds which alter cholinergic transmission, in particular anticholinesterases... the upper auditory system. Thus, attending to and understanding verbal messages in humans, irrespective of the particular voice which speaks them, may... (Annual summary report.)
Dlouha, Olga; Novak, Alexej; Vokral, Jan
2007-06-01
The aim of this project was to use central auditory tests for the diagnosis of central auditory processing disorder (CAPD) in children with specific language impairment (SLI), in order to confirm the relationship between speech-language impairment and central auditory processing. We attempted to establish special dichotic binaural tests in the Czech language, modified for younger children. The tests are based on behavioral audiometry using dichotic listening (different auditory stimuli presented to each ear simultaneously). The experimental tasks consisted of three auditory measures (tests 1-3): dichotic listening to two-syllable words, presented as binaural interaction tests. Children with SLI are unable to create simple sentences from two words that are heard separately but simultaneously. Results in our group of 90 pre-school children (6-7 years old) confirmed an integration deficit and problems with the quality of short-term memory. The average rate of success of children with specific language impairment was 56% in test 1, 64% in test 2 and 63% in test 3. Results of the control group: 92% in test 1, 93% in test 2 and 92% in test 3 (p<0.001). Our results indicate a relationship between disorders of speech-language perception and central auditory processing disorders.
ERIC Educational Resources Information Center
Squires, Katie Ellen
2013-01-01
This study investigated the differential contribution of auditory-verbal and visuospatial working memory (WM) on decoding skills in second- and fifth-grade children identified with poor decoding. Thirty-two second-grade students and 22 fifth-grade students completed measures that assessed simple and complex auditory-verbal and visuospatial memory,…
Emotion modulates activity in the 'what' but not 'where' auditory processing pathway.
Kryklywy, James H; Macpherson, Ewan A; Greening, Steven G; Mitchell, Derek G V
2013-11-15
Auditory cortices can be separated into dissociable processing pathways similar to those observed in the visual domain. Emotional stimuli elicit enhanced neural activation within sensory cortices when compared to neutral stimuli. This effect is particularly notable in the ventral visual stream. Little is known, however, about how emotion interacts with dorsal processing streams, and essentially nothing is known about the impact of emotion on auditory stimulus localization. In the current study, we used fMRI in concert with individualized auditory virtual environments to investigate the effect of emotion during an auditory stimulus localization task. Surprisingly, participants were significantly slower to localize emotional relative to neutral sounds. A separate localizer scan was performed to isolate neural regions sensitive to stimulus location independent of emotion. When applied to the main experimental task, a significant main effect of location, but not emotion, was found in this ROI. A whole-brain analysis of the data revealed that posterior-medial regions of auditory cortex were modulated by sound location; however, additional anterior-lateral areas of auditory cortex demonstrated enhanced neural activity to emotional compared to neutral stimuli. The latter region resembled areas described in dual pathway models of auditory processing as the 'what' processing stream, prompting a follow-up task to generate an identity-sensitive ROI (the 'what' pathway) independent of location and emotion. Within this region, significant main effects of location and emotion were identified, as well as a significant interaction. These results suggest that emotion modulates activity in the 'what,' but not the 'where,' auditory processing pathway. Copyright © 2013 Elsevier Inc. All rights reserved.
Perceptual Literacy and the Construction of Significant Meanings within Art Education
ERIC Educational Resources Information Center
Cerkez, Beatriz Tomsic
2014-01-01
In order to verify how important the ability to process visual images and sounds in a holistic way can be, we developed an experiment based on the production and reception of an art work that was conceived as a multi-sensorial experience and implied a complex understanding of visual and auditory information. We departed from the idea that to…
Rapid extraction of auditory feature contingencies.
Bendixen, Alexandra; Prinz, Wolfgang; Horváth, János; Trujillo-Barreto, Nelson J; Schröger, Erich
2008-07-01
Contingent relations between sensory events render the environment predictable and thus facilitate adaptive behavior. The human capacity to detect such relations has been comprehensively demonstrated in paradigms in which contingency rules were task-relevant or in which they applied to motor behavior. The extent to which contingencies can also be extracted from events that are unrelated to the current goals of the organism has remained largely unclear. The present study addressed the emergence of contingency-related effects for behaviorally irrelevant auditory stimuli and the cortical areas involved in the processing of such contingency rules. Contingent relations between different features of temporally separate events were embedded in a new dynamic protocol. Participants were presented with the auditory stimulus sequences while their attention was captured by a video. The mismatch negativity (MMN) component of the event-related brain potential (ERP) was employed as an electrophysiological correlate of contingency detection. MMN generators were localized by means of scalp current density (SCD) and primary current density (PCD) analyses with variable resolution electromagnetic tomography (VARETA). Results show that task-irrelevant contingencies can be extracted from about fifteen to twenty successive events conforming to the contingent relation. Topographic and tomographic analyses reveal the involvement of the auditory cortex in the processing of contingency violations. The present data provide evidence for the rapid encoding of complex extrapolative relations in sensory areas. This capacity is of fundamental importance for the organism in its attempt to model the sensory environment outside the focus of attention.
Encoding frequency contrast in primate auditory cortex
Scott, Brian H.; Semple, Malcolm N.
2014-01-01
Changes in amplitude and frequency jointly determine much of the communicative significance of complex acoustic signals, including human speech. We have previously described responses of neurons in the core auditory cortex of awake rhesus macaques to sinusoidal amplitude modulation (SAM) signals. Here we report a complementary study of sinusoidal frequency modulation (SFM) in the same neurons. Responses to SFM were analogous to SAM responses in that changes in multiple parameters defining SFM stimuli (e.g., modulation frequency, modulation depth, carrier frequency) were robustly encoded in the temporal dynamics of the spike trains. For example, changes in the carrier frequency produced highly reproducible changes in shapes of the modulation period histogram, consistent with the notion that the instantaneous probability of discharge mirrors the moment-by-moment spectrum at low modulation rates. The upper limit for phase locking was similar across SAM and SFM within neurons, suggesting shared biophysical constraints on temporal processing. Using spike train classification methods, we found that neural thresholds for modulation depth discrimination are typically far lower than would be predicted from frequency tuning to static tones. This “dynamic hyperacuity” suggests a substantial central enhancement of the neural representation of frequency changes relative to the auditory periphery. Spike timing information was superior to average rate information when discriminating among SFM signals, and even when discriminating among static tones varying in frequency. This finding held even when differences in total spike count across stimuli were normalized, indicating both the primacy and generality of temporal response dynamics in cortical auditory processing. PMID:24598525
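As a toy illustration of the modulation period histogram mentioned above, spike times can be folded by the modulation period and binned by phase, so that the histogram shape tracks the stimulus's moment-by-moment spectrum at low modulation rates. The function and parameter values below are assumptions for illustration only, not the authors' analysis code.

```python
import numpy as np

def modulation_period_histogram(spike_times_s, mod_freq_hz, n_bins=32):
    """Fold spike times by the modulation period and count spikes per phase bin."""
    period = 1.0 / mod_freq_hz
    phase = np.mod(spike_times_s, period) / period           # phase in [0, 1)
    counts, edges = np.histogram(phase, bins=n_bins, range=(0.0, 1.0))
    return counts, edges

# Hypothetical example: 2 s of spike times, folded at a 10 Hz modulation frequency.
rng = np.random.default_rng(0)
spikes = np.sort(rng.uniform(0, 2.0, 300))
counts, edges = modulation_period_histogram(spikes, mod_freq_hz=10.0)
```

Comparing such histograms (or full spike-train distances) across stimuli is one simple basis for the spike-train classification approach the abstract refers to.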
Döge, Julia; Baumann, Uwe; Weissgerber, Tobias; Rader, Tobias
2017-12-01
To assess auditory localization accuracy and speech reception threshold (SRT) in complex noise conditions in adult patients with acquired single-sided deafness, after intervention with a cochlear implant (CI) in the deaf ear. Nonrandomized, open, prospective patient series. Tertiary referral university hospital. Eleven patients with late-onset single-sided deafness (SSD) and normal hearing in the unaffected ear, who received a CI. All patients were experienced CI users. Unilateral cochlear implantation. Speech perception was tested in a complex multitalker equivalent noise field consisting of multiple sound sources. Speech reception thresholds in noise were determined in aided (with CI) and unaided conditions. Localization accuracy was assessed in complete darkness. Acoustic stimuli were radiated by multiple loudspeakers distributed in the frontal horizontal plane between -60 and +60 degrees. In the aided condition, results show slightly improved speech reception scores compared with the unaided condition in most of the patients. For 8 of the 11 subjects, SRT was improved between 0.37 and 1.70 dB. Three of the 11 subjects showed deteriorations between 1.22 and 3.24 dB SRT. Median localization error decreased significantly by 12.9 degrees compared with the unaided condition. CI in single-sided deafness is an effective treatment to improve the auditory localization accuracy. Speech reception in complex noise conditions is improved to a lesser extent in 73% of the participating CI SSD patients. However, the absence of true binaural interaction effects (summation, squelch) impedes further improvements. The development of speech processing strategies that respect binaural interaction seems to be mandatory to advance speech perception in demanding listening situations in SSD patients.
ERIC Educational Resources Information Center
Fair, Lisl; Louw, Brenda; Hugo, Rene
2001-01-01
This study compiled a comprehensive early auditory processing skills assessment battery and evaluated it with toddlers with (n=8) and without (n=9) early recurrent otitis media. The assessment battery successfully distinguished between normal and deficient early auditory processing development in the subjects. The study also found parents…
Schupp, Harald T; Stockburger, Jessica; Bublatzky, Florian; Junghöfer, Markus; Weike, Almut I; Hamm, Alfons O
2008-09-16
Event-related potential studies revealed an early posterior negativity (EPN) for emotional compared to neutral pictures. Exploring the emotion-attention relationship, a previous study observed that a primary visual discrimination task interfered with the emotional modulation of the EPN component. To specify the locus of interference, the present study assessed the fate of selective visual emotion processing while attention is directed towards the auditory modality. While simply viewing a rapid and continuous stream of pleasant, neutral, and unpleasant pictures in one experimental condition, processing demands of a concurrent auditory target discrimination task were systematically varied in three further experimental conditions. Participants successfully performed the auditory task as revealed by behavioral performance and selected event-related potential components. Replicating previous results, emotional pictures were associated with a larger posterior negativity compared to neutral pictures. Of main interest, increasing demands of the auditory task did not modulate the selective processing of emotional visual stimuli. With regard to the locus of interference, selective emotion processing as indexed by the EPN does not seem to reflect shared processing resources of visual and auditory modality.
Crossmodal attention switching: auditory dominance in temporal discrimination tasks.
Lukas, Sarah; Philipp, Andrea M; Koch, Iring
2014-11-01
Visual stimuli are often processed more efficiently than accompanying stimuli in another modality. In line with this "visual dominance", earlier studies on attentional switching showed a clear benefit for visual stimuli in a bimodal visual-auditory modality-switch paradigm that required spatial stimulus localization in the relevant modality. The present study aimed to examine the generality of this visual dominance effect. The modality appropriateness hypothesis proposes that stimuli in different modalities are differentially effectively processed depending on the task dimension, so that processing of visual stimuli is favored in the dimension of space, whereas processing auditory stimuli is favored in the dimension of time. In the present study, we examined this proposition by using a temporal duration judgment in a bimodal visual-auditory switching paradigm. Two experiments demonstrated that crossmodal interference (i.e., temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks. However, attention switch costs were larger for the auditory modality than for visual modality, indicating a dissociation of the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set. Copyright © 2014 Elsevier B.V. All rights reserved.
Forebrain pathway for auditory space processing in the barn owl.
Cohen, Y E; Miller, G L; Knudsen, E I
1998-02-01
The forebrain plays an important role in many aspects of sound localization behavior. Yet, the forebrain pathway that processes auditory spatial information is not known for any species. Using standard anatomic labeling techniques, we used a "top-down" approach to trace the flow of auditory spatial information from an output area of the forebrain sound localization pathway (the auditory archistriatum, AAr), back through the forebrain, and into the auditory midbrain. Previous work has demonstrated that AAr units are specialized for auditory space processing. The results presented here show that the AAr receives afferent input from Field L both directly and indirectly via the caudolateral neostriatum. Afferent input to Field L originates mainly in the auditory thalamus, nucleus ovoidalis, which, in turn, receives input from the central nucleus of the inferior colliculus. In addition, we confirmed previously reported projections of the AAr to the basal ganglia, the external nucleus of the inferior colliculus (ICX), the deep layers of the optic tectum, and various brain stem nuclei. A series of inactivation experiments demonstrated that the sharp tuning of AAr sites for binaural spatial cues depends on Field L input but not on input from the auditory space map in the midbrain ICX: pharmacological inactivation of Field L eliminated completely auditory responses in the AAr, whereas bilateral ablation of the midbrain ICX had no appreciable effect on AAr responses. We conclude, therefore, that the forebrain sound localization pathway can process auditory spatial information independently of the midbrain localization pathway.
Auditory biological marker of concussion in children
Kraus, Nina; Thompson, Elaine C.; Krizman, Jennifer; Cook, Katherine; White-Schwoch, Travis; LaBella, Cynthia R.
2016-01-01
Concussions carry devastating potential for cognitive, neurologic, and socio-emotional disease, but no objective test reliably identifies a concussion and its severity. A variety of neurological insults compromise sound processing, particularly in complex listening environments that place high demands on brain processing. The frequency-following response captures the high computational demands of sound processing with extreme granularity and reliably reveals individual differences. We hypothesize that concussions disrupt these auditory processes, and that the frequency-following response indicates concussion occurrence and severity. Specifically, we hypothesize that concussions disrupt the processing of the fundamental frequency, a key acoustic cue for identifying and tracking sounds and talkers, and, consequently, understanding speech in noise. Here we show that children who sustained a concussion exhibit a signature neural profile. They have worse representation of the fundamental frequency, and smaller and more sluggish neural responses. Neurophysiological responses to the fundamental frequency partially recover to control levels as concussion symptoms abate, suggesting a gain in biological processing following partial recovery. Neural processing of sound correctly identifies 90% of concussion cases and clears 95% of control cases, suggesting this approach has practical potential as a scalable biological marker for sports-related concussion and other types of mild traumatic brain injuries. PMID:28005070
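One simple way to quantify the "representation of the fundamental frequency" in a frequency-following response is to measure spectral magnitude in a narrow band around F0. The sketch below illustrates that idea with assumed parameter values and is not the authors' analysis pipeline.

```python
import numpy as np

def f0_amplitude(ffr, fs, f0_hz, half_bw_hz=5.0):
    """Mean spectral magnitude of an FFR waveform in a band around F0."""
    spectrum = np.abs(np.fft.rfft(ffr * np.hanning(ffr.size)))
    freqs = np.fft.rfftfreq(ffr.size, d=1.0 / fs)
    band = (freqs >= f0_hz - half_bw_hz) & (freqs <= f0_hz + half_bw_hz)
    return spectrum[band].mean()

# Hypothetical example: a noisy 100 Hz response sampled at 10 kHz for 200 ms.
fs, f0 = 10000, 100.0
t = np.arange(0, 0.2, 1.0 / fs)
ffr = np.sin(2 * np.pi * f0 * t) + 0.5 * np.random.default_rng(0).standard_normal(t.size)
print(f0_amplitude(ffr, fs, f0))
```

A smaller value of such an F0 measure relative to controls is the kind of "worse representation of the fundamental frequency" described above.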
Lount, Sarah A; Purdy, Suzanne C; Hand, Linda
2017-01-01
International evidence suggests youth offenders have greater difficulties with oral language than their nonoffending peers. This study examined the hearing, auditory processing, and language skills of male youth offenders and remandees (YORs) in New Zealand. Thirty-three male YORs, aged 14-17 years, were recruited from 2 youth justice residences, plus 39 similarly aged male students from local schools for comparison. Testing comprised tympanometry, self-reported hearing, pure-tone audiometry, 4 auditory processing tests, 2 standardized language tests, and a nonverbal intelligence test. Twenty-one (64%) of the YORs were identified as language impaired (LI), compared with 4 (10%) of the controls. Performance on all language measures was significantly worse in the YOR group, as were their hearing thresholds. Nine (27%) of the YOR group versus 7 (18%) of the control group fulfilled criteria for auditory processing disorder. Only 1 YOR versus 5 controls had an auditory processing disorder without LI. Language was an area of significant difficulty for YORs. Difficulties with auditory processing were more likely to be accompanied by LI in this group, compared with the controls. Provision of speech-language therapy services and awareness of auditory and language difficulties should be addressed in youth justice systems.
[Which colours can we hear?: light stimulation of the hearing system].
Wenzel, G I; Lenarz, T; Schick, B
2014-02-01
The success of conventional hearing aids and electrical auditory prostheses for hearing-impaired patients is still limited in noisy environments and for sounds more complex than speech (e.g. music). This is partially due to the difficulty of achieving frequency-specific activation of the auditory system with these devices. Stimulation of the auditory system using light pulses represents an alternative to mechanical and electrical stimulation. Light is a source of energy that can be very precisely focused and applied with little scattering, thus offering perspectives for optimal activation of the auditory system. Studies investigating light stimulation at various sites along the auditory pathway have shown that stimulation of the auditory system with light pulses is possible. However, further studies and developments are needed before a new generation of light-stimulation-based auditory prostheses can be made available for clinical application.
Kolarik, Andrew J; Moore, Brian C J; Zahorik, Pavel; Cirstea, Silvia; Pardhan, Shahina
2016-02-01
Auditory distance perception plays a major role in spatial awareness, enabling location of objects and avoidance of obstacles in the environment. However, it remains under-researched relative to studies of the directional aspect of sound localization. This review focuses on the following four aspects of auditory distance perception: cue processing, development, consequences of visual and auditory loss, and neurological bases. The several auditory distance cues vary in their effective ranges in peripersonal and extrapersonal space. The primary cues are sound level, reverberation, and frequency. Nonperceptual factors, including the importance of the auditory event to the listener, also can affect perceived distance. Basic internal representations of auditory distance emerge at approximately 6 months of age in humans. Although visual information plays an important role in calibrating auditory space, sensorimotor contingencies can be used for calibration when vision is unavailable. Blind individuals often manifest supranormal abilities to judge relative distance but show a deficit in absolute distance judgments. Following hearing loss, the use of auditory level as a distance cue remains robust, while the reverberation cue becomes less effective. Previous studies have not found evidence that hearing-aid processing affects perceived auditory distance. Studies investigating the brain areas involved in processing different acoustic distance cues are described. Finally, suggestions are given for further research on auditory distance perception, including broader investigation of how background noise and multiple sound sources affect perceived auditory distance for those with sensory loss.
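As a worked example of the sound-level distance cue mentioned above: under an idealized inverse-square law (free field, point source), level falls by roughly 6 dB per doubling of distance, so relative level constrains relative distance. The following minimal sketch assumes that idealization and ignores reverberation and frequency-dependent attenuation, both of which matter in real environments.

```python
import numpy as np

def level_change_db(distance_m, reference_m=1.0):
    """Level change relative to a reference distance under the inverse-square law."""
    return -20.0 * np.log10(np.asarray(distance_m, dtype=float) / reference_m)

# 1 m -> 0 dB, 2 m -> about -6 dB, 4 m -> about -12 dB relative to 1 m.
print(level_change_db([1.0, 2.0, 4.0]))
```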
Effects of Long-Term Musical Training on Cortical Auditory Evoked Potentials
Brown, Carolyn J.; Jeon, Eun-Kyung; Driscoll, Virginia; Mussoi, Bruna; Deshpande, Shruti Balvalli; Gfeller, Kate; Abbas, Paul
2016-01-01
Objective Evidence suggests that musicians, as a group, have superior frequency resolution abilities when compared to non-musicians. It is possible to assess auditory discrimination using either behavioral or electrophysiologic methods. The purpose of this study was to determine if the acoustic change complex (ACC) is sensitive enough to reflect the differences in spectral processing exhibited by musicians and non-musicians. Design Twenty individuals (10 musicians and 10 non-musicians) participated in this study. Pitch and spectral ripple discrimination were assessed using both behavioral and electrophysiologic methods. Behavioral measures were obtained using a standard three interval, forced choice procedure, and the ACC was recorded and used as an objective (i.e. non-behavioral) measure of discrimination between two auditory signals. The same stimuli were used for both psychophysical and electrophysiologic testing. Results As a group, musicians were able to detect smaller changes in pitch than non-musicians. They also were able to detect a shift in the position of the peaks and valleys in a ripple noise stimulus at higher ripple densities than non-musicians. ACC responses recorded from musicians were larger than those recorded from non-musicians when the amplitude of the ACC response was normalized to the amplitude of the onset response in each stimulus pair. Visual detection thresholds derived from the evoked potential data were better for musicians than non-musicians regardless of whether the task was discrimination of musical pitch or detection of a change in the frequency spectrum of the rippled noise stimuli. Behavioral measures of discrimination were generally more sensitive than the electrophysiologic measures; however, the two metrics were correlated. Conclusions Perhaps as a result of extensive training, musicians are better able to discriminate spectrally complex acoustic signals than non-musicians. Those differences are evident not only in perceptual/behavioral tests, but also in electrophysiologic measures of neural response at the level of the auditory cortex. While these results are based on observations made from normal-hearing listeners, they suggest that the ACC may provide a non-behavioral method of assessing auditory discrimination and as a result might prove useful in future studies that explore the efficacy of participation in a musically based, auditory training program perhaps geared toward pediatric and/or hearing-impaired listeners. PMID:28225736
Mohr, Robert A; Chang, Yiran; Bhandiwad, Ashwin A; Forlano, Paul M; Sisneros, Joseph A
2018-01-01
While the peripheral auditory system of fish has been well studied, less is known about how the fish's brain and central auditory system process complex social acoustic signals. The plainfin midshipman fish, Porichthys notatus, has become a good species for investigating the neural basis of acoustic communication because the production and reception of acoustic signals is paramount for this species' reproductive success. Nesting males produce long-duration advertisement calls that females detect and localize among the noise in the intertidal zone to successfully find mates and spawn. How female midshipman are able to discriminate male advertisement calls from environmental noise and other acoustic stimuli is unknown. Using the immediate early gene product cFos as a marker for neural activity, we quantified neural activation of the ascending auditory pathway in female midshipman exposed to conspecific advertisement calls, heterospecific white seabass calls, or ambient environment noise. We hypothesized that auditory hindbrain nuclei would be activated by general acoustic stimuli (ambient noise and other biotic acoustic stimuli) whereas auditory neurons in the midbrain and forebrain would be selectively activated by conspecific advertisement calls. We show that neural activation in two regions of the auditory hindbrain, i.e., the rostral intermediate division of the descending octaval nucleus and the ventral division of the secondary octaval nucleus, did not differ via cFos immunoreactive (cFos-ir) activity when exposed to different acoustic stimuli. In contrast, female midshipman exposed to conspecific advertisement calls showed greater cFos-ir in the nucleus centralis of the midbrain torus semicircularis compared to fish exposed only to ambient noise. No difference in cFos-ir was observed in the torus semicircularis of animals exposed to conspecific versus heterospecific calls. However, cFos-ir was greater in two forebrain structures that receive auditory input, i.e., the central posterior nucleus of the thalamus and the anterior tuberal hypothalamus, when exposed to conspecific calls versus either ambient noise or heterospecific calls. Our results suggest that higher-order neurons in the female midshipman midbrain torus semicircularis, thalamic central posterior nucleus, and hypothalamic anterior tuberal nucleus may be necessary for the discrimination of complex social acoustic signals. Furthermore, neurons in the central posterior and anterior tuberal nuclei are differentially activated by exposure to conspecific versus other acoustic stimuli. © 2018 S. Karger AG, Basel.

Engineer, C.T.; Centanni, T.M.; Im, K.W.; Borland, M.S.; Moreno, N.A.; Carraway, R.S.; Wilson, L.G.; Kilgard, M.P.
2014-01-01
Although individuals with autism are known to have significant communication problems, the cellular mechanisms responsible for impaired communication are poorly understood. Valproic acid (VPA) is an anticonvulsant that is a known risk factor for autism in prenatally exposed children. Prenatal VPA exposure in rats causes numerous neural and behavioral abnormalities that mimic autism. We predicted that VPA exposure may lead to auditory processing impairments which may contribute to the deficits in communication observed in individuals with autism. In this study, we document auditory cortex responses in rats prenatally exposed to VPA. We recorded local field potentials and multiunit responses to speech sounds in primary auditory cortex, anterior auditory field, ventral auditory field, and posterior auditory field in VPA-exposed and control rats. Prenatal VPA exposure severely degrades the precise spatiotemporal patterns evoked by speech sounds in secondary, but not primary, auditory cortex. This result parallels findings in humans and suggests that secondary auditory fields may be more sensitive to environmental disturbances and may provide insight into possible mechanisms related to auditory deficits in individuals with autism. PMID:24639033
Moore, Brian C J
2003-03-01
To review how the properties of sounds are "coded" in the normal auditory system and to discuss the extent to which cochlear implants can and do represent these codes. Data are taken from published studies of the response of the cochlea and auditory nerve to simple and complex stimuli, in both the normal and the electrically stimulated ear. REVIEW CONTENT: The review describes: 1) the coding in the normal auditory system of overall level (which partly determines perceived loudness), spectral shape (which partly determines perceived timbre and the identity of speech sounds), periodicity (which partly determines pitch), and sound location; 2) the role of the active mechanism in the cochlea, and particularly the fast-acting compression associated with that mechanism; 3) the neural response patterns evoked by cochlear implants; and 4) how the response patterns evoked by implants differ from those observed in the normal auditory system in response to sound. A series of specific issues is then discussed, including: 1) how to compensate for the loss of cochlear compression; 2) the effective number of independent channels in a normal ear and in cochlear implantees; 3) the importance of independence of responses across neurons; 4) the stochastic nature of normal neural responses; 5) the possible role of across-channel coincidence detection; and 6) potential benefits of binaural implantation. Current cochlear implants do not adequately reproduce several aspects of the neural coding of sound in the normal auditory system. Improved electrode arrays and coding systems may lead to improved coding and, it is hoped, to better performance.
Patel, Aniruddh D.; Iversen, John R.
2013-01-01
Every human culture has some form of music with a beat: a perceived periodic pulse that structures the perception of musical rhythm and which serves as a framework for synchronized movement to music. What are the neural mechanisms of musical beat perception, and how did they evolve? One view, which dates back to Darwin and implicitly informs some current models of beat perception, is that the relevant neural mechanisms are relatively general and are widespread among animal species. On the basis of recent neural and cross-species data on musical beat processing, this paper argues for a different view. Here we argue that beat perception is a complex brain function involving temporally-precise communication between auditory regions and motor planning regions of the cortex (even in the absence of overt movement). More specifically, we propose that simulation of periodic movement in motor planning regions provides a neural signal that helps the auditory system predict the timing of upcoming beats. This “action simulation for auditory prediction” (ASAP) hypothesis leads to testable predictions. We further suggest that ASAP relies on dorsal auditory pathway connections between auditory regions and motor planning regions via the parietal cortex, and suggest that these connections may be stronger in humans than in non-human primates due to the evolution of vocal learning in our lineage. This suggestion motivates cross-species research to determine which species are capable of human-like beat perception, i.e., beat perception that involves accurate temporal prediction of beat times across a fairly broad range of tempi. PMID:24860439
EEG phase reset due to auditory attention: an inverse time-scale approach.
Low, Yin Fen; Strauss, Daniel J
2009-08-01
We propose a novel tool to evaluate electroencephalographic (EEG) phase reset due to auditory attention, utilizing for the first time an inverse analysis of the instantaneous phase. EEGs were acquired in auditory attention experiments with a maximum entropy stimulation paradigm. We examined single sweeps of the auditory late response (ALR) with the complex continuous wavelet transform. The phase in the frequency band associated with auditory attention (6-10 Hz, termed the theta-alpha border) was reset to the mean phase of the averaged EEGs, and the inverse transform was applied to reconstruct the phase-modified signal. We found significant enhancement of the N100 wave in the reconstructed signal. Analysis of the phase noise shows the effect of phase jittering on the generation of the N100 wave, implying that a preferred phase is necessary to generate the event-related potential (ERP). Power spectrum analysis shows a marked increase in evoked power but little change in total power after stabilizing the phase of the EEGs. Furthermore, resetting the phase at the theta-alpha border of no-attention data to the mean phase of attention data yields a result that resembles the attention data. These results show strong connections between the EEG and the ERP; in particular, we suggest that the presentation of an auditory stimulus triggers a phase reset process at the theta-alpha border that leads to the emergence of the N100 wave. We conclude that our study reinforces other studies on the importance of the EEG in ERP genesis.
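The procedure summarized above (estimate the single-sweep phase in the 6-10 Hz theta-alpha border, reset it to the phase of the averaged EEG, and reconstruct the signal) can be sketched as follows. This is a simplified stand-in: it isolates the band with a zero-phase Butterworth filter and uses the Hilbert analytic signal rather than the complex continuous wavelet transform and its inverse used in the study; the band edges come from the abstract, while the sampling rate and toy data are assumptions for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, fs, lo=6.0, hi=10.0, order=4):
    """Zero-phase band-pass filter isolating the theta-alpha border band."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def reset_band_phase(sweeps, fs):
    """Force the 6-10 Hz phase of each single sweep to follow the phase of the
    averaged response, keeping each sweep's own band envelope and the rest of
    its spectrum untouched (Hilbert-based stand-in for the wavelet method)."""
    target_phase = np.angle(hilbert(bandpass(sweeps.mean(axis=0), fs)))
    reset = np.empty_like(sweeps)
    for i, sweep in enumerate(sweeps):
        band = bandpass(sweep, fs)
        envelope = np.abs(hilbert(band))
        reset[i] = (sweep - band) + envelope * np.cos(target_phase)
    return reset

# Toy demo: 20 sweeps of 8 Hz activity with random phase plus noise
fs = 500.0
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(0)
sweeps = np.stack([np.sin(2 * np.pi * 8 * t + rng.uniform(0, 2 * np.pi))
                   + 0.5 * rng.standard_normal(t.size) for _ in range(20)])
avg_before = np.abs(sweeps.mean(axis=0)).max()
avg_after = np.abs(reset_band_phase(sweeps, fs).mean(axis=0)).max()
print(avg_before, avg_after)  # the phase-reset average shows a larger evoked peak
```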
Palmiero, Massimiliano; Di Matteo, Rosalia; Belardinelli, Marta Olivetti
2014-05-01
Two experiments comparing imaginative processing in different modalities with semantic processing were carried out to investigate whether conceptual knowledge can be represented in different formats. Participants were asked to judge the similarity between visual, auditory, or olfactory images in the imaginative block, and whether two items belonged to the same category in the semantic block. Items were verbally cued in both experiments. The degree of similarity between the imaginative and semantic items was varied across experiments. Experiment 1 showed that semantic processing was faster than visual and auditory imaginative processing, whereas no differentiation was possible between semantic processing and olfactory imaginative processing. Experiment 2 revealed that only visual imaginative processing could be differentiated from semantic processing in terms of accuracy. These results show that visual and auditory imaginative processing can be differentiated from semantic processing, although both visual and auditory images rely strongly on semantic representations. In contrast, no differentiation is possible within the olfactory domain. Results are discussed in the frame of the imagery debate.
Vahaba, Daniel M; Macedo-Lima, Matheus; Remage-Healey, Luke
2017-01-01
Vocal learning occurs during an experience-dependent, age-limited critical period early in development. In songbirds, vocal learning begins when presinging birds acquire an auditory memory of their tutor's song (sensory phase) followed by the onset of vocal production and refinement (sensorimotor phase). Hearing is necessary throughout the vocal learning critical period. One key brain area for songbird auditory processing is the caudomedial nidopallium (NCM), a telencephalic region analogous to mammalian auditory cortex. Despite NCM's established role in auditory processing, it is unclear how the response properties of NCM neurons may shift across development. Moreover, communication processing in NCM is rapidly enhanced by local 17β-estradiol (E2) administration in adult songbirds; however, the function of dynamically fluctuating E2 in NCM during development is unknown. We collected bilateral extracellular recordings in NCM coupled with reverse microdialysis delivery in juvenile male zebra finches (Taeniopygia guttata) across the vocal learning critical period. We found that auditory-evoked activity and coding accuracy were substantially higher in the NCM of sensory-aged animals compared to sensorimotor-aged animals. Further, we observed both age-dependent and lateralized effects of local E2 administration on sensory processing. In sensory-aged subjects, E2 decreased auditory responsiveness across both hemispheres; however, a similar trend was observed in age-matched control subjects. In sensorimotor-aged subjects, E2 dampened auditory responsiveness in left NCM but enhanced auditory responsiveness in right NCM. Our results reveal an age-dependent physiological shift in auditory processing and lateralized E2 sensitivity that each precisely track a key neural "switch point" from purely sensory (pre-singing) to sensorimotor (singing) in developing songbirds. PMID:29255797
Operator Performance Measures for Assessing Voice Communication Effectiveness
1989-07-01
Scanned report excerpt (only fragments are recoverable): the report reviews the basis of operator performance and workload assessment techniques, citing Broadbent's (1958) limited-capacity filter model of human information processing, and covers auditory information processing (auditory attention, auditory memory) and models of information processing, including capacity theories.
Effect of conductive hearing loss on central auditory function.
Bayat, Arash; Farhadi, Mohammad; Emamdjomeh, Hesam; Saki, Nader; Mirmomeni, Golshan; Rahim, Fakher
It has been demonstrated that long-term Conductive Hearing Loss (CHL) may influence the precise detection of the temporal features of acoustic signals or Auditory Temporal Processing (ATP). It can be argued that ATP may be the underlying component of many central auditory processing capabilities such as speech comprehension or sound localization. Little is known about the consequences of CHL on temporal aspects of central auditory processing. This study was designed to assess auditory temporal processing ability in individuals with chronic CHL. During this analytical cross-sectional study, 52 patients with mild to moderate chronic CHL and 52 normal-hearing listeners (control), aged between 18 and 45 years, were recruited. In order to evaluate auditory temporal processing, the Gaps-in-Noise (GIN) test was used. The results obtained for each ear were analyzed based on the gap perception threshold and the percentage of correct responses. The average GIN threshold was significantly smaller for the control group than for the CHL group for both ears (right: p=0.004; left: p<0.001). Individuals with CHL had significantly fewer correct responses than individuals with normal hearing for both sides (p<0.001). No correlation was found between GIN performance and degree of hearing loss in either group (p>0.05). The results suggest reduced auditory temporal processing ability in adults with CHL compared to normal hearing subjects. Therefore, developing a clinical protocol to evaluate auditory temporal processing in this population is recommended. Copyright © 2017 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Wagner, Monica; Shafer, Valerie L.; Martin, Brett; Steinschneider, Mitchell
2013-01-01
The influence of native-language experience on sensory-obligatory auditory-evoked potentials (AEPs) was investigated in native-English and native-Polish listeners. AEPs were recorded to the first word in nonsense word pairs, while participants performed a syllable identification task to the second word in the pairs. Nonsense words contained phoneme sequence onsets (i.e., /pt/, /pət/, /st/ and /sət/) that occur in the Polish and English languages, with the exception that /pt/ at syllable onset is an illegal phonotactic form in English. P1–N1–P2 waveforms from fronto-central electrode sites were comparable in English and Polish listeners, even though, these same English participants were unable to distinguish the nonsense words having /pt/ and /pət/ onsets. The P1–N1–P2 complex indexed the temporal characteristics of the word stimuli in the same manner for both language groups. Taken together, these findings suggest that the fronto-central P1–N1–P2 complex reflects acoustic feature processing of speech and is not significantly influenced by exposure to the phoneme sequences of the native-language. In contrast, the T-complex from bilateral posterior temporal sites was found to index phonological as well as acoustic feature processing to the nonsense word stimuli. An enhanced negativity for the /pt/ cluster relative to its contrast sequence (i.e., /pət/) occurred only for the Polish listeners, suggesting that neural networks within non-primary auditory cortex may be involved in early cortical phonological processing. PMID:23643857
Influence of Eye Movements, Auditory Perception, and Phonemic Awareness in the Reading Process
ERIC Educational Resources Information Center
Megino-Elvira, Laura; Martín-Lobo, Pilar; Vergara-Moragues, Esperanza
2016-01-01
The authors' aim was to analyze the relationship of eye movements, auditory perception, and phonemic awareness with the reading process. The instruments used were the King-Devick Test (saccade eye movements), the PAF test (auditory perception), the PFC (phonemic awareness), the PROLEC-R (lexical process), the Canals reading speed test, and the…
ERIC Educational Resources Information Center
Chonchaiya, Weerasak; Tardif, Twila; Mai, Xiaoqin; Xu, Lin; Li, Mingyan; Kaciroti, Niko; Kileny, Paul R.; Shao, Jie; Lozoff, Betsy
2013-01-01
Auditory processing capabilities at the subcortical level have been hypothesized to impact an individual's development of both language and reading abilities. The present study examined whether auditory processing capabilities relate to language development in healthy 9-month-old infants. Participants were 71 infants (31 boys and 40 girls) with…
Boets, Bart; Wouters, Jan; van Wieringen, Astrid; Ghesquière, Pol
2007-04-09
This study investigates whether the core bottleneck of literacy impairment should be situated at the phonological level or at a more basic sensory level, as postulated by supporters of the auditory temporal processing theory. Phonological ability, speech perception, and low-level auditory processing were assessed in a group of 5-year-old pre-school children at high family risk for dyslexia, compared to a group of well-matched low-risk control children. Based on family risk status and first-grade literacy achievement, children were categorized into groups and pre-school data were retrospectively reanalyzed. On average, children showing both increased family risk and literacy impairment at the end of first grade presented significant pre-school deficits in phonological awareness, rapid automatized naming, speech-in-noise perception, and frequency modulation detection. The concurrent presence of these deficits before any formal reading instruction might suggest a causal relation with problematic literacy development. However, a closer inspection of the individual data indicates that the core of the literacy problem is situated at the level of higher-order phonological processing. Although auditory and speech perception problems are relatively over-represented in literacy-impaired subjects and might aggravate the phonological and literacy problems, it is unlikely that they are at the basis of these problems. At a neurobiological level, the results are interpreted as evidence for dysfunctional processing along the auditory-to-articulation stream implicated in phonological processing, in combination with relatively intact or inconsistently impaired functioning of the auditory-to-meaning stream that subserves auditory processing and speech perception.
Tang, Xiaoyu; Li, Chunlin; Li, Qi; Gao, Yulin; Yang, Weiping; Yang, Jingjing; Ishikawa, Soushirou; Wu, Jinglong
2013-10-11
Utilizing the high temporal resolution of event-related potentials (ERPs), we examined how visual spatial or temporal cues modulate auditory stimulus processing. The visual spatial cue (VSC) induces orienting of attention to spatial locations; the visual temporal cue (VTC) induces orienting of attention to temporal intervals. Participants were instructed to respond to auditory targets. Behavioral responses to auditory stimuli following the VSC were faster and more accurate than those following the VTC. VSC and VTC had the same effect on the auditory N1 (150-170 ms after stimulus onset). The mean amplitude of the auditory P1 (90-110 ms) was larger in the VSC condition than in the VTC condition, and the mean amplitude of the late positivity (300-420 ms) was larger in the VTC condition than in the VSC condition. These findings suggest that the modulations of auditory stimulus processing by visually induced spatial and temporal orienting of attention are different but partially overlapping. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
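The component effects reported above are quantified by averaging the ERP within fixed latency windows (90-110 ms for P1, 150-170 ms for N1, 300-420 ms for the late positivity). The sketch below shows that computation on a synthetic waveform; only the window boundaries come from the abstract, and everything else is illustrative.

```python
import numpy as np

def mean_amplitude(erp, times, win):
    """Mean ERP amplitude within a latency window (seconds after stimulus onset)."""
    mask = (times >= win[0]) & (times <= win[1])
    return erp[mask].mean()

# Latency windows taken from the abstract; the ERP itself is synthetic.
windows = {"P1": (0.090, 0.110), "N1": (0.150, 0.170), "late positivity": (0.300, 0.420)}
fs = 500.0
times = np.arange(-0.1, 0.6, 1 / fs)
erp = (2 * np.exp(-((times - 0.10) ** 2) / 0.0002)     # P1-like peak
       - 4 * np.exp(-((times - 0.16) ** 2) / 0.0003)    # N1-like trough
       + 3 * np.exp(-((times - 0.36) ** 2) / 0.004))     # late positivity
for name, win in windows.items():
    print(f"{name}: {mean_amplitude(erp, times, win):.2f} µV")
```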
Porges, Stephen W; Macellaio, Matthew; Stanfill, Shannon D; McCue, Kimberly; Lewis, Gregory F; Harden, Emily R; Handelman, Mika; Denver, John; Bazhenova, Olga V; Heilman, Keri J
2013-06-01
The current study evaluated processes underlying two common symptoms (i.e., state regulation problems and deficits in auditory processing) associated with a diagnosis of autism spectrum disorders. Although these symptoms have been treated in the literature as unrelated, when informed by the Polyvagal Theory, these symptoms may be viewed as the predictable consequences of depressed neural regulation of an integrated social engagement system, in which there is down-regulation of neural influences to the heart (i.e., via the vagus) and to the middle ear muscles (i.e., via the facial and trigeminal cranial nerves). Respiratory sinus arrhythmia (RSA) and heart period were monitored to evaluate state regulation during a baseline and two auditory processing tasks (i.e., the SCAN tests for Filtered Words and Competing Words), which were used to evaluate auditory processing performance. Children with a diagnosis of autism spectrum disorders (ASD) were contrasted with age-matched typically developing children. The current study identified three features that distinguished the ASD group from a group of typically developing children: 1) baseline RSA, 2) direction of RSA reactivity, and 3) auditory processing performance. In the ASD group, the pattern of change in RSA during the attention-demanding SCAN tests moderated the relation between performance on the Competing Words test and IQ. In addition, in a subset of ASD participants, auditory processing performance improved and RSA increased following an intervention designed to improve auditory processing. Copyright © 2012 Elsevier B.V. All rights reserved.
1988-09-01
Scanned report excerpt (only fragments are recoverable): the report concerns the ability to detect a change in spectral shape, a question that also bears on how the auditory system codes intensity, and considers binaural interaction (cf. Colburn and Durlach, 1978), noting that one may not be able to simply extrapolate from one case to the other.
Visual activity predicts auditory recovery from deafness after adult cochlear implantation.
Strelnikov, Kuzma; Rouger, Julien; Demonet, Jean-François; Lagleyre, Sebastien; Fraysse, Bernard; Deguine, Olivier; Barone, Pascal
2013-12-01
Modern cochlear implantation technologies allow deaf patients to understand auditory speech; however, the implants deliver only a coarse auditory input and patients must use long-term adaptive processes to achieve coherent percepts. In adults with post-lingual deafness, the greatest progress in speech recovery is observed during the first year after cochlear implantation, but there is a large range of variability in the level of cochlear implant outcomes and the temporal evolution of recovery. It has been proposed that when profoundly deaf subjects receive a cochlear implant, the visual cross-modal reorganization of the brain is deleterious for auditory speech recovery. We tested this hypothesis in post-lingually deaf adults by analysing whether brain activity shortly after implantation correlated with the level of auditory recovery 6 months later. Based on brain activity induced by a speech-processing task, we found strong positive correlations in areas outside the auditory cortex. The highest positive correlations were found in the occipital cortex involved in visual processing, as well as in the posterior-temporal cortex known for audio-visual integration. The other area, which positively correlated with auditory speech recovery, was localized in the left inferior frontal area known for speech processing. Our results demonstrate that the visual modality's functional level is related to the proficiency level of auditory recovery. Based on the positive correlation of visual activity with auditory speech recovery, we suggest that the visual modality may facilitate the perception of the word's auditory counterpart in communicative situations. The link demonstrated between visual activity and auditory speech perception indicates that visuoauditory synergy is crucial for cross-modal plasticity and fostering speech-comprehension recovery in adult cochlear-implanted deaf patients.
Wilkinson, Sam
2018-01-01
Two challenges that face popular self-monitoring theories (SMTs) of auditory verbal hallucination (AVH) are that they cannot account for the auditory phenomenology of AVHs and that they cannot account for their variety. In this paper I show that both challenges can be met by adopting a predictive processing framework (PPF), and by viewing AVHs as arising from abnormalities in predictive processing. I show how, within the PPF, both the auditory phenomenology of AVHs, and three subtypes of AVH, can be accounted for. PMID:25286243
Basic Auditory Processing and Developmental Dyslexia in Chinese
ERIC Educational Resources Information Center
Wang, Hsiao-Lan Sharon; Huss, Martina; Hamalainen, Jarmo A.; Goswami, Usha
2012-01-01
The present study explores the relationship between basic auditory processing of sound rise time, frequency, duration and intensity, phonological skills (onset-rime and tone awareness, sound blending, RAN, and phonological memory) and reading disability in Chinese. A series of psychometric, literacy, phonological, auditory, and character…
Temporal processing and long-latency auditory evoked potential in stutterers.
Prestes, Raquel; de Andrade, Adriana Neves; Santos, Renata Beatriz Fernandes; Marangoni, Andrea Tortosa; Schiefer, Ana Maria; Gil, Daniela
Stuttering is a speech fluency disorder and may be associated with neuroaudiological factors linked to central auditory processing, including changes in auditory processing skills and temporal resolution. To characterize the temporal processing and long-latency auditory evoked potential in stutterers and to compare them with non-stutterers. The study included 41 right-handed subjects, aged 18-46 years, divided into two groups: stutterers (n=20) and non-stutterers (n=21), compared according to age, education, and sex. All subjects were submitted to the duration pattern tests, random gap detection test, and long-latency auditory evoked potential. Individuals who stutter showed poorer performance on Duration Pattern and Random Gap Detection tests when compared with fluent individuals. In the long-latency auditory evoked potential, there was a difference in the latency of N2 and P3 components; stutterers had higher latency values. Stutterers have poor performance in temporal processing and higher latency values for N2 and P3 components. Copyright © 2017 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Ito, T; Inoue, K; Takada, M
2015-12-03
Macaque monkeys use complex communication calls and are regarded as a model for studying the coding and decoding of complex sound in the auditory system. However, little is known about the distribution of excitatory and inhibitory neurons in the auditory system of macaque monkeys. In this study, we examined the overall distribution of cell bodies that expressed mRNAs for VGLUT1 and VGLUT2 (markers for glutamatergic neurons), GAD67 (a marker for GABAergic neurons), and GLYT2 (a marker for glycinergic neurons) in the auditory system of the Japanese macaque. In addition, we performed immunohistochemistry for VGLUT1, VGLUT2, and GAD67 in order to compare the distribution of proteins and mRNAs. We found that most of the excitatory neurons in the auditory brainstem expressed VGLUT2. In contrast, the expression of VGLUT1 mRNA was restricted to the auditory cortex (AC), periolivary nuclei, and cochlear nuclei (CN). The co-expression of GAD67 and GLYT2 mRNAs was common in the ventral nucleus of the lateral lemniscus (VNLL), CN, and superior olivary complex except for the medial nucleus of the trapezoid body, which expressed GLYT2 alone. In contrast, the dorsal nucleus of the lateral lemniscus, inferior colliculus, thalamus, and AC expressed GAD67 alone. The absence of co-expression of VGLUT1 and VGLUT2 in the medial geniculate, medial superior olive, and VNLL suggests that synaptic responses in the target neurons of these nuclei may be different between rodents and macaque monkeys. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.
Smith, Sherri L.; Pichora-Fuller, M. Kathleen
2015-01-01
Listeners with hearing loss commonly report having difficulty understanding speech, particularly in noisy environments. Their difficulties could be due to auditory and cognitive processing problems. Performance on speech-in-noise tests has been correlated with reading working memory span (RWMS), a measure often chosen to avoid the effects of hearing loss. If the goal is to assess the cognitive consequences of listeners' auditory processing abilities, however, then listening working memory span (LWMS) could be a more informative measure. Some studies have examined the effects of different degrees and types of masking on working memory, but less is known about the demands placed on working memory depending on the linguistic complexity of the target speech or the task used to measure speech understanding in listeners with hearing loss. Compared to RWMS, LWMS measures using different speech targets and maskers may provide a more ecologically valid approach. To examine the contributions of RWMS and LWMS to speech understanding, we administered two working memory measures (a traditional RWMS measure and a new LWMS measure), and a battery of tests varying in the linguistic complexity of the speech materials, the presence of babble masking, and the task. Participants were a group of younger listeners with normal hearing and two groups of older listeners with hearing loss (n = 24 per group). There was a significant group difference and a wider range in performance on LWMS than on RWMS. There was a significant correlation between both working memory measures only for the oldest listeners with hearing loss. Notably, there were only a few significant correlations among the working memory and speech understanding measures. These findings suggest that working memory measures reflect individual differences that are distinct from those tapped by these measures of speech understanding. PMID:26441769
Individual Differences in the Frequency-Following Response: Relation to Pitch Perception
Coffey, Emily B. J.; Colagrosso, Emilia M. G.; Lehmann, Alexandre; Schönwiesner, Marc; Zatorre, Robert J.
2016-01-01
The scalp-recorded frequency-following response (FFR) is a measure of the auditory nervous system’s representation of periodic sound, and may serve as a marker of training-related enhancements, behavioural deficits, and clinical conditions. However, FFRs of healthy normal subjects show considerable variability that remains unexplained. We investigated whether the FFR representation of the frequency content of a complex tone is related to the perception of the pitch of the fundamental frequency. The strength of the fundamental frequency in the FFR of 39 people with normal hearing was assessed when they listened to complex tones that either included or lacked energy at the fundamental frequency. We found that the strength of the fundamental representation of the missing fundamental tone complex correlated significantly with people's general tendency to perceive the pitch of the tone as either matching the frequency of the spectral components that were present, or that of the missing fundamental. Although at a group level the fundamental representation in the FFR did not appear to be affected by the presence or absence of energy at the same frequency in the stimulus, the two conditions were statistically distinguishable for some subjects individually, indicating that the neural representation is not linearly dependent on the stimulus content. In a second experiment using a within-subjects paradigm, we showed that subjects can learn to reversibly select between either fundamental or spectral perception, and that this is accompanied both by changes to the fundamental representation in the FFR and to cortical-based gamma activity. These results suggest that both fundamental and spectral representations coexist, and are available for later auditory processing stages, the requirements of which may also influence their relative strength and thus modulate FFR variability. The data also highlight voluntary mode perception as a new paradigm with which to study top-down vs bottom-up mechanisms that support the emerging view of the FFR as the outcome of integrated processing in the entire auditory system. PMID:27015271
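One simple way to operationalize the "strength of the fundamental representation" discussed above is to compare the FFR spectral magnitude at the fundamental frequency with that of neighbouring frequency bins. The sketch below illustrates such an SNR-style measure; the 100 Hz fundamental, the noise bandwidth, the windowing choice, and the toy data are assumptions for illustration and may differ from the metric used in the study.

```python
import numpy as np

def f0_strength(ffr, fs, f0=100.0, noise_bw=20.0):
    """Spectral magnitude of the FFR at the fundamental, expressed relative to
    the mean magnitude of neighbouring frequency bins (an SNR-like measure).
    The 100 Hz fundamental and 20 Hz noise band are illustrative values."""
    spectrum = np.abs(np.fft.rfft(ffr * np.hanning(ffr.size)))
    freqs = np.fft.rfftfreq(ffr.size, 1 / fs)
    signal_bin = np.argmin(np.abs(freqs - f0))
    noise_mask = (np.abs(freqs - f0) > 2) & (np.abs(freqs - f0) < noise_bw)
    return spectrum[signal_bin] / spectrum[noise_mask].mean()

# Toy FFR: a 100 Hz response buried in noise, 0.2 s at 16 kHz
fs = 16000.0
t = np.arange(0, 0.2, 1 / fs)
ffr = 0.3 * np.sin(2 * np.pi * 100 * t) + np.random.randn(t.size)
print(round(f0_strength(ffr, fs), 1))
```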
Infant discrimination of rapid auditory cues predicts later language impairment.
Benasich, April A; Tallal, Paula
2002-10-17
The etiology and mechanisms of specific language impairment (SLI) in children are unknown. Differences in basic auditory processing abilities have been suggested to underlie their language deficits. Studies suggest that the neuropathology implicated in such impairments, such as atypical patterns of cerebral lateralization and cortical cellular anomalies, likely occurs early in life. Such anomalies may play a part in the rapid processing deficits seen in this disorder. However, prospective, longitudinal studies in infant populations that are critical to examining these hypotheses have not been done. In the study described, performance on brief, rapidly presented, successive auditory processing and perceptual-cognitive tasks was assessed in two groups of infants: normal control infants with no family history of language disorders and infants from families with a positive family history for language impairment. Initial assessments were obtained when infants were 6-9 months of age (M=7.5 months) and the sample was then followed through age 36 months. At the first visit, infants' processing of rapid auditory cues as well as global processing speed and memory were assessed. Significant differences in mean thresholds were seen in infants born into families with a history of SLI as compared with controls. Examination of relations between infant processing abilities and emerging language through 24 months of age revealed that threshold for rapid auditory processing at 7.5 months was the single best predictor of language outcome. At age 3, rapid auditory processing threshold and being male together predicted 39-41% of the variance in language outcome. Thus, early deficits in rapid auditory processing abilities both precede and predict subsequent language delays. These findings support an essential role for basic nonlinguistic, central auditory processes, particularly rapid spectrotemporal processing, in early language development. Further, these findings provide a temporal diagnostic window during which future language impairments may be addressed.
Irsik, Vanessa C; Vanden Bosch der Nederlanden, Christina M; Snyder, Joel S
2016-11-01
Attention and other processing constraints limit the perception of objects in complex scenes, which has been studied extensively in the visual sense. We used a change deafness paradigm to examine how attention to particular objects helps and hurts the ability to notice changes within complex auditory scenes. In a counterbalanced design, we examined how cueing attention to particular objects affected performance in an auditory change-detection task through the use of valid or invalid cues and trials without cues (Experiment 1). We further examined how successful encoding predicted change-detection performance using an object-encoding task and we addressed whether performing the object-encoding task along with the change-detection task affected performance overall (Experiment 2). Participants had more error for invalid compared to valid and uncued trials, but this effect was reduced in Experiment 2 compared to Experiment 1. When the object-encoding task was present, listeners who completed the uncued condition first had less overall error than those who completed the cued condition first. All participants showed less change deafness when they successfully encoded change-relevant compared to irrelevant objects during valid and uncued trials. However, only participants who completed the uncued condition first also showed this effect during invalid cue trials, suggesting a broader scope of attention. These findings provide converging evidence that attention to change-relevant objects is crucial for successful detection of acoustic changes and that encouraging broad attention to multiple objects is the best way to reduce change deafness. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Auditory Power-Law Activation Avalanches Exhibit a Fundamental Computational Ground State
NASA Astrophysics Data System (ADS)
Stoop, Ruedi; Gomez, Florian
2016-07-01
The cochlea provides a biological information-processing paradigm that we are only beginning to understand in its full complexity. Our work reveals an interacting network of strongly nonlinear dynamical nodes, on which even a simple sound input triggers subnetworks of activated elements that follow power-law size statistics ("avalanches"). From dynamical systems theory, power-law size distributions relate to a fundamental ground state of biological information processing. Learning destroys these power laws. These results strongly modify the models of mammalian sound processing and provide a novel methodological perspective for understanding how the brain processes information.
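Avalanche size distributions of the kind described above are typically characterized by fitting a power-law exponent. A minimal maximum-likelihood estimate for a continuous power-law tail, in the spirit of Clauset-style fitting, is sketched below; the synthetic avalanche sizes and the choice of xmin are illustrative, not taken from the study.

```python
import numpy as np

def powerlaw_exponent(sizes, xmin=1.0):
    """Maximum-likelihood estimate of the exponent alpha for a continuous
    power-law tail p(x) ~ x^-alpha, x >= xmin."""
    x = np.asarray(sizes, dtype=float)
    x = x[x >= xmin]
    return 1.0 + x.size / np.sum(np.log(x / xmin))

# Toy avalanche sizes drawn from a power law with alpha = 1.5 via inverse-CDF sampling
rng = np.random.default_rng(1)
u = rng.uniform(size=10000)
sizes = (1 - u) ** (-1.0 / (1.5 - 1.0))    # Pareto samples with xmin = 1
print(round(powerlaw_exponent(sizes), 2))   # should be close to 1.5
```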
Pannese, Alessia; Grandjean, Didier; Frühholz, Sascha
2016-12-01
Discriminating between auditory signals of different affective value is critical to successful social interaction. It is commonly held that acoustic decoding of such signals occurs in the auditory system, whereas affective decoding occurs in the amygdala. However, given that the amygdala receives direct subcortical projections that bypass the auditory cortex, it is possible that some acoustic decoding occurs in the amygdala as well, when the acoustic features are relevant for affective discrimination. We tested this hypothesis by combining functional neuroimaging with the neurophysiological phenomena of repetition suppression (RS) and repetition enhancement (RE) in human listeners. Our results show that both amygdala and auditory cortex responded differentially to physical voice features, suggesting that the amygdala and auditory cortex decode the affective quality of the voice not only by processing the emotional content from previously processed acoustic features, but also by processing the acoustic features themselves, when these are relevant to the identification of the voice's affective value. Specifically, we found that the auditory cortex is sensitive to spectral high-frequency voice cues when discriminating vocal anger from vocal fear and joy, whereas the amygdala is sensitive to vocal pitch when discriminating between negative vocal emotions (i.e., anger and fear). Vocal pitch is an instantaneously recognized voice feature, which is potentially transferred to the amygdala by direct subcortical projections. These results together provide evidence that, besides the auditory cortex, the amygdala too processes acoustic information, when this is relevant to the discrimination of auditory emotions. Copyright © 2016 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Caplan, David; Waters, Gloria; Bertram, Julia; Ostrowski, Adam; Michaud, Jennifer
2016-01-01
The authors assessed 4,865 middle and high school students for the ability to recognize and understand written and spoken morphologically simple words, morphologically complex words, and the syntactic structure of sentences and for the ability to answer questions about facts presented in a written passage and to make inferences based on those…
ERIC Educational Resources Information Center
Iliadou, Vasiliki; Bamiou, Doris Eva
2012-01-01
Purpose: To investigate the clinical utility of the Children's Auditory Processing Performance Scale (CHAPPS; Smoski, Brunt, & Tannahill, 1992) to evaluate listening ability in 12-year-old children referred for auditory processing assessment. Method: This was a prospective case control study of 97 children (age range = 11;4 [years;months] to…
ERIC Educational Resources Information Center
Emerson, Maria F.; And Others
1997-01-01
The SCAN: A Screening Test for Auditory Processing Disorders was administered to 14 elementary children with a history of otitis media and 14 typical children, to evaluate the validity of the test in identifying children with central auditory processing disorder. Another experiment found that test results differed based on the testing environment…
Seeing the Song: Left Auditory Structures May Track Auditory-Visual Dynamic Alignment
Mossbridge, Julia A.; Grabowecky, Marcia; Suzuki, Satoru
2013-01-01
Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment. PMID:24194873
Temporal factors affecting somatosensory–auditory interactions in speech processing
Ito, Takayuki; Gracco, Vincent L.; Ostry, David J.
2014-01-01
Speech perception is known to rely on both auditory and visual information. However, sound-specific somatosensory input has been shown also to influence speech perceptual processing (Ito et al., 2009). In the present study, we further examined the relationship between somatosensory information and speech perceptual processing by testing the hypothesis that the temporal relationship between orofacial movement and sound processing contributes to somatosensory–auditory interaction in speech perception. We examined the changes in event-related potentials (ERPs) in response to multisensory synchronous (simultaneous) and asynchronous (90 ms lag and lead) somatosensory and auditory stimulation compared to individual unisensory auditory and somatosensory stimulation alone. We used a robotic device to apply facial skin somatosensory deformations that were similar in timing and duration to those experienced in speech production. Following synchronous multisensory stimulation, the amplitude of the ERP was reliably different from the two unisensory potentials. More importantly, the magnitude of the ERP difference varied as a function of the relative timing of the somatosensory–auditory stimulation. Event-related activity change due to stimulus timing was seen between 160 and 220 ms following somatosensory onset, mostly around the parietal area. The results demonstrate a dynamic modulation of somatosensory–auditory convergence and suggest that the contribution of somatosensory information to speech processing depends on the specific temporal order of sensory inputs in speech production. PMID:25452733
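A standard way to test for the kind of multisensory interaction described above is to compare the ERP to combined somatosensory-auditory stimulation with the sum of the two unisensory ERPs and average the difference within a latency window. The sketch below uses the 160-220 ms window mentioned in the abstract; the waveforms themselves are synthetic placeholders, not the study's data.

```python
import numpy as np

def multisensory_interaction(erp_multi, erp_aud, erp_som, times, win=(0.160, 0.220)):
    """Difference between the multisensory ERP and the sum of the unisensory
    ERPs, averaged within a latency window. A nonzero value indicates a
    non-additive (interaction) effect."""
    mask = (times >= win[0]) & (times <= win[1])
    difference = erp_multi - (erp_aud + erp_som)
    return difference[mask].mean()

# Synthetic example sampled at 500 Hz
fs = 500.0
times = np.arange(-0.1, 0.5, 1 / fs)
erp_aud = -3 * np.exp(-((times - 0.10) ** 2) / 0.0005)
erp_som = -2 * np.exp(-((times - 0.12) ** 2) / 0.0005)
erp_multi = erp_aud + erp_som - 1.5 * np.exp(-((times - 0.19) ** 2) / 0.0005)
print(round(multisensory_interaction(erp_multi, erp_aud, erp_som, times), 2))
```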
Maclin, Edward L; Mathewson, Kyle E; Low, Kathy A; Boot, Walter R; Kramer, Arthur F; Fabiani, Monica; Gratton, Gabriele
2011-09-01
Changes in attention allocation with complex task learning reflect processing automatization and more efficient control. We studied these changes using ERP and EEG spectral analyses in subjects playing Space Fortress, a complex video game comprising standard cognitive task components. We hypothesized that training would free up attentional resources for a secondary auditory oddball task. Both P3 and delta EEG showed a processing trade-off between game and oddball tasks, but only some game events showed reduced attention requirements with practice. Training magnified a transient increase in alpha power following both primary and secondary task events. This contrasted with alpha suppression observed when the oddball task was performed alone, suggesting that alpha may be related to attention switching. Hence, P3 and EEG spectral data are differentially sensitive to changes in attentional processing occurring with complex task training. Copyright © 2011 Society for Psychophysiological Research.
Schmithorst, Vincent J; Brown, Rhonda Douglas
2004-07-01
The suitability of a previously hypothesized triple-code model of numerical processing, involving analog magnitude, auditory verbal, and visual Arabic codes of representation, was investigated for the complex mathematical task of the mental addition and subtraction of fractions. Functional magnetic resonance imaging (fMRI) data from 15 normal adult subjects were processed using exploratory group Independent Component Analysis (ICA). Separate task-related components were found with activation in bilateral inferior parietal, left perisylvian, and ventral occipitotemporal areas. These results support the hypothesized triple-code model corresponding to the activated regions found in the individual components and indicate that the triple-code model may be a suitable framework for analyzing the neuropsychological bases of the performance of complex mathematical tasks. Copyright 2004 Elsevier Inc.
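Group ICA of fMRI data, as used above, is commonly implemented by concatenating subjects' time series and estimating spatially independent components. The sketch below shows a minimal temporal-concatenation version with scikit-learn's FastICA; it omits the dimensionality-reduction and back-reconstruction steps of a full group ICA pipeline, and the toy data are random, so this is a schematic illustration rather than the study's analysis.

```python
import numpy as np
from sklearn.decomposition import FastICA

def group_spatial_ica(data_per_subject, n_components=5, seed=0):
    """Minimal temporal-concatenation group ICA: stack each subject's
    (time x voxel) matrix along the time axis, then estimate spatially
    independent components. Returns (voxels x components) spatial maps
    and the concatenated (time x components) time courses."""
    stacked = np.vstack(data_per_subject)               # (total_time, n_voxels)
    ica = FastICA(n_components=n_components, random_state=seed, max_iter=1000)
    spatial_maps = ica.fit_transform(stacked.T)          # samples = voxels
    time_courses = ica.mixing_                            # (total_time, n_components)
    return spatial_maps, time_courses

# Toy data: 3 "subjects", 50 time points, 200 voxels each
rng = np.random.default_rng(0)
subjects = [rng.standard_normal((50, 200)) for _ in range(3)]
maps, tcs = group_spatial_ica(subjects)
print(maps.shape, tcs.shape)  # (200, 5) (150, 5)
```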
Neural coding strategies in auditory cortex.
Wang, Xiaoqin
2007-07-01
In contrast to the visual system, the auditory system has longer subcortical pathways and more spiking synapses between the peripheral receptors and the cortex. This unique organization reflects the needs of the auditory system to extract behaviorally relevant information from a complex acoustic environment using strategies different from those used by other sensory systems. The neural representations of acoustic information in auditory cortex can be characterized by three types: (1) isomorphic (faithful) representations of acoustic structures; (2) non-isomorphic transformations of acoustic features and (3) transformations from acoustical to perceptual dimensions. The challenge facing auditory neurophysiologists is to understand the nature of the latter two transformations. In this article, I will review recent studies from our laboratory regarding temporal discharge patterns in auditory cortex of awake marmosets and cortical representations of time-varying signals. Findings from these studies show that (1) firing patterns of neurons in auditory cortex are dependent on stimulus optimality and context and (2) the auditory cortex forms internal representations of sounds that are no longer faithful replicas of their acoustic structures.
Estradiol-dependent modulation of auditory processing and selectivity in songbirds
Maney, Donna; Pinaud, Raphael
2011-01-01
The steroid hormone estradiol plays an important role in reproductive development and behavior and modulates a wide array of physiological and cognitive processes. Recently, reports from several research groups have converged to show that estradiol also powerfully modulates sensory processing, specifically, the physiology of central auditory circuits in songbirds. These investigators have discovered that (1) behaviorally-relevant auditory experience rapidly increases estradiol levels in the auditory forebrain; (2) estradiol instantaneously enhances the responsiveness and coding efficiency of auditory neurons; (3) these changes are mediated by a non-genomic effect of brain-generated estradiol on the strength of inhibitory neurotransmission; and (4) estradiol regulates biochemical cascades that induce the expression of genes involved in synaptic plasticity. Together, these findings have established estradiol as a central regulator of auditory function and intensified the need to consider brain-based mechanisms, in addition to peripheral organ dysfunction, in hearing pathologies associated with estrogen deficiency. PMID:21146556
Demopoulos, Carly; Yu, Nina; Tripp, Jennifer; Mota, Nayara; Brandes-Aitken, Anne N.; Desai, Shivani S.; Hill, Susanna S.; Antovich, Ashley D.; Harris, Julia; Honma, Susanne; Mizuiri, Danielle; Nagarajan, Srikantan S.; Marco, Elysa J.
2017-01-01
This study compared magnetoencephalographic (MEG) imaging-derived indices of auditory and somatosensory cortical processing in children aged 8–12 years with autism spectrum disorder (ASD; N = 18), those with sensory processing dysfunction (SPD; N = 13) who do not meet ASD criteria, and typically developing control (TDC; N = 19) participants. The magnitude of responses to both auditory and tactile stimulation was comparable across all three groups; however, the M200 latency response from the left auditory cortex was significantly delayed in the ASD group relative to both the TDC and SPD groups, whereas the somatosensory response of the ASD group was only delayed relative to TDC participants. The SPD group did not significantly differ from either group in terms of somatosensory latency, suggesting that participants with SPD may have an intermediate phenotype between ASD and TDC with regard to somatosensory processing. For the ASD group, correlation analyses indicated that the left M200 latency delay was significantly associated with performance on the WISC-IV Verbal Comprehension Index as well as the DSTP Acoustic-Linguistic index. Further, these cortical auditory response delays were not associated with somatosensory cortical response delays or cognitive processing speed in the ASD group, suggesting that auditory delays in ASD are domain specific rather than associated with generalized processing delays. The specificity of these auditory delays to the ASD group, in addition to their correlation with verbal abilities, suggests that auditory sensory dysfunction may be implicated in communication symptoms in ASD, motivating further research aimed at understanding the impact of sensory dysfunction on the developing brain. PMID:28603492
Barone, Pascal; Chambaudie, Laure; Strelnikov, Kuzma; Fraysse, Bernard; Marx, Mathieu; Belin, Pascal; Deguine, Olivier
2016-10-01
Due to signal distortion, speech comprehension in cochlear-implanted (CI) patients relies strongly on visual information, a compensatory strategy supported by important cortical crossmodal reorganisations. Though crossmodal interactions are evident for speech processing, it is unclear whether a visual influence is observed in CI patients during non-linguistic visual-auditory processing, such as face-voice interactions, which are important in social communication. We analyse and compare visual-auditory interactions in CI patients and normal-hearing subjects (NHS) at equivalent auditory performance levels. Proficient CI patients and NHS performed a voice-gender categorisation in the visual-auditory modality from a morphing-generated voice continuum between male and female speakers, while ignoring the presentation of a male or female visual face. Our data show that during the face-voice interaction, CI deaf patients are strongly influenced by visual information when performing an auditory gender categorisation task, in spite of maximum recovery of auditory speech. No such effect is observed in NHS, even in situations of CI simulation. Our hypothesis is that the functional crossmodal reorganisation that occurs in deafness could influence nonverbal processing, such as face-voice interaction; this is important for patient internal supramodal representation. Copyright © 2016 Elsevier Ltd. All rights reserved.
Ludersdorfer, Philipp; Wimmer, Heinz; Richlan, Fabio; Schurz, Matthias; Hutzler, Florian; Kronbichler, Martin
2016-01-01
The present fMRI study investigated the hypothesis that activation of the left ventral occipitotemporal cortex (vOT) in response to auditory words can be attributed to lexical orthographic rather than lexico-semantic processing. To this end, we presented auditory words in both an orthographic ("three or four letter word?") and a semantic ("living or nonliving?") task. In addition, an auditory control condition presented tones in a pitch evaluation task. The results showed that the left vOT exhibited higher activation for orthographic relative to semantic processing of auditory words with a peak in the posterior part of vOT. Comparisons to the auditory control condition revealed that orthographic processing of auditory words elicited activation in a large vOT cluster. In contrast, activation for semantic processing was only weak and restricted to the middle part of vOT. We interpret our findings as speaking for orthographic processing in left vOT. In particular, we suggest that activation in left middle vOT can be attributed to accessing orthographic whole-word representations. While activation of such representations was experimentally ascertained in the orthographic task, it might have also occurred automatically in the semantic task. Activation in the more posterior vOT region, on the other hand, may reflect the generation of explicit images of word-specific letter sequences required by the orthographic but not the semantic task. In addition, based on cross-modal suppression, the finding of marked deactivations in response to the auditory tones is taken to reflect the visual nature of representations and processes in left vOT. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Whittaker, Christopher A
2012-01-01
In the search for 'pure autism', non-verbal children labeled aloof, Severely Autistic with Developmental Disabilities (ASA/DD), are routinely excluded from psychological research. This exclusion is predicated on the claim that they are indistinguishable from those with SLD/PMLD, which is refuted through a discussion of the extant literature. A novel, falsifiable, speech aversion hypothesis is proposed: "aloof, non-verbal young children (<7 years), with severe autism (CARS≥37), but without significant dysmorphic features, will show aversive reactions to complex speech (>2-3 words), but not to a silent interlocutor, or one imitating their vocalizations, in proximal encounters." Implications are examined by deconstructing the presenting symptoms of ASA/DD in response to the hypothesis. Supporting evidence is drawn from: Minimal Speech Approach (MSA) research showing high levels of spontaneous requests for social routines; a reinterpretation of still-face research as a still-(silent)-face paradigm; auditory processing MMN data employing EEG/MEG; and possible links to epileptiform activity and verbal auditory agnosia. Guidelines are established for future research. This hypothesis, if corroborated, would add to the auditory processing anomalies seen in severe autism and lead to synergies of existing and new areas of research, with significant theoretical, therapeutic, and educational implications. Copyright © 2011 Elsevier Ltd. All rights reserved.
Riva, Valentina; Cantiani, Chiara; Benasich, April A; Molteni, Massimo; Piazza, Caterina; Giorda, Roberto; Dionne, Ginette; Marino, Cecilia
2018-06-01
Although it is clear that early language acquisition can be a target of CNTNAP2, the pathway between gene and language is still largely unknown. This research focused on the mediation role of rapid auditory processing (RAP). We tested RAP at 6 months of age by the use of event-related potentials, as a mediator between common variants of the CNTNAP2 gene (rs7794745 and rs2710102) and 20-month-old language outcome in a prospective longitudinal study of 96 Italian infants. The mediation model examines the hypothesis that language outcome is explained by a sequence of effects involving RAP and CNTNAP2. The ability to discriminate spectrotemporally complex auditory frequency changes at 6 months of age mediates the contribution of rs2710102 to expressive vocabulary at 20 months. The indirect effect revealed that rs2710102 C/C was associated with lower P3 amplitude in the right hemisphere, which, in turn, predicted poorer expressive vocabulary at 20 months of age. These findings add to a growing body of literature implicating RAP as a viable marker in genetic studies of language development. The results demonstrate a potential developmental cascade of effects, whereby CNTNAP2 drives RAP functioning that, in turn, contributes to early expressive outcome.
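The mediation model described above can be illustrated with a small numerical sketch: the indirect (a*b) path from genotype through the RAP marker to vocabulary, tested with a percentile bootstrap. All variables below are simulated placeholders, not the study's data or statistical pipeline.

```python
# Hedged sketch: bootstrap test of an indirect (mediation) effect,
# genotype -> RAP (P3 amplitude) -> expressive vocabulary.
# Variable names and data are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)
n = 96
genotype = rng.integers(0, 2, n)              # e.g., rs2710102 C/C vs. other allele (coded 0/1)
rap_p3 = 0.8 * genotype + rng.normal(size=n)  # mediator: right-hemisphere P3 amplitude
vocab = 0.5 * rap_p3 + rng.normal(size=n)     # outcome: 20-month expressive vocabulary

def indirect_effect(x, m, y):
    # a-path: x -> m ; b-path: m -> y controlling for x (simple OLS via lstsq)
    a = np.polyfit(x, m, 1)[0]
    X = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(X, y, rcond=None)[0][1]
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)               # resample participants with replacement
    boot.append(indirect_effect(genotype[idx], rap_p3[idx], vocab[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b: {indirect_effect(genotype, rap_p3, vocab):.3f}, "
      f"95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```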
Auditory Cortex Processes Variation in Our Own Speech
Sitek, Kevin R.; Mathalon, Daniel H.; Roach, Brian J.; Houde, John F.; Niziolek, Caroline A.; Ford, Judith M.
2013-01-01
As we talk, we unconsciously adjust our speech to ensure it sounds the way we intend it to sound. However, because speech production involves complex motor planning and execution, no two utterances of the same sound will be exactly the same. Here, we show that auditory cortex is sensitive to natural variations in self-produced speech from utterance to utterance. We recorded event-related potentials (ERPs) from ninety-nine subjects while they uttered “ah” and while they listened to those speech sounds played back. Subjects' utterances were sorted based on their formant deviations from the previous utterance. Typically, the N1 ERP component is suppressed during talking compared to listening. By comparing ERPs to the least and most variable utterances, we found that N1 was less suppressed to utterances that differed greatly from their preceding neighbors. In contrast, an utterance's difference from the median formant values did not affect N1. Trial-to-trial pitch (f0) deviation and pitch difference from the median similarly did not affect N1. We discuss mechanisms that may underlie the change in N1 suppression resulting from trial-to-trial formant change. Deviant utterances require additional auditory cortical processing, suggesting that speaking-induced suppression mechanisms are optimally tuned for a specific production. PMID:24349399
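To make the sorting step concrete, the sketch below shows one way utterances could be ranked by their formant distance from the immediately preceding utterance and then compared on N1 amplitude; the formant values, N1 amplitudes, and the Euclidean distance measure are illustrative assumptions rather than the authors' exact procedure.

```python
# Hedged sketch: sort self-produced utterances by their formant distance from the
# preceding utterance, then compare mean N1 amplitude for the least vs. most
# variable trials. `formants` (trials x 2: F1, F2) and `n1_amp` are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
formants = rng.normal(loc=[700.0, 1200.0], scale=[60.0, 90.0], size=(200, 2))
n1_amp = rng.normal(size=200)                                  # per-trial N1 amplitude (µV)

dev_prev = np.linalg.norm(np.diff(formants, axis=0), axis=1)   # distance from previous trial
dev_prev = np.concatenate([[np.nan], dev_prev])                # first trial has no predecessor

valid = ~np.isnan(dev_prev)
order = np.argsort(dev_prev[valid])
amp = n1_amp[valid][order]
third = len(amp) // 3
print("mean N1, least variable third:", amp[:third].mean())
print("mean N1, most variable third: ", amp[-third:].mean())
```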
Auditory processing and morphological anomalies in medial geniculate nucleus of Cntnap2 mutant mice.
Truong, Dongnhu T; Rendall, Amanda R; Castelluccio, Brian C; Eigsti, Inge-Marie; Fitch, R Holly
2015-12-01
Genetic epidemiological studies support a role for CNTNAP2 in developmental language disorders such as autism spectrum disorder, specific language impairment, and dyslexia. Atypical language development and function represent a core symptom of autism spectrum disorder (ASD), with evidence suggesting that aberrant auditory processing (including impaired spectrotemporal processing and enhanced pitch perception) may both contribute to an anomalous language phenotype. Investigation of gene-brain-behavior relationships in social and repetitive ASD symptomatology has benefited from experimentation on the Cntnap2 knockout (KO) mouse. However, auditory-processing behavior and effects on neural structures within the central auditory pathway have not been assessed in this model. Thus, this study examined whether auditory-processing abnormalities were associated with mutation of the Cntnap2 gene in mice. Cntnap2 KO mice were assessed on auditory-processing tasks including silent gap detection, embedded tone detection, and pitch discrimination. Cntnap2 knockout mice showed deficits in silent gap detection but a surprising superiority in pitch-related discrimination as compared with controls. Stereological analysis revealed a reduction in the number and density of neurons, as well as a shift in neuronal size distribution toward smaller neurons, in the medial geniculate nucleus of mutant mice. These findings are consistent with a central role for CNTNAP2 in the ontogeny and function of neural systems subserving auditory processing and suggest that developmental disruption of these neural systems could contribute to the atypical language phenotype seen in autism spectrum disorder. (c) 2015 APA, all rights reserved.
Processing of pitch and location in human auditory cortex during visual and auditory tasks.
Häkkinen, Suvi; Ovaska, Noora; Rinne, Teemu
2015-01-01
The relationship between stimulus-dependent and task-dependent activations in human auditory cortex (AC) during pitch and location processing is not well understood. In the present functional magnetic resonance imaging study, we investigated the processing of task-irrelevant and task-relevant pitch and location during discrimination, n-back, and visual tasks. We tested three hypotheses: (1) According to prevailing auditory models, stimulus-dependent processing of pitch and location should be associated with enhanced activations in distinct areas of the anterior and posterior superior temporal gyrus (STG), respectively. (2) Based on our previous studies, task-dependent activation patterns during discrimination and n-back tasks should be similar when these tasks are performed on sounds varying in pitch or location. (3) Previous studies in humans and animals suggest that pitch and location tasks should enhance activations especially in those areas that also show activation enhancements associated with stimulus-dependent pitch and location processing, respectively. Consistent with our hypotheses, we found stimulus-dependent sensitivity to pitch and location in anterolateral STG and anterior planum temporale (PT), respectively, in line with the view that these features are processed in separate parallel pathways. Further, task-dependent activations during discrimination and n-back tasks were associated with enhanced activations in anterior/posterior STG and posterior STG/inferior parietal lobule (IPL) irrespective of stimulus features. However, direct comparisons between pitch and location tasks performed on identical sounds revealed no significant activation differences. These results suggest that activations during pitch and location tasks are not strongly affected by enhanced stimulus-dependent activations to pitch or location. We also found that activations in PT were strongly modulated by task requirements and that areas in the inferior parietal lobule (IPL) showed task-dependent activation modulations, but no systematic activations to pitch or location. Based on these results, we argue that activations during pitch and location tasks cannot be explained by enhanced stimulus-specific processing alone, but rather that activations in human AC depend in a complex manner on the requirements of the task at hand.
Persistent Thalamic Sound Processing Despite Profound Cochlear Denervation.
Chambers, Anna R; Salazar, Juan J; Polley, Daniel B
2016-01-01
Neurons at higher stages of sensory processing can partially compensate for a sudden drop in peripheral input through a homeostatic plasticity process that increases the gain on weak afferent inputs. Even after a profound unilateral auditory neuropathy where >95% of afferent synapses between auditory nerve fibers and inner hair cells have been eliminated with ouabain, central gain can restore cortical processing and perceptual detection of basic sounds delivered to the denervated ear. In this model of profound auditory neuropathy, auditory cortex (ACtx) processing and perception recover despite the absence of an auditory brainstem response (ABR) or brainstem acoustic reflexes, and only a partial recovery of sound processing at the level of the inferior colliculus (IC), an auditory midbrain nucleus. In this study, we induced a profound cochlear neuropathy with ouabain and asked whether central gain enabled a compensatory plasticity in the auditory thalamus comparable to the full recovery of function previously observed in the ACtx, the partial recovery observed in the IC, or something different entirely. Unilateral ouabain treatment in adult mice effectively eliminated the ABR, yet robust sound-evoked activity persisted in a minority of units recorded from the contralateral medial geniculate body (MGB) of awake mice. Sound driven MGB units could decode moderate and high-intensity sounds with accuracies comparable to sham-treated control mice, but low-intensity classification was near chance. Pure tone receptive fields and synchronization to broadband pulse trains also persisted, albeit with significantly reduced quality and precision, respectively. MGB decoding of temporally modulated pulse trains and speech tokens were both greatly impaired in ouabain-treated mice. Taken together, the absence of an ABR belied a persistent auditory processing at the level of the MGB that was likely enabled through increased central gain. Compensatory plasticity at the level of the auditory thalamus was less robust overall than previous observations in cortex or midbrain. Hierarchical differences in compensatory plasticity following sensorineural hearing loss may reflect differences in GABA circuit organization within the MGB, as compared to the ACtx or IC.
Brainstem Correlates of Temporal Auditory Processing in Children with Specific Language Impairment
ERIC Educational Resources Information Center
Basu, Madhavi; Krishnan, Ananthanarayan; Weber-Fox, Christine
2010-01-01
Deficits in identification and discrimination of sounds with short inter-stimulus intervals or short formant transitions in children with specific language impairment (SLI) have been taken to reflect an underlying temporal auditory processing deficit. Using the sustained frequency following response (FFR) and the onset auditory brainstem responses…
Positron Emission Tomography in Cochlear Implant and Auditory Brainstem Implant Recipients.
ERIC Educational Resources Information Center
Miyamoto, Richard T.; Wong, Donald
2001-01-01
Positron emission tomography imaging was used to evaluate the brain's response to auditory stimulation, including speech, in deaf adults (five with cochlear implants and one with an auditory brainstem implant). Functional speech processing was associated with activation in areas classically associated with speech processing. (Contains five…
Auditory Processing Learning Disability, Suicidal Ideation, and Transformational Faith
ERIC Educational Resources Information Center
Bailey, Frank S.; Yocum, Russell G.
2015-01-01
The purpose of this personal experience as a narrative investigation is to describe how an auditory processing learning disability exacerbated--and how spirituality and religiosity relieved--suicidal ideation, through the lived experiences of an individual born and raised in the United States. The study addresses: (a) how an auditory processing…
The neural processing of hierarchical structure in music and speech at different timescales
Farbood, Morwaread M.; Heeger, David J.; Marcus, Gary; Hasson, Uri; Lerner, Yulia
2015-01-01
Music, like speech, is a complex auditory signal that contains structures at multiple timescales, and as such is a potentially powerful entry point into the question of how the brain integrates complex streams of information. Using an experimental design modeled after previous studies that used scrambled versions of a spoken story (Lerner et al., 2011) and a silent movie (Hasson et al., 2008), we investigate whether listeners perceive hierarchical structure in music beyond short (~6 s) time windows and whether there is cortical overlap between music and language processing at multiple timescales. Experienced pianists were presented with an extended musical excerpt scrambled at multiple timescales—by measure, phrase, and section—while measuring brain activity with functional magnetic resonance imaging (fMRI). The reliability of evoked activity, as quantified by inter-subject correlation of the fMRI responses, was measured. We found that response reliability depended systematically on musical structure coherence, revealing a topographically organized hierarchy of processing timescales. Early auditory areas (at the bottom of the hierarchy) responded reliably in all conditions. For brain areas at the top of the hierarchy, the original (unscrambled) excerpt evoked more reliable responses than any of the scrambled excerpts, indicating that these brain areas process long-timescale musical structures, on the order of minutes. The topography of processing timescales was analogous with that reported previously for speech, but the timescale gradients for music and speech overlapped with one another only partially, suggesting that temporally analogous structures—words/measures, sentences/musical phrases, paragraph/sections—are processed separately. PMID:26029037
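Inter-subject correlation, the reliability measure used above, is typically computed by correlating each subject's response time course with the average of the remaining subjects; the sketch below illustrates that leave-one-out scheme on simulated single-voxel data (not the study's dataset).

```python
# Hedged sketch: inter-subject correlation (ISC) as a reliability measure,
# correlating each subject's response time course with the mean of all other
# subjects. `data` (subjects x timepoints) is a hypothetical single-voxel example.
import numpy as np

rng = np.random.default_rng(2)
shared = rng.normal(size=300)                          # shared stimulus-driven signal
data = shared + rng.normal(scale=2.0, size=(12, 300))  # 12 subjects, 300 timepoints

def isc(data):
    vals = []
    for s in range(data.shape[0]):
        others = np.delete(data, s, axis=0).mean(axis=0)   # leave-one-out average
        vals.append(np.corrcoef(data[s], others)[0, 1])
    return np.mean(vals)

print("mean leave-one-out ISC:", round(isc(data), 3))
```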
Ronald, Kelly L; Fernández-Juricic, Esteban; Lucas, Jeffrey R
2018-05-16
A common assumption in sexual selection studies is that receivers decode signal information similarly. However, receivers may vary in how they rank signallers if signal perception varies with an individual's sensory configuration. Furthermore, receivers may vary in their weighting of different elements of multimodal signals based on their sensory configuration. This could lead to complex levels of selection on signalling traits. We tested whether multimodal sensory configuration could affect preferences for multimodal signals. We used brown-headed cowbird (Molothrus ater) females to examine how auditory sensitivity and auditory filters, which influence auditory spectral and temporal resolution, affect song preferences, and how visual spatial resolution and visual temporal resolution, which influence resolution of a moving visual signal, affect visual display preferences. Our results show that multimodal sensory configuration significantly affects preferences for male displays: females with better auditory temporal resolution preferred songs that were shorter, with lower Wiener entropy, and higher frequency; and females with better visual temporal resolution preferred males with less intense visual displays. Our findings provide new insights into mate-choice decisions and receiver signal processing. Furthermore, our results challenge a long-standing assumption in animal communication, which can affect how we address honest signalling, assortative mating and sensory drive. © 2018 The Author(s).
Fam, Justine; Holmes, Nathan; Delaney, Andrew; Crane, James; Westbrook, R Frederick
2018-06-14
Oxytocin (OT) is a neuropeptide which influences the expression of social behavior and regulates its distribution according to the social context - OT is associated with increased pro-social effects in the absence of social threat and defensive aggression when threats are present. The present experiments investigated the effects of OT beyond that of social behavior by using a discriminative Pavlovian fear conditioning protocol with rats. In Experiment 1, an OT receptor agonist (TGOT) microinjected into the basolateral amygdala facilitated the discrimination between an auditory cue that signaled shock and another auditory cue that signaled the absence of shock. This TGOT-facilitated discrimination was replicated in a second experiment where the shocked and non-shocked auditory cues were accompanied by a common visual cue. Conditioned responding on probe trials of the auditory and visual elements indicated that TGOT administration produced a qualitative shift in the learning mechanisms underlying the discrimination between the two compounds. This was confirmed by comparisons between the present results and simulated predictions of elemental and configural associative learning models. Overall, the present findings demonstrate that the neuromodulatory effects of OT influence behavior outside of the social domain. Copyright © 2018 Elsevier Ltd. All rights reserved.
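The comparison with simulated elemental and configural models can be illustrated with a minimal elemental (Rescorla-Wagner) simulation of an AX+/BX- style discrimination, where a common element X accompanies the reinforced and non-reinforced auditory cues; the parameters and trial counts below are arbitrary assumptions, and a configural model would instead assign each compound its own representation.

```python
# Hedged sketch: elemental (Rescorla-Wagner) learning for two compounds,
# AX+ (auditory A plus common visual X, shocked) and BX- (auditory B plus X,
# not shocked). Parameters are illustrative, not fitted to the study's data.
alpha, beta = 0.3, 1.0
V = {"A": 0.0, "B": 0.0, "X": 0.0}

def trial(cues, outcome):
    v_total = sum(V[c] for c in cues)
    for c in cues:                        # all present cues share the prediction error
        V[c] += alpha * beta * (outcome - v_total)

for _ in range(50):
    trial(["A", "X"], 1.0)                # reinforced compound
    trial(["B", "X"], 0.0)                # non-reinforced compound

print({k: round(v, 2) for k, v in V.items()})
# An elemental model predicts intermediate responding to X alone;
# a configural model treats AX and BX as distinct learned units instead.
```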
Trainor, Laurel J
2015-03-19
Whether music was an evolutionary adaptation that conferred survival advantages or a cultural creation has generated much debate. Consistent with an evolutionary hypothesis, music is unique to humans, emerges early in development and is universal across societies. However, the adaptive benefit of music is far from obvious. Music is highly flexible, generative and changes rapidly over time, consistent with a cultural creation hypothesis. In this paper, it is proposed that much of musical pitch and timing structure adapted to preexisting features of auditory processing that evolved for auditory scene analysis (ASA). Thus, music may have emerged initially as a cultural creation made possible by preexisting adaptations for ASA. However, some aspects of music, such as its emotional and social power, may have subsequently proved beneficial for survival and led to adaptations that enhanced musical behaviour. Ontogenetic and phylogenetic evidence is considered in this regard. In particular, enhanced auditory-motor pathways in humans that enable movement entrainment to music and consequent increases in social cohesion, and pathways enabling music to affect reward centres in the brain should be investigated as possible musical adaptations. It is concluded that the origins of music are complex and probably involved exaptation, cultural creation and evolutionary adaptation.
Da Costa, Sandra; Bourquin, Nathalie M.-P.; Knebel, Jean-François; Saenz, Melissa; van der Zwaag, Wietske; Clarke, Stephanie
2015-01-01
Environmental sounds are highly complex stimuli whose recognition depends on the interaction of top-down and bottom-up processes in the brain. Their semantic representations were shown to yield repetition suppression effects, i.e., a decrease in activity during exposure to a sound that is perceived as belonging to the same source as a preceding sound. Making use of the high spatial resolution of 7T fMRI we have investigated the representations of sound objects within early-stage auditory areas on the supratemporal plane. The primary auditory cortex was identified by means of tonotopic mapping and the non-primary areas by comparison with previous histological studies. Repeated presentations of different exemplars of the same sound source, as compared to the presentation of different sound sources, yielded significant repetition suppression effects within a subset of early-stage areas. This effect was found within the right hemisphere in primary areas A1 and R as well as two non-primary areas on the antero-medial part of the planum temporale, and within the left hemisphere in A1 and a non-primary area on the medial part of Heschl’s gyrus. Thus, several, but not all early-stage auditory areas encode the meaning of environmental sounds. PMID:25938430
Auditory conflict and congruence in frontotemporal dementia.
Clark, Camilla N; Nicholas, Jennifer M; Agustus, Jennifer L; Hardy, Christopher J D; Russell, Lucy L; Brotherhood, Emilie V; Dick, Katrina M; Marshall, Charles R; Mummery, Catherine J; Rohrer, Jonathan D; Warren, Jason D
2017-09-01
Impaired analysis of signal conflict and congruence may contribute to diverse socio-emotional symptoms in frontotemporal dementias, however the underlying mechanisms have not been defined. Here we addressed this issue in patients with behavioural variant frontotemporal dementia (bvFTD; n = 19) and semantic dementia (SD; n = 10) relative to healthy older individuals (n = 20). We created auditory scenes in which semantic and emotional congruity of constituent sounds were independently probed; associated tasks controlled for auditory perceptual similarity, scene parsing and semantic competence. Neuroanatomical correlates of auditory congruity processing were assessed using voxel-based morphometry. Relative to healthy controls, both the bvFTD and SD groups had impaired semantic and emotional congruity processing (after taking auditory control task performance into account) and reduced affective integration of sounds into scenes. Grey matter correlates of auditory semantic congruity processing were identified in distributed regions encompassing prefrontal, parieto-temporal and insular areas and correlates of auditory emotional congruity in partly overlapping temporal, insular and striatal regions. Our findings suggest that decoding of auditory signal relatedness may probe a generic cognitive mechanism and neural architecture underpinning frontotemporal dementia syndromes. Copyright © 2017 The Author(s). Published by Elsevier Ltd.. All rights reserved.
When music is salty: The crossmodal associations between sound and taste.
Guetta, Rachel; Loui, Psyche
2017-01-01
Here we investigate associations between complex auditory and complex taste stimuli. A novel piece of music was composed and recorded in four different styles of musical articulation to reflect the four basic taste groups (sweet, sour, salty, bitter). In Experiment 1, participants performed above chance at pairing the music clips with corresponding taste words. Experiment 2 uses multidimensional scaling to interpret how participants categorize these musical stimuli, and to show that auditory categories can be organized in a manner similar to taste categories. Experiment 3 introduces four different flavors of custom-made chocolate ganache and shows that participants can match music clips with the corresponding taste stimuli with above-chance accuracy. Experiment 4 demonstrates the partial role of pleasantness in crossmodal mappings between sound and taste. The present findings confirm that individuals are able to make crossmodal associations between complex auditory and gustatory stimuli, and that valence may mediate multisensory integration in the general population.
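Multidimensional scaling, used in Experiment 2, embeds stimuli in a low-dimensional space from their pairwise dissimilarities; the sketch below shows the basic call with scikit-learn on an invented 4x4 dissimilarity matrix standing in for judgments of the four taste-styled clips.

```python
# Hedged sketch: multidimensional scaling (MDS) embedding of stimuli from a
# pairwise dissimilarity matrix. The 4x4 matrix below is a hypothetical
# placeholder for dissimilarities among the four taste-styled music clips.
import numpy as np
from sklearn.manifold import MDS

labels = ["sweet", "sour", "salty", "bitter"]
dissim = np.array([[0.0, 0.7, 0.6, 0.9],
                   [0.7, 0.0, 0.5, 0.6],
                   [0.6, 0.5, 0.0, 0.4],
                   [0.9, 0.6, 0.4, 0.0]])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)
for name, (x, y) in zip(labels, coords):
    print(f"{name:>6}: ({x:+.2f}, {y:+.2f})")
```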
A Corticothalamic Circuit Model for Sound Identification in Complex Scenes
Otazu, Gonzalo H.; Leibold, Christian
2011-01-01
The identification of the sound sources present in the environment is essential for the survival of many animals. However, these sounds are not presented in isolation, as natural scenes consist of a superposition of sounds originating from multiple sources. The identification of a source under these circumstances is a complex computational problem that is readily solved by most animals. We present a model of the thalamocortical circuit that performs level-invariant recognition of auditory objects in complex auditory scenes. The circuit identifies the objects present from a large dictionary of possible elements and operates reliably for real sound signals with multiple concurrently active sources. The key model assumption is that the activities of some cortical neurons encode the difference between the observed signal and an internal estimate. Reanalysis of awake auditory cortex recordings revealed neurons with patterns of activity corresponding to such an error signal. PMID:21931668
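The model's central assumption, that some cortical neurons encode the difference between the observed signal and an internal estimate assembled from a dictionary of known sources, can be sketched as a non-negative reconstruction followed by a residual computation; the dictionary, mixture, and use of non-negative least squares below are illustrative simplifications, not the authors' circuit model.

```python
# Hedged sketch: identify which dictionary elements (sound sources) are present
# in a mixture via a non-negative reconstruction, then compute the residual
# "error signal" (observed minus internal estimate). Dictionary is hypothetical.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
D = np.abs(rng.normal(size=(64, 10)))      # 64 spectral channels x 10 known sources
true_w = np.zeros(10)
true_w[[2, 7]] = [1.0, 0.5]                # two concurrently active sources
observed = D @ true_w + 0.01 * rng.normal(size=64)

weights, _ = nnls(D, observed)             # internal estimate of source activations
estimate = D @ weights
error_signal = observed - estimate         # what the model's cortical error units would encode
print("active sources (weight > 0.1):", np.where(weights > 0.1)[0])
print("residual norm:", round(np.linalg.norm(error_signal), 4))
```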
Grose, John H; Buss, Emily; Hall, Joseph W
2017-01-01
The purpose of this study was to test the hypothesis that listeners with frequent exposure to loud music exhibit deficits in suprathreshold auditory performance consistent with cochlear synaptopathy. Young adults with normal audiograms were recruited who either did (n = 31) or did not (n = 30) have a history of frequent attendance at loud music venues where the typical sound levels could be expected to result in temporary threshold shifts. A test battery was administered that comprised three sets of procedures: (a) electrophysiological tests including distortion product otoacoustic emissions, auditory brainstem responses, envelope following responses, and the acoustic change complex evoked by an interaural phase inversion; (b) psychoacoustic tests including temporal modulation detection, spectral modulation detection, and sensitivity to interaural phase; and (c) speech tests including filtered phoneme recognition and speech-in-noise recognition. The results demonstrated that a history of loud music exposure can lead to a profile of peripheral auditory function that is consistent with an interpretation of cochlear synaptopathy in humans, namely, modestly abnormal auditory brainstem response Wave I/Wave V ratios in the presence of normal distortion product otoacoustic emissions and normal audiometric thresholds. However, there were no other electrophysiological, psychophysical, or speech perception effects. The absence of any behavioral effects in suprathreshold sound processing indicated that, even if cochlear synaptopathy is a valid pathophysiological condition in humans, its perceptual sequelae are either too diffuse or too inconsequential to permit a simple differential diagnosis of hidden hearing loss.
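For readers unfamiliar with the Wave I/Wave V metric mentioned above, the sketch below shows one straightforward way such an amplitude ratio could be computed from an averaged ABR waveform; the waveform, sampling rate, and latency windows are synthetic placeholders, not the study's recordings or analysis.

```python
# Hedged sketch: ABR Wave I / Wave V amplitude ratio from an averaged waveform.
# The waveform, sampling rate, and latency windows are hypothetical placeholders.
import numpy as np

fs = 20000                                             # sampling rate (Hz)
t = np.arange(0, 0.010, 1 / fs)                        # 10 ms post-stimulus epoch
abr = (0.3 * np.exp(-((t - 0.0016) / 0.0003) ** 2)     # synthetic Wave I near 1.6 ms
       + 0.5 * np.exp(-((t - 0.0056) / 0.0005) ** 2))  # synthetic Wave V near 5.6 ms

def peak_amp(wave, t, lo, hi):
    sel = (t >= lo) & (t <= hi)                        # restrict to a latency window
    return wave[sel].max()

wave_i = peak_amp(abr, t, 0.0010, 0.0022)
wave_v = peak_amp(abr, t, 0.0050, 0.0065)
print("Wave I/V amplitude ratio:", round(wave_i / wave_v, 2))
```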
How visual cues for when to listen aid selective auditory attention.
Varghese, Lenny A; Ozmeral, Erol J; Best, Virginia; Shinn-Cunningham, Barbara G
2012-06-01
Visual cues are known to aid auditory processing when they provide direct information about signal content, as in lip reading. However, some studies hint that visual cues also aid auditory perception by guiding attention to the target in a mixture of similar sounds. The current study directly tests this idea for complex, nonspeech auditory signals, using a visual cue providing only timing information about the target. Listeners were asked to identify a target zebra finch bird song played at a random time within a longer, competing masker. Two different maskers were used: noise and a chorus of competing bird songs. On half of all trials, a visual cue indicated the timing of the target within the masker. For the noise masker, the visual cue did not affect performance when target and masker were from the same location, but improved performance when target and masker were in different locations. In contrast, for the chorus masker, visual cues improved performance only when target and masker were perceived as coming from the same direction. These results suggest that simple visual cues for when to listen improve target identification by enhancing sounds near the threshold of audibility when the target is energetically masked and by enhancing segregation when it is difficult to direct selective attention to the target. Visual cues help little when target and masker already differ in attributes that enable listeners to engage selective auditory attention effectively, including differences in spectrotemporal structure and in perceived location.
Functional Topography of Human Auditory Cortex
Rauschecker, Josef P.
2016-01-01
Functional and anatomical studies have clearly demonstrated that auditory cortex is populated by multiple subfields. However, functional characterization of those fields has been largely the domain of animal electrophysiology, limiting the extent to which human and animal research can inform each other. In this study, we used high-resolution functional magnetic resonance imaging to characterize human auditory cortical subfields using a variety of low-level acoustic features in the spectral and temporal domains. Specifically, we show that topographic gradients of frequency preference, or tonotopy, extend along two axes in human auditory cortex, thus reconciling historical accounts of a tonotopic axis oriented medial to lateral along Heschl's gyrus and more recent findings emphasizing tonotopic organization along the anterior–posterior axis. Contradictory findings regarding topographic organization according to temporal modulation rate in acoustic stimuli, or “periodotopy,” are also addressed. Although isolated subregions show a preference for high rates of amplitude-modulated white noise (AMWN) in our data, large-scale “periodotopic” organization was not found. Organization by AM rate was correlated with dominant pitch percepts in AMWN in many regions. In short, our data expose early auditory cortex chiefly as a frequency analyzer, and spectral frequency, as imposed by the sensory receptor surface in the cochlea, seems to be the dominant feature governing large-scale topographic organization across human auditory cortex. SIGNIFICANCE STATEMENT In this study, we examine the nature of topographic organization in human auditory cortex with fMRI. Topographic organization by spectral frequency (tonotopy) extended in two directions: medial to lateral, consistent with early neuroimaging studies, and anterior to posterior, consistent with more recent reports. Large-scale organization by rates of temporal modulation (periodotopy) was correlated with confounding spectral content of amplitude-modulated white-noise stimuli. Together, our results suggest that the organization of human auditory cortex is driven primarily by its response to spectral acoustic features, and large-scale periodotopy spanning across multiple regions is not supported. This fundamental information regarding the functional organization of early auditory cortex will inform our growing understanding of speech perception and the processing of other complex sounds. PMID:26818527
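Tonotopic maps of the kind described here are commonly derived by assigning each voxel the frequency condition that evokes its largest response; the sketch below illustrates that step on a hypothetical voxel-by-frequency response matrix (not the study's data or pipeline).

```python
# Hedged sketch: voxel-wise "best frequency" assignment for tonotopic mapping.
# `responses` (voxels x frequency conditions) is a hypothetical response matrix.
import numpy as np

freqs_hz = np.array([200, 400, 800, 1600, 3200, 6400])
rng = np.random.default_rng(4)
responses = rng.random((500, len(freqs_hz)))          # 500 voxels x 6 tone conditions

best_freq = freqs_hz[np.argmax(responses, axis=1)]    # preferred frequency per voxel
for f in freqs_hz:
    print(f"{f:>5} Hz: {np.sum(best_freq == f):3d} voxels")
```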
ERIC Educational Resources Information Center
Boets, Bart; Wouters, Jan; van Wieringen, Astrid; Ghesquiere, Pol
2006-01-01
In this project, the hypothesis of an auditory temporal processing deficit in dyslexia was tested by examining auditory processing in relation to phonological skills in two contrasting groups of five-year-old preschool children, a familial high risk and a familial low risk group. Participants were individually matched for gender, age, non-verbal…
Wilkinson, Sam
2014-11-01
Two challenges that face popular self-monitoring theories (SMTs) of auditory verbal hallucination (AVH) are that they cannot account for the auditory phenomenology of AVHs and that they cannot account for their variety. In this paper I show that both challenges can be met by adopting a predictive processing framework (PPF), and by viewing AVHs as arising from abnormalities in predictive processing. I show how, within the PPF, both the auditory phenomenology of AVHs, and three subtypes of AVH, can be accounted for. Copyright © 2014 The Author. Published by Elsevier Inc. All rights reserved.
Tsai, Min-Lan; Hung, Kun-Long; Tsan, Ying-Ying; Tung, William Tao-Hsin
2015-06-01
Whether prolonged or complex febrile seizures (FS) produce long-term injury to the hippocampus is a critical question concerning the neurocognitive outcome of these seizures. Long-term event-related evoked potential (ERP) recording from the scalp is a noninvasive technique reflecting the sensory and cognitive processes associated with attention tasks. This study aimed to investigate the long-term outcome of neurocognitive and attention functions and evaluated auditory event-related potentials in children who have experienced complex FS in comparison with other types of FS. One hundred and forty-seven children aged more than 6 years who had experienced complex FS, simple single FS, simple recurrent FS, or afebrile seizures (AFS) after FS and age-matched healthy controls were enrolled. Patients were evaluated with Wechsler Intelligence Scale for Children (WISC; Chinese WISC-IV) scores, behavior test scores (Chinese version of Conners' continuous performance test, CPT II V.5), and behavior rating scales. Auditory ERPs were recorded in each patient. Patients who had experienced complex FS exhibited significantly lower full-scale intelligence quotient (FSIQ), perceptual reasoning index, and working memory index scores than did the control group but did not show significant differences in CPT scores, behavior rating scales, or ERP latencies and amplitude compared with the other groups with FS. We found a significant decrease in the FSIQ and four indices of the WISC-IV, higher behavior rating scales, a trend of increased CPT II scores, and significantly delayed P300 latency and reduced P300 amplitude in the patients with AFS after FS. We conclude that there is an effect on cognitive function in children who have experienced complex FS and patients who developed AFS after FS. The results indicated that the WISC-IV is more sensitive in detecting cognitive abnormality than ERP. Cognition impairment, including perceptual reasoning and working memory defects, was identified in patients with prolonged, multiple, or focal FS. These results may have implications for the pathogenesis of complex FS. Further comprehensive psychological evaluation and educational programs are suggested. Copyright © 2015 Elsevier Inc. All rights reserved.
Meta-analysis of mismatch negativity to simple versus complex deviants in schizophrenia.
Avissar, Michael; Xie, Shanghong; Vail, Blair; Lopez-Calderon, Javier; Wang, Yuanjia; Javitt, Daniel C
2018-01-01
Mismatch negativity (MMN) deficits in schizophrenia (SCZ) have been studied extensively since the early 1990s, with the vast majority of studies using simple auditory oddball task deviants that vary in a single acoustic dimension such as pitch or duration. There has been a growing interest in using more complex deviants that violate more abstract rules to probe higher order cognitive deficits. It is still unclear how sensory processing deficits compare to and contribute to higher order cognitive dysfunction, which can be investigated with later attention-dependent auditory event-related potential (ERP) components such as a subcomponent of P300, P3b. In this meta-analysis, we compared MMN deficits in SCZ using simple deviants to more complex deviants. We also pooled studies that measured MMN and P3b in the same study sample and examined the relationship between MMN and P3b deficits within study samples. Our analysis reveals that, to date, studies using simple deviants demonstrate larger deficits than those using complex deviants, with effect sizes in the range of moderate to large. The difference in effect sizes between deviant types was reduced significantly when accounting for magnitude of MMN measured in healthy controls. P3b deficits, while large, were only modestly greater than MMN deficits (d=0.21). Taken together, our findings suggest that MMN to simple deviants may still be optimal as a biomarker for SCZ and that sensory processing dysfunction contributes significantly to MMN deficit and disease pathophysiology. Copyright © 2017 Elsevier B.V. All rights reserved.
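The pooled effect sizes reported above come from meta-analytic aggregation across studies; the sketch below illustrates one standard approach (DerSimonian-Laird random-effects pooling of Cohen's d values), using invented per-study effect sizes and variances rather than the paper's actual data.

```python
# Hedged sketch: DerSimonian-Laird random-effects pooling of standardized mean
# differences (Cohen's d). Per-study d values and variances are hypothetical.
import numpy as np

d = np.array([0.85, 0.60, 0.95, 0.70, 0.55])      # per-study effect sizes
v = np.array([0.05, 0.04, 0.06, 0.05, 0.03])      # per-study sampling variances

w = 1.0 / v                                       # fixed-effect weights
q = np.sum(w * (d - np.sum(w * d) / np.sum(w)) ** 2)   # Cochran's Q (heterogeneity)
df = len(d) - 1
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)                     # between-study variance estimate
w_star = 1.0 / (v + tau2)                         # random-effects weights
pooled = np.sum(w_star * d) / np.sum(w_star)
se = np.sqrt(1.0 / np.sum(w_star))
print(f"pooled d = {pooled:.2f}, 95% CI [{pooled - 1.96*se:.2f}, {pooled + 1.96*se:.2f}]")
```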
Cell-assembly coding in several memory processes.
Sakurai, Y
1998-01-01
The present paper discusses why the cell assembly, i.e., an ensemble population of neurons with flexible functional connections, is a tenable view of the basic code for information processes in the brain. The main properties indicating the reality of cell-assembly coding are the overlap of neurons among different assemblies and connection dynamics within and among the assemblies. The former can be detected as multiple functions of individual neurons in processing different kinds of information. Individual neurons appear to be involved in multiple information processes. The latter can be detected as changes of functional synaptic connections in processing different kinds of information. Correlations of activity among some of the recorded neurons appear to change in multiple information processes. Recent experiments have compared several different memory processes (tasks) and detected these two main properties, indicating cell-assembly coding of memory in the working brain. The first experiment compared different types of processing of identical stimuli, i.e., working memory and reference memory of auditory stimuli. The second experiment compared identical processes of different types of stimuli, i.e., discriminations of simple auditory, simple visual, and configural auditory-visual stimuli. The third experiment compared identical processes of different types of stimuli with or without temporal processing of stimuli, i.e., discriminations of elemental auditory, configural auditory-visual, and sequential auditory-visual stimuli. Some possible features of the cell-assembly coding, especially "dual coding" by individual neurons and cell assemblies, are discussed for future experimental approaches. Copyright 1998 Academic Press.
Thalamic and cortical pathways supporting auditory processing
Lee, Charles C.
2012-01-01
The neural processing of auditory information engages pathways that begin initially at the cochlea and that eventually reach forebrain structures. At these higher levels, the computations necessary for extracting auditory source and identity information rely on the neuroanatomical connections between the thalamus and cortex. Here, the general organization of these connections in the medial geniculate body (thalamus) and the auditory cortex is reviewed. In addition, we consider two models organizing the thalamocortical pathways of the non-tonotopic and multimodal auditory nuclei. Overall, the transfer of information to the cortex via the thalamocortical pathways is complemented by the numerous intracortical and corticocortical pathways. Although interrelated, the convergent interactions among thalamocortical, corticocortical, and commissural pathways enable the computations necessary for the emergence of higher auditory perception. PMID:22728130
Developmental changes in automatic rule-learning mechanisms across early childhood.
Mueller, Jutta L; Friederici, Angela D; Männel, Claudia
2018-06-27
Infants' ability to learn complex linguistic regularities from early on has been revealed by electrophysiological studies indicating that 3-month-olds, but not adults, can automatically detect non-adjacent dependencies between syllables. While different ERP responses in adults and infants suggest that both linguistic rule learning and its link to basic auditory processing undergo developmental changes, systematic investigations of the developmental trajectories are scarce. In the present study, we assessed 2- and 4-year-olds' ERP indicators of pitch discrimination and linguistic rule learning in a syllable-based oddball design. To test for the relation between auditory discrimination and rule learning, ERP responses to pitch changes were used as a predictor of potential linguistic rule-learning effects. Results revealed that 2-year-olds, but not 4-year-olds, showed ERP markers of rule learning. Although 2-year-olds' rule learning was not dependent on differences in pitch perception, 4-year-old children demonstrated a dependency, such that those children who showed more pronounced responses to pitch changes still showed an effect of rule learning. These results narrow down the developmental decline of the ability for automatic linguistic rule learning to the age between 2 and 4 years, and, moreover, point towards a strong modification of this change by auditory processes. At an age when the ability of automatic linguistic rule learning phases out, rule learning can still be observed in children with enhanced auditory responses. The observed interrelations are plausible causes for age-of-acquisition effects and inter-individual differences in language learning. © 2018 John Wiley & Sons Ltd.
Giraudet, L; Imbert, J-P; Bérenger, M; Tremblay, S; Causse, M
2015-11-01
The Air Traffic Control (ATC) environment is complex and safety-critical. Whilst exchanging information with pilots, controllers must also be alert to visual notifications displayed on the radar screen (e.g., warning which indicates a loss of minimum separation between aircraft). Under the assumption that attentional resources are shared between vision and hearing, the visual interface design may also impact the ability to process these auditory stimuli. Using a simulated ATC task, we compared the behavioral and neural responses to two different visual notification designs--the operational alarm that involves blinking colored "ALRT" displayed around the label of the notified plane ("Color-Blink"), and the more salient alarm involving the same blinking text plus four moving yellow chevrons ("Box-Animation"). Participants performed a concurrent auditory task with the requirement to react to rare pitch tones. P300 from the occurrence of the tones was taken as an indicator of remaining attentional resources. Participants who were presented with the more salient visual design showed better accuracy than the group with the suboptimal operational design. On a physiological level, auditory P300 amplitude in the former group was greater than that observed in the latter group. One potential explanation is that the enhanced visual design freed up attentional resources which, in turn, improved the cerebral processing of the auditory stimuli. These results suggest that P300 amplitude can be used as a valid estimation of the efficiency of interface designs, and of cognitive load more generally. Copyright © 2015 Elsevier B.V. All rights reserved.
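Since the analysis above uses auditory P300 amplitude as an index of residual attentional resources, the sketch below shows a generic way to measure mean ERP amplitude in a post-stimulus window from deviant-tone epochs; the data, sampling rate, and 300-500 ms window are illustrative assumptions, not the study's parameters.

```python
# Hedged sketch: mean P300 amplitude from deviant-tone epochs, measured over a
# conventional 300-500 ms post-stimulus window. Epochs, sampling rate, and the
# injected P300-like component are hypothetical placeholders.
import numpy as np

fs = 250                                              # sampling rate (Hz)
times = np.arange(-0.2, 0.8, 1 / fs)                  # epoch from -200 to 800 ms
rng = np.random.default_rng(5)
epochs = rng.normal(size=(40, times.size))            # 40 deviant trials at one electrode
epochs += 2.0 * np.exp(-((times - 0.4) / 0.08) ** 2)  # simulated P300-like component

erp = epochs.mean(axis=0)                             # average across trials
window = (times >= 0.3) & (times <= 0.5)
print("P300 mean amplitude (µV):", round(erp[window].mean(), 2))
```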
Listening to Another Sense: Somatosensory Integration in the Auditory System
Wu, Calvin; Stefanescu, Roxana A.; Martel, David T.
2014-01-01
Conventionally, sensory systems are viewed as separate entities, each with its own physiological process serving a different purpose. However, many functions require integrative inputs from multiple sensory systems, and sensory intersection and convergence occur throughout the central nervous system. The neural processes for hearing perception undergo significant modulation by the two other major sensory systems, vision and somatosensation. This synthesis occurs at every level of the ascending auditory pathway: the cochlear nucleus, inferior colliculus, medial geniculate body, and the auditory cortex. In this review, we explore the process of multisensory integration from 1) anatomical (inputs and connections), 2) physiological (cellular responses), 3) functional, and 4) pathological aspects. We focus on the convergence between auditory and somatosensory inputs in each ascending auditory station. This review highlights the intricacy of sensory processing, and offers a multisensory perspective regarding the understanding of sensory disorders. PMID:25526698
Increased Early Processing of Task-Irrelevant Auditory Stimuli in Older Adults
Tusch, Erich S.; Alperin, Brittany R.; Holcomb, Phillip J.; Daffner, Kirk R.
2016-01-01
The inhibitory deficit hypothesis of cognitive aging posits that older adults’ inability to adequately suppress processing of irrelevant information is a major source of cognitive decline. Prior research has demonstrated that in response to task-irrelevant auditory stimuli there is an age-associated increase in the amplitude of the N1 wave, an ERP marker of early perceptual processing. Here, we tested predictions derived from the inhibitory deficit hypothesis that the age-related increase in N1 would be 1) observed under an auditory-ignore, but not auditory-attend condition, 2) attenuated in individuals with high executive capacity (EC), and 3) augmented by increasing cognitive load of the primary visual task. ERPs were measured in 114 well-matched young, middle-aged, young-old, and old-old adults, designated as having high or average EC based on neuropsychological testing. Under the auditory-ignore (visual-attend) task, participants ignored auditory stimuli and responded to rare target letters under low and high load. Under the auditory-attend task, participants ignored visual stimuli and responded to rare target tones. Results confirmed an age-associated increase in N1 amplitude to auditory stimuli under the auditory-ignore but not auditory-attend task. Contrary to predictions, EC did not modulate the N1 response. The load effect was the opposite of expectation: the N1 to task-irrelevant auditory events was smaller under high load. Finally, older adults did not simply fail to suppress the N1 to auditory stimuli in the task-irrelevant modality; they generated a larger response than to identical stimuli in the task-relevant modality. In summary, several of the study’s findings do not fit the inhibitory-deficit hypothesis of cognitive aging, which may need to be refined or supplemented by alternative accounts. PMID:27806081
Speech Evoked Auditory Brainstem Response in Stuttering
Tahaei, Ali Akbar; Ashayeri, Hassan; Pourbakht, Akram; Kamali, Mohammad
2014-01-01
Auditory processing deficits have been hypothesized as an underlying mechanism for stuttering. Previous studies have demonstrated abnormal responses in subjects with persistent developmental stuttering (PDS) at the higher level of the central auditory system using speech stimuli. Recently, the potential usefulness of speech evoked auditory brainstem responses in central auditory processing disorders has been emphasized. The current study used the speech evoked ABR to investigate the hypothesis that subjects with PDS have specific auditory perceptual dysfunction. Objectives. To determine whether brainstem responses to speech stimuli differ between PDS subjects and normal fluent speakers. Methods. Twenty-five subjects with PDS participated in this study. The speech-ABRs were elicited by the 5-formant synthesized syllable /da/, with a duration of 40 ms. Results. There were significant group differences for the onset and offset transient peaks. Subjects with PDS had longer latencies for the onset and offset peaks relative to the control group. Conclusions. Subjects with PDS showed deficient neural timing in the early stages of the auditory pathway, consistent with temporal processing deficits; this abnormal timing may underlie their disfluency. PMID:25215262
Disbergen, Niels R.; Valente, Giancarlo; Formisano, Elia; Zatorre, Robert J.
2018-01-01
Polyphonic music listening well exemplifies processes typically involved in daily auditory scene analysis situations, relying on an interactive interplay between bottom-up and top-down processes. Most studies investigating scene analysis have used elementary auditory scenes; however, real-world scene analysis is far more complex. In particular, music, contrary to most other natural auditory scenes, can be perceived by either integrating or, under attentive control, segregating sound streams, often carried by different instruments. One of the prominent bottom-up cues contributing to multi-instrument music perception is the timbre difference between instruments. In this work, we introduce and validate a novel paradigm designed to investigate, within naturalistic musical auditory scenes, attentive modulation as well as its interaction with bottom-up processes. Two psychophysical experiments are described, employing custom-composed two-voice polyphonic music pieces within a framework implementing a behavioral performance metric to validate listener instructions requiring either integration or segregation of scene elements. In Experiment 1, the listeners' locus of attention was switched between individual instruments or the aggregate (i.e., both instruments together), via a task requiring the detection of temporal modulations (i.e., triplets) incorporated within or across instruments. Subjects responded post-stimulus whether triplets were present in the to-be-attended instrument(s). Experiment 2 introduced the bottom-up manipulation by adding a three-level morphing of instrument timbre distance to the attentional framework. The task was designed to be used within neuroimaging paradigms; Experiment 2 was additionally validated behaviorally in the functional Magnetic Resonance Imaging (fMRI) environment. Experiment 1 subjects (N = 29, non-musicians) completed the task at high levels of accuracy, showing no group differences between any experimental conditions. Nineteen listeners also participated in Experiment 2, showing a main effect of instrument timbre distance, even though within attention-condition timbre-distance contrasts did not demonstrate any timbre effect. Correlation of overall scores with morph-distance effects, computed by subtracting the largest from the smallest timbre distance scores, showed an influence of general task difficulty on the timbre distance effect. Comparison of laboratory and fMRI data showed scanner noise had no adverse effect on task performance. These experimental paradigms enable the study of both bottom-up and top-down contributions to auditory stream segregation and integration within psychophysical and neuroimaging experiments. PMID:29563861
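The behavioral performance metric is not detailed in this abstract; one common choice for a present/absent detection task like the triplet judgments described here is sensitivity (d'), sketched below with hypothetical response counts.

```python
# Hedged sketch: sensitivity (d') from hit and false-alarm counts for a
# present/absent triplet-detection task. Counts are hypothetical; the study's
# own performance metric may differ.
from statistics import NormalDist

def dprime(hits, misses, fas, crs):
    # log-linear correction avoids infinite z-scores at 0% or 100% rates
    hr = (hits + 0.5) / (hits + misses + 1)
    far = (fas + 0.5) / (fas + crs + 1)
    z = NormalDist().inv_cdf
    return z(hr) - z(far)

print("d':", round(dprime(hits=42, misses=8, fas=6, crs=44), 2))
```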
Pinniped Hearing in Complex Acoustic Environments
2013-09-30
[published] Mulsow, J. & Reichmuth, C. (2013). The binaural click-evoked auditory brainstem response of the California sea lion (Zalophus… [published] …California sea lion can keep the beat: Motor entrainment to rhythmic auditory stimuli in a non-vocal mimic. Journal of Comparative Psychology, online first. [published]
Anatomical Substrates of Visual and Auditory Miniature Second-language Learning
Newman-Norlund, Roger D.; Frey, Scott H.; Petitto, Laura-Ann; Grafton, Scott T.
2007-01-01
Longitudinal changes in brain activity during second language (L2) acquisition of a miniature finite-state grammar, named Wernickese, were identified with functional magnetic resonance imaging (fMRI). Participants learned either a visual sign language form or an auditory-verbal form to equivalent proficiency levels. Brain activity during sentence comprehension while hearing/viewing stimuli was assessed at low, medium, and high levels of proficiency in three separate fMRI sessions. Activation in the left inferior frontal gyrus (Broca’s area) correlated positively with improving L2 proficiency, whereas activity in the right-hemisphere (RH) homologue was negatively correlated for both auditory and visual forms of the language. Activity in sequence learning areas including the premotor cortex and putamen also correlated with L2 proficiency. Modality-specific differences in the blood oxygenation level-dependent signal accompanying L2 acquisition were localized to the planum temporale (PT). Participants learning the auditory form exhibited decreasing reliance on bilateral PT sites across sessions. In the visual form, bilateral PT activity increased between Session 1 and Session 2, then left PT activity decreased from Session 2 to Session 3. Comparison of L2 laterality (as compared to L1 laterality) in auditory and visual groups failed to demonstrate greater RH lateralization for the visual versus auditory L2. These data establish a common role for Broca’s area in language acquisition irrespective of the perceptual form of the language and suggest that L2s are processed similarly to first languages even when learned after the “critical period.” The right frontal cortex was not preferentially recruited by visual language after accounting for phonetic/structural complexity and performance. PMID:17129186
Hertz, Uri; Amedi, Amir
2015-01-01
The classical view of sensory processing involves independent processing in sensory cortices and multisensory integration in associative areas. This hierarchical structure has been challenged by evidence of multisensory responses in sensory areas, and dynamic weighting of sensory inputs in associative areas, thus far reported independently. Here, we used a visual-to-auditory sensory substitution algorithm (SSA) to manipulate the information conveyed by sensory inputs while keeping the stimuli intact. During scan sessions before and after SSA learning, subjects were presented with visual images and auditory soundscapes. The findings reveal 2 dynamic processes. First, crossmodal attenuation of sensory cortices changed direction after SSA learning from visual attenuations of the auditory cortex to auditory attenuations of the visual cortex. Secondly, associative areas changed their sensory response profile from strongest response for visual to that for auditory. The interaction between these phenomena may play an important role in multisensory processing. Consistent features were also found in the sensory dominance in sensory areas and audiovisual convergence in associative area Middle Temporal Gyrus. These 2 factors allow for both stability and a fast, dynamic tuning of the system when required. PMID:24518756
P50 Suppression in Children with Selective Mutism: A Preliminary Report
ERIC Educational Resources Information Center
Henkin, Yael; Feinholz, Maya; Arie, Miri; Bar-Haim, Yair
2010-01-01
Evidence suggests that children with selective mutism (SM) display significant aberrations in auditory efferent activity at the brainstem level that may underlie inefficient auditory processing during vocalization, and lead to speech avoidance. The objective of the present study was to explore auditory filtering processes at the cortical level in…
The Diagnosis and Management of Auditory Processing Disorder
ERIC Educational Resources Information Center
Moore, David R.
2011-01-01
Purpose: To provide a personal perspective on auditory processing disorder (APD), with reference to the recent clinical forum on APD and the needs of clinical speech-language pathologists and audiologists. Method: The Medical Research Council-Institute of Hearing Research (MRC-IHR) has been engaged in research into APD and auditory learning for 8…
Auditory and Linguistic Processes in the Perception of Intonation Contours.
ERIC Educational Resources Information Center
Studdert-Kennedy, Michael; Hadding, Kerstin
By examining the relations among sections of the fundamental frequency contour used in judging an utterance as a question or statement, the experiment described in this report seeks a more detailed understanding of auditory-linguistic interaction in the perception of intonation contours. The perceptual process may be divided into stages (auditory,…
Directional Effects between Rapid Auditory Processing and Phonological Awareness in Children
ERIC Educational Resources Information Center
Johnson, Erin Phinney; Pennington, Bruce F.; Lee, Nancy Raitano; Boada, Richard
2009-01-01
Background: Deficient rapid auditory processing (RAP) has been associated with early language impairment and dyslexia. Using an auditory masking paradigm, children with language disabilities perform selectively worse than controls at detecting a tone in a backward masking (BM) condition (tone followed by white noise) compared to a forward masking…
ERIC Educational Resources Information Center
Fey, Marc E.; Richard, Gail J.; Geffner, Donna; Kamhi, Alan G.; Medwetsky, Larry; Paul, Diane; Ross-Swain, Deborah; Wallach, Geraldine P.; Frymark, Tobi; Schooling, Tracy
2011-01-01
Purpose: In this systematic review, the peer-reviewed literature on the efficacy of interventions for school-age children with auditory processing disorder (APD) is critically evaluated. Method: Searches of 28 electronic databases yielded 25 studies for analysis. These studies were categorized by research phase (e.g., exploratory, efficacy) and…
ERIC Educational Resources Information Center
Kidd, Joanna C.; Shum, Kathy K.; Wong, Anita M.-Y.; Ho, Connie S.-H.
2017-01-01
Auditory processing and spoken word recognition difficulties have been observed in Specific Language Impairment (SLI), raising the possibility that auditory perceptual deficits disrupt word recognition and, in turn, phonological processing and oral language. In this study, fifty-seven kindergarten children with SLI and fifty-three language-typical…
Visual and Auditory Input in Second-Language Speech Processing
ERIC Educational Resources Information Center
Hardison, Debra M.
2010-01-01
The majority of studies in second-language (L2) speech processing have involved unimodal (i.e., auditory) input; however, in many instances, speech communication involves both visual and auditory sources of information. Some researchers have argued that multimodal speech is the primary mode of speech perception (e.g., Rosenblum 2005). Research on…
Cardin, Jessica A; Raksin, Jonathan N; Schmidt, Marc F
2005-04-01
Sensorimotor integration in the avian song system is crucial for both learning and maintenance of song, a vocal motor behavior. Although a number of song system areas demonstrate both sensory and motor characteristics, their exact roles in auditory and premotor processing are unclear. In particular, it is unknown whether input from the forebrain nucleus interface of the nidopallium (NIf), which exhibits both sensory and premotor activity, is necessary for both auditory and premotor processing in its target, HVC. Here we show that bilateral NIf lesions result in long-term loss of HVC auditory activity but do not impair song production. NIf is thus a major source of auditory input to HVC, but an intact NIf is not necessary for motor output in adult zebra finches.
Responses of auditory-cortex neurons to structural features of natural sounds.
Nelken, I; Rotman, Y; Bar Yosef, O
1999-01-14
Sound-processing strategies that use the highly non-random structure of natural sounds may confer evolutionary advantage to many species. Auditory processing of natural sounds has been studied almost exclusively in the context of species-specific vocalizations, although these form only a small part of the acoustic biotope. To study the relationships between properties of natural soundscapes and neuronal processing mechanisms in the auditory system, we analysed sound from a range of different environments. Here we show that for many non-animal sounds and background mixtures of animal sounds, energy in different frequency bands is coherently modulated. Co-modulation of different frequency bands in background noise facilitates the detection of tones in noise by humans, a phenomenon known as co-modulation masking release (CMR). We show that co-modulation also improves the ability of auditory-cortex neurons to detect tones in noise, and we propose that this property of auditory neurons may underlie behavioural CMR. This correspondence may represent an adaptation of the auditory system for the use of an attribute of natural sounds to facilitate real-world processing tasks.
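The co-modulation masking release effect described above can be illustrated with a small simulation: a multiband masker whose bands share a common amplitude envelope (comodulated) versus one whose bands have independent envelopes. The sketch below is a minimal NumPy illustration, not the authors' stimulus code; the band centre frequencies, envelope bandwidth, and 1-kHz target tone are illustrative assumptions.

```python
import numpy as np

fs = 16000                       # sampling rate (Hz), illustrative
dur = 0.5
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(0)

def lowpass_noise(cutoff_hz, n, fs, rng):
    """Gaussian noise low-pass filtered in the frequency domain (simple envelope generator)."""
    spec = np.fft.rfft(rng.standard_normal(n))
    spec[np.fft.rfftfreq(n, 1 / fs) > cutoff_hz] = 0.0
    env = np.fft.irfft(spec, n)
    return env - env.min() + 1e-3          # keep the envelope positive

centres = [500, 700, 1000, 1400, 2000]     # masker band centre frequencies (Hz), assumed
shared_env = lowpass_noise(10, len(t), fs, rng)

def masker(comodulated):
    bands = []
    for fc in centres:
        env = shared_env if comodulated else lowpass_noise(10, len(t), fs, rng)
        bands.append(env * np.sin(2 * np.pi * fc * t))
    return np.sum(bands, axis=0)

tone = 0.1 * np.sin(2 * np.pi * 1000 * t)  # target tone in the 1-kHz band
comod_stimulus  = masker(comodulated=True)  + tone
uncorr_stimulus = masker(comodulated=False) + tone
# In CMR experiments, listeners detect the tone at lower levels in the comodulated masker.
```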
Visser, Eelke; Zwiers, Marcel P; Kan, Cornelis C; Hoekstra, Liesbeth; van Opstal, A John; Buitelaar, Jan K
2013-11-01
Autism spectrum disorders (ASDs) are associated with auditory hyper- or hyposensitivity; atypicalities in central auditory processes, such as speech-processing and selective auditory attention; and neural connectivity deficits. We sought to investigate whether the low-level integrative processes underlying sound localization and spatial discrimination are affected in ASDs. We performed 3 behavioural experiments to probe different connecting neural pathways: 1) horizontal and vertical localization of auditory stimuli in a noisy background, 2) vertical localization of repetitive frequency sweeps and 3) discrimination of horizontally separated sound stimuli with a short onset difference (precedence effect). Ten adult participants with ASDs and 10 healthy control listeners participated in experiments 1 and 3; sample sizes for experiment 2 were 18 adults with ASDs and 19 controls. Horizontal localization was unaffected, but vertical localization performance was significantly worse in participants with ASDs. The temporal window for the precedence effect was shorter in participants with ASDs than in controls. The study was performed with adult participants and hence does not provide insight into the developmental aspects of auditory processing in individuals with ASDs. Changes in low-level auditory processing could underlie degraded performance in vertical localization, which would be in agreement with recently reported changes in the neuroanatomy of the auditory brainstem in individuals with ASDs. The results are further discussed in the context of theories about abnormal brain connectivity in individuals with ASDs.
Pre-Attentive Auditory Processing of Lexicality
ERIC Educational Resources Information Center
Jacobsen, Thomas; Horvath, Janos; Schroger, Erich; Lattner, Sonja; Widmann, Andreas; Winkler, Istvan
2004-01-01
The effects of lexicality on auditory change detection based on auditory sensory memory representations were investigated by presenting oddball sequences of repeatedly presented stimuli, while participants ignored the auditory stimuli. In a cross-linguistic study of Hungarian and German participants, stimulus sequences were composed of words that…
Zhang, Guang-Wei; Sun, Wen-Jian; Zingg, Brian; Shen, Li; He, Jufang; Xiong, Ying; Tao, Huizhong W; Zhang, Li I
2018-01-17
In the mammalian brain, auditory information is known to be processed along a central ascending pathway leading to auditory cortex (AC). Whether there exist any major pathways beyond this canonical auditory neuraxis remains unclear. In awake mice, we found that auditory responses in entorhinal cortex (EC) cannot be explained by a previously proposed relay from AC based on response properties. By combining anatomical tracing and optogenetic/pharmacological manipulations, we discovered that EC received auditory input primarily from the medial septum (MS), rather than AC. A previously uncharacterized auditory pathway was then revealed: it branched from the cochlear nucleus, and via caudal pontine reticular nucleus, pontine central gray, and MS, reached EC. Neurons along this non-canonical auditory pathway responded selectively to high-intensity broadband noise, but not pure tones. Disruption of the pathway resulted in an impairment of specifically noise-cued fear conditioning. This reticular-limbic pathway may thus function in processing aversive acoustic signals. Copyright © 2017 Elsevier Inc. All rights reserved.
Auditory Scene Analysis: An Attention Perspective
2017-01-01
Purpose This review article provides a new perspective on the role of attention in auditory scene analysis. Method A framework for understanding how attention interacts with stimulus-driven processes to facilitate task goals is presented. Previously reported data obtained through behavioral and electrophysiological measures in adults with normal hearing are summarized to demonstrate attention effects on auditory perception—from passive processes that organize unattended input to attention effects that act at different levels of the system. Data will show that attention can sharpen stream organization toward behavioral goals, identify auditory events obscured by noise, and limit passive processing capacity. Conclusions A model of attention is provided that illustrates how the auditory system performs multilevel analyses that involve interactions between stimulus-driven input and top-down processes. Overall, these studies show that (a) stream segregation occurs automatically and sets the basis for auditory event formation; (b) attention interacts with automatic processing to facilitate task goals; and (c) information about unattended sounds is not lost when selecting one organization over another. Our results support a neural model that allows multiple sound organizations to be held in memory and accessed simultaneously through a balance of automatic and task-specific processes, allowing flexibility for navigating noisy environments with competing sound sources. Presentation Video http://cred.pubs.asha.org/article.aspx?articleid=2601618 PMID:29049599
Brian Hears: Online Auditory Processing Using Vectorization Over Channels
Fontaine, Bertrand; Goodman, Dan F. M.; Benichoux, Victor; Brette, Romain
2011-01-01
The human cochlea includes about 3000 inner hair cells which filter sounds at frequencies between 20 Hz and 20 kHz. This massively parallel frequency analysis is reflected in models of auditory processing, which are often based on banks of filters. However, existing implementations do not exploit this parallelism. Here we propose algorithms to simulate these models by vectorizing computation over frequency channels, which are implemented in “Brian Hears,” a library for the spiking neural network simulator package “Brian.” This approach allows us to use high-level programming languages such as Python, because with vectorized operations, the computational cost of interpretation represents a small fraction of the total cost. This makes it possible to define and simulate complex models in a simple way, while all previous implementations were model-specific. In addition, we show that these algorithms can be naturally parallelized using graphics processing units, yielding substantial speed improvements. We demonstrate these algorithms with several state-of-the-art cochlear models, and show that they compare favorably with existing, less flexible, implementations. PMID:21811453
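The channel-vectorization idea described above can be sketched independently of Brian Hears itself: instead of looping over the roughly 3000 frequency channels in interpreted Python code, the per-sample filter update is written as a single array operation over all channels. The snippet below is a minimal NumPy illustration using a bank of one-pole filters with channel-specific coefficients; it is not the Brian Hears API, and the filter choice and channel count are simplifying assumptions.

```python
import numpy as np

fs = 44100
n_channels = 3000                                # roughly the number of inner hair cells
cf = np.geomspace(20.0, 20000.0, n_channels)     # log-spaced centre frequencies, 20 Hz - 20 kHz

# Channel-specific coefficient of a one-pole low-pass filter (a stand-in for the
# per-channel filters of a real cochlear filterbank such as a gammatone bank).
a = np.exp(-2.0 * np.pi * cf / fs)               # shape (n_channels,)

rng = np.random.default_rng(0)
sound = rng.standard_normal(int(0.05 * fs))      # 50 ms of white noise

state = np.zeros(n_channels)
output = np.empty((len(sound), n_channels))
for n, x in enumerate(sound):                    # loop over time only ...
    state = a * state + (1.0 - a) * x            # ... all channels updated in one vector operation
    output[n] = state                            # (time, channel) cochleagram-like array
```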
Human brain regions involved in recognizing environmental sounds.
Lewis, James W; Wightman, Frederic L; Brefczynski, Julie A; Phinney, Raymond E; Binder, Jeffrey R; DeYoe, Edgar A
2004-09-01
To identify the brain regions preferentially involved in environmental sound recognition (comprising portions of a putative auditory 'what' pathway), we collected functional imaging data while listeners attended to a wide range of sounds, including those produced by tools, animals, liquids and dropped objects. These recognizable sounds, in contrast to unrecognizable, temporally reversed control sounds, evoked activity in a distributed network of brain regions previously associated with semantic processing, located predominantly in the left hemisphere, but also included strong bilateral activity in posterior portions of the middle temporal gyri (pMTG). Comparisons with earlier studies suggest that these bilateral pMTG foci partially overlap cortex implicated in high-level visual processing of complex biological motion and recognition of tools and other artifacts. We propose that the pMTG foci process multimodal (or supramodal) information about objects and object-associated motion, and that this may represent 'action' knowledge that can be recruited for purposes of recognition of familiar environmental sound-sources. These data also provide a functional and anatomical explanation for the symptoms of pure auditory agnosia for environmental sounds reported in human lesion studies.
Filipe, Marisa G; Watson, Linda; Vicente, Selene G; Frota, Sónia
2018-01-01
Autism spectrum disorders (ASD) refer to a complex group of neurodevelopmental disorders causing difficulties with communication and interpersonal relationships, as well as restricted and repetitive behaviours and interests. As early identification, diagnosis, and intervention provide better long-term outcomes, early markers of ASD have gained increased research attention. This review examines evidence that auditory processing enhanced by social interest, in particular auditory preference for speech directed towards infants and young children (i.e. infant-directed speech - IDS), may be an early marker of risk for ASD. Although this review provides evidence for IDS preference as, indeed, a potential early marker of ASD, the explanation for differences in IDS processing among children with ASD versus other children remains unclear, as are the implications of these impairments for later social-communicative development. Therefore, it is crucial to explore atypicalities in IDS processing early in development and to understand whether preferential listening to specific types of speech sounds in the first years of life may help to predict the impairments in social and language development.
Schönweiler, R; Wübbelt, P; Tolloczko, R; Rose, C; Ptok, M
2000-01-01
Discriminant analysis (DA) and self-organizing feature maps (SOFM) were used to classify passively evoked auditory event-related potentials (ERP) P(1), N(1), P(2) and N(2). Responses from 16 children with severe behavioral auditory perception deficits, 16 children with marked behavioral auditory perception deficits, and 14 controls were examined. Eighteen ERP amplitude parameters were selected for examination of statistical differences between the groups. Different DA methods and SOFM configurations were trained to the values. SOFM had better classification results than DA methods. Subsequently, measures on another 37 subjects that were unknown for the trained SOFM were used to test the reliability of the system. With 10-dimensional vectors, reliable classifications were obtained that matched behavioral auditory perception deficits in 96%, implying central auditory processing disorder (CAPD). The results also support the assumption that CAPD includes a 'non-peripheral' auditory processing deficit. Copyright 2000 S. Karger AG, Basel.
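As a rough illustration of the comparison described above between discriminant analysis and a self-organizing feature map for classifying ERP amplitude parameters, the sketch below trains both on synthetic 18-dimensional feature vectors. It assumes the third-party MiniSom package as a stand-in for the SOFM implementation used in the study and scikit-learn's linear discriminant analysis for DA; the data, map size, and training settings are placeholders, not the authors' configuration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from minisom import MiniSom   # assumed stand-in for the paper's SOFM

rng = np.random.default_rng(0)
n_per_class, n_features = 50, 18          # 18 ERP amplitude parameters (placeholder data)
X = np.vstack([rng.normal(loc=mu, size=(n_per_class, n_features)) for mu in (0.0, 0.7, 1.4)])
y = np.repeat([0, 1, 2], n_per_class)     # e.g. severe deficit / marked deficit / control
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Discriminant analysis
lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
print("LDA accuracy:", lda.score(X_te, y_te))

# Self-organizing feature map: label each map node by majority vote of the
# training samples it wins, then classify test samples by their winning node.
som = MiniSom(6, 6, n_features, sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(X_tr, 1000)

node_labels = {}
for xi, yi in zip(X_tr, y_tr):
    node_labels.setdefault(som.winner(xi), []).append(yi)
node_labels = {node: np.bincount(lab).argmax() for node, lab in node_labels.items()}

pred = [node_labels.get(som.winner(xi), -1) for xi in X_te]   # -1 if the node never won in training
print("SOFM accuracy:", np.mean(np.array(pred) == y_te))
```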
Test of the neurolinguistic programming hypothesis that eye-movements relate to processing imagery.
Wertheim, E H; Habib, C; Cumming, G
1986-04-01
Bandler and Grinder's hypothesis that eye-movements reflect sensory processing was examined. 28 volunteers first memorized and then recalled visual, auditory, and kinesthetic stimuli. Changes in eye-positions during recall were videotaped and categorized by two raters into positions hypothesized by Bandler and Grinder's model to represent visual, auditory, and kinesthetic recall. Planned contrast analyses suggested that visual stimulus items, when recalled, elicited significantly more upward eye-positions and stares than auditory and kinesthetic items. Auditory and kinesthetic items, however, did not elicit more changes in eye-position hypothesized by the model to represent auditory and kinesthetic recall, respectively.
Dicke, Ulrike; Ewert, Stephan D; Dau, Torsten; Kollmeier, Birger
2007-01-01
Periodic amplitude modulations (AMs) of an acoustic stimulus are presumed to be encoded in temporal activity patterns of neurons in the cochlear nucleus. Physiological recordings indicate that this temporal AM code is transformed into a rate-based periodicity code along the ascending auditory pathway. The present study suggests a neural circuit for the transformation from the temporal to the rate-based code. Due to the neural connectivity of the circuit, bandpass shaped rate modulation transfer functions are obtained that correspond to recorded functions of inferior colliculus (IC) neurons. In contrast to previous modeling studies, the present circuit does not employ a continuously changing temporal parameter to obtain different best modulation frequencies (BMFs) of the IC bandpass units. Instead, different BMFs are yielded from varying the number of input units projecting onto different bandpass units. In order to investigate the compatibility of the neural circuit with a linear modulation filterbank analysis as proposed in psychophysical studies, complex stimuli such as tones modulated by the sum of two sinusoids, narrowband noise, and iterated rippled noise were processed by the model. The model accounts for the encoding of AM depth over a large dynamic range and for modulation frequency selective processing of complex sounds.
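A simple way to see the linear modulation filterbank analysis mentioned above is to extract the temporal envelope of an amplitude-modulated tone and pass it through a bank of band-pass filters centred at different modulation frequencies; the channel with the strongest output indicates the dominant modulation rate. The sketch below is a minimal SciPy/NumPy illustration under these assumptions, not the neural circuit model of the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 16000
t = np.arange(int(fs * 1.0)) / fs
fm = 32.0                                                 # modulation frequency of the test tone (Hz)
stimulus = (1 + 0.8 * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * 1000 * t)

envelope = np.abs(hilbert(stimulus))[::16]                # Hilbert envelope, crudely decimated
fs_env = fs // 16                                         # envelope sampling rate (1 kHz)

mod_cfs = [4, 8, 16, 32, 64, 128]                         # modulation-filter centre frequencies (Hz)
rms_out = []
for cf in mod_cfs:
    # One-octave-wide band-pass filter around each modulation centre frequency
    sos = butter(2, [cf / np.sqrt(2), cf * np.sqrt(2)], btype="bandpass", fs=fs_env, output="sos")
    rms_out.append(np.sqrt(np.mean(sosfiltfilt(sos, envelope) ** 2)))

print("dominant modulation channel:", mod_cfs[int(np.argmax(rms_out))], "Hz")   # expected near fm
```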
Analysis of stimulus-related activity in rat auditory cortex using complex spectral coefficients
Krause, Bryan M.
2013-01-01
The neural mechanisms of sensory responses recorded from the scalp or cortical surface remain controversial. Evoked vs. induced response components (i.e., changes in mean vs. variance) are associated with bottom-up vs. top-down processing, but trial-by-trial response variability can confound this interpretation. Phase reset of ongoing oscillations has also been postulated to contribute to sensory responses. In this article, we present evidence that responses under passive listening conditions are dominated by variable evoked response components. We measured the mean, variance, and phase of complex time-frequency coefficients of epidurally recorded responses to acoustic stimuli in rats. During the stimulus, changes in mean, variance, and phase tended to co-occur. After the stimulus, there was a small, low-frequency offset response in the mean and modest, prolonged desynchronization in the alpha band. Simulations showed that trial-by-trial variability in the mean can account for most of the variance and phase changes observed during the stimulus. This variability was state dependent, with smallest variability during periods of greatest arousal. Our data suggest that cortical responses to auditory stimuli reflect variable inputs to the cortical network. These analyses suggest that caution should be exercised when interpreting variance and phase changes in terms of top-down cortical processing. PMID:23657279
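The quantities analyzed above (the across-trial mean, variance, and phase consistency of complex time-frequency coefficients) can be computed directly from a short-time Fourier transform of epoched data. The following is a minimal SciPy/NumPy sketch on synthetic trials; the array shapes, the simulated evoked component, and the STFT settings are assumptions for illustration, not the recording parameters of the study.

```python
import numpy as np
from scipy.signal import stft

fs = 1000
n_trials, n_samples = 100, 1000
rng = np.random.default_rng(0)
t = np.arange(n_samples) / fs

# Synthetic epochs: background noise plus a 10 Hz "evoked" burst between 0.2 and 0.4 s
evoked = np.where((t > 0.2) & (t < 0.4), np.sin(2 * np.pi * 10 * t), 0.0)
trials = rng.standard_normal((n_trials, n_samples)) + evoked

# Complex spectral coefficients per trial: shape (n_trials, n_freqs, n_times)
f, times, Z = stft(trials, fs=fs, nperseg=256, noverlap=192, axis=-1)

mean_coeff = Z.mean(axis=0)                                    # evoked (mean) component
var_coeff  = Z.var(axis=0)                                     # trial-to-trial (induced) variance
itpc       = np.abs(np.exp(1j * np.angle(Z)).mean(axis=0))     # inter-trial phase consistency

print(mean_coeff.shape, var_coeff.shape, itpc.shape)           # (n_freqs, n_times) each
```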
Is auditory perceptual timing a core deficit of developmental coordination disorder?
Trainor, Laurel J; Chang, Andrew; Cairney, John; Li, Yao-Chuen
2018-05-09
Time is an essential dimension for perceiving and processing auditory events, and for planning and producing motor behaviors. Developmental coordination disorder (DCD) is a neurodevelopmental disorder affecting 5-6% of children that is characterized by deficits in motor skills. Studies show that children with DCD have motor timing and sensorimotor timing deficits. We suggest that auditory perceptual timing deficits may also be core characteristics of DCD. This idea is consistent with evidence from several domains, (1) motor-related brain regions are often involved in auditory timing process; (2) DCD has high comorbidity with dyslexia and attention deficit hyperactivity, which are known to be associated with auditory timing deficits; (3) a few studies report deficits in auditory-motor timing among children with DCD; and (4) our preliminary behavioral and neuroimaging results show that children with DCD at age 6 and 7 have deficits in auditory time discrimination compared to typically developing children. We propose directions for investigating auditory perceptual timing processing in DCD that use various behavioral and neuroimaging approaches. From a clinical perspective, research findings can potentially benefit our understanding of the etiology of DCD, identify early biomarkers of DCD, and can be used to develop evidence-based interventions for DCD involving auditory-motor training. © 2018 The Authors. Annals of the New York Academy of Sciences published by Wiley Periodicals, Inc. on behalf of The New York Academy of Sciences.
Auditory temporal processing in healthy aging: a magnetoencephalographic study
Sörös, Peter; Teismann, Inga K; Manemann, Elisabeth; Lütkenhöner, Bernd
2009-01-01
Background Impaired speech perception is one of the major sequelae of aging. In addition to peripheral hearing loss, central deficits of auditory processing are supposed to contribute to the deterioration of speech perception in older individuals. To test the hypothesis that auditory temporal processing is compromised in aging, auditory evoked magnetic fields were recorded during stimulation with sequences of 4 rapidly recurring speech sounds in 28 healthy individuals aged 20 – 78 years. Results The decrement of the N1m amplitude during rapid auditory stimulation was not significantly different between older and younger adults. The amplitudes of the middle-latency P1m wave and of the long-latency N1m, however, were significantly larger in older than in younger participants. Conclusion The results of the present study do not provide evidence for the hypothesis that auditory temporal processing, as measured by the decrement (short-term habituation) of the major auditory evoked component, the N1m wave, is impaired in aging. The differences between these magnetoencephalographic findings and previously published behavioral data might be explained by differences in the experimental setting between the present study and previous behavioral studies, in terms of speech rate, attention, and masking noise. Significantly larger amplitudes of the P1m and N1m waves suggest that the cortical processing of individual sounds differs between younger and older individuals. This result adds to the growing evidence that brain functions, such as sensory processing, motor control and cognitive processing, can change during healthy aging, presumably due to experience-dependent neuroplastic mechanisms. PMID:19351410
Włodarczyk, Elżbieta; Szkiełkowska, Agata; Skarżyński, Henryk; Piłka, Adam
2011-01-01
To assess the effectiveness of auditory training in children with dyslalia and central auditory processing disorders. The material consisted of 50 children aged 7-9 years. Children with articulation disorders stayed under long-term speech therapy care in the Auditory and Phoniatrics Clinic. All children were examined by a laryngologist and a phoniatrician. Assessment included tonal and impedance audiometry and speech therapists' and psychologist's consultations. Additionally, a set of electrophysiological examinations was performed - registration of the N2, P2, and P300 waves - together with a psychoacoustic test of central auditory functions, the frequency pattern test (FPT). Next, children took part in regular auditory training and attended speech therapy. Speech assessment followed treatment and therapy; psychoacoustic tests were repeated and P300 cortical potentials were recorded again. After that, statistical analyses were performed. Analyses revealed that application of auditory training in patients with dyslalia and other central auditory disorders is very efficient. Auditory training may be a very efficient therapy supporting speech therapy in children suffering from dyslalia coexisting with articulation and central auditory disorders and in children with educational problems of audiogenic origin. Copyright © 2011 Polish Otolaryngology Society. Published by Elsevier Urban & Partner (Poland). All rights reserved.
Aging effects on functional auditory and visual processing using fMRI with variable sensory loading.
Cliff, Michael; Joyce, Dan W; Lamar, Melissa; Dannhauser, Thomas; Tracy, Derek K; Shergill, Sukhwinder S
2013-05-01
Traditionally, studies investigating the functional implications of age-related structural brain alterations have focused on higher cognitive processes; by increasing stimulus load, these studies assess behavioral and neurophysiological performance. In order to understand age-related changes in these higher cognitive processes, it is crucial to examine changes in visual and auditory processes that are the gateways to higher cognitive functions. This study provides evidence for age-related functional decline in visual and auditory processing, and regional alterations in functional brain processing, using non-invasive neuroimaging. Using functional magnetic resonance imaging (fMRI), younger (n=11; mean age=31) and older (n=10; mean age=68) adults were imaged while observing flashing checkerboard images (passive visual stimuli) and hearing word lists (passive auditory stimuli) across varying stimuli presentation rates. Younger adults showed greater overall levels of temporal and occipital cortical activation than older adults for both auditory and visual stimuli. The relative change in activity as a function of stimulus presentation rate showed differences between young and older participants. In visual cortex, the older group showed a decrease in fMRI blood oxygen level dependent (BOLD) signal magnitude as stimulus frequency increased, whereas the younger group showed a linear increase. In auditory cortex, the younger group showed a relative increase as a function of word presentation rate, while older participants showed a relatively stable magnitude of fMRI BOLD response across all rates. When analyzing participants across all ages, only the auditory cortical activation showed a continuous, monotonically decreasing BOLD signal magnitude as a function of age. Our preliminary findings show an age-related decline in demand-related, passive early sensory processing. As stimulus demand increases, visual and auditory cortex do not show increases in activity in older compared to younger people. This may negatively impact on the fidelity of information available to higher cognitive processing. Such evidence may inform future studies focused on cognitive decline in aging. Copyright © 2012 Elsevier Ltd. All rights reserved.
Stavrinos, Georgios; Iliadou, Vassiliki-Maria; Edwards, Lindsey; Sirimanna, Tony; Bamiou, Doris-Eva
2018-01-01
Measures of attention have been found to correlate with specific auditory processing tests in samples of children suspected of Auditory Processing Disorder (APD), but these relationships have not been adequately investigated. Despite evidence linking auditory attention and deficits/symptoms of APD, measures of attention are not routinely used in APD diagnostic protocols. The aim of the study was to examine the relationship between auditory and visual attention tests and auditory processing tests in children with APD and to assess whether a proposed diagnostic protocol for APD, including measures of attention, could provide useful information for APD management. A pilot study including 27 children, aged 7–11 years, referred for APD assessment was conducted. The validated Test of Everyday Attention for Children, with visual and auditory attention tasks, the Listening in Spatialized Noise-Sentences test, the Children's Communication Checklist questionnaire and tests from a standard APD diagnostic test battery were administered. Pearson's partial correlation analysis examining the relationship between these tests and Cochran's Q test analysis comparing proportions of diagnosis under each proposed battery were conducted. Divided auditory and divided auditory-visual attention strongly correlated with the dichotic digits test, r = 0.68, p < 0.05, and r = 0.76, p = 0.01, respectively, in a sample of 20 children with APD diagnosis. The standard APD battery identified a larger proportion of participants as having APD than an attention battery identified as having Attention Deficits (ADs). The proposed APD battery excluding AD cases did not have a significantly different diagnosis proportion than the standard APD battery. Finally, the newly proposed diagnostic battery, identifying an inattentive subtype of APD, identified five children who would otherwise have been considered as not having ADs. The findings show that a subgroup of children with APD demonstrates underlying sustained and divided attention deficits. Attention deficits in children with APD appear to be centred around the auditory modality but further examination of types of attention in both modalities is required. Revising diagnostic criteria to incorporate attention tests and the inattentive type of APD in the test battery provides additional useful data to clinicians to ensure careful interpretation of APD assessments. PMID:29441033
Perceptual Plasticity for Auditory Object Recognition
Heald, Shannon L. M.; Van Hedger, Stephen C.; Nusbaum, Howard C.
2017-01-01
In our auditory environment, we rarely experience the exact acoustic waveform twice. This is especially true for communicative signals that have meaning for listeners. In speech and music, the acoustic signal changes as a function of the talker (or instrument), speaking (or playing) rate, and room acoustics, to name a few factors. Yet, despite this acoustic variability, we are able to recognize a sentence or melody as the same across various kinds of acoustic inputs and determine meaning based on listening goals, expectations, context, and experience. The recognition process relates acoustic signals to prior experience despite variability in signal-relevant and signal-irrelevant acoustic properties, some of which could be considered as “noise” in service of a recognition goal. However, some acoustic variability, if systematic, is lawful and can be exploited by listeners to aid in recognition. Perceivable changes in systematic variability can herald a need for listeners to reorganize perception and reorient their attention to more immediately signal-relevant cues. This view is not incorporated currently in many extant theories of auditory perception, which traditionally reduce psychological or neural representations of perceptual objects and the processes that act on them to static entities. While this reduction is likely done for the sake of empirical tractability, such a reduction may seriously distort the perceptual process to be modeled. We argue that perceptual representations, as well as the processes underlying perception, are dynamically determined by an interaction between the uncertainty of the auditory signal and constraints of context. This suggests that the process of auditory recognition is highly context-dependent in that the identity of a given auditory object may be intrinsically tied to its preceding context. To argue for the flexible neural and psychological updating of sound-to-meaning mappings across speech and music, we draw upon examples of perceptual categories that are thought to be highly stable. This framework suggests that the process of auditory recognition cannot be divorced from the short-term context in which an auditory object is presented. Implications for auditory category acquisition and extant models of auditory perception, both cognitive and neural, are discussed. PMID:28588524
Modulating Human Auditory Processing by Transcranial Electrical Stimulation
Heimrath, Kai; Fiene, Marina; Rufener, Katharina S.; Zaehle, Tino
2016-01-01
Transcranial electrical stimulation (tES) has become a valuable research tool for the investigation of neurophysiological processes underlying human action and cognition. In recent years, striking evidence for the neuromodulatory effects of transcranial direct current stimulation, transcranial alternating current stimulation, and transcranial random noise stimulation has emerged. While the wealth of knowledge has been gained about tES in the motor domain and, to a lesser extent, about its ability to modulate human cognition, surprisingly little is known about its impact on perceptual processing, particularly in the auditory domain. Moreover, while only a few studies systematically investigated the impact of auditory tES, it has already been applied in a large number of clinical trials, leading to a remarkable imbalance between basic and clinical research on auditory tES. Here, we review the state of the art of tES application in the auditory domain focussing on the impact of neuromodulation on acoustic perception and its potential for clinical application in the treatment of auditory related disorders. PMID:27013969
Multimodal lexical processing in auditory cortex is literacy skill dependent.
McNorgan, Chris; Awati, Neha; Desroches, Amy S; Booth, James R
2014-09-01
Literacy is a uniquely human cross-modal cognitive process wherein visual orthographic representations become associated with auditory phonological representations through experience. Developmental studies provide insight into how experience-dependent changes in brain organization influence phonological processing as a function of literacy. Previous investigations show a synchrony-dependent influence of letter presentation on individual phoneme processing in superior temporal sulcus; others demonstrate recruitment of primary and associative auditory cortex during cross-modal processing. We sought to determine whether brain regions supporting phonological processing of larger lexical units (monosyllabic words) over larger time windows is sensitive to cross-modal information, and whether such effects are literacy dependent. Twenty-two children (age 8-14 years) made rhyming judgments for sequentially presented word and pseudoword pairs presented either unimodally (auditory- or visual-only) or cross-modally (audiovisual). Regression analyses examined the relationship between literacy and congruency effects (overlapping orthography and phonology vs. overlapping phonology-only). We extend previous findings by showing that higher literacy is correlated with greater congruency effects in auditory cortex (i.e., planum temporale) only for cross-modal processing. These skill effects were specific to known words and occurred over a large time window, suggesting that multimodal integration in posterior auditory cortex is critical for fluent reading. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Spatio-temporal Dynamics of Audiovisual Speech Processing
Bernstein, Lynne E.; Auer, Edward T.; Wagner, Michael; Ponton, Curtis W.
2007-01-01
The cortical processing of auditory-alone, visual-alone, and audiovisual speech information is temporally and spatially distributed, and functional magnetic resonance imaging (fMRI) cannot adequately resolve its temporal dynamics. In order to investigate a hypothesized spatio-temporal organization for audiovisual speech processing circuits, event-related potentials (ERPs) were recorded using electroencephalography (EEG). Stimuli were congruent audiovisual /bα/, incongruent auditory /bα/ synchronized with visual /gα/, auditory-only /bα/, and visual-only /bα/ and /gα/. Current density reconstructions (CDRs) of the ERP data were computed across the latency interval of 50-250 milliseconds. The CDRs demonstrated complex spatio-temporal activation patterns that differed across stimulus conditions. The hypothesized circuit that was investigated here comprised initial integration of audiovisual speech by the middle superior temporal sulcus (STS), followed by recruitment of the intraparietal sulcus (IPS), followed by activation of Broca's area (Miller and d'Esposito, 2005). The importance of spatio-temporally sensitive measures in evaluating processing pathways was demonstrated. Results showed, strikingly, early (< 100 msec) and simultaneous activations in areas of the supramarginal and angular gyrus (SMG/AG), the IPS, the inferior frontal gyrus, and the dorsolateral prefrontal cortex. Also, emergent left hemisphere SMG/AG activation, not predicted based on the unisensory stimulus conditions was observed at approximately 160 to 220 msec. The STS was neither the earliest nor most prominent activation site, although it is frequently considered the sine qua non of audiovisual speech integration. As discussed here, the relatively late activity of the SMG/AG solely under audiovisual conditions is a possible candidate audiovisual speech integration response. PMID:17920933
Yogev-Seligmann, Galit; Oren, Noga; Ash, Elissa L; Hendler, Talma; Giladi, Nir; Lerner, Yulia
2016-05-03
The ability to store, integrate, and manipulate information declines with aging. These changes occur earlier, faster, and to a greater degree as a result of neurodegeneration. One of the most common and early characteristics of cognitive decline is difficulty with comprehension of information. The neural mechanisms underlying this breakdown of information processing are poorly understood. Using functional MRI and natural stimuli (e.g., stories), we mapped the neural mechanisms by which the human brain accumulates and processes information with increasing duration and complexity in participants with amnestic mild cognitive impairment (aMCI) and healthy older adults. To explore the mechanisms of information processing, we measured the reliability of brain responses elicited by listening to different versions of a narrated story created by segmenting the story into words, sentences, and paragraphs and then scrambling the segments. Comparing healthy older adults and participants with aMCI revealed that in both groups, all types of stimuli similarly recruited primary auditory areas. However, prominent differences between groups were found at the level of processing long and complex stimuli. In healthy older adults, parietal and frontal regions demonstrated highly synchronized responses in both the paragraph and full story conditions, as has been previously reported in young adults. Participants with aMCI, however, exhibited a robust functional shift of long time scale processing to the pre- and post-central sulci. Our results suggest that participants with aMCI experienced a functional shift of higher order auditory information processing, possibly reflecting a functional response to concurrent or impending neuronal or synaptic loss. This observation might assist in understanding mechanisms of cognitive decline in aMCI.
Meltzer, Benjamin; Reichenbach, Chagit S.; Braiman, Chananel; Schiff, Nicholas D.; Hudspeth, A. J.; Reichenbach, Tobias
2015-01-01
The brain’s analyses of speech and music share a range of neural resources and mechanisms. Music displays a temporal structure of complexity similar to that of speech, unfolds over comparable timescales, and elicits cognitive demands in tasks involving comprehension and attention. During speech processing, synchronized neural activity of the cerebral cortex in the delta and theta frequency bands tracks the envelope of a speech signal, and this neural activity is modulated by high-level cortical functions such as speech comprehension and attention. It remains unclear, however, whether the cortex also responds to the natural rhythmic structure of music and how the response, if present, is influenced by higher cognitive processes. Here we employ electroencephalography to show that the cortex responds to the beat of music and that this steady-state response reflects musical comprehension and attention. We show that the cortical response to the beat is weaker when subjects listen to a familiar tune than when they listen to an unfamiliar, non-sensical musical piece. Furthermore, we show that in a task of intermodal attention there is a larger neural response at the beat frequency when subjects attend to a musical stimulus than when they ignore the auditory signal and instead focus on a visual one. Our findings may be applied in clinical assessments of auditory processing and music cognition as well as in the construction of auditory brain-machine interfaces. PMID:26300760
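The steady-state measure described above, the cortical response at the beat frequency, can be quantified by Fourier-transforming the recorded signal and comparing the amplitude in the bin at the beat rate with neighbouring bins. A minimal NumPy sketch on a synthetic signal follows; the 2.4 Hz beat rate, epoch length, sampling rate, and noise level are illustrative assumptions rather than the parameters of the study.

```python
import numpy as np

fs = 250                                   # EEG sampling rate (Hz), assumed
beat_hz = 2.4                              # beat frequency of the musical stimulus, assumed
dur = 60.0                                 # one-minute epoch
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(0)

# Synthetic "EEG": noise plus a small oscillation locked to the beat
eeg = rng.standard_normal(len(t)) + 0.2 * np.sin(2 * np.pi * beat_hz * t)

spectrum = np.abs(np.fft.rfft(eeg)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)

beat_bin = np.argmin(np.abs(freqs - beat_hz))
neighbours = np.r_[beat_bin - 5:beat_bin - 1, beat_bin + 2:beat_bin + 6]   # nearby noise bins
snr = spectrum[beat_bin] / spectrum[neighbours].mean()
print(f"amplitude at {freqs[beat_bin]:.2f} Hz, SNR over neighbouring bins: {snr:.1f}")
```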
Absence of both auditory evoked potentials and auditory percepts dependent on timing cues.
Starr, A; McPherson, D; Patterson, J; Don, M; Luxford, W; Shannon, R; Sininger, Y; Tonakawa, L; Waring, M
1991-06-01
An 11-yr-old girl had an absence of sensory components of auditory evoked potentials (brainstem, middle and long-latency) to click and tone burst stimuli that she could clearly hear. Psychoacoustic tests revealed a marked impairment of those auditory perceptions dependent on temporal cues, that is, lateralization of binaural clicks, change of binaural masked threshold with changes in signal phase, binaural beats, detection of paired monaural clicks, monaural detection of a silent gap in a sound, and monaural threshold elevation for short duration tones. In contrast, auditory functions reflecting intensity or frequency discriminations (difference limens) were only minimally impaired. Pure tone audiometry showed a moderate (50 dB) bilateral hearing loss with a disproportionate severe loss of word intelligibility. Those auditory evoked potentials that were preserved included (1) cochlear microphonics reflecting hair cell activity; (2) cortical sustained potentials reflecting processing of slowly changing signals; and (3) long-latency cognitive components (P300, processing negativity) reflecting endogenous auditory cognitive processes. Both the evoked potential and perceptual deficits are attributed to changes in temporal encoding of acoustic signals perhaps occurring at the synapse between hair cell and eighth nerve dendrites. The results from this patient are discussed in relation to previously published cases with absent auditory evoked potentials and preserved hearing.
Neural correlates of short-term memory in primate auditory cortex
Bigelow, James; Rossi, Breein; Poremba, Amy
2014-01-01
Behaviorally-relevant sounds such as conspecific vocalizations are often available for only a brief amount of time; thus, goal-directed behavior frequently depends on auditory short-term memory (STM). Despite its ecological significance, the neural processes underlying auditory STM remain poorly understood. To investigate the role of the auditory cortex in STM, single- and multi-unit activity was recorded from the primary auditory cortex (A1) of two monkeys performing an auditory STM task using simple and complex sounds. Each trial consisted of a sample and test stimulus separated by a 5-s retention interval. A brief wait period followed the test stimulus, after which subjects pressed a button if the sounds were identical (match trials) or withheld button presses if they were different (non-match trials). A number of units exhibited significant changes in firing rate for portions of the retention interval, although these changes were rarely sustained. Instead, they were most frequently observed during the early and late portions of the retention interval, with inhibition being observed more frequently than excitation. At the population level, responses elicited on match trials were briefly suppressed early in the sound period relative to non-match trials. However, during the latter portion of the sound, firing rates increased significantly for match trials and remained elevated throughout the wait period. Related patterns of activity were observed in prior experiments from our lab in the dorsal temporal pole (dTP) and prefrontal cortex (PFC) of the same animals. The data suggest that early match suppression occurs in both A1 and the dTP, whereas later match enhancement occurs first in the PFC, followed by A1 and later in dTP. Because match enhancement occurs first in the PFC, we speculate that enhancement observed in A1 and dTP may reflect top–down feedback. Overall, our findings suggest that A1 forms part of the larger neural system recruited during auditory STM. PMID:25177266
Johnsen, Erik; Hugdahl, Kenneth; Fusar-Poli, Paolo; Kroken, Rune A; Kompus, Kristiina
2013-01-01
Experiencing auditory verbal hallucinations is a prominent symptom in schizophrenia that also occurs in subjects at enhanced risk for psychosis and in the general population. Drug treatment of auditory hallucinations is challenging, because the current understanding is limited with respect to the neural mechanisms involved, as well as how CNS drugs, such as antipsychotics, influence the subjective experience and neurophysiology of hallucinations. In this article, the authors review studies of the effect of antipsychotic medication on brain activation as measured with functional MRI in patients with auditory verbal hallucinations. First, the authors examine the neural correlates of ongoing auditory hallucinations. Then, the authors critically discuss studies addressing the antipsychotic effect on the neural correlates of complex cognitive tasks. Current evidence suggests that blood oxygen level-dependent effects of antipsychotic drugs reflect specific, regional effects, but studies on the neuropharmacology of auditory hallucinations are scarce. Future directions for pharmacological neuroimaging of auditory hallucinations are discussed.
ERIC Educational Resources Information Center
Chung, Wei-Lun; Jarmulowicz, Linda; Bidelman, Gavin M.
2017-01-01
This study examined language-specific links among auditory processing, linguistic prosody awareness, and Mandarin (L1) and English (L2) word reading in 61 Mandarin-speaking, English-learning children. Three auditory discrimination abilities were measured: pitch contour, pitch interval, and rise time (rate of intensity change at tone onset).…
Teaching Turkish as a Foreign Language: Extrapolating from Experimental Psychology
ERIC Educational Resources Information Center
Erdener, Dogu
2017-01-01
Speech perception is beyond the auditory domain and a multimodal process, specifically, an auditory-visual one--we process lip and face movements during speech. In this paper, the findings in cross-language studies of auditory-visual speech perception in the past two decades are interpreted to the applied domain of second language (L2)…
Utilizing Oral-Motor Feedback in Auditory Conceptualization.
ERIC Educational Resources Information Center
Howard, Marilyn
The Auditory Discrimination in Depth (ADD) program, an oral-motor approach to beginning reading instruction, trains first grade children in auditory skills by a process in which language and oral-motor feedback are used to integrate auditory properties with visual properties. This emphasis of the ADD program makes the child's perceptual…
Intertrial auditory neural stability supports beat synchronization in preschoolers
Carr, Kali Woodruff; Tierney, Adam; White-Schwoch, Travis; Kraus, Nina
2016-01-01
The ability to synchronize motor movements along with an auditory beat places stringent demands on the temporal processing and sensorimotor integration capabilities of the nervous system. Links between millisecond-level precision of auditory processing and the consistency of sensorimotor beat synchronization implicate fine auditory neural timing as a mechanism for forming stable internal representations of, and behavioral reactions to, sound. Here, for the first time, we demonstrate a systematic relationship between consistency of beat synchronization and trial-by-trial stability of subcortical speech processing in preschoolers (ages 3 and 4 years old). We conclude that beat synchronization might provide a useful window into millisecond-level neural precision for encoding sound in early childhood, when speech processing is especially important for language acquisition and development. PMID:26760457
NASA Astrophysics Data System (ADS)
Mozaffarilegha, Marjan; Esteki, Ali; Ahadi, Mohsen; Nazeri, Ahmadreza
The speech-evoked auditory brainstem response (sABR) shows how complex sounds such as speech and music are processed in the auditory system. Speech-ABR could be used to evaluate particular impairments and improvements in the auditory processing system. Many researchers have used linear approaches to characterize different components of the sABR signal, whereas nonlinear techniques have been applied less commonly. The primary aim of the present study is to examine the underlying dynamics of normal sABR signals. The secondary goal is to evaluate whether some chaotic features exist in this signal. We present a methodology for determining various components of sABR signals by performing Ensemble Empirical Mode Decomposition (EEMD) to obtain the intrinsic mode functions (IMFs). Then, composite multiscale entropy (CMSE), the largest Lyapunov exponent (LLE) and deterministic nonlinear prediction are computed for each extracted IMF. EEMD decomposes the sABR signal into five modes and a residue. The CMSE results for sABR signals obtained from 40 healthy people showed that the 1st and 2nd IMFs were similar to white noise, IMF-3 to a synthetic chaotic time series, and the 4th and 5th IMFs to a sine waveform. LLE analysis showed positive values for the 3rd IMF. Moreover, the 1st and 2nd IMFs showed overlap with surrogate data, whereas the 3rd, 4th and 5th IMFs showed no overlap with the corresponding surrogate data. These results indicate the presence of noisy, chaotic and deterministic components in the signal, corresponding respectively to the 1st and 2nd IMFs, IMF-3, and the 4th and 5th IMFs. While these findings provide supportive evidence of the chaos conjecture for the 3rd IMF, they do not confirm any such claims. However, they provide a first step towards an understanding of the nonlinear behavior of auditory system dynamics at the brainstem level.
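To make the composite multiscale entropy step described above concrete, the sketch below coarse-grains a signal at each scale over every starting offset (which is what makes the measure "composite") and averages the sample entropies of the coarse-grained series. It is a generic NumPy implementation applied to a synthetic placeholder signal, not the authors' pipeline; in their analysis it would be applied to each IMF obtained from the EEMD decomposition.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(m, r), with r expressed as a fraction of the series' standard deviation."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        return (np.sum(d <= r) - len(templates)) / 2      # unordered pairs, self-matches excluded
    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def composite_multiscale_entropy(x, scale):
    """Average SampEn over all coarse-grained series obtained at the given scale."""
    entropies = []
    for offset in range(scale):
        n = (len(x) - offset) // scale
        coarse = x[offset:offset + n * scale].reshape(n, scale).mean(axis=1)
        entropies.append(sample_entropy(coarse))
    return float(np.mean(entropies))

rng = np.random.default_rng(0)
signal = rng.standard_normal(1000)            # placeholder for one IMF of an sABR recording
print([round(composite_multiscale_entropy(signal, s), 2) for s in (1, 2, 3, 4, 5)])
```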
Kantrowitz, J T; Hoptman, M J; Leitman, D I; Silipo, G; Javitt, D C
2014-01-01
Intact sarcasm perception is a crucial component of social cognition and mentalizing (the ability to understand the mental state of oneself and others). In sarcasm, tone of voice is used to negate the literal meaning of an utterance. In particular, changes in pitch are used to distinguish between sincere and sarcastic utterances. Schizophrenia patients show well-replicated deficits in auditory function and functional connectivity (FC) within and between auditory cortical regions. In this study we investigated the contributions of auditory deficits to sarcasm perception in schizophrenia. Auditory measures including pitch processing, auditory emotion recognition (AER) and sarcasm detection were obtained from 76 patients with schizophrenia/schizo-affective disorder and 72 controls. Resting-state FC (rsFC) was obtained from a subsample and was analyzed using seeds placed in both auditory cortex and meta-analysis-defined core-mentalizing regions relative to auditory performance. Patients showed large effect-size deficits across auditory measures. Sarcasm deficits correlated significantly with general functioning and impaired pitch processing both across groups and within the patient group alone. Patients also showed reduced sensitivity to alterations in mean pitch and variability. For patients, sarcasm discrimination correlated exclusively with the level of rsFC within primary auditory regions whereas for controls, correlations were observed exclusively within core-mentalizing regions (the right posterior superior temporal gyrus, anterior superior temporal sulcus and insula, and left posterior medial temporal gyrus). These findings confirm the contribution of auditory deficits to theory of mind (ToM) impairments in schizophrenia, and demonstrate that FC within auditory, but not core-mentalizing, regions is rate limiting with respect to sarcasm detection in schizophrenia.
Auditory Task Irrelevance: A Basis for Inattentional Deafness
Scheer, Menja; Bülthoff, Heinrich H.; Chuang, Lewis L.
2018-01-01
Objective This study investigates the neural basis of inattentional deafness, which could result from task irrelevance in the auditory modality. Background Humans can fail to respond to auditory alarms under high workload situations. This failure, termed inattentional deafness, is often attributed to high workload in the visual modality, which reduces one’s capacity for information processing. Besides this, our capacity for processing auditory information could also be selectively diminished if there is no obvious task relevance in the auditory channel. This could be another contributing factor given the rarity of auditory warnings. Method Forty-eight participants performed a visuomotor tracking task while auditory stimuli were presented: a frequent pure tone, an infrequent pure tone, and infrequent environmental sounds. Participants were required either to respond to the presentation of the infrequent pure tone (auditory task-relevant) or not (auditory task-irrelevant). We recorded and compared the event-related potentials (ERPs) that were generated by environmental sounds, which were always task-irrelevant for both groups. These ERPs served as an index for our participants’ awareness of the task-irrelevant auditory scene. Results Manipulation of auditory task relevance influenced the brain’s response to task-irrelevant environmental sounds. Specifically, the late novelty-P3 to irrelevant environmental sounds, which underlies working memory updating, was found to be selectively enhanced by auditory task relevance independent of visuomotor workload. Conclusion Task irrelevance in the auditory modality selectively reduces our brain’s responses to unexpected and irrelevant sounds regardless of visuomotor workload. Application Presenting relevant auditory information more often could mitigate the risk of inattentional deafness. PMID:29578754
1984-08-01
This work reviews the areas of monaural and binaural signal detection, auditory discrimination and localization, and reaction times to auditory displays, pertaining to the major areas of auditory processing in humans.
The right inferior frontal gyrus processes nested non-local dependencies in music.
Cheung, Vincent K M; Meyer, Lars; Friederici, Angela D; Koelsch, Stefan
2018-02-28
Complex auditory sequences known as music have often been described as hierarchically structured. This permits the existence of non-local dependencies, which relate elements of a sequence beyond their temporal sequential order. Previous studies in music have reported differential activity in the inferior frontal gyrus (IFG) when comparing regular and irregular chord-transitions based on theories in Western tonal harmony. However, it is unclear if the observed activity reflects the interpretation of hierarchical structure as the effects are confounded by local irregularity. Using functional magnetic resonance imaging (fMRI), we found that violations to non-local dependencies in nested sequences of three-tone musical motifs in musicians elicited increased activity in the right IFG. This is in contrast to similar studies in language which typically report the left IFG in processing grammatical syntax. Effects of increasing auditory working memory demands are moreover reflected by distributed activity in frontal and parietal regions. Our study therefore demonstrates the role of the right IFG in processing non-local dependencies in music, and suggests that hierarchical processing in different cognitive domains relies on similar mechanisms that are subserved by domain-selective neuronal subpopulations.