Auditory Processing of Amplitude Envelope Rise Time in Adults Diagnosed with Developmental Dyslexia
ERIC Educational Resources Information Center
Pasquini, Elisabeth S.; Corriveau, Kathleen H.; Goswami, Usha
2007-01-01
Studies of basic (nonspeech) auditory processing in adults thought to have developmental dyslexia have yielded a variety of data. Yet there has been little consensus regarding the explanatory value of auditory processing in accounting for reading difficulties. Recently, however, a number of studies of basic auditory processing in children with…
ERIC Educational Resources Information Center
Kuppen, Sarah; Huss, Martina; Fosker, Tim; Fegan, Natasha; Goswami, Usha
2011-01-01
We explore the relationships between basic auditory processing, phonological awareness, vocabulary, and word reading in a sample of 95 children: 55 typically developing children and 40 children with low IQ. All children received nonspeech auditory processing tasks, phonological processing and literacy measures, and a receptive vocabulary task.…
Basic Auditory Processing and Developmental Dyslexia in Chinese
ERIC Educational Resources Information Center
Wang, Hsiao-Lan Sharon; Huss, Martina; Hamalainen, Jarmo A.; Goswami, Usha
2012-01-01
The present study explores the relationship between basic auditory processing of sound rise time, frequency, duration and intensity, phonological skills (onset-rime and tone awareness, sound blending, RAN, and phonological memory) and reading disability in Chinese. A series of psychometric, literacy, phonological, auditory, and character…
Auditory temporal processing skills in musicians with dyslexia.
Bishop-Liebler, Paula; Welch, Graham; Huss, Martina; Thomson, Jennifer M; Goswami, Usha
2014-08-01
The core cognitive difficulty in developmental dyslexia involves phonological processing, but adults and children with dyslexia also have sensory impairments. Impairments in basic auditory processing show particular links with phonological impairments, and recent studies with dyslexic children across languages reveal a relationship between auditory temporal processing and sensitivity to rhythmic timing and speech rhythm. As rhythm is explicit in music, musical training might have a beneficial effect on the auditory perception of acoustic cues to rhythm in dyslexia. Here we took advantage of the presence of musicians with and without dyslexia in musical conservatoires, comparing their auditory temporal processing abilities with those of dyslexic non-musicians matched for cognitive ability. Musicians with dyslexia showed equivalent auditory sensitivity to musicians without dyslexia and also showed equivalent rhythm perception. The data support the view that extensive rhythmic experience initiated during childhood (here in the form of music training) can affect basic auditory processing skills which are found to be deficient in individuals with dyslexia. Copyright © 2014 John Wiley & Sons, Ltd.
Auditory Temporal Processing as a Specific Deficit among Dyslexic Readers
ERIC Educational Resources Information Center
Fostick, Leah; Bar-El, Sharona; Ram-Tsur, Ronit
2012-01-01
The present study focuses on examining the hypothesis that auditory temporal perception deficit is a basic cause for reading disabilities among dyslexics. This hypothesis maintains that reading impairment is caused by a fundamental perceptual deficit in processing rapid auditory or visual stimuli. Since the auditory perception involves a number of…
ERIC Educational Resources Information Center
Boets, Bart; Wouters, Jan; van Wieringen, Astrid; Ghesquiere, Pol
2007-01-01
This study investigates whether the core bottleneck of literacy-impairment should be situated at the phonological level or at a more basic sensory level, as postulated by supporters of the auditory temporal processing theory. Phonological ability, speech perception and low-level auditory processing were assessed in a group of 5-year-old pre-school…
Demopoulos, Carly; Hopkins, Joyce; Kopald, Brandon E; Paulson, Kim; Doyle, Lauren; Andrews, Whitney E; Lewine, Jeffrey David
2015-11-01
The primary aim of this study was to examine whether there is an association between magnetoencephalography-based (MEG) indices of basic cortical auditory processing and vocal affect recognition (VAR) ability in individuals with autism spectrum disorder (ASD). MEG data were collected from 25 children/adolescents with ASD and 12 control participants using a paired-tone paradigm to measure quality of auditory physiology, sensory gating, and rapid auditory processing. Group differences were examined in auditory processing and vocal affect recognition ability. The relationship between differences in auditory processing and vocal affect recognition deficits was examined in the ASD group. Replicating prior studies, participants with ASD showed longer M1n latencies and impaired rapid processing compared with control participants. These variables were significantly related to VAR, with the linear combination of auditory processing variables accounting for approximately 30% of the variability after controlling for age and language skills in participants with ASD. VAR deficits in ASD are typically interpreted as part of a core, higher order dysfunction of the "social brain"; however, these results suggest they also may reflect basic deficits in auditory processing that compromise the extraction of socially relevant cues from the auditory environment. As such, they also suggest that therapeutic targeting of sensory dysfunction in ASD may have additional positive implications for other functional deficits. (c) 2015 APA, all rights reserved.
Ciaramitaro, Vivian M; Chow, Hiu Mei; Eglington, Luke G
2017-03-01
We used a cross-modal dual task to examine how changing visual-task demands influenced auditory processing, namely auditory thresholds for amplitude- and frequency-modulated sounds. Observers had to attend to two consecutive intervals of sounds and report which interval contained the auditory stimulus that was modulated in amplitude (Experiment 1) or frequency (Experiment 2). During auditory-stimulus presentation, observers simultaneously attended to a rapid sequential visual presentation (two consecutive intervals of streams of visual letters) and had to report which interval contained a particular color (low load, demanding less attentional resources) or, in separate blocks of trials, which interval contained more of a target letter (high load, demanding more attentional resources). We hypothesized that if attention is a shared resource across vision and audition, an easier visual task should free up more attentional resources for auditory processing on an unrelated task, hence improving auditory thresholds. Auditory detection thresholds were lower (that is, auditory sensitivity was improved) for both amplitude- and frequency-modulated sounds when observers engaged in a less demanding (compared to a more demanding) visual task. In accord with previous work, our findings suggest that visual-task demands can influence the processing of auditory information on an unrelated concurrent task, providing support for shared attentional resources. More importantly, our results suggest that attending to information in a different modality, cross-modal attention, can influence basic auditory contrast sensitivity functions, highlighting potential similarities between basic mechanisms for visual and auditory attention.
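The two-interval modulation-detection task described above is straightforward to sketch in code. The snippet below is an illustrative simplification with assumed parameter values, not the authors' actual stimulus code: it builds one trial, with a sinusoidally amplitude-modulated tone in one interval and an unmodulated tone in the other, in random order.

```python
import numpy as np

def am_tone(duration_s=0.5, carrier_hz=1000.0, mod_hz=8.0,
            mod_depth=0.5, fs=44100):
    """Sinusoidally amplitude-modulated tone; mod_depth=0 gives a pure tone."""
    t = np.arange(int(duration_s * fs)) / fs
    envelope = 1.0 + mod_depth * np.sin(2 * np.pi * mod_hz * t)
    return envelope * np.sin(2 * np.pi * carrier_hz * t)

# One two-interval trial: the modulated target appears first or second at random.
rng = np.random.default_rng(0)
target_first = rng.random() < 0.5
intervals = [am_tone(mod_depth=0.5), am_tone(mod_depth=0.0)]
if not target_first:
    intervals.reverse()
```

A threshold would then be estimated by adaptively varying `mod_depth` across trials according to the observer's interval choices.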
Wang, Hsiao-Lan S; Chen, I-Chen; Chiang, Chun-Han; Lai, Ying-Hui; Tsao, Yu
2016-10-01
The current study examined the associations between basic auditory perception, speech prosodic processing, and vocabulary development in Chinese kindergartners; specifically, whether early basic auditory perception may be related to linguistic prosodic processing in Chinese Mandarin vocabulary acquisition. A series of language, auditory, and linguistic prosodic tests were given to 100 preschool children who had not yet learned how to read Chinese characters. The results suggested that lexical tone sensitivity and intonation production were significantly correlated with children's general vocabulary abilities. In particular, tone awareness was associated with comprehensive language development, whereas intonation production was associated with both comprehensive and expressive language development. Regression analyses revealed that tone sensitivity accounted for 36% of the unique variance in vocabulary development, whereas intonation production accounted for 6% of the variance in vocabulary development. Moreover, auditory frequency discrimination was significantly correlated with lexical tone sensitivity, syllable duration discrimination, and intonation production in Mandarin Chinese, and it also contributed significantly to tone sensitivity and intonation production. Auditory frequency discrimination may indirectly affect early vocabulary development through Chinese speech prosody. © The Author(s) 2016.
Infant discrimination of rapid auditory cues predicts later language impairment.
Benasich, April A; Tallal, Paula
2002-10-17
The etiology and mechanisms of specific language impairment (SLI) in children are unknown. Differences in basic auditory processing abilities have been suggested to underlie their language deficits. Studies suggest that the neuropathology implicated in such impairments, such as atypical patterns of cerebral lateralization and cortical cellular anomalies, likely occurs early in life. Such anomalies may play a part in the rapid processing deficits seen in this disorder. However, prospective, longitudinal studies in infant populations that are critical to examining these hypotheses have not been done. In the study described, performance on brief, rapidly presented, successive auditory processing and perceptual-cognitive tasks was assessed in two groups of infants: normal control infants with no family history of language disorders and infants from families with a positive family history for language impairment. Initial assessments were obtained when infants were 6-9 months of age (M=7.5 months) and the sample was then followed through age 36 months. At the first visit, infants' processing of rapid auditory cues as well as global processing speed and memory were assessed. Significant differences in mean thresholds were seen in infants born into families with a history of SLI as compared with controls. Examination of relations between infant processing abilities and emerging language through 24 months of age revealed that threshold for rapid auditory processing at 7.5 months was the single best predictor of language outcome. At age 3, rapid auditory processing threshold and being male together predicted 39-41% of the variance in language outcome. Thus, early deficits in rapid auditory processing abilities both precede and predict subsequent language delays. These findings support an essential role for basic nonlinguistic, central auditory processes, particularly rapid spectrotemporal processing, in early language development.
Further, these findings provide a temporal diagnostic window during which future language impairments may be addressed.
ERIC Educational Resources Information Center
Hämäläinen, Jarmo A.; Salminen, Hanne K.; Leppänen, Paavo H. T.
2013-01-01
A review of research that uses behavioral, electroencephalographic, and/or magnetoencephalographic methods to investigate auditory processing deficits in individuals with dyslexia is presented. Findings show that measures of frequency, rise time, and duration discrimination as well as amplitude modulation and frequency modulation detection were…
Slevc, L Robert; Shell, Alison R
2015-01-01
Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.
Verhulst, Sarah; Altoè, Alessandro; Vasilkov, Viacheslav
2018-03-01
Models of the human auditory periphery range from very basic functional descriptions of auditory filtering to detailed computational models of cochlear mechanics, inner-hair cell (IHC), auditory-nerve (AN) and brainstem signal processing. It is challenging to include detailed physiological descriptions of cellular components into human auditory models because single-cell data stems from invasive animal recordings while human reference data only exists in the form of population responses (e.g., otoacoustic emissions, auditory evoked potentials). To embed physiological models within a comprehensive human auditory periphery framework, it is important to capitalize on the success of basic functional models of hearing and render their descriptions more biophysical where possible. At the same time, comprehensive models should capture a variety of key auditory features, rather than fitting their parameters to a single reference dataset. In this study, we review and improve existing models of the IHC-AN complex by updating their equations and expressing their fitting parameters into biophysical quantities. The quality of the model framework for human auditory processing is evaluated using recorded auditory brainstem response (ABR) and envelope-following response (EFR) reference data from normal and hearing-impaired listeners. We present a model with 12 fitting parameters from the cochlea to the brainstem that can be rendered hearing impaired to simulate how cochlear gain loss and synaptopathy affect human population responses. The model description forms a compromise between capturing well-described single-unit IHC and AN properties and human population response features. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
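As a concrete example of the "basic functional" end of this modeling spectrum, cochlear frequency selectivity is often approximated by a gammatone filterbank. The sketch below uses the standard textbook gammatone impulse response with a Glasberg and Moore ERB bandwidth; it is a generic illustration, not the model presented in this paper.

```python
import numpy as np

def gammatone_ir(cf_hz, fs=16000, duration_s=0.025, order=4):
    """Impulse response of a gammatone filter at centre frequency cf_hz.

    Bandwidth follows the Glasberg & Moore ERB approximation; the response
    is normalized to unit peak amplitude.
    """
    t = np.arange(int(duration_s * fs)) / fs
    erb = 24.7 * (4.37 * cf_hz / 1000.0 + 1.0)   # equivalent rectangular bandwidth, Hz
    b = 1.019 * erb                              # gammatone bandwidth parameter
    ir = t**(order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * cf_hz * t)
    return ir / np.abs(ir).max()
```

Convolving an input waveform with a bank of such responses at different centre frequencies gives the multi-channel "filtered" representation that more biophysical cochlear models refine.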
Tobe, Russell H; Corcoran, Cheryl M; Breland, Melissa; MacKay-Brandt, Anna; Klim, Casimir; Colcombe, Stanley J; Leventhal, Bennett L; Javitt, Daniel C
2016-08-01
Impairment in social cognition, including emotion recognition, has been extensively studied in both Autism Spectrum Disorders (ASD) and Schizophrenia (SZ). However, the relative patterns of deficit between disorders have been studied to a lesser degree. Here, we applied a social cognition battery incorporating both auditory (AER) and visual (VER) emotion recognition measures to a group of 19 high-functioning individuals with ASD relative to 92 individuals with SZ and 73 healthy control adult participants. We examined group differences and correlates of basic auditory processing and processing speed. Individuals with SZ were impaired in both AER and VER, while ASD individuals were impaired in VER only. In contrast to SZ participants, those with ASD showed intact basic auditory function. Our finding of a dissociation between AER and VER deficits in ASD relative to SZ supports modality-specific theories of emotion recognition dysfunction. Future studies should focus on visual system-specific contributions to social cognitive impairment in ASD. Copyright © 2016 Elsevier Ltd. All rights reserved.
Exploring the role of auditory analysis in atypical compared to typical language development.
Grube, Manon; Cooper, Freya E; Kumar, Sukhbinder; Kelly, Tom; Griffiths, Timothy D
2014-02-01
The relationship between auditory processing and language skills has been debated for decades. Previous findings have been inconsistent, both in typically developing and impaired subjects, including those with dyslexia or specific language impairment. Whether correlations between auditory and language skills are consistent between different populations has hardly been addressed at all. The present work takes an exploratory approach, testing for patterns of correlations in a range of measures of auditory processing. In a recent study, we reported findings from a large cohort of eleven-year-olds on a range of auditory measures, and the data supported a specific role for the processing of short sequences in pitch and time in typical language development. Here we tested whether a group of individuals with dyslexic traits (DT group; n = 28) from the same year group would show the same pattern of correlations between auditory and language skills as the typically developing group (TD group; n = 173). Regarding the raw scores, the DT group showed a significantly poorer performance on the language but not the auditory measures, including measures of pitch, time and rhythm, and timbre (modulation). In terms of correlations, there was a tendency toward weaker correlations between short-sequence processing and language skills, contrasted by a significant increase in correlation for basic, single-sound processing, in particular in the domain of modulation. The data support the notion that the fundamental relationship between auditory and language skills might differ in atypical compared to typical language development, with the implication that merging data or drawing inferences between populations might be problematic. Further examination of the relationship between both basic sound feature analysis and music-like sound analysis and language skills in impaired populations might allow the development of appropriate training strategies.
These might include types of musical training to augment language skills via their common bases in sound sequence analysis. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
Auditory and Motor Rhythm Awareness in Adults with Dyslexia
ERIC Educational Resources Information Center
Thomson, Jennifer M.; Fryer, Ben; Maltby, James; Goswami, Usha
2006-01-01
Children with developmental dyslexia appear to be insensitive to basic auditory cues to speech rhythm and stress. For example, they experience difficulties in processing duration and amplitude envelope onset cues. Here we explored the sensitivity of adults with developmental dyslexia to the same cues. In addition, relations with expressive and…
Information Processing in Auditory-Visual Conflict.
ERIC Educational Resources Information Center
Henker, Barbara A.; Whalen, Carol K.
1972-01-01
The present study used a set of bimodal (auditory-visual) conflict tasks designed specifically for the preschool child. The basic component was a match-to-sample sequence designed to reduce the often-found contaminating factors in studies with young children: failure to understand or remember instructions, inability to perform the indicator response, or…
Modulating Human Auditory Processing by Transcranial Electrical Stimulation
Heimrath, Kai; Fiene, Marina; Rufener, Katharina S.; Zaehle, Tino
2016-01-01
Transcranial electrical stimulation (tES) has become a valuable research tool for the investigation of neurophysiological processes underlying human action and cognition. In recent years, striking evidence for the neuromodulatory effects of transcranial direct current stimulation, transcranial alternating current stimulation, and transcranial random noise stimulation has emerged. While a wealth of knowledge has been gained about tES in the motor domain and, to a lesser extent, about its ability to modulate human cognition, surprisingly little is known about its impact on perceptual processing, particularly in the auditory domain. Moreover, while only a few studies have systematically investigated the impact of auditory tES, it has already been applied in a large number of clinical trials, leading to a remarkable imbalance between basic and clinical research on auditory tES. Here, we review the state of the art of tES application in the auditory domain, focusing on the impact of neuromodulation on acoustic perception and its potential for clinical application in the treatment of auditory-related disorders. PMID:27013969
Unraveling the principles of auditory cortical processing: can we learn from the visual system?
King, Andrew J; Nelken, Israel
2013-01-01
Studies of auditory cortex are often driven by the assumption, derived from our better understanding of visual cortex, that basic physical properties of sounds are represented there before being used by higher-level areas for determining sound-source identity and location. However, we only have a limited appreciation of what the cortex adds to the extensive subcortical processing of auditory information, which can account for many perceptual abilities. This is partly because of the approaches that have dominated the study of auditory cortical processing to date, and future progress will unquestionably profit from the adoption of methods that have provided valuable insights into the neural basis of visual perception. At the same time, we propose that there are unique operating principles employed by the auditory cortex that relate largely to the simultaneous and sequential processing of previously derived features and that therefore need to be studied and understood in their own right. PMID:19471268
Fundamental deficits of auditory perception in Wernicke's aphasia.
Robson, Holly; Grube, Manon; Lambon Ralph, Matthew A; Griffiths, Timothy D; Sage, Karen
2013-01-01
This work investigates the nature of the comprehension impairment in Wernicke's aphasia (WA), by examining the relationship between deficits in auditory processing of fundamental, non-verbal acoustic stimuli and auditory comprehension. WA, a condition resulting in severely disrupted auditory comprehension, primarily occurs following a cerebrovascular accident (CVA) to the left temporo-parietal cortex. Whilst damage to posterior superior temporal areas is associated with auditory linguistic comprehension impairments, functional-imaging indicates that these areas may not be specific to speech processing but part of a network for generic auditory analysis. We examined analysis of basic acoustic stimuli in WA participants (n = 10) using auditory stimuli reflective of theories of cortical auditory processing and of speech cues. Auditory spectral, temporal and spectro-temporal analysis was assessed using pure-tone frequency discrimination, frequency modulation (FM) detection and the detection of dynamic modulation (DM) in "moving ripple" stimuli. All tasks used criterion-free, adaptive measures of threshold to ensure reliable results at the individual level. Participants with WA showed normal frequency discrimination but significant impairments in FM and DM detection, relative to age- and hearing-matched controls at the group level (n = 10). At the individual level, there was considerable variation in performance, and thresholds for both FM and DM detection correlated significantly with auditory comprehension abilities in the WA participants. These results demonstrate the co-occurrence of a deficit in fundamental auditory processing of temporal and spectro-temporal non-verbal stimuli in WA, which may have a causal contribution to the auditory language comprehension impairment. Results are discussed in the context of traditional neuropsychology and current models of cortical auditory processing. Copyright © 2012 Elsevier Ltd. All rights reserved.
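The "criterion-free, adaptive measures of threshold" used in psychophysical studies like this one are typically transformed up-down staircases. The sketch below is illustrative only (assumed parameter values, with a simulated logistic listener standing in for a real participant): it implements a 2-down/1-up track, which converges near the 70.7%-correct point of the psychometric function.

```python
import math
import random

def staircase_2down1up(true_threshold, start=20.0, step=2.0,
                       n_reversals=8, seed=1):
    """2-down/1-up adaptive track: the stimulus level falls after two
    consecutive correct responses and rises after one error."""
    rng = random.Random(seed)

    def p_correct(level):
        # Simulated 2AFC listener: ~50% correct far below threshold,
        # approaching 100% well above it.
        return 0.5 + 0.5 / (1.0 + math.exp(-(level - true_threshold)))

    level, run, direction, reversals = start, 0, 0, []
    while len(reversals) < n_reversals:
        if rng.random() < p_correct(level):
            run += 1
            if run == 2:                  # two correct in a row: step down
                run = 0
                if direction == +1:       # track turned around: record a reversal
                    reversals.append(level)
                direction, level = -1, level - step
        else:                             # one error: step up
            run = 0
            if direction == -1:
                reversals.append(level)
            direction, level = +1, level + step
    # Threshold estimate: mean level at the later reversals.
    return sum(reversals[2:]) / len(reversals[2:])
```

Discarding the first reversals and averaging the rest yields a threshold estimate that does not depend on the listener's response criterion, which is why such measures are preferred for group comparisons at the individual level.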
Binaural speech processing in individuals with auditory neuropathy.
Rance, G; Ryan, M M; Carew, P; Corben, L A; Yiu, E; Tan, J; Delatycki, M B
2012-12-13
Auditory neuropathy disrupts the neural representation of sound and may therefore impair processes contingent upon inter-aural integration. The aims of this study were to investigate binaural auditory processing in individuals with axonal (Friedreich ataxia) and demyelinating (Charcot-Marie-Tooth disease type 1A) auditory neuropathy and to evaluate the relationship between the degree of auditory deficit and overall clinical severity in patients with neuropathic disorders. Twenty-three subjects with genetically confirmed Friedreich ataxia and 12 subjects with Charcot-Marie-Tooth disease type 1A underwent psychophysical evaluation of basic auditory processing (intensity discrimination/temporal resolution) and binaural speech perception assessment using the Listening in Spatialized Noise test. Age-, gender-, and hearing-level-matched controls were also tested. Speech perception in noise for individuals with auditory neuropathy was abnormal for each listening condition, but was particularly affected in circumstances where binaural processing might have improved perception through spatial segregation. Ability to use spatial cues was correlated with temporal resolution suggesting that the binaural-processing deficit was the result of disordered representation of timing cues in the left and right auditory nerves. Spatial processing was also related to overall disease severity (as measured by the Friedreich Ataxia Rating Scale and Charcot-Marie-Tooth Neuropathy Score) suggesting that the degree of neural dysfunction in the auditory system accurately reflects generalized neuropathic changes. Measures of binaural speech processing show promise for application in the neurology clinic. In individuals with auditory neuropathy due to both axonal and demyelinating mechanisms the assessment provides a measure of functional hearing ability, a biomarker capable of tracking the natural history of progressive disease and a potential means of evaluating the effectiveness of interventions.
Copyright © 2012 IBRO. Published by Elsevier Ltd. All rights reserved.
Behavioral and subcortical signatures of musical expertise in Mandarin Chinese speakers
Tervaniemi, Mari; Aalto, Daniel
2018-01-01
Both musical training and native language have been shown to have experience-based plastic effects on auditory processing. However, the combined effects within individuals are unclear. Recent research suggests that musical training and tone language speaking are not clearly additive in their effects on processing of auditory features and that there may be a disconnect between perceptual and neural signatures of auditory feature processing. The literature has only recently begun to investigate the effects of musical expertise on basic auditory processing for different linguistic groups. This work provides a profile of primary auditory feature discrimination for Mandarin speaking musicians and nonmusicians. The musicians showed enhanced perceptual discrimination for both frequency and duration as well as enhanced duration discrimination in a multifeature discrimination task, compared to nonmusicians. However, there were no differences between the groups in duration processing of nonspeech sounds at a subcortical level or in subcortical frequency representation of a nonnative tone contour, for f0 or for the first or second formant region. The results indicate that musical expertise provides a cognitive, but not subcortical, advantage in a population of Mandarin speakers. PMID:29300756
Auditory training improves auditory performance in cochlear implanted children.
Roman, Stephane; Rochette, Françoise; Triglia, Jean-Michel; Schön, Daniele; Bigand, Emmanuel
2016-07-01
While the positive benefits of pediatric cochlear implantation on language perception skills are now proven, the heterogeneity of outcomes remains high. Understanding this heterogeneity, and finding strategies to minimize it, is of utmost importance. Our scope here is to test the effects of an auditory training strategy, "Sound in Hands", using playful tasks grounded in the theoretical and empirical findings of cognitive sciences. Indeed, several basic auditory operations, such as auditory scene analysis (ASA), are not trained in the usual therapeutic interventions in deaf children. However, as they constitute a fundamental basis of auditory cognition, their development should yield general benefits for auditory processing and in turn enhance speech perception. The purpose of the present study was to determine whether cochlear implanted children could improve auditory performance in trained tasks and whether they could develop a transfer of learning to a phonetic discrimination test. Nineteen prelingually deaf children with unilateral cochlear implants and no additional handicap (4- to 10-year-olds) were recruited. The four main auditory cognitive processes (identification, discrimination, ASA, and auditory memory) were stimulated and trained in the experimental group (EG) using Sound in Hands. The EG followed 20 weekly training sessions of 30 min, and the untrained group served as the control group (CG). Two measures were taken for both groups: before training (T1) and after training (T2). The EG showed a significant improvement in the identification, discrimination, and auditory memory tasks. The improvement in the ASA task did not reach significance. The CG did not show any significant improvement in any of the tasks assessed. Most importantly, improvement was visible in the phonetic discrimination test for the EG only.
Moreover, younger children benefited more from the auditory training program to develop their phonetic abilities compared to older children, supporting the idea that rehabilitative care is most efficient when it takes place early on during childhood. These results are important to pinpoint the auditory deficits in CI children, to gather a better understanding of the links between basic auditory skills and speech perception which will in turn allow more efficient rehabilitative programs. Copyright © 2016 Elsevier B.V. All rights reserved.
Kolarik, Andrew J; Moore, Brian C J; Zahorik, Pavel; Cirstea, Silvia; Pardhan, Shahina
2016-02-01
Auditory distance perception plays a major role in spatial awareness, enabling location of objects and avoidance of obstacles in the environment. However, it remains under-researched relative to studies of the directional aspect of sound localization. This review focuses on the following four aspects of auditory distance perception: cue processing, development, consequences of visual and auditory loss, and neurological bases. The several auditory distance cues vary in their effective ranges in peripersonal and extrapersonal space. The primary cues are sound level, reverberation, and frequency. Nonperceptual factors, including the importance of the auditory event to the listener, also can affect perceived distance. Basic internal representations of auditory distance emerge at approximately 6 months of age in humans. Although visual information plays an important role in calibrating auditory space, sensorimotor contingencies can be used for calibration when vision is unavailable. Blind individuals often manifest supranormal abilities to judge relative distance but show a deficit in absolute distance judgments. Following hearing loss, the use of auditory level as a distance cue remains robust, while the reverberation cue becomes less effective. Previous studies have not found evidence that hearing-aid processing affects perceived auditory distance. Studies investigating the brain areas involved in processing different acoustic distance cues are described. Finally, suggestions are given for further research on auditory distance perception, including broader investigation of how background noise and multiple sound sources affect perceived auditory distance for those with sensory loss.
Hearing loss and the central auditory system: Implications for hearing aids
NASA Astrophysics Data System (ADS)
Frisina, Robert D.
2003-04-01
Hearing loss can result from disorders or damage to the ear (peripheral auditory system) or the brain (central auditory system). Here, the basic structure and function of the central auditory system will be highlighted as relevant to cases of permanent hearing loss where assistive devices (hearing aids) are called for. The parts of the brain used for hearing are altered in two basic ways in instances of hearing loss: (1) Damage to the ear can reduce the number and nature of input channels that the brainstem receives from the ear, causing plasticity of the central auditory system. This plasticity may partially compensate for the peripheral loss, or add new abnormalities such as distorted speech processing or tinnitus. (2) In some situations, damage to the brain can occur independently of the ear, as may occur in cases of head trauma, tumors or aging. Implications of deficits to the central auditory system for speech perception in noise, hearing aid use and future innovative circuit designs will be provided to set the stage for subsequent presentations in this special educational session. [Work supported by NIA-NIH Grant P01 AG09524 and the International Center for Hearing & Speech Research, Rochester, NY.]
Goswami, Usha; Fosker, Tim; Huss, Martina; Mead, Natasha; Szucs, Dénes
2011-01-01
Across languages, children with developmental dyslexia have a specific difficulty with the neural representation of the sound structure (phonological structure) of speech. One likely cause of their difficulties with phonology is a perceptual difficulty in auditory temporal processing (Tallal, 1980). Tallal (1980) proposed that basic auditory processing of brief, rapidly successive acoustic changes is compromised in dyslexia, thereby affecting phonetic discrimination (e.g. discriminating /b/ from /d/) via impaired discrimination of formant transitions (rapid acoustic changes in frequency and intensity). However, an alternative auditory temporal hypothesis is that the basic auditory processing of the slower amplitude modulation cues in speech is compromised (Goswami et al., 2002). Here, we contrast children's perception of a synthetic speech contrast (ba/wa) when it is based on the speed of the rate of change of frequency information (formant transition duration) versus the speed of the rate of change of amplitude modulation (rise time). We show that children with dyslexia have excellent phonetic discrimination based on formant transition duration, but poor phonetic discrimination based on envelope cues. The results explain why phonetic discrimination may be allophonic in developmental dyslexia (Serniclaes et al., 2004), and suggest new avenues for the remediation of developmental dyslexia. © 2010 Blackwell Publishing Ltd.
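The rise-time cue contrasted in this study is typically quantified from a stimulus's amplitude envelope as the time taken to climb from 10% to 90% of the envelope peak. A hypothetical sketch of that measurement on simple ramped tones (the study's actual stimuli were synthetic ba/wa syllables; the envelope method and ramp parameters here are illustrative assumptions):

```python
import numpy as np

def envelope(signal: np.ndarray, fs: float, win_ms: float = 10.0) -> np.ndarray:
    """Amplitude envelope via rectification and a moving-average lowpass."""
    win = np.ones(int(win_ms / 1000 * fs))
    win /= win.size
    return np.convolve(np.abs(signal), win, mode="same")

def rise_time(signal: np.ndarray, fs: float) -> float:
    """Time (s) for the envelope to climb from 10% to 90% of its peak."""
    env = envelope(signal, fs)
    peak = env.max()
    t10 = np.argmax(env >= 0.1 * peak)  # first sample above 10% of peak
    t90 = np.argmax(env >= 0.9 * peak)  # first sample above 90% of peak
    return (t90 - t10) / fs

# 200 ms, 440 Hz tone with a slow (50 ms) vs fast (5 ms) linear onset ramp.
fs = 16000
t = np.arange(int(0.2 * fs)) / fs
tone = np.sin(2 * np.pi * 440 * t)

def with_onset(ramp_ms: float) -> np.ndarray:
    n = int(ramp_ms / 1000 * fs)
    env = np.ones_like(t)
    env[:n] = np.linspace(0.0, 1.0, n)
    return tone * env

# The slower onset yields the longer measured rise time.
print(rise_time(with_onset(50), fs) > rise_time(with_onset(5), fs))
```

A /wa/ syllable has a slower envelope rise than /ba/, so impaired sensitivity to this cue, as reported here for children with dyslexia, would blur exactly this kind of contrast.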
Auditory sequence analysis and phonological skill
Grube, Manon; Kumar, Sukhbinder; Cooper, Freya E.; Turton, Stuart; Griffiths, Timothy D.
2012-01-01
This work tests the relationship between auditory and phonological skill in a non-selected cohort of 238 school students (age 11) with the specific hypothesis that sound-sequence analysis would be more relevant to phonological skill than the analysis of basic, single sounds. Auditory processing was assessed across the domains of pitch, time and timbre; a combination of six standard tests of literacy and language ability was used to assess phonological skill. A significant correlation between general auditory and phonological skill was demonstrated, plus a significant, specific correlation between measures of phonological skill and the auditory analysis of short sequences in pitch and time. The data support a limited but significant link between auditory and phonological ability with a specific role for sound-sequence analysis, and provide a possible new focus for auditory training strategies to aid language development in early adolescence. PMID:22951739
Laterality of basic auditory perception.
Sininger, Yvonne S; Bhatara, Anjali
2012-01-01
Laterality (left-right ear differences) of auditory processing was assessed using basic auditory skills: (1) gap detection, (2) frequency discrimination, and (3) intensity discrimination. Stimuli included tones (500, 1000, and 4000 Hz) and wide-band noise presented monaurally to each ear of typical adult listeners. The hypothesis tested was that processing of tonal stimuli would be enhanced by left ear (LE) stimulation and noise by right ear (RE) presentations. To investigate the limits of laterality by (1) spectral width, a narrow-band noise (NBN) of 450-Hz bandwidth was evaluated using intensity discrimination, and (2) stimulus duration, 200, 500, and 1000 ms duration tones were evaluated using frequency discrimination. A left ear advantage (LEA) was demonstrated with tonal stimuli in all experiments, but an expected REA for noise stimuli was not found. The NBN stimulus demonstrated no LEA and was characterised as a noise. No change in laterality was found with changes in stimulus durations. The LEA for tonal stimuli is felt to be due to more direct connections between the left ear and the right auditory cortex, which has been shown to be primary for spectral analysis and tonal processing. The lack of a REA for noise stimuli is unexplained. Sex differences in laterality for noise stimuli were noted but were not statistically significant. This study did establish a subtle but clear pattern of LEA for processing of tonal stimuli.
Stevens, Courtney; Paulsen, David; Yasen, Alia; Neville, Helen
2015-02-01
Previous neuroimaging studies indicate that lower socio-economic status (SES) is associated with reduced effects of selective attention on auditory processing. Here, we investigated whether lower SES is also associated with differences in a stimulus-driven aspect of auditory processing: the neural refractory period, or reduced amplitude response at faster rates of stimulus presentation. Thirty-two children aged 3 to 8 years participated, and were divided into two SES groups based on maternal education. Event-related brain potentials were recorded to probe stimuli presented at interstimulus intervals (ISIs) of 200, 500, or 1000 ms. These probes were superimposed on story narratives when attended and ignored, permitting a simultaneous experimental manipulation of selective attention. Results indicated that group differences in refractory periods differed as a function of attention condition. Children from higher SES backgrounds showed full neural recovery by 500 ms for attended stimuli, but required at least 1000 ms for unattended stimuli. In contrast, children from lower SES backgrounds showed similar refractory effects to attended and unattended stimuli, with full neural recovery by 500 ms. Thus, in higher SES children only, one functional consequence of selective attention is attenuation of the response to unattended stimuli, particularly at rapid ISIs, altering basic properties of the auditory refractory period. Together, these data indicate that differences in selective attention impact basic aspects of auditory processing in children from lower SES backgrounds. Copyright © 2013 Elsevier B.V. All rights reserved.
Cell-assembly coding in several memory processes.
Sakurai, Y
1998-01-01
The present paper discusses why the cell assembly, i.e., an ensemble population of neurons with flexible functional connections, is a tenable view of the basic code for information processes in the brain. The main properties indicating the reality of cell-assembly coding are the overlap of neurons among different assemblies and the dynamics of connections within and among the assemblies. The former can be detected as multiple functions of individual neurons in processing different kinds of information: individual neurons appear to be involved in multiple information processes. The latter can be detected as changes of functional synaptic connections in processing different kinds of information: correlations of activity among some of the recorded neurons appear to change across information processes. Recent experiments have compared several different memory processes (tasks) and detected these two main properties, indicating cell-assembly coding of memory in the working brain. The first experiment compared different types of processing of identical stimuli, i.e., working memory and reference memory of auditory stimuli. The second experiment compared identical processes of different types of stimuli, i.e., discriminations of simple auditory, simple visual, and configural auditory-visual stimuli. The third experiment compared identical processes of different types of stimuli with or without temporal processing of stimuli, i.e., discriminations of elemental auditory, configural auditory-visual, and sequential auditory-visual stimuli. Some possible features of cell-assembly coding, especially "dual coding" by individual neurons and cell assemblies, are discussed for future experimental approaches. Copyright 1998 Academic Press.
Boets, Bart; Wouters, Jan; van Wieringen, Astrid; Ghesquière, Pol
2007-04-09
This study investigates whether the core bottleneck of literacy impairment should be situated at the phonological level or at a more basic sensory level, as postulated by supporters of the auditory temporal processing theory. Phonological ability, speech perception and low-level auditory processing were assessed in a group of 5-year-old pre-school children at high family risk for dyslexia, compared to a group of well-matched low-risk control children. Based on family risk status and first-grade literacy achievement, children were categorized into groups and pre-school data were retrospectively reanalyzed. On average, children showing both increased family risk and literacy impairment at the end of first grade presented significant pre-school deficits in phonological awareness, rapid automatized naming, speech-in-noise perception and frequency modulation detection. The concurrent presence of these deficits before any formal reading instruction might suggest a causal relation with problematic literacy development. However, a closer inspection of the individual data indicates that the core of the literacy problem is situated at the level of higher-order phonological processing. Although auditory and speech perception problems are relatively over-represented in literacy-impaired subjects and might aggravate the phonological and literacy problem, it is unlikely that they are at the basis of these problems. At a neurobiological level, the results are interpreted as evidence for dysfunctional processing along the auditory-to-articulation stream implied in phonological processing, in combination with relatively intact or inconsistently impaired functioning of the auditory-to-meaning stream that subserves auditory processing and speech perception.
Metabotropic glutamate receptors in auditory processing
Lu, Yong
2014-01-01
As the major excitatory neurotransmitter used in the vertebrate brain, glutamate activates ionotropic and metabotropic glutamate receptors (mGluRs), which mediate fast and slow neuronal actions, respectively. Important modulatory roles of mGluRs have been shown in many brain areas, and drugs targeting mGluRs have been developed for treatment of brain disorders. Here, I review the studies on mGluRs in the auditory system. Anatomical expression of mGluRs in the cochlear nucleus has been well characterized, while data for other auditory nuclei await more systematic investigations at both the light and electron microscopy levels. The physiology of mGluRs has been extensively studied using in vitro brain slice preparations, with a focus on the lower auditory brainstem in both mammals and birds. These in vitro physiological studies have revealed that mGluRs participate in neurotransmission, regulate ionic homeostasis, induce synaptic plasticity, and maintain the balance between excitation and inhibition in a variety of auditory structures. However, very few in vivo physiological studies on mGluRs in auditory processing have been undertaken at the systems level. Many questions regarding the essential roles of mGluRs in auditory processing still remain unanswered and more rigorous basic research is warranted. PMID:24909898
Bidelman, Gavin M.; Hutka, Stefanie; Moreno, Sylvain
2013-01-01
Psychophysiological evidence suggests that music and language are intimately coupled such that experience/training in one domain can influence processing required in the other domain. While the influence of music on language processing is now well-documented, evidence of language-to-music effects has yet to be firmly established. Here, using a cross-sectional design, we compared the performance of musicians to that of tone-language (Cantonese) speakers on tasks of auditory pitch acuity, music perception, and general cognitive ability (e.g., fluid intelligence, working memory). While musicians demonstrated superior performance on all auditory measures, comparable perceptual enhancements were observed for Cantonese participants, relative to English-speaking nonmusicians. These results provide evidence that tone-language background is associated with higher auditory perceptual performance for music listening. Musicians and Cantonese speakers also showed superior working memory capacity relative to nonmusician controls, suggesting that in addition to basic perceptual enhancements, tone-language background and music training might also be associated with enhanced general cognitive abilities. Our findings support the notion that tone-language speakers and musically trained individuals have higher performance than English-speaking listeners for the perceptual-cognitive processing necessary for basic auditory as well as complex music perception. These results illustrate bidirectional influences between the domains of music and language. PMID:23565267
Auditory Cortical Plasticity Drives Training-Induced Cognitive Changes in Schizophrenia
Dale, Corby L.; Brown, Ethan G.; Fisher, Melissa; Herman, Alexander B.; Dowling, Anne F.; Hinkley, Leighton B.; Subramaniam, Karuna; Nagarajan, Srikantan S.; Vinogradov, Sophia
2016-01-01
Schizophrenia is characterized by dysfunction in basic auditory processing, as well as higher-order operations of verbal learning and executive functions. We investigated whether targeted cognitive training of auditory processing improves neural responses to speech stimuli, and how these changes relate to higher-order cognitive functions. Patients with schizophrenia performed an auditory syllable identification task during magnetoencephalography before and after 50 hours of either targeted cognitive training or a computer games control. Healthy comparison subjects were assessed at baseline and after a 10 week no-contact interval. Prior to training, patients (N = 34) showed reduced M100 response in primary auditory cortex relative to healthy participants (N = 13). At reassessment, only the targeted cognitive training patient group (N = 18) exhibited increased M100 responses. Additionally, this group showed increased induced high gamma band activity within left dorsolateral prefrontal cortex immediately after stimulus presentation, and later in bilateral temporal cortices. Training-related changes in neural activity correlated with changes in executive function scores but not verbal learning and memory. These data suggest that computerized cognitive training that targets auditory and verbal learning operations enhances both sensory responses in auditory cortex as well as engagement of prefrontal regions, as indexed during an auditory processing task with low demands on working memory. This neural circuit enhancement is in turn associated with better executive function but not verbal memory. PMID:26152668
Auditory cortex of bats and primates: managing species-specific calls for social communication
Kanwal, Jagmeet S.; Rauschecker, Josef P.
2014-01-01
Individuals of many animal species communicate with each other using sounds or “calls” that are made up of basic acoustic patterns and their combinations. We are interested in questions about the processing of communication calls and their representation within the mammalian auditory cortex. Our studies compare in particular two species for which a large body of data has accumulated: the mustached bat and the rhesus monkey. We conclude that the brains of both species share a number of functional and organizational principles, which differ only in the extent to which and how they are implemented. For instance, neurons in both species use “combination-sensitivity” (nonlinear spectral and temporal integration of stimulus components) as a basic mechanism to enable exquisite sensitivity to and selectivity for particular call types. Whereas combination-sensitivity is already found abundantly at the primary auditory cortical and also at subcortical levels in bats, it becomes prevalent only at the level of the lateral belt in the secondary auditory cortex of monkeys. A parallel-hierarchical framework for processing complex sounds up to the level of the auditory cortex in bats and an organization into parallel-hierarchical, cortico-cortical auditory processing streams in monkeys is another common principle. Response specialization of neurons seems to be more pronounced in bats than in monkeys, whereas a functional specialization into “what” and “where” streams in the cerebral cortex is more pronounced in monkeys than in bats. These differences, in part, are due to the increased number and larger size of auditory areas in the parietal and frontal cortex in primates. Accordingly, the computational prowess of neural networks and the functional hierarchy resulting in specializations are established early and accelerated across brain regions in bats.
The principles proposed here for the neural “management” of species-specific calls in bats and primates can be tested by studying the details of call processing in additional species. Also, computational modeling in conjunction with coordinated studies in bats and monkeys can help to clarify the fundamental question of perceptual invariance (or “constancy”) in call recognition, which has obvious relevance for understanding speech perception and its disorders in humans. PMID:17485400
Some Viable Techniques for Assessing and Counselling Cognitive Processing Weakness
ERIC Educational Resources Information Center
Haruna, Abubakar Sadiq
2016-01-01
Cognitive Processing weakness (CPW) is a psychological problem that impedes students' ability to learn effectively in a normal school setting. Such weakness may include; auditory, visual, conceptual, sequential, speed and attention processing. This paper therefore examines the basic assessment or diagnostic approaches such as Diagnosis by…
Persistent Thalamic Sound Processing Despite Profound Cochlear Denervation.
Chambers, Anna R; Salazar, Juan J; Polley, Daniel B
2016-01-01
Neurons at higher stages of sensory processing can partially compensate for a sudden drop in peripheral input through a homeostatic plasticity process that increases the gain on weak afferent inputs. Even after a profound unilateral auditory neuropathy where >95% of afferent synapses between auditory nerve fibers and inner hair cells have been eliminated with ouabain, central gain can restore cortical processing and perceptual detection of basic sounds delivered to the denervated ear. In this model of profound auditory neuropathy, auditory cortex (ACtx) processing and perception recover despite the absence of an auditory brainstem response (ABR) or brainstem acoustic reflexes, and only a partial recovery of sound processing at the level of the inferior colliculus (IC), an auditory midbrain nucleus. In this study, we induced a profound cochlear neuropathy with ouabain and asked whether central gain enabled a compensatory plasticity in the auditory thalamus comparable to the full recovery of function previously observed in the ACtx, the partial recovery observed in the IC, or something different entirely. Unilateral ouabain treatment in adult mice effectively eliminated the ABR, yet robust sound-evoked activity persisted in a minority of units recorded from the contralateral medial geniculate body (MGB) of awake mice. Sound-driven MGB units could decode moderate and high-intensity sounds with accuracies comparable to sham-treated control mice, but low-intensity classification was near chance. Pure tone receptive fields and synchronization to broadband pulse trains also persisted, albeit with significantly reduced quality and precision, respectively. MGB decoding of both temporally modulated pulse trains and speech tokens was greatly impaired in ouabain-treated mice. Taken together, the absence of an ABR belied persistent auditory processing at the level of the MGB that was likely enabled through increased central gain.
Compensatory plasticity at the level of the auditory thalamus was less robust overall than previous observations in cortex or midbrain. Hierarchical differences in compensatory plasticity following sensorineural hearing loss may reflect differences in GABA circuit organization within the MGB, as compared to the ACtx or IC.
Development of the auditory system
Litovsky, Ruth
2015-01-01
Auditory development involves changes in the peripheral and central nervous system along the auditory pathways, and these occur naturally, and in response to stimulation. Human development occurs along a trajectory that can last decades, and is studied using behavioral psychophysics, as well as physiologic measurements with neural imaging. The auditory system constructs a perceptual space that takes information from objects and groups, segregates sounds, and provides meaning and access to communication tools such as language. Auditory signals are processed in a series of analysis stages, from peripheral to central. Coding of information has been studied for features of sound, including frequency, intensity, loudness, and location, in quiet and in the presence of maskers. In the latter case, the ability of the auditory system to perform an analysis of the scene becomes highly relevant. While some basic abilities are well developed at birth, there is a clear prolonged maturation of auditory development well into the teenage years. Maturation involves auditory pathways. However, non-auditory changes (attention, memory, cognition) play an important role in auditory development. The ability of the auditory system to adapt in response to novel stimuli is a key feature of development throughout the nervous system, known as neural plasticity. PMID:25726262
Revisiting the "enigma" of musicians with dyslexia: Auditory sequencing and speech abilities.
Zuk, Jennifer; Bishop-Liebler, Paula; Ozernov-Palchik, Ola; Moore, Emma; Overy, Katie; Welch, Graham; Gaab, Nadine
2017-04-01
Previous research has suggested a link between musical training and auditory processing skills. Musicians have shown enhanced perception of auditory features critical to both music and speech, suggesting that this link extends beyond basic auditory processing. It remains unclear to what extent musicians who also have dyslexia show these specialized abilities, considering often-observed persistent deficits that coincide with reading impairments. The present study evaluated auditory sequencing and speech discrimination in 52 adults comprised of musicians with dyslexia, nonmusicians with dyslexia, and typical musicians. An auditory sequencing task measuring perceptual acuity for tone sequences of increasing length was administered. Furthermore, subjects were asked to discriminate synthesized syllable continua varying in acoustic components of speech necessary for intraphonemic discrimination, which included spectral (formant frequency) and temporal (voice onset time [VOT] and amplitude envelope) features. Results indicate that musicians with dyslexia did not significantly differ from typical musicians and performed better than nonmusicians with dyslexia for auditory sequencing as well as discrimination of spectral and VOT cues within syllable continua. However, typical musicians demonstrated superior performance relative to both groups with dyslexia for discrimination of syllables varying in amplitude information. These findings suggest a distinct profile of speech processing abilities in musicians with dyslexia, with specific weaknesses in discerning amplitude cues within speech. Because these difficulties seem to remain persistent in adults with dyslexia despite musical training, this study only partly supports the potential for musical training to enhance the auditory processing skills known to be crucial for literacy in individuals with dyslexia. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Using Neuroplasticity-Based Auditory Training to Improve Verbal Memory in Schizophrenia
Fisher, Melissa; Holland, Christine; Merzenich, Michael M.; Vinogradov, Sophia
2009-01-01
Objective: Impaired verbal memory in schizophrenia is a key rate-limiting factor for functional outcome, does not respond to currently available medications, and shows only modest improvement after conventional behavioral remediation. The authors investigated an innovative approach to the remediation of verbal memory in schizophrenia, based on principles derived from the basic neuroscience of learning-induced neuroplasticity. The authors report interim findings in this ongoing study. Method: Fifty-five clinically stable schizophrenia subjects were randomly assigned to either 50 hours of computerized auditory training or a control condition using computer games. Those receiving auditory training engaged in daily computerized exercises that placed implicit, increasing demands on auditory perception through progressively more difficult auditory-verbal working memory and verbal learning tasks. Results: Relative to the control group, subjects who received active training showed significant gains in global cognition, verbal working memory, and verbal learning and memory. They also showed reliable and significant improvement in auditory psychophysical performance; this improvement was significantly correlated with gains in verbal working memory and global cognition. Conclusions: Intensive training in early auditory processes and auditory-verbal learning results in substantial gains in verbal cognitive processes relevant to psychosocial functioning in schizophrenia. These gains may be due to a training method that addresses the early perceptual impairments in the illness, that exploits intact mechanisms of repetitive practice in schizophrenia, and that uses an intensive, adaptive training approach. PMID:19448187
The ability to tap to a beat relates to cognitive, linguistic, and perceptual skills
Tierney, Adam T.; Kraus, Nina
2013-01-01
Reading-impaired children have difficulty tapping to a beat. Here we tested whether this relationship between reading ability and synchronized tapping holds in typically-developing adolescents. We also hypothesized that tapping relates to two other abilities. First, since auditory-motor synchronization requires monitoring of the relationship between motor output and auditory input, we predicted that subjects better able to tap to the beat would perform better on attention tests. Second, since auditory-motor synchronization requires fine temporal precision within the auditory system for the extraction of a sound’s onset time, we predicted that subjects better able to tap to the beat would be less affected by backward masking, a measure of temporal precision within the auditory system. As predicted, tapping performance related to reading, attention, and backward masking. These results motivate future research investigating whether beat synchronization training can improve not only reading ability, but potentially executive function and basic auditory processing as well. PMID:23400117
The auditory processing battery: survey of common practices.
Emanuel, Diana C
2002-02-01
A survey of auditory processing (AP) diagnostic practices was mailed to all licensed audiologists in the State of Maryland and sent as an electronic mail attachment to the American Speech-Language-Hearing Association and Educational Audiology Association Internet forums. Common AP protocols (25 from the Internet, 28 from audiologists in Maryland) included requiring basic audiologic testing, using questionnaires, and administering dichotic listening, monaural low-redundancy speech, temporal processing, and electrophysiologic tests. Some audiologists also administer binaural interaction, attention, memory, and speech-language/psychological/educational tests and incorporate a classroom observation. The various AP batteries presently administered appear to be based on the availability of AP tests with well-documented normative data. Resources for obtaining AP tests are listed.
Impey, Danielle; Knott, Verner
2015-08-01
Membrane potentials and brain plasticity are basic modes of cerebral information processing. Both can be externally (non-invasively) modulated by weak transcranial direct current stimulation (tDCS). Polarity-dependent tDCS-induced reversible circumscribed increases and decreases in cortical excitability and functional changes have been observed following stimulation of motor and visual cortices but relatively little research has been conducted with respect to the auditory cortex. The aim of this pilot study was to examine the effects of tDCS on auditory sensory discrimination in healthy participants (N = 12) assessed with the mismatch negativity (MMN) brain event-related potential (ERP). In a randomized, double-blind, sham-controlled design, participants received anodal tDCS over the primary auditory cortex (2 mA for 20 min) in one session and 'sham' stimulation (i.e., no stimulation except initial ramp-up for 30 s) in the other session. MMN elicited by changes in auditory pitch was found to be enhanced after receiving anodal tDCS compared to 'sham' stimulation, with the effects being evidenced in individuals with relatively reduced (vs. increased) baseline amplitudes and with relatively small (vs. large) pitch deviants. Additional studies are needed to further explore relationships between tDCS-related parameters, auditory stimulus features and individual differences prior to assessing the utility of this tool for treating auditory processing deficits in psychiatric and/or neurological disorders.
Audio-tactile integration and the influence of musical training.
Kuchenbuch, Anja; Paraskevopoulos, Evangelos; Herholz, Sibylle C; Pantev, Christo
2014-01-01
Perception of our environment is a multisensory experience; information from different sensory systems, such as the auditory, visual, and tactile systems, is constantly integrated. Complex tasks that require high temporal and spatial precision of multisensory integration put strong demands on the underlying networks, but it is largely unknown how task experience shapes multisensory processing. Long-term musical training is an excellent model for brain plasticity because it shapes the human brain at functional and structural levels, affecting a network of brain areas. In the present study we used magnetoencephalography (MEG) to investigate how audio-tactile perception is integrated in the human brain and whether musicians show enhanced corresponding activation compared to non-musicians. Using a paradigm that allowed the investigation of combined and separate auditory and tactile processing, we found a multisensory incongruency response, generated in frontal, cingulate and cerebellar regions, an auditory mismatch response generated mainly in the auditory cortex, and a tactile mismatch response generated in frontal and cerebellar regions. The influence of musical training was seen in the audio-tactile as well as in the auditory condition, indicating enhanced higher-order processing in musicians, while the sources of the tactile MMN were not influenced by long-term musical training. Consistent with the predictive coding model, more basic, bottom-up sensory processing was relatively stable and less affected by expertise, whereas areas for top-down models of multisensory expectancies were modulated by training.
Electrophysiological measurement of human auditory function
NASA Technical Reports Server (NTRS)
Galambos, R.
1975-01-01
Knowledge of the human auditory evoked response is reviewed, including methods of determining this response, the way particular changes in the stimulus are coupled to specific changes in the response, and how the state of mind of the listener influences the response. Important practical applications of this basic knowledge are discussed. Measurement of the brainstem evoked response, for instance, can state unequivocally how well the peripheral auditory apparatus functions. It might then be developed into a useful hearing test, especially for infants and preverbal or nonverbal children. Clinical applications of measuring the brain waves evoked 100 msec and later after the auditory stimulus are undetermined. These waves are clearly related to brain events associated with cognitive processing of acoustic signals, since their properties depend upon where the listener directs his attention and whether or not he expects the signal.
Auditory dysfunction in schizophrenia: integrating clinical and basic features
Javitt, Daniel C.; Sweet, Robert A.
2015-01-01
Schizophrenia is a complex neuropsychiatric disorder that is associated with persistent psychosocial disability in affected individuals. Although studies of schizophrenia have traditionally focused on deficits in higher-order processes such as working memory and executive function, there is an increasing realization that, in this disorder, deficits can be found throughout the cortex and are manifest even at the level of early sensory processing. These deficits are highly amenable to translational investigation and represent potential novel targets for clinical intervention. Deficits, moreover, have been linked to specific structural abnormalities in post-mortem auditory cortex tissue from individuals with schizophrenia, providing unique insights into underlying pathophysiological mechanisms. PMID:26289573
Auditory Implant Research at the House Ear Institute 1989–2013
Shannon, Robert V.
2014-01-01
The House Ear Institute (HEI) had a long and distinguished history of auditory implant innovation and development. Early clinical innovations include being one of the first cochlear implant (CI) centers, being the first center to implant a child with a cochlear implant in the US, developing the auditory brainstem implant, and developing multiple surgical approaches and tools for Otology. This paper reviews the second stage of auditory implant research at House – in-depth basic research on perceptual capabilities and signal processing for both cochlear implants and auditory brainstem implants. Psychophysical studies characterized the loudness and temporal perceptual properties of electrical stimulation as a function of electrical parameters. Speech studies with the noise-band vocoder showed that only four bands of tonotopically arrayed information were sufficient for speech recognition, and that most implant users were receiving the equivalent of 8–10 bands of information. The noise-band vocoder allowed us to evaluate the effects of the manipulation of the number of bands, the alignment of the bands with the original tonotopic map, and distortions in the tonotopic mapping, including holes in the neural representation. Stimulation pulse rate was shown to have only a small effect on speech recognition. Electric fields were manipulated in position and sharpness, showing the potential benefit of improved tonotopic selectivity. Auditory training shows great promise for improving speech recognition for all patients. And the Auditory Brainstem Implant was developed and improved and its application expanded to new populations. Overall, the last 25 years of research at HEI helped increase the basic scientific understanding of electrical stimulation of hearing and contributed to the improved outcomes for patients with the CI and ABI devices. PMID:25449009
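The noise-band vocoder used in the HEI speech studies can be illustrated with a minimal sketch: split the signal into a few log-spaced frequency bands, extract each band's amplitude envelope, and use the envelopes to modulate band-limited noise. The band edges, envelope cutoff, and ideal FFT-mask filtering below are illustrative simplifications, not the original implementation.

```python
import numpy as np

def band_signal(x, fs, lo, hi):
    """Isolate one frequency band with an ideal FFT-mask band-pass filter."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    X[(freqs < lo) | (freqs >= hi)] = 0.0
    return np.fft.irfft(X, n=len(x))

def envelope(x, fs, cutoff=30.0):
    """Amplitude envelope: rectify, then low-pass (cutoff is an assumption)."""
    return np.maximum(band_signal(np.abs(x), fs, 0.0, cutoff), 0.0)

def noise_vocode(x, fs, n_bands=4, f_lo=100.0, f_hi=8000.0):
    """Re-synthesize x as n_bands of envelope-modulated band-limited noise."""
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(len(x))
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)  # log-spaced band edges
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        env = envelope(band_signal(x, fs, lo, hi), fs)
        out += env * band_signal(noise, fs, lo, hi)
    return out

fs = 16000
t = np.arange(fs) / fs
# toy "speech-like" input: a 440 Hz carrier with a slow amplitude modulation
speechlike = np.sin(2 * np.pi * 440.0 * t) * (1.0 + np.sin(2 * np.pi * 3.0 * t))
voc = noise_vocode(speechlike, fs)
print(voc.shape)
```

With four bands, as in the HEI finding, the slow envelopes survive while spectral fine structure is replaced by noise.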
ERIC Educational Resources Information Center
Cormier, Damien C.; McGrew, Kevin S.; Bulut, Okan; Funamoto, Allyson
2017-01-01
This study examined associations between broad cognitive abilities (Fluid Reasoning [Gf], Short-Term Working Memory [Gwm], Long-Term Storage and Retrieval [Glr], Processing Speed [Gs], Comprehension-Knowledge [Gc], Visual Processing [Gv], and Auditory Processing [Ga]) and reading achievement (Basic Reading Skills, Reading Rate, Reading Fluency,…
Cantiani, Chiara; Choudhury, Naseem A; Yu, Yan H; Shafer, Valerie L; Schwartz, Richard G; Benasich, April A
2016-01-01
This study examines electrocortical activity associated with visual and auditory sensory perception and lexical-semantic processing in nonverbal (NV) or minimally-verbal (MV) children with Autism Spectrum Disorder (ASD). Currently, there is no agreement on whether these children comprehend incoming linguistic information and whether their perception is comparable to that of typically developing children. Event-related potentials (ERPs) of 10 NV/MV children with ASD and 10 neurotypical children were recorded during a picture-word matching paradigm. Atypical ERP responses were evident at all levels of processing in children with ASD. Basic perceptual processing was delayed in both visual and auditory domains but overall was similar in amplitude to typically developing children. However, significant differences between groups were found at the lexical-semantic level, suggesting more atypical higher-order processes. The results suggest that although basic perception is relatively preserved in NV/MV children with ASD, higher levels of processing, including lexical-semantic functions, are impaired. The use of passive ERP paradigms that do not require active participant response shows significant potential for assessment of non-compliant populations such as NV/MV children with ASD.
Beta-Band Oscillations Represent Auditory Beat and Its Metrical Hierarchy in Perception and Imagery.
Fujioka, Takako; Ross, Bernhard; Trainor, Laurel J
2015-11-11
Dancing to music involves synchronized movements, which can be at the basic beat level or higher hierarchical metrical levels, as in a march (groups of two basic beats, one-two-one-two …) or waltz (groups of three basic beats, one-two-three-one-two-three …). Our previous human magnetoencephalography studies revealed that the subjective sense of meter influences auditory evoked responses phase locked to the stimulus. Moreover, the timing of metronome clicks was represented in periodic modulation of induced (non-phase locked) β-band (13-30 Hz) oscillation in bilateral auditory and sensorimotor cortices. Here, we further examine whether acoustically accented and subjectively imagined metric processing in march and waltz contexts during listening to isochronous beats was reflected in neuromagnetic β-band activity recorded from young adult musicians. First, we replicated previous findings of beat-related β-power decrease at 200 ms after the beat followed by a predictive increase toward the onset of the next beat. Second, we showed that the β decrease was significantly influenced by the metrical structure, as reflected by differences across beat type for both perception and imagery conditions. Specifically, the β-power decrease associated with imagined downbeats (the count "one") was larger than that for both the upbeat (preceding the count "one") in the march, and for the middle beat in the waltz. Moreover, beamformer source analysis for the whole brain volume revealed that the metric contrasts involved auditory and sensorimotor cortices; frontal, parietal, and inferior temporal lobes; and cerebellum. We suggest that the observed β-band activities reflect a translation of timing information to auditory-motor coordination. With magnetoencephalography, we examined β-band oscillatory activities around 20 Hz while participants listened to metronome beats and imagined musical meters such as a march and waltz.
We demonstrated that β-band event-related desynchronization in the auditory cortex differentiates between beat positions, specifically between downbeats and the following beat. This is the first demonstration of β-band oscillations related to hierarchical and internalized timing information. Moreover, the meter representation in the β oscillations was widespread across the brain, including sensorimotor and premotor cortices, parietal lobe, and cerebellum. The results extend current understanding of the role of β oscillations in neural processing of predictive timing. Copyright © 2015 the authors 0270-6474/15/3515187-12$15.00/0.
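The β-band event-related desynchronization (ERD) measure reported above is, in essence, a percent change in band power after an event relative to an equally long pre-stimulus baseline. A minimal sketch on a synthetic trial (sampling rate, window lengths, and the ideal-FFT band-power estimate are all illustrative assumptions):

```python
import numpy as np

def beta_power(seg, fs, lo=13.0, hi=30.0):
    """Mean spectral power of a segment within the beta band (13-30 Hz)."""
    spec = np.abs(np.fft.rfft(seg)) ** 2
    freqs = np.fft.rfftfreq(len(seg), 1.0 / fs)
    band = (freqs >= lo) & (freqs < hi)
    return spec[band].mean()

def erd_percent(trial, fs, onset_s=0.5):
    """ERD/ERS: % beta-power change after stimulus onset vs. an equally long
    pre-stimulus baseline. Negative values indicate desynchronization."""
    n0 = int(onset_s * fs)
    p_base = beta_power(trial[:n0], fs)
    p_post = beta_power(trial[n0:2 * n0], fs)
    return 100.0 * (p_post - p_base) / p_base

fs = 256
rng = np.random.default_rng(1)
t = np.arange(2 * fs) / fs                 # 2-s trial, stimulus at 0.5 s
amp = np.where(t < 0.5, 1.0, 0.5)          # 20 Hz rhythm halves in amplitude
trial = amp * np.sin(2 * np.pi * 20.0 * t) + 0.05 * rng.standard_normal(t.size)
e = erd_percent(trial, fs)
print(round(e, 1))
```

Halving the amplitude of the 20 Hz rhythm quarters its power, so the sketch yields an ERD near -75%.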
Cumming, Ruth; Wilson, Angela; Goswami, Usha
2015-01-01
Children with specific language impairments (SLIs) show impaired perception and production of spoken language, and can also present with motor, auditory, and phonological difficulties. Recent auditory studies have shown impaired sensitivity to amplitude rise time (ART) in children with SLIs, along with non-speech rhythmic timing difficulties. Linguistically, these perceptual impairments should affect sensitivity to speech prosody and syllable stress. Here we used two tasks requiring sensitivity to prosodic structure, the DeeDee task and a stress misperception task, to investigate this hypothesis. We also measured auditory processing of ART, rising pitch and sound duration, in both speech (“ba”) and non-speech (tone) stimuli. Participants were 45 children with SLI aged on average 9 years and 50 age-matched controls. We report data for all the SLI children (N = 45, IQ varying), as well as for two independent SLI subgroupings with intact IQ. One subgroup, “Pure SLI,” had intact phonology and reading (N = 16), the other, “SLI PPR” (N = 15), had impaired phonology and reading. Problems with syllable stress and prosodic structure were found for all the group comparisons. Both sub-groups with intact IQ showed reduced sensitivity to ART in speech stimuli, but the PPR subgroup also showed reduced sensitivity to sound duration in speech stimuli. Individual differences in processing syllable stress were associated with auditory processing. These data support a new hypothesis, the “prosodic phrasing” hypothesis, which proposes that grammatical difficulties in SLI may reflect perceptual difficulties with global prosodic structure related to auditory impairments in processing amplitude rise time and duration. PMID:26217286
Stoppelman, Nadav; Harpaz, Tamar; Ben-Shachar, Michal
2013-05-01
Speech processing engages multiple cortical regions in the temporal, parietal, and frontal lobes. Isolating speech-sensitive cortex in individual participants is of major clinical and scientific importance. This task is complicated by the fact that responses to sensory and linguistic aspects of speech are tightly packed within the posterior superior temporal cortex. In functional magnetic resonance imaging (fMRI), various baseline conditions are typically used in order to isolate speech-specific from basic auditory responses. Using a short, continuous sampling paradigm, we show that reversed ("backward") speech, a commonly used auditory baseline for speech processing, removes much of the speech responses in frontal and temporal language regions of adult individuals. On the other hand, signal correlated noise (SCN) serves as an effective baseline for removing primary auditory responses while maintaining strong signals in the same language regions. We show that the response to reversed speech in left inferior frontal gyrus decays significantly faster than the response to speech, suggesting that this response reflects bottom-up activation of speech analysis followed by top-down attenuation once the signal is classified as nonspeech. The results overall favor SCN as an auditory baseline for speech processing.
Psychophysics and Neuronal Bases of Sound Localization in Humans
Ahveninen, Jyrki; Kopco, Norbert; Jääskeläinen, Iiro P.
2013-01-01
Localization of sound sources is a considerable computational challenge for the human brain. Whereas the visual system can process basic spatial information in parallel, the auditory system lacks a straightforward correspondence between external spatial locations and sensory receptive fields. Consequently, the question how different acoustic features supporting spatial hearing are represented in the central nervous system is still open. Functional neuroimaging studies in humans have provided evidence for a posterior auditory “where” pathway that encompasses non-primary auditory cortex areas, including the planum temporale (PT) and posterior superior temporal gyrus (STG), which are strongly activated by horizontal sound direction changes, distance changes, and movement. However, these areas are also activated by a wide variety of other stimulus features, posing a challenge for the interpretation that the underlying areas are purely spatial. This review discusses behavioral and neuroimaging studies on sound localization, and some of the competing models of representation of auditory space in humans. PMID:23886698
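The binaural cues central to this review, interaural time and level differences, can be approximated with simple closed-form expressions. The sketch below uses Woodworth's spherical-head model with an assumed head radius; it is a textbook simplification, not a model from the paper.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, at ~20 degrees C
HEAD_RADIUS = 0.0875    # m; a typical adult value, assumed here

def itd_woodworth(azimuth_deg):
    """Interaural time difference (seconds) for a distant source at a given
    azimuth, using Woodworth's spherical-head model: ITD = (r/c)(a + sin a)."""
    a = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (a + math.sin(a))

def ild_db(rms_near, rms_far):
    """Interaural level difference in dB between the two ear signals."""
    return 20.0 * math.log10(rms_near / rms_far)

# ITD grows from 0 at the midline to roughly 600-700 microseconds at 90 degrees
for az in (0.0, 30.0, 60.0, 90.0):
    print(az, round(itd_woodworth(az) * 1e6, 1))
```

The monotonic growth of ITD with azimuth is what makes it usable as a horizontal direction cue; ILD additionally depends strongly on frequency, which this sketch omits.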
Diminished N1 auditory evoked potentials to oddball stimuli in misophonia patients.
Schröder, Arjan; van Diepen, Rosanne; Mazaheri, Ali; Petropoulos-Petalas, Diamantis; Soto de Amesti, Vicente; Vulink, Nienke; Denys, Damiaan
2014-01-01
Misophonia (hatred of sound) is a newly defined psychiatric condition in which ordinary human sounds, such as breathing and eating, trigger impulsive aggression. In the current study, we investigated whether a dysfunction in the brain's early auditory processing system could be present in misophonia. We screened 20 patients with misophonia using the proposed diagnostic criteria, along with 14 matched healthy controls without misophonia, and investigated potential deficits in the auditory processing of misophonia patients using auditory event-related potentials (ERPs) during an oddball task. Subjects watched a neutral silent movie while being presented with a regular stream of beep sounds in which oddball tones of 250 and 4000 Hz were randomly embedded among repeated 1000 Hz standard tones. We examined the P1, N1, and P2 components locked to the onset of the tones. For misophonia patients, the N1 peak evoked by the oddball tones had a smaller mean peak amplitude than for the control group. However, no significant differences were found in the P1 and P2 components evoked by the oddball tones. There were no significant differences between the misophonia patients and their controls in any of the ERP components to the standard tones. The diminished N1 component to oddball tones in misophonia patients suggests an underlying neurobiological deficit. This reduction might reflect a basic impairment in auditory processing in misophonia patients.
Effects of Sound Frequency on Audiovisual Integration: An Event-Related Potential Study.
Yang, Weiping; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Ren, Yanna; Takahashi, Satoshi; Wu, Jinglong
2015-01-01
A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the task of the participant was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190-210 ms, for 1 kHz stimuli from 170-200 ms, for 2.5 kHz stimuli from 140-200 ms, and for 5 kHz stimuli from 100-200 ms. These findings suggest that a higher frequency sound signal paired with visual stimuli might be processed or integrated earlier, despite the auditory stimuli being task-irrelevant information. Furthermore, audiovisual integration in late latency (300-340 ms) ERPs with fronto-central topography was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirmed that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a multisensory visual signal and auditory stimuli of different frequencies.
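A standard way to quantify audiovisual integration in ERP data (one common convention, not necessarily the exact analysis used in this study) is the additive-model difference wave AV - (A + V): any systematic deviation from zero marks a non-linear multisensory interaction. A toy sketch with synthetic Gaussian components:

```python
import numpy as np

def av_interaction(erp_av, erp_a, erp_v):
    """Additive-model contrast AV - (A + V); nonzero values indicate a
    non-linear audiovisual interaction at those latencies."""
    return erp_av - (erp_a + erp_v)

t_ms = np.arange(0, 400)                        # 0-400 ms, 1 kHz sampling
erp_a = np.exp(-((t_ms - 100) / 30.0) ** 2)     # toy auditory component
erp_v = np.exp(-((t_ms - 150) / 40.0) ** 2)     # toy visual component
# toy AV response: the sum plus an extra integration effect near 200 ms
erp_av = erp_a + erp_v + 0.5 * np.exp(-((t_ms - 200) / 25.0) ** 2)
diff = av_interaction(erp_av, erp_a, erp_v)
print(int(t_ms[np.argmax(diff)]))  # latency of the strongest interaction: 200
```

In real data the same contrast is computed per electrode and latency window, which is how frequency-specific integration windows like those above are identified.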
PROCRU: A model for analyzing crew procedures in approach to landing
NASA Technical Reports Server (NTRS)
Baron, S.; Muralidharan, R.; Lancraft, R.; Zacharias, G.
1980-01-01
A model for analyzing crew procedures in approach to landing is developed. The model employs the information processing structure used in the optimal control model and in recent models for monitoring and failure detection. Mechanisms are added to this basic structure to model crew decision making in this multitask environment. Decisions are based on probability assessments and potential mission impact (or gain). Submodels for procedural activities are included. The model distinguishes among external visual, instrument visual, and auditory sources of information. The external visual scene perception models incorporate limitations in obtaining information. The auditory information channel contains a buffer that allows information to be stored in memory until it can be processed.
Gavrilescu, M; Rossell, S; Stuart, G W; Shea, T L; Innes-Brown, H; Henshall, K; McKay, C; Sergejew, A A; Copolov, D; Egan, G F
2010-07-01
Previous research has reported auditory processing deficits that are specific to schizophrenia patients with a history of auditory hallucinations (AH). One explanation for these findings is that there are abnormalities in the interhemispheric connectivity of auditory cortex pathways in AH patients; as yet this explanation has not been experimentally investigated. We assessed the interhemispheric connectivity of both primary (A1) and secondary (A2) auditory cortices in n=13 AH patients, n=13 schizophrenia patients without auditory hallucinations (non-AH) and n=16 healthy controls using functional connectivity measures from functional magnetic resonance imaging (fMRI) data. Functional connectivity was estimated from resting state fMRI data using regions of interest defined for each participant based on functional activation maps in response to passive listening to words. Additionally, stimulus-induced responses were regressed out of the stimulus data and the functional connectivity was estimated for the same regions to investigate the reliability of the estimates. AH patients had significantly reduced interhemispheric connectivity in both A1 and A2 when compared with non-AH patients and healthy controls. The latter two groups did not show any differences in functional connectivity. Further, this pattern of findings was similar across the two datasets, indicating the reliability of our estimates. These data have identified a trait deficit specific to AH patients. Since this deficit was characterized within both A1 and A2 it is expected to result in the disruption of multiple auditory functions, for example, the integration of basic auditory information between hemispheres (via A1) and higher-order language processing abilities (via A2).
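Resting-state functional connectivity of the kind estimated here typically reduces to a Pearson correlation between ROI-averaged time series, optionally after regressing out stimulus-induced responses, as in the study's reliability check. A schematic sketch with synthetic data (the regressor shape and noise levels are assumptions):

```python
import numpy as np

def regress_out(ts, regressor):
    """Remove a stimulus-induced component from a time series by least squares."""
    X = np.column_stack([regressor, np.ones_like(regressor)])
    beta, *_ = np.linalg.lstsq(X, ts, rcond=None)
    return ts - X @ beta

def interhemispheric_fc(ts_left, ts_right):
    """Functional connectivity as the Pearson correlation of two ROI series."""
    return np.corrcoef(ts_left, ts_right)[0, 1]

rng = np.random.default_rng(2)
shared = rng.standard_normal(200)                   # common resting fluctuation
stim = np.sin(np.linspace(0, 8 * np.pi, 200))       # toy stimulus response
left = shared + stim + 0.5 * rng.standard_normal(200)
right = shared + stim + 0.5 * rng.standard_normal(200)
# estimate connectivity on residuals, after removing the stimulus response
fc = interhemispheric_fc(regress_out(left, stim), regress_out(right, stim))
print(round(fc, 2))
```

Regressing out the stimulus prevents a shared evoked response from inflating the connectivity estimate, which is the point of the study's second analysis.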
Delayed Detection of Tonal Targets in Background Noise in Dyslexia
ERIC Educational Resources Information Center
Chait, Maria; Eden, Guinevere; Poeppel, David; Simon, Jonathan Z.; Hill, Deborah F.; Flowers, D. Lynn
2007-01-01
Individuals with developmental dyslexia are often impaired in their ability to process certain linguistic and even basic non-linguistic auditory signals. Recent investigations report conflicting findings regarding impaired low-level binaural detection mechanisms associated with dyslexia. Binaural impairment has been hypothesized to stem from a…
Assessment of cortical auditory evoked potentials in children with specific language impairment.
Włodarczyk, Elżbieta; Szkiełkowska, Agata; Pilka, Adam; Skarżyński, Henryk
2018-02-28
The proper course of speech development heavily influences children's cognitive and personal development. It is a condition for achieving preschool and school success, since it facilitates socializing and expressing feelings and needs. Impairment of language and its development in children represents a major diagnostic and therapeutic challenge for physicians and therapists. Early diagnosis of coexisting deficits and early initiation of therapy influence therapeutic success. One of the basic diagnostic tests for children suffering from specific language impairment (SLI) is audiometry, commonly referred to as a hearing test. However, auditory processing is just as important as a normal hearing threshold. Therefore, diagnosis of central auditory disorders may be a valuable supplement to the diagnosis of language impairment. Early diagnosis and implementation of appropriate treatment may contribute to effective language therapy.
Garcia-Pino, Elisabet; Gessele, Nikodemus; Koch, Ursula
2017-08-02
Hypersensitivity to sounds is one of the prevalent symptoms in individuals with Fragile X syndrome (FXS). It manifests behaviorally early during development and is often used as a landmark for treatment efficacy. However, the physiological mechanisms and circuit-level alterations underlying this aberrant behavior remain poorly understood. Using the mouse model of FXS (Fmr1 KO), we demonstrate that functional maturation of auditory brainstem synapses is impaired in FXS. Fmr1 KO mice showed a greatly enhanced excitatory synaptic input strength in neurons of the lateral superior olive (LSO), a prominent auditory brainstem nucleus, which integrates ipsilateral excitation and contralateral inhibition to compute interaural level differences. Conversely, the glycinergic, inhibitory input properties remained unaffected. The enhanced excitation was the result of an increased number of cochlear nucleus fibers converging onto one LSO neuron, without changing individual synapse properties. Concomitantly, immunolabeling of excitatory ending markers revealed an increase in the immunolabeled area, supporting abnormally elevated excitatory input numbers. Intrinsic firing properties were only slightly enhanced. In line with the disturbed development of LSO circuitry, auditory processing was also affected in adult Fmr1 KO mice as shown with single-unit recordings of LSO neurons. These processing deficits manifested as an increase in firing rate, a broadening of the frequency response area, and a shift in the interaural level difference function of LSO neurons. Our results suggest that this aberrant synaptic development of auditory brainstem circuits might be a major underlying cause of the auditory processing deficits in FXS. SIGNIFICANCE STATEMENT Fragile X Syndrome (FXS) is the most common inheritable form of intellectual impairment, including autism. A core symptom of FXS is extreme sensitivity to loud sounds.
This is one reason why individuals with FXS tend to avoid social interactions, contributing to their isolation. Here, a mouse model of FXS was used to investigate the auditory brainstem where basic sound information is first processed. Loss of the Fragile X mental retardation protein leads to excessive excitatory compared with inhibitory inputs in neurons extracting information about sound levels. Functionally, this elevated excitation results in increased firing rates, and abnormal coding of frequency and binaural sound localization cues. Imbalanced early-stage sound level processing could partially explain the auditory processing deficits in FXS. Copyright © 2017 the authors 0270-6474/17/377403-17$15.00/0.
Amblyaudia: Review of Pathophysiology, Clinical Presentation, and Treatment of a New Diagnosis.
Kaplan, Alyson B; Kozin, Elliott D; Remenschneider, Aaron; Eftekhari, Kian; Jung, David H; Polley, Daniel B; Lee, Daniel J
2016-02-01
Similar to amblyopia in the visual system, "amblyaudia" is a term used to describe persistent hearing difficulty experienced by individuals with a history of asymmetric hearing loss (AHL) during a critical window of brain development. Few clinical reports have described this phenomenon and its consequent effects on central auditory processing. We aim to (1) define the concept of amblyaudia and (2) review contemporary research on its pathophysiology and emerging clinical relevance. PubMed, Embase, and Cochrane databases. A systematic literature search was performed with combinations of search terms: "amblyaudia," "conductive hearing loss," "sensorineural hearing loss," "asymmetric," "pediatric," "auditory deprivation," and "auditory development." Relevant articles were considered for inclusion, including basic and clinical studies, case series, and major reviews. During critical periods of infant brain development, imbalanced auditory input associated with AHL may lead to abnormalities in binaural processing. Patients with amblyaudia can demonstrate long-term deficits in auditory perception even with correction or resolution of AHL. The greatest impact is in sound localization and hearing in noisy environments, both of which rely on bilateral auditory cues. Diagnosis and quantification of amblyaudia remain controversial and poorly defined. Prevention of amblyaudia may be possible through early identification and timely management of reversible causes of AHL. Otolaryngologists, audiologists, and pediatricians should be aware of emerging data supporting amblyaudia as a diagnostic entity and be cognizant of the potential for lasting consequences of AHL. Prevention of long-term auditory deficits may be possible through rapid identification and correction. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2015.
Artificial organs: recent progress in artificial hearing and vision.
Ifukube, Tohru
2009-01-01
Artificial sensory organs are a prosthetic means of sending visual or auditory information to the brain by electrical stimulation of the optic or auditory nerves to assist visually impaired or hearing-impaired people. However, clinical application of artificial sensory organs, except for cochlear implants, is still a trial-and-error process. This is because how and where the information transmitted to the brain is processed is still unknown, and also because changes in brain function (plasticity) remain unknown, even though brain plasticity plays an important role in meaningful interpretation of new sensory stimuli. This article discusses some basic unresolved issues and potential solutions in the development of artificial sensory organs such as cochlear implants, brainstem implants, artificial vision, and artificial retinas.
Jerjos, Michael; Hohman, Baily; Lauterbur, M. Elise; Kistler, Logan
2017-01-01
Abstract Several taxonomically distinct mammalian groups—certain microbats and cetaceans (e.g., dolphins)—share both morphological adaptations related to echolocation behavior and strong signatures of convergent evolution at the amino acid level across seven genes related to auditory processing. Aye-ayes (Daubentonia madagascariensis) are nocturnal lemurs with a specialized auditory processing system. Aye-ayes tap rapidly along the surfaces of trees, listening to reverberations to identify the mines of wood-boring insect larvae; this behavior has been hypothesized to functionally mimic echolocation. Here we investigated whether there are signals of convergence in auditory processing genes between aye-ayes and known mammalian echolocators. We developed a computational pipeline (Basic Exon Assembly Tool) that produces consensus sequences for regions of interest from shotgun genomic sequencing data for nonmodel organisms without requiring de novo genome assembly. We reconstructed complete coding region sequences for the seven convergent echolocating bat–dolphin genes for aye-ayes and another lemur. We compared sequences from these two lemurs in a phylogenetic framework with those of bat and dolphin echolocators and appropriate nonecholocating outgroups. Our analysis reaffirms the existence of amino acid convergence at these loci among echolocating bats and dolphins; some methods also detected signals of convergence between echolocating bats and both mice and elephants. However, we observed no significant signal of amino acid convergence between aye-ayes and echolocating bats and dolphins, suggesting that aye-aye tap-foraging auditory adaptations represent distinct evolutionary innovations. These results are also consistent with a developing consensus that convergent behavioral ecology does not reliably predict convergent molecular evolution. PMID:28810710
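Consensus calling of the kind performed by the authors' Basic Exon Assembly Tool can be illustrated by a per-position majority vote over aligned reads. The gap character, depth threshold, and tie handling below are simplifying assumptions, not the published pipeline.

```python
from collections import Counter

def consensus(aligned_reads, min_depth=2):
    """Per-position majority vote across aligned reads ('-' marks no coverage).
    Positions covered by fewer than min_depth reads are reported as 'N'."""
    length = max(len(r) for r in aligned_reads)
    out = []
    for i in range(length):
        bases = [r[i] for r in aligned_reads if i < len(r) and r[i] != '-']
        if len(bases) < min_depth:
            out.append('N')              # insufficient coverage
        else:
            out.append(Counter(bases).most_common(1)[0][0])
    return ''.join(out)

# toy shotgun reads already aligned to a common coordinate system
reads = [
    "ACGT-CGA",
    "ACGTACGA",
    "ACTTACG-",
]
print(consensus(reads))  # -> ACGTACGA
```

A real pipeline would additionally weight votes by base quality and handle indels, but the majority-vote core is the same idea: recover a coding-region sequence without de novo genome assembly.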
Diminished N1 Auditory Evoked Potentials to Oddball Stimuli in Misophonia Patients
Schröder, Arjan; van Diepen, Rosanne; Mazaheri, Ali; Petropoulos-Petalas, Diamantis; Soto de Amesti, Vicente; Vulink, Nienke; Denys, Damiaan
2014-01-01
Misophonia (hatred of sound) is a newly defined psychiatric condition in which ordinary human sounds, such as breathing and eating, trigger impulsive aggression. In the current study, we investigated whether a dysfunction in the brain's early auditory processing system could be present in misophonia. We screened 20 patients meeting the diagnostic criteria for misophonia and 14 matched healthy controls, and looked for deficits in the patients' auditory processing using auditory event-related potentials (ERPs) during an oddball task. Subjects watched a neutral silent movie while being presented with a regular stream of beep sounds in which oddball tones of 250 and 4000 Hz were randomly embedded among repeated 1000 Hz standard tones. We examined the P1, N1, and P2 components time-locked to tone onset. For misophonia patients, the N1 peak evoked by the oddball tones had a smaller mean amplitude than in the control group. However, no significant differences were found in the P1 and P2 components evoked by the oddball tones, and there were no significant group differences in any ERP component to the standard tones. The diminished N1 component to oddball tones suggests an underlying neurobiological deficit in misophonia patients, which might reflect a basic impairment in auditory processing. PMID:24782731
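The oddball design described above is easy to picture as code. The sketch below builds such a tone sequence; the tone duration, oddball probability, and sampling rate are illustrative assumptions, not values from the study:

```python
import math
import random

def make_oddball_sequence(n_tones=400, standard_hz=1000,
                          oddball_hz=(250, 4000), p_oddball=0.1, seed=7):
    """Build a list of tone frequencies: mostly 1000 Hz standards,
    with rare 250/4000 Hz oddballs embedded at random positions."""
    rng = random.Random(seed)
    return [rng.choice(oddball_hz) if rng.random() < p_oddball else standard_hz
            for _ in range(n_tones)]

def tone_samples(freq_hz, dur_s=0.05, fs=44100):
    """Synthesize one pure-tone beep as a list of float samples."""
    return [math.sin(2 * math.pi * freq_hz * n / fs)
            for n in range(int(dur_s * fs))]
```

ERP analysis then averages the EEG epochs time-locked to each tone onset, separately for standards and oddballs.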
Nolden, Sophie; Bermudez, Patrick; Alunni-Menichini, Kristelle; Lefebvre, Christine; Grimault, Stephan; Jolicoeur, Pierre
2013-11-01
We examined the electrophysiological correlates of retention in auditory short-term memory (ASTM) for sequences of one, two, or three tones differing in timbre but having the same pitch. We focused on event-related potentials (ERPs) during the retention interval and revealed a sustained fronto-central ERP component (most likely a sustained anterior negativity; SAN) that became more negative as memory load increased. Our results are consistent with recent ERP studies on the retention of pitch and suggest that the SAN reflects brain activity mediating the low-level retention of basic acoustic features in ASTM. The present work shows that the retention of timbre shares common features with the retention of pitch, hence supporting the notion that the retention of basic sensory features is an active process that recruits modality-specific brain areas. © 2013 Elsevier Ltd. All rights reserved.
Auditory interfaces: The human perceiver
NASA Technical Reports Server (NTRS)
Colburn, H. Steven
1991-01-01
This paper presents a brief introduction to the basic auditory abilities of the human perceiver, with particular attention to issues that may be important for the design of auditory interfaces. The importance of appropriate auditory inputs to observers with normal hearing is probably related to the role of hearing as an omnidirectional early-warning system and to its role as the primary vehicle for communication of strong personal feelings.
Information-Theoretic Properties of Auditory Sequences Dynamically Influence Expectation and Memory
ERIC Educational Resources Information Center
Agres, Kat; Abdallah, Samer; Pearce, Marcus
2018-01-01
A basic function of cognition is to detect regularities in sensory input to facilitate the prediction and recognition of future events. It has been proposed that these implicit expectations arise from an internal predictive coding model, based on knowledge acquired through processes such as statistical learning, but it is unclear how different…
ERIC Educational Resources Information Center
Thomson, Jennifer M.; Goswami, Usha
2010-01-01
Across languages, children with developmental dyslexia are known to have impaired lexical phonological representations. Here, we explore associations between learning new phonological representations, phonological awareness, and sensitivity to amplitude envelope onsets (rise time). We show that individual differences in learning novel phonological…
Auditory processing deficits in bipolar disorder with and without a history of psychotic features.
Zenisek, RyAnna; Thaler, Nicholas S; Sutton, Griffin P; Ringdahl, Erik N; Snyder, Joel S; Allen, Daniel N
2015-11-01
Auditory perception deficits have been identified in schizophrenia (SZ) and linked to dysfunction in the auditory cortex. Given that psychotic symptoms, including auditory hallucinations, are also seen in bipolar disorder (BD), it may be that individuals with BD who also exhibit psychotic symptoms demonstrate a similar impairment in auditory perception. Fifty individuals with SZ, 30 individuals with bipolar I disorder with a history of psychosis (BD+), 28 individuals with bipolar I disorder with no history of psychotic features (BD-), and 29 normal controls (NC) were administered a tone discrimination task and an emotion recognition task. Mixed-model analyses of covariance with planned comparisons indicated that individuals with BD+ performed at a level that was intermediate between those with BD- and those with SZ on the more difficult condition of the tone discrimination task and on the auditory condition of the emotion recognition task. There were no differences between the BD+ and BD- groups on the visual or auditory-visual affect recognition conditions. Regression analyses indicated that performance on the tone discrimination task predicted performance on all conditions of the emotion recognition task. Auditory hallucinations in BD+ were not related to performance on either task. Our findings suggested that, although deficits in frequency discrimination and emotion recognition are more severe in SZ, these impairments extend to BD+. Although our results did not support the idea that auditory hallucinations may be related to these deficits, they indicated that basic auditory deficits may be a marker for psychosis, regardless of SZ or BD diagnosis. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Rota-Donahue, Christine; Schwartz, Richard G.; Shafer, Valerie; Sussman, Elyse S.
2016-01-01
Background Frequency discrimination is often impaired in children developing language atypically. However, findings on the detection of small frequency changes in these children are conflicting. Previous studies of children's auditory perceptual abilities usually involved establishing differential sensitivity thresholds in sample populations who were not tested for auditory deficits. To date, there are no data comparing suprathreshold frequency discrimination ability in children tested for both auditory processing and language skills. Purpose This study examined the perception of small frequency differences (Δf) in children with auditory processing disorder (APD) and/or specific language impairment (SLI). The aim was to determine whether children with APD and children with SLI showed differences in their behavioral responses to frequency changes. Results were expected to identify different degrees of impairment and shed some light on the auditory perceptual overlap between pediatric APD and SLI. Research Design An experimental group design using a two-alternative forced-choice procedure was used to determine frequency discrimination ability for three magnitudes of Δf from the 1000-Hz base frequency. Study Sample Thirty children between 10 years, 0 months and 12 years, 11 months of age participated: 17 children with APD and/or SLI, and 13 typically developing (TD) peers. The clinical groups included four children with APD only, four children with SLI only, and nine children with both APD and SLI. Data Collection and Analysis Behavioral data, collected over headphones, were analyzed using the sensitivity index d′, calculated for three Δf magnitudes: 2%, 5%, and 15% of the base frequency (20, 50, and 150 Hz). Correlations between the dependent variable d′ and the independent variables measuring auditory processing and language skills were also obtained. A stepwise regression analysis was then performed.
Results TD children and children with APD and/or SLI differed in the detection of small tone frequency differences (Δf). In addition, APD and SLI status affected behavioral results differently. Comparisons between auditory processing test scores or language test scores and the sensitivity index d′ showed different strengths of correlation depending on the magnitude of the Δf: auditory processing scores correlated more strongly with d′ for the small Δf, while language scores correlated more strongly with d′ for the large Δf. Conclusion Although children with APD and/or SLI have difficulty with behavioral frequency discrimination, this difficulty may stem from two different levels: a basic auditory level for children with APD and a higher language processing level for children with SLI. Frequency discrimination performance also seemed to be affected by the labeling demands of the same-versus-different frequency discrimination task for the children with SLI. PMID:27310407
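The sensitivity index d′ used in this study is standard signal-detection arithmetic: the z-transformed hit rate minus the z-transformed false-alarm rate. A minimal sketch follows; the log-linear correction for extreme rates is a common choice, not necessarily the one the authors used:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a log-linear
    correction that keeps both rates strictly between 0 and 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)
```

Chance performance yields d′ near 0, and d′ grows as hits rise and false alarms fall, which is why it is a convenient single index to correlate with auditory processing and language scores.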
de Hoz, Livia; Gierej, Dorota; Lioudyno, Victoria; Jaworski, Jacek; Blazejczyk, Magda; Cruces-Solís, Hugo; Beroun, Anna; Lebitko, Tomasz; Nikolaev, Tomasz; Knapska, Ewelina; Nelken, Israel; Kaczmarek, Leszek
2018-05-01
The behavioral changes that comprise operant learning are associated with plasticity in early sensory cortices as well as with modulation of gene expression, but the connection between the behavioral, electrophysiological, and molecular changes is only partially understood. We specifically manipulated c-Fos expression, a hallmark of learning-induced synaptic plasticity, in auditory cortex of adult mice using a novel approach based on RNA interference. Locally blocking c-Fos expression caused a specific behavioral deficit in a sound discrimination task, in parallel with decreased cortical experience-dependent plasticity, without affecting baseline excitability or basic auditory processing. Thus, c-Fos-dependent experience-dependent cortical plasticity is necessary for frequency discrimination in an operant behavioral task. Our results connect behavioral, molecular and physiological changes and demonstrate a role of c-Fos in experience-dependent plasticity and learning.
Lelo-de-Larrea-Mancera, E Sebastian; Rodríguez-Agudelo, Yaneth; Solís-Vivanco, Rodolfo
2017-06-01
Music represents a complex form of human cognition, and the extent to which our auditory system is attuned to music is not yet clearly understood. Our principal aim was to determine whether the neurophysiological operations underlying pre-attentive auditory change detection (N1 enhancement (N1e)/mismatch negativity (MMN)) and the subsequent involuntary attentional reallocation (P3a) towards infrequent sound omissions are influenced by differences in musical content. Specifically, we intended to explore any interaction effects that the rhythmic and pitch dimensions of musical organization may have on these processes. Results showed that both N1e and MMN amplitudes were differentially influenced by the rhythm and pitch dimensions, and MMN latencies were shorter for musical structures containing both features. This suggests some neurocognitive independence between the pitch and rhythm domains, but also calls for further investigation of possible interactions between them at the level of early, automatic auditory change detection. Furthermore, the results demonstrate that the N1e reflects basic sensory memory processes. Lastly, we show that the involuntary switch of attention associated with the P3a reflects a general-purpose mechanism not modulated by musical features. Altogether, the N1e/MMN/P3a complex elicited by infrequent sound omissions revealed evidence of musical influence over early stages of auditory perception. Copyright © 2017 Elsevier Ltd. All rights reserved.
Hausfeld, Lars; Riecke, Lars; Formisano, Elia
2018-06-01
Often, in everyday life, we encounter auditory scenes comprising multiple simultaneous sounds and succeed in selectively attending to only one sound, typically the one most relevant for ongoing behavior. Studies using basic sounds and two-talker stimuli have shown that auditory selective attention aids this by enhancing the neural representations of the attended sound in auditory cortex. It remains unknown, however, whether and how this selective attention mechanism operates on representations of auditory scenes containing natural sounds of different categories. In this high-field fMRI study, we presented participants with simultaneous voices and musical instruments while manipulating their focus of attention. We found an attentional enhancement of neural sound representations in temporal cortex, as defined by spatial activation patterns, at locations that depended on the attended category (i.e., voices or instruments). In contrast, we found that in frontal cortex the site of enhancement was independent of the attended category, and the same regions could flexibly represent any attended sound regardless of its category. These results help elucidate the interacting mechanisms of bottom-up and top-down processing when listening to real-life scenes composed of multiple sound categories. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Goswami, Usha; Fosker, Tim; Huss, Martina; Mead, Natasha; Szucs, Denes
2011-01-01
Across languages, children with developmental dyslexia have a specific difficulty with the neural representation of the sound structure (phonological structure) of speech. One likely cause of their difficulties with phonology is a perceptual difficulty in auditory temporal processing (Tallal, 1980). Tallal (1980) proposed that basic auditory…
Gurnsey, Kate; Salisbury, Dean; Sweet, Robert A.
2016-01-01
Auditory refractoriness refers to the finding of smaller electroencephalographic (EEG) responses to tones preceded by shorter periods of silence. To date, its physiological mechanisms remain unclear, limiting the insights gained from findings of abnormal refractoriness in individuals with schizophrenia. To resolve this roadblock, we studied auditory refractoriness in the rhesus, one of the most important animal models of auditory function, using grids of up to 32 chronically implanted cranial EEG electrodes. Four macaques passively listened to sounds whose identity and timing were random, thus preventing the animals from forming valid predictions about upcoming sounds. Stimulus onset asynchrony ranged between 0.2 and 12.8 s, thus encompassing the clinically relevant timescale of refractoriness. Our results show refractoriness in all 8 previously identified middle- and long-latency components that peaked between 14 and 170 ms after tone onset. Refractoriness may reflect the formation and gradual decay of a basic sensory memory trace that may be mirrored by the expenditure and gradual recovery of a limited physiological resource that determines generator excitability. For all 8 components, results were consistent with the assumption that processing of each tone expends ∼65% of the available resource. Differences between components are caused by how quickly the resource recovers; recovery time constants of different components ranged between 0.5 and 2 s. This work provides a solid conceptual, methodological, and computational foundation to dissect the physiological mechanisms of auditory refractoriness in the rhesus. Such knowledge may, in turn, help develop novel pharmacological, mechanism-targeted interventions. PMID:27512021
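The resource account described above can be made concrete with a toy simulation: each tone's response is proportional to the currently available resource, the tone then spends a fixed fraction of it (∼65% in the abstract), and the resource recovers exponentially toward its ceiling with time constant tau. The implementation details below are a sketch under those assumptions, not the authors' model code:

```python
import math

def simulate_refractoriness(onset_times_s, tau_s=1.0, fraction_spent=0.65):
    """Return a response amplitude for each tone onset, modeled as the
    available fraction of a limited resource that each tone depletes
    and that recovers exponentially between tones."""
    resource = 1.0
    responses = []
    previous_t = None
    for t in onset_times_s:
        if previous_t is not None:
            dt = t - previous_t
            # exponential recovery toward the ceiling of 1.0
            resource = 1.0 - (1.0 - resource) * math.exp(-dt / tau_s)
        responses.append(resource)       # response tracks available resource
        resource *= (1.0 - fraction_spent)
        previous_t = t
    return responses
```

With a 0.2-s stimulus onset asynchrony the second response is strongly reduced, while at 12.8 s the resource has essentially fully recovered, mirroring the 0.2-12.8 s range used in the study.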
Sacks, Stephanie; Fisher, Melissa; Garrett, Coleman; Alexander, Phillip; Holland, Christine; Rose, Demian; Hooker, Christine; Vinogradov, Sophia
2013-01-01
Social cognitive deficits are an important treatment target in schizophrenia, but it is unclear to what degree they require specialized interventions and which specific components of behavioral interventions are effective. In this pilot study, we explored the effects of a novel computerized neuroplasticity-based auditory training delivered in conjunction with computerized social cognition training (SCT) in patients with schizophrenia. Nineteen clinically stable schizophrenia subjects performed 50 hours of computerized exercises that place implicit, increasing demands on auditory perception, plus 12 hours of computerized training in emotion identification, social perception, and theory of mind tasks. All subjects were assessed with MATRICS-recommended measures of neurocognition and social cognition, plus a measure of self-referential source memory before and after the computerized training. Subjects showed significant improvements on multiple measures of neurocognition. Additionally, subjects showed significant gains on measures of social cognition, including the MSCEIT Perceiving Emotions, MSCEIT Managing Emotions, and self-referential source memory, plus a significant decrease in positive symptoms. Computerized training of auditory processing/verbal learning in schizophrenia results in significant basic neurocognitive gains. Further, addition of computerized social cognition training results in significant gains in several social cognitive outcome measures. Computerized cognitive training that directly targets social cognitive processes can drive improvements in these crucial functions.
Auditory-motor Mapping for Pitch Control in Singers and Nonsingers
Jones, Jeffery A.; Keough, Dwayne
2009-01-01
Little is known about the basic processes underlying the behavior of singing. This experiment was designed to examine differences in the representation of the mapping between fundamental frequency (F0) feedback and the vocal production system in singers and nonsingers. Auditory feedback regarding F0 was shifted down in frequency while participants sang the consonant-vowel /ta/. During the initial frequency-altered trials, singers compensated to a lesser degree than nonsingers, but this difference was reduced with continued exposure to frequency-altered feedback. After brief exposure to frequency-altered auditory feedback, both singers and nonsingers suddenly heard their F0 unaltered. When participants received this unaltered feedback, only singers' F0 values were found to be significantly higher than their F0 values produced during baseline and control trials. These aftereffects in singers were replicated when participants sang a different note than the note they produced while hearing altered feedback. Together, these results suggest that singers rely more on internal models than nonsingers to regulate vocal productions, rather than on real-time auditory feedback. PMID:18592224
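Pitch shifts in altered-feedback experiments like this are conventionally measured in cents (hundredths of a semitone), a logarithmic measure of the ratio between the produced F0 and a reference F0. The helper below reflects that field convention; the abstract itself does not state the units used:

```python
import math

def cents(f0_hz, ref_hz):
    """Interval in cents between a produced F0 and a reference F0.
    Positive values mean the produced pitch is above the reference."""
    return 1200.0 * math.log2(f0_hz / ref_hz)
```

An octave is 1200 cents, so `cents(220.0, 110.0)` is 1200, and a produced F0 slightly above a 110 Hz baseline (say, 112 Hz) comes out at roughly +31 cents.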
Basic Auditory Processing Skills and Specific Language Impairment: A New Look at an Old Hypothesis
ERIC Educational Resources Information Center
Corriveau, Kathleen; Pasquini, Elizabeth; Goswami, Usha
2007-01-01
Purpose: To explore the sensitivity of children with specific language impairment (SLI) to amplitude-modulated and durational cues that are important for perceiving suprasegmental speech rhythm and stress patterns. Method: Sixty-three children between 7 and 11 years of age were tested, 21 of whom had a diagnosis of SLI, 21 of whom were matched for…
Tong, Xiuhong; Tong, Xiuli; King Yiu, Fung
Increasing evidence suggests that children with developmental dyslexia exhibit a deficit not only at the segmental level of phonological processing but also, by extension, at the suprasegmental level. However, it remains unclear whether such a suprasegmental phonological processing deficit is due to a difficulty in processing acoustic cues of speech rhythm, such as rise time and intensity. This study set out to investigate to what extent suprasegmental phonological processing (i.e., Cantonese lexical tone perception) and rise time sensitivity could distinguish Chinese children with dyslexia from typically developing children. Sixteen children with dyslexia and 44 age-matched controls were administered a Cantonese lexical tone perception task, psychoacoustic tasks, a nonverbal reasoning ability task, and word reading and dictation tasks. Children with dyslexia performed worse than controls on Cantonese lexical tone perception, rise time, and intensity. Furthermore, Cantonese lexical tone perception appeared to be a stable indicator that distinguishes children with dyslexia from controls, even after controlling for basic auditory processing skills. These findings suggest that suprasegmental phonological processing (i.e., lexical tone perception) is a potential factor that accounts for reading difficulty in Chinese.
From Acoustic Segmentation to Language Processing: Evidence from Optical Imaging
Obrig, Hellmuth; Rossi, Sonja; Telkemeyer, Silke; Wartenburger, Isabell
2010-01-01
During language acquisition in infancy and when learning a foreign language, the segmentation of the auditory stream into words and phrases is a complex process. Intuitively, learners use “anchors” to segment the acoustic speech stream into meaningful units like words and phrases. Regularities on a segmental (e.g., phonological) or suprasegmental (e.g., prosodic) level can provide such anchors. Regarding the neuronal processing of these two kinds of linguistic cues a left-hemispheric dominance for segmental and a right-hemispheric bias for suprasegmental information has been reported in adults. Though lateralization is common in a number of higher cognitive functions, its prominence in language may also be a key to understanding the rapid emergence of the language network in infants and the ease at which we master our language in adulthood. One question here is whether the hemispheric lateralization is driven by linguistic input per se or whether non-linguistic, especially acoustic factors, “guide” the lateralization process. Methodologically, functional magnetic resonance imaging provides unsurpassed anatomical detail for such an enquiry. However, instrumental noise, experimental constraints and interference with EEG assessment limit its applicability, pointedly in infants and also when investigating the link between auditory and linguistic processing. Optical methods have the potential to fill this gap. Here we review a number of recent studies using optical imaging to investigate hemispheric differences during segmentation and basic auditory feature analysis in language development. PMID:20725516
A basic study on universal design of auditory signals in automobiles.
Yamauchi, Katsuya; Choi, Jong-dae; Maiguma, Ryo; Takada, Masayuki; Iwamiya, Shin-ichiro
2004-11-01
In this paper, the impressions made by various kinds of auditory signals currently used in automobiles were measured, together with a comprehensive evaluation, using the semantic differential method. The desirable acoustic characteristics were examined for each type of auditory signal. Sharp sounds with dominant high-frequency components were not suitable for auditory signals in automobiles; this trend is expedient for older people, whose auditory sensitivity in the high-frequency region is lower. When intermittent sounds were used, a longer OFF time was suitable. Generally, "dull (not sharp)" and "calm" sounds were appropriate for auditory signals. Furthermore, a comparison between the frequency spectrum of interior noise in automobiles and that of sounds suitable for various auditory signals indicates that the suitable sounds are not easily masked. Providing suitable auditory signals for various purposes is a good solution from the viewpoint of universal design.
Developing Brain Vital Signs: Initial Framework for Monitoring Brain Function Changes Over Time
Ghosh Hajra, Sujoy; Liu, Careesa C.; Song, Xiaowei; Fickling, Shaun; Liu, Luke E.; Pawlowski, Gabriela; Jorgensen, Janelle K.; Smith, Aynsley M.; Schnaider-Beeri, Michal; Van Den Broek, Rudi; Rizzotti, Rowena; Fisher, Kirk; D'Arcy, Ryan C. N.
2016-01-01
Clinical assessment of brain function relies heavily on indirect behavior-based tests. Unfortunately, behavior-based assessments are subjective and therefore susceptible to several confounding factors. Event-related brain potentials (ERPs), derived from electroencephalography (EEG), are often used to provide objective, physiological measures of brain function. Historically, ERPs have been characterized extensively within research settings, with limited but growing clinical applications. Over the past 20 years, we have developed clinical ERP applications for the evaluation of functional status following serious injury and/or disease. This work has identified an important gap: the need for a clinically accessible framework to evaluate ERP measures. Crucially, this enables baseline measures before brain dysfunction occurs, and might enable the routine collection of brain function metrics in the future much like blood pressure measures today. Here, we propose such a framework for extracting specific ERPs as potential “brain vital signs.” This framework enabled the translation/transformation of complex ERP data into accessible metrics of brain function for wider clinical utilization. To formalize the framework, three essential ERPs were selected as initial indicators: (1) the auditory N100 (Auditory sensation); (2) the auditory oddball P300 (Basic attention); and (3) the auditory speech processing N400 (Cognitive processing). First step validation was conducted on healthy younger and older adults (age range: 22–82 years). Results confirmed specific ERPs at the individual level (86.81–98.96%), verified predictable age-related differences (P300 latency delays in older adults, p < 0.05), and demonstrated successful linear transformation into the proposed brain vital sign (BVS) framework (basic attention latency sub-component of BVS framework reflects delays in older adults, p < 0.05). 
The findings represent an initial critical step in developing, extracting, and characterizing ERPs as vital signs, critical for subsequent evaluation of dysfunction in conditions like concussion and/or dementia. PMID:27242415
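The abstract above reports a "successful linear transformation" of ERP measures into the brain vital sign framework but does not give the formula. The sketch below is a generic illustration of such a rescaling, mapping a raw measure (e.g., P300 latency in ms) onto a 0-100 score relative to an assumed normative range; all numbers are hypothetical:

```python
def to_vital_sign_score(value, norm_min, norm_max, invert=False):
    """Linearly rescale a raw ERP measure onto a 0-100 score,
    clamped to an assumed normative range [norm_min, norm_max]."""
    fraction = (value - norm_min) / (norm_max - norm_min)
    fraction = min(max(fraction, 0.0), 1.0)  # clamp out-of-range values
    if invert:                               # e.g., longer latency -> lower score
        fraction = 1.0 - fraction
    return 100.0 * fraction
```

Under a hypothetical normative P300 latency range of 250-450 ms, a 300 ms latency maps to a score of 75 with `invert=True`, and the age-related latency delays reported above would surface as lower basic-attention scores.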
Perceptual Bias and Loudness Change: An Investigation of Memory, Masking, and Psychophysiology
NASA Astrophysics Data System (ADS)
Olsen, Kirk N.
Loudness is a fundamental aspect of human auditory perception that is closely associated with a sound's physical acoustic intensity. The dynamic quality of intensity change is an inherent acoustic feature in real-world listening domains such as speech and music. However, perception of loudness change in response to continuous intensity increases (up-ramps) and decreases (down-ramps) has received relatively little empirical investigation. Overestimation of loudness change in response to up-ramps is said to be linked to an adaptive survival response associated with looming (or approaching) motion in the environment. The hypothesised 'perceptual bias' to looming auditory motion suggests why perceptual overestimation of up-ramps may occur; however, it does not offer a causal explanation. It is concluded that post-stimulus judgements of perceived loudness change are significantly affected by a cognitive recency response bias that, until now, has been an artefact of experimental procedure. Perceptual end-level differences caused by duration-specific sensory adaptation at peripheral and/or central stages of auditory processing may explain differences in post-stimulus judgements of loudness change. Experiments that investigate human responses to acoustic intensity dynamics, encompassing topics from basic auditory psychophysics (e.g., sensory adaptation) to cognitive-emotional appraisal of increasingly complex stimulus events such as music and auditory warnings, are proposed for future research.
Scheich, Henning; Brechmann, André; Brosch, Michael; Budinger, Eike; Ohl, Frank W; Selezneva, Elena; Stark, Holger; Tischmeyer, Wolfgang; Wetzel, Wolfram
2011-01-01
Two phenomena of auditory cortex activity have recently attracted attention, namely that the primary field can show different types of learning-related changes of sound representation and that, during learning, even this early auditory cortex is under strong multimodal influence. Based on neuronal recordings in animal auditory cortex during instrumental tasks, in this review we put forward the hypothesis that these two phenomena serve to derive the task-specific meaning of sounds by associative learning. To understand the implications of this tenet, it is helpful to realize how a behavioral meaning is usually derived for novel environmental sounds. For this purpose, associations with other sensory (e.g., visual) information are mandatory to develop a connection between a sound and its behaviorally relevant cause and/or the context of sound occurrence. This makes it plausible that in instrumental tasks various non-auditory sensory and procedural contingencies of sound generation become co-represented by neuronal firing in auditory cortex. Information related to reward or to avoidance of discomfort during task learning, which is essentially non-auditory, is also co-represented. The reinforcement influence points to the dopaminergic internal reward system, whose local role in memory consolidation in auditory cortex is well established. Thus, during a trial of task performance, the neuronal responses to the sounds are embedded in a sequence of representations of such non-auditory information. The embedded auditory responses show task-related modulations falling into types that correspond to three basic logical classifications that may be performed on a perceptual item, i.e., from simple detection to discrimination and categorization. This hierarchy of classifications determines the semantic "same-different" relationships among sounds.
Different cognitive classifications appear to be a consequence of learning task and lead to a recruitment of different excitatory and inhibitory mechanisms and to distinct spatiotemporal metrics of map activation to represent a sound. The described non-auditory firing and modulations of auditory responses suggest that auditory cortex, by collecting all necessary information, functions as a "semantic processor" deducing the task-specific meaning of sounds by learning. © 2010. Published by Elsevier B.V.
Enhanced perception of pitch changes in speech and music in early blind adults.
Arnaud, Laureline; Gracco, Vincent; Ménard, Lucie
2018-06-12
It is well known that congenitally blind adults have enhanced auditory processing for some tasks. For instance, they show supra-normal capacity to perceive accelerated speech. However, only a few studies have investigated basic auditory processing in this population. In this study, we investigated whether pitch-processing enhancement in the blind is a domain-general or domain-specific phenomenon, and whether pitch processing in the blind shows the same cross-domain associations as in the sighted. Fifteen congenitally blind adults and fifteen sighted adults participated in the study. We first created a set of personalized native and non-native vowel stimuli using an identification and rating task. Then, an adaptive discrimination paradigm was used to determine the frequency difference limen for pitch-direction identification of speech (native and non-native vowels) and non-speech stimuli (musical instruments and pure tones). The results show that the blind participants had better discrimination thresholds than controls for native vowels, music stimuli, and pure tones. Whereas within the blind group the discrimination thresholds were smaller for musical stimuli than for speech stimuli, replicating previous findings in sighted participants, we did not find this effect in the current control group. Further analyses indicate that older sighted participants show higher thresholds for instrument sounds than for speech sounds; this effect of age was not found in the blind group. Moreover, the scores across domains were not associated to the same extent in the blind as they were in the sighted. In conclusion, in addition to providing further evidence of compensatory auditory mechanisms in early blind individuals, our results point to differences in how auditory processing is modulated in this population. Copyright © 2018 Elsevier Ltd. All rights reserved.
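The adaptive discrimination paradigm described above can be illustrated with a standard transformed staircase for estimating a frequency difference limen. The 2-down/1-up rule, step factor, starting difference, and the simulated listener below are generic illustrative assumptions, not details reported in the study:

```python
# Hypothetical sketch: a 2-down/1-up adaptive staircase converging on the
# ~70.7%-correct frequency difference limen (DL). All parameters are
# illustrative assumptions, not the study's actual procedure.
import random

def simulated_listener(delta_hz, true_dl=5.0):
    # Probability of a correct pitch-direction judgment rises from chance
    # (0.5) toward 1.0 as the frequency difference exceeds the true DL.
    p_correct = 0.5 + 0.5 * min(delta_hz / (2.0 * true_dl), 1.0)
    return random.random() < p_correct

def estimate_dl(start_delta_hz=40.0, step_factor=1.5, max_reversals=10):
    delta = start_delta_hz
    correct_streak, direction = 0, -1  # initially making the task harder
    reversals = []
    while len(reversals) < max_reversals:
        if simulated_listener(delta):
            correct_streak += 1
            if correct_streak == 2:        # two correct -> smaller difference
                correct_streak = 0
                if direction == +1:        # up-to-down turn is a reversal
                    reversals.append(delta)
                direction = -1
                delta /= step_factor
        else:                              # one wrong -> larger difference
            correct_streak = 0
            if direction == -1:            # down-to-up turn is a reversal
                reversals.append(delta)
            direction = +1
            delta *= step_factor
    # Geometric mean of the final reversal points estimates the DL.
    tail = reversals[-6:]
    product = 1.0
    for r in tail:
        product *= r
    return product ** (1.0 / len(tail))
```

Running `estimate_dl()` with the simulated listener returns a threshold near the listener's built-in limen; in a real experiment the listener's responses would replace the simulation.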
Stress and tinnitus—from bedside to bench and back
Mazurek, Birgit; Haupt, Heidemarie; Olze, Heidi; Szczepek, Agnieszka J.
2012-01-01
The aim of this review is to focus the attention of clinicians and basic researchers on the association between psycho-social stress and tinnitus. Although tinnitus is an auditory symptom, its onset and progression are often associated with emotional strain. Recent epidemiological studies have provided evidence for a direct relationship between the emotional status of subjects and tinnitus. In addition, studies of function, morphology, and gene and protein expression in the auditory system of animals exposed to stress support the notion that emotional status can influence the auditory system. The data provided by clinical and basic research using animal stress models offer valuable clues for improved diagnosis and more effective treatment of tinnitus. PMID:22701404
Dick, Frederic K; Lehet, Matt I; Callaghan, Martina F; Keller, Tim A; Sereno, Martin I; Holt, Lori L
2017-12-13
Auditory selective attention is vital in natural soundscapes. But it is unclear how attentional focus on the primary dimension of auditory representation-acoustic frequency-might modulate basic auditory functional topography during active listening. In contrast to visual selective attention, which is supported by motor-mediated optimization of input across saccades and pupil dilation, the primate auditory system has fewer means of differentially sampling the world. This makes spectrally-directed endogenous attention a particularly crucial aspect of auditory attention. Using a novel functional paradigm combined with quantitative MRI, we establish in male and female listeners that human frequency-band-selective attention drives activation in both myeloarchitectonically estimated auditory core, and across the majority of tonotopically mapped nonprimary auditory cortex. The attentionally driven best-frequency maps show strong concordance with sensory-driven maps in the same subjects across much of the temporal plane, with poor concordance in areas outside traditional auditory cortex. There is significantly greater activation across most of auditory cortex when best frequency is attended, versus ignored; the same regions do not show this enhancement when attending to the least-preferred frequency band. Finally, the results demonstrate that there is spatial correspondence between the degree of myelination and the strength of the tonotopic signal across a number of regions in auditory cortex. Strong frequency preferences across tonotopically mapped auditory cortex spatially correlate with R 1 -estimated myeloarchitecture, indicating shared functional and anatomical organization that may underlie intrinsic auditory regionalization. SIGNIFICANCE STATEMENT Perception is an active process, especially sensitive to attentional state. Listeners direct auditory attention to track a violin's melody within an ensemble performance, or to follow a voice in a crowded cafe. 
Although diverse pathologies reduce quality of life by impacting such spectrally directed auditory attention, its neurobiological bases are unclear. We demonstrate that human primary and nonprimary auditory cortical activation is modulated by spectrally directed attention in a manner that recapitulates its tonotopic sensory organization. Further, the graded activation profiles evoked by single-frequency bands are correlated with attentionally driven activation when these bands are presented in complex soundscapes. Finally, we observe a strong concordance in the degree of cortical myelination and the strength of tonotopic activation across several auditory cortical regions. Copyright © 2017 Dick et al.
Procedures for central auditory processing screening in schoolchildren.
Carvalho, Nádia Giulian de; Ubiali, Thalita; Amaral, Maria Isabel Ramos do; Santos, Maria Francisca Colella
2018-03-22
Central auditory processing screening in schoolchildren has led to debates in the literature, both regarding the protocol to be used and the importance of actions aimed at prevention and promotion of auditory health. Defining effective screening procedures for central auditory processing is a challenge in Audiology. This study aimed to analyze the scientific research on central auditory processing screening and discuss the effectiveness of the procedures utilized. A search was performed in the SciELO and PubMed databases by two researchers. The descriptors used in English were: auditory processing, screening, hearing, auditory perception, children, auditory tests, and their respective terms in Portuguese. Inclusion criteria: original articles involving schoolchildren, auditory screening of central auditory skills, and articles in Portuguese or English. Exclusion criteria: studies with adult and/or neonatal populations, peripheral auditory screening only, and duplicate articles. After applying the described criteria, 11 articles were included. At the international level, the central auditory processing screening methods used were: the screening test for auditory processing disorder and its revised version, the screening test for auditory processing, the scale of auditory behaviors, the children's auditory performance scale, and Feather Squadron. In the Brazilian scenario, the procedures used were the simplified auditory processing assessment and Zaidan's battery of tests. At the international level, the screening test for auditory processing and Feather Squadron batteries stand out as the most comprehensive evaluations of hearing skills. At the national level, there is a paucity of studies that use methods evaluating more than four skills and normalized by age group. The use of the simplified auditory processing assessment and questionnaires can be complementary in the search for an easy-access and low-cost alternative for the auditory screening of Brazilian schoolchildren.
Interactive tools should be proposed, that allow the selection of as many hearing skills as possible, validated by comparison with the battery of tests used in the diagnosis. Copyright © 2018 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Development of a Pitch Discrimination Screening Test for Preschool Children.
Abramson, Maria Kulick; Lloyd, Peter J
2016-04-01
There is a critical need for tests of auditory discrimination for young children, as this skill plays a fundamental role in the development of speaking, prereading, reading, language, and more complex auditory processes. Frequency discrimination is important with regard to basic sensory processing affecting phonological processing, dyslexia, measurements of intelligence, auditory memory, Asperger syndrome, and specific language impairment. This study was performed to determine the clinical feasibility of the Pitch Discrimination Test (PDT) to screen the preschool child's ability to discriminate some of the acoustic demands of speech perception, primarily pitch discrimination, without linguistic content. The PDT used brief tones at speech frequencies to gather normative data from preschool children aged 3 to 5 yrs. A cross-sectional study was used to gather data regarding the pitch discrimination abilities of a sample of typically developing preschool children between 3 and 5 yrs of age. The PDT consists of ten trials using two pure tones of 100-msec duration each, and was administered in an AA or AB forced-choice response format. Data from 90 typically developing preschool children between the ages of 3 and 5 yrs were used to provide normative data. Nonparametric Mann-Whitney U-testing was used to examine the effects of age as a continuous variable on pitch discrimination. The Kruskal-Wallis test was used to determine the significance of age on PDT performance, and the Spearman rank correlation was used to determine the association between age and PDT performance. Pitch discrimination of brief tones improved significantly from age 3 yrs to age 4 yrs, as well as from age 3 yrs to the combined 4- and 5-yr group. Results indicated that between ages 3 and 4 yrs, children's auditory discrimination of pitch improved on the PDT. The data showed that children can be screened for auditory discrimination of pitch beginning at age 4 yrs.
The PDT proved to be a time-efficient, feasible tool for screening a simple form of frequency discrimination in the preschool population, before the age at which other diagnostic tests of auditory processing disorders can be used. American Academy of Audiology.
Matching Teaching and Learning Styles.
ERIC Educational Resources Information Center
Caudill, Gil
1998-01-01
Outlines three basic learning modalities--auditory, visual, and tactile--and notes that technology can help incorporate multiple modalities within each lesson to meet the needs of most students. Discusses the importance of effectively assessing students in multiple-modality teaching. Presents visual, auditory, and tactile activity suggestions.…
Auditory Learning. Dimensions in Early Learning Series.
ERIC Educational Resources Information Center
Zigmond, Naomi K.; Cicci, Regina
The monograph discusses the psycho-physiological operations for processing of auditory information, the structure and function of the ear, the development of auditory processes from fetal responses through discrimination, language comprehension, auditory memory, and auditory processes related to written language. Disorders of auditory learning…
ERP evaluation of auditory sensory memory systems in adults with intellectual disability.
Ikeda, Kazunari; Hashimoto, Souichi; Hayashi, Akiko; Kanno, Atsushi
2009-01-01
The auditory sensory memory stage can be functionally divided into two subsystems: a transient-detector system and a permanent feature-detector system (Näätänen, 1992). We assessed these systems in persons with intellectual disability by measuring the event-related potentials (ERPs) N1 and mismatch negativity (MMN), which reflect the two auditory subsystems, respectively. In addition, P3a (an ERP reflecting a stage after sensory memory) was evaluated. Either synthesized vowels or simple tones were delivered during a passive oddball paradigm to adults with and without intellectual disability. ERPs were recorded from midline scalp sites (Fz, Cz, and Pz). Relative to the control group, participants with the disability exhibited greater N1 latency and smaller MMN amplitude. The results for N1 amplitude and MMN latency were basically comparable between the two groups. IQ scores in participants with the disability revealed no significant relation with N1 and MMN measures, whereas the IQ scores tended to increase significantly as P3a latency decreased. These outcomes suggest that persons with intellectual disability may have distinct malfunctions in the two detector systems at the auditory sensory-memory stage. Moreover, the processes following sensory memory might be partly related to a determinant of mental development.
Picoloto, Luana Altran; Cardoso, Ana Cláudia Vieira; Cerqueira, Amanda Venuti; Oliveira, Cristiane Moço Canhetti de
2017-12-07
To verify the effect of delayed auditory feedback on the speech fluency of individuals who stutter with and without central auditory processing disorders. The participants were twenty individuals who stutter, aged 7 to 17 years, divided into two groups: the Stuttering Group with Auditory Processing Disorders (SGAPD), 10 individuals with central auditory processing disorders, and the Stuttering Group (SG), 10 individuals without central auditory processing disorders. Procedures were: fluency assessment with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF), and assessment of stuttering severity and central auditory processing (CAP). Phono Tools software was used to cause a delay of 100 milliseconds in the auditory feedback. The Wilcoxon signed-rank test was used in the intragroup analysis and the Mann-Whitney test in the intergroup analysis. DAF caused a statistically significant reduction in the SG: in the frequency score of stuttering-like disfluencies in the Stuttering Severity Instrument analysis, in the number of blocks and repetitions of monosyllabic words, and in the frequency and duration of stuttering-like disfluencies. Delayed auditory feedback did not cause statistically significant effects on the fluency of the SGAPD, the individuals who stutter with auditory processing disorders. Thus, the effect of delayed auditory feedback on speech fluency differed between the two groups: fluency improved only in individuals without auditory processing disorder.
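The 100-millisecond feedback delay used above can be sketched as a simple sample shift; this generic delay line is an illustrative assumption, not the actual Phono Tools implementation:

```python
# Minimal sketch of a delayed auditory feedback (DAF) line: the speaker's
# signal is returned to the headphones shifted by delay_ms. A generic
# offline delay buffer for illustration only.
import numpy as np

def delayed_feedback(signal, fs_hz, delay_ms=100):
    delay_samples = int(round(fs_hz * delay_ms / 1000.0))
    out = np.zeros(len(signal) + delay_samples, dtype=float)
    out[delay_samples:] = signal   # feedback begins delay_ms later
    return out

# At a 44.1 kHz sample rate, a 100 ms delay corresponds to 4410 samples.
```

A real-time system would implement the same shift with a circular buffer so the delayed signal plays back while the speaker is still talking.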
Pinto, Hyorrana Priscila Pereira; Carvalho, Vinícius Rezende; Medeiros, Daniel de Castro; Almeida, Ana Flávia Santos; Mendes, Eduardo Mazoni Andrade Marçal; Moraes, Márcio Flávio Dutra
2017-04-07
Epilepsy is a neurological disease related to the occurrence of pathological oscillatory activity, but the basic physiological mechanisms of seizure remain to be understood. Our working hypothesis is that specific sensory processing circuits may present an abnormally enhanced predisposition for coordinated firing in the dysfunctional brain. Such facilitated entrainment could share a similar mechanistic process with those expediting the propagation of epileptiform activity throughout the brain. To test this hypothesis, we employed the Wistar audiogenic rat (WAR) reflex animal model, which is characterized by seizures reliably triggered by sound. Sound stimulation was modulated in amplitude to produce an auditory steady-state evoked response (ASSR; ~53.71 Hz) that covers bottom-up and top-down processing on a time scale compatible with the dynamics of the epileptic condition. Data from inferior colliculus (IC) c-Fos immunohistochemistry and electrographic recordings were gathered for both the control Wistar group and WARs. Under 85-dB SPL auditory stimulation, compared to controls, the WARs presented a higher number of Fos-positive cells (in the IC and auditory temporal lobe) and a significant increase in ASSR-normalized energy. Similarly, the 110-dB SPL sound stimulation also statistically increased ASSR-normalized energy during ictal and post-ictal periods. However, at the transition from the physiological to the pathological state (the pre-ictal period), the WAR ASSR analysis demonstrated a decline in normalized energy and a significant increase in circular variance values compared to those of controls. These results indicate an enhanced coordinated firing state for WARs, except immediately before seizure onset (suggesting pre-ictal neuronal desynchronization with the external sensory drive). These results suggest competing interference among different networks that, after seizure onset, converge into a massive oscillatory circuit. Copyright © 2017 IBRO. 
Published by Elsevier Ltd. All rights reserved.
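The amplitude-modulated stimulation used to evoke the ASSR can be sketched as follows; only the ~53.71 Hz modulation rate comes from the study, while the carrier frequency, duration, modulation depth, and sample rate are illustrative assumptions:

```python
# Sketch of an amplitude-modulated (AM) tone for evoking an auditory
# steady-state response (ASSR) at the modulation rate. Carrier, duration,
# and sample rate below are assumptions, not the study's parameters.
import numpy as np

fs = 44100                        # sample rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)  # 2-second stimulus
f_mod, f_carrier = 53.71, 8000.0
envelope = 0.5 * (1.0 + np.sin(2.0 * np.pi * f_mod * t))  # 100% AM depth
stimulus = envelope * np.sin(2.0 * np.pi * f_carrier * t)

# The envelope spectrum peaks at the modulation rate, which is the
# frequency at which the steady-state response is measured.
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(len(envelope), 1.0 / fs)
peak_hz = freqs[np.argmax(spectrum)]
```

Because the neural response phase-locks to the envelope, analysis of the recorded EEG proceeds the same way: energy and phase (circular variance) are read out at the modulation frequency.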
Sleep Disrupts High-Level Speech Parsing Despite Significant Basic Auditory Processing.
Makov, Shiri; Sharon, Omer; Ding, Nai; Ben-Shachar, Michal; Nir, Yuval; Zion Golumbic, Elana
2017-08-09
The extent to which the sleeping brain processes sensory information remains unclear. This is particularly true for continuous and complex stimuli such as speech, in which information is organized into hierarchically embedded structures. Recently, novel metrics for assessing the neural representation of continuous speech have been developed using noninvasive brain recordings that have thus far only been tested during wakefulness. Here we investigated, for the first time, the sleeping brain's capacity to process continuous speech at different hierarchical levels using a newly developed Concurrent Hierarchical Tracking (CHT) approach that allows monitoring the neural representation and processing-depth of continuous speech online. Speech sequences were compiled with syllables, words, phrases, and sentences occurring at fixed time intervals such that different linguistic levels correspond to distinct frequencies. This enabled us to distinguish their neural signatures in brain activity. We compared the neural tracking of intelligible versus unintelligible (scrambled and foreign) speech across states of wakefulness and sleep using high-density EEG in humans. We found that neural tracking of stimulus acoustics was comparable across wakefulness and sleep and similar across all conditions regardless of speech intelligibility. In contrast, neural tracking of higher-order linguistic constructs (words, phrases, and sentences) was only observed for intelligible speech during wakefulness and could not be detected at all during nonrapid eye movement or rapid eye movement sleep. These results suggest that, whereas low-level auditory processing is relatively preserved during sleep, higher-level hierarchical linguistic parsing is severely disrupted, thereby revealing the capacity and limits of language processing during sleep. 
SIGNIFICANCE STATEMENT Despite the persistence of some sensory processing during sleep, it is unclear whether high-level cognitive processes such as speech parsing are also preserved. We used a novel approach for studying the depth of speech processing across wakefulness and sleep while tracking neuronal activity with EEG. We found that responses to the auditory sound stream remained intact; however, the sleeping brain did not show signs of hierarchical parsing of the continuous stream of syllables into words, phrases, and sentences. The results suggest that sleep imposes a functional barrier between basic sensory processing and high-level cognitive processing. This paradigm also holds promise for studying residual cognitive abilities in a wide array of unresponsive states. Copyright © 2017 the authors.
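The frequency-tagging logic behind the Concurrent Hierarchical Tracking approach can be sketched with simulated data: when linguistic units occur at fixed, nested rates, each level has its own spectral signature in the neural recording. The specific rates and the toy "neural" signal below are illustrative assumptions, not the study's actual parameters:

```python
# Sketch of hierarchical frequency tagging: syllables at a fixed rate,
# two-syllable words at half that rate, two-word phrases at half again.
# A listener tracking all levels shows spectral peaks at every tagged rate.
# Rates and the simulated signal are assumptions for illustration.
import numpy as np

fs = 250.0                        # EEG-like sample rate (Hz)
t = np.arange(0, 40.0, 1.0 / fs)  # long window for fine frequency resolution
syllable_hz, word_hz, phrase_hz = 4.0, 2.0, 1.0

rng = np.random.default_rng(0)
response = (np.sin(2 * np.pi * syllable_hz * t)
            + 0.6 * np.sin(2 * np.pi * word_hz * t)
            + 0.4 * np.sin(2 * np.pi * phrase_hz * t)
            + 0.5 * rng.standard_normal(t.size))   # background noise

spectrum = np.abs(np.fft.rfft(response)) / t.size
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

def power_at(f_hz):
    # Spectral magnitude at the bin nearest the requested frequency.
    return spectrum[np.argmin(np.abs(freqs - f_hz))]
```

Under this logic, a sleeping brain that tracks only acoustics would retain the syllable-rate peak while the word- and phrase-rate peaks vanish, which is the pattern the study reports.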
Auditory Processing of Older Adults with Probable Mild Cognitive Impairment
ERIC Educational Resources Information Center
Edwards, Jerri D.; Lister, Jennifer J.; Elias, Maya N.; Tetlow, Amber M.; Sardina, Angela L.; Sadeq, Nasreen A.; Brandino, Amanda D.; Bush, Aryn L. Harrison
2017-01-01
Purpose: Studies suggest that deficits in auditory processing predict cognitive decline and dementia, but those studies included limited measures of auditory processing. The purpose of this study was to compare older adults with and without probable mild cognitive impairment (MCI) across two domains of auditory processing (auditory performance in…
Escera, Carles; Leung, Sumie; Grimm, Sabine
2014-07-01
Detection of changes in the acoustic environment is critical for survival, as it prevents missing potentially relevant events outside the focus of attention. In humans, deviance detection based on acoustic regularity encoding has been associated with a brain response derived from the human EEG, the mismatch negativity (MMN) auditory evoked potential, peaking at about 100-200 ms from deviance onset. By its long latency and cerebral generators, the cortical nature of both the processes of regularity encoding and deviance detection has been assumed. Yet, intracellular, extracellular, single-unit and local-field potential recordings in rats and cats have shown much earlier (circa 20-30 ms) and hierarchically lower (primary auditory cortex, medial geniculate body, inferior colliculus) deviance-related responses. Here, we review the recent evidence obtained with the complex auditory brainstem response (cABR), the middle latency response (MLR) and magnetoencephalography (MEG) demonstrating that human auditory deviance detection based on regularity encoding-rather than on refractoriness-occurs at latencies and in neural networks comparable to those revealed in animals. Specifically, encoding of simple acoustic-feature regularities and detection of corresponding deviance, such as an infrequent change in frequency or location, occur in the latency range of the MLR, in separate auditory cortical regions from those generating the MMN, and even at the level of human auditory brainstem. In contrast, violations of more complex regularities, such as those defined by the alternation of two different tones or by feature conjunctions (i.e., frequency and location) fail to elicit MLR correlates but elicit sizable MMNs. 
Altogether, these findings support the emerging view that deviance detection is a basic principle of the functional organization of the auditory system, and that regularity encoding and deviance detection are organized in ascending levels of complexity along the auditory pathway, extending from the brainstem up to higher-order areas of the cerebral cortex.
Hsiao, Chun-Jen; Hsu, Chih-Hsiang; Lin, Ching-Lung; Wu, Chung-Hsin; Jen, Philip Hung-Sun
2016-08-17
Although echolocating bats and other mammals share the basic design of the laryngeal apparatus for sound production and the auditory system for sound reception, bats have a specialized laryngeal mechanism for ultrasonic sound emission as well as a highly developed auditory system for processing species-specific sounds. Because the sounds used by bats for echolocation and by rodents for communication are quite different, there must be differences between them in the central nervous system structures devoted to producing and processing species-specific sounds. The present study examines differences in the relative size of several brain structures and in the expression of auditory-related and vocal-related proteins in the central nervous system of echolocating bats and rodents. Here, we report that bats using constant frequency-frequency modulated sounds (CF-FM bats) and FM bats for echolocation have a larger volume of midbrain nuclei (inferior and superior colliculi) and cerebellum relative to brain size than rodents (mice and rats). However, the former have a smaller volume of the cerebrum and olfactory bulb, but greater expression of otoferlin and forkhead box protein P2, than the latter. Although the size of both midbrain colliculi is comparable in CF-FM and FM bats, CF-FM bats have a larger cerebrum and greater expression of otoferlin and forkhead box protein P2 than FM bats. These differences in brain structure and protein expression are discussed in relation to the animals' biologically relevant sounds and foraging behavior.
Music and speech listening enhance the recovery of early sensory processing after stroke.
Särkämö, Teppo; Pihko, Elina; Laitinen, Sari; Forsblom, Anita; Soinila, Seppo; Mikkonen, Mikko; Autti, Taina; Silvennoinen, Heli M; Erkkilä, Jaakko; Laine, Matti; Peretz, Isabelle; Hietanen, Marja; Tervaniemi, Mari
2010-12-01
Our surrounding auditory environment has a dramatic influence on the development of basic auditory and cognitive skills, but little is known about how it influences the recovery of these skills after neural damage. Here, we studied the long-term effects of daily music and speech listening on auditory sensory memory after middle cerebral artery (MCA) stroke. In the acute recovery phase, 60 patients who had middle cerebral artery stroke were randomly assigned to a music listening group, an audio book listening group, or a control group. Auditory sensory memory, as indexed by the magnetic MMN (MMNm) response to changes in sound frequency and duration, was measured 1 week (baseline), 3 months, and 6 months after the stroke with whole-head magnetoencephalography recordings. Fifty-four patients completed the study. Results showed that the amplitude of the frequency MMNm increased significantly more in both music and audio book groups than in the control group during the 6-month poststroke period. In contrast, the duration MMNm amplitude increased more in the audio book group than in the other groups. Moreover, changes in the frequency MMNm amplitude correlated significantly with the behavioral improvement of verbal memory and focused attention induced by music listening. These findings demonstrate that merely listening to music and speech after neural damage can induce long-term plastic changes in early sensory processing, which, in turn, may facilitate the recovery of higher cognitive functions. The neural mechanisms potentially underlying this effect are discussed.
Jao Keehn, R Joanne; Sanchez, Sandra S; Stewart, Claire R; Zhao, Weiqi; Grenesko-Stevens, Emily L; Keehn, Brandon; Müller, Ralph-Axel
2017-01-01
Autism spectrum disorders (ASD) are pervasive developmental disorders characterized by impairments in language development and social interaction, along with restricted and stereotyped behaviors. These behaviors often include atypical responses to sensory stimuli; some children with ASD are easily overwhelmed by sensory stimuli, while others may seem unaware of their environment. Vision and audition are two sensory modalities important for social interactions and language, and are differentially affected in ASD. In the present study, 16 children and adolescents with ASD and 16 typically developing (TD) participants matched for age, gender, nonverbal IQ, and handedness were tested using a mixed event-related/blocked functional magnetic resonance imaging paradigm to examine basic perceptual processes that may form the foundation for later-developing cognitive abilities. Auditory (high or low pitch) and visual conditions (dot located high or low in the display) were presented, and participants indicated whether the stimuli were "high" or "low." Results for the auditory condition showed downregulated activity of the visual cortex in the TD group, but upregulation in the ASD group. This atypical activity in visual cortex was associated with autism symptomatology. These findings suggest atypical crossmodal (auditory-visual) modulation linked to sociocommunicative deficits in ASD, in agreement with the general hypothesis of low-level sensorimotor impairments affecting core symptomatology. Autism Res 2017, 10: 130-143. © 2016 International Society for Autism Research, Wiley Periodicals, Inc. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.
ERIC Educational Resources Information Center
Tillery, Kim L.; Katz, Jack; Keller, Warren D.
2000-01-01
A double-blind, placebo-controlled study examined effects of methylphenidate (Ritalin) on auditory processing in 32 children with both attention deficit hyperactivity disorder and central auditory processing (CAP) disorder. Analyses revealed that Ritalin did not have a significant effect on any of the central auditory processing measures, although…
Maturation of Visual and Auditory Temporal Processing in School-Aged Children
ERIC Educational Resources Information Center
Dawes, Piers; Bishop, Dorothy V. M.
2008-01-01
Purpose: To examine the development of sensitivity to auditory and visual temporal processes in children and the association with standardized measures of auditory processing and communication. Methods: Normative data on tests of visual and auditory processing were collected on 18 adults and 98 children aged 6-10 years. Auditory processes…
The Hard of Hearing. Prentice-Hall Foundations of Speech Pathology Series.
ERIC Educational Resources Information Center
O'Neill, John J.
Basic information about testing, diagnosing, and rehabilitating the hard of hearing is offered in this introductory text. The physics of sound, auditory theory, anatomy and pathology of the ear, and diagnostic routines are discussed. A chapter on aural rehabilitation includes an overview of lipreading and auditory training techniques for adults…
Late Maturation of Auditory Perceptual Learning
ERIC Educational Resources Information Center
Huyck, Julia Jones; Wright, Beverly A.
2011-01-01
Adults can improve their performance on many perceptual tasks with training, but when does the response to training become mature? To investigate this question, we trained 11-year-olds, 14-year-olds and adults on a basic auditory task (temporal-interval discrimination) using a multiple-session training regimen known to be effective for adults. The…
Auditory Habituation in the Fetus and Neonate: An fMEG Study
ERIC Educational Resources Information Center
Muenssinger, Jana; Matuz, Tamara; Schleger, Franziska; Kiefer-Schmidt, Isabelle; Goelz, Rangmar; Wacker-Gussmann, Annette; Birbaumer, Niels; Preissl, Hubert
2013-01-01
Habituation--the most basic form of learning--is used to evaluate central nervous system (CNS) maturation and to detect abnormalities in fetal brain development. In the current study, habituation, stimulus specificity and dishabituation of auditory evoked responses were measured in fetuses and newborns using fetal magnetoencephalography (fMEG). An…
ERIC Educational Resources Information Center
Mokhemar, Mary Ann
This kit for assessing central auditory processing disorders (CAPD), in children in grades 1 through 8 includes 3 books, 14 full-color cards with picture scenes, and a card depicting a phone key pad, all contained in a sturdy carrying case. The units in each of the three books correspond with auditory skill areas most commonly addressed in…
Sensing Super-position: Visual Instrument Sensor Replacement
NASA Technical Reports Server (NTRS)
Maluf, David A.; Schipper, John F.
2006-01-01
The coming decade of fast, cheap, and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome limitations of the human senses. This project addresses the technical feasibility of augmenting human vision through Sensing Super-position using a Visual Instrument Sensory Organ Replacement (VISOR). The current implementation of the VISOR device translates visual and other passive or active sensory instruments into sounds, which become relevant when the visual resolution is insufficient for very difficult and particular sensing tasks. A successful Sensing Super-position meets many human and pilot-vehicle system requirements. The system can be further developed into a cheap, portable, low-power device that takes into account the limited capabilities of the human user as well as the typical characteristics of the dynamic environment. The system operates in real time, giving the desired information for the particular augmented sensing tasks. The Sensing Super-position device increases perceived image resolution by combining an auditory representation with the visual representation. Auditory mapping is performed to distribute an image in time: the three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g. histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns.
The known capabilities of the human hearing system to learn and understand complicated auditory patterns provided the basic motivation for developing an image-to-sound mapping system.
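The image-to-sound mapping described in this entry (columns distributed over time, rows mapped to frequency, brightness mapped to amplitude) can be sketched in a minimal form. The frequency range, scan duration, and logarithmic frequency spacing below are illustrative assumptions, not the actual VISOR parameters.

```python
import numpy as np

def image_to_sound(image, sr=8000, duration=1.0, f_lo=200.0, f_hi=4000.0):
    """Sketch of a column-scan image-to-sound mapping.

    Each image column becomes one time slice of the output audio; within
    a slice, every row contributes a sine tone whose frequency depends on
    the row index and whose amplitude is proportional to pixel brightness
    (values in 0..1). Parameters are illustrative, not VISOR's.
    """
    n_rows, n_cols = image.shape
    # Top image row -> highest frequency, bottom row -> lowest (log-spaced).
    freqs = np.geomspace(f_hi, f_lo, n_rows)
    samples_per_col = int(sr * duration / n_cols)
    t = np.arange(samples_per_col) / sr
    # Bank of sinusoids, one per image row: shape (n_rows, samples_per_col).
    tones = np.sin(2 * np.pi * np.outer(freqs, t))
    # Each column's brightness values weight the tone bank.
    audio = np.concatenate([image[:, col] @ tones for col in range(n_cols)])
    peak = np.max(np.abs(audio))
    return audio / peak if peak > 0 else audio
```

A bright pixel thus produces a tone at its row's frequency during its column's time slice, which is the basic principle the entry attributes to the device.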
NASA Astrophysics Data System (ADS)
Todd, Neil P. M.; Lee, Christopher S.
2002-05-01
It has long been noted that the world's languages vary considerably in their rhythmic organization. Different languages seem to privilege different phonological units as their basic rhythmic unit, and there is now a large body of evidence that such differences have important consequences for crucial aspects of language acquisition and processing. The most fundamental finding is that the rhythmic structure of a language strongly influences the process of spoken-word recognition. This finding, together with evidence that infants are sensitive from birth to rhythmic differences between languages and exploit rhythmic cues to segmentation at an earlier developmental stage than other cues, prompted the claim that rhythm is the key which allows infants to begin building a lexicon and then go on to acquire syntax. It is therefore of interest to determine how differences in rhythmic organization arise at the acoustic/auditory level. In this paper, it is shown how an auditory model of the primitive representation of sound provides just such an account of rhythmic differences. Its performance is evaluated on a data set of French and English sentences and compared with the results yielded by the phonetic accounts of Frank Ramus and his colleagues and Esther Grabe and her colleagues.
The Contribution of Brainstem and Cerebellar Pathways to Auditory Recognition
McLachlan, Neil M.; Wilson, Sarah J.
2017-01-01
The cerebellum has been known to play an important role in motor functions for many years. More recently its role has been expanded to include a range of cognitive and sensory-motor processes, and substantial neuroimaging and clinical evidence now points to cerebellar involvement in most auditory processing tasks. In particular, an increase in the size of the cerebellum over recent human evolution has been attributed in part to the development of speech. Despite this, the auditory cognition literature has largely overlooked afferent auditory connections to the cerebellum that have been implicated in acoustically conditioned reflexes in animals, and could subserve speech and other auditory processing in humans. This review expands our understanding of auditory processing by incorporating cerebellar pathways into the anatomy and functions of the human auditory system. We reason that plasticity in the cerebellar pathways underpins implicit learning of spectrotemporal information necessary for sound and speech recognition. Once learnt, this information automatically recognizes incoming auditory signals and predicts likely subsequent information based on previous experience. Since sound recognition processes involving the brainstem and cerebellum initiate early in auditory processing, learnt information stored in cerebellar memory templates could then support a range of auditory processing functions such as streaming, habituation, the integration of auditory feature information such as pitch, and the recognition of vocal communications. PMID:28373850
Differential Effects of Alcohol on Working Memory: Distinguishing Multiple Processes
Saults, J. Scott; Cowan, Nelson; Sher, Kenneth J.; Moreno, Matthew V.
2008-01-01
We assessed effects of alcohol consumption on different types of working memory (WM) tasks in an attempt to characterize the nature of alcohol effects on cognition. The WM tasks varied in two properties of materials to be retained in a two-stimulus comparison procedure. Conditions included (1) spatial arrays of colors, (2) temporal sequences of colors, (3) spatial arrays of spoken digits, and (4) temporal sequences of spoken digits. Alcohol consumption impaired memory for auditory and visual sequences, but not memory for simultaneous arrays of auditory or visual stimuli. These results suggest that processes needed to encode and maintain stimulus sequences, such as rehearsal, are more sensitive to alcohol intoxication than other WM mechanisms needed to maintain multiple concurrent items, such as focusing attention on them. These findings help to resolve disparate findings from prior research into alcohol’s effect on WM and on divided attention. The results suggest that moderate doses of alcohol impair WM by affecting certain mnemonic strategies and executive processes rather than by shrinking the basic holding capacity of WM. PMID:18179311
Strategy Choice Mediates the Link between Auditory Processing and Spelling
Kwong, Tru E.; Brachman, Kyle J.
2014-01-01
Relations among linguistic auditory processing, nonlinguistic auditory processing, spelling ability, and spelling strategy choice were examined. Sixty-three undergraduate students completed measures of auditory processing (one involving distinguishing similar tones, one involving distinguishing similar phonemes, and one involving selecting appropriate spellings for individual phonemes). Participants also completed a modified version of a standardized spelling test, and a secondary spelling test with retrospective strategy reports. Once testing was completed, participants were divided into phonological versus nonphonological spellers on the basis of the number of words they spelled using phonological strategies only. Results indicated a) moderate to strong positive correlations among the different auditory processing tasks in terms of reaction time, but not accuracy levels, and b) weak to moderate positive correlations between measures of linguistic auditory processing (phoneme distinction and phoneme spelling choice in the presence of foils) and spelling ability for phonological spellers, but not for nonphonological spellers. These results suggest a possible explanation for past contradictory research on auditory processing and spelling, which has been divided in terms of whether or not disabled spellers seemed to have poorer auditory processing than did typically developing spellers, and suggest implications for teaching spelling to children with good versus poor auditory processing abilities. PMID:25198787
Ma, Xiaoran; McPherson, Bradley; Ma, Lian
2016-03-01
Objective: Children with nonsyndromic cleft lip and/or palate often have a high prevalence of middle ear dysfunction. However, there are also indications that they may have a higher prevalence of (central) auditory processing disorder. This study used Fisher's Auditory Problems Checklist for caregivers to determine whether children with nonsyndromic cleft lip and/or palate have potentially more auditory processing difficulties compared with craniofacially normal children. Methods: Caregivers of 147 school-aged children with nonsyndromic cleft lip and/or palate were recruited for the study. This group was divided into three subgroups: cleft lip, cleft palate, and cleft lip and palate. Caregivers of 60 craniofacially normal children were recruited as a control group. Hearing health tests were conducted to evaluate peripheral hearing. Caregivers of children who passed this assessment battery completed Fisher's Auditory Problems Checklist, which contains 25 questions related to behaviors linked to (central) auditory processing disorder. Results: Children with cleft palate showed the lowest scores on the Fisher's Auditory Problems Checklist questionnaire, consistent with a higher index of suspicion for (central) auditory processing disorder. There was a significant difference in the manifestation of (central) auditory processing disorder-linked behaviors between the cleft palate and the control groups. The most common behaviors reported in the nonsyndromic cleft lip and/or palate group were short attention span and reduced learning motivation, along with hearing difficulties in noise. Conclusion: A higher occurrence of (central) auditory processing disorder-linked behaviors was found in children with nonsyndromic cleft lip and/or palate, particularly cleft palate.
Auditory processing abilities should not be ignored in children with nonsyndromic cleft lip and/or palate, and it is necessary to consider assessment tests for (central) auditory processing disorder when an auditory diagnosis is made for this population.
[Auditory processing and high frequency audiometry in students of São Paulo].
Ramos, Cristina Silveira; Pereira, Liliane Desgualdo
2005-01-01
Auditory processing and auditory sensitivity to high-frequency sounds. The aim was to characterize sound localization, temporal ordering, auditory patterns, and the detection of high-frequency sounds, looking for possible relations between these factors. Thirty-two normal-hearing fourth-grade students, born in the city of São Paulo, were given a simplified evaluation of auditory processing, a duration pattern test, and high-frequency audiometry. Three (9.4%) individuals presented auditory processing disorder (APD), and in one of them lower hearing thresholds in high-frequency audiometry coexisted. APD associated with a loss of auditory sensitivity at high frequencies should be further investigated.
[Auditory processing evaluation in children born preterm].
Gallo, Júlia; Dias, Karin Ziliotto; Pereira, Liliane Desgualdo; Azevedo, Marisa Frasson de; Sousa, Elaine Colombo
2011-01-01
To verify the performance of children born preterm on auditory processing evaluation, and to correlate the data with behavioral hearing assessment carried out at 12 months of age, comparing the results to those of auditory processing evaluation of children born full-term. Participants were 30 children with ages between 4 and 7 years, who were divided into two groups: Group 1 (children born preterm), and Group 2 (children born full-term). The auditory processing results of Group 1 were correlated to data obtained from the behavioral auditory evaluation carried out at 12 months of age. The results were compared between groups. Subjects in Group 1 presented at least one risk indicator for hearing loss at birth. In the behavioral auditory assessment carried out at 12 months of age, 38% of the children in Group 1 were at risk for central auditory processing deficits, and 93.75% presented auditory processing deficits on the evaluation. Significant differences were found between the groups for the temporal order test, the PSI test with ipsilateral competitive message, and the speech-in-noise test. The delay in sound localization ability was associated with temporal processing deficits. Children born preterm have worse performance in auditory processing evaluation than children born full-term. Delay in sound localization at 12 months is associated with deficits in the physiological mechanism of temporal processing in the auditory processing evaluation carried out between 4 and 7 years.
ERIC Educational Resources Information Center
Lee, Christopher S.; Todd, Neil P. McAngus
2004-01-01
The world's languages display important differences in their rhythmic organization; most particularly, different languages seem to privilege different phonological units (mora, syllable, or stress foot) as their basic rhythmic unit. There is now considerable evidence that such differences have important consequences for crucial aspects of language…
Explaining the Colavita visual dominance effect.
Spence, Charles
2009-01-01
The last couple of years have seen a resurgence of interest in the Colavita visual dominance effect. In the basic experimental paradigm, a random series of auditory, visual, and audiovisual stimuli are presented to participants who are instructed to make one response whenever they see a visual target and another response whenever they hear an auditory target. Many studies have now shown that participants sometimes fail to respond to auditory targets when they are presented at the same time as visual targets (i.e., on the bimodal trials), despite the fact that they have no problems in responding to the auditory and visual stimuli when they are presented individually. The existence of the Colavita visual dominance effect provides an intriguing contrast with the results of the many other recent studies showing the superiority of multisensory (over unisensory) information processing in humans. Various accounts have been put forward over the years to explain the effect, including the suggestion that it reflects nothing more than an underlying bias to attend to the visual modality. Here, the empirical literature on the Colavita visual dominance effect is reviewed and some of the key factors modulating the effect are highlighted. The available research has now provided evidence against all previous accounts of the Colavita effect. A novel explanation of the Colavita effect is therefore put forward here, one that is based on the latest findings highlighting the asymmetrical effect that auditory and visual stimuli exert on people's responses to stimuli presented in the other modality.
Klatzky, Roberta L; Giudice, Nicholas A; Bennett, Christopher R; Loomis, Jack M
2014-01-01
Many developers wish to capitalize on touch-screen technology for developing aids for the blind, particularly by incorporating vibrotactile stimulation to convey patterns on their surfaces, which otherwise are featureless. Our belief is that they will need to take into account basic research on haptic perception in designing these graphics interfaces. We point out constraints and limitations in haptic processing that affect the use of these devices. We also suggest ways to use sound to augment basic information from touch, and we include evaluation data from users of a touch-screen device with vibrotactile and auditory feedback that we have been developing, called a vibro-audio interface.
Auditory spatial processing in Alzheimer’s disease
Golden, Hannah L.; Nicholas, Jennifer M.; Yong, Keir X. X.; Downey, Laura E.; Schott, Jonathan M.; Mummery, Catherine J.; Crutch, Sebastian J.
2015-01-01
The location and motion of sounds in space are important cues for encoding the auditory world. Spatial processing is a core component of auditory scene analysis, a cognitively demanding function that is vulnerable in Alzheimer’s disease. Here we designed a novel neuropsychological battery based on a virtual space paradigm to assess auditory spatial processing in patient cohorts with clinically typical Alzheimer’s disease (n = 20) and its major variant syndrome, posterior cortical atrophy (n = 12) in relation to healthy older controls (n = 26). We assessed three dimensions of auditory spatial function: externalized versus non-externalized sound discrimination, moving versus stationary sound discrimination and stationary auditory spatial position discrimination, together with non-spatial auditory and visual spatial control tasks. Neuroanatomical correlates of auditory spatial processing were assessed using voxel-based morphometry. Relative to healthy older controls, both patient groups exhibited impairments in detection of auditory motion, and stationary sound position discrimination. The posterior cortical atrophy group showed greater impairment for auditory motion processing and the processing of a non-spatial control complex auditory property (timbre) than the typical Alzheimer’s disease group. Voxel-based morphometry in the patient cohort revealed grey matter correlates of auditory motion detection and spatial position discrimination in right inferior parietal cortex and precuneus, respectively. These findings delineate auditory spatial processing deficits in typical and posterior Alzheimer’s disease phenotypes that are related to posterior cortical regions involved in both syndromic variants and modulated by the syndromic profile of brain degeneration. 
Auditory spatial deficits contribute to impaired spatial awareness in Alzheimer’s disease and may constitute a novel perceptual model for probing brain network disintegration across the Alzheimer’s disease syndromic spectrum. PMID:25468732
Developmental changes in automatic rule-learning mechanisms across early childhood.
Mueller, Jutta L; Friederici, Angela D; Männel, Claudia
2018-06-27
Infants' ability to learn complex linguistic regularities from early on has been revealed by electrophysiological studies indicating that 3-month-olds, but not adults, can automatically detect non-adjacent dependencies between syllables. While different ERP responses in adults and infants suggest that both linguistic rule learning and its link to basic auditory processing undergo developmental changes, systematic investigations of the developmental trajectories are scarce. In the present study, we assessed 2- and 4-year-olds' ERP indicators of pitch discrimination and linguistic rule learning in a syllable-based oddball design. To test for the relation between auditory discrimination and rule learning, ERP responses to pitch changes were used as a predictor of potential linguistic rule-learning effects. Results revealed that 2-year-olds, but not 4-year-olds, showed ERP markers of rule learning. Although 2-year-olds' rule learning was not dependent on differences in pitch perception, 4-year-old children demonstrated a dependency, such that those children who showed more pronounced responses to pitch changes still showed an effect of rule learning. These results narrow down the developmental decline of the ability for automatic linguistic rule learning to the age between 2 and 4 years, and, moreover, point towards a strong modification of this change by auditory processes. At an age when the ability for automatic linguistic rule learning phases out, rule learning can still be observed in children with enhanced auditory responses. The observed interrelations are plausible causes of age-of-acquisition effects and inter-individual differences in language learning. © 2018 John Wiley & Sons Ltd.
Auditory motion-specific mechanisms in the primate brain
Baumann, Simon; Dheerendra, Pradeep; Joly, Olivier; Hunter, David; Balezeau, Fabien; Sun, Li; Rees, Adrian; Petkov, Christopher I.; Thiele, Alexander; Griffiths, Timothy D.
2017-01-01
This work examined the mechanisms underlying auditory motion processing in the auditory cortex of awake monkeys using functional magnetic resonance imaging (fMRI). We tested to what extent auditory motion analysis can be explained by the linear combination of static spatial mechanisms, spectrotemporal processes, and their interaction. We found that the posterior auditory cortex, including A1 and the surrounding caudal belt and parabelt, is involved in auditory motion analysis. Static spatial and spectrotemporal processes were able to fully explain motion-induced activation in most parts of the auditory cortex, including A1, but not in circumscribed regions of the posterior belt and parabelt cortex. We show that in these regions motion-specific processes contribute to the activation, providing the first demonstration that auditory motion is not simply deduced from changes in static spatial location. These results demonstrate that parallel mechanisms for motion and static spatial analysis coexist within the auditory dorsal stream. PMID:28472038
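The analysis idea in the entry above, testing whether motion-condition activation is fully explained by a linear combination of static spatial and spectrotemporal responses plus their interaction, can be sketched as a regression with a residual-variance check. The regressors and data here are synthetic stand-ins, not the study's actual fMRI pipeline.

```python
import numpy as np

def unexplained_variance(motion_resp, static_resp, spectemp_resp):
    """Fraction of motion-condition response variance not captured by a
    linear combination of static spatial and spectrotemporal responses
    (plus their product as an interaction term). A value near zero means
    no motion-specific process is needed to explain the response."""
    X = np.column_stack([
        np.ones_like(static_resp),    # baseline
        static_resp,                  # static spatial regressor
        spectemp_resp,                # spectrotemporal regressor
        static_resp * spectemp_resp,  # interaction
    ])
    beta, *_ = np.linalg.lstsq(X, motion_resp, rcond=None)
    resid = motion_resp - X @ beta
    return resid.var() / motion_resp.var()
```

Regions where this leftover fraction stays high after the linear model is fitted would, under this logic, be candidates for motion-specific processing.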
Vilela, Nadia; Barrozo, Tatiane Faria; Pagan-Neves, Luciana de Oliveira; Sanches, Seisse Gabriela Gandolfi; Wertzner, Haydée Fiszbein; Carvallo, Renata Mota Mamede
2016-02-01
To identify a cutoff value based on the Percentage of Consonants Correct-Revised index that could indicate the likelihood of a child with a speech-sound disorder also having a (central) auditory processing disorder. Language, audiological and (central) auditory processing evaluations were administered. The participants were 27 subjects with speech-sound disorders aged 7 to 10 years and 11 months who were divided into two different groups according to their (central) auditory processing evaluation results. When a (central) auditory processing disorder was present in association with a speech disorder, the children tended to have lower scores on phonological assessments. A greater severity of speech disorder was related to a greater probability of the child having a (central) auditory processing disorder. The use of a cutoff value for the Percentage of Consonants Correct-Revised index successfully distinguished between children with and without a (central) auditory processing disorder. The severity of speech-sound disorder in children was influenced by the presence of (central) auditory processing disorder. The attempt to identify a cutoff value based on a severity index was successful.
Debruyne, Joke A; Francart, Tom; Janssen, A Miranda L; Douma, Kim; Brokx, Jan P L
2017-03-01
This study investigated the hypotheses that (1) prelingually deafened CI users do not have perfect electrode discrimination ability and (2) the deactivation of non-discriminable electrodes can improve auditory performance. Electrode discrimination difference limens were determined for all electrodes of the array. The subjects' basic map was subsequently compared to an experimental map, which contained only discriminable electrodes, with respect to speech understanding in quiet and in noise, listening effort, spectral ripple discrimination and subjective appreciation. Subjects were six prelingually deafened, late implanted adults using the Nucleus cochlear implant. Electrode discrimination difference limens across all subjects and electrodes ranged from 0.5 to 7.125, with significantly larger limens for basal electrodes. No significant differences were found between the basic map and the experimental map on auditory tests. Subjective appreciation was found to be significantly poorer for the experimental map. Prelingually deafened CI users were unable to discriminate between all adjacent electrodes. There was no difference in auditory performance between the basic and experimental map. Potential factors contributing to the absence of improvement with the experimental map include the reduced number of maxima, incomplete adaptation to the new frequency allocation, and the mainly basal location of deactivated electrodes.
Auditory Processing Disorder in Children
Auditory processing disorder (APD) describes a condition ...
ERIC Educational Resources Information Center
Vause, Tricia; Martin, Garry L.; Yu, C.T.; Marion, Carole; Sakko, Gina
2005-01-01
The relationship between language, performance on the Assessment of Basic Learning Abilities (ABLA) test, and stimulus equivalence was examined. Five participants with minimal verbal repertoires were studied; 3 who passed up to ABLA Level 4, a visual quasi-identity discrimination, and 2 who passed ABLA Level 6, an auditory-visual nonidentity…
ERIC Educational Resources Information Center
Erdener, Dogu
2016-01-01
Traditionally, second language (L2) instruction has emphasised auditory-based instruction methods. However, this approach is restrictive in the sense that speech perception by humans is not just an auditory phenomenon but a multimodal one, and specifically, a visual one as well. In the past decade, experimental studies have shown that the…
Ptok, M; Meisen, R
2008-01-01
The rapid auditory processing deficit theory holds that impaired reading/writing skills are not caused exclusively by a cognitive deficit specific to the representation and processing of speech sounds but arise from sensory, mainly auditory, deficits. To further explore this theory we compared different measures of low-level auditory skills with writing skills in school children. Design: prospective study. Participants: school children attending third and fourth grade. Measures: just-noticeable differences for intensity and frequency (JNDI, JNDF), gap detection (GD), monaural and binaural temporal order judgement (TOJm and TOJb); grades in writing, language and mathematics. Analysis: correlation analysis. No relevant correlation was found between any low-level auditory processing variable and writing skills. These data do not support the rapid auditory processing deficit theory.
Auditory priming improves neural synchronization in auditory-motor entrainment.
Crasta, Jewel E; Thaut, Michael H; Anderson, Charles W; Davies, Patricia L; Gavin, William J
2018-05-22
Neurophysiological research has shown that auditory and motor systems interact during movement to rhythmic auditory stimuli through a process called entrainment. This study explores the neural oscillations underlying auditory-motor entrainment using electroencephalography. Forty young adults were randomly assigned to one of two control conditions, an auditory-only condition or a motor-only condition, prior to a rhythmic auditory-motor synchronization condition (referred to as the combined condition). Participants assigned to the auditory-only condition (auditory-first group) listened to 400 trials of auditory stimuli presented every 800 ms, while those in the motor-only condition (motor-first group) were asked to tap rhythmically every 800 ms without any external stimuli. Following their control condition, all participants completed an auditory-motor combined condition that required tapping along with auditory stimuli every 800 ms. As expected, the neural processes for the combined condition for each group differed from those in their respective control condition. Time-frequency analysis of total power at an electrode site on the left central scalp (C3) indicated that the neural oscillations elicited by auditory stimuli, especially in the beta and gamma range, drove the auditory-motor entrainment. For the combined condition, the auditory-first group had significantly lower evoked power for a region of interest representing sensorimotor processing (4-20 Hz) and less total power in a region associated with anticipation and predictive timing (13-16 Hz) than the motor-first group. Thus, the auditory-only condition served as a priming facilitator of the neural processes in the combined condition, more so than the motor-only condition. Results suggest that even brief periods of rhythmic training of the auditory system lead to neural efficiency, facilitating the motor system during the process of entrainment.
These findings have implications for interventions using rhythmic auditory stimulation. Copyright © 2018 Elsevier Ltd. All rights reserved.
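The band-limited "total power" measures described in this abstract can be illustrated with a toy computation on synthetic data. Everything below (sampling rate, signal, band edges, variable names) is invented for illustration; it is not the study's analysis pipeline, which used trial-wise time-frequency decomposition of EEG.

```python
import numpy as np

fs = 500                      # Hz, assumed EEG sampling rate
t = np.arange(0, 4.0, 1 / fs)
rng = np.random.default_rng(0)
# Synthetic "C3" signal: a 14 Hz component (inside the 13-16 Hz band
# associated above with anticipation/predictive timing) buried in noise.
x = np.sin(2 * np.pi * 14 * t) + 0.5 * rng.standard_normal(t.size)

# Power spectrum via FFT (a simple stand-in for a full time-frequency analysis).
freqs = np.fft.rfftfreq(x.size, d=1 / fs)
power = np.abs(np.fft.rfft(x)) ** 2 / x.size

def band_power(lo, hi):
    """Mean spectral power within [lo, hi] Hz."""
    band = (freqs >= lo) & (freqs <= hi)
    return power[band].mean()

p_anticipation = band_power(13, 16)   # band highlighted in the abstract
p_reference = band_power(30, 45)      # arbitrary low-gamma reference band
print(p_anticipation > p_reference)   # the 14 Hz component dominates
```

Group comparisons like the one reported (auditory-first vs. motor-first) would then be statistics over such band-power values computed per participant and condition.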
Mind the Gap: Two Dissociable Mechanisms of Temporal Processing in the Auditory System
Anderson, Lucy A.
2016-01-01
High temporal acuity of auditory processing underlies perception of speech and other rapidly varying sounds. A common measure of auditory temporal acuity in humans is the threshold for detection of brief gaps in noise. Gap-detection deficits, observed in developmental disorders, are considered evidence for “sluggish” auditory processing. Here we show, in a mouse model of gap-detection deficits, that auditory brain sensitivity to brief gaps in noise can be impaired even without a general loss of central auditory temporal acuity. Extracellular recordings in three different subdivisions of the auditory thalamus in anesthetized mice revealed a stimulus-specific, subdivision-specific deficit in thalamic sensitivity to brief gaps in noise in experimental animals relative to controls. Neural responses to brief gaps in noise were reduced, but responses to other rapidly changing stimuli unaffected, in lemniscal and nonlemniscal (but not polysensory) subdivisions of the medial geniculate body. Through experiments and modeling, we demonstrate that the observed deficits in thalamic sensitivity to brief gaps in noise arise from reduced neural population activity following noise offsets, but not onsets. These results reveal dissociable sound-onset-sensitive and sound-offset-sensitive channels underlying auditory temporal processing, and suggest that gap-detection deficits can arise from specific impairment of the sound-offset-sensitive channel. SIGNIFICANCE STATEMENT The experimental and modeling results reported here suggest a new hypothesis regarding the mechanisms of temporal processing in the auditory system. Using a mouse model of auditory temporal processing deficits, we demonstrate the existence of specific abnormalities in auditory thalamic activity following sound offsets, but not sound onsets. These results reveal dissociable sound-onset-sensitive and sound-offset-sensitive mechanisms underlying auditory processing of temporally varying sounds. 
Furthermore, the findings suggest that auditory temporal processing deficits, such as impairments in gap-in-noise detection, could arise from reduced brain sensitivity to sound offsets alone. PMID:26865621
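The gap-in-noise stimuli referred to above are straightforward to construct: a burst of broadband noise with a brief silent interval inserted. A minimal sketch, with illustrative parameters rather than those used in the study:

```python
import numpy as np

def gap_in_noise(fs=44100, dur=0.5, gap_ms=5.0, gap_at=0.25, seed=0):
    """White-noise burst of `dur` seconds with a silent gap of
    `gap_ms` milliseconds starting at `gap_at` seconds."""
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-1.0, 1.0, int(dur * fs))
    start = int(gap_at * fs)
    stop = start + int(gap_ms / 1000.0 * fs)
    noise[start:stop] = 0.0   # the gap: abrupt noise offset, then onset
    return noise

stim = gap_in_noise(gap_ms=5.0)
print(stim.size)  # 22050 samples at 44.1 kHz
```

In a behavioral gap-detection task, the gap duration would be varied adaptively to find the shortest gap a listener can reliably detect; the offset-sensitive channel proposed here would respond to the noise-to-silence transition at `start`.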
Estrogenic modulation of auditory processing: a vertebrate comparison
Caras, Melissa L.
2013-01-01
Sex-steroid hormones are well-known regulators of vocal motor behavior in several organisms. A large body of evidence now indicates that these same hormones modulate processing at multiple levels of the ascending auditory pathway. The goal of this review is to provide a comparative analysis of the role of estrogens in vertebrate auditory function. Four major conclusions can be drawn from the literature: First, estrogens may influence the development of the mammalian auditory system. Second, estrogenic signaling protects the mammalian auditory system from noise- and age-related damage. Third, estrogens optimize auditory processing during periods of reproductive readiness in multiple vertebrate lineages. Finally, brain-derived estrogens can act locally to enhance auditory response properties in at least one avian species. This comparative examination may lead to a better appreciation of the role of estrogens in the processing of natural vocalizations and may provide useful insights toward alleviating auditory dysfunctions emanating from hormonal imbalances. PMID:23911849
Reduced auditory processing capacity during vocalization in children with Selective Mutism.
Arie, Miri; Henkin, Yael; Lamy, Dominique; Tetin-Schneider, Simona; Apter, Alan; Sadeh, Avi; Bar-Haim, Yair
2007-02-01
Because abnormal Auditory Efferent Activity (AEA) is associated with auditory distortions during vocalization, we tested whether auditory processing is impaired during vocalization in children with Selective Mutism (SM). Participants were children with SM and abnormal AEA, children with SM and normal AEA, and normally speaking controls, who had to detect aurally presented target words embedded within word lists under two conditions: silence (single task), and while vocalizing (dual task). To ascertain the specificity of the auditory-vocal deficit, the effects of concurrent vocalizing were also examined during a visual task. Children with SM and abnormal AEA showed impaired auditory processing during vocalization relative to children with SM and normal AEA, and relative to control children. This impairment is specific to the auditory modality and does not reflect difficulties with dual tasks per se. The data extend previous findings suggesting that deficient auditory processing is involved in speech selectivity in SM.
Musicians' edge: A comparison of auditory processing, cognitive abilities and statistical learning.
Mandikal Vasuki, Pragati Rao; Sharma, Mridula; Demuth, Katherine; Arciuli, Joanne
2016-12-01
It has been hypothesized that musical expertise is associated with enhanced auditory processing and cognitive abilities. Recent research has examined the relationship between musicians' advantage and implicit statistical learning skills. In the present study, we assessed a variety of auditory processing skills, cognitive processing skills, and statistical learning (auditory and visual forms) in age-matched musicians (N = 17) and non-musicians (N = 18). Musicians had significantly better performance than non-musicians on frequency discrimination, and backward digit span. A key finding was that musicians had better auditory, but not visual, statistical learning than non-musicians. Performance on the statistical learning tasks was not correlated with performance on auditory and cognitive measures. Musicians' superior performance on auditory (but not visual) statistical learning suggests that musical expertise is associated with an enhanced ability to detect statistical regularities in auditory stimuli. Copyright © 2016 Elsevier B.V. All rights reserved.
Emergent selectivity for task-relevant stimuli in higher-order auditory cortex
Atiani, Serin; David, Stephen V.; Elgueda, Diego; Locastro, Michael; Radtke-Schuller, Susanne; Shamma, Shihab A.; Fritz, Jonathan B.
2014-01-01
A variety of attention-related effects have been demonstrated in primary auditory cortex (A1). However, an understanding of the functional role of higher auditory cortical areas in guiding attention to acoustic stimuli has been elusive. We recorded from neurons in two tonotopic cortical belt areas in the dorsal posterior ectosylvian gyrus (dPEG) of ferrets trained on a simple auditory discrimination task. Neurons in dPEG showed similar basic auditory tuning properties to A1, but during behavior we observed marked differences between these areas. In the belt areas, changes in neuronal firing rate and response dynamics greatly enhanced responses to target stimuli relative to distractors, allowing for greater attentional selection during active listening. Consistent with existing anatomical evidence, the pattern of sensory tuning and behavioral modulation in auditory belt cortex links the spectro-temporal representation of the whole acoustic scene in A1 to a more abstracted representation of task-relevant stimuli observed in frontal cortex. PMID:24742467
Ouimet, Tia; Foster, Nicholas E V; Tryfon, Ana; Hyde, Krista L
2012-04-01
Autism spectrum disorder (ASD) is a complex neurodevelopmental condition characterized by atypical social and communication skills, repetitive behaviors, and atypical visual and auditory perception. Studies in vision have reported enhanced detailed ("local") processing but diminished holistic ("global") processing of visual features in ASD. Individuals with ASD also show enhanced processing of simple visual stimuli but diminished processing of complex visual stimuli. Relative to the visual domain, auditory global-local distinctions, and the effects of stimulus complexity on auditory processing in ASD, are less clear. However, one remarkable finding is that many individuals with ASD have enhanced musical abilities, such as superior pitch processing. This review provides a critical evaluation of behavioral and brain imaging studies of auditory processing with respect to current theories in ASD. We have focused on auditory-musical processing in terms of global versus local processing and simple versus complex sound processing. This review contributes to a better understanding of auditory processing differences in ASD. A deeper comprehension of sensory perception in ASD is key to better defining ASD phenotypes and, in turn, may lead to better interventions. © 2012 New York Academy of Sciences.
ERIC Educational Resources Information Center
Fox, Allison M.; Reid, Corinne L.; Anderson, Mike; Richardson, Cassandra; Bishop, Dorothy V. M.
2012-01-01
According to the rapid auditory processing theory, the ability to parse incoming auditory information underpins learning of oral and written language. There is wide variation in this low-level perceptual ability, which appears to follow a protracted developmental course. We studied the development of rapid auditory processing using event-related…
Sensing Super-Position: Human Sensing Beyond the Visual Spectrum
NASA Technical Reports Server (NTRS)
Maluf, David A.; Schipper, John F.
2007-01-01
The coming decade of fast, cheap and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome limitations of the human senses. This paper addresses the technical feasibility of augmenting human vision through Sensing Super-position, in which additional sensory instruments are mixed with natural human sensing. The current implementation of the device translates visual and other passive or active sensory instruments into sounds, which become relevant when the visual resolution is insufficient for very difficult and particular sensing tasks. A successful Sensing Super-position meets many human and pilot-vehicle system requirements. The system can be further developed into a cheap, portable, and low-power device, taking into account the limited capabilities of the human user as well as the typical characteristics of the user's dynamic environment. The system operates in real time, giving the desired information for the particular augmented sensing tasks. The Sensing Super-position device increases perceived image resolution via an auditory representation alongside the visual representation. Auditory mapping is performed to distribute an image in time. The three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g. histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns. The known capabilities of the human hearing system to learn and understand complicated auditory patterns provided the basic motivation for developing an image-to-sound mapping system.
The human brain is superior to most existing computer systems in rapidly extracting relevant information from blurred, noisy, and redundant images. From a theoretical viewpoint, this means that the available bandwidth is not exploited in an optimal way. While image-processing techniques can manipulate, condense and focus the information (e.g., Fourier Transforms), keeping the mapping as direct and simple as possible might also reduce the risk of accidentally filtering out important clues. After all, a perfectly non-redundant sound representation is especially prone to losing relevant information in the imperfect human hearing system. Also, a complicated non-redundant image-to-sound mapping may well be far more difficult to learn and comprehend than a straightforward mapping, while the mapping system would increase in complexity and cost. This work will demonstrate some basic information processing for optimal information capture for head-mounted systems.
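As a rough illustration of the image-to-sound mapping idea described here (columns scanned over time, rows mapped to frequencies, brightness mapped to amplitude), a minimal sketch follows. The function name and all parameters are invented for illustration and do not reflect the actual NASA implementation:

```python
import numpy as np

def image_to_sound(image, fs=8000, col_dur=0.05, f_lo=200.0, f_hi=4000.0):
    """Scan a 2-D brightness map column by column (left to right = time);
    each row gets a fixed sine frequency (top = high pitch, bottom = low)
    whose amplitude follows that pixel's brightness."""
    n_rows, n_cols = image.shape
    freqs = np.linspace(f_hi, f_lo, n_rows)       # top row -> highest pitch
    t = np.arange(int(col_dur * fs)) / fs
    cols = []
    for c in range(n_cols):
        # One short tone complex per column: sum of brightness-weighted sines.
        tones = image[:, c, None] * np.sin(2 * np.pi * freqs[:, None] * t)
        cols.append(tones.sum(axis=0) / n_rows)   # mix the rows
    return np.concatenate(cols)

img = np.zeros((8, 4))
img[0, :] = 1.0                  # a bright line along the top edge
audio = image_to_sound(img)
print(audio.shape)               # (1600,) = 4 columns x 0.05 s x 8 kHz
```

A bright feature near the top of the image thus produces a sustained high-pitched tone as the scan sweeps across it, which is the kind of direct, easily learned mapping the text argues for.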
Musical Experience, Auditory Perception and Reading-Related Skills in Children
Banai, Karen; Ahissar, Merav
2013-01-01
Background The relationships between auditory processing and reading-related skills remain poorly understood despite intensive research. Here we focus on the potential role of musical experience as a confounding factor. Specifically we ask whether the pattern of correlations between auditory and reading related skills differ between children with different amounts of musical experience. Methodology/Principal Findings Third grade children with various degrees of musical experience were tested on a battery of auditory processing and reading related tasks. Very poor auditory thresholds and poor memory skills were abundant only among children with no musical education. In this population, indices of auditory processing (frequency and interval discrimination thresholds) were significantly correlated with and accounted for up to 13% of the variance in reading related skills. Among children with more than one year of musical training, auditory processing indices were better, yet reading related skills were not correlated with them. A potential interpretation for the reduction in the correlations might be that auditory and reading-related skills improve at different rates as a function of musical training. Conclusions/Significance Participants’ previous musical training, which is typically ignored in studies assessing the relations between auditory and reading related skills, should be considered. Very poor auditory and memory skills are rare among children with even a short period of musical training, suggesting musical training could have an impact on both. The lack of correlation in the musically trained population suggests that a short period of musical training does not enhance reading related skills of individuals with within-normal auditory processing skills. 
Further studies are required to determine whether the associations between musical training, auditory processing and memory are indeed causal or whether children with poor auditory and memory skills are less likely to study music and if so, why this is the case. PMID:24086654
Sanguebuche, Taissane Rodrigues; Peixe, Bruna Pias; Bruno, Rúbia Soares; Biaggio, Eliara Pinto Vieira; Garcia, Michele Vargas
2018-01-01
Introduction The auditory system consists of sensory structures and central connections. The evaluation of the auditory pathway at a central level can be performed through behavioral and electrophysiological tests, because they are complementary to each other and provide important information about comprehension. Objective To correlate the findings of speech brainstem-evoked response audiometry with the behavioral tests Random Gap Detection Test and Masking Level Difference in adults with hearing loss. Methods All patients were submitted to a basic audiological evaluation, to the aforementioned behavioral tests, and to an electrophysiological assessment, by means of click-evoked and speech-evoked brainstem response audiometry. Results There were no statistically significant correlations between the electrophysiological test and the behavioral tests. However, there was a significant correlation between the V and A waves, as well as the D and F waves, of the speech-evoked brainstem response audiometry peaks. Such correlations are positive, indicating that an increase in one variable implies an increase in the other, and vice versa. Conclusion It was possible to compare the findings of the speech-evoked brainstem response audiometry with those of the behavioral tests Random Gap Detection and Masking Level Difference; however, no statistically significant correlation was found between them. This shows that the electrophysiological evaluation does not depend uniquely on the behavioral skills of temporal resolution and selective attention. PMID:29379574
Ding, Nai; Pan, Xunyi; Luo, Cheng; Su, Naifei; Zhang, Wen; Zhang, Jianfeng
2018-01-31
How the brain groups sequential sensory events into chunks is a fundamental question in cognitive neuroscience. This study investigates whether top-down attention or specific tasks are required for the brain to apply lexical knowledge to group syllables into words. Neural responses tracking the syllabic and word rhythms of a rhythmic speech sequence were concurrently monitored using electroencephalography (EEG). The participants performed different tasks, attending to either the rhythmic speech sequence or a distractor, which was another speech stream or a nonlinguistic auditory/visual stimulus. Attention to speech, but not a lexical-meaning-related task, was required for reliable neural tracking of words, even when the distractor was a nonlinguistic stimulus presented cross-modally. Neural tracking of syllables, however, was reliably observed in all tested conditions. These results strongly suggest that neural encoding of individual auditory events (i.e., syllables) is automatic, while knowledge-based construction of temporal chunks (i.e., words) crucially relies on top-down attention. SIGNIFICANCE STATEMENT Why we cannot understand speech when not paying attention is an old question in psychology and cognitive neuroscience. Speech processing is a complex process that involves multiple stages, e.g., hearing and analyzing the speech sound, recognizing words, and combining words into phrases and sentences. The current study investigates which speech-processing stage is blocked when we do not listen carefully. We show that the brain can reliably encode syllables, basic units of speech sounds, even when we do not pay attention. Nevertheless, when distracted, the brain cannot group syllables into multisyllabic words, which are basic units for speech meaning. Therefore, the process of converting speech sound into meaning crucially relies on attention. Copyright © 2018 the authors 0270-6474/18/381178-11$15.00/0.
The function and failure of sensory predictions.
Bansal, Sonia; Ford, Judith M; Spering, Miriam
2018-04-23
Humans and other primates are equipped with neural mechanisms that allow them to automatically make predictions about future events, facilitating processing of expected sensations and actions. Prediction-driven control and monitoring of perceptual and motor acts are vital to normal cognitive functioning. This review provides an overview of corollary discharge mechanisms involved in predictions across sensory modalities and discusses consequences of predictive coding for cognition and behavior. Converging evidence now links impairments in corollary discharge mechanisms to neuropsychiatric symptoms such as hallucinations and delusions. We review studies supporting a prediction-failure hypothesis of perceptual and cognitive disturbances. We also outline neural correlates underlying prediction function and failure, highlighting similarities across the visual, auditory, and somatosensory systems. In linking basic psychophysical and psychophysiological evidence of visual, auditory, and somatosensory prediction failures to neuropsychiatric symptoms, our review furthers our understanding of disease mechanisms. © 2018 New York Academy of Sciences.
Barker, Matthew D; Purdy, Suzanne C
2016-01-01
This research investigates a novel tablet-based method for identifying and assessing poor auditory processing in school-aged children. Feasibility and test-retest reliability are investigated by examining the percentage of Group 1 participants able to complete the tasks and developmental effects on performance. Concurrent validity was investigated against traditional tests of auditory processing using Group 2. There were 847 students aged 5 to 13 years in Group 1, and 46 aged 5 to 14 years in Group 2. Some tasks could not be completed by the youngest participants. Significant correlations were found between results of most auditory processing areas assessed by the Feather Squadron test and traditional auditory processing tests. Test-retest comparisons indicated good reliability for most of the Feather Squadron assessments and some of the traditional tests. The results indicate the Feather Squadron assessment is a time-efficient, feasible, concurrently valid, and reliable approach for measuring auditory processing in school-aged children. Clinically, this may be a useful option for audiologists when performing auditory processing assessments as it is a relatively fast, engaging, and easy way to assess auditory processing abilities. Research is needed to further investigate the construct validity of this new assessment by examining the association between performance on Feather Squadron and evoked-potential, lesion, and/or functional-imaging measures of auditory function.
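Test-retest reliability of the kind reported here is commonly summarized as the correlation between first and second administrations of the same subtest. A minimal sketch with hypothetical scores (not data from this study):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between paired score lists."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return (xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum())

# Hypothetical test and retest scores for one subtest, eight children.
test = [12, 15, 9, 20, 18, 14, 11, 16]
retest = [13, 14, 10, 19, 17, 15, 12, 18]
print(pearson_r(test, retest) > 0.9)  # high test-retest agreement
```

"Good reliability" in the abstract corresponds to high values of such a coefficient (intraclass correlations are often preferred in practice, since Pearson's r ignores systematic score shifts between sessions).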
Eye closure helps memory by reducing cognitive load and enhancing visualisation.
Vredeveldt, Annelies; Hitch, Graham J; Baddeley, Alan D
2011-10-01
Closing the eyes helps memory. We investigated the mechanisms underlying the eyeclosure effect by exposing 80 eyewitnesses to different types of distraction during the witness interview: blank screen (control), eyes closed, visual distraction, and auditory distraction. We examined the cognitive load hypothesis by comparing any type of distraction (visual or auditory) with minimal distraction (blank screen or eyes closed). We found recall to be significantly better when distraction was minimal, providing evidence that eyeclosure reduces cognitive load. We examined the modality-specific interference hypothesis by comparing the effects of visual and auditory distraction on recall of visual and auditory information. Visual and auditory distraction selectively impaired memory for information presented in the same modality, supporting the role of visualisation in the eyeclosure effect. Analysis of recall in terms of grain size revealed that recall of basic information about the event was robust, whereas recall of specific details was prone to both general and modality-specific disruptions.
Baltus, Alina; Herrmann, Christoph Siegfried
2016-06-01
Oscillatory EEG activity in the human brain with frequencies in the gamma range (approx. 30–80 Hz) is known to be relevant for a large number of cognitive processes. Interestingly, each subject reveals an individual frequency of the auditory gamma-band response (GBR) that coincides with the peak in the auditory steady state response (ASSR). A common resonance frequency of auditory cortex seems to underlie both the individual frequency of the GBR and the peak of the ASSR. This review sheds light on the functional role of oscillatory gamma activity for auditory processing. For successful processing, the auditory system has to track changes in auditory input over time and store information about past events in memory which allows the construction of auditory objects. Recent findings support the idea of gamma oscillations being involved in the partitioning of auditory input into discrete samples to facilitate higher order processing. We review experiments that seem to suggest that inter-individual differences in the resonance frequency are behaviorally relevant for gap detection and speech processing. A possible application of these resonance frequencies for brain-computer interfaces is illustrated with regard to optimized individual presentation rates for auditory input to correspond with endogenous oscillatory activity. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.
Musical Experience, Sensorineural Auditory Processing, and Reading Subskills in Adults.
Tichko, Parker; Skoe, Erika
2018-04-27
Developmental research suggests that sensorineural auditory processing, reading subskills (e.g., phonological awareness and rapid naming), and musical experience are related during early periods of reading development. Interestingly, recent work suggests that these relations may extend into adulthood, with indices of sensorineural auditory processing relating to global reading ability. However, it is largely unknown whether sensorineural auditory processing relates to specific reading subskills, such as phonological awareness and rapid naming, as well as musical experience in mature readers. To address this question, we recorded electrophysiological responses to a repeating click (auditory stimulus) in a sample of adult readers. We then investigated relations between electrophysiological responses to sound, reading subskills, and musical experience in this same set of adult readers. Analyses suggest that sensorineural auditory processing, reading subskills, and musical experience are related in adulthood, with faster neural conduction times and greater musical experience associated with stronger rapid-naming skills. These results are similar to the developmental findings that suggest reading subskills are related to sensorineural auditory processing and musical experience in children.
Behavioral Indications of Auditory Processing Disorders.
ERIC Educational Resources Information Center
Hartman, Kerry McGoldrick
1988-01-01
Identifies disruptive behaviors of children that may indicate central auditory processing disorders (CAPDs), perceptual handicaps of auditory discrimination or auditory memory not related to hearing ability. Outlines steps to modify the communication environment for CAPD children at home and in the classroom. (SV)
ERIC Educational Resources Information Center
Boets, Bart; Verhoeven, Judith; Wouters, Jan; Steyaert, Jean
2015-01-01
We investigated low-level auditory spectral and temporal processing in adolescents with autism spectrum disorder (ASD) and early language delay compared to matched typically developing controls. Auditory measures were designed to target right versus left auditory cortex processing (i.e. frequency discrimination and slow amplitude modulation (AM)…
Skouras, Stavros; Lohmann, Gabriele
2018-01-01
Sound is a potent elicitor of emotions. Auditory core, belt and parabelt regions have anatomical connections to a large array of limbic and paralimbic structures which are involved in the generation of affective activity. However, little is known about the functional role of auditory cortical regions in emotion processing. Using functional magnetic resonance imaging and music stimuli that evoke joy or fear, our study reveals that anterior and posterior regions of auditory association cortex have emotion-characteristic functional connectivity with limbic/paralimbic (insula, cingulate cortex, and striatum), somatosensory, visual, motor-related, and attentional structures. We found that these regions have remarkably high emotion-characteristic eigenvector centrality, revealing that they have influential positions within emotion-processing brain networks with “small-world” properties. By contrast, primary auditory fields showed surprisingly strong emotion-characteristic functional connectivity with intra-auditory regions. Our findings demonstrate that the auditory cortex hosts regions that are influential within networks underlying the affective processing of auditory information. We anticipate our results to incite research specifying the role of the auditory cortex—and sensory systems in general—in emotion processing, beyond the traditional view that sensory cortices have merely perceptual functions. PMID:29385142
Spatial processing in the auditory cortex of the macaque monkey
NASA Astrophysics Data System (ADS)
Recanzone, Gregg H.
2000-10-01
The patterns of cortico-cortical and cortico-thalamic connections of auditory cortical areas in the rhesus monkey have led to the hypothesis that acoustic information is processed in series and in parallel in the primate auditory cortex. Recent physiological experiments in the behaving monkey indicate that the response properties of neurons in different cortical areas are both functionally distinct from each other, which is indicative of parallel processing, and functionally similar to each other, which is indicative of serial processing. Thus, auditory cortical processing may be similar to the serial and parallel "what" and "where" processing by the primate visual cortex. If "where" information is serially processed in the primate auditory cortex, neurons in cortical areas along this pathway should have progressively better spatial tuning properties. This prediction is supported by recent experiments that have shown that neurons in the caudomedial field have better spatial tuning properties than neurons in the primary auditory cortex. Neurons in the caudomedial field are also better than primary auditory cortex neurons at predicting the sound localization ability across different stimulus frequencies and bandwidths in both azimuth and elevation. These data support the hypothesis that the primate auditory cortex processes acoustic information in a serial and parallel manner and suggest that this may be a general cortical mechanism for sensory perception.
Speech enhancement using the modified phase-opponency model.
Deshmukh, Om D; Espy-Wilson, Carol Y; Carney, Laurel H
2007-06-01
In this paper we present the Modified Phase-Opponency (MPO) model for single-channel speech enhancement when the speech is corrupted by additive noise. The MPO model is based on the auditory phase-opponency (PO) model, proposed for the detection of tones in noise. The PO model includes a physiologically realistic mechanism for processing the information in neural discharge times and exploits the frequency-dependent phase properties of the tuned filters in the auditory periphery by using a cross-auditory-nerve-fiber coincidence detection for extracting temporal cues. The MPO model alters the components of the PO model such that the basic functionality of the PO model is maintained but the properties of the model can be analyzed and modified independently. The MPO-based speech enhancement scheme does not need to estimate the noise characteristics, nor does it assume that the noise satisfies any statistical model. The MPO technique leads to the lowest value of the LPC-based objective measures and the highest value of the perceptual evaluation of speech quality measure compared to other methods when the speech signals are corrupted by fluctuating noise. Combining the MPO speech enhancement technique with our aperiodicity, periodicity, and pitch detector further improves its performance.
Moving to the Beat and Singing are Linked in Humans
Dalla Bella, Simone; Berkowska, Magdalena; Sowiński, Jakub
2015-01-01
The abilities to sing and to move to the beat of a rhythmic auditory stimulus emerge early during development, and both engage perceptual, motor, and sensorimotor processes. These similarities between singing and synchronization to a beat may be rooted in biology. Patel (2008) has suggested that motor synchronization to auditory rhythms may have emerged during evolution as a byproduct of selection for vocal learning (“vocal learning and synchronization hypothesis”). This view predicts a strong link between vocal performance and synchronization skills in humans. Here, we tested this prediction by asking occasional singers to tap along with auditory pulse trains and to imitate familiar melodies. Both vocal imitation and synchronization skills were measured in terms of accuracy and precision or consistency. Accurate and precise singers tapped more in the vicinity of the pacing stimuli (i.e., they were more accurate) than less accurate and less precise singers. Moreover, accurate singers were more consistent when tapping to the beat. These differences cannot be ascribed to basic motor skills or to motivational factors. Individual differences in terms of singing proficiency and synchronization skills may reflect the variability of a shared sensorimotor translation mechanism. PMID:26733370
The Effect of Early Visual Deprivation on the Neural Bases of Auditory Processing.
Guerreiro, Maria J S; Putzar, Lisa; Röder, Brigitte
2016-02-03
Transient congenital visual deprivation affects visual and multisensory processing. In contrast, the extent to which it affects auditory processing has not been investigated systematically. Research in permanently blind individuals has revealed brain reorganization during auditory processing, involving both intramodal and crossmodal plasticity. The present study investigated the effect of transient congenital visual deprivation on the neural bases of auditory processing in humans. Cataract-reversal individuals and normally sighted controls performed a speech-in-noise task while undergoing functional magnetic resonance imaging. Although there were no behavioral group differences, groups differed in auditory cortical responses: in the normally sighted group, auditory cortex activation increased with increasing noise level, whereas in the cataract-reversal group, no activation difference was observed across noise levels. An auditory activation of visual cortex was not observed at the group level in cataract-reversal individuals. The present data suggest prevailing auditory processing advantages after transient congenital visual deprivation, even many years after sight restoration. The present study demonstrates that people whose sight was restored after a transient period of congenital blindness show more efficient cortical processing of auditory stimuli (here speech), similarly to what has been observed in congenitally permanently blind individuals. These results underscore the importance of early sensory experience in permanently shaping brain function. Copyright © 2016 the authors.
Visual form predictions facilitate auditory processing at the N1.
Paris, Tim; Kim, Jeesun; Davis, Chris
2017-02-20
Auditory-visual (AV) events often involve a leading visual cue (e.g. auditory-visual speech) that allows the perceiver to generate predictions about the upcoming auditory event. Electrophysiological evidence suggests that when an auditory event is predicted, processing is sped up, i.e., the N1 component of the ERP occurs earlier (N1 facilitation). However, it is not clear (1) whether N1 facilitation is based specifically on prediction rather than on multisensory integration and (2) which particular properties of the visual cue it is based on. The current experiment used artificial AV stimuli in which visual cues predicted but did not co-occur with auditory cues. Visual form cues (high and low salience) and the auditory-visual pairing were manipulated so that auditory predictions could be based on form and timing or on timing only. The results showed that N1 facilitation occurred only for combined form and temporal predictions. These results suggest that faster auditory processing (as indicated by N1 facilitation) is based on predictive processing generated by a visual cue that clearly predicts both what and when the auditory stimulus will occur. Copyright © 2016. Published by Elsevier Ltd.
Auditory brainstem response to complex sounds: a tutorial
Skoe, Erika; Kraus, Nina
2010-01-01
This tutorial provides a comprehensive overview of the methodological approach to collecting and analyzing auditory brainstem responses to complex sounds (cABRs). cABRs provide a window into how behaviorally relevant sounds such as speech and music are processed in the brain. Because temporal and spectral characteristics of sounds are preserved in this subcortical response, cABRs can be used to assess specific impairments and enhancements in auditory processing. Notably, subcortical function is neither passive nor hardwired but dynamically interacts with higher-level cognitive processes to refine how sounds are transcribed into neural code. This experience-dependent plasticity, which can occur on a number of time scales (e.g., life-long experience with speech or music, short-term auditory training, online auditory processing), helps shape sensory perception. Thus, by being an objective and non-invasive means for examining cognitive function and experience-dependent processes in sensory activity, cABRs have considerable utility in the study of populations where auditory function is of interest (e.g., auditory experts such as musicians, persons with hearing loss, auditory processing and language disorders). This tutorial is intended for clinicians and researchers seeking to integrate cABRs into their clinical and/or research programs. PMID:20084007
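Responses like the cABR are typically extracted by averaging many stimulus-locked epochs, so that the time-locked neural component survives while uncorrelated noise cancels. The averaging logic can be sketched as follows (illustrative Python only; the signal shape, sampling rate, trial count, and noise level are invented for the example, not taken from the tutorial):

```python
import math
import random

def average_epochs(epochs):
    """Point-by-point average of stimulus-locked epochs: the time-locked
    component is preserved while zero-mean noise shrinks as 1/sqrt(N)."""
    n = len(epochs)
    return [sum(samples) / n for samples in zip(*epochs)]

random.seed(0)
# Hypothetical clean response: a 100 Hz tone sampled at 2 kHz for 100 ms
signal = [math.sin(2 * math.pi * 100 * t / 2000) for t in range(200)]
# 500 noisy single-trial epochs (noise std = 1, i.e. noise >> signal per trial)
epochs = [[s + random.gauss(0, 1.0) for s in signal] for _ in range(500)]
avg = average_epochs(epochs)
residual = max(abs(a - s) for a, s in zip(avg, signal))
print(residual < 0.25)  # residual noise per point is roughly 1/sqrt(500) ≈ 0.045
```

Because the stimulus-locked waveform is identical across trials, only the noise is attenuated by averaging, which is what makes the fine temporal and spectral structure of the cABR recoverable from single trials that are individually dominated by noise.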
ERIC Educational Resources Information Center
Schaadt, Gesa; Männel, Claudia; van der Meer, Elke; Pannekamp, Ann; Friederici, Angela D.
2016-01-01
Successful communication in everyday life crucially involves the processing of auditory and visual components of speech. Viewing our interlocutor and processing visual components of speech facilitates speech processing by triggering auditory processing. Auditory phoneme processing, analyzed by event-related brain potentials (ERP), has been shown…
Impact of Educational Level on Performance on Auditory Processing Tests.
Murphy, Cristina F B; Rabelo, Camila M; Silagi, Marcela L; Mansur, Letícia L; Schochat, Eliane
2016-01-01
Research has demonstrated that a higher level of education is associated with better performance on cognitive tests among middle-aged and elderly people. However, the effects of education on auditory processing skills have not yet been evaluated. Previous demonstrations of sensory-cognitive interactions in the aging process indicate the potential importance of this topic. Therefore, the primary purpose of this study was to investigate the performance of middle-aged and elderly people with different levels of formal education on auditory processing tests. A total of 177 adults with no evidence of cognitive, psychological or neurological conditions took part in the research. The participants completed a series of auditory assessments, including dichotic digit, frequency pattern and speech-in-noise tests. A working memory test was also performed to investigate the extent to which auditory processing and cognitive performance were associated. The results demonstrated positive but weak correlations between years of schooling and performance on all of the tests applied. The factor "years of schooling" was also one of the best predictors of frequency pattern and speech-in-noise test performance. Additionally, performance on the working memory, frequency pattern and dichotic digit tests was intercorrelated, suggesting that the influence of educational level on auditory processing performance might be associated with the cognitive demand of the auditory processing tests rather than auditory sensory aspects themselves. Longitudinal research is required to investigate the causal relationship between educational level and auditory processing skills.
Auditory processing disorders, verbal disfluency, and learning difficulties: a case study.
Jutras, Benoît; Lagacé, Josée; Lavigne, Annik; Boissonneault, Andrée; Lavoie, Charlen
2007-01-01
This case study reports the findings of auditory behavioral and electrophysiological measures performed on a graduate student (identified as LN) presenting verbal disfluency and learning difficulties. Results of behavioral audiological testing documented the presence of auditory processing disorders, particularly in temporal processing and binaural integration. Electrophysiological test results, including middle latency, late latency and cognitive potentials, revealed that LN's central auditory system processes acoustic stimuli differently from a reference group with normal hearing.
Perception of environmental sounds by experienced cochlear implant patients.
Shafiro, Valeriy; Gygi, Brian; Cheng, Min-Yu; Vachhani, Jay; Mulvey, Megan
2011-01-01
Environmental sound perception serves an important ecological function by providing listeners with information about objects and events in their immediate environment. Environmental sounds such as car horns, baby cries, or chirping birds can alert listeners to imminent dangers as well as contribute to one's sense of awareness and well being. Perception of environmental sounds as acoustically and semantically complex stimuli may also involve some factors common to the processing of speech. However, very limited research has investigated the abilities of cochlear implant (CI) patients to identify common environmental sounds, despite patients' general enthusiasm about them. This project (1) investigated the ability of patients with modern-day CIs to perceive environmental sounds, (2) explored associations among speech, environmental sounds, and basic auditory abilities, and (3) examined acoustic factors that might be involved in environmental sound perception. Seventeen experienced postlingually deafened CI patients participated in the study. Environmental sound perception was assessed with a large-item test composed of 40 sound sources, each represented by four different tokens. The relationship between speech and environmental sound perception and the role of working memory and some basic auditory abilities were examined based on patient performance on a battery of speech tests (HINT, CNC, and individual consonant and vowel tests), tests of basic auditory abilities (audiometric thresholds, gap detection, temporal pattern, and temporal order for tones tests), and a backward digit recall test. The results indicated substantially reduced ability to identify common environmental sounds in CI patients (45.3%). Except for vowels, all speech test scores significantly correlated with the environmental sound test scores: r = 0.73 for HINT in quiet, r = 0.69 for HINT in noise, r = 0.70 for CNC, r = 0.64 for consonants, and r = 0.48 for vowels. 
HINT and CNC scores in quiet moderately correlated with the temporal order for tones. However, the correlation between speech and environmental sounds changed little after partialling out the variance due to other variables. Present findings indicate that environmental sound identification is difficult for CI patients. They further suggest that speech and environmental sounds may overlap considerably in their perceptual processing. Certain spectrotemporal processing abilities are separately associated with speech and environmental sound performance. However, they do not appear to mediate the relationship between speech and environmental sounds in CI patients. Environmental sound rehabilitation may be beneficial to some patients. Environmental sound testing may have potential diagnostic applications, especially with difficult-to-test populations, and might also be predictive of speech performance for prelingually deafened patients with cochlear implants.
Auditory spatial processing in the human cortex.
Salminen, Nelli H; Tiitinen, Hannu; May, Patrick J C
2012-12-01
The auditory system codes spatial locations in a way that deviates from the spatial representations found in other modalities. This difference is especially striking in the cortex, where neurons form topographical maps of visual and tactile space but where auditory space is represented through a population rate code. In this hemifield code, sound source location is represented in the activity of two widely tuned opponent populations, one tuned to the right and the other to the left side of auditory space. Scientists are only beginning to uncover how this coding strategy adapts to various spatial processing demands. This review presents the current understanding of auditory spatial processing in the cortex. To this end, the authors consider how various implementations of the hemifield code may exist within the auditory cortex and how these may be modulated by the stimulation and task context. As a result, a coherent set of neural strategies for auditory spatial processing emerges.
Potts, Geoffrey F; Wood, Susan M; Kothmann, Delia; Martin, Laura E
2008-10-21
Attention directs limited-capacity information processing resources to a subset of available perceptual representations. The mechanisms by which attention selects task-relevant representations for preferential processing are not fully known. Treisman and Gelade's [Treisman, A., Gelade, G., 1980. A feature integration theory of attention. Cognit. Psychol. 12, 97-136.] influential attention model posits that simple features are processed preattentively, in parallel, but that attention is required to serially conjoin multiple features into an object representation. Event-related potentials have provided evidence for this model showing parallel processing of perceptual features in the posterior Selection Negativity (SN) and serial, hierarchic processing of feature conjunctions in the Frontal Selection Positivity (FSP). Most prior studies have been done on conjunctions within one sensory modality while many real-world objects have multimodal features. It is not known if the same neural systems of posterior parallel processing of simple features and frontal serial processing of feature conjunctions seen within a sensory modality also operate on conjunctions between modalities. The current study used ERPs and simultaneously presented auditory and visual stimuli in three task conditions: Attend Auditory (auditory feature determines the target, visual features are irrelevant), Attend Visual (visual features relevant, auditory irrelevant), and Attend Conjunction (target defined by the co-occurrence of an auditory and a visual feature). In the Attend Conjunction condition when the auditory but not the visual feature was a target there was an SN over auditory cortex, when the visual but not auditory stimulus was a target there was an SN over visual cortex, and when both auditory and visual stimuli were targets (i.e. conjunction target) there were SNs over both auditory and visual cortex, indicating parallel processing of the simple features within each modality. 
In contrast, an FSP was present when either the visual only or both auditory and visual features were targets, but not when only the auditory stimulus was a target, indicating that the conjunction target determination was evaluated serially and hierarchically with visual information taking precedence. This indicates that the detection of a target defined by audio-visual conjunction is achieved via the same mechanism as within a single perceptual modality, through separate, parallel processing of the auditory and visual features and serial processing of the feature conjunction elements, rather than by evaluation of a fused multimodal percept.
Auditory risk assessment of college music students in jazz band-based instructional activity.
Gopal, Kamakshi V; Chesky, Kris; Beschoner, Elizabeth A; Nelson, Paul D; Stewart, Bradley J
2013-01-01
It is well known that musicians are at risk for music-induced hearing loss; however, systematic evaluation of music exposure and its effects on the auditory system is still difficult. The purpose of the study was to determine whether college students in jazz band-based instructional activity are exposed to loud classroom noise and consequently exhibit acute but significant changes in basic auditory measures compared to non-music students in regular classroom sessions. For this we (1) measured and compared personal exposure levels of college students (n = 14) participating in a routine 50-min jazz ensemble-based instructional activity (experimental) to personal exposure levels of non-music students (n = 11) participating in a 50-min regular classroom activity (control), and (2) measured and compared pre- to post-auditory changes associated with these two types of classroom exposures. Results showed that the Leq (equivalent continuous noise level) generated during the 50-min jazz ensemble-based instructional activity ranged from 95 dBA to 105.8 dBA with a mean of 99.5 ± 2.5 dBA. In the regular classroom, the Leq ranged from 46.4 dBA to 67.4 dBA with a mean of 49.9 ± 10.6 dBA. Additionally, significant differences were observed in pre- to post-auditory measures between the two groups. The experimental group showed a significant temporary threshold shift bilaterally at 4000 Hz (P < 0.05), and a significant decrease in the amplitude of transient-evoked otoacoustic emission response in both ears (P < 0.05) after exposure to the jazz ensemble-based instructional activity. No significant changes were found in the control group between pre- and post-exposure measures. This study quantified the noise exposure in jazz band-based practice sessions and its effects on basic auditory measures. Temporary, yet significant, auditory changes seen in music students place them at risk for hearing loss compared to their non-music cohorts.
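For context, the Leq reported above is an energy average rather than an arithmetic mean of decibel readings: levels are converted to relative intensities, averaged, and converted back to dB. A small sketch assuming equal-duration SPL samples (the readings below are invented, chosen near the reported 95-105.8 dBA range):

```python
import math

def leq(levels_db):
    """Equivalent continuous sound level: energy average of SPL readings
    taken over equal-duration intervals, returned in dB."""
    mean_intensity = sum(10 ** (level / 10) for level in levels_db) / len(levels_db)
    return 10 * math.log10(mean_intensity)

print(round(leq([95.0, 98.0, 100.0, 103.0, 105.8]), 1))  # → 101.9
```

Because energy averaging weights loud moments heavily, the Leq (101.9 dB here) sits above the arithmetic mean of the same readings (100.4 dB), which is why brief loud passages dominate a session's exposure figure.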
Rimmele, Johanna Maria; Sussman, Elyse; Poeppel, David
2015-02-01
Listening situations with multiple talkers or background noise are common in everyday communication and are particularly demanding for older adults. Here we review current research on auditory perception in aging individuals in order to gain insights into the challenges of listening under noisy conditions. Informationally rich temporal structure in auditory signals--over a range of time scales from milliseconds to seconds--renders temporal processing central to perception in the auditory domain. We discuss the role of temporal structure in auditory processing, in particular from a perspective relevant for hearing in background noise, and focusing on sensory memory, auditory scene analysis, and speech perception. Interestingly, these auditory processes, usually studied in an independent manner, show considerable overlap of processing time scales, even though each has its own 'privileged' temporal regimes. By integrating perspectives on temporal structure processing in these three areas of investigation, we aim to highlight similarities typically not recognized. Copyright © 2014 Elsevier B.V. All rights reserved.
Mossbridge, Julia; Zweig, Jacob; Grabowecky, Marcia; Suzuki, Satoru
2016-01-01
The perceptual system integrates synchronized auditory-visual signals in part to promote individuation of objects in cluttered environments. The processing of auditory-visual synchrony may more generally contribute to cognition by synchronizing internally generated multimodal signals. Reading is a prime example because the ability to synchronize internal phonological and/or lexical processing with visual orthographic processing may facilitate encoding of words and meanings. Consistent with this possibility, developmental and clinical research has suggested a link between reading performance and the ability to compare visual spatial/temporal patterns with auditory temporal patterns. Here, we provide converging behavioral and electrophysiological evidence suggesting that greater behavioral ability to judge auditory-visual synchrony (Experiment 1) and greater sensitivity of an electrophysiological marker of auditory-visual synchrony processing (Experiment 2) both predict superior reading comprehension performance, accounting for 16% and 25% of the variance, respectively. These results support the idea that the mechanisms that detect auditory-visual synchrony contribute to reading comprehension. PMID:28129060
Single-unit analysis of somatosensory processing in the core auditory cortex of hearing ferrets.
Meredith, M Alex; Allman, Brian L
2015-03-01
The recent findings in several species that the primary auditory cortex processes non-auditory information have largely overlooked the possibility of somatosensory effects. Therefore, the present investigation examined the core auditory cortices (anterior auditory field and primary auditory cortex) for tactile responsivity. Multiple single-unit recordings from anesthetised ferret cortex yielded histologically verified neurons (n = 311) tested with electronically controlled auditory, visual and tactile stimuli, and their combinations. Of the auditory neurons tested, a small proportion (17%) was influenced by visual cues, but a somewhat larger number (23%) was affected by tactile stimulation. Tactile effects rarely occurred alone and spiking responses were observed in bimodal auditory-tactile neurons. However, the broadest tactile effect that was observed, which occurred in all neuron types, was that of suppression of the response to a concurrent auditory cue. The presence of tactile effects in the core auditory cortices was supported by a substantial anatomical projection from the rostral suprasylvian sulcal somatosensory area. Collectively, these results demonstrate that crossmodal effects in the auditory cortex are not exclusively visual and that somatosensation plays a significant role in modulation of acoustic processing, and indicate that crossmodal plasticity following deafness may unmask these existing non-auditory functions. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Auditory object perception: A neurobiological model and prospective review.
Brefczynski-Lewis, Julie A; Lewis, James W
2017-10-01
Interaction with the world is a multisensory experience, but most of what is known about the neural correlates of perception comes from studying vision. Auditory input enters the cortex with its own set of unique qualities and supports oral communication, speech, music, and the understanding of the emotional and intentional states of others, all of which are central to the human experience. To better understand how the auditory system develops, recovers after injury, and how it may have transitioned in its functions over the course of hominin evolution, advances are needed in models of how the human brain is organized to process real-world natural sounds and "auditory objects". This review presents a simple fundamental neurobiological model of hearing perception at a category level that incorporates principles of bottom-up signal processing together with top-down constraints of grounded cognition theories of knowledge representation. Though mostly derived from human neuroimaging literature, this theoretical framework highlights rudimentary principles of real-world sound processing that may apply to most if not all mammalian species with hearing and acoustic communication abilities. The model encompasses three basic categories of sound-source: (1) action sounds (non-vocalizations) produced by 'living things', with human (conspecific) and non-human animal sources representing two subcategories; (2) action sounds produced by 'non-living things', including environmental sources and human-made machinery; and (3) vocalizations ('living things'), with human versus non-human animals as two subcategories therein. The model is presented in the context of cognitive architectures relating to multisensory, sensory-motor, and spoken language organizations. The model's predictive value is further discussed in the context of anthropological theories of oral communication evolution and the neurodevelopment of spoken language proto-networks in infants/toddlers. 
These phylogenetic and ontogenetic frameworks both entail cortical network maturations that are proposed to at least in part be organized around a number of universal acoustic-semantic signal attributes of natural sounds, which are addressed herein. Copyright © 2017. Published by Elsevier Ltd.
Auditory Processing Testing: In the Booth versus Outside the Booth.
Lucker, Jay R
2017-09-01
Many audiologists believe that auditory processing testing must be carried out in a soundproof booth. This expectation is especially a problem in places such as elementary schools. Research comparing pure-tone thresholds obtained in sound booths with those obtained in quiet test environments outside of these booths does not support that belief. Auditory processing testing is generally carried out at suprathreshold levels, and therefore may be even less likely to require a soundproof booth. The present study was carried out to compare test results in soundproof booths versus quiet rooms. The purpose of this study was to determine whether auditory processing tests can be administered in a quiet test room rather than in the soundproof test suite. The outcomes would indicate whether audiologists can provide auditory processing testing for children under various test conditions, including quiet rooms at their schools. A battery of auditory processing tests was administered at a test level equivalent to 50 dB HL through headphones. The same equipment was used for testing in both locations. Twenty participants identified with normal hearing were included in this study, ten having no auditory processing concerns and ten exhibiting auditory processing problems. All participants underwent a battery of tests, both inside the test booth and outside the booth in a quiet room. Order of testing (inside versus outside) was counterbalanced. Participants were first determined to have normal hearing thresholds for tones and speech. Auditory processing tests were recorded and presented from an HP EliteBook laptop computer with noise-canceling headphones attached to a y-cord that not only presented the test stimuli to the participants but also allowed monitor headphones to be worn by the evaluator. The same equipment was used inside as well as outside the booth. 
No differences were found for each auditory processing measure as a function of the test setting or the order in which testing was done, that is, in the booth or in the room. Results from the present study indicate that one can obtain the same results on auditory processing tests, regardless of whether testing is completed in a soundproof booth or in a quiet test environment. Therefore, audiologists should not be required to test for auditory processing in a soundproof booth. This study shows that audiologists can conduct testing in a quiet room so long as the background noise is sufficiently controlled. American Academy of Audiology
Møller, Cecilie; Højlund, Andreas; Bærentsen, Klaus B; Hansen, Niels Chr; Skewes, Joshua C; Vuust, Peter
2018-05-01
Perception is fundamentally a multisensory experience. The principle of inverse effectiveness (PoIE) states that multisensory gain is maximal when responses to the unisensory constituents of the stimuli are weak. It is one of the basic principles underlying multisensory processing of spatiotemporally corresponding crossmodal stimuli and is well established at both behavioral and neural levels. It is not yet clear, however, how modality-specific stimulus features influence discrimination of subtle changes in a crossmodally corresponding feature belonging to another modality. Here, we tested the hypothesis that reliance on visual cues in pitch discrimination follows the PoIE at the interindividual level (i.e., varies with varying levels of auditory-only pitch discrimination abilities). Using an oddball pitch discrimination task, we measured the effect of varying visually perceived vertical position in participants exhibiting a wide range of pitch discrimination abilities (i.e., musicians and nonmusicians). Visual cues significantly enhanced pitch discrimination as measured by the sensitivity index d', and more so in the crossmodally congruent than incongruent condition. The magnitude of gain caused by compatible visual cues was associated with individual pitch discrimination thresholds, as predicted by the PoIE. This was not the case for the magnitude of the congruence effect, which was unrelated to individual pitch discrimination thresholds, indicating that the pitch-height association is robust to variations in auditory skills. Our findings shed light on individual differences in multisensory processing by suggesting that relevant multisensory information that crucially aids some perceivers' performance may be of less importance to others, depending on their unisensory abilities.
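The sensitivity index d' used in the oddball task above comes from signal detection theory: it is the difference between the z-transformed hit rate and false-alarm rate. A minimal sketch with a standard log-linear correction for extreme proportions (the trial counts are hypothetical, not the study's data):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate), with a
    log-linear correction so observed rates of 0 or 1 stay finite."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical oddball run: 45 of 50 deviants detected, 5 false alarms on 100 standards
print(round(d_prime(45, 5, 5, 95), 2))
```

Unlike raw accuracy, d' separates perceptual sensitivity from response bias, which is why it is the preferred measure when comparing discrimination across congruent and incongruent crossmodal conditions.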
Muenssinger, Jana; Stingl, Krunoslav T.; Matuz, Tamara; Binder, Gerhard; Ehehalt, Stefan; Preissl, Hubert
2013-01-01
Habituation, the response decrement to repetitively presented stimulation, is a basic cognitive capability and is well suited to investigating the development and integrity of the human brain. To evaluate the developmental course of auditory habituation, the current study used magnetoencephalography (MEG) to investigate auditory habituation, dishabituation and stimulus specificity in children and adults and compared the results between age groups. Twenty-nine children (mean age = 9.69 years, SD = 0.47) and 14 adults (mean age = 29.29 years, SD = 3.47) participated in the study and passively listened to a habituation paradigm consisting of 100 trains of tones, each composed of five 500 Hz tones, one 750 Hz tone (dishabituator) and another two 500 Hz tones, while focusing their attention on a silent movie. Adults showed the expected habituation and stimulus specificity within trains, while no response decrement was found between trains. Sensory adaptation or fatigue as a source for response decrement in adults is unlikely due to the strong reaction to the dishabituator (stimulus specificity) and strong mismatch negativity (MMN) responses. In children, however, neither habituation nor dishabituation or stimulus specificity could be found within trains; response decrement was found across trains. It can be speculated that the differences between children and adults are linked to differences in stimulus processing due to attentional processes. This study shows developmental differences in task-related brain activation and discusses the possible influence of broader concepts such as attention, which should be taken into account when comparing performance in an identical task between age groups. PMID:23882207
Musical Experience, Sensorineural Auditory Processing, and Reading Subskills in Adults
Tichko, Parker; Skoe, Erika
2018-01-01
Developmental research suggests that sensorineural auditory processing, reading subskills (e.g., phonological awareness and rapid naming), and musical experience are related during early periods of reading development. Interestingly, recent work suggests that these relations may extend into adulthood, with indices of sensorineural auditory processing relating to global reading ability. However, it is largely unknown whether sensorineural auditory processing relates to specific reading subskills, such as phonological awareness and rapid naming, as well as musical experience in mature readers. To address this question, we recorded electrophysiological responses to a repeating click (auditory stimulus) in a sample of adult readers. We then investigated relations between electrophysiological responses to sound, reading subskills, and musical experience in this same set of adult readers. Analyses suggest that sensorineural auditory processing, reading subskills, and musical experience are related in adulthood, with faster neural conduction times and greater musical experience associated with stronger rapid-naming skills. These results are similar to the developmental findings that suggest reading subskills are related to sensorineural auditory processing and musical experience in children. PMID:29702572
Comorbidity of Auditory Processing, Language, and Reading Disorders
ERIC Educational Resources Information Center
Sharma, Mridula; Purdy, Suzanne C.; Kelly, Andrea S.
2009-01-01
Purpose: The authors assessed comorbidity of auditory processing disorder (APD), language impairment (LI), and reading disorder (RD) in school-age children. Method: Children (N = 68) with suspected APD and nonverbal IQ standard scores of 80 or more were assessed using auditory, language, reading, attention, and memory measures. Auditory processing…
Yang, L; Chen, S; Chen, C-M; Khan, F; Forchelli, G; Javitt, D C
2012-07-01
While 20% of schizophrenia patients worldwide speak tonal languages (e.g., Mandarin), studies to date have been limited to speakers of Western languages. Western-language patients show tonal deficits related to impaired emotional processing of speech, whereas language processing is minimally affected. In Mandarin, by contrast, syllables are voiced in one of four tones, and word meaning varies accordingly. We hypothesized that Mandarin-speaking schizophrenia patients would show impairments in underlying basic auditory processing that, unlike in Western groups, would relate to deficits in word recognition and social outcomes. Altogether, 22 Mandarin-speaking schizophrenia patients and 44 matched healthy participants were recruited from New York City. The auditory tasks were: (1) tone matching; (2) distorted tunes; (3) Chinese word discrimination; (4) Chinese word identification. Social outcomes were measured by marital status, employment, and most recent employment status. Patients showed deficits in tone matching, distorted tunes, word discrimination, and word identification versus controls (all p<0.0001). Impairment in tone matching across groups correlated with both word identification (p<0.0001) and word discrimination (p<0.0001). On social outcomes, tonally impaired patients held 'lower-status' jobs overall compared with tonally intact patients (p<0.005) and controls (p<0.0001). Our study is the first to investigate the interaction between neuropsychology and language among Mandarin-speaking schizophrenia patients. As predicted, patients were highly impaired in both tone and auditory word processing, and these two measures were significantly correlated. Tonally impaired patients showed significantly worse employment status than tonally intact patients, suggesting a link between sensory impairment and employment outcomes. While neuropsychological deficits appear similar cross-culturally, their consequences may be language- and culture-dependent.
The use of listening devices to ameliorate auditory deficit in children with autism.
Rance, Gary; Saunders, Kerryn; Carew, Peter; Johansson, Marlin; Tan, Johanna
2014-02-01
To evaluate both monaural and binaural processing skills in a group of children with autism spectrum disorder (ASD) and to determine the degree to which personal frequency-modulation (FM; radio transmission) listening systems could ameliorate their listening difficulties. Auditory temporal processing (amplitude modulation detection), spatial listening (integration of binaural difference cues), and functional hearing (speech perception in background noise) were evaluated in 20 children with ASD. Ten of these subsequently undertook a 6-week device trial in which they wore the FM system for up to 7 hours per day. Auditory temporal processing and spatial listening ability were poorer in subjects with ASD than in matched controls (temporal: P = .014 [95% CI -6.4 to -0.8 dB]; spatial: P = .003 [1.0 to 4.4 dB]), and performance on both of these basic processing measures was correlated with speech perception ability (temporal: r = -0.44, P = .022; spatial: r = -0.50, P = .015). The provision of FM listening systems resulted in improved discrimination of speech in noise (P < .001 [11.6% to 21.7%]). Furthermore, both participant and teacher questionnaire data revealed device-related benefits across a range of evaluation categories, including Effect of Background Noise (P = .036 [-60.7% to -2.8%]) and Ease of Communication (P = .019 [-40.1% to -5.0%]). Eight of the 10 participants who undertook the 6-week device trial remained consistent FM users at study completion. Sustained use of FM listening devices can enhance speech perception in noise, aid social interaction, and improve educational outcomes in children with ASD. Copyright © 2014 Mosby, Inc. All rights reserved.
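Amplitude modulation detection, the temporal processing measure used above, typically asks listeners to distinguish a steady tone from one whose envelope is sinusoidally modulated. A minimal sketch of such a stimulus (carrier frequency, modulation rate, and depth are illustrative choices, not the study's parameters):

```python
import math

def am_tone(fc=1000.0, fm=4.0, depth=0.5, dur=1.0, sr=16000):
    """Sinusoidally amplitude-modulated tone:
    s(t) = (1 + depth * sin(2*pi*fm*t)) * sin(2*pi*fc*t).
    depth = 0 gives an unmodulated carrier (the reference stimulus)."""
    n = int(dur * sr)
    return [(1 + depth * math.sin(2 * math.pi * fm * i / sr))
            * math.sin(2 * math.pi * fc * i / sr)
            for i in range(n)]

stimulus = am_tone()        # modulated interval
reference = am_tone(depth=0)  # unmodulated interval
```

In an adaptive detection task, `depth` would be lowered toward the listener's modulation detection threshold; the envelope bounds the signal within ±(1 + depth).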
Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale
2017-04-01
There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability relates to differences in central processing, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6, AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times to auditory and audio-visual stimuli compared with normal-hearing (NH) listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected in a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement in audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may provide important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.
Talebi, Hossein; Moossavi, Abdollah; Faghihzadeh, Soghrat
2014-01-01
Older adults with cerebrovascular accident (CVA) show evidence of auditory and speech perception problems. The present study examined whether these problems are due to impairments of concurrent auditory segregation, the basic level of auditory scene analysis and auditory organization in scenes with competing sounds. Concurrent auditory segregation was assessed with the competing sentence test (CST) and the dichotic digits test (DDT) and compared in 30 male older adults (15 normal and 15 with right-hemisphere CVA) in the same age range (60-75 years old). For the CST, participants were presented with a target message in one ear and a competing message in the other; the task was to listen to the target sentence and repeat it back while ignoring the competing sentence. For the DDT, the auditory stimuli were monosyllabic digits presented dichotically, and the task was to repeat them. Comparing mean CST and DDT scores between CVA patients with right-hemisphere impairment and normal participants showed statistically significant differences (p=0.001 for CST and p<0.0001 for DDT). The present study revealed that the abnormal CST and DDT scores of participants with right-hemisphere CVA could be related to concurrent segregation difficulties. These findings suggest that low-level segregation mechanisms and/or high-level attention mechanisms might contribute to the problems.
Romero, Ana Carla Leite; Alfaya, Lívia Marangoni; Gonçales, Alina Sanches; Frizzo, Ana Claudia Figueiredo; Isaac, Myriam de Lima
2016-01-01
Introduction The auditory system of HIV-positive children may have deficits at various levels, such as a high incidence of middle-ear problems that can cause hearing loss. Objective The objective of this study is to characterize the performance of children infected with the Human Immunodeficiency Virus (HIV) on the Simplified Auditory Processing Test (SAPT) and the Staggered Spondaic Word Test. Methods We performed behavioral tests composed of the Simplified Auditory Processing Test and the Portuguese version of the Staggered Spondaic Word Test (SSW). The participants were 15 children infected with HIV, all using antiretroviral medication. Results The children had abnormal auditory processing as verified by the SAPT and the Portuguese version of the SSW. On the SAPT, 60% of the children presented hearing impairment, and the memory test for verbal sounds showed the most errors (53.33%); on the SSW, 86.67% of the children showed deficiencies indicating deficits in figure-ground, attention, and auditory memory skills. Furthermore, there were more errors under background-noise conditions in both age groups, with most errors occurring in the left ear in the group of 8-year-olds and similar results in the group aged 9 years. Conclusion The high incidence of hearing loss in children with HIV and its comorbidity with several biological and environmental factors indicate the need for: 1) family and professional awareness of the impact of auditory alteration on the development and learning of children with HIV, and 2) access to educational plans and follow-up with multidisciplinary teams as early as possible to minimize the damage caused by auditory deficits. PMID:28050213
Neural circuits in auditory and audiovisual memory.
Plakke, B; Romanski, L M
2016-06-01
Working memory is the ability to employ recently seen or heard stimuli and apply them to a changing cognitive context. Although much is known about language processing and visual working memory, the neurobiological basis of auditory working memory is less clear. Historically, part of the problem has been the difficulty of obtaining a robust animal model for studying auditory short-term memory. In recent years there have been neurophysiological and lesion studies indicating a cortical network involving both temporal and frontal cortices. Studies specifically targeting the role of the prefrontal cortex (PFC) in auditory working memory have suggested that dorsal and ventral prefrontal regions perform different roles during the processing of auditory mnemonic information, with the dorsolateral PFC performing similar functions for both auditory and visual working memory. In contrast, the ventrolateral PFC (VLPFC), which contains cells that respond robustly to auditory stimuli and that process both face and vocal stimuli, may be an essential locus for both auditory and audiovisual working memory. These findings suggest a critical role for the VLPFC in processing, integrating, and retaining communication information. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.
Auditory processing theories of language disorders: past, present, and future.
Miller, Carol A
2011-07-01
The purpose of this article is to provide information that will assist readers in understanding and interpreting research literature on the role of auditory processing in communication disorders. A narrative review was used to summarize and synthesize the literature on auditory processing deficits in children with auditory processing disorder (APD), specific language impairment (SLI), and dyslexia. The history of auditory processing theories of these 3 disorders is described, points of convergence and controversy within and among the different branches of research literature are considered, and the influence of research on practice is discussed. The theoretical and clinical contributions of neurophysiological methods are also reviewed, and suggested approaches for critical reading of the research literature are provided. Research on the role of auditory processing in communication disorders springs from a variety of theoretical perspectives and assumptions, and this variety, combined with controversies over the interpretation of research results, makes it difficult to draw clinical implications from the literature. Neurophysiological research methods are a promising route to better understanding of auditory processing. Progress in theory development and its clinical application is most likely to be made when researchers from different disciplines and theoretical perspectives communicate clearly and combine the strengths of their approaches.
Visual and auditory perception in preschool children at risk for dyslexia.
Ortiz, Rosario; Estévez, Adelina; Muñetón, Mercedes; Domínguez, Carolina
2014-11-01
Recently, there has been renewed interest in the perceptual problems of dyslexics. One polemic research issue in this area has been the nature of the perceptual deficit; another is its causal role in dyslexia. Most studies have been carried out in literate adults and children; consequently, the observed deficits may be the result rather than the cause of dyslexia. This study addresses these issues by examining visual and auditory perception in children at risk for dyslexia. We compared preschool children with and without risk for dyslexia on auditory and visual temporal-order judgment tasks and same-different discrimination tasks. Identical visual and auditory, linguistic and nonlinguistic stimuli were presented in both tasks. The results revealed that both the visual and the auditory perception of children at risk for dyslexia is impaired. The between-group comparison shows that the achievement of children at risk was lower than that of children without risk on the temporal tasks. There were no differences between groups on the auditory discrimination tasks. The difficulties of children at risk in visual and auditory perceptual processing affected both linguistic and nonlinguistic stimuli. We conclude that children at risk for dyslexia show auditory and visual perceptual deficits for linguistic and nonlinguistic stimuli. The auditory impairment may be explained by temporal processing problems, and these problems are more serious for processing language than for processing other auditory stimuli. These visual and auditory perceptual deficits are not a consequence of failing to learn to read; thus, the findings support the theory of a temporal processing deficit. Copyright © 2014 Elsevier Ltd. All rights reserved.
Psychophysical and Neural Correlates of Auditory Attraction and Aversion
NASA Astrophysics Data System (ADS)
Patten, Kristopher Jakob
This study explores the psychophysical and neural processes associated with the perception of sounds as either pleasant or aversive. The underlying psychophysical theory is based on auditory scene analysis, the process through which listeners parse auditory signals into individual acoustic sources. The first experiment tests and confirms that a self-rated pleasantness continuum reliably exists across 20 varied stimuli (r = .48). In addition, the pleasantness continuum correlated with the physical acoustic characteristic of consonance/dissonance (r = .78), which can facilitate auditory parsing processes. The second experiment uses an fMRI block design to test blood-oxygen-level-dependent (BOLD) changes elicited by a subset of 5 exemplar stimuli chosen from Experiment 1, evenly distributed over the pleasantness continuum. Specifically, it tests and confirms that the pleasantness continuum produces systematic changes in brain activity for unpleasant acoustic stimuli beyond what occurs with pleasant auditory stimuli. Results revealed that the two positively and two negatively valenced experimental sounds, compared with one neutral baseline control, elicited BOLD increases in the primary auditory cortex, specifically the bilateral superior temporal gyrus, and in the left dorsomedial prefrontal cortex; the latter is consistent with a frontal decision-making process common in identification tasks. The negatively valenced stimuli yielded additional BOLD increases in the left insula, which typically indicates processing of visceral emotions. The positively valenced stimuli did not yield any significant BOLD activation, consistent with consonant, harmonic stimuli being the prototypical acoustic pattern of auditory objects that is optimal for auditory scene analysis.
Both the psychophysical findings of Experiment 1 and the neural findings of Experiment 2 support the view that consonance is an important dimension of sound, processed in a manner that aids auditory parsing and the functional representation of acoustic objects, and that it is a principal feature of pleasing auditory stimuli.
Tuning in to the Voices: A Multisite fMRI Study of Auditory Hallucinations
Ford, Judith M.; Roach, Brian J.; Jorgensen, Kasper W.; Turner, Jessica A.; Brown, Gregory G.; Notestine, Randy; Bischoff-Grethe, Amanda; Greve, Douglas; Wible, Cynthia; Lauriello, John; Belger, Aysenil; Mueller, Bryon A.; Calhoun, Vincent; Preda, Adrian; Keator, David; O'Leary, Daniel S.; Lim, Kelvin O.; Glover, Gary; Potkin, Steven G.; Mathalon, Daniel H.
2009-01-01
Introduction: Auditory hallucinations or voices are experienced by 75% of people diagnosed with schizophrenia. We presumed that auditory cortex of schizophrenia patients who experience hallucinations is tonically “tuned” to internal auditory channels, at the cost of processing external sounds, both speech and nonspeech. Accordingly, we predicted that patients who hallucinate would show less auditory cortical activation to external acoustic stimuli than patients who did not. Methods: At 9 Functional Imaging Biomedical Informatics Research Network (FBIRN) sites, whole-brain images from 106 patients and 111 healthy comparison subjects were collected while subjects performed an auditory target detection task. Data were processed with the FBIRN processing stream. A region of interest analysis extracted activation values from primary (BA41) and secondary auditory cortex (BA42), auditory association cortex (BA22), and middle temporal gyrus (BA21). Patients were sorted into hallucinators (n = 66) and nonhallucinators (n = 40) based on symptom ratings done during the previous week. Results: Hallucinators had less activation to probe tones in left primary auditory cortex (BA41) than nonhallucinators. This effect was not seen on the right. Discussion: Although “voices” are the anticipated sensory experience, it appears that even primary auditory cortex is “turned on” and “tuned in” to process internal acoustic information at the cost of processing external sounds. Although this study was not designed to probe cortical competition for auditory resources, we were able to take advantage of the data and find significant effects, perhaps because of the power afforded by such a large sample. PMID:18987102
Wegrzyn, Martin; Herbert, Cornelia; Ethofer, Thomas; Flaisch, Tobias; Kissler, Johanna
2017-11-01
Visually presented emotional words are processed preferentially, and effects of emotional content resemble those of explicit attention deployment in that both amplify visual processing. However, auditory processing of emotional words is less well characterized, and interactions between emotional content and task-induced attention are not fully understood. Here, we investigate auditory processing of emotional words, focusing on how auditory attention to positive and negative words affects their cerebral processing. A functional magnetic resonance imaging (fMRI) study manipulating word valence and attention allocation was performed. Participants heard negative, positive, and neutral words, to which they either listened passively or attended by counting negative or positive words, respectively. Regardless of valence, active processing compared to passive listening increased activity in primary auditory cortex, left intraparietal sulcus, and right superior frontal gyrus (SFG). The attended valence elicited stronger activity in left inferior frontal gyrus (IFG) and left SFG, in line with these regions' roles in semantic retrieval and evaluative processing. No evidence was obtained for valence-specific attentional modulation in auditory regions or for distinct valence-specific regional activations (i.e., negative > positive or positive > negative). Thus, allocation of auditory attention to positive and negative words can substantially increase their processing in higher-order language and evaluative brain areas without modulating early stages of auditory processing. Inferior and superior frontal brain structures mediate interactions between emotional content, attention, and working memory when prosodically neutral speech is processed. Copyright © 2017 Elsevier Ltd. All rights reserved.
Enhanced attention-dependent activity in the auditory cortex of older musicians.
Zendel, Benjamin Rich; Alain, Claude
2014-01-01
Musical training improves auditory processing abilities, which correlates with neuroplastic changes in exogenous (input-driven) and endogenous (attention-dependent) components of auditory event-related potentials (ERPs). Evidence suggests that musicians, compared to non-musicians, experience less age-related decline in auditory processing abilities. Here, we investigated whether lifelong musicianship mitigates age-related decline in exogenous or endogenous processing by measuring auditory ERPs in younger and older musicians and non-musicians while they either attended to auditory stimuli or watched a muted, subtitled movie of their choice. Both age- and musical-training-related differences were observed in the exogenous components; however, the differences between musicians and non-musicians were similar across the lifespan. These results suggest that exogenous auditory ERPs are enhanced in musicians but decline with age at the same rate. On the other hand, attention-related activity, modeled in the right auditory cortex using a discrete spatiotemporal source analysis, was selectively enhanced in older musicians. This suggests that older musicians use a compensatory strategy to overcome age-related decline in peripheral and exogenous processing of acoustic information. Copyright © 2014 Elsevier Inc. All rights reserved.
Fengler, Ineke; Nava, Elena; Röder, Brigitte
2015-01-01
Several studies have suggested that neuroplasticity can be triggered by short-term visual deprivation in healthy adults. Specifically, these studies have provided evidence that visual deprivation reversibly affects basic perceptual abilities. The present study investigated the long-lasting effects of short-term visual deprivation on emotion perception. To this aim, we visually deprived a group of young healthy adults, age-matched with a group of non-deprived controls, for 3 h and tested them before and after visual deprivation (i.e., after 8 h on average and at 4-week follow-up) on an audio–visual (i.e., faces and voices) emotion discrimination task. To observe changes at the level of basic perceptual skills, we additionally employed a simple audio–visual (i.e., tone bursts and light flashes) discrimination task and two unimodal (one auditory and one visual) perceptual threshold measures. During the 3 h period, both groups performed a series of auditory tasks. To exclude the possibility that changes in emotion discrimination emerge merely as a consequence of exposure to auditory stimulation during the 3 h stay in the dark, we visually deprived an additional group of age-matched participants who concurrently performed tasks (i.e., tactile) unrelated to the later-tested abilities. The two visually deprived groups showed enhanced affective prosodic discrimination abilities in the context of incongruent facial expressions following the period of visual deprivation; this effect was partially maintained until follow-up. By contrast, no changes were observed in affective facial expression discrimination or in the basic perception tasks in any group. These findings suggest that short-term visual deprivation per se triggers a reweighting of visual and auditory emotional cues, which may persist over longer durations. PMID:25954166
Testing the dual-pathway model for auditory processing in human cortex.
Zündorf, Ida C; Lewald, Jörg; Karnath, Hans-Otto
2016-01-01
Analogous to the visual system, auditory information has been proposed to be processed in two largely segregated streams: an anteroventral ("what") pathway mainly subserving sound identification and a posterodorsal ("where") stream mainly subserving sound localization. Despite the popularity of this assumption, the degree of separation of spatial and non-spatial auditory information processing in cortex is still under discussion. In the present study, a statistical approach was implemented to investigate potential behavioral dissociations for spatial and non-spatial auditory processing in stroke patients, and voxel-wise lesion analyses were used to uncover their neural correlates. The results generally provided support for anatomically and functionally segregated auditory networks. However, some degree of anatomo-functional overlap between "what" and "where" aspects of processing was found in the superior pars opercularis of right inferior frontal gyrus (Brodmann area 44), suggesting the potential existence of a shared target area of both auditory streams in this region. Moreover, beyond the typically defined posterodorsal stream (i.e., posterior superior temporal gyrus, inferior parietal lobule, and superior frontal sulcus), occipital lesions were found to be associated with sound localization deficits. These results, indicating anatomically and functionally complex cortical networks for spatial and non-spatial auditory processing, are roughly consistent with the dual-pathway model of auditory processing in its original form, but argue for the need to refine and extend this widely accepted hypothesis. Copyright © 2015 Elsevier Inc. All rights reserved.
Idealized Computational Models for Auditory Receptive Fields
Lindeberg, Tony; Friberg, Anders
2015-01-01
We present a theory by which idealized models of auditory receptive fields can be derived in a principled, axiomatic manner from a set of structural properties that (i) enable invariance of receptive field responses under natural sound transformations and (ii) ensure internal consistency between spectro-temporal receptive fields at different temporal and spectral scales. For defining a time-frequency transformation of a purely temporal sound signal, it is shown that the framework allows for a new way of deriving the Gabor and Gammatone filters as well as a novel family of generalized Gammatone filters, with additional degrees of freedom to obtain different trade-offs between the spectral selectivity and the temporal delay of time-causal temporal window functions. When applied to the definition of a second layer of receptive fields from a spectrogram, it is shown that the framework leads to two canonical families of spectro-temporal receptive fields, in terms of spectro-temporal derivatives of either spectro-temporal Gaussian kernels for non-causal time or a cascade of time-causal first-order integrators over the temporal domain and a Gaussian filter over the log-spectral domain. For each filter family, the spectro-temporal receptive fields can be either separable over the time-frequency domain or adapted to local glissando transformations that represent variations in logarithmic frequencies over time. Within each domain of either non-causal or time-causal time, these receptive field families are derived by uniqueness from the assumptions. It is demonstrated how the presented framework allows for the computation of basic auditory features for audio processing and that it leads to predictions about auditory receptive fields with good qualitative similarity to biological receptive fields measured in the inferior colliculus (ICC) and primary auditory cortex (A1) of mammals. PMID:25822973
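The classical gammatone filter that the generalized family above extends has the impulse response g(t) = t^(n-1) e^(-2πbt) cos(2πf t), an order-n gamma envelope on a tonal carrier. A minimal sketch of that impulse response (the order, bandwidth, and sampling rate below are illustrative values, not those derived in the paper):

```python
import math

def gammatone_ir(fc, n=4, b=125.0, dur=0.05, sr=16000):
    """Sampled impulse response of an n-th order gammatone filter:
    g(t) = t**(n-1) * exp(-2*pi*b*t) * cos(2*pi*fc*t),
    where fc is the center frequency (Hz) and b the bandwidth parameter (Hz)."""
    ts = [i / sr for i in range(int(dur * sr))]
    return [t ** (n - 1) * math.exp(-2 * math.pi * b * t)
            * math.cos(2 * math.pi * fc * t)
            for t in ts]

ir = gammatone_ir(1000.0)  # 4th-order gammatone centered at 1 kHz
```

Larger n sharpens spectral selectivity at the cost of a longer temporal delay, which is exactly the trade-off the generalized Gammatone family in the abstract parameterizes.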
Girbau, Dolors; Schwartz, Richard G
2011-05-01
Although receptive priming has long been used to examine lexical access in adults, few studies have applied this method to children, and rarely in the auditory modality. We compared auditory associative priming in children and adults. A testing battery and a lexical decision (LD) task were administered to 42 adults and 27 children (8;1-10;11 years old) from Spain. They listened to Spanish word pairs (semantically related/unrelated word pairs and word-pseudoword pairs) and tone pairs. Participants pressed one key for word pairs and another for pairs containing a word and a pseudoword; they also pressed the two keys alternately for tone pairs as a basic auditory control. Both children and adults exhibited semantic priming, with significantly faster reaction times (RTs) to semantically related word pairs than to unrelated pairs and to the two word-pseudoword sets. The priming effect was twice as large in adults as in children, and the children (but not the adults) were significantly slower in responding to word-pseudoword pairs than to unrelated word pairs. Moreover, accuracy was somewhat higher in adults than in children for each word-pair type, especially for the word-pseudoword pairs. As expected, children were significantly slower than adults in RTs for all stimulus types; their RTs decreased significantly from 8 to 10 years of age and also with the development of some language abilities (e.g., relative clause comprehension). In both age groups, the average reaction time for tone pairs was lower than for speech pairs, but only adults reached 100% accuracy (accuracy was slightly lower in children). Auditory processing and semantic networks are still developing in 8- to 10-year-old children.
Air traffic controllers' long-term speech-in-noise training effects: A control group study.
Zaballos, Maria T P; Plasencia, Daniel P; González, María L Z; de Miguel, Angel R; Macías, Ángel R
2016-01-01
Speech perception in noise relies on the capacity of the auditory system to process complex sounds using sensory and cognitive skills. The possibility that these can be trained during adulthood is of special interest in auditory disorders, where speech-in-noise perception becomes compromised. Air traffic controllers (ATC) are constantly exposed to radio communication, a situation that seems to produce auditory learning. The objective of this study was to quantify this effect. 19 ATC and 19 normal-hearing individuals underwent a speech-in-noise test with three signal-to-noise ratios: 5, 0 and -5 dB. Noise and speech were presented through two different loudspeakers in azimuth position. Speech tokens were presented at 65 dB SPL, while white noise files were at 60, 65 and 70 dB SPL, respectively. Air traffic controllers outperformed the control group in all conditions [P<0.05 in ANOVA and Mann-Whitney U tests]. Group differences were largest in the most difficult condition, SNR=-5 dB. However, no correlation between experience and performance was found for any of the conditions tested. The reason might be that ceiling performance is achieved much faster than the minimum experience time recorded, 5 years, although intrinsic cognitive abilities cannot be disregarded. ATC demonstrated an enhanced ability to hear speech in challenging listening environments. This study provides evidence that long-term auditory training is indeed useful in achieving better speech-in-noise understanding even in adverse conditions, although good cognitive qualities are likely to be a basic requirement for this training to be effective.
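The three SNR conditions follow directly from the fixed speech level and the three noise levels as a simple decibel subtraction; a quick check of the arithmetic reported in the abstract:

```python
# Speech fixed at 65 dB SPL; white noise at 60, 65 and 70 dB SPL.
# SNR in dB is simply the level difference between signal and noise.
speech_level = 65
noise_levels = [60, 65, 70]
snrs = [speech_level - n for n in noise_levels]
print(snrs)  # [5, 0, -5] -- the three test conditions
```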
1981-07-10
Pohlmann, L. D. Some models of observer behavior in two-channel auditory signal detection. Perception and Psychophysics, 1973, 14, 101-109. Spelke...spatial), and processing modalities (auditory versus visual input, vocal versus manual response). If validated, this configuration has both theoretical...conclusion that auditory and visual processes will compete, as will spatial and verbal (albeit to a lesser extent than auditory-auditory, visual-visual
Neurophysiological Studies of Auditory Verbal Hallucinations
Ford, Judith M.; Dierks, Thomas; Fisher, Derek J.; Herrmann, Christoph S.; Hubl, Daniela; Kindler, Jochen; Koenig, Thomas; Mathalon, Daniel H.; Spencer, Kevin M.; Strik, Werner; van Lutterveld, Remko
2012-01-01
We discuss 3 neurophysiological approaches to study auditory verbal hallucinations (AVH). First, we describe “state” (or symptom capture) studies where periods with and without hallucinations are compared “within” a patient. These studies take 2 forms: passive studies, where brain activity during these states is compared, and probe studies, where brain responses to sounds during these states are compared. EEG (electroencephalography) and MEG (magnetoencephalography) data point to frontal and temporal lobe activity, the latter resulting in competition with external sounds for auditory resources. Second, we discuss “trait” studies where EEG and MEG responses to sounds are recorded from patients who hallucinate and those who do not. They suggest a tendency to hallucinate is associated with competition for auditory processing resources. Third, we discuss studies addressing possible mechanisms of AVH, including spontaneous neural activity, abnormal self-monitoring, and dysfunctional interregional communication. While most studies show differences in EEG and MEG responses between patients and controls, far fewer show symptom relationships. We conclude that efforts to understand the pathophysiology of AVH using EEG and MEG have been hindered by poor anatomical resolution of the EEG and MEG measures, poor assessment of symptoms, poor understanding of the phenomenon, poor models of the phenomenon, decoupling of the symptoms from the neurophysiology due to medications and comorbidities, and the possibility that the schizophrenia diagnosis breeds truer than the symptoms it comprises. These problems are common to studies of other psychiatric symptoms and should be considered when attempting to understand the basic neural mechanisms responsible for them. PMID:22368236
The influence of (central) auditory processing disorder in speech sound disorders.
Barrozo, Tatiane Faria; Pagan-Neves, Luciana de Oliveira; Vilela, Nadia; Carvallo, Renata Mota Mamede; Wertzner, Haydée Fiszbein
2016-01-01
Considering the importance of auditory information for the acquisition and organization of phonological rules, the assessment of (central) auditory processing contributes to both the diagnosis and targeting of speech therapy in children with speech sound disorders. To study phonological measures and (central) auditory processing of children with speech sound disorder. Clinical and experimental study, with 21 subjects with speech sound disorder aged between 7;0 and 9;11 (years;months), divided into two groups according to the presence or absence of (central) auditory processing disorder. The assessment comprised tests of phonology, speech inconsistency, and metalinguistic abilities. The group with (central) auditory processing disorder demonstrated greater severity of speech sound disorder. The cutoff value obtained for the process density index was the one that best characterized the occurrence of phonological processes for children above 7 years of age. The comparison of the tests evaluated between the two groups showed differences in some phonological and metalinguistic abilities. Children with an index value above 0.54 demonstrated strong tendencies towards presenting a (central) auditory processing disorder, and this measure was effective in indicating the need for evaluation in children with speech sound disorder. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Maricich, Stephen M.; Xia, Anping; Mathes, Erin L.; Wang, Vincent Y.; Oghalai, John S.; Fritzsch, Bernd; Zoghbi, Huda Y.
2009-01-01
Atoh1 is a basic helix-loop-helix transcription factor necessary for the specification of inner ear hair cells and central auditory system neurons derived from the rhombic lip. We used the Cre-loxP system and two Cre-driver lines (Egr2Cre and Hoxb1Cre) to delete Atoh1 from different regions of the cochlear nucleus (CN) and accessory auditory nuclei (AAN). Adult Atoh1-conditional knockout mice (Atoh1CKO) are behaviorally deaf, have diminished auditory brainstem evoked responses and disrupted CN and AAN morphology and connectivity. In addition, Egr2; Atoh1CKO mice lose spiral ganglion neurons in the cochlea and AAN neurons during the first 3 days of life, revealing a novel critical period in the development of these neurons. These new mouse models of predominantly central deafness illuminate the importance of the CN for support of a subset of peripheral and central auditory neurons. PMID:19741118
Underlying Mechanisms of Tinnitus: Review and Clinical Implications
Henry, James A.; Roberts, Larry E.; Caspary, Donald M.; Theodoroff, Sarah M.; Salvi, Richard J.
2016-01-01
Background The study of tinnitus mechanisms has increased tenfold in the last decade. The common denominator for all of these studies is the goal of elucidating the underlying neural mechanisms of tinnitus with the ultimate purpose of finding a cure. While these basic science findings may not be immediately applicable to the clinician who works directly with patients to assist them in managing their reactions to tinnitus, a clear understanding of these findings is needed to develop the most effective procedures for alleviating tinnitus. Purpose The goal of this review is to provide audiologists and other health-care professionals with a basic understanding of the neurophysiological changes in the auditory system likely to be responsible for tinnitus. Results It is increasingly clear that tinnitus is a pathology involving neuroplastic changes in central auditory structures that take place when the brain is deprived of its normal input by pathology in the cochlea. Cochlear pathology is not always expressed in the audiogram but may be detected by more sensitive measures. Neural changes can occur at the level of synapses between inner hair cells and the auditory nerve and within multiple levels of the central auditory pathway. Long-term maintenance of tinnitus is likely a function of a complex network of structures involving central auditory and nonauditory systems. Conclusions Patients often have expectations that a treatment exists to cure their tinnitus. They should be made aware that research is increasing to discover such a cure and that their reactions to tinnitus can be mitigated through the use of evidence-based behavioral interventions. PMID:24622858
Sensitivity and specificity of auditory steady‐state response testing
Rabelo, Camila Maia; Schochat, Eliane
2011-01-01
INTRODUCTION: The ASSR test is an electrophysiological test that evaluates, among other aspects, neural synchrony, based on the frequency or amplitude modulation of tones. OBJECTIVE: The aim of this study was to determine the sensitivity and specificity of auditory steady-state response testing in detecting lesions and dysfunctions of the central auditory nervous system. METHODS: Seventy volunteers were divided into three groups: those with normal hearing; those with mesial temporal sclerosis; and those with central auditory processing disorder. All subjects underwent auditory steady-state response testing of both ears at 500 Hz and 2000 Hz (frequency modulation, 46 Hz). The difference between auditory steady-state response-estimated thresholds and behavioral thresholds (audiometric evaluation) was calculated. RESULTS: Estimated thresholds were significantly higher in the mesial temporal sclerosis group than in the normal and central auditory processing disorder groups. In addition, the difference between auditory steady-state response-estimated and behavioral thresholds was greater in the mesial temporal sclerosis group than in the central auditory processing disorder group (both relative to the normal group). DISCUSSION: Research focusing on central auditory nervous system (CANS) lesions has shown that individuals with CANS lesions present a greater difference between ASSR-estimated thresholds and actual behavioral thresholds, with ASSR-estimated thresholds being significantly worse than behavioral thresholds in subjects with CANS insults. This is most likely because the disorder prevents the transmission of the sound stimulus from being in phase with the received stimulus, resulting in asynchronous transmitter release. Another possible cause of the greater difference between ASSR-estimated thresholds and behavioral thresholds is impaired temporal resolution.
CONCLUSIONS: The overall sensitivity of auditory steady-state response testing was lower than its overall specificity. Although the overall specificity was high, it was lower in the central auditory processing disorder group than in the mesial temporal sclerosis group. Overall sensitivity was also lower in the central auditory processing disorder group than in the mesial temporal sclerosis group. PMID:21437442
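The sensitivity and specificity reported in the conclusions are the standard confusion-table rates; a minimal sketch with hypothetical counts (the abstract reports no raw numbers, so only the formulas below are standard):

```python
# Sensitivity and specificity from a 2x2 confusion table.
# The counts passed in below are invented for illustration.
def sens_spec(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)  # true-positive rate: affected ears flagged
    specificity = tn / (tn + fp)  # true-negative rate: normal ears passed
    return sensitivity, specificity

# e.g. 18 of 25 affected ears flagged, 40 of 45 normal ears passed
sens, spec = sens_spec(tp=18, fn=7, tn=40, fp=5)
print(round(sens, 2), round(spec, 2))  # 0.72 0.89
```

With numbers like these, specificity exceeds sensitivity, the pattern the study describes.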
Dong, Xuebao; Suo, Puxia; Yuan, Xin; Yao, Xuefeng
2015-01-01
Auditory evoked potentials (AEPs) have been used as a measure of the depth of anesthesia during the intra-operative process. AEPs are classically divided, on the basis of their latency, into first, fast, middle, slow, and late components. The use of auditory evoked potentials has been advocated for the assessment of intra-operative awareness (IOA), but has not been taken seriously enough to become universal. This is because the impact of auditory perception and auditory processing on IOA phenomena, as well as the subsequent psychological impact of IOA on the patient, has not been sufficiently explored. More importantly, the phenomenon of IOA has seldom been examined from the perspective of consciousness itself. This perspective is especially important because many IOA phenomena exist more in the subconscious domain than in the conscious domain of explicit recall. Two important forms of these subconscious manifestations of IOA are implicit recall phenomena and post-operative dreams related to the operation. Here, we present an integrated auditory consciousness-based model of IOA. We start with a brief description of auditory awareness and the factors affecting it. We then proceed to the evaluation of conscious and subconscious information processing in the auditory modality and how they interact during and after the intra-operative period. Further, we show that both conscious and subconscious auditory processing affect the IOA experience and that both subsequently have serious psychological implications for the patient. These effects could be prevented by using auditory evoked potentials during monitoring of anesthesia, especially mid-latency auditory evoked responses (MLAERs).
To conclude, on the basis of the present hypothesis we propose that the use of auditory evoked potentials should be universal with general anesthesia in order to prevent the distressing outcomes that result from both conscious and subconscious auditory processing during anesthesia.
Auditory Processing Disorder and Foreign Language Acquisition
ERIC Educational Resources Information Center
Veselovska, Ganna
2015-01-01
This article aims at exploring various strategies for coping with the auditory processing disorder in the light of foreign language acquisition. The techniques relevant to dealing with the auditory processing disorder can be attributed to environmental and compensatory approaches. The environmental one involves actions directed at creating a…
Auditory processing deficits in individuals with primary open-angle glaucoma.
Rance, Gary; O'Hare, Fleur; O'Leary, Stephen; Starr, Arnold; Ly, Anna; Cheng, Belinda; Tomlin, Dani; Graydon, Kelley; Chisari, Donella; Trounce, Ian; Crowston, Jonathan
2012-01-01
The high energy demands of the auditory and visual pathways render these sensory systems prone to diseases that impair mitochondrial function. Primary open-angle glaucoma, a neurodegenerative disease of the optic nerve, has recently been associated with a spectrum of mitochondrial abnormalities. This study sought to investigate auditory processing in individuals with open-angle glaucoma. DESIGN/STUDY SAMPLE: Twenty-seven subjects with open-angle glaucoma underwent electrophysiologic (auditory brainstem response), auditory temporal processing (amplitude modulation detection), and speech perception (monosyllabic words in quiet and background noise) assessment in each ear. A cohort of age-, gender- and hearing-level-matched control subjects was also tested. While the majority of glaucoma subjects in this study demonstrated normal auditory function, a significant number (6/27 subjects, 22%) showed abnormal auditory brainstem responses and impaired auditory perception in one or both ears. The finding that a significant proportion of subjects with open-angle glaucoma presented with auditory dysfunction provides evidence of systemic neuronal susceptibility. Affected individuals may suffer significant communication difficulties in everyday listening situations.
Milner, Rafał; Rusiniak, Mateusz; Lewandowska, Monika; Wolak, Tomasz; Ganc, Małgorzata; Piątkowska-Janko, Ewa; Bogorodzki, Piotr; Skarżyński, Henryk
2014-01-01
Background The neural underpinnings of auditory information processing have often been investigated using the odd-ball paradigm, in which infrequent sounds (deviants) are presented within a regular train of frequent stimuli (standards). Traditionally, this paradigm has been applied using either high temporal resolution (EEG) or high spatial resolution (fMRI, PET). However, used separately, these techniques cannot provide information on both the location and time course of particular neural processes. The goal of this study was to investigate the neural correlates of auditory processes with a fine spatio-temporal resolution. A simultaneous auditory evoked potentials (AEP) and functional magnetic resonance imaging (fMRI) technique (AEP-fMRI), together with an odd-ball paradigm, were used. Material/Methods Six healthy volunteers, aged 20–35 years, participated in an odd-ball simultaneous AEP-fMRI experiment. AEP in response to acoustic stimuli were used to model bioelectric intracerebral generators, and electrophysiological results were integrated with fMRI data. Results fMRI activation evoked by standard stimuli was found to occur mainly in the primary auditory cortex. Activity in these regions overlapped with intracerebral bioelectric sources (dipoles) of the N1 component. Dipoles of the N1/P2 complex in response to standard stimuli were also found in the auditory pathway between the thalamus and the auditory cortex. Deviant stimuli induced fMRI activity in the anterior cingulate gyrus, insula, and parietal lobes. Conclusions The present study showed that neural processes evoked by standard stimuli occur predominantly in subcortical and cortical structures of the auditory pathway. Deviants activate areas non-specific for auditory information processing. PMID:24413019
Auditory Processing Disorders: An Overview. ERIC Digest.
ERIC Educational Resources Information Center
Ciocci, Sandra R.
This digest presents an overview of children with auditory processing disorders (APDs), children who can typically hear information but have difficulty attending to, storing, locating, retrieving, and/or clarifying that information to make it useful for academic and social purposes. The digest begins by describing central auditory processing and…
Black, Emily; Stevenson, Jennifer L; Bish, Joel P
2017-08-01
The global precedence effect is a phenomenon in which global aspects of visual and auditory stimuli are processed before local aspects. Individuals with musical experience perform better on all aspects of auditory tasks compared with individuals with less musical experience. The hemispheric lateralization of this auditory processing is less well defined. The present study aimed to replicate the global precedence effect with auditory stimuli and to explore the lateralization of global and local auditory processing in individuals with differing levels of musical experience. A total of 38 college students completed an auditory-directed attention task while electroencephalography was recorded. Individuals with low musical experience responded significantly faster and more accurately in global trials than in local trials regardless of condition, and significantly faster and more accurately when pitches traveled in the same direction (compatible condition) than when pitches traveled in two different directions (incompatible condition), consistent with a global precedence effect. In contrast, individuals with high musical experience showed less of a global precedence effect with regard to accuracy, but not in terms of reaction time, suggesting an increased ability to overcome global bias. Further, a difference in P300 latency between hemispheres was observed. These findings provide a preliminary neurological framework for the auditory processing of individuals with differing degrees of musical experience.
Auditory Reserve and the Legacy of Auditory Experience
Skoe, Erika; Kraus, Nina
2014-01-01
Musical training during childhood has been linked to more robust encoding of sound later in life. We take this as evidence for an auditory reserve: a mechanism by which individuals capitalize on earlier life experiences to promote auditory processing. We assert that early auditory experiences guide how the reserve develops and is maintained over the lifetime. Experiences that occur after childhood, or which are limited in nature, are theorized to affect the reserve, although their influence on sensory processing may be less long-lasting and may potentially fade over time if not repeated. This auditory reserve may help to explain differences in how individuals cope with auditory impoverishment or loss of sensorineural function. PMID:25405381
Phonological Processing in Human Auditory Cortical Fields
Woods, David L.; Herron, Timothy J.; Cate, Anthony D.; Kang, Xiaojian; Yund, E. W.
2011-01-01
We used population-based cortical-surface analysis of functional magnetic resonance imaging data to characterize the processing of consonant–vowel–consonant syllables (CVCs) and spectrally matched amplitude-modulated noise bursts (AMNBs) in human auditory cortex as subjects attended to auditory or visual stimuli in an intermodal selective attention paradigm. Average auditory cortical field (ACF) locations were defined using tonotopic mapping in a previous study. Activations in auditory cortex were defined by two stimulus-preference gradients: (1) Medial belt ACFs preferred AMNBs while lateral belt and parabelt fields preferred CVCs. This preference extended into core ACFs, with medial regions of primary auditory cortex (A1) and the rostral field preferring AMNBs and lateral regions preferring CVCs. (2) Anterior ACFs showed smaller activations but more clearly defined stimulus preferences than did posterior ACFs. Stimulus preference gradients were unaffected by auditory attention, suggesting that ACF preferences reflect the automatic processing of different spectrotemporal sound features. PMID:21541252
Evaluating the Use of Auditory Systems to Improve Performance in Combat Search and Rescue
2012-03-01
take advantage of human binaural hearing to present spatial information through auditory stimuli as it would occur in the real world. This allows the...multiple operators unambiguously and in a short amount of time. Spatial audio basics: Spatial audio works with human binaural hearing to generate...binaural recordings "sound better" when heard in the same location where the recordings were made. While this appears to be related to the acoustic
Double dissociation of 'what' and 'where' processing in auditory cortex.
Lomber, Stephen G; Malhotra, Shveta
2008-05-01
Studies of cortical connections or neuronal function in different cerebral areas support the hypothesis that parallel cortical processing streams, similar to those identified in visual cortex, may exist in the auditory system. However, this model has not yet been behaviorally tested. We used reversible cooling deactivation to investigate whether the individual regions in cat nonprimary auditory cortex that are responsible for processing the pattern of an acoustic stimulus or localizing a sound in space could be doubly dissociated in the same animal. We found that bilateral deactivation of the posterior auditory field resulted in deficits in a sound-localization task, whereas bilateral deactivation of the anterior auditory field resulted in deficits in a pattern-discrimination task, but not vice versa. These findings support a model of cortical organization that proposes that identifying an acoustic stimulus ('what') and its spatial location ('where') are processed in separate streams in auditory cortex.
Auditory perception in the aging brain: the role of inhibition and facilitation in early processing.
Stothart, George; Kazanina, Nina
2016-11-01
Aging affects the interplay between peripheral and cortical auditory processing. Previous studies have demonstrated that older adults are less able to regulate afferent sensory information and are more sensitive to distracting information. Using auditory event-related potentials we investigated the role of cortical inhibition on auditory and audiovisual processing in younger and older adults. Across pure-tone, auditory speech and audiovisual speech paradigms, older adults showed a consistent pattern of inhibitory deficits, manifested as increased P50 and/or N1 amplitudes and an absent or significantly reduced N2. Older adults were still able to use congruent visual articulatory information to aid auditory processing but appeared to require greater neural effort to resolve conflicts generated by incongruent visual information. In combination, the results provide support for the Inhibitory Deficit Hypothesis of aging. They extend previous findings into the audiovisual domain and highlight older adults' ability to benefit from congruent visual information during speech processing. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Conserved mechanisms of vocalization coding in mammalian and songbird auditory midbrain.
Woolley, Sarah M N; Portfors, Christine V
2013-11-01
The ubiquity of social vocalizations among animals provides the opportunity to identify conserved mechanisms of auditory processing that subserve communication. Identifying auditory coding properties that are shared across vocal communicators will provide insight into how human auditory processing leads to speech perception. Here, we compare auditory response properties and neural coding of social vocalizations in auditory midbrain neurons of mammalian and avian vocal communicators. The auditory midbrain is a nexus of auditory processing because it receives and integrates information from multiple parallel pathways and provides the ascending auditory input to the thalamus. The auditory midbrain is also the first region in the ascending auditory system where neurons show complex tuning properties that are correlated with the acoustics of social vocalizations. Single unit studies in mice, bats and zebra finches reveal shared principles of auditory coding including tonotopy, excitatory and inhibitory interactions that shape responses to vocal signals, nonlinear response properties that are important for auditory coding of social vocalizations and modulation tuning. Additionally, single neuron responses in the mouse and songbird midbrain are reliable, selective for specific syllables, and rely on spike timing for neural discrimination of distinct vocalizations. We propose that future research on auditory coding of vocalizations in mouse and songbird midbrain neurons adopt similar experimental and analytical approaches so that conserved principles of vocalization coding may be distinguished from those that are specialized for each species. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives". Copyright © 2013 Elsevier B.V. All rights reserved.
Acetylcholinesterase Inhibition and Information Processing in the Auditory Cortex
1986-04-30
9,24,29,30), or for causing auditory hallucinations (2,23,31,32). Thus, compounds which alter cholinergic transmission, in particular anticholinesterases...the upper auditory system. Thus, attending to and understanding verbal messages in humans, irrespective of the particular voice which speaks them, may...ACETYLCHOLINESTERASE INHIBITION AND INFORMATION PROCESSING IN THE AUDITORY CORTEX, ANNUAL SUMMARY REPORT
Dlouha, Olga; Novak, Alexej; Vokral, Jan
2007-06-01
The aim of this project was to use central auditory tests for the diagnosis of central auditory processing disorder (CAPD) in children with specific language impairment (SLI), in order to confirm the relationship between speech-language impairment and central auditory processing. We attempted to establish special dichotic binaural tests in the Czech language, modified for younger children. The tests are based on behavioral audiometry using dichotic listening (different auditory stimuli presented to each ear simultaneously). The experimental tasks consisted of three auditory measures (tests 1-3): dichotic listening to two-syllable words presented as binaural interaction tests. Children with SLI are unable to create simple sentences from two words that are heard separately but simultaneously. Results in our group of 90 pre-school children (6-7 years old) confirmed an integration deficit and problems with the quality of short-term memory. The average rate of success of children with specific language impairment was 56% in test 1, 64% in test 2, and 63% in test 3; results of the control group were 92% in test 1, 93% in test 2, and 92% in test 3 (p<0.001). Our results indicate a relationship between disorders of speech-language perception and central auditory processing disorders.
Talebi, Hossein; Moossavi, Abdollah; Faghihzadeh, Soghrat
2014-01-01
Background: Older adults with cerebrovascular accident (CVA) show evidence of auditory and speech perception problems. In the present study, we examined whether these problems are due to impairments of the concurrent auditory segregation process, which is the basic level of auditory scene analysis and auditory organization in auditory scenes with competing sounds. Methods: Concurrent auditory segregation, assessed using the competing sentence test (CST) and the dichotic digits test (DDT), was compared between 30 male older adults (15 normal and 15 cases with right-hemisphere CVA) in the same age group (60-75 years old). For the CST, participants were presented with a target message in one ear and a competing message in the other. The task was to listen to the target sentence and repeat it back without attending to the competing sentence. For the DDT, auditory stimuli were monosyllabic digits presented dichotically, and the task was to repeat them. Results: Comparison of the mean CST and DDT scores between CVA patients with right-hemisphere impairment and normal participants showed statistically significant differences (p=0.001 for the CST and p<0.0001 for the DDT). Conclusion: The present study revealed that the abnormal CST and DDT scores of participants with right-hemisphere CVA could be related to concurrent segregation difficulties. These findings suggest that low-level segregation mechanisms and/or high-level attention mechanisms might contribute to the problems. PMID:25679009
Emotion modulates activity in the 'what' but not 'where' auditory processing pathway.
Kryklywy, James H; Macpherson, Ewan A; Greening, Steven G; Mitchell, Derek G V
2013-11-15
Auditory cortices can be separated into dissociable processing pathways similar to those observed in the visual domain. Emotional stimuli elicit enhanced neural activation within sensory cortices when compared to neutral stimuli. This effect is particularly notable in the ventral visual stream. Little is known, however, about how emotion interacts with dorsal processing streams, and essentially nothing is known about the impact of emotion on auditory stimulus localization. In the current study, we used fMRI in concert with individualized auditory virtual environments to investigate the effect of emotion during an auditory stimulus localization task. Surprisingly, participants were significantly slower to localize emotional relative to neutral sounds. A separate localizer scan was performed to isolate neural regions sensitive to stimulus location independent of emotion. When applied to the main experimental task, a significant main effect of location, but not emotion, was found in this ROI. A whole-brain analysis of the data revealed that posterior-medial regions of auditory cortex were modulated by sound location; however, additional anterior-lateral areas of auditory cortex demonstrated enhanced neural activity to emotional compared to neutral stimuli. The latter region resembled areas described in dual pathway models of auditory processing as the 'what' processing stream, prompting a follow-up task to generate an identity-sensitive ROI (the 'what' pathway) independent of location and emotion. Within this region, significant main effects of location and emotion were identified, as well as a significant interaction. These results suggest that emotion modulates activity in the 'what,' but not the 'where,' auditory processing pathway. Copyright © 2013 Elsevier Inc. All rights reserved.
Moossavi, Abdollah; Mehrkian, Saiedeh; Lotfi, Yones; Faghihzadeh, Soghrat; Sajedi, Hamed
2014-11-01
Auditory processing disorder (APD) describes a complex and heterogeneous disorder characterized by poor speech perception, especially in noisy environments. APD may be responsible for a range of sensory processing deficits associated with learning difficulties. There is no general consensus about the nature of APD or how the disorder should be assessed and managed. This study assessed the effect of cognitive abilities (working memory capacity) on sound lateralization in children with auditory processing disorder, in order to determine how "auditory cognition" interacts with APD. The participants in this cross-sectional comparative study were 20 typically developing children and 17 children with a diagnosed auditory processing disorder (9-11 years old). Sound lateralization abilities were investigated using interaural time differences (ITDs) and interaural intensity differences (IIDs) with two stimuli (high-pass and low-pass noise) in nine perceived positions. Working memory capacity was evaluated using non-word repetition and forward and backward digit span tasks. Linear regression was employed to measure the degree of association between working memory capacity and localization performance in the two groups. Children in the APD group had consistently lower scores than typically developing subjects on lateralization and working memory capacity measures. Working memory capacity correlated significantly and negatively with ITD errors, especially for the high-pass noise stimulus, but not with IID errors, in the APD children. The study highlights the impact of working memory capacity on auditory lateralization. These findings indicate that the extent to which working memory influences auditory processing depends on the type of auditory processing and the nature of the stimulus/listening situation. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
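The regression analysis described in this abstract can be illustrated with a toy computation. The scores below are synthetic stand-ins, not the study's data; the negative trend is built in merely to mimic the reported direction of the working-memory/ITD-error association, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-child scores for 17 children: working-memory span
# and ITD lateralization error. The negative slope is built in.
span = rng.normal(20.0, 4.0, size=17)
itd_error = 30.0 - 0.8 * span + rng.normal(0.0, 2.0, size=17)

slope, intercept = np.polyfit(span, itd_error, deg=1)  # simple linear regression
r = np.corrcoef(span, itd_error)[0, 1]                 # Pearson correlation
```

With data constructed this way, both the fitted slope and the correlation come out negative, i.e., larger working-memory spans go with smaller ITD errors.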
Bidelman, Gavin M.; Heinz, Michael G.
2011-01-01
Human listeners prefer consonant over dissonant musical intervals and the perceived contrast between these classes is reduced with cochlear hearing loss. Population-level activity of normal and impaired model auditory-nerve (AN) fibers was examined to determine (1) if peripheral auditory neurons exhibit correlates of consonance and dissonance and (2) if the reduced perceptual difference between these qualities observed for hearing-impaired listeners can be explained by impaired AN responses. In addition, acoustical correlates of consonance-dissonance were also explored including periodicity and roughness. Among the chromatic pitch combinations of music, consonant intervals/chords yielded more robust neural pitch-salience magnitudes (determined by harmonicity/periodicity) than dissonant intervals/chords. In addition, AN pitch-salience magnitudes correctly predicted the ordering of hierarchical pitch and chordal sonorities described by Western music theory. Cochlear hearing impairment compressed pitch salience estimates between consonant and dissonant pitch relationships. The reduction in contrast of neural responses following cochlear hearing loss may explain the inability of hearing-impaired listeners to distinguish musical qualia as clearly as normal-hearing individuals. Of the neural and acoustic correlates explored, AN pitch salience was the best predictor of behavioral data. Results ultimately show that basic pitch relationships governing music are already present in initial stages of neural processing at the AN level. PMID:21895089
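The "pitch salience from harmonicity/periodicity" idea can be caricatured in a few lines of signal processing. This is only a toy autocorrelation sketch on pure-tone dyads, not the auditory-nerve model used in the study; the frequencies, duration, and lag range are illustrative assumptions.

```python
import numpy as np

def pitch_salience(freqs, fs=44100, dur=0.5, min_lag=80, max_lag=600):
    """Height of the largest normalized autocorrelation peak over
    candidate pitch periods (roughly 73-550 Hz at fs=44100)."""
    t = np.arange(int(fs * dur)) / fs
    x = sum(np.sin(2 * np.pi * f * t) for f in freqs)
    r0 = np.dot(x, x)
    best = 0.0
    for lag in range(min_lag, max_lag):
        best = max(best, np.dot(x[:-lag], x[lag:]) / r0)
    return best

fifth = pitch_salience([220.0, 330.0])    # 3:2 ratio, consonant
second = pitch_salience([220.0, 233.08])  # ~16:15 ratio, dissonant
```

Because the 3:2 dyad shares a short common period (110 Hz), its autocorrelation peak is higher than that of the minor second, echoing (in a crude way) the consonant > dissonant salience ordering the abstract reports.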
ERIC Educational Resources Information Center
Fair, Lisl; Louw, Brenda; Hugo, Rene
2001-01-01
This study compiled a comprehensive early auditory processing skills assessment battery and evaluated it with toddlers with (n=8) and without (n=9) early recurrent otitis media. The assessment battery successfully distinguished between normal and deficient early auditory processing development in the subjects. The study also found parents…
Schupp, Harald T; Stockburger, Jessica; Bublatzky, Florian; Junghöfer, Markus; Weike, Almut I; Hamm, Alfons O
2008-09-16
Event-related potential studies revealed an early posterior negativity (EPN) for emotional compared to neutral pictures. Exploring the emotion-attention relationship, a previous study observed that a primary visual discrimination task interfered with the emotional modulation of the EPN component. To specify the locus of interference, the present study assessed the fate of selective visual emotion processing while attention is directed towards the auditory modality. While simply viewing a rapid and continuous stream of pleasant, neutral, and unpleasant pictures in one experimental condition, processing demands of a concurrent auditory target discrimination task were systematically varied in three further experimental conditions. Participants successfully performed the auditory task as revealed by behavioral performance and selected event-related potential components. Replicating previous results, emotional pictures were associated with a larger posterior negativity compared to neutral pictures. Of main interest, increasing demands of the auditory task did not modulate the selective processing of emotional visual stimuli. With regard to the locus of interference, selective emotion processing as indexed by the EPN does not seem to reflect shared processing resources of visual and auditory modality.
Crossmodal attention switching: auditory dominance in temporal discrimination tasks.
Lukas, Sarah; Philipp, Andrea M; Koch, Iring
2014-11-01
Visual stimuli are often processed more efficiently than accompanying stimuli in another modality. In line with this "visual dominance", earlier studies on attentional switching showed a clear benefit for visual stimuli in a bimodal visual-auditory modality-switch paradigm that required spatial stimulus localization in the relevant modality. The present study aimed to examine the generality of this visual dominance effect. The modality appropriateness hypothesis proposes that stimuli in different modalities are processed with differing effectiveness depending on the task dimension, so that processing of visual stimuli is favored in the dimension of space, whereas processing of auditory stimuli is favored in the dimension of time. In the present study, we examined this proposition by using a temporal duration judgment in a bimodal visual-auditory switching paradigm. Two experiments demonstrated that crossmodal interference (i.e., temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks. However, attention switch costs were larger for the auditory modality than for the visual modality, indicating a dissociation of the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set. Copyright © 2014 Elsevier B.V. All rights reserved.
Differential processing of melodic, rhythmic and simple tone deviations in musicians--an MEG study.
Lappe, Claudia; Lappe, Markus; Pantev, Christo
2016-01-01
Rhythm and melody are two basic characteristics of music. Performing musicians have to pay attention to both, and avoid errors in either aspect of their performance. To investigate the neural processes involved in detecting melodic and rhythmic errors from auditory input we tested musicians on both kinds of deviations in a mismatch negativity (MMN) design. We found that MMN responses to a rhythmic deviation occurred at shorter latencies than MMN responses to a melodic deviation. Beamformer source analysis showed that the melodic deviation activated superior temporal, inferior frontal and superior frontal areas whereas the activation pattern of the rhythmic deviation focused more strongly on inferior and superior parietal areas, in addition to superior temporal cortex. Activation in the supplementary motor area occurred for both types of deviations. We also recorded responses to similar pitch and tempo deviations in a simple, non-musical repetitive tone pattern. In this case, there was no latency difference between the MMNs and cortical activation was smaller and mostly limited to auditory cortex. The results suggest that prediction and error detection of musical stimuli in trained musicians involve a broad cortical network and that rhythmic and melodic errors are processed in partially different cortical streams. Copyright © 2015 Elsevier Inc. All rights reserved.
Bathelt, Joe; Dale, Naomi; de Haan, Michelle
2017-10-01
Communication with visual signals, like facial expression, is important in early social development, but whether these signals are necessary for typical social development remains to be addressed. The potential impact on social development of being born with no or very low levels of vision is therefore of high theoretical and clinical interest. The current study investigated event-related potential responses to basic social stimuli in a rare group of school-aged children with congenital visual disorders of the anterior visual system (globe of the eye, retina, anterior optic nerve). Early-latency event-related potential responses showed no difference between the children with visual impairment (VI) and the control group, suggesting similar initial auditory processing. However, the mean amplitude over central and right frontal channels between 280 and 320 ms was reduced in response to own-name stimuli, but not control stimuli, in children with VI, suggesting differences in social processing. Children with VI also showed an increased rate of autistic-related behaviours, pragmatic language deficits, as well as peer relationship and emotional problems on standard parent questionnaires. These findings suggest that vision may be necessary for the typical development of social processing across modalities. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Forebrain pathway for auditory space processing in the barn owl.
Cohen, Y E; Miller, G L; Knudsen, E I
1998-02-01
The forebrain plays an important role in many aspects of sound localization behavior. Yet, the forebrain pathway that processes auditory spatial information is not known for any species. Using standard anatomic labeling techniques, we took a "top-down" approach to trace the flow of auditory spatial information from an output area of the forebrain sound localization pathway (the auditory archistriatum, AAr), back through the forebrain, and into the auditory midbrain. Previous work has demonstrated that AAr units are specialized for auditory space processing. The results presented here show that the AAr receives afferent input from Field L both directly and indirectly via the caudolateral neostriatum. Afferent input to Field L originates mainly in the auditory thalamus, nucleus ovoidalis, which, in turn, receives input from the central nucleus of the inferior colliculus. In addition, we confirmed previously reported projections of the AAr to the basal ganglia, the external nucleus of the inferior colliculus (ICX), the deep layers of the optic tectum, and various brain stem nuclei. A series of inactivation experiments demonstrated that the sharp tuning of AAr sites for binaural spatial cues depends on Field L input but not on input from the auditory space map in the midbrain ICX: pharmacological inactivation of Field L completely eliminated auditory responses in the AAr, whereas bilateral ablation of the midbrain ICX had no appreciable effect on AAr responses. We conclude, therefore, that the forebrain sound localization pathway can process auditory spatial information independently of the midbrain localization pathway.
Lount, Sarah A; Purdy, Suzanne C; Hand, Linda
2017-01-01
International evidence suggests youth offenders have greater difficulties with oral language than their nonoffending peers. This study examined the hearing, auditory processing, and language skills of male youth offenders and remandees (YORs) in New Zealand. Thirty-three male YORs, aged 14-17 years, were recruited from 2 youth justice residences, plus 39 similarly aged male students from local schools for comparison. Testing comprised tympanometry, self-reported hearing, pure-tone audiometry, 4 auditory processing tests, 2 standardized language tests, and a nonverbal intelligence test. Twenty-one (64%) of the YORs were identified as language impaired (LI), compared with 4 (10%) of the controls. Performance on all language measures was significantly worse in the YOR group, as were their hearing thresholds. Nine (27%) of the YOR group versus 7 (18%) of the control group fulfilled criteria for auditory processing disorder. Only 1 YOR versus 5 controls had an auditory processing disorder without LI. Language was an area of significant difficulty for YORs. Difficulties with auditory processing were more likely to be accompanied by LI in this group, compared with the controls. Provision of speech-language therapy services and awareness of auditory and language difficulties should be addressed in youth justice systems.
Engineer, C.T.; Centanni, T.M.; Im, K.W.; Borland, M.S.; Moreno, N.A.; Carraway, R.S.; Wilson, L.G.; Kilgard, M.P.
2014-01-01
Although individuals with autism are known to have significant communication problems, the cellular mechanisms responsible for impaired communication are poorly understood. Valproic acid (VPA) is an anticonvulsant that is a known risk factor for autism in prenatally exposed children. Prenatal VPA exposure in rats causes numerous neural and behavioral abnormalities that mimic autism. We predicted that VPA exposure may lead to auditory processing impairments which may contribute to the deficits in communication observed in individuals with autism. In this study, we document auditory cortex responses in rats prenatally exposed to VPA. We recorded local field potentials and multiunit responses to speech sounds in primary auditory cortex, anterior auditory field, ventral auditory field, and posterior auditory field in VPA exposed and control rats. Prenatal VPA exposure severely degrades the precise spatiotemporal patterns evoked by speech sounds in secondary, but not primary auditory cortex. This result parallels findings in humans and suggests that secondary auditory fields may be more sensitive to environmental disturbances and may provide insight into possible mechanisms related to auditory deficits in individuals with autism. PMID:24639033
Palmiero, Massimiliano; Di Matteo, Rosalia; Belardinelli, Marta Olivetti
2014-05-01
Two experiments comparing imaginative processing in different modalities and semantic processing were carried out to investigate whether conceptual knowledge can be represented in different formats. Participants were asked to judge the similarity between visual images, auditory images, and olfactory images in the imaginative block, and whether two items belonged to the same category in the semantic block. Items were verbally cued in both experiments. The degree of similarity between the imaginative and semantic items was changed across experiments. Experiment 1 showed that the semantic processing was faster than the visual and the auditory imaginative processing, whereas no differentiation was possible between the semantic processing and the olfactory imaginative processing. Experiment 2 revealed that only the visual imaginative processing could be differentiated from the semantic processing in terms of accuracy. These results showed that the visual and auditory imaginative processing can be differentiated from the semantic processing, although both visual and auditory images strongly rely on semantic representations. On the contrary, no differentiation is possible within the olfactory domain. Results are discussed in the frame of the imagery debate.
Vahaba, Daniel M; Macedo-Lima, Matheus; Remage-Healey, Luke
2017-01-01
Vocal learning occurs during an experience-dependent, age-limited critical period early in development. In songbirds, vocal learning begins when presinging birds acquire an auditory memory of their tutor's song (sensory phase) followed by the onset of vocal production and refinement (sensorimotor phase). Hearing is necessary throughout the vocal learning critical period. One key brain area for songbird auditory processing is the caudomedial nidopallium (NCM), a telencephalic region analogous to mammalian auditory cortex. Despite NCM's established role in auditory processing, it is unclear how the response properties of NCM neurons may shift across development. Moreover, communication processing in NCM is rapidly enhanced by local 17β-estradiol (E2) administration in adult songbirds; however, the function of dynamically fluctuating E2 in NCM during development is unknown. We collected bilateral extracellular recordings in NCM coupled with reverse microdialysis delivery in juvenile male zebra finches (Taeniopygia guttata) across the vocal learning critical period. We found that auditory-evoked activity and coding accuracy were substantially higher in the NCM of sensory-aged animals compared to sensorimotor-aged animals. Further, we observed both age-dependent and lateralized effects of local E2 administration on sensory processing. In sensory-aged subjects, E2 decreased auditory responsiveness across both hemispheres; however, a similar trend was observed in age-matched control subjects. In sensorimotor-aged subjects, E2 dampened auditory responsiveness in left NCM but enhanced auditory responsiveness in right NCM. Our results reveal an age-dependent physiological shift in auditory processing and lateralized E2 sensitivity that each precisely track a key neural "switch point" from purely sensory (pre-singing) to sensorimotor (singing) in developing songbirds.
Operator Performance Measures for Assessing Voice Communication Effectiveness
1989-07-01
Performance and workload assessment techniques have been based on models of human information processing; Broadbent (1958) described a limited-capacity filter model. The report's contents cover auditory information processing (auditory attention, auditory memory) and models of information processing (capacity theories), alongside sections on learning, attention, language specialization, decision making, and problem solving.
Effect of conductive hearing loss on central auditory function.
Bayat, Arash; Farhadi, Mohammad; Emamdjomeh, Hesam; Saki, Nader; Mirmomeni, Golshan; Rahim, Fakher
It has been demonstrated that long-term Conductive Hearing Loss (CHL) may influence the precise detection of the temporal features of acoustic signals, or Auditory Temporal Processing (ATP). It can be argued that ATP may be the underlying component of many central auditory processing capabilities such as speech comprehension or sound localization. Little is known about the consequences of CHL on temporal aspects of central auditory processing. This study was designed to assess auditory temporal processing ability in individuals with chronic CHL. During this analytical cross-sectional study, 52 patients with mild to moderate chronic CHL and 52 normal-hearing listeners (control), aged between 18 and 45 years, were recruited. In order to evaluate auditory temporal processing, the Gaps-in-Noise (GIN) test was used. The results obtained for each ear were analyzed based on the gap perception threshold and the percentage of correct responses. The average of GIN thresholds was significantly smaller for the control group than for the CHL group for both ears (right: p=0.004; left: p<0.001). Individuals with CHL had significantly fewer correct responses than individuals with normal hearing for both sides (p<0.001). No correlation was found between GIN performance and degree of hearing loss in either group (p>0.05). The results suggest reduced auditory temporal processing ability in adults with CHL compared to normal hearing subjects. Therefore, developing a clinical protocol to evaluate auditory temporal processing in this population is recommended. Copyright © 2017 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
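A gaps-in-noise trial of the kind the GIN test uses is easy to sketch: broadband noise with a brief silent interval whose duration is varied to find the gap perception threshold. The sampling rate, 500-ms duration, and centered gap below are illustrative assumptions, not the clinical test's exact specification.

```python
import numpy as np

def gin_stimulus(gap_ms, fs=44100, dur_ms=500):
    """White-noise segment with a centered silent gap (GIN-style trial)."""
    n = int(fs * dur_ms / 1000)
    x = np.random.default_rng(1).standard_normal(n)
    g = int(fs * gap_ms / 1000)          # gap length in samples
    start = (n - g) // 2
    x[start:start + g] = 0.0             # insert the silent gap
    return x

trial = gin_stimulus(6)  # 6-ms gap; adult thresholds are typically a few ms
```

A threshold run would present such trials with progressively shorter gaps and record the smallest gap the listener reliably detects.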
D'Souza, Dean; D'Souza, Hana; Johnson, Mark H; Karmiloff-Smith, Annette
2016-08-01
Typically-developing (TD) infants can construct unified cross-modal percepts, such as a speaking face, by integrating auditory-visual (AV) information. This skill is a key building block upon which higher-level skills, such as word learning, are built. Because word learning is seriously delayed in most children with neurodevelopmental disorders, we assessed the hypothesis that this delay partly results from a deficit in integrating AV speech cues. AV speech integration has rarely been investigated in neurodevelopmental disorders, and never previously in infants. We probed for the McGurk effect, which occurs when the auditory component of one sound (/ba/) is paired with the visual component of another sound (/ga/), leading to the perception of an illusory third sound (/da/ or /tha/). We measured AV integration in 95 infants/toddlers with Down, fragile X, or Williams syndrome, whom we matched on Chronological and Mental Age to 25 TD infants. We also assessed a more basic AV perceptual ability: sensitivity to matching vs. mismatching AV speech stimuli. Infants with Williams syndrome failed to demonstrate a McGurk effect, indicating poor AV speech integration. Moreover, while the TD children discriminated between matching and mismatching AV stimuli, none of the other groups did, hinting at a basic deficit or delay in AV speech processing, which is likely to constrain subsequent language development. Copyright © 2016 Elsevier Inc. All rights reserved.
Influence of Eye Movements, Auditory Perception, and Phonemic Awareness in the Reading Process
ERIC Educational Resources Information Center
Megino-Elvira, Laura; Martín-Lobo, Pilar; Vergara-Moragues, Esperanza
2016-01-01
The authors' aim was to analyze the relationship of eye movements, auditory perception, and phonemic awareness with the reading process. The instruments used were the King-Devick Test (saccade eye movements), the PAF test (auditory perception), the PFC (phonemic awareness), the PROLEC-R (lexical process), the Canals reading speed test, and the…
ERIC Educational Resources Information Center
Chonchaiya, Weerasak; Tardif, Twila; Mai, Xiaoqin; Xu, Lin; Li, Mingyan; Kaciroti, Niko; Kileny, Paul R.; Shao, Jie; Lozoff, Betsy
2013-01-01
Auditory processing capabilities at the subcortical level have been hypothesized to impact an individual's development of both language and reading abilities. The present study examined whether auditory processing capabilities relate to language development in healthy 9-month-old infants. Participants were 71 infants (31 boys and 40 girls) with…
Lu, Yao; Paraskevopoulos, Evangelos; Herholz, Sibylle C; Kuchenbuch, Anja; Pantev, Christo
2014-01-01
Numerous studies have demonstrated that the structural and functional differences between professional musicians and non-musicians are not only found within a single modality, but also with regard to multisensory integration. In this study we have combined psychophysical with neurophysiological measurements investigating the processing of non-musical, synchronous or various levels of asynchronous audiovisual events. We hypothesize that long-term multisensory experience alters temporal audiovisual processing already at a non-musical stage. Behaviorally, musicians scored significantly better than non-musicians in judging whether the auditory and visual stimuli were synchronous or asynchronous. At the neural level, the statistical analysis for the audiovisual asynchronous response revealed three clusters of activations including the ACC and the SFG and two bilaterally located activations in IFG and STG in both groups. Musicians, in comparison to the non-musicians, responded to synchronous audiovisual events with enhanced neuronal activity in a broad left posterior temporal region that covers the STG, the insula and the postcentral gyrus. Musicians also showed significantly greater activation in the left cerebellum, when confronted with an audiovisual asynchrony. Taken together, our MEG results form a strong indication that long-term musical training alters basic audiovisual temporal processing already at an early stage (directly after the auditory N1 wave), while the psychophysical results indicate that musical training may also provide behavioral benefits in the accuracy of the estimates regarding the timing of audiovisual events.
Central Auditory Processing of Temporal and Spectral-Variance Cues in Cochlear Implant Listeners
Pham, Carol Q.; Bremen, Peter; Shen, Weidong; Yang, Shi-Ming; Middlebrooks, John C.; Zeng, Fan-Gang; Mc Laughlin, Myles
2015-01-01
Cochlear implant (CI) listeners have difficulty understanding speech in complex listening environments. This deficit is thought to be largely due to peripheral encoding problems arising from current spread, which results in wide peripheral filters. In normal hearing (NH) listeners, central processing contributes to segregation of speech from competing sounds. We tested the hypothesis that basic central processing abilities are retained in post-lingually deaf CI listeners, but processing is hampered by degraded input from the periphery. In eight CI listeners, we measured auditory nerve compound action potentials to characterize peripheral filters. Then, we measured psychophysical detection thresholds in the presence of multi-electrode maskers placed either inside (peripheral masking) or outside (central masking) the peripheral filter. This was intended to distinguish peripheral from central contributions to signal detection. Introduction of temporal asynchrony between the signal and masker improved signal detection in both peripheral and central masking conditions for all CI listeners. Randomly varying components of the masker created spectral-variance cues, which seemed to benefit only two out of eight CI listeners. Contrastingly, the spectral-variance cues improved signal detection in all five NH listeners who listened to our CI simulation. Together these results indicate that widened peripheral filters significantly hamper central processing of spectral-variance cues but not of temporal cues in post-lingually deaf CI listeners. As indicated by two CI listeners in our study, however, post-lingually deaf CI listeners may retain some central processing abilities similar to NH listeners. PMID:26176553
Tang, Xiaoyu; Li, Chunlin; Li, Qi; Gao, Yulin; Yang, Weiping; Yang, Jingjing; Ishikawa, Soushirou; Wu, Jinglong
2013-10-11
Utilizing the high temporal resolution of event-related potentials (ERPs), we examined how visual spatial or temporal cues modulated auditory stimulus processing. The visual spatial cue (VSC) induces orienting of attention to spatial locations; the visual temporal cue (VTC) induces orienting of attention to temporal intervals. Participants were instructed to respond to auditory targets. Behavioral responses to auditory stimuli following VSC were faster and more accurate than those following VTC. VSC and VTC had the same effect on the auditory N1 (150-170 ms after stimulus onset). The mean amplitude of the auditory P1 (90-110 ms) in the VSC condition was larger than that in the VTC condition, and the mean amplitude of the late positivity (300-420 ms) in the VTC condition was larger than that in the VSC condition. These findings suggest that the modulations of auditory stimulus processing by visually induced spatial and temporal orienting of attention were different, but partially overlapping. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Porges, Stephen W; Macellaio, Matthew; Stanfill, Shannon D; McCue, Kimberly; Lewis, Gregory F; Harden, Emily R; Handelman, Mika; Denver, John; Bazhenova, Olga V; Heilman, Keri J
2013-06-01
The current study evaluated processes underlying two common symptoms (i.e., state regulation problems and deficits in auditory processing) associated with a diagnosis of autism spectrum disorders. Although these symptoms have been treated in the literature as unrelated, when informed by the Polyvagal Theory, these symptoms may be viewed as the predictable consequences of depressed neural regulation of an integrated social engagement system, in which there is down regulation of neural influences to the heart (i.e., via the vagus) and to the middle ear muscles (i.e., via the facial and trigeminal cranial nerves). Respiratory sinus arrhythmia (RSA) and heart period were monitored to evaluate state regulation during a baseline and two auditory processing tasks (i.e., the SCAN tests for Filtered Words and Competing Words), which were used to evaluate auditory processing performance. Children with a diagnosis of autism spectrum disorders (ASD) were contrasted with age-matched typically developing children. The current study identified three features that distinguished the ASD group from a group of typically developing children: 1) baseline RSA, 2) direction of RSA reactivity, and 3) auditory processing performance. In the ASD group, the pattern of change in RSA during the attention demanding SCAN tests moderated the relation between performance on the Competing Words test and IQ. In addition, in a subset of ASD participants, auditory processing performance improved and RSA increased following an intervention designed to improve auditory processing. Copyright © 2012 Elsevier B.V. All rights reserved.
Ito, Tetsufumi; Oliver, Douglas L.
2012-01-01
The inferior colliculus (IC) in the midbrain of the auditory system uses a unique basic circuit to organize the inputs from virtually all of the lower auditory brainstem and transmit this information to the medial geniculate body (MGB) in the thalamus. Here, we review the basic circuit of the IC, the neuronal types, the organization of their inputs and outputs. We specifically discuss the large GABAergic (LG) neurons and how they differ from the small GABAergic (SG) and the more numerous glutamatergic neurons. The somata and dendrites of LG neurons are identified by axosomatic glutamatergic synapses that are lacking in the other cell types and exclusively contain the glutamate transporter VGLUT2. Although LG neurons are most numerous in the central nucleus of the IC (ICC), an analysis of their distribution suggests that they are not specifically associated with one set of ascending inputs. The inputs to ICC may be organized into functional zones with different subsets of brainstem inputs, but each zone may contain the same three neuron types. However, the sources of VGLUT2 axosomatic terminals on the LG neuron are not known. Neurons in the dorsal cochlear nucleus, superior olivary complex, intermediate nucleus of the lateral lemniscus, and IC itself that express the gene for VGLUT2 only are the likely origin of the dense VGLUT2 axosomatic terminals on LG tectothalamic neurons. The IC is unique since LG neurons are GABAergic tectothalamic neurons in addition to the numerous glutamatergic tectothalamic neurons. SG neurons evidently target other auditory structures. The basic circuit of the IC and the LG neurons in particular, has implications for the transmission of information about sound through the midbrain to the MGB. PMID:22855671
Visual activity predicts auditory recovery from deafness after adult cochlear implantation.
Strelnikov, Kuzma; Rouger, Julien; Demonet, Jean-François; Lagleyre, Sebastien; Fraysse, Bernard; Deguine, Olivier; Barone, Pascal
2013-12-01
Modern cochlear implantation technologies allow deaf patients to understand auditory speech; however, the implants deliver only a coarse auditory input, and patients must rely on long-term adaptive processes to achieve coherent percepts. In adults with post-lingual deafness, the greatest progress in speech recovery is observed during the first year after cochlear implantation, but there is large variability in the level of cochlear implant outcomes and in the temporal evolution of recovery. It has been proposed that when profoundly deaf subjects receive a cochlear implant, the visual cross-modal reorganization of the brain is deleterious for auditory speech recovery. We tested this hypothesis in post-lingually deaf adults by analysing whether brain activity shortly after implantation correlated with the level of auditory recovery 6 months later. Based on brain activity induced by a speech-processing task, we found strong positive correlations in areas outside the auditory cortex. The highest positive correlations were found in the occipital cortex involved in visual processing, as well as in the posterior-temporal cortex known for audio-visual integration. The other area that positively correlated with auditory speech recovery was localized in the left inferior frontal area known for speech processing. Our results demonstrate that the functional level of the visual modality is related to the proficiency of auditory recovery. Based on the positive correlation of visual activity with auditory speech recovery, we suggest that the visual modality may facilitate the perception of a word's auditory counterpart in communicative situations. The link demonstrated between visual activity and auditory speech perception indicates that visuoauditory synergy is crucial for cross-modal plasticity and for fostering speech-comprehension recovery in adult cochlear-implanted deaf patients.
Wilkinson, Sam
2018-01-01
Two challenges that face popular self-monitoring theories (SMTs) of auditory verbal hallucination (AVH) are that they cannot account for the auditory phenomenology of AVHs and that they cannot account for their variety. In this paper I show that both challenges can be met by adopting a predictive processing framework (PPF), and by viewing AVHs as arising from abnormalities in predictive processing. I show how, within the PPF, both the auditory phenomenology of AVHs, and three subtypes of AVH, can be accounted for. PMID:25286243
Temporal processing and long-latency auditory evoked potential in stutterers.
Prestes, Raquel; de Andrade, Adriana Neves; Santos, Renata Beatriz Fernandes; Marangoni, Andrea Tortosa; Schiefer, Ana Maria; Gil, Daniela
Stuttering is a speech fluency disorder that may be associated with neuroaudiological factors linked to central auditory processing, including changes in auditory processing skills and temporal resolution. The aims were to characterize temporal processing and the long-latency auditory evoked potential in stutterers and to compare them with non-stutterers. The study included 41 right-handed subjects, aged 18-46 years, divided into two groups: stutterers (n=20) and non-stutterers (n=21), matched for age, education, and sex. All subjects underwent the duration pattern test, the random gap detection test, and long-latency auditory evoked potential recording. Individuals who stutter showed poorer performance on the duration pattern and random gap detection tests when compared with fluent individuals. In the long-latency auditory evoked potential, there was a difference in the latency of the N2 and P3 components; stutterers had higher latency values. Stutterers thus perform poorly in temporal processing and show higher latency values for the N2 and P3 components. Copyright © 2017 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Neural coding of syntactic structure in learned vocalizations in the songbird.
Fujimoto, Hisataka; Hasegawa, Taku; Watanabe, Dai
2011-07-06
Although vocal signals including human languages are composed of a finite number of acoustic elements, complex and diverse vocal patterns can be created from combinations of these elements, linked together by syntactic rules. To enable such syntactic vocal behaviors, neural systems must extract the sequence patterns from auditory information and establish syntactic rules to generate motor commands for vocal organs. However, the neural basis of syntactic processing of learned vocal signals remains largely unknown. Here we report that basal ganglia-projecting premotor neurons (HVC(X) neurons) in Bengalese finches represent syntactic rules that generate variable song sequences. When the birds vocalize an alternative transition segment between song elements called syllables, sparse burst spikes of HVC(X) neurons code the identity of a specific syllable type or a specific transition direction among the alternative trajectories. When the birds vocalize a variable repetition sequence of the same syllable, HVC(X) neurons not only signal the initiation and termination of the repetition sequence but also indicate the progress and state of completeness of the repetition. These different types of syntactic information are frequently integrated within the activity of single HVC(X) neurons, suggesting that the syntactic attributes of individual neurons are not programmed as a basic cellular subtype in advance but are acquired in the course of vocal learning and maturation. Furthermore, some auditory-vocal mirroring HVC(X) neurons display transition selectivity in the auditory phase, much as they do in the vocal phase, suggesting that these songbirds may extract syntactic rules from auditory experience and apply them to form their own vocal behaviors.
Pannese, Alessia; Grandjean, Didier; Frühholz, Sascha
2016-12-01
Discriminating between auditory signals of different affective value is critical to successful social interaction. It is commonly held that acoustic decoding of such signals occurs in the auditory system, whereas affective decoding occurs in the amygdala. However, given that the amygdala receives direct subcortical projections that bypass the auditory cortex, it is possible that some acoustic decoding occurs in the amygdala as well, when the acoustic features are relevant for affective discrimination. We tested this hypothesis by combining functional neuroimaging with the neurophysiological phenomena of repetition suppression (RS) and repetition enhancement (RE) in human listeners. Our results show that both amygdala and auditory cortex responded differentially to physical voice features, suggesting that the amygdala and auditory cortex decode the affective quality of the voice not only by processing the emotional content from previously processed acoustic features, but also by processing the acoustic features themselves, when these are relevant to the identification of the voice's affective value. Specifically, we found that the auditory cortex is sensitive to spectral high-frequency voice cues when discriminating vocal anger from vocal fear and joy, whereas the amygdala is sensitive to vocal pitch when discriminating between negative vocal emotions (i.e., anger and fear). Vocal pitch is an instantaneously recognized voice feature, which is potentially transferred to the amygdala by direct subcortical projections. These results together provide evidence that, besides the auditory cortex, the amygdala too processes acoustic information, when this is relevant to the discrimination of auditory emotions. Copyright © 2016 Elsevier Ltd. All rights reserved.
An experimental study on target recognition using white canes.
Nunokawa, Kiyohiko; Ino, Shuichi
2010-01-01
To understand basic tactile perception using white canes, we compared tapping (two times) and pushing (two times) methods using the index finger and using a white cane, with and without accompanying auditory information. Participants were six visually impaired individuals who used a white cane to walk independently in their daily lives. For each combination of method (tapping or pushing) and sound condition (sound or no sound), participants gave magnitude estimates of the hardness of rubber panels. Results indicated that using a white cane produces sensitivity levels equal to using a finger when accompanied by auditory information, and suggested that when using a white cane to estimate the hardness of a target, it is most effective to have the two different modalities of tactile and auditory information derived from tapping.
ERIC Educational Resources Information Center
Iliadou, Vasiliki; Bamiou, Doris Eva
2012-01-01
Purpose: To investigate the clinical utility of the Children's Auditory Processing Performance Scale (CHAPPS; Smoski, Brunt, & Tannahill, 1992) to evaluate listening ability in 12-year-old children referred for auditory processing assessment. Method: This was a prospective case control study of 97 children (age range = 11;4 [years;months] to…
ERIC Educational Resources Information Center
Emerson, Maria F.; And Others
1997-01-01
The SCAN: A Screening Test for Auditory Processing Disorders was administered to 14 elementary children with a history of otitis media and 14 typical children, to evaluate the validity of the test in identifying children with central auditory processing disorder. Another experiment found that test results differed based on the testing environment…
Temporal factors affecting somatosensory–auditory interactions in speech processing
Ito, Takayuki; Gracco, Vincent L.; Ostry, David J.
2014-01-01
Speech perception is known to rely on both auditory and visual information. However, sound-specific somatosensory input has been shown also to influence speech perceptual processing (Ito et al., 2009). In the present study, we further examined the relationship between somatosensory information and speech perceptual processing by testing the hypothesis that the temporal relationship between orofacial movement and sound processing contributes to somatosensory–auditory interaction in speech perception. We examined the changes in event-related potentials (ERPs) in response to multisensory synchronous (simultaneous) and asynchronous (90 ms lag and lead) somatosensory and auditory stimulation compared to individual unisensory auditory and somatosensory stimulation alone. We used a robotic device to apply facial skin somatosensory deformations that were similar in timing and duration to those experienced in speech production. Following synchronous multisensory stimulation the amplitude of the ERP was reliably different from the two unisensory potentials. More importantly, the magnitude of the ERP difference varied as a function of the relative timing of the somatosensory–auditory stimulation. Event-related activity change due to stimulus timing was seen between 160 and 220 ms following somatosensory onset, mostly around the parietal area. The results demonstrate a dynamic modulation of somatosensory–auditory convergence and suggest that the contribution of somatosensory information to speech perceptual processing depends on the specific temporal ordering of sensory inputs in speech production. PMID:25452733
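A common way to index multisensory interactions of this kind is the additive model: compare the multisensory ERP with the sum of the two unisensory ERPs, then average the difference over the latency window of interest. This is a generic sketch under that assumption (the function names and the additive-model formulation are illustrative, not the authors' exact analysis):

```python
import numpy as np

def interaction_wave(multi, aud, somato):
    # Additive model: interaction = multisensory ERP - (auditory + somatosensory)
    return np.asarray(multi, float) - (np.asarray(aud, float) + np.asarray(somato, float))

def mean_window(wave, times_ms, start=160, stop=220):
    # Mean amplitude of a difference wave within a latency window (ms),
    # e.g., the 160-220 ms post-onset window reported above.
    t = np.asarray(times_ms, float)
    mask = (t >= start) & (t <= stop)
    return float(np.asarray(wave, float)[mask].mean())
```

A nonzero windowed mean indicates that the multisensory response deviates from the sum of the unisensory responses, i.e., an interaction.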
Lailach, S; Zahnert, T
2016-12-01
This article on the basics of ear surgery provides a short overview of the current indications, required diagnostics, and surgical procedures for common otologic diseases. In addition to plastic and reconstructive surgery of the auricle, principles of surgery of the external auditory canal, the basics of middle ear surgery, and tumor surgery of the temporal bone are described. Aspects of surgical hearing rehabilitation (excluding implantable hearing systems) are also presented in light of current study results. Georg Thieme Verlag KG Stuttgart · New York.
Estradiol-dependent modulation of auditory processing and selectivity in songbirds
Maney, Donna; Pinaud, Raphael
2011-01-01
The steroid hormone estradiol plays an important role in reproductive development and behavior and modulates a wide array of physiological and cognitive processes. Recently, reports from several research groups have converged to show that estradiol also powerfully modulates sensory processing, specifically, the physiology of central auditory circuits in songbirds. These investigators have discovered that (1) behaviorally-relevant auditory experience rapidly increases estradiol levels in the auditory forebrain; (2) estradiol instantaneously enhances the responsiveness and coding efficiency of auditory neurons; (3) these changes are mediated by a non-genomic effect of brain-generated estradiol on the strength of inhibitory neurotransmission; and (4) estradiol regulates biochemical cascades that induce the expression of genes involved in synaptic plasticity. Together, these findings have established estradiol as a central regulator of auditory function and intensified the need to consider brain-based mechanisms, in addition to peripheral organ dysfunction, in hearing pathologies associated with estrogen deficiency. PMID:21146556
The Perception of Auditory Motion
Leung, Johahn
2016-01-01
The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control as well as in entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotations and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception. PMID:27094029
Harris, Jill; Kamke, Marc R
2014-11-01
Selective attention fundamentally alters sensory perception, but little is known about the functioning of attention in individuals who use a cochlear implant. This study aimed to investigate visual and auditory attention in adolescent cochlear implant users. Event related potentials were used to investigate the influence of attention on visual and auditory evoked potentials in six cochlear implant users and age-matched normally-hearing children. Participants were presented with streams of alternating visual and auditory stimuli in an oddball paradigm: each modality contained frequently presented 'standard' and infrequent 'deviant' stimuli. Across different blocks attention was directed to either the visual or auditory modality. For the visual stimuli attention boosted the early N1 potential, but this effect was larger for cochlear implant users. Attention was also associated with a later P3 component for the visual deviant stimulus, but there was no difference between groups in the later attention effects. For the auditory stimuli, attention was associated with a decrease in N1 latency as well as a robust P3 for the deviant tone. Importantly, there was no difference between groups in these auditory attention effects. The results suggest that basic mechanisms of auditory attention are largely normal in children who are proficient cochlear implant users, but that visual attention may be altered. Ultimately, a better understanding of how selective attention influences sensory perception in cochlear implant users will be important for optimising habilitation strategies. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Demopoulos, Carly; Yu, Nina; Tripp, Jennifer; Mota, Nayara; Brandes-Aitken, Anne N.; Desai, Shivani S.; Hill, Susanna S.; Antovich, Ashley D.; Harris, Julia; Honma, Susanne; Mizuiri, Danielle; Nagarajan, Srikantan S.; Marco, Elysa J.
2017-01-01
This study compared magnetoencephalographic (MEG) imaging-derived indices of auditory and somatosensory cortical processing in children aged 8–12 years with autism spectrum disorder (ASD; N = 18), those with sensory processing dysfunction (SPD; N = 13) who do not meet ASD criteria, and typically developing control (TDC; N = 19) participants. The magnitude of responses to both auditory and tactile stimulation was comparable across all three groups; however, the M200 latency response from the left auditory cortex was significantly delayed in the ASD group relative to both the TDC and SPD groups, whereas the somatosensory response of the ASD group was only delayed relative to TDC participants. The SPD group did not significantly differ from either group in terms of somatosensory latency, suggesting that participants with SPD may have an intermediate phenotype between ASD and TDC with regard to somatosensory processing. For the ASD group, correlation analyses indicated that the left M200 latency delay was significantly associated with performance on the WISC-IV Verbal Comprehension Index as well as the DSTP Acoustic-Linguistic index. Further, these cortical auditory response delays were not associated with somatosensory cortical response delays or cognitive processing speed in the ASD group, suggesting that auditory delays in ASD are domain specific rather than associated with generalized processing delays. The specificity of these auditory delays to the ASD group, in addition to their correlation with verbal abilities, suggests that auditory sensory dysfunction may be implicated in communication symptoms in ASD, motivating further research aimed at understanding the impact of sensory dysfunction on the developing brain. PMID:28603492
Barone, Pascal; Chambaudie, Laure; Strelnikov, Kuzma; Fraysse, Bernard; Marx, Mathieu; Belin, Pascal; Deguine, Olivier
2016-10-01
Due to signal distortion, speech comprehension in cochlear-implanted (CI) patients relies strongly on visual information, a compensatory strategy supported by important cortical crossmodal reorganisations. Though crossmodal interactions are evident for speech processing, it is unclear whether a visual influence is observed in CI patients during non-linguistic visual-auditory processing, such as face-voice interactions, which are important in social communication. We analysed and compared visual-auditory interactions in CI patients and normal-hearing subjects (NHS) at equivalent auditory performance levels. Proficient CI patients and NHS performed a voice-gender categorisation in the visual-auditory modality on a morphing-generated voice continuum between male and female speakers, while ignoring the presentation of a male or female visual face. Our data show that during the face-voice interaction, CI deaf patients are strongly influenced by visual information when performing an auditory gender categorisation task, in spite of maximum recovery of auditory speech. No such effect is observed in NHS, even under conditions of CI simulation. Our hypothesis is that the functional crossmodal reorganisation that occurs in deafness could influence nonverbal processing, such as face-voice interaction, which is important for the patient's internal supramodal representation. Copyright © 2016 Elsevier Ltd. All rights reserved.
Ludersdorfer, Philipp; Wimmer, Heinz; Richlan, Fabio; Schurz, Matthias; Hutzler, Florian; Kronbichler, Martin
2016-01-01
The present fMRI study investigated the hypothesis that activation of the left ventral occipitotemporal cortex (vOT) in response to auditory words can be attributed to lexical orthographic rather than lexico-semantic processing. To this end, we presented auditory words in both an orthographic ("three or four letter word?") and a semantic ("living or nonliving?") task. In addition, a auditory control condition presented tones in a pitch evaluation task. The results showed that the left vOT exhibited higher activation for orthographic relative to semantic processing of auditory words with a peak in the posterior part of vOT. Comparisons to the auditory control condition revealed that orthographic processing of auditory words elicited activation in a large vOT cluster. In contrast, activation for semantic processing was only weak and restricted to the middle part vOT. We interpret our findings as speaking for orthographic processing in left vOT. In particular, we suggest that activation in left middle vOT can be attributed to accessing orthographic whole-word representations. While activation of such representations was experimentally ascertained in the orthographic task, it might have also occurred automatically in the semantic task. Activation in the more posterior vOT region, on the other hand, may reflect the generation of explicit images of word-specific letter sequences required by the orthographic but not the semantic task. In addition, based on cross-modal suppression, the finding of marked deactivations in response to the auditory tones is taken to reflect the visual nature of representations and processes in left vOT. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Auditory processing and morphological anomalies in medial geniculate nucleus of Cntnap2 mutant mice.
Truong, Dongnhu T; Rendall, Amanda R; Castelluccio, Brian C; Eigsti, Inge-Marie; Fitch, R Holly
2015-12-01
Genetic epidemiological studies support a role for CNTNAP2 in developmental language disorders such as autism spectrum disorder, specific language impairment, and dyslexia. Atypical language development and function represent a core symptom of autism spectrum disorder (ASD), with evidence suggesting that aberrant auditory processing-including impaired spectrotemporal processing and enhanced pitch perception-may both contribute to an anomalous language phenotype. Investigation of gene-brain-behavior relationships in social and repetitive ASD symptomatology have benefited from experimentation on the Cntnap2 knockout (KO) mouse. However, auditory-processing behavior and effects on neural structures within the central auditory pathway have not been assessed in this model. Thus, this study examined whether auditory-processing abnormalities were associated with mutation of the Cntnap2 gene in mice. Cntnap2 KO mice were assessed on auditory-processing tasks including silent gap detection, embedded tone detection, and pitch discrimination. Cntnap2 knockout mice showed deficits in silent gap detection but a surprising superiority in pitch-related discrimination as compared with controls. Stereological analysis revealed a reduction in the number and density of neurons, as well as a shift in neuronal size distribution toward smaller neurons, in the medial geniculate nucleus of mutant mice. These findings are consistent with a central role for CNTNAP2 in the ontogeny and function of neural systems subserving auditory processing and suggest that developmental disruption of these neural systems could contribute to the atypical language phenotype seen in autism spectrum disorder. (c) 2015 APA, all rights reserved.
Language networks in anophthalmia: maintained hierarchy of processing in 'visual' cortex.
Watkins, Kate E; Cowey, Alan; Alexander, Iona; Filippini, Nicola; Kennedy, James M; Smith, Stephen M; Ragge, Nicola; Bridge, Holly
2012-05-01
Imaging studies in blind subjects have consistently shown that sensory and cognitive tasks evoke activity in the occipital cortex, which is normally visual. The precise areas involved and degree of activation are dependent upon the cause and age of onset of blindness. Here, we investigated the cortical language network at rest and during an auditory covert naming task in five bilaterally anophthalmic subjects, who have never received visual input. When listening to auditory definitions and covertly retrieving words, these subjects activated lateral occipital cortex bilaterally in addition to the language areas activated in sighted controls. This activity was significantly greater than that present in a control condition of listening to reversed speech. The lateral occipital cortex was also recruited into a left-lateralized resting-state network that usually comprises anterior and posterior language areas. Levels of activation to the auditory naming and reversed speech conditions did not differ in the calcarine (striate) cortex. This primary 'visual' cortex was not recruited to the left-lateralized resting-state network and showed high interhemispheric correlation of activity at rest, as is typically seen in unimodal cortical areas. In contrast, the interhemispheric correlation of resting activity in extrastriate areas was reduced in anophthalmia to the level of cortical areas that are heteromodal, such as the inferior frontal gyrus. Previous imaging studies in the congenitally blind show that primary visual cortex is activated in higher-order tasks, such as language and memory to a greater extent than during more basic sensory processing, resulting in a reversal of the normal hierarchy of functional organization across 'visual' areas. Our data do not support such a pattern of organization in anophthalmia. 
Instead, the patterns of activity during task and the functional connectivity at rest are consistent with the known hierarchy of processing normally seen in these areas for vision. The differences in cortical organization between bilateral anophthalmia and other forms of congenital blindness are considered to be due to the total absence of stimulation of 'visual' cortex by light or retinal activity in the former condition, and suggest the development of subcortical auditory input to the geniculo-striate pathway.
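The interhemispheric correlation of resting activity described above is, in essence, a Pearson correlation between homotopic left and right regional time series. A minimal sketch, assuming pre-extracted, artifact-cleaned time series (this is illustrative, not the authors' actual resting-state pipeline):

```python
import numpy as np

def interhemispheric_corr(left_ts, right_ts):
    """Pearson correlation between homotopic left/right regional
    time series; values near 1 indicate the tight interhemispheric
    coupling typical of unimodal cortex. Assumes both series are
    non-constant and of equal length."""
    l = np.asarray(left_ts, dtype=float)
    r = np.asarray(right_ts, dtype=float)
    l = (l - l.mean()) / l.std()  # z-score each series
    r = (r - r.mean()) / r.std()
    return float(np.mean(l * r))  # mean of products of z-scores = Pearson r
```

High values would correspond to the strong homotopic coupling reported for calcarine cortex, lower values to the heteromodal-like decoupling seen in extrastriate areas in anophthalmia.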
Brain function assessment in different conscious states.
Ozgoren, Murat; Bayazit, Onur; Kocaaslan, Sibel; Gokmen, Necati; Oniz, Adile
2010-06-03
The study of brain functioning is a major challenge in neuroscience, as the human brain performs dynamic, ever-changing information processing. The problem is compounded under conditions in which the brain undergoes major changes across so-called different conscious states. Although consciousness is hard to define precisely, there are certain conditions whose descriptions have reached a consensus. Sleep and anesthesia are distinct conditions, separable from each other and from wakefulness. The aim of our group has been to tackle brain functioning by setting up similar research conditions for these three conscious states. To achieve this goal, we designed an auditory stimulation battery with changing conditions, recorded during a 40-channel EEG polygraph (Nuamps) session. The stimuli (modified mismatch, auditory evoked, etc.) were administered both in the operating room and in the sleep lab via the Embedded Interactive Stimulus Unit developed in our lab. The overall study provided results for the three domains of consciousness. To monitor the changes, we incorporated Bispectral Index (BIS) monitoring into both the sleep and anesthesia conditions. The first-stage results provided a basic understanding of these altered states: auditory stimuli were successfully processed in both light and deep sleep stages. Anesthesia produces a sudden change in brain responsiveness; a dosage-dependent anesthetic administration therefore proved useful. Auditory processing was exemplified by targeting the N1 wave, with a thorough analysis from spectrogram to sLORETA. The frequency components were observed to shift throughout the stages. Propofol administration and the deeper sleep stages both resulted in a decrease of the N1 component. sLORETA revealed similar activity at BA7 in sleep (BIS 70) and at a target propofol concentration of 1.2 microg/mL.
The current study utilized the same stimulation and recording system and incorporated BIS-dependent values to validate a common approach to sleep and anesthesia. Accordingly, the brain shows a complex behavior pattern, dynamically changing its responsiveness in accordance with stimulation and state.
Brainstem Correlates of Temporal Auditory Processing in Children with Specific Language Impairment
ERIC Educational Resources Information Center
Basu, Madhavi; Krishnan, Ananthanarayan; Weber-Fox, Christine
2010-01-01
Deficits in identification and discrimination of sounds with short inter-stimulus intervals or short formant transitions in children with specific language impairment (SLI) have been taken to reflect an underlying temporal auditory processing deficit. Using the sustained frequency following response (FFR) and the onset auditory brainstem responses…
Positron Emission Tomography in Cochlear Implant and Auditory Brainstem Implant Recipients.
ERIC Educational Resources Information Center
Miyamoto, Richard T.; Wong, Donald
2001-01-01
Positron emission tomography imaging was used to evaluate the brain's response to auditory stimulation, including speech, in deaf adults (five with cochlear implants and one with an auditory brainstem implant). Functional speech processing was associated with activation in areas classically associated with speech processing. (Contains five…
Auditory Processing Learning Disability, Suicidal Ideation, and Transformational Faith
ERIC Educational Resources Information Center
Bailey, Frank S.; Yocum, Russell G.
2015-01-01
The purpose of this personal experience as a narrative investigation is to describe how an auditory processing learning disability exacerbated--and how spirituality and religiosity relieved--suicidal ideation, through the lived experiences of an individual born and raised in the United States. The study addresses: (a) how an auditory processing…
Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul
2016-01-01
Tinnitus is the phantom perception of sound in the absence of an acoustic stimulus. To date, the purported neural correlates of tinnitus from animal models have not been adequately characterized with translational technology in the human brain. The aim of the present study was to measure changes in oxy-hemoglobin concentration from regions of interest (ROI; auditory cortex) and non-ROI (adjacent nonauditory cortices) during auditory stimulation and silence in participants with subjective tinnitus appreciated equally in both ears and in nontinnitus controls using functional near-infrared spectroscopy (fNIRS). Control and tinnitus participants with normal/near-normal hearing were tested during a passive auditory task. Hemodynamic activity was monitored over ROI and non-ROI under episodic periods of auditory stimulation with 750 or 8000 Hz tones, broadband noise, and silence. During periods of silence, tinnitus participants maintained increased hemodynamic responses in ROI, while a significant deactivation was seen in controls. Interestingly, non-ROI activity was also increased in the tinnitus group as compared to controls during silence. The present results demonstrate that both auditory and select nonauditory cortices have elevated hemodynamic activity in participants with tinnitus in the absence of an external auditory stimulus, a finding that may reflect basic science neural correlates of tinnitus that ultimately contribute to phantom sound perception. PMID:27042360
Auditory conflict and congruence in frontotemporal dementia.
Clark, Camilla N; Nicholas, Jennifer M; Agustus, Jennifer L; Hardy, Christopher J D; Russell, Lucy L; Brotherhood, Emilie V; Dick, Katrina M; Marshall, Charles R; Mummery, Catherine J; Rohrer, Jonathan D; Warren, Jason D
2017-09-01
Impaired analysis of signal conflict and congruence may contribute to diverse socio-emotional symptoms in frontotemporal dementias, however the underlying mechanisms have not been defined. Here we addressed this issue in patients with behavioural variant frontotemporal dementia (bvFTD; n = 19) and semantic dementia (SD; n = 10) relative to healthy older individuals (n = 20). We created auditory scenes in which semantic and emotional congruity of constituent sounds were independently probed; associated tasks controlled for auditory perceptual similarity, scene parsing and semantic competence. Neuroanatomical correlates of auditory congruity processing were assessed using voxel-based morphometry. Relative to healthy controls, both the bvFTD and SD groups had impaired semantic and emotional congruity processing (after taking auditory control task performance into account) and reduced affective integration of sounds into scenes. Grey matter correlates of auditory semantic congruity processing were identified in distributed regions encompassing prefrontal, parieto-temporal and insular areas and correlates of auditory emotional congruity in partly overlapping temporal, insular and striatal regions. Our findings suggest that decoding of auditory signal relatedness may probe a generic cognitive mechanism and neural architecture underpinning frontotemporal dementia syndromes. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Boets, Bart; Wouters, Jan; van Wieringen, Astrid; Ghesquiere, Pol
2006-01-01
In this project, the hypothesis of an auditory temporal processing deficit in dyslexia was tested by examining auditory processing in relation to phonological skills in two contrasting groups of five-year-old preschool children, a familial high risk and a familial low risk group. Participants were individually matched for gender, age, non-verbal…
Advanced Parkinson disease patients have impairment in prosody processing.
Albuquerque, Luisa; Martins, Maurício; Coelho, Miguel; Guedes, Leonor; Ferreira, Joaquim J; Rosa, Mário; Martins, Isabel Pavão
2016-01-01
The ability to recognize and interpret emotions in others is a crucial prerequisite of adequate social behavior. Impairments in emotion processing have been reported from the early stages of Parkinson's disease (PD). This study aims to characterize emotion recognition in advanced Parkinson's disease (APD) candidates for deep-brain stimulation and to compare emotion recognition abilities in visual and auditory domains. APD patients, defined as those with levodopa-induced motor complications (N = 42), and healthy controls (N = 43) matched by gender, age, and educational level, undertook the Comprehensive Affect Testing System (CATS), a battery that evaluates recognition of seven basic emotions (happiness, sadness, anger, fear, surprise, disgust, and neutral) on facial expressions and four emotions on prosody (happiness, sadness, anger, and fear). APD patients were assessed during the "ON" state. Group performance was compared with independent-samples t tests. Compared to controls, APD patients had significantly lower scores on the discrimination and naming of emotions in prosody, and on visual discrimination of neutral faces, but showed no significant differences in visual emotional tasks. The contrasting performance in emotional processing between visual and auditory stimuli suggests that APD candidates for surgery have either a selective difficulty in recognizing emotions in prosody or a general defect in prosody processing. Studies investigating early-stage PD, and the effect of subcortical lesions in prosody processing, favor the latter interpretation. Further research is needed to understand these deficits in emotional prosody recognition and their possible contribution to later behavioral or neuropsychiatric manifestations of PD.
Wilkinson, Sam
2014-11-01
Two challenges that face popular self-monitoring theories (SMTs) of auditory verbal hallucination (AVH) are that they cannot account for the auditory phenomenology of AVHs and that they cannot account for their variety. In this paper I show that both challenges can be met by adopting a predictive processing framework (PPF), and by viewing AVHs as arising from abnormalities in predictive processing. I show how, within the PPF, both the auditory phenomenology of AVHs, and three subtypes of AVH, can be accounted for. Copyright © 2014 The Author. Published by Elsevier Inc. All rights reserved.
Thalamic and cortical pathways supporting auditory processing
Lee, Charles C.
2012-01-01
The neural processing of auditory information engages pathways that begin initially at the cochlea and that eventually reach forebrain structures. At these higher levels, the computations necessary for extracting auditory source and identity information rely on the neuroanatomical connections between the thalamus and cortex. Here, the general organization of these connections in the medial geniculate body (thalamus) and the auditory cortex is reviewed. In addition, we consider two models organizing the thalamocortical pathways of the non-tonotopic and multimodal auditory nuclei. Overall, the transfer of information to the cortex via the thalamocortical pathways is complemented by the numerous intracortical and corticocortical pathways. Although interrelated, the convergent interactions among thalamocortical, corticocortical, and commissural pathways enable the computations necessary for the emergence of higher auditory perception. PMID:22728130
Halliday, Lorna F; Tuomainen, Outi; Rosen, Stuart
2017-09-01
There is a general consensus that many children and adults with dyslexia and/or specific language impairment display deficits in auditory processing. However, how these deficits are related to developmental disorders of language is uncertain, and at least four categories of model have been proposed: single distal cause models, risk factor models, association models, and consequence models. This study used children with mild to moderate sensorineural hearing loss (MMHL) to investigate the link between auditory processing deficits and language disorders. We examined the auditory processing and language skills of 46 8- to 16-year-old children with MMHL and 44 age-matched typically developing controls. Auditory processing abilities were assessed using child-friendly psychophysical techniques in order to obtain discrimination thresholds. Stimuli incorporated three different timescales (µs, ms, s) and three different levels of complexity (simple nonspeech tones, complex nonspeech sounds, speech sounds), and tasks required discrimination of frequency or amplitude cues. Language abilities were assessed using a battery of standardised assessments of phonological processing, reading, vocabulary, and grammar. We found evidence that three different auditory processing abilities showed different relationships with language: deficits in a general auditory processing component were necessary but not sufficient for language difficulties, and were consistent with a risk factor model; deficits in slow-rate amplitude modulation (envelope) detection were sufficient but not necessary for language difficulties, and were consistent with either a single distal cause or a consequence model; and deficits in the discrimination of a single speech contrast (/bɑ/ vs /dɑ/) were neither necessary nor sufficient for language difficulties, and were consistent with an association model.
Our findings suggest that different auditory processing deficits may constitute distinct and independent routes to the development of language difficulties in children. Copyright © 2017 Elsevier B.V. All rights reserved.
Auditory connections and functions of prefrontal cortex
Plakke, Bethany; Romanski, Lizabeth M.
2014-01-01
The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition. PMID:25100931
Listening to Another Sense: Somatosensory Integration in the Auditory System
Wu, Calvin; Stefanescu, Roxana A.; Martel, David T.
2014-01-01
Conventionally, sensory systems are viewed as separate entities, each with its own physiological process serving a different purpose. However, many functions require integrative inputs from multiple sensory systems, and sensory intersection and convergence occur throughout the central nervous system. The neural processes for hearing perception undergo significant modulation by the two other major sensory systems, vision and somatosensation. This synthesis occurs at every level of the ascending auditory pathway: the cochlear nucleus, inferior colliculus, medial geniculate body, and the auditory cortex. In this review, we explore the process of multisensory integration from 1) anatomical (inputs and connections), 2) physiological (cellular responses), 3) functional, and 4) pathological aspects. We focus on the convergence between auditory and somatosensory inputs in each ascending auditory station. This review highlights the intricacy of sensory processing, and offers a multisensory perspective regarding the understanding of sensory disorders. PMID:25526698
Increased Early Processing of Task-Irrelevant Auditory Stimuli in Older Adults
Tusch, Erich S.; Alperin, Brittany R.; Holcomb, Phillip J.; Daffner, Kirk R.
2016-01-01
The inhibitory deficit hypothesis of cognitive aging posits that older adults’ inability to adequately suppress processing of irrelevant information is a major source of cognitive decline. Prior research has demonstrated that in response to task-irrelevant auditory stimuli there is an age-associated increase in the amplitude of the N1 wave, an ERP marker of early perceptual processing. Here, we tested predictions derived from the inhibitory deficit hypothesis that the age-related increase in N1 would be 1) observed under an auditory-ignore, but not auditory-attend condition, 2) attenuated in individuals with high executive capacity (EC), and 3) augmented by increasing cognitive load of the primary visual task. ERPs were measured in 114 well-matched young, middle-aged, young-old, and old-old adults, designated as having high or average EC based on neuropsychological testing. Under the auditory-ignore (visual-attend) task, participants ignored auditory stimuli and responded to rare target letters under low and high load. Under the auditory-attend task, participants ignored visual stimuli and responded to rare target tones. Results confirmed an age-associated increase in N1 amplitude to auditory stimuli under the auditory-ignore but not auditory-attend task. Contrary to predictions, EC did not modulate the N1 response. The load effect was the opposite of expectation: the N1 to task-irrelevant auditory events was smaller under high load. Finally, older adults did not simply fail to suppress the N1 to auditory stimuli in the task-irrelevant modality; they generated a larger response than to identical stimuli in the task-relevant modality. In summary, several of the study’s findings do not fit the inhibitory-deficit hypothesis of cognitive aging, which may need to be refined or supplemented by alternative accounts. PMID:27806081
Speech Evoked Auditory Brainstem Response in Stuttering
Tahaei, Ali Akbar; Ashayeri, Hassan; Pourbakht, Akram; Kamali, Mohammad
2014-01-01
Auditory processing deficits have been hypothesized as an underlying mechanism for stuttering. Previous studies have demonstrated abnormal responses in subjects with persistent developmental stuttering (PDS) at the higher level of the central auditory system using speech stimuli. Recently, the potential usefulness of speech evoked auditory brainstem responses in central auditory processing disorders has been emphasized. The current study used the speech evoked ABR to investigate the hypothesis that subjects with PDS have specific auditory perceptual dysfunction. Objectives. To determine whether brainstem responses to speech stimuli differ between PDS subjects and normal fluent speakers. Methods. Twenty-five subjects with PDS participated in this study. The speech-ABRs were elicited by the 5-formant synthesized syllable /da/, with a duration of 40 ms. Results. There were significant group differences for the onset and offset transient peaks. Subjects with PDS had longer latencies for the onset and offset peaks relative to the control group. Conclusions. Subjects with PDS showed deficient neural timing in the early stages of the auditory pathway, consistent with temporal processing deficits; this abnormal timing may underlie their disfluency. PMID:25215262
Hertz, Uri; Amedi, Amir
2015-01-01
The classical view of sensory processing involves independent processing in sensory cortices and multisensory integration in associative areas. This hierarchical structure has been challenged by evidence of multisensory responses in sensory areas, and dynamic weighting of sensory inputs in associative areas, thus far reported independently. Here, we used a visual-to-auditory sensory substitution algorithm (SSA) to manipulate the information conveyed by sensory inputs while keeping the stimuli intact. During scan sessions before and after SSA learning, subjects were presented with visual images and auditory soundscapes. The findings reveal 2 dynamic processes. First, crossmodal attenuation of sensory cortices changed direction after SSA learning from visual attenuations of the auditory cortex to auditory attenuations of the visual cortex. Secondly, associative areas changed their sensory response profile from strongest response for visual to that for auditory. The interaction between these phenomena may play an important role in multisensory processing. Consistent features were also found in the sensory dominance in sensory areas and audiovisual convergence in associative area Middle Temporal Gyrus. These 2 factors allow for both stability and a fast, dynamic tuning of the system when required. PMID:24518756
P50 Suppression in Children with Selective Mutism: A Preliminary Report
ERIC Educational Resources Information Center
Henkin, Yael; Feinholz, Maya; Arie, Miri; Bar-Haim, Yair
2010-01-01
Evidence suggests that children with selective mutism (SM) display significant aberrations in auditory efferent activity at the brainstem level that may underlie inefficient auditory processing during vocalization, and lead to speech avoidance. The objective of the present study was to explore auditory filtering processes at the cortical level in…
The Diagnosis and Management of Auditory Processing Disorder
ERIC Educational Resources Information Center
Moore, David R.
2011-01-01
Purpose: To provide a personal perspective on auditory processing disorder (APD), with reference to the recent clinical forum on APD and the needs of clinical speech-language pathologists and audiologists. Method: The Medical Research Council-Institute of Hearing Research (MRC-IHR) has been engaged in research into APD and auditory learning for 8…
Auditory and Linguistic Processes in the Perception of Intonation Contours.
ERIC Educational Resources Information Center
Studdert-Kennedy, Michael; Hadding, Kerstin
By examining the relations among sections of the fundamental frequency contour used in judging an utterance as a question or statement, the experiment described in this report seeks a more detailed understanding of auditory-linguistic interaction in the perception of intonation contours. The perceptual process may be divided into stages (auditory,…
Directional Effects between Rapid Auditory Processing and Phonological Awareness in Children
ERIC Educational Resources Information Center
Johnson, Erin Phinney; Pennington, Bruce F.; Lee, Nancy Raitano; Boada, Richard
2009-01-01
Background: Deficient rapid auditory processing (RAP) has been associated with early language impairment and dyslexia. Using an auditory masking paradigm, children with language disabilities perform selectively worse than controls at detecting a tone in a backward masking (BM) condition (tone followed by white noise) compared to a forward masking…
ERIC Educational Resources Information Center
Fey, Marc E.; Richard, Gail J.; Geffner, Donna; Kamhi, Alan G.; Medwetsky, Larry; Paul, Diane; Ross-Swain, Deborah; Wallach, Geraldine P.; Frymark, Tobi; Schooling, Tracy
2011-01-01
Purpose: In this systematic review, the peer-reviewed literature on the efficacy of interventions for school-age children with auditory processing disorder (APD) is critically evaluated. Method: Searches of 28 electronic databases yielded 25 studies for analysis. These studies were categorized by research phase (e.g., exploratory, efficacy) and…
ERIC Educational Resources Information Center
Kidd, Joanna C.; Shum, Kathy K.; Wong, Anita M.-Y.; Ho, Connie S.-H.
2017-01-01
Auditory processing and spoken word recognition difficulties have been observed in Specific Language Impairment (SLI), raising the possibility that auditory perceptual deficits disrupt word recognition and, in turn, phonological processing and oral language. In this study, fifty-seven kindergarten children with SLI and fifty-three language-typical…
Intact Spectral but Abnormal Temporal Processing of Auditory Stimuli in Autism
ERIC Educational Resources Information Center
Groen, Wouter B.; van Orsouw, Linda; ter Huurne, Niels; Swinkels, Sophie; van der Gaag, Rutger-Jan; Buitelaar, Jan K.; Zwiers, Marcel P.
2009-01-01
The perceptual pattern in autism has been related to either a specific localized processing deficit or a pathway-independent, complexity-specific anomaly. We examined auditory perception in autism using an auditory disembedding task that required spectral and temporal integration. 23 children with high-functioning-autism and 23 matched controls…
Visual and Auditory Input in Second-Language Speech Processing
ERIC Educational Resources Information Center
Hardison, Debra M.
2010-01-01
The majority of studies in second-language (L2) speech processing have involved unimodal (i.e., auditory) input; however, in many instances, speech communication involves both visual and auditory sources of information. Some researchers have argued that multimodal speech is the primary mode of speech perception (e.g., Rosenblum 2005). Research on…
Cardin, Jessica A; Raksin, Jonathan N; Schmidt, Marc F
2005-04-01
Sensorimotor integration in the avian song system is crucial for both learning and maintenance of song, a vocal motor behavior. Although a number of song system areas demonstrate both sensory and motor characteristics, their exact roles in auditory and premotor processing are unclear. In particular, it is unknown whether input from the forebrain nucleus interface of the nidopallium (NIf), which exhibits both sensory and premotor activity, is necessary for both auditory and premotor processing in its target, HVC. Here we show that bilateral NIf lesions result in long-term loss of HVC auditory activity but do not impair song production. NIf is thus a major source of auditory input to HVC, but an intact NIf is not necessary for motor output in adult zebra finches.
Responses of auditory-cortex neurons to structural features of natural sounds.
Nelken, I; Rotman, Y; Bar Yosef, O
1999-01-14
Sound-processing strategies that use the highly non-random structure of natural sounds may confer evolutionary advantage to many species. Auditory processing of natural sounds has been studied almost exclusively in the context of species-specific vocalizations, although these form only a small part of the acoustic biotope. To study the relationships between properties of natural soundscapes and neuronal processing mechanisms in the auditory system, we analysed sound from a range of different environments. Here we show that for many non-animal sounds and background mixtures of animal sounds, energy in different frequency bands is coherently modulated. Co-modulation of different frequency bands in background noise facilitates the detection of tones in noise by humans, a phenomenon known as co-modulation masking release (CMR). We show that co-modulation also improves the ability of auditory-cortex neurons to detect tones in noise, and we propose that this property of auditory neurons may underlie behavioural CMR. This correspondence may represent an adaptation of the auditory system for the use of an attribute of natural sounds to facilitate real-world processing tasks.
Visser, Eelke; Zwiers, Marcel P; Kan, Cornelis C; Hoekstra, Liesbeth; van Opstal, A John; Buitelaar, Jan K
2013-11-01
Autism spectrum disorders (ASDs) are associated with auditory hyper- or hyposensitivity; atypicalities in central auditory processes, such as speech-processing and selective auditory attention; and neural connectivity deficits. We sought to investigate whether the low-level integrative processes underlying sound localization and spatial discrimination are affected in ASDs. We performed 3 behavioural experiments to probe different connecting neural pathways: 1) horizontal and vertical localization of auditory stimuli in a noisy background, 2) vertical localization of repetitive frequency sweeps and 3) discrimination of horizontally separated sound stimuli with a short onset difference (precedence effect). Ten adult participants with ASDs and 10 healthy control listeners participated in experiments 1 and 3; sample sizes for experiment 2 were 18 adults with ASDs and 19 controls. Horizontal localization was unaffected, but vertical localization performance was significantly worse in participants with ASDs. The temporal window for the precedence effect was shorter in participants with ASDs than in controls. The study was performed with adult participants and hence does not provide insight into the developmental aspects of auditory processing in individuals with ASDs. Changes in low-level auditory processing could underlie degraded performance in vertical localization, which would be in agreement with recently reported changes in the neuroanatomy of the auditory brainstem in individuals with ASDs. The results are further discussed in the context of theories about abnormal brain connectivity in individuals with ASDs.
Pre-Attentive Auditory Processing of Lexicality
ERIC Educational Resources Information Center
Jacobsen, Thomas; Horvath, Janos; Schroger, Erich; Lattner, Sonja; Widmann, Andreas; Winkler, Istvan
2004-01-01
The effects of lexicality on auditory change detection based on auditory sensory memory representations were investigated by presenting oddball sequences of repeatedly presented stimuli, while participants ignored the auditory stimuli. In a cross-linguistic study of Hungarian and German participants, stimulus sequences were composed of words that…
Zhang, Guang-Wei; Sun, Wen-Jian; Zingg, Brian; Shen, Li; He, Jufang; Xiong, Ying; Tao, Huizhong W; Zhang, Li I
2018-01-17
In the mammalian brain, auditory information is known to be processed along a central ascending pathway leading to auditory cortex (AC). Whether there exist any major pathways beyond this canonical auditory neuraxis remains unclear. In awake mice, we found that auditory responses in entorhinal cortex (EC) cannot be explained by a previously proposed relay from AC based on response properties. By combining anatomical tracing and optogenetic/pharmacological manipulations, we discovered that EC received auditory input primarily from the medial septum (MS), rather than AC. A previously uncharacterized auditory pathway was then revealed: it branched from the cochlear nucleus, and via caudal pontine reticular nucleus, pontine central gray, and MS, reached EC. Neurons along this non-canonical auditory pathway responded selectively to high-intensity broadband noise, but not pure tones. Disruption of the pathway resulted in an impairment of specifically noise-cued fear conditioning. This reticular-limbic pathway may thus function in processing aversive acoustic signals. Copyright © 2017 Elsevier Inc. All rights reserved.
Auditory reafferences: the influence of real-time feedback on movement control.
Kennel, Christian; Streese, Lukas; Pizzera, Alexandra; Justen, Christoph; Hohmann, Tanja; Raab, Markus
2015-01-01
Auditory reafferences are real-time auditory products created by a person's own movements. Whereas the interdependency of action and perception is generally well studied, the auditory feedback channel and the influence of perceptual processes during movement execution remain largely unconsidered. We argue that movements have a rhythmic character that is closely connected to sound, making it possible to manipulate auditory reafferences online to understand their role in motor control. We examined if step sounds, occurring as a by-product of running, have an influence on the performance of a complex movement task. Twenty participants completed a hurdling task in three auditory feedback conditions: a control condition with normal auditory feedback, a white noise condition in which sound was masked, and a delayed auditory feedback condition. Overall time and kinematic data were collected. Results show that delayed auditory feedback led to a significantly slower overall time and changed kinematic parameters. Our findings complement previous investigations in a natural movement situation with non-artificial auditory cues. Our results support the existing theoretical understanding of action-perception coupling and hold potential for applied work, where naturally occurring movement sounds can be implemented in the motor learning processes.
Auditory Scene Analysis: An Attention Perspective
2017-01-01
Purpose: This review article provides a new perspective on the role of attention in auditory scene analysis. Method: A framework for understanding how attention interacts with stimulus-driven processes to facilitate task goals is presented. Previously reported data obtained through behavioral and electrophysiological measures in adults with normal hearing are summarized to demonstrate attention effects on auditory perception—from passive processes that organize unattended input to attention effects that act at different levels of the system. Data show that attention can sharpen stream organization toward behavioral goals, identify auditory events obscured by noise, and limit passive processing capacity. Conclusions: A model of attention is provided that illustrates how the auditory system performs multilevel analyses that involve interactions between stimulus-driven input and top-down processes. Overall, these studies show that (a) stream segregation occurs automatically and sets the basis for auditory event formation; (b) attention interacts with automatic processing to facilitate task goals; and (c) information about unattended sounds is not lost when selecting one organization over another. Our results support a neural model that allows multiple sound organizations to be held in memory and accessed simultaneously through a balance of automatic and task-specific processes, allowing flexibility for navigating noisy environments with competing sound sources. Presentation Video http://cred.pubs.asha.org/article.aspx?articleid=2601618 PMID:29049599
Schönweiler, R; Wübbelt, P; Tolloczko, R; Rose, C; Ptok, M
2000-01-01
Discriminant analysis (DA) and self-organizing feature maps (SOFM) were used to classify passively evoked auditory event-related potentials (ERP) P(1), N(1), P(2) and N(2). Responses from 16 children with severe behavioral auditory perception deficits, 16 children with marked behavioral auditory perception deficits, and 14 controls were examined. Eighteen ERP amplitude parameters were selected for examination of statistical differences between the groups. Different DA methods and SOFM configurations were trained on these values. SOFM yielded better classification results than the DA methods. Subsequently, data from another 37 subjects, unseen by the trained SOFM, were used to test the reliability of the system. With 10-dimensional vectors, reliable classifications were obtained that matched behavioral auditory perception deficits in 96% of cases, implying central auditory processing disorder (CAPD). The results also support the assumption that CAPD includes a 'non-peripheral' auditory processing deficit. Copyright 2000 S. Karger AG, Basel.
Test of the neurolinguistic programming hypothesis that eye-movements relate to processing imagery.
Wertheim, E H; Habib, C; Cumming, G
1986-04-01
Bandler and Grinder's hypothesis that eye-movements reflect sensory processing was examined. 28 volunteers first memorized and then recalled visual, auditory, and kinesthetic stimuli. Changes in eye-positions during recall were videotaped and categorized by two raters into positions hypothesized by Bandler and Grinder's model to represent visual, auditory, and kinesthetic recall. Planned contrast analyses suggested that visual stimulus items, when recalled, elicited significantly more upward eye-positions and stares than auditory and kinesthetic items. Auditory and kinesthetic items, however, did not elicit more changes in eye-position hypothesized by the model to represent auditory and kinesthetic recall, respectively.
Is auditory perceptual timing a core deficit of developmental coordination disorder?
Trainor, Laurel J; Chang, Andrew; Cairney, John; Li, Yao-Chuen
2018-05-09
Time is an essential dimension for perceiving and processing auditory events, and for planning and producing motor behaviors. Developmental coordination disorder (DCD) is a neurodevelopmental disorder affecting 5-6% of children that is characterized by deficits in motor skills. Studies show that children with DCD have motor timing and sensorimotor timing deficits. We suggest that auditory perceptual timing deficits may also be core characteristics of DCD. This idea is consistent with evidence from several domains: (1) motor-related brain regions are often involved in auditory timing processes; (2) DCD has high comorbidity with dyslexia and attention deficit hyperactivity disorder, which are known to be associated with auditory timing deficits; (3) a few studies report deficits in auditory-motor timing among children with DCD; and (4) our preliminary behavioral and neuroimaging results show that children with DCD at ages 6 and 7 have deficits in auditory time discrimination compared to typically developing children. We propose directions for investigating auditory perceptual timing processing in DCD using various behavioral and neuroimaging approaches. From a clinical perspective, research findings can potentially improve our understanding of the etiology of DCD, identify early biomarkers of DCD, and be used to develop evidence-based interventions for DCD involving auditory-motor training. © 2018 The Authors. Annals of the New York Academy of Sciences published by Wiley Periodicals, Inc. on behalf of The New York Academy of Sciences.
Zheng, Zane Z.; Vicente-Grabovetsky, Alejandro; MacDonald, Ewen N.; Munhall, Kevin G.; Cusack, Rhodri; Johnsrude, Ingrid S.
2013-01-01
The everyday act of speaking involves the complex processes of speech motor control. An important component of control is monitoring, detection and processing of errors when auditory feedback does not correspond to the intended motor gesture. Here we show, using fMRI and converging operations within a multi-voxel pattern analysis framework, that this sensorimotor process is supported by functionally differentiated brain networks. During scanning, a real-time speech-tracking system was employed to deliver two acoustically different types of distorted auditory feedback or unaltered feedback while human participants were vocalizing monosyllabic words, and to present the same auditory stimuli while participants were passively listening. Whole-brain analysis of neural-pattern similarity revealed three functional networks that were differentially sensitive to distorted auditory feedback during vocalization, compared to during passive listening. One network of regions appears to encode an ‘error signal’ irrespective of acoustic features of the error: this network, including right angular gyrus, right supplementary motor area, and bilateral cerebellum, yielded consistent neural patterns across acoustically different, distorted feedback types, only during articulation (not during passive listening). In contrast, a fronto-temporal network appears sensitive to the speech features of auditory stimuli during passive listening; this preference for speech features was diminished when the same stimuli were presented as auditory concomitants of vocalization. A third network, showing a distinct functional pattern from the other two, appears to capture aspects of both neural response profiles. Taken together, our findings suggest that auditory feedback processing during speech motor control may rely on multiple, interactive, functionally differentiated neural systems. PMID:23467350
Auditory temporal processing in healthy aging: a magnetoencephalographic study
Sörös, Peter; Teismann, Inga K; Manemann, Elisabeth; Lütkenhöner, Bernd
2009-01-01
Background Impaired speech perception is one of the major sequelae of aging. In addition to peripheral hearing loss, central deficits of auditory processing are supposed to contribute to the deterioration of speech perception in older individuals. To test the hypothesis that auditory temporal processing is compromised in aging, auditory evoked magnetic fields were recorded during stimulation with sequences of 4 rapidly recurring speech sounds in 28 healthy individuals aged 20-78 years. Results The decrement of the N1m amplitude during rapid auditory stimulation was not significantly different between older and younger adults. The amplitudes of the middle-latency P1m wave and of the long-latency N1m, however, were significantly larger in older than in younger participants. Conclusion The results of the present study do not provide evidence for the hypothesis that auditory temporal processing, as measured by the decrement (short-term habituation) of the major auditory evoked component, the N1m wave, is impaired in aging. The differences between these magnetoencephalographic findings and previously published behavioral data might be explained by differences in the experimental setting between the present study and previous behavioral studies, in terms of speech rate, attention, and masking noise. Significantly larger amplitudes of the P1m and N1m waves suggest that the cortical processing of individual sounds differs between younger and older individuals. This result adds to the growing evidence that brain functions, such as sensory processing, motor control and cognitive processing, can change during healthy aging, presumably due to experience-dependent neuroplastic mechanisms. PMID:19351410
Włodarczyk, Elżbieta; Szkiełkowska, Agata; Skarżyński, Henryk; Piłka, Adam
2011-01-01
To assess the effectiveness of auditory training in children with dyslalia and central auditory processing disorders. The material consisted of 50 children aged 7-9 years. Children with articulation disorders remained under long-term speech therapy care in the Auditory and Phoniatrics Clinic. All children were examined by a laryngologist and a phoniatrician. Assessment included tonal and impedance audiometry as well as speech therapist and psychologist consultations. Additionally, a set of electrophysiological examinations was performed - registration of the N2, P2 and P300 waves - along with a psychoacoustic test of central auditory function: the frequency pattern test (FPT). Next, the children took part in regular auditory training and attended speech therapy. Speech was assessed again after treatment and therapy, the psychoacoustic tests were repeated, and P300 cortical potentials were recorded. Statistical analyses revealed that application of auditory training in patients with dyslalia and other central auditory disorders is very efficient. Auditory training may be a very efficient therapy supporting speech therapy in children suffering from dyslalia coexisting with articulation and central auditory disorders, and in children with educational problems of audiogenic origin. Copyright © 2011 Polish Otolaryngology Society. Published by Elsevier Urban & Partner (Poland). All rights reserved.
Translating birdsong: songbirds as a model for basic and applied medical research.
Brainard, Michael S; Doupe, Allison J
2013-07-08
Songbirds, long of interest to basic neuroscience, have great potential as a model system for translational neuroscience. Songbirds learn their complex vocal behavior in a manner that exemplifies general processes of perceptual and motor skill learning and, more specifically, resembles human speech learning. Song is subserved by circuitry that is specialized for vocal learning and production but that has strong similarities to mammalian brain pathways. The combination of highly quantifiable behavior and discrete neural substrates facilitates understanding links between brain and behavior, both in normal states and in disease. Here we highlight (a) behavioral and mechanistic parallels between birdsong and aspects of speech and social communication, including insights into mirror neurons, the function of auditory feedback, and genes underlying social communication disorders, and (b) contributions of songbirds to understanding cortical-basal ganglia circuit function and dysfunction, including the possibility of harnessing adult neurogenesis for brain repair.
Aging effects on functional auditory and visual processing using fMRI with variable sensory loading.
Cliff, Michael; Joyce, Dan W; Lamar, Melissa; Dannhauser, Thomas; Tracy, Derek K; Shergill, Sukhwinder S
2013-05-01
Traditionally, studies investigating the functional implications of age-related structural brain alterations have focused on higher cognitive processes; by increasing stimulus load, these studies assess behavioral and neurophysiological performance. In order to understand age-related changes in these higher cognitive processes, it is crucial to examine changes in visual and auditory processes that are the gateways to higher cognitive functions. This study provides evidence for age-related functional decline in visual and auditory processing, and regional alterations in functional brain processing, using non-invasive neuroimaging. Using functional magnetic resonance imaging (fMRI), younger (n=11; mean age=31) and older (n=10; mean age=68) adults were imaged while observing flashing checkerboard images (passive visual stimuli) and hearing word lists (passive auditory stimuli) across varying stimuli presentation rates. Younger adults showed greater overall levels of temporal and occipital cortical activation than older adults for both auditory and visual stimuli. The relative change in activity as a function of stimulus presentation rate showed differences between young and older participants. In visual cortex, the older group showed a decrease in fMRI blood oxygen level dependent (BOLD) signal magnitude as stimulus frequency increased, whereas the younger group showed a linear increase. In auditory cortex, the younger group showed a relative increase as a function of word presentation rate, while older participants showed a relatively stable magnitude of fMRI BOLD response across all rates. When analyzing participants across all ages, only the auditory cortical activation showed a continuous, monotonically decreasing BOLD signal magnitude as a function of age. Our preliminary findings show an age-related decline in demand-related, passive early sensory processing. 
As stimulus demand increases, visual and auditory cortex do not show increases in activity in older compared with younger people. This may negatively impact the fidelity of information available to higher cognitive processing. Such evidence may inform future studies focused on cognitive decline in aging. Copyright © 2012 Elsevier Ltd. All rights reserved.
Auditory stream segregation in children with Asperger syndrome
Lepistö, T.; Kuitunen, A.; Sussman, E.; Saalasti, S.; Jansson-Verkasalo, E.; Nieminen-von Wendt, T.; Kujala, T.
2009-01-01
Individuals with Asperger syndrome (AS) often have difficulties in perceiving speech in noisy environments. The present study investigated whether this might be explained by deficient auditory stream segregation ability, that is, by a more basic difficulty in separating simultaneous sound sources from each other. To this end, auditory event-related brain potentials were recorded from a group of school-aged children with AS and a group of age-matched controls using a paradigm specifically developed for studying stream segregation. Differences in the amplitudes of ERP components were found between groups only in the stream segregation conditions and not for simple feature discrimination. The results indicated that children with AS have difficulties in segregating concurrent sound streams, which ultimately may contribute to the difficulties in speech-in-noise perception. PMID:19751798
Air Traffic Controllers’ Long-Term Speech-in-Noise Training Effects: A Control Group Study
Zaballos, María T.P.; Plasencia, Daniel P.; González, María L.Z.; de Miguel, Angel R.; Macías, Ángel R.
2016-01-01
Introduction: Speech perception in noise relies on the capacity of the auditory system to process complex sounds using sensory and cognitive skills. The possibility that these can be trained during adulthood is of special interest in auditory disorders, where speech-in-noise perception becomes compromised. Air traffic controllers (ATC) are constantly exposed to radio communication, a situation that seems to produce auditory learning. The objective of this study was to quantify this effect. Subjects and Methods: 19 ATC and 19 normal-hearing individuals underwent a speech-in-noise test with three signal-to-noise ratios: 5, 0 and −5 dB. Noise and speech were presented through two different loudspeakers in azimuth position. Speech tokens were presented at 65 dB SPL, while white noise files were at 60, 65 and 70 dB SPL, respectively. Results: Air traffic controllers outperformed the control group in all conditions (P<0.05 in ANOVA and Mann-Whitney U tests). Group differences were largest in the most difficult condition, SNR=−5 dB. However, no correlation between experience and performance was found for any of the conditions tested. The reason might be that ceiling performance is achieved much faster than the minimum experience time recorded, 5 years, although intrinsic cognitive abilities cannot be disregarded. Discussion: ATC demonstrated an enhanced ability to hear speech in challenging listening environments. This study provides evidence that long-term auditory training is indeed useful in achieving better speech-in-noise understanding even in adverse conditions, although good cognitive qualities are likely to be a basic requirement for this training to be effective. Conclusion: Our results show that ATC outperform the control group in all conditions. Thus, this study provides evidence that long-term auditory training is useful in achieving better speech-in-noise understanding even in adverse conditions. PMID:27991470
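The three listening conditions above follow directly from the fixed speech level and the three noise levels. A minimal sketch of that arithmetic (variable names are mine, not the study's):

```python
# Speech fixed at 65 dB SPL; white noise at 60, 65 and 70 dB SPL.
# SNR (in dB) is simply the speech level minus the noise level.
speech_level_db = 65
noise_levels_db = [60, 65, 70]

snrs_db = [speech_level_db - noise for noise in noise_levels_db]
print(snrs_db)  # [5, 0, -5]
```

This reproduces the +5, 0 and −5 dB conditions reported, with the −5 dB condition (noise 5 dB louder than speech) being the hardest.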
Stavrinos, Georgios; Iliadou, Vassiliki-Maria; Edwards, Lindsey; Sirimanna, Tony; Bamiou, Doris-Eva
2018-01-01
Measures of attention have been found to correlate with specific auditory processing tests in samples of children suspected of Auditory Processing Disorder (APD), but these relationships have not been adequately investigated. Despite evidence linking auditory attention and deficits/symptoms of APD, measures of attention are not routinely used in APD diagnostic protocols. The aim of the study was to examine the relationship between auditory and visual attention tests and auditory processing tests in children with APD, and to assess whether a proposed diagnostic protocol for APD, including measures of attention, could provide useful information for APD management. A pilot study including 27 children, aged 7-11 years, referred for APD assessment was conducted. The validated test of everyday attention for children, with visual and auditory attention tasks, the listening in spatialized noise sentences test, the children's communication checklist questionnaire and tests from a standard APD diagnostic test battery were administered. Pearson's partial correlation analyses examining the relationships between these tests, and Cochran's Q test analyses comparing the proportions of diagnoses under each proposed battery, were conducted. Divided auditory and divided auditory-visual attention strongly correlated with the dichotic digits test, r = 0.68, p < 0.05, and r = 0.76, p = 0.01, respectively, in a sample of 20 children with an APD diagnosis. The standard APD battery identified a larger proportion of participants as having APD than an attention battery identified as having Attention Deficits (ADs). The proposed APD battery excluding AD cases did not have a significantly different diagnosis proportion than the standard APD battery. Finally, the newly proposed diagnostic battery, identifying an inattentive subtype of APD, identified five children who would otherwise have been considered as not having ADs.
The findings show that a subgroup of children with APD demonstrates underlying sustained and divided attention deficits. Attention deficits in children with APD appear to be centred around the auditory modality but further examination of types of attention in both modalities is required. Revising diagnostic criteria to incorporate attention tests and the inattentive type of APD in the test battery, provides additional useful data to clinicians to ensure careful interpretation of APD assessments. PMID:29441033
Perceptual Plasticity for Auditory Object Recognition
Heald, Shannon L. M.; Van Hedger, Stephen C.; Nusbaum, Howard C.
2017-01-01
In our auditory environment, we rarely experience the exact acoustic waveform twice. This is especially true for communicative signals that have meaning for listeners. In speech and music, the acoustic signal changes as a function of the talker (or instrument), speaking (or playing) rate, and room acoustics, to name a few factors. Yet, despite this acoustic variability, we are able to recognize a sentence or melody as the same across various kinds of acoustic inputs and determine meaning based on listening goals, expectations, context, and experience. The recognition process relates acoustic signals to prior experience despite variability in signal-relevant and signal-irrelevant acoustic properties, some of which could be considered as “noise” in service of a recognition goal. However, some acoustic variability, if systematic, is lawful and can be exploited by listeners to aid in recognition. Perceivable changes in systematic variability can herald a need for listeners to reorganize perception and reorient their attention to more immediately signal-relevant cues. This view is not incorporated currently in many extant theories of auditory perception, which traditionally reduce psychological or neural representations of perceptual objects and the processes that act on them to static entities. While this reduction is likely done for the sake of empirical tractability, such a reduction may seriously distort the perceptual process to be modeled. We argue that perceptual representations, as well as the processes underlying perception, are dynamically determined by an interaction between the uncertainty of the auditory signal and constraints of context. This suggests that the process of auditory recognition is highly context-dependent in that the identity of a given auditory object may be intrinsically tied to its preceding context. 
To argue for the flexible neural and psychological updating of sound-to-meaning mappings across speech and music, we draw upon examples of perceptual categories that are thought to be highly stable. This framework suggests that the process of auditory recognition cannot be divorced from the short-term context in which an auditory object is presented. Implications for auditory category acquisition and extant models of auditory perception, both cognitive and neural, are discussed. PMID:28588524
Aghamolaei, Maryam; Zarnowiec, Katarzyna; Grimm, Sabine; Escera, Carles
2016-02-01
Auditory deviance detection based on regularity encoding appears as one of the basic functional properties of the auditory system. It has traditionally been assessed with the mismatch negativity (MMN) long-latency component of the auditory evoked potential (AEP). Recent studies have found earlier correlates of deviance detection based on regularity encoding. They occur in humans in the first 50 ms after sound onset, at the level of the middle-latency response of the AEP, and parallel findings of stimulus-specific adaptation observed in animal studies. However, the functional relationship between these different levels of regularity encoding and deviance detection along the auditory hierarchy has not yet been clarified. Here we addressed this issue by examining deviant-related responses at different levels of the auditory hierarchy to stimulus changes varying in their degree of deviation regarding the spatial location of a repeated standard stimulus. Auditory stimuli were presented randomly from five loudspeakers at azimuthal angles of 0°, 12°, 24°, 36° and 48° during oddball and reversed-oddball conditions. Middle-latency responses and MMN were measured. Our results revealed that middle-latency responses were sensitive to deviance but not the degree of deviation, whereas the MMN amplitude increased as a function of deviance magnitude. These findings indicated that acoustic regularity can be encoded at the level of the middle-latency response but that it takes a higher step in the auditory hierarchy for deviance magnitude to be encoded, thus providing a functional dissociation between regularity encoding and deviance detection along the auditory hierarchy. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
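The oddball design described above (a repeated standard location with occasional deviants at larger azimuths) can be sketched as a simple stimulus-sequence generator. Everything here, the function name, the deviant probability, and the seed, is a hypothetical illustration of the paradigm, not the authors' actual protocol:

```python
import random

def make_oddball_sequence(n_trials, standard=0, deviants=(12, 24, 36, 48),
                          p_deviant=0.2, seed=1):
    """Return a trial list of azimuth angles (degrees): mostly the standard
    location, with rare deviants drawn uniformly from the deviant set."""
    rng = random.Random(seed)  # seeded for reproducibility
    sequence = []
    for _ in range(n_trials):
        if rng.random() < p_deviant:
            sequence.append(rng.choice(deviants))
        else:
            sequence.append(standard)
    return sequence

seq = make_oddball_sequence(500)
deviant_rate = sum(angle != 0 for angle in seq) / len(seq)
print(deviant_rate)  # typically close to 0.2
```

A "reversed-oddball" control block, as used in the study, would simply swap the roles of standard and deviant locations so that deviant-related responses can be separated from location-specific ones.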
Multimodal lexical processing in auditory cortex is literacy skill dependent.
McNorgan, Chris; Awati, Neha; Desroches, Amy S; Booth, James R
2014-09-01
Literacy is a uniquely human cross-modal cognitive process wherein visual orthographic representations become associated with auditory phonological representations through experience. Developmental studies provide insight into how experience-dependent changes in brain organization influence phonological processing as a function of literacy. Previous investigations show a synchrony-dependent influence of letter presentation on individual phoneme processing in superior temporal sulcus; others demonstrate recruitment of primary and associative auditory cortex during cross-modal processing. We sought to determine whether brain regions supporting phonological processing of larger lexical units (monosyllabic words) over larger time windows is sensitive to cross-modal information, and whether such effects are literacy dependent. Twenty-two children (age 8-14 years) made rhyming judgments for sequentially presented word and pseudoword pairs presented either unimodally (auditory- or visual-only) or cross-modally (audiovisual). Regression analyses examined the relationship between literacy and congruency effects (overlapping orthography and phonology vs. overlapping phonology-only). We extend previous findings by showing that higher literacy is correlated with greater congruency effects in auditory cortex (i.e., planum temporale) only for cross-modal processing. These skill effects were specific to known words and occurred over a large time window, suggesting that multimodal integration in posterior auditory cortex is critical for fluent reading. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Chen, Yi-Chuan; Spence, Charles
2018-04-30
We examined the time-courses and categorical specificity of the crossmodal semantic congruency effects elicited by naturalistic sounds and spoken words on the processing of visual pictures (Experiment 1) and printed words (Experiment 2). Auditory cues were presented at 7 different stimulus onset asynchronies (SOAs) with respect to the visual targets, and participants made speeded categorization judgments (living vs. nonliving). Three common effects were observed across 2 experiments: Both naturalistic sounds and spoken words induced a slowly emerging congruency effect when leading by 250 ms or more in the congruent compared with the incongruent condition, and a rapidly emerging inhibitory effect when leading by 250 ms or less in the incongruent condition as opposed to the noise condition. Only spoken words that did not match the visual targets elicited an additional inhibitory effect when leading by 100 ms or when presented simultaneously. Compared with nonlinguistic stimuli, the crossmodal congruency effects associated with linguistic stimuli occurred over a wider range of SOAs and occurred at a more specific level of the category hierarchy (i.e., the basic level) than was required by the task. A comprehensive framework is proposed to provide a dynamic view regarding how meaning is extracted during the processing of visual or auditory linguistic and nonlinguistic stimuli, therefore contributing to our understanding of multisensory semantic processing in humans. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
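A crossmodal congruency effect of the kind measured above is typically computed per SOA as the mean reaction-time difference between incongruent and congruent trials. The sketch below uses invented RT values purely for illustration; the SOA grid, variable names, and numbers are not taken from the study:

```python
# Auditory-leading SOAs are negative (ms); illustrative mean RTs in ms.
soas = [-500, -250, -100, 0, 100, 250, 500]
rt_congruent   = {-500: 540, -250: 548, -100: 560, 0: 565, 100: 570, 250: 572, 500: 574}
rt_incongruent = {-500: 585, -250: 580, -100: 566, 0: 568, 100: 571, 250: 573, 500: 575}

# Positive values mean congruent auditory cues speeded categorization.
congruency_effect = {soa: rt_incongruent[soa] - rt_congruent[soa] for soa in soas}
print(congruency_effect[-500])  # 45
```

On this toy pattern the effect is large at long auditory leads and shrinks toward zero at simultaneity, mirroring the "slowly emerging" congruency effect the abstract describes for cues leading by 250 ms or more.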
Absence of both auditory evoked potentials and auditory percepts dependent on timing cues.
Starr, A; McPherson, D; Patterson, J; Don, M; Luxford, W; Shannon, R; Sininger, Y; Tonakawa, L; Waring, M
1991-06-01
An 11-yr-old girl had an absence of sensory components of auditory evoked potentials (brainstem, middle and long-latency) to click and tone burst stimuli that she could clearly hear. Psychoacoustic tests revealed a marked impairment of those auditory perceptions dependent on temporal cues, that is, lateralization of binaural clicks, change of binaural masked threshold with changes in signal phase, binaural beats, detection of paired monaural clicks, monaural detection of a silent gap in a sound, and monaural threshold elevation for short duration tones. In contrast, auditory functions reflecting intensity or frequency discriminations (difference limens) were only minimally impaired. Pure tone audiometry showed a moderate (50 dB) bilateral hearing loss with a disproportionate severe loss of word intelligibility. Those auditory evoked potentials that were preserved included (1) cochlear microphonics reflecting hair cell activity; (2) cortical sustained potentials reflecting processing of slowly changing signals; and (3) long-latency cognitive components (P300, processing negativity) reflecting endogenous auditory cognitive processes. Both the evoked potential and perceptual deficits are attributed to changes in temporal encoding of acoustic signals perhaps occurring at the synapse between hair cell and eighth nerve dendrites. The results from this patient are discussed in relation to previously published cases with absent auditory evoked potentials and preserved hearing.
ERIC Educational Resources Information Center
Chung, Wei-Lun; Jarmulowicz, Linda; Bidelman, Gavin M.
2017-01-01
This study examined language-specific links among auditory processing, linguistic prosody awareness, and Mandarin (L1) and English (L2) word reading in 61 Mandarin-speaking, English-learning children. Three auditory discrimination abilities were measured: pitch contour, pitch interval, and rise time (rate of intensity change at tone onset).…
Teaching Turkish as a Foreign Language: Extrapolating from Experimental Psychology
ERIC Educational Resources Information Center
Erdener, Dogu
2017-01-01
Speech perception is beyond the auditory domain and a multimodal process, specifically, an auditory-visual one--we process lip and face movements during speech. In this paper, the findings in cross-language studies of auditory-visual speech perception in the past two decades are interpreted to the applied domain of second language (L2)…
Utilizing Oral-Motor Feedback in Auditory Conceptualization.
ERIC Educational Resources Information Center
Howard, Marilyn
The Auditory Discrimination in Depth (ADD) program, an oral-motor approach to beginning reading instruction, trains first grade children in auditory skills by a process in which language and oral-motor feedback are used to integrate auditory properties with visual properties. This emphasis of the ADD program makes the child's perceptual…
Intertrial auditory neural stability supports beat synchronization in preschoolers
Carr, Kali Woodruff; Tierney, Adam; White-Schwoch, Travis; Kraus, Nina
2016-01-01
The ability to synchronize motor movements along with an auditory beat places stringent demands on the temporal processing and sensorimotor integration capabilities of the nervous system. Links between millisecond-level precision of auditory processing and the consistency of sensorimotor beat synchronization implicate fine auditory neural timing as a mechanism for forming stable internal representations of, and behavioral reactions to, sound. Here, for the first time, we demonstrate a systematic relationship between consistency of beat synchronization and trial-by-trial stability of subcortical speech processing in preschoolers (ages 3 and 4 years old). We conclude that beat synchronization might provide a useful window into millisecond-level neural precision for encoding sound in early childhood, when speech processing is especially important for language acquisition and development. PMID:26760457
Kantrowitz, J T; Hoptman, M J; Leitman, D I; Silipo, G; Javitt, D C
2014-01-01
Intact sarcasm perception is a crucial component of social cognition and mentalizing (the ability to understand the mental state of oneself and others). In sarcasm, tone of voice is used to negate the literal meaning of an utterance. In particular, changes in pitch are used to distinguish between sincere and sarcastic utterances. Schizophrenia patients show well-replicated deficits in auditory function and functional connectivity (FC) within and between auditory cortical regions. In this study we investigated the contributions of auditory deficits to sarcasm perception in schizophrenia. Auditory measures including pitch processing, auditory emotion recognition (AER) and sarcasm detection were obtained from 76 patients with schizophrenia/schizo-affective disorder and 72 controls. Resting-state FC (rsFC) was obtained from a subsample and was analyzed using seeds placed in both auditory cortex and meta-analysis-defined core-mentalizing regions relative to auditory performance. Patients showed large effect-size deficits across auditory measures. Sarcasm deficits correlated significantly with general functioning and impaired pitch processing both across groups and within the patient group alone. Patients also showed reduced sensitivity to alterations in mean pitch and variability. For patients, sarcasm discrimination correlated exclusively with the level of rsFC within primary auditory regions whereas for controls, correlations were observed exclusively within core-mentalizing regions (the right posterior superior temporal gyrus, anterior superior temporal sulcus and insula, and left posterior medial temporal gyrus). These findings confirm the contribution of auditory deficits to theory of mind (ToM) impairments in schizophrenia, and demonstrate that FC within auditory, but not core-mentalizing, regions is rate limiting with respect to sarcasm detection in schizophrenia.
Auditory Task Irrelevance: A Basis for Inattentional Deafness
Scheer, Menja; Bülthoff, Heinrich H.; Chuang, Lewis L.
2018-01-01
Objective This study investigates the neural basis of inattentional deafness, which could result from task irrelevance in the auditory modality. Background Humans can fail to respond to auditory alarms under high workload situations. This failure, termed inattentional deafness, is often attributed to high workload in the visual modality, which reduces one’s capacity for information processing. Besides this, our capacity for processing auditory information could also be selectively diminished if there is no obvious task relevance in the auditory channel. This could be another contributing factor given the rarity of auditory warnings. Method Forty-eight participants performed a visuomotor tracking task while auditory stimuli were presented: a frequent pure tone, an infrequent pure tone, and infrequent environmental sounds. Participants were required either to respond to the presentation of the infrequent pure tone (auditory task-relevant) or not (auditory task-irrelevant). We recorded and compared the event-related potentials (ERPs) that were generated by environmental sounds, which were always task-irrelevant for both groups. These ERPs served as an index for our participants’ awareness of the task-irrelevant auditory scene. Results Manipulation of auditory task relevance influenced the brain’s response to task-irrelevant environmental sounds. Specifically, the late novelty-P3 to irrelevant environmental sounds, which underlies working memory updating, was found to be selectively enhanced by auditory task relevance independent of visuomotor workload. Conclusion Task irrelevance in the auditory modality selectively reduces our brain’s responses to unexpected and irrelevant sounds regardless of visuomotor workload. Application Presenting relevant auditory information more often could mitigate the risk of inattentional deafness. PMID:29578754
1984-08-01
This work reviews the areas of monaural and binaural signal detection, auditory discrimination and localization, and reaction times to auditory displays, presenting reviews pertaining to the major areas of auditory processing in humans.
[Intervention in dyslexic disorders: phonological awareness training].
Etchepareborda, M C
2003-02-01
Taking the brain's information-processing systems into account when drawing up a work plan allows us to recreate processing routines that run from multisensory perception to motor, oral and cognitive production, the step prior to executive levels of thought, engaging both bottom-up and top-down processing systems. In recent years, the use of phonological methods to prevent or resolve reading disorders has become the fundamental mainstay in the treatment of dyslexia. The work is mainly based on phonological proficiency, which enables the patient to detect phonemes (input), to think about them (performance) and to use them to build words (output). Daily work with rhymes, the capacity to listen, the identification of phrases and words, and the handling of syllables and phonemes allows us to perform a preventive intervention that enhances letter identification, phonological analysis and the reading of single words. We present the different therapeutic models that are most frequently employed. Fast ForWord (FFW) training helps make progress in phonemic awareness and other linguistic skills, such as phonological awareness, semantics, syntax, grammar, working memory and event sequencing. With Deco-Fon, a programme for training phonological decoding, work is carried out on the auditory discrimination of pure tones, letters and consonant clusters, auditory processing speed, auditory and phonemic memory, and graphophonological processing, which is fundamental in speech, language and reading and writing disorders. Hamlet is a programme based on categorisation activities for working on phonological conceptualisation. It attempts to encourage the analysis of the segments of words, syllables or phonemes, and the classification of a given segment as belonging or not to a particular phonological or orthographical category.
Therapeutic approaches in the early phases of reading are oriented towards two poles, based on the two basic mechanisms underlying the process of learning to read: the grapheme-phoneme transformation process and global word recognition. The interventionist strategies used at school focus on cognitive strategy techniques, whose purpose is to teach pupils practical strategies or resources for overcoming specific deficiencies.
Vélez-Ortega, A. Catalina; Frolenkov, Gregory I.
2016-01-01
The mechanosensory apparatus that detects sound-induced vibrations in the cochlea is located on the apex of the auditory sensory hair cells and it is made up of actin-filled projections, called stereocilia. In young rodents, stereocilia bundles of auditory hair cells consist of 3 to 4 rows of stereocilia of decreasing height and varying thickness. Morphological studies of the auditory stereocilia bundles in live hair cells have been challenging because the diameter of each stereocilium is near or below the resolution limit of optical microscopy. In theory, scanning probe microscopy techniques, such as atomic force microscopy, could visualize the surface of a living cell at a nanoscale resolution. However, their implementations for hair cell imaging have been largely unsuccessful because the probe usually damages the bundle and disrupts the bundle cohesiveness during imaging. We overcome these limitations by using hopping probe ion conductance microscopy (HPICM), a non-contact scanning probe technique that is ideally suited for the imaging of live cells with a complex topography. Organ of Corti explants are placed in a physiological solution and then a glass nanopipette –which is connected to a 3D-positioning piezoelectric system and to a patch clamp amplifier– is used to scan the surface of the live hair cells at nanometer resolution without ever touching the cell surface. Here we provide a detailed protocol for the imaging of mouse or rat stereocilia bundles in live auditory hair cells using HPICM. We provide information about the fabrication of the nanopipettes, the calibration of the HPICM setup, the parameters we have optimized for the imaging of live stereocilia bundles and, lastly, a few basic image post-processing manipulations. PMID:27259929
Bender, Stephan; Behringer, Stephanie; Freitag, Christine M; Resch, Franz; Weisbrod, Matthias
2010-12-01
To elucidate the contributions of modality-dependent post-processing in auditory, motor and visual cortical areas to short-term memory. We compared late negative waves (N700) during the post-processing of single lateralized stimuli which were separated by long intertrial intervals across the auditory, motor and visual modalities. Tasks either required or competed with attention to post-processing of preceding events, i.e. active short-term memory maintenance. N700 indicated that cortical post-processing exceeded short movements as well as short auditory or visual stimuli for over half a second without intentional short-term memory maintenance. Modality-specific topographies pointed towards sensory (respectively motor) generators with comparable time-courses across the different modalities. Lateralization and amplitude of auditory/motor/visual N700 were enhanced by active short-term memory maintenance compared to attention to current perceptions or passive stimulation. The memory-related N700 increase followed the characteristic time-course and modality-specific topography of the N700 without intentional memory-maintenance. Memory-maintenance-related lateralized negative potentials may be related to a less lateralised modality-dependent post-processing N700 component which occurs also without intentional memory maintenance (automatic memory trace or effortless attraction of attention). Encoding to short-term memory may involve controlled attention to modality-dependent post-processing. Similar short-term memory processes may exist in the auditory, motor and visual systems. Copyright © 2010 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Age-equivalent top-down modulation during cross-modal selective attention.
Guerreiro, Maria J S; Anguera, Joaquin A; Mishra, Jyoti; Van Gerven, Pascal W M; Gazzaley, Adam
2014-12-01
Selective attention involves top-down modulation of sensory cortical areas, such that responses to relevant information are enhanced whereas responses to irrelevant information are suppressed. Suppression of irrelevant information, unlike enhancement of relevant information, has been shown to be deficient in aging. Although these attentional mechanisms have been well characterized within the visual modality, little is known about these mechanisms when attention is selectively allocated across sensory modalities. The present EEG study addressed this issue by testing younger and older participants in three different tasks: Participants attended to the visual modality and ignored the auditory modality, attended to the auditory modality and ignored the visual modality, or passively perceived information presented through either modality. We found overall modulation of visual and auditory processing during cross-modal selective attention in both age groups. Top-down modulation of visual processing was observed as a trend toward enhancement of visual information in the setting of auditory distraction, but no significant suppression of visual distraction when auditory information was relevant. Top-down modulation of auditory processing, on the other hand, was observed as suppression of auditory distraction when visual stimuli were relevant, but no significant enhancement of auditory information in the setting of visual distraction. In addition, greater visual enhancement was associated with better recognition of relevant visual information, and greater auditory distractor suppression was associated with a better ability to ignore auditory distraction. There were no age differences in these effects, suggesting that when relevant and irrelevant information are presented through different sensory modalities, selective attention remains intact in older age.
Visual processing affects the neural basis of auditory discrimination.
Kislyuk, Daniel S; Möttönen, Riikka; Sams, Mikko
2008-12-01
The interaction between auditory and visual speech streams is a seamless and surprisingly effective process. An intriguing example is the "McGurk effect": The acoustic syllable /ba/ presented simultaneously with a mouth articulating /ga/ is typically heard as /da/ [McGurk, H., & MacDonald, J. Hearing lips and seeing voices. Nature, 264, 746-748, 1976]. Previous studies have demonstrated the interaction of auditory and visual streams at the auditory cortex level, but the importance of these interactions for the qualitative perception change remained unclear because the change could result from interactions at higher processing levels as well. In our electroencephalogram experiment, we combined the McGurk effect with mismatch negativity (MMN), a response that is elicited in the auditory cortex at a latency of 100-250 msec by any above-threshold change in a sequence of repetitive sounds. An "odd-ball" sequence of acoustic stimuli consisting of frequent /va/ syllables (standards) and infrequent /ba/ syllables (deviants) was presented to 11 participants. Deviant stimuli in the unisensory acoustic stimulus sequence elicited a typical MMN, reflecting discrimination of acoustic features in the auditory cortex. When the acoustic stimuli were dubbed onto a video of a mouth constantly articulating /va/, the deviant acoustic /ba/ was heard as /va/ due to the McGurk effect and was indistinguishable from the standards. Importantly, such deviants did not elicit MMN, indicating that the auditory cortex failed to discriminate between the acoustic stimuli. Our findings show that visual stream can qualitatively change the auditory percept at the auditory cortex level, profoundly influencing the auditory cortex mechanisms underlying early sound discrimination.
Franken, Matthias K; Eisner, Frank; Acheson, Daniel J; McQueen, James M; Hagoort, Peter; Schoffelen, Jan-Mathijs
2018-06-21
Speaking is a complex motor skill which requires near-instantaneous integration of sensory and motor-related information. Current theory hypothesizes a complex interplay between motor and auditory processes during speech production, involving the online comparison of the speech output with an internally generated forward model. To examine the neural correlates of this intricate interplay between sensory and motor processes, the current study uses altered auditory feedback (AAF) in combination with magnetoencephalography (MEG). Participants vocalized the vowel /e/ and heard auditory feedback that was temporarily pitch-shifted by only 25 cents, while neural activity was recorded with MEG. As a control condition, participants also heard recordings of the same auditory feedback that they heard in the first half of the experiment, now without vocalizing. The participants were not aware of any perturbation of the auditory feedback. We found that auditory cortical areas responded more strongly to the pitch shifts during vocalization. In addition, auditory feedback perturbation resulted in spectral power increases in the θ and lower β bands, predominantly in sensorimotor areas. These results are in line with current models of speech production, suggesting that auditory cortical areas are involved in an active comparison between a forward model's prediction and the actual sensory input. Subsequently, these areas interact with motor areas to generate a motor response. Furthermore, the results suggest that θ and β power increases support auditory-motor interaction, motor error detection and/or sensory prediction processing. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
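The 25-cent perturbation mentioned above is expressed in cents, the standard logarithmic pitch unit (100 cents per semitone, 1200 per octave); a small sketch of the cents-to-frequency conversion (the 220 Hz base frequency is an illustrative value, not taken from the study):

```python
def shift_by_cents(f0_hz, cents):
    """Return f0 shifted by the given number of cents.

    1200 cents = 1 octave, so the frequency ratio is 2**(cents/1200).
    """
    return f0_hz * 2 ** (cents / 1200)

# A 25-cent upward shift changes frequency by only about 1.5%,
# small enough that speakers often do not consciously notice it.
f_shifted = shift_by_cents(220.0, 25)   # ≈ 223.2 Hz
```

The logarithmic scale is what makes "25 cents" a perceptually comparable perturbation regardless of the speaker's base pitch.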
Lu, Yao; Paraskevopoulos, Evangelos; Herholz, Sibylle C.; Kuchenbuch, Anja; Pantev, Christo
2014-01-01
Numerous studies have demonstrated that the structural and functional differences between professional musicians and non-musicians are found not only within a single modality, but also with regard to multisensory integration. In this study we combined psychophysical with neurophysiological measurements to investigate the processing of non-musical audiovisual events that were synchronous or asynchronous to varying degrees. We hypothesized that long-term multisensory experience alters temporal audiovisual processing even for non-musical stimuli. Behaviorally, musicians scored significantly better than non-musicians in judging whether the auditory and visual stimuli were synchronous or asynchronous. At the neural level, the statistical analysis for the audiovisual asynchronous response revealed three clusters of activations, including the ACC and the SFG, and two bilaterally located activations in IFG and STG in both groups. Musicians, in comparison to non-musicians, responded to synchronous audiovisual events with enhanced neuronal activity in a broad left posterior temporal region that covers the STG, the insula and the postcentral gyrus. Musicians also showed significantly greater activation in the left cerebellum when confronted with an audiovisual asynchrony. Taken together, our MEG results form a strong indication that long-term musical training alters basic audiovisual temporal processing at an early stage (directly after the auditory N1 wave), while the psychophysical results indicate that musical training may also provide behavioral benefits in the accuracy of estimates of the timing of audiovisual events. PMID:24595014
Cross-modal perceptual load: the impact of modality and individual differences.
Sandhu, Rajwant; Dyson, Benjamin James
2016-05-01
Visual distractor processing tends to be more pronounced when the perceptual load (PL) of a task is low compared to when it is high [perceptual load theory (PLT); Lavie in J Exp Psychol Hum Percept Perform 21(3):451-468, 1995]. While PLT is well established in the visual domain, application to cross-modal processing has produced mixed results, and the current study was designed in an attempt to improve previous methodologies. First, we assessed PLT using response competition, a typical metric from the uni-modal domain. Second, we looked at the impact of auditory load on visual distractors, and of visual load on auditory distractors, within the same individual. Third, we compared individual uni- and cross-modal selective attention abilities, by correlating performance with the visual Attentional Network Test (ANT). Fourth, we obtained a measure of the relative processing efficiency between vision and audition, to investigate whether processing ease influences the extent of distractor processing. Although distractor processing was evident during both attend-auditory and attend-visual conditions, we found that PL did not modulate processing of either visual or auditory distractors. We also found support for a correlation between the uni-modal (visual) ANT and our cross-modal task, but only when the distractors were visual. Finally, although auditory processing was more impacted by visual distractors, our measure of processing efficiency only accounted for this asymmetry in the auditory high-load condition. The results are discussed with respect to the continued debate regarding the shared or separate nature of processing resources across modalities.
Impairments of Multisensory Integration and Cross-Sensory Learning as Pathways to Dyslexia
Hahn, Noemi; Foxe, John J.; Molholm, Sophie
2014-01-01
Two sensory systems are intrinsic to learning to read. Written words enter the brain through the visual system and associated sounds through the auditory system. The task before the beginning reader is quite basic. She must learn correspondences between orthographic tokens and phonemic utterances, and she must do this to the point that there is seamless automatic ‘connection’ between these sensorially distinct units of language. It is self-evident then that learning to read requires formation of cross-sensory associations to the point that deeply encoded multisensory representations are attained. While the majority of individuals manage this task to a high degree of expertise, some struggle to attain even rudimentary capabilities. Why do dyslexic individuals, who learn well in myriad other domains, fail at this particular task? Here, we examine the literature as it pertains to multisensory processing in dyslexia. We find substantial support for multisensory deficits in dyslexia, and make the case that to fully understand its neurological basis, it will be necessary to thoroughly probe the integrity of auditory-visual integration mechanisms. PMID:25265514
Kotchoubey, Boris; Pavlov, Yuri G; Kleber, Boris
2015-01-01
According to a prevailing view, the visual system works by dissecting stimuli into primitives, whereas the auditory system processes simple and complex stimuli with their corresponding features in parallel. This makes musical stimulation particularly suitable for patients with disorders of consciousness (DoC), because the processing pathways related to complex stimulus features can be preserved even when those related to simple features are no longer available. An additional factor speaking in favor of musical stimulation in DoC is the low efficiency of visual stimulation due to prevalent maladies of vision or gaze fixation in DoC patients. Hearing disorders, in contrast, are much less frequent in DoC, which allows us to use auditory stimulation at various levels of complexity. The current paper overviews empirical data concerning the four main domains of brain functioning in DoC patients that musical stimulation can address: perception (e.g., pitch, timbre, and harmony), cognition (e.g., musical syntax and meaning), emotions, and motor functions. Music can approach basic levels of patients' self-consciousness, which may even exist when all higher-level cognitions are lost, whereas music induced emotions and rhythmic stimulation can affect the dopaminergic reward-system and activity in the motor system respectively, thus serving as a starting point for rehabilitation.
NASA Astrophysics Data System (ADS)
Shinn-Cunningham, Barbara
2003-04-01
One of the key functions of hearing is to help us monitor and orient to events in our environment (including those outside the line of sight). The ability to compute the spatial location of a sound source is also important for detecting, identifying, and understanding the content of a sound source, especially in the presence of competing sources from other positions. Determining the spatial location of a sound source poses difficult computational challenges; however, we perform this complex task with proficiency, even in the presence of noise and reverberation. This tutorial will review the acoustic, psychoacoustic, and physiological processes underlying spatial auditory perception. First, the tutorial will examine how the many different features of the acoustic signals reaching a listener's ears provide cues for source direction and distance, both in anechoic and reverberant space. Then we will discuss psychophysical studies of three-dimensional sound localization in different environments and the basic neural mechanisms by which spatial auditory cues are extracted. Finally, "virtual reality" approaches for simulating sounds at different directions and distances under headphones will be reviewed. The tutorial will be structured to appeal to a diverse audience with interests in all fields of acoustics and will incorporate concepts from many areas, such as psychological and physiological acoustics, architectural acoustics, and signal processing.
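One of the direction cues such a tutorial typically covers is the interaural time difference (ITD), the delay between a sound's arrival at the two ears. A rough sketch using Woodworth's spherical-head approximation (the 8.75 cm head radius and 343 m/s speed of sound are conventional assumed values):

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth's spherical-head approximation of the interaural
    time difference for a distant source.

    azimuth_deg: 0 = straight ahead, 90 = directly to one side.
    The wave travels an extra path of a*(theta + sin(theta))
    around the head to reach the far ear.
    """
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

# ITD grows from 0 at the midline to roughly 650-700 microseconds
# for a source directly to the side of the head.
itd_side_us = itd_seconds(90) * 1e6
```

Sub-millisecond differences of this size are resolved by the binaural system and combined with interaural level and spectral cues to localize sources.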
Phillips, D P; Farmer, M E
1990-11-15
This paper explores the nature of the processing disorder which underlies the speech discrimination deficit in the syndrome of acquired word deafness resulting from pathology of the primary auditory cortex. A critical examination of the evidence on this disorder revealed the following. First, the most profound forms of the condition are expressed not only in an isolation of the cerebral linguistic processor from auditory input, but in a failure of even the perceptual elaboration of the relevant sounds. Second, in agreement with earlier studies, we conclude that the perceptual dimension disturbed in word deafness is a temporal one. We argue, however, that it is not a generalized disorder of auditory temporal processing, but one which is largely restricted to the processing of sounds with temporal content in the milliseconds to tens-of-milliseconds time frame. The perceptual elaboration of sounds with temporal content outside that range, in either direction, may survive the disorder. Third, we present neurophysiological evidence that the primary auditory cortex has a special role in the representation of auditory events in that time frame, but not in the representation of auditory events with temporal grains outside that range.
Perceptual consequences of disrupted auditory nerve activity.
Zeng, Fan-Gang; Kong, Ying-Yee; Michalewski, Henry J; Starr, Arnold
2005-06-01
Perceptual consequences of disrupted auditory nerve activity were systematically studied in 21 subjects who had been clinically diagnosed with auditory neuropathy (AN), a recently defined disorder characterized by normal outer hair cell function but disrupted auditory nerve function. Neurological and electrophysical evidence suggests that disrupted auditory nerve activity is due to desynchronized or reduced neural activity or both. Psychophysical measures showed that the disrupted neural activity has minimal effects on intensity-related perception, such as loudness discrimination, pitch discrimination at high frequencies, and sound localization using interaural level differences. In contrast, the disrupted neural activity significantly impairs timing-related perception, such as pitch discrimination at low frequencies, temporal integration, gap detection, temporal modulation detection, backward and forward masking, signal detection in noise, binaural beats, and sound localization using interaural time differences. These perceptual consequences are the opposite of what is typically observed in cochlear-impaired subjects, who have impaired intensity perception but relatively normal temporal processing after taking their impaired intensity perception into account. These differences in perceptual consequences between auditory neuropathy and cochlear damage suggest the use of different neural codes in auditory perception: a suboptimal spike count code for intensity processing, a synchronized spike code for temporal processing, and a duplex code for frequency processing. We also proposed two underlying physiological models based on desynchronized and reduced discharge in the auditory nerve to successfully account for the observed neurological and behavioral data. These methods and measures cannot differentiate between these two AN models, but future studies using electric stimulation of the auditory nerve via a cochlear implant might. These results not only show the unique contribution of neural synchrony to sensory perception but also provide guidance for translational research in terms of better diagnosis and management of human communication disorders.
Diminished Auditory Responses during NREM Sleep Correlate with the Hierarchy of Language Processing.
Wilf, Meytal; Ramot, Michal; Furman-Haran, Edna; Arzi, Anat; Levkovitz, Yechiel; Malach, Rafael
2016-01-01
Natural sleep provides a powerful model system for studying the neuronal correlates of awareness and state changes in the human brain. To quantitatively map the nature of sleep-induced modulations in sensory responses we presented participants with auditory stimuli possessing different levels of linguistic complexity. Ten participants were scanned using functional magnetic resonance imaging (fMRI) during the waking state and after falling asleep. Sleep staging was based on heart rate measures validated independently on 20 participants using concurrent EEG and heart rate measurements and the results were confirmed using permutation analysis. Participants were exposed to three types of auditory stimuli: scrambled sounds, meaningless word sentences and comprehensible sentences. During non-rapid eye movement (NREM) sleep, we found diminishing brain activation along the hierarchy of language processing, more pronounced in higher processing regions. Specifically, the auditory thalamus showed similar activation levels during sleep and waking states, primary auditory cortex remained activated but showed a significant reduction in auditory responses during sleep, and the high order language-related representation in inferior frontal gyrus (IFG) cortex showed a complete abolishment of responses during NREM sleep. In addition to an overall activation decrease in language processing regions in superior temporal gyrus and IFG, those areas manifested a loss of semantic selectivity during NREM sleep. Our results suggest that the decreased awareness to linguistic auditory stimuli during NREM sleep is linked to diminished activity in high order processing stations.
Visser, Eelke; Zwiers, Marcel P.; Kan, Cornelis C.; Hoekstra, Liesbeth; van Opstal, A. John; Buitelaar, Jan K.
2013-01-01
Background Autism spectrum disorders (ASDs) are associated with auditory hyper- or hyposensitivity; atypicalities in central auditory processes, such as speech-processing and selective auditory attention; and neural connectivity deficits. We sought to investigate whether the low-level integrative processes underlying sound localization and spatial discrimination are affected in ASDs. Methods We performed 3 behavioural experiments to probe different connecting neural pathways: 1) horizontal and vertical localization of auditory stimuli in a noisy background, 2) vertical localization of repetitive frequency sweeps and 3) discrimination of horizontally separated sound stimuli with a short onset difference (precedence effect). Results Ten adult participants with ASDs and 10 healthy control listeners participated in experiments 1 and 3; sample sizes for experiment 2 were 18 adults with ASDs and 19 controls. Horizontal localization was unaffected, but vertical localization performance was significantly worse in participants with ASDs. The temporal window for the precedence effect was shorter in participants with ASDs than in controls. Limitations The study was performed with adult participants and hence does not provide insight into the developmental aspects of auditory processing in individuals with ASDs. Conclusion Changes in low-level auditory processing could underlie degraded performance in vertical localization, which would be in agreement with recently reported changes in the neuroanatomy of the auditory brainstem in individuals with ASDs. The results are further discussed in the context of theories about abnormal brain connectivity in individuals with ASDs. PMID:24148845
Cerebral processing of auditory stimuli in patients with irritable bowel syndrome
Andresen, Viola; Poellinger, Alexander; Tsrouya, Chedwa; Bach, Dominik; Stroh, Albrecht; Foerschler, Annette; Georgiewa, Petra; Schmidtmann, Marco; van der Voort, Ivo R; Kobelt, Peter; Zimmer, Claus; Wiedenmann, Bertram; Klapp, Burghard F; Monnikes, Hubert
2006-01-01
AIM: To determine by brain functional magnetic resonance imaging (fMRI) whether cerebral processing of non-visceral stimuli is altered in irritable bowel syndrome (IBS) patients compared with healthy subjects. To circumvent spinal viscerosomatic convergence mechanisms, we used auditory stimulation, and to identify a possible influence of psychological factors the stimuli differed in their emotional quality. METHODS: In 8 IBS patients and 8 controls, fMRI measurements were performed using a block design of 4 auditory stimuli of different emotional quality (pleasant sounds of chimes, unpleasant peep (2000 Hz), neutral words, and emotional words). A gradient echo T2*-weighted sequence was used for the functional scans. Statistical maps were constructed using the general linear model. RESULTS: To emotional auditory stimuli, IBS patients relative to controls responded with stronger deactivations in a greater variety of emotional processing regions, while the response patterns, unlike in controls, did not differentiate between distressing or pleasant sounds. To neutral auditory stimuli, by contrast, only IBS patients responded with large significant activations. CONCLUSION: Altered cerebral response patterns to auditory stimuli in emotional stimulus-processing regions suggest that altered sensory processing in IBS may not be specific for visceral sensation, but might reflect generalized changes in emotional sensitivity and affective reactivity, possibly associated with the psychological comorbidity often found in IBS patients. PMID:16586541
Terband, H.; Maassen, B.; Guenther, F.H.; Brumberg, J.
2014-01-01
Background/Purpose Differentiating the symptom complex due to phonological-level disorders, speech delay and pediatric motor speech disorders is a controversial issue in the field of pediatric speech and language pathology. The present study investigated the developmental interaction between neurological deficits in auditory and motor processes using computational modeling with the DIVA model. Method In a series of computer simulations, we investigated the effect of a motor processing deficit alone (MPD), and the effect of a motor processing deficit in combination with an auditory processing deficit (MPD+APD) on the trajectory and endpoint of speech motor development in the DIVA model. Results Simulation results showed that a motor programming deficit predominantly leads to deterioration on the phonological level (phonemic mappings) when auditory self-monitoring is intact, and on the systemic level (systemic mapping) if auditory self-monitoring is impaired. Conclusions These findings suggest a close relation between quality of auditory self-monitoring and the involvement of phonological vs. motor processes in children with pediatric motor speech disorders. It is suggested that MPD+APD might be involved in typically apraxic speech output disorders and MPD in pediatric motor speech disorders that also have a phonological component. Possibilities to verify these hypotheses using empirical data collected from human subjects are discussed. PMID:24491630
NASA Astrophysics Data System (ADS)
Mulligan, B. E.; Goodman, L. S.; McBride, D. K.; Mitchell, T. M.; Crosby, T. N.
1984-08-01
This work reviews the areas of auditory attention, recognition, memory and auditory perception of patterns, pitch, and loudness. The review was written from the perspective of human engineering and focuses primarily on auditory processing of information contained in acoustic signals. The impetus for this effort was to establish a data base to be utilized in the design and evaluation of acoustic displays.
Sussman, Elyse; Winkler, István; Kreuzer, Judith; Saher, Marieke; Näätänen, Risto; Ritter, Walter
2002-12-01
Our previous study showed that the auditory context could influence whether two successive acoustic changes occurring within the temporal integration window (approximately 200ms) were pre-attentively encoded as a single auditory event or as two discrete events (Cogn Brain Res 12 (2001) 431). The aim of the current study was to assess whether top-down processes could influence the stimulus-driven processes in determining what constitutes an auditory event. Electroencephalogram (EEG) was recorded from 11 scalp electrodes to frequently occurring standard and infrequently occurring deviant sounds. Within the stimulus blocks, deviants either occurred only in pairs (successive feature changes) or both singly and in pairs. Event-related potential indices of change and target detection, the mismatch negativity (MMN) and the N2b component, respectively, were compared with the simultaneously measured performance in discriminating the deviants. Even though subjects could voluntarily distinguish the two successive auditory feature changes from each other, which was also indicated by the elicitation of the N2b target-detection response, top-down processes did not modify the event organization reflected by the MMN response. Top-down processes can extract elemental auditory information from a single integrated acoustic event, but the extraction occurs at a later processing stage than the one whose outcome is indexed by MMN. Initial processes of auditory event-formation are fully governed by the context within which the sounds occur. Perception of the deviants as two separate sound events (the top-down effect) did not change the initial neural representation of the same deviants as one event (indexed by the MMN); the stimulus-driven sound organization remained unaltered.
Central Auditory Processing Disorders: Is It a Meaningful Construct or a Twentieth Century Unicorn?
ERIC Educational Resources Information Center
Kamhi, Alan G.; Beasley, Daniel S.
1985-01-01
The article demonstrates how professional and theoretical perspectives (including psycholinguistics, behaviorist, and information processing perspectives) significantly influence the manner in which central auditory processing is viewed, assessed, and remediated. (Author/CL)
Auditory psychophysics and perception.
Hirsh, I J; Watson, C S
1996-01-01
In this review of auditory psychophysics and perception, we cite some important books, research monographs, and research summaries from the past decade. Within auditory psychophysics, we have singled out some topics of current importance: Cross-Spectral Processing, Timbre and Pitch, and Methodological Developments. Complex sounds and complex listening tasks have been the subject of new studies in auditory perception. We review especially work that concerns auditory pattern perception, with emphasis on temporal aspects of the patterns and on patterns that do not depend on the cognitive structures often involved in the perception of speech and music. Finally, we comment on some aspects of individual difference that are sufficiently important to question the goal of characterizing auditory properties of the typical, average, adult listener. Among the important factors that give rise to these individual differences are those involved in selective processing and attention.
Law, Jeremy M.; Vandermosten, Maaike; Ghesquière, Pol; Wouters, Jan
2017-01-01
Purpose: This longitudinal study examines measures of temporal auditory processing in pre-reading children with a family risk of dyslexia. Specifically, it attempts to ascertain whether pre-reading auditory processing, speech perception, and phonological awareness (PA) reliably predict later literacy achievement. Additionally, this study retrospectively examines the presence of pre-reading auditory processing, speech perception, and PA impairments in children later found to be literacy impaired. Method: Forty-four pre-reading children with and without a family risk of dyslexia were assessed at three time points (kindergarten, first, and second grade). Auditory processing measures of rise time (RT) discrimination and frequency modulation (FM) along with speech perception, PA, and various literacy tasks were assessed. Results: Kindergarten RT uniquely contributed to growth in literacy in grades one and two, even after controlling for letter knowledge and PA. Highly significant concurrent and predictive correlations were observed with kindergarten RT significantly predicting first grade PA. Retrospective analysis demonstrated atypical performance in RT and PA at all three time points in children who later developed literacy impairments. Conclusions: Although significant, kindergarten auditory processing contributions to later literacy growth lack the power to be considered as a single-cause predictor; thus results support temporal processing deficits' contribution within a multiple deficit model of dyslexia. PMID:28223953
Vandewalle, Ellen; Boets, Bart; Ghesquière, Pol; Zink, Inge
2012-01-01
This longitudinal study investigated temporal auditory processing (frequency modulation and between-channel gap detection) and speech perception (speech-in-noise and categorical perception) in three groups of children aged 6 years 3 months to 6 years 8 months attending grade 1: (1) children with specific language impairment (SLI) and literacy delay (n = 8), (2) children with SLI and normal literacy (n = 10) and (3) typically developing children (n = 14). Moreover, the relations between these auditory processing and speech perception skills and oral language and literacy skills in grade 1 and grade 3 were analyzed. The SLI group with literacy delay scored significantly lower than both other groups on speech perception, but not on temporal auditory processing. The two normal-reading groups did not differ in terms of speech perception or auditory processing. Speech perception was significantly related to reading and spelling in grades 1 and 3 and had a unique predictive contribution to reading growth in grade 3, even after controlling for reading level, phonological ability, auditory processing and oral language skills in grade 1. These findings indicated that speech perception also had a unique direct impact upon reading development and not only through its relation with phonological awareness. Moreover, speech perception seemed to be more associated with the development of literacy skills and less with oral language ability. Copyright © 2011 Elsevier Ltd. All rights reserved.
Brain Mapping of Language and Auditory Perception in High-Functioning Autistic Adults: A PET Study.
ERIC Educational Resources Information Center
Muller, R-A.; Behen, M. E.; Rothermel, R. D.; Chugani, D. C.; Muzik, O.; Mangner, T. J.; Chugani, H. T.
1999-01-01
A study used positron emission tomography (PET) to study patterns of brain activation during auditory processing in five high-functioning adults with autism. Results found that participants showed reversed hemispheric dominance during the verbal auditory stimulation and reduced activation of the auditory cortex and cerebellum. (CR)
Auditory Discrimination and Auditory Sensory Behaviours in Autism Spectrum Disorders
ERIC Educational Resources Information Center
Jones, Catherine R. G.; Happe, Francesca; Baird, Gillian; Simonoff, Emily; Marsden, Anita J. S.; Tregay, Jenifer; Phillips, Rebecca J.; Goswami, Usha; Thomson, Jennifer M.; Charman, Tony
2009-01-01
It has been hypothesised that auditory processing may be enhanced in autism spectrum disorders (ASD). We tested auditory discrimination ability in 72 adolescents with ASD (39 childhood autism; 33 other ASD) and 57 IQ and age-matched controls, assessing their capacity for successful discrimination of the frequency, intensity and duration…
Nonverbal auditory agnosia with lesion to Wernicke's area.
Saygin, Ayse Pinar; Leech, Robert; Dick, Frederic
2010-01-01
We report the case of patient M, who suffered unilateral left posterior temporal and parietal damage, brain regions typically associated with language processing. Language function largely recovered since the infarct, with no measurable speech comprehension impairments. However, the patient exhibited a severe impairment in nonverbal auditory comprehension. We carried out extensive audiological and behavioral testing in order to characterize M's unusual neuropsychological profile. We also examined the patient's and controls' neural responses to verbal and nonverbal auditory stimuli using functional magnetic resonance imaging (fMRI). We verified that the patient exhibited persistent and severe auditory agnosia for nonverbal sounds in the absence of verbal comprehension deficits or peripheral hearing problems. Acoustical analyses suggested that his residual processing of a minority of environmental sounds might rely on his speech processing abilities. In the patient's brain, contralateral (right) temporal cortex as well as perilesional (left) anterior temporal cortex were strongly responsive to verbal, but not to nonverbal sounds, a pattern that stands in marked contrast to the controls' data. This substantial reorganization of auditory processing likely supported the recovery of M's speech processing.
Speech perception in individuals with auditory dys-synchrony.
Kumar, U A; Jayaram, M
2011-03-01
This study aimed to evaluate the effect of lengthening the transition duration of selected speech segments upon the perception of those segments in individuals with auditory dys-synchrony. Thirty individuals with auditory dys-synchrony participated in the study, along with 30 age-matched normal hearing listeners. Eight consonant-vowel syllables were used as auditory stimuli. Two experiments were conducted. Experiment one measured the 'just noticeable difference' time: the smallest prolongation of the speech sound transition duration which was noticeable by the subject. In experiment two, speech sounds were modified by lengthening the transition duration by multiples of the just noticeable difference time, and subjects' speech identification scores for the modified speech sounds were assessed. Subjects with auditory dys-synchrony demonstrated poor processing of temporal auditory information. Lengthening of speech sound transition duration improved these subjects' perception of both the placement and voicing features of the speech syllables used. These results suggest that innovative speech processing strategies which enhance temporal cues may benefit individuals with auditory dys-synchrony.
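The key manipulation above is lengthening only the transition portion of a consonant-vowel syllable, in multiples of each listener's just noticeable difference. A minimal sketch of that manipulation, assuming a simple linear-interpolation resampling of the initial segment (the study's actual signal-processing method is not specified in the abstract, and the 40 ms transition length and stand-in waveform here are illustrative only):

```python
import numpy as np

def lengthen_transition(x, fs, trans_ms, factor):
    """Stretch the initial formant-transition portion of a syllable by a given
    factor using linear-interpolation resampling; the remainder is untouched."""
    n = int(fs * trans_ms / 1000)               # samples in the transition
    trans, rest = x[:n], x[n:]
    m = int(round(n * factor))                  # stretched transition length
    idx = np.linspace(0, n - 1, m)              # resample positions
    stretched = np.interp(idx, np.arange(n), trans)
    return np.concatenate([stretched, rest])

# Illustrative use on a synthetic 150 ms "syllable"
fs = 16000
t = np.arange(int(0.15 * fs)) / fs
syl = np.sin(2 * np.pi * 500 * t)
longer = lengthen_transition(syl, fs, trans_ms=40, factor=1.5)
```

In the study itself the factor would be set to multiples of the individually measured just noticeable difference time.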
Processing Problems and Language Impairment in Children.
ERIC Educational Resources Information Center
Watkins, Ruth V.
1990-01-01
The article reviews studies on the assessment of rapid auditory processing abilities. Issues in auditory processing research are identified including a link between otitis media with effusion and language learning problems. A theory that linguistically impaired children experience difficulty in perceiving and processing low phonetic substance…
Local and Global Auditory Processing: Behavioral and ERP Evidence
Sanders, Lisa D.; Poeppel, David
2007-01-01
Differential processing of local and global visual features is well established. Global precedence effects, differences in event-related potentials (ERPs) elicited when attention is focused on local versus global levels, and hemispheric specialization for local and global features all indicate that relative scale of detail is an important distinction in visual processing. Observing analogous differential processing of local and global auditory information would suggest that scale of detail is a general organizational principle of the brain. However, to date the research on auditory local and global processing has primarily focused on music perception or on the perceptual analysis of relatively higher and lower frequencies. The study described here suggests that temporal aspects of auditory stimuli better capture the local-global distinction. By combining short (40 ms) frequency modulated tones in series to create global auditory patterns (500 ms), we independently varied whether pitch increased or decreased over short time spans (local) and longer time spans (global). Accuracy and reaction time measures revealed better performance for global judgments and asymmetric interference that were modulated by amount of pitch change. ERPs recorded while participants listened to identical sounds and indicated the direction of pitch change at the local or global levels provided evidence for differential processing similar to that found in ERP studies employing hierarchical visual stimuli. ERP measures failed to provide evidence for lateralization of local and global auditory perception, but differences in distributions suggest preferential processing in more ventral and dorsal areas respectively. PMID:17113115
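The stimulus construction described above (short frequency-modulated tones concatenated so that local and global pitch direction vary independently) can be sketched as follows. This is a hedged reconstruction from the abstract: the 40 ms segment length and ~500 ms pattern length are stated, but the segment count (12 here, ~480 ms), base frequency, step sizes, and sample rate are assumptions for illustration:

```python
import numpy as np

FS = 44100          # sample rate (Hz); assumed
SEG_DUR = 0.040     # 40 ms local segments, per the abstract
N_SEG = 12          # ~480 ms total; exact count in the study is not given

def fm_segment(f_start, f_end, dur=SEG_DUR, fs=FS):
    """A single linear frequency sweep from f_start to f_end."""
    t = np.arange(int(dur * fs)) / fs
    freq = np.linspace(f_start, f_end, t.size)      # instantaneous frequency
    phase = 2 * np.pi * np.cumsum(freq) / fs        # integrate to phase
    return np.sin(phase)

def pattern(local_up, global_up, f0=400.0, local_step=50.0, global_step=30.0):
    """Concatenate sweeps so local (within-segment) and global (across-segment)
    pitch direction are set independently."""
    segs = []
    for i in range(N_SEG):
        base = f0 + (global_step * i if global_up else -global_step * i)
        if local_up:
            segs.append(fm_segment(base, base + local_step))
        else:
            segs.append(fm_segment(base + local_step, base))
    return np.concatenate(segs)

# e.g. locally rising sweeps embedded in a globally falling contour
stim = pattern(local_up=True, global_up=False)
```

Congruent and incongruent combinations of the two flags yield the four stimulus types needed to measure asymmetric interference.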
Short-term plasticity in auditory cognition.
Jääskeläinen, Iiro P; Ahveninen, Jyrki; Belliveau, John W; Raij, Tommi; Sams, Mikko
2007-12-01
Converging lines of evidence suggest that auditory system short-term plasticity can enable several perceptual and cognitive functions that have been previously considered as relatively distinct phenomena. Here we review recent findings suggesting that auditory stimulation, auditory selective attention and cross-modal effects of visual stimulation each cause transient excitatory and (surround) inhibitory modulations in the auditory cortex. These modulations might adaptively tune hierarchically organized sound feature maps of the auditory cortex (e.g. tonotopy), thus filtering relevant sounds during rapidly changing environmental and task demands. This could support auditory sensory memory, pre-attentive detection of sound novelty, enhanced perception during selective attention, influence of visual processing on auditory perception and longer-term plastic changes associated with perceptual learning.
Prevalence of auditory changes in newborns in a teaching hospital
Guimarães, Valeriana de Castro; Barbosa, Maria Alves
2012-01-01
Summary Introduction: Early diagnosis of and intervention for deafness are fundamental to child development; hearing loss is more prevalent than other disorders found at birth. Objective: To estimate the prevalence of auditory alterations in newborns at a teaching hospital. Method: Prospective cross-sectional study that evaluated 226 newborns delivered in a public hospital between May 2008 and May 2009. Results: Of the 226 infants screened, 46 (20.4%) showed absent emissions and were referred for a second test. Of the 26 (56.5%) children who returned for the retest, 8 (30.8%) still showed absent emissions and were referred to an otolaryngologist. Five (55.5%) attended and were examined by the physician; of these, 3 (75.0%) presented normal otoscopy and were referred for brainstem auditory evoked potential (Portuguese acronym PEATE) evaluation. Of all the children studied, 198 (87.6%) showed emissions present in one of the tests, and 2 (0.9%) received a diagnosis of deafness. Conclusion: The prevalence of auditory alterations in the studied population was 0.9%. The study offers valuable epidemiological data and presents the first report on the subject, supplying preliminary results for the future implementation and development of a neonatal hearing screening program. PMID:25991933
Lotfi, Yones; Mehrkian, Saiedeh; Moossavi, Abdollah; Zadeh, Soghrat Faghih; Sadjedi, Hamed
2016-03-01
This study assessed the relationship between working memory capacity and auditory stream segregation by using the concurrent minimum audible angle in children with a diagnosed auditory processing disorder (APD). The participants in this cross-sectional, comparative study were 20 typically developing children and 15 children with a diagnosed APD (age, 9-11 years) according to the subtests of multiple-processing auditory assessment. Auditory stream segregation was investigated using the concurrent minimum audible angle. Working memory capacity was evaluated using the non-word repetition and forward and backward digit span tasks. Nonparametric statistics were utilized to compare the between-group differences. The Pearson correlation was employed to measure the degree of association between working memory capacity and the localization tests between the 2 groups. The group with APD had significantly lower scores than did the typically developing subjects in auditory stream segregation and working memory capacity. There were significant negative correlations between working memory capacity and the concurrent minimum audible angle in the most frontal reference location (0° azimuth) and lower negative correlations in the most lateral reference location (60° azimuth) in the children with APD. The study revealed a relationship between working memory capacity and auditory stream segregation in children with APD. The research suggests that lower working memory capacity in children with APD may be the possible cause of the inability to segregate and group incoming information.
Hale, Matthew D; Zaman, Arshad; Morrall, Matthew C H J; Chumas, Paul; Maguire, Melissa J
2018-03-01
Presurgical evaluation for temporal lobe epilepsy routinely assesses speech and memory lateralization and anatomic localization of the motor and visual areas but not baseline musical processing. This is paramount in a musician. Although validated tools exist to assess musical ability, there are no reported functional magnetic resonance imaging (fMRI) paradigms to assess musical processing. We examined the utility of a novel fMRI paradigm in an 18-year-old left-handed pianist who underwent surgery for a left temporal low-grade ganglioglioma. Preoperative evaluation consisted of neuropsychological evaluation, T1-weighted and T2-weighted magnetic resonance imaging, and fMRI. Auditory blood oxygen level-dependent fMRI was performed using a dedicated auditory scanning sequence. Three separate auditory investigations were conducted: listening to, humming, and thinking about a musical piece. All auditory fMRI paradigms activated the primary auditory cortex with varying degrees of auditory lateralization. Thinking about the piece additionally activated the primary visual cortices (bilaterally) and right dorsolateral prefrontal cortex. Humming demonstrated left-sided predominance of auditory cortex activation with activity observed in close proximity to the tumor. This study demonstrated an fMRI paradigm for evaluating musical processing that could form part of preoperative assessment for patients undergoing temporal lobe surgery for epilepsy. Copyright © 2017 Elsevier Inc. All rights reserved.
Hierarchical auditory processing directed rostrally along the monkey's supratemporal plane.
Kikuchi, Yukiko; Horwitz, Barry; Mishkin, Mortimer
2010-09-29
Connectional anatomical evidence suggests that the auditory core, containing the tonotopic areas A1, R, and RT, constitutes the first stage of auditory cortical processing, with feedforward projections from core outward, first to the surrounding auditory belt and then to the parabelt. Connectional evidence also raises the possibility that the core itself is serially organized, with feedforward projections from A1 to R and with additional projections, although of unknown feed direction, from R to RT. We hypothesized that area RT together with more rostral parts of the supratemporal plane (rSTP) form the anterior extension of a rostrally directed stimulus quality processing stream originating in the auditory core area A1. Here, we analyzed auditory responses of single neurons in three different sectors distributed caudorostrally along the supratemporal plane (STP): sector I, mainly area A1; sector II, mainly area RT; and sector III, principally RTp (the rostrotemporal polar area), including cortex located 3 mm from the temporal tip. Mean onset latency of excitation responses and stimulus selectivity to monkey calls and other sounds, both simple and complex, increased progressively from sector I to III. Also, whereas cells in sector I responded with significantly higher firing rates to the "other" sounds than to monkey calls, those in sectors II and III responded at the same rate to both stimulus types. The pattern of results supports the proposal that the STP contains a rostrally directed, hierarchically organized auditory processing stream, with gradually increasing stimulus selectivity, and that this stream extends from the primary auditory area to the temporal pole.
Temporal integration at consecutive processing stages in the auditory pathway of the grasshopper.
Wirtssohn, Sarah; Ronacher, Bernhard
2015-04-01
Temporal integration in the auditory system of locusts was quantified by presenting single clicks and click pairs while performing intracellular recordings. Auditory neurons were studied at three processing stages, which form a feed-forward network in the metathoracic ganglion. Receptor neurons and most first-order interneurons ("local neurons") encode the signal envelope, while second-order interneurons ("ascending neurons") tend to extract more complex, behaviorally relevant sound features. In different neuron types of the auditory pathway we found three response types: no significant temporal integration (some ascending neurons), leaky energy integration (receptor neurons and some local neurons), and facilitatory processes (some local and ascending neurons). The receptor neurons integrated input over very short time windows (<2 ms). Temporal integration on longer time scales was found at subsequent processing stages, indicative of within-neuron computations and network activity. These different strategies, realized at separate processing stages and in parallel neuronal pathways within one processing stage, could enable the grasshopper's auditory system to evaluate longer time windows and thus to implement temporal filters, while at the same time maintaining a high temporal resolution. Copyright © 2015 the American Physiological Society.
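The "leaky energy integration" response type found in receptor neurons can be illustrated with a standard first-order leaky integrator driven by a click pair. This is a generic textbook model, not the authors' fitted model; the sample rate, time constant (2 ms, matching the short windows reported for receptor neurons), and click spacing are illustrative:

```python
import numpy as np

def leaky_integrator(x, fs, tau):
    """Discrete leaky integration: y[n] = a*y[n-1] + x[n], with a = exp(-1/(fs*tau)).
    Input within ~tau of earlier input summates; older input has decayed away."""
    a = np.exp(-1.0 / (fs * tau))
    y = np.zeros_like(x, dtype=float)
    acc = 0.0
    for n, v in enumerate(x):
        acc = a * acc + v
        y[n] = acc
    return y

fs = 10000            # samples per second
tau = 0.002           # 2 ms integration window
x = np.zeros(100)
x[10] = 1.0           # first click
x[30] = 1.0           # second click, 2 ms later
y = leaky_integrator(x, fs, tau)
```

Because the second click arrives one time constant after the first, the response to the pair exceeds the response to a single click, which is the signature of temporal integration probed with click pairs.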
Effects of visual working memory on brain information processing of irrelevant auditory stimuli.
Qu, Jiagui; Rizak, Joshua D; Zhao, Lun; Li, Minghong; Ma, Yuanye
2014-01-01
Selective attention has traditionally been viewed as a sensory processing modulator that promotes cognitive processing efficiency by favoring relevant stimuli while inhibiting irrelevant stimuli. However, the cross-modal processing of irrelevant information during working memory (WM) has been rarely investigated. In this study, the modulation of irrelevant auditory information by the brain during a visual WM task was investigated. The N100 auditory evoked potential (N100-AEP) following an auditory click was used to evaluate the selective attention to auditory stimulus during WM processing and at rest. N100-AEP amplitudes were found to be significantly affected in the left-prefrontal, mid-prefrontal, right-prefrontal, left-frontal, and mid-frontal regions while performing a high WM load task. In contrast, no significant differences were found between N100-AEP amplitudes in WM states and rest states under a low WM load task in all recorded brain regions. Furthermore, no differences were found between the time latencies of N100-AEP troughs in WM states and rest states while performing either the high or low WM load task. These findings suggested that the prefrontal cortex (PFC) may integrate information from different sensory channels to protect perceptual integrity during cognitive processing.
ERIC Educational Resources Information Center
Marler, Jeffrey A.; Champlin, Craig A.
2005-01-01
The purpose of this study was to examine the possible contribution of sensory mechanisms to an auditory processing deficit shown by some children with language-learning impairment (LLI). Auditory brainstem responses (ABRs) were measured from 2 groups of school-aged (8-10 years) children. One group consisted of 10 children with LLI, and the other…
ERIC Educational Resources Information Center
Bolen, L. M.; Kimball, D. J.; Hall, C. W.; Webster, R. E.
1997-01-01
Compares the visual and auditory processing factors of the Woodcock Johnson Tests of Cognitive Ability, Revised (WJR COG) and the visual and auditory memory factors of the Learning Efficiency Test, II (LET-II) among 120 college students. Results indicate two significant performance differences between the WJR COG and LET-II. (RJM)
Parthasarathy, Aravindakshan; Bartlett, Edward
2012-07-01
Auditory brainstem responses (ABRs), and envelope and frequency following responses (EFRs and FFRs) are widely used to study aberrant auditory processing in conditions such as aging. We have previously reported age-related deficits in auditory processing for rapid amplitude modulation (AM) frequencies using EFRs recorded from a single channel. However, sensitive testing of EFRs along a wide range of modulation frequencies is required to gain a more complete understanding of the auditory processing deficits. In this study, ABRs and EFRs were recorded simultaneously from two electrode configurations in young and old Fischer-344 rats, a common auditory aging model. Analysis shows that the two channels respond most sensitively to complementary AM frequencies. Channel 1, recorded from Fz to mastoid, responds better to faster AM frequencies in the 100-700 Hz range of frequencies, while Channel 2, recorded from the inter-aural line to the mastoid, responds better to slower AM frequencies in the 16-100 Hz range. Simultaneous recording of Channels 1 and 2 using AM stimuli with varying sound levels and modulation depths show that age-related deficits in temporal processing are not present at slower AM frequencies but only at more rapid ones, which would not have been apparent recording from either channel alone. Comparison of EFRs between un-anesthetized and isoflurane-anesthetized recordings in young animals, as well as comparison with previously published ABR waveforms, suggests that the generators of Channel 1 may emphasize more caudal brainstem structures while those of Channel 2 may emphasize more rostral auditory nuclei including the inferior colliculus and the forebrain, with the boundary of separation potentially along the cochlear nucleus/superior olivary complex. Simultaneous two-channel recording of EFRs helps to give a more complete understanding of the properties of auditory temporal processing over a wide range of modulation frequencies, which is useful in understanding neural representations of sound stimuli in normal, developmental or pathological conditions. Copyright © 2012 Elsevier B.V. All rights reserved.
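The sinusoidally amplitude-modulated stimuli used to elicit EFRs at varying modulation frequencies and depths can be generated straightforwardly. A minimal sketch, assuming a tonal carrier; the 8 kHz carrier, the 45 Hz and 256 Hz modulation rates, and the sample rate below are illustrative values, not the study's parameters:

```python
import numpy as np

FS = 48000  # assumed sample rate (Hz)

def am_tone(fc, fm, depth, dur, fs=FS):
    """Sinusoidally amplitude-modulated tone: carrier fc (Hz), modulation
    rate fm (Hz), modulation depth 0-1, duration dur (s)."""
    t = np.arange(int(dur * fs)) / fs
    envelope = 1.0 + depth * np.sin(2 * np.pi * fm * t)
    sig = envelope * np.sin(2 * np.pi * fc * t)
    return sig / np.max(np.abs(sig))    # normalize; sound level set at playback

# A "slow" modulation (Channel 2's sensitive range) and a "fast" one (Channel 1's)
slow = am_tone(8000, 45, 1.0, 0.2)
fast = am_tone(8000, 256, 1.0, 0.2)
```

Sweeping `fm` across roughly 16-700 Hz while varying `depth` and playback level reproduces the stimulus dimensions manipulated in the experiment.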
Enhanced audio-visual interactions in the auditory cortex of elderly cochlear-implant users.
Schierholz, Irina; Finke, Mareike; Schulte, Svenja; Hauthal, Nadine; Kantzke, Christoph; Rach, Stefan; Büchner, Andreas; Dengler, Reinhard; Sandmann, Pascale
2015-10-01
Auditory deprivation and the restoration of hearing via a cochlear implant (CI) can induce functional plasticity in auditory cortical areas. How these plastic changes affect the ability to integrate combined auditory (A) and visual (V) information is not yet well understood. In the present study, we used electroencephalography (EEG) to examine whether age, temporary deafness and altered sensory experience with a CI can affect audio-visual (AV) interactions in post-lingually deafened CI users. Young and elderly CI users and age-matched NH listeners performed a speeded response task on basic auditory, visual and audio-visual stimuli. Regarding the behavioral results, a redundant signals effect, that is, faster response times to cross-modal (AV) than to both of the two modality-specific stimuli (A, V), was revealed for all groups of participants. Moreover, in all four groups, we found evidence for audio-visual integration. Regarding event-related responses (ERPs), we observed a more pronounced visual modulation of the cortical auditory response at N1 latency (approximately 100 ms after stimulus onset) in the elderly CI users when compared with young CI users and elderly NH listeners. Thus, elderly CI users showed enhanced audio-visual binding which may be a consequence of compensatory strategies developed due to temporary deafness and/or degraded sensory input after implantation. These results indicate that the combination of aging, sensory deprivation and CI facilitates the coupling between the auditory and the visual modality. We suggest that this enhancement in multisensory interactions could be used to optimize auditory rehabilitation, especially in elderly CI users, by the application of strong audio-visually based rehabilitation strategies after implant switch-on. Copyright © 2015 Elsevier B.V. All rights reserved.
Perception of stochastically undersampled sound waveforms: a model of auditory deafferentation
Lopez-Poveda, Enrique A.; Barrios, Pablo
2013-01-01
Auditory deafferentation, or permanent loss of auditory nerve afferent terminals, occurs after noise overexposure and aging and may accompany many forms of hearing loss. It could cause significant auditory impairment but is undetected by regular clinical tests and so its effects on perception are poorly understood. Here, we hypothesize and test a neural mechanism by which deafferentation could deteriorate perception. The basic idea is that the spike train produced by each auditory afferent resembles a stochastically digitized version of the sound waveform and that the quality of the waveform representation in the whole nerve depends on the number of aggregated spike trains or auditory afferents. We reason that because spikes occur stochastically in time with a higher probability for high- than for low-intensity sounds, more afferents would be required for the nerve to faithfully encode high-frequency or low-intensity waveform features than low-frequency or high-intensity features. Deafferentation would thus degrade the encoding of these features. We further reason that due to the stochastic nature of nerve firing, the degradation would be greater in noise than in quiet. This hypothesis is tested using a vocoder. Sounds were filtered through ten adjacent frequency bands. For the signal in each band, multiple stochastically subsampled copies were obtained to roughly mimic different stochastic representations of that signal conveyed by different auditory afferents innervating a given cochlear region. These copies were then aggregated to obtain an acoustic stimulus. Tone detection and speech identification tests were performed by young, normal-hearing listeners using different numbers of stochastic samplers per frequency band in the vocoder. Results support the hypothesis that stochastic undersampling of the sound waveform, inspired by deafferentation, impairs speech perception in noise more than in quiet, consistent with auditory aging effects. PMID:23882176
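The proposed mechanism lends itself to a compact simulation: treat each afferent as a stochastic sampler whose probability of firing at a given instant grows with instantaneous intensity, aggregate the resulting spike trains, and compare waveform fidelity for few versus many afferents. The sketch below follows that idea only loosely; the sampling rule, fiber counts, and correlation-based fidelity measure are simplifying assumptions of mine, not the authors' ten-band vocoder:

```python
import numpy as np

rng = np.random.default_rng(0)

def aggregate_afferents(signal, n_afferents):
    """Aggregate stochastically subsampled copies of a waveform.

    Each 'afferent' keeps a sample with probability proportional to
    instantaneous intensity, mimicking stochastic spike generation."""
    p = np.abs(signal) / np.abs(signal).max()
    agg = np.zeros_like(signal)
    for _ in range(n_afferents):
        spikes = rng.random(signal.size) < p
        agg += np.where(spikes, signal, 0.0)
    return agg / n_afferents

t = np.linspace(0.0, 0.05, 2000)
tone = np.sin(2 * np.pi * 440 * t)  # 50 ms, 440 Hz tone

sparse = aggregate_afferents(tone, 2)    # deafferented: few surviving fibers
dense = aggregate_afferents(tone, 200)   # intact: many fibers

def fidelity(estimate, reference):
    return np.corrcoef(estimate, reference)[0, 1]

# Fewer afferents -> noisier, lower-fidelity waveform representation
assert fidelity(dense, tone) > fidelity(sparse, tone)
```

Because spiking is least probable at low intensities, the same construction degrades quiet waveform features fastest, which is the intuition behind the abstract's prediction that deafferentation harms perception more in noise than in quiet.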
Influence of anxiety, depression and looming cognitive style on auditory looming perception.
Riskind, John H; Kleiman, Evan M; Seifritz, Erich; Neuhoff, John
2014-01-01
Previous studies show that individuals with an anticipatory auditory looming bias over-estimate the closeness of a sound source that approaches them. Our present study bridges cognitive-clinical and perception research, and provides evidence that anxiety symptoms and a particular putative cognitive style that creates vulnerability for anxiety (looming cognitive style, or LCS) are related to how people perceive this ecologically fundamental auditory warning signal. The effects of anxiety symptoms on the anticipatory auditory looming effect depend synergistically on the dimension of perceived personal danger assessed by the LCS (physical or social threat). Depression symptoms, in contrast to anxiety symptoms, predict a diminution of the auditory looming bias. Findings broaden our understanding of the links between cognitive-affective states and auditory perception processes and lend further support to past studies providing evidence that the looming cognitive style is related to bias in threat processing. Copyright © 2013 Elsevier Ltd. All rights reserved.
Dai, Jennifer B; Chen, Yining; Sakata, Jon T
2018-05-21
Distinguishing between familiar and unfamiliar individuals is an important task that shapes the expression of social behavior. As such, identifying the neural populations involved in processing and learning the sensory attributes of individuals is important for understanding mechanisms of behavior. Catecholamine-synthesizing neurons have been implicated in sensory processing, but relatively little is known about their contribution to auditory learning and processing across various vertebrate taxa. Here we investigated the extent to which immediate early gene expression in catecholaminergic circuitry reflects information about the familiarity of social signals and predicts immediate early gene expression in sensory processing areas in songbirds. We found that male zebra finches readily learned to differentiate between familiar and unfamiliar acoustic signals ('songs') and that playback of familiar songs led to fewer catecholaminergic neurons in the locus coeruleus (but not in the ventral tegmental area, substantia nigra, or periaqueductal gray) expressing the immediate early gene, EGR-1, than playback of unfamiliar songs. The pattern of EGR-1 expression in the locus coeruleus was similar to that observed in two auditory processing areas implicated in auditory learning and memory, namely the caudomedial nidopallium (NCM) and the caudal medial mesopallium (CMM), suggesting a contribution of catecholamines to sensory processing. Consistent with this, the pattern of catecholaminergic innervation onto auditory neurons co-varied with the degree to which song playback affected the relative intensity of EGR-1 expression. Together, our data support the contention that catecholamines like norepinephrine contribute to social recognition and the processing of social information. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.
Response to own name in children: ERP study of auditory social information processing.
Key, Alexandra P; Jones, Dorita; Peters, Sarika U
2016-09-01
Auditory processing is an important component of cognitive development, and names are among the most frequently occurring receptive language stimuli. Although own name processing has been examined in infants and adults, surprisingly little data exist on responses to own name in children. The present ERP study examined spoken name processing in 32 children (M=7.85years) using a passive listening paradigm. Our results demonstrated that children differentiate own and close other's names from unknown names, as reflected by the enhanced parietal P300 response. The responses to own and close other names did not differ between each other. Repeated presentations of an unknown name did not result in the same familiarity as the known names. These results suggest that auditory ERPs to known/unknown names are a feasible means to evaluate complex auditory processing without the need for overt behavioral responses. Copyright © 2016 Elsevier B.V. All rights reserved.
Biagianti, Bruno; Fisher, Melissa; Neilands, Torsten B.; Loewy, Rachel; Vinogradov, Sophia
2016-01-01
BACKGROUND Individuals with schizophrenia who engage in targeted cognitive training (TCT) of the auditory system show generalized cognitive improvements. The high degree of variability in cognitive gains may be due to individual differences in the level of engagement of the underlying neural system target. METHODS 131 individuals with schizophrenia underwent 40 hours of TCT. We identified target engagement of auditory system processing efficiency by modeling subject-specific trajectories of auditory processing speed (APS) over time. Lowess analysis, mixed-model repeated-measures analysis, and latent growth curve modeling were used to examine whether APS trajectories were moderated by age and illness duration, and mediated improvements in cognitive outcome measures. RESULTS We observed significant improvements in APS from baseline to 20 hours of training (initial change), followed by a flat APS trajectory (plateau) at subsequent time-points. Participants showed inter-individual variability in the steepness of the initial APS change and in the APS plateau achieved and sustained between 20–40 hours. We found that participants who achieved the fastest APS plateau showed the greatest transfer effects to untrained cognitive domains. CONCLUSIONS There is a significant association between an individual's ability to generate and sustain auditory processing efficiency and their degree of cognitive improvement after TCT, independent of baseline neurocognition. APS plateau may therefore represent a behavioral measure of target engagement mediating treatment response. Future studies should examine the optimal plateau of auditory processing efficiency required to induce significant cognitive improvements, in the context of inter-individual differences in neural plasticity and sensory system efficiency that characterize schizophrenia. PMID:27617637
The Role of the Auditory Brainstem in Processing Musically Relevant Pitch
Bidelman, Gavin M.
2013-01-01
Neuroimaging work has shed light on the cerebral architecture involved in processing the melodic and harmonic aspects of music. Here, recent evidence is reviewed illustrating that subcortical auditory structures contribute to the early formation and processing of musically relevant pitch. Electrophysiological recordings from the human brainstem and population responses from the auditory nerve reveal that nascent features of tonal music (e.g., consonance/dissonance, pitch salience, harmonic sonority) are evident at early, subcortical levels of the auditory pathway. The salience and harmonicity of brainstem activity is strongly correlated with listeners’ perceptual preferences and perceived consonance for the tonal relationships of music. Moreover, the hierarchical ordering of pitch intervals/chords described by Western music practice and their perceptual consonance is well-predicted by the salience with which pitch combinations are encoded in subcortical auditory structures. While the neural correlates of consonance can be tuned and exaggerated with musical training, they persist even in the absence of musicianship or long-term enculturation. As such, it is posited that the structural foundations of musical pitch might result from innate processing performed by the central auditory system. A neurobiological predisposition for consonant, pleasant-sounding pitch relationships may be one reason why these pitch combinations have been favored by composers and listeners for centuries. It is suggested that important perceptual dimensions of music emerge well before the auditory signal reaches cerebral cortex and prior to attentional engagement. While cortical mechanisms are no doubt critical to the perception, production, and enjoyment of music, the contribution of subcortical structures implicates a more integrated, hierarchically organized network underlying music processing within the brain. PMID:23717294
A Review of Auditory Prediction and Its Potential Role in Tinnitus Perception.
Durai, Mithila; O'Keeffe, Mary G; Searchfield, Grant D
2018-06-01
The precise mechanisms underlying tinnitus perception and distress are still not fully understood. A recent proposition is that auditory prediction errors and related memory representations may play a role in driving tinnitus perception. It is of interest to further explore this. To obtain a comprehensive narrative synthesis of current research in relation to auditory prediction and its potential role in tinnitus perception and severity. A narrative review methodological framework was followed. The key words Prediction Auditory, Memory Prediction Auditory, Tinnitus AND Memory, Tinnitus AND Prediction in Article Title, Abstract, and Keywords were extensively searched on four databases: PubMed, Scopus, SpringerLink, and PsychINFO. All study types were selected from 2000-2016 (end of 2016) and had the following exclusion criteria applied: minimum age of participants <18, nonhuman participants, and article not available in English. Reference lists of articles were reviewed to identify any further relevant studies. Articles were shortlisted based on title relevance. After reading the abstracts and with consensus made between coauthors, a total of 114 studies were selected for charting data. The hierarchical predictive coding model based on the Bayesian brain hypothesis, attentional modulation and top-down feedback serves as the fundamental framework in current literature for how auditory prediction may occur. Predictions are integral to speech and music processing, as well as in sequential processing and identification of auditory objects during auditory streaming. Although deviant responses are observable from middle latency time ranges, the mismatch negativity (MMN) waveform is the most commonly studied electrophysiological index of auditory irregularity detection. However, limitations may apply when interpreting findings because of the debatable origin of the MMN and its restricted ability to model real-life, more complex auditory phenomena. 
Cortical oscillatory band activity may act as neurophysiological substrates for auditory prediction. Tinnitus has been modeled as an auditory object which may demonstrate incomplete processing during auditory scene analysis resulting in tinnitus salience and therefore difficulty in habituation. Within the electrophysiological domain, there is currently mixed evidence regarding oscillatory band changes in tinnitus. There are theoretical proposals for a relationship between prediction error and tinnitus but few published empirical studies. American Academy of Audiology.
Schrode, Katrina M.; Bee, Mark A.
2015-01-01
Sensory systems function most efficiently when processing natural stimuli, such as vocalizations, and it is thought that this reflects evolutionary adaptation. Among the best-described examples of evolutionary adaptation in the auditory system are the frequent matches between spectral tuning in both the peripheral and central auditory systems of anurans (frogs and toads) and the frequency spectra of conspecific calls. Tuning to the temporal properties of conspecific calls is less well established, and in anurans has so far been documented only in the central auditory system. Using auditory-evoked potentials, we asked whether there are species-specific or sex-specific adaptations of the auditory systems of gray treefrogs (Hyla chrysoscelis) and green treefrogs (H. cinerea) to the temporal modulations present in conspecific calls. Modulation rate transfer functions (MRTFs) constructed from auditory steady-state responses revealed that each species was more sensitive than the other to the modulation rates typical of conspecific advertisement calls. In addition, auditory brainstem responses (ABRs) to paired clicks indicated relatively better temporal resolution in green treefrogs, which could represent an adaptation to the faster modulation rates present in the calls of this species. MRTFs and recovery of ABRs to paired clicks were generally similar between the sexes, and we found no evidence that males were more sensitive than females to the temporal modulation patterns characteristic of the aggressive calls used in male–male competition. Together, our results suggest that efficient processing of the temporal properties of behaviorally relevant sounds begins at potentially very early stages of the anuran auditory system that include the periphery. PMID:25617467
Schaadt, Gesa; Männel, Claudia; van der Meer, Elke; Pannekamp, Ann; Oberecker, Regine; Friederici, Angela D
2015-12-01
Literacy acquisition is highly associated with auditory processing abilities, such as auditory discrimination. The event-related potential Mismatch Response (MMR) is an indicator of cortical auditory discrimination abilities, and it has been found to be reduced in individuals with reading and writing impairments and also in infants at risk for these impairments. The goal of the present study was to analyze the relationship between auditory speech discrimination in infancy and writing abilities at school age within subjects, and to determine when auditory speech discrimination differences, relevant for later writing abilities, start to develop. We analyzed the MMR registered in response to natural syllables in German children with and without writing problems at two points during development: at school age and in infancy, at ages 1 month and 5 months. We observed MMR-related auditory discrimination differences between infants with and without later writing problems, starting to develop at age 5 months, an age when infants begin to establish language-specific phoneme representations. At school age, these children with and without writing problems also showed auditory discrimination differences, reflected in the MMR, confirming a relationship between writing and auditory speech processing skills. Thus, writing problems at school age are at least partly grounded in auditory discrimination problems that develop during the first months of life. Copyright © 2015 Elsevier Ltd. All rights reserved.
Kaganovich, Natalya; Wray, Amanda Hampton; Weber-Fox, Christine
2010-01-01
Non-linguistic auditory processing and working memory update were examined with event-related potentials (ERPs) in 18 children who stutter (CWS) and 18 children who do not stutter (CWNS). Children heard frequent 1 kHz tones interspersed with rare 2 kHz tones. The two groups did not differ on any measure of the P1 and N1 components, strongly suggesting that early auditory processing of pure tones is unimpaired in CWS. However, as a group, only CWNS exhibited a P3 component to rare tones, suggesting that developmental stuttering may be associated with less efficient attentional allocation and working memory update in response to auditory change. PMID:21038162
Auditory Processing Disorder (For Parents)
The role of primary auditory and visual cortices in temporal processing: A tDCS approach.
Mioni, G; Grondin, S; Forgione, M; Fracasso, V; Mapelli, D; Stablum, F
2016-10-15
Many studies showed that visual stimuli are frequently experienced as shorter than equivalent auditory stimuli. These findings suggest that timing is distributed across many brain areas and that "different clocks" might be involved in temporal processing. The aim of this study is to investigate, with the application of tDCS over V1 and A1, the specific role of primary sensory cortices (either visual or auditory) in temporal processing. Forty-eight university students were included in the study. Twenty-four participants were stimulated over A1 and 24 participants were stimulated over V1. Participants performed time bisection tasks, in the visual and the auditory modalities, involving standard durations lasting 300 ms (short) and 900 ms (long). When tDCS was delivered over A1, no effect of stimulation was observed on perceived duration, but we observed higher temporal variability under anodal stimulation compared to sham and higher variability in the visual compared to the auditory modality. When tDCS was delivered over V1, an under-estimation of perceived duration and higher variability were observed in the visual compared to the auditory modality. Our results showed more variability of visual temporal processing under tDCS stimulation. These results suggest a modality-independent role of A1 in temporal processing and a modality-specific role of V1 in the processing of temporal intervals in the visual modality. Copyright © 2016 Elsevier B.V. All rights reserved.
Selective impairment of auditory selective attention under concurrent cognitive load.
Dittrich, Kerstin; Stahl, Christoph
2012-06-01
Load theory predicts that concurrent cognitive load impairs selective attention. For visual stimuli, it has been shown that this impairment can be selective: Distraction was specifically increased when the stimulus material used in the cognitive load task matches that of the selective attention task. Here, we report four experiments that demonstrate such selective load effects for auditory selective attention. The effect of two different cognitive load tasks on two different auditory Stroop tasks was examined, and selective load effects were observed: Interference in a nonverbal-auditory Stroop task was increased under concurrent nonverbal-auditory cognitive load (compared with a no-load condition), but not under concurrent verbal-auditory cognitive load. By contrast, interference in a verbal-auditory Stroop task was increased under concurrent verbal-auditory cognitive load but not under nonverbal-auditory cognitive load. This double-dissociation pattern suggests the existence of different and separable verbal and nonverbal processing resources in the auditory domain.
Study on the application of the time-compressed speech in children.
Padilha, Fernanda Yasmin Odila Maestri Miguel; Pinheiro, Maria Madalena Canina
2017-11-09
To analyze the performance of children without central auditory processing disorders on the Time-compressed Speech Test. This is a descriptive, observational, cross-sectional study. Study participants were 22 children aged 7-11 years. The following instruments were used to verify that these children presented no central auditory processing disorders: Scale of Auditory Behaviors, simplified evaluation of central auditory processing, and Dichotic Test of Digits (binaural integration stage). The Time-compressed Speech Test was then applied to the children without auditory alterations. The participants performed better on the list of monosyllabic words than on the list of disyllabic words, but the difference was not statistically significant. Neither the order of presentation of the lists nor the variables gender and ear influenced test performance. Regarding age, a difference in performance was observed only for the list of disyllabic words. The mean score of the children on the Time-compressed Speech Test was lower than that of adults reported in the national literature.
Auditory orientation in crickets: Pattern recognition controls reactive steering
NASA Astrophysics Data System (ADS)
Poulet, James F. A.; Hedwig, Berthold
2005-10-01
Many groups of insects are specialists in exploiting sensory cues to locate food resources or conspecifics. To achieve orientation, bees and ants analyze the polarization pattern of the sky, male moths orient along the females' odor plume, and cicadas, grasshoppers, and crickets use acoustic signals to locate singing conspecifics. In comparison with olfactory and visual orientation, where learning is involved, auditory processing underlying orientation in insects appears to be more hardwired and genetically determined. In each of these examples, however, orientation requires a recognition process identifying the crucial sensory pattern to interact with a localization process directing the animal's locomotor activity. Here, we characterize this interaction. Using a sensitive trackball system, we show that, during cricket auditory behavior, the recognition process that is tuned toward the species-specific song pattern controls the amplitude of auditory evoked steering responses. Females perform small reactive steering movements toward any sound patterns. Hearing the male's calling song increases the gain of auditory steering within 2-5 s, and the animals even steer toward nonattractive sound patterns inserted into the species-specific pattern. This gain control mechanism in the auditory-to-motor pathway allows crickets to pursue species-specific sound patterns temporarily corrupted by environmental factors and may reflect the organization of recognition and localization networks in insects.
Simulation of talking faces in the human brain improves auditory speech recognition
von Kriegstein, Katharina; Dogan, Özgür; Grüter, Martina; Giraud, Anne-Lise; Kell, Christian A.; Grüter, Thomas; Kleinschmidt, Andreas; Kiebel, Stefan J.
2008-01-01
Human face-to-face communication is essentially audiovisual. Typically, people talk to us face-to-face, providing concurrent auditory and visual input. Understanding someone is easier when there is visual input, because visual cues like mouth and tongue movements provide complementary information about speech content. Here, we hypothesized that, even in the absence of visual input, the brain optimizes both auditory-only speech and speaker recognition by harvesting speaker-specific predictions and constraints from distinct visual face-processing areas. To test this hypothesis, we performed behavioral and neuroimaging experiments in two groups: subjects with a face recognition deficit (prosopagnosia) and matched controls. The results show that observing a specific person talking for 2 min improves subsequent auditory-only speech and speaker recognition for this person. In both prosopagnosics and controls, behavioral improvement in auditory-only speech recognition was based on an area typically involved in face-movement processing. Improvement in speaker recognition was only present in controls and was based on an area involved in face-identity processing. These findings challenge current unisensory models of speech processing, because they show that, in auditory-only speech, the brain exploits previously encoded audiovisual correlations to optimize communication. We suggest that this optimization is based on speaker-specific audiovisual internal models, which are used to simulate a talking face. PMID:18436648
Threshold and Beyond: Modeling The Intensity Dependence of Auditory Responses
2007-01-01
In many studies of auditory-evoked responses to low-intensity sounds, the response amplitude appears to increase roughly linearly with the sound level in decibels (dB), corresponding to a logarithmic intensity dependence. But the auditory system is assumed to be linear in the low-intensity limit. The goal of this study was to resolve the seeming contradiction. Based on assumptions about the rate-intensity functions of single auditory-nerve fibers and the pattern of cochlear excitation caused by a tone, a model for the gross response of the population of auditory nerve fibers was developed. In accordance with signal detection theory, the model denies the existence of a threshold. This implies that regarding the detection of a significant stimulus-related effect, a reduction in sound intensity can always be compensated for by increasing the measurement time, at least in theory. The model suggests that the gross response is proportional to intensity when the latter is low (range I), and a linear function of sound level at higher intensities (range III). For intensities in between, it is concluded that noisy experimental data may provide seemingly irrefutable evidence of a linear dependence on sound pressure (range II). In view of the small response amplitudes that are to be expected for intensity range I, direct observation of the predicted proportionality with intensity will generally be a challenging task for an experimenter. Although the model was developed for the auditory nerve, the basic conclusions are probably valid for higher levels of the auditory system, too, and might help to improve models for loudness at threshold. PMID:18008105
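The model's two regimes can be reproduced with a toy fiber population: give every fiber a saturating rate-intensity function and spread fiber thresholds uniformly on a log-intensity axis. The summed rate then grows in proportion to intensity when all fibers are far below threshold (range I) and gains a roughly constant increment per decade once fibers saturate one after another, i.e. linearly with level in dB (range III). A sketch under those assumptions (the specific rate function and threshold spread are illustrative choices, not the paper's):

```python
import numpy as np

# Hypothetical fiber thresholds, log-uniform over a wide intensity range
thetas = np.logspace(-4, 2, 400)  # arbitrary intensity units

def population_response(intensity):
    # Saturating rate-intensity function per fiber, summed over the population
    return np.sum(intensity / (intensity + thetas))

# Range I: far below all thresholds, the response is proportional to intensity
ratio = population_response(2e-7) / population_response(1e-7)
assert abs(ratio - 2.0) < 0.01

# Range III: within the threshold range, the response grows by a roughly
# constant amount per decade of intensity, i.e. linearly with level in dB
d1 = population_response(1e-1) - population_response(1e-2)
d2 = population_response(1e0) - population_response(1e-1)
assert abs(d1 - d2) / d1 < 0.05
```

Between these regimes the curve bends smoothly, which is where, as the abstract argues, noisy data can masquerade as a linear dependence on sound pressure (range II).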
Prediction of cognitive outcome based on the progression of auditory discrimination during coma.
Juan, Elsa; De Lucia, Marzia; Tzovara, Athina; Beaud, Valérie; Oddo, Mauro; Clarke, Stephanie; Rossetti, Andrea O
2016-09-01
To date, no clinical test is able to predict cognitive and functional outcome of cardiac arrest survivors. Improvement of auditory discrimination in acute coma indicates survival with high specificity. Whether the degree of this improvement is indicative of recovery remains unknown. Here we investigated if progression of auditory discrimination can predict cognitive and functional outcome. We prospectively recorded electroencephalography responses to auditory stimuli of post-anoxic comatose patients on the first and second day after admission. For each recording, auditory discrimination was quantified and its evolution over the two recordings was used to classify survivors as "predicted" when it increased vs. "other" if not. Cognitive functions were tested on awakening and functional outcome was assessed at 3 months using the Cerebral Performance Categories (CPC) scale. Thirty-two patients were included, 14 "predicted survivors" and 18 "other survivors". "Predicted survivors" were more likely to recover basic cognitive functions shortly after awakening (ability to follow a standardized neuropsychological battery: 86% vs. 44%; p=0.03 (Fisher)) and to show a very good functional outcome at 3 months (CPC 1: 86% vs. 33%; p=0.004 (Fisher)). Moreover, progression of auditory discrimination during coma was strongly correlated with cognitive performance on awakening (phonemic verbal fluency: rs=0.48; p=0.009 (Spearman)). Progression of auditory discrimination during coma provides early indication of future recovery of cognitive functions. The degree of improvement is informative of the degree of functional impairment. If confirmed in a larger cohort, this test would be the first to predict detailed outcome at the single-patient level. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Inattentional Deafness: Visual Load Leads to Time-Specific Suppression of Auditory Evoked Responses
Molloy, Katharine; Griffiths, Timothy D.; Lavie, Nilli
2015-01-01
Due to capacity limits on perception, conditions of high perceptual load lead to reduced processing of unattended stimuli (Lavie et al., 2014). Accumulating work demonstrates the effects of visual perceptual load on visual cortex responses, but the effects on auditory processing remain poorly understood. Here we establish the neural mechanisms underlying “inattentional deafness”—the failure to perceive auditory stimuli under high visual perceptual load. Participants performed a visual search task of low (target dissimilar to nontarget items) or high (target similar to nontarget items) load. On a random subset (50%) of trials, irrelevant tones were presented concurrently with the visual stimuli. Brain activity was recorded with magnetoencephalography, and time-locked responses to the visual search array and to the incidental presence of unattended tones were assessed. High, compared to low, perceptual load led to increased early visual evoked responses (within 100 ms from onset). This was accompanied by reduced early (∼100 ms from tone onset) auditory evoked activity in superior temporal sulcus and posterior middle temporal gyrus. A later suppression of the P3 “awareness” response to the tones was also observed under high load. A behavioral experiment revealed reduced tone detection sensitivity under high visual load, indicating that the reduction in neural responses was indeed associated with reduced awareness of the sounds. These findings support a neural account of shared audiovisual resources, which, when depleted under load, leads to failures of sensory perception and awareness. SIGNIFICANCE STATEMENT The present work clarifies the neural underpinning of inattentional deafness under high visual load. The findings of near-simultaneous load effects on both visual and auditory evoked responses suggest shared audiovisual processing capacity. 
Temporary depletion of shared capacity in perceptually demanding visual tasks leads to a momentary reduction in sensory processing of auditory stimuli, resulting in inattentional deafness. The dynamic “push–pull” pattern of load effects on visual and auditory processing furthers our understanding of both the neural mechanisms of attention and of cross-modal effects across visual and auditory processing. These results also offer an explanation for many previous failures to find cross-modal effects in experiments where the visual load effects may not have coincided directly with auditory sensory processing. PMID:26658858
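For readers unfamiliar with the measure, the "tone detection sensitivity" assessed in the behavioral experiment is conventionally the signal-detection index d′, the difference between the z-transformed hit and false-alarm rates. A minimal sketch; the rates below are invented for illustration and are not the study's data:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates illustrating reduced tone detection under high
# visual load (lower hits, more false alarms -> smaller d-prime).
low_load = d_prime(0.90, 0.10)
high_load = d_prime(0.70, 0.15)
```

A smaller d′ under high load corresponds to the reduced awareness of the sounds described above.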
Opposite brain laterality in analogous auditory and visual tests.
Oltedal, Leif; Hugdahl, Kenneth
2017-11-01
Laterality for language processing can be assessed by auditory and visual tasks. Typically, a right ear/right visual half-field (VHF) advantage is observed, reflecting left-hemispheric lateralization for language. Historically, auditory tasks have shown more consistent and reliable results when compared to VHF tasks. While few studies have compared analogous tasks applied to both sensory modalities for the same participants, one such study by Voyer and Boudreau [(2003). Cross-modal correlation of auditory and visual language laterality tasks: a serendipitous finding. Brain Cogn, 53(2), 393-397] found opposite laterality for visual and auditory language tasks. We adapted an experimental paradigm based on a dichotic listening and VHF approach, and applied the combined language paradigm in two separate experiments, including fMRI in the second experiment to measure brain activation in addition to behavioural data. The first experiment showed a right-ear advantage for the auditory task, but a left half-field advantage for the visual task. The second experiment confirmed the findings, with opposite laterality effects for the visual and auditory tasks. In conclusion, we replicate the finding by Voyer and Boudreau (2003) and support their interpretation that these visual and auditory language tasks measure different cognitive processes.
Strait, Dana L.; Kraus, Nina
2013-01-01
Experience-dependent characteristics of auditory function, especially with regard to speech-evoked auditory neurophysiology, have garnered increasing attention in recent years. This interest stems from both pragmatic and theoretical concerns as it bears implications for the prevention and remediation of language-based learning impairment in addition to providing insight into mechanisms engendering experience-dependent changes in human sensory function. Musicians provide an attractive model for studying the experience-dependency of auditory processing in humans due to their distinctive neural enhancements compared to nonmusicians. We have only recently begun to address whether these enhancements are observable early in life, during the initial years of music training when the auditory system is under rapid development, as well as later in life, after the onset of the aging process. Here we review neural enhancements in musically trained individuals across the life span in the context of cellular mechanisms that underlie learning, identified in animal models. Musicians’ subcortical physiologic enhancements are interpreted according to a cognitive framework for auditory learning, providing a model by which to study mechanisms of experience-dependent changes in auditory function in humans. PMID:23988583
Tan, Ao; Hu, Li; Tu, Yiheng; Chen, Rui; Hung, Yeung Sam; Zhang, Zhiguo
2016-07-01
The N1 component of auditory evoked potentials is extensively used to investigate the propagation and processing of auditory inputs. However, the substantial interindividual variability of N1 could be a possible confounding factor when comparing different individuals or groups. Therefore, identifying the neuronal mechanism and origin of the interindividual variability of N1 is crucial in basic research and clinical applications. This study aimed to use simultaneously recorded electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) data to investigate the coupling between N1 and spontaneous functional connectivity (FC). EEG and fMRI data were simultaneously collected from a group of healthy individuals during a pure-tone listening task. Spontaneous FC was estimated from spontaneous blood oxygenation level-dependent (BOLD) signals, which were isolated by regressing out task-evoked BOLD signals from raw BOLD signals, and was then correlated with N1 magnitude across individuals. It was observed that spontaneous FC between bilateral Heschl's gyrus was significantly and positively correlated with N1 magnitude across individuals (Spearman's R = 0.829, p < 0.001). The specificity of this observation was further confirmed by two whole-brain voxelwise analyses (voxel-mirrored homotopic connectivity analysis and seed-based connectivity analysis). These results enrich our understanding of the functional significance of the coupling between event-related brain responses and spontaneous brain connectivity, and hold the potential to increase the applicability of brain responses as a probe of the mechanisms underlying pathophysiological conditions.
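The key statistic here, a Spearman rank correlation computed across individuals, is simply a Pearson correlation on ranked values. A minimal self-contained sketch; the FC and N1 values below are invented for illustration (only the R = 0.829 above is from the paper):

```python
def rankdata(xs):
    """Ranks (1-based), with tied values receiving their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical per-subject FC strengths and N1 magnitudes:
fc_strength = [0.20, 0.35, 0.30, 0.50, 0.45, 0.60]
n1_amplitude = [1.1, 1.6, 2.0, 3.2, 2.9, 3.5]
rho = spearman(fc_strength, n1_amplitude)
```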
Sörös, Peter; Michael, Nikolaus; Tollkötter, Melanie; Pfleiderer, Bettina
2006-01-01
Background A combination of magnetoencephalography and proton magnetic resonance spectroscopy was used to correlate the electrophysiology of rapid auditory processing and the neurochemistry of the auditory cortex in 15 healthy adults. To assess rapid auditory processing in the left auditory cortex, the amplitude and decrement of the N1m peak, the major component of the late auditory evoked response, were measured during rapidly successive presentation of acoustic stimuli. We tested the hypotheses that (i) the amplitude of the N1m response and (ii) its decrement during rapid stimulation are associated with the cortical neurochemistry as determined by proton magnetic resonance spectroscopy. Results Our results demonstrated a significant association between the concentrations of N-acetylaspartate, a marker of neuronal integrity, and the amplitudes of individual N1m responses. In addition, the concentrations of choline-containing compounds, representing the functional integrity of membranes, were significantly associated with N1m amplitudes. No significant association was found between the concentrations of the glutamate/glutamine pool and the amplitudes of the first N1m. No significant associations were seen between the decrement of the N1m (the relative amplitude of the second N1m peak) and the concentrations of N-acetylaspartate, choline-containing compounds, or the glutamate/glutamine pool. However, there was a trend for higher glutamate/glutamine concentrations in individuals with higher relative N1m amplitude. Conclusion These results suggest that neuronal and membrane functions are important for rapid auditory processing. This investigation provides a first link between the electrophysiology, as recorded by magnetoencephalography, and the neurochemistry, as assessed by proton magnetic resonance spectroscopy, of the auditory cortex. PMID:16884545
Investigating the role of visual and auditory search in reading and developmental dyslexia
Lallier, Marie; Donnadieu, Sophie; Valdois, Sylviane
2013-01-01
It has been suggested that auditory and visual sequential processing deficits contribute to phonological disorders in developmental dyslexia. As an alternative explanation to a phonological deficit as the proximal cause for reading disorders, the visual attention span hypothesis (VA Span) suggests that difficulties in processing visual elements simultaneously lead to dyslexia, regardless of the presence of a phonological disorder. In this study, we assessed whether deficits in processing simultaneously displayed visual or auditory elements are linked to dyslexia associated with a VA Span impairment. Sixteen children with developmental dyslexia and 16 age-matched skilled readers were assessed on visual and auditory search tasks. Participants were asked to detect a target presented simultaneously with 3, 9, or 15 distracters. In the visual modality, target detection was slower in the dyslexic children than in the control group on a “serial” search condition only: the intercepts (but not the slopes) of the search functions were higher in the dyslexic group than in the control group. In the auditory modality, although no group difference was observed, search performance was influenced by the number of distracters in the control group only. Within the dyslexic group, not only poor visual search (high reaction times and intercepts) but also low auditory search performance (d′) strongly correlated with poor irregular word reading accuracy. Moreover, both visual and auditory search performance was associated with the VA Span abilities of dyslexic participants but not with their phonological skills. The present data suggest that some visual mechanisms engaged in “serial” search contribute to reading and orthographic knowledge via VA Span skills regardless of phonological skills. The present results further open the question of the role of auditory simultaneous processing in reading as well as its link with VA Span skills. PMID:24093014
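The intercepts and slopes discussed above come from fitting the visual search function, reaction time as a linear function of display size. A minimal sketch of that fit; the mean RTs are invented to mimic the reported pattern (equal slopes, higher intercept in the dyslexic group):

```python
def fit_search_function(set_sizes, rts):
    """Least-squares fit of RT = intercept + slope * set_size."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(rts) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(set_sizes, rts))
             / sum((x - mx) ** 2 for x in set_sizes))
    intercept = my - slope * mx
    return intercept, slope

# Display sizes of 4, 10 and 16 items (target plus 3, 9 or 15
# distracters); the mean RTs in ms below are hypothetical.
control = fit_search_function([4, 10, 16], [620, 800, 980])
dyslexic = fit_search_function([4, 10, 16], [820, 1000, 1180])
```

In this invented example both groups share the same slope (search rate per item) while the second group's intercept is 200 ms higher, the signature reported for the dyslexic children.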
Scott, Gregory D; Karns, Christina M; Dow, Mark W; Stevens, Courtney; Neville, Helen J
2014-01-01
Brain reorganization associated with altered sensory experience clarifies the critical role of neuroplasticity in development. An example is enhanced peripheral visual processing associated with congenital deafness, but the neural systems supporting this have not been fully characterized. A gap in our understanding of deafness-enhanced peripheral vision is the contribution of primary auditory cortex. Previous studies of auditory cortex that use anatomical normalization across participants were limited by inter-subject variability of Heschl's gyrus. In addition to reorganized auditory cortex (cross-modal plasticity), a second gap in our understanding is the contribution of altered modality-specific cortices (visual intramodal plasticity in this case), as well as supramodal and multisensory cortices, especially when target detection is required across contrasts. Here we address these gaps by comparing fMRI signal change for peripheral vs. perifoveal visual stimulation (11-15° vs. 2-7°) in congenitally deaf and hearing participants in a blocked experimental design with two analytical approaches: a Heschl's gyrus region of interest analysis and a whole brain analysis. Our results using individually-defined primary auditory cortex (Heschl's gyrus) indicate that fMRI signal change for more peripheral stimuli was greater than perifoveal in deaf but not in hearing participants. Whole-brain analyses revealed differences between deaf and hearing participants for peripheral vs. perifoveal visual processing in extrastriate visual cortex including primary auditory cortex, MT+/V5, superior-temporal auditory, and multisensory and/or supramodal regions, such as posterior parietal cortex (PPC), frontal eye fields, anterior cingulate, and supplementary eye fields. Overall, these data demonstrate the contribution of neuroplasticity in multiple systems including primary auditory cortex, supramodal, and multisensory regions, to altered visual processing in congenitally deaf adults.
Law, Jeremy M.; Vandermosten, Maaike; Ghesquiere, Pol; Wouters, Jan
2014-01-01
This study investigated whether auditory, speech perception, and phonological skills are tightly interrelated or independently contributing to reading. We assessed each of these three skills in 36 adults with a past diagnosis of dyslexia and 54 matched normal reading adults. Phonological skills were tested by the typical threefold tasks, i.e., rapid automatic naming, verbal short-term memory and phonological awareness. Dynamic auditory processing skills were assessed by means of a frequency modulation (FM) task and an amplitude rise time (RT) task; an intensity discrimination task (ID) was included as a non-dynamic control task. Speech perception was assessed by means of sentences and words-in-noise tasks. Group analyses revealed significant group differences in auditory tasks (i.e., RT and ID) and in phonological processing measures, yet no differences were found for speech perception. In addition, performance on RT discrimination correlated with reading but this relation was mediated by phonological processing and not by speech-in-noise. Finally, inspection of the individual scores revealed that the dyslexic group contained an increased proportion of deviant performers on the slow-dynamic auditory and phonological tasks, yet no individual dyslexic reader displays a clear pattern of deficiencies across the processing skills. Although our results support phonological and slow-rate dynamic auditory deficits which relate to literacy, they suggest that at the individual level, problems in reading and writing cannot be explained by the cascading auditory theory. Instead, dyslexic adults seem to vary considerably in the extent to which each of the auditory and phonological factors are expressed and interact with environmental and higher-order cognitive influences. PMID:25071512
Auditory global-local processing: effects of attention and musical experience.
Ouimet, Tia; Foster, Nicholas E V; Hyde, Krista L
2012-10-01
In vision, global (whole) features are typically processed before local (detail) features ("global precedence effect"). However, the distinction between global and local processing is less clear in the auditory domain. The aims of the present study were to investigate: (i) the effects of directed versus divided attention, and (ii) the effect of musical training on auditory global-local processing in 16 adult musicians and 16 non-musicians. Participants were presented with short nine-tone melodies, each comprised of three triplet sequences (three-tone units). In a "directed attention" task, participants were asked to focus on either the global or local pitch pattern and had to determine if the pitch pattern went up or down. In a "divided attention" task, participants judged whether the target pattern (up or down) was present or absent. Overall, global structure was perceived faster and more accurately than local structure. The global precedence effect was observed regardless of whether attention was directed to a specific level or divided between levels. Musicians performed more accurately than non-musicians overall, but non-musicians showed a more pronounced global advantage. This study provides evidence for an auditory global precedence effect across attention tasks, and for differences in auditory global-local processing associated with musical experience.
Pinaud, Raphael; Terleph, Thomas A.; Tremere, Liisa A.; Phan, Mimi L.; Dagostin, André A.; Leão, Ricardo M.; Mello, Claudio V.; Vicario, David S.
2008-01-01
The role of GABA in the central processing of complex auditory signals is not fully understood. We have studied the involvement of GABAA-mediated inhibition in the processing of birdsong, a learned vocal communication signal requiring intact hearing for its development and maintenance. We focused on caudomedial nidopallium (NCM), an area analogous to parts of the mammalian auditory cortex with selective responses to birdsong. We present evidence that GABAA-mediated inhibition plays a pronounced role in NCM's auditory processing of birdsong. Using immunocytochemistry, we show that approximately half of NCM's neurons are GABAergic. Whole cell patch-clamp recordings in a slice preparation demonstrate that, at rest, spontaneously active GABAergic synapses inhibit excitatory inputs onto NCM neurons via GABAA receptors. Multi-electrode electrophysiological recordings in awake birds show that local blockade of GABAA-mediated inhibition in NCM markedly affects the temporal pattern of song-evoked responses in NCM without modifications in frequency tuning. Surprisingly, this blockade increases the phasic and largely suppresses the tonic response component, reflecting dynamic relationships of inhibitory networks that could include disinhibition. Thus processing of learned natural communication sounds in songbirds, and possibly other vocal learners, may depend on complex interactions of inhibitory networks. PMID:18480371
Perception of temporally modified speech in auditory neuropathy.
Hassan, Dalia Mohamed
2011-01-01
Disrupted auditory nerve activity in auditory neuropathy (AN) significantly impairs the sequential processing of auditory information, resulting in poor speech perception. This study investigated the ability of AN subjects to perceive temporally modified consonant-vowel (CV) pairs and shed light on their phonological awareness skills. Four Arabic CV pairs were selected: /ki/-/gi/, /to/-/do/, /si/-/sti/ and /so/-/zo/. The formant transitions in consonants and the pauses between CV pairs were prolonged. Rhyming, segmentation and blending skills were tested using words at a natural rate of speech and with prolongation of the speech stream. Fourteen adult AN subjects were compared to a matched group of cochlear-impaired patients in their perception of acoustically processed speech. The AN group distinguished the CV pairs at a low speech rate, in particular with modification of the consonant duration. Phonological awareness skills deteriorated in adult AN subjects but improved with prolongation of the speech inter-syllabic time interval. A rehabilitation program for AN should consider temporal modification of speech, training for auditory temporal processing and the use of devices with innovative signal processing schemes. Verbal modifications as well as visual imaging appear to be promising compensatory strategies for remediating the affected phonological processing skills.
Demodulation processes in auditory perception
NASA Astrophysics Data System (ADS)
Feth, Lawrence L.
1994-08-01
The long range goal of this project is the understanding of human auditory processing of information conveyed by complex, time-varying signals such as speech, music or important environmental sounds. Our work is guided by the assumption that human auditory communication is a 'modulation - demodulation' process. That is, we assume that sound sources produce a complex stream of sound pressure waves with information encoded as variations (modulations) of the signal amplitude and frequency. The listener's task, then, is one of demodulation. Much of past psychoacoustics work has been based on what we characterize as 'spectrum picture processing.' Complex sounds are Fourier analyzed to produce an amplitude-by-frequency 'picture' and the perception process is modeled as if the listener were analyzing the spectral picture. This approach leads to studies such as 'profile analysis' and the power-spectrum model of masking. Our approach leads us to investigate time-varying, complex sounds. We refer to them as dynamic signals and we have developed auditory signal processing models to help guide our experimental work.
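The modulation-demodulation view can be made concrete by synthesizing an amplitude-modulated tone and then recovering its envelope. A minimal sketch using an FFT-based Hilbert transform (the carrier, modulation rate and depth are arbitrary illustrative values, not the project's stimuli):

```python
import numpy as np

# Modulation: a 1 kHz carrier whose amplitude varies at 4 Hz.
fs = 8000.0
t = np.arange(0, 0.5, 1 / fs)
fm, fc, depth = 4.0, 1000.0, 0.8
envelope = 1 + depth * np.sin(2 * np.pi * fm * t)
x = envelope * np.sin(2 * np.pi * fc * t)

# Demodulation: form the analytic signal via an FFT-based Hilbert
# transform; its magnitude recovers the amplitude envelope.
N = len(x)                 # N = 4000, even
X = np.fft.fft(x)
h = np.zeros(N)
h[0] = 1.0
h[1:N // 2] = 2.0          # double positive frequencies
h[N // 2] = 1.0            # keep Nyquist bin as-is
recovered = np.abs(np.fft.ifft(X * h))
```

Because the carrier and sidebands fall exactly on FFT bins in this example, the recovered envelope matches the original to numerical precision.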
Nonlinear Processing of Auditory Brainstem Response
2001-10-25
Kraków, Poland. Abstract: Auditory brainstem response potentials (ABR) are signals calculated from the EEG signals registered as responses to an acoustic activation of the auditory system. The ABR signals provide an objective diagnostic method, widely applied in examinations of hearing organs.
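The "calculated from the EEG signals" step is conventionally time-locked averaging: the ABR is far smaller than the background EEG, so many stimulus-locked epochs are averaged and the uncorrelated noise shrinks roughly as 1/√N. A sketch with synthetic data (all amplitudes, rates and epoch counts are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 16000                                   # sampling rate, Hz
t = np.arange(0, 0.010, 1 / fs)              # 10 ms post-stimulus window
# A toy ~0.5 microvolt "wave" peaking near 5 ms stands in for the ABR.
abr = 0.5e-6 * np.exp(-((t - 0.005) / 0.001) ** 2)

# Each epoch is the tiny response buried in much larger background EEG.
n_epochs = 2000
epochs = abr + rng.normal(0.0, 5e-6, size=(n_epochs, len(t)))
average = epochs.mean(axis=0)                # time-locked average

single_trial_error = np.abs(epochs[0] - abr).max()
average_error = np.abs(average - abr).max()
```

After averaging 2000 epochs, the residual noise is well below the response amplitude, whereas a single trial is dominated by noise.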
Degraded speech sound processing in a rat model of fragile X syndrome
Engineer, Crystal T.; Centanni, Tracy M.; Im, Kwok W.; Rahebi, Kimiya C.; Buell, Elizabeth P.; Kilgard, Michael P.
2014-01-01
Fragile X syndrome is the most common inherited form of intellectual disability and the leading genetic cause of autism. Impaired phonological processing in fragile X syndrome interferes with the development of language skills. Although auditory cortex responses are known to be abnormal in fragile X syndrome, it is not clear how these differences impact speech sound processing. This study provides the first evidence that the cortical representation of speech sounds is impaired in Fmr1 knockout rats, despite normal speech discrimination behavior. Evoked potentials and spiking activity in response to speech sounds, noise burst trains, and tones were significantly degraded in primary auditory cortex, anterior auditory field and the ventral auditory field. Neurometric analysis of speech evoked activity using a pattern classifier confirmed that activity in these fields contains significantly less information about speech sound identity in Fmr1 knockout rats compared to control rats. Responses were normal in the posterior auditory field, which is associated with sound localization. The greatest impairment was observed in the ventral auditory field, which is related to emotional regulation. Dysfunction in the ventral auditory field may contribute to poor emotional regulation in fragile X syndrome and may help explain the observation that later auditory evoked responses are more disturbed in fragile X syndrome compared to earlier responses. Rodent models of fragile X syndrome are likely to prove useful for understanding the biological basis of fragile X syndrome and for testing candidate therapies. PMID:24713347
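The "neurometric analysis using a pattern classifier" amounts to decoding stimulus identity from neural response patterns. A generic nearest-template sketch of that idea (a stand-in for illustration, not the authors' classifier, with invented spike counts):

```python
import math

def classify(response, templates):
    """Assign a response vector to the label of the nearest
    mean-response template (Euclidean distance)."""
    return min(templates,
               key=lambda label: math.dist(response, templates[label]))

# Hypothetical mean spike-count vectors (3 recording sites) for two
# speech sounds, and held-out single-trial responses to decode:
templates = {"dad": [9.0, 2.0, 4.0], "sad": [3.0, 8.0, 5.0]}
decoded = classify([8.0, 3.0, 4.0], templates)
```

Degraded cortical responses would make single-trial vectors less separable, lowering such a classifier's accuracy, which is the sense in which the activity "contains less information about speech sound identity."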
Neural plasticity and its initiating conditions in tinnitus.
Roberts, L E
2018-03-01
Deafferentation caused by cochlear pathology (which can be hidden from the audiogram) activates forms of neural plasticity in auditory pathways, generating tinnitus and its associated conditions including hyperacusis. This article discusses tinnitus mechanisms and suggests how these mechanisms may relate to those involved in normal auditory information processing. Research findings from animal models of tinnitus and from electromagnetic imaging of tinnitus patients are reviewed which pertain to the role of deafferentation and neural plasticity in tinnitus and hyperacusis. Auditory neurons compensate for deafferentation by increasing their input/output functions (gain) at multiple levels of the auditory system. Forms of homeostatic plasticity are believed to be responsible for this neural change, which increases the spontaneous and driven activity of neurons in central auditory structures in animals expressing behavioral evidence of tinnitus. Another tinnitus correlate, increased neural synchrony among the affected neurons, is forged by spike-timing-dependent neural plasticity in auditory pathways. Slow oscillations generated by bursting thalamic neurons verified in tinnitus animals appear to modulate neural plasticity in the cortex, integrating tinnitus neural activity with information in brain regions supporting memory, emotion, and consciousness which exhibit increased metabolic activity in tinnitus patients. The latter process may be induced by transient auditory events in normal processing but it persists in tinnitus, driven by phantom signals from the auditory pathway. Several tinnitus therapies attempt to suppress tinnitus through plasticity, but repeated sessions will likely be needed to prevent tinnitus activity from returning owing to deafferentation as its initiating condition.
The neural basis of visual dominance in the context of audio-visual object processing.
Schmid, Carmen; Büchel, Christian; Rose, Michael
2011-03-01
Visual dominance refers to the observation that in bimodal environments vision often has an advantage over other senses in humans. Therefore, better memory performance for visual compared to, e.g., auditory material is assumed. However, the reason for this preferential processing and the relation to the memory formation is largely unknown. In this fMRI experiment, we manipulated cross-modal competition and attention, two factors that both modulate bimodal stimulus processing and can affect memory formation. Pictures and sounds of objects were presented simultaneously in two levels of recognisability, thus manipulating the amount of cross-modal competition. Attention was manipulated via task instruction and directed either to the visual or the auditory modality. The factorial design allowed a direct comparison of the effects between both modalities. The resulting memory performance showed that visual dominance was limited to a distinct task setting. Visual was superior to auditory object memory only when allocating attention towards the competing modality. During encoding, cross-modal competition and attention towards the opponent domain reduced fMRI signals in both neural systems, but cross-modal competition was more pronounced in the auditory system and only in auditory cortex this competition was further modulated by attention. Furthermore, neural activity reduction in auditory cortex during encoding was closely related to the behavioural auditory memory impairment. These results indicate that visual dominance emerges from a less pronounced vulnerability of the visual system against competition from the auditory domain. Copyright © 2010 Elsevier Inc. All rights reserved.
Lotfi, Yones; Mehrkian, Saiedeh; Moossavi, Abdollah; Zadeh, Soghrat Faghih; Sadjedi, Hamed
2016-01-01
Background: This study assessed the relationship between working memory capacity and auditory stream segregation by using the concurrent minimum audible angle in children with a diagnosed auditory processing disorder (APD). Methods: The participants in this cross-sectional, comparative study were 20 typically developing children and 15 children with a diagnosed APD (age, 9–11 years) according to the subtests of multiple-processing auditory assessment. Auditory stream segregation was investigated using the concurrent minimum audible angle. Working memory capacity was evaluated using the non-word repetition and forward and backward digit span tasks. Nonparametric statistics were utilized to compare the between-group differences. The Pearson correlation was employed to measure the degree of association between working memory capacity and the localization tests between the 2 groups. Results: The group with APD had significantly lower scores than did the typically developing subjects in auditory stream segregation and working memory capacity. There were significant negative correlations between working memory capacity and the concurrent minimum audible angle in the most frontal reference location (0° azimuth) and lower negative correlations in the most lateral reference location (60° azimuth) in the children with APD. Conclusion: The study revealed a relationship between working memory capacity and auditory stream segregation in children with APD. The research suggests that lower working memory capacity in children with APD may be the possible cause of the inability to segregate and group incoming information. PMID:26989281
Association between central auditory processing mechanism and cardiac autonomic regulation
2014-01-01
Background This study was conducted to describe the association between the central auditory processing mechanism and cardiac autonomic regulation. Methods Papers on the topic addressed in this study were searched for in the following databases: Medline, PubMed, Lilacs, Scopus, and Cochrane. The key words were: “auditory stimulation, heart rate, autonomic nervous system and P300”. Results The findings in the literature demonstrated that auditory stimulation influences the autonomic nervous system and has been used in conjunction with other methods; this is considered a promising avenue in the investigation of therapeutic procedures for rehabilitation and quality of life in several pathologies. Conclusion The association between auditory stimulation and cardiac autonomic nervous system activity has received significant contributions in relation to musical stimuli. PMID:24834128
Abboud, Sami; Hanassy, Shlomi; Levy-Tzedek, Shelly; Maidenbaum, Shachar; Amedi, Amir
2014-01-01
Sensory-substitution devices (SSDs) provide auditory or tactile representations of visual information. These devices often generate unpleasant sensations and mostly lack color information. We present here a novel SSD aimed at addressing these issues. We developed the EyeMusic, a novel visual-to-auditory SSD for the blind, providing both shape and color information. Our design uses musical notes on a pentatonic scale generated by natural instruments to convey the visual information in a pleasant manner. A short behavioral protocol was utilized to train the blind to extract shape and color information, and test their acquired abilities. Finally, we conducted a survey and a comparison task to assess the pleasantness of the generated auditory stimuli. We show that basic shape and color information can be decoded from the generated auditory stimuli. High performance levels were achieved by all participants following as little as 2-3 hours of training. Furthermore, we show that users indeed found the stimuli pleasant and potentially tolerable for prolonged use. The novel EyeMusic algorithm provides an intuitive and relatively pleasant way for the blind to extract shape and color information. We suggest that this might help facilitate visual rehabilitation because of the added functionality and enhanced pleasantness.
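A visual-to-auditory substitution of this general kind can be caricatured as a mapping from image position to musical parameters: left-to-right column position becomes time, row height becomes pitch on a pentatonic scale, and pixel brightness becomes loudness. The mapping below is a hypothetical illustration of the idea, not the published EyeMusic algorithm:

```python
# Hypothetical sketch of a visual-to-auditory mapping: each column of
# a tiny grayscale image is one time step; lit pixels become notes.
PENTATONIC_HZ = [261.6, 293.7, 329.6, 392.0, 440.0]  # C-major pentatonic

def image_to_notes(image, threshold=0.5):
    """image: list of rows (top row first) of brightness values in [0, 1].
    Returns (time_step, frequency_hz, amplitude) triples."""
    n_rows = len(image)
    notes = []
    for col in range(len(image[0])):          # left-to-right scan = time
        for row in range(n_rows):
            b = image[row][col]
            if b >= threshold:
                # Higher rows (smaller index) map to higher pitches.
                pitch = PENTATONIC_HZ[(n_rows - 1 - row) % len(PENTATONIC_HZ)]
                notes.append((col, pitch, b))  # brightness -> loudness
    return notes

# A diagonal line from bottom-left to top-right on a 5x5 image
# should sonify as a rising five-note melody.
img = [[1.0 if (4 - r) == c else 0.0 for c in range(5)] for r in range(5)]
notes = image_to_notes(img)
```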
Central Auditory Processing through the Looking Glass: A Critical Look at Diagnosis and Management.
ERIC Educational Resources Information Center
Young, Maxine L.
1985-01-01
The article examines the contributions of both audiologists and speech-language pathologists to the diagnosis and management of students with central auditory processing disorders and language impairments. (CL)
ERIC Educational Resources Information Center
Medwetsky, Larry
2011-01-01
Purpose: This article outlines the author's conceptualization of the key mechanisms that are engaged in the processing of spoken language, referred to as the spoken language processing model. The act of processing what is heard is very complex and involves the successful intertwining of auditory, cognitive, and language mechanisms. Spoken language…
Neural correlates of auditory scene analysis and perception
Cohen, Yale E.
2014-01-01
The auditory system is designed to transform acoustic information from low-level sensory representations into perceptual representations. These perceptual representations are the computational result of the auditory system's ability to group and segregate spectral, spatial and temporal regularities in the acoustic environment into stable perceptual units (i.e., sounds or auditory objects). Current evidence suggests that the cortex--specifically, the ventral auditory pathway--is responsible for the computations most closely related to perceptual representations. Here, we discuss how the transformations along the ventral auditory pathway relate to auditory percepts, with special attention paid to the processing of vocalizations and categorization, and explore recent models of how these areas may carry out these computations. PMID:24681354
Sex differences in the representation of call stimuli in a songbird secondary auditory area
Giret, Nicolas; Menardy, Fabien; Del Negro, Catherine
2015-01-01
Understanding how communication sounds are encoded in the central auditory system is critical to deciphering the neural bases of acoustic communication. Songbirds use learned or unlearned vocalizations in a variety of social interactions. They have telencephalic auditory areas specialized for processing natural sounds that are considered to play a critical role in the discrimination of behaviorally relevant vocal sounds. The zebra finch, a highly social songbird species, forms lifelong pair bonds. Only male zebra finches sing, but both sexes produce the distance call when placed in visual isolation. This call is sexually dimorphic, is learned only in males, and supports individual recognition in both sexes. Here, we assessed whether auditory processing of distance calls differs between paired males and females by recording spiking activity in a secondary auditory area, the caudolateral mesopallium (CLM), while presenting the distance calls of a variety of individuals, including the bird itself, the mate, and familiar and unfamiliar males and females. In males, the CLM is potentially involved in auditory feedback processing important for vocal learning. Analyses of both spike rates and the temporal aspects of discharges clearly indicate that call-evoked responses of CLM neurons are sexually dimorphic, being stronger, lasting longer, and conveying more information about calls in males than in females. In addition, how auditory responses vary among call types differs between the sexes. In females, response strength differs between familiar male and female calls. In males, temporal features of responses reveal a sensitivity to the bird's own call. These findings provide evidence that sexual dimorphism occurs in higher-order processing areas within the auditory system.
They suggest a sexual dimorphism in CLM function, with the CLM contributing to the transmission of information about self-generated calls in males and to the storage of information about the bird's auditory experience in females. PMID:26578918
Auditory, Tactile, and Audiotactile Information Processing Following Visual Deprivation
ERIC Educational Resources Information Center
Occelli, Valeria; Spence, Charles; Zampini, Massimiliano
2013-01-01
We highlight the results of those studies that have investigated the plastic reorganization processes that occur within the human brain as a consequence of visual deprivation, as well as how these processes give rise to behaviorally observable changes in the perceptual processing of auditory and tactile information. We review the evidence showing…
When music is salty: The crossmodal associations between sound and taste.
Guetta, Rachel; Loui, Psyche
2017-01-01
Here we investigate associations between complex auditory and complex taste stimuli. A novel piece of music was composed and recorded in four different styles of musical articulation to reflect the four basic tastes groups (sweet, sour, salty, bitter). In Experiment 1, participants performed above chance at pairing the music clips with corresponding taste words. Experiment 2 uses multidimensional scaling to interpret how participants categorize these musical stimuli, and to show that auditory categories can be organized in a similar manner as taste categories. Experiment 3 introduces four different flavors of custom-made chocolate ganache and shows that participants can match music clips with the corresponding taste stimuli with above-chance accuracy. Experiment 4 demonstrates the partial role of pleasantness in crossmodal mappings between sound and taste. The present findings confirm that individuals are able to make crossmodal associations between complex auditory and gustatory stimuli, and that valence may mediate multisensory integration in the general population.
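The "above chance" claims in Experiments 1 and 3 are the kind of result typically checked with an exact binomial test against the 1-in-4 chance level implied by the four taste groups. A minimal sketch of that test, with hypothetical trial counts not taken from the study:

```python
import math

def binomial_p_value(successes, trials, chance=0.25):
    """One-sided exact binomial test: P(X >= successes) under the chance rate."""
    return sum(
        math.comb(trials, k) * chance**k * (1 - chance)**(trials - k)
        for k in range(successes, trials + 1)
    )

# With four taste categories, chance pairing accuracy is 1/4.
# Hypothetical: 14 correct music-taste pairings out of 32 trials.
p = binomial_p_value(14, 32)
print(p < 0.05)
```

Matching 14 of 32 pairs is well above the 8 expected by chance, so the one-sided p-value falls below the conventional 0.05 threshold.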
Auditory-Motor Processing of Speech Sounds
Möttönen, Riikka; Dutton, Rebekah; Watkins, Kate E.
2013-01-01
The motor regions that control movements of the articulators activate during listening to speech and contribute to performance in demanding speech recognition and discrimination tasks. Whether the articulatory motor cortex modulates auditory processing of speech sounds is unknown. Here, we aimed to determine whether the articulatory motor cortex affects the auditory mechanisms underlying discrimination of speech sounds in the absence of demanding speech tasks. Using electroencephalography, we recorded responses to changes in sound sequences, while participants watched a silent video. We also disrupted the lip or the hand representation in left motor cortex using transcranial magnetic stimulation. Disruption of the lip representation suppressed responses to changes in speech sounds, but not piano tones. In contrast, disruption of the hand representation had no effect on responses to changes in speech sounds. These findings show that disruptions within, but not outside, the articulatory motor cortex impair automatic auditory discrimination of speech sounds. The findings provide evidence for the importance of auditory-motor processes in efficient neural analysis of speech sounds. PMID:22581846
Huang, Ying; Matysiak, Artur; Heil, Peter; König, Reinhard; Brosch, Michael
2016-01-01
Working memory is the cognitive capacity of short-term storage of information for goal-directed behaviors. Where and how this capacity is implemented in the brain are unresolved questions. We show that auditory cortex stores information by persistent changes of neural activity. We separated activity related to working memory from activity related to other mental processes by having humans and monkeys perform different tasks with varying working memory demands on the same sound sequences. Working memory was reflected in the spiking activity of individual neurons in auditory cortex and in the activity of neuronal populations, that is, in local field potentials and magnetic fields. Our results provide direct support for the idea that temporary storage of information recruits the same brain areas that also process the information. Because similar activity was observed in the two species, the cellular bases of some auditory working memory processes in humans can be studied in monkeys. DOI: http://dx.doi.org/10.7554/eLife.15441.001 PMID:27438411
Neuropsychological impairments on the NEPSY-II among children with FASD.
Rasmussen, Carmen; Tamana, Sukhpreet; Baugh, Lauren; Andrew, Gail; Tough, Suzanne; Zwaigenbaum, Lonnie
2013-01-01
We examined the pattern of neuropsychological impairments of children with FASD (compared to controls) on NEPSY-II measures of attention and executive functioning, language, memory, visuospatial processing, and social perception. Participants included 32 children with FASD and 30 typically developing control children, ranging in age from 6 to 16 years. Children were tested on the following subtests of the NEPSY-II: Attention and Executive Functioning (animal sorting, auditory attention/response set, and inhibition), Language (comprehension of instructions and speeded naming), Memory (memory for names/delayed memory for names), Visual-Spatial Processing (arrows), and Social Perception (theory of mind). Groups were compared using MANOVA. Children with FASD were impaired relative to controls on the following subtests: animal sorting, response set, inhibition (naming and switching conditions), comprehension of instructions, speeded naming, and memory for names total and delayed, but group differences were not significant on auditory attention, inhibition (inhibition condition), arrows, and theory of mind. Among the FASD group, IQ scores were not correlated with performance on the NEPSY-II subtests, and there were no significant differences between those with and without comorbid ADHD. The NEPSY-II is an effective and useful tool for measuring a variety of neuropsychological impairments among children with FASD. Children with FASD displayed a pattern of results with impairments (relative to controls) on measures of executive functioning (set shifting, concept formation, and inhibition), language, and memory, and relative strengths on measures of basic attention, visual spatial processing, and social perception.
ERIC Educational Resources Information Center
Faronii-Butler, Kishasha O.
2013-01-01
This auto-ethnographical inquiry used vignettes and interviews to examine the therapeutic use of music and other forms of organized sound in the learning environment of individuals with Central Auditory Processing Disorders. It is an investigation of the traditions of healing with sound vibrations, from its earliest cultural roots in shamanism and…
Auditory Deprivation and Early Conductive Hearing Loss from Otitis Media.
ERIC Educational Resources Information Center
Gunnarson, Adele D.; And Others
1990-01-01
This article reviews auditory deprivation effects on anatomy, physiology, and behavior in animals and discusses the sequelae of otitis media with effusion (OME) in children. Focused on are central auditory processing disorders associated with early fluctuating hearing loss from OME. (DB)
Biagianti, Bruno; Fisher, Melissa; Neilands, Torsten B; Loewy, Rachel; Vinogradov, Sophia
2016-11-01
Individuals with schizophrenia who engage in targeted cognitive training (TCT) of the auditory system show generalized cognitive improvements. The high degree of variability in cognitive gains may be due to individual differences in the level of engagement of the underlying neural system target. 131 individuals with schizophrenia underwent 40 hours of TCT. We identified target engagement of auditory system processing efficiency by modeling subject-specific trajectories of auditory processing speed (APS) over time. Lowess analysis, mixed models repeated measures analysis, and latent growth curve modeling were used to examine whether APS trajectories were moderated by age and illness duration, and mediated improvements in cognitive outcome measures. We observed significant improvements in APS from baseline to 20 hours of training (initial change), followed by a flat APS trajectory (plateau) at subsequent time-points. Participants showed interindividual variability in the steepness of the initial APS change and in the APS plateau achieved and sustained between 20 and 40 hours. We found that participants who achieved the fastest APS plateau showed the greatest transfer effects to untrained cognitive domains. There is a significant association between an individual's ability to generate and sustain auditory processing efficiency and their degree of cognitive improvement after TCT, independent of baseline neurocognition. APS plateau may therefore represent a behavioral measure of target engagement mediating treatment response. Future studies should examine the optimal plateau of auditory processing efficiency required to induce significant cognitive improvements, in the context of interindividual differences in neural plasticity and sensory system efficiency that characterize schizophrenia. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
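The subject-specific trajectories described here (steep initial change followed by a plateau) can be caricatured with an exponential approach to a plateau. The sketch below fits only the time constant by grid search; the functional form, grid, and data are illustrative assumptions, not the study's lowess or latent-growth-curve machinery:

```python
import math

def aps_model(t, baseline, plateau, tau):
    """Exponential approach to a plateau: fast initial gain, then flat."""
    return plateau - (plateau - baseline) * math.exp(-t / tau)

def fit_tau(hours, aps, baseline, plateau):
    """Grid-search the time constant that best fits one subject's trajectory."""
    best = min(
        (sum((aps_model(t, baseline, plateau, tau) - a) ** 2
             for t, a in zip(hours, aps)), tau)
        for tau in [x / 10 for x in range(1, 200)]
    )
    return best[1]

hours = [0, 10, 20, 30, 40]
# Hypothetical subject: APS rises quickly, then plateaus by ~20 hours.
aps = [aps_model(t, 1.0, 2.0, 5.0) for t in hours]
print(fit_tau(hours, aps, 1.0, 2.0))  # recovers tau = 5.0
```

A small fitted time constant corresponds to the "fastest plateau" profile that the study links to the largest transfer effects.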
Terband, H; Maassen, B; Guenther, F H; Brumberg, J
2014-01-01
Differentiating the symptom complex due to phonological-level disorders, speech delay and pediatric motor speech disorders is a controversial issue in the field of pediatric speech and language pathology. The present study investigated the developmental interaction between neurological deficits in auditory and motor processes using computational modeling with the DIVA model. In a series of computer simulations, we investigated the effect of a motor processing deficit alone (MPD), and the effect of a motor processing deficit in combination with an auditory processing deficit (MPD+APD) on the trajectory and endpoint of speech motor development in the DIVA model. Simulation results showed that a motor programming deficit predominantly leads to deterioration on the phonological level (phonemic mappings) when auditory self-monitoring is intact, and on the systemic level (systemic mapping) if auditory self-monitoring is impaired. These findings suggest a close relation between quality of auditory self-monitoring and the involvement of phonological vs. motor processes in children with pediatric motor speech disorders. It is suggested that MPD+APD might be involved in typically apraxic speech output disorders and MPD in pediatric motor speech disorders that also have a phonological component. Possibilities to verify these hypotheses using empirical data collected from human subjects are discussed. The reader will be able to: (1) identify the difficulties in studying disordered speech motor development; (2) describe the differences in speech motor characteristics between SSD and subtype CAS; (3) describe the different types of learning that occur in the sensory-motor system during babbling and early speech acquisition; (4) identify the neural control subsystems involved in speech production; (5) describe the potential role of auditory self-monitoring in developmental speech disorders. Copyright © 2014 Elsevier Inc. All rights reserved.
Oscillatory support for rapid frequency change processing in infants.
Musacchia, Gabriella; Choudhury, Naseem A; Ortiz-Mantilla, Silvia; Realpe-Bonilla, Teresa; Roesler, Cynthia P; Benasich, April A
2013-11-01
Rapid auditory processing and auditory change detection abilities are crucial aspects of speech and language development, particularly in the first year of life. Animal models and adult studies suggest that oscillatory synchrony, and in particular low-frequency oscillations play key roles in this process. We hypothesize that infant perception of rapid pitch and timing changes is mediated, at least in part, by oscillatory mechanisms. Using event-related potentials (ERPs), source localization and time-frequency analysis of event-related oscillations (EROs), we examined the neural substrates of rapid auditory processing in 4-month-olds. During a standard oddball paradigm, infants listened to tone pairs with invariant standard (STD, 800-800 Hz) and variant deviant (DEV, 800-1200 Hz) pitch. STD and DEV tone pairs were first presented in a block with a short inter-stimulus interval (ISI) (Rapid Rate: 70 ms ISI), followed by a block of stimuli with a longer ISI (Control Rate: 300 ms ISI). Results showed greater ERP peak amplitude in response to the DEV tone in both conditions and later and larger peaks during Rapid Rate presentation, compared to the Control condition. Sources of neural activity, localized to right and left auditory regions, showed larger and faster activation in the right hemisphere for both rate conditions. Time-frequency analysis of the source activity revealed clusters of theta band enhancement to the DEV tone in right auditory cortex for both conditions. Left auditory activity was enhanced only during Rapid Rate presentation. These data suggest that local low-frequency oscillatory synchrony underlies rapid processing and can robustly index auditory perception in young infants. Furthermore, left hemisphere recruitment during rapid frequency change discrimination suggests a difference in the spectral and temporal resolution of right and left hemispheres at a very young age. © 2013 Elsevier Ltd. All rights reserved.
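The ERP measures described above rest on averaging many time-locked epochs so that stimulus-locked deflections survive while noise cancels, and on contrasting deviant (DEV) with standard (STD) responses. A toy single-channel sketch with synthetic data (all amplitudes, window positions, and trial counts are hypothetical):

```python
import random

def average_erp(epochs):
    """Average equal-length, time-locked epochs sample by sample."""
    n = len(epochs)
    return [sum(samples) / n for samples in zip(*epochs)]

random.seed(0)

def make_epoch(peak):
    # Synthetic epoch: an evoked deflection over samples 40-59 plus unit noise.
    return [peak * (40 <= t < 60) + random.gauss(0, 1) for t in range(100)]

# Oddball-style design: many standards, few deviants, larger deviant response.
standards = [make_epoch(1.0) for _ in range(80)]
deviants = [make_epoch(3.0) for _ in range(20)]

# Difference wave: averaging reveals the deviant-minus-standard effect
# that single noisy epochs would hide.
difference = [d - s for d, s in zip(average_erp(deviants), average_erp(standards))]
print(round(max(difference), 2))
```

The simulated effect (about 2 units) emerges cleanly in the averaged difference wave even though the single-trial noise has the same order of magnitude as the evoked response.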
Auditory Middle Latency Response and Phonological Awareness in Students with Learning Disabilities
Romero, Ana Carla Leite; Funayama, Carolina Araújo Rodrigues; Capellini, Simone Aparecida; Frizzo, Ana Claudia Figueiredo
2015-01-01
Introduction: Behavioral tests of auditory processing have been applied in schools and highlight the association between phonological awareness abilities and auditory processing, confirming that low performance on phonological awareness tests may be due to low performance on auditory processing tests. Objective: To characterize the auditory middle latency response and phonological awareness tests and to investigate correlations between responses in a group of children with learning disorders. Methods: The study included 25 students with learning disabilities. Phonological awareness and the auditory middle latency response were tested with electrodes placed over the left and right hemispheres. The correlation between the measurements was assessed using the Spearman rank correlation coefficient. Results: There is some correlation between the tests, especially between the Pa component and syllabic awareness, where a moderate negative correlation is observed. Conclusion: On the phonological awareness subtests, specifically phonemic awareness, the students scored low for their age group, whereas on the objective examination prolonged Pa latency in the contralateral pathway was observed. Weak to moderate negative correlation was observed for Pa wave latency, and weak positive correlation for Na-Pa amplitude. PMID:26491479
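The Spearman rank correlation reported here is simply the Pearson correlation computed on rank vectors, with tied values assigned the average of their positions. A self-contained sketch, using made-up latency and score values chosen so that longer Pa latencies pair with lower syllabic-awareness scores (the direction of the reported negative correlation):

```python
def rank(values):
    """1-based ranks, with ties assigned the average of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: perfectly monotonic decrease, so rho = -1.0.
pa_latency_ms = [34.1, 36.5, 33.0, 38.2, 35.7]
syllabic_score = [9, 6, 10, 4, 7]
print(spearman_rho(pa_latency_ms, syllabic_score))
```

Because rho depends only on ranks, it captures monotonic latency-score relationships without assuming the linearity that Pearson's r requires.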
Tost, H; Meyer-Lindenberg, A; Ruf, M; Demirakça, T; Grimm, O; Henn, F A; Ende, G
2005-02-01
Modern neuroimaging techniques such as magnetic resonance imaging (MRI) and positron emission tomography (PET) have contributed tremendously to our current understanding of psychiatric disorders in the context of functional, biochemical and microstructural alterations of the brain. Since the mid-nineties, functional MRI has provided major insights into the neurobiological correlates of signs and symptoms in schizophrenia. The current paper reviews important fMRI studies of the past decade in the domains of motor, visual, auditory, attentional and working memory function. Special emphasis is given to new methodological approaches, such as the visualisation of medication effects and the functional characterisation of risk genes.
Content-based TV sports video retrieval using multimodal analysis
NASA Astrophysics Data System (ADS)
Yu, Yiqing; Liu, Huayong; Wang, Hongbin; Zhou, Dongru
2003-09-01
In this paper, we propose content-based video retrieval, i.e., retrieval based on semantic content. Because video data comprises multimodal information streams (visual, auditory, and textual), we describe a strategy that uses multimodal analysis to parse sports video automatically. The paper first defines the basic structure of a sports video database system and then introduces a new approach that integrates visual stream analysis, speech recognition, speech signal processing, and text extraction to realize video retrieval. Experimental results for TV sports video of football games indicate that multimodal analysis is effective for video retrieval through quickly browsing tree-like video clips or entering keywords within a predefined domain.
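Visual stream analysis for video parsing usually starts with shot-boundary detection, and one classic heuristic thresholds the histogram difference between consecutive frames. A minimal sketch of that heuristic; the frame representation, bin count, and threshold are illustrative choices, not the paper's actual algorithm:

```python
def histogram(frame, bins=8):
    """Coarse grayscale histogram of a frame given as a flat list of 0-255 ints."""
    h = [0] * bins
    for px in frame:
        h[px * bins // 256] += 1
    return h

def shot_boundaries(frames, threshold=0.5):
    """Flag a cut wherever the normalized histogram difference between
    consecutive frames exceeds the threshold (0 = identical, 1 = disjoint)."""
    cuts = []
    for i in range(1, len(frames)):
        h1, h2 = histogram(frames[i - 1]), histogram(frames[i])
        diff = sum(abs(a - b) for a, b in zip(h1, h2)) / (2 * len(frames[i]))
        if diff > threshold:
            cuts.append(i)
    return cuts

# Three dark frames followed by three bright frames: one hard cut at index 3.
frames = [[10] * 100] * 3 + [[200] * 100] * 3
print(shot_boundaries(frames))  # → [3]
```

Detected cuts segment the video into shots, which can then be organized into the tree-like clip hierarchy the abstract describes browsing.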
2011-01-01
Background: Measuring electrical brain signals with event-related potentials (ERPs) during auditory and visual oddball paradigms allows the relationship between neuronal activity and stimulus processing to be examined. The aim of this study was to discriminate the activation changes evoked by auditory and visual stimulation between schizophrenic patients and normal subjects. Methods: Forty-three schizophrenic patients were selected as the experimental group, and 40 healthy subjects with no medical history of any psychiatric disease, neurological disease, or drug abuse were recruited as a control group. Auditory and visual ERPs were studied with an oddball paradigm. All data were analyzed with SPSS statistical software, version 10.0. Results: In the comparison of auditory and visual ERPs between the schizophrenic patients and healthy subjects, P300 amplitude and N100, N200, and P200 latencies at Fz, Cz, and Pz differed significantly. The cognitive processing reflected by the auditory and visual P300 latency to rare target stimuli is probably an indicator of cognitive function in schizophrenic patients. Conclusions: This study shows that an auditory and visual oddball paradigm identifies task-relevant sources of activity and allows separation of regions with different response properties. Our findings indicate that automatic and controlled cognitive processing of visual ERPs may be slowed relative to auditory ERPs in schizophrenic patients, and that the activation changes of visual evoked potentials are more regionally specific than those of auditory evoked potentials. PMID:21542917
Sensory-to-motor integration during auditory repetition: a combined fMRI and lesion study
Parker Jones, ‘Ōiwi; Prejawa, Susan; Hope, Thomas M. H.; Oberhuber, Marion; Seghier, Mohamed L.; Leff, Alex P.; Green, David W.; Price, Cathy J.
2014-01-01
The aim of this paper was to investigate the neurological underpinnings of auditory-to-motor translation during auditory repetition of unfamiliar pseudowords. We tested two different hypotheses. First we used functional magnetic resonance imaging in 25 healthy subjects to determine whether a functionally defined area in the left temporo-parietal junction (TPJ), referred to as Sylvian-parietal-temporal region (Spt), reflected the demands on auditory-to-motor integration during the repetition of pseudowords relative to a semantically mediated nonverbal sound-naming task. The experiment also allowed us to test alternative accounts of Spt function, namely that Spt is involved in subvocal articulation or auditory processing that can be driven either bottom-up or top-down. The results did not provide convincing evidence that activation increased in either Spt or any other cortical area when non-semantic auditory inputs were being translated into motor outputs. Instead, the results were most consistent with Spt responding to bottom up or top down auditory processing, independent of the demands on auditory-to-motor integration. Second, we investigated the lesion sites in eight patients who had selective difficulties repeating heard words but with preserved word comprehension, picture naming and verbal fluency (i.e., conduction aphasia). All eight patients had white-matter tract damage in the vicinity of the arcuate fasciculus and only one of the eight patients had additional damage to the Spt region, defined functionally in our fMRI data. Our results are therefore most consistent with the neurological tradition that emphasizes the importance of the arcuate fasciculus in the non-semantic integration of auditory and motor speech processing. PMID:24550807
Liu, Ying; Hu, Huijing; Jones, Jeffery A; Guo, Zhiqiang; Li, Weifeng; Chen, Xi; Liu, Peng; Liu, Hanjun
2015-08-01
Speakers rapidly adjust their ongoing vocal productions to compensate for errors they hear in their auditory feedback. It is currently unclear what role attention plays in these vocal compensations. This event-related potential (ERP) study examined the influence of selective and divided attention on the vocal and cortical responses to pitch errors heard in auditory feedback regarding ongoing vocalisations. During the production of a sustained vowel, participants briefly heard their vocal pitch shifted up two semitones while they actively attended to auditory or visual events (selective attention), or both auditory and visual events (divided attention), or were not told to attend to either modality (control condition). The behavioral results showed that attending to the pitch perturbations elicited larger vocal compensations than attending to the visual stimuli. Moreover, ERPs were likewise sensitive to the attentional manipulations: P2 responses to pitch perturbations were larger when participants attended to the auditory stimuli compared to when they attended to the visual stimuli, and compared to when they were not explicitly told to attend to either the visual or auditory stimuli. By contrast, dividing attention between the auditory and visual modalities caused suppressed P2 responses relative to all the other conditions and caused enhanced N1 responses relative to the control condition. These findings provide strong evidence for the influence of attention on the mechanisms underlying the auditory-vocal integration in the processing of pitch feedback errors. In addition, selective attention and divided attention appear to modulate the neurobehavioral processing of pitch feedback errors in different ways. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Schrode, Katrina M; Bee, Mark A
2015-03-01
Sensory systems function most efficiently when processing natural stimuli, such as vocalizations, and it is thought that this reflects evolutionary adaptation. Among the best-described examples of evolutionary adaptation in the auditory system are the frequent matches between spectral tuning in both the peripheral and central auditory systems of anurans (frogs and toads) and the frequency spectra of conspecific calls. Tuning to the temporal properties of conspecific calls is less well established, and in anurans has so far been documented only in the central auditory system. Using auditory-evoked potentials, we asked whether there are species-specific or sex-specific adaptations of the auditory systems of gray treefrogs (Hyla chrysoscelis) and green treefrogs (H. cinerea) to the temporal modulations present in conspecific calls. Modulation rate transfer functions (MRTFs) constructed from auditory steady-state responses revealed that each species was more sensitive than the other to the modulation rates typical of conspecific advertisement calls. In addition, auditory brainstem responses (ABRs) to paired clicks indicated relatively better temporal resolution in green treefrogs, which could represent an adaptation to the faster modulation rates present in the calls of this species. MRTFs and recovery of ABRs to paired clicks were generally similar between the sexes, and we found no evidence that males were more sensitive than females to the temporal modulation patterns characteristic of the aggressive calls used in male-male competition. Together, our results suggest that efficient processing of the temporal properties of behaviorally relevant sounds begins at potentially very early stages of the anuran auditory system that include the periphery. © 2015. Published by The Company of Biologists Ltd.
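A modulation rate transfer function of the kind constructed here is probed with sinusoidally amplitude-modulated (SAM) stimuli whose modulation rate is varied across trials. A minimal sketch of generating one such stimulus; the carrier frequency, modulation rate, and depth are hypothetical parameters, not values from the study:

```python
import math

def am_tone(carrier_hz, mod_hz, depth, dur_s, fs=44100):
    """Sinusoidally amplitude-modulated tone: envelope (1 + depth*sin) scales
    a sinusoidal carrier, the stimulus class used to build an MRTF."""
    return [
        (1 + depth * math.sin(2 * math.pi * mod_hz * t / fs))
        * math.sin(2 * math.pi * carrier_hz * t / fs)
        for t in range(int(dur_s * fs))
    ]

# Hypothetical stimulus: 2 kHz carrier modulated at 50 Hz, full depth, 100 ms.
stim = am_tone(2000, 50, 1.0, 0.1)
print(len(stim))  # → 4410 samples at fs = 44100
```

Sweeping `mod_hz` across the rates found in conspecific calls and measuring the steady-state response at each rate yields the species-specific tuning curves the study compares.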
Cerebral responses to local and global auditory novelty under general anesthesia
Uhrig, Lynn; Janssen, David; Dehaene, Stanislas; Jarraya, Béchir
2017-01-01
Primate brains can detect a variety of unexpected deviations in auditory sequences. The local-global paradigm dissociates two hierarchical levels of auditory predictive coding by examining the brain responses to first-order (local) and second-order (global) sequence violations. Using the macaque model, we previously demonstrated that, in the awake state, local violations cause focal auditory responses while global violations activate a brain circuit comprising prefrontal, parietal and cingulate cortices. Here we used the same local-global auditory paradigm to clarify the encoding of the hierarchical auditory regularities in anesthetized monkeys and compared their brain responses to those obtained in the awake state as measured with fMRI. Both propofol, a GABAA agonist, and ketamine, an NMDA antagonist, left intact or even enhanced the cortical response to auditory inputs. The local effect vanished during propofol anesthesia and shifted spatially during ketamine anesthesia compared with wakefulness. Under increasing levels of propofol, we observed a progressive disorganization of the global effect in prefrontal, parietal and cingulate cortices and its complete suppression under ketamine anesthesia. Anesthesia also suppressed thalamic activations to the global effect. These results suggest that anesthesia preserves initial auditory processing, but disturbs both short-term and long-term auditory predictive coding mechanisms. The disorganization of auditory novelty processing under anesthesia relates to a loss of thalamic responses to novelty and to a disruption of higher-order functional cortical networks in parietal, prefrontal and cingulate cortices. PMID:27502046
NASA Astrophysics Data System (ADS)
Wisniowiecki, Anna M.; Mattison, Scott P.; Kim, Sangmin; Riley, Bruce; Applegate, Brian E.
2016-03-01
The zebrafish, an auditory specialist among fish, has auditory structures analogous to those of other vertebrates and is a model for hearing and deafness in vertebrates, including humans. Nevertheless, many questions remain about the basic mechanics of the auditory pathway. Phase-sensitive Optical Coherence Tomography has proven a valuable technique for functional vibrometric measurements in the murine ear. Such measurements are key to building a complete understanding of auditory mechanics. The application of these techniques in the zebrafish is impeded by the high level of pigmentation, which develops superior to the transverse plane and envelops the auditory system superficially. A zebrafish double mutant for nacre and roy (mitfa-/-;roya-/- [casper]), which lacks neural-crest-derived melanocytes and iridophores at all stages of development, is pursued here to improve image quality and sensitivity for functional imaging. So far, our investigations with the casper mutants have enabled identification of the specialized hearing organs, the fluid-filled canal connecting the ears, and sub-structures of the semicircular canals. In our previous work with wild-type zebrafish, we were able to identify and observe stimulated vibration of only the largest structures, specifically the anterior swim bladder and tripus ossicle, even in small larval specimens with fully developed inner ears. In conclusion, this genetic mutant will enable the study of the dynamics of the zebrafish ear from the early larval stages all the way into adulthood.
Characteristics of hearing and echolocation in under-studied odontocete species
NASA Astrophysics Data System (ADS)
Smith, Adam B.
All odontocetes (toothed whales and dolphins) studied to date have been shown to echolocate. They use sound as their primary means for foraging, navigation, and communication with conspecifics and are thus considered acoustic specialists. However, the vast majority of what is known about odontocete acoustic systems comes from only a handful of the 76 recognized extant species. The research presented in this dissertation investigated basic characteristics of odontocete hearing and echolocation, including auditory temporal resolution, auditory pathways, directional hearing, and transmission beam characteristics, in individuals of five understudied odontocete species. Modulation rate transfer functions were measured from formerly stranded individuals of four different species (Stenella longirostris, Feresa attenuata, Globicephala melas, Mesoplodon densirostris) using non-invasive auditory evoked potential methods. All individuals showed acute auditory temporal resolution that was comparable to other studied odontocete species. Using the same electrophysiological methods, auditory pathways and directional hearing were investigated in a Risso's dolphin (Grampus griseus) using both localized and far-field acoustic stimuli. The dolphin's hearing showed significant, frequency-dependent asymmetry to localized sound presented on the right and left sides of its head. The dolphin also showed acute, but mostly symmetrical, directional auditory sensitivity to sounds presented in the far field. Furthermore, characteristics of the echolocation transmission beam of this same individual Risso's dolphin were measured using a 16-element hydrophone array. The dolphin exhibited both single and dual lobed beam shapes that were more directional than similar measurements from a bottlenose dolphin, harbor porpoise, and false killer whale.
Naftidrofuryl affects neurite regeneration by injured adult auditory neurons.
Lefebvre, P P; Staecker, H; Moonen, G; van de Water, T R
1993-07-01
Afferent auditory neurons are essential for the transmission of auditory information from Corti's organ to the central auditory pathway. Auditory neurons are very sensitive to acute insult and have a limited ability to regenerate injured neuronal processes. Therefore, these neurons appear to be a limiting factor in the restoration of hearing function following an injury to the peripheral auditory receptor. In a previous study, nerve growth factor (NGF) was shown to stimulate neurite repair but not survival of injured auditory neurons. In this study, we have demonstrated a neuritogenesis-promoting effect of naftidrofuryl in an in vitro model of injury to adult auditory neurons, i.e. dissociated cell cultures of adult rat spiral ganglia. Conversely, naftidrofuryl did not have any demonstrable survival-promoting effect on these in vitro preparations of injured auditory neurons. The potential uses of this drug as a therapeutic agent in acute diseases of the inner ear are discussed in the light of these observations.
Bühler, Tim; Kindler, Jochen; Schneider, Rahel C; Strik, Werner; Dierks, Thomas; Hubl, Daniela; Koenig, Thomas
2016-09-01
A 'sense of self' is essentially the ability to distinguish between self-generated and external stimuli. It consists of at least two very basic senses: a sense of agency and a sense of ownership. Disturbances of these senses appear to constitute a basic deficit in many psychiatric diseases. The aim of our study was to manipulate those qualities separately in 28 patients with schizophrenia (14 auditory hallucinators and 14 non-hallucinators) and 28 healthy controls (HC) and to investigate the effects on the topographies and the power of the event-related potential (ERP). We performed a 76-channel EEG while the participants performed the task as in our previous paper. We computed ERPs and difference maps for the conditions and compared the amount of agency and ownership between the HC and the patients. Furthermore, we compared the global field power and the topographies of these effects. Our data showed effects of agency and ownership in the healthy controls and the hallucinator group and, to a lesser degree, in the non-hallucinator group. We found a reduction of the N100 during the presence of agency, and a bilateral temporal negativity related to the presence of ownership. For the agency effects, we found significant differences between HC and the patients. Contrary to expectations, our findings were more pronounced in non-hallucinators, suggesting a more profoundly disturbed sense of agency compared to hallucinators. A concurrent increase of global field power in both patient groups indicates a compensatory recruitment of other mechanisms not normally associated with the processing of agency and ownership.
Potes, Cristhian; Brunner, Peter; Gunduz, Aysegul; Knight, Robert T; Schalk, Gerwin
2014-08-15
Neuroimaging approaches have implicated multiple brain sites in musical perception, including the posterior part of the superior temporal gyrus and adjacent perisylvian areas. However, the detailed spatial and temporal relationship of neural signals that support auditory processing is largely unknown. In this study, we applied a novel inter-subject analysis approach to electrophysiological signals recorded from the surface of the brain (electrocorticography (ECoG)) in ten human subjects. This approach allowed us to reliably identify those ECoG features that were related to the processing of a complex auditory stimulus (i.e., a continuous piece of music) and to investigate their spatial, temporal, and causal relationships. Our results identified stimulus-related modulations in the alpha (8-12 Hz) and high gamma (70-110 Hz) bands at neuroanatomical locations implicated in auditory processing. Specifically, we identified stimulus-related ECoG modulations in the alpha band in areas adjacent to primary auditory cortex, which are known to receive afferent auditory projections from the thalamus (80 of a total of 15,107 tested sites). In contrast, we identified stimulus-related ECoG modulations in the high gamma band not only in areas close to primary auditory cortex but also in other perisylvian areas known to be involved in higher-order auditory processing, and in superior premotor cortex (412/15,107 sites). Across all implicated areas, modulations in the high gamma band preceded those in the alpha band by 280 ms, and activity in the high gamma band causally predicted alpha activity, but not vice versa (Granger causality, p < 1e-8). Additionally, detailed analyses using Granger causality identified causal relationships of high gamma activity between distinct locations in early auditory pathways within superior temporal gyrus (STG) and posterior STG, between posterior STG and inferior frontal cortex, and between STG and premotor cortex.
Evidence suggests that these relationships reflect direct cortico-cortical connections rather than common driving input from subcortical structures such as the thalamus. In summary, our inter-subject analyses defined the spatial and temporal relationships between music-related brain activity in the alpha and high gamma bands. They provide experimental evidence supporting current theories about the putative mechanisms of alpha and gamma activity, i.e., reflections of thalamo-cortical interactions and local cortical neural activity, respectively, and the results are also in agreement with existing functional models of auditory processing. Copyright © 2014 Elsevier Inc. All rights reserved.
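Granger causality, as applied in the study above, asks whether the past of one time series improves prediction of another beyond that series' own past. A minimal self-contained sketch on synthetic data (the signals, lag choice, and function name here are illustrative assumptions, not the authors' ECoG pipeline):

```python
import numpy as np

def granger_f(target, source, lag):
    """F statistic comparing a model of `target` from its own past (restricted)
    against one that also includes `source`'s past (full). A larger F means
    stronger evidence that `source` Granger-causes `target`."""
    n = len(target)
    Y = target[lag:]
    # Lagged regressors: columns are the series at t-1, ..., t-lag
    own = np.column_stack([target[lag - j:n - j] for j in range(1, lag + 1)])
    cross = np.column_stack([source[lag - j:n - j] for j in range(1, lag + 1)])
    ones = np.ones((n - lag, 1))
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    rss_r = rss(np.hstack([ones, own]))           # restricted model
    rss_f = rss(np.hstack([ones, own, cross]))    # full model
    df_denom = (n - lag) - (1 + 2 * lag)
    return ((rss_r - rss_f) / lag) / (rss_f / df_denom)

# Synthetic example: y follows x with a 2-sample delay, so x should
# Granger-cause y but not the reverse.
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
y = np.zeros(2000)
y[2:] = 0.8 * x[:-2] + 0.3 * rng.standard_normal(1998)

f_xy = granger_f(y, x, lag=4)  # large: x's past predicts y
f_yx = granger_f(x, y, lag=4)  # near 1: y's past does not predict x
```

In practice the F statistic is converted to a p-value against the F(lag, df_denom) distribution; the asymmetry between f_xy and f_yx is what licenses a directional claim such as "high gamma predicts alpha, but not vice versa."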
Richardson, Fiona M; Ramsden, Sue; Ellis, Caroline; Burnett, Stephanie; Megnin, Odette; Catmur, Caroline; Schofield, Tom M; Leff, Alex P; Price, Cathy J
2011-12-01
A central feature of auditory STM is its item-limited processing capacity. We investigated whether auditory STM capacity correlated with regional gray and white matter in the structural MRI images from 74 healthy adults, 40 of whom had a prior diagnosis of developmental dyslexia whereas 34 had no history of any cognitive impairment. Using whole-brain statistics, we identified a region in the left posterior STS where gray matter density was positively correlated with forward digit span, backward digit span, and performance on a "spoonerisms" task that required both auditory STM and phoneme manipulation. Across tasks and participant groups, the correlation was highly significant even when variance related to reading and auditory nonword repetition was factored out. Although the dyslexics had poorer phonological skills, the effect of auditory STM capacity in the left STS was the same as in the cognitively normal group. We also illustrate that the anatomical location of this effect is in proximity to a lesion site recently associated with reduced auditory STM capacity in patients with stroke damage. This result, therefore, indicates that gray matter density in the posterior STS predicts auditory STM capacity in the healthy and damaged brain. In conclusion, we suggest that our present findings are consistent with the view that there is an overlap between the mechanisms that support language processing and auditory STM.
Speech training alters consonant and vowel responses in multiple auditory cortex fields
Engineer, Crystal T.; Rahebi, Kimiya C.; Buell, Elizabeth P.; Fink, Melyssa K.; Kilgard, Michael P.
2015-01-01
Speech sounds evoke unique neural activity patterns in primary auditory cortex (A1). Extensive speech sound discrimination training alters A1 responses. While the neighboring auditory cortical fields each contain information about speech sound identity, each field processes speech sounds differently. We hypothesized that while all fields would exhibit training-induced plasticity following speech training, there would be unique differences in how each field changes. In this study, rats were trained to discriminate speech sounds by consonant or vowel in quiet and in varying levels of background speech-shaped noise. Local field potential and multiunit responses were recorded from four auditory cortex fields in rats that had received 10 weeks of speech discrimination training. Our results reveal that training alters speech evoked responses in each of the auditory fields tested. The neural response to consonants was significantly stronger in anterior auditory field (AAF) and A1 following speech training. The neural response to vowels following speech training was significantly weaker in ventral auditory field (VAF) and posterior auditory field (PAF). This differential plasticity of consonant and vowel sound responses may result from the greater paired pulse depression, expanded low frequency tuning, reduced frequency selectivity, and lower tone thresholds, which occurred across the four auditory fields. These findings suggest that alterations in the distributed processing of behaviorally relevant sounds may contribute to robust speech discrimination. PMID:25827927
Cortical mechanisms for the segregation and representation of acoustic textures.
Overath, Tobias; Kumar, Sukhbinder; Stewart, Lauren; von Kriegstein, Katharina; Cusack, Rhodri; Rees, Adrian; Griffiths, Timothy D
2010-02-10
Auditory object analysis requires two fundamental perceptual processes: the definition of the boundaries between objects, and the abstraction and maintenance of an object's characteristic features. Although it is intuitive to assume that the detection of the discontinuities at an object's boundaries precedes the subsequent precise representation of the object, the specific underlying cortical mechanisms for segregating and representing auditory objects within the auditory scene are unknown. We investigated the cortical bases of these two processes for one type of auditory object, an "acoustic texture," composed of multiple frequency-modulated ramps. In these stimuli, we independently manipulated the statistical rules governing (1) the frequency-time space within individual textures (comprising ramps with a given spectrotemporal coherence) and (2) the boundaries between textures (adjacent textures with different spectrotemporal coherences). Using functional magnetic resonance imaging, we show mechanisms defining boundaries between textures with different coherences in primary and association auditory cortices, whereas texture coherence is represented only in association cortex. Furthermore, participants' superior detection of boundaries across which texture coherence increased (as opposed to decreased) was reflected in a greater neural response in auditory association cortex at these boundaries. The results suggest a hierarchical mechanism for processing acoustic textures that is relevant to auditory object analysis: boundaries between objects are first detected as a change in statistical rules over frequency-time space, before a representation that corresponds to the characteristics of the perceived object is formed.
Auditory post-processing in a passive listening task is deficient in Alzheimer's disease.
Bender, Stephan; Bluschke, Annet; Dippel, Gabriel; Rupp, André; Weisbrod, Matthias; Thomas, Christine
2014-01-01
To investigate whether automatic auditory post-processing is deficient in patients with Alzheimer's disease and is related to sensory gating. Event-related potentials were recorded during a passive listening task to examine the automatic transient storage of auditory information (short click pairs). Patients with Alzheimer's disease were compared to a healthy age-matched control group. A young healthy control group was included to assess effects of physiological aging. A bilateral frontal negativity in combination with deep temporal positivity occurring 500 ms after stimulus offset was reduced in patients with Alzheimer's disease, but was unaffected by physiological aging. Its amplitude correlated with short-term memory capacity, but was independent of sensory gating in healthy elderly controls. Source analysis revealed a dipole pair in the anterior temporal lobes. Results suggest that auditory post-processing is deficient in Alzheimer's disease, but is not typically related to sensory gating. The deficit could neither be explained by physiological aging nor by problems in earlier stages of auditory perception. Correlations with short-term memory capacity and executive control tasks suggested an association with memory encoding and/or overall cognitive control deficits. An auditory late negative wave could represent a marker of auditory working memory encoding deficits in Alzheimer's disease. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Effects of Auditory Distraction on Cognitive Processing of Young Adults
ERIC Educational Resources Information Center
LaPointe, Leonard L.; Heald, Gary R.; Stierwalt, Julie A. G.; Kemker, Brett E.; Maurice, Trisha
2007-01-01
Objective: The effects of interference, competition, and distraction on cognitive processing are not clearly understood, particularly regarding the type and intensity of auditory distraction across a variety of cognitive processing tasks. Method: The purpose of this investigation was to report two experiments that sought to explore the effects of types…
Are Auditory and Visual Processing Deficits Related to Developmental Dyslexia?
ERIC Educational Resources Information Center
Georgiou, George K.; Papadopoulos, Timothy C.; Zarouna, Elena; Parrila, Rauno
2012-01-01
The purpose of this study was to examine if children with dyslexia learning to read a consistent orthography (Greek) experience auditory and visual processing deficits and if these deficits are associated with phonological awareness, rapid naming speed and orthographic processing. We administered measures of general cognitive ability, phonological…
Word Recognition in Auditory Cortex
ERIC Educational Resources Information Center
DeWitt, Iain D. J.
2013-01-01
Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…
NASA Astrophysics Data System (ADS)
Moore, Brian C. J.
Psychoacoustics
Jafari, Zahra; Esmaili, Mahdiye; Delbari, Ahmad; Mehrpour, Masoud; Mohajerani, Majid H
2016-06-01
There have been a few reports about the effects of chronic stroke on auditory temporal processing abilities and no reports regarding the effects of brain damage lateralization on these abilities. Our study was performed on 2 groups of chronic stroke patients to compare the effects of hemispheric lateralization of brain damage and of age on auditory temporal processing. Seventy persons with normal hearing, including 25 normal controls, 25 stroke patients with damage to the right brain, and 20 stroke patients with damage to the left brain, without aphasia and with an age range of 31-71 years were studied. A gap-in-noise (GIN) test and a duration pattern test (DPT) were conducted for each participant. Significant differences were found between the 3 groups for GIN threshold, overall GIN percent score, and DPT percent score in both ears (P ≤ .001). For all stroke patients, performance in both GIN and DPT was poorer in the ear contralateral to the damaged hemisphere, which was significant in DPT and in 2 measures of GIN (P ≤ .046). Advanced age had a negative relationship with temporal processing abilities for all 3 groups. In cases of confirmed left- or right-side stroke involving auditory cerebrum damage, poorer auditory temporal processing is associated with the ear contralateral to the damaged cerebral hemisphere. Replication of our results and the use of GIN and DPT tests for the early diagnosis of auditory processing deficits and for monitoring the effects of aural rehabilitation interventions are recommended. Copyright © 2016 National Stroke Association. Published by Elsevier Inc. All rights reserved.
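The gap-in-noise (GIN) paradigm used above presents noise bursts that may contain brief silent gaps; the shortest gap a listener reliably detects indexes auditory temporal resolution. A minimal sketch of how such a stimulus could be synthesized (the sampling rate, durations, and function name are illustrative assumptions, not the clinical test's specification):

```python
import numpy as np

def gin_stimulus(fs=22050, dur=0.5, gap_ms=6.0, gap_at=0.25, seed=0):
    """Broadband noise of `dur` seconds with a silent gap of `gap_ms`
    milliseconds starting at `gap_at` seconds into the burst."""
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-1.0, 1.0, int(fs * dur))
    start = int(fs * gap_at)
    noise[start:start + int(fs * gap_ms / 1000.0)] = 0.0  # insert the gap
    return noise

stim = gin_stimulus()
# The gap appears as a run of exact zeros inside the noise burst.
gap = stim[int(22050 * 0.25):int(22050 * 0.25) + int(22050 * 0.006)]
```

A full test run would present many such bursts with gap durations varied (and some with no gap at all), and estimate the detection threshold from the listener's responses.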
Top-down modulation of visual and auditory cortical processing in aging.
Guerreiro, Maria J S; Eck, Judith; Moerel, Michelle; Evers, Elisabeth A T; Van Gerven, Pascal W M
2015-02-01
Age-related cognitive decline has been accounted for by an age-related deficit in top-down attentional modulation of sensory cortical processing. In light of recent behavioral findings showing that age-related differences in selective attention are modality dependent, our goal was to investigate the role of sensory modality in age-related differences in top-down modulation of sensory cortical processing. This question was addressed by testing younger and older individuals in several memory tasks while undergoing fMRI. Throughout these tasks, perceptual features were kept constant while attentional instructions were varied, allowing us to devise all combinations of relevant and irrelevant, visual and auditory information. We found no top-down modulation of auditory sensory cortical processing in either age group. In contrast, we found top-down modulation of visual cortical processing in both age groups, and this effect did not differ between age groups. That is, older adults enhanced cortical processing of relevant visual information and suppressed cortical processing of visual distractors during auditory attention to the same extent as younger adults. The present results indicate that older adults are capable of suppressing irrelevant visual information in the context of cross-modal auditory attention, and thereby challenge the view that age-related attentional and cognitive decline is due to a general deficit in the ability to suppress irrelevant information. Copyright © 2014 Elsevier B.V. All rights reserved.
Bidelman, Gavin M
2016-10-01
Musical training is associated with behavioral and neurophysiological enhancements in auditory processing for both musical and nonmusical sounds (e.g., speech). Yet, whether the benefits of musicianship extend beyond enhancements to auditory-specific skills and impact multisensory (e.g., audiovisual) processing has yet to be fully validated. Here, we investigated multisensory integration of auditory and visual information in musicians and nonmusicians using a double-flash illusion, whereby the presentation of multiple auditory stimuli (beeps) concurrent with a single visual object (flash) induces an illusory perception of multiple flashes. We parametrically varied the onset asynchrony between auditory and visual events (leads and lags of ±300 ms) to quantify participants' "temporal window" of integration, i.e., the range over which auditory and visual cues were fused into a single percept. Results show that musically trained individuals were both faster and more accurate at processing concurrent audiovisual cues than their nonmusician peers; nonmusicians had a higher susceptibility for responding to audiovisual illusions and perceived double flashes over an extended range of onset asynchronies compared to trained musicians. Moreover, temporal window estimates indicated that musicians' windows (<100 ms) were roughly a third to a half the width of nonmusicians' (~200 ms), suggesting more refined multisensory integration and audiovisual binding. Collectively, findings indicate a more refined binding of auditory and visual cues in musically trained individuals. We conclude that experience-dependent plasticity from intensive musical experience extends beyond simple listening skills, improving multimodal processing and the integration of multiple sensory systems in a domain-general manner.
Cortical contributions to the auditory frequency-following response revealed by MEG
Coffey, Emily B. J.; Herholz, Sibylle C.; Chepesiuk, Alexander M. P.; Baillet, Sylvain; Zatorre, Robert J.
2016-01-01
The auditory frequency-following response (FFR) to complex periodic sounds is used to study the subcortical auditory system, and has been proposed as a biomarker for disorders that feature abnormal sound processing. Despite its value in fundamental and clinical research, the neural origins of the FFR are unclear. Using magnetoencephalography, we observe a strong, right-asymmetric contribution to the FFR from the human auditory cortex at the fundamental frequency of the stimulus, in addition to signal from cochlear nucleus, inferior colliculus and medial geniculate. This finding is highly relevant for our understanding of plasticity and pathology in the auditory system, as well as higher-level cognition such as speech and music processing. It suggests that previous interpretations of the FFR may need re-examination using methods that allow for source separation. PMID:27009409
Beitel, Ralph E.; Schreiner, Christoph E.; Leake, Patricia A.
2016-01-01
In profoundly deaf cats, behavioral training with intracochlear electric stimulation (ICES) can improve temporal processing in the primary auditory cortex (AI). To investigate whether similar effects are manifest in the auditory midbrain, ICES was initiated in neonatally deafened cats either during development after short durations of deafness (8 wk of age) or in adulthood after long durations of deafness (≥3.5 yr). All of these animals received behaviorally meaningless, “passive” ICES. Some animals also received behavioral training with ICES. Two long-deaf cats received no ICES prior to acute electrophysiological recording. After several months of passive ICES and behavioral training, animals were anesthetized, and neuronal responses to pulse trains of increasing rates were recorded in the central (ICC) and external (ICX) nuclei of the inferior colliculus. Neuronal temporal response patterns (repetition rate coding, minimum latencies, response precision) were compared with results from recordings made in the AI of the same animals (Beitel RE, Vollmer M, Raggio MW, Schreiner CE. J Neurophysiol 106: 944–959, 2011; Vollmer M, Beitel RE. J Neurophysiol 106: 2423–2436, 2011). Passive ICES in long-deaf cats remediated severely degraded temporal processing in the ICC and had no effects in the ICX. In contrast to observations in the AI, behaviorally relevant ICES had no effects on temporal processing in the ICC or ICX, with the single exception of shorter latencies in the ICC in short-deaf cats. The results suggest that independent of deafness duration passive stimulation and behavioral training differentially transform temporal processing in auditory midbrain and cortex, and primary auditory cortex emerges as a pivotal site for behaviorally driven neuronal temporal plasticity in the deaf cat. NEW & NOTEWORTHY Behaviorally relevant vs. 
passive electric stimulation of the auditory nerve differentially affects neuronal temporal processing in the central nucleus of the inferior colliculus (ICC) and the primary auditory cortex (AI) in profoundly short-deaf and long-deaf cats. Temporal plasticity in the ICC depends on a critical amount of electric stimulation, independent of its behavioral relevance. In contrast, the AI emerges as a pivotal site for behaviorally driven neuronal temporal plasticity in the deaf auditory system. PMID:27733594
Caplan, David; Michaud, Jennifer; Hufford, Rebecca
2013-01-01
Sixty-one persons with aphasia (PWA) were tested on syntactic comprehension in three tasks: sentence-picture matching, sentence-picture matching with auditory moving window presentation, and object manipulation. There were significant correlations of performances on sentences across tasks. First factors in unrotated factor analyses accounted for most of the variance, on which all sentence types loaded, in each task. Dissociations in performance between sentence types that differed minimally in their syntactic structures were not consistent across tasks. These results replicate previous results with smaller samples and provide important validation of basic aspects of aphasic performance in this area of language processing. They point to the role of a reduction in processing resources and of the interaction of task demands with parsing and interpretive abilities in the genesis of patient performance. PMID:24061104
Representation Elements of Spatial Thinking
NASA Astrophysics Data System (ADS)
Fiantika, F. R.
2017-04-01
This paper aims to add a reference for revealing spatial thinking. There are several definitions of spatial thinking, but it is not easy to define. We can start by discussing the concept from its basis, the forming of representations. Initially, the five senses capture natural phenomena and forward them to memory for processing. Abstraction plays a role in processing information into a concept. There are two types of representation, namely internal representation and external representation. Internal representation is also known as mental representation; this representation resides in the human mind. External representation may include images as well as auditory and kinesthetic forms, which can be used to describe, explain and communicate the structure, operation and function of an object, as well as its relationships. There are two main elements: representation properties and object relationships. These elements play a role in forming a representation.
Corticofugal modulation of peripheral auditory responses
Terreros, Gonzalo; Delano, Paul H.
2015-01-01
The auditory efferent system originates in the auditory cortex and projects to the medial geniculate body (MGB), inferior colliculus (IC), cochlear nucleus (CN) and superior olivary complex (SOC) reaching the cochlea through olivocochlear (OC) fibers. This unique neuronal network is organized in several afferent-efferent feedback loops including: the (i) colliculo-thalamic-cortico-collicular; (ii) cortico-(collicular)-OC; and (iii) cortico-(collicular)-CN pathways. Recent experiments demonstrate that blocking ongoing auditory-cortex activity with pharmacological and physical methods modulates the amplitude of cochlear potentials. In addition, auditory-cortex microstimulation independently modulates cochlear sensitivity and the strength of the OC reflex. In this mini-review, anatomical and physiological evidence supporting the presence of a functional efferent network from the auditory cortex to the cochlear receptor is presented. Special emphasis is given to the corticofugal effects on initial auditory processing, that is, on CN, auditory nerve and cochlear responses. A working model of three parallel pathways from the auditory cortex to the cochlea and auditory nerve is proposed. PMID:26483647
Temporal lobe stimulation reveals anatomic distinction between auditory naming processes.
Hamberger, M J; Seidel, W T; Goodman, R R; Perrine, K; McKhann, G M
2003-05-13
Language errors induced by cortical stimulation can provide insight into function(s) supported by the area stimulated. The authors observed that some stimulation-induced errors during auditory description naming were characterized by tip-of-the-tongue responses or paraphasic errors, suggesting expressive difficulty, whereas others were qualitatively different, suggesting receptive difficulty. They hypothesized that these two response types reflected disruption at different stages of auditory verbal processing and that these "subprocesses" might be supported by anatomically distinct cortical areas. To explore the topographic distribution of error types in auditory verbal processing. Twenty-one patients requiring left temporal lobe surgery underwent preresection language mapping using direct cortical stimulation. Auditory naming was tested at temporal sites extending from 1 cm from the anterior tip to the parietal operculum. Errors were dichotomized as either "expressive" or "receptive." The topographic distribution of error types was explored. Sites associated with the two error types were topographically distinct from one another. Most receptive sites were located in the middle portion of the superior temporal gyrus (STG), whereas most expressive sites fell outside this region, scattered along lateral temporal and temporoparietal cortex. Results raise clinical questions regarding the inclusion of the STG in temporal lobe epilepsy surgery and suggest that more detailed cortical mapping might enable better prediction of postoperative language decline. From a theoretical perspective, results carry implications regarding the understanding of structure-function relations underlying temporal lobe mediation of auditory language processing.
A sound advantage: Increased auditory capacity in autism.
Remington, Anna; Fairnie, Jake
2017-09-01
Autism Spectrum Disorder (ASD) has an intriguing auditory processing profile. Individuals show enhanced pitch discrimination, yet often find seemingly innocuous sounds distressing. This study used two behavioural experiments to examine whether an increased capacity for processing sounds in ASD could underlie both the difficulties and enhanced abilities found in the auditory domain. Autistic and non-autistic young adults performed a set of auditory detection and identification tasks designed to tax processing capacity and establish the extent of perceptual capacity in each population. Tasks were constructed to highlight both the benefits and disadvantages of increased capacity. Autistic people were better at detecting additional unexpected and expected sounds (increased distraction and superior performance respectively). This suggests that they have increased auditory perceptual capacity relative to non-autistic people. This increased capacity may offer an explanation for the auditory superiorities seen in autism (e.g. heightened pitch detection). Somewhat counter-intuitively, this same 'skill' could result in the sensory overload that is often reported - which subsequently can interfere with social communication. Reframing autistic perceptual processing in terms of increased capacity, rather than a filtering deficit or inability to maintain focus, increases our understanding of this complex condition, and has important practical implications that could be used to develop intervention programs to minimise the distress that is often seen in response to sensory stimuli. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Ackermann; Mathiak
1999-11-01
Pure word deafness (auditory verbal agnosia) is characterized by impaired auditory comprehension, repetition of verbal material, and writing to dictation, whereas spontaneous speech production and reading remain largely unaffected. Sometimes this syndrome is preceded by complete deafness (cortical deafness) of varying duration. Perception of vowels and of suprasegmental features of verbal utterances (e.g., intonation contours) seems to be less disrupted than the processing of consonants and, therefore, might mediate residual auditory functions. Lip reading and/or a slowed speaking rate often allow patients to compensate, within limits, for their speech comprehension deficits. Apart from a few exceptions, the available reports of pure word deafness document a bilateral temporal lesion. In these instances, as a rule, identification of nonverbal (environmental) sounds, perception of music, temporal resolution of sequential auditory cues, and/or spatial localization of acoustic events were compromised as well. The variable constellation of auditory signs and symptoms observed in central hearing disorders following bilateral temporal lesions most probably reflects the multitude of functional maps at the level of the auditory cortices, each subserving, as documented in a variety of non-human species, the encoding of specific stimulus parameters. Thus, verbal/nonverbal auditory agnosia may be considered a paradigm of distorted "auditory scene analysis" (Bregman 1990) affecting both primitive and schema-based perceptual processes. It cannot be excluded, however, that disconnection of the Wernicke area from auditory input (Geschwind 1965) and/or impairment of a suggested "phonetic module" (Liberman 1996) also contribute to the observed deficits. Conceivably, these latter mechanisms underlie the rare cases of pure word deafness following a lesion restricted to the dominant hemisphere.
Only a few instances of a relatively isolated disruption of the discrimination/identification of nonverbal sound sources, with uncompromised speech comprehension, have been reported so far (nonverbal auditory agnosia). As a rule, unilateral right-sided damage has been found to be the relevant lesion.
Instantaneous and Frequency-Warped Signal Processing Techniques for Auditory Source Separation.
NASA Astrophysics Data System (ADS)
Wang, Avery Li-Chun
This thesis summarizes several contributions to the areas of signal processing and auditory source separation. The philosophy of Frequency-Warped Signal Processing is introduced as a means for separating the AM and FM contributions to the bandwidth of a complex-valued, frequency-varying sinusoid p(n), transforming it into a signal with slowly varying parameters. This transformation facilitates the removal of p(n) from an additive mixture while minimizing the amount of damage done to other signal components. The average winding rate of a complex-valued phasor is explored as an estimate of the instantaneous frequency. Theorems are provided showing the robustness of this measure. To implement frequency tracking, a Frequency-Locked Loop algorithm is introduced which uses the complex winding error to update its frequency estimate. The input signal is dynamically demodulated and filtered to extract the envelope. This envelope may then be remodulated to reconstruct the target partial, which may be subtracted from the original signal mixture, yielding a new, quickly adapting form of notch filtering. Enhancements to the basic tracker are made which, under certain conditions, attain the Cramér-Rao bound for the instantaneous frequency estimate. To improve tracking, the novel idea of Harmonic-Locked Loop tracking, using N harmonically constrained trackers, is introduced for tracking harmonic signals such as voices and certain musical instruments. The estimated fundamental frequency is computed from a maximum-likelihood weighting of the N tracking estimates, making it highly robust. The result is that harmonic signals, such as voices, can be isolated from complex mixtures in the presence of other spectrally overlapping signals. Additionally, since phase information is preserved, the resynthesized harmonic signals may be removed from the original mixtures with relatively little damage to the residual signal.
Finally, a new methodology is given for designing linear-phase FIR filters which require a small fraction of the computational power of conventional FIR implementations. This design strategy is based on truncated and stabilized IIR filters. These signal-processing methods have been applied to the problem of auditory source separation, resulting in voice separation from complex music that is significantly better than previous results at far lower computational cost.
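The winding-rate estimate and the demodulate-filter-remodulate notch described in this record can be sketched numerically. The following is a minimal illustration, not the thesis's actual Frequency-Locked Loop: it assumes the tracker has already locked (the true phase track stands in for the FLL's estimate), and a crude moving-average lowpass stands in for the thesis's filter designs. All signal parameters are made up for the demonstration.

```python
import numpy as np

fs = 8000.0                      # sample rate (Hz), illustrative
n = np.arange(8192)
t = n / fs

# Target partial: complex phasor with a slowly varying frequency (vibrato).
f_inst = 440.0 + 15.0 * np.sin(2 * np.pi * 2.0 * t)
phase = 2 * np.pi * np.cumsum(f_inst) / fs
target = np.exp(1j * phase)

# Spectrally distant interferer; the mixture is their sum.
interferer = 0.5 * np.exp(1j * 2 * np.pi * 1300.0 * t)
mix = target + interferer

# 1) Average winding rate as an instantaneous-frequency estimate:
#    the phase increment of the phasor between successive samples.
#    On the clean phasor this recovers the frequency track almost exactly.
f_est = np.angle(target[1:] * np.conj(target[:-1])) * fs / (2 * np.pi)

# 2) Demodulate the mixture by the tracked phase, lowpass to isolate the
#    slowly varying envelope, then remodulate and subtract (notch filter).
demod = mix * np.exp(-1j * phase)            # target now sits near DC
kernel = np.ones(256) / 256.0                # crude moving-average lowpass
env = np.convolve(demod, kernel, mode="same")
recon = env * np.exp(1j * phase)             # remodulated target estimate
residual = mix - recon                       # mixture with target notched out
```

Away from the block edges the residual is dominated by the interferer: the target partial is suppressed by roughly the lowpass filter's rejection at the (time-varying) frequency separation between the two components, which is the sense in which the notch "adapts" with the tracked frequency.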
Processing of Communication Sounds: Contributions of Learning, Memory, and Experience
Bigelow, James; Rossi, Breein
2013-01-01
Abundant evidence from both field and lab studies has established that conspecific vocalizations (CVs) are of critical ecological significance for a wide variety of species, including humans, nonhuman primates, rodents, and other mammals and birds. Correspondingly, a number of experiments have demonstrated behavioral processing advantages for CVs, such as in discrimination and memory tasks. Further, a wide range of experiments have described brain regions in many species that appear to be specialized for processing CVs. For example, several neural regions have been described in both mammals and birds wherein greater neural responses are elicited by CVs than by comparison stimuli such as heterospecific vocalizations, nonvocal complex sounds, and artificial stimuli. These observations raise the question of whether these regions reflect domain-specific neural mechanisms dedicated to processing CVs, or alternatively, if these regions reflect domain-general neural mechanisms for representing complex sounds of learned significance. Inasmuch as CVs can be viewed as complex combinations of basic spectrotemporal features, the plausibility of the latter position is supported by a large body of literature describing modulated cortical and subcortical representation of a variety of acoustic features that have been experimentally associated with stimuli of natural behavioral significance (such as food rewards). Herein, we review a relatively small body of existing literature describing the roles of experience, learning, and memory in the emergence of species-typical neural representations of CVs and auditory system plasticity. In both songbirds and mammals, manipulations of auditory experience as well as specific learning paradigms are shown to modulate neural responses evoked by CVs, either in terms of overall firing rate or temporal firing patterns. 
In some cases, CV-sensitive neural regions gradually acquire representation of non-CV stimuli with which subjects have training and experience. These results parallel literature in humans describing modulation of responses in face-sensitive neural regions through learning and experience. Thus, although many questions remain, the available evidence is consistent with the notion that CVs may acquire distinct neural representation through domain-general mechanisms for representing complex auditory objects that are of learned importance to the animal. PMID:23792078
Andrade, Adriana Neves de; Silva, Mariane Richetto da; Iorio, Maria Cecilia Martinelli; Gil, Daniela
2015-01-01
To compare performance on the Brazilian Portuguese version of the Dichotic Sentence Identification (DSI) test between the right and left ears and across educational levels in normal-hearing individuals. This investigation assessed 200 normal-hearing, right-handed individuals, divided into seven groups according to years of schooling. All participants underwent basic audiologic evaluation and behavioral auditory processing assessment (sound localization test, memory test for verbal and nonverbal sounds in sequence, dichotic digits test, and DSI). The evaluated individuals had an average of 13.1 years of schooling and results within normal limits on the tests selected for the audiologic and auditory processing assessments. On the DSI test, the percentage of correct answers depended on educational status at each stage of the test and in each ear evaluated. There was a statistically significant positive correlation between educational status and the percentage of correct answers for all stages of the DSI test in both ears. There was also an effect of educational level on the results obtained in each condition of the DSI test, with the exception of directed attention to the right ear. Comparing performance across the variables studied in the DSI test, we conclude that there is a right-ear advantage and that the better the educational level, the better the individuals' performance.
Enhanced auditory temporal gap detection in listeners with musical training.
Mishra, Srikanta K; Panda, Manas R; Herbert, Carolyn
2014-08-01
Many features of auditory perception are positively altered in musicians. Traditionally, auditory mechanisms in musicians have been investigated using the Western-classical musician model. The objective of the present study was to adopt an alternative model, Indian-classical music, to further investigate auditory temporal processing in musicians. This study shows that musicians have significantly lower across-channel gap detection thresholds than nonmusicians. Use of the South Indian musician model provides increased external validity for the prediction, from studies of Western-classical musicians, that auditory temporal coding is enhanced in musicians.
Changes in the Adult Vertebrate Auditory Sensory Epithelium After Trauma
Oesterle, Elizabeth C.
2012-01-01
Auditory hair cells transduce sound vibrations into membrane potential changes, ultimately leading to changes in neuronal firing and sound perception. This review provides an overview of the characteristics and repair capabilities of traumatized auditory sensory epithelium in the adult vertebrate ear. Injured mammalian auditory epithelium repairs itself by forming permanent scars but is unable to regenerate replacement hair cells. In contrast, injured non-mammalian vertebrate ear generates replacement hair cells to restore hearing functions. Non-sensory support cells within the auditory epithelium play key roles in the repair processes. PMID:23178236
A selective impairment of perception of sound motion direction in peripheral space: A case study.
Thaler, Lore; Paciocco, Joseph; Daley, Mark; Lesniak, Gabriella D; Purcell, David W; Fraser, J Alexander; Dutton, Gordon N; Rossit, Stephanie; Goodale, Melvyn A; Culham, Jody C
2016-01-08
It is still an open question whether the auditory system, like the visual system, processes auditory motion independently of other aspects of spatial hearing, such as static location. Here, we report psychophysical data from a patient (female, 42 and 44 years old at the time of two testing sessions), who suffered a bilateral occipital infarction over 12 years earlier, and who has extensive damage in the occipital lobe bilaterally, extending into inferior posterior temporal cortex bilaterally and into right parietal cortex. We measured the patient's spatial hearing ability to discriminate static location, detect motion and perceive motion direction in both central (straight ahead), and right and left peripheral auditory space (50° to the left and right of straight ahead). Compared to control subjects, the patient was impaired in her perception of the direction of auditory motion in peripheral auditory space, and the deficit was more pronounced on the right side. However, there was no impairment in her perception of the direction of auditory motion in central space. Furthermore, detection of motion and discrimination of static location were normal in both central and peripheral space. The patient also performed normally in a wide battery of non-spatial audiological tests. Our data are consistent with previous neuropsychological and neuroimaging results that link posterior temporal cortex and parietal cortex with the processing of auditory motion. Most importantly, however, our data break new ground by suggesting a division of auditory motion processing in terms of speed and direction and in terms of central and peripheral space. Copyright © 2015 Elsevier Ltd. All rights reserved.
The Role of Age and Executive Function in Auditory Category Learning
Reetzke, Rachel; Maddox, W. Todd; Chandrasekaran, Bharath
2015-01-01
Auditory categorization is a natural and adaptive process that allows for the organization of high-dimensional, continuous acoustic information into discrete representations. Studies in the visual domain have identified a rule-based learning system that learns and reasons via a hypothesis-testing process that requires working memory and executive attention. The rule-based learning system in vision shows a protracted development, reflecting the influence of maturing prefrontal function on visual categorization. The aim of the current study is two-fold: (a) to examine the developmental trajectory of rule-based auditory category learning from childhood through adolescence, into early adulthood; and (b) to examine the extent to which individual differences in rule-based category learning relate to individual differences in executive function. Sixty participants with normal hearing, 20 children (age range, 7–12), 21 adolescents (age range, 13–19), and 19 young adults (age range, 20–23), learned to categorize novel dynamic ripple sounds using trial-by-trial feedback. The spectrotemporally modulated ripple sounds are considered the auditory equivalent of the well-studied Gabor patches in the visual domain. Results revealed that auditory categorization accuracy improved with age, with young adults outperforming children and adolescents. Computational modeling analyses indicated that the use of the task-optimal strategy (i.e. a conjunctive rule-based learning strategy) improved with age. Notably, individual differences in executive flexibility significantly predicted auditory category learning success. The current findings demonstrate a protracted development of rule-based auditory categorization. The results further suggest that executive flexibility coupled with perceptual processes play important roles in successful rule-based auditory category learning. PMID:26491987
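The "conjunctive rule-based strategy" named in the abstract above can be illustrated with a toy decision rule. The sketch below is not the study's fitted model (the authors used computational decision-bound modeling of participants' responses); it only shows what a conjunctive rule over two stimulus dimensions looks like, using hypothetical spectral- and temporal-modulation criteria for ripple sounds.

```python
# A toy conjunctive rule over two stimulus dimensions. Category "A" is
# assigned only when BOTH dimensions fall below their criteria; the
# criterion values are hypothetical, chosen purely for illustration.
SPECTRAL_CRIT = 1.0   # cycles/octave
TEMPORAL_CRIT = 16.0  # Hz

def conjunctive_rule(spectral_mod, temporal_mod):
    """Classify a ripple sound by a conjunction of two unidimensional rules."""
    if spectral_mod < SPECTRAL_CRIT and temporal_mod < TEMPORAL_CRIT:
        return "A"
    return "B"

def unidimensional_rule(spectral_mod, temporal_mod):
    """A simpler (suboptimal) rule that consults only one dimension."""
    return "A" if spectral_mod < SPECTRAL_CRIT else "B"

# A stimulus that is low on one dimension but high on the other separates
# the two strategies: only the conjunctive rule inspects both dimensions.
print(conjunctive_rule(0.5, 24.0))     # "B": fails the temporal criterion
print(unidimensional_rule(0.5, 24.0))  # "A": ignores the temporal dimension
```

Stimuli that dissociate the two rules in this way are what allow model-based analyses to infer which strategy a listener is using from their pattern of categorization responses.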
Restoring auditory cortex plasticity in adult mice by restricting thalamic adenosine signaling
Blundon, Jay A.; Roy, Noah C.; Teubner, Brett J. W.; ...
2017-06-30
Circuits in the auditory cortex are highly susceptible to acoustic influences during an early postnatal critical period. The auditory cortex selectively expands neural representations of enriched acoustic stimuli, a process important for human language acquisition. Adults lack this plasticity. We show in the murine auditory cortex that juvenile plasticity can be reestablished in adulthood if acoustic stimuli are paired with disruption of ecto-5'-nucleotidase–dependent adenosine production or A1–adenosine receptor signaling in the auditory thalamus. This plasticity occurs at the level of cortical maps and individual neurons in the auditory cortex of awake adult mice and is associated with long-term improvement of tone-discrimination abilities. We determined that, in adult mice, disrupting adenosine signaling in the thalamus rejuvenates plasticity in the auditory cortex and improves auditory perception.
Auditory-nerve single-neuron thresholds to electrical stimulation from scala tympani electrodes.
Parkins, C W; Colombo, J
1987-12-31
Single auditory-nerve neuron thresholds were studied in sensory-deafened squirrel monkeys to determine the effects of electrical stimulus shape and frequency on single-neuron thresholds. Frequency was separated into its components, pulse width and pulse rate, which were analyzed separately. Square and sinusoidal pulse shapes were compared. There were no, or only questionably significant, threshold differences in charge per phase between sinusoidal and square pulses of the same pulse width. There was a small (less than 0.5 dB) but significant threshold advantage for 200 microseconds/phase pulses delivered at low pulse rates (156 pps) compared to higher pulse rates (625 pps and 2500 pps). Pulse width was demonstrated to be the prime determinant of single-neuron threshold, resulting in strength-duration curves similar to those of other mammalian myelinated neurons, but with longer chronaxies. The most efficient electrical stimulus pulse width for cochlear implant stimulation was determined to be 100 microseconds/phase, the pulse width that delivers the lowest charge per phase at threshold. The single-neuron strength-duration curves were compared to strength-duration curves of a computer model based on the specific anatomy of auditory-nerve neurons. The membrane capacitance and resulting chronaxie of the model can be varied by altering the length of the unmyelinated termination of the neuron, representing the unmyelinated portion of the neuron between the habenula perforata and the hair cell. This unmyelinated segment of the auditory-nerve neuron may be subject to aminoglycoside damage. Simulating a 10 micron unmyelinated termination for this model neuron produces a strength-duration curve that closely fits the single-neuron data obtained from aminoglycoside-deafened animals. Both the model and the single-neuron strength-duration curves differ significantly from behavioral threshold data obtained from monkeys and humans with cochlear implants.
This discrepancy can best be explained by the involvement of higher level neurologic processes in the behavioral responses. These findings suggest that the basic principles of neural membrane function must be considered in developing or analyzing electrical stimulation strategies for cochlear prostheses if the appropriate stimulation of frequency specific populations of auditory-nerve neurons is the objective.
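A strength-duration relationship of the kind discussed above is commonly summarized by Weiss's law: the threshold current for a pulse of width t is I(t) = I_rheo * (1 + chronaxie / t), so the threshold charge per phase Q(t) = I(t) * t = I_rheo * (t + chronaxie) grows with pulse width. The sketch below is a generic illustration of that law, not the study's anatomical computer model, and the rheobase and chronaxie values are hypothetical (not taken from the study); it shows why shorter pulses deliver less charge at threshold.

```python
# Weiss strength-duration law: threshold current rises steeply for short
# pulses, but threshold CHARGE per phase (current x width) falls.
# Parameter values are hypothetical, for illustration only.
RHEOBASE_UA = 50.0      # rheobase current, microamps (hypothetical)
CHRONAXIE_US = 350.0    # chronaxie, microseconds (hypothetical)

def threshold_current_ua(width_us):
    """Weiss law: I(t) = I_rheo * (1 + chronaxie / t)."""
    return RHEOBASE_UA * (1.0 + CHRONAXIE_US / width_us)

def threshold_charge_nc(width_us):
    """Charge per phase at threshold: Q(t) = I(t) * t, in nanocoulombs."""
    return threshold_current_ua(width_us) * width_us * 1e-3

for w in [50.0, 100.0, 200.0, 400.0, 800.0]:
    print(f"{w:6.0f} us  I = {threshold_current_ua(w):7.1f} uA  "
          f"Q = {threshold_charge_nc(w):6.2f} nC")
```

Two properties of the law are worth noting: at t = chronaxie the threshold current is exactly twice the rheobase (this is the definition of chronaxie), and Q(t) increases monotonically with width, consistent with the abstract's conclusion that the shortest practical pulse width minimizes charge per phase at threshold.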
Echolocation: A Study of Auditory Functioning in Blind and Sighted Subjects.
ERIC Educational Resources Information Center
Arias, C.; And Others
1993-01-01
This study evaluated the peripheral and central auditory functioning (and thus the potential to perceive obstacles through reflected sound) of eight totally blind persons and eight sighted persons. The blind subjects were able to process auditory information faster than the control group. (Author/DB)
Comparing Monotic and Diotic Selective Auditory Attention Abilities in Children
ERIC Educational Resources Information Center
Cherry, Rochelle; Rubinstein, Adrienne
2006-01-01
Purpose: Some researchers have assessed ear-specific performance of auditory processing ability using speech recognition tasks with normative data based on diotic administration. The present study investigated whether monotic and diotic administrations yield similar results using the Selective Auditory Attention Test. Method: Seventy-two typically…
Sound envelope processing in the developing human brain: A MEG study.
Tang, Huizhen; Brock, Jon; Johnson, Blake W
2016-02-01
This study investigated auditory cortical processing of linguistically-relevant temporal modulations in the developing brains of young children. Auditory envelope following responses to white noise amplitude modulated at rates of 1-80 Hz in healthy children (aged 3-5 years) and adults were recorded using a paediatric magnetoencephalography (MEG) system and a conventional MEG system, respectively. For children, there were envelope following responses to slow modulations but no significant responses to rates higher than about 25 Hz, whereas adults showed significant envelope following responses to almost the entire range of stimulus rates. Our results show that the auditory cortex of preschool-aged children has a sharply limited capacity to process rapid amplitude modulations in sounds, as compared to the auditory cortex of adults. These neurophysiological results are consistent with previous psychophysical evidence for a protracted maturational time course for auditory temporal processing. The findings are also in good agreement with current linguistic theories that posit a perceptual bias for low frequency temporal information in speech during language acquisition. These insights also have clinical relevance for our understanding of language disorders that are associated with difficulties in processing temporal information in speech. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Concentration: The Neural Underpinnings of How Cognitive Load Shields Against Distraction.
Sörqvist, Patrik; Dahlström, Örjan; Karlsson, Thomas; Rönnberg, Jerker
2016-01-01
Whether cognitive load, and other aspects of task difficulty, increases or decreases distractibility is the subject of much debate in contemporary psychology. One camp argues that cognitive load usurps executive resources, which otherwise could be used for attentional control, and that cognitive load therefore increases distraction. The other camp argues that cognitive load demands high levels of concentration (focal-task engagement), which suppresses peripheral processing and therefore decreases distraction. In this article, we employed a functional magnetic resonance imaging (fMRI) protocol to explore whether higher cognitive load in a visually presented task suppresses task-irrelevant auditory processing in cortical and subcortical areas. The results show that selectively attending to an auditory stimulus facilitates its neural processing in the auditory cortex, and switching the locus of attention to the visual modality decreases the neural response in the auditory cortex. When the cognitive load of the task presented in the visual modality increases, the neural response to the auditory stimulus is further suppressed, along with increased activity in networks related to effortful attention. Taken together, the results suggest that higher cognitive load decreases peripheral processing of task-irrelevant information, and hence distractibility, as a side effect of the increased activity in a focused-attention network.
Auditory processing efficiency deficits in children with developmental language impairments
NASA Astrophysics Data System (ADS)
Hartley, Douglas E. H.; Moore, David R.
2002-12-01
The "temporal processing hypothesis" suggests that individuals with specific language impairments (SLIs) and dyslexia have severe deficits in processing rapidly presented or brief sensory information, within both the auditory and visual domains. This hypothesis has been supported by evidence that language-impaired individuals show excess auditory backward masking. This paper presents an analysis of masking results from several studies in terms of a model of temporal resolution. The modeling suggests that the masking results are better explained by an "auditory efficiency" hypothesis. If impaired or immature listeners have a normal temporal window but require a higher signal-to-noise level (poor processing efficiency), this hypothesis predicts the observed small deficits in simultaneous masking tasks and the much larger deficits in backward and forward masking tasks among those listeners. The difference in performance on these masking tasks is predictable from the compressive nonlinearity of the basilar membrane. The model also correctly predicts that backward masking (i) is more prone to training effects, (ii) has greater inter- and intrasubject variability, and (iii) increases less with masker level than do other masking tasks. These findings provide a new perspective on the mechanisms underlying communication disorders and auditory masking.
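The temporal-window model referred to above can be sketched as follows: a fixed, asymmetric smoothing window is centered at the signal's temporal position, and the masker intensity passing through the window, scaled by an efficiency constant k (the required signal-to-masker ratio at the window's output), sets the predicted threshold. This is a simplified sketch with hypothetical window time constants, and it omits the basilar-membrane compression stage of the full argument; the point is only that raising k elevates thresholds without any change in the window's shape.

```python
import numpy as np

DT_MS = 0.1  # time resolution of the simulation (ms)

def temporal_window(t_ms, tau_before=25.0, tau_after=8.0):
    """Asymmetric two-sided exponential window centered on the signal.
    Time constants (ms) are hypothetical, chosen for illustration."""
    return np.where(t_ms < 0,
                    np.exp(t_ms / tau_before),   # skirt before the signal
                    np.exp(-t_ms / tau_after))   # skirt after the signal

def masked_threshold_db(signal_time_ms, k):
    """Predicted threshold: efficiency k times the fraction of a
    unit-intensity masker burst (0-200 ms) passing through a window
    centered at the signal time."""
    t = np.arange(-400.0, 400.0, DT_MS)
    w = temporal_window(t - signal_time_ms)
    masker = ((t >= 0.0) & (t <= 200.0)).astype(float)
    output = np.sum(w * masker) / np.sum(w)   # normalized window output
    return 10.0 * np.log10(k * output)

# Signal placed before the masker (backward masking), inside it
# (simultaneous masking), and after it (forward masking):
for name, t_sig in [("backward", -20.0),
                    ("simultaneous", 100.0),
                    ("forward", 220.0)]:
    print(name, round(masked_threshold_db(t_sig, k=1.0), 1))
```

In this formulation a poorer efficiency (larger k) shifts every condition's threshold by the same number of decibels while the window, i.e. temporal resolution proper, stays fixed; the paper's further claim is that basilar-membrane compression then translates such an efficiency deficit into larger measured deficits in the nonsimultaneous tasks.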
Rapid, generalized adaptation to asynchronous audiovisual speech
Van der Burg, Erik; Goodbourn, Patrick T.
2015-01-01
The brain is adaptive. The speed of propagation through air, and of low-level sensory processing, differs markedly between auditory and visual stimuli; yet the brain can adapt to compensate for the resulting cross-modal delays. Studies investigating temporal recalibration to audiovisual speech have used prolonged adaptation procedures, suggesting that adaptation is sluggish. Here, we show that adaptation to asynchronous audiovisual speech occurs rapidly. Participants viewed a brief clip of an actor pronouncing a single syllable. The voice was either advanced or delayed relative to the corresponding lip movements, and participants were asked to make a synchrony judgement. Although we did not use an explicit adaptation procedure, we demonstrate rapid recalibration based on a single audiovisual event. We find that the point of subjective simultaneity on each trial is highly contingent upon the modality order of the preceding trial. We find compelling evidence that rapid recalibration generalizes across different stimuli, and different actors. Finally, we demonstrate that rapid recalibration occurs even when auditory and visual events clearly belong to different actors. These results suggest that rapid temporal recalibration to audiovisual speech is primarily mediated by basic temporal factors, rather than higher-order factors such as perceived simultaneity and source identity. PMID:25716790
Impact of Audio-Visual Asynchrony on Lip-Reading Effects -Neuromagnetic and Psychophysical Study-
Yahata, Izumi; Kanno, Akitake; Sakamoto, Shuichi; Takanashi, Yoshitaka; Takata, Shiho; Nakasato, Nobukazu; Kawashima, Ryuta; Katori, Yukio
2016-01-01
The effects of asynchrony between audio and visual (A/V) stimuli on the N100m responses of magnetoencephalography in the left hemisphere were compared with those on the psychophysical responses in 11 participants. The latency and amplitude of N100m were significantly shortened and reduced in the left hemisphere by the presentation of visual speech as long as the temporal asynchrony between A/V stimuli was within 100 ms, but were not significantly affected with audio lags of -500 and +500 ms. However, some small effects were still preserved on average with audio lags of 500 ms, suggesting similar asymmetry of the temporal window to that observed in psychophysical measurements, which tended to be more robust (wider) for audio lags; i.e., the pattern of visual-speech effects as a function of A/V lag observed in the N100m in the left hemisphere grossly resembled that in psychophysical measurements on average, although the individual responses were somewhat varied. The present results suggest that the basic configuration of the temporal window of visual effects on auditory-speech perception could be observed from the early auditory processing stage. PMID:28030631
Wang, Qingcui; Bao, Ming; Chen, Lihan
2014-01-01
Previous studies using auditory sequences with rapidly repeating tones revealed that spatiotemporal cues and spectral cues are important for fusing or segregating sound streams. However, perceptual grouping in those studies was partially driven by cognitive processing of the periodicity cues of the long sequence. Here, we investigate whether perceptual groupings (spatiotemporal grouping vs. frequency grouping) are also applicable to short auditory sequences, where auditory perceptual organization is mainly subserved by lower levels of perceptual processing. To answer that question, we conducted two experiments using an auditory Ternus display. The display was composed of three speakers (A, B and C), with each speaker consecutively emitting one sound consisting of two frames (AB and BC). Experiment 1 manipulated both spatial and temporal factors. We implemented three 'within-frame intervals' (WFIs, or intervals between A and B, and between B and C), seven 'inter-frame intervals' (IFIs, or intervals between AB and BC) and two different speaker layouts (inter-distance of speakers: near or far). Experiment 2 manipulated the differentiation of frequencies between the two auditory frames, in addition to the spatiotemporal cues of Experiment 1. Listeners were required to make two-alternative forced choices (2AFC) to report the percept of a given Ternus display: element motion (auditory apparent motion from sound A to B to C) or group motion (auditory apparent motion from sound 'AB' to 'BC'). The results indicate that the perceptual grouping of short auditory sequences (operationalized by the perceptual decisions on the auditory Ternus display) was modulated by temporal and spectral cues, with the latter contributing more to segregating auditory events. Spatial layout played a lesser role in perceptual organization. These results can be accounted for by the 'peripheral channeling' theory.
Basic Timing Abilities Stay Intact in Patients with Musician's Dystonia
van der Steen, M. C.; van Vugt, Floris T.; Keller, Peter E.; Altenmüller, Eckart
2014-01-01
Task-specific focal dystonia is a movement disorder that is characterized by the loss of voluntary motor control in extensively trained movements. Musician's dystonia is a type of task-specific dystonia that is elicited in professional musicians during instrumental playing. The disorder has been associated with deficits in timing. In order to test the hypothesis that basic timing abilities are affected by musician's dystonia, we investigated a group of patients (N = 15) and a matched control group (N = 15) on a battery of sensory and sensorimotor synchronization tasks. Results did not show any deficits in auditory-motor processing for patients relative to controls. Both groups benefited from a pacing sequence that adapted to their timing (in a sensorimotor synchronization task at a stable tempo). In a purely perceptual task, both groups were able to detect a misaligned metronome when it was late rather than early relative to a musical beat. Overall, the results suggest that basic timing abilities stay intact in patients with musician's dystonia. This supports the idea that musician's dystonia is a highly task-specific movement disorder in which patients are mostly impaired in tasks closely related to the demands of actually playing their instrument. PMID:24667273
ERIC Educational Resources Information Center
Weber-Fox, Christine; Leonard, Laurence B.; Wray, Amanda Hampton; Tomblin, J. Bruce
2010-01-01
Brief tonal stimuli and spoken sentences were utilized to examine whether adolescents (aged 14;3-18;1) with specific language impairments (SLI) exhibit atypical neural activity for rapid auditory processing of non-linguistic stimuli and linguistic processing of verb-agreement and semantic constraints. Further, we examined whether the behavioral…
Auditory Processing Efficiency and Temporal Resolution in Children and Adults.
ERIC Educational Resources Information Center
Hill, Penelope R.; Hartley, Douglas E.H.; Glasberg, Brian R.; Moore, Brian C.J.; Moore, David R.
2004-01-01
Children have higher auditory backward masking (BM) thresholds than adults. One explanation for this is poor temporal resolution, resulting in difficulty separating brief or rapidly presented sounds. This implies that the auditory temporal window is broader in children than in adults. Alternatively, elevated BM thresholds in children may indicate…
Injury- and Use-Related Plasticity in the Adult Auditory System.
ERIC Educational Resources Information Center
Irvine, Dexter R. F.
2000-01-01
This article discusses findings concerning the plasticity of auditory cortical processing mechanisms in adults, including the effects of restricted cochlear damage or behavioral training with acoustic stimuli on the frequency selectivity of auditory cortical neurons and evidence for analogous injury- and use-related plasticity in the adult human…
The Effect of Auditory Information on Patterns of Intrusions and Reductions
ERIC Educational Resources Information Center
Slis, Anneke; van Lieshout, Pascal
2016-01-01
Purpose: The study investigates whether auditory information affects the nature of intrusion and reduction errors in reiterated speech. These errors are hypothesized to arise as a consequence of autonomous mechanisms to stabilize movement coordination. The specific question addressed is whether this process is affected by auditory information so…
Auditory Backward Masking Deficits in Children with Reading Disabilities
ERIC Educational Resources Information Center
Montgomery, Christine R.; Morris, Robin D.; Sevcik, Rose A.; Clarkson, Marsha G.
2005-01-01
Studies evaluating temporal auditory processing among individuals with reading and other language deficits have yielded inconsistent findings due to methodological problems (Studdert-Kennedy & Mody, 1995) and sample differences. In the current study, seven auditory masking thresholds were measured in fifty-two 7- to 10-year-old children (26…
Music and language: relations and disconnections.
Kraus, Nina; Slater, Jessica
2015-01-01
Music and language provide an important context in which to understand the human auditory system. While they perform distinct and complementary communicative functions, music and language are both rooted in the human desire to connect with others. Since sensory function is ultimately shaped by what is biologically important to the organism, the human urge to communicate has been a powerful driving force in both the evolution of auditory function and the ways in which it can be changed by experience within an individual lifetime. This chapter emphasizes the highly interactive nature of the auditory system as well as the depth of its integration with other sensory and cognitive systems. From the origins of music and language to the effects of auditory expertise on the neural encoding of sound, we consider key themes in auditory processing, learning, and plasticity. We emphasize the unique role of the auditory system as the temporal processing "expert" in the brain, and explore relationships between communication and cognition. We demonstrate how experience with music and language can have a significant impact on underlying neural function, and that auditory expertise strengthens some of the very same aspects of sound encoding that are deficient in impaired populations. © 2015 Elsevier B.V. All rights reserved.
Liu, Baolin; Wang, Zhongning; Jin, Zhixing
2009-09-11
In real life, the human brain usually receives information through visual and auditory channels and processes the multisensory information, but studies on the integrated processing of dynamic visual and auditory information are relatively few. In this paper, we designed an experiment in which common-scenario, real-world videos with matched and mismatched actions (images) and sounds were presented as stimuli, aiming to study the integrated processing of synchronized visual and auditory information from real-world events in the human brain using event-related potential (ERP) methods. Experimental results showed that videos with mismatched actions (images) and sounds elicited a larger P400 than videos with matched actions (images) and sounds. We believe that the P400 waveform might be related to the cognitive integration of mismatched multisensory information in the human brain. The results also indicated that streams of synchronized multisensory information can interfere with each other, influencing the outcome of cognitive integration processing.
Audio-Tactile Integration in Congenitally and Late Deaf Cochlear Implant Users
Nava, Elena; Bottari, Davide; Villwock, Agnes; Fengler, Ineke; Büchner, Andreas; Lenarz, Thomas; Röder, Brigitte
2014-01-01
Several studies conducted in mammals and humans have shown that multisensory processing may be impaired following congenital sensory loss and in particular if no experience is achieved within specific early developmental time windows known as sensitive periods. In this study we investigated whether basic multisensory abilities are impaired in hearing-restored individuals with deafness acquired at different stages of development. To this aim, we tested congenitally and late deaf cochlear implant (CI) recipients, age-matched with two groups of hearing controls, on an audio-tactile redundancy paradigm, in which reaction times to unimodal and crossmodal redundant signals were measured. Our results showed that both congenitally and late deaf CI recipients were able to integrate audio-tactile stimuli, suggesting that congenital and acquired deafness does not prevent the development and recovery of basic multisensory processing. However, we found that congenitally deaf CI recipients had a lower multisensory gain compared to their matched controls, which may be explained by their faster responses to tactile stimuli. We discuss this finding in the context of reorganisation of the sensory systems following sensory loss and the possibility that these changes cannot be “rewired” through auditory reafferentation. PMID:24918766
Matragrano, Lisa L.; Sanford, Sara E.; Salvante, Katrina G.; Beaulieu, Michaël; Sockman, Keith W.; Maney, Donna L.
2011-01-01
Because no organism lives in an unchanging environment, sensory processes must remain plastic so that in any context, they emphasize the most relevant signals. As the behavioral relevance of sociosexual signals changes along with reproductive state, the perception of those signals is altered by reproductive hormones such as estradiol (E2). We showed previously that in white-throated sparrows, immediate early gene responses in the auditory pathway of females are selective for conspecific male song only when plasma E2 is elevated to breeding-typical levels. In this study, we looked for evidence that E2-dependent modulation of auditory responses is mediated by serotonergic systems. In female nonbreeding white-throated sparrows treated with E2, the density of fibers immunoreactive for serotonin transporter innervating the auditory midbrain and rostral auditory forebrain increased compared with controls. E2 treatment also increased the concentration of the serotonin metabolite 5-HIAA in the caudomedial mesopallium of the auditory forebrain. In a second experiment, females exposed to 30 min of conspecific male song had higher levels of 5-HIAA in the caudomedial nidopallium of the auditory forebrain than birds not exposed to song. Overall, we show that in this seasonal breeder, (1) serotonergic fibers innervate auditory areas; (2) the density of those fibers is higher in females with breeding-typical levels of E2 than in nonbreeding, untreated females; and (3) serotonin is released in the auditory forebrain within minutes in response to conspecific vocalizations. Our results are consistent with the hypothesis that E2 acts via serotonin systems to alter auditory processing. PMID:21942431
Learning to Encode Timing: Mechanisms of Plasticity in the Auditory Brainstem
Tzounopoulos, Thanos; Kraus, Nina
2009-01-01
Mechanisms of plasticity have traditionally been ascribed to higher-order sensory processing areas such as the cortex, whereas early sensory processing centers have been considered largely hard-wired. In agreement with this view, the auditory brainstem has been viewed as a nonplastic site, important for preserving temporal information and minimizing transmission delays. However, recent groundbreaking results from animal models and human studies have revealed remarkable evidence for cellular and behavioral mechanisms for learning and memory in the auditory brainstem. PMID:19477149
Unreliable evoked responses in autism
Dinstein, Ilan; Heeger, David J.; Lorenzi, Lauren; Minshew, Nancy J.; Malach, Rafael; Behrmann, Marlene
2012-01-01
Autism has been described as a disorder of general neural processing, but the particular processing characteristics that might be abnormal in autism have mostly remained obscure. Here, we present evidence of one such characteristic: poor evoked response reliability. We compared cortical response amplitude and reliability (consistency across trials) in visual, auditory, and somatosensory cortices of high-functioning individuals with autism and controls. Mean response amplitudes were statistically indistinguishable across groups, yet trial-by-trial response reliability was significantly weaker in autism, yielding smaller signal-to-noise ratios in all sensory systems. Response reliability differences were evident only in evoked cortical responses and not in ongoing resting-state activity. These findings reveal that abnormally unreliable cortical responses, even to elementary non-social sensory stimuli, may represent a fundamental physiological alteration of neural processing in autism. The results motivate a critical expansion of autism research to determine whether (and how) basic neural processing properties such as reliability, plasticity, and adaptation/habituation are altered in autism. PMID:22998867
McGurk illusion recalibrates subsequent auditory perception
Lüttke, Claudia S.; Ekman, Matthias; van Gerven, Marcel A. J.; de Lange, Floris P.
2016-01-01
Visual information can alter auditory perception. This is clearly illustrated by the well-known McGurk illusion, where an auditory /aba/ and a visual /aga/ are merged into the percept 'ada'. It is less clear, however, whether such a change in perception may recalibrate subsequent perception. Here we asked whether the altered auditory perception due to the McGurk illusion affects subsequent auditory perception, i.e. whether this process of fusion may cause a recalibration of the auditory boundaries between phonemes. Participants categorized auditory and audiovisual speech stimuli as /aba/, /ada/ or /aga/ while activity patterns in their auditory cortices were recorded using fMRI. Interestingly, following a McGurk illusion, an auditory /aba/ was more often misperceived as 'ada'. Furthermore, we observed a neural counterpart of this recalibration in the early auditory cortex. When the auditory input /aba/ was perceived as 'ada', activity patterns bore stronger resemblance to activity patterns elicited by /ada/ sounds than when they were correctly perceived as /aba/. Our results suggest that upon experiencing the McGurk illusion, the brain shifts the neural representation of an /aba/ sound towards /ada/, culminating in a recalibration in perception of subsequent auditory input. PMID:27611960
Profiles of Types of Central Auditory Processing Disorders in Children with Learning Disabilities.
ERIC Educational Resources Information Center
Musiek, Frank E.; And Others
1985-01-01
The article profiles five cases of children (8-17 years old) with learning disabilities and auditory processing problems. Possible correlations between the presumed etiology and the unique audiological pattern on the central test battery are analyzed. (Author/CL)
Albouy, Philippe; Cousineau, Marion; Caclin, Anne; Tillmann, Barbara; Peretz, Isabelle
2016-01-06
Recent theories suggest that the basis of neurodevelopmental auditory disorders such as dyslexia or specific language impairment might be a low-level sensory dysfunction. In the present study we test this hypothesis in congenital amusia, a neurodevelopmental disorder characterized by severe deficits in the processing of pitch-based material. We manipulated the temporal characteristics of auditory stimuli and investigated the influence of the time given to encode pitch information on participants' performance in discrimination and short-term memory. Our results show that amusics' performance in such tasks scales with the duration available to encode acoustic information. This suggests that in auditory neurodevelopmental disorders, abnormalities in early steps of auditory processing can underlie the high-level deficits (here, musical disabilities). The observation that slowing the temporal dynamics of stimuli improves amusics' pitch abilities suggests that this approach could serve as a tool for remediation in developmental auditory disorders.
Auditory Cortical Processing in Real-World Listening: The Auditory System Going Real
Nelken, Israel; Bizley, Jennifer; Shamma, Shihab A.; Wang, Xiaoqin
2014-01-01
The auditory sense of humans transforms intrinsically senseless pressure waveforms into spectacularly rich perceptual phenomena: the music of Bach or the Beatles, the poetry of Li Bai or Omar Khayyam, or more prosaically the sense of the world filled with objects emitting sounds that is so important for those of us lucky enough to have hearing. Whereas the early representations of sounds in the auditory system are based on their physical structure, higher auditory centers are thought to represent sounds in terms of their perceptual attributes. In this symposium, we will illustrate the current research into this process, using four case studies. We will illustrate how the spectral and temporal properties of sounds are used to bind together, segregate, categorize, and interpret sound patterns on their way to acquire meaning, with important lessons to other sensory systems as well. PMID:25392481
An Experimental Analysis of Memory Processing
Wright, Anthony A
2007-01-01
Rhesus monkeys were trained and tested in visual and auditory list-memory tasks with sequences of four travel pictures or four natural/environmental sounds followed by single test items. Acquisitions of the visual list-memory task are presented. Visual recency (last item) memory diminished with retention delay, and primacy (first item) memory strengthened. Capuchin monkeys, pigeons, and humans showed similar visual-memory changes. Rhesus learned an auditory memory task and showed octave generalization for some lists of notes—tonal, but not atonal, musical passages. In contrast with visual list memory, auditory primacy memory diminished with delay and auditory recency memory strengthened. Manipulations of interitem intervals, list length, and item presentation frequency revealed proactive and retroactive inhibition among items of individual auditory lists. Repeating visual items from prior lists produced interference (on nonmatching tests) revealing how far back memory extended. The possibility of using the interference function to separate familiarity vs. recollective memory processing is discussed. PMID:18047230
Electrophysiologic Assessment of Auditory Training Benefits in Older Adults
Anderson, Samira; Jenkins, Kimberly
2015-01-01
Older adults often exhibit speech perception deficits in difficult listening environments. At present, hearing aids or cochlear implants are the main options for therapeutic remediation; however, they only address audibility and do not compensate for central processing changes that may accompany aging and hearing loss or declines in cognitive function. It is unknown whether long-term hearing aid or cochlear implant use can restore changes in central encoding of temporal and spectral components of speech or improve cognitive function. Therefore, consideration should be given to auditory/cognitive training that targets auditory processing and cognitive declines, taking advantage of the plastic nature of the central auditory system. The demonstration of treatment efficacy is an important component of any training strategy. Electrophysiologic measures can be used to assess training-related benefits. This article will review the evidence for neuroplasticity in the auditory system and the use of evoked potentials to document treatment efficacy. PMID:27587912
Effects of Spatial and Selective Attention on Basic Multisensory Integration
ERIC Educational Resources Information Center
Gondan, Matthias; Blurton, Steven P.; Hughes, Flavia; Greenlee, Mark W.
2011-01-01
When participants respond to auditory and visual stimuli, responses to audiovisual stimuli are substantially faster than to unimodal stimuli (redundant signals effect, RSE). In such tasks, the RSE is usually higher than probability summation predicts, suggestive of specific integration mechanisms underlying the RSE. We investigated the role of…
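As context for the probability-summation benchmark this abstract refers to, here is a minimal, illustrative sketch (not from the study itself; the reaction-time distributions and their parameters are assumptions) of how a race-model prediction for the redundant signals effect can be computed, together with Miller's race-model inequality, the standard test for integration beyond probability summation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical unimodal reaction-time distributions (ms); the means and
# spreads are illustrative assumptions, not values from the study.
rt_a = rng.normal(300, 40, n)   # auditory-alone RTs
rt_v = rng.normal(330, 50, n)   # visual-alone RTs

# Race model (probability summation): on redundant audiovisual trials the
# faster channel wins, so the predicted redundant RT is the minimum of two
# independent races. Statistical facilitation alone speeds up the mean.
rt_av_race = np.minimum(rt_a, rt_v)
rse_predicted = min(rt_a.mean(), rt_v.mean()) - rt_av_race.mean()

# Miller's race-model inequality bounds the redundant-trial CDF:
# F_AV(t) <= F_A(t) + F_V(t) for every t. Observed redundant responses
# that exceed this bound imply coactivation (true integration), which is
# the comparison the abstract describes.
t = 250.0
race_cdf = (rt_av_race <= t).mean()
bound = (rt_a <= t).mean() + (rt_v <= t).mean()
```

Under the race model the simulated redundant CDF stays below the bound; an empirical RSE "higher than probability summation predicts" means real data violate this inequality at some t.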
Improving Memory Span in Children with Down Syndrome
ERIC Educational Resources Information Center
Conners, F. A.; Rosenquist, C. J.; Arnett, L.; Moore, M. S.; Hume, L. E.
2008-01-01
Background: Down syndrome (DS) is characterized by impaired memory span, particularly auditory verbal memory span. Memory span is linked developmentally to several language capabilities, and may be a basic capacity that enables language learning. If children with DS had better memory span, they might benefit more from language intervention. The…
Auditory Pitch Perception in Autism Spectrum Disorder Is Associated With Nonverbal Abilities.
Chowdhury, Rakhee; Sharda, Megha; Foster, Nicholas E V; Germain, Esther; Tryfon, Ana; Doyle-Thomas, Krissy; Anagnostou, Evdokia; Hyde, Krista L
2017-11-01
Atypical sensory perception and heterogeneous cognitive profiles are common features of autism spectrum disorder (ASD). However, previous findings on auditory sensory processing in ASD are mixed. Accordingly, auditory perception and its relation to cognitive abilities in ASD remain poorly understood. Here, children with ASD, and age- and intelligence quotient (IQ)-matched typically developing children, were tested on a low- and a higher level pitch processing task. Verbal and nonverbal cognitive abilities were measured using the Wechsler's Abbreviated Scale of Intelligence. There were no group differences in performance on either auditory task or IQ measure. However, there was significant variability in performance on the auditory tasks in both groups that was predicted by nonverbal, not verbal skills. These results suggest that auditory perception is related to nonverbal reasoning rather than verbal abilities in ASD and typically developing children. In addition, these findings provide evidence for preserved pitch processing in school-age children with ASD with average IQ, supporting the idea that there may be a subgroup of individuals with ASD that do not present perceptual or cognitive difficulties. Future directions involve examining whether similar perceptual-cognitive relationships might be observed in a broader sample of individuals with ASD, such as those with language impairment or lower IQ.
A Case of Generalized Auditory Agnosia with Unilateral Subcortical Brain Lesion
Suh, Hyee; Kim, Soo Yeon; Kim, Sook Hee; Chang, Jae Hyeok; Shin, Yong Beom; Ko, Hyun-Yoon
2012-01-01
The mechanisms and functional anatomy underlying the early stages of speech perception are still not well understood. Auditory agnosia is a deficit of auditory object processing, defined as an inability to recognize spoken language and/or nonverbal environmental sounds and music despite adequate hearing, while spontaneous speech, reading and writing are preserved. Usually, bilateral or unilateral temporal lobe lesions, especially of the transverse gyri, are responsible for auditory agnosia; subcortical lesions without cortical damage rarely cause it. We present a 73-year-old right-handed male with generalized auditory agnosia caused by a unilateral subcortical lesion. He was unable to repeat or take dictation, but his spontaneous speech was fluent and comprehensible. He could understand and read written words and phrases. His auditory brainstem evoked potentials and audiometry were intact. This case suggests that a subcortical lesion involving the unilateral acoustic radiation can cause generalized auditory agnosia. PMID:23342322
Jordan, Timothy R; Abedipour, Lily
2010-01-01
Hearing the sound of laughter is important for social communication, but processes contributing to the audibility of laughter remain to be determined. Production of laughter resembles production of speech in that both involve visible facial movements accompanying socially significant auditory signals. However, while it is known that speech is more audible when the facial movements producing the speech sound can be seen, similar visual enhancement of the audibility of laughter remains unknown. To address this issue, spontaneously occurring laughter was edited to produce stimuli comprising visual laughter, auditory laughter, visual and auditory laughter combined, and no laughter at all (either visual or auditory), all presented in four levels of background noise. Visual laughter and no-laughter stimuli produced very few reports of auditory laughter. However, visual laughter consistently made auditory laughter more audible, compared to the same auditory signal presented without visual laughter, resembling findings reported previously for speech.
The cholinergic basal forebrain in the ferret and its inputs to the auditory cortex
Bajo, Victoria M; Leach, Nicholas D; Cordery, Patricia M; Nodal, Fernando R; King, Andrew J
2014-01-01
Cholinergic inputs to the auditory cortex can modulate sensory processing and regulate stimulus-specific plasticity according to the behavioural state of the subject. In order to understand how acetylcholine achieves this, it is essential to elucidate the circuitry by which cholinergic inputs influence the cortex. In this study, we described the distribution of cholinergic neurons in the basal forebrain and their inputs to the auditory cortex of the ferret, a species used increasingly in studies of auditory learning and plasticity. Cholinergic neurons in the basal forebrain, visualized by choline acetyltransferase and p75 neurotrophin receptor immunocytochemistry, were distributed through the medial septum, diagonal band of Broca, and nucleus basalis magnocellularis. Epipial tracer deposits and injections of the immunotoxin ME20.4-SAP (monoclonal antibody specific for the p75 neurotrophin receptor conjugated to saporin) in the auditory cortex showed that cholinergic inputs originate almost exclusively in the ipsilateral nucleus basalis. Moreover, tracer injections in the nucleus basalis revealed a pattern of labelled fibres and terminal fields that resembled acetylcholinesterase fibre staining in the auditory cortex, with the heaviest labelling in layers II/III and in the infragranular layers. Labelled fibres with small en-passant varicosities and simple terminal swellings were observed throughout all auditory cortical regions. The widespread distribution of cholinergic inputs from the nucleus basalis to both primary and higher level areas of the auditory cortex suggests that acetylcholine is likely to be involved in modulating many aspects of auditory processing. PMID:24945075
Pilcher, June J; Jennings, Kristen S; Phillips, Ginger E; McCubbin, James A
2016-11-01
The current study investigated performance on a dual auditory task during a simulated night shift. Night shifts and sleep deprivation negatively affect performance on vigilance-based tasks, but less is known about the effects on complex tasks. Because language processing is necessary for successful work performance, it is important to understand how it is affected by night work and sleep deprivation. Sixty-two participants completed a simulated night shift resulting in 28 hr of total sleep deprivation. Performance on a vigilance task and a dual auditory language task was examined across four testing sessions. The results indicate that working at night negatively impacts vigilance, auditory attention, and comprehension. The effects on the auditory task varied based on the content of the auditory material. When the material was interesting and easy, the participants performed better. Night work had a greater negative effect when the auditory material was less interesting and more difficult. These findings support research that vigilance decreases during the night. The results suggest that auditory comprehension suffers when individuals are required to work at night. Maintaining attention and controlling effort especially on passages that are less interesting or more difficult could improve performance during night shifts. The results from the current study apply to many work environments where decision making is necessary in response to complex auditory information. Better predicting the effects of night work on language processing is important for developing improved means of coping with shiftwork. © 2016, Human Factors and Ergonomics Society.
Integration and segregation in auditory scene analysis
NASA Astrophysics Data System (ADS)
Sussman, Elyse S.
2005-03-01
Assessment of the neural correlates of auditory scene analysis, using an index of sound change detection that does not require the listener to attend to the sounds [a component of event-related brain potentials called the mismatch negativity (MMN)], has previously demonstrated that segregation processes can occur without attention focused on the sounds and that within-stream contextual factors influence how sound elements are integrated and represented in auditory memory. The current study investigated the relationship between the segregation and integration processes when they were called upon to function together. The pattern of MMN results showed that the integration of sound elements within a sound stream occurred after the segregation of sounds into independent streams and, further, that the individual streams were subject to contextual effects. These results are consistent with a view of auditory processing that suggests that the auditory scene is rapidly organized into distinct streams and the integration of sequential elements to perceptual units takes place on the already formed streams. This would allow for the flexibility required to identify changing within-stream sound patterns, needed to appreciate music or comprehend speech.
Electrostimulation mapping of comprehension of auditory and visual words.
Roux, Franck-Emmanuel; Miskin, Krasimir; Durand, Jean-Baptiste; Sacko, Oumar; Réhault, Emilie; Tanova, Rositsa; Démonet, Jean-François
2015-10-01
In order to spare functional areas during the removal of brain tumours, electrical stimulation mapping was used in 90 patients (77 in the left hemisphere and 13 in the right; 2754 cortical sites tested). Language functions were studied with a special focus on comprehension of auditory and visual words and the semantic system. In addition to naming, patients were asked to perform pointing tasks from auditory and visual stimuli (using sets of 4 different images controlled for familiarity), and also auditory object (sound recognition) and Token test tasks. Ninety-two auditory comprehension interference sites were observed. We found that the process of auditory comprehension involved a few, fine-grained, sub-centimetre cortical territories. Early stages of speech comprehension seem to relate to two posterior regions in the left superior temporal gyrus. Downstream lexical-semantic speech processing and sound analysis involved 2 pathways, along the anterior part of the left superior temporal gyrus, and posteriorly around the supramarginal and middle temporal gyri. Electrostimulation experimentally dissociated perceptual consciousness attached to speech comprehension. The initial word discrimination process can be considered as an "automatic" stage, the attention feedback not being impaired by stimulation as would be the case at the lexical-semantic stage. Multimodal organization of the superior temporal gyrus was also detected since some neurones could be involved in comprehension of visual material and naming. These findings demonstrate a fine graded, sub-centimetre, cortical representation of speech comprehension processing mainly in the left superior temporal gyrus and are in line with those described in dual stream models of language comprehension processing. Copyright © 2015 Elsevier Ltd. All rights reserved.
Longitudinal auditory learning facilitates auditory cognition as revealed by microstate analysis.
Giroud, Nathalie; Lemke, Ulrike; Reich, Philip; Matthes, Katarina L; Meyer, Martin
2017-02-01
The current study investigates cognitive processes as reflected in late auditory-evoked potentials as a function of longitudinal auditory learning. A normal hearing adult sample (n=15) performed an active oddball task at three consecutive time points (TPs) arranged at two-week intervals, during which EEG was recorded. The stimuli comprised syllables consisting of a natural fricative (/sh/, /s/, /f/) embedded between two /a/ sounds, as well as morphed transitions of the two syllables that served as deviants. Perceptual and cognitive modulations as reflected in the onset and the mean global field power (GFP) of N2b- and P3b-related microstates across four weeks were investigated. We found that the onset of P3b-like microstates, but not N2b-like microstates, decreased across TPs, more strongly for difficult deviants, leading to similar onsets for difficult and easy stimuli after repeated exposure. The mean GFP of all N2b-like and P3b-like microstates increased more for spectrally strong deviants than for weak deviants, leading to a distinctive activation for each stimulus after learning. Our results indicate that the longitudinal training of auditory-related cognitive mechanisms (such as stimulus categorization, attention, and memory updating) is an indispensable part of successful auditory learning. This suggests that future studies should focus on the potential benefits of cognitive processes in auditory training. Copyright © 2016 Elsevier B.V. All rights reserved.
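The global field power (GFP) used in the microstate analysis above is a standard EEG quantity: the spatial standard deviation across all electrodes at each time point, computed on average-referenced data (Lehmann & Skrandies, 1980). A minimal sketch, with illustrative array shapes not taken from the paper:

```python
import numpy as np

def global_field_power(eeg):
    """GFP: spatial standard deviation across channels at each time point.

    eeg: (n_channels, n_samples) array of EEG voltages.
    Returns a (n_samples,) array of non-negative GFP values.
    """
    # Re-reference to the average across channels, then take the
    # root-mean-square over channels at each sample.
    avg_ref = eeg - eeg.mean(axis=0, keepdims=True)
    return np.sqrt((avg_ref ** 2).mean(axis=0))

# Toy data: 32 channels, 100 samples of random activity.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((32, 100))
gfp = global_field_power(eeg)
print(gfp.shape)  # (100,)
```

Peaks of the GFP time course mark moments of strong, stable topography and are the usual anchor points for segmenting the signal into microstates.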
Geva, R; Eshel, R; Leitner, Y; Fattal-Valevski, A; Harel, S
2008-12-01
Recent reports showed that children born with intrauterine growth restriction (IUGR) are at greater risk of experiencing verbal short-term memory span (STM) deficits that may impede their learning capacities at school. It is still unknown whether these deficits are modality dependent. This long-term, prospective design study examined modality-dependent verbal STM functions in children who were diagnosed at birth with IUGR (n = 138) and a control group (n = 64). Their STM skills were evaluated individually at 9 years of age with four conditions of the Visual-Aural Digit Span Test (VADS; Koppitz, 1981): auditory-oral, auditory-written, visuospatial-oral and visuospatial-written. Cognitive competence was evaluated with the short form of the Wechsler Intelligence Scales for Children--revised (WISC-R95; Wechsler, 1998). We found IUGR-related specific auditory-oral STM deficits (p < .036) in conjunction with two double dissociations: an auditory-visuospatial (p < .014) and an input-output processing distinction (p < .014). Cognitive competence had a significant effect on all four conditions; however, the effect of IUGR on the auditory-oral condition was not overridden by the effect of intelligence quotient (IQ). Intrauterine growth restriction affects global competence and inter-modality processing, as well as distinct auditory input processing related to verbal STM functions. The findings support a long-term relationship between prenatal aberrant head growth and auditory verbal STM deficits by the end of the first decade of life. Empirical, clinical and educational implications are presented.
Yoshimura, Yuko; Kikuchi, Mitsuru; Hiraishi, Hirotoshi; Hasegawa, Chiaki; Takahashi, Tetsuya; Remijn, Gerard B; Oi, Manabu; Munesue, Toshio; Higashida, Haruhiro; Minabe, Yoshio
2016-01-01
The auditory-evoked P1m, recorded by magnetoencephalography, reflects a central auditory processing ability in human children. One recent study revealed that asynchrony of P1m between the right and left hemispheres reflected a central auditory processing disorder (i.e., attention deficit hyperactivity disorder, ADHD) in children. However, to date, the relationship between auditory P1m right-left hemispheric synchronization and the comorbidity of hyperactivity in children with autism spectrum disorder (ASD) is unknown. In this study, building on the previous report of P1m asynchrony in children with ADHD, we investigated the relationship between voice-evoked P1m right-left hemispheric synchronization and hyperactivity in children with ASD, in order to clarify whether this synchronization is related to the symptom of hyperactivity. In addition to synchronization, we investigated right-left hemispheric lateralization. Our findings failed to demonstrate significant differences in these values between ASD children with and without the symptom of hyperactivity, which was evaluated using the Autism Diagnostic Observation Schedule, Generic (ADOS-G) subscale. However, there was a significant correlation between the degree of hemispheric synchronization and the ability to keep still during 12-minute MEG recording periods. Our results also suggested that asynchrony in the bilateral brain auditory processing system is associated with ADHD-like symptoms in children with ASD.
NASA Technical Reports Server (NTRS)
Phillips, Rachel; Madhavan, Poornima
2010-01-01
The purpose of this research was to examine the impact of environmental distractions on human trust and utilization of automation during the process of visual search. Participants performed a computer-simulated airline luggage screening task with the assistance of a 70% reliable automated decision aid (called DETECTOR) both with and without environmental distractions. The distraction was implemented as a secondary task in either a competing modality (visual) or non-competing modality (auditory). The secondary task processing code either competed with the luggage screening task (spatial code) or with the automation's textual directives (verbal code). We measured participants' system trust, perceived reliability of the system (when a target weapon was present and absent), compliance, reliance, and confidence when agreeing and disagreeing with the system under both distracted and undistracted conditions. Results revealed that system trust was lower in the visual-spatial and auditory-verbal conditions than in the visual-verbal and auditory-spatial conditions. Perceived reliability of the system (when the target was present) was significantly higher when the secondary task was visual rather than auditory. Compliance with the aid increased in all conditions except for the auditory-verbal condition, where it decreased. Similar to the pattern for trust, reliance on the automation was lower in the visual-spatial and auditory-verbal conditions than in the visual-verbal and auditory-spatial conditions. Confidence when agreeing with the system decreased with the addition of any kind of distraction; however, confidence when disagreeing increased with the addition of an auditory secondary task but decreased with the addition of a visual task. A model was developed to represent the research findings and demonstrate the relationship between secondary task modality, processing code, and automation use. 
Results suggest that the nature of environmental distractions influences interaction with automation via significant effects on trust and system utilization. These findings have implications for both automation design and operator training.
Irregular Speech Rate Dissociates Auditory Cortical Entrainment, Evoked Responses, and Frontal Alpha
Kayser, Stephanie J.; Ince, Robin A.A.; Gross, Joachim
2015-01-01
The entrainment of slow rhythmic auditory cortical activity to the temporal regularities in speech is considered to be a central mechanism underlying auditory perception. Previous work has shown that entrainment is reduced when the quality of the acoustic input is degraded, but has also linked rhythmic activity at similar time scales to the encoding of temporal expectations. To understand these bottom-up and top-down contributions to rhythmic entrainment, we manipulated the temporal predictive structure of speech by parametrically altering the distribution of pauses between syllables or words, thereby rendering the local speech rate irregular while preserving intelligibility and the envelope fluctuations of the acoustic signal. Recording EEG activity in human participants, we found that this manipulation did not alter neural processes reflecting the encoding of individual sound transients, such as evoked potentials. However, the manipulation significantly reduced the fidelity of auditory delta (but not theta) band entrainment to the speech envelope. It also reduced left frontal alpha power and this alpha reduction was predictive of the reduced delta entrainment across participants. Our results show that rhythmic auditory entrainment in delta and theta bands reflect functionally distinct processes. Furthermore, they reveal that delta entrainment is under top-down control and likely reflects prefrontal processes that are sensitive to acoustical regularities rather than the bottom-up encoding of acoustic features. SIGNIFICANCE STATEMENT The entrainment of rhythmic auditory cortical activity to the speech envelope is considered to be critical for hearing. Previous work has proposed divergent views in which entrainment reflects either early evoked responses related to sound encoding or high-level processes related to expectation or cognitive selection. Using a manipulation of speech rate, we dissociated auditory entrainment at different time scales. 
Specifically, our results suggest that delta entrainment is controlled by frontal alpha mechanisms and thus support the notion that rhythmic auditory cortical entrainment is shaped by top-down mechanisms. PMID:26538641
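Entrainment analyses like the one above typically quantify how well band-limited cortical activity tracks the speech amplitude envelope: both signals are band-pass filtered (e.g., delta, 1-4 Hz) and the envelope is taken as the Hilbert magnitude. A minimal sketch of the envelope-extraction step, with illustrative parameter values not taken from the paper:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def band_envelope(x, fs, lo, hi):
    """Band-pass filter x, then take the Hilbert magnitude as its envelope."""
    # Zero-phase filtering (sosfiltfilt) avoids shifting the envelope in time.
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return np.abs(hilbert(sosfiltfilt(sos, x)))

fs = 100.0                        # Hz, illustrative sampling rate
t = np.arange(0, 5, 1 / fs)       # 5 s of signal
x = np.sin(2 * np.pi * 2.0 * t)   # toy signal fluctuating at 2 Hz (delta range)

delta_env = band_envelope(x, fs, 1.0, 4.0)  # delta band: 1-4 Hz
print(delta_env.shape)  # (500,)
```

Entrainment fidelity is then measured between the stimulus and neural envelopes, for example with cross-correlation or phase coherence across trials.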
Speech processing: from peripheral to hemispheric asymmetry of the auditory system.
Lazard, Diane S; Collette, Jean-Louis; Perrot, Xavier
2012-01-01
Language processing from the cochlea to auditory association cortices shows side-dependent specificities with an apparent left hemispheric dominance. The aim of this article was to offer nonspeech specialists a didactic review of two complementary theories about hemispheric asymmetry in speech processing. Starting from anatomico-physiological and clinical observations of auditory asymmetry and interhemispheric connections, this review then presents behavioral (dichotic listening paradigm) as well as functional (functional magnetic resonance imaging and positron emission tomography) experiments that assessed hemispheric specialization for speech processing. Even though speech at an early phonological level is regarded as being processed bilaterally, a left-hemispheric dominance exists for higher-level processing. This asymmetry may arise from a segregation of the speech signal, broken apart within nonprimary auditory areas in two distinct temporal integration windows--a fast one on the left and a slower one on the right--modeled through the asymmetric sampling in time theory, or from a spectro-temporal trade-off, with a higher temporal resolution in the left hemisphere and a higher spectral resolution in the right hemisphere, modeled through the spectral/temporal resolution trade-off theory. Both theories deal with the concept that lower-order tuning principles for the acoustic signal might drive higher-order organization for speech processing. However, the precise nature, mechanisms, and origin of speech processing asymmetry are still being debated. Finally, an example of hemispheric asymmetry alteration, which has direct clinical implications, is given through the case of auditory aging, which mixes peripheral disorder and modifications of central processing. Copyright © 2011 The American Laryngological, Rhinological, and Otological Society, Inc.